karlstroetmann/Artificial-Intelligence
Python/1 Search/A-Star-Search-Weighted.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open('../style.css') as f:\n css = f.read()\nHTML(css)", "Weighted A$^*$ Search\nThe module heapq provides \npriority queues \nthat are implemented as heaps.\nTechnically, these heaps are just lists. In order to use them as priority queues, the entries of these lists will be pairs of the form $(p, o)$, where $p$ is the priority of the object $o$. Usually, the priorities are numbers \nand, counter-intuitively, high priorities correspond to <b>small</b> numbers, that is, $(p_1, o_1)$ has a higher priority than $(p_2, o_2)$ iff $p_1 < p_2$.\nWe need only two functions from the module heapq:\n- Given a heap $H$, the function $\\texttt{heapq.heappop}(H)$ removes the pair\n from $H$ that has the highest priority. This pair is also returned.\n- Given a heap $H$, the function $\\texttt{heapq.heappush}\\bigl(H, (p, o)\\bigr)$ \n pushes the pair $(p, o)$ onto the heap $H$. This method does not return a \n value. Instead, the heap $H$ is changed in place.", "import heapq", "The function search takes four arguments to solve a search problem:\n- start is the start state of the search problem,\n- goal is the goal state,\n- next_states is a function with signature $\\texttt{next_states}:Q \\rightarrow 2^Q$, where $Q$ is the set of states.\n For every state $s \\in Q$, $\\texttt{next_states}(s)$ is the set of states that can be reached from $s$ in one step, and\n- heuristic is a function that takes two states as arguments. It returns an estimate of the \n length of the shortest path between these states.\nIf successful, search returns a path from start to goal that is a solution of the search problem\n$$ \\langle Q, \\texttt{next_states}, \\texttt{start}, \\texttt{goal} \\rangle. $$\nThe variable PrioQueue that is used in the implementation contains pairs of the form\n$$ \\bigl(2 \\cdot \\texttt{heuristic}(\\texttt{state},\\; \\texttt{goal}) + \\texttt{len}(\\texttt{Path}), \\texttt{Path}\\bigr), $$\nwhere Path is a path from start to state and $\\texttt{heuristic}(\\texttt{state}, \\texttt{goal})$\nis an estimate of the distance from state to goal. Since this is <em>weighted</em> A$^*$ search, the heuristic is multiplied by the weight $2$; this trades the optimality guarantee of plain A$^*$ search for a smaller search space. The idea is to always extend the most promising Path first, i.e. the Path whose completed version is estimated to be shortest.", "def search(start, goal, next_states, heuristic):\n Visited = set()\n PrioQueue = [ (2*heuristic(start, goal), [start]) ]\n while PrioQueue:\n _, Path = heapq.heappop(PrioQueue)\n state = Path[-1]\n if state in Visited:\n continue\n if state == goal:\n return Path\n for ns in next_states(state): \n if ns not in Visited:\n prio = 2*heuristic(ns, goal) + len(Path) + 1\n heapq.heappush(PrioQueue, (prio, Path + [ns]))\n Visited.add(state)\n\n%run Sliding-Puzzle.ipynb\n\n%load_ext memory_profiler\n\n%%time\n%memit Path = search(start, goal, next_states, manhattan)\nprint(len(Path)-1)\n\nanimation(Path)\n\n%%time\nPath = search(start2, goal2, next_states, manhattan)\nprint(len(Path)-1)\n\nanimation(Path)" ]
[ "code", "markdown", "code", "markdown", "code" ]
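The weighted A$^*$ routine from the notebook above can be exercised outside the sliding puzzle. Below is a minimal sketch: the `search` function is copied from the notebook, while the toy problem (states are integers on a line, moves are $\pm 1$, and the heuristic is the absolute distance) is invented purely for illustration.

```python
import heapq

def search(start, goal, next_states, heuristic):
    # Weighted A* with weight 2: priority = 2*h(state) + len(Path)
    Visited = set()
    PrioQueue = [(2 * heuristic(start, goal), [start])]
    while PrioQueue:
        _, Path = heapq.heappop(PrioQueue)  # pair with the smallest priority
        state = Path[-1]
        if state in Visited:
            continue
        if state == goal:
            return Path
        for ns in next_states(state):
            if ns not in Visited:
                prio = 2 * heuristic(ns, goal) + len(Path) + 1
                heapq.heappush(PrioQueue, (prio, Path + [ns]))
        Visited.add(state)

# Hypothetical toy problem: integer states, steps of +-1.
def next_states(s):
    return {s - 1, s + 1}

def heuristic(s, goal):
    return abs(s - goal)

print(search(0, 3, next_states, heuristic))  # [0, 1, 2, 3]
```

Because the weight 2 inflates the heuristic, the returned path is not guaranteed to be shortest in general, although it is here.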
microsoft/dowhy
docs/source/example_notebooks/dowhy_causal_api.ipynb
mit
[ "Demo for the DoWhy causal API\nWe show a simple example of adding a causal extension to any dataframe.", "import dowhy.datasets\nimport dowhy.api\n\nimport numpy as np\nimport pandas as pd\n\nfrom statsmodels.api import OLS\n\ndata = dowhy.datasets.linear_dataset(beta=5,\n num_common_causes=1,\n num_instruments=0,\n num_samples=1000,\n treatment_is_binary=True)\ndf = data['df']\ndf['y'] = df['y'] + np.random.normal(size=len(df)) # Add noise to the data. Without noise, the variance in Y|X, Z is zero, and MCMC fails.\n\ntreatment = data[\"treatment_name\"][0]\noutcome = data[\"outcome_name\"][0]\ncommon_cause = data[\"common_causes_names\"][0]\ndf\n\n# data['df'] is just a regular pandas.DataFrame\ndf.causal.do(x=treatment,\n variable_types={treatment: 'b', outcome: 'c', common_cause: 'c'},\n outcome=outcome,\n common_causes=[common_cause],\n proceed_when_unidentifiable=True).groupby(treatment).mean().plot(y=outcome, kind='bar')\n\ndf.causal.do(x={treatment: 1}, \n variable_types={treatment: 'b', outcome: 'c', common_cause: 'c'}, \n outcome=outcome,\n method='weighting', \n common_causes=[common_cause],\n proceed_when_unidentifiable=True).groupby(treatment).mean().plot(y=outcome, kind='bar')\n\ncdf_1 = df.causal.do(x={treatment: 1}, \n variable_types={treatment: 'b', outcome: 'c', common_cause: 'c'}, \n outcome=outcome, \n dot_graph=data['dot_graph'],\n proceed_when_unidentifiable=True)\n\ncdf_0 = df.causal.do(x={treatment: 0}, \n variable_types={treatment: 'b', outcome: 'c', common_cause: 'c'}, \n outcome=outcome, \n dot_graph=data['dot_graph'],\n proceed_when_unidentifiable=True)\n\ncdf_0\n\ncdf_1", "Comparing the estimate to Linear Regression\nFirst, we estimate the effect using the causal dataframe, together with its 95% confidence interval.", "(cdf_1['y'] - cdf_0['y']).mean()\n\n1.96*(cdf_1['y'] - cdf_0['y']).std() / np.sqrt(len(df))", "Comparing to the estimate from OLS.", "model = OLS(np.asarray(df[outcome]), np.asarray(df[[common_cause, treatment]], dtype=np.float64))\nresult = model.fit()\nresult.summary()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
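The interval computed in the notebook above is a normal-approximation 95% confidence interval, with half-width $1.96 \cdot \sigma / \sqrt{n}$. That arithmetic can be checked on plain synthetic data without DoWhy; the data below (a true effect of 5 plus unit-variance noise, mirroring `beta=5` in the notebook) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-unit effect estimates: true effect 5 plus N(0, 1) noise.
effect = 5.0 + rng.normal(size=n)

mean = effect.mean()
# Normal-approximation 95% CI half-width: 1.96 * sample std / sqrt(n).
half_width = 1.96 * effect.std() / np.sqrt(n)
print(f"{mean:.2f} +/- {half_width:.2f}")
```

With unit-variance noise and $n = 1000$, the half-width comes out near $1.96/\sqrt{1000} \approx 0.06$, so the interval comfortably covers the true effect.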
ES-DOC/esdoc-jupyterhub
notebooks/ncc/cmip6/models/noresm2-lmec/toplevel.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: NCC\nSource ID: NORESM2-LMEC\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:24\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ncc', 'noresm2-lmec', 'toplevel')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Flux Correction\n3. Key Properties --&gt; Genealogy\n4. Key Properties --&gt; Software Properties\n5. Key Properties --&gt; Coupling\n6. Key Properties --&gt; Tuning Applied\n7. Key Properties --&gt; Conservation --&gt; Heat\n8. Key Properties --&gt; Conservation --&gt; Fresh Water\n9. Key Properties --&gt; Conservation --&gt; Salt\n10. Key Properties --&gt; Conservation --&gt; Momentum\n11. Radiative Forcings\n12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\n13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\n14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\n15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\n16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\n17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\n18. Radiative Forcings --&gt; Aerosols --&gt; SO4\n19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\n20. 
Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\n21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\n22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\n23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\n24. Radiative Forcings --&gt; Aerosols --&gt; Dust\n25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\n26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\n27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\n28. Radiative Forcings --&gt; Other --&gt; Land Use\n29. Radiative Forcings --&gt; Other --&gt; Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop level overview of coupled model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of coupled model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Genealogy\nGenealogy and history of the model\n3.1. 
Year Released\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nYear the model was released", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. CMIP3 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP3 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. CMIP5 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP5 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Previous Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPreviously known as", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.4. Components Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.5. Coupler\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nOverarching coupling framework for model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Coupling\n**\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of coupling in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. 
Atmosphere Double Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhere are the air-sea fluxes calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.4. Atmosphere Relative Winds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. 
In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.5. Energy Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.6. Fresh Water Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Conservation --&gt; Heat\nGlobal heat conservation properties of the model\n7.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. 
Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.6. Land Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation --&gt; Fresh Water\nGlobal fresh water conservation properties of the model\n8.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Runoff\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how runoff is distributed and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. 
Iceberg Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Endoreic Basins\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Snow Accumulation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Key Properties --&gt; Conservation --&gt; Salt\nGlobal salt conservation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Key Properties --&gt; Conservation --&gt; Momentum\nGlobal momentum conservation properties of the model\n10.1. 
Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how momentum is conserved in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\nTropospheric ozone forcing\n15.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Equivalence Concentration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of any equivalence concentrations used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Radiative Forcings --&gt; Aerosols --&gt; SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. 
via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. 
Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.3. RFaci From Sulfate Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "24. Radiative Forcings --&gt; Aerosols --&gt; Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Radiative Forcings --&gt; Other --&gt; Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28.2. Crop Change Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLand use change represented via crop change only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Radiative Forcings --&gt; Other --&gt; Solar\nSolar forcing\n29.1. 
Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow solar forcing is provided", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
eriksalt/jupyter
Python Quick Reference/Collections.ipynb
mit
[ "Python Collections Quick Reference\nTable Of Contents\n\n<a href=\"#1.-Deque\">Deque</a>\n<a href=\"#2.-Heapq\">Heapq</a>\n<a href=\"#3.-Counter\">Counter</a>\n\n1. Deque", "from collections import deque\n\ndq = deque()\ndq.append(1)\ndq.append(2)\ndq.appendleft(3)\ndq\n\nv = dq.pop()\nv\n\ndq.popleft()\n\ndq", "Using maxlen to limit the number of items in a deque", "dq = deque(maxlen = 3)\nfor n in range(10):\n dq.append(n)\ndq", "2. Heapq\nheapq provides O(1) access to the smallest item in the heap; pushing and popping items are O(log n).", "import heapq\nnums = [1, 8, 2, 23, 7, -4, 18, 23, 42, 37, 2]\n\n# a heapq is created from a list\n\nheap = list(nums)\nheapq.heapify(heap)\n\n# now the 1st element is guaranteed to be the smallest\nheap\n\nheapq.heappop(heap)\n\nheap\n\nheapq.heappush(heap, -10)\n\nheap", "nlargest / nsmallest wrap creation of a heap for one-time access", "# nlargest and nsmallest wrap a heapq to provide its results\nprint(heapq.nlargest(3, nums)) # Prints [42, 37, 23]\nprint(heapq.nsmallest(3, nums)) # Prints [-4, 1, 2]\n\n# providing an alternate sort key to nlargest/nsmallest\nportfolio = [\n{'name': 'IBM', 'shares': 100, 'price': 91.1},\n{'name': 'AAPL', 'shares': 50, 'price': 543.22},\n{'name': 'FB', 'shares': 200, 'price': 21.09},\n{'name': 'HPQ', 'shares': 35, 'price': 31.75},\n{'name': 'YHOO', 'shares': 45, 'price': 16.35},\n{'name': 'ACME', 'shares': 75, 'price': 115.65}\n]\n\nheapq.nsmallest(3, portfolio, key=lambda s: s['price'])", "3. 
Counter", "words = [\n'look', 'into', 'my', 'eyes', 'look', 'into', 'my', 'eyes',\n'the', 'eyes', 'the', 'eyes', 'the', 'eyes', 'not', 'around', 'the',\n'eyes', \"don't\", 'look', 'around', 'the', 'eyes', 'look', 'into',\n'my', 'eyes', \"you're\", 'under'\n]\n\nfrom collections import Counter\nword_counts = Counter(words) #Works with any hashable items, not just strings!\nword_counts.most_common(3)\n\nmorewords = ['why','are','you','not','looking','in','my','eyes']\nfor word in morewords:\n word_counts[word] += 1\nword_counts.most_common(3)\n\nevenmorewords = ['seriously','look','into','them','while','i','look','at', 'you']\nword_counts.update(evenmorewords)\nword_counts.most_common(3)\n\na = Counter(words)\nb = Counter(morewords)\nc = Counter(evenmorewords)\n\n# combine counters\nd = b + c\nd\n\n# subtract counts\ne = a-d\ne" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AllenDowney/ModSim
soln/chap20.ipynb
gpl-2.0
[ "Chapter 20\nModeling and Simulation in Python\nCopyright 2021 Allen Downey\nLicense: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International", "# install Pint if necessary\n\ntry:\n import pint\nexcept ImportError:\n !pip install pint\n\n# download modsim.py if necessary\n\nfrom os.path import exists\n\nfilename = 'modsim.py'\nif not exists(filename):\n from urllib.request import urlretrieve\n url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'\n local, _ = urlretrieve(url+filename, filename)\n print('Downloaded ' + local)\n\n# import functions from modsim\n\nfrom modsim import *", "So far the differential equations we've worked with have been first\norder, which means they involve only first derivatives. In this\nchapter, we turn our attention to second order ODEs, which can involve\nboth first and second derivatives.\nWe'll revisit the falling penny example from\nChapter xxx, and use run_solve_ivp to find the position and velocity of the penny as it falls, with and without air resistance.\nNewton's second law of motion\nFirst order ODEs can be written \n$$\\frac{dy}{dx} = G(x, y)$$ \nwhere $G$ is some function of $x$ and $y$ (see http://modsimpy.com/ode). Second order ODEs can be written \n$$\\frac{d^2y}{dx^2} = H(x, y, \\frac{dy}{dx})$$\nwhere $H$ is a function of $x$, $y$, and $dy/dx$.\nIn this chapter, we will work with one of the most famous and useful\nsecond order ODEs, Newton's second law of motion: \n$$F = m a$$ \nwhere $F$ is a force or the total of a set of forces, $m$ is the mass of a moving object, and $a$ is its acceleration.\nNewton's law might not look like a differential equation, until we\nrealize that acceleration, $a$, is the second derivative of position,\n$y$, with respect to time, $t$. 
With the substitution\n$$a = \\frac{d^2y}{dt^2}$$ \nNewton's law can be written\n$$\\frac{d^2y}{dt^2} = F / m$$ \nAnd that's definitely a second order ODE.\nIn general, $F$ can be a function of time, position, and velocity.\nOf course, this \"law\" is really a model in the sense that it is a\nsimplification of the real world. Although it is often approximately\ntrue:\n\n\nIt only applies if $m$ is constant. If mass depends on time,\n position, or velocity, we have to use a more general form of\n Newton's law (see http://modsimpy.com/varmass).\n\n\nIt is not a good model for very small things, which are better\n described by another model, quantum mechanics.\n\n\nAnd it is not a good model for things moving very fast, which are\n better described by yet another model, relativistic mechanics.\n\n\nHowever, for medium-sized things with constant mass, moving at\nmedium-sized speeds, Newton's model is extremely useful. If we can\nquantify the forces that act on such an object, we can predict how it\nwill move.\nDropping pennies\nAs a first example, let's get back to the penny falling from the Empire State Building, which we considered in\nChapter xxx. We will implement two models of this system: first without air resistance, then with.\nGiven that the Empire State Building is 381 m high, and assuming that\nthe penny is dropped from a standstill, the initial conditions are:", "from modsim import State\n\ninit = State(y=381, v=0)", "where y is height above the sidewalk and v is velocity. \nThe units m and s are from the units object provided by Pint:\nThe only system parameter is the acceleration of gravity:", "g = 9.8", "In addition, we'll specify the duration of the simulation and the step\nsize:", "t_end = 10\ndt = 0.1", "With these parameters, the number of time steps is 100, which is good\nenough for many problems. 
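That step count is easy to verify outside the notebook. Here is a minimal sketch (plain NumPy, not part of the original code) of the time grid implied by `t_end` and `dt`:

```python
import numpy as np

t_end = 10    # same simulation parameters as above
dt = 0.1

n_steps = round(t_end / dt)               # 100 steps
ts = np.linspace(0, t_end, n_steps + 1)   # 101 grid points: 0.0, 0.1, ..., 10.0
print(n_steps, ts[0], ts[-1])             # prints: 100 0.0 10.0
```

Using `round` rather than `int` sidesteps the usual floating-point pitfall where `t_end / dt` lands just below a whole number.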
Once we have a solution, we will increase the\nnumber of steps and see what effect it has on the results.\nWe need a System object to store the parameters:", "from modsim import System\n\nsystem = System(init=init, g=g, t_end=t_end, dt=dt)", "Now we need a slope function, and here's where things get tricky. As we have seen, run_solve_ivp can solve systems of first order ODEs, but Newton's law is a second order ODE. However, if we recognize that\n\n\nVelocity, $v$, is the derivative of position, $dy/dt$, and\n\n\nAcceleration, $a$, is the derivative of velocity, $dv/dt$,\n\n\nwe can rewrite Newton's law as a system of first order ODEs:\n$$\\frac{dy}{dt} = v$$ \n$$\\frac{dv}{dt} = a$$ \nAnd we can translate those\nequations into a slope function:", "def slope_func(t, state, system):\n y, v = state\n\n dydt = v\n dvdt = -system.g\n \n return dydt, dvdt", "The first parameter, state, contains the position and velocity of the\npenny. The last parameter, system, contains the system parameter g,\nwhich is the magnitude of acceleration due to gravity.\nThe second parameter, t, is time. It is not used in this slope\nfunction because none of the factors of the model are time dependent. I include it anyway because this function will be called by run_solve_ivp, which always provides the same arguments,\nwhether they are needed or not.\nThe rest of the function is a straightforward translation of the\ndifferential equations, with the substitution $a = -g$, which indicates that acceleration due to gravity is in the direction of decreasing $y$. slope_func returns a sequence containing the two derivatives.\nBefore calling run_solve_ivp, it is a good idea to test the slope\nfunction with the initial conditions:", "dydt, dvdt = slope_func(0, system.init, system)\nprint(dydt)\nprint(dvdt)", "The result is 0 m/s for velocity and -9.8 m/s$^2$ for acceleration. 
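Before handing things to the solver, a few hand-rolled Euler steps make a useful cross-check of the slope equations against the closed form $y(t) = 381 - g t^2 / 2$. This is a standalone sketch with its own simplified slope function (two arguments instead of the chapter's three), not code from the modsim library:

```python
g = 9.8
dt = 0.001
n = 1000                      # integrate to t = n * dt = 1 s

def slope_func(t, state):
    # same equations as the chapter's slope function: dy/dt = v, dv/dt = -g
    y, v = state
    return v, -g

y, v = 381.0, 0.0             # penny dropped from 381 m at a standstill
for i in range(n):
    dydt, dvdt = slope_func(i * dt, (y, v))
    y += dydt * dt
    v += dvdt * dt

# closed form after 1 s: y = 381 - g/2 = 376.1 m, v = -g = -9.8 m/s
print(round(y, 2), round(v, 2))
```

With dt = 0.001 the Euler estimate lands within about 5 mm of the exact height, which is reassurance enough that the derivatives are wired up correctly.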
Now we call run_solve_ivp like this:", "from modsim import run_solve_ivp\n\nresults, details = run_solve_ivp(system, slope_func)\ndetails\n\nresults.head()", "results is a TimeFrame with two columns: y contains the height of\nthe penny; v contains its velocity.\nWe can plot the results like this:", "from modsim import decorate\n\nresults.y.plot()\n\ndecorate(xlabel='Time (s)',\n ylabel='Position (m)')", "Since acceleration is constant, speed increases linearly and height decreases quadratically; as a result, the height curve is a parabola.\nThe last value of results.y is negative, which means we ran the simulation too long.", "t_end = results.index[-1]\nresults.y[t_end]", "One way to solve this problem is to use the results to\nestimate the time when the penny hits the sidewalk.\nThe ModSim library provides crossings, which takes a TimeSeries and a value, and returns a sequence of times when the series passes through the value. We can find the time when the height of the penny is 0 like this:", "from modsim import crossings\n\nt_crossings = crossings(results.y, 0)\nt_crossings", "The result is an array with a single value, 8.818 s. Now, we could run\nthe simulation again with t_end = 8.818, but there's a better way.\nEvents\nAs an option, run_solve_ivp can take an event function, which\ndetects an \"event\", like the penny hitting the sidewalk, and ends the\nsimulation.\nEvent functions take the same parameters as slope functions, state,\nt, and system. They should return a value that passes through 0\nwhen the event occurs. 
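What does "passes through 0" buy us numerically? Both crossings and event detection rest on the same idea: scan a sampled series for sign changes and linearly interpolate inside the bracketing interval. Here is a rough sketch of that idea (an illustration only, not the library's actual implementation):

```python
import numpy as np

def find_crossings(ts, ys, value=0.0):
    ts = np.asarray(ts)
    d = np.asarray(ys) - value
    # indices i where the series changes sign between samples i and i+1
    idx = np.nonzero(d[:-1] * d[1:] < 0)[0]
    # linear interpolation inside each bracketing interval
    return ts[idx] + (ts[idx + 1] - ts[idx]) * d[idx] / (d[idx] - d[idx + 1])

# heights of the falling penny sampled every 0.1 s: y = 381 - g t^2 / 2
ts = np.linspace(0, 10, 101)
ys = 381 - 4.9 * ts**2
print(find_crossings(ts, ys))   # one crossing, near sqrt(381 / 4.9) = 8.818 s
```

The library's crossings is more refined — the "Under the hood" section at the end shows its source, which relies on interpolating the series — but sign-change bracketing is the core of the idea.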
Here's an event function that detects the penny\nhitting the sidewalk:", "def event_func(t, state, system):\n y, v = state\n return y", "The return value is the height of the penny, y, which passes through\n0 when the penny hits the sidewalk.\nWe pass the event function to run_solve_ivp like this:", "results, details = run_solve_ivp(system, slope_func,\n events=event_func)\ndetails", "Then we can get the flight time and final velocity like this:", "t_end = results.index[-1]\nt_end\n\ny, v = results.iloc[-1]\nprint(y)\nprint(v)", "If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.\nSo it's a good thing there is air resistance.\nSummary\nBut air resistance...\nExercises\nExercise: Here's a question from the web site Ask an Astronomer:\n\"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed.\"\nUse run_solve_ivp to answer this question.\nHere are some suggestions about how to proceed:\n\n\nLook up the Law of Universal Gravitation and any constants you need. 
I suggest you work entirely in SI units: meters, kilograms, and Newtons.\n\n\nWhen the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.\n\n\nExpress your answer in days, and plot the results as millions of kilometers versus days.\n\n\nIf you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.\nYou might also be interested to know that it's actually not that easy to get to the Sun.", "# Solution\n\nr_0 = 150e9 # 150 million km in m\nv_0 = 0\ninit = State(r=r_0,\n v=v_0)\n\n# Solution\n\nradius_earth = 6.37e6 # meters\nradius_sun = 696e6 # meters\nr_final = radius_sun + radius_earth\nr_final\n\nr_0 / r_final\n\nt_end = 1e7 # seconds\n\nsystem = System(init=init,\n G=6.674e-11, # N m^2 / kg^2\n m1=1.989e30, # kg\n m2=5.972e24, # kg\n r_final=radius_sun + radius_earth,\n t_end=t_end)\n\n# Solution\n\ndef universal_gravitation(state, system):\n \"\"\"Computes gravitational force.\n \n state: State object with distance r\n system: System object with m1, m2, and G\n \"\"\"\n r, v = state\n G, m1, m2 = system.G, system.m1, system.m2\n \n force = G * m1 * m2 / r**2\n return force\n\n# Solution\n\nuniversal_gravitation(init, system)\n\n# Solution\n\ndef slope_func(t, state, system):\n \"\"\"Compute derivatives of the state.\n \n state: position, velocity\n t: time\n system: System object containing `m2`\n \n returns: derivatives of r and v\n \"\"\"\n r, v = state\n m2 = system.m2 \n\n force = universal_gravitation(state, system)\n drdt = v\n dvdt = -force / m2\n \n return drdt, dvdt\n\n# Solution\n\nslope_func(0, system.init, system)\n\n# Solution\n\ndef event_func(t, state, system):\n r, v = state\n return r - system.r_final\n\n# Solution\n\nevent_func(0, init, system)\n\n# Solution\n\nresults, details = run_solve_ivp(system, slope_func, \n 
events=event_func)\ndetails\n\n# Solution\n\nt_event = results.index[-1]\nt_event\n\n# Solution\n\nfrom modsim import units\n\nseconds = t_event * units.second\ndays = seconds.to(units.day)\n\n# Solution\n\nresults.index /= 60 * 60 * 24\n\n# Solution\n\nresults.r /= 1e9\n\n# Solution\n\nresults.r.plot(label='r')\n\ndecorate(xlabel='Time (day)',\n ylabel='Distance from sun (million km)')", "Under the hood\nsolve_ivp\nHere is the source code for crossings so you can see what's happening under the hood:", "%psource crossings", "The documentation of InterpolatedUnivariateSpline is here." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sangheestyle/ml2015project
howto/model05_Au_Aq_RMSE_linear_model_and_DictVectorizer.ipynb
mit
[ "model05\nModel\n\nLinear models: LinearRegression, Ridge, Lasso, ElasticNet\n\nFeatures\n\nuid\nqid\nq_length\ncategory\nanswer\navg_per_uid: average response time per user\navg_per_qid: average response time per question", "import gzip\nimport cPickle as pickle\n\nwith gzip.open(\"../data/train.pklz\", \"rb\") as train_file:\n train_set = pickle.load(train_file)\n\nwith gzip.open(\"../data/test.pklz\", \"rb\") as test_file:\n test_set = pickle.load(test_file)\n\nwith gzip.open(\"../data/questions.pklz\", \"rb\") as questions_file:\n questions = pickle.load(questions_file)", "Make training set\nTo train the model, we need to make feature and label pairs. In this case, we use uid, qid, question length, category, answer, and the average response times as features, and position as the label.", "print train_set[1]\nprint questions[1].keys()\n\nX = []\nY = []\navg_time_per_user = {}\navg_time_per_que = {}\n\nfor key in train_set:\n # We only care about positive case at this time\n #if train_set[key]['position'] < 0:\n # continue\n uid = train_set[key]['uid']\n qid = train_set[key]['qid']\n pos = train_set[key]['position']\n q_length = max(questions[qid]['pos_token'].keys())\n category = questions[qid]['category'].lower()\n answer = questions[qid]['answer'].lower()\n \n # Calculate average response time per user\n temp = 0; num = 0\n if uid not in avg_time_per_user.keys():\n for keysubset in train_set:\n if train_set[keysubset]['uid'] == uid:\n temp += train_set[keysubset]['position']\n num += 1\n avg_time_per_user[uid] = temp/num\n temp=0; num = 0\n\n # Calculate average response time per question\n temp=0; num = 0\n if qid not in avg_time_per_que.keys():\n for keysubset in train_set:\n if train_set[keysubset]['qid'] == qid:\n temp += train_set[keysubset]['position']\n num += 1\n avg_time_per_que[qid] = temp/num\n temp=0; num = 0\n \n feat = {\"uid\": str(uid), \"qid\": str(qid), \"q_length\": q_length, \"category\": category, \"answer\": answer, \"avg_per_uid\": avg_time_per_user[uid], \"avg_per_qid\":avg_time_per_que[qid]}\n 
X.append(feat)\n Y.append([pos])\n\nprint len(X)\nprint len(Y)\nprint X[0], Y[0]", "It means that user 0 tried to solve question number 1, which has 77 question tokens, and he or she answered at the 61st token.\nTrain models and make predictions\nLet's train the models and make predictions.", "from sklearn.feature_extraction import DictVectorizer\n\n\nvec = DictVectorizer()\nX = vec.fit_transform(X)\nprint X[0]\n\nfrom sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet\nfrom sklearn.cross_validation import train_test_split, cross_val_score\nimport math\n\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y)\n\nregressor = LinearRegression()\nscores = cross_val_score(regressor, X, Y, cv=10, scoring='mean_squared_error')\n# Flip the sign of MSE and take the sqrt of those values.\nfor ii in xrange(len(scores)):\n scores[ii] = math.sqrt(-1*scores[ii])\nprint 'Linear Cross validation RMSE scores:', scores.mean()\nprint scores\n\nregressor = Ridge()\nscores = cross_val_score(regressor, X, Y, cv=10, scoring='mean_squared_error')\n# Flip the sign of MSE and take the sqrt of those values.\nfor ii in xrange(len(scores)):\n scores[ii] = math.sqrt(-1*scores[ii])\nprint 'Ridge Cross validation RMSE scores:', scores.mean()\nprint scores\n\nregressor = Lasso()\nscores = cross_val_score(regressor, X, Y, cv=10, scoring='mean_squared_error')\n# Flip the sign of MSE and take the sqrt of those values.\nfor ii in xrange(len(scores)):\n scores[ii] = math.sqrt(-1*scores[ii])\nprint 'Lasso Cross validation RMSE scores:', scores.mean()\nprint scores\n\nregressor = ElasticNet()\nscores = cross_val_score(regressor, X, Y, cv=10, scoring='mean_squared_error')\n# Flip the sign of MSE and take the sqrt of those values.\nfor ii in xrange(len(scores)):\n scores[ii] = math.sqrt(-1*scores[ii])\nprint 'ElasticNet Cross validation RMSE scores:', scores.mean()\nprint scores\n\na = [{1: 2}, {2: 3}]\nb = [{3: 2}, {4: 3}]\nc = a + b\nprint c[:len(a)]\nprint c[len(a):]\n\n\nX_train = []\nY_train = 
[]\n\nfor key in train_set:\n # We only care about positive case at this time\n #if train_set[key]['position'] < 0:\n # continue\n uid = train_set[key]['uid']\n qid = train_set[key]['qid']\n pos = train_set[key]['position']\n q_length = max(questions[qid]['pos_token'].keys())\n category = questions[qid]['category'].lower()\n answer = questions[qid]['answer'].lower()\n feat = {\"uid\": str(uid), \"qid\": str(qid), \"q_length\": q_length, \"category\": category, \"answer\": answer}\n X_train.append(feat)\n Y_train.append(pos)\n\nX_test = []\nY_test = []\n\nfor key in test_set:\n uid = test_set[key]['uid']\n qid = test_set[key]['qid']\n q_length = max(questions[qid]['pos_token'].keys())\n category = questions[qid]['category'].lower()\n answer = questions[qid]['answer'].lower()\n feat = {\"uid\": str(uid), \"qid\": str(qid), \"q_length\": q_length, \"category\": category, \"answer\": answer}\n X_test.append(feat)\n Y_test.append(key)\n\nprint \"Before transform: \", len(X_test)\nX_train_length = len(X_train)\nX = vec.fit_transform(X_train + X_test)\nX_train = X[:X_train_length]\nX_test = X[X_train_length:]\n\nregressor = Ridge()\nregressor.fit(X_train, Y_train)\n\npredictions = regressor.predict(X_test)\npredictions = sorted([[id, predictions[index]] for index, id in enumerate(Y_test)])\nprint len(predictions)\npredictions[:5]", "Here are 4749 predictions.\nWriting the submission\nOK, let's write the submission into the guess.csv file. In the given submission form, we realized that we need to put a header. So, we will insert the header at the beginning of predictions and then write them to a file.", "import csv\n\n\npredictions.insert(0,[\"id\", \"position\"])\nwith open('guess.csv', 'wb') as fp:\n writer = csv.writer(fp, delimiter=',')\n writer.writerows(predictions)", "All right. Let's submit!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ioggstream/python-course
ansible-101/notebooks/05_inventories.ipynb
agpl-3.0
[ "Inventories\nInventories are a fundamental documentation entry point for our infrastructures. \nThey contain a lot of information, including:\n- ansible_user\n- configuration variables in [group_name:vars]\n- host grouping e.g. by geographical zones in [group_name:children]\n\nFiles:\n\ninventory", "cd /notebooks/exercise-05\n\n!cat inventory", "The ansible executable can process inventory files", "!ansible -i inventory --list-hosts all", "Exercise\nUse ansible to show:\n- all hosts of the web group.", "# Use this cell for the exercise\n\n# The ping module is very useful. \n# Use it whenever you want to check connectivity!\n!ansible -m ping -i inventory web_rome", "Inventory scripts", "# To create custom inventory scripts just use python ;) and set it in\n!grep inventory ansible.cfg # inventory = ./docker-inventory.py", "Exercise\nIn the official ansible documentation, find at least 3 ansible_connection=docker parameters", "\"\"\"List our containers. \n\n Note: this only works with docker-compose containers.\n\n\"\"\"\nfrom __future__ import print_function\n# \n# Manage different docker libraries\n#\ntry:\n from docker import Client\nexcept ImportError:\n from docker import APIClient as Client\n\n\nc = Client(base_url=\"http://172.17.0.1:2375\")\n\n# Define a function to make it clear!\ncontainer_fmt = lambda x: (\n x['Names'][0][1:],\n x['Labels']['com.docker.compose.service'], \n x['NetworkSettings']['Networks']['bridge']['IPAddress'],\n)\n\nfor x in c.containers():\n try:\n print(*container_fmt(x), sep='\\t\\t')\n except KeyError:\n # skip non-docker-compose containers\n pass\n\n# Ansible accepts\nimport json\n\ninventories = {\n 'web': {\n 'hosts': ['ws-1', 'ws-2'],\n },\n 'db': {\n 'hosts': ['db-1', 'db-2'],\n }\n}\n\n# like this \nprint(json.dumps(inventories, indent=1))\n \n\n# You can pass variables to generated inventories too\ninventories['web']['host_vars'] = {\n 'ansible_ssh_common_args': ' -o GSSApiAuthentication=no'\n}\nprint(json.dumps(inventories, indent=1))", 
"Exercise:\nReuse the code in inventory-docker.py to print a json inventory that:\n\nconnects via docker to \"web\" hosts\nconnects via ssh to \"ansible\" hosts \n\nTest it in the cell below.\n NOTE: there's a docker inventory script shipped with ansible", "!ansible -m ping -i inventory-docker-solution.py all ", "Exercise\nModify the inventory-docker.py to skip StrictHostKeyChecking only on web hosts.", "# Test here your inventory", "Configurations\nYou may want to split inventory files and separate prod and test environments.\nExercise:\nsplit the inventory into two inventory files:\n\nprod for production servers \ntest for test servers\n\nThen use ansible -i to explicitly use the different ones.", "# Use this cell to test the exercise", "group_vars and host_vars\nYou can move variables out of inventories - e.g. to simplify inventory scripts - and store them in files:\n\nunder group_vars for host groups\nunder host_vars for single hosts", "!tree group_vars", "If you have different inventories, you can store different sets of variables in custom files.\nThe all file will be shared between all inventories.\nExercise:\n\nedit group_vars/all and move all common variables there from the inventory", "# Test here the new inventory file", "Inventory variables can store almost everything and even describe the architecture of your deployment", "!cat group_vars/example", "We can even mix and match group_vars and inventory, as we'll see in the next lessons.\nhost_vars\nHost vars can be used in automated or cloud deployments where:\n\nevery new host or vm, at boot, populates its own entries in host_vars (e.g. via file)\nansible is run after that setup and uses host_vars to configure the server and expose those values to the other machines." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/nerc/cmip6/models/sandbox-2/atmoschem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: NERC\nSource ID: SANDBOX-2\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:27\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nerc', 'sandbox-2', 'atmoschem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n5. Key Properties --&gt; Tuning Applied\n6. Grid\n7. Grid --&gt; Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --&gt; Surface Emissions\n11. Emissions Concentrations --&gt; Atmospheric Emissions\n12. Emissions Concentrations --&gt; Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --&gt; Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric chemistry model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmospheric chemistry model code.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Chemistry Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "1.8. Coupling With Chemical Reactivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry transport scheme turbulence is couple with chemical reactivity?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemical species advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. 
Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Split Operator Chemistry Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemistry (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Split Operator Alternate Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\n?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.6. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.7. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.2. Convection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.3. Precipitation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.4. Emissions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.5. Deposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.6. Gas Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. 
This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.9. Photo Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.10. Aerosols\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. 
Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the atmospheric chemistry grid", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n* Does the atmospheric chemistry grid match the atmosphere grid?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview of transport implementation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Use Atmospheric Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric chemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Transport Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric chemistry emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Emissions Concentrations --&gt; Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. 
Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Emissions Concentrations --&gt; Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. 
Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. 
Emissions Concentrations --&gt; Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Gas Phase Chemistry\nAtmospheric chemistry gas phase chemistry\n13.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview gas phase atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. 
Number Of Bimolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.4. Number Of Termolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.7. Number Of Advected Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.8. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.9. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.10. Wet Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.11. Wet Oxidation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet oxidation included? 
Oxidation describes the loss of electrons or an increase in oxidation state by a molecule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry stratospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview stratospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n", "14.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n", "14.4. 
Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.5. Sedimentation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sedimentation included in the stratospheric heterogeneous chemistry scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the stratospheric heterogeneous chemistry scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview tropospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n", "15.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.5. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the tropospheric heterogeneous chemistry scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric photo chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Number Of Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17. Photo Chemistry --&gt; Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nPhotolysis scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n", "17.2. 
Environmental Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Mahdisadjadi/phoenixcrime
map.ipynb
mit
[ "Inspired by this gist!\nTo get the data, go to this website:\nhttp://www.census.gov/cgi-bin/geo/shapefiles2010/main\nused this: http://www.christianpeccei.com/zipmap/\nstates: ftp://ftp2.census.gov/geo/tiger/TIGER2010/STATE/2010/", "import shapefile\nimport matplotlib.patches as patches\nfrom matplotlib.collections import PatchCollection\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n%matplotlib inline", "Find which zipcodes we need first", "df = pd.read_csv('./data/cleaneddataset.csv')\n\n# list of unique zipcodes\n#zipcodes = df['zip'].unique().tolist()\n\nzipval = df['zip'].value_counts()\nzipval = zipval[zipval>10] # keep only zipcodes with more than 10 crimes\n# normalized\nzipval = zipval/zipval.max()\n# list of unique zipcodes\nzipcodes = zipval.index.tolist()", "Plot them", "sfile = shapefile.Reader('./tl_2010_04_zcta510/tl_2010_04_zcta510.shp')\nshape_recs = sfile.shapeRecords()\n\nfig = plt.figure(figsize=(5,5))\nax = fig.add_subplot(111)\n\nallpatches = []\nfor rec in shape_recs:\n # points that create each zipcode\n points = rec.shape.points\n # metadata\n meta = rec.record\n zipcode = int(meta[1])\n # color map\n cmap = plt.cm.PuRd\n # If this zipcode is part of our dataset, plot it!\n if zipcode in zipcodes:\n # pick out the right color\n c = cmap(zipval[zipcode]) #np.random.rand(3,1) \n # create a patch\n patch = patches.Polygon(points,closed=True,facecolor=c,\n edgecolor=(0.3, 0.3, 0.3, 1.0), linewidth=0.2)\n # collect the patches\n # allpatches.append(patch)\n ax.add_patch(patch)\n \n # if you want to see irrelevant zipcodes\n #else:\n # patch = patches.Polygon(points,True,facecolor='k',edgecolor='white',linewidth=0.2)\n # ax.add_patch(patch)\n\n#p = PatchCollection(allpatches, match_original=False, alpha=0.3 , linewidth=1)\n#ax.add_collection(p)\n\nax.autoscale()\nax.set_title('Number of Crimes per ZIP Code in Phoenix (2016)')\nplt.tight_layout()\nplt.axis('off')\nplt.savefig(\"my_map.png\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
cysuncn/python
study/machinelearning/tensorflow/TensorFlow-Examples-master/notebooks/0_Prerequisite/mnist_dataset_intro.ipynb
gpl-3.0
[ "MNIST Dataset Introduction\nMost examples use the MNIST dataset of handwritten digits. It has 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image, so each sample is represented as a matrix of size 28x28 with values from 0 to 1.\nOverview\n\nUsage\nIn our examples, we are using the TensorFlow input_data.py script to load that dataset.\nIt is quite useful for managing our data, and handles:\n\n\nDataset downloading\n\n\nLoading the entire dataset into a numpy array:", "# Import MNIST\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"/tmp/data/\", one_hot=True)\n\n# Load data\nX_train = mnist.train.images\nY_train = mnist.train.labels\nX_test = mnist.test.images\nY_test = mnist.test.labels", "A next_batch function that can iterate over the whole dataset and return only the desired fraction of the dataset samples (in order to save memory and avoid loading the entire dataset).", "# Get the next batch of 64 images and labels\nbatch_X, batch_Y = mnist.train.next_batch(64)", "Link: http://yann.lecun.com/exdb/mnist/" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
shareactorIO/pipeline
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/GPU/BasicGPU.ipynb
apache-2.0
[ "Basic TensorFlow with GPU", "!nvidia-smi\n\nimport tensorflow as tf\n\nsess = tf.Session(config=tf.ConfigProto(log_device_placement=True))\n\nlogdir = '/root/pipeline/logs/tensorflow'\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport datetime\n\nfrom tensorflow.python.framework import ops\nfrom tensorflow.python.platform import gfile\n\nfrom IPython.display import clear_output, Image, display, HTML", "Multiply 2 matrices", "matrix1 = tf.placeholder(\"float\",name=\"matrix1\")\nmatrix2 = tf.placeholder(\"float\",name=\"matrix2\")\nproduct = tf.matmul(matrix1, matrix2)\n\nsess = tf.Session(config=tf.ConfigProto(log_device_placement=True))\nresult = sess.run(product,feed_dict={matrix1: [[3., 3.]], matrix2: [[6.],[6.]]})\nprint result\nsess.close()", "Sessions must be closed to release resources. We may use the 'with' syntax to close sessions automatically when completed.", "with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:\n with tf.device(\"/gpu:0\"):\n result = sess.run(product,feed_dict={matrix1: [[3., 3.]], matrix2: [[6.],[6.]]})\n print result", "Here we have included a device reference, which will determine which GPU to use for operations. Indexing of devices starts at 0.\nWe may define variables that maintain their properties across executions of the graph. For instance, a variable is used to track runs of the session.\nFirst, create a Variable, that will be initialized to the scalar value 0. Then, create an Op to add one to state. Variables must be initialized through the use of an 'init' Op after having launched the graph.", "state = tf.Variable(0, name=\"counter\")\n\none = tf.constant(1)\nnew_value = tf.add(state, one)\nupdate = tf.assign(state, new_value)\n\ninit_op = tf.initialize_all_variables()\n\nwith tf.Session() as sess:\n sess.run(init_op)\n print sess.run(state)\n for _ in range(3):\n sess.run(update)\n print sess.run(state)", "Linear Regression\nIn the following example, we perform simple linear regression. 
The target data is $y = 2x + \\eta $ where $ \\eta $ has the distribution ~ $ N(0, \\sigma^2) $", "%matplotlib inline\nx_batch = np.linspace(-1, 1, 101)\ny_batch = x_batch * 2 + np.random.randn(*x_batch.shape) * 0.3\nplt.scatter(x_batch, y_batch)", "We can initialize input Ops using the placeholder function", "x = tf.placeholder(tf.float32, shape=(None,), name=\"x\")\ny = tf.placeholder(tf.float32, shape=(None,), name=\"y\")", "We also create a variable for the weights and note that a NumPy array is convertible to a Tensor.", "w = tf.Variable(np.random.normal(), name=\"W\")", "Our approach here is to perform gradient descent to update a predictor, y_pred, using the least squares cost function. Updating y_pred is simply done through a matrix multiplication similar to what we have performed earlier.", "sess = tf.InteractiveSession()\nsess.run(tf.initialize_all_variables())\ny_pred = tf.mul(w, x)\ny0 = sess.run(y_pred, {x: x_batch})\nplt.figure(1)\nplt.scatter(x_batch, y_batch)\nplt.plot(x_batch, y0)\n\ncost = tf.reduce_mean(tf.square(y_pred - y))\nsummary_op = tf.scalar_summary(\"cost\", cost)\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)\ntrain_op = optimizer.minimize(cost)", "The initial predictor has little relation to the data.\nWe've selected the optimizer to reduce the cost function, which is the sum of squared differences with the data. We can then define a Summary Writer which will output logs and enable visualizations in TensorBoard. 
\nWe start our optimizer:", "summary_writer = tf.train.SummaryWriter(logdir, sess.graph_def)\nfor t in range(30):\n cost_t, summary, _ = sess.run([cost, summary_op, train_op], {x: x_batch, y: y_batch})\n summary_writer.add_summary(summary, t)\n print cost_t.mean()\n\ny_pred_batch = sess.run(y_pred, {x: x_batch}) \nplt.figure(1)\nplt.scatter(x_batch, y_batch)\nplt.plot(x_batch, y_pred_batch)\n\n# Helper functions for TF Graph visualization\ndef strip_consts(graph_def, max_const_size=32):\n \"\"\"Strip large constant values from graph_def.\"\"\"\n strip_def = tf.GraphDef()\n for n0 in graph_def.node:\n n = strip_def.node.add() \n n.MergeFrom(n0)\n if n.op == 'Const':\n tensor = n.attr['value'].tensor\n size = len(tensor.tensor_content)\n if size > max_const_size:\n tensor.tensor_content = \"<stripped %d bytes>\"%size\n return strip_def\n\ndef rename_nodes(graph_def, rename_func):\n res_def = tf.GraphDef()\n for n0 in graph_def.node:\n n = res_def.node.add() \n n.MergeFrom(n0)\n n.name = rename_func(n.name)\n for i, s in enumerate(n.input):\n n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])\n return res_def\n \ndef show_graph(graph_def, max_const_size=32):\n \"\"\"Visualize TensorFlow graph.\"\"\"\n if hasattr(graph_def, 'as_graph_def'):\n graph_def = graph_def.as_graph_def()\n strip_def = strip_consts(graph_def, max_const_size=max_const_size)\n code = \"\"\"\n <script>\n function load() {{\n document.getElementById(\"{id}\").pbtxt = {data};\n }}\n </script>\n <link rel=\"import\" href=\"https://tensorboard.appspot.com/tf-graph-basic.build.html\" onload=load()>\n <div style=\"height:600px\">\n <tf-graph-basic id=\"{id}\"></tf-graph-basic>\n </div>\n \"\"\".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))\n \n iframe = \"\"\"\n <iframe seamless style=\"width:800px;height:620px;border:0\" srcdoc=\"{}\"></iframe>\n \"\"\".format(code.replace('\"', '&quot;'))\n display(HTML(iframe))\n\ntmp_def = rename_nodes(sess.graph_def, lambda 
s:\"/\".join(s.split('_',1)))\nshow_graph(tmp_def)", "Check you're able to navigate around TensorBoard and navigate to the items below visualizing the graph, weights, and gradient descent parameters.\n\n\n\nMultilayer Convolutional Network\nIn the following section, we use convolutional layers, a crucial tool in networks providing advances over traditional image recognition techniques on large datasets. Here, we work with a dataset consisting of handwritten integers, the MNIST dataset. \nWe use a class that stores the MNIST training, validation, and test sets as NumPy arrays.\nWe first initialize the weights and biases. Weights are typically set to a low noise-like background to avoid 0 gradients providing a small perturbation to the start of optimization.", "import tensorflow.examples.tutorials.mnist.input_data as input_data\n\n#import input_data\nmnist = input_data.read_data_sets('MNIST_data', one_hot=True)\n\ndef weight_variable(shape):\n initial = tf.truncated_normal(shape, stddev=0.1)\n return tf.Variable(initial)\n\ndef bias_variable(shape):\n initial = tf.constant(0.1, shape=shape)\n return tf.Variable(initial)", "We may now define a helper function calling the convolution with a stride of one and zero padded to match the input and output size and standard 2x2 max pooling layers. 
Under the hood, the TensorFlow functions use the NVIDIA cuDNN (CUDA Deep Neural Network) library, which provides highly optimized GPU implementations of these operations.", "def conv2d(x, W):\n    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')\n\ndef max_pool_2x2(x):\n    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],\n                          strides=[1, 2, 2, 1], padding='SAME')", "Convolutional + Pooling Layers", "x = tf.placeholder(\"float\", shape=[None, 784])\ny_ = tf.placeholder(\"float\", shape=[None, 10])\n\nx_image = tf.reshape(x, [-1,28,28,1])\n\nW_conv1 = weight_variable([5, 5, 1, 32])\nb_conv1 = bias_variable([32])\n\nh_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)\nh_pool1 = max_pool_2x2(h_conv1)\n\nW_conv2 = weight_variable([5, 5, 32, 64])\nb_conv2 = bias_variable([64])\n\nh_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)\nh_pool2 = max_pool_2x2(h_conv2)", "Regularization / Dropout Layer Avoids Overfitting", "W_fc1 = weight_variable([7 * 7 * 64, 1024])\nb_fc1 = bias_variable([1024])\n\nh_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])\nh_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)\n\nkeep_prob = tf.placeholder(\"float\")\nh_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)\n\nW_fc2 = weight_variable([1024, 10])\nb_fc2 = bias_variable([10])", "Softmax Layer Produces Class Probabilities", "y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)", "We apply a Dropout layer, which randomly drops neurons during training to regularize our model (reduce overfitting).\nWe now train our model using cross-entropy as the objective function and the more robust Adam optimizer. 
The output is logged for every 10th iteration in the training process.", "sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True))\ncross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))\ntrain_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)\ncorrect_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))\nsess.run(tf.initialize_all_variables())\nfor i in range(100):\n    batch = mnist.train.next_batch(50)\n    if i%10 == 0:\n        train_accuracy = accuracy.eval(session=sess, feed_dict={\n            x:batch[0], y_: batch[1], keep_prob: 1.0})\n        print \"step %d, training accuracy %g\"%(i, train_accuracy)\n    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})\n\nprint \"test accuracy %g\"%accuracy.eval(session=sess, feed_dict={\n    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})\n\n\ntmp_def = rename_nodes(sess.graph_def, lambda s:\"/\".join(s.split('_',1)))\nshow_graph(tmp_def)", "Now try tuning the model for better performance. There are many options:\n\nAdd more layers\nIncrease the number of nodes\nUse different types of activation functions\nIncrease the number of epochs\n\nSee if you can reach over 98% accuracy\nSequence Autoencoder\nIn the next example, we demonstrate an autoencoder which learns a lower-dimensional representation of sequential input data.", "sess.close()\nops.reset_default_graph()\nfrom tensorflow.models.rnn import rnn_cell, seq2seq\nsess = tf.InteractiveSession()\nseq_length = 5\nbatch_size = 64\n\nvocab_size = 7\nembedding_dim = 50\n\nmemory_dim = 100", "For each time point, we define an associated Tensor and label. 
Finally, the loss weights are constant with respect to time.", "enc_inp = [tf.placeholder(tf.int32, shape=(None,),\n                          name=\"inp%i\" % t)\n           for t in range(seq_length)]\n\nlabels = [tf.placeholder(tf.int32, shape=(None,),\n                         name=\"labels%i\" % t)\n          for t in range(seq_length)]\n\nweights = [tf.ones_like(labels_t, dtype=tf.float32)\n           for labels_t in labels]\n\ndec_inp = ([tf.zeros_like(enc_inp[0], dtype=np.int32, name=\"GO\")]\n           + enc_inp[:-1])\n\nprev_mem = tf.zeros((batch_size, memory_dim))", "We have defined a decoder input with the name \"GO\" and dropped the final value of the encoder. We now initialize the seq2seq embedding structure with the previously defined values and apply a loss function that is the cross-entropy across each item in the sequence.", "cell = rnn_cell.GRUCell(memory_dim)\ndec_outputs, dec_memory = seq2seq.embedding_rnn_seq2seq(enc_inp, dec_inp, cell, vocab_size, vocab_size)\n\nloss = seq2seq.sequence_loss(dec_outputs, labels, weights, vocab_size)", "We specify the outputs during training as the loss and the magnitude of activations.", "tf.scalar_summary(\"loss\", loss)\nmagnitude = tf.sqrt(tf.reduce_sum(tf.square(dec_outputs[1])))\n\ntf.scalar_summary(\"magnitude at t=1\", magnitude)\n\nsummary_op = tf.merge_all_summaries()\n\nlogdir = '~/'\nsummary_writer = tf.train.SummaryWriter(logdir, sess.graph_def)", "We pass the learning rate and momentum to our momentum optimizer.", "learning_rate = 0.05\nmomentum = 0.9\noptimizer = tf.train.MomentumOptimizer(learning_rate, momentum)\ntrain_op = optimizer.minimize(loss)", "What would happen if we tripled our learning rate and momentum? 
(answer at end).\nWe train in batches on the GPU.", "def train_batch(batch_size):\n    X = [np.random.choice(vocab_size, size=(seq_length,), replace=False)\n         for _ in range(batch_size)]\n    Y = X[:]\n    X = np.array(X).T\n    Y = np.array(Y).T\n    feed_dict = {enc_inp[t]: X[t] for t in range(seq_length)}\n    feed_dict.update({labels[t]: Y[t] for t in range(seq_length)})\n    _, loss_t, summary = sess.run([train_op, loss, summary_op], feed_dict)\n    return loss_t, summary\n\nwith tf.device('/gpu:0'):\n    sess.run(tf.initialize_all_variables())\n    for t in range(500):\n        loss_t, summary = train_batch(batch_size)\n        summary_writer.add_summary(summary, t)\n\nsummary_writer.flush()", "We can now test our lower-dimensional autoencoder by passing data through the embedding to determine whether the input is recovered.", "X_batch = [np.random.choice(vocab_size, size=(seq_length,), replace=False)\n           for _ in range(10)]\nX_batch = np.array(X_batch).T\n\nfeed_dict = {enc_inp[t]: X_batch[t] for t in range(seq_length)}\ndec_outputs_batch = sess.run(dec_outputs, feed_dict)\n\nprint(X_batch)\n\n[logits_t.argmax(axis=1) for logits_t in dec_outputs_batch]\n\ntmp_def = rename_nodes(sess.graph_def, lambda s:\"/\".join(s.split('_',1)))\nshow_graph(tmp_def)", "At this point, we may return and implement the changes in learning rate and momentum to inform us on question 1.\nAcknowledgements\nSignificant content from TensorFlow documentation here, and Aymeric Damien's repository here. Original model parallel code is licensed under the Apache 2.0 license.\nWhat would happen if we tripled our learning rate and momentum? \nIncreasing the learning rate and momentum comes at the risk of skipping over minima. Here, the tripled learning rate and momentum result in a search that is too coarse and unable to converge. Increasing momentum reduces the effect of the current gradient compared to those from previous time points. This must be balanced with regard to the optimization landscape. 
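The instability can be sketched with the classical momentum update on a toy quadratic (an illustration only, not the seq2seq model; the values mirror the notebook's 0.05 and 0.9 and their tripled counterparts):

```python
def momentum_descent(lr, momentum, steps=100):
    # Minimize f(x) = x^2 with classical momentum; return the final |x|.
    x, v = 1.0, 0.0
    for _ in range(steps):
        grad = 2 * x
        v = momentum * v - lr * grad
        x = x + v
    return abs(x)

stable = momentum_descent(lr=0.05, momentum=0.9)              # converges toward 0
unstable = momentum_descent(lr=0.15, momentum=2.7, steps=20)  # momentum > 1: velocity grows, diverges
```

With a momentum coefficient above 1 the accumulated velocity grows geometrically regardless of the learning rate, so the tripled settings cannot converge on even this simplest of landscapes.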
\nWhat is the effect of performing the summation on a GPU device?\nThe time is increased marginally. The extra time stems from transferring data from two GPUs to one GPU (luckily the GPU memory is large enough to hold the extra copies; otherwise the operation would fail). There is a degree of parallelism that is exploited during matrix summations.\nWhat is the largest batch size that can be used?\nThe maximum batch size is 8192 for a 12 GB GPU, but diminishing returns are observed beyond a batch size of 2048." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
blua/deep-learning
gan_mnist/Intro_to_GANs_Exercises.ipynb.LOCAL.17033.ipynb
mit
[ "Generative Adversarial Network\nIn this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!\nGANs were first reported in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:\n\nPix2Pix \nCycleGAN\nA whole list\n\nThe idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.\n\nThe general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.\nThe output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.", "%matplotlib inline\n\nimport pickle as pkl\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data')", "Model Inputs\nFirst we need to create the inputs for our graph. 
We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.\n\nExercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.", "def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32,(None,real_dim), name='input_real') \n inputs_z = tf.placeholder(tf.float32,(None, z_dim), name='input_z') \n \n return inputs_real, inputs_z", "Generator network\n\nHere we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.\nVariable Scope\nHere we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.\nWe could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. 
So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.\nTo use tf.variable_scope, you use a with statement:\npython\nwith tf.variable_scope('scope_name', reuse=False):\n    # code here\nHere's more from the TensorFlow documentation to get another look at using tf.variable_scope.\nLeaky ReLU\nTensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this, you can take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:\n$$\nf(x) = \\max(\\alpha * x, x)\n$$\nTanh Output\nThe generator has been found to perform best with a $tanh$ output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1. Along with the $tanh$ output, we also need to return the logits for use in calculating the loss with tf.nn.sigmoid_cross_entropy_with_logits.\n\nExercise: Implement the generator network in the function below. You'll need to return both the logits and the tanh output. 
Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.", "def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):\n ''' Build the generator network.\n \n Arguments\n ---------\n z : Input tensor for the generator\n out_dim : Shape of the generator output\n n_units : Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: \n '''\n with tf.variable_scope('generator', reuse=reuse):\n # Hidden layer\n h1 = tf.layers.dense(z, n_units, activation=None)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n # Logits and tanh output\n logits = tf.layers.dense(h1, out_dim, activation=None)\n out = tf.tanh(logits)\n \n return out, logits", "Discriminator\nThe discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.\n\nExercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. 
Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.", "def discriminator(x, n_units=128, reuse=False, alpha=0.01):\n ''' Build the discriminator network.\n \n Arguments\n ---------\n x : Input tensor for the discriminator\n n_units: Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: \n '''\n with tf.variable_scope('discriminator', reuse=reuse):\n # Hidden layer\n h1 = tf.layers.dense(x, n_units, activation=None)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n logits = tf.layers.dense(h1, 1, activation=None)\n out = tf.sigmoid(logits)\n \n return out, logits", "Hyperparameters", "# Size of input image to discriminator\ninput_size = 784 # 28x28 MNIST images flattened\n# Size of latent vector to generator\nz_size = 100\n# Sizes of hidden layers in generator and discriminator\ng_hidden_size = 128\nd_hidden_size = 128\n# Leak factor for leaky ReLU\nalpha = 0.01\n# Label smoothing \nsmooth = 0.1", "Build network\nNow we're building the network from the functions defined above.\nFirst is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.\nThen, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.\nThen the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. 
So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).\n\nExercise: Build the network from the functions you defined earlier.", "tf.reset_default_graph()\n# Create our input placeholders\ninput_real, input_z = model_inputs(input_size, z_size)\n\n# Generator network here\ng_model, g_logits = generator(input_z, input_size)\n# g_model is the generator output\n\n# Discriminator network here\nd_model_real, d_logits_real = discriminator(input_real)\nd_model_fake, d_logits_fake = discriminator(g_model, reuse=True)", "Discriminator and Generator Losses\nNow we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like \npython\ntf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\nFor the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)\nThe discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. 
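As a numeric sanity check, the smoothed sigmoid cross-entropy can be reproduced in NumPy; the logit values here are made up for illustration:

```python
import numpy as np

def sigmoid_xent(logits, labels):
    # numerically stable sigmoid cross-entropy:
    # max(x, 0) - x*z + log(1 + exp(-|x|)) for logits x and labels z
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

smooth = 0.1
logits = np.array([2.0, -1.0, 0.5])   # hypothetical discriminator outputs on real images
hard = sigmoid_xent(logits, np.ones_like(logits)).mean()
smoothed = sigmoid_xent(logits, np.ones_like(logits) * (1 - smooth)).mean()
# smoothing keeps confident predictions from driving the loss all the way to zero
```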
Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\nFinally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.\n\nExercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.", "# Calculate losses\nd_loss_real = tf.reduce_mean(\n    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, \n                                            labels=tf.ones_like(d_logits_real) * (1 - smooth)))\nd_loss_fake = tf.reduce_mean(\n    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, \n                                            labels=tf.zeros_like(d_logits_fake)))\n\nd_loss = d_loss_real + d_loss_fake\n\ng_loss = tf.reduce_mean(\n    tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,\n                                            labels=tf.ones_like(d_logits_fake)))", "Optimizers\nWe want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.\nFor the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables that start with generator. 
Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). \nWe can do something similar with the discriminator. All the variables in the discriminator start with discriminator.\nThen, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.\n\nExercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.", "# Optimizers\nlearning_rate = 0.002\n\n# Get the trainable_variables, split into G and D parts\nt_vars = tf.trainable_variables()\ng_vars = [var for var in t_vars if var.name.startswith('generator')]\nd_vars = [var for var in t_vars if var.name.startswith('discriminator')]\n\nd_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)\ng_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)", "Training", "batch_size = 100\nepochs = 100\nsamples = []\nlosses = []\nsaver = tf.train.Saver(var_list = g_vars)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n \n # Get images, reshape and rescale to pass to D\n batch_images = batch[0].reshape((batch_size, 784))\n batch_images = batch_images*2 - 1\n \n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n \n # Run optimizers\n _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})\n 
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})\n \n # At the end of each epoch, get the losses and print them out\n train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})\n train_loss_g = g_loss.eval({input_z: batch_z})\n \n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g)) \n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n \n # Sample from generator as we're training for viewing afterwards\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples, _ = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\n samples.append(gen_samples)\n saver.save(sess, './checkpoints/generator.ckpt')\n\n# Save training generator samples\nwith open('train_samples.pkl', 'wb') as f:\n pkl.dump(samples, f)", "Training loss\nHere we'll check out the training losses for the generator and discriminator.", "%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator')\nplt.plot(losses.T[1], label='Generator')\nplt.title(\"Training Losses\")\nplt.legend()", "Generator samples from training\nHere we can view samples of images from the generator. First we'll look at images taken while training.", "def view_samples(epoch, samples):\n fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n \n return fig, axes\n\n# Load samples from generator taken while training\nwith open('train_samples.pkl', 'rb') as f:\n samples = pkl.load(f)", "These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. 
Since this is just a sample, it isn't representative of the full range of images this generator can make.", "_ = view_samples(-1, samples)", "Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!", "rows, cols = 10, 6\nfig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)\n\nfor sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):\n for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):\n ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)", "It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.\nSampling from the generator\nWe can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!", "saver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples, _ = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\nview_samples(0, [gen_samples])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sainathadapa/fastai-courses
deeplearning1/nbs-custom-mine/lesson3_03_imagenet_batchnorm.ipynb
apache-2.0
[ "This notebook explains how to add batch normalization to VGG. The code shown here is implemented in vgg_bn.py, and there is a version of vgg_ft (our fine tuning function) with batch norm called vgg_ft_bn in utils.py.", "from theano.sandbox import cuda\n\n%matplotlib inline\nimport utils; reload(utils)\nfrom utils import *\nfrom __future__ import print_function, division", "The problem, and the solution\nThe problem\nThe problem that we faced in lesson 3 is that when we wanted to add batch normalization, we initialized all the dense layers of the model to random weights, and then tried to train them with our cats v dogs dataset. But that's a lot of weights to initialize to random - out of 134m params, around 119m are in the dense layers! Take a moment to think about why this is, and convince yourself that dense layers are where most of the weights will be. Also, think about whether this implies that most of the time will be spent training these weights. What do you think?\nTrying to train 120m params using just 23k images is clearly an unreasonable expectation. The reason we haven't had this problem before is that the dense layers were not random, but were trained to recognize imagenet categories (other than the very last layer, which only has 8194 params).\nThe solution\nThe solution, obviously enough, is to add batch normalization to the VGG model! To do so, we have to be careful - we can't just insert batchnorm layers, since their parameters (gamma - which is used to multiply by each activation, and beta - which is used to add to each activation) will not be set correctly. Without setting these correctly, the new batchnorm layers will normalize the previous layer's activations, meaning that the next layer will receive totally different activations to what it would have without the new batchnorm layer. 
And that means that all the pre-trained weights are no longer of any use!\nSo instead, we need to figure out what beta and gamma to choose when we insert the layers. The answer to this turns out to be pretty simple - we need to calculate what the mean and standard deviation of the activations for that layer are when calculated on all of imagenet, and then set beta and gamma to these values. That means that the new batchnorm layer will normalize the data with the mean and standard deviation, and then immediately un-normalize the data using the beta and gamma parameters we provide. So the output of the batchnorm layer will be identical to its input - which means that all the pre-trained weights will continue to work just as well as before.\nThe benefit of this is that when we wish to fine-tune our own networks, we will have all the benefits of batch normalization (higher learning rates, more resilient training, and less need for dropout) plus all the benefits of a pre-trained network.\nTo calculate the mean and standard deviation of the activations on imagenet, we need to download imagenet. You can download imagenet from http://www.image-net.org/download-images . The file you want is the one titled Download links to ILSVRC2013 image data. You'll need to request access from the imagenet admins for this, although it seems to be an automated system - I've always found that access is provided instantly. Once you're logged in and have gone to that page, look for the CLS-LOC dataset section. Both training and validation images are available, and you should download both. There's not much reason to download the test images, however.\nNote that this will not be the entire imagenet archive, but just the 1000 categories that are used in the annual competition. 
Since that's what VGG16 was originally trained on, that seems like a good choice - especially since the full dataset is 1.1 terabytes, whereas the 1000 category dataset is 138 gigabytes.\nAdding batchnorm to Imagenet\nSetup\nSample\nAs per usual, we create a sample so we can experiment more rapidly.", "%pushd data/imagenet\n%cd train\n\n%mkdir ../sample\n%mkdir ../sample/train\n%mkdir ../sample/valid\n\nfrom shutil import copyfile\n\ng = glob('*')\nfor d in g: \n    os.mkdir('../sample/train/'+d)\n    os.mkdir('../sample/valid/'+d)\n\ng = glob('*/*.JPEG')\nshuf = np.random.permutation(g)\nfor i in range(25000): copyfile(shuf[i], '../sample/train/' + shuf[i])\n\n%cd ../valid\n\ng = glob('*/*.JPEG')\nshuf = np.random.permutation(g)\nfor i in range(5000): copyfile(shuf[i], '../sample/valid/' + shuf[i])\n\n%cd ..\n\n%mkdir sample/results\n\n%popd", "Data setup\nWe set up our paths, data, and labels in the usual way. Note that we don't try to read all of Imagenet into memory! We only load the sample into memory.", "sample_path = 'data/jhoward/imagenet/sample/'\n# This is the path to my fast SSD - I put datasets there when I can to get the speed benefit\nfast_path = '/home/jhoward/ILSVRC2012_img_proc/'\n#path = '/data/jhoward/imagenet/sample/'\npath = 'data/jhoward/imagenet/'\n\nbatch_size=64\n\nsamp_trn = get_data(path+'train')\nsamp_val = get_data(path+'valid')\n\nsave_array(sample_path+'results/trn.dat', samp_trn)\nsave_array(sample_path+'results/val.dat', samp_val)\n\nsamp_trn = load_array(sample_path+'results/trn.dat')\nsamp_val = load_array(sample_path+'results/val.dat')\n\n(val_classes, trn_classes, val_labels, trn_labels, \n val_filenames, filenames, test_filenames) = get_classes(path)\n\n(samp_val_classes, samp_trn_classes, samp_val_labels, samp_trn_labels, \n samp_val_filenames, samp_filenames, samp_test_filenames) = get_classes(sample_path)", "Model setup\nSince we're just working with the dense layers, we should pre-compute the output of the convolutional layers.", 
"vgg = Vgg16()\nmodel = vgg.model\n\nlayers = model.layers\nlast_conv_idx = [index for index,layer in enumerate(layers) \n if type(layer) is Convolution2D][-1]\nconv_layers = layers[:last_conv_idx+1]\n\ndense_layers = layers[last_conv_idx+1:]\n\nconv_model = Sequential(conv_layers)\n\nsamp_conv_val_feat = conv_model.predict(samp_val, batch_size=batch_size*2)\nsamp_conv_feat = conv_model.predict(samp_trn, batch_size=batch_size*2)\n\nsave_array(sample_path+'results/conv_val_feat.dat', samp_conv_val_feat)\nsave_array(sample_path+'results/conv_feat.dat', samp_conv_feat)\n\nsamp_conv_feat = load_array(sample_path+'results/conv_feat.dat')\nsamp_conv_val_feat = load_array(sample_path+'results/conv_val_feat.dat')\n\nsamp_conv_val_feat.shape", "This is our usual Vgg network just covering the dense layers:", "def get_dense_layers():\n return [\n MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),\n Flatten(),\n Dense(4096, activation='relu'),\n Dropout(0.5),\n Dense(4096, activation='relu'),\n Dropout(0.5),\n Dense(1000, activation='softmax')\n ]\n\ndense_model = Sequential(get_dense_layers())\n\nfor l1, l2 in zip(dense_layers, dense_model.layers):\n l2.set_weights(l1.get_weights())", "Check model\nIt's a good idea to check that your models are giving reasonable answers, before using them.", "dense_model.compile(Adam(), 'categorical_crossentropy', ['accuracy'])\n\ndense_model.evaluate(samp_conv_val_feat, samp_val_labels)\n\nmodel.compile(Adam(), 'categorical_crossentropy', ['accuracy'])\n\n# should be identical to above\nmodel.evaluate(samp_val, samp_val_labels)\n\n# should be a little better than above, since VGG authors overfit\ndense_model.evaluate(samp_conv_feat, samp_trn_labels)", "Adding our new layers\nCalculating batchnorm params\nTo calculate the output of a layer in a Keras sequential model, we have to create a function that defines the input layer and the output layer, like this:", "k_layer_out = K.function([dense_model.layers[0].input, K.learning_phase()], \n
[dense_model.layers[2].output])", "Then we can call the function to get our layer activations:", "d0_out = k_layer_out([samp_conv_val_feat, 0])[0]\n\nk_layer_out = K.function([dense_model.layers[0].input, K.learning_phase()], \n [dense_model.layers[4].output])\n\nd2_out = k_layer_out([samp_conv_val_feat, 0])[0]", "Now that we've got our activations, we can calculate the mean and standard deviation for each (note that due to a bug in keras, it's actually the variance that we'll need).", "mu0,var0 = d0_out.mean(axis=0), d0_out.var(axis=0)\nmu2,var2 = d2_out.mean(axis=0), d2_out.var(axis=0)", "Creating batchnorm model\nNow we're ready to create and insert our layers just after each dense layer.", "nl1 = BatchNormalization()\nnl2 = BatchNormalization()\n\nbn_model = insert_layer(dense_model, nl2, 5)\nbn_model = insert_layer(bn_model, nl1, 3)\n\nbnl1 = bn_model.layers[3]\nbnl4 = bn_model.layers[6]", "After inserting the layers, we can set their weights to the variance and mean we just calculated.", "bnl1.set_weights([var0, mu0, mu0, var0])\nbnl4.set_weights([var2, mu2, mu2, var2])\n\nbn_model.compile(Adam(1e-5), 'categorical_crossentropy', ['accuracy'])", "We should find that the new model gives identical results to those provided by the original VGG model.", "bn_model.evaluate(samp_conv_val_feat, samp_val_labels)\n\nbn_model.evaluate(samp_conv_feat, samp_trn_labels)", "Optional - additional fine-tuning\nNow that we have a VGG model with batchnorm, we might expect that the optimal weights would be a little different to what they were when originally created without batchnorm. 
So we fine tune the weights for one epoch.", "feat_bc = bcolz.open(fast_path+'trn_features.dat')\n\nlabels = load_array(fast_path+'trn_labels.dat')\n\nval_feat_bc = bcolz.open(fast_path+'val_features.dat')\n\nval_labels = load_array(fast_path+'val_labels.dat')\n\nbn_model.fit(feat_bc, labels, nb_epoch=1, batch_size=batch_size,\n validation_data=(val_feat_bc, val_labels))", "The results look quite encouraging! Note that these VGG weights are now specific to how keras handles image scaling - that is, it squashes and stretches images, rather than adding black borders. So this model is best used on images created in that way.", "bn_model.save_weights(path+'models/bn_model2.h5')\n\nbn_model.load_weights(path+'models/bn_model2.h5')", "Create combined model\nOur last step is simply to copy our new dense layers on to the end of the convolutional part of the network, and save the new complete set of weights, so we can use them in the future when using VGG. (Of course, we'll also need to update our VGG architecture to add the batchnorm layers).", "new_layers = copy_layers(bn_model.layers)\nfor layer in new_layers:\n conv_model.add(layer)\n\ncopy_weights(bn_model.layers, new_layers)\n\nconv_model.compile(Adam(1e-5), 'categorical_crossentropy', ['accuracy'])\n\nconv_model.evaluate(samp_val, samp_val_labels)\n\nconv_model.save_weights(path+'models/inet_224squash_bn.h5')", "The code shown here is implemented in vgg_bn.py, and there is a version of vgg_ft (our fine tuning function) with batch norm called vgg_ft_bn in utils.py." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/en-snapshot/tfx/tutorials/transform/simple.ipynb
apache-2.0
[ "Copyright 2021 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Preprocess data with TensorFlow Transform\nThe Feature Engineering Component of TensorFlow Extended (TFX)\nNote: We recommend running this tutorial in a Colab notebook, with no setup required! Just click \"Run in Google Colab\".\n<div class=\"devsite-table-wrapper\"><table class=\"tfo-notebook-buttons\" align=\"left\">\n<td><a target=\"_blank\" href=\"https://www.tensorflow.org/tfx/tutorials/transform/simple\">\n<img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a></td>\n<td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/transform/simple.ipynb\">\n<img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Run in Google Colab</a></td>\n<td><a target=\"_blank\" href=\"https://github.com/tensorflow/tfx/blob/master/docs/tutorials/transform/simple.ipynb\">\n<img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">View source on GitHub</a></td>\n<td><a target=\"_blank\" href=\"https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/transform/simple.ipynb\">\n<img width=32px src=\"https://www.tensorflow.org/images/download_logo_32px.png\">Download notebook</a></td>\n</table></div>\n\nThis example colab notebook provides a very simple example of how <a target='_blank' 
href='https://www.tensorflow.org/tfx/transform/get_started/'>TensorFlow Transform (<code>tf.Transform</code>)</a> can be used to preprocess data using exactly the same code for both training a model and serving inferences in production.\nTensorFlow Transform is a library for preprocessing input data for TensorFlow, including creating features that require a full pass over the training dataset. For example, using TensorFlow Transform you could:\n\nNormalize an input value by using the mean and standard deviation\nConvert strings to integers by generating a vocabulary over all of the input values\nConvert floats to integers by assigning them to buckets, based on the observed data distribution\n\nTensorFlow has built-in support for manipulations on a single example or a batch of examples. tf.Transform extends these capabilities to support full passes over the entire training dataset.\nThe output of tf.Transform is exported as a TensorFlow graph which you can use for both training and serving. Using the same graph for both training and serving can prevent skew, since the same transformations are applied in both stages.\nUpgrade Pip\nTo avoid upgrading Pip in a system when running locally, check to make sure that we're running in Colab. Local systems can of course be upgraded separately.", "try:\n import colab\n !pip install --upgrade pip\nexcept:\n pass", "Install TensorFlow Transform", "!pip install -q -U tensorflow_transform\n\n# This cell is only necessary because packages were installed while python was\n# running. 
It avoids the need to restart the runtime when running in Colab.\nimport pkg_resources\nimport importlib\n\nimportlib.reload(pkg_resources)", "Imports", "import pathlib\nimport pprint\nimport tempfile\n\nimport tensorflow as tf\nimport tensorflow_transform as tft\n\nimport tensorflow_transform.beam as tft_beam\nfrom tensorflow_transform.tf_metadata import dataset_metadata\nfrom tensorflow_transform.tf_metadata import schema_utils", "Data: Create some dummy data\nWe'll create some simple dummy data for our simple example:\n\nraw_data is the initial raw data that we're going to preprocess\nraw_data_metadata contains the schema that tells us the types of each of the columns in raw_data. In this case, it's very simple.", "raw_data = [\n {'x': 1, 'y': 1, 's': 'hello'},\n {'x': 2, 'y': 2, 's': 'world'},\n {'x': 3, 'y': 3, 's': 'hello'}\n ]\n\nraw_data_metadata = dataset_metadata.DatasetMetadata(\n schema_utils.schema_from_feature_spec({\n 'y': tf.io.FixedLenFeature([], tf.float32),\n 'x': tf.io.FixedLenFeature([], tf.float32),\n 's': tf.io.FixedLenFeature([], tf.string),\n }))", "Transform: Create a preprocessing function\nThe preprocessing function is the most important concept of tf.Transform. A preprocessing function is where the transformation of the dataset really happens. It accepts and returns a dictionary of tensors, where a tensor means a <a target='_blank' href='https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/Tensor'><code>Tensor</code></a> or <a target='_blank' href='https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/SparseTensor'><code>SparseTensor</code></a>. There are two main groups of API calls that typically form the heart of a preprocessing function:\n\nTensorFlow Ops: Any function that accepts and returns tensors, which usually means TensorFlow ops. These add TensorFlow operations to the graph that transforms raw data into transformed data one feature vector at a time. 
These will run for every example, during both training and serving.\nTensorflow Transform Analyzers/Mappers: Any of the analyzers/mappers provided by tf.Transform. These also accept and return tensors, and typically contain a combination of Tensorflow ops and Beam computation, but unlike TensorFlow ops they only run in the Beam pipeline during analysis. The Beam computation runs only once, prior to training, and typically makes a full pass over the entire training dataset. They create tf.constant tensors, which are added to your graph. For example, tft.min computes the minimum of a tensor over the training dataset.\n\nCaution: When you apply your preprocessing function to serving inferences, the constants that were created by analyzers during training do not change. If your data has trend or seasonality components, plan accordingly.\nNote: The preprocessing_fn is not directly callable. This means that\ncalling preprocessing_fn(raw_data) will not work. Instead, it must\nbe passed to the Transform Beam API as shown in the following cells.", "def preprocessing_fn(inputs):\n \"\"\"Preprocess input columns into transformed columns.\"\"\"\n x = inputs['x']\n y = inputs['y']\n s = inputs['s']\n x_centered = x - tft.mean(x)\n y_normalized = tft.scale_to_0_1(y)\n s_integerized = tft.compute_and_apply_vocabulary(s)\n x_centered_times_y_normalized = (x_centered * y_normalized)\n return {\n 'x_centered': x_centered,\n 'y_normalized': y_normalized,\n 's_integerized': s_integerized,\n 'x_centered_times_y_normalized': x_centered_times_y_normalized,\n }", "Syntax\nYou're almost ready to put everything together and use <a target='_blank' href='https://beam.apache.org/'>Apache Beam</a> to run it.\nApache Beam uses a <a target='_blank' href='https://beam.apache.org/documentation/programming-guide/#applying-transforms'>special syntax to define and invoke transforms</a>. 
For example, in this line:\nresult = pass_this | 'name this step' &gt;&gt; to_this_call\nThe method to_this_call is being invoked and passed the object called pass_this, and <a target='_blank' href='https://stackoverflow.com/questions/50519662/what-does-the-redirection-mean-in-apache-beam-python'>this operation will be referred to as name this step in a stack trace</a>. The result of the call to to_this_call is returned in result. You will often see stages of a pipeline chained together like this:\nresult = apache_beam.Pipeline() | 'first step' &gt;&gt; do_this_first() | 'second step' &gt;&gt; do_this_last()\nand since that started with a new pipeline, you can continue like this:\nnext_result = result | 'doing more stuff' &gt;&gt; another_function()\nPutting it all together\nNow we're ready to transform our data. We'll use Apache Beam with a direct runner, and supply three inputs:\n\nraw_data - The raw input data that we created above\nraw_data_metadata - The schema for the raw data\npreprocessing_fn - The function that we created to do our transformation", "def main(output_dir):\n # Ignore the warnings\n with tft_beam.Context(temp_dir=tempfile.mkdtemp()):\n transformed_dataset, transform_fn = ( # pylint: disable=unused-variable\n (raw_data, raw_data_metadata) | tft_beam.AnalyzeAndTransformDataset(\n preprocessing_fn))\n\n transformed_data, transformed_metadata = transformed_dataset # pylint: disable=unused-variable\n\n # Save the transform_fn to the output_dir\n _ = (\n transform_fn\n | 'WriteTransformFn' >> tft_beam.WriteTransformFn(output_dir))\n\n return transformed_data, transformed_metadata\n\noutput_dir = pathlib.Path(tempfile.mkdtemp())\n\ntransformed_data, transformed_metadata = main(str(output_dir))\n\nprint('\\nRaw data:\\n{}\\n'.format(pprint.pformat(raw_data)))\nprint('Transformed data:\\n{}'.format(pprint.pformat(transformed_data)))", "Is this the right answer?\nPreviously, we used tf.Transform to do this:\nx_centered = x - tft.mean(x)\ny_normalized = 
tft.scale_to_0_1(y)\ns_integerized = tft.compute_and_apply_vocabulary(s)\nx_centered_times_y_normalized = (x_centered * y_normalized)\n\nx_centered - With input of [1, 2, 3] the mean of x is 2, and we subtract it from x to center our x values at 0. So our result of [-1.0, 0.0, 1.0] is correct.\ny_normalized - We wanted to scale our y values between 0 and 1. Our input was [1, 2, 3] so our result of [0.0, 0.5, 1.0] is correct.\ns_integerized - We wanted to map our strings to indexes in a vocabulary, and there were only 2 words in our vocabulary (\"hello\" and \"world\"). So with input of [\"hello\", \"world\", \"hello\"] our result of [0, 1, 0] is correct. Since \"hello\" occurs most frequently in this data, it will be the first entry in the vocabulary.\nx_centered_times_y_normalized - We wanted to create a new feature by crossing x_centered and y_normalized using multiplication. Note that this multiplies the results, not the original values, and our new result of [-0.0, 0.0, 1.0] is correct.\n\nUse the resulting transform_fn", "!ls -l {output_dir}", "The transform_fn/ directory contains a tf.saved_model with all the constants from the tensorflow-transform analysis results built into the graph. \nIt is possible to load this directly with tf.saved_model.load, but this is not easy to use:", "loaded = tf.saved_model.load(str(output_dir/'transform_fn'))\nloaded.signatures['serving_default']", "A better approach is to load it using tft.TFTransformOutput. The TFTransformOutput.transform_features_layer method returns a tft.TransformFeaturesLayer object that can be used to apply the transformation:", "tf_transform_output = tft.TFTransformOutput(output_dir)\n\ntft_layer = tf_transform_output.transform_features_layer()\ntft_layer", "This tft.TransformFeaturesLayer expects a dictionary of batched features. 
So create a Dict[str, tf.Tensor] from the List[Dict[str, Any]] in raw_data:", "raw_data_batch = {\n 's': tf.constant([ex['s'] for ex in raw_data]),\n 'x': tf.constant([ex['x'] for ex in raw_data], dtype=tf.float32),\n 'y': tf.constant([ex['y'] for ex in raw_data], dtype=tf.float32),\n}", "You can use the tft.TransformFeaturesLayer on its own:", "transformed_batch = tft_layer(raw_data_batch)\n\n{key: value.numpy() for key, value in transformed_batch.items()}", "Export\nA more typical use case would use tf.Transform to apply the transformation to the training and evaluation datasets (see the next tutorial for an example). Then, after training and before exporting the model, attach the tft.TransformFeaturesLayer as the first layer so that you can export it as part of your tf.saved_model. For a concrete example, keep reading.\nAn example training model\nBelow is a model that:\n\ntakes the transformed batch,\nstacks them all together into a simple (batch, features) matrix,\nruns them through a few dense layers, and\nproduces 10 linear outputs.\n\nIn a real use case you would apply a one-hot encoding to the s_integerized feature.\nYou could train this model on a dataset transformed by tf.Transform:", "class StackDict(tf.keras.layers.Layer):\n def call(self, inputs):\n values = [\n tf.cast(v, tf.float32)\n for k,v in sorted(inputs.items(), key=lambda kv: kv[0])]\n return tf.stack(values, axis=1)\n\nclass TrainedModel(tf.keras.Model):\n def __init__(self):\n super().__init__(self)\n self.concat = StackDict()\n self.body = tf.keras.Sequential([\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.Dense(10),\n ])\n\n def call(self, inputs, training=None):\n x = self.concat(inputs)\n return self.body(x, training)\n\ntrained_model = TrainedModel()", "Imagine we trained the model.\ntrained_model.compile(loss=..., optimizer='adam')\ntrained_model.fit(...)\nThis model runs on the transformed inputs", "trained_model_output = 
trained_model(transformed_batch)\ntrained_model_output.shape", "An example export wrapper\nImagine you've trained the above model and want to export it.\nYou'll want to include the transform function in the exported model:", "class ExportModel(tf.Module):\n def __init__(self, trained_model, input_transform):\n self.trained_model = trained_model\n self.input_transform = input_transform\n\n @tf.function\n def __call__(self, inputs, training=None):\n x = self.input_transform(inputs)\n return self.trained_model(x)\n\nexport_model = ExportModel(trained_model=trained_model,\n input_transform=tft_layer)", "This combined model works on the raw data, and produces exactly the same results as calling the trained model directly:", "export_model_output = export_model(raw_data_batch)\nexport_model_output.shape\n\ntf.reduce_max(abs(export_model_output - trained_model_output)).numpy()", "This export_model includes the tft.TransformFeaturesLayer and is entirely self-contained. You can save it and restore it in another environment and still get exactly the same result:", "import tempfile\nmodel_dir = tempfile.mkdtemp(suffix='tft')\n\ntf.saved_model.save(export_model, model_dir)\n\nreloaded = tf.saved_model.load(model_dir)\n\nreloaded_model_output = reloaded(raw_data_batch)\nreloaded_model_output.shape\n\ntf.reduce_max(abs(export_model_output - reloaded_model_output)).numpy()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mrcinv/moodle-questions
python/example_images.ipynb
gpl-3.0
[ "Exercise sample: images\nChange this sample according to your needs. Run all the cells, and upload the resulting .xml file to Moodle.\nAuxiliary functions", "%pylab inline\nfrom moodle import *\nnum_q(-1.2,0.001), multi_q([(\"12\",50),(\"23\",50),(\"34\",-100)])", "Question parameters\nGenerate the parameters that appear in the questions.", "from scipy.interpolate import interp1d\n\nx0 = sort(hstack((array([0,1]),rand(2)/2+0.25)))\ny0 = sort(hstack((array([0,1]),rand(2)*abs(x0[1]-x0[2])/2+transpose(x0[1:2]))))\nsp = interp1d(x0,y0,kind='cubic')\nf = lambda x: cos(pi/2*x)\nfunctions = [(lambda x: (1-x)**1.5,0,1), (lambda x: (1+x)**0.7,-1,0), (lambda x: cos(pi/2*x),4,5)]\nrandom_points = lambda a,b: [randint(1,9)/10*(b-a) + a for i in range(3)] # 3 random points in [a,b]\nparameters = [fun + tuple(random_points(fun[1],fun[2])) for fun in functions]\nparameters", "Question body\nWrite the function that generates the text of the question. You can use the following syntax to add different inputs to \nquestion string q:\n\nvalue of a variable: q = q + str(x)\nPython expressions: q = q + str(1+2*x)\nanswer input field: q = q + num_q(correct_answer, precision) \n\nNote on embedding images\nImages can be embedded in question text in the form of a BASE64-encoded string via &lt;img src=\"data:image/base64,...\"/&gt; tag. 
To save matplotlib image as an encoded string, one has to use io.BytesIO virtual bytes stream.", "import io\nimport base64\n\ndef question_text(parameter):\n fun, a, b, x0, y0, xi = parameter # parameter contains function, interval boundaries\n clf() # clear the plot\n t = linspace(0,1)*(b-a) + a\n y = fun(t)\n plot(t,y)\n grid()\n xticks(arange(a,b,0.1))\n yticks(arange(0,1,0.1))\n xlim(a,b)\n ylim(min(y),max(y))\n strio = io.BytesIO() # plot figure into a string\n savefig(strio,format=\"png\")\n val = strio.getvalue() # get image string and decode it with base64 \n img = base64.b64encode(val).decode() \n strio.close()\n q = \"\"\"<p>Below is a graph of an unknown function f</p>\n <img src=\"data:image/png;base64,%s\" />\n <p>What are approximate values of the following numbers (round it on 1 decimal place):</p>\n <ul>\n <li>f(%0.2f) %s </li>\n <li>x, such that f(x)=%0.2f %s</li>\n <li>\\\\(f^{-1}(%0.2f)\\\\) %s </li>\n </ul>\"\"\" % (img,x0,num_q(fun(x0),0.05),fun(y0),num_q(y0,0.05),fun(xi),num_q(xi,0.05))\n return q\n\n# display the first question\nfrom IPython.display import HTML\nHTML(question_text(parameters[0]))", "Write to file", "# Write the questions to a file\nname = \"read_from_graph\"\ncategory = 'functions/graph/'\nquestions = []\nfor param in parameters:\n b = question_text(param)\n questions.append(b)\nfile = open(name + \".xml\",\"w\",encoding=\"utf8\")\n# Write to Moodle xml file\nmoodle_xml(name,questions, cloze_question, category = category, iostream = file)\nfile.close()\nprint(\"Questions were saved in \" + name + \".xml, that can be imported into Moodle\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Autodesk/molecular-design-toolkit
moldesign/_notebooks/Tutorial 1. Making a molecule.ipynb
apache-2.0
[ "<span style=\"float:right\"><a href=\"http://moldesign.bionano.autodesk.com/\" target=\"_blank\" title=\"About\">About</a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href=\"https://github.com/autodesk/molecular-design-toolkit/issues\" target=\"_blank\" title=\"Issues\">Issues</a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href=\"http://bionano.autodesk.com/MolecularDesignToolkit/explore.html\" target=\"_blank\" title=\"Tutorials\">Tutorials</a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href=\"http://autodesk.github.io/molecular-design-toolkit/\" target=\"_blank\" title=\"Documentation\">Documentation</a></span>\n</span>\n\n<br>\n<center><h1>Tutorial 1: Making a molecule</h1></center>\nThis notebook gets you started with MDT - you'll build a small molecule, visualize it, and run a basic calculation.\nContents\n\n\n1. Import the toolkit\nA. Optional: Set up your computing backend\n\n\n2. Build it\n3. View it\n4. Simulate it\n5. Minimize it\n6. Write it\n7. Examine it\n\n1. Import the toolkit\nThis cell loads the toolkit and its unit system. To execute a cell, click on it, then press <kbd>shift</kbd> + <kbd>enter</kbd>. (If you're new to the notebook environment, you may want to check out this helpful cheat sheet).", "import moldesign as mdt\nimport moldesign.units as u", "Optional: configuration options\nIf you'd like to set some basic MDT configuration options, you can execute the following cell to create a GUI configuration editor:", "mdt.configure()", "2. Read in a molecular structure\nLet's get started by reading in a molecular structure file.\nWhen you execute this cell, you'll use mdt.read function to parse an XYZ-format file to create an MDT molecule object named, appropriately enough, molecule:", "molecule = mdt.read('data/butane.xyz')", "Jupyter notebooks will automatically print out the value of the last statement in any cell. When you evaluate a Molecule, as in the cell below, you'll get some quick summary data:", "molecule", "3. 
Visualize it\nMDT molecules have three built-in visualization methods - draw, draw2d, and draw3d. Try them out!", "viewer = molecule.draw()\nviewer # we tell Jupyter to draw the viewer by putting it on the last line of the cell", "Try clicking on some of the atoms in the visualization you've just created.\nAfterwards, you can retrieve a list of the Python objects representing the atoms you clicked on:", "print(viewer.selected_atoms)", "4. Simulate it\nSo far, we've created a 3D molecular structure and visualized it right in the notebook.\nIf you sat through VSEPR theory in P. Chem, you might notice this molecule (butane) is looking decidedly non-optimal. Luckily, we can use simulation to predict a better structure.\nWe're specifically going to run a basic type of Quantum Chemistry calculation called \"Hartree-Fock\", which will give us information about the molecule's orbitals and energy.", "molecule.set_energy_model(mdt.models.RHF, basis='sto-3g')\nproperties = molecule.calculate()\n\nprint(properties.keys())\nprint('Energy: ', properties['potential_energy'])\n\nmolecule.draw_orbitals()", "5. Minimize it\nNext, an energy minimization - that is, we're going to move the atoms around in order to find a minimum energy conformation. This is a great way to start cleaning up the messy structure we started with. The calculation might take a second or two ...", "mintraj = molecule.minimize()\n\nmintraj.draw_orbitals()", "6. Write it", "molecule.write('my_first_molecule.xyz')\n\nmintraj.write('my_first_minimization.P.gz')", "7. Play with it\nThere are any number of directions to go from here. See how badly you can distort the geometry:", "mdt.widgets.GeometryBuilder(molecule)\n\nmolecule.calculate_potential_energy()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
aburrell/davitpy
docs/notebook/maps.ipynb
gpl-3.0
[ "Mapping utilities and options\n\nThis notebook illustrates how to map SuperDARN radars and FoVs", "%pylab inline\nfrom davitpy.pydarn.radar import *\nfrom davitpy.pydarn.plotting import *\nfrom davitpy.utils import *\nimport datetime as dt", "Plot all radars in AACGM coordinates\nBe patient, this takes a few seconds (so many radars, not to mention the coordinate calculations)", "figure(figsize=(15,10))\n# Plot map\nsubplot(121)\nm1 = plotUtils.mapObj(boundinglat=30., gridLabels=True, coords='mag')\noverlayRadar(m1, fontSize=8, plot_all=True, markerSize=5)\nsubplot(122)\nm2 = plotUtils.mapObj(boundinglat=-30., gridLabels=True, coords='mag')\noverlayRadar(m2, fontSize=8, plot_all=True, markerSize=5)", "Plot all radars in geographic coordinates\nThis is a bit faster (but there are still lots of radars)", "figure(figsize=(15,10))\n# Plot map\nsubplot(121)\nm1 = plotUtils.mapObj(boundinglat=30., gridLabels=False)\noverlayRadar(m1, fontSize=8, plot_all=True, markerSize=5)\nsubplot(122)\nm2 = plotUtils.mapObj(boundinglat=-30., gridLabels=False)\noverlayRadar(m2, fontSize=8, plot_all=True, markerSize=5)", "Plot a single radar, highlight beams\nStill a bit slow due to aacgm coordinates", "# Set map\nfigure(figsize=(10,10))\nwidth = 111e3*40\nm = plotUtils.mapObj(width=width, height=width, lat_0=60., lon_0=-30, coords='mag')\ncode = 'bks'\n# Plotting some radars\noverlayRadar(m, fontSize=12, codes=code)\n# Plot radar fov\noverlayFov(m, codes=code, maxGate=75, beams=[0,4,7,8,23])", "Plot a nice view of the mid-latitude radars", "# Set map\nfig = figure(figsize=(10,10))\nm = plotUtils.mapObj(lat_0=70., lon_0=-60, width=111e3*120, height=111e3*55, coords='mag')\ncodes = ['wal','fhe','fhw','cve','cvw','hok','ade','adw','bks']\n# Plotting some radars\noverlayRadar(m, fontSize=12, codes=codes)\n# Plot radar fov\noverlayFov(m, codes=codes[:-1], maxGate=70)#, fovColor=(.8,.9,.9))\noverlayFov(m, codes=codes[-1], maxGate=70, fovColor=(.8,.7,.8), 
fovAlpha=.5)\nfig.tight_layout(pad=2)\nrcParams.update({'font.size': 12})", "Plot the RBSP mode\nThis is sloooooooow...", "# Set map\nfigure(figsize=(8,8))\nlon_0 = -70.\nm = plotUtils.mapObj(boundinglat=35., lon_0=lon_0)\n\n# Go through each radar\ncodes = ['gbr','kap','sas','pgr', \\\n 'kod','sto','pyk','han', \\\n 'ksr','cve','cvw','wal', \\\n 'bks','hok','fhw','fhe', \\\n 'inv','rkn']\nbeams = [[3,4,6],[10,11,13],[2,3,5],[12,13,15], \\\n [2,3,5],[12,13,15],[0,1,3],[5,6,8], \\\n [12,13,15],[0,1,3],[19,20,22],[0,1,3], \\\n [12,13,15],[0,1,3],[18,19,21],[0,1,3],\\\n [6,7,9],[6,7,9]]\nfor i,rad in enumerate(codes):\n # Plot radar\n overlayRadar(m, fontSize=12, codes=rad)\n # Plot radar fov\n overlayFov(m, codes=rad, maxGate=75, beams=beams[i])\n#savefig('rbsp_beams.pdf')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/starthinker
colabs/dataset.ipynb
apache-2.0
[ "BigQuery Dataset\nCreate and permission a dataset in BigQuery.\nLicense\nCopyright 2020 Google LLC,\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nDisclaimer\nThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.\nThis code was generated (see starthinker/scripts for possible source):\n - Command: \"python starthinker_ui/manage.py colab\"\n - Command: \"python starthinker/tools/colab.py [JSON RECIPE]\"\n1. Install Dependencies\nFirst install the libraries needed to execute recipes; this only needs to be done once, then click play.", "!pip install git+https://github.com/google/starthinker\n", "2. Set Configuration\nThis code is required to initialize the project. 
Fill in required fields and press play.\n\nIf the recipe uses a Google Cloud Project:\n\nSet the configuration project value to the project identifier from these instructions.\n\n\nIf the recipe has auth set to user:\n\nIf you have user credentials:\nSet the configuration user value to your user credentials JSON.\n\n\n\nIf you DO NOT have user credentials:\n\nSet the configuration client value to downloaded client credentials.\n\n\n\nIf the recipe has auth set to service:\n\nSet the configuration service value to downloaded service credentials.", "from starthinker.util.configuration import Configuration\n\n\nCONFIG = Configuration(\n project=\"\",\n client={},\n service={},\n user=\"/content/user.json\",\n verbose=True\n)\n\n", "3. Enter BigQuery Dataset Recipe Parameters\n\nSpecify the name of the dataset.\nIf the dataset exists, it is unchanged.\nAdd emails and / or groups to add read permission.\nCAUTION: Removing permissions in StarThinker has no effect.\nCAUTION: To remove permissions you have to edit the dataset.\nModify the values below for your use case; this can be done multiple times, then click play.", "FIELDS = {\n 'auth_write':'service', # Credentials used for writing data.\n 'dataset_dataset':'', # Name of Google BigQuery dataset to create.\n 'dataset_emails':[], # Comma separated emails.\n 'dataset_groups':[], # Comma separated groups.\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n", "4. 
Execute BigQuery Dataset\nThis does NOT need to be modified unless you are changing the recipe, click play.", "from starthinker.util.configuration import execute\nfrom starthinker.util.recipe import json_set_fields\n\nTASKS = [\n {\n 'dataset':{\n 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},\n 'dataset':{'field':{'name':'dataset_dataset','kind':'string','order':1,'default':'','description':'Name of Google BigQuery dataset to create.'}},\n 'emails':{'field':{'name':'dataset_emails','kind':'string_list','order':2,'default':[],'description':'Comma separated emails.'}},\n 'groups':{'field':{'name':'dataset_groups','kind':'string_list','order':3,'default':[],'description':'Comma separated groups.'}}\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\n\nexecute(CONFIG, TASKS, force=True)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cpcloud/ibis
docs/user_guide/extending/elementwise.ipynb
apache-2.0
[ "Adding an Elementwise Operation\nThis notebook will show you how to add a new elementwise operation to an existing backend.\nWe are going to add julianday, a function supported by the SQLite database, to the SQLite Ibis backend.\nThe Julian day of a date is the number of days since January 1st, 4713 BC. For more information check the Julian day wikipedia page.\nStep 1: Define the Operation\nLet's define the julianday operation as a function that takes one string input argument and returns a float.\npython\ndef julianday(date: str) -&gt; float:\n \"\"\"Julian date\"\"\"", "import ibis.expr.datatypes as dt\nimport ibis.expr.rules as rlz\nfrom ibis.expr.operations import ValueOp\n\n\nclass JulianDay(ValueOp):\n arg = rlz.string\n\n output_dtype = dt.float32\n output_shape = rlz.shape_like('arg')", "We just defined a JulianDay class that takes one argument of type string, and returns a float.\nStep 2: Define the API\nBecause we know the output type of the operation, to make an expression out of JulianDay we simply need to construct it and call its ibis.expr.types.Node.to_expr method.\nWe still need to add a method to StringValue (this needs to work on both scalars and columns).\nWhen you add a method to any of the expression classes whose name matches *Value both the scalar and column child classes will pick it up, making it easy to define operations for both scalars and columns in one place.\nWe can do this by defining a function and assigning it to the appropriate class\nof expressions.", "from ibis.expr.types import StringValue\n\n\ndef julianday(string_value):\n return JulianDay(string_value).to_expr()\n\n\nStringValue.julianday = julianday", "Interlude: Create some expressions with julianday", "import ibis\n\nt = ibis.table([('string_col', 'string')], name='t')\n\nt.string_col.julianday()", "Step 3: Turn the Expression into SQL", "import sqlalchemy as sa\n\n\n@ibis.sqlite.add_operation(JulianDay)\ndef _julianday(translator, 
expr):\n # pull out the arguments to the expression\n (arg,) = expr.op().args\n\n # compile the argument\n compiled_arg = translator.translate(arg)\n\n # return a SQLAlchemy expression that calls into the SQLite julianday function\n return sa.func.julianday(compiled_arg)", "Step 4: Putting it all Together", "!curl -LsS -o $TEMPDIR/geography.db 'https://storage.googleapis.com/ibis-tutorial-data/geography.db'\n\nimport os\nimport tempfile\n\nimport ibis\n\ndb_fname = os.path.join(tempfile.gettempdir(), 'geography.db')\n\ncon = ibis.sqlite.connect(db_fname)", "Create and execute a julianday expression", "independence = con.table('independence')\nindependence\n\nday = independence.independence_date.cast('string')\nday\n\njulianday_expr = day.julianday().name(\"jday\")\njulianday_expr\n\nsql_expr = julianday_expr.compile()\nprint(sql_expr)\n\nresult = julianday_expr.execute()\nresult.head()", "Because we've defined our operation on StringValue, and not just on StringColumn we get operations on both string scalars and string columns for free", "scalar = ibis.literal('2010-03-14')\nscalar\n\njulianday_scalar = scalar.julianday()\n\ncon.execute(julianday_scalar)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
lmoresi/UoM-VIEPS-Intro-to-Python
Notebooks/Mapping/1 - Introducting Cartopy.ipynb
mit
[ "Cartopy\nIs a mapping and imaging package originating from the Met. Office in the UK. The home page for the package is http://scitools.org.uk/cartopy/. Like many python packages, the documentation is patchy and the best way to learn is to try to do things and ask other people who have figured out this and that. \nWe are going to work through a number of the examples and try to extend them to do the kinds of things you might find interesting and useful in the future. The examples are in the form of a gallery\nYou might also want to look at the list of map projections from time to time. Not all maps can be plotted in every projection (sometimes because of bugs and sometimes because they are not supposed to work for the data you have) but you can try them and see what happens.\nCartopy is built on top of a lot of the matplotlib graphing tools. It works by introducing a series of projections associated with the axes of a graph. On top of that there is a big toolkit for reading in images, finding data from standard web feeds, and manipulating geographical objects. Many, many libraries are involved and sometimes things break. Luckily the installation that is built for this course is about as reliable as we can ever get. I'm just warning you, though, that it can be quite tough if you want to put this on your laptop from scratch.\nLet's get started\nWe have a number of imports that we will need almost every time. 
\nIf we are going to plot anything then we need to include matplotlib.", "%pylab inline\n\nimport matplotlib.pyplot as plt\n\nimport cartopy\nimport cartopy.crs as ccrs\n\nax = plt.axes(projection=ccrs.PlateCarree())\nax.stock_img()\nax.coastlines()\n", "The simplest plot: global map using the default image built into the package and adding coastlines", "\nfig = plt.figure(figsize=(12, 12), facecolor=\"none\")\nax = plt.axes(projection=ccrs.Mercator())\n\n # make the map global rather than have it zoom in to\n # the extents of any plotted data\n \nax.set_global()\nax.coastlines() \nax.stock_img()\n\n", "Try changing the projection - either look at the list in the link I gave you above or use the tab-completion feature of IPython to see what ccrs has available ( not everything will be a projection, but you can see what works and what breaks ).\nHere is how you can plot a region instead of the globe:", "\nfig = plt.figure(figsize=(12, 12), facecolor=\"none\")\nax = plt.axes(projection=ccrs.Robinson()) \nax.set_extent([0, 40, 28, 48])\n\nax.coastlines(resolution='50m') \nax.stock_img()\n\n\nhelp(ax.stock_img)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
phanrahan/magmathon
notebooks/signal-generator/solutions/Triangle.ipynb
mit
[ "Triangle Signal Generator\nA triangle wave is a periodic waveform that linearly ramps between two values.", "import magma as m\nm.set_mantle_target('ice40')\n\nimport mantle\n\ndef DefineTriangle(n):\n T = m.Bits(n)\n class _Triangle(m.Circuit):\n name = f'Triangle{n}'\n IO = ['I', m.In(T), 'O', m.Out(T)]\n \n @classmethod\n def definition(io):\n invert = mantle.Invert(n)\n mux = mantle.Mux(2, n)\n m.wire( mux( io.I, invert(io.I), io.I[n-1] ), io.O )\n return _Triangle\n\ndef Triangle(n):\n return DefineTriangle(n)()\n\nfrom loam.boards.icestick import IceStick\n\nN = 8\n\nicestick = IceStick()\nicestick.Clock.on()\nfor i in range(N):\n icestick.J3[i].output().on()\n\nmain = icestick.main() \ncounter = mantle.Counter(32)\nsawtooth = counter.O[8:8+N]\ntri = Triangle(N)\nm.wire( tri(sawtooth), main.J3 )\nm.EndDefine()\n\nm.compile('build/triangle', main)\n\n%%bash\ncd build\ncat triangle.pcf\nyosys -q -p 'synth_ice40 -top main -blif triangle.blif' triangle.v\narachne-pnr -q -d 1k -o triangle.txt -p triangle.pcf triangle.blif \nicepack triangle.txt triangle.bin\niceprog triangle.bin", "We can wire up the GPIO pins to a logic analyzer to verify that our circuit produces the correct triangle waveform.\n\nWe can also use Saleae's export data feature to output a csv file. We'll load this data into Python and plot the results.", "import csv\nimport magma as m\nwith open(\"data/triangle-capture.csv\") as triangle_capture_csv:\n csv_reader = csv.reader(triangle_capture_csv)\n next(csv_reader, None) # skip the headers\n rows = [row for row in csv_reader]\ntimestamps = [float(row[0]) for row in rows]\nvalues = [m.bitutils.seq2int(tuple(int(x) for x in row[1:])) for row in rows]", "TODO: Why do we have this little bit of jitter? Logic analyzer is running at 25 MS/s, 3.3+ Volts for 1s", "import matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(timestamps[:1000], values[:1000], \"-\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
pastas/pastas
concepts/armamodel.ipynb
mit
[ "ARMA(1,1) Noise Model for Pastas\nR.A. Collenteur, University of Graz, May 2020\nIn this notebook an Autoregressive-Moving-Average (ARMA(1,1)) noise model is developed for Pastas models. This new noise model is tested on synthetic head time series with a regular time step, with noise generated using Numpy or Statsmodels' ARMA model.\n<div class=\"alert alert-info\">\n\n<b>Warning</b>\n\nIt should be noted that the time step may be non-equidistant in this formulation, but this model is not yet tested for irregular time steps.\n\n</div>", "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom scipy.special import gammainc, gammaincinv\n\nimport pastas as ps\n\nps.set_log_level(\"ERROR\")\nps.show_versions(numba=True)", "1. Develop the ARMA(1,1) Noise Model for Pastas\nThe following formula is used to calculate the noise according to the ARMA(1,1) process:\n$$\upsilon(t_i) = r(t_i) - r(t_{i-1}) \text{e}^{-\Delta t_i / \alpha} - \upsilon(t_{i-1}) \text{e}^{-\Delta t_i / \beta}$$\nwhere $\upsilon$ is the noise, $\Delta t_i$ the time step between the residuals ($r$), and respectively $\alpha$ [days] and $\beta$ [days] the parameters of the AR and MA parts of the model. The model is named ArmaModel and can be found in noisemodel.py. It is added to a Pastas model as follows: ml.add_noisemodel(ps.ArmaModel())\n2. 
Generate synthetic head time series", "# Read in some data\nrain = ps.read.read_knmi('../examples/data/etmgeg_260.txt', variables='RH').series\nevap = ps.read.read_knmi('../examples/data/etmgeg_260.txt', variables='EV24').series\n\n# Set the True parameters\nAtrue = 800\nntrue = 1.1\natrue = 200\ndtrue = 20\n\n# Generate the head\nstep = ps.Gamma().block([Atrue, ntrue, atrue])\nh = dtrue * np.ones(len(rain) + step.size)\nfor i in range(len(rain)):\n h[i:i + step.size] += rain[i] * step\nhead = pd.DataFrame(index=rain.index, data=h[:len(rain)],)\nhead = head['1990':'2015']\n\n# Plot the head without noise\nplt.figure(figsize=(10,2))\nplt.plot(head,'k.', label='head')\nplt.legend(loc=0)\nplt.ylabel('Head (m)')\nplt.xlabel('Time (years)');", "3. Generate ARMA(1,1) noise and add it to the synthetic heads\nIn the following code-block, noise is generated using an ARMA(1,1) process using Numpy. An alternative procedure is available from Statsmodels (commented out now). More information about the ARMA model can be found on the statsmodels website. The noise is added to the head series generated in the previous code-block.", "# reproduction of random numbers\nnp.random.seed(1234)\nalpha= 0.95\nbeta = 0.1\n\n# generate samples using Statsmodels\n# import statsmodels.api as stats\n# ar = np.array([1, -alpha])\n# ma = np.r_[1, beta]\n# arma = stats.tsa.ArmaProcess(ar, ma)\n# noise = arma.generate_sample(head[0].index.size)*np.std(head.values) * 0.1\n\n# generate samples using Numpy\nrandom_seed = np.random.RandomState(1234)\n\nnoise = random_seed.normal(0,1,len(head)) * np.std(head.values) * 0.1\na = np.zeros_like(head[0])\n\nfor i in range(1, noise.size):\n a[i] = noise[i] + noise[i - 1] * beta + a[i - 1] * alpha\n\nhead_noise = head[0] + a\n\nplt.plot(a)\nplt.plot(noise)", "4. 
Create and solve a Pastas Model", "ml = ps.Model(head_noise)\nsm = ps.StressModel(rain, ps.Gamma, name='recharge', settings='prec')\nml.add_stressmodel(sm)\nml.add_noisemodel(ps.ArmaModel())\n\nml.solve(tmin=\"1991\", tmax='2015-06-29', noise=True, report=True)\naxes = ml.plots.results(figsize=(10,5));\naxes[-2].plot(ps.Gamma().step([Atrue, ntrue, atrue]))", "5. Did we find back the original ARMA parameters?", "print(np.exp(-1./ml.parameters.loc[\"noise_alpha\", \"optimal\"]).round(2), \"vs\", alpha)\nprint(np.exp(-1./np.abs(ml.parameters.loc[\"noise_beta\", \"optimal\"])).round(2), \"vs.\", beta)", "The estimated parameters for the noise model are almost the same as the true parameters, showing that the model works for regular time steps.\n6. So is the autocorrelation removed correctly?", "ml.plots.diagnostics(figsize=(10,4));", "That seems okay. It is important to understand that this noisemodel will only help in removing autocorrelations at the first time lag, but not at larger time lags, compared to its AR(1) counterpart. \n7. What happens if we use an AR(1) model?", "ml = ps.Model(head_noise)\nsm = ps.StressModel(rain, ps.Gamma, name='recharge', settings='prec')\nml.add_stressmodel(sm)\n\nml.solve(tmin=\"1991\", tmax='2015-06-29', noise=True, report=False)\naxes = ml.plots.results(figsize=(10,5));\naxes[-2].plot(ps.Gamma().step([Atrue, ntrue, atrue]))\n\nprint(np.exp(-1./ml.parameters.loc[\"noise_alpha\", \"optimal\"]).round(2), \"vs\", alpha)\n\nml.plots.diagnostics(figsize=(10,4));", "Significant autocorrelation is still present at lag 1 and the parameter of the AR(1) is overestimated, trying to correct for the lack of an MA(1) part. This is to be expected, as the MA(1) process generates a strong autocorrelation at this first time lag. 
The negative autocorrelation in the first few time steps is a result of the overestimation of the AR(1) parameter.\nA possible effect of failing to remove the autocorrelation at lag 1 may be that the parameter standard errors are under- or overestimated. Although that does not seem the case for this synthetic, real life examples may suffer from this.\n8. Test the ArmaModel for irregular time steps\nIn this final step the ArmaModel is tested for irregular timesteps, using the indices from a real groundwater level time series. It is clear from the example below that the ArmaModel does not yet work for irregular time steps, as (unlike the AR(1) model in Pastas) no weights are applied yet.", "index = pd.read_csv(\"../examples/data/test_index.csv\", parse_dates=True, \n index_col=0).index.round(\"D\").drop_duplicates()\nhead_irregular = head_noise.reindex(index)\n\nml = ps.Model(head_irregular)\nsm = ps.StressModel(rain, ps.Gamma, name='recharge', settings='prec')\nml.add_stressmodel(sm)\nml.add_noisemodel(ps.ArmaModel())\n\nml.solve(tmin=\"1991\", tmax='2015-06-29', noise=True, report=False)\naxes = ml.plots.results(figsize=(10,5));\naxes[-2].plot(ps.Gamma().step([Atrue, ntrue, atrue]))\n\nprint(np.exp(-1./ml.parameters.loc[\"noise_alpha\", \"optimal\"]).round(2), \"vs\", alpha)\nprint(np.exp(-1./ml.parameters.loc[\"noise_beta\", \"optimal\"]).round(2), \"vs.\", beta)\n\nml.plots.diagnostics(figsize=(10,4));" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.14/_downloads/plot_cluster_stats_evoked.ipynb
bsd-3-clause
[ "%matplotlib inline", "Permutation F-test on sensor data with 1D cluster level\nOne tests if the evoked response is significantly different\nbetween conditions. Multiple comparison problem is addressed\nwith cluster level permutation test.", "# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import io\nfrom mne.stats import permutation_cluster_test\nfrom mne.datasets import sample\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id = 1\ntmin = -0.2\ntmax = 0.5\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\nchannel = 'MEG 1332' # include only this channel in analysis\ninclude = [channel]", "Read epochs for the channel of interest", "picks = mne.pick_types(raw.info, meg=False, eog=True, include=include,\n exclude='bads')\nevent_id = 1\nreject = dict(grad=4000e-13, eog=150e-6)\nepochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=reject)\ncondition1 = epochs1.get_data() # as 3D matrix\n\nevent_id = 2\nepochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=reject)\ncondition2 = epochs2.get_data() # as 3D matrix\n\ncondition1 = condition1[:, 0, :] # take only one channel to get a 2D array\ncondition2 = condition2[:, 0, :] # take only one channel to get a 2D array", "Compute statistic", "threshold = 6.0\nT_obs, clusters, cluster_p_values, H0 = \\\n permutation_cluster_test([condition1, condition2], n_permutations=1000,\n threshold=threshold, tail=1, n_jobs=1)", "Plot", "times = epochs1.times\nplt.close('all')\nplt.subplot(211)\nplt.title('Channel : ' + channel)\nplt.plot(times, condition1.mean(axis=0) - condition2.mean(axis=0),\n 
label=\"ERF Contrast (Event 1 - Event 2)\")\nplt.ylabel(\"MEG (T / m)\")\nplt.legend()\nplt.subplot(212)\nfor i_c, c in enumerate(clusters):\n c = c[0]\n if cluster_p_values[i_c] <= 0.05:\n h = plt.axvspan(times[c.start], times[c.stop - 1],\n color='r', alpha=0.3)\n else:\n plt.axvspan(times[c.start], times[c.stop - 1], color=(0.3, 0.3, 0.3),\n alpha=0.3)\nhf = plt.plot(times, T_obs, 'g')\nplt.legend((h, ), ('cluster p-value < 0.05', ))\nplt.xlabel(\"time (ms)\")\nplt.ylabel(\"f-values\")\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yiiinghuang/dmc
notebooks/week-3/01-basic ann.ipynb
apache-2.0
[ "Lab 3 - Basic Artificial Neural Network\nHere we will build a very rudimentary Artificial Neural Network (ANN) and use it to solve some basic classification problems. This example is implemented using only basic math and linear algebra functions, which will allow us to study how each aspect of the network works, and to gain an intuitive understanding of its functions. In future labs we will use pre-built libraries such as Keras which automate and optimize much of these functions, making the network much faster and easier to use.\nThe code and MNIST test data is taken directly from http://neuralnetworksanddeeplearning.com/ by Michael Nielsen. Please review the first chapter of the book for a thorough explanation of the code.", "import random\nimport numpy as np\n\nclass Network(object):\n \n def __init__(self, sizes):\n \n \"\"\"The list ``sizes`` contains the number of neurons in the\n respective layers of the network. For example, if the list\n was [2, 3, 1] then it would be a three-layer network, with the\n first layer containing 2 neurons, the second layer 3 neurons,\n and the third layer 1 neuron. The biases and weights for the\n network are initialized randomly, using a Gaussian\n distribution with mean 0, and variance 1. 
Note that the first\n layer is assumed to be an input layer, and by convention we\n won't set any biases for those neurons, since biases are only\n ever used in computing the outputs from later layers.\"\"\"\n \n self.num_layers = len(sizes)\n self.sizes = sizes\n self.biases = [np.random.randn(y, 1) for y in sizes[1:]]\n self.weights = [np.random.randn(y, x)\n for x, y in zip(sizes[:-1], sizes[1:])]\n \n def feedforward (self, a):\n \n #Return the output of the network if \"a\" is input.\n \n for b, w in zip(self.biases, self.weights):\n a = sigmoid(np.dot(w, a)+b)\n return a\n \n def SGD(self, training_data, epochs, mini_batch_size, eta,\n test_data=None):\n \n \"\"\"Train the neural network using mini-batch stochastic\n gradient descent. The \"training_data\" is a list of tuples\n \"(x, y)\" representing the training inputs and the desired\n outputs. The other non-optional parameters are\n self-explanatory. If \"test_data\" is provided then the\n network will be evaluated against the test data after each\n epoch, and partial progress printed out. 
This is useful for\n tracking progress, but slows things down substantially.\"\"\"\n\n if test_data: n_test = len(test_data)\n n = len(training_data)\n for j in xrange(epochs):\n random.shuffle(training_data)\n mini_batches = [\n training_data[k:k+mini_batch_size]\n for k in xrange(0, n, mini_batch_size)]\n for mini_batch in mini_batches:\n self.update_mini_batch(mini_batch, eta)\n \n if test_data:\n print \"Epoch {0}: {1} / {2}\".format(\n j, self.evaluate(test_data), n_test)\n else:\n print \"Epoch {0} complete\".format(j)\n \n def update_mini_batch(self, mini_batch, eta):\n \n \"\"\"Update the network's weights and biases by applying\n gradient descent using backpropagation to a single mini batch.\n The \"mini_batch\" is a list of tuples \"(x, y)\", and \"eta\"\n is the learning rate.\"\"\"\n\n nabla_b = [np.zeros(b.shape) for b in self.biases]\n nabla_w = [np.zeros(w.shape) for w in self.weights]\n for x, y in mini_batch:\n delta_nabla_b, delta_nabla_w = self.backprop(x, y)\n nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]\n nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]\n self.weights = [w-(eta/len(mini_batch))*nw \n for w, nw in zip(self.weights, nabla_w)]\n self.biases = [b-(eta/len(mini_batch))*nb \n for b, nb in zip(self.biases, nabla_b)]\n \n def backprop(self, x, y):\n \n \"\"\"Return a tuple ``(nabla_b, nabla_w)`` representing the\n gradient for the cost function C_x. 
``nabla_b`` and\n ``nabla_w`` are layer-by-layer lists of numpy arrays, similar\n to ``self.biases`` and ``self.weights``.\"\"\"\n \n nabla_b = [np.zeros(b.shape) for b in self.biases]\n nabla_w = [np.zeros(w.shape) for w in self.weights]\n \n # feedforward\n activation = x\n activations = [x] # list to store all the activations, layer by layer\n zs = [] # list to store all the z vectors, layer by layer\n for b, w in zip(self.biases, self.weights):\n z = np.dot(w, activation)+b\n zs.append(z)\n activation = sigmoid(z)\n activations.append(activation)\n # backward pass\n delta = self.cost_derivative(activations[-1], y) * \\\n sigmoid_prime(zs[-1])\n nabla_b[-1] = delta\n nabla_w[-1] = np.dot(delta, activations[-2].transpose())\n \n \"\"\"Note that the variable l in the loop below is used a little\n differently to the notation in Chapter 2 of the book. Here,\n l = 1 means the last layer of neurons, l = 2 is the\n second-last layer, and so on. It's a renumbering of the\n scheme in the book, used here to take advantage of the fact\n that Python can use negative indices in lists.\"\"\"\n \n for l in xrange(2, self.num_layers):\n z = zs[-l]\n sp = sigmoid_prime(z)\n delta = np.dot(self.weights[-l+1].transpose(), delta) * sp\n nabla_b[-l] = delta\n nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())\n return (nabla_b, nabla_w)\n\n def evaluate(self, test_data):\n \n \"\"\"Return the number of test inputs for which the neural\n network outputs the correct result. 
Note that the neural\n network's output is assumed to be the index of whichever\n neuron in the final layer has the highest activation.\"\"\"\n \n test_results = [(np.argmax(self.feedforward(x)), y)\n for (x, y) in test_data]\n return sum(int(x == y) for (x, y) in test_results)\n\n def cost_derivative(self, output_activations, y):\n \"\"\"Return the vector of partial derivatives \\partial C_x /\n \\partial a for the output activations.\"\"\"\n return (output_activations-y)\n\ndef sigmoid(z):\n# The sigmoid function.\n return 1.0/(1.0 + np.exp(-z))\n\ndef sigmoid_prime(z):\n# Derivative of the sigmoid function.\n return sigmoid(z)*(1-sigmoid(z))", "Iris dataset example\nNow we will test our basic artificial neural network on a very simple classification problem. First we will use the seaborn data visualization library to load the 'iris' dataset, \nwhich consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor), with four features measuring the length and the width of each flower's sepals and petals. After we load the data we will visualize it using some functions in seaborn.", "%matplotlib inline\nimport seaborn as sns; sns.set(style=\"ticks\", color_codes=True)\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.utils import shuffle\n\niris_data = sns.load_dataset(\"iris\")\n\n# randomly shuffle data\niris_data = shuffle(iris_data)\n\n# print first 5 data points\nprint iris_data[:5]\n\n# create pairplot of iris data\ng = sns.pairplot(iris_data, hue=\"species\")", "Next, we will prepare the data set for training in our ANN. In order to work with our functions, the data needs to be converted to numpy format, split into feature and target sets, and then recombined as separate lists within a single dataset. Finally, we split the data set into training and testing sets, and convert the targets of the training set to 'one-hot' encoding (OHE). 
OHE takes each piece of categorical data and converts it to a list of binary values whose length is equal to the number of categories, with the position of the current category denoted with a '1' and '0' for all others. For example, in our dataset we have 3 possible categories: versicolor, virginica, and setosa. After applying OHE, versicolor becomes [1,0,0], virginica becomes [0,1,0], and setosa becomes [0,0,1]. OHE is a standard format for target data as it allows easy application of the cost function during training.", "# convert iris data to numpy format\niris_array = iris_data.as_matrix()\n\n# split data into feature and target arrays\nX = iris_array[:, :4].astype(float)\ny = iris_array[:, -1]\n\n_, y = np.unique(y, return_inverse=True)\ny = y.reshape(-1,1)\n\n# create one-hot encoding function\nenc = OneHotEncoder()\nenc.fit(y)\n\n# combine feature and target data\ndata = []\nfor i in range(X.shape[0]):\n data.append(tuple([X[i].reshape(-1,1), y[i][0]]))\n\n# split data into training and test sets\ntrainingSplit = int(.8 * len(data))\ntraining_data = data[:trainingSplit]\ntest_data = data[trainingSplit:]\n\n# convert training targets to one-hot encoding\ntraining_data = [[_x, enc.transform(_y.reshape(-1,1)).toarray().reshape(-1,1)] for _x, _y in training_data]\n\nnet = Network([4, 25, 3])\nnet.SGD(training_data, 21, 10, .1, test_data=test_data)", "MNIST dataset example\nNext, we will test our ANN on another, slightly more difficult classification problem. The data set we'll be using is called MNIST, which contains tens of thousands of scanned images of handwritten digits, classified according to the digit type from 0-9. The name MNIST comes from the fact that it is a Modified (M) version of a dataset originally developed by the United States' National Institute of Standards and Technology (NIST). This is a very popular dataset used to measure the effectiveness of Machine Learning models for image recognition. 
This time we don't have to do as much data management since the data is already provided in the right format here. \nWe will get into more details about working with images and proper data formats for image data in later labs, but you can already use this data to test the effectiveness of our network. With the default settings you should be able to get a classification accuracy of 95% in the test set.", "import mnist_loader\n\ntraining_data, validation_data, test_data = mnist_loader.load_data_wrapper()", "We can use the matplotlib library to visualize one of the training images. In the data set, the pixel values of each 28x28 pixel image are encoded in a straight list of 784 numbers, so before we visualize it we have to use numpy's reshape function to convert it back to a 28x28 matrix.", "%matplotlib inline\nimport matplotlib.pylab as plt\n\nimg = training_data[0][0][:,0].reshape((28,28))\n\nfig = plt.figure()\nplt.imshow(img, interpolation='nearest', vmin = 0, vmax = 1, cmap=plt.cm.gray)\nplt.axis('off')\nplt.show()\n\nnet = Network([784, 30, 10])\nnet.SGD(training_data, 30, 10, 3.0, test_data=test_data)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.12/_downloads/plot_artifacts_correction_ssp.ipynb
bsd-3-clause
[ "%matplotlib inline", ".. _tut_artifacts_correct_ssp:\nArtifact Correction with SSP", "import numpy as np\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.preprocessing import compute_proj_ecg, compute_proj_eog\n\n# getting some data ready\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\n\nraw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.pick_types(meg=True, ecg=True, eog=True, stim=True)", "Compute SSP projections", "projs, events = compute_proj_ecg(raw, n_grad=1, n_mag=1, average=True)\nprint(projs)\n\necg_projs = projs[-2:]\nmne.viz.plot_projs_topomap(ecg_projs)\n\n# Now for EOG\n\nprojs, events = compute_proj_eog(raw, n_grad=1, n_mag=1, average=True)\nprint(projs)\n\neog_projs = projs[-2:]\nmne.viz.plot_projs_topomap(eog_projs)", "Apply SSP projections\nMNE handles projections at the level of the info,\nso to register them, populate the list that you find in the 'proj' field", "raw.info['projs'] += eog_projs + ecg_projs", "Yes this was it. Now MNE will apply the projs on demand at any later stage,\nso watch out for proj parameters in functions, or apply them explicitly\nwith the .apply_proj method.\nDemonstrate SSP cleaning on some evoked data", "events = mne.find_events(raw, stim_channel='STI 014')\nreject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)\n# this can be highly data dependent\nevent_id = {'auditory/left': 1}\n\nepochs_no_proj = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,\n proj=False, baseline=(None, 0), reject=reject)\nepochs_no_proj.average().plot(spatial_colors=True)\n\n\nepochs_proj = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5, proj=True,\n baseline=(None, 0), reject=reject)\nepochs_proj.average().plot(spatial_colors=True)", "Looks cool right? 
It is, however, often not clear how many components you\nshould take, and unfortunately this can have bad consequences, as can be seen\ninteractively using the delayed SSP mode:", "evoked = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,\n proj='delayed', baseline=(None, 0),\n reject=reject).average()\n\n# set time instants in seconds (from 50 to 150ms in a step of 10ms)\ntimes = np.arange(0.05, 0.15, 0.01)\n\nevoked.plot_topomap(times, proj='interactive')", "Now you should see checkboxes. Remove a few SSP projections and see how the auditory\npattern suddenly drops off." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
regata/rl-book
ch06.ipynb
mit
[ "Exercise 6.7: Windy Gridworld with King’s Moves\nCreate Windy Grid World environment", "import numpy as np\n\nACTION_TO_XY = {\n 'left': (-1, 0),\n 'right': (1, 0),\n 'up': (0, 1),\n 'down': (0, -1),\n 'up_left': (-1, 1),\n 'down_left': (-1, -1),\n 'up_right': (1, 1),\n 'down_right': (1, -1),\n 'stop': (0, 0)\n}\n\n# convert tuples to np so we can do math with states\nACTION_TO_XY = {a: np.array(xy) for a, xy in ACTION_TO_XY.items()}\n\nWIND = [0, 0, 0, 1, 1, 1, 2, 2, 1, 0]\n\nclass WindyGridworld(object):\n def __init__(self):\n self._state = None\n self._goal = np.array([7, 3]) # goal state, XY\n self._start = np.array([0, 3]) # start state, XY\n self.shape = [10, 7] # grid world shape, XY\n self._wind_x = WIND\n assert len(self._wind_x) == self.shape[0]\n \n def reset(self):\n self._state = self._start.copy()\n return tuple(self._state)\n \n def _clip_state(self): \n self._state[0] = np.clip(self._state[0], 0, self.shape[0] - 1) # clip x\n self._state[1] = np.clip(self._state[1], 0, self.shape[1] - 1) # clip y\n \n def step(self, action):\n a_xy = ACTION_TO_XY[action]\n \n # apply wind shift\n wind_shift = [0, self._wind_x[self._state[0]]]\n self._state += np.array(wind_shift)\n self._clip_state()\n # apply action\n self._state += a_xy\n self._clip_state()\n \n \n reward = -1\n term = True if np.all(self._goal == self._state) else False\n \n return tuple(self._state), reward, term, None", "Create Sarsa Agent", "from collections import defaultdict, namedtuple\nimport random\nfrom tqdm import tqdm\n\nTransition = namedtuple('Transition', ['state1',\n 'action',\n 'reward',\n 'state2'])\n\nclass SarsaAgent(object):\n def __init__(self, env, actions, alpha=0.5, epsilon=0.1, gamma=1):\n self._env = env\n self._actions = actions\n self._alpha = alpha\n self._epsilon = epsilon\n self._gamma = gamma \n self.episodes = []\n # init q table\n self._q = {}\n action_vals = {a: 0 for a in self._actions}\n for x in 
range(self._env.shape[0]):\n for y in range(self._env.shape[1]):\n self._q[(x,y)] = dict(action_vals)\n \n def random_policy(self, state):\n return random.choice(self._actions)\n \n def greedy_policy(self, state):\n return max(self._q[state], key=self._q[state].get)\n \n def e_greedy_policy(self, state):\n if np.random.rand() > self._epsilon:\n action = self.greedy_policy(state)\n else:\n action = self.random_policy(state)\n return action\n\n def play_episode(self):\n s1 = self._env.reset()\n a1 = self.e_greedy_policy(s1)\n transitions = []\n while True:\n s2, r, term, _ = self._env.step(a1)\n a2 = self.e_greedy_policy(s2)\n \n target = r + self._gamma*self._q[s2][a2]\n if term:\n # a terminal state has value 0, so the TD target reduces to the reward\n target = r\n self._q[s1][a1] = self._q[s1][a1] + self._alpha*(target - self._q[s1][a1])\n # record the transition before advancing the state-action pair\n transitions.append(Transition(s1, a1, r, s2))\n s1 = s2\n a1 = a2\n \n if term:\n break\n return transitions\n \n def learn(self, n_episodes=500):\n for _ in tqdm(range(n_episodes)):\n transitions = self.play_episode()\n self.episodes.append(transitions)", "Evaluate agents with different action sets", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nactions4 = ['left', 'right', 'up', 'down']\nactions8 = ['left', 'right', 'up', 'down', 'up_left', 'down_left', 'up_right', 'down_right']\nactions9 = ['left', 'right', 'up', 'down', 'up_left', 'down_left', 'up_right', 'down_right', 'stop']\n\nACTION_TO_ARROW = {\n 'left': '⇽',\n 'right': '→',\n 'up': '↑',\n 'down': '↓',\n 'up_left': '↖',\n 'down_left': '↙',\n 'up_right': '↗',\n 'down_right': '↘',\n 'stop': '○'\n}\n\ndef evaluate(agent, title):\n agent.learn()\n \n total_rewards = []\n episode_ids = []\n for e_id, episode in enumerate(agent.episodes):\n rewards = map(lambda e: e.reward, episode)\n total_rewards.append(sum(rewards))\n episode_ids.extend([e_id] * len(episode))\n\n fig, axs = plt.subplots(1, 2, figsize=(16, 6))\n \n # display total reward vs episodes\n ax = axs[0]\n ax.plot(total_rewards)\n ax.grid()\n ax.set_title(title)\n
ax.set_xlabel('episode')\n ax.set_ylabel('Total rewards')\n \n # display time steps vs episodes\n ax = axs[1]\n ax.plot(episode_ids)\n ax.grid()\n ax.set_xlabel('Time steps')\n ax.set_ylabel('Episodes')\n \n q = agent._q\n for y in range(agent._env.shape[1] - 1, -1, -1):\n row = []\n for x in range(agent._env.shape[0]):\n state = (x,y)\n a = max(q[state], key=q[state].get)\n row.append(ACTION_TO_ARROW[a])\n # row.append(a)\n print(row)\n print([str(w) for w in WIND])\n\nworld = WindyGridworld()\n\nagent4 = SarsaAgent(world, actions4)\nevaluate(agent4, 'Agent with 4 actions')\n\nagent8 = SarsaAgent(world, actions8)\nevaluate(agent8, 'Agent with 8 actions')\n\nagent9 = SarsaAgent(world, actions9)\nevaluate(agent9, 'Agent with 9 actions')", "Exercise 6.8: Stochastic Wind", "class StochasticWindyGridworld(WindyGridworld): \n def step(self, action):\n a_xy = ACTION_TO_XY[action]\n \n # apply wind shift\n wind = self._wind_x[self._state[0]]\n if wind > 0:\n wind = random.choice([wind - 1, wind, wind + 1])\n wind_shift = [0, wind]\n self._state += np.array(wind_shift)\n self._clip_state()\n # apply action\n self._state += a_xy\n self._clip_state()\n \n reward = -1\n term = True if np.all(self._goal == self._state) else False\n \n return tuple(self._state), reward, term, None\n\nstochastic_world = StochasticWindyGridworld()\n\nagent4 = SarsaAgent(stochastic_world, actions4)\nevaluate(agent4, 'Agent with 4 actions')\n\nagent8 = SarsaAgent(stochastic_world, actions8)\nevaluate(agent8, 'Agent with 8 actions')\n\nagent9 = SarsaAgent(stochastic_world, actions9)\nevaluate(agent9, 'Agent with 9 actions')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
flightcom/freqtrade
freqtrade/templates/strategy_analysis_example.ipynb
gpl-3.0
[ "Strategy analysis example\nDebugging a strategy can be time-consuming. Freqtrade offers helper functions to visualize raw data.\nThe following assumes you work with SampleStrategy, data for 5m timeframe from Binance and have downloaded them into the data directory in the default location.\nSetup", "from pathlib import Path\nfrom freqtrade.configuration import Configuration\n\n# Customize these according to your needs.\n\n# Initialize empty configuration object\nconfig = Configuration.from_files([])\n# Optionally, use existing configuration file\n# config = Configuration.from_files([\"config.json\"])\n\n# Define some constants\nconfig[\"timeframe\"] = \"5m\"\n# Name of the strategy class\nconfig[\"strategy\"] = \"SampleStrategy\"\n# Location of the data\ndata_location = Path(config['user_data_dir'], 'data', 'binance')\n# Pair to analyze - Only use one pair here\npair = \"BTC/USDT\"\n\n# Load data using values set above\nfrom freqtrade.data.history import load_pair_history\n\ncandles = load_pair_history(datadir=data_location,\n timeframe=config[\"timeframe\"],\n pair=pair,\n data_format = \"hdf5\",\n )\n\n# Confirm success\nprint(\"Loaded \" + str(len(candles)) + f\" rows of data for {pair} from {data_location}\")\ncandles.head()", "Load and run strategy\n\nRerun each time the strategy file is changed", "# Load strategy using values set above\nfrom freqtrade.resolvers import StrategyResolver\nfrom freqtrade.data.dataprovider import DataProvider\nstrategy = StrategyResolver.load_strategy(config)\nstrategy.dp = DataProvider(config, None, None)\n\n# Generate buy/sell signals using strategy\ndf = strategy.analyze_ticker(candles, {'pair': pair})\ndf.tail()", "Display the trade details\n\nNote that using data.head() would also work, however most indicators have some \"startup\" data at the top of the dataframe.\nSome possible problems\nColumns with NaN values at the end of the dataframe\nColumns used in crossed*() functions with completely different 
units\n\n\nComparison with full backtest\nhaving 200 buy signals as output for one pair from analyze_ticker() does not necessarily mean that 200 trades will be made during backtesting.\nAssuming you use only one condition such as, df['rsi'] &lt; 30 as buy condition, this will generate multiple \"buy\" signals for each pair in sequence (until rsi returns > 29). The bot will only buy on the first of these signals (and also only if a trade-slot (\"max_open_trades\") is still available), or on one of the middle signals, as soon as a \"slot\" becomes available.", "# Report results\nprint(f\"Generated {df['buy'].sum()} buy signals\")\ndata = df.set_index('date', drop=False)\ndata.tail()", "Load existing objects into a Jupyter notebook\nThe following cells assume that you have already generated data using the cli.\nThey will allow you to drill deeper into your results, and perform analysis which otherwise would make the output very difficult to digest due to information overload.\nLoad backtest results to pandas dataframe\nAnalyze a trades dataframe (also used below for plotting)", "from freqtrade.data.btanalysis import load_backtest_data, load_backtest_stats\n\n# if backtest_dir points to a directory, it'll automatically load the last backtest file.\nbacktest_dir = config[\"user_data_dir\"] / \"backtest_results\"\n# backtest_dir can also point to a specific file \n# backtest_dir = config[\"user_data_dir\"] / \"backtest_results/backtest-result-2020-07-01_20-04-22.json\"\n\n# You can get the full backtest statistics by using the following command.\n# This contains all information used to generate the backtest result.\nstats = load_backtest_stats(backtest_dir)\n\nstrategy = 'SampleStrategy'\n# All statistics are available per strategy, so if `--strategy-list` was used during backtest, this will be reflected here as well.\n# Example usages:\nprint(stats['strategy'][strategy]['results_per_pair'])\n# Get pairlist used for this 
backtest\nprint(stats['strategy'][strategy]['pairlist'])\n# Get market change (average change of all pairs from start to end of the backtest period)\nprint(stats['strategy'][strategy]['market_change'])\n# Maximum drawdown ()\nprint(stats['strategy'][strategy]['max_drawdown'])\n# Maximum drawdown start and end\nprint(stats['strategy'][strategy]['drawdown_start'])\nprint(stats['strategy'][strategy]['drawdown_end'])\n\n\n# Get strategy comparison (only relevant if multiple strategies were compared)\nprint(stats['strategy_comparison'])\n\n\n# Load backtested trades as dataframe\ntrades = load_backtest_data(backtest_dir)\n\n# Show value-counts per pair\ntrades.groupby(\"pair\")[\"sell_reason\"].value_counts()", "Plotting daily profit / equity line", "# Plotting equity line (starting with 0 on day 1 and adding daily profit for each backtested day)\n\nfrom freqtrade.configuration import Configuration\nfrom freqtrade.data.btanalysis import load_backtest_data, load_backtest_stats\nimport plotly.express as px\nimport pandas as pd\n\n# strategy = 'SampleStrategy'\n# config = Configuration.from_files([\"user_data/config.json\"])\n# backtest_dir = config[\"user_data_dir\"] / \"backtest_results\"\n\nstats = load_backtest_stats(backtest_dir)\nstrategy_stats = stats['strategy'][strategy]\n\ndates = []\nprofits = []\nfor date_profit in strategy_stats['daily_profit']:\n dates.append(date_profit[0])\n profits.append(date_profit[1])\n\nequity = 0\nequity_daily = []\nfor daily_profit in profits:\n equity_daily.append(equity)\n equity += float(daily_profit)\n\n\ndf = pd.DataFrame({'dates': dates,'equity_daily': equity_daily})\n\nfig = px.line(df, x=\"dates\", y=\"equity_daily\")\nfig.show()\n", "Load live trading results into a pandas dataframe\nIn case you did already some trading and want to analyze your performance", "from freqtrade.data.btanalysis import load_trades_from_db\n\n# Fetch trades from database\ntrades = load_trades_from_db(\"sqlite:///tradesv3.sqlite\")\n\n# Display 
results\ntrades.groupby(\"pair\")[\"sell_reason\"].value_counts()", "Analyze the loaded trades for trade parallelism\nThis can be useful to find the best max_open_trades parameter, when used with backtesting in conjunction with --disable-max-market-positions.\nanalyze_trade_parallelism() returns a timeseries dataframe with an \"open_trades\" column, specifying the number of open trades for each candle.", "from freqtrade.data.btanalysis import analyze_trade_parallelism\n\n# Analyze the above\nparallel_trades = analyze_trade_parallelism(trades, '5m')\n\nparallel_trades.plot()", "Plot results\nFreqtrade offers interactive plotting capabilities based on plotly.", "from freqtrade.plot.plotting import generate_candlestick_graph\n# Limit graph period to keep plotly quick and reactive\n\n# Filter trades to one pair\ntrades_red = trades.loc[trades['pair'] == pair]\n\ndata_red = data['2019-06-01':'2019-06-10']\n# Generate candlestick graph\ngraph = generate_candlestick_graph(pair=pair,\n data=data_red,\n trades=trades_red,\n indicators1=['sma20', 'ema50', 'ema55'],\n indicators2=['rsi', 'macd', 'macdsignal', 'macdhist']\n )\n\n# Show graph inline\n# graph.show()\n\n# Render graph in a separate window\ngraph.show(renderer=\"browser\")\n", "Plot average profit per trade as a distribution graph", "import plotly.figure_factory as ff\n\nhist_data = [trades.profit_ratio]\ngroup_labels = ['profit_ratio'] # name of the dataset\n\nfig = ff.create_distplot(hist_data, group_labels, bin_size=0.01)\nfig.show()\n", "Feel free to submit an issue or Pull Request enhancing this document if you would like to share ideas on how to best analyze the data." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
LorenzoBi/courses
TSAADS/tutorial 3/Untitled.ipynb
mit
[ "import numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.io as sio\nfrom sklearn import datasets, linear_model\nfrom scipy.stats import linregress\nfrom numpy.linalg import inv\nimport statsmodels.discrete.discrete_model as sm\nfrom numpy.linalg import eigvals\nimport numpy.linalg as LA\nfrom scipy import stats\n\n%matplotlib inline\n\ndef set_data(p, x):\n temp = x.flatten()\n n = len(temp[p:])\n x_T = temp[p:].reshape((n, 1))\n X_p = np.ones((n, p + 1))\n for i in range(1, p + 1):\n X_p[:, i] = temp[i - 1: i - 1 + n]\n return X_p, x_T", "Task 1", "data = sio.loadmat('Tut3_file1.mat')\nDLPFC = data['DLPFC']\nx = DLPFC[0, :]\nplt.plot(DLPFC.T)\n\nx = DLPFC[0]\nX_p, x_T = set_data(3, x)\n\nres = LA.lstsq(X_p, x_T)\n\neps = x_T - np.dot(X_p, res[0])\nXpXp = np.diag(LA.inv(np.dot(X_p.T, X_p)))\nsdeps = np.std(eps, axis=0, ddof=0)\nres[0].flatten() / (sdeps * np.sqrt(XpXp))\nerr = (sdeps * np.sqrt(XpXp))", "The t-statistic measures how distant our parameters are from the zero hypothesis in terms of standard errors. So for $a_0$ and $a_1$ we cannot refute the zero hypothesis, but in the case of $a_2$ and $a_3$ it is highly unlikely that our estimate comes from random fluctuations.", "import statsmodels.api as sm\nols = sm.OLS(x_T, X_p).fit()\nols.summary()", "The likelihood function is:\n$ f(Y) = \left(\frac{1}{\sqrt{2 \pi} \sigma}\right)^n \exp\left(- \frac{1}{2 \sigma^2}\sum^n_{i=1}(Y_i - a_0 X_{i1} - ... 
- a_4)^2\right) $, so the log likelihood will be:\n$-n\ln(\sqrt{2 \pi} \sigma) - \frac{1}{2 \sigma^2}\sum^n_{i=1}(Y_i - a_0 X_{i1} - ... - a_4)^2$.\nThe result in our case can be seen in the summary of the previous cell.", "ols.params, res[0].flatten()\nols.HC4_se, err\n# White (heteroskedasticity-consistent) standard errors:\n# sqrt(diag((X'X)^-1 X' diag(e_i^2) X (X'X)^-1))\nXtX_inv = LA.inv(np.dot(X_p.T, X_p))\nnp.sqrt(np.diag(XtX_inv @ X_p.T @ np.diag(eps.flatten()**2) @ X_p @ XtX_inv))", "Task 2\nWe do the VAR(1) regression similarly to the previous case, using all the time series.", "X = np.ones((5, len(x)))\nX[1:3, :] = DLPFC\nX[3:, :] = data['Parietal']\nX_T = X[1:, 1:].T.reshape((359, 4))\nA = np.zeros((4, 5))\ntv = np.zeros((4, 5))\nfor i in range(1, 5):\n\n X_T = X[i, 1:].T.reshape((359, 1))\n X_p = X[:, :-1].T\n ols = sm.OLS(X_T, X_p).fit()\n A[i-1, :] = ols.params\n tv[i-1, :] = ols.pvalues", "We can see that the off-diagonal terms are much smaller than the terms on the diagonal. This shows that the features influence each other much less than each feature influences itself.", "A[:, :]\n\nX_T = X[1:, 1:].T.reshape((359, 4))\n\nres = LA.lstsq(X_p, X_T)", "To check whether the process is stationary, we check whether the eigenvalues lie inside the unit circle. They do.", "res[0].shape", "The log likelihood of this model will be the following:", "eps = X_T - np.dot(X_p, res[0])\nsdeps = np.std(eps, axis=0)\n\nXpXp = LA.inv(np.dot(X_p.T, X_p))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/ko/tutorials/generative/style_transfer.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "tf.keras를 사용한 Neural Style Transfer\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/generative/style_transfer\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />TensorFlow.org에서 보기</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/generative/style_transfer.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />구글 코랩(Colab)에서 실행하기</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/generative/style_transfer.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />깃허브(GitHub) 소스 보기</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/generative/style_transfer.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nNote: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도\n불구하고 공식 영문 문서의 내용과 일치하지 않을 수 있습니다.\n이 번역에 개선할 부분이 있다면\ntensorflow/docs-l10n 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.\n문서 번역이나 리뷰에 참여하려면\ndocs-ko@tensorflow.org로\n메일을 보내주시기 바랍니다.\n개요\n이번 튜토리얼에서는 딥러닝을 사용하여 원하는 이미지를 다른 스타일의 이미지로 구성하는 법을 배워보겠습니다(피카소나 반 고흐처럼 그리기를 희망하나요?). 
이 기법은 Neural Style Transfer로 알려져있으며, Leon A. Gatys의 논문 A Neural Algorithm of Artistic Style에 잘 기술되어 있습니다.\n참고: 본 튜토리얼은 처음에 발표된 기존의 스타일 전이 알고리즘을 소개합니다. 이 알고리즘은 이미지의 콘텐츠를 특정 스타일에 최적화시키는 방식으로 작동합니다. 보다 최근에 개발된 (CycleGan과 같은) 알고리즘은 모델로 하여금 스타일이 변이된 이미지를 직접 생성하도록 만듭니다. 이 접근은 기존의 스타일 전이 알고리즘에 비해 훨씬 빠릅니다 (최대 1000배). 텐서플로 허브와 텐서플로 라이트에서는 이러한 사전 훈련된 이미지 변이 모듈을 제공하고 있습니다.\nNeural style transfer은 콘텐츠 (content) 이미지와 (유명한 작가의 삽화와 같은) 스타일 참조 (style reference) 이미지를 이용하여, 콘텐츠 이미지의 콘텐츠는 유지하되 스타일 참조 이미지의 화풍으로 채색한 것 같은 새로운 이미지를 생성하는 최적화 기술입니다.\n이 과정은 출력 이미지를 콘텐츠 이미지의 콘텐츠 통계랑(statistic)과 스타일 참조 이미지의 스타일 통계량에 맞춰 최적화시킴으로써 구현됩니다. 통계량은 합성곱 신경망을 이용해 각각의 이미지에서 추출합니다.\n예시로, 아래에 주어진 강아지의 이미지와 바실리 칸딘스키의 7번 작품을 살펴봅시다:\n<img src=\"https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg\" width=\"500px\"/>\n노란 래브라도, 출처: 위키미디아 공용\n<img src=\"https://storage.googleapis.com/download.tensorflow.org/example_images/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg\" width=\"500px\"/>\n만약 칸딘스키가 7번 작품의 화풍으로 이 강아지를 그렸다면 어떤 작품이 탄생했을까요? 
아마 이런 그림이 아니었을까요?\n<img src=\"https://tensorflow.org/tutorials/generative/images/stylized-image.png\" style=\"width: 500px;\"/>\n설정\n모듈 구성 및 임포트", "import tensorflow as tf\n\nimport IPython.display as display\n\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nmpl.rcParams['figure.figsize'] = (12,12)\nmpl.rcParams['axes.grid'] = False\n\nimport numpy as np\nimport PIL.Image\nimport time\nimport functools\n\ndef tensor_to_image(tensor):\n tensor = tensor*255\n tensor = np.array(tensor, dtype=np.uint8)\n if np.ndim(tensor)>3:\n assert tensor.shape[0] == 1\n tensor = tensor[0]\n return PIL.Image.fromarray(tensor)", "이미지를 다운로드받고 스타일 참조 이미지와 콘텐츠 이미지를 선택합니다:", "content_path = tf.keras.utils.get_file('YellowLabradorLooking_new.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg')\n\n# https://commons.wikimedia.org/wiki/File:Vassily_Kandinsky,_1913_-_Composition_7.jpg\nstyle_path = tf.keras.utils.get_file('kandinsky5.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg')", "입력 시각화\n이미지를 불러오는 함수를 정의하고, 최대 이미지 크기를 512개의 픽셀로 제한합니다.", "def load_img(path_to_img):\n max_dim = 512\n img = tf.io.read_file(path_to_img)\n img = tf.image.decode_image(img, channels=3)\n img = tf.image.convert_image_dtype(img, tf.float32)\n\n shape = tf.cast(tf.shape(img)[:-1], tf.float32)\n long_dim = max(shape)\n scale = max_dim / long_dim\n\n new_shape = tf.cast(shape * scale, tf.int32)\n\n img = tf.image.resize(img, new_shape)\n img = img[tf.newaxis, :]\n return img", "이미지를 출력하기 위한 간단한 함수를 정의합니다:", "def imshow(image, title=None):\n if len(image.shape) > 3:\n image = tf.squeeze(image, axis=0)\n\n plt.imshow(image)\n if title:\n plt.title(title)\n\ncontent_image = load_img(content_path)\nstyle_image = load_img(style_path)\n\nplt.subplot(1, 2, 1)\nimshow(content_image, 'Content Image')\n\nplt.subplot(1, 2, 2)\nimshow(style_image, 'Style Image')", "TF-Hub를 통한 빠른 스타일 
전이\n앞서 언급했듯이, 본 튜토리얼은 이미지 콘텐츠를 특정 스타일에 맞춰 최적화시키는 기존의 스타일 전이 알고리즘을 소개합니다. 이에 대해 살펴보기 전에, 텐서플로 허브 모듈은 어떤 결과물을 생성하는지 시험해봅시다:", "import tensorflow_hub as hub\nhub_module = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/1')\nstylized_image = hub_module(tf.constant(content_image), tf.constant(style_image))[0]\ntensor_to_image(stylized_image)", "콘텐츠와 스타일 표현 정의하기\n이미지의 콘텐츠와 스타일 표현(representation)을 얻기 위해, 모델의 몇 가지 중간층들을 살펴볼 것입니다. 모델의 입력층부터 시작해서, 처음 몇 개의 층은 선분이나 질감과 같은 이미지 내의 저차원적 특성에 반응합니다. 반면, 네트워크가 깊어지면 최종 몇 개의 층은 바퀴나 눈과 같은 고차원적 특성들을 나타냅니다. 이번 경우, 우리는 사전학습된 이미지 분류 네트워크인 VGG19 네트워크의 구조를 사용할 것입니다. 이 중간층들은 이미지에서 콘텐츠와 스타일 표현을 정의하는 데 필요합니다. 입력 이미지가 주어졌을때, 스타일 전이 알고리즘은 이 중간층들에서 콘텐츠와 스타일에 해당하는 타깃 표현들을 일치시키려고 시도할 것입니다.\nVGG19 모델을 불러오고, 작동 여부를 확인하기 위해 이미지에 적용시켜봅시다:", "x = tf.keras.applications.vgg19.preprocess_input(content_image*255)\nx = tf.image.resize(x, (224, 224))\nvgg = tf.keras.applications.VGG19(include_top=True, weights='imagenet')\nprediction_probabilities = vgg(x)\nprediction_probabilities.shape\n\npredicted_top_5 = tf.keras.applications.vgg19.decode_predictions(prediction_probabilities.numpy())[0]\n[(class_name, prob) for (number, class_name, prob) in predicted_top_5]", "이제 분류층을 제외한 VGG19 모델을 불러오고, 각 층의 이름을 출력해봅니다.", "vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')\n\nprint()\nfor layer in vgg.layers:\n print(layer.name)", "이미지의 스타일과 콘텐츠를 나타내기 위한 모델의 중간층들을 선택합니다:", "content_layers = ['block5_conv2'] \n\nstyle_layers = ['block1_conv1',\n 'block2_conv1',\n 'block3_conv1', \n 'block4_conv1', \n 'block5_conv1']\n\nnum_content_layers = len(content_layers)\nnum_style_layers = len(style_layers)", "스타일과 콘텐츠를 위한 중간층\n그렇다면 사전훈련된 이미지 분류 네트워크 속에 있는 중간 출력으로 어떻게 스타일과 콘텐츠 표현을 정의할 수 있을까요?\n고수준에서 보면 (네트워크의 훈련 목적인) 이미지 분류를 수행하기 위해서는 네트워크가 반드시 이미지를 이해햐야 합니다. 
이는 미가공 이미지를 입력으로 받아 픽셀값들을 이미지 내에 존재하는 특성(feature)들에 대한 복합적인 이해로 변환할 수 있는 내부 표현(internal representation)을 만드는 작업이 포함됩니다.\n또한 부분적으로 왜 합성곱(convolutional) 신경망의 일반화(generalize)가 쉽게 가능한지를 나타냅니다. 즉, 합성곱 신경망은 배경잡음(background noise)과 기타잡음(nuisances)에 상관없이 (고양이와 강아지와 같이)클래스 안에 있는 불변성(invariance)과 특징을 포착할 수 있습니다. 따라서 미가공 이미지의 입력과 분류 레이블(label)의 출력 중간 어딘가에서 모델은 복합 특성(complex feature) 추출기의 역할을 수행합니다. 그러므로, 모델의 중간층에 접근함으로써 입력 이미지의 콘텐츠와 스타일을 추출할 수 있습니다.\n모델 만들기\ntf.keras.applications에서 제공하는 모델들은 케라스 함수형 API을 통해 중간층에 쉽게 접근할 수 있도록 구성되어있습니다.\n함수형 API를 이용해 모델을 정의하기 위해서는 모델의 입력과 출력을 지정합니다:\nmodel = Model(inputs, outputs)\n아래의 함수는 중간층들의 결과물을 배열 형태로 출력하는 VGG19 모델을 반환합니다:", "def vgg_layers(layer_names):\n \"\"\" 중간층의 출력값을 배열로 반환하는 vgg 모델을 만듭니다.\"\"\"\n # 이미지넷 데이터셋에 사전학습된 VGG 모델을 불러옵니다\n vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')\n vgg.trainable = False\n \n outputs = [vgg.get_layer(name).output for name in layer_names]\n\n model = tf.keras.Model([vgg.input], outputs)\n return model", "위 함수를 이용해 모델을 만들어봅시다:", "style_extractor = vgg_layers(style_layers)\nstyle_outputs = style_extractor(style_image*255)\n\n# 각 층의 출력에 대한 통계량을 살펴봅니다\nfor name, output in zip(style_layers, style_outputs):\n print(name)\n print(\" 크기: \", output.numpy().shape)\n print(\" 최솟값: \", output.numpy().min())\n print(\" 최댓값: \", output.numpy().max())\n print(\" 평균: \", output.numpy().mean())\n print()", "스타일 계산하기\n이미지의 콘텐츠는 중간층들의 특성 맵(feature map)의 값들로 표현됩니다.\n이미지의 스타일은 각 특성 맵의 평균과 피쳐맵들 사이의 상관관계로 설명할 수 있습니다. 이런 정보를 담고 있는 그람 행렬(Gram matrix)은 각 위치에서 특성 벡터(feature vector)끼리의 외적을 구한 후,평균값을 냄으로써 구할 수 있습니다. 
주어진 층에 대한 그람 행렬은 다음과 같이 계산할 수 있습니다:\n$$G^l_{cd} = \\frac{\\sum_{ij} F^l_{ijc}(x)F^l_{ijd}(x)}{IJ}$$\n이 식은 tf.linalg.einsum 함수를 통해 쉽게 계산할 수 있습니다:", "def gram_matrix(input_tensor):\n result = tf.linalg.einsum('bijc,bijd->bcd', input_tensor, input_tensor)\n input_shape = tf.shape(input_tensor)\n num_locations = tf.cast(input_shape[1]*input_shape[2], tf.float32)\n return result/(num_locations)", "스타일과 콘텐츠 추출하기\n스타일과 콘텐츠 텐서를 반환하는 모델을 만듭시다.", "class StyleContentModel(tf.keras.models.Model):\n def __init__(self, style_layers, content_layers):\n super(StyleContentModel, self).__init__()\n self.vgg = vgg_layers(style_layers + content_layers)\n self.style_layers = style_layers\n self.content_layers = content_layers\n self.num_style_layers = len(style_layers)\n self.vgg.trainable = False\n\n def call(self, inputs):\n \"[0,1] 사이의 실수 값을 입력으로 받습니다\"\n inputs = inputs*255.0\n preprocessed_input = tf.keras.applications.vgg19.preprocess_input(inputs)\n outputs = self.vgg(preprocessed_input)\n style_outputs, content_outputs = (outputs[:self.num_style_layers], \n outputs[self.num_style_layers:])\n\n style_outputs = [gram_matrix(style_output)\n for style_output in style_outputs]\n\n content_dict = {content_name:value \n for content_name, value \n in zip(self.content_layers, content_outputs)}\n\n style_dict = {style_name:value\n for style_name, value\n in zip(self.style_layers, style_outputs)}\n \n return {'content':content_dict, 'style':style_dict}", "이미지가 입력으로 주어졌을때, 이 모델은 style_layers의 스타일과 content_layers의 콘텐츠에 대한 그람 행렬을 출력합니다:", "extractor = StyleContentModel(style_layers, content_layers)\n\nresults = extractor(tf.constant(content_image))\n\nprint('스타일:')\nfor name, output in sorted(results['style'].items()):\n print(\" \", name)\n print(\" 크기: \", output.numpy().shape)\n print(\" 최솟값: \", output.numpy().min())\n print(\" 최댓값: \", output.numpy().max())\n print(\" 평균: \", output.numpy().mean())\n print()\n\nprint(\"콘텐츠:\")\nfor name, output in sorted(results['content'].items()):\n 
print(\" \", name)\n print(\" 크기: \", output.numpy().shape)\n print(\" 최솟값: \", output.numpy().min())\n print(\" 최댓값: \", output.numpy().max())\n print(\" 평균: \", output.numpy().mean())\n", "경사하강법 실행\n이제 스타일과 콘텐츠 추출기를 사용해 스타일 전이 알고리즘을 구현할 차례입니다. 타깃에 대한 입력 이미지의 평균 제곱 오차를 계산한 후, 오차값들의 가중합을 구합니다.\n스타일과 콘텐츠의 타깃값을 지정합니다:", "style_targets = extractor(style_image)['style']\ncontent_targets = extractor(content_image)['content']", "최적화시킬 이미지를 담을 tf.Variable을 정의하고 콘텐츠 이미지로 초기화합니다. (이때 tf.Variable는 콘텐츠 이미지와 크기가 같아야 합니다.):", "image = tf.Variable(content_image)", "픽셀 값이 실수이므로 0과 1 사이로 클리핑하는 함수를 정의합니다:", "def clip_0_1(image):\n return tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=1.0)", "옵티마이저를 생성합니다. 참조 연구에서는 LBFGS를 추천하지만, Adam도 충분히 적합합니다:", "opt = tf.optimizers.Adam(learning_rate=0.02, beta_1=0.99, epsilon=1e-1)", "최적화를 진행하기 위해, 전체 오차를 콘텐츠와 스타일 오차의 가중합으로 정의합니다:", "style_weight=1e-2\ncontent_weight=1e4\n\ndef style_content_loss(outputs):\n style_outputs = outputs['style']\n content_outputs = outputs['content']\n style_loss = tf.add_n([tf.reduce_mean((style_outputs[name]-style_targets[name])**2) \n for name in style_outputs.keys()])\n style_loss *= style_weight / num_style_layers\n\n content_loss = tf.add_n([tf.reduce_mean((content_outputs[name]-content_targets[name])**2) \n for name in content_outputs.keys()])\n content_loss *= content_weight / num_content_layers\n loss = style_loss + content_loss\n return loss", "tf.GradientTape를 사용해 이미지를 업데이트합니다.", "@tf.function()\ndef train_step(image):\n with tf.GradientTape() as tape:\n outputs = extractor(image)\n loss = style_content_loss(outputs)\n\n grad = tape.gradient(loss, image)\n opt.apply_gradients([(grad, image)])\n image.assign(clip_0_1(image))", "구현한 알고리즘을 시험해보기 위해 몇 단계를 돌려봅시다:", "train_step(image)\ntrain_step(image)\ntrain_step(image)\ntensor_to_image(image)", "잘 작동하는 것을 확인했으니, 더 오랫동안 최적화를 진행해봅니다:", "import time\nstart = time.time()\n\nepochs = 10\nsteps_per_epoch = 100\n\nstep = 0\nfor n in range(epochs):\n for 
m in range(steps_per_epoch):\n step += 1\n train_step(image)\n print(\".\", end='')\n display.clear_output(wait=True)\n display.display(tensor_to_image(image))\n print(\"훈련 스텝: {}\".format(step))\n \nend = time.time()\nprint(\"전체 소요 시간: {:.1f}\".format(end-start))", "총 변위 손실\n이 기본 구현 방식의 한 가지 단점은 많은 고주파 아티팩(high frequency artifact)가 생겨난다는 점 입니다. 아티팩 생성을 줄이기 위해서는 이미지의 고주파 구성 요소에 대한 레귤러리제이션(regularization) 항을 추가해야 합니다. 스타일 전이에서는 이 변형된 오차값을 총 변위 손실(total variation loss)라고 합니다:", "def high_pass_x_y(image):\n x_var = image[:,:,1:,:] - image[:,:,:-1,:]\n y_var = image[:,1:,:,:] - image[:,:-1,:,:]\n\n return x_var, y_var\n\nx_deltas, y_deltas = high_pass_x_y(content_image)\n\nplt.figure(figsize=(14,10))\nplt.subplot(2,2,1)\nimshow(clip_0_1(2*y_deltas+0.5), \"Horizontal Deltas: Original\")\n\nplt.subplot(2,2,2)\nimshow(clip_0_1(2*x_deltas+0.5), \"Vertical Deltas: Original\")\n\nx_deltas, y_deltas = high_pass_x_y(image)\n\nplt.subplot(2,2,3)\nimshow(clip_0_1(2*y_deltas+0.5), \"Horizontal Deltas: Styled\")\n\nplt.subplot(2,2,4)\nimshow(clip_0_1(2*x_deltas+0.5), \"Vertical Deltas: Styled\")", "위 이미지들은 고주파 구성 요소가 늘어났다는 것을 보여줍니다.\n한 가지 흥미로운 사실은 고주파 구성 요소가 경계선 탐지기의 일종이라는 점입니다. 이를테면 소벨 경계선 탐지기(Sobel edge detector)를 사용하면 유사한 출력을 얻을 수 있습니다:", "plt.figure(figsize=(14,10))\n\nsobel = tf.image.sobel_edges(content_image)\nplt.subplot(1,2,1)\nimshow(clip_0_1(sobel[...,0]/4+0.5), \"Horizontal Sobel-edges\")\nplt.subplot(1,2,2)\nimshow(clip_0_1(sobel[...,1]/4+0.5), \"Vertical Sobel-edges\")", "정규화 오차는 각 값의 절대값의 합으로 표현됩니다:", "def total_variation_loss(image):\n x_deltas, y_deltas = high_pass_x_y(image)\n return tf.reduce_sum(tf.abs(x_deltas)) + tf.reduce_sum(tf.abs(y_deltas))\n\ntotal_variation_loss(image).numpy()", "식이 잘 계산된다는 것을 확인할 수 있습니다. 
하지만 다행히도 텐서플로에는 이미 표준 함수가 내장되어 있기 직접 오차식을 구현할 필요는 없습니다:", "tf.image.total_variation(image).numpy()", "다시 최적화하기\ntotal_variation_loss를 위한 가중치를 정의합니다:", "total_variation_weight=30", "이제 이 가중치를 train_step 함수에서 사용합니다:", "@tf.function()\ndef train_step(image):\n with tf.GradientTape() as tape:\n outputs = extractor(image)\n loss = style_content_loss(outputs)\n loss += total_variation_weight*tf.image.total_variation(image)\n\n grad = tape.gradient(loss, image)\n opt.apply_gradients([(grad, image)])\n image.assign(clip_0_1(image))", "최적화할 변수를 다시 초기화합니다:", "image = tf.Variable(content_image)", "최적화를 수행합니다:", "import time\nstart = time.time()\n\nepochs = 10\nsteps_per_epoch = 100\n\nstep = 0\nfor n in range(epochs):\n for m in range(steps_per_epoch):\n step += 1\n train_step(image)\n print(\".\", end='')\n display.clear_output(wait=True)\n display.display(tensor_to_image(image))\n print(\"훈련 스텝: {}\".format(step))\n\nend = time.time()\nprint(\"전체 소요 시간: {:.1f}\".format(end-start))", "마지막으로, 결과물을 저장합니다:", "file_name = 'stylized-image.png'\ntensor_to_image(image).save(file_name)\n\ntry:\n from google.colab import files\nexcept ImportError:\n pass\nelse:\n files.download(file_name)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fastai/fastai
nbs/18b_callback.preds.ipynb
apache-2.0
[ "#|hide\n#|skip\n! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab\n\n#|default_exp callback.preds\n\n#|export\nfrom __future__ import annotations\nfrom fastai.basics import *\n\n#|hide\nfrom nbdev.showdoc import *\nfrom fastai.test_utils import *", "Predictions callbacks\n\nVarious callbacks to customize get_preds behaviors\n\nMCDropoutCallback\n\nTurns on dropout during inference, allowing you to call Learner.get_preds multiple times to approximate your model uncertainty using Monte Carlo Dropout.", "#|export\nclass MCDropoutCallback(Callback):\n def before_validate(self):\n for m in [m for m in flatten_model(self.model) if 'dropout' in m.__class__.__name__.lower()]:\n m.train()\n \n def after_validate(self):\n for m in [m for m in flatten_model(self.model) if 'dropout' in m.__class__.__name__.lower()]:\n m.eval()\n\nlearn = synth_learner()\n\n# Call get_preds 10 times, then stack the predictions, yielding a tensor with shape [# of samples, batch_size, ...]\ndist_preds = []\nfor i in range(10):\n preds, targs = learn.get_preds(cbs=[MCDropoutCallback()])\n dist_preds += [preds]\n\ntorch.stack(dist_preds).shape", "Export -", "#|hide\nfrom nbdev.export import notebook2script\nnotebook2script()" ]
[ "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.16/_downloads/plot_phantom_4DBTi.ipynb
bsd-3-clause
[ "%matplotlib inline", "============================================\n4D Neuroimaging/BTi phantom dataset tutorial\n============================================\nHere we read 4DBTi epochs data obtained with a spherical phantom\nusing four different dipole locations. For each condition we\ncompute evoked data and compute dipole fits.\nData are provided by Jean-Michel Badier from MEG center in Marseille, France.", "# Authors: Alex Gramfort <alexandre.gramfort@inria.fr>\n#\n# License: BSD (3-clause)\n\nimport os.path as op\nimport numpy as np\nfrom mayavi import mlab\nfrom mne.datasets import phantom_4dbti\nimport mne", "Read data and compute a dipole fit at the peak of the evoked response", "data_path = phantom_4dbti.data_path()\nraw_fname = op.join(data_path, '%d/e,rfhp1.0Hz')\n\ndipoles = list()\nsphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=0.080)\n\nt0 = 0.07 # peak of the response\n\npos = np.empty((4, 3))\n\nfor ii in range(4):\n raw = mne.io.read_raw_bti(raw_fname % (ii + 1,),\n rename_channels=False, preload=True)\n raw.info['bads'] = ['A173', 'A213', 'A232']\n events = mne.find_events(raw, 'TRIGGER', mask=4350, mask_type='not_and')\n epochs = mne.Epochs(raw, events=events, event_id=8192, tmin=-0.2, tmax=0.4,\n preload=True)\n evoked = epochs.average()\n evoked.plot(time_unit='s')\n cov = mne.compute_covariance(epochs, tmax=0.)\n dip = mne.fit_dipole(evoked.copy().crop(t0, t0), cov, sphere)[0]\n pos[ii] = dip.pos[0]", "Compute localisation errors", "actual_pos = 0.01 * np.array([[0.16, 1.61, 5.13],\n [0.17, 1.35, 4.15],\n [0.16, 1.05, 3.19],\n [0.13, 0.80, 2.26]])\nactual_pos = np.dot(actual_pos, [[0, 1, 0], [-1, 0, 0], [0, 0, 1]])\n\nerrors = 1e3 * np.linalg.norm(actual_pos - pos, axis=1)\nprint(\"errors (mm) : %s\" % errors)", "Plot the dipoles in 3D", "def plot_pos(pos, color=(0., 0., 0.)):\n mlab.points3d(pos[:, 0], pos[:, 1], pos[:, 2], scale_factor=0.005,\n color=color)\n\n\nmne.viz.plot_alignment(evoked.info, bem=sphere, surfaces=[])\n# 
Plot the position of the actual dipole\nplot_pos(actual_pos, color=(1., 0., 0.))\n# Plot the position of the estimated dipole\nplot_pos(pos, color=(1., 1., 0.))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
damienstanton/tensorflownotes
3_regularization.ipynb
mit
[ "Deep Learning\nAssignment 3\nPreviously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.\nThe goal of this assignment is to explore regularization techniques.", "# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nfrom __future__ import print_function\nimport numpy as np\nimport tensorflow as tf\nfrom six.moves import cPickle as pickle", "First reload the data we generated in 1_notmnist.ipynb.", "pickle_file = 'notMNIST.pickle'\n\nwith open(pickle_file, 'rb') as f:\n save = pickle.load(f)\n train_dataset = save['train_dataset']\n train_labels = save['train_labels']\n valid_dataset = save['valid_dataset']\n valid_labels = save['valid_labels']\n test_dataset = save['test_dataset']\n test_labels = save['test_labels']\n del save # hint to help gc free up memory\n print('Training set', train_dataset.shape, train_labels.shape)\n print('Validation set', valid_dataset.shape, valid_labels.shape)\n print('Test set', test_dataset.shape, test_labels.shape)", "Reformat into a shape that's more adapted to the models we're going to train:\n- data as a flat matrix,\n- labels as float 1-hot encodings.", "image_size = 28\nnum_labels = 10\n\ndef reformat(dataset, labels):\n dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]\n labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n return dataset, labels\ntrain_dataset, train_labels = reformat(train_dataset, train_labels)\nvalid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\ntest_dataset, test_labels = reformat(test_dataset, test_labels)\nprint('Training set', train_dataset.shape, train_labels.shape)\nprint('Validation set', valid_dataset.shape, valid_labels.shape)\nprint('Test set', test_dataset.shape, test_labels.shape)\n\ndef accuracy(predictions, labels):\n return (100.0 * np.sum(np.argmax(predictions, 1) == 
np.argmax(labels, 1))\n / predictions.shape[0])", "Problem 1\nIntroduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.\n\n\nProblem 2\nLet's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?\n\n\nProblem 3\nIntroduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.\nWhat happens to our extreme overfitting case?\n\n\nProblem 4\nTry to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is 97.1%.\nOne avenue you can explore is to add multiple layers.\nAnother one is to use learning rate decay:\nglobal_step = tf.Variable(0) # count the number of steps taken.\nlearning_rate = tf.train.exponential_decay(0.5, global_step, ...)\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
4DGenome/Chromosomal-Conformation-Course
Notebooks/00-Hi-C_quality_check.ipynb
gpl-3.0
[ "FASTQ format\nThe file is organized in 4 lines per read:\n 1 - The header of the DNA sequence with the read id (the read length is optional)\n 2 - The DNA sequence\n 3 - The header of the sequence quality (this line could be either a repetition of line 1 or empty)\n 4 - The sequence quality (it is not human readable, but is provided as a PHRED score. Check https://en.wikipedia.org/wiki/Phred_quality_score for more details)", "for renz in ['HindIII', 'MboI']:\n print(renz)\n ! head -n 4 /media/storage/FASTQs/K562_\"$renz\"_1.fastq\n print('')", "Count the number of lines in the file (4 times the number of reads)", "! wc -l /media/storage/FASTQs/K562_HindIII_1.fastq", "There are 40 M lines in the file, which means 10 M reads in total.\nQuality check before mapping", "from pytadbit.utils.fastq_utils import quality_plot\n\nfor r_enz in ['HindIII', 'MboI']:\n quality_plot('/media/storage/FASTQs/K562_{0}_1.fastq'.format(r_enz), r_enz=r_enz, \n nreads=1000000, paired=False)", "These plots provide a quick overview of the quality of the genome sequencing, as well as a rough estimate of the efficiency of the digestion and ligation steps of the Hi-C experiment." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phoebe-project/phoebe2-docs
2.2/examples/legacy.ipynb
gpl-3.0
[ "Comparing PHOEBE 2 vs PHOEBE Legacy\nNOTE: PHOEBE 1.0 legacy is an alternate backend and is not installed with PHOEBE 2. In order to run this backend, you'll need to have PHOEBE 1.0 installed and manually build the python bindings in the phoebe-py directory.\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).", "!pip install -I \"phoebe>=2.2,<2.3\"", "As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.", "%matplotlib inline\n\nimport phoebe\nfrom phoebe import u\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nphoebe.devel_on() # needed to use WD-style meshing, which isn't fully supported yet\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()\nb['q'] = 0.7\nb['requiv@secondary'] = 0.7", "Adding Datasets and Compute Options", "b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')\nb.add_dataset('rv', times=np.linspace(0,1,101), dataset='rvdyn')\nb.add_dataset('rv', times=np.linspace(0,1,101), dataset='rvnum')", "Let's add compute options for phoebe using both the new (marching) method for creating meshes as well as the WD method which imitates the format of the mesh used within legacy.", "b.add_compute(compute='phoebe2marching', irrad_method='none', mesh_method='marching')\n\nb.add_compute(compute='phoebe2wd', irrad_method='none', mesh_method='wd', eclipse_method='graham')", "Now we add compute options for the 'legacy' backend.", "b.add_compute('legacy', compute='phoebe1', irrad_method='none')", "And set the two RV datasets to use the correct methods (for both compute options)", "b.set_value_all('rv_method', dataset='rvdyn', value='dynamical')\n\nb.set_value_all('rv_method', dataset='rvnum', value='flux-weighted')", "Let's use the external atmospheres available for both phoebe1 and phoebe2", "b.set_value_all('atm', 
'extern_planckint')", "Let's make sure both 'phoebe1' and 'phoebe2wd' use the same value for gridsize", "b.set_value_all('gridsize', 30)", "Let's also disable other special effects such as heating, gravity, and light-time effects.", "b.set_value_all('ld_mode', 'manual')\nb.set_value_all('ld_func', 'logarithmic')\nb.set_value_all('ld_coeffs', [0.,0.])\n\nb.set_value_all('rv_grav', False)\n\nb.set_value_all('ltte', False)", "Finally, let's compute all of our models", "b.run_compute(compute='phoebe2marching', model='phoebe2marchingmodel')\n\nb.run_compute(compute='phoebe2wd', model='phoebe2wdmodel')\n\nb.run_compute(compute='phoebe1', model='phoebe1model')", "Plotting\nLight Curve", "colors = {'phoebe2marchingmodel': 'g', 'phoebe2wdmodel': 'b', 'phoebe1model': 'r'}\nafig, mplfig = b['lc01'].plot(c=colors, legend=True, show=True)", "Now let's plot the residuals between these models", "artist, = plt.plot(b.get_value('fluxes@lc01@phoebe2marchingmodel') - b.get_value('fluxes@lc01@phoebe1model'), 'g-')\nartist, = plt.plot(b.get_value('fluxes@lc01@phoebe2wdmodel') - b.get_value('fluxes@lc01@phoebe1model'), 'b-')\nartist = plt.axhline(0.0, linestyle='dashed', color='k')\nylim = plt.ylim(-0.003, 0.003)", "Dynamical RVs\nSince the dynamical RVs don't depend on the mesh, there should be no difference between the 'phoebe2marching' and 'phoebe2wd' synthetic models. 
Here we'll just choose one to plot.", "afig, mplfig = b.filter(dataset='rvdyn', model=['phoebe2wdmodel', 'phoebe1model']).plot(c=colors, legend=True, show=True)", "And also plot the residuals of both the primary and secondary RVs (notice the scale on the y-axis)", "artist, = plt.plot(b.get_value('rvs@rvdyn@primary@phoebe2wdmodel') - b.get_value('rvs@rvdyn@primary@phoebe1model'), color='b', ls=':')\nartist, = plt.plot(b.get_value('rvs@rvdyn@secondary@phoebe2wdmodel') - b.get_value('rvs@rvdyn@secondary@phoebe1model'), color='b', ls='-.')\nartist = plt.axhline(0.0, linestyle='dashed', color='k')\nylim = plt.ylim(-1.5e-12, 1.5e-12)", "Numerical (flux-weighted) RVs", "afig, mplfig = b.filter(dataset='rvnum').plot(c=colors, show=True)\n\nartist, = plt.plot(b.get_value('rvs@rvnum@primary@phoebe2marchingmodel', ) - b.get_value('rvs@rvnum@primary@phoebe1model'), color='g', ls=':')\nartist, = plt.plot(b.get_value('rvs@rvnum@secondary@phoebe2marchingmodel') - b.get_value('rvs@rvnum@secondary@phoebe1model'), color='g', ls='-.')\n\nartist, = plt.plot(b.get_value('rvs@rvnum@primary@phoebe2wdmodel', ) - b.get_value('rvs@rvnum@primary@phoebe1model'), color='b', ls=':')\nartist, = plt.plot(b.get_value('rvs@rvnum@secondary@phoebe2wdmodel') - b.get_value('rvs@rvnum@secondary@phoebe1model'), color='b', ls='-.')\n\nartist = plt.axhline(0.0, linestyle='dashed', color='k')\nylim = plt.ylim(-1e-2, 1e-2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mcamack/Jupyter-Notebooks
keras/keras202-VGG16FineTuning.ipynb
apache-2.0
[ "VGG16\nThis notebook will recreate the VGG16 model from FastAI Lesson 1 (wiki) and FastAI Lesson 2 (wiki)\nThe Oxford Visual Geometry Group created a 16 layer deep ConvNet which placed first in certain aspects of the 2014 Image-Net competition. Their NN was trained on 1000's of images from the image-net database for all sorts of objects. Instead of retraining the NN ourselves, it is possible to download the weights from their trained NN. By using a pretrained model, we can recreate their work and also adapt it to our own classification task.\nRepurposing the Pretrained VGG16 Model\nThe pretrained VGG16 model was trained using image-net data. This data is made up of thousands of categories of \"things\" which each have many framed, well-lit, and focused photos. Knowing these characteristics of the training images will help us understand how this model can and can't work for our dogs vs. cats task.\nThe image-net data is more specific than just dogs and cats; it has been trained on specific breeds of each. One hot encoding is used to label images. This is where the label is a vector of 0's of size equal to the number of categories, but has a 1 where the category is true. So for [dogs, cats] a label of [0, 1] would mean it is a cat.\nBy repurposing the image-net VGG16 to look for just cats and dogs, we are Finetuning the model. This is where we start with a model that already solved a similar problem. Many of the parameters should be the same, so we only select a subset of them to re-train. Finetuning will replace the 1000's of image-net categories with the 2 it found in our directory structure (dogs and cats). It does this by removing the last layer (with the keras .pop method) and then adding a new output layer with size 2. This will leave us with a pretrained VGG16 model specifically made for categorizing just cats and dogs.\nWhy do Finetuning instead of training our own network?\nImage-net NN has already learned a lot about what the world looks like. 
The first layer of a NN looks for basic shapes, patterns, or gradients ... which are known as gabor filters. These images come from this paper (Visualizing and Understanding Convolutional Networks):\n<img src=\"images/Layer1.png\" alt=\"Drawing\" style=\"width: 600px;\"/>\nThe second layer combines layer 1 filters to create newer, more complex filters. So it turns multiple line filters into corner filters, and combines lines into curved edges, for example. \n<img src=\"images/Layer2.png\" alt=\"Drawing\" style=\"width: 600px;\"/>\nFurther into the hidden layers of a NN, filters start to find more complex shapes, repeating geometric patterns, faces, etc.\n<img src=\"images/Layer3.png\" alt=\"Drawing\" style=\"width: 600px;\"/>\n<img src=\"images/Layer4-5.png\" alt=\"Drawing\" style=\"width: 600px;\"/>\nVGG16 has ... 16 ... layers, so there are tons of filters created at each layer. Finetuning keeps these lower level filters which have been created already and then combines them in a different way to address different inputs (i.e. cats and dogs instead of 1000's of categories). Neural networks pretrained on HUGE datasets have already found all of these lower level filters, so we don't need to spend weeks doing that part ourselves. Finetuning usually works best on the second to last layer, but it's also a good idea to try it at every layer.\nAdditional information on fine-tuning (aka transfer-learning) can be found on Stanford's CS231n website here.\nVGG Detailed Sizing\nThe memory requirements of running VGG16 can be roughly calculated, as was done in the Stanford CS231n CNN Course (cs231n.github.io/convolutional-networks/). At each layer, we can find the size of the memory required and weights. 
Notice that most of the memory (and compute time) is used in the first layers, while most of the parameters are in the last FC layers. Notice that the POOL layers reduce the spatial dimensions by 50% (don't affect depth) and do not introduce any new parameters.\n| Layer | Size/Memory | Weights |\n|:--- |:--- |:--- |\n| INPUT | 224x224x3 = 150K | 0 |\n| CONV3-64 | 224x224x64 = 3.2M | (3x3x3)x64 = 1,728 |\n| CONV3-64 | 224x224x64 = 3.2M | (3x3x64)x64 = 36,864 |\n| POOL2 | 112x112x64 = 800K | 0 |\n| CONV3-128 | 112x112x128 = 1.6M | (3x3x64)x128 = 73,728 |\n| CONV3-128 | 112x112x128 = 1.6M | (3x3x128)x128 = 147,456 |\n| POOL2 | 56x56x128 = 400K | 0 |\n| CONV3-256 | 56x56x256 = 800K | (3x3x128)x256 = 294,912 |\n| CONV3-256 | 56x56x256 = 800K | (3x3x256)x256 = 589,824 |\n| CONV3-256 | 56x56x256 = 800K | (3x3x256)x256 = 589,824 |\n| POOL2 | 28x28x256 = 200K | 0 |\n| CONV3-512 | 28x28x512 = 400K | (3x3x256)x512 = 1,179,648 |\n| CONV3-512 | 28x28x512 = 400K | (3x3x512)x512 = 2,359,296 |\n| CONV3-512 | 28x28x512 = 400K | (3x3x512)x512 = 2,359,296 |\n| POOL2 | 14x14x512 = 100K | 0 |\n| CONV3-512 | 14x14x512 = 100K | (3x3x512)x512 = 2,359,296 |\n| CONV3-512 | 14x14x512 = 100K | (3x3x512)x512 = 2,359,296 |\n| CONV3-512 | 14x14x512 = 100K | (3x3x512)x512 = 2,359,296 |\n| POOL2 | 7x7x512 = 25K | 0 |\n| FC | 1x1x4096 = 4K | 7x7x512x4096 = 102,760,448 |\n| FC | 1x1x4096 = 4K | 4096x4096 = 16,777,216 |\n| FC | 1x1x1000 = 1K | 4096x1000 = 4,096,000 |\nTOTAL MEMORY = (LayerSizes + 3*Weights) * 4 Bytes * 2 (fwd and bkwd passes) * images/batch", "#GBs required for 16 image mini-batch\nsize = ((15184000 + 3*4096000) * 4 * 2 * 16) / (1024**3)\nprint(str(round(size,2)) + 'GB')", "This makes sense when tested with my 6GB GTX980ti. A mini-batch size of 32 ran out of VRAM. 
The GPU has to run other stuff too and has a normal load of around 0.7GB.\nCustom Written VGG16 Model\nThis section will go step by step through the process of recreating the VGG16 model from scratch, using python and Keras.\nPrepare the Workspace\nHere we will set matplotlib plots to load directly in this notebook, load all of the python packages needed, check where our data directory is saved, as well as save the pre-trained VGG16 model from the web url.", "%matplotlib inline\n\nimport json\nfrom matplotlib import pyplot as plt\n\nimport numpy as np\nfrom numpy.random import random, permutation\nfrom scipy import misc, ndimage\nfrom scipy.ndimage.interpolation import zoom\n\nimport keras\nfrom keras import backend as K\nfrom keras.utils.data_utils import get_file\nfrom keras.models import Sequential, Model\nfrom keras.layers.core import Flatten, Dense, Dropout, Lambda\nfrom keras.layers import Input\nfrom keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D\nfrom keras.optimizers import SGD, RMSprop\nfrom keras.preprocessing import image\n\ndata_path = \"../../fastAI/deeplearning1/nbs/data/dogscats/\"\n!ls $data_path", "The Keras 'get_file' function will download a file from a URL if it's not already in the cache. The !ls command shows that the file is in the .keras/models/ directory which we specified as our cache location:", "FILE_URL = \"http://files.fast.ai/models/\";\nFILE_CLASS = \"imagenet_class_index.json\";\n\nfpath = get_file(FILE_CLASS, FILE_URL+FILE_CLASS, cache_subdir='models')\n\n!ls ~/.keras/models", "The class file itself is a dictionary where keys are strings from 0 to 1000 and the values are names of everyday objects. 
Let's open the file using 'json.load' and convert it to a 'classes' array:", "with open(fpath) as f:\n class_dict = json.load(f)\n \nclasses = [class_dict[str(i)][1] for i in range(len(class_dict))]\n\nprint(class_dict['809'])\nprint(class_dict['809'][1])", "Check how many objects are in the 'classes' array and then print the first 5:", "print(len(classes))\nprint(classes[:5])", "Build the Model\nWe need to define the NN model architecture and then load the pre-trained weights (that we downloaded) into it. The VGG model has 1 type of convolutional block and 1 type of fully-connected block. We'll create functions to define each of these blocks and then call them later to actually instantiate the VGG model:", "def ConvBlock(layers, model, filters):\n for i in range(layers):\n model.add(ZeroPadding2D((1,1)))\n model.add(Convolution2D(filters, 3, 3, activation='relu'))\n model.add(MaxPooling2D((2,2), strides=(2,2)))\n\ndef FullyConnectedBlock(model):\n model.add(Dense(4096, activation='relu'))\n model.add(Dropout(0.5))", "Preprocessing\nThe original VGG model has a mean of zero for each channel, obtained by subtracting the average of each RGB channel. It also expects data in the BGR order, so we need to do some preprocessing:", "vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))\n\ndef vgg_preprocess(x):\n x = x - vgg_mean #subtract mean\n return x[:, ::-1] #RGB -> BGR", "Instantiate the Model\nThe convolutional layers help find patterns in the images, while the fully connected (Dense) layers combine patterns across an image. The following function calls the other functions written above. 
It will instantiate a 16 layer VGG model:", "def VGG16():\n model = Sequential()\n model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))\n \n ConvBlock(2, model, 64)\n ConvBlock(2, model, 128)\n ConvBlock(3, model, 256)\n ConvBlock(3, model, 512)\n ConvBlock(3, model, 512)\n \n model.add(Flatten())\n FullyConnectedBlock(model)\n FullyConnectedBlock(model)\n model.add(Dense(1000, activation='softmax'))\n \n return model\n\nmodel = VGG16()", "Load Pretrained Weights\nNow that a VGG16 model has been created, we can load it up with the pretrained weights we downloaded earlier. This step prevents us from having to train the NN on the 1000's of image-net samples:", "fweights = get_file('vgg16.h5', FILE_URL+'vgg16.h5', cache_subdir='models')\nmodel.load_weights(fweights)", "Grab Batches of Images\nNow the NN is setup to use, so we can grab batches of images and start using the NN to predict their output classes:", "batch_size = 4", "The following helper function will use the Keras image.ImageDataGenerator object with its flow_from_directory() method to start pulling batches of images from the directory we tell it to. 
It returns an Iterator which we can call with next to get the next batch_size amount of image/label pairs:", "def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True, batch_size=batch_size, class_mode='categorical'):\n return gen.flow_from_directory(data_path+dirname, target_size=(224,224), class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)\n\nbatches = get_batches('sample/train', batch_size=batch_size)\nval_batches = get_batches('sample/valid', batch_size=batch_size)\n\ndef show_plots(ims, figsize=(12,6), rows=1, interp=False, titles=None):\n if type(ims[0]) is np.ndarray:\n ims = np.array(ims).astype(np.uint8)\n if (ims.shape[-1] != 3):\n ims = ims.transpose((0,2,3,1))\n f = plt.figure(figsize=figsize)\n cols = len(ims)//rows if len(ims) % 2 == 0 else len(ims)//rows + 1\n \n for i in range(len(ims)):\n sp = f.add_subplot(rows, cols, i+1)\n sp.axis('Off')\n if titles is not None:\n sp.set_title(titles[i], fontsize=16)\n plt.imshow(ims[i], interpolation=None if interp else 'none')", "Checking the shape of imgs, we can see that this array holds 4 images, each with 3 channels (BGR) and are of size 224x224 pixels", "imgs,labels = next(batches)\nprint(imgs.shape)\nprint(labels[0])\n\nshow_plots(imgs, titles=labels)", "Predict\nNow we will call the predict method on our Sequential Keras model. This returns a vector of size 1000 with probabilities that each image belongs to one of the 1000 image-net categories. The function below is written to find the highest probability for each image in our batch:", "def pred_batch(imgs):\n preds = model.predict(imgs)\n idxs = np.argmax(preds, axis=1)\n\n print('Shape: {}'.format(preds.shape))\n print('Predictions prob/class: ')\n \n for i in range(len(idxs)):\n idx = idxs[i]\n print (' {:.4f}/{}'.format(preds[i, idx], classes[idx])) \n\npred_batch(imgs)\n\nmodel.summary()", "Fine Tuning\nNew Output Layer\nRetrain the Last Layer, keeping everything else the same. 
Let's replace the final layer with a new 2-node layer (softmax activation). Now the NN will use everything it learned on the whole dataset to only classify things into 1 of 2 categories.", "model.pop()\nfor layer in model.layers: layer.trainable=False\nmodel.add(Dense(2, activation='softmax'))", "Freezing Layers\nSetting all the layers in the model to \"trainable=False\" means that their weights will not be updated during training. If all of them were untrainable, then training wouldn't actually do anything! Adding a new Dense layer after these frozen layers will make it trainable. Generally, if we are adding a completely new layer and initializing it from scratch, we would only want to train that layer for at least 1-2 epochs. This allows the final layer to have weights which are closer to what they should be (at least compared to random init values). Once they are set, unfreeze earlier Dense layers so their weights can be updated during training. Because the final layer is now \"closer\" to the previous ones, the training process won't have vastly differing weights which would have made drastic updates to the previous layers. Learning will proceed much more smoothly.\nPre-calculate CONV Layer Outputs\nIt is usually best to freeze the convolutional layers and not retrain them. This means that the weights of the filters in the CONV layers do not change. If the original weights were from a huge dataset like ImageNet, this is probably a good thing because that dataset is so vast that it already contains small, medium, and large complexity filters (like edges, corners, and objects) that can be reused on our new dataset. Our dataset is unlikely to contain any edges, colors, etc. which have not already been seen in the ImageNet data. Because we are not going to update the CONV layer weights (filters) it is best to pre-calculate the CONV layer outputs. \nWay back at the beginning of time, the NN was created from scratch. 
All of the weights at each layer were randomly initialized and it was trained on a dataset. All the data was passed through once (an epoch) and the weights were updated. Then it was passed over many, many more times (10's or 100's of epochs) to really train the NN and lock in the ideal weights. Therefore, the weights of the CONV layers start to represent the training set. When we start adding new Dense layers at the output, we don't need to run through all of the data again; the CONV layers already represent that data as seen for XX epochs. We can just compute the output from all the CONV layers and treat that as an input to our new Dense layers ... this will save a lot of time training. Now we train the NN just using the CONV outputs fed into our new Dense layers; once it is trained, we can load those updated last layer weights onto the original entire CNN and have a completely updated model." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
osemer01/us-domestic-flight-performance
flights.ipynb
cc0-1.0
[ "On Time Flight Performance of Domestic Flights in December 2014\nAuthor Information:\nOguz Semerci<br>\noguz.semerci@gmail.com<br>\nIntroduction\nIn this report we analyze on-time performance data of domestic flights\nin the USA for the month of December, 2014. Delays in airline traffic\ncan be attributed to many factors such as weather, security,\nscheduling inefficiencies, imbalance between demand and capacity at\nthe airports as well as propagation of late arrivals and departures\nbetween connecting flights. Our goal is to reveal patterns, or lack\nthereof, of flight delays due to airport characteristics, carrier and\ndate and time of travel. More involved modelling of different possible\neffects mentioned above is out of the scope of this report.\nThere are three sections in the report. Since iPython notebook is\nchosen as the format, source codes implementing the described\ncomputations are also presented in each section. Section I describes\nthe steps for loading, merging and cleaning the data sets in hand. An\nexploratory analysis of selected attributes and their relation to\non-time performance was given in Section II. Section III describes a logistic regression model for the estimation of delay probability. Finally, Section IV summarizes the report and provides some future directions.\nI. 
Data Preparation\nLet us first import the modules that will be used:", "from mpl_toolkits.basemap import Basemap\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport csv\nimport xlrd\n%matplotlib inline", "Load Additional Data Sets", "book = xlrd.open_workbook('airports_new.xlt')\nsheet = book.sheet_by_index(0)\nairport_data = [[sheet.cell_value(i,j) for j in range(sheet.ncols)] for i in range(sheet.nrows)]\n#convert to dictionary for easy look-up\nairport_dict = {}\nfor j in range(len(airport_data[0])):\n key = airport_data[0][j]\n airport_dict[key] = [airport_data[i][j] for i in range(1,len(airport_data))]\n\nbook = xlrd.open_workbook('carriers.xls')\nsheet = book.sheet_by_index(0)\n#every other row in 'carriers.xls' sheet is empty\ncarrier_data = [[sheet.cell_value(i,j) for j in range(sheet.ncols)]\n for i in range(0,sheet.nrows,2)]\n#convert to dictionary for easy look-up\ncarrier_dict = {}\nfor j in range(len(carrier_data[0])):\n key = carrier_data[0][j]\n carrier_dict[key] = [carrier_data[i][j] for i in range(1,len(carrier_data))]\n\nprint('Fields in the additional carrier data set:')\nprint('-----------------------------------------')\nfor key in carrier_dict.keys():\n print(key)\nprint('')\nprint('Fields in the additional airport data set:')\nprint('-----------------------------------------')\nfor key in airport_dict.keys():\n print(key)", "Load On-Time Performance Data\nWe downloaded the on-time performance data from the Bureau of Transportation Statistics for December, 2014.", "delay_data = []\nf = open('532747144_T_ONTIME.csv', 'r')\nreader = csv.reader(f)\ndelay_data_header = next(reader,None)\nfor row in reader:\n delay_data.append(row)\nf.close()", "List of the fields in the delay_data array for reference:", "for i,s in enumerate(delay_data_header):\n print(str(i) + ': ' + s)", "Last column is empty. 
Let's remove it from our data.", "delay_data = [d[:-1] for d in delay_data]\ndelay_data_header = delay_data_header[:-1]", "Remove Canceled Flights and Flights with Missing Information\nWe are concerned with conducted flights. Therefore let us remove the canceled flights from the data.", "#remove cancelled flights\ndelay_data = [d for d in delay_data if d[16] != '1.00']", "Now, a quick glance at some of the rows reveals that some flights have missing information. We remove them from the data for the sake of completeness. Note that rows 20:24 are empty when arrival delay <= 0.", "#determine the rows with missing data:\nrows_with_missing_data = []\nfor i in range(len(delay_data)):\n for j in range(20):\n if len(delay_data[i][j]) == 0:\n rows_with_missing_data.append(i)\n break", "For example, observe that the flight below is missing arrival delay and air time information. This is possibly because that particular flight was diverted (hopefully).", "i = rows_with_missing_data[0]\nprint('Example row in the data with missing entries:\\n')\nfor j in range(len(delay_data[i])):\n print(delay_data_header[j] + ': ' + str(delay_data[i][j]))\n\n#remove rows with missing entries:\ndelay_data = [delay_data[i] for i in range(len(delay_data)) if i not in rows_with_missing_data]", "Now let's convert the fields with numerical values to float. Also note that the delay type fields (rows[20:24]) are empty if arrival delay <= 0. We will fill those empty cells with zeros.", "float_index = set([11,12,13,15,17,18,19,20,21,22,23,24])\nfor i in range(len(delay_data)):\n for j in float_index:\n if len(delay_data[i][j]) > 0:\n delay_data[i][j] = float(delay_data[i][j])\n else:\n #delay type fields\n delay_data[i][j] = 0.0\n \nint_index = set([1,2])\nfor i in range(len(delay_data)):\n for j in int_index:\n delay_data[i][j] = int(delay_data[i][j])", "Keep data only from the busiest airports\nNow, we assume that the dynamics of busy airports might be significantly different than those of less busy ones. 
We would like to discard flights to and from smaller airports so that the delay time dynamics are somewhat similar for each data point. For this aim we sorted all the airports with respect to the total number of incoming and outgoing flights in December 2014. We set the number of airports to be investigated to 50 via visual inspection of the number of flights at the busiest airports:", "#get the list of unique carriers:\ncarrier_ID = set()\nairport_ID = set()\nfor d in delay_data:\n carrier_ID.add(d[3])\n airport_ID.add(d[4])\n airport_ID.add(d[7])\n \n#count total arrivals and departures from each airport\nflight_count_dict = {iata: 0 for iata in airport_ID}\nfor d in delay_data:\n flight_count_dict[d[4]] += 1\n flight_count_dict[d[7]] += 1\n\npairs = []\nfor key, value in flight_count_dict.items():\n pairs.append((key,value))\n\n#sort airports according to flight count\npairs.sort(key = lambda x: x[1], reverse = True)", "Decide cut-off point via visual inspections", "c = [c for a,c in pairs]\na = [a for a,c in pairs]\nplt.figure(figsize = (20,4))\nN = 60\nplt.plot(c[:N])\nplt.xticks(range(N), a[:N], fontsize = 8)\nplt.ylabel('Total Number of Flights')\nplt.xlabel('Airport IATA')\nplt.grid()\nplt.axvline(49, color = 'r')\nplt.show()\n\nprint('\\n'+'Use data from 50 most busy airports according to number of total incoming and outgoing domestic flights')\n", "Remove data from non-busy airports", "airports_to_keep = [a for a,c in pairs[:52]]\ndelay_data2 = [d for d in delay_data if (d[4] in airports_to_keep and d[7] in airports_to_keep)]\nprint('Size of the dataset is reduced from ' + str(len(delay_data)) + ' to ' + str(len(delay_data2)))\n#let's delete the large dataset\ndelay_data = delay_data2", "Now let's merge information of carriers and airports into two dictionaries 'carrier_info' and 'airport_info' for easy access during analysis.\nGet Carrier Information", "#find out carrier names from carrier_data\ncarrier_info = {}\nfor code in carrier_ID:\n k = 
carrier_dict['Code'].index(code)\n carrier_info[code] = carrier_dict['Description'][k] ", "Get Airport Information", "airport_info = {}\nfor iata in airports_to_keep:\n k = airport_dict['iata'].index(iata)\n airport_info[iata] = {key: airport_dict[key][k] for key in airport_dict.keys()}", "Removal of outliers with very large delay times\nThe example above also reveals that some departure delays are ridiculously high. We can consider them outliers, as they are most probably caused by some irrelevant incident beyond the scope of this investigation. Let's plot the histogram for departure delays and determine a cut-off point for departure time for outliers. Note that early arrivals and departures are given with negative values. Alternatively, we could take the 95th percentile. Let's investigate:", "dep_delay_time_vector = [d[11] for d in delay_data]\narr_delay_time_vector = [d[15] for d in delay_data]\nprint('Departure Delay Stats in minutes:')\nprint('--------------------------------')\nprint('95th percentile: ' + str(np.percentile(dep_delay_time_vector, 95)))\nprint('75th percentile: ' + str(np.percentile(dep_delay_time_vector, 75)))\nprint('5th percentile : ' + str(np.percentile(dep_delay_time_vector, 5))) \nprint('median : ' + str(np.median(dep_delay_time_vector)))\nprint('mean : ' + str(np.mean(dep_delay_time_vector)))\nprint('std : ' + str(np.std(dep_delay_time_vector)))\nprint('')\nprint('Arrival Delay Stats in minutes:')\nprint('--------------------------------')\nprint('95th percentile: ' + str(np.percentile(arr_delay_time_vector, 95)))\nprint('75th percentile: ' + str(np.percentile(arr_delay_time_vector, 75)))\nprint('5th percentile : ' + str(np.percentile(arr_delay_time_vector, 5))) \nprint('median : ' + str(np.median(arr_delay_time_vector)))\nprint('mean : ' + str(np.mean(arr_delay_time_vector)))\nprint('std : ' + str(np.std(arr_delay_time_vector)))", "Let's plot histograms for departure and arrival delays in December 2015, as well as a scatter plot of departure and 
arrival delays. Note that we restrict the range of data points to the [5th, 95th] percentile range for the arrival and departure delay histograms.", "arr_5th = np.percentile(arr_delay_time_vector, 5)\narr_95th = np.percentile(arr_delay_time_vector, 95)\ndep_5th = np.percentile(dep_delay_time_vector, 5)\ndep_95th = np.percentile(dep_delay_time_vector, 95)\n\nfig = plt.figure(figsize = (16,3))\nax1 = plt.subplot(141)\nax2 = plt.subplot(142)\nax3 = plt.subplot(143)\nax4 = plt.subplot(144)\n\n_,_,_ = ax1.hist(dep_delay_time_vector, bins = 30, range = [dep_5th, dep_95th])\nax1.set_xlabel('delay [min]')\nax1.set_ylabel('number of flights')\nax1.set_title('Departure Delay Histogram')\n\n_,_,_ = ax2.hist(arr_delay_time_vector, bins = 30, range = [arr_5th, arr_95th])\nax2.set_xlabel('delay [min]')\nax2.set_title('Arrival Delay Histogram')\nax2.set_ylabel('number of flights')\n\n_,_,_ = ax3.hist([a-b for a,b in zip(arr_delay_time_vector,dep_delay_time_vector)], bins = 30)\nax3.set_xlabel('delay [min]')\nax3.set_title('Arrival-Departure Delay Histogram')\nax3.set_ylabel('number of flights')\n\n\ncorr_coef = np.corrcoef(dep_delay_time_vector,arr_delay_time_vector)[0,1]\nax4.scatter(dep_delay_time_vector,arr_delay_time_vector)\nax4.set_xlim([-50,1500])\nax4.set_ylim([-50,1500])\nax4.set_title('correlation coefficient: %2.2f' %(corr_coef) )\nax4.set_xlabel('departure delay [min]')\nax4.set_ylabel('arrival delay [min]')\n\nplt.tight_layout()\nplt.show()\n", "As expected, departure and arrival delays are highly correlated. Let us first remove outliers in terms of departure delay. The 95th percentile gives us a departure delay of 69 minutes, which is not too drastic. Therefore, we remove flights with departure delay larger than 69 minutes. Note the very large departure delay times in the scatter plot. 
We reason that those extreme values are governed by unusual events such as storms or erupting volcanoes, and they need to be removed from our data.", "N = len(dep_delay_time_vector)\ndelay_data = [delay_data[i] for i in range(N) if dep_delay_time_vector[i] < 69]", "Next, let us see if we have outliers in the arrival delays after the removal of departure delay outliers.", "dep_delay_time_vector = [d[11] for d in delay_data]\narr_delay_time_vector = [d[15] for d in delay_data]\nprint('Departure Delay Stats in minutes:')\nprint('--------------------------------')\nprint('95th percentile: ' + str(np.percentile(dep_delay_time_vector, 95)))\nprint('75th percentile: ' + str(np.percentile(dep_delay_time_vector, 75)))\nprint('5th percentile : ' + str(np.percentile(dep_delay_time_vector, 5))) \nprint('median : ' + str(np.median(dep_delay_time_vector)))\nprint('mean : ' + str(np.mean(dep_delay_time_vector)))\nprint('std : ' + str(np.std(dep_delay_time_vector)))\nprint('')\nprint('Arrival Delay Stats in minutes:')\nprint('--------------------------------')\nprint('95th percentile: ' + str(np.percentile(arr_delay_time_vector, 95)))\nprint('75th percentile: ' + str(np.percentile(arr_delay_time_vector, 75)))\nprint('5th percentile : ' + str(np.percentile(arr_delay_time_vector, 5))) \nprint('median : ' + str(np.median(arr_delay_time_vector)))\nprint('mean : ' + str(np.mean(arr_delay_time_vector)))\nprint('std : ' + str(np.std(arr_delay_time_vector)))\n\narr_5th = np.percentile(arr_delay_time_vector, 5)\narr_95th = np.percentile(arr_delay_time_vector, 95)\ndep_5th = np.percentile(dep_delay_time_vector, 5)\ndep_95th = np.percentile(dep_delay_time_vector, 95)\n\nfig = plt.figure(figsize = (16,3))\nax1 = plt.subplot(141)\nax2 = plt.subplot(142)\nax3 = plt.subplot(143)\nax4 = plt.subplot(144)\n\nax1.boxplot(arr_delay_time_vector)\nax1.set_ylabel('arrival delay [min]')\nax1.set_title('Arrival Delay Box Plot')\n\n_,_,_ = ax2.hist(arr_delay_time_vector, bins = 30, range = 
[arr_5th, arr_95th])\nax2.set_xlabel('delay [min]')\nax2.set_title('Arrival Delay Histogram')\nax2.set_ylabel('number of flights')\n\n_,_,_ = ax3.hist([a-b for a,b in zip(arr_delay_time_vector,dep_delay_time_vector)], bins = 30)\nax3.set_xlabel('delay [min]')\nax3.set_title('Arrival-Departure Delay Histogram')\nax3.set_ylabel('number of flights')\n\n\ncorr_coef = np.corrcoef(dep_delay_time_vector,arr_delay_time_vector)[0,1]\nax4.scatter(dep_delay_time_vector,arr_delay_time_vector)\nax4.set_xlim([-20,100])\nax4.set_ylim([-50,300])\nax4.set_title('correlation coefficient: %2.2f' %(corr_coef) )\nax4.set_xlabel('departure delay [min]')\nax4.set_ylabel('arrival delay [min]')\n\nplt.tight_layout()\nplt.show()", "Notice that the correlation between departure delay and arrival delay is reduced to 0.75. The distribution of the difference of arrival and departure delays has a peaked shape and most of the points are in the [-50,50] minutes range. The scatter plot also reveals that points with arrival delay greater than ~125 minutes are somewhat outside of the big cluster of points. With these observations we assume arrival delays greater than 125 minutes are outliers. It would have been interesting to investigate the causes of these big delay times. 
However, we are concerned with common patterns in the on-time performance of airline traffic.", "N = len(arr_delay_time_vector)\ndelay_data = [delay_data[i] for i in range(N) if arr_delay_time_vector[i] < 125]", "Finally, let's convert delay_data to a set of dictionaries for easy access", "delay_data_dict = {}\nfor j in range(len(delay_data_header)):\n key = delay_data_header[j]\n delay_data_dict[key] = [delay_data[i][j] for i in range(len(delay_data))]\n\n#let's approximate arrival and departure times by only their hour\ndelay_data_dict['ARR_TIME'] = [round( float(v)*1e-2 ) for v in delay_data_dict['ARR_TIME']]\ndelay_data_dict['DEP_TIME'] = [round( float(v)*1e-2 ) for v in delay_data_dict['DEP_TIME']]", "Let's summarize the available data\nThe dictionary 'airport_info' is indexed by the 'iata' code. We remind the reader that only the busiest 52 US airports were kept in the data set. Each airport has further information on its location. Let's look at Boston's Logan Airport as an example:", "print(\"Example: Info on Logan Airport: \\n\")\nfor key,value in airport_info['BOS'].items():\n print(key + ': ' + str(value))", "The dictionary 'carrier_info' pairs carrier codes with airline names:", "for key,value in carrier_info.items():\n print(key + ': ' + value)\n\n#we will not delve into data before 07. Let's make US: US Airways\ncarrier_info['US'] = 'US Airways Inc.'", "The main data, 'delay_data_dict', is also in a dictionary format, where keys are the fields and each field has all the samples for that field (feature) in the data set. Here are the fields one more time for reference. 
Note that 'UNIQUE_CARRIER' corresponds to the carrier codes in the carrier_info dictionary, whereas the DEST and ORIGIN fields are the 'iata' IDs in the airport_info dictionary.", "for key in delay_data_dict.keys():\n print(key) ", "By the way, let's make sure that delay_data_dict does not have flight information on carriers that are not known to us:", "s1 = set(delay_data_dict['UNIQUE_CARRIER'])\ns2 = set(carrier_info.keys())\nprint(list(s1-s2))\nprint(list(s2-s1))", "II. Exploratory Analysis to Reveal Features That Affect On-time Performance\nLet's look at the distribution of delay causes among all delays in 12/2015:", "delays = [sum(delay_data_dict['CARRIER_DELAY']),\n sum(delay_data_dict['WEATHER_DELAY']),\n sum(delay_data_dict['NAS_DELAY']),\n sum(delay_data_dict['SECURITY_DELAY']),\n sum(delay_data_dict['LATE_AIRCRAFT_DELAY'])]\ntotal = sum(delays)\ndelays = [100*d/total for d in delays]\nprint('Delay Cause Percentages:')\nprint('-----------------------')\nprint('Carrier delay : ' + str(delays[0]))\nprint('Weather delay : ' + str(delays[1]))\nprint('NAS delay : ' + str(delays[2]))\nprint('Security delay : ' + str(delays[3]))\nprint('Late Aircraft : ' + str(delays[4]))", "One can say that most of the delays are caused by 'relative congestion' at the airports, as more than 98% of the delays are caused by carrier, NAS and late aircraft related reasons. Weather also seems to affect on-time performance. Please follow this link for definitions of types of delays.\nOn Time Performance Analysis of Airports and Carriers\nThe airline traffic network is extremely complex, with interactions of many variables and propagation of delays during the day. Therefore we need to be careful in our definition of late flights. 
We already established the fact that departure delays are highly correlated with arrival delays.\nWe will use the following definitions for delay at airports:\n\nAt the origin, a departure delay larger than 15 minutes is counted as a late flight\nAt the destination, if the difference between arrival delay and departure delay is larger than 15 minutes, that flight is considered late. Note that this definition regarding the destination assumes that there are no causes of delay when the plane is en route in the air.\n\nFor carriers, we consider only late departures.", "N = len(delay_data_dict['ORIGIN']) # N: sample size\ncarrier_performance = {}\nairport_performance = {}\n\n#airport on time performance\nfor airport in airport_info.keys():\n #departures:\n ind = [i for i in range(N) if delay_data_dict['ORIGIN'][i] == airport]\n total_flights = len(ind)\n on_time_flights = sum( [delay_data_dict['DEP_DELAY'][i] <= 15 for i in ind] )\n #arrivals:\n ind = [i for i in range(N) if delay_data_dict['DEST'][i] == airport]\n total_flights += len(ind)\n on_time_flights += sum( [delay_data_dict['ARR_DELAY'][i] - delay_data_dict['DEP_DELAY'][i] <= 15 for i in ind] )\n\n if total_flights > 0:\n airport_performance[airport] = {'total_flights': total_flights,\n 'on_time_flights': on_time_flights,\n 'on_time_ratio': on_time_flights/total_flights} \n\n#carrier on time performance \nfor carrier in carrier_info.keys():\n #departures:\n ind = [i for i in range(N) if delay_data_dict['UNIQUE_CARRIER'][i] == carrier]\n total_flights = len(ind)\n on_time_flights = sum( [delay_data_dict['DEP_DELAY'][i] <= 15 for i in ind] )\n \n if total_flights > 0:\n carrier_performance[carrier] = {'total_flights': total_flights,\n 'on_time_flights': on_time_flights,\n 'on_time_ratio': on_time_flights/total_flights} \n ", "Overall on-time performance of carriers.", "name = []\ncode = []\non_time = []\nflights = []\nfor key in carrier_performance.keys():\n code.append(key)\n name.append(carrier_info[key])\n 
on_time.append(carrier_performance[key]['on_time_ratio'])\n flights.append(carrier_performance[key]['total_flights'])\n\nname, code, on_time, flights = zip( *sorted( zip(name, code, on_time, flights), key = lambda x: x[3], reverse = True ) )\n\nfig = plt.figure(figsize = (15,3))\nwidth = .6\nax1 = plt.subplot(121)\nax1.bar(range(len(on_time)), [1- v for v in on_time], width = width)\nax1.set_xticks(np.arange(len(on_time)) + width/2)\nax1.set_xticklabels(name, rotation = 90)\nax1.set_title('On-time Performance of Carriers in 12/2015')\nax1.set_ylabel('delay ratio')\n\nax2 = plt.subplot(122)\nax2.bar(range(len(on_time)), flights, width = width)\nax2.set_xticks(np.arange(len(on_time)) + width/2)\nax2.set_xticklabels(name, rotation = 90)\nax2.set_ylabel('total #of flights')\nax2.set_title('#of Flights in 12/2015')\nplt.show()\n\nfig = plt.figure(figsize=(3,3))\nplt.scatter([1- v for v in on_time], flights)\n#plt.xticks([0.14, 0.16, 0.20, 0.26])\nplt.xlabel('delay ratio')\nplt.ylabel('total #of flights')\nplt.grid()\nplt.show()", "The upper-left plot shows that overall on-time performance varies quite a bit from carrier to carrier, whereas no correlation between a carrier's flight volume and its on-time performance is observed. 
We decide to divide the carriers into performance categories according to their overall delay ratios as follows:\n\nno_unless_its_really_cheap : {delay ratio greater than 0.20}.\nnot_bad: {delay ratio greater than 0.15 and smaller than or equal to 0.20}.\nway_to_go: {delay ratio smaller than or equal to 0.15}.", "#find the airlines within each category:\nno_unless_its_really_cheap = []\nnot_bad = []\nway_to_go = []\n\nfor c,v in zip(code, on_time):\n r = 1-v\n if r > 0.20:\n no_unless_its_really_cheap.append(c)\n elif r <= 0.15:\n way_to_go.append(c)\n else:\n not_bad.append(c)\n\nprint('way_to_go carriers:')\nprint('------------------')\nfor c in way_to_go:\n print(carrier_info[c])", "Overall on-time performance of airports.\nLet's visualize airport traffic and on-time performance of all airports on a map of the USA.", "lat = []\nlon = []\nname = []\non_time = []\nflights = []\nfor key in airport_performance.keys():\n name.append(airport_info[key]['airport'])\n lat.append(airport_info[key]['lat'])\n lon.append(airport_info[key]['long'])\n on_time.append(airport_performance[key]['on_time_ratio'])\n flights.append(airport_performance[key]['total_flights'])\n\nfig = plt.figure(figsize=[12,10])\nm = Basemap(llcrnrlon=-119,llcrnrlat=22,urcrnrlon=-64,urcrnrlat=49,\n projection='lcc',lat_1=33,lat_2=45,lon_0=-95)\nm.drawcoastlines(linewidth=1)\nm.fillcontinents(color = 'green', lake_color = 'blue', alpha = 0.2)\nm.drawcountries(linewidth=1)\nx,y = m(lon, lat)\n\nim = m.scatter(x,y, marker = 'o', s = np.array(flights)/10, c = on_time,\n cmap = 'autumn')\ncb = m.colorbar(im,'bottom')\ncb.set_label('on time percentage', fontsize = '14')\nplt.show()", "In the map above, airport locations are shown with circles color coded according to on-time performance. The area of each circle is proportional to the total number of flights at that airport. As in the carrier case, we observe no immediate relationship between flight volume and on-time percentage. 
One interesting question is whether there is a relationship between delay ratio and the closeness of an airport to either of the coasts (east or west). Since longitude lines are nearly parallel to the east-west alignment of the map of the US, we can measure closeness of an airport with its distance to the middle of the map in terms of longitude. The scatter plot and the plot of delay ratio as a function of coastal distance below investigate this possibility.", "middle_of_map = (min(lon)+max(lon))/2.0\ndistance_from_coasts = abs(np.array(lon)-np.array(middle_of_map))\n\nfig = plt.figure(figsize = (14,5))\nax1 = plt.subplot(121)\nim = ax1.scatter(flights, [1-v for v in on_time], marker = 'o', s = np.array(flights)/100, c = distance_from_coasts)\n#cbar3 = plt.colorbar(im3, cax=cax3, ticks=MultipleLocator(0.2), format=\"%.2f\")\ncb = plt.colorbar(im)\ncb.set_label('distance from coast [longitude]', fontsize = '14')\nax1.set_xlabel('number of flights', fontsize = '14')\nax1.set_ylabel('delay ratio', fontsize = '14')\n\nx,y = zip(*sorted(zip(distance_from_coasts, [1- v for v in on_time]), key = lambda x: x[0]))\nfit = np.polyfit(x,y,1)\nfit_fn = np.poly1d(fit)\n\nax2 = plt.subplot(122)\nax2.plot(x, fit_fn(x), '--k', label = 'linear fit')\nax2.plot(x,y,'o-', label = 'data')\nax2.legend()\nax2.set_xlabel('distance from coast [longitude]', fontsize = '14')\nax2.set_ylabel('delay ratio', fontsize = '14')\nplt.show()", "Observing the plots above, we can say that coastal distance and delay ratio are negatively correlated. 
Although longitude is a bit crude and a more precise computation of coastal distance is possible, we chose to use it as a continuous variable (predictor) in our model.\nFinally, in this section we list the ten busiest airports in the US and their on-time performance.", "name, on_time, flights = zip( *sorted( zip(name, on_time, flights), key = lambda x: x[2], reverse = True ) )\n\nfig = plt.figure(figsize = (10,3))\nwidth = .6\nax1 = plt.subplot(121)\nax1.bar(range(10), [1-v for v in on_time[:10]], width = width)\nax1.set_xticks(np.arange(10) + width/2)\nax1.set_xticklabels(name[:10], rotation = 90)\nax1.set_title('On-time Performance of Airports in 12/2015')\nax1.set_ylabel('delay ratio')\n\nax2 = plt.subplot(122)\nax2.bar(range(10), flights[:10], width = width)\nax2.set_xticks(np.arange(10) + width/2)\nax2.set_xticklabels(name[:10], rotation = 90)\nax2.set_title('#of Flights in 12/2015')\nplt.show()", "Analysis of on-time performance in terms of flight date and time\nWhen considering delays for each trip, what passengers are really concerned about is the arrival delay. 
Also, considering the correlation between departure and arrival delays, as well as the possible accumulation of delays, we only count arrival delays when analyzing daily trends.", "total_flights_month = [0]*32\non_time_flights_month = [0]*32\navg_delay_month = [0]*32\ntotal_flights_day = [0]*8\non_time_flights_day = [0]*8\navg_delay_day = [0]*8\ntotal_flights_time = [0]*25\non_time_flights_time = [0]*25\navg_delay_time = [0]*25\n\nN = len(delay_data_dict['ARR_DELAY']) #sample size\nday_dict = {1:'mon',2:'tue',3:'wed',4:'thu',5:'fri',6:'sat',7:'sun'}\ndays = ['']*32\n\nfor i in range(N):\n j = delay_data_dict['DAY_OF_MONTH'][i]\n day = delay_data_dict['DAY_OF_WEEK'][i]\n t = delay_data_dict['ARR_TIME'][i]\n days[j] = day_dict[day] # keep list of days for indexing purposes \n delay = delay_data_dict['ARR_DELAY'][i]\n \n total_flights_month[j] += 1\n total_flights_day[day] += 1\n total_flights_time[t] += 1\n if delay <= 15:\n on_time_flights_month[j] += 1\n on_time_flights_day[day] += 1\n on_time_flights_time[t] += 1\n avg_delay_month[j] += delay\n avg_delay_day[day] += delay\n avg_delay_time[t] += delay\navg_delay_time[24] += avg_delay_time[0]\n \navg_delay_month = np.array(avg_delay_month[1:]) / np.array(total_flights_month[1:])\navg_delay_day = np.array(avg_delay_day[1:]) / np.array(total_flights_day[1:])\navg_delay_time = np.array(avg_delay_time[1:]) / np.array(total_flights_time[1:])\n\n\ndelay_ratio_month = 1.0 - np.array(on_time_flights_month[1:]) / np.array(total_flights_month[1:])\ndelay_ratio_day = 1.0 - np.array(on_time_flights_day[1:]) / np.array(total_flights_day[1:])\ndelay_ratio_time = 1.0 - np.array(on_time_flights_time[1:]) / np.array(total_flights_time[1:])\n\nday = days[1:]\n\nfig = plt.figure(figsize=(12,3))\n\nplt.subplot(121)\nplt.plot(avg_delay_day, 'o-')\nplt.xticks(range(0,8), [day_dict[i] for i in range(1,8)])\nplt.title('Average Delay (Early Arrivals Are Accounted)')\nplt.ylabel('delay [min]')\nplt.grid()\n\nplt.subplot(122)\nplt.plot(delay_ratio_day, 
'o-')\nplt.xticks(range(0,8), [day_dict[i] for i in range(1,8)])\nplt.title('Delay Ratio')\nplt.grid()\nplt.show()\n\nfig = plt.figure(figsize=(3,3))\nplt.scatter(delay_ratio_day,avg_delay_day)\nplt.xticks([0.14, 0.16, 0.20, 0.26])\nplt.xlabel('delay ratio')\nplt.ylabel('average delay [min]')\nplt.grid()\nplt.show()", "It is interesting to observe that Tuesday is the day with the highest probability of delay. Note that in 2014 Christmas day was Thursday. The behaviour above can be due to congestion two days before Christmas. We investigate this possibility below when we analyze the daily patterns within the month. Here we also show the correlation between average delay in minutes and delay ratio with the scatter plot above. Similar behaviours are observed in weekly and hourly patterns.", "fig = plt.figure(figsize=(20,6))\n\nplt.subplot(221)\nplt.plot(avg_delay_month, 'o-')\nplt.xticks(range(5,32,7), days[6::7])\nplt.title('Average Delay (Early Arrivals Are Accounted)')\nplt.ylabel('delay [min]')\nplt.grid()\n\nplt.subplot(222)\nplt.plot(delay_ratio_month, 'o-')\nplt.xticks(range(0,31),range(1,32))\nplt.ylabel('delay ratio')\nplt.title('Delay Ratio for the day of month')\nplt.axhline(y = 0.20, color = 'r')\nplt.axhline(y = 0.15, color = 'r')\n\nplt.grid()\n\nplt.subplot(224)\nplt.plot(total_flights_month[1:], 'o-')\nplt.xticks(range(5,32,7), days[6::7])\nplt.title('Total Number of Flights')\nplt.grid()\nfig.tight_layout()\nplt.show()", "We notice the weekly periodicity of delay times and ratios, where Tuesday-Friday has higher delay ratios than Saturday-Monday. More interestingly, we also notice how the cycle breaks exactly one week before Christmas, Thursday December 18th. The first Tuesday and Wednesday also tend to be different from the general pattern, perhaps due to their closeness to the Thanksgiving day. Also notice two peaks on December 23rd and 30th, which were both Tuesdays. 
Even though the number of flights is similar to the Tuesdays before, the delay ratios are approximately doubled. Finally, we note that the day-of-month analysis is more informative than the day-of-week analysis, as the effect of the holiday season tends to alter the daily patterns.\nDue to several interactions of holidays and weekly patterns, we decide to simply categorize days of the month as {good_day, bad_day, very_bad_day} according to the following definitions:\n\nvery_bad_day : {days of the month with an average delay ratio greater than 0.20}. For example, December 2, 11 and 19 are very bad days.\nbad_day: {days of the month with average delay ratio greater than 0.15 and smaller than or equal to 0.20}\ngood_day: {days of the month with average delay ratio smaller than or equal to 0.15}", "#find the days within each category:\nvery_bad_days = []\nbad_days = []\ngood_days = []\n\nfor k in range(31):\n r = delay_ratio_month[k]\n if r > 0.20:\n very_bad_days.append(k+1)\n elif r <= 0.15:\n good_days.append(k+1)\n else:\n bad_days.append(k+1)\n\nprint('very_bad_days:')\nprint('-------------')\nprint(very_bad_days)", "Finally, let's investigate hourly patterns.", "fig = plt.figure(figsize=(20/31*24,6))\n\nplt.subplot(221)\nplt.plot(avg_delay_time, 'o-')\nplt.xticks(range(0,24),range(0,24))\nplt.title('Average Delay (Early Arrivals Are Accounted)')\nplt.ylabel('delay [min]')\nplt.xlabel('hour')\nplt.grid()\n\nplt.subplot(222)\nplt.plot(delay_ratio_time, 'o-')\nplt.axvline(x = 3, color = 'r')\nplt.axvline(x = 16, color = 'r')\nplt.axvline(x = 23, color = 'r')\nplt.xticks(range(0,24),range(0,24))\nplt.title('Delay Ratio for time of the day')\nplt.xlabel('hour')\nplt.ylabel('delay ratio')\nplt.grid()\n\nplt.subplot(224)\nplt.plot(total_flights_time[1:], 'o-')\nplt.xticks(range(0,24),range(0,24))\nplt.title('Total number of Flights')\nplt.xlabel('hour')\nfig.tight_layout()\n\nplt.grid()\nplt.show()\n\n", "Inspired by the plots above, we define the following categories for the hour of the 
day:\n\nmorning: {03:00-12:00}\nafternoon: {13:00-16:00}\nevening: {17:00-22:00}\nnight: {23:00-02:00}", "#define the hour-of-day categories:\nmorning = range(3,13)\nafternoon = range(13,17)\nevening = range(17,23)\nnight = [23,24,0,1,2]", "III. A Logistic Regression Model for Estimating the Delay Probabilities\nIn this section a logistic regression model for the estimation of delay probability is described and implemented. Let us start with a summary of the variables identified in Section II:\n\ncarrier: {no_unless_its_really_cheap, not_bad, way_to_go}\narrival airport: coastal distance given in longitude\nday of the month: {good_day, bad_day, very_bad_day}\ntime of the day: {morning, afternoon, evening, night}\n\nWe stick to the definition of delay (use only arrival delay) that we used when computing the date and time related patterns in Section II. Therefore we define the target vector $Y$ with components equal to zero for on-time flights and one for delayed flights.", "#create the data set dictionary and target vector Y\nfrom sklearn.feature_extraction import DictVectorizer\n\ntraining_set = []\nY = []\n\nN = len(delay_data_dict['ARR_DELAY'])\n\nfor i in range(N):\n\n lon = airport_info[delay_data_dict['DEST'][i]]['long']\n coastal_dist = abs(np.array(lon)-np.array(middle_of_map))\n\n arr_time = delay_data_dict['ARR_TIME'][i]\n if arr_time in morning:\n arr_time = 'morning'\n elif arr_time in afternoon:\n arr_time = 'afternoon'\n elif arr_time in evening:\n arr_time= 'evening'\n else:\n arr_time = 'night'\n\n arr_day = delay_data_dict['DAY_OF_MONTH'][i]\n if arr_day in good_days:\n arr_day = 'good_days'\n elif arr_day in bad_days:\n arr_day= 'bad_days'\n else:\n arr_day = 'very_bad_days'\n\n carrier = delay_data_dict['UNIQUE_CARRIER'][i]\n if carrier in no_unless_its_really_cheap:\n carrier = 'no_unless_its_really_cheap'\n elif carrier in not_bad:\n carrier = 'not_bad'\n else:\n carrier = 'way_to_go'\n\n training_set.append({'bias': 1.0,'coastal_dist': coastal_dist, 
'arr_time': arr_time, 'arr_day': arr_day, 'carrier': carrier})\n Y.append(int(delay_data_dict['ARR_DELAY'][i]>15))\n\nvec = DictVectorizer()\nX = vec.fit_transform(training_set).toarray()\n\n#Train our Logistic Regression Model\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import classification_report\nfrom sklearn.cross_validation import StratifiedKFold, cross_val_score\nfrom sklearn.preprocessing import StandardScaler\n\nY = np.array(Y)\nratio0 = len(Y[Y==0])/len(Y)\n\nmodel = LogisticRegression(fit_intercept = False)\nmodel = model.fit(X, Y)\ntrain_accuracy = model.score(X, Y)\nY = np.array(Y)\n\nprint('Ratio of the on-time flights in the data-set: {}'.format(ratio0))\n\nprint('\\nTraining score: {}'.format(train_accuracy))\nprint('\\nClassification report on training data:\\n')\n\nY_pred = model.predict(X)\nprint(classification_report(Y, Y_pred))\n\ncv = StratifiedKFold(Y, n_folds = 5)\ncv_score = cross_val_score(model, X, Y, cv = cv)\n\nprint('\\n5-fold cross validation score: {}'.format(np.mean(cv_score)))", "The performance of our logistic classifier is basically the same as predicting that a flight will always be on time.\nAlso, only 60% of the flights that were predicted to be delayed were delayed (precision for label 1 is 0.59).\nOnly 2% of the delayed flights were correctly classified.\n\nIt may help to take a look at the ROC curve to get more insight into the choice of threshold:", "from sklearn.metrics import roc_curve, auc\n\nprobas_ = model.predict_proba(X)\nfpr, tpr, thresholds = roc_curve(Y, probas_[:,1])\nroc_score = auc(fpr, tpr)\n\nplt.plot(fpr, tpr)\nplt.xlim([-0.05, 1.05])\nplt.ylim([-0.05, 1.05])\n\nplt.plot([0, 1], [0, 1], '--k')\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('ROC Curve')\nplt.show()", "We have been very crude in our design of features. 
The logistic regression did not work well with the current model.\nLet's also try a random forest classifier, but I believe the issue is in feature engineering.", "from sklearn.ensemble import RandomForestClassifier\n\nmodel_rf = RandomForestClassifier()\nmodel_rf.fit(X,Y)\n\ntrain_accuracy = model_rf.score(X, Y)\n\n\nprint('\\nTraining score - Random Forest Classifier: {}'.format(train_accuracy))\nprint('\\nClassification report on training data:\\n')\n\nY_pred_rf = model_rf.predict(X)\nprint(classification_report(Y, Y_pred_rf))\n\ncv = StratifiedKFold(Y, n_folds = 5)\ncv_score = cross_val_score(model_rf, X, Y, cv = cv)\n\nprint('\\n5-fold cross validation score: {}'.format(np.mean(cv_score)))", "The fairly good test error reported above increases our confidence in our model. Finally, let us report on the coefficients of the logistic regression model:", "features = vec.get_feature_names()\ncoeffs = model.coef_[0]\n\nprint('%34s %20s' %('Feature:', 'Coefficient:'))\nprint('%34s %20s' %('-'*34, '-'*20))\nfor f,c in zip(features, coeffs):\n print(('%34s %20.4f' %(f, c)))", "An immediate comment on the coefficients of the categorical variables in our model: as we observed in Section II, flying in the morning of a 'good day' with an airline that has a good overall track record greatly reduces the risk of delay.\nIV. Conclusions and Some directions for future work\nAn analysis of on-time performance data of US domestic flights in December 2014 was conducted. We have identified key aspects that could be used as features for simple linear probability modeling of delays. We also reported on a logistic regression method for the estimation of delay probability using the identified features.\nHere we omit a more detailed analysis of the coefficients of each feature. 
That type of analysis would definitely be useful for assessing the importance of different features in predicting delay probabilities in the context of our logistic regression model.\nWe also tried a random forest classifier on the training set. Neither of the classification methods performed well. That is most probably due to poor feature engineering. We won't delve into that issue here, as the main goal is exploratory analysis of the data at hand.\nSome possibilities to enhance the model and its performance are:\n\nInclude interactions between predictors.\nIncrease the granularity of the categorical variables.\nEmployment of model selection techniques such as cross-validation.\nInclusion of data from years 2013 and 2012 as needed.\nEmployment of other data sources such as precipitation and other weather conditions." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
esa-as/2016-ml-contest
EvgenyS/Facies_classification_ES.ipynb
apache-2.0
[ "Facies classification using ensemble classifiers\nby: <a href=\"https://ca.linkedin.com/in/evgeny-sorkin-509532b\">Evgeny Sorkin</a> SJ Geophysics\nOriginal contest notebook by Brendon Hall, Enthought\nThis notebook demonstrates how to train a machine learning algorithm to predict facies from well log data. The dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007). \nThe dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a majority-vote classifier composed of a support vector machine, a random forest and an XGBoost tree. We will use the classifier implementations in scikit-learn and XGBoost.\nExploring the dataset\nFirst, we will examine the data set we will use to train the classifier. The training data is contained in the file training_data.csv. The dataset consists of 5 wireline log measurements, two indicator variables and a facies label at half foot intervals. In machine learning terminology, each log measurement is a feature vector that maps a set of 'features' (the log measurements) to a class (the facies type). 
We will use the pandas library to load the data into a dataframe, which provides a convenient data structure to work with well log data.", "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\nfrom pandas import set_option\nset_option(\"display.max_rows\", 20)\npd.options.mode.chained_assignment = None\n\nfilename = '../training_data.csv'\ntraining_data = pd.read_csv(filename)\ntraining_data", "This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate. \nThe seven predictor variables are:\n* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),\nphotoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.\n* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)\nThe nine discrete facies (classes of rocks) are: \n1. Nonmarine sandstone\n2. Nonmarine coarse siltstone \n3. Nonmarine fine siltstone \n4. Marine siltstone and shale \n5. Mudstone (limestone)\n6. Wackestone (limestone)\n7. Dolomite\n8. 
Packstone-grainstone (limestone)\n9. Phylloid-algal bafflestone (limestone)\nThese facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.\nFacies |Label| Adjacent Facies\n:---: | :---: |:--:\n1 |SS| 2\n2 |CSiS| 1,3\n3 |FSiS| 2\n4 |SiSh| 5\n5 |MS| 4,6\n6 |WS| 5,7\n7 |D| 6,8\n8 |PS| 6,7,9\n9 |BS| 7,8\nLet's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type.", "training_data['Well Name'] = training_data['Well Name'].astype('category')\ntraining_data['Formation'] = training_data['Formation'].astype('category')\ntraining_data['Well Name'].unique()\n\ntraining_data.columns[4:]\n\ntraining_data.describe()", "This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set.", "# 1=sandstone 2=c_siltstone 3=f_siltstone \n# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite\n# 8=packstone 9=bafflestone\nfacies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',\n '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']\n\nfacies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',\n 'WS', 'D','PS', 'BS']\n#facies_color_map is a dictionary that maps facies labels\n#to their respective colors\nfacies_color_map = {}\nfor ind, label in enumerate(facies_labels):\n facies_color_map[label] = facies_colors[ind]\n\ndef label_facies(row, labels):\n return labels[ row['Facies'] -1]\n\ndef make_facies_log_plot(logs, facies_colors):\n #make sure logs are sorted by depth\n logs = logs.sort_values(by='Depth')\n cmap_facies = colors.ListedColormap(\n facies_colors[0:len(facies_colors)], 'indexed')\n \n ztop=logs.Depth.min(); zbot=logs.Depth.max()\n \n cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)\n \n f, ax = 
plt.subplots(nrows=1, ncols=6, figsize=(8, 12))\n ax[0].plot(logs.GR, logs.Depth, '-g')\n ax[1].plot(logs.ILD_log10, logs.Depth, '-')\n ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')\n ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')\n ax[4].plot(logs.PE, logs.Depth, '-', color='black')\n im=ax[5].imshow(cluster, interpolation='none', aspect='auto',\n cmap=cmap_facies,vmin=1,vmax=9)\n \n divider = make_axes_locatable(ax[5])\n cax = divider.append_axes(\"right\", size=\"20%\", pad=0.05)\n cbar=plt.colorbar(im, cax=cax)\n cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', \n 'SiSh', ' MS ', ' WS ', ' D ', \n ' PS ', ' BS ']))\n cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')\n \n for i in range(len(ax)-1):\n ax[i].set_ylim(ztop,zbot)\n ax[i].invert_yaxis()\n ax[i].grid()\n ax[i].locator_params(axis='x', nbins=3)\n \n ax[0].set_xlabel(\"GR\")\n ax[0].set_xlim(logs.GR.min(),logs.GR.max())\n ax[1].set_xlabel(\"ILD_log10\")\n ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())\n ax[2].set_xlabel(\"DeltaPHI\")\n ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())\n ax[3].set_xlabel(\"PHIND\")\n ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())\n ax[4].set_xlabel(\"PE\")\n ax[4].set_xlim(logs.PE.min(),logs.PE.max())\n ax[5].set_xlabel('Facies')\n \n ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])\n ax[4].set_yticklabels([]); ax[5].set_yticklabels([])\n ax[5].set_xticklabels([])\n f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)\ndef compare_facies_plot(logs, compadre, facies_colors):\n #make sure logs are sorted by depth\n logs = logs.sort_values(by='Depth')\n cmap_facies = colors.ListedColormap(\n facies_colors[0:len(facies_colors)], 'indexed')\n \n ztop=logs.Depth.min(); zbot=logs.Depth.max()\n \n cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)\n cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1)\n \n f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 
12))\n ax[0].plot(logs.GR, logs.Depth, '-g')\n ax[1].plot(logs.ILD_log10, logs.Depth, '-')\n ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')\n ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')\n ax[4].plot(logs.PE, logs.Depth, '-', color='black')\n im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto',\n cmap=cmap_facies,vmin=1,vmax=9)\n im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto',\n cmap=cmap_facies,vmin=1,vmax=9)\n \n divider = make_axes_locatable(ax[6])\n cax = divider.append_axes(\"right\", size=\"20%\", pad=0.05)\n cbar=plt.colorbar(im2, cax=cax)\n cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', \n 'SiSh', ' MS ', ' WS ', ' D ', \n ' PS ', ' BS ']))\n cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')\n \n for i in range(len(ax)-2):\n ax[i].set_ylim(ztop,zbot)\n ax[i].invert_yaxis()\n ax[i].grid()\n ax[i].locator_params(axis='x', nbins=3)\n \n ax[0].set_xlabel(\"GR\")\n ax[0].set_xlim(logs.GR.min(),logs.GR.max())\n ax[1].set_xlabel(\"ILD_log10\")\n ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())\n ax[2].set_xlabel(\"DeltaPHI\")\n ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())\n ax[3].set_xlabel(\"PHIND\")\n ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())\n ax[4].set_xlabel(\"PE\")\n ax[4].set_xlim(logs.PE.min(),logs.PE.max())\n ax[5].set_xlabel('Facies')\n ax[6].set_xlabel(compadre)\n \n ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])\n ax[4].set_yticklabels([]); ax[5].set_yticklabels([])\n ax[5].set_xticklabels([])\n ax[6].set_xticklabels([])\n f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)\n\nmake_facies_log_plot(\n training_data[training_data['Well Name'] == 'SHRIMPLIN'],\n facies_colors)", "Next: make labels and features vectors", "feat_labels =training_data.columns[4:]", "Feature engineering", "def add_del(df,feat_names):\n \"\"\"\"\"\"\n for fn in feat_names:\n df[\"del_\"+fn] = np.gradient(df[fn] )\n return 
df\ntraining_data.columns[4:]\ntraining_data = add_del(training_data,[fn for fn in training_data.columns[4:] if fn != 'NM_M'])\ntraining_data = add_del(training_data,[fn for fn in training_data.columns[4:] if fn != 'NM_M'])\ntraining_data.columns[4:]\nfeat_labels =training_data.columns[4:] \n\nfeat_labels", "Import and initialize a few classifiers that we will play with", "y = training_data['Facies'].values\nX = training_data.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1).values\nfeat_labels =training_data.columns[4:]\nlabel_encoded_y = np.unique(y)\n\n## import and initialize a few classifiers that we play with\n\ndef randomize(dataset, labels):\n permutation = np.random.permutation(labels.shape[0])\n shuffled_dataset = dataset[permutation,:]\n shuffled_labels = labels[permutation]\n return shuffled_dataset, shuffled_labels\nX, y = randomize(X, y)\n\n\nfrom sklearn import __version__ as sklearn_version\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier, BaggingClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\ntree = DecisionTreeClassifier(criterion = 'entropy', max_depth = 1)\nforest = RandomForestClassifier( n_estimators = 10000, random_state=50, n_jobs=-1)\nbag = BaggingClassifier(base_estimator = tree, n_estimators=400,random_state=0)\nknn = KNeighborsClassifier(n_neighbors = 5, p=2, metric = 'minkowski')\n\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn import svm\nSVC_classifier = svm.SVC(kernel = 'rbf', random_state=0, gamma=0.01)\npipe_svm = Pipeline([('scl',StandardScaler()),('clf',SVC_classifier)])\n\nimport xgboost\nfrom xgboost import XGBClassifier", "Split the training data into training and test sets. 
Let's use 10% of the data for the test set.", "from sklearn.cross_validation import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(\n X,y, test_size=0.1, random_state=42)\n\n# the scores \nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.metrics import precision_score, recall_score, f1_score, make_scorer\npre_scorer= make_scorer(score_func=f1_score, greater_is_better=True,average = 'micro')\n#the scorer is f1_score with micro averaging\nfrom sklearn.cross_validation import StratifiedKFold\nkfold = StratifiedKFold(y_train, n_folds=10, shuffle=True, random_state=7)\n# stratified kfold cross-validation keeps the classes balanced in each fold.\n\nfrom sklearn.model_selection import GridSearchCV", "Check feature importance with the random forest classifier", "forest.fit(X, y)\nimportances = forest.feature_importances_\nindices = np.argsort(importances)[::-1]\nfor f in range(X_train.shape[1]):\n print(\"%2d) %-*s %f\" % (f+1 , 30, \n feat_labels[indices[f]], \n importances[indices[f]]))", "Looks like not all features are equally important, e.g. 
may consider dropping those with relative importance below 2.3%", "if sklearn_version < '0.18':\n X_selected = forest.transform(X, threshold=0.023)\nelse:\n from sklearn.feature_selection import SelectFromModel\n sfm = SelectFromModel(forest, threshold=0.023, prefit=True)\n X_selected = sfm.transform(X)\n\nX_selected = X\n\nfrom sklearn.cross_validation import train_test_split\n\n\nX_train, X_test, y_train, y_test = train_test_split(\n X_selected,y, test_size=0.1, random_state=42)\n\n#tuned forest\nforest = RandomForestClassifier( \n n_estimators = 303,\n min_samples_leaf = 1,\n oob_score = True,\n random_state=50, n_jobs=-1)\nforest.fit(X_train, y_train)\ny_pred = forest.predict(X_test)\nprint (\"F1-score: %.4g\" % f1_score(y_test,y_pred, average='micro'))\n\n#tuned xgb \nxg = XGBClassifier(learning_rate =0.005,\n n_estimators=1888,\n max_depth=10,\n min_child_weight=1,\n gamma=0.2,\n subsample=0.9,\n colsample_bytree=0.7,\n reg_alpha=0,\n nthread=-1,\n objective='multi:softprob',\n scale_pos_weight=1,\n seed=43)\n\nxg.fit(X_train, y_train,eval_metric=pre_scorer)\ny_pred = xg.predict(X_test)\nprint (\"F1-score: %.4g\" % f1_score(y_test,y_pred, average='micro'))\n\nprint('Predicted F1-Score: {}'.format(xg.score(X_test,y_test)))\n\n#tuned SVM\npipe_svm = Pipeline([('scl',StandardScaler()),('clf',svm.SVC(kernel = 'rbf', random_state=0, gamma=0.1, C=100))])", "The combined majority vote classifier is constructed out of 3-individual classifiers", "from sklearn.ensemble import VotingClassifier\nmv = VotingClassifier(estimators=[('forest',forest),('XGBoost',xg),('svn',pipe_svm)])\n\n\nclf_labels = [ 'Random Forest' ,'XGBoost','SVN', 'Majority-Vote']\n\nprint('10-fold cross validation:\\n')\nfor clf, label in zip([forest,xg,pipe_svm,mv], clf_labels):\n scores = cross_val_score(estimator=clf,\n X=X_train,\n y=y_train,\n cv=kfold,\n scoring=pre_scorer)\n print(\"F1-score: %0.2f (+/- %0.2f) [%s]\"\n % (scores.mean(), scores.std(), 
label))\n\nmv.fit(X_train,y_train)\nprint('Predicted F1-Score: {}'.format(mv.score(X_test,y_test)))\n\ny_pred=mv.predict(X_test)\nprint('Predicted F1-Score: {}'.format(f1_score(y_true=y_test,y_pred=y_pred, average='micro')))", "Some more detailed metrics to evaluate how well our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels.\nThe confusion matrix is simply a 2D array. The entries of confusion matrix C[i][j] are equal to the number of observations predicted to have facies j, but are known to have facies i. \nTo simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file classification_utilities.py in this repo for the display_cm() function.", "predicted_labels=mv.predict(X_test)\n\nfrom sklearn.metrics import confusion_matrix\n\nfrom classification_utilities import display_cm, display_adj_cm\nfacies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',\n 'WS', 'D','PS', 'BS'];\n\nconf = confusion_matrix(y_test, predicted_labels)\ndisplay_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)", "The entries along the diagonal are the facies that have been correctly classified. Below we define two functions that will give an overall value for how the algorithm is performing. The accuracy is defined as the number of correct classifications divided by the total number of classifications.", "def accuracy(conf):\n total_correct = 0.\n nb_classes = conf.shape[0]\n for i in np.arange(0,nb_classes):\n total_correct += conf[i][i]\n acc = total_correct/sum(sum(conf))\n return acc", "As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. 
We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels.", "adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])\n\ndef accuracy_adjacent(conf, adjacent_facies):\n nb_classes = conf.shape[0]\n total_correct = 0.\n for i in np.arange(0,nb_classes):\n total_correct += conf[i][i]\n for j in adjacent_facies[i]:\n total_correct += conf[i][j]\n return total_correct / sum(sum(conf))\n\nprint('Facies classification accuracy = %f' % accuracy(conf))\nprint('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies))\n\ndisplay_adj_cm(conf, facies_labels, adjacent_facies, \n display_metrics=True, hide_zeros=True)", "Considering adjacent facies, the F1 scores for all facies types are above 0.9, except when classifying SiSh or marine siltstone and shale. The classifier often misclassifies this facies (recall of 0.87), most often as wackestone. \nNow we can train the classifier using the entire data set", "mv.fit(X,y)\nprint('Predicted F1-Score: {}'.format(mv.score(X,y)))", "Applying the classification model to new data\nNow that we have a trained facies classification model we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input.\nThis dataset is similar to the training data except it does not have facies labels. 
It is loaded into a dataframe called well_data.", "well_data = pd.read_csv('../validation_data_nofacies.csv')\nwell_data['Well Name'] = well_data['Well Name'].astype('category')\nwell_data.columns[4:]\nwell_data= add_del(well_data,[fn for fn in well_data.columns[3:] if fn != 'NM_M'])\nwell_data = add_del(well_data,[fn for fn in well_data.columns[3:] if fn != 'NM_M'])\nwell_data.columns[3:]\nfeat_labels =well_data.columns[3:] \nX_unknown = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1).values\n\nwell_data.columns[3:] \n\n#training_data.columns[4:] ", "Finally we predict facies labels for the unknown data, and store the results in a Facies column of the well_data dataframe.", "#predict facies of unclassified data\ny_unknown = mv.predict(X_unknown)\nwell_data['Facies'] = y_unknown\nwell_data\n\nwell_data['Well Name'].unique()", "We can use the well log plot to view the classification results along with the well logs.", "make_facies_log_plot(\n well_data[well_data['Well Name'] == 'STUART'],\n facies_colors=facies_colors)\n\nmake_facies_log_plot(\n well_data[well_data['Well Name'] == 'CRAWFORD'],\n facies_colors=facies_colors)", "Finally we can write out a csv file with the well data along with the facies classification results.", "well_data.to_csv('well_data_with_facies.csv')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
allentv/DataScienceMachineLearning
Case Study - Product Survey Analysis/Case Study Analysis.ipynb
mit
[ "%pylab inline\n\nimport pandas as pd\nimport numpy as np\n\n# Seaborn requires matplotlib package to be installed\n# https://stanford.edu/~mwaskom/software/seaborn/installing.html\n#\n# If it is not available, use:\n# pip install matplotlib\n# \n# If you are using Anaconda distribution, then\n# conda install matplotlib\nimport seaborn as sb\nimport json\nimport datetime\n\n# Read and inspect the data file\ndf = pd.read_csv(\"product_test_data.csv\")\nprint df.head(10)\nprint df.columns\nprint df.dtypes", "Data Transformations", "# Convert the string which has a list of values to an actual python list\ndf[\"Amount\"] = df[\"Amount\"].apply(json.loads)\n\n# Create a new column which has the sum of production application\ndf[\"Total_Amount\"] = df[\"Amount\"].apply(sum)\n\n# Create a new column for the number of entries\ndf[\"No_of_entries\"] = df[\"Amount\"].apply(len)\n\n# Remove unused columns\ndf.drop([\"Amount\"], axis=1, inplace=True)\n\n# Check if the transformations have been successful\nprint df.head(10)\nprint df.columns", "1) List the top 10 customers who had the maximum usage of all products", "# Perform a Group-By operation on \"User_ID\" and sum up the \"Total_Amount\" field\ntop10_users_products = df.groupby(['User_ID'], as_index=False)['Total_Amount'].sum()\n\n# Sort in descending order based on \"Total_Amount\" field\ntop10_users_products.sort_values(\"Total_Amount\", ascending=False, inplace=True)\n\n# By default, pandas retains the index values as in the original dataframe.\n# Reset the index to start from beginning\ntop10_users_products.reset_index(inplace=True)\n\n# Show only the top 10 records\nprint top10_users_products.head(10)", "2) List the top 3 users who has the most number of data entries for \"Product1\"", "# Filter rows for \"Product1\"\ntop3_product1 = df[df[\"Product\"] == \"Product1\"]\n\n# Extract the columns - \"User_ID\" and \"No_of_entries\"\ntop3_product1 = top3_product1[[\"User_ID\", \"No_of_entries\"]]\n\n# Sort on 
\"No_of_entries\" column in descending order\ntop3_product1.sort_values(\"No_of_entries\", ascending=False, inplace=True)\n\n# Display top 3 rows\nprint top3_product1.head(3)", "3) Which product has the maximum usage across all customers?", "product_max_usage = df.groupby([\"Product\"], as_index=False)[\"Total_Amount\"].sum()\nprint product_max_usage.max(column=\"Total_Amount\")", "4) Find the weekly usage of each product across all users", "# Survey duration is given as 90 days\nSURVEY_DURATION = 90\n\n# Take the current time\ntoday_date = datetime.datetime.today()\n\n# Calculate the start date as 90 days prior\nstart_date = today_date - datetime.timedelta(days=SURVEY_DURATION)\n\n# Convert the date that is a string to YY-MM-DD format and\n# find the number of days elapsed from the start date\ndf[\"Days\"] = (df[\"Entry_Date\"].apply(lambda x: datetime.datetime.strptime(x, \"%Y-%m-%d\")) - start_date) / np.timedelta64(1, \"D\")\n\n# Round off the day values to a whole number\ndf[\"Days\"] = df[\"Days\"].round(0)\n\n# Calculate the week by dividing the number of days by 7.\n# Add 1 to start the week count from 1 instead of 0\ndf[\"Week\"] = ((df[\"Days\"] / 7) + 1).round()\n\n# Remove the \"Days\" column\ndf.drop([\"Days\"], axis=1, inplace=True)\n\n\n# Group by \"Product\" and \"Week\" fields followed by summation over \"Total_Amount\" field\nweekly_usage_all_products = df.groupby([\"Product\", \"Week\"], as_index=False)[\"Total_Amount\"].sum()\n\n# Sort by \"Week\" and then \"Product\"\nweekly_usage_all_products.sort_values([\"Week\", \"Product\"], inplace=True)\n\nprint weekly_usage_all_products\n\n# Plotting the above data using seaborn package\nprint sb.factorplot(\n x=\"Week\", y=\"Total_Amount\",\n hue=\"Product\",\n data=weekly_usage_all_products,\n size=12,\n kind=\"bar\",\n palette=\"muted\"\n)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nholtz/structural-analysis
Devel/V05/MemberLoads.ipynb
cc0-1.0
[ "Fixed End Forces\nThis module computes the fixed end forces (moments and shears) due to transverse loads\nacting on a 2-D planar structural member.", "from __future__ import division, print_function\n\nimport numpy as np\nimport sys\nfrom salib import extend", "Class EF\nInstances of class EF represent the 6 end-forces for a 2-D planar beam element.\nThe forces (and local degrees of freedom) are numbered 0 through 5, and are shown here in their\npositive directions on a beam-element of length L. The 6 forces are labelled by prefixing the number with a letter to suggest the normal interpretation of that force: c for axial force,\nv for shear force, and m for moment.\n\nFor use in this module, the end forces will be fixed-end-forces.", "class EF(object):\n \n \"\"\"Class EF represents the 6 end forces acting on a 2-D, planar, beam element.\"\"\"\n \n def __init__(self,c0=0.,v1=0.,m2=0.,c3=0.,v4=0.,m5=0.):\n \"\"\"Initialize an instance with the 6 end forces. If the first\n argument is a 6-element array, initialize from a copy of that\n array and ignore any other arguments.\"\"\"\n if np.isscalar(c0):\n self.fefs = np.matrix([c0,v1,m2,c3,v4,m5],dtype=np.float64).T\n else:\n self.fefs = c0.copy()\n \n def __getitem__(self,ix):\n \"\"\"Retreive one of the forces by numer. 
This allows unpacking\n of all 6 end forces into 6 variables using something like:\n c0,v1,m2,c3,v4,m5 = self\n \"\"\"\n return self.fefs[ix,0]\n \n def __add__(self,other):\n \"\"\"Add this set of end forces to another, returning the sum.\"\"\"\n assert type(self) is type(other)\n new = self.__class__(self.fefs+other.fefs)\n return new\n \n def __sub__(self,other):\n \"\"\"Subtract the other from this set of forces, returning the difference.\"\"\"\n assert type(self) is type(other)\n new = self.__class__(self.fefs-other.fefs)\n return new\n \n def __mul__(self,scale):\n \"\"\"Multiply this set of forces by the scalar value, returning the product.\"\"\"\n if scale == 1.0:\n return self\n return self.__class__(self.fefs*scale)\n \n __rmul__ = __mul__\n \n def __repr__(self):\n return '{}({},{},{},{},{},{})'.format(self.__class__.__name__,*list(np.array(self.fefs.T)[0]))\n\n##test:\nf = EF(1,2,0,4,1,6)\nf\n\n##test:\ng = f+f+f\ng\n\n##test:\nf[1]\n\n##test:\nf[np.ix_([3,0,1])]\n\n##test:\ng[(3,0,1)]\n\n##test:\nf0,f1,f2,f3,f4,f5 = g\nf3\n\n##test:\ng, g*5, 5*g", "Now define properties so that the individual components can be accessed like named attributes,\ne.g. 'ef.m3' or 'ef.m5 = 100.'.", "@extend\nclass EF:\n\n @property\n def c0(self):\n return self.fefs[0,0]\n \n @c0.setter\n def c0(self,v):\n self.fefs[0,0] = v\n \n @property\n def v1(self):\n return self.fefs[1,0]\n \n @v1.setter\n def v1(self,v):\n self.fefs[1,0] = v\n \n @property\n def m2(self):\n return self.fefs[2,0]\n \n @m2.setter\n def m2(self,v):\n self.fefs[2,0] = v\n \n @property\n def c3(self):\n return self.fefs[3,0]\n \n @c3.setter\n def c3(self,v):\n self.fefs[3,0] = v\n \n @property\n def v4(self):\n return self.fefs[4,0]\n \n @v4.setter\n def v4(self,v):\n self.fefs[4,0] = v\n \n @property\n def m5(self):\n return self.fefs[5,0]\n \n @m5.setter\n def m5(self,v):\n self.fefs[5,0] = v\n\n##test:\nf = EF(10.,11,12,13,15,15)\nf, f.c0, f.v1, f.m2, f.c3, f.v4, f.m5\n\n##test:\nf.c0 *= 2\nf.v1 
*= 3\nf.m2 *= 4\nf.c3 *= 5\nf.v4 *= 6\nf.m5 *= 7\nf", "Class MemberLoad\nThis is the base class for all the different types of member loads (point loads, UDLs, etc.)\nof 2D planar beam elements.\nThe main purpose is to calculate the fixed-end member forces, but we will also supply\nlogic to enable calculation of internal shears and moments at any point along the span.\nAll types of member loads will be input using a table containing five data columns:\nW1, W2, A, B, and C. Each load type contains a 'TABLE_MAP'\nthat specifies the mapping between attribute name and column name in the table.", "class MemberLoad(object):\n \n TABLE_MAP = {} # map from load parameter names to column names in table\n \n def fefs(self):\n \"\"\"Return the complete set of 6 fixed end forces produced by the load.\"\"\"\n raise NotImplementedError() \n \n def shear(self,x):\n \"\"\"Return the shear force that is in equilibrium with that\n produced by the portion of the load to the left of the point at \n distance 'x'. 'x' may be a scalar or a 1-dimensional array\n of values.\"\"\"\n raise NotImplementedError()\n \n def moment(self,x):\n \"\"\"Return the bending moment that is in equilibrium with that\n produced by the portion of the load to the left of the point at \n distance 'x'. 'x' may be a scalar or a 1-dimensional array\n of values.\"\"\"\n raise NotImplementedError()\n\n@extend\nclass MemberLoad:\n \n @property\n def vpts(self):\n \"\"\"Return a descriptor of the points at which the shear force must \n be evaluated in order to draw a proper shear force diagram for this \n load. The descriptor is a 3-tuple of the form: (l,r,d) where 'l'\n is the leftmost point, 'r' is the rightmost point and 'd' is the\n degree of the curve between. One of 'r', 'l' may be None.\"\"\"\n raise NotImplementedError()\n \n @property\n def mpts(self):\n \"\"\"Return a descriptor of the points at which the moment must be \n evaluated in order to draw a proper bending moment diagram for this \n load. 
The descriptor is a 3-tuple of the form: (l,r,d) where 'l'\n is the leftmost point, 'r' is the rightmost point and 'd' is the\n degree of the curve between. One of 'r', 'l' may be None.\"\"\"\n raise NotImplementedError()", "Load Type PL\nLoad type PL represents a single concentrated force, of magnitude P, at a distance a from the j-end:", "class PL(MemberLoad):\n \n TABLE_MAP = {'P':'W1','a':'A'}\n \n def __init__(self,L,P,a):\n self.L = L\n self.P = P\n self.a = a\n \n def fefs(self):\n P = self.P\n L = self.L\n a = self.a\n b = L-a\n m2 = -P*a*b*b/(L*L)\n m5 = P*a*a*b/(L*L)\n v1 = (m2 + m5 - P*b)/L\n v4 = -(m2 + m5 + P*a)/L\n return EF(0.,v1,m2,0.,v4,m5)\n \n def shear(self,x):\n return -self.P*(x>self.a)\n \n def moment(self,x):\n return self.P*(x-self.a)*(x>self.a)\n \n def __repr__(self):\n return '{}(L={},P={},a={})'.format(self.__class__.__name__,self.L,self.P,self.a)\n\n##test:\np = PL(1000.,300.,400.)\np, p.fefs()\n\n@extend\nclass MemberLoad:\n \n EPSILON = 1.0E-6\n\n@extend\nclass PL:\n \n @property\n def vpts(self):\n return (self.a-self.EPSILON,self.a+self.EPSILON,0)\n \n @property\n def mpts(self):\n return (self.a,None,1)\n\n##test:\np = PL(1000.,300.,400.)\np.vpts\n\n##test:\np.mpts", "Load Type PLA\nLoad type PLA represents a single concentrated force applied parallel to the length\nof the segment (producing only axial forces).", "class PLA(MemberLoad):\n \n TABLE_MAP = {'P':'W1','a':'A'}\n \n def __init__(self,L,P,a):\n self.L = L\n self.P = P\n self.a = a\n \n def fefs(self):\n P = self.P\n L = self.L\n a = self.a\n c0 = -P*(L-a)/L\n c3 = -P*a/L\n return EF(c0=c0,c3=c3)\n \n def shear(self,x):\n return 0.\n \n def moment(self,x):\n return 0.\n \n def __repr__(self):\n return '{}(L={},P={},a={})'.format(self.__class__.__name__,self.L,self.P,self.a)\n\n##test:\np = PLA(10.,P=100.,a=4.)\np.fefs()\n\n@extend\nclass PLA:\n \n @property\n def vpts(self):\n return (0.,self.L,0)\n \n @property\n def mpts(self):\n return (0.,self.L,0)", "Load Type 
UDL\nLoad type UDL represents a uniformly distributed load, of magnitude w, over the complete length of the element.", "class UDL(MemberLoad):\n \n TABLE_MAP = {'w':'W1'}\n \n def __init__(self,L,w):\n self.L = L\n self.w = w\n \n def __repr__(self):\n return '{}(L={},w={})'.format(self.__class__.__name__,self.L,self.w)\n \n def fefs(self):\n L = self.L\n w = self.w\n return EF(0.,-w*L/2., -w*L*L/12., 0., -w*L/2., w*L*L/12.)\n \n def shear(self,x):\n l = x*(x>0.)*(x<=self.L) + self.L*(x>self.L) # length of loaded portion\n return -(l*self.w)\n \n def moment(self,x):\n l = x*(x>0.)*(x<=self.L) + self.L*(x>self.L) # length of loaded portion\n d = (x-self.L)*(x>self.L) # distance from loaded portion to x: 0 if x <= L else x-L\n return self.w*l*(l/2.+d)\n \n @property\n def vpts(self):\n return (0.,self.L,1)\n \n @property\n def mpts(self):\n return (0.,self.L,2)", "##test:\nw = UDL(12,10)\nw,w.fefs()", "Load Type LVL\nLoad type LVL represents a linearly varying distributed load acting over a portion of the span:", "class LVL(MemberLoad):\n \n TABLE_MAP = {'w1':'W1','w2':'W2','a':'A','b':'B','c':'C'}\n \n def __init__(self,L,w1,w2=None,a=None,b=None,c=None):\n if a is not None and b is not None and c is not None and L != (a+b+c):\n raise Exception('Cannot specify all of a, b & c')\n if a is None:\n if b is not None and c is not None:\n a = L - (b+c)\n else:\n a = 0.\n if c is None:\n if b is not None:\n c = L - (a+b)\n else:\n c = 0.\n if b is None:\n b = L - (a+c)\n if w2 is None:\n w2 = w1\n self.L = L\n self.w1 = w1\n self.w2 = w2\n self.a = a\n self.b = b\n self.c = c\n \n def fefs(self):\n \"\"\"This mess was generated via sympy. 
See:\n ../../examples/cive3203-notebooks/FEM-2-Partial-lvl.ipynb \"\"\"\n L = float(self.L)\n a = self.a\n b = self.b\n c = self.c\n w1 = self.w1\n w2 = self.w2\n m2 = -b*(15*a*b**2*w1 + 5*a*b**2*w2 + 40*a*b*c*w1 + 20*a*b*c*w2 + 30*a*c**2*w1 + 30*a*c**2*w2 + 3*b**3*w1 + 2*b**3*w2 + 10*b**2*c*w1 + 10*b**2*c*w2 + 10*b*c**2*w1 + 20*b*c**2*w2)/(60.*(a + b + c)**2)\n m5 = b*(20*a**2*b*w1 + 10*a**2*b*w2 + 30*a**2*c*w1 + 30*a**2*c*w2 + 10*a*b**2*w1 + 10*a*b**2*w2 + 20*a*b*c*w1 + 40*a*b*c*w2 + 2*b**3*w1 + 3*b**3*w2 + 5*b**2*c*w1 + 15*b**2*c*w2)/(60.*(a + b + c)**2)\n v4 = -(b*w1*(a + b/2.) + b*(a + 2*b/3.)*(-w1 + w2)/2. + m2 + m5)/L\n v1 = -b*(w1 + w2)/2. - v4\n return EF(0.,v1,m2,0.,v4,m5)\n \n def __repr__(self):\n return '{}(L={},w1={},w2={},a={},b={},c={})'\\\n .format(self.__class__.__name__,self.L,self.w1,self.w2,self.a,self.b,self.c)\n \n def shear(self,x):\n c = (x>self.a+self.b) # 1 if x > A+B else 0\n l = (x-self.a)*(x>self.a)*(1.-c) + self.b*c # length of load portion to the left of x\n return -(self.w1 + (self.w2-self.w1)*(l/self.b)/2.)*l \n \n def moment(self,x):\n c = (x>self.a+self.b) # 1 if x > A+B else 0\n # note: ~c doesn't work if x is scalar, thus we use 1-c\n l = (x-self.a)*(x>self.a)*(1.-c) + self.b*c # length of load portion to the left of x\n d = (x-(self.a+self.b))*c # distance from right end of load portion to x\n return ((self.w1*(d+l/2.)) + (self.w2-self.w1)*(l/self.b)*(d+l/3.)/2.)*l\n \n @property\n def vpts(self):\n return (self.a,self.a+self.b,1 if self.w1==self.w2 else 2)\n \n @property\n def mpts(self):\n return (self.a,self.a+self.b,2 if self.w1==self.w2 else 3)", "Load Type CM\nLoad type CM represents a single concentrated moment of magnitude M a distance a from the j-end:", "class CM(MemberLoad):\n \n TABLE_MAP = {'M':'W1','a':'A'}\n \n def __init__(self,L,M,a):\n self.L = L\n self.M = M\n self.a = a\n \n def fefs(self):\n L = float(self.L)\n A = self.a\n B = L - A\n M = self.M\n m2 = B*(2.*A - B)*M/L**2\n m5 = A*(2.*B - A)*M/L**2\n v1 = 
(M + m2 + m5)/L\n v4 = -v1\n return EF(0,v1,m2,0,v4,m5)\n \n def shear(self,x):\n return x*0.\n \n def moment(self,x):\n return -self.M*(x>self.a)\n \n @property\n def vpts(self):\n return (None,None,0)\n \n @property\n def mpts(self):\n return (self.a-self.EPSILON,self.a+self.EPSILON,1)\n \n def __repr__(self):\n return '{}(L={},M={},a={})'.format(self.__class__.__name__,self.L,self.M,self.a)", "makeMemberLoad() factory function\nFinally, the function makeMemberLoad() will create a load object of the correct type from \nthe data in dictionary data. That dictionary would normally contain the data from one\nrow of the input data file table.", "def makeMemberLoad(L,data,ltype=None):\n def all_subclasses(cls):\n _all_subclasses = []\n for subclass in cls.__subclasses__():\n _all_subclasses.append(subclass)\n _all_subclasses.extend(all_subclasses(subclass))\n return _all_subclasses\n\n if ltype is None:\n ltype = data.get('TYPE',None)\n for c in all_subclasses(MemberLoad):\n if c.__name__ == ltype and hasattr(c,'TABLE_MAP'):\n MAP = c.TABLE_MAP\n argv = {k:data[MAP[k]] for k in MAP.keys()}\n return c(L,**argv)\n raise Exception('Invalid load type: {}'.format(ltype))\n\n##test:\nml = makeMemberLoad(12,{'TYPE':'UDL', 'W1':10})\nml, ml.fefs()\n\ndef unmakeMemberLoad(load):\n type = load.__class__.__name__\n ans = {'TYPE':type}\n for a,col in load.TABLE_MAP.items():\n ans[col] = getattr(load,a)\n return ans\n\n##test:\nunmakeMemberLoad(ml)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/bcc/cmip6/models/sandbox-1/landice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: BCC\nSource ID: SANDBOX-1\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:39\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'bcc', 'sandbox-1', 'landice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --&gt; Mass Balance\n7. Ice --&gt; Mass Balance --&gt; Basal\n8. Ice --&gt; Mass Balance --&gt; Frontal\n9. Ice --&gt; Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. 
Ice Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify how ice albedo is modelled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Atmospheric Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Oceanic Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the ocean and ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of land ice code\n2.1. 
Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs an adaptive grid being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. 
Base Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe base resolution (in metres), before any adaption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Resolution Limit\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Projection\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of glaciers in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of glaciers, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. 
Dynamic Areal Extent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes the model include a dynamic glacial extent?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Grounding Line Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.3. Ice Sheet\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice sheets simulated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.4. Ice Shelf\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice shelves simulated?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Ice --&gt; Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so, details of this model, such as its resolution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Ice --&gt; Mass Balance --&gt; Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Ice --&gt; Mass Balance --&gt; Frontal\nDescription of calving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Melting\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Ice --&gt; Dynamics\n**\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of ice sheet and ice shelf dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Approximation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nApproximation type used in modelling ice dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Adaptive Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.4. Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. 
If the timestep is adaptive, then state a representative timestep.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
eds-uga/csci1360e-su17
assignments/A3/A3_Q1.ipynb
mit
[ "Q1\nLoops, loops, and more loops. And writing basic functions.\nPart A\nWrite a function:\n\nnamed add_one\ntakes 1 argument, a list\nreturns 1 value, a list\n\nIn this function, you should loop through each element of the argument list, add 1 to it, and then return the new list.\nFor example, add_one([1, 2, 3]) should return [2, 3, 4]. No imports!\nHINT: I would highly recommend constructing a new list, rather than updating the old one.", "import numpy as np\nnp.random.seed(7584)\n\nlist1 = np.random.randint(-100, 100, 10)\nactual1 = list1 + 1\npred1 = add_one(list1.tolist())\nassert set(pred1) == set(actual1.tolist())\n\nnp.random.seed(68527)\n\nlist2 = np.random.randint(-100, 100, 100)\nactual2 = list2 + 1\npred2 = add_one(list2.tolist())\nassert set(pred2) == set(actual2.tolist())", "Part B\nWrite a function that adds an arbitrary integer quantity to every element in a list. This is just like Part A, except instead of adding 1 to each element, you add some positive integer x to each one.\nThis function should:\n\nbe named add_to_list\ntake 2 arguments: the list, and the number\nreturn a list: the new list that contains the added elements\n\nIn this function, you should loop through each element of the argument list, add the second argument to it, and then return the new list.\nFor example, add_to_list([1, 2, 3], 1) should return [2, 3, 4], just like in Part A. add_to_list([1, 2, 3], 5) should return [6, 7, 8]. 
No imports!\nHINT: I would highly recommend constructing a new list, rather than updating the old one.", "import numpy as np\nnp.random.seed(2846)\n\nlist1 = np.random.randint(-100, 100, 10)\nnum1 = 15\nactual1 = list1 + num1\npred1 = add_to_list(list1.tolist(), num1)\nassert set(pred1) == set(actual1.tolist())\n\nnp.random.seed(68527)\n\nlist2 = np.random.randint(-100, 100, 100)\nnum2 = np.random.randint(0, 100)\nactual2 = list2 + num2\npred2 = add_to_list(list2.tolist(), num2)\nassert set(pred2) == set(actual2.tolist())", "Part C\nWrite a function:\n\nnamed list_of_positive_indices\ntakes 1 argument, a list\nreturns 1 value, a list\n\nIn this function, you should loop through each element of the argument list. If the element is positive, then you should append the index number of the positive element to a new list. If not, skip that index (i.e. do nothing). Once the loop is finished, return the new list.\nFor example, list_of_positive_indices([1, -1, 0, 3]) should return [0, 3]. No imports!\nHINT: Instead of doing a simple for element in list: loop, try using the range() function in the loop header.", "import numpy as np\nnp.random.seed(986)\n\nlist1 = np.random.randint(-100, 100, 10)\nindices1 = np.arange(list1.shape[0])[list1 > 0].tolist()\npred1 = list_of_positive_indices(list1.tolist())\nassert set(pred1) == set(indices1)\n\nnp.random.seed(578752)\n\nlist2 = np.random.randint(-100, 100, 100)\nindices2 = np.arange(list2.shape[0])[list2 > 0].tolist()\npred2 = list_of_positive_indices(list2.tolist())\nassert set(pred2) == set(indices2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session11/Day2/FindingSources.ipynb
mit
[ "Background Subtraction and Source Detection\nVersion 0.1\nBy Yusra AlSayyad (Princeton University)\nNote: for portability, the examples in this notebook are one-dimensional and avoid using libraries. In practice on real astronomical images, I recommend using a library for astronomical image processing, e.g. AstroPy or the LSST Stack. \nBackground Estimation\nA prerequisite to this notebook is the introductionToBasicStellarPhotometry.ipynb notebook. We're going to use the same single stellar simulation, but with increasingly complex backgrounds.\nFirst, set up the simulation and necessary imports", "import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.stats import norm\nfrom matplotlib.ticker import MultipleLocator\n\n%matplotlib notebook\n\ndef pixel_plot(pix, counts, fig=None, ax=None): \n '''Make a pixelated 1D plot'''\n if fig is None and ax is None:\n fig, ax = plt.subplots()\n \n ax.step(pix, counts, \n where='post')\n \n ax.set_xlabel('pixel number')\n ax.set_ylabel('relative counts')\n ax.xaxis.set_minor_locator(MultipleLocator(1))\n ax.xaxis.set_major_locator(MultipleLocator(5))\n fig.tight_layout()\n return fig, ax\n\n# Define your PSF function phi()\n# It is sufficient to copy and paste from\n# your introductionToBasicStellarPhotometry notebook\n\ndef phi(x, mu, fwhm):\n \"\"\"Evaluate the 1d PSF N(mu, sigma^2) along x\n \n Parameters\n ----------\n x : array-like of shape (n_pixels,)\n detector pixel number\n mu : float\n mean position of the 1D star\n fwhm : float\n Full-width half-maximum of the stellar profile on the detector\n \n Returns\n -------\n flux : array-like of shape (n_pixels,)\n Flux in each pixel of the input array\n \"\"\"\n # complete\n \n return flux\n\n\n# Define your image simulation function too\n# It is sufficient to copy and paste from\n# your introductionToBasicStellarPhotometry notebook\n# Note that the background S should now be supplied as \n# an array of len(x) or a constant. 
\n\ndef simulate(x, mu, fwhm, S, F):\n \"\"\"simulate a noisy stellar signal\n \n Parameters\n ----------\n x : array-like\n detector pixel number\n mu : float\n mean position of the 1D star\n fwhm : float\n Full-width half-maximum of the stellar profile on the detector\n S : float or array-like of len(x)\n Sky background for each pixel\n F : float\n Total stellar flux\n \n Returns\n -------\n noisy_counts : array-like (same shape as x)\n the (noisy) number of counts in each pixel\n \"\"\"\n # complete\n \n return noisy_counts \n", "Problem 1) Simple 1-D Background Estimation\nProblem 1.1) Estimate the background as a constant offset (order = 0)\nFor this problem we will use a simulated star with a constant background offset of $S=100$.\nBackground estimation is typically done by inspecting the distribution of counts in pixel bins. First inspect the distribution of counts, and pick an estimator for the background that is robust to the star (reduces bias from the star). Remember that we haven't done detection yet and don't know where the sources are.", "# simulate the star\nx = np.linspace(0, 100)\nmu = 35\nS = 100\nfwhm = 5\nF = 500\n\nfig = plt.figure(figsize=(8,4))\nax = plt.subplot()\nsim_star = # complete\npixel_plot(x, sim_star, fig=fig, ax=ax)\n\n\n# plot and inspect histogram\n\nfig = plt.figure(figsize=(6,4))\n# complete\n\nS_estimate = # complete\n\nprint('My background estimate = {:.4f}'.format(S_estimate))\nprint('The mean pixel count = {:.4f}'.format( # complete\n\n# plot your background model over the \"image\"\n\nfig, ax = pixel_plot(x, sim_star)\npixel_plot(x, # complete, fig=fig, ax=ax)", "Problem 1.2) Estimate the background as a ramp/line (order = 1)\nNow let's simulate a slightly more complicated background, a linear ramp: $y = 3x + 100$. First simulate the same star with the new background. 
Then we're going to fit it using the following steps:\n* Bin the image\n* Use your robust estimator to estimate the background value per bin center\n* Fit these bin centers with a model\n\nA common simple model that astronomers use is the Chebyshev polynomial. Chebyshevs have some very nice properties that prevent ringing at the edges of the fit window. Another popular way to \"model\" the bin centers is non-parametrically via interpolation.", "# Double check that your simulate function can take S optionally as array-like\n\n# Create and plot the image with S = 3*x + 100\n\nsim_star = # complete\npixel_plot(x, sim_star)\n\n# bin the image in 20-pixel bins \n\n# complete\nbin_centers = # complete\nbin_values = # complete \n\n# Fit the bin_values vs bin_centers with a 1st-order chebyshev polynomial\n# Evaluate your model for the full image\n# hint: look up np.polynomial.chebyshev.chebfit and np.polynomial.chebyshev.chebeval\n\n# complete\n\n# Replot the image: \nfig, ax = pixel_plot(x, sim_star)\n# binned values\nax.plot(bin_centers, bin_values, 'o')\n\n# Overplot your background model:\n# complete\n\n# Finally plot your background subtracted image:\n\n# complete", "Problem 1.3) Estimate a more realistic background (still in 1D)\nNow repeat the exercise in problem 1.2 with a more complex background.", "SIGMA_PER_FWHM = 2*np.sqrt(2*np.log(2))\n\nfwhm = 5\nx = np.linspace(0, 100)\nbackground = 1000*norm.pdf(x, 50, 18) + 100*norm.pdf(x, 20, fwhm/SIGMA_PER_FWHM) + 100*norm.pdf(x, 60, fwhm/SIGMA_PER_FWHM)\n\nsim_star3 = simulate(x=x, mu=35, fwhm=fwhm, S=background, F=200)\nfig, ax = pixel_plot(x, sim_star3)
What bin size did you pick?", "bin_centers = # complete\n\nbin_values = # complete\n\n# overplot the binned estimates:\nax.plot(bin_centers, bin_values, 'o')", "1.3.2) Spatially model the binned estimates (bin_values vs bin_centers) as a chebyshev polynomial.\nEvaluate your model on the image grid and overplot. (What degree/order did you pick?)", "# complete", "1.3.3) Subtract off the model and plot the \"background-subtracted image.\"", "# Plot the background subtracted image\n", "Problem 2) Finding Sources\nNow that we have a background-subtracted image, let’s look for sources. In the lecture we focused on the matched filter interpretation. Here we will go into the hypothesis testing and maximum likelihood interpretations. \nMaximum likelihood interpretation:\nAssume that we know there is a point source somewhere in this image. We want to find the pixel that has the maximum likelihood of having a point source centered on it. Recall from Session 10, the probability for an individual observation $X_i$ is:\n$$P(X_i) = \\frac{1}{\\sqrt{2\\pi\\sigma_i^2}} \\exp{-\\frac{(X_i - y_i)^2}{2\\sigma_i^2}}$$\nHere: $X_i$ is the pixel value of pixel $i$ in the image and $y_i$ is the model prediction for that pixel. 
\nThe model in this case is your simulate() function from the IntroductionToBasicStellarPhotometry.ipynb notebook: the PSF evaluated at a distance from the center multiplied by the flux amplitude: $F * \\phi(x - x_{center}) + S$, where $F$ is the flux amplitude, $\\phi$ is the PSF profile (a function of position), and $S$ is the background.\nPlug it in:\n$$P(X_i) = \\frac{1}{\\sqrt{2\\pi\\sigma_i^2}} \\exp{-\\frac{(X_i - (F * \\phi_i(x_{center}) + S))^2}{2\\sigma_i^2}}$$\nHypothesis test interpretation:\nIf I were teaching source detection to my non-scientist, college stats 101 students, I'd frame the problem like this:\nPretend you have an infinitely large population of pixels. Say I know definitively that this arbitrarily large population of pixels is drawn from $N(0,100)$ (i.e. it has a standard deviation of 10). I have another sample of 13 pixels. I want to test the hypothesis that those 13 pixels were drawn from the $N(0,100)$ population too. \nTest the hypothesis that your subsample of 13 pixels was drawn from the larger sample.\n* $H_0$: $\\mu = 0$\n* $H_A$: $\\mu > 0$\n$$z = \\frac{\\bar{x} - \\mu}{\\sigma / \\sqrt{n}} $$\n$$z = \\frac{\\sum{x}/13 - 0}{10 /\\sqrt{13}} $$\nOK, if this is coming back now, let's replace this with our real estimator for PSF flux, which is a weighted mean of the pixels where the weights are the PSF $\\phi_i$. Whenever I forget the formulas for weighted means, I consult the wikipedia page.\nNow tweak it for a weighted mean (PSF flux):\n$$ z = \\frac{\\sum{\\phi_i x_i} - \\mu} {\\sqrt{ \\sum{\\phi_i^2 \\sigma_i^2}}} $$\nwhere the denominator is from the variance estimate of a weighted mean. For constant $\\sigma$ it reduces to $\\sigma_{\\bar{x}}^2 = \\sigma^2 \\sum{\\phi^2_i}$, and for a constant $\\phi$ this reduces to $\\sigma_{\\bar{x}}^2 = \\sigma^2 /n$, the denominator in the simple mean example above. Replace $\\mu=0$ again. 
\n$$ z = \\frac{\\sum{\\phi_i x_i}} {\\sqrt{ \\sum{\\phi_i^2 \\sigma_i^2}}} $$\nOur detection map is just the numerator for each pixel! We deal with the denominator later when choosing the thresholding, but we could just as easily divide the whole image by the denominator and have a z-score image!\n2.0) Plot the problem image", "# set up simulation\nx = np.linspace(0, 100)\nmu = 35\nS = 100\nfwhm = 5\nF = 300\n\nfig = plt.figure(figsize=(8,4))\nax = plt.subplot()\nsim_star = simulate(x, mu=mu, fwhm=fwhm, S=S, F=F)\n\n# To simplify this, pretend we know for sure that the background = 100\n# Plot the background-subtracted image\nimage = sim_star - 100\npixel_plot(x, image, fig=fig, ax=ax)", "2.1) Make a kernel for the PSF.\nProperties of kernels: They're centered at x=0 (which also means that they have an odd number of pixels) and sum up to 1. You can use your phi().", "xx = # complete\nkernel = # complete \n\n# plot your kernel", "2.2) Correlate the image with the PSF kernel,\nand plot the result.\nWhat are the tradeoffs when choosing the size of your PSF kernel? What happens if it's too big? What happens if it's too small? \nhint: scipy.signal.convolve", "# detection_image = # complete\n\n# plot your detection image \n# Note: pay attention to how scipy.signal.convolve handles the edges. ", "2.3) Detect pixels\nfor which the null hypothesis that there's no source centered there is ruled out at the 5$\\sigma$ level.", "# Using a robust estimator for the detection image standard deviation,\n# Compute the 5 sigma threshold\n\nthreshold_value = # complete\nprint('5 sigma threshold value = {:.4f}'.format(threshold_value))
In the meantime, compute the flux like we did in introductionToStellarPhotometry assuming the input center.", "# complete", "Challenge problem A\nCombine problems 1 and 2 to iterate background estimation and source detection, masking the pixels with detections. Use the more complex background from 1.3. \nChallenge problem B\nRepeat challenge problem A in two dimensions." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.19/_downloads/36ac16a286b47b66f1b51a959c65b5b9/plot_stats_cluster_time_frequency_repeated_measures_anova.ipynb
bsd-3-clause
[ "%matplotlib inline", "Mass-univariate two-way repeated measures ANOVA on single trial power\nThis script shows how to conduct a mass-univariate repeated measures\nANOVA. As the model to be fitted assumes two fully crossed factors,\nwe will study the interplay between perceptual modality\n(auditory VS visual) and the location of stimulus presentation\n(left VS right). Here we use single trials as replications\n(subjects) while iterating over time slices plus frequency bands\nto fit our mass-univariate model. For the sake of simplicity we\nwill confine this analysis to a single channel which we know\nexposes a strong induced response. We will then visualize\neach effect by creating a corresponding mass-univariate effect\nimage. We conclude with accounting for multiple comparisons by\nperforming a permutation clustering test using the ANOVA as\nclustering function. The final results will be compared to multiple\ncomparisons using False Discovery Rate correction.", "# Authors: Denis Engemann <denis.engemann@gmail.com>\n# Eric Larson <larson.eric.d@gmail.com>\n# Alexandre Gramfort <alexandre.gramfort@inria.fr>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.time_frequency import tfr_morlet\nfrom mne.stats import f_threshold_mway_rm, f_mway_rm, fdr_correction\nfrom mne.datasets import sample\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'\ntmin, tmax = -0.2, 0.5\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\ninclude = []\nraw.info['bads'] += ['MEG 2443'] # bads\n\n# picks MEG gradiometers\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,\n stim=False, include=include, exclude='bads')\n\nch_name = 'MEG 1332'\n\n# Load conditions\nreject = dict(grad=4000e-13, 
eog=150e-6)\nevent_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax,\n picks=picks, baseline=(None, 0), preload=True,\n reject=reject)\nepochs.pick_channels([ch_name]) # restrict example to one channel", "We have to make sure all conditions have the same counts, as the ANOVA\nexpects a fully balanced data matrix and does not forgive imbalances that\ngenerously (risk of type-I error).", "epochs.equalize_event_counts(event_id)\n\n# Factor to down-sample the temporal dimension of the TFR computed by\n# tfr_morlet.\ndecim = 2\nfreqs = np.arange(7, 30, 3) # define frequencies of interest\nn_cycles = freqs / freqs[0]\nzero_mean = False # don't correct morlet wavelet to be of mean zero\n# To have a true wavelet zero_mean should be True but here for illustration\n# purposes it helps to spot the evoked response.", "Create TFR representations for all conditions", "epochs_power = list()\nfor condition in [epochs[k] for k in event_id]:\n this_tfr = tfr_morlet(condition, freqs, n_cycles=n_cycles,\n decim=decim, average=False, zero_mean=zero_mean,\n return_itc=False)\n this_tfr.apply_baseline(mode='ratio', baseline=(None, 0))\n this_power = this_tfr.data[:, 0, :, :] # we only have one channel.\n epochs_power.append(this_power)", "Setup repeated measures ANOVA\nWe will tell the ANOVA how to interpret the data matrix in terms of factors.\nThis is done via the factor levels argument, which is a list of the number of\nfactor levels for each factor.", "n_conditions = len(epochs.event_id)\nn_replications = epochs.events.shape[0] // n_conditions\n\nfactor_levels = [2, 2] # number of levels in each factor\neffects = 'A*B' # this is the default signature for computing all effects\n# Other possible options are 'A' or 'B' for the corresponding main effects\n# or 'A:B' for the interaction effect only (this notation is borrowed from the\n# R formula language)\nn_freqs = len(freqs)\ntimes = 1e3 * epochs.times[::decim]\nn_times = len(times)", 
"Now we'll assemble the data matrix and swap axes so the trial replications\nare the first dimension and the conditions are the second dimension.", "data = np.swapaxes(np.asarray(epochs_power), 1, 0)\n# reshape last two dimensions in one mass-univariate observation-vector\ndata = data.reshape(n_replications, n_conditions, n_freqs * n_times)\n\n# so we have replications * conditions * observations:\nprint(data.shape)", "While the iteration scheme used above for assembling the data matrix\nmakes sure the first two dimensions are organized as expected (with A =\nmodality and B = location):\n.. table:: Sample data layout\n===== ==== ==== ==== ====\n trial A1B1 A1B2 A2B1 A2B2\n ===== ==== ==== ==== ====\n 1 1.34 2.53 0.97 1.74\n ... ... ... ... ...\n 56 2.45 7.90 3.09 4.76\n ===== ==== ==== ==== ====\nNow we're ready to run our repeated measures ANOVA.\nNote. As we treat trials as subjects, the test only accounts for\ntime-locked responses despite the 'induced' approach.\nFor an analysis of induced power at the group level, averaged TFRs\nare required.", "fvals, pvals = f_mway_rm(data, factor_levels, effects=effects)\n\neffect_labels = ['modality', 'location', 'modality by location']\n\n# let's visualize our effects by computing f-images\nfor effect, sig, effect_label in zip(fvals, pvals, effect_labels):\n plt.figure()\n # show naive F-values in gray\n plt.imshow(effect.reshape(8, 211), cmap=plt.cm.gray, extent=[times[0],\n times[-1], freqs[0], freqs[-1]], aspect='auto',\n origin='lower')\n # create mask for significant Time-frequency locations\n effect = np.ma.masked_array(effect, [sig > .05])\n plt.imshow(effect.reshape(8, 211), cmap='RdBu_r', extent=[times[0],\n times[-1], freqs[0], freqs[-1]], aspect='auto',\n origin='lower')\n plt.colorbar()\n plt.xlabel('Time (ms)')\n plt.ylabel('Frequency (Hz)')\n plt.title(r\"Time-locked response for '%s' (%s)\" % (effect_label, ch_name))\n plt.show()", "Account for multiple comparisons using FDR versus permutation clustering 
test\nFirst we need to slightly modify the ANOVA function to be suitable for\nthe clustering procedure. We also want to set some defaults.\nLet's first override effects to confine the analysis to the interaction", "effects = 'A:B'", "A stat_fun must deal with a variable number of input arguments.\nInside the clustering function each condition will be passed as a flattened\narray, necessitated by the clustering procedure. The ANOVA however expects an\ninput array of dimensions: subjects X conditions X observations (optional).\nThe following function catches the list input and swaps the first and\nthe second dimension and finally calls the ANOVA function.", "def stat_fun(*args):\n return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,\n effects=effects, return_pvals=False)[0]\n\n\n# The ANOVA returns a tuple of f-values and p-values; we will pick the former.\npthresh = 0.001 # set threshold rather high to save some time\nf_thresh = f_threshold_mway_rm(n_replications, factor_levels, effects,\n pthresh)\ntail = 1 # f-test, so tail > 0\nn_permutations = 256 # Save some time (the test won't be too sensitive ...)\nT_obs, clusters, cluster_p_values, h0 = mne.stats.permutation_cluster_test(\n epochs_power, stat_fun=stat_fun, threshold=f_thresh, tail=tail, n_jobs=1,\n n_permutations=n_permutations, buffer_size=None)", "Create new stats image with only significant clusters:", "good_clusters = np.where(cluster_p_values < .05)[0]\nT_obs_plot = np.ma.masked_array(T_obs,\n np.invert(clusters[np.squeeze(good_clusters)]))\n\nplt.figure()\nfor f_image, cmap in zip([T_obs, T_obs_plot], [plt.cm.gray, 'RdBu_r']):\n plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],\n freqs[0], freqs[-1]], aspect='auto',\n origin='lower')\nplt.xlabel('Time (ms)')\nplt.ylabel('Frequency (Hz)')\nplt.title(\"Time-locked response for 'modality by location' (%s)\\n\"\n \" cluster-level corrected (p <= 0.05)\" % ch_name)\nplt.show()", "Now using FDR:", "mask, _ = 
fdr_correction(pvals[2])\nT_obs_plot2 = np.ma.masked_array(T_obs, np.invert(mask))\n\nplt.figure()\nfor f_image, cmap in zip([T_obs, T_obs_plot2], [plt.cm.gray, 'RdBu_r']):\n plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],\n freqs[0], freqs[-1]], aspect='auto',\n origin='lower')\n\nplt.xlabel('Time (ms)')\nplt.ylabel('Frequency (Hz)')\nplt.title(\"Time-locked response for 'modality by location' (%s)\\n\"\n \" FDR corrected (p <= 0.05)\" % ch_name)\nplt.show()", "Both cluster-level and FDR correction help get rid of\nthe potential false positives we saw in the naive f-images." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
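The MNE record above ends by comparing cluster-level correction against False Discovery Rate correction via `fdr_correction`. As a library-free illustration of the idea, here is a minimal sketch of the Benjamini-Hochberg step-up procedure in plain Python; the function name `fdr_bh` and the example p-values are our own additions for illustration, not part of the notebook.

```python
def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up FDR control: return a boolean reject mask."""
    m = len(pvals)
    # Sort p-values while remembering their original positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha ...
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k_max = rank
    # ... and reject every hypothesis whose p-value ranks at or below k.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

print(fdr_bh([0.001, 0.008, 0.039, 0.041, 0.042, 0.06]))
# → [True, True, False, False, False, False]
```

With six tests, only the two smallest p-values fall under their rank-scaled thresholds (1/6 · 0.05 ≈ 0.0083 and 2/6 · 0.05 ≈ 0.0167); this per-hypothesis masking is the same idea `fdr_correction` applies to the naive f-image.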
KJE2001/seminars
04_operators_and_commutators.ipynb
mit
[ "<figure>\n <IMG SRC=\"gfx/Logo_norsk_pos.png\" WIDTH=100 ALIGN=\"right\">\n</figure>\n\nOperators and commutators\nRoberto Di Remigio, Luca Frediani\nWe will be exercising our knowledge of operators and commutator algebra. These are extremely useful exercises, as you\nwill see these types of manipulations recurring throughout the rest of the course.\nA note on notation:\n\nan operator will be denoted by putting a hat on top of any letter:\n \\begin{equation}\n \\hat{A},\\,\\hat{O},\\,\\hat{b},\\,\\hat{\\gamma}\n \\end{equation}\nthe commutator of two operators is defined as:\n \\begin{equation}\n [\\hat{A}, \\hat{B}] = \\hat{A}\\hat{B} - \\hat{B}\\hat{A}\n \\end{equation}\nthe position and momentum operators are defined as:\n \\begin{equation}\n \\hat{x}_i = x_i\\cdot \\quad\\quad \\hat{p}_i = -\\mathrm{i}\\hbar\\frac{\\partial}{\\partial x_i}\n \\end{equation}\n where $i$ refers to any of the three Cartesian components, i.e. $i = x, y, z$\nthe Canonical Commutation Relations (CCR) are:\n \\begin{alignat}{3}\n [x_i, x_j] = 0; \\quad& [p_i, p_j] = 0; \\quad& [x_i, p_j] = \\mathrm{i}\\hbar \\delta_{ij}\n \\end{alignat}\n where the Kronecker $\\delta$ symbol is defined as:\n \\begin{equation}\n \\delta_{ij} = \n \\begin{cases}\n 1 & \\text{if } i = j \\\n 0 & \\text{if } i \\neq j\n \\end{cases}\n \\end{equation}\nDirac braket notation. We will interpret the following symbols as:\n \\begin{equation}\n \\langle \\psi | \\phi \\rangle = \\int \\mathrm{d} \\mathbf{r} \\psi^*(\\mathbf{r})\\phi(\\mathbf{r})\n \\end{equation}\n \\begin{equation}\n \\langle \\psi | \\hat{A} | \\phi \\rangle = \\int\\mathrm{d} \\mathbf{r} \\psi^*(\\mathbf{r})\\hat{A}\\phi(\\mathbf{r})\n \\end{equation}\n\nUsing SymPy\nSymPy is a Python library for symbolic mathematics. 
It can be used to evaluate derivatives, definite and indefinite integrals, differential equations and much more.\nAs an example, the following code will evaluate the derivative of $\\exp(x^2)$ and print it to screen:\nPython\nfrom sympy import *\nx, y, z = symbols('x y z')\ninit_printing(use_unicode=True)\ndiff(exp(x**2), x)", "from sympy import *\n# Define symbols\nx, y, z = symbols('x y z')\n# We want results to be printed to screen\ninit_printing(use_unicode=True)\n# Calculate the derivative with respect to x\ndiff(exp(x**2), x)", "There is an extensive tutorial that you can refer to. Another useful example is the calculation\nof definite and indefinite integrals using SymPy. Consider the following code snippet:\n```Python\n# An indefinite integral\nintegrate(cos(x), x)\n```\nThis will calculate the primitive function of $\\cos(x)$:\n\\begin{equation}\n\\int \\cos(x)\\mathrm{d}x = \\sin(x) + C\n\\end{equation}", "integrate(cos(x), x)", "This code snippet will instead calculate the definite integral of the same function\nin a given interval:\n\\begin{equation}\n\\int_{-\\pi/2}^{\\pi/2} \\cos(x)\\mathrm{d}x = [\\sin(x)]_{-\\pi/2}^{\\pi/2} = 2\n\\end{equation}\n```Python\n# A definite integral\nintegrate(cos(x), (x, -pi/2., pi/2.))\n```", "integrate(cos(x), (x, -pi/2., pi/2.))", "SymPy is quite powerful. It can handle expressions with multiple variables and be used to simplify complicated expressions. You are encouraged to experiment with SymPy whenever needed in the following exercises.\nExercise 1: The importance of commuting\nLet us have two operators $\\hat{A}$ and $\\hat{B}$. 
Further assume that their commutator\n is known to be: $[\\hat{A}, \\hat{B}] = c$, where $c$ is a scalar (a complex number in\n the general case).\n Is the following true?\n \\begin{equation}\n \\langle \\psi| \\hat{A}\\hat{B} |\\phi \\rangle = \\langle \\psi| \\hat{B}\\hat{A} | \\phi \\rangle\n \\end{equation}\n To convince yourself, try to calculate:\n \\begin{equation}\n \\langle \\sin(x) |\\hat{x}\\hat{p}_x | \\cos(x) \\rangle; \\quad\\quad \\langle \\sin(x) |\\hat{p}_x\\hat{x} | \\cos(x) \\rangle\n \\end{equation}\nExercise 2: Commutator identities\nProve the following commutator identities:\n \\begin{align}\n &[\\hat{A}, \\hat{A}] = 0 \\\n &[\\hat{A}, \\hat{B}] = - [\\hat{B}, \\hat{A}] \\\n &[\\hat{A}+\\hat{B}, \\hat{C}] = [\\hat{A}, \\hat{C}] + [\\hat{B}, \\hat{C}] \\\n &[\\hat{A}, \\hat{B}\\hat{C}] = [\\hat{A}, \\hat{B}]\\hat{C} + \\hat{B}[\\hat{A},\\hat{C}] \\\n &[\\hat{A}, [\\hat{B}, \\hat{C}]] + [\\hat{B}, [\\hat{C}, \\hat{A}]] + [\\hat{C}, [\\hat{A}, \\hat{B}]] = 0 \n \\end{align}\nThe last one is known as Jacobi identity.\nExercise 3: Some more commutators\nHaving proved the commutator identities, calculate the following commutators:\n \\begin{align}\n &[\\hat{p}_x, \\hat{x}^2] \\\n &[\\hat{y}\\hat{p}_z - \\hat{z}\\hat{p}_y, \\hat{z}\\hat{p}_x - \\hat{x}\\hat{p}_z] \\\n &[\\hat{a}, \\hat{a}^\\dagger]\n \\end{align}\n where:\n \\begin{alignat}{2}\n \\hat{a} = \\frac{\\hat{x} + \\mathrm{i}\\hat{p}_x}{\\sqrt{2}}; \\quad & \\quad \\hat{a}^\\dagger = \\frac{\\hat{x} - \\mathrm{i}\\hat{p}_x}{\\sqrt{2}}\n \\end{alignat}\nExercise 4: Normalization\nIn quantum mechanics, physical states are represented by mathematical objects called wavefunctions. Wavefunctions are\n functions of the coordinates: $\\psi(\\mathbf{r})$. Not all functions can aspire to become wavefunctions. 
As wavefunctions represent\n probability densities, a very important requirement they must satisfy is to be normalizable:\n \\begin{equation}\n \\langle \\psi|\\psi\\rangle = \\int \\mathrm{d} \\mathbf{r} \\psi^*(\\mathbf{r}) \\psi(\\mathbf{r}) < \\infty\n \\end{equation}\n i.e. the integral above must be finite. Notice that the property of normalizability depends on the domain of the function\n and the limits of the integration above.\n Are the following functions normalizable?\n \\begin{align}\n \\psi(x) &= e^{-\\frac{x^2}{2}} \\quad x\\in[-\\infty, +\\infty] \\\n \\psi(x) &= e^{-x} \\quad x\\in[0, +\\infty] \\\n \\psi(x) &= e^{-x} \\quad x\\in[-\\infty, +\\infty] \\\n \\psi(x) &= e^{\\mathrm{i}x} \\quad x\\in[-\\infty, +\\infty] \\\n \\psi(x) &= e^{\\mathrm{i}x} \\quad x\\in[-\\pi, +\\pi]\n \\end{align}\nExercise 5: Self-adjointedness 1\nFor any operator on our vector space of functions, we can define its adjoint operator (also called Hermitian conjugate).\n Given the operator $\\hat{A}$, the operator $\\hat{A}^\\dagger$ is its adjoint if and only if, for any pair of vectors $\\psi, \\phi$ the following\n holds true:\n \\begin{equation}\n \\langle \\psi | \\hat{A} | \\phi \\rangle^* = \\langle \\phi | \\hat{A}^\\dagger | \\psi \\rangle\n \\end{equation}\n Of all the operators that can exist, a class is particularly interesting in quantum mechanics: the self-adjoint operators. \n An operator $\\hat{A}$ is said to be self-adjoint if and only if, for any pair of vectors $\\psi, \\phi$ we have:\n \\begin{equation}\n \\langle \\psi | \\hat{A} | \\phi \\rangle^* = \\langle \\phi | \\hat{A} | \\psi \\rangle\n \\end{equation}\n that is to say: $\\hat{A} = \\hat{A}^\\dagger$. Why are self-adjoint operators so important? Because they have a series of important\n properties that make them useful in representing physical observables, such as position, momentum, energy etc.\nIn this exercise, we will prove that the momentum operator is self-adjoint. 
To simplify the matter, we will just prove it for $\\hat{p}_x$.\n First let's write down explicitly $\\langle \\psi | \\hat{p}_x | \\phi \\rangle$:\n \\begin{equation}\n \\int \\mathrm{d} x \\psi^*(x)\\left[-\\mathrm{i}\\hbar \\frac{\\partial}{\\partial x} \\phi(x)\\right]\n \\end{equation}\n We then take the complex conjugate:\n \\begin{equation}\n \\int \\mathrm{d} x \\psi(x)\\left[\\mathrm{i}\\hbar \\frac{\\partial}{\\partial x} \\phi^*(x)\\right]\n \\end{equation}\n and this needs to be equal to:\n \\begin{equation}\n \\int \\mathrm{d} x \\phi^*(x)\\left[-\\mathrm{i}\\hbar \\frac{\\partial}{\\partial x} \\psi(x)\\right]\n \\end{equation}\n Is the operator $-\\mathrm{i}\\hbar\\frac{\\partial}{\\partial x}$ self-adjoint?\nWarning You can't just move $\\phi^*(x)$ to the left!! The momentum operator is a derivative with respect to $x$!\nHint 1 Use integration by parts on the last expression. Remember that we are integrating on the whole set of real numbers i.e. $\\int$ means $\\int_{-\\infty}^{+\\infty}$.\nHint 2 The functions in our vector space go to zero at infinity, so that $[\\psi(x)\\phi^*(x)]_{-\\infty}^{+\\infty}= 0$\nExercise 6: Self-adjointedness 2\nNow that you know the tricks of the trade, which of the following operators are self-adjoint?\n \\begin{align}\n & \\hat{p}_x^2 = -\\hbar^2 \\frac{\\partial^2}{\\partial x^2} \\\n & \\hat{l}_x = \\hat{y}\\hat{p}_z - \\hat{z}\\hat{p}_y \\\n & \\hat{a} = \\frac{\\hat{x} + \\mathrm{i}\\hat{p}_x}{\\sqrt{2}} \\\n & \\hat{a}^\\dagger = \\frac{\\hat{x} - \\mathrm{i}\\hat{p}_x}{\\sqrt{2}}\n \\end{align}" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
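Exercise 2 of the seminar above asks for a proof of the Jacobi identity. As a quick numerical sanity check (our own addition, not part of the seminar), the identity can be verified for concrete matrices, since matrix multiplication faithfully models operator composition:

```python
def matmul(A, B):
    # 2x2 matrix product using plain lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def matsub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def commutator(A, B):
    # [A, B] = AB - BA
    return matsub(matmul(A, B), matmul(B, A))

# Three non-commuting test matrices (real versions of the Pauli matrices).
A = [[0, 1], [1, 0]]
B = [[0, -1], [1, 0]]
C = [[1, 0], [0, -1]]

# Jacobi identity: [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0
jacobi = matadd(matadd(commutator(A, commutator(B, C)),
                       commutator(B, commutator(C, A))),
                commutator(C, commutator(A, B)))
print(jacobi)  # → [[0, 0], [0, 0]]
```

The result is the zero matrix for any choice of A, B, C, because the identity follows from associativity alone, which is exactly the structure of the pen-and-paper proof.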
jweob/Reprap-Squirty
utils/bedlevel.ipynb
gpl-3.0
[ "%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\nimport numpy as np\nfrom mpl_toolkits.mplot3d import Axes3D\nimport numpy as np\nimport copy\nfrom numpy.linalg import svd\nfrom numpy import linspace, meshgrid\nfrom matplotlib.mlab import griddata\n", "Instructions for measuring a 3D printer bed\nJWEOB 27th Feb 2016\nThis notebook is for investigating how level a 3D printer bed is.\n\nRecord a gcode file that will probe a grid of locations on your print bed. You can generate one with the script below\nUpload the gcode file to the printer's SD card\nSSH into the host for the printer (I use a Raspberry Pi)\nGo to pronterface directory and run pronterface logging stdout to a file: \npython pronterface | tee printlog.txt\nConnect to the printer: connect /dev/ttyUSB0 250000\nRun gcode using: sdprint file.g\nExit: exit\nUse filezilla to connect to pi using SFTP\nCopy across text file, load it into the viewing script at the bottom of this notebook\n\nScript to generate GCODE\nTake some key parameters and create a gcode file that will tell a printer to probe a grid of locations on the bed", "probe_x_offset = 5 # i.e. probe is 5 mm to the right of the nozzle\nprobe_y_offset = -31 # i.e. probe is 31mm \"down\" from the nozzle\nprobe_z_offset = -22.5 # i.e. 
probe clicks with nozzle 22.5mm above the bed\n\nmin_y = 41\nmin_x = 5\nmax_y = 147 # Moving beyond this value after homing will crash bed\nmax_x = 130 # moving beyond this value will cause probe to miss bed\npre_travel_z = 27\nsafe_z = 32.5\n\nx_points = 10 # Number of points in x direction\ny_points = 10 # Number of points in y direction\n\nf = open('bedread8.g', 'w')\n\nf.write(\"G28\\n\") # Home first\n\nx_step = (max_x - min_x) / (x_points - 1)\ny_step = (max_y - min_y) / (y_points - 1)\n\nfor y_point in reversed(range(0, y_points)):\n for x_point in range(0, x_points):\n f.write('G1 X{0:.1f} Y{1:.1f} Z{2:.1f}\\n'.format((x_point * x_step + min_x), (y_point * y_step + min_y), (safe_z)))\n f.write('M400\\n')\n f.write('G30\\n')\n f.write('M400\\n')\n f.write('G1 Z{0:.1f}\\n'.format(pre_travel_z))\n f.write('M400\\n')\nf.write('G1 Z{0:.1f}\\n'.format(safe_z))\nf.write('M400\\n')\nf.write('M402\\n')\n \nf.close()", "Display results\nLoad the file with the results and visualize them", "f = open(\"bed_level_20160227_6.txt\")\nprintlog = f.read()\n\n# Extract values from logfile (only lines starting with 'Bed' are interesting)\nbed_values = []\nfor line in iter(printlog.splitlines()):\n if line[0:3] == \"Bed\":\n x_start = line.index(\"X\")\n y_start = line.index(\"Y\")\n z_start = line.index(\"Z\")\n\n bed_values.append([float(line[x_start+3:y_start]), \n float(line[y_start+3:z_start]), \n float(line[z_start+3:].rstrip(' \\t\\r\\n\\0\\x00\\x03'))])\n \n\ndef plot_scatter_from_points(bed_values):\n xs = np.array([event[0] for event in bed_values])\n ys = np.array([event[1] for event in bed_values])\n zs = np.array([event[2] for event in bed_values])\n fig = plt.figure()\n ax = fig.add_subplot(111, projection='3d')\n ax.scatter(xs, ys, zs, c='r', marker='o')\n plt.show()\n\nplot_scatter_from_points(bed_values)\n\ndef plot_surface_from_points(bed_values, resX=10, resY=10):\n \n \n \n # Convert scatter plot into surface plot\n # 
http://stackoverflow.com/questions/18764814/make-contour-of-scatter\n\n x = [point[0] for point in bed_values]\n y = [point[1] for point in bed_values]\n z = [point[2] for point in bed_values]\n xi = linspace(min(x), max(x), resX)\n yi = linspace(min(y), max(y), resY)\n Z = griddata(x, y, z, xi, yi, interp='linear')\n \n X, Y = meshgrid(xi, yi)\n \n fig = plt.figure()\n ax = fig.add_subplot(111, projection='3d')\n surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=0.1)\n fig.colorbar(surf, shrink=0.5, aspect=5)\n \n plt.show()\n \ndef plot_contour_from_points(bed_values, resX=10, resY=10):\n \n \n \n # Convert scatter plot into surface plot\n # http://stackoverflow.com/questions/18764814/make-contour-of-scatter\n\n x = [point[0] for point in bed_values]\n y = [point[1] for point in bed_values]\n z = [point[2] for point in bed_values]\n xi = linspace(min(x), max(x), resX)\n yi = linspace(min(y), max(y), resY)\n Z = griddata(x, y, z, xi, yi, interp='linear')\n \n X, Y = meshgrid(xi, yi)\n \n plt.figure()\n CS = plt.contour(X, Y, Z)\n plt.clabel(CS, inline=1, fontsize=10)\n\n \"\"\"\n fig = plt.figure()\n ax = fig.add_subplot(111, projection='3d')\n surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=0.1)\n fig.colorbar(surf, shrink=0.5, aspect=5)\n \"\"\"\n plt.show()\n \n \nplot_surface_from_points(bed_values)\nplot_contour_from_points(bed_values)\n", "Need to remove the tilt from this. 
Get the best fit orthogonal distance regression plane using approach here: http://stackoverflow.com/questions/12299540/plane-fitting-to-4-or-more-xyz-points", "def planeFit(points):\n \"\"\"\n p, n = planeFit(points)\n\n Given an array, points, of shape (d,...)\n representing points in d-dimensional space,\n fit a d-dimensional plane to the points.\n Return a point, p, on the plane (the point-cloud centroid),\n and the normal, n.\n \"\"\"\n\n points = np.reshape(points, (np.shape(points)[0], -1)) # Collapse trailing dimensions\n assert points.shape[0] <= points.shape[1], \"There are only {} points in {} dimensions.\".format(points.shape[1], points.shape[0])\n ctr = points.mean(axis=1)\n x = points - ctr[:,np.newaxis]\n M = np.dot(x, x.T) # Could also use np.cov(x) here.\n return ctr, svd(M)[0][:,-1]\n\n\ncentroid, normal = planeFit(np.transpose(np.array(bed_values)))\nmag = np.sqrt(normal.dot(normal)) # Should be 1 if this is a unit vector\nprint(centroid, normal)\n\n# Plot the plane to make sure\ndef get_height_on_plane(centroid, normal, x, y):\n # plane is of form ax + by +cz +d = 0\n # Normal vector of plane is [a,b,c]T, so just need to find d\n # d = -(ax0 + by0 + cz0)\n d = - np.sum((np.array(centroid) * np.array(normal)))\n [a, b, c] = normal\n z = -(a * x + b * y + d) / c\n return z\n\nbest_fit_plane = []\nfor point in bed_values:\n best_fit_plane.append([\n point[0],\n point[1],\n get_height_on_plane(centroid, normal, point[0], point[1])\n ]) \n\nplot_surface_from_points(best_fit_plane)", "What is the angle between the z axis and the normal vector? We need to take the dot product of the normal with the z unit vector. http://www.intmath.com/vectors/7-vectors-in-3d-space.php", "z_unit = np.array([0,0,1])\ndot_product = normal.dot(z_unit)\ntilt_angle = np.arccos(dot_product) # in radians\nprint(\"Bed is tilted by {0:.3f} degrees\".format(np.degrees(tilt_angle)))", "Now rotate the original set of points so that the best fit plane has normal equal to z axis unit vector. 
Use approach here: http://stackoverflow.com/questions/1023948/rotate-normal-vector-onto-axis-plane", "old_x_unit = np.array([1,0,0])\nold_y_unit = np.array([0,1,0])\nold_z_unit = np.array([0,0,1])\n\nnew_z_unit = np.array(normal)\nnew_y_unit = np.cross(old_x_unit, new_z_unit)\nnew_y_unit = new_y_unit / np.sqrt(np.dot(new_y_unit, new_y_unit)) # normalize to unit length\n\nnew_x_unit = np.cross(new_z_unit, new_y_unit)\nnew_x_unit = new_x_unit / np.sqrt(np.dot(new_x_unit, new_x_unit)) # normalize to unit length\n\n# For each point, create new coords\n\ncentroid_vec = np.array(centroid)\n\nrotated_bed_values = []\n\nmin_x = None\nmin_y = None\nmin_z = None\n\n\nfor point in bed_values:\n point_vec = np.array(point)\n from_centroid = point_vec - centroid_vec\n new_coords = [np.dot(from_centroid, new_x_unit),\n np.dot(from_centroid, new_y_unit),\n np.dot(from_centroid, new_z_unit)\n ]\n if min_z is None or np.dot(from_centroid, new_z_unit) < min_z:\n min_z = np.dot(from_centroid, new_z_unit)\n if min_y is None or np.dot(from_centroid, new_y_unit) < min_y:\n min_y = np.dot(from_centroid, new_y_unit)\n if min_x is None or np.dot(from_centroid, new_x_unit) < min_x:\n min_x = np.dot(from_centroid, new_x_unit)\n\n rotated_bed_values.append(new_coords)\n\n# Make minimum point z = 0\nfor point in rotated_bed_values:\n point[0] -= min_x \n point[1] -= min_y\n point[2] -= min_z\n\n \nplot_scatter_from_points(rotated_bed_values)\n\n\n\nplot_surface_from_points(rotated_bed_values, resX=20, resY=20)\nplot_contour_from_points(rotated_bed_values, resX=20, resY=20)\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
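One detail worth double-checking in the bed-levelling notebook above is that `np.arccos` returns radians, so a tilt printed as "degrees" needs an explicit conversion. Here is a dependency-free sketch of the intended computation; the helper name `tilt_degrees` and the example normal vector are made up for illustration.

```python
import math

def tilt_degrees(normal):
    """Angle in degrees between a plane normal (nx, ny, nz) and the z axis."""
    nx, ny, nz = normal
    mag = math.sqrt(nx * nx + ny * ny + nz * nz)
    # cos(theta) = (n . z_unit) / |n|; acos returns radians, so convert.
    cos_theta = nz / mag
    return math.degrees(math.acos(cos_theta))

# A bed tilted by exactly 45 degrees about the y axis has normal (1, 0, 1).
print(round(tilt_degrees((1.0, 0.0, 1.0)), 3))  # → 45.0
```

Dividing by the magnitude also makes the formula robust when the fitted normal is not exactly unit length, which pairs with the unit-vector check (`mag`) the notebook already performs.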
martinjrobins/hobo
examples/optimisation/convenience.ipynb
bsd-3-clause
[ "Convenience methods for optimisation\nThis example demonstrates how to use the convenience methods fmin and curve_fit for optimisation.\nThese methods allow you to perform simple minimisation or curve fitting outside the time-series context typically used in Pints.\nMinimisation with fmin\nIn this part of the example, we define a function f() and estimate the arguments that minimise it. For this we use fmin(), which has a similar interface to SciPy's fmin().", "import pints\n\n# Define a quadratic function f(x)\ndef f(x):\n return 1 + (x[0] - 3) ** 2 + (x[1] + 5) ** 2\n\n# Choose a starting point for the search\nx0 = [1, 1]\n\n# Find the arguments for which it is minimised\nxopt, fopt = pints.fmin(f, x0, method=pints.XNES)\nprint(xopt)\nprint(fopt)", "We can make a contour plot near the true solution to see how we're doing", "import numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 6, 100)\ny = np.linspace(-10, 0, 100)\nX, Y = np.meshgrid(x, y)\nZ = f(np.stack((X, Y)))\n\nplt.figure()\nplt.contour(X, Y, Z)\nplt.plot(xopt[0], xopt[1], 'x')\nplt.show()", "Curve fitting with curve_fit\nIn this part of the example, we fit a curve to some data, using curve_fit(), which has a similar interface to SciPy's curve_fit().", "# Define a quadratic function `y = f(x|a, b, c)`\ndef f(x, a, b, c):\n return a + b * x + c * x ** 2\n\n# Generate some noisy test data\nx = np.linspace(-5, 5, 100)\ne = np.random.normal(loc=0, scale=2, size=x.shape)\ny = f(x, 9, 3, 1) + e\n\n# Find the parameters that give the best fit\nx0 = [0, 0, 0]\nxopt, fopt = pints.curve_fit(f, x, y, x0, method=pints.XNES)\n\nprint(xopt)", "Again, we can use matplotlib to have a look at the results", "plt.figure()\nplt.xlabel('x')\nplt.ylabel('y')\nplt.plot(x, y, 'x', label='Noisy data')\nplt.plot(x, f(x, 9, 3, 1), label='Original function')\nplt.plot(x, f(x, *xopt), label='Estimated function')\nplt.legend()\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
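The Pints notebook above minimises its quadratic with `pints.fmin` using XNES, a black-box optimiser. To show what a derivative-free search does in principle, here is a tiny coordinate-search sketch in plain Python; the step-halving schedule and iteration count are arbitrary choices for this demo and are not how XNES actually works.

```python
def f(x):
    # The quadratic from the Pints example, minimised at (3, -5).
    return 1 + (x[0] - 3) ** 2 + (x[1] + 5) ** 2

def coordinate_descent(f, x0, step=1.0, iterations=60):
    """Greedy derivative-free search: try +/- step on each coordinate, shrink step."""
    x = list(x0)
    for _ in range(iterations):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                candidate = list(x)
                candidate[i] += delta
                if f(candidate) < f(x):  # accept only strict improvements
                    x = candidate
                    improved = True
        if not improved:
            step /= 2.0  # no move helped: refine the search resolution
    return x

xopt = coordinate_descent(f, [1.0, 1.0])
print([round(v, 3) for v in xopt])  # → [3.0, -5.0]
```

Like `fmin`, this only ever evaluates f, never its gradient; real optimisers such as XNES or Nelder-Mead replace the fixed axis-aligned moves with adaptive search distributions.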
GoogleCloudPlatform/python-docs-samples
people-and-planet-ai/geospatial-classification/README.ipynb
apache-2.0
[ "#@title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.", "🏭 Coal Plant ON/OFF: Predictions\n\nTime estimate: 1 hour\nCost estimate: Around $1.00 USD (free if you use \\$300 Cloud credits)\n\n\nWatch the video in YouTube<br> \n\n\nThis is an interactive notebook that contains all of the code necessary to train an ML model from satellite images for geospatial classification of whether a coal plant is on/off. \nThis is a first step introductory example of how these satellite images can be used to detect carbon pollution from power plants.\n\n💚 This is one of many machine learning how-to samples inspired from real climate solutions aired on the People and Planet AI 🎥 series.\n🙈 Using this interactive notebook\nClick the run icons ▶️ of each section within this notebook. \nThis notebook code lets you train and deploy an ML model from end-to-end. 
When you run a code cell, the code runs in the notebook's runtime, so you're not making any changes to your personal computer.\n\n🛎️ To avoid any errors, wait for each section to finish in their order before clicking the next “run” icon.\n\nThis sample must be connected to a Google Cloud project, but nothing else is needed other than your Google Cloud project.\nYou can use an existing project and the cost will be around $1.00. Alternatively, you can create a new Cloud project with cloud credits for free.\n🚴‍♀️ Steps summary\nHere's a quick summary of what you’ll go through:\n\n\nGet the training data (~15 minutes to complete, no cost for using Earth Engine):\n Extract satellite images from Earth Engine, combine it with the data that was labeled and contains lat/long coordinates from Climate TRACE in a CSV, and export to\n Cloud Storage.\n\n\nRun a custom training job (~15 minutes to complete, costs ~ $1):\n Using Tensorflow on Vertex AI Training using a pre-built training container.\n\n\nDeploy a web service to host the trained model (~7 minutes to complete, costs a few cents to build the image, and deployment cost covered by free tier):\n On\n Cloud Run\n and get predictions using the model.\n\n\nGet Predictions (a few seconds per prediction, costs covered by free tier):\n Use the web service to get predictions for new data.\n\n\nVisualize predictions (~5 minutes to complete) :\n Visualize the predictions on a map.\n\n\n(Optional) Delete the project to avoid ongoing costs.\n\n\n✨ Before you begin, you need to…\n\nDecide on creating a new\n free project\n (recommended) or using an existing one.\n Then copy the project ID and paste it in the google_cloud_project field in the \"Entering project details” section below.\n\n\n💡 If you don't plan to keep the resources that you create via this sample, we recommend creating a new project instead of selecting an existing project.\nAfter you finish these steps, you can delete the project, removing all the resources 
associated in bulk.\n\n\n\nClick here\n to enable the following APIs in your Google Cloud project:\n Earth Engine, Vertex AI, Container Registry, Cloud Build, and Cloud Run.\n\n\nMake sure that billing is enabled for your Google Cloud project,\n click here\n to learn how to confirm that billing is enabled.\n\n\nClick here\n to create a Cloud Storage bucket.\n Then copy the bucket’s name and paste it in the cloud_storage_bucket field in the “Entering project details” section below.\n\n\n\n🛎️ Make sure it's a regional bucket in a location where\nVertex AI is available.\n\n\nHave an Earth Engine account (it's FREE) or create a new one.\n To create an account, fill out the registration form here.. Please note this can take from 0-24 hours...but it's worth it! Come back to this sample after you have this.\n\n⛏️ Preparing the project environment\nClick the run ▶️ icons in order for the cells to download and install the necessary code, libraries, and resources for this solution.\n\n💡 You can optionally view the entire\ncode in GitHub.\n\n↘️ Get the code", "# Get the sample source code.\n\n!git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git ~/python-docs-samples\n%cd ~/python-docs-samples/people-and-planet-ai/geospatial-classification\n\n!pip install -r requirements.txt -c constraints.txt", "🛎️ [DON’T PANIC] It’s safe to ignore the warnings.\nWhen we pip install the requirements, there might be some warnings about conflicting dependency versions.\nFor the scope of this sample, that’s ok.\n⚠️ Restart the runtime: Running the previous cell just updated some libraries and requires to restart the runtime to load those libraries correctly.\nIn the top-left menu, click \"Runtime\" > \"Restart runtime\".\n\n✏️ Enter your Cloud project's details. 
Ensure you provide a regional bucket!", "#@title My Google Cloud resources\nproject = '' #@param {type:\"string\"}\ncloud_storage_bucket = '' #@param {type:\"string\"}\nregion = '' #@param {type:\"string\"}\n\n# Validate the inputs.\nif not project:\n raise ValueError(f\"Please provide a value for 'project'\")\nif not cloud_storage_bucket:\n raise ValueError(f\"Please provide a value for 'cloud_storage_bucket'\")\nif not region:\n raise ValueError(f\"Please provide a value for 'region'\")\n\n# Authenticate\nfrom google.colab import auth\n\nauth.authenticate_user()\nprint('Authenticated')\n\n!gcloud config set project {project}\n\n%cd ~/python-docs-samples/people-and-planet-ai/geospatial-classification", "🗺️ Authenticate to Earth Engine\nIn order to use the Earth Engine API, you'll need to have an Earth Engine account.\nTo create an account, fill out the registration form here.", "import ee\nimport google.auth\n\ncredentials, _ = google.auth.default()\nee.Initialize(credentials, project=project)", "🚏 Overview\nThis notebook leverages geospatial data from Google Earth Engine, and labeled data provided by the organization Climate TRACE. By combining these two data sources, you'll build and train a model that predicts whether or not a power plant is turned on and producing emissions.\n🛰️ Data (inputs)\nThe data in this example consists of images from a satellite called Sentinel-2, a wide-swath, high-resolution, multi-spectral imaging mission for land monitoring studies.\nWhen working with satellite data, each input image has the dimensions [width, height, bands]. Bands are measurements from specific satellite instruments for different ranges of the electromagnetic spectrum. For example, Sentinel-2 contains 🌈 13 spectral bands. If you're familiar with image classification problems, you can think of the bands as similar to an image's RGB (red, green, blue) channels. 
However, when working with satellite data we generally have more than just 3 channels.\n\n🏷️ Labels (outputs)\nFor each patch of pixels (an image of a power plant) that we give to the model, it performs binary classification, which indicates whether the power plant is on or off.\nIn this example, the output is a single number between 0 (Off) and 1 (On), representing the probability of that power plant being ON.\nModel (function)\nTL;DR\nThe model receives a patch of pixels with the power plant tower at its center. We add 16 pixels of padding on each side, creating a 33x33 patch. The model returns an ON/OFF classification.\nIn this example, we have a CSV file of labels. Each row in this file represents a power plant at a specific lat/lon and timestamp. At training time we'll prepare a dataset where each input image is a single pixel that we have a label for. We will then add padding around that image. These padded pixels will not get predictions, but will help our model to make better predictions for the center point that we have a label for.\nFor example, with a padding of 16, each 1 pixel input point would become a 33x33 image after the padding is added.\n\nThe model in this sample is trained for image patches where a power plant is located in the center, and the dimensions must be 33x33 pixels where each pixel has a constant number of bands.\n1. 🛰️ Get the training data\nThe training data in this sample comes from two places: \n\n\nThe satellite images will be extracted from Earth Engine.\n\n\nThe labels are provided in a CSV file that indicates whether a coal plant is turned on or off at a particular timestamp. \n\n\nFor each row in the CSV file, we need to extract the corresponding Sentinel image taken at that specific latitude/longitude and timestamp.
We'll export this image data, along with the corresponding label (on/off), to Cloud Storage.", "# Define constants\n\nLABEL = 'is_powered_on'\nIMAGE_COLLECTION = \"COPERNICUS/S2\"\nBANDS = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B8', 'B8A', 'B9', 'B10', 'B11', 'B12']\nSCALE = 10\nPATCH_SIZE = 16", "🏷️ Import labels\nFirst, we import the CSV file that contains the labels.", "import pandas as pd\nimport numpy as np\n\nlabels_dataframe = pd.read_csv('labeled_geospatial_data.csv')", "Each row in this dataframe represents a power plant at a particular timestamp. \nThe \"is_powered_on\" column indicates whether the coal plant was turned on (1) or off (0) at that timestamp.", "labels_dataframe.head()", "🎛️ Create train/validation splits\nBefore we can train an ML model, we need to split this data into training and validation datasets. We will do this by creating two new dataframes with a 70/30 training/validation split.", "TRAIN_VALIDATION_SPLIT = 0.7\n\ntrain_dataframe = labels_dataframe.sample(frac=TRAIN_VALIDATION_SPLIT, random_state=200)  # random_state is a seed value\nvalidation_dataframe = labels_dataframe.drop(train_dataframe.index).sample(frac=1.0)", "Merge 🏷️ labels + 🛰️ Sentinel image data\nIn Earth Engine, an ImageCollection is a stack or sequence of images. An Image is composed of one or more bands and each band has its own name, data type, scale, mask and projection. The Sentinel-2 dataset is represented as an ImageCollection, where each image in the collection is of a specific geographic location at a particular time.\nIn the cell below, we write a function to extract the Sentinel image taken at the specific latitude/longitude and timestamp for each row of our dataframe.\nWe will store all of this information as an Earth Engine Feature Collection. In Earth Engine, a Feature is an object with a geometry property storing a Geometry object, and a properties property storing a dictionary of other properties.
Groups of related Features can be combined into a FeatureCollection to enable additional operations on the entire set such as filtering, sorting, and rendering. \nWe first filter the Sentinel-2 ImageCollection by the start/end dates for a particular row in our dataframe.\nThen, using the neighborhoodToArray method we create a FeatureCollection that contains the satellite data for each band at the latitude and longitude of interest as well as a 16 pixel padding around that point.\nIn the image below you can think of the purple box as representing the lat/lon where the power plant is located. And around this pixel, we add the padding.", "from datetime import datetime, timedelta\n\ndef labeled_feature(row):\n start = datetime.fromisoformat(row.timestamp)\n end = start + timedelta(days=1)\n image = (\n ee.ImageCollection(IMAGE_COLLECTION)\n .filterDate(start.strftime(\"%Y-%m-%d\"), end.strftime(\"%Y-%m-%d\"))\n .select(BANDS)\n .mosaic()\n )\n point = ee.Feature(\n ee.Geometry.Point([row.lon, row.lat]),\n {LABEL: row.is_powered_on},\n )\n return (\n image.neighborhoodToArray(ee.Kernel.square(PATCH_SIZE))\n .sampleRegions(ee.FeatureCollection([point]), scale=SCALE)\n .first()\n )\n\ntrain_features = [labeled_feature(row) for row in train_dataframe.itertuples()]\nvalidation_features = [labeled_feature(row) for row in validation_dataframe.itertuples()]", "To get a better sense of what's going on, let's look at the properties for the first Feature in the train_features list.
You can see that it contains a property for the label is_powered_on, and 13 additional properties, one for each spectral band.", "ee.FeatureCollection(train_features[0]).propertyNames().getInfo()", "The data contained in each band property is an array of shape 33x33.\nFor example, here is the data for band B1 in the first element in our list expressed as a numpy array.", "example_feature = np.array(train_features[0].get('B1').getInfo())\nprint(example_feature)\nprint('shape: ' + str(example_feature.shape))", "💾 Export data\nLastly, we'll export the data to a Cloud Storage bucket. We'll export the data as TFRecords.\nLater when we run the training job, we'll parse these TFRecords and feed them to the model.", "# Export data\n\ntraining_task = ee.batch.Export.table.toCloudStorage(\n collection=ee.FeatureCollection(train_features),\n description=\"Training image export\",\n bucket=cloud_storage_bucket,\n fileNamePrefix=\"geospatial_training\",\n selectors=BANDS + [LABEL],\n fileFormat=\"TFRecord\",\n)\n\ntraining_task.start()\n\nvalidation_task = ee.batch.Export.table.toCloudStorage(\n collection=ee.FeatureCollection(validation_features),\n description=\"Validation image export\",\n bucket=cloud_storage_bucket,\n fileNamePrefix=\"geospatial_validation\",\n selectors=BANDS + [LABEL],\n fileFormat=\"TFRecord\")\n\nvalidation_task.start()", "This export will take around 10 minutes. You can monitor the progress with the following command:", "from pprint import pprint\n\npprint(ee.batch.Task.list())", "2. 👟 Run a custom training job\nOnce the export jobs have finished, we're ready to use that data to train a model on Vertex AI Training.\nThe complete training code can be found in the task.py file.\nTo run our custom training job on Vertex AI Training, we'll use the pre-built containers provided by Vertex AI to run our training script.\nWe'll also make use of a GPU. Our model training will only take a couple of minutes, so using a GPU isn't really necessary.
But for demonstration purposes (since adding a GPU is simple!) we will make sure we use a container image that is GPU compatible, and then add the accelerator_type and accelerator_count parameters to job.run. TensorFlow will make use of a single GPU out of the box without any extra code changes.", "from google.cloud import aiplatform\n\naiplatform.init(project=project, staging_bucket=cloud_storage_bucket)\n\njob = aiplatform.CustomTrainingJob(\n display_name=\"geospatial_model_training\",\n script_path=\"task.py\",\n container_uri=\"us-docker.pkg.dev/vertex-ai/training/tf-gpu.2-7:latest\")", "The job will take around 10 minutes to run.", "model = job.run(accelerator_type='NVIDIA_TESLA_K80', accelerator_count=1, args=[f'--bucket={cloud_storage_bucket}'])", "3. 💻 Deploy a web service to host the trained model\nNext, we use\nCloud Run\nto deploy a web service that exposes a\nREST API to\nget predictions from our trained model.\nWe'll deploy our service to Cloud Run directly from source code so we don't need to build the container image first. Behind the scenes, this command uses Google Cloud buildpacks and Cloud Build to automatically build a container image from our source code in the serving_app directory. To run the web service, we configure Cloud Run to launch\ngunicorn\non this container image. \nSince calls to this web service could launch potentially expensive jobs in our project, we configure it to only accept authenticated calls.\n🐣 Deploy app", "# Deploy the web service to Cloud Run.\n# https://cloud.google.com/sdk/gcloud/reference/run/deploy\n!gcloud run deploy \"geospatial-service\" \\\n --source=serving_app \\\n --command=\"gunicorn\" \\\n --args=\"--threads=8,--timeout=0,main:app\" \\\n --region=\"{region}\" \\\n --memory=\"1G\" \\\n --no-allow-unauthenticated", "Now we need the web service URL to make calls to the REST API we just exposed.
We can use gcloud run services describe to get the web service URL.\nSince we only accept authorized calls in our web service, we also need to authenticate each call.\ngcloud is already authenticated, so we can use gcloud auth print-identity-token to get quick access.\n\nℹ️ For more information on how to do authenticated calls in Cloud Run, see the\nAuthentication overview page.", "import subprocess\n\n# Get the web service URL.\n# https://cloud.google.com/sdk/gcloud/reference/run/services/describe\nservice_url = subprocess.run(\n [ 'gcloud', 'run', 'services', 'describe', 'geospatial-service',\n f'--region={region}',\n '--format=get(status.url)',\n ],\n capture_output=True,\n).stdout.decode('utf-8').strip()\nprint(f\"service_url: {service_url}\")\n\n# Get an identity token for authorized calls to our web service.\n# https://cloud.google.com/sdk/gcloud/reference/auth/print-identity-token\nidentity_token = subprocess.run(\n ['gcloud', 'auth', 'print-identity-token'],\n capture_output=True,\n).stdout.decode('utf-8').strip()\nprint(f\"identity_token: {identity_token}\")", "Finally, we can test that everything is working.\nWe included a ping method in our web service just to make sure everything is working as expected.\nIt simply echoes back the arguments we passed to the call, along with a response saying that the call was successful.\n\n🛎️ This is a convenient way to make sure the web service is reachable, the authentication is working as expected, and the request arguments are passed correctly.\n\nWe can use Python's\nrequests\nlibrary.\nThe web service was built to always accept JSON-encoded requests, and returns JSON-encoded responses.\nFor a request to be successful, it must:\n\nBe an HTTP POST request\nContain the following headers:\nAuthorization: Bearer IDENTITY_TOKEN\nContent-Type: application/json\nThe data must be valid JSON; if no arguments are needed, we can pass {} as an empty object.\n\nFor ease of use, requests.post has a\njson parameter\nthat
automatically attaches the header Content-Type: application/json and encodes our data into a JSON string.", "import requests\n\nrequests.post(\n url=f'{service_url}/ping',\n headers={'Authorization': f'Bearer {identity_token}'},\n json={'x': 42, 'message': 'Hello world!'},\n).json()", "4. 🔮 Get Predictions\nNow that we know our app is up and running, we can use it to make predictions.\nLet's start by making a prediction for a particular coal plant. To do this we will need to extract the Sentinel data from Earth Engine and send it in the body of the POST request to the prediction service.\nWe'll start with a plant located at the coordinates -84.80529, 39.11613, and then extract the satellite data from October 2021.", "# Extract image data\n\nimport json\n\ndef get_prediction_data(lon, lat, start, end):\n \"\"\"Extracts Sentinel image as json at specific lat/lon and timestamp.\"\"\"\n\n location = ee.Feature(ee.Geometry.Point([lon, lat]))\n image = (\n ee.ImageCollection(IMAGE_COLLECTION)\n .filterDate(start, end)\n .select(BANDS)\n .mosaic()\n )\n\n feature = image.neighborhoodToArray(ee.Kernel.square(PATCH_SIZE)).sampleRegions(\n collection=ee.FeatureCollection([location]), scale=SCALE\n )\n\n return feature.getInfo()[\"features\"][0][\"properties\"]", "When we call the get_prediction_data function we need to pass in the start and end dates. \nSentinel-2 takes pictures every 10 days. At training time, we knew the exact date of the Sentinel-2 image, as this was provided in the labels CSV file. However, for user supplied images for prediction we don't know the specific date the image was taken.
To address this, we'll extract data for the entire month of October and then use the mosaic function in Earth Engine, which will grab the earliest image in that range, stitch together images at the seams, and discard the rest.", "prediction_data = get_prediction_data(-84.80529, 39.11613, '2021-10-01', '2021-10-31')", "The prediction service expects two things: the input data for the prediction and the Cloud Storage path where the model is stored.", "requests.post(\n url=f'{service_url}/predict',\n headers={'Authorization': f'Bearer {identity_token}'},\n json={'data': prediction_data, 'bucket': cloud_storage_bucket},\n).json()['predictions']", "5. 🗺️ Visualize predictions\nLet's visualize the results for a coal plant in Spain. First, we get predictions for the four towers at this power plant.", "def get_prediction(lon, lat, start, end):\n prediction_data = get_prediction_data(lon, lat, start, end)\n result = requests.post(\n url=f'{service_url}/predict',\n headers={'Authorization': f'Bearer {identity_token}'},\n json={'data': prediction_data, 'bucket': cloud_storage_bucket},).json()\n return result['predictions']['predictions'][0][0][0][0]\n\nlons = [-7.86444, -7.86376, -7.85755, -7.85587]\nlats = [43.43717, 43.43827, 43.44075, 43.44114]\n\nplant_predictions = [get_prediction(lon, lat, '2021-10-01', '2021-10-31') for lon, lat in zip(lons, lats)]", "Next, we can plot these points on a map.
Blue means our model predicts that the towers are \"off\", and red means our model predicts that the towers are \"on\" and producing carbon pollution.", "import folium\nimport folium.plugins as folium_plugins\nimport branca.colormap as cm\n\ncolormap = cm.LinearColormap(colors=['lightblue', 'red'], index=[0,1], vmin=0, vmax=1)\nmap = folium.Map(\n location=[43.44, -7.86],\n zoom_start=16,\n tiles='https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{z}/{y}/{x}',\n attr = 'ESRI'\n)\nfor loc, p in zip(zip(lats, lons), plant_predictions):\n folium.Circle(\n location=loc,\n radius=20,\n fill=True,\n color=colormap(p),\n ).add_to(map)\n\nmap.add_child(colormap)\n\ndisplay(map)", "6. 🧹 Clean Up\nTo avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.\nDeleting the project\nThe easiest way to eliminate billing is to delete the project that you created for the tutorial.\nTo delete the project:\n\n⚠️ Deleting a project has the following effects:\n\n\nEverything in the project is deleted. If you used an existing project for this tutorial, when you delete it, you also delete any other work you've done in the project.\n\n\nCustom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. 
To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project.\n\n\nIf you plan to explore multiple tutorials and quickstarts, reusing projects can help you avoid exceeding project quota limits.\n\n\nIn the Cloud Console, go to the Manage resources page.\n\n<button>\nGo to Manage resources\n</button>\n\n\nIn the project list, select the project that you want to delete, and then click Delete.\n\n\nIn the dialog, type the project ID, and then click Shut down to delete the project." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
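The patch geometry described in the geospatial notebook above (a labeled center pixel, 16 pixels of padding on each side, 13 Sentinel-2 bands) can be sketched with plain numpy. This is an illustrative sketch only: the random arrays stand in for the per-band 33x33 arrays that neighborhoodToArray returns, and the band names follow the notebook's BANDS constant.

```python
import numpy as np

# Patch geometry from the notebook: 16 pixels of padding on each side
# of the labelled centre pixel gives a (2*16 + 1) = 33 pixel square.
PATCH_SIZE = 16
BANDS = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B8',
         'B8A', 'B9', 'B10', 'B11', 'B12']

patch_width = 2 * PATCH_SIZE + 1  # 33

# Stand-in for the per-band 33x33 arrays returned by
# neighborhoodToArray + sampleRegions (random data, for illustration only).
feature = {band: np.random.rand(patch_width, patch_width) for band in BANDS}

# Stack the bands into the [height, width, bands] tensor the model expects.
patch = np.stack([feature[band] for band in BANDS], axis=-1)
print(patch.shape)  # (33, 33, 13)
```

Stacking the bands last matches the [width, height, bands] convention the notebook describes for satellite inputs.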
amitkaps/hackermath
Module_1b_linear_regression_ols.ipynb
mit
[ "Linear Regression (OLS)\nKey Equation: $Ax = b$ for an $n \times (p+1)$ system\nLinear regression - Ordinary Least Square (OLS) is the most basic form of supervised learning. In this we have a target variable (y) and we want to establish a linear relationship with a set of features (x<sub>1</sub>, x<sub>2</sub>, x<sub>3</sub>, ...)\nLet's take a simple example to illustrate this problem:\nWe have price ('000 INR) and mileage (kmpl) for 7 hatchback cars as below\nprice = [199 , 248 , 302 , 363 , 418 , 462 , 523 ]\nkmpl = [23.9, 22.7, 21.1, 20.5, 19.8, 20.4, 18.6]\nWe want to predict the target variable price, given the input variable kmpl", "import numpy as np\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.style.use('fivethirtyeight')\nplt.rcParams['figure.figsize'] = (10, 6)\n\nimport ipywidgets as widgets\nfrom ipywidgets import interact, interactive\n\nprice = np.array([199, 248, 302, 363, 418, 462, 523])\n\nkmpl = np.array([23.9, 22.7, 21.1, 20.5, 19.8, 20.4, 18.6])\n\nplt.scatter(kmpl, price, s = 150)\nplt.xlabel('kmpl')\nplt.ylabel('price')", "Thinking Linear Algebra Way\nThe basic problem in linear regression is solving n linear equations with p unknowns, where p &lt; n\nSo a linear relationship can be written as:\n$$ price = \beta_{0} + \beta_{1} kmpl $$\nWe have added an intercept to the equation, so that the line does not need to pass through zero\nSo we are trying to solve these n = 7 equations with p = 2\n$$ 199 = \beta_{0} + \beta_{1} 23.9 ~~~~ \text{(eq 1)} $$\n$$ 248 = \beta_{0} + \beta_{1} 22.7 ~~~~ \text{(eq 2)} $$\n$$ 302 = \beta_{0} + \beta_{1} 21.1 ~~~~ \text{(eq 3)} $$\n$$ 363 = \beta_{0} + \beta_{1} 20.5 ~~~~ \text{(eq 4)} $$\n$$ 418 = \beta_{0} + \beta_{1} 19.8 ~~~~ \text{(eq 5)} $$\n$$ 462 = \beta_{0} + \beta_{1} 20.4 ~~~~ \text{(eq 6)} $$\n$$ 523 = \beta_{0} + \beta_{1} 18.6 ~~~~ \text{(eq 7)} $$\nSo the key to remember here is that we are solving for $\beta_{0}$ and $ \beta_{1} $\nNow if we plot
these lines, it is clear that there is no single point of intersection, as there would be if we had only 2 equations.", "b0 = np.arange(-500,4000, 100)\n\nfor i in range(7):\n b1 = (price[i] - b0)/kmpl[i]\n plt.plot(b0, b1, linewidth = 1)\n plt.text(b0[-10], b1[-10], 'eq %s'% (i + 1), fontsize = 8 )\n\nplt.axhline(0, color='grey', linewidth=2)\nplt.axvline(0, color='grey', linewidth=2)\n\nplt.xlabel('beta0')\nplt.ylabel('beta1')\n\nplt.ylim(-150,50)", "Now we don't have an exact solution. But we can see that $\beta_{0} $ is around [1500,1700] and $ \beta_{1} $ is around [-70,-50]. So one possible line is \n$$ price = 1600 - 60 * kmpl $$\nBut we can clearly see that this is probably not the best possible line!", "beta_0_list = widgets.IntSlider(min=1500, max=1700, step=10, value=1600)\nbeta_1_list = widgets.IntSlider(min=-70, max=-50, step=2, value=-60)\n\nbeta_0 = 1600 \nbeta_1 = -60\n\ndef plot_line(beta_0, beta_1):\n\n plt.scatter(kmpl, price, s = 150)\n plt.xlabel('kmpl')\n plt.ylabel('price')\n y = beta_0 + beta_1 * kmpl\n plt.plot(kmpl, y, '-')", "Let's change the value of beta_0 and beta_1 and see if we can find the right answer", "interactive(plot_line, beta_0 = beta_0_list, beta_1 = beta_1_list )", "Adding Error Term\nThe linear relationship hence needs to be modeled through an error variable $\epsilon_{i}$, an unobserved random variable that adds noise to the linear relationship between the target variable and input variable.\nIf we have p input variables then,\n$$ y_{i} = \beta_{0} + \sum_{j=1}^p \beta_{j} x_{ij} + \epsilon_{i} $$\nWe can add the $x_{i0} = 1 $ in the equation:\n$$ y_{i} = \sum_{j=0}^p \beta_{j} x_{ij} + \epsilon_{i} $$\n$$ y_{i} = x_{i}^T \beta + \epsilon_{i} $$", "plt.scatter(kmpl, price, s = 150)\nplt.xlabel('kmpl')\nplt.ylabel('price')\ny = 1600 - 60 * kmpl\nyerrL = y - price\nyerrB = y - y\nplt.errorbar(kmpl,y, fmt = 'o', yerr= [yerrL, yerrB], c= 'r')\nplt.plot(kmpl, y,linewidth = 2)", "Represent Matrix Way\nIf we
write this in matrix form \n$$ y = X\beta + \epsilon $$\n$$ \text{where} ~~~~ X = \begin{bmatrix} - x_{1}^T - \\ - x_{2}^T - \\ ... \\ - x_{n}^T - \end{bmatrix} ~~ \text{,} ~~ y = \begin{bmatrix} y_{1} \\ y_{2} \\ ... \\ y_{n} \end{bmatrix} ~~ \text{and} ~~ \epsilon = \begin{bmatrix} \epsilon_{1} \\ \epsilon_{2} \\ ... \\ \epsilon_{n} \end{bmatrix} $$\nFor our specific example, the matrix looks like:\n$$ \begin{bmatrix}199 \\ 248 \\ 302 \\ 363 \\ 418 \\ 462 \\ 523 \end{bmatrix} = \begin{bmatrix} 1 & 23.9 \\ 1 & 22.7 \\ 1 & 21.1 \\ 1 & 20.5 \\ 1 & 19.8 \\ 1 & 20.4 \\ 1 & 18.6 \end{bmatrix} \begin{bmatrix} \beta_{0} \\ \beta_{1} \end{bmatrix} + \begin{bmatrix} \epsilon_{1} \\ \epsilon_{2} \\ \epsilon_{3} \\ \epsilon_{4} \\ \epsilon_{5} \\ \epsilon_{6} \\ \epsilon_{7} \end{bmatrix} $$\nMinimize Error - Ordinary Least Square\nThe error we will aim to minimize is the squared error:\n$$ E(\beta)= \frac {1}{n} \sum_{i=1}^{n}(\epsilon_{i})^2 $$\nThis is why this technique is called Ordinary Least Square (OLS) regression\n$$ E(\beta)= \frac {1}{n} \sum_{i=1}^{n}(y_{i}-x_{i}^{T}\beta)^{2} $$\nwhich in matrix form is equal to: \n$$ E(\beta)= \frac {1}{n} (y-X\beta)^{T}(y-X\beta) $$\n$$ E(\beta)= \frac {1}{n} ((y^{T} - \beta^{T}X^{T})(y-X\beta)) $$\n$$ E(\beta)= \frac {1}{n} (y^{T}y - \beta^{T}X^{T}y - y^{T}X\beta + \beta^{T}X^{T}X\beta) $$\nNow, $ y^{T}X\beta = (\beta^{T}X^{T}y)^T $ and is a scalar matrix of $1 \times 1$, which means it is equal to its transpose and hence $ y^{T}X\beta = \beta^{T}X^{T}y $\n$$ E(\beta)= \frac {1}{n} (y^{T}y - 2\beta^{T}X^{T}y + \beta^{T}X^{T}X\beta) $$\nTo get the minimum for this error function, we need to differentiate by $\beta^T$\n$$ \nabla E(\beta) = 0 $$\n$$ \nabla E(\beta) ={\frac {dE(\beta)}{d\beta^T}} = {\frac {d}{d\beta^T}}{\bigg (}{ \frac {1}{n} ||y - X\beta||}^2{\bigg )} = 0 $$\n$$ {\frac {d}{d\beta^T}}{\bigg (}{ y^{T}y - 2\beta^{T}X^{T}y +
\\beta^{T}X^{T}X\\beta}{\\bigg )} = 0 $$\n$$ - 2 X^Ty + 2X^{T}X\\beta = 0 $$\n$$ X^T X\\beta = X^T y $$\nSo the solution to OLS:\n$$ \\beta = X^†y ~~ \\text{where} ~~ X^† = (X^T X)^{−1} X^T $$\n$$X^† ~~ \\text{is the pseudo inverse of} ~~ X $$\nCalculate Pseudo Inverse\n$$ X^† = (X^T X)^{−1} X^T $$\n$X^† $ is the pseudo inverse of $ X $ has good properties\n$$ X^† = \\left( \\begin{matrix} ~ \\\n \\begin{bmatrix} ~ \\ p + 1 \\times n \\ ~ \\end{bmatrix} \n \\begin{bmatrix} ~ \\ n \\times p + 1 \\ ~ \\end{bmatrix} \n \\ ~ \n \\end{matrix}\n \\right)^{-1} \n \\begin{bmatrix} ~ \\ (p + 1 \\times n) \\ ~ \\end{bmatrix}$$\n$$ X^† = \\left( \\begin{matrix} ~ \\\n \\begin{bmatrix} ~ \\ p + 1 \\times p + 1 \\ ~ \\end{bmatrix} \n \\ ~ \n \\end{matrix}\n \\right)^{-1} \n \\begin{bmatrix} ~ \\ (p + 1 \\times n) \\ ~ \\end{bmatrix}$$\n$$ X^† = \\begin{bmatrix} ~ \\ (p + 1 \\times n) \\ ~ \\end{bmatrix}$$\n$$ X^†{p + 1 \\times n} = {(X^T{p + 1 \\times n} ~ X_{n \\times p+1})}^{-1} ~ X^T_{p + 1 \\times n}$$", "n = 7\n\nx0 = np.ones(n)\nx0\n\nx1 = kmpl\nx1\n\n# Create the X matrix\nX = np.c_[x0, x1]\nX = np.asmatrix(X)\nX\n\n# Create the y matrix\ny = np.asmatrix(price.reshape(-1,1))\ny\n\ny.shape\n\nX_T = np.transpose(X)\nX_T\n\nX_T * X\n\nX_pseudo = np.linalg.inv(X_T * X) * X_T\nX_pseudo\n\nbeta = X_pseudo * y\nbeta", "OLS Solution\nHence we now know that the best-fit line is $\\beta_0 = 1662 $ and $\\beta_1 = -62$\n$$ price = 1662 - 62 * kmpl $$", "beta_0 = 1662 \nbeta_1 = -62\nplt.scatter(kmpl, price, s = 150)\nplt.xlabel('kmpl')\nplt.ylabel('price')\ny = beta_0 + beta_1 * kmpl\nplt.plot(kmpl, y, '-')", "Exercise 1\nWe had price ('000 INR), mileage (kmpl) and now we have one more input variable - horsepower (bhp) for the 7 cars\nprice = [199 , 248 , 302 , 363 , 418 , 462 , 523 ]\nkmpl = [23.9, 22.7, 21.1, 20.5, 19.8, 20.4, 18.6]\nbhp = [38 , 47 , 55 , 67 , 68 , 83 , 82 ]\nWe want to predict the value of price, given the variable kmpl and bhp", "bhp = np.array([38, 47, 55, 67, 
68, 83, 82])\n\nfrom mpl_toolkits.mplot3d import Axes3D\nfig = plt.figure()\nax = fig.gca(projection='3d')\nax.scatter(bhp, kmpl, price, c='r', marker='o', s = 200)\nax.view_init(azim=30)", "So a linear relationship can be written as:\n$$ price = \beta_{0} + \beta_{1} kmpl + \beta_{2} bhp $$\nWe have added an intercept to the equation, so that the plane does not need to pass through zero\nSo we are trying to solve these n = 7 equations with p = 3\n$$ 199 = \beta_{0} + \beta_{1} 23.9 + \beta_{2} 38 + \epsilon_{1} ~~~~ \text{(eq 1)} $$\n$$ 248 = \beta_{0} + \beta_{1} 22.7 + \beta_{2} 47 + \epsilon_{2} ~~~~ \text{(eq 2)} $$\n$$ 302 = \beta_{0} + \beta_{1} 21.1 + \beta_{2} 55 + \epsilon_{3} ~~~~ \text{(eq 3)} $$\n$$ 363 = \beta_{0} + \beta_{1} 20.5 + \beta_{2} 67 + \epsilon_{4} ~~~~ \text{(eq 4)} $$\n$$ 418 = \beta_{0} + \beta_{1} 19.8 + \beta_{2} 68 + \epsilon_{5} ~~~~ \text{(eq 5)} $$\n$$ 462 = \beta_{0} + \beta_{1} 20.4 + \beta_{2} 83 + \epsilon_{6} ~~~~ \text{(eq 6)} $$\n$$ 523 = \beta_{0} + \beta_{1} 18.6 + \beta_{2} 82 + \epsilon_{7} ~~~~ \text{(eq 7)} $$\nor, in matrix form, we can write it as\n$$ \begin{bmatrix}199 \\ 248 \\ 302 \\ 363 \\ 418 \\ 462 \\ 523 \end{bmatrix} = \begin{bmatrix} 1 & 23.9 & 38 \\ 1 & 22.7 & 47 \\ 1 & 21.1 & 55 \\ 1 & 20.5 & 67 \\ 1 & 19.8 & 68 \\ 1 & 20.4 & 83 \\ 1 & 18.6 & 82 \end{bmatrix} \begin{bmatrix}\beta_{0} \\ \beta_{1} \\ \beta_{2}\end{bmatrix} + \begin{bmatrix} \epsilon_{1} \\ \epsilon_{2} \\ \epsilon_{3} \\ \epsilon_{4} \\ \epsilon_{5} \\ \epsilon_{6} \\ \epsilon_{7} \end{bmatrix}$$\nDevelop the $X$ matrix for this problem.\nDevelop the $y$ matrix for this problem.\nCalculate the pseudo inverse of $X$.\nFind the $\beta$ for the best-fit plane.\nPlot the price, kmpl and bhp and the best-fit plane.", "from mpl_toolkits.mplot3d import Axes3D\nfig = plt.figure()\nax = fig.gca(projection='3d')\nax.scatter(bhp, kmpl, price, c='r', marker='o', s = 200)\n\nxrange =
np.arange(min(bhp), max(bhp), 1)\nyrange = np.arange(min(kmpl), max(kmpl), 1)\nx, y = np.meshgrid(xrange, yrange)\nz = 524 - 22 * y + 4 * x\nax.plot_surface(x, y, z, color ='blue', alpha = 0.5)\nax.view_init(azim=60)", "Using a package: sklearn\nRun the Ordinary Least Squares regression using the package sklearn", "import pandas as pd\ndf = pd.read_csv(\"data/cars_sample.csv\")\n\nfrom sklearn import linear_model\n\ny = df.price\n\nX = df[['kmpl', 'bhp']]\n\nmodel_sklearn = linear_model.LinearRegression()\n\nmodel_sklearn.fit(X, y)\n\nmodel_sklearn.coef_\n\nmodel_sklearn.intercept_\n\nmodel_sklearn_norm = linear_model.LinearRegression(normalize = True)\n\nmodel_sklearn_norm.fit(X, y)\n\nmodel_sklearn_norm.coef_\n\nmodel_sklearn_norm.intercept_", "Non Linear Transformation\nWhat happens when we do Non-Linear transforms to the features?\nWhat if we want to predict $price$ based on $kmpl$, $bhp$, $kmpl^2$ and $bhp / kmpl$?\nThe thing to remember is that non-linear transforms of the features do not affect the Linear Regression, because the linear relationship is really about $\beta $ and not the features.\nWe can write this as:\n$$ price = \beta_{0} + \beta_{1} kmpl + \beta_{2} bhp + \beta_{3} kmpl^2 + \beta_{4} bhp/kmpl $$", "df['kmpl2'] = np.power(df.kmpl,2)\n\nplt.scatter(df.kmpl2, df.price, s = 150)\nplt.xlabel('kmpl2')\nplt.ylabel('price')\n\ndf['bhp_kmpl'] = np.divide(df.bhp, df.kmpl)\n\nplt.scatter(df.bhp_kmpl, df.price, s = 150)\nplt.xlabel('bhp/kmpl')\nplt.ylabel('price')\n\ndf", "Exercise 2\nRun a linear regression:\n$$ price = \beta_{0} + \beta_{1} kmpl + \beta_{2} bhp + \beta_{3} kmpl^2 + \beta_{4} bhp/kmpl $$\nUsing Pseudo-Inverse Matrix:\nUsing sklearn package:" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
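Exercise 2 in the linear-regression notebook above asks for the same pseudo-inverse calculation with the extra non-linear features. Below is a minimal sketch using the seven cars listed in the notebook, with `np.linalg.pinv` in place of the explicit `inv(X.T @ X) @ X.T` computation (an assumption on my part; `pinv` is numerically safer when columns such as kmpl and kmpl² are strongly correlated):

```python
import numpy as np

# The seven hatchbacks from the notebook.
price = np.array([199, 248, 302, 363, 418, 462, 523])
kmpl = np.array([23.9, 22.7, 21.1, 20.5, 19.8, 20.4, 18.6])
bhp = np.array([38, 47, 55, 67, 68, 83, 82])

# Design matrix for Exercise 2: [1, kmpl, bhp, kmpl^2, bhp/kmpl].
X = np.c_[np.ones(len(price)), kmpl, bhp, kmpl**2, bhp / kmpl]

# Moore-Penrose pseudo-inverse; equivalent to inv(X.T @ X) @ X.T when
# X.T @ X is invertible, but stable when it is ill-conditioned.
beta = np.linalg.pinv(X) @ price

fitted = X @ beta
print(beta.round(2))
print(np.abs(price - fitted).max())
```

Fitting sklearn's LinearRegression on the same design matrix should recover essentially the same coefficients, which is a useful cross-check for the second half of the exercise.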
desihub/desisim
doc/nb/quickcat-calibration.ipynb
bsd-3-clause
[ "quickcat calibration\nThis notebook is the quickcat calibration script.\n- Its input is a redshift catalog merged with a target list and a truth table from simulations.\n- Its output is a set of coefficients saved in a yaml file\nto be copied to desisim/py/desisim/data/quickcat.yaml.\nThis notebook does the following, sequentially:\n- open the merged redshift catalog\n- run quickcat on it\n- for each target class\n - fit a model for redshift efficiency, and display input, quickcat, best fit model\n - fit a model for redshift uncertainty, and display input, (quickcat,) best fit model\n- save the best fit parameters in a yaml file (to be copied in desisim/data/quickcat.yaml)\nPlease first edit the following paths:", "# input merged catalog (from simulations for now)\nsimulation_catalog_filename=\"/home/guy/Projets/DESI/analysis/quickcat/20180926/zcatalog-redwood-target-truth.fits\"\n# output quickcat parameter file that this code will write\nquickcat_param_filename=\"/home/guy/Projets/DESI/analysis/quickcat/20180926/quickcat.yaml\"\n# output quickcat catalog (same input target and truth)\nquickcat_catalog_filename=\"/home/guy/Projets/DESI/analysis/quickcat/20180926/zcatalog-redwood-target-truth-quickcat.fits\"\n\nimport os.path\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport astropy.io.fits as pyfits\nimport scipy.optimize\nfrom pkg_resources import resource_filename\nimport yaml\nfrom desisim.quickcat import eff_model,get_observed_redshifts\nfrom desisim.simexp import reference_conditions\n\ndef eff_err(k,n) :\n # given k and n\n # the most probable efficiency is k/n but the uncertainty is complicated\n # I choose to define the error bar as FWHM/2.35, converging to sigma for large k, n-k, and n\n # this is the Bayesian probability\n # P(eff|k,n) = gamma(n+2)/(gamma(k+1)*gamma(n-k+1)) * eff**k*(1-eff)**(n-k)\n \n if k>10 and n-k>10 and n>10 :\n return np.sqrt(k*(1-k/n))/n\n \n ns=300\n e=np.arange(ns)/ns\n p=e**k*(1-e)**(n-k)\n xc=float(k)/n\n i=int(ns*xc+0.5)\n
if i>ns-1 : i=ns-1\n p/=p[i]\n if k==0 :\n xl=0\n else :\n xl = np.interp(0.5*p[i],p[:i],e[:i])\n if k==n :\n xh=1\n else :\n xh = np.interp(0.5*p[i],p[i:][::-1],e[i:][::-1])\n sigma = (xh-xl)/2.35\n return sigma\n\ndef efficiency(x,selection,bins=40) :\n h0,bins=np.histogram(x,bins=bins)\n hx,bins=np.histogram(x,bins=bins,weights=x)\n h1,bins=np.histogram(x[selection],bins=bins)\n ii=(h0>1)\n n=h0[ii]\n k=h1[ii]\n meanx=hx[ii]/n\n eff=k/n\n err=np.zeros(eff.shape)\n for i in range(err.size) :\n err[i] = eff_err(k[i],n[i])\n return meanx,eff,err\n\ndef prof(x,y,bins=40) :\n h0,bins=np.histogram(x,bins=bins)\n hx,bins=np.histogram(x,bins=bins,weights=x)\n hy,bins=np.histogram(x,bins=bins,weights=y)\n hy2,bins=np.histogram(x,bins=bins,weights=y**2)\n ii=(h0>1)\n n=h0[ii]\n x=hx[ii]/n\n y=hy[ii]/n\n y2=hy2[ii]/n\n var=y2-y**2\n err=np.zeros(x.size)\n err[var>0]=np.sqrt(var[var>0])\n return x,y,err,n\n\ndef efficiency2d(x,y,selection,bins=20) :\n h0,xx,yy=np.histogram2d(x,y,bins=bins)\n h1,xx,yy=np.histogram2d(x[selection],y[selection],bins=(xx,yy))\n shape=h0.shape\n n=h0.ravel()\n k=h1.ravel()\n eff=np.zeros(n.size)\n err=np.zeros(n.size)\n for i in range(n.size) :\n if n[i]==0 :\n err[i]=1000.\n else :\n eff[i]=k[i]/n[i]\n err[i]=eff_err(k[i],n[i])\n return xx,yy,eff.reshape(shape),err.reshape(shape),n.reshape(shape)\n\ndef prof2d(x,y,z,bins=20) :\n h0,xx,yy=np.histogram2d(x,y,bins=bins)\n hz,xx,yy=np.histogram2d(x,y,bins=(xx,yy),weights=z)\n hz2,xx,yy=np.histogram2d(x,y,bins=(xx,yy),weights=z**2)\n n=(h0+(h0==0)).astype(float)\n z=hz/n\n z2=hz2/n\n var=z2-z**2\n err=np.sqrt(var*(var>0))\n x=xx[:-1]+(xx[1]-xx[0])/2.\n y=yy[:-1]+(yy[1]-yy[0])/2.\n return x,y,z,err\n\n## open input file\nhdulist = pyfits.open(simulation_catalog_filename)\ntable = hdulist[\"ZCATALOG\"].data\nprint(table.dtype.names)\n\n# quickcat parameters\nquickcat_params=dict()\n# quickcat output table (for display purpose only)\nqtable=None\n\nif True :\n # run the quickcat simulation in this cell 
(don't necessarily have to do this to \n # follow the rest of the notebook)\n # use default parameters or the ones in the file specified above \n # (and probably obtained with a previous run of this script) if they exist\n \n input_quickcat_param_filename = None\n if os.path.isfile(quickcat_param_filename) :\n input_quickcat_param_filename = quickcat_param_filename\n \n # dummy tiles\n targets_in_tile=dict()\n targets_in_tile[0]=table[\"TARGETID\"]\n\n # dummy obs. conditions\n tmp = reference_conditions['DARK']\n tmp['TILEID']=0\n obsconditions=dict()\n for k in tmp :\n obsconditions[k]=np.array([tmp[k],])\n \n #qtable = table.copy()\n hdulist = pyfits.open(simulation_catalog_filename)\n qtable = hdulist[\"ZCATALOG\"].data\n\n # run quickcat\n # ignore_obsconditions because it only adds extra noise\n z, zerr, zwarn = get_observed_redshifts(qtable,qtable,targets_in_tile,obsconditions,\n parameter_filename=quickcat_param_filename,\n ignore_obscondition=True)\n # replace z,zwarn and write quickcat\n qtable[\"Z\"]=z\n qtable[\"ZWARN\"]=zwarn\n hdulist[\"ZCATALOG\"].data = qtable\n hdulist.writeto(quickcat_catalog_filename,overwrite=True)\n print(\"done\")\n\n\n# open quickcat catalog\nif qtable is None and os.path.isfile(quickcat_catalog_filename) :\n qcat_hdulist = pyfits.open(quickcat_catalog_filename)\n qtable = qcat_hdulist[\"ZCATALOG\"].data
\\left( 1+Erf \\left( \\frac{SNR-3}{b \\sqrt{2}} \\right) \\right) $\nwith \n$SNR = \\sqrt{ \\left( 7 \\frac{OII flux}{fluxlimit} \\right)^2 + \\left( a \\times rflux \\right)^2 }$\n$a$ is the continuum $SNR$ normalization, which is proportional to the r-band flux.\n$b$ is a fudge factor. One would have $b = 1$ if $SNR$ was the variable that determines the redshift efficiency. \nHowever $SNR$ is only a proxy that is not 100% correlated with the efficiency, so we expect $b>1$.", "# OII flux limit (FDR), the as-built version should be recomputed but is probably not very different\nfilename = resource_filename('desisim', 'data/elg_oii_flux_threshold_fdr.txt')\nfdr_z, fdr_flux_limit = np.loadtxt(filename, unpack=True)\n\nplt.figure()\nplt.plot(fdr_z, fdr_flux_limit)\nplt.ylim([0,1.5e-16])\nplt.xlabel(\"Redshift\")\nplt.ylabel(\"OII flux limit (ergs/s/cm2)\")\nplt.grid()", "Measured ELG efficiency as a function of rmag and oii flux", "######################\nelgs=(table[\"TEMPLATETYPE\"]==\"ELG\")&(table[\"TRUEZ\"]>0.6)&(table[\"TRUEZ\"]<1.6)\nz=table[\"Z\"][elgs]\ntz=table[\"TRUEZ\"][elgs]\ndz=z-tz\ngood=(table[\"ZWARN\"][elgs]==0)\nrflux=table[\"FLUX_R\"][elgs]\nprint(\"Number of ELGs={}\".format(rflux.size))\nrflux=rflux*(rflux>0)+0.00001*(rflux<=0)\noiiflux=table[\"OIIFLUX\"][elgs]\noiiflux=oiiflux*(oiiflux>0)+1e-20*(oiiflux<=0)\n\nqgood=None\nif qtable is not None : #quickcat output\n qgood=(qtable[\"ZWARN\"][elgs]==0)\n\n######################\n\n#good=oiiflux>8e-17 #debug to verify indexation\n\nbins2d=20\nrmag=-2.5*np.log10(rflux)+22.5\nxx,yy,eff2d,err2d,nn2d = efficiency2d(np.log10(oiiflux),rmag,good,bins=bins2d)\n\n\nplt.figure()\nplt.imshow(eff2d.T,origin=0,extent=(xx[0],xx[-1],yy[0],yy[-1]),vmin=0.2,aspect=\"auto\")\nplt.xlabel(\"log10([OII] flux)\")\nplt.ylabel(\"rmag\")\nplt.colorbar()", "Model", "# model ELG efficiency vs rflux and oiiflux\noiiflux = table[\"OIIFLUX\"][elgs]\noiiflux = 
oiiflux*(oiiflux>=0)+0.00001*(oiiflux<=0)\nfluxlimit=np.interp(z,fdr_z,fdr_flux_limit)\nfluxlimit[fluxlimit<=0]=1e-20\nsnr_lines=7*oiiflux/fluxlimit\n\ndef elg_efficiency_model_2d(params,log10_snr_lines,rmag) :\n p = params\n snr_tot = np.sqrt( (p[0]*10**log10_snr_lines)**2 + (p[1]*10**(-0.4*(rmag-22.5))) **2 )\n return 0.5*(1.+np.erf((snr_tot-3)/(np.sqrt(2.)*p[2])))\n \n \ndef elg_efficiency_2d_residuals(params,log10_snr_lines,mean_rmag,eff2d,err2d) :\n \n model = elg_efficiency_model_2d(params,log10_snr_lines,mean_rmag)\n #res = (eff2d-model)\n res = (eff2d-model)/err2d #np.sqrt(err2d**2+(0.1*(eff2d>0.9))**2)\n res = res[(err2d<2)&(mean_rmag>22)]\n #chi2 = np.sum(res**2)\n #print(\"params={} chi2/ndata={}/{}={}\".format(params,chi2,res.size,chi2/res.size))\n return res\n\n\n\n# 2d fit\n#good=snr_lines>4. # debug\n#good=rmag<22. # debug\nxx,yy,eff2d_bis,err2d_bis,nn = efficiency2d(np.log10(snr_lines),rmag,good)\nx1d = xx[:-1]+(xx[1]-xx[0])\ny1d = yy[:-1]+(yy[1]-yy[0]) \nx2d=np.tile(x1d,(y1d.size,1)).T\ny2d=np.tile(y1d,(x1d.size,1))\n\n#elg_efficiency_params=[1,3,2]\nelg_efficiency_params=[1,2.,1,0,0]#,1,2,1,0,0]\nif 0 :\n meff2d=elg_efficiency_model_2d(elg_efficiency_params,x2d,y2d)\n i=(y2d.ravel()>22.)&(y2d.ravel()<22.4)&(err2d.ravel()<1)\n plt.plot(x2d.ravel()[i],eff2d.ravel()[i],\"o\")\n plt.plot(x2d.ravel()[i],meff2d.ravel()[i],\"o\")\n \n \nresult=scipy.optimize.least_squares(elg_efficiency_2d_residuals,elg_efficiency_params,args=(x2d,y2d,eff2d_bis,err2d_bis))\nelg_efficiency_params=result.x\n\nquickcat_params[\"ELG\"]=dict()\nquickcat_params[\"ELG\"][\"EFFICIENCY\"]=dict()\nquickcat_params[\"ELG\"][\"EFFICIENCY\"][\"SNR_LINES_SCALE\"]=float(elg_efficiency_params[0])\nquickcat_params[\"ELG\"][\"EFFICIENCY\"][\"SNR_CONTINUUM_SCALE\"]=float(elg_efficiency_params[1])\nquickcat_params[\"ELG\"][\"EFFICIENCY\"][\"SIGMA_FUDGE\"]=float(elg_efficiency_params[2])\n\nprint(\"Best fit parameters for ELG efficiency model:\") 
\nprint(elg_efficiency_params)\nprint(\"SNR_lines = {:4.3f} * 7 * OIIFLUX/limit\".format(elg_efficiency_params[0]))\nprint(\"SNR_cont = {:4.3f} * R_FLUX\".format(elg_efficiency_params[1]))\nprint(\"sigma fudge = {:4.3f}\".format(elg_efficiency_params[2]))\n\n\n#params[0]=0.001 # no dependence on rmag\nmeff=elg_efficiency_model_2d(elg_efficiency_params,np.log10(snr_lines),rmag)\nxx,yy,meff2d,merr=prof2d(np.log10(oiiflux),rmag,meff,bins=bins2d)\n#plt.imshow(meff2d.T,aspect=\"auto\")\n\nplt.imshow(meff2d.T,origin=0,extent=(xx[0],xx[-1],yy[0],yy[-1]),aspect=\"auto\")\nplt.colorbar()\n\n\n \n\nif 1 :\n plt.figure()\n print(\"meff2d.shape=\",meff2d.shape)\n ii=np.arange(meff2d.shape[0])\n y1=eff2d[ii,-ii]\n e1=err2d[ii,-ii]\n y2=meff2d[ii,-ii]\n ok=(e1<1)\n plt.errorbar(ii[ok],y1[ok],e1[ok],fmt=\"o\",label=\"input\")\n plt.plot(ii[ok],y2[ok],\"-\",label=\"model\")\n plt.legend(loc=\"lower right\")\n plt.xlabel(\"linear combination of log10([OII] flux) and rmag\")\n plt.ylabel(\"efficiency\")\n\nplt.figure()\nbins1d=20\nx,eff1d,err1d = efficiency(rmag,good,bins=bins1d)\nx,meff1d,merr,nn = prof(rmag,meff,bins=bins1d)\nplt.errorbar(x,eff1d,err1d,fmt=\"o\",label=\"input\")\nplt.plot(x,meff1d,\"-\",label=\"model\")\n\nif qgood is not None : #quickcat output\n x,eff1d,err1d = efficiency(rmag,qgood,bins=bins1d)\n plt.errorbar(x,eff1d,err1d,fmt=\"x\",label=\"qcat run\")\n\nplt.legend(loc=\"lower left\")\nplt.xlabel(\"rmag\")\nplt.ylabel(\"efficiency\")\n\n\nplt.figure()\nbins1d=20\nx,eff1d,err1d = efficiency(np.log10(oiiflux),good,bins=bins1d)\nx,meff1d,merr,nn = prof(np.log10(oiiflux),meff,bins=bins1d)\nplt.errorbar(x,eff1d,err1d,fmt=\"o\",label=\"input\")\nplt.plot(x,meff1d,\"-\",label=\"model\")\n\nif qgood is not None : #quickcat output\n x,eff1d,err1d = efficiency(np.log10(oiiflux),qgood,bins=bins1d)\n plt.errorbar(x,eff1d,err1d,fmt=\"x\",label=\"qcat run\")\n\nplt.legend(loc=\"lower 
right\")\nplt.xlabel(\"log10(oiiflux)\")\nplt.ylabel(\"efficiency\")\n\n\nplt.figure()\nfcut=8e-17\nmcut=22.5\ns=(oiiflux<fcut)&(rmag>mcut) # select faint ones to increase contrast in z\nbins=100\nx,eff1d,err1d = efficiency(tz[s],good[s],bins=bins)\nx,meff1d,merr,nn = prof(tz[s],meff[s],bins=bins)\nplt.errorbar(x,eff1d,err1d,fmt=\"o\",label=\"input\")\nplt.plot(x,meff1d,\"-\",label=\"model\")\n\nif qgood is not None : #quickcat output\n x,eff1d,err1d = efficiency(tz[s],qgood[s],bins=bins1d)\n plt.errorbar(x,eff1d,err1d,fmt=\"x\",label=\"qcat run\")\n\n\n\nplt.legend(loc=\"upper left\",title=\"Faint ELGs with [OII] flux<{} and rmag>{}\".format(fcut,mcut))\nplt.xlabel(\"redshift\")\nplt.ylabel(\"efficiency\")\nplt.ylim([0.,1.4])", "ELG redshift uncertainty\nPower law of [OII] flux (proxy for all lines)", "#ELG redshift uncertainty\n######################\nelgs=(table[\"TEMPLATETYPE\"]==\"ELG\")&(table[\"TRUEZ\"]>0.6)&(table[\"TRUEZ\"]<1.6)\nz=table[\"Z\"][elgs]\ndz=z-table[\"TRUEZ\"][elgs]\ngood=(table[\"ZWARN\"][elgs]==0)&(np.abs(dz/(1+z))<0.003)\nrflux=table[\"FLUX_R\"][elgs]\nprint(\"Number of ELGs={}\".format(rflux.size))\nrflux=rflux*(rflux>0)+0.00001*(rflux<=0)\noiiflux=table[\"OIIFLUX\"][elgs]\noiiflux=oiiflux*(oiiflux>0)+1e-20*(oiiflux<=0)\nlflux=np.log10(oiiflux)\n\nqz=None\nqdz=None\nif qtable is not None : # quickcat output\n qz=qtable[\"Z\"][elgs]\n qdz=qz-qtable[\"TRUEZ\"][elgs]\n qgood=(qtable[\"ZWARN\"][elgs]==0)&(np.abs(qdz/(1+qz))<0.003)\n\n######################\n\nbins=20\n\nbinlflux,var,err,nn=prof(lflux[good],((dz/(1+z))**2)[good],bins=bins)\nbinflux=10**(binlflux)\nvar_err = np.sqrt(2/nn)*var\nrms=np.sqrt(var)\nrmserr=0.5*var_err/rms\n\ndef redshift_error(params,flux) :\n return params[0]/(1e-9+flux)**params[1]\n\ndef redshift_error_residuals(params,flux,rms,rmserror) :\n model = redshift_error(params,flux)\n res = (rms-model)/np.sqrt(rmserror**2+1e-6**2)\n return res\n 
\n#plt.plot(binlflux,rms,\"o\",label=\"meas\")\nplt.errorbar(binlflux,rms,rmserr,fmt=\"o\",label=\"sim\")\nparams=[0.0006,1.]\nbinoiiflux=np.array(10**binlflux)\nresult=scipy.optimize.least_squares(redshift_error_residuals,params,args=(binoiiflux*1e17,rms,rmserr))\nparams=result.x\nelg_uncertainty_params = params\nprint(\"ELG redshift uncertainty parameters = \",params)\nquickcat_params[\"ELG\"][\"UNCERTAINTY\"]=dict()\nquickcat_params[\"ELG\"][\"UNCERTAINTY\"][\"SIGMA_17\"]=float(elg_uncertainty_params[0])\nquickcat_params[\"ELG\"][\"UNCERTAINTY\"][\"POWER_LAW_INDEX\"]=float(elg_uncertainty_params[1])\n\nm=redshift_error(params,10**binlflux*1e17)\nplt.plot(binlflux,m,\"-\",label=\"model\")\n\nif qz is not None :\n qbinlflux,qvar,qerr,nn=prof(lflux[qgood],((qdz/(1+qz))**2)[qgood],bins=bins)\n qbinflux=10**(qbinlflux)\n qvar_err = np.sqrt(2/nn)*qvar\n qrms=np.sqrt(qvar)\n qrmserr=0.5*qvar_err/qrms\n plt.errorbar(qbinlflux,qrms,qrmserr,fmt=\"x\",label=\"quickcat\")\n\nplt.legend(loc=\"upper right\",title=\"ELG\")\nplt.xlabel(\"log10([oII] flux)\")\nplt.ylabel(\"rms dz/(1+z)\")", "ELG catastrophic failure rate\nFraction of targets with ZWARN=0 and $|\\Delta z/(1+z)|>0.003$", "nbad = np.sum((table[\"ZWARN\"][elgs]==0)&(np.abs(dz/(1+z))>0.003))\nntot = np.sum(table[\"ZWARN\"][elgs]==0)\nfrac = float(nbad/float(ntot))\nprint(\"ELG catastrophic failure rate={}/{}={:4.3f}\".format(nbad,ntot,frac))\nquickcat_params[\"ELG\"][\"FAILURE_RATE\"]=frac\n\nqnbad = np.sum((qtable[\"ZWARN\"][elgs]==0)&(np.abs(qdz/(1+qz))>0.003))\nqntot = np.sum(qtable[\"ZWARN\"][elgs]==0)\nqfrac = float(qnbad/float(qntot))\nprint(\"quickcat run ELG catastrophic failure rate={}/{}={:4.3f}\".format(qnbad,qntot,qfrac))\n", "LRG redshift efficiency\nSigmoid function of the r-band magnitude\n$Eff = \\frac{1}{1+exp( ( rmag - a ) / b )}$", "# simply use RFLUX for 
snr\n######################\nlrgs=(table[\"TEMPLATETYPE\"]==\"LRG\")\nz=table[\"Z\"][lrgs]\ntz=table[\"TRUEZ\"][lrgs]\ndz=z-tz\ngood=(table[\"ZWARN\"][lrgs]==0)\nrflux=table[\"FLUX_R\"][lrgs]\nprint(\"Number of LRGs={}\".format(rflux.size))\nrflux=rflux*(rflux>0)+0.00001*(rflux<=0)\nrmag=-2.5*np.log10(rflux)+22.5\n\nqgood=None\nif qtable is not None : #quickcat output\n qgood=(qtable[\"ZWARN\"][lrgs]==0)\n\n\n######################\n\nbins=15\nbin_rmag,eff,err=efficiency(rmag,good,bins=bins)\nprint(\"eff=\",eff)\nprint(\"err=\",err)\nplt.errorbar(bin_rmag,eff,err,fmt=\"o\",label=\"sim\")\n\n\ndef sigmoid(params,x) :\n return 1/(1+np.exp((x-params[0])/params[1]))\n\ndef sigmoid_residuals(params,x,y,err) :\n m = sigmoid(params,x)\n res = (m-y)/err\n return res\n\nlrg_efficiency_params=[26.,1.]\nresult=scipy.optimize.least_squares(sigmoid_residuals,lrg_efficiency_params,args=(bin_rmag,eff,err)) \nlrg_efficiency_params=result.x\nplt.plot(bin_rmag,sigmoid(lrg_efficiency_params,bin_rmag),\"-\",label=\"model\")\n\nif qgood is not None:\n bin_rmag,eff,err=efficiency(rmag,qgood,bins=bins)\n plt.errorbar(bin_rmag,eff,err,fmt=\"x\",label=\"quickcat run\")\n\nplt.xlabel(\"rmag\")\nplt.ylabel(\"efficiency\")\nplt.legend(loc=\"lower left\")\n\nprint(\"LRG redshift efficiency parameters = \",lrg_efficiency_params)\n\nquickcat_params[\"LRG\"]=dict()\nquickcat_params[\"LRG\"][\"EFFICIENCY\"]=dict()\nquickcat_params[\"LRG\"][\"EFFICIENCY\"][\"SIGMOID_CUTOFF\"]=float(lrg_efficiency_params[0])\nquickcat_params[\"LRG\"][\"EFFICIENCY\"][\"SIGMOID_FUDGE\"]=float(lrg_efficiency_params[1])\n\n\n\n\nmeff=sigmoid(lrg_efficiency_params,rmag)\nplt.figure()\nmcut=22.\ns=(rmag>mcut) # select faint ones to increase contrast in z\nbins=50\nx,eff1d,err1d = efficiency(tz[s],good[s],bins=bins)\nx,meff1d,merr,nn = prof(tz[s],meff[s],bins=bins)\nplt.errorbar(x,eff1d,err1d,fmt=\"o\",label=\"sim\")\nplt.plot(x,meff1d,\"-\",label=\"model\")\nplt.legend(loc=\"upper left\",title=\"Faint LRGs with 
rmag>{}\".format(mcut))\nplt.xlabel(\"redshift\")\nplt.ylabel(\"efficiency\")\n", "LRG redshift uncertainty\nPower law of broad band flux", "# LRGs redshift uncertainties\n\n######################\nlrgs=(table[\"TEMPLATETYPE\"]==\"LRG\")\nz=table[\"Z\"][lrgs]\ntz=table[\"TRUEZ\"][lrgs]\ndz=z-tz\ngood=(table[\"ZWARN\"][lrgs]==0)&(np.abs(dz/(1+z))<0.003)\nrflux=table[\"FLUX_R\"][lrgs]\nprint(\"Number of LRGs={}\".format(rflux.size))\nrflux=rflux*(rflux>0)+0.00001*(rflux<=0)\nrmag=-2.5*np.log10(rflux)+22.5\n\nqz=None\nqdz=None\nif qtable is not None : # quickcat output\n qz=qtable[\"Z\"][lrgs]\n qdz=qz-qtable[\"TRUEZ\"][lrgs]\n qgood=(qtable[\"ZWARN\"][lrgs]==0)&(np.abs(qdz/(1+qz))<0.003)\n\n\n######################\n\nbins=20\nbinmag,var,err,nn=prof(rmag[good],((dz/(1+z))**2)[good],bins=bins)\nbinflux=10**(-0.4*(binmag-22.5))\nvar_err = np.sqrt(2/nn)*var\nrms=np.sqrt(var)\nrmserr=0.5*var_err/rms\n\nparams=[1.,1.2]\nresult=scipy.optimize.least_squares(redshift_error_residuals,params,args=(binflux,rms,rmserr))\nparams=result.x\nprint(\"LRG redshift error parameters = \",params)\nquickcat_params[\"LRG\"][\"UNCERTAINTY\"]=dict()\nquickcat_params[\"LRG\"][\"UNCERTAINTY\"][\"SIGMA_17\"]=float(params[0])\nquickcat_params[\"LRG\"][\"UNCERTAINTY\"][\"POWER_LAW_INDEX\"]=float(params[1])\n\nmodel = redshift_error(params,binflux)\n\nplt.errorbar(binmag,rms,rmserr,fmt=\"o\",label=\"sim\")\nplt.plot(binmag,model,\"-\",label=\"model\")\n\n\nif qz is not None :\n qbinmag,qvar,qerr,nn=prof(rmag[qgood],((qdz/(1+qz))**2)[qgood],bins=bins)\n qvar_err = np.sqrt(2/nn)*qvar\n qrms=np.sqrt(qvar)\n qrmserr=0.5*qvar_err/qrms\n plt.errorbar(qbinmag,qrms,qrmserr,fmt=\"x\",label=\"quickcat\")\n\n\n\n\nplt.legend(loc=\"upper left\",title=\"LRG\")\nplt.xlabel(\"rmag\")\nplt.ylabel(\"rms dz/(1+z)\")", "LRG catastrophic failure rate\nFraction of targets with ZWARN=0 and $|\\Delta z/(1+z)|>0.003$", "nbad = np.sum((table[\"ZWARN\"][lrgs]==0)&(np.abs(dz/(1+z))>0.003))\nntot = 
np.sum(table[\"ZWARN\"][lrgs]==0)\nfrac = float(nbad/float(ntot))\nprint(\"LRG catastrophic failure rate={}/{}={:4.3f}\".format(nbad,ntot,frac))\nquickcat_params[\"LRG\"][\"FAILURE_RATE\"]=frac\n\nqnbad = np.sum((qtable[\"ZWARN\"][lrgs]==0)&(np.abs(qdz/(1+qz))>0.003))\nqntot = np.sum(qtable[\"ZWARN\"][lrgs]==0)\nqfrac = float(qnbad/float(qntot))\nprint(\"quickcat run LRG catastrophic failure rate={}/{}={:4.3f}\".format(qnbad,qntot,qfrac))\n\n\n# choice of redshift for splitting between \"lower z / tracer\" QSOs and Lya QSOs\nzsplit = 2.0", "QSO tracers (z<~2) redshift efficiency\nSigmoid function of the r-band magnitude\n$Eff = \\frac{1}{1+exp( ( rmag - a ) / b )}$", "# simply use RFLUX for snr\n######################\nqsos=(table[\"TEMPLATETYPE\"]==\"QSO\")&(table[\"TRUEZ\"]<zsplit)\nz=table[\"Z\"][qsos]\ntz=table[\"TRUEZ\"][qsos]\ndz=z-tz\ngood=(table[\"ZWARN\"][qsos]==0)\nrflux=table[\"FLUX_R\"][qsos]\nprint(\"Number of QSOs={}\".format(rflux.size))\nrflux=rflux*(rflux>0)+0.00001*(rflux<=0)\nrmag=-2.5*np.log10(rflux)+22.5\n\nqgood=None\nif qtable is not None : # quickcat output\n qgood=(qtable[\"ZWARN\"][qsos]==0)\n\n\n######################\n\nbins=30\nbin_rmag,eff,err=efficiency(rmag,good,bins=bins)\nplt.errorbar(bin_rmag,eff,err,fmt=\"o\",label=\"sim\")\nqso_efficiency_params=[23.,0.3]\nresult=scipy.optimize.least_squares(sigmoid_residuals,qso_efficiency_params,args=(bin_rmag,eff,err)) \nqso_efficiency_params=result.x\nplt.plot(bin_rmag,sigmoid(qso_efficiency_params,bin_rmag),\"-\",label=\"model\")\n\nif qgood is not None :\n bin_rmag,eff,err=efficiency(rmag,qgood,bins=bins)\n plt.errorbar(bin_rmag,eff,err,fmt=\"x\",label=\"quickcat run\")\n\nplt.xlabel(\"rmag\")\nplt.ylabel(\"efficiency\")\nplt.legend(loc=\"lower left\")\n\nprint(\"QSO redshift efficiency parameters = 
\",qso_efficiency_params)\nquickcat_params[\"QSO_ZSPLIT\"]=zsplit\nquickcat_params[\"LOWZ_QSO\"]=dict()\nquickcat_params[\"LOWZ_QSO\"][\"EFFICIENCY\"]=dict()\nquickcat_params[\"LOWZ_QSO\"][\"EFFICIENCY\"][\"SIGMOID_CUTOFF\"]=float(qso_efficiency_params[0])\nquickcat_params[\"LOWZ_QSO\"][\"EFFICIENCY\"][\"SIGMOID_FUDGE\"]=float(qso_efficiency_params[1])\n\nmeff=sigmoid(qso_efficiency_params,rmag)\nplt.figure()\nmcut=22.\ns=(rmag>mcut) # select faint ones to increase contrast in z\nbins=50\nx,eff1d,err1d = efficiency(tz[s],good[s],bins=bins)\nx,meff1d,merr,nn = prof(tz[s],meff[s],bins=bins)\nplt.errorbar(x,eff1d,err1d,fmt=\"o\",label=\"input\")\nplt.plot(x,meff1d,\"-\",label=\"model\")\n\nif qgood is not None :\n x,qeff1d,qerr1d = efficiency(tz[s],qgood[s],bins=bins)\n plt.errorbar(x,qeff1d,qerr1d,fmt=\"x\",label=\"quickcat\")\n \nplt.legend(loc=\"upper left\",title=\"Faint tracer QSOs with rmag>{}\".format(mcut))\nplt.xlabel(\"redshift\")\nplt.ylabel(\"efficiency\")\nplt.ylim([0.5,1.2])", "QSO (z<2) redshift uncertainty\nPower law of broad band flux", "# QSO redshift uncertainties\nqsos=(table[\"TEMPLATETYPE\"]==\"QSO\")&(table[\"TRUEZ\"]<zsplit)\nz=table[\"Z\"][qsos]\ndz=z-table[\"TRUEZ\"][qsos]\ngood=(table[\"ZWARN\"][qsos]==0)&(np.abs(dz/(1+z))<0.01)\nrflux=table[\"FLUX_R\"][qsos]\nprint(\"Number of QSOs={}\".format(rflux.size))\nrflux=rflux*(rflux>0)+0.00001*(rflux<=0)\nrmag=-2.5*np.log10(rflux)+22.5\n\nqgood=None\nqz=None\nqdz=None\nif qtable is not None : # quickcat output\n qz=qtable[\"Z\"][qsos]\n qdz=qz-qtable[\"TRUEZ\"][qsos]\n qgood=(qtable[\"ZWARN\"][qsos]==0)&(np.abs(qdz/(1+qz))<0.01)\n\n\n\nbins=20\nbinmag,var,err,nn=prof(rmag[good],((dz/(1+z))**2)[good],bins=bins)\nbinflux=10**(-0.4*(binmag-22.5))\nvar_err = np.sqrt(2/nn)*var\nrms=np.sqrt(var)\nrmserr=0.5*var_err/rms\n\nparams=[1.,1.2]\nresult=scipy.optimize.least_squares(redshift_error_residuals,params,args=(binflux,rms,rmserr))\nparams=result.x\nprint(\"QSO redshift error parameters = 
\",params)\nquickcat_params[\"LOWZ_QSO\"][\"UNCERTAINTY\"]=dict()\nquickcat_params[\"LOWZ_QSO\"][\"UNCERTAINTY\"][\"SIGMA_17\"]=float(params[0])\nquickcat_params[\"LOWZ_QSO\"][\"UNCERTAINTY\"][\"POWER_LAW_INDEX\"]=float(params[1])\n\nmodel = redshift_error(params,binflux)\n\nplt.errorbar(binmag,rms,rmserr,fmt=\"o\",label=\"sim\")\nplt.plot(binmag,model,\"-\",label=\"model\")\n\nif qz is not None :\n qbinmag,qvar,qerr,nn=prof(rmag[qgood],((qdz/(1+qz))**2)[qgood],bins=bins)\n qvar_err = np.sqrt(2/nn)*qvar\n qrms=np.sqrt(qvar)\n qrmserr=0.5*qvar_err/qrms\n plt.errorbar(qbinmag,qrms,qrmserr,fmt=\"x\",label=\"quickcat\")\n\nplt.legend(loc=\"upper left\",title=\"Tracer QSO\")\nplt.xlabel(\"rmag\")\nplt.ylabel(\"rms dz/(1+z)\")", "Tracer QSO (z<~2) catastrophic failure rate\nFraction of targets with ZWARN=0 and $|\\Delta z/(1+z)|>0.003$", "nbad = np.sum((table[\"ZWARN\"][qsos]==0)&(np.abs(dz/(1+z))>0.003))\nntot = np.sum(table[\"ZWARN\"][qsos]==0)\nfrac = float(nbad/float(ntot))\nprint(\"Tracer QSO catastrophic failure rate={}/{}={:4.3f}\".format(nbad,ntot,frac))\nquickcat_params[\"LOWZ_QSO\"][\"FAILURE_RATE\"]=frac\n\nqnbad = np.sum((qtable[\"ZWARN\"][qsos]==0)&(np.abs(qdz/(1+qz))>0.003))\nqntot = np.sum(qtable[\"ZWARN\"][qsos]==0)\nqfrac = float(qnbad/float(qntot))\nprint(\"quickcat run tracer QSO catastrophic failure rate={}/{}={:4.3f}\".format(qnbad,qntot,qfrac))\n\n\n", "Lya QSO (z>~2) redshift efficiency\nSigmoid function of the r-band magnitude\n$Eff = \\frac{1}{1+exp( ( rmag - a ) / b )}$", "# simply use RFLUX for snr\n######################\nqsos=(table[\"TEMPLATETYPE\"]==\"QSO\")&(table[\"TRUEZ\"]>zsplit)\nz=table[\"Z\"][qsos]\ntz=table[\"TRUEZ\"][qsos]\ndz=z-tz\ngood=(table[\"ZWARN\"][qsos]==0)\nrflux=table[\"FLUX_R\"][qsos]\nprint(\"Number of QSOs={}\".format(rflux.size))\nrflux=rflux*(rflux>0)+0.00001*(rflux<=0)\nrmag=-2.5*np.log10(rflux)+22.5\n\nqgood=None\nif qtable is not None : # quickcat output\n 
qgood=(qtable[\"ZWARN\"][qsos]==0)\n\n\n######################\n\nbins=30\nbin_rmag,eff,err=efficiency(rmag,good,bins=bins)\nplt.errorbar(bin_rmag,eff,err,fmt=\"o\",label=\"sim\")\nqso_efficiency_params=[23.,0.3]\nresult=scipy.optimize.least_squares(sigmoid_residuals,qso_efficiency_params,args=(bin_rmag,eff,err)) \nqso_efficiency_params=result.x\nplt.plot(bin_rmag,sigmoid(qso_efficiency_params,bin_rmag),\"-\",label=\"model\")\n\nif qgood is not None :\n bin_rmag,eff,err=efficiency(rmag,qgood,bins=bins)\n plt.errorbar(bin_rmag,eff,err,fmt=\"x\",label=\"quickcat run\")\n\nplt.xlabel(\"rmag\")\nplt.ylabel(\"efficiency\")\nplt.legend(loc=\"lower left\")\n\nprint(\"QSO redshift efficiency parameters = \",qso_efficiency_params)\n\nquickcat_params[\"LYA_QSO\"]=dict()\nquickcat_params[\"LYA_QSO\"][\"EFFICIENCY\"]=dict()\nquickcat_params[\"LYA_QSO\"][\"EFFICIENCY\"][\"SIGMOID_CUTOFF\"]=float(qso_efficiency_params[0])\nquickcat_params[\"LYA_QSO\"][\"EFFICIENCY\"][\"SIGMOID_FUDGE\"]=float(qso_efficiency_params[1])\n\nmeff=sigmoid(qso_efficiency_params,rmag)\nplt.figure()\nmcut=22.5\ns=(rmag>mcut) # select faint ones to increase contrast in z\nbins=50\nx,eff1d,err1d = efficiency(tz[s],good[s],bins=bins)\nx,meff1d,merr,nn = prof(tz[s],meff[s],bins=bins)\nplt.errorbar(x,eff1d,err1d,fmt=\"o\",label=\"sim\")\nplt.plot(x,meff1d,\"-\",label=\"model\")\nplt.legend(loc=\"upper left\",title=\"Faint Lya QSOs with rmag>{}\".format(mcut))\nplt.xlabel(\"redshift\")\nplt.ylabel(\"efficiency\")\nplt.ylim([0.,1.4])", "Lya QSO (z>2) redshift uncertainty\nPower law of broad band flux", "# QSO redshift uncertainties\nqsos=(table[\"TEMPLATETYPE\"]==\"QSO\")&(table[\"TRUEZ\"]>zsplit)\nz=table[\"Z\"][qsos]\ndz=z-table[\"TRUEZ\"][qsos]\ngood=(table[\"ZWARN\"][qsos]==0)&(np.abs(dz/(1+z))<0.01)\nrflux=table[\"FLUX_R\"][qsos]\nprint(\"Number of QSOs={}\".format(rflux.size))\nrflux=rflux*(rflux>0)+0.00001*(rflux<=0)\nrmag=-2.5*np.log10(rflux)+22.5\n\nqgood=None\nqz=None\nqdz=None\nif qtable is not None 
: # quickcat output\n qz=qtable[\"Z\"][qsos]\n qdz=qz-qtable[\"TRUEZ\"][qsos]\n qgood=(qtable[\"ZWARN\"][qsos]==0)&(np.abs(qdz/(1+qz))<0.01)\n\n\n\nbins=20\nbinmag,var,err,nn=prof(rmag[good],((dz/(1+z))**2)[good],bins=bins)\nbinflux=10**(-0.4*(binmag-22.5))\nvar_err = np.sqrt(2/nn)*var\nrms=np.sqrt(var)\nrmserr=0.5*var_err/rms\n\nparams=[1.,1.2]\nresult=scipy.optimize.least_squares(redshift_error_residuals,params,args=(binflux,rms,rmserr))\nparams=result.x\nprint(\"LYA_QSO redshift error parameters = \",params)\nquickcat_params[\"LYA_QSO\"][\"UNCERTAINTY\"]=dict()\nquickcat_params[\"LYA_QSO\"][\"UNCERTAINTY\"][\"SIGMA_17\"]=float(params[0])\nquickcat_params[\"LYA_QSO\"][\"UNCERTAINTY\"][\"POWER_LAW_INDEX\"]=float(params[1])\n\nmodel = redshift_error(params,binflux)\n\nplt.errorbar(binmag,rms,rmserr,fmt=\"o\",label=\"sim\")\nplt.plot(binmag,model,\"-\",label=\"model\")\n\nif qz is not None :\n qbinmag,qvar,qerr,nn=prof(rmag[qgood],((qdz/(1+qz))**2)[qgood],bins=bins)\n qvar_err = np.sqrt(2/nn)*qvar\n qrms=np.sqrt(qvar)\n qrmserr=0.5*qvar_err/qrms\n plt.errorbar(qbinmag,qrms,qrmserr,fmt=\"x\",label=\"quickcat\")\n\nplt.legend(loc=\"upper left\",title=\"Lya QSO\")\nplt.xlabel(\"rmag\")\nplt.ylabel(\"rms dz/(1+z)\")", "Lya QSO (z>~2) catastrophic failure rate\nFraction of targets with ZWARN=0 and $|\\Delta z/(1+z)|>0.003$", "nbad = np.sum((table[\"ZWARN\"][qsos]==0)&(np.abs(dz/(1+z))>0.003))\nntot = np.sum(table[\"ZWARN\"][qsos]==0)\nfrac = float(nbad/float(ntot))\nprint(\"Lya QSO catastrophic failure rate={}\".format(frac))\nquickcat_params[\"LYA_QSO\"][\"FAILURE_RATE\"]=frac\n\n# write results to a yaml file\nwith open(quickcat_param_filename, 'w') as outfile:\n yaml.dump(quickcat_params, outfile, default_flow_style=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
daniestevez/jupyter_notebooks
Lucy/Lucy frames Bochum 2021-10-24.ipynb
gpl-3.0
[ "%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom construct import *\nimport scipy.signal\n\nimport sys\nsys.path.append('../Tianwen/') # to import ccsds\nimport ccsds\n\nimport struct\nimport collections\nimport pathlib\n\nAOSFrame = Struct(\n 'primary_header' / ccsds.AOSPrimaryHeader,\n 'm_pdu_header' / ccsds.M_PDU_Header,\n 'm_pdu_packet_zone' / GreedyBytes\n)\n\ndef packets_asarray(packets):\n packets = [np.frombuffer(p[ccsds.SpacePacketPrimaryHeader.sizeof():], 'uint8')\n for p in packets]\n l = np.max([p.size for p in packets])\n packets = [np.concatenate((p, np.zeros(l-p.size, 'uint8'))) for p in packets]\n return np.array(packets)\n\ndef plot_apids(apids, vc=0):\n for apid in sorted(apids.keys()):\n plt.figure(figsize = (16,16), facecolor = 'w')\n ps = packets_asarray(apids[apid])\n plt.imshow(ps, aspect = ps.shape[1]/ps.shape[0], interpolation='none')\n plt.title(f\"Lucy APID {apid} Virtual channel {vc}\")", "Timestamps are contained in the Space Packet secondary header time code field. They are encoded as big-endian 32-bit integers counting the number of seconds elapsed since the J2000 epoch (2000-01-01T12:00:00).\nLooking at the idle APID packets, the next byte might indicate fractional seconds (since it is still part of the secondary header rather than idle data), but it is difficult to be sure.", "def timestamps(packets):\n epoch = np.datetime64('2000-01-01T12:00:00')\n t = np.array([struct.unpack('>I', p[ccsds.SpacePacketPrimaryHeader.sizeof():][:4])[0]\n for p in packets], 'uint32')\n return epoch + t * np.timedelta64(1, 's')\n\ndef load_frames(path):\n frame_size = 223 * 5 - 2\n frames = np.fromfile(path, dtype = 'uint8')\n frames = frames[:frames.size//frame_size*frame_size].reshape((-1, frame_size))\n return frames\n\nframes = load_frames('lucy_frames_bochum_20211024_214614.u8')\nframes.shape[0]", "AOS frames\nTelemetry is in Virtual Channel 0. 
Virtual channel 63 contains Only Idle Data.", "aos = [AOSFrame.parse(f) for f in frames]\n\ncollections.Counter([a.primary_header.transfer_frame_version_number for a in aos])\n\ncollections.Counter([a.primary_header.spacecraft_id for a in aos])\n\ncollections.Counter([a.primary_header.virtual_channel_id for a in aos])", "Virtual Channel 63 (Only Idle Data)\nVirtual channel 63 corresponds to Only Idle Data. The transfer frame data field includes an M_PDU header with a first header pointer equal to 0x7fe, which indicates that the packet zone contains only idle data. The packet zone is filled with 0xaa's.", "vc63 = [a for a in aos if a.primary_header.virtual_channel_id == 63]\n[a.primary_header for a in vc63[:10]]\n\nvc63[0]\n\nvc63_frames = np.array([f for f, a in zip(frames, aos) if a.primary_header.virtual_channel_id == 63])\n\nnp.unique(vc63_frames[:, 6:8], axis = 0)\n\nbytes(vc63_frames[0, 6:8]).hex()\n\nnp.unique(vc63_frames[:, 8:])\n\nhex(170)\n\nfc = np.array([a.primary_header.virtual_channel_frame_count for a in vc63])\n\nplt.figure(figsize = (10, 5), facecolor = 'w')\nplt.plot(fc[1:], np.diff(fc)-1, '.')\nplt.title(\"Lucy virtual channel 63 (OID) frame loss\")\nplt.xlabel('Virtual channel frame counter')\nplt.ylabel('Lost frames');\n\nfc.size/(fc[-1]-fc[0]+1)", "Virtual channel 0\nVirtual channel 0 contains telemetry. 
There are a few active APIDs sending CCSDS Space Packets using the AOS M_PDU protocol.", "vc0 = [a for a in aos if a.primary_header.virtual_channel_id == 0]\n[a.primary_header for a in vc0[:10]]\n\nfc = np.array([a.primary_header.virtual_channel_frame_count for a in vc0])\n\nplt.figure(figsize = (10, 5), facecolor = 'w')\nplt.plot(fc[1:], np.diff(fc)-1, '.')\nplt.title(\"Lucy virtual channel 0 (telemetry) frame loss\")\nplt.xlabel('Virtual channel frame counter')\nplt.ylabel('Lost frames');\n\nfc.size/(fc[-1]-fc[0]+1)\n\nvc0_packets = list(ccsds.extract_space_packets(vc0, 49, 0))\n\nvc0_t = timestamps(vc0_packets)\n\nvc0_sp_headers = [ccsds.SpacePacketPrimaryHeader.parse(p) for p in vc0_packets]\n\nvc0_apids = collections.Counter([p.APID for p in vc0_sp_headers])\nvc0_apids\n\napid_axis = {a : k for k, a in enumerate(sorted(vc0_apids))}\n\nplt.figure(figsize = (10, 5), facecolor = 'w')\nplt.plot(vc0_t, [apid_axis[p.APID] for p in vc0_sp_headers], '.')\nplt.yticks(ticks=range(len(apid_axis)), labels=apid_axis)\nplt.xlabel('Space Packet timestamp')\nplt.ylabel('APID')\nplt.title('Lucy Virtual Channel 0 APID distribution');\n\nvc0_by_apid = {apid : [p for h,p in zip(vc0_sp_headers, vc0_packets)\n if h.APID == apid] for apid in vc0_apids}\n\nplot_apids(vc0_by_apid)", "APID 5\nAs found by r00t this APID has frames of fixed size containing a number of fields in tag-value format. 
Tags are 2 bytes, and values have different formats and sizes depending on the tag.", "tags = {2: Int16ub, 3: Int16ub, 15: Int32ub, 31: Int16ub, 32: Int16ub, 1202: Float64b,\n 1203: Float64b, 1204: Float64b, 1205: Float64b, 1206: Float64b, 1208: Float32b,\n 1209: Float32b, 1210: Float32b, 1601: Float32b, 1602: Float32b, 1603: Float32b,\n 1630: Float32b, 1631: Float32b, 1632: Float32b, 17539: Float32b, 17547: Float32b,\n 17548: Float32b, 21314: Int32sb, 21315: Int32sb, 21316: Int32sb, 21317: Int32sb,\n 46555: Int32sb, 46980: Int16ub, 46981: Int16ub, 46982: Int16ub, 47090: Int16ub,\n 47091: Int16ub, 47092: Int16ub,\n }\n\nvalues = list()\nfor packet in vc0_by_apid[5]:\n t = timestamps([packet])[0]\n packet = packet[6+5:] # skip primary and secondary headers\n while True:\n tag = Int16ub.parse(packet)\n packet = packet[2:]\n value = tags[tag].parse(packet)\n packet = packet[tags[tag].sizeof():]\n values.append((tag, value, t))\n if len(packet) == 0:\n break\n \nvalues_keys = {v[0] for v in values}\nvalues = {k: [(v[2], v[1]) for v in values if v[0] == k] for k in values_keys}\n\nfor k in sorted(values_keys):\n vals = values[k]\n plt.figure()\n plt.title(f'Key {k}')\n plt.plot([v[0] for v in vals], [v[1] for v in vals], '.')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
saashimi/code_guild
wk2/extras/arrays_strings/unique_chars/unique_chars_solution.ipynb
mit
[ "<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>\nSolution Notebook\nProblem: Implement an algorithm to determine if a string has all unique characters\n\nConstraints\nTest Cases\nAlgorithm 1: Sets and Length Comparison\nCode: Sets and Length Comparison\nAlgorithm 2: Hash Map Lookup\nCode: Hash Map Lookup\nAlgorithm 3: In-Place\nCode: In-Place\nUnit Test\n\nConstraints\n\nCan you assume the string is ASCII?\nYes\nNote: Unicode strings could require special handling depending on your language\n\n\nCan we assume this is case sensitive?\nYes\n\n\nCan you use additional data structures? \nYes\n\n\n\nTest Cases\n\n'' -> True\n'foo' -> False\n'bar' -> True\n\nAlgorithm 1: Sets and Length Comparison\nA set is an unordered collection of unique elements. \n\nIf the length of the set(string) equals the length of the string\nReturn True\n\n\nElse\nReturn False\n\n\n\nComplexity:\n* Time: O(n)\n* Space: Additional O(n)\nCode: Sets and Length Comparison", "def unique_chars(string):\n return len(set(string)) == len(string)", "Algorithm 2: Hash Map Lookup\nWe'll keep a hash map (set) to keep track of unique characters we encounter. 
\nSteps:\n* Scan each character\n* For each character:\n * If the character does not exist in a hash map, add the character to a hash map\n * Else, return False\n* Return True\nNotes:\n* We could also use a dictionary, but it seems more logical to use a set as it does not contain duplicate elements\n* Since the characters are in ASCII, we could potentially use an array of size 128 (or 256 for extended ASCII)\nComplexity:\n* Time: O(n)\n* Space: Additional O(n)\nCode: Hash Map Lookup", "def unique_chars_hash(string):\n chars_set = set()\n for char in string:\n if char in chars_set:\n return False\n else:\n chars_set.add(char)\n return True", "Algorithm 3: In-Place\nAssume we cannot use additional data structures, which will eliminate the fast lookup O(1) time provided by our hash map.\n* Scan each character\n* For each character:\n * Scan all [other] characters in the array\n * Excluding the current character from the scan is rather tricky in Python and results in a non-Pythonic solution\n * If there is a match, return False\n* Return True\nAlgorithm Complexity:\n* Time: O(n^2)\n* Space: O(1)\nCode: In-Place", "def unique_chars_inplace(string):\n for char in string:\n if string.count(char) > 1:\n return False\n return True", "Unit Test", "%%writefile test_unique_chars.py\nfrom nose.tools import assert_equal\n\n\nclass TestUniqueChars(object):\n\n def test_unique_chars(self, func):\n assert_equal(func(''), True)\n assert_equal(func('foo'), False)\n assert_equal(func('bar'), True)\n print('Success: test_unique_chars')\n\n\ndef main():\n test = TestUniqueChars()\n test.test_unique_chars(unique_chars)\n try:\n test.test_unique_chars(unique_chars_hash)\n test.test_unique_chars(unique_chars_inplace)\n except NameError:\n # Alternate solutions are only defined\n # in the solutions file\n pass\n\n\nif __name__ == '__main__':\n main()\n\n%run -i test_unique_chars.py" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pgmpy/pgmpy_notebook
notebooks/2. Bayesian Networks.ipynb
mit
[ "Bayesian Network", "from IPython.display import Image", "Bayesian Models\n\nWhat are Bayesian Models\nIndependencies in Bayesian Networks\nHow is Bayesian Model encoding the Joint Distribution\nHow we do inference from Bayesian models\nTypes of methods for inference\n\n1. What are Bayesian Models\nA Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are mostly used when we want to represent causal relationship between the random variables. Bayesian Networks are parameterized using Conditional Probability Distributions (CPD). Each node in the network is parameterized using $P(node | Pa(node))$ where $Pa(node)$ represents the parents of node in the network.\nWe can take the example of the student model:", "Image('../images/2/student_full_param.png')", "In pgmpy we define the network structure and the CPDs separately and then associate them with the structure. Here's an example for defining the above model:", "from pgmpy.models import BayesianModel\nfrom pgmpy.factors.discrete import TabularCPD\n\n# Defining the model structure. We can define the network by just passing a list of edges.\nmodel = BayesianModel([('D', 'G'), ('I', 'G'), ('G', 'L'), ('I', 'S')])\n\n# Defining individual CPDs.\ncpd_d = TabularCPD(variable='D', variable_card=2, values=[[0.6], [0.4]])\ncpd_i = TabularCPD(variable='I', variable_card=2, values=[[0.7], [0.3]])\n\n# The representation of CPD in pgmpy is a bit different than the CPD shown in the above picture. In pgmpy the colums\n# are the evidences and rows are the states of the variable. 
So the grade CPD is represented like this:\n#\n# +---------+---------+---------+---------+---------+\n# | diff | intel_0 | intel_0 | intel_1 | intel_1 |\n# +---------+---------+---------+---------+---------+\n# | intel | diff_0 | diff_1 | diff_0 | diff_1 |\n# +---------+---------+---------+---------+---------+\n# | grade_0 | 0.3 | 0.05 | 0.9 | 0.5 |\n# +---------+---------+---------+---------+---------+\n# | grade_1 | 0.4 | 0.25 | 0.08 | 0.3 |\n# +---------+---------+---------+---------+---------+\n# | grade_2 | 0.3 | 0.7 | 0.02 | 0.2 |\n# +---------+---------+---------+---------+---------+\n\ncpd_g = TabularCPD(variable='G', variable_card=3, \n values=[[0.3, 0.05, 0.9, 0.5],\n [0.4, 0.25, 0.08, 0.3],\n [0.3, 0.7, 0.02, 0.2]],\n evidence=['I', 'D'],\n evidence_card=[2, 2])\n\ncpd_l = TabularCPD(variable='L', variable_card=2, \n values=[[0.1, 0.4, 0.99],\n [0.9, 0.6, 0.01]],\n evidence=['G'],\n evidence_card=[3])\n\ncpd_s = TabularCPD(variable='S', variable_card=2,\n values=[[0.95, 0.2],\n [0.05, 0.8]],\n evidence=['I'],\n evidence_card=[2])\n\n# Associating the CPDs with the network\nmodel.add_cpds(cpd_d, cpd_i, cpd_g, cpd_l, cpd_s)\n\n# check_model checks for the network structure and CPDs and verifies that the CPDs are correctly \n# defined and sum to 1.\nmodel.check_model()\n\n# CPDs can also be defined using the state names of the variables. 
If the state names are not provided\n# like in the previous example, pgmpy will automatically assign names as: 0, 1, 2, ....\n\ncpd_d_sn = TabularCPD(variable='D', variable_card=2, values=[[0.6], [0.4]], state_names={'D': ['Easy', 'Hard']})\ncpd_i_sn = TabularCPD(variable='I', variable_card=2, values=[[0.7], [0.3]], state_names={'I': ['Dumb', 'Intelligent']})\ncpd_g_sn = TabularCPD(variable='G', variable_card=3, \n values=[[0.3, 0.05, 0.9, 0.5],\n [0.4, 0.25, 0.08, 0.3],\n [0.3, 0.7, 0.02, 0.2]],\n evidence=['I', 'D'],\n evidence_card=[2, 2],\n state_names={'G': ['A', 'B', 'C'],\n 'I': ['Dumb', 'Intelligent'],\n 'D': ['Easy', 'Hard']})\n\ncpd_l_sn = TabularCPD(variable='L', variable_card=2, \n values=[[0.1, 0.4, 0.99],\n [0.9, 0.6, 0.01]],\n evidence=['G'],\n evidence_card=[3],\n state_names={'L': ['Bad', 'Good'],\n 'G': ['A', 'B', 'C']})\n\ncpd_s_sn = TabularCPD(variable='S', variable_card=2,\n values=[[0.95, 0.2],\n [0.05, 0.8]],\n evidence=['I'],\n evidence_card=[2],\n state_names={'S': ['Bad', 'Good'],\n 'I': ['Dumb', 'Intelligent']})\n\n# These defined CPDs can be added to the model. Since the model already has CPDs associated to variables, it will\n# show a warning that pgmpy is now replacing those CPDs with the new ones.\nmodel.add_cpds(cpd_d_sn, cpd_i_sn, cpd_g_sn, cpd_l_sn, cpd_s_sn)\nmodel.check_model()\n\n# We can now call some methods on the BayesianModel object.\nmodel.get_cpds()\n\n# Printing a CPD which doesn't have state names defined.\nprint(cpd_g)\n\n# Printing a CPD with its state names defined.\nprint(model.get_cpds('G'))\n\nmodel.get_cardinality('G')", "2. Independencies in Bayesian Networks\nIndependencies implied by the network structure of a Bayesian Network can be categorized in 2 types:\n\n\nLocal Independencies: Any variable in the network is independent of its non-descendants given its parents.
Mathematically it can be written as: $$ (X \\perp NonDesc(X) | Pa(X)) $$\nwhere $NonDesc(X)$ is the set of variables which are not descendants of $X$ and $Pa(X)$ is the set of variables which are parents of $X$.\n\n\nGlobal Independencies: For discussing global independencies in Bayesian Networks we need to look at the various network structures possible. \nStarting with the case of 2 nodes, there are only 2 possible ways for it to be connected:", "Image('../images/2/two_nodes.png')", "In the above two cases it is fairly obvious that a change in any of the nodes will affect the other. For the first case we can take the example of $difficulty \\rightarrow grade$. If we increase the difficulty of the course the probability of getting a higher grade decreases. For the second case we can take the example of $SAT \\leftarrow Intel$. Now if we increase the probability of getting a good score in SAT that would imply that the student is intelligent, hence increasing the probability of $i_1$. Therefore in both the cases shown above any change in the variables leads to change in the other variable.\nNow, there are four possible ways of connection between 3 nodes:", "Image('../images/2/three_nodes.png')", "Now in the above cases we will see the flow of influence from $A$ to $C$ under various cases.\n\nCausal: In the general case when we make any changes in the variable $A$, it will have an effect on variable $B$ (as we discussed above) and this change in $B$ will change the values in $C$. One other possible case can be when $B$ is observed i.e. we know the value of $B$. So, in this case any change in $A$ won't affect $B$ since we already know the value. And hence there won't be any change in $C$ as it depends only on $B$. Mathematically we can say that: $(A \\perp C | B)$.\nEvidential: Similarly in this case also observing $B$ renders $C$ independent of $A$. Otherwise when $B$ is not observed the influence flows from $A$ to $C$.
Hence $(A \\perp C | B)$.\nCommon Evidence: This case is a bit different from the others. When $B$ is not observed any change in $A$ reflects some change in $B$ but not in $C$. Let's take the example of $D \\rightarrow G \\leftarrow I$. In this case if we increase the difficulty of the course the probability of getting a higher grade reduces but this has no effect on the intelligence of the student. But when $B$ is observed let's say that the student got a good grade. Now if we increase the difficulty of the course this will increase the probability of the student to be intelligent since we already know that he got a good grade. Hence in this case $(A \\perp C)$ and $( A \\not\\perp C | B)$. This structure is also commonly known as V structure.\nCommon Cause: The influence flows from $A$ to $C$ when $B$ is not observed. But when $B$ is observed, any change in $A$ doesn't affect $C$ since it's only dependent on $B$. Hence here also $( A \\perp C | B)$. \n\nLet's now see a few examples for finding the independencies in a network using pgmpy:", "# Getting the local independencies of a variable.\nmodel.local_independencies('G')\n\n# Getting all the local independencies in the network.\nmodel.local_independencies(['D', 'I', 'S', 'G', 'L'])\n\n# Active trail: For any two variables A and B in a network if any change in A influences the values of B then we say\n# that there is an active trail between A and B.\n# In pgmpy active_trail_nodes gives a set of nodes which are affected (i.e. correlated) by any \n# change in the node passed in the argument.\nmodel.active_trail_nodes('D')\n\nmodel.active_trail_nodes('D', observed='G')", "3. How is this Bayesian Network representing the Joint Distribution over the variables?\nTill now we just have been considering that the Bayesian Network can represent the Joint Distribution without any proof.
Now let's see how to compute the Joint Distribution from the Bayesian Network.\nFrom the chain rule of probability we know that:\n$P(A, B) = P(A | B) * P(B)$\nNow in this case:\n$P(D, I, G, L, S) = P(L| S, G, D, I) * P(S | G, D, I) * P(G | D, I) * P(D | I) * P(I)$\nApplying the local independence conditions in the above equation we will get:\n$P(D, I, G, L, S) = P(L|G) * P(S|I) * P(G| D, I) * P(D) * P(I)$\nFrom the above equation we can clearly see that the Joint Distribution over all the variables is just the product of all the CPDs in the network. Hence encoding the independencies in the Joint Distribution in a graph structure helped us in reducing the number of parameters that we need to store.\n4. Inference in Bayesian Models\nTill now we discussed just about representing Bayesian Networks. Now let's see how we can do inference in a Bayesian Model and use it to predict values over new data points for machine learning tasks. In this section we will consider that we already have our model. We will talk about constructing the models from data in later parts of this tutorial.\nIn inference we try to answer probability queries over the network given some other variables. So, we might want to know the probable grade of an intelligent student in a difficult class given that he scored well in the SAT. So for computing these values from a Joint Distribution we will have to reduce over the given variables that is $I = 1$, $D = 1$, $S = 1$ and then marginalize over the other variables that is $L$ to get $P(G | I=1, D=1, S=1)$.\nBut carrying out the marginalize and reduce operations on the complete Joint Distribution is computationally expensive since we need to iterate over the whole table for each operation and the table is exponential in size with respect to the number of variables.
But in Graphical Models we exploit the independencies to break these operations in smaller parts making it much faster.\nOne of the very basic methods of inference in Graphical Models is Variable Elimination.\nVariable Elimination\nWe know that:\n$P(D, I, G, L, S) = P(L|G) * P(S|I) * P(G|D, I) * P(D) * P(I)$\nNow let's say we just want to compute the probability of G. For that we will need to marginalize over all the other variables.\n$P(G) = \\sum_{D, I, L, S} P(D, I, G, L, S)$\n$P(G) = \\sum_{D, I, L, S} P(L|G) * P(S|I) * P(G|D, I) * P(D) * P(I)$\n$P(G) = \\sum_D \\sum_I \\sum_L \\sum_S P(L|G) * P(S|I) * P(G|D, I) * P(D) * P(I)$\nNow since not all the conditional distributions depend on all the variables we can push the summations inside:\n$P(G) = \\sum_D \\sum_I \\sum_L \\sum_S P(L|G) * P(S|I) * P(G|D, I) * P(D) * P(I)$\n$P(G) = \\sum_D P(D) \\sum_I P(G|D, I) * P(I) \\sum_S P(S|I) \\sum_L P(L|G)$\nSo, by pushing the summations inside we have saved a lot of computation because we have to now iterate over much smaller tables.\nLet's take an example for inference using Variable Elimination in pgmpy:", "from pgmpy.inference import VariableElimination\ninfer = VariableElimination(model)\ng_dist = infer.query(['G'])\nprint(g_dist)", "There can be cases in which we want to compute the conditional distribution let's say $P(G | D=0, I=1)$. In such cases we need to modify our equations a bit:\n$P(G | D=0, I=1) = \\sum_L \\sum_S P(L|G) * P(S| I=1) * P(G| D=0, I=1) * P(D=0) * P(I=1)$\n$P(G | D=0, I=1) = P(D=0) * P(I=1) * P(G | D=0, I=1) * \\sum_L P(L | G) * \\sum_S P(S | I=1)$\nIn pgmpy we will just need to pass an extra argument in the case of conditional distributions:", "print(infer.query(['G'], evidence={'D': 'Easy', 'I': 'Intelligent'}))", "Predicting values from new data points\nPredicting values from new data points is quite similar to computing the conditional probabilities. We need to query for the variable that we need to predict given all the other features. 
The only difference is that rather than getting the probability distribution we are interested in getting the most probable state of the variable.\nIn pgmpy this is known as MAP query. Here's an example:", "infer.map_query(['G'])\n\ninfer.map_query(['G'], evidence={'D': 'Easy', 'I': 'Intelligent'})\n\ninfer.map_query(['G'], evidence={'D': 'Easy', 'I': 'Intelligent', 'L': 'Good', 'S': 'Good'})", "5. Other methods for Inference\nEven though exact inference algorithms like Variable Elimination optimize the inference task, it is still computationally quite expensive in the case of large models. For such cases we can use approximate algorithms like Message Passing Algorithms, Sampling Algorithms etc. We will talk about a few other exact and approximate algorithms in later parts of the tutorial." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
MTG/essentia
src/examples/python/tutorial_tonal_hpcpkeyscale.ipynb
agpl-3.0
[ "Tonality analysis: HPCP, key and scale detection\nIn this example we will analyze tonality of a music track. We will analyze the spectrum of an audio signal, find out its spectral peaks using SpectralPeak and then estimate the harmonic pitch class profile using the HPCP algorithm. Finally, we will estimate key and scale of the track based on its HPCP value using the Key algorithm.\nIn this particular case, it is easier to write the code in streaming mode as it is much simpler.", "import essentia.streaming as ess\nimport essentia\n\naudio_file = '../../../test/audio/recorded/dubstep.flac'\n\n# Initialize algorithms we will use.\nloader = ess.MonoLoader(filename=audio_file)\nframecutter = ess.FrameCutter(frameSize=4096, hopSize=2048, silentFrames='noise')\nwindowing = ess.Windowing(type='blackmanharris62')\nspectrum = ess.Spectrum()\nspectralpeaks = ess.SpectralPeaks(orderBy='magnitude',\n magnitudeThreshold=0.00001,\n minFrequency=20,\n maxFrequency=3500, \n maxPeaks=60)\n\n# Use default HPCP parameters for plots.\n# However we will need higher resolution and custom parameters for better Key estimation.\n\nhpcp = ess.HPCP()\nhpcp_key = ess.HPCP(size=36, # We will need higher resolution for Key estimation.\n referenceFrequency=440, # Assume tuning frequency is 44100.\n bandPreset=False,\n minFrequency=20,\n maxFrequency=3500,\n weightType='cosine',\n nonLinear=False,\n windowSize=1.)\n\nkey = ess.Key(profileType='edma', # Use profile for electronic music.\n numHarmonics=4,\n pcpSize=36,\n slope=0.6,\n usePolyphony=True,\n useThreeChords=True)\n\n# Use pool to store data.\npool = essentia.Pool() \n\n# Connect streaming algorithms.\nloader.audio >> framecutter.signal\nframecutter.frame >> windowing.frame >> spectrum.frame\nspectrum.spectrum >> spectralpeaks.spectrum\nspectralpeaks.magnitudes >> hpcp.magnitudes\nspectralpeaks.frequencies >> hpcp.frequencies\nspectralpeaks.magnitudes >> hpcp_key.magnitudes\nspectralpeaks.frequencies >> 
hpcp_key.frequencies\nhpcp_key.hpcp >> key.pcp\nhpcp.hpcp >> (pool, 'tonal.hpcp')\nkey.key >> (pool, 'tonal.key_key')\nkey.scale >> (pool, 'tonal.key_scale')\nkey.strength >> (pool, 'tonal.key_strength')\n\n# Run streaming network.\nessentia.run(loader)\n\nprint(\"Estimated key and scale:\", pool['tonal.key_key'] + \" \" + pool['tonal.key_scale'])", "The audio we have just analyzed:", "import IPython\nIPython.display.Audio(audio_file)", "Let's plot the resulting HPCP:", "# Plots configuration.\nimport matplotlib.pyplot as plt\nfrom pylab import plot, show, figure, imshow\nplt.rcParams['figure.figsize'] = (15, 6)\n\n# Plot HPCP.\nimshow(pool['tonal.hpcp'].T, aspect='auto', origin='lower', interpolation='none')\nplt.title(\"HPCPs in frames (the 0-th HPCP coefficient corresponds to A)\")\nshow()", "Here we have plotted a 12-bin HPCPgram with default parameters and bins corresponding to semitones from A to G#.\nIn contrast, in this example, Key/scale estimation is done using 36-bin HPCPs with more resolution and specific parameters for better accuracy. \nKey estimation works by comparing the HPCPs to different distribution profiles suited for different types of music. The one used here, edma, is specifically designed for electronic dance music. See the Key algorithm for more information about the available profiles.", "print(\"Estimated key and scale:\", pool['tonal.key_key'] + \" \" + pool['tonal.key_scale'])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
agile-geoscience/welly
tutorial/05_Location.ipynb
apache-2.0
[ "Location and the deviation survey\nMost wells are vertical, but many are not. All modern wells have a deviation survey, which is converted into a position log, giving the 3D position of the well in space. \nwelly has a simple way to add a position log in a specific format, and computes a position log from it. You can use the position log to convert between MD and TVD. \nFirst, version check.", "import welly\nwelly.__version__", "Adding deviation to an existing well\nFirst we'll read a LAS and instantiate a well w", "from welly import Well\n\nw = Well.from_las(\"data/P-130_out.LAS\")\nw\n\nw.plot()", "There aren't a lot of tricks for handling the input data, which is assumed to be a CSV-like file containing columns like:\nMD, inclination, azimuth\n\nFor example:", "with open('data/P-130_deviation_survey.csv') as f:\n lines = f.readlines()\n\nfor line in lines[:6]:\n print(line, end='')", "Then we can turn that into an ndarray:", "import numpy as np\n\ndev = np.loadtxt('data/P-130_deviation_survey.csv', delimiter=',', skiprows=1, usecols=[0,1,2])\ndev[:5]", "You can use any other method to get to an array or pandas.DataFrame like this one.\nThen we can add the deviation survey to the well's location attribute. This will automatically convert it into a position log, which is an array containing the x-offset, y-offset, and TVD of the well, in that order.", "w.location.add_deviation(dev, td=w.location.tdd)", "Now you have the position log:", "w.location.position[:5]", "Note that it is irregularly sampled &mdash; this is nothing more than the deviation survey (which is MD, INCL, AZI) converted into relative positions (i.e. deltaX, deltaY, deltaZ). These positions are relative to the tophole location. 
\nMD to TVD and vice versa\nWe now have the methods md2tvd and tvd2md available to us:", "w.location.md2tvd(1000)\n\nw.location.tvd2md(998.78525)", "These can also accept an array:", "md = np.linspace(0, 300, 31)\n\nw.location.md2tvd(md)", "Note that these are linear in MD, but not in TVD.", "w.location.md2tvd([0, 10, 20, 30])", "If you have the position log, but no deviation survey\nIn general, deviation surveys are considered 'canonical'. That is, they are data recorded in the well. The position log &mdash; a set of (x, y, z) points in a linear Euclidean space like (X_UTM, Y_UTM, TVDSS) &mdash; is then computed from the deviation survey. \nIf you have deviation and position log, I recommend loading the deviation survey as above.\nIf you only have position, in a 3-column array-like called position (say), then you can add it to the well like so:\nw.location.position = np.array(position)\n\nYou can still use the MD-to-TVD and TVD-to-MD converters above, and w.position.trajectory() will work as usual, but you won't have w.position.dogleg or w.position.deviation.\nDogleg severity\nThe dogleg severity array is captured in the dogleg attribute:", "w.location.dogleg[:10]", "Starting from new well\nData from Rob:", "import pandas as pd\n\ndev = pd.read_csv('data/deviation.csv')\ndev.head(10)\n\ndev.tail()", "First we'll create an 'empty' well.", "x = Well(params={'header': {'name': 'foo'}})", "Now add the Location object to the well's location attribute, finally calling its add_deviation() method on the deviation data:", "from welly import Location\n\nx.location = Location(params={'kb': 100})\n\nx.location.add_deviation(dev[['MD[m]', 'Inc[deg]', 'Azi[deg]']].values)", "Let's see how our new position data compares to what was in the deviation.csv data file:\nCompare x, y, and dogleg", "np.set_printoptions(suppress=True)\n\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots(figsize=(15,5))\n\nmd_welly = x.location.deviation[:, 0]\n\n# Plot x vs 
depth\nax.plot(x.location.position[:, 0], md_welly, lw=6, label=\"welly\")\nax.plot(dev['East[m]'], dev['MD[m]'], c='limegreen', label=\"file\")\nax.invert_yaxis()\nax.legend()", "They seem to match well. There's a difference at the top because welly always adds a (0, 0, 0) point to both the deviation and position logs:", "x.location.position[:7]", "In plan view, the wells match:", "fig, ax = plt.subplots(figsize=(6,6))\n\nax.plot(*x.location.position[:, :2].T, c='c', lw=5, label=\"welly\")\nax.plot(dev['East[m]'], dev['North[m]'], c='yellow', ls='--', label=\"file\")\n#ax.set_xlim(-20, 800); ax.set_ylim(-820, 20)\nax.grid(color='black', alpha=0.2)", "Fit a spline to the position log\nTo make things a bit more realistic, we can shift to the correct spatial datum, i.e. the (x, y, z) of the top hole, where z is the KB elevation. \nWe can also adjust the z value to elevation (i.e. negative downwards).", "np.set_printoptions(suppress=True, precision=2)\n\nx.location.trajectory(datum=[111000, 2222000, 100], elev=True)", "We can make a 3D plot with this trajectory:", "from mpl_toolkits.mplot3d import Axes3D\n\nfig, ax = plt.subplots(figsize=(12, 7), subplot_kw={'projection': '3d'})\nax.plot(*x.location.trajectory().T, lw=3, alpha=0.75)\nplt.show()", "Compare doglegs\nThe deviation.csv file also contains a measure of dogleg severity, which welly also generates (since v0.4.2).\nNote that in the current version dogleg severity is in radians, whereas the usual units are degrees per 100 ft or degrees per 30 m. The next release of welly, v0.5, will start using degrees per 30 m by default.", "fig, ax = plt.subplots(figsize=(15,4))\n\nax.plot(x.location.dogleg, lw=5, label=\"welly\")\nax = plt.twinx(ax=ax)\nax.plot(dev['Dogleg [deg/30m]'], c='limegreen', ls='--', label=\"file\")\nax.text(80, 4, 'file', color='limegreen', ha='right', va='top', size=16)\nax.text(80, 3.5, 'welly', color='C0', ha='right', va='top', size=16)", "Apart from the scaling, they agree. 
\nImplementation details\nThe position log is computed from the deviation survey with the minimum curvature algorithm, which is fairly standard in the industry. To use a different method, pass method='aa' (average angle) or method='bt' (balanced tangent) directly to Location.compute_position_log() yourself. \nOnce we have the position log, we still need a way to look up arbitrary depths. To do this, we use a cubic spline fitted to the position log. This should be OK for most 'natural' well paths, but it might break horribly. If you get weird results, you can pass method='linear' to the conversion functions — less accurate but more stable.\n\nAzimuth datum\nYou can adjust the angle of the azimuth datum with the azimuth_datum keyword argument. The default is zero, which means the azimuths in your survey are in degrees relative to grid north (of your UTM grid, say).\nLet's make some fake data like\nMD, INCL, AZI", "dev = [[100, 0, 0],\n [200, 10, 45],\n [300, 20, 45],\n [400, 20, 45],\n [500, 20, 60],\n [600, 20, 75],\n [700, 90, 90],\n [800, 90, 90],\n [900, 90, 90],\n ]\n\nz = welly.Well()\nz.location = welly.Location(params={'kb': 10})\nz.location.add_deviation(dev, td=1000, azimuth_datum=20)\n\nz.location.plot_plan()\n\nz.location.plot_3d()", "Trajectory\nGet regularly sampled well trajectory with a specified number of points. Assumes there is a position log already, e.g. resulting from calling add_deviation() on a deviation survey. Computed from the position log by scipy.interpolate.splprep().", "z.location.trajectory(points=20)", "TODO\n\nAdd plot_projection() for a vertical projection.\nExport a shapely linestring.\nExport SHP.\nExport 3D postgis object.\n\n\n&copy; Agile Scientific 2019–2022, licensed CC-BY / Apache 2.0" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gdementen/larray
doc/source/tutorial/tutorial_transforming.ipynb
gpl-3.0
[ "Transforming Arrays (Relabeling, Renaming, Reordering, Sorting, ...)\nImport the LArray library:", "from larray import *", "Import the population array from the demography_eurostat dataset:", "demography_eurostat = load_example_data('demography_eurostat')\npopulation = demography_eurostat.population\n\n# display the 'population' array\npopulation", "Manipulating axes\nThe Array class offers several methods to manipulate the axes and labels of an array:\n\nset_labels: to replace all or some labels of one or several axes.\nrename: to replace one or several axis names.\nset_axes: to replace one or several axes.\ntranspose: to modify the order of axes.\ndrop: to remove one or several labels.\ncombine_axes: to combine axes.\nsplit_axes: to split one or several axes by splitting their labels and names.\nreindex: to reorder, add and remove labels of one or several axes.\ninsert: to insert a label at a given position.\n\nRelabeling\nReplace some labels of an axis:", "# replace only one label of the 'gender' axis by passing a dict\npopulation_new_labels = population.set_labels('gender', {'Male': 'Men'})\npopulation_new_labels\n\n# set all labels of the 'country' axis to uppercase by passing the function str.upper()\npopulation_new_labels = population.set_labels('country', str.upper)\npopulation_new_labels", "See set_labels for more details and examples.\nRenaming axes\nRename one axis:", "# 'rename' returns a copy of the array\npopulation_new_names = population.rename('time', 'year')\npopulation_new_names", "Rename several axes at once:", "population_new_names = population.rename({'gender': 'sex', 'time': 'year'})\npopulation_new_names", "See rename for more details and examples.\nReplacing Axes\nReplace one axis:", "new_gender = Axis('sex=Men,Women')\npopulation_new_axis = population.set_axes('gender', new_gender)\npopulation_new_axis", "Replace several axes at once:", "new_country = Axis('country_codes=BE,FR,DE') \npopulation_new_axes = population.set_axes({'country': 
new_country, 'gender': new_gender})\npopulation_new_axes", "Reordering axes\nAxes can be reordered using transpose method.\nBy default, transpose reverse axes, otherwise it permutes the axes according to the list given as argument.\nAxes not mentioned come after those which are mentioned(and keep their relative order).\nFinally, transpose returns a copy of the array.", "# starting order : country, gender, time\npopulation\n\n# no argument --> reverse all axes\npopulation_transposed = population.transpose()\n\n# .T is a shortcut for .transpose()\npopulation_transposed = population.T\n\npopulation_transposed\n\n# reorder according to list\npopulation_transposed = population.transpose('gender', 'country', 'time')\npopulation_transposed\n\n# move 'time' axis at first place\n# not mentioned axes come after those which are mentioned (and keep their relative order)\npopulation_transposed = population.transpose('time')\npopulation_transposed\n\n# move 'gender' axis at last place\n# not mentioned axes come before those which are mentioned (and keep their relative order)\npopulation_transposed = population.transpose(..., 'gender')\npopulation_transposed", "See transpose for more details and examples.\nDropping Labels", "population_labels_dropped = population.drop([2014, 2016])\npopulation_labels_dropped", "See drop for more details and examples.\nCombine And Split Axes\nCombine two axes:", "population_combined_axes = population.combine_axes(('country', 'gender'))\npopulation_combined_axes", "Split an axis:", "population_split_axes = population_combined_axes.split_axes('country_gender')\npopulation_split_axes", "See combine_axes and split_axes for more details and examples.\nReordering, adding and removing labels\nThe reindex method allows to reorder, add and remove labels along one axis:", "# reverse years + remove 2013 + add 2018 + copy data for 2017 to 2018\npopulation_new_time = population.reindex('time', '2018..2014', fill_value=population[2017])\npopulation_new_time", 
"or several axes:", "population_new = population.reindex({'country': 'country=Luxembourg,Belgium,France,Germany', \n 'time': 'time=2018..2014'}, fill_value=0)\npopulation_new", "See reindex for more details and examples.\nAnother way to insert new labels is to use the insert method:", "# insert a new country before 'France' with all values set to 0\npopulation_new_country = population.insert(0, before='France', label='Luxembourg')\n# or equivalently\npopulation_new_country = population.insert(0, after='Belgium', label='Luxembourg')\n\npopulation_new_country", "See insert for more details and examples.\nSorting\n\nsort_axes: sort the labels of an axis.\nlabelsofsorted: give labels which would sort an axis. \nsort_values: sort axes according to values", "# get a copy of the 'population_benelux' array\npopulation_benelux = demography_eurostat.population_benelux.copy()\npopulation_benelux", "Sort an axis (alphabetically if labels are strings)", "population_sorted = population_benelux.sort_axes('gender')\npopulation_sorted", "Give labels which would sort the axis", "population_benelux.labelsofsorted('country')", "Sort according to values", "population_sorted = population_benelux.sort_values(('Male', 2017))\npopulation_sorted", "Aligning Arrays\nThe align method align two arrays on their axes with a specified join method.\nIn other words, it ensure all common axes are compatible.", "# get a copy of the 'births' array\nbirths = demography_eurostat.births.copy()\n\n# align the two arrays with the 'inner' join method\npopulation_aligned, births_aligned = population_benelux.align(births, join='inner')\n\nprint('population_benelux before align:')\nprint(population_benelux)\nprint()\nprint('population_benelux after align:')\nprint(population_aligned)\n\nprint('births before align:')\nprint(births)\nprint()\nprint('births after align:')\nprint(births_aligned)", "Aligned arrays can then be used in arithmetic operations:", "population_aligned - births_aligned", "See align for 
more details and examples." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
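The transpose() ordering rule stated in the larray notebook above ("mentioned axes come first, the rest keep their relative order"; with `...` first, mentioned axes go last) can be sketched in plain Python. `transpose_order` is a hypothetical illustration of that rule only, not part of the larray API, and it does not model the no-argument reverse-all case:

```python
def transpose_order(axes, *mentioned, last=False):
    # Axes named in the call go first (or last when last=True);
    # all other axes keep their relative order.
    rest = [a for a in axes if a not in mentioned]
    return rest + list(mentioned) if last else list(mentioned) + rest

# population.transpose('time') -> 'time' moves to the front
print(transpose_order(['country', 'gender', 'time'], 'time'))
# prints ['time', 'country', 'gender']

# population.transpose(..., 'gender') -> 'gender' moves to the back
print(transpose_order(['country', 'gender', 'time'], 'gender', last=True))
# prints ['country', 'time', 'gender']
```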
tensorflow/docs-l10n
site/en-snapshot/guide/keras/writing_a_training_loop_from_scratch.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Writing a training loop from scratch\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/snapshot-keras/site/en/guide/keras/writing_a_training_loop_from_scratch.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/keras-team/keras-io/blob/master/guides/writing_a_training_loop_from_scratch.py\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/writing_a_training_loop_from_scratch.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nSetup", "import tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nimport numpy as np", "Introduction\nKeras provides default training and evaluation loops, fit() and evaluate().\nTheir usage is covered in the guide\nTraining & evaluation with the built-in methods.\nIf you 
want to customize the learning algorithm of your model while still leveraging\nthe convenience of fit()\n(for instance, to train a GAN using fit()), you can subclass the Model class and\nimplement your own train_step() method, which\nis called repeatedly during fit(). This is covered in the guide\nCustomizing what happens in fit().\nNow, if you want very low-level control over training & evaluation, you should write\nyour own training & evaluation loops from scratch. This is what this guide is about.\nUsing the GradientTape: a first end-to-end example\nCalling a model inside a GradientTape scope enables you to retrieve the gradients of\nthe trainable weights of the layer with respect to a loss value. Using an optimizer\ninstance, you can use these gradients to update these variables (which you can\nretrieve using model.trainable_weights).\nLet's consider a simple MNIST model:", "inputs = keras.Input(shape=(784,), name=\"digits\")\nx1 = layers.Dense(64, activation=\"relu\")(inputs)\nx2 = layers.Dense(64, activation=\"relu\")(x1)\noutputs = layers.Dense(10, name=\"predictions\")(x2)\nmodel = keras.Model(inputs=inputs, outputs=outputs)", "Let's train it using mini-batch gradient with a custom training loop.\nFirst, we're going to need an optimizer, a loss function, and a dataset:", "# Instantiate an optimizer.\noptimizer = keras.optimizers.SGD(learning_rate=1e-3)\n# Instantiate a loss function.\nloss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n\n# Prepare the training dataset.\nbatch_size = 64\n(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()\nx_train = np.reshape(x_train, (-1, 784))\nx_test = np.reshape(x_test, (-1, 784))\n\n# Reserve 10,000 samples for validation.\nx_val = x_train[-10000:]\ny_val = y_train[-10000:]\nx_train = x_train[:-10000]\ny_train = y_train[:-10000]\n\n# Prepare the training dataset.\ntrain_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))\ntrain_dataset = 
train_dataset.shuffle(buffer_size=1024).batch(batch_size)\n\n# Prepare the validation dataset.\nval_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))\nval_dataset = val_dataset.batch(batch_size)", "Here's our training loop:\n\nWe open a for loop that iterates over epochs\nFor each epoch, we open a for loop that iterates over the dataset, in batches\nFor each batch, we open a GradientTape() scope\nInside this scope, we call the model (forward pass) and compute the loss\nOutside the scope, we retrieve the gradients of the weights\nof the model with regard to the loss\nFinally, we use the optimizer to update the weights of the model based on the\ngradients", "epochs = 2\nfor epoch in range(epochs):\n print(\"\\nStart of epoch %d\" % (epoch,))\n\n # Iterate over the batches of the dataset.\n for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):\n\n # Open a GradientTape to record the operations run\n # during the forward pass, which enables auto-differentiation.\n with tf.GradientTape() as tape:\n\n # Run the forward pass of the layer.\n # The operations that the layer applies\n # to its inputs are going to be recorded\n # on the GradientTape.\n logits = model(x_batch_train, training=True) # Logits for this minibatch\n\n # Compute the loss value for this minibatch.\n loss_value = loss_fn(y_batch_train, logits)\n\n # Use the gradient tape to automatically retrieve\n # the gradients of the trainable variables with respect to the loss.\n grads = tape.gradient(loss_value, model.trainable_weights)\n\n # Run one step of gradient descent by updating\n # the value of the variables to minimize the loss.\n optimizer.apply_gradients(zip(grads, model.trainable_weights))\n\n # Log every 200 batches.\n if step % 200 == 0:\n print(\n \"Training loss (for one batch) at step %d: %.4f\"\n % (step, float(loss_value))\n )\n print(\"Seen so far: %s samples\" % ((step + 1) * batch_size))", "Low-level handling of metrics\nLet's add metrics monitoring to this basic 
loop.\nYou can readily reuse the built-in metrics (or custom ones you wrote) in such training\nloops written from scratch. Here's the flow:\n\nInstantiate the metric at the start of the loop\nCall metric.update_state() after each batch\nCall metric.result() when you need to display the current value of the metric\nCall metric.reset_states() when you need to clear the state of the metric\n(typically at the end of an epoch)\n\nLet's use this knowledge to compute SparseCategoricalAccuracy on validation data at\nthe end of each epoch:", "# Get model\ninputs = keras.Input(shape=(784,), name=\"digits\")\nx = layers.Dense(64, activation=\"relu\", name=\"dense_1\")(inputs)\nx = layers.Dense(64, activation=\"relu\", name=\"dense_2\")(x)\noutputs = layers.Dense(10, name=\"predictions\")(x)\nmodel = keras.Model(inputs=inputs, outputs=outputs)\n\n# Instantiate an optimizer to train the model.\noptimizer = keras.optimizers.SGD(learning_rate=1e-3)\n# Instantiate a loss function.\nloss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n\n# Prepare the metrics.\ntrain_acc_metric = keras.metrics.SparseCategoricalAccuracy()\nval_acc_metric = keras.metrics.SparseCategoricalAccuracy()", "Here's our training & evaluation loop:", "import time\n\nepochs = 2\nfor epoch in range(epochs):\n print(\"\\nStart of epoch %d\" % (epoch,))\n start_time = time.time()\n\n # Iterate over the batches of the dataset.\n for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):\n with tf.GradientTape() as tape:\n logits = model(x_batch_train, training=True)\n loss_value = loss_fn(y_batch_train, logits)\n grads = tape.gradient(loss_value, model.trainable_weights)\n optimizer.apply_gradients(zip(grads, model.trainable_weights))\n\n # Update training metric.\n train_acc_metric.update_state(y_batch_train, logits)\n\n # Log every 200 batches.\n if step % 200 == 0:\n print(\n \"Training loss (for one batch) at step %d: %.4f\"\n % (step, float(loss_value))\n )\n print(\"Seen so far: 
%d samples\" % ((step + 1) * batch_size))\n\n # Display metrics at the end of each epoch.\n train_acc = train_acc_metric.result()\n print(\"Training acc over epoch: %.4f\" % (float(train_acc),))\n\n # Reset training metrics at the end of each epoch\n train_acc_metric.reset_states()\n\n # Run a validation loop at the end of each epoch.\n for x_batch_val, y_batch_val in val_dataset:\n val_logits = model(x_batch_val, training=False)\n # Update val metrics\n val_acc_metric.update_state(y_batch_val, val_logits)\n val_acc = val_acc_metric.result()\n val_acc_metric.reset_states()\n print(\"Validation acc: %.4f\" % (float(val_acc),))\n print(\"Time taken: %.2fs\" % (time.time() - start_time))", "Speeding-up your training step with tf.function\nThe default runtime in TensorFlow 2 is\neager execution.\nAs such, our training loop above executes eagerly.\nThis is great for debugging, but graph compilation has a definite performance\nadvantage. Describing your computation as a static graph enables the framework\nto apply global performance optimizations. 
This is impossible when\nthe framework is constrained to greedily execute one operation after another,\nwith no knowledge of what comes next.\nYou can compile into a static graph any function that takes tensors as input.\nJust add a @tf.function decorator on it, like this:", "@tf.function\ndef train_step(x, y):\n with tf.GradientTape() as tape:\n logits = model(x, training=True)\n loss_value = loss_fn(y, logits)\n grads = tape.gradient(loss_value, model.trainable_weights)\n optimizer.apply_gradients(zip(grads, model.trainable_weights))\n train_acc_metric.update_state(y, logits)\n return loss_value\n", "Let's do the same with the evaluation step:", "@tf.function\ndef test_step(x, y):\n val_logits = model(x, training=False)\n val_acc_metric.update_state(y, val_logits)\n", "Now, let's re-run our training loop with this compiled training step:", "import time\n\nepochs = 2\nfor epoch in range(epochs):\n print(\"\\nStart of epoch %d\" % (epoch,))\n start_time = time.time()\n\n # Iterate over the batches of the dataset.\n for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):\n loss_value = train_step(x_batch_train, y_batch_train)\n\n # Log every 200 batches.\n if step % 200 == 0:\n print(\n \"Training loss (for one batch) at step %d: %.4f\"\n % (step, float(loss_value))\n )\n print(\"Seen so far: %d samples\" % ((step + 1) * batch_size))\n\n # Display metrics at the end of each epoch.\n train_acc = train_acc_metric.result()\n print(\"Training acc over epoch: %.4f\" % (float(train_acc),))\n\n # Reset training metrics at the end of each epoch\n train_acc_metric.reset_states()\n\n # Run a validation loop at the end of each epoch.\n for x_batch_val, y_batch_val in val_dataset:\n test_step(x_batch_val, y_batch_val)\n\n val_acc = val_acc_metric.result()\n val_acc_metric.reset_states()\n print(\"Validation acc: %.4f\" % (float(val_acc),))\n print(\"Time taken: %.2fs\" % (time.time() - start_time))", "Much faster, isn't it?\nLow-level handling of losses tracked by
the model\nLayers & models recursively track any losses created during the forward pass\nby layers that call self.add_loss(value). The resulting list of scalar loss\nvalues are available via the property model.losses\nat the end of the forward pass.\nIf you want to be using these loss components, you should sum them\nand add them to the main loss in your training step.\nConsider this layer, that creates an activity regularization loss:", "class ActivityRegularizationLayer(layers.Layer):\n def call(self, inputs):\n self.add_loss(1e-2 * tf.reduce_sum(inputs))\n return inputs\n", "Let's build a really simple model that uses it:", "inputs = keras.Input(shape=(784,), name=\"digits\")\nx = layers.Dense(64, activation=\"relu\")(inputs)\n# Insert activity regularization as a layer\nx = ActivityRegularizationLayer()(x)\nx = layers.Dense(64, activation=\"relu\")(x)\noutputs = layers.Dense(10, name=\"predictions\")(x)\n\nmodel = keras.Model(inputs=inputs, outputs=outputs)", "Here's what our training step should look like now:", "@tf.function\ndef train_step(x, y):\n with tf.GradientTape() as tape:\n logits = model(x, training=True)\n loss_value = loss_fn(y, logits)\n # Add any extra losses created during the forward pass.\n loss_value += sum(model.losses)\n grads = tape.gradient(loss_value, model.trainable_weights)\n optimizer.apply_gradients(zip(grads, model.trainable_weights))\n train_acc_metric.update_state(y, logits)\n return loss_value\n", "Summary\nNow you know everything there is to know about using built-in training loops and\nwriting your own from scratch.\nTo conclude, here's a simple end-to-end example that ties together everything\nyou've learned in this guide: a DCGAN trained on MNIST digits.\nEnd-to-end example: a GAN training loop from scratch\nYou may be familiar with Generative Adversarial Networks (GANs). 
GANs can generate new\nimages that look almost real, by learning the latent distribution of a training\ndataset of images (the \"latent space\" of the images).\nA GAN is made of two parts: a \"generator\" model that maps points in the latent\nspace to points in image space, and a \"discriminator\" model, a classifier\nthat can tell the difference between real images (from the training dataset)\nand fake images (the output of the generator network).\nA GAN training loop looks like this:\n1) Train the discriminator.\n- Sample a batch of random points in the latent space.\n- Turn the points into fake images via the \"generator\" model.\n- Get a batch of real images and combine them with the generated images.\n- Train the \"discriminator\" model to classify generated vs. real images.\n2) Train the generator.\n- Sample random points in the latent space.\n- Turn the points into fake images via the \"generator\" network.\n- Get a batch of real images and combine them with the generated images.\n- Train the \"generator\" model to \"fool\" the discriminator and classify the fake images\nas real.\nFor a much more detailed overview of how GANs work, see\nDeep Learning with Python.\nLet's implement this training loop.
First, create the discriminator meant to classify\nfake vs real digits:", "discriminator = keras.Sequential(\n [\n keras.Input(shape=(28, 28, 1)),\n layers.Conv2D(64, (3, 3), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.Conv2D(128, (3, 3), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.GlobalMaxPooling2D(),\n layers.Dense(1),\n ],\n name=\"discriminator\",\n)\ndiscriminator.summary()", "Then let's create a generator network,\nthat turns latent vectors into outputs of shape (28, 28, 1) (representing\nMNIST digits):", "latent_dim = 128\n\ngenerator = keras.Sequential(\n [\n keras.Input(shape=(latent_dim,)),\n # We want to generate 128 coefficients to reshape into a 7x7x128 map\n layers.Dense(7 * 7 * 128),\n layers.LeakyReLU(alpha=0.2),\n layers.Reshape((7, 7, 128)),\n layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding=\"same\"),\n layers.LeakyReLU(alpha=0.2),\n layers.Conv2D(1, (7, 7), padding=\"same\", activation=\"sigmoid\"),\n ],\n name=\"generator\",\n)", "Here's the key bit: the training loop. As you can see it is quite straightforward. 
The\ntraining step function only takes 17 lines.", "# Instantiate one optimizer for the discriminator and another for the generator.\nd_optimizer = keras.optimizers.Adam(learning_rate=0.0003)\ng_optimizer = keras.optimizers.Adam(learning_rate=0.0004)\n\n# Instantiate a loss function.\nloss_fn = keras.losses.BinaryCrossentropy(from_logits=True)\n\n\n@tf.function\ndef train_step(real_images):\n # Sample random points in the latent space\n random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))\n # Decode them to fake images\n generated_images = generator(random_latent_vectors)\n # Combine them with real images\n combined_images = tf.concat([generated_images, real_images], axis=0)\n\n # Assemble labels discriminating real from fake images\n labels = tf.concat(\n [tf.ones((batch_size, 1)), tf.zeros((real_images.shape[0], 1))], axis=0\n )\n # Add random noise to the labels - important trick!\n labels += 0.05 * tf.random.uniform(labels.shape)\n\n # Train the discriminator\n with tf.GradientTape() as tape:\n predictions = discriminator(combined_images)\n d_loss = loss_fn(labels, predictions)\n grads = tape.gradient(d_loss, discriminator.trainable_weights)\n d_optimizer.apply_gradients(zip(grads, discriminator.trainable_weights))\n\n # Sample random points in the latent space\n random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim))\n # Assemble labels that say \"all real images\"\n misleading_labels = tf.zeros((batch_size, 1))\n\n # Train the generator (note that we should *not* update the weights\n # of the discriminator)!\n with tf.GradientTape() as tape:\n predictions = discriminator(generator(random_latent_vectors))\n g_loss = loss_fn(misleading_labels, predictions)\n grads = tape.gradient(g_loss, generator.trainable_weights)\n g_optimizer.apply_gradients(zip(grads, generator.trainable_weights))\n return d_loss, g_loss, generated_images\n", "Let's train our GAN, by repeatedly calling train_step on batches of images.\nSince our 
discriminator and generator are convnets, you're going to want to\nrun this code on a GPU.", "import os\n\n# Prepare the dataset. We use both the training & test MNIST digits.\nbatch_size = 64\n(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()\nall_digits = np.concatenate([x_train, x_test])\nall_digits = all_digits.astype(\"float32\") / 255.0\nall_digits = np.reshape(all_digits, (-1, 28, 28, 1))\ndataset = tf.data.Dataset.from_tensor_slices(all_digits)\ndataset = dataset.shuffle(buffer_size=1024).batch(batch_size)\n\nepochs = 1 # In practice you need at least 20 epochs to generate nice digits.\nsave_dir = \"./\"\n\nfor epoch in range(epochs):\n print(\"\\nStart epoch\", epoch)\n\n for step, real_images in enumerate(dataset):\n # Train the discriminator & generator on one batch of real images.\n d_loss, g_loss, generated_images = train_step(real_images)\n\n # Logging.\n if step % 200 == 0:\n # Print metrics\n print(\"discriminator loss at step %d: %.2f\" % (step, d_loss))\n print(\"adversarial loss at step %d: %.2f\" % (step, g_loss))\n\n # Save one generated image\n img = tf.keras.preprocessing.image.array_to_img(\n generated_images[0] * 255.0, scale=False\n )\n img.save(os.path.join(save_dir, \"generated_img\" + str(step) + \".png\"))\n\n # To limit execution time we stop after 10 steps.\n # Remove the lines below to actually train the model!\n if step > 10:\n break", "That's it! You'll get nice-looking fake MNIST digits after just ~30s of training on the\nColab GPU." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
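The custom training loop in the Keras guide above follows one fixed pattern: forward pass, loss, gradient, parameter update. A minimal dependency-free sketch of that same pattern (a one-weight linear model with a hand-derived squared-error gradient standing in for GradientTape and the optimizer; a toy analogy, not TensorFlow code):

```python
def train(data, w=0.0, lr=0.1, epochs=50):
    """Fit y ~ w*x by looping over (x, y) pairs, mirroring the guide's
    forward pass / loss / gradient / apply_gradients structure."""
    for _ in range(epochs):
        for x, y in data:                # iterate over the "batches"
            pred = w * x                 # forward pass
            grad = 2 * (pred - y) * x    # gradient of (pred - y)**2 w.r.t. w
            w -= lr * grad               # the optimizer step
    return w

w = train([(1.0, 2.0), (2.0, 4.0)])
print(round(w, 3))  # 2.0 -- the loop converges to the true slope
```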
GoogleCloudPlatform/training-data-analyst
courses/ai-for-finance/practice/aapl_regression_scikit_learn.ipynb
apache-2.0
[ "Building a Regression Model for a Financial Dataset\nIn this notebook, you will build a simple linear regression model to predict the closing AAPL stock price. The lab objectives are:\n* Pull data from BigQuery into a Pandas dataframe\n* Use Matplotlib to visualize data\n* Use Scikit-Learn to build a regression model", "!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst\n\n!pip install --user google-cloud-bigquery==1.25.0", "Note: Restart your kernel to use updated packages.\nKindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.", "%%bash\n\nbq mk -d ai4f\nbq load --autodetect --source_format=CSV ai4f.AAPL10Y gs://cloud-training/ai4f/AAPL10Y.csv\n\n%matplotlib inline\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn import linear_model\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.metrics import r2_score\n\nplt.rc('figure', figsize=(12, 8.0))", "Pull Data from BigQuery\nIn this section we'll use a magic function to query a BigQuery table and then store the output in a Pandas dataframe. A magic function is just an alias to perform a system command. To see documentation on the \"bigquery\" magic function execute the following cell:\nThe query below selects everything you'll need to build a regression model to predict the closing price of AAPL stock. The model will be very simple for the purposes of demonstrating BQML functionality. The only features you'll use as input into the model are the previous day's closing price and a three day trend value. The trend value can only take on two values, either -1 or +1. If the AAPL stock price has increased over any two of the previous three days then the trend will be +1. Otherwise, the trend value will be -1.\nNote, the features you'll need can be generated from the raw table ai4f.AAPL10Y using Pandas functions. 
However, it's better to take advantage of the serverless-ness of BigQuery to do the data pre-processing rather than applying the necessary transformations locally.", "%%bigquery df\nWITH\n raw AS (\n SELECT\n date,\n close,\n LAG(close, 1) OVER(ORDER BY date) AS min_1_close,\n LAG(close, 2) OVER(ORDER BY date) AS min_2_close,\n LAG(close, 3) OVER(ORDER BY date) AS min_3_close,\n LAG(close, 4) OVER(ORDER BY date) AS min_4_close\n FROM\n `ai4f.AAPL10Y`\n ORDER BY\n date DESC ),\n raw_plus_trend AS (\n SELECT\n date,\n close,\n min_1_close,\n IF (min_1_close - min_2_close > 0, 1, -1) AS min_1_trend,\n IF (min_2_close - min_3_close > 0, 1, -1) AS min_2_trend,\n IF (min_3_close - min_4_close > 0, 1, -1) AS min_3_trend\n FROM\n raw ),\n train_data AS (\n SELECT\n date,\n close,\n min_1_close AS day_prev_close,\n IF (min_1_trend + min_2_trend + min_3_trend > 0, 1, -1) AS trend_3_day\n FROM\n raw_plus_trend\n ORDER BY\n date ASC )\nSELECT\n *\nFROM\n train_data", "View the first five rows of the query's output. Note that the object df containing the query output is a Pandas Dataframe.", "print(type(df))\ndf.dropna(inplace=True)\ndf.head()", "Visualize data\nThe simplest plot you can make is to show the closing stock price as a time series. 
Pandas DataFrames have built-in plotting functionality based on Matplotlib.", "df.plot(x='date', y='close');", "You can also embed the trend_3_day variable into the time series above.", "start_date = '2018-06-01'\nend_date = '2018-07-31'\n\nplt.plot(\n 'date', 'close', 'k--',\n data = (\n df.loc[pd.to_datetime(df.date).between(start_date, end_date)]\n )\n)\n\nplt.scatter(\n 'date', 'close', color='b', label='pos trend', \n data = (\n df.loc[(df.trend_3_day == 1) & pd.to_datetime(df.date).between(start_date, end_date)]\n )\n)\n\nplt.scatter(\n 'date', 'close', color='r', label='neg trend',\n data = (\n df.loc[(df.trend_3_day == -1) & pd.to_datetime(df.date).between(start_date, end_date)]\n )\n)\n\nplt.legend()\nplt.xticks(rotation = 90);\n\ndf.shape", "Build a Regression Model in Scikit-Learn\nIn this section you'll train a linear regression model to predict AAPL closing prices when given the previous day's closing price day_prev_close and the three day trend trend_3_day. A training set and test set are created by sequentially splitting the data after 2000 rows.", "features = ['day_prev_close', 'trend_3_day']\ntarget = 'close'\n\nX_train, X_test = df.loc[:2000, features], df.loc[2000:, features]\ny_train, y_test = df.loc[:2000, target], df.loc[2000:, target]\n\n# Create linear regression object. Don't include an intercept,\n# TODO\n\n# Train the model using the training set\n# TODO\n\n# Make predictions using the testing set\n# TODO\n\n# Print the root mean squared error of your predictions\n# TODO\n\n# Print the variance score (1 is perfect prediction)\n# TODO\n\n# Plot the predicted values against their corresponding true values\n# TODO", "The model's predictions are more or less in line with the truth. However, the utility of the model depends on the business context (i.e. you won't be making any money with this model).
It's fair to question whether the variable trend_3_day even adds to the performance of the model:", "print('Root Mean Squared Error: {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, X_test.day_prev_close))))", "Indeed, the RMSE is actually lower if we simply use the previous day's closing value as a prediction! Does increasing the number of days included in the trend improve the model? Feel free to create new features and attempt to improve model performance!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
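The TODOs in the regression cell above ask for a no-intercept linear fit, an RMSE, and a variance score. As a sketch of what the first two steps compute, here is the closed-form least-squares slope through the origin (the same slope `LinearRegression(fit_intercept=False)` would fit for a single feature) and an RMSE, in plain Python with hypothetical helper names:

```python
import math

def fit_no_intercept(x, y):
    # Least-squares slope for y ~ w*x with no intercept:
    # w = sum(x*y) / sum(x*x)
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

def rmse(y_true, y_pred):
    # Root mean squared error, as the notebook computes with
    # np.sqrt(mean_squared_error(...))
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

w = fit_no_intercept([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(w)  # 2.0 -- the data lie exactly on y = 2x
print(rmse([2.0, 4.0, 6.0], [w * x for x in [1.0, 2.0, 3.0]]))  # 0.0
```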
mbod/intro_python_for_comm
Processing a LexisNexus text export into CSV.ipynb
cc0-1.0
[ "Processing a LexisNexus text export into CSV\nPreparation\n\ndownload the file: https://github.com/mbod/intro_python_for_comm/blob/master/data/LexisNexusVapingExample.txt\nplace it in the data folder of your IPython notebook\n\nTask\n\nLoad the text file <code>LexisNexusVapingExample.txt</code> into a variable text\nExamine the first 20000 chars to figure out how articles are separated\nCreate a list by splitting on the separator string\nSeparate each article into prebody, body and postbody components\nSave the output to a CSV file with three columns:\n prebody, body and postbody\n and a row for each article\n\nLoading contents of the text file from the data folder\nIf you downloaded the text file and placed it in the data folder you can read it into a variable like this:", "text = open('data/LexisNexusVapingExample.txt', 'r').read()", "Show the number of characters in the text file:", "len(text) ", "Downloading the text file directly from github using the <code>requests</code> module", "import requests # import the module", "The <code>get</code> function takes a URL and returns the content at that address.", "resp = requests.get('https://raw.githubusercontent.com/mbod/intro_python_for_comm/master/data/LexisNexusVapingExample.txt')\n\ntext2=resp.text # assign content of response to a variable\n\ntext2[:300] # show first 300 characters\n\nprint(text2[:300]) # use print to see formatting (spacing and newlines etc.)", "Examine the first 20,000 characters to find string patterns that mark divisions between documents", "print(text[0:20000])", "The string <code>of 1000 DOCUMENTS</code> looks like a good candidate for splitting the text file into the individual documents\n\nSplit the text string into n chunks using of 1000 DOCUMENTS:", "chunks = text.split('of 1000 DOCUMENTS')\n\nlen(chunks) # see how many chunks this produces\n\ndocs = chunks[1:]", "Python excursus: Using enumerate to loop over lists\n\nWhen you have a list of items and want to process each in turn
then using a for loop is a common approach, e.g.", "alist = [1,2,3,4,5]\nslist = ['a','b','c','d']\n\nfor item in alist:\n print(item)\n \nfor item in slist:\n print('The current item is:',item)", "Another, 'less Pythonic', way to do this is to create a loop that uses the indices of each item in the list, e.g.", "for idx in range(0,len(alist)):\n print('Index', idx, 'is item:', alist[idx])", "But often you want to have both each item in the list and its index without having to do list[idx] to get the item. The enumerate function helps in such cases.\nenumerate(list) returns a list of tuples, where each item in the list consists of a pair where the first item is the index and second the item itself.", "list(enumerate(slist))\n\nresult = list(enumerate(slist))\nresult[0]\n\nfor idx, item in enumerate(slist):\n print(idx, item)", "Back to the LexisNexus task\n\nWe can use the enumerate and for loop idiom to check how good our proposed strings are for splitting the document up into components.\nFor example, it looks like the string ( END ) could be a good marker of the end of the body. So we can test which documents contain it using the count function and testing whether we get at least one instance:", "for idx, doc in enumerate(docs):\n print('Document', idx, 'has ( END )?', doc.count('( END )') > 0)", "AH! 
- actually doesn't look like a great candidate for splitting a document into body and postbody sections.\nA second look at some sample documents suggests we might be able to use LOAD-DATE instead.", "for idx, doc in enumerate(docs):\n print('Document', idx, 'has LOAD-DATE?', doc.count('LOAD-DATE:') > 0)", "We can use a list comprehension to get a count of the number of documents that contain a feature", "has_end = sum([doc.count('( END )')>0 for doc in docs])\nprint(has_end, 'documents with ( END ) out of ', len(docs), 'docs')\n\nhas_load_date = sum([doc.count('LOAD-DATE:')>0 for doc in docs])\nprint(has_load_date, 'documents with LOAD-DATE out of ', len(docs), 'docs')", "Now we have a set of string markers and a strategy for splitting documents up:\n\nfor each document\nsplit into three parts\nprebody = text up to LENGTH xxx words\nbody = text from LENGTH to before LOAD-DATE\npostbody = LOAD-DATE to the end", "doc = docs[0]\n\ndoc.index('LENGTH') # find the character position for the start of string LENGTH\n\ndoc[224:274] # slice the string starting at this point plus 50 characters\n\ndoc.index('\\n',224) # find the first newline character after the start of LENGTH\n\ndoc[241:280] # slice after this character", "Now that we have that figured out we can set two variables:\nstart_pos for the beginning of the body (the line after the one beginning with LENGTH)\nend_pos the point where LOAD-DATE begins", "start_pos = doc.index('LENGTH')\nstart_pos = doc.index('\\n', start_pos)\nend_pos = doc.index('LOAD-DATE:')", "Then we can get the three parts of the document we want", "pre_body=doc[:start_pos]\n\nbody = doc[start_pos:end_pos]\n\npost_body = doc[end_pos:]", "Python excursus: Dictionaries\n\nAlongside lists one of the most useful structures in Python is a dictionary.
It is an unordered set of key-value pairings.\nData is organized and identified by a unique key using the syntax 'key' : value", "exdict = { 'item1': 123, 'item2': 'asdsad', 'item3': [1,2,3,'asd','dada'] } # define a dictionary\n\nexdict\n\nexdict['item2'] # get the value associated with the key 'item2'\n\n# add a new value associated with the key 'item4' which is itself a dictionary\nexdict['item4'] = {'a': 12323, 'b': [1,2,23]} \n\nexdict\n\nexdict['item4']['a'] # address this dictionary of dictionary structure\n\nexdict['item3'][3]", "Back to the LexisNexus task\n\nWe are going to use a list of dictionaries approach to store the prebody, body, postbody components for each document", "doc_dict = {'prebody': pre_body, 'body': body, 'postbody': post_body}\n\ndoc_dict['postbody']\n\nrows = []\nfor idx, doc in enumerate(docs):\n \n try:\n start_pos = doc.index('LENGTH')\n start_pos = doc.index('\\n', start_pos)\n end_pos = doc.index('LOAD-DATE:')\n except ValueError: # index raises ValueError if a marker is missing\n print('ERROR with doc', idx)\n continue\n \n doc_dict = {\n 'prebody': doc[:start_pos],\n 'body': doc[start_pos: end_pos],\n 'postbody': doc[end_pos:]\n }\n \n rows.append(doc_dict)\n\nprint(docs[13])", "Python excursus: The Zen of Python\n\nA little bit of poetry from the creators of Python explaining the design and suggestions for truly Pythonic coding!", "import this", "The final parts of the task!\n\nNow we want to write out the documents split into three parts to a CSV file\nThen just for fun we construct a frequency list of all the words in the documents", "import csv\n\nwith open('data/articles.csv','w') as out:\n csvfile = csv.DictWriter(out, fieldnames=('prebody','body','postbody'))\n csvfile.writeheader()\n csvfile.writerows(rows)\n\ndocs2 = [r for r in csv.DictReader(open('data/articles.csv','r'))]\n\nlen(docs2)\n\nprint(docs2[0])", "Frequency counts\n\nThe <code>Counter</code> function generates a dictionary-like object with words as keys and the number of times they occur in a sequence as
values\nIt is a quick way to generate a word frequency list from a set of tokenized documents (i.e., where the text has been turned into a list of words)", "from collections import Counter", "Simple example of using <code>Counter</code>\n<code>Counter</code> works by passing it a list of items, e.g.:", "count = Counter(['a','a','v','c','d','e','a','c'])", "and it returns a dictionary with a count for the number of times each item occurs:", "count.items()", "Just like a dictionary you can get the frequency for a specific item like this:", "count['a'] # how many times does 'a' occur in the list ['a','a','v','c','d','e','a','c']", "A <code>Counter</code> object has an <code>update</code> method that allows multiple lists to be counted.", "text1 = 'This is a text with some words in it'\ntext2 = 'This is another text with more words that the other one has in it'\ntokens1 = text1.lower().split()\ntokens2 = text2.lower().split()\nprint('text1:', tokens1)\nprint('text2:', tokens2)", "First define a new <code>Counter</code>", "freq = Counter()", "Then update it with the words from text1", "freq.update(tokens1)\nfreq.items()", "Then update it again with the words from text2", "freq.update(tokens2)\nfreq.items()", "A simple example of looping over some texts and generating a frequency list using Counter", "texts = [\n 'This is the first text and it has words',\n 'This is the second text and it has some more words',\n 'Finally this one has the most words of all three examples words and words and words'\n]\n\nfreq2 = Counter()\nfor text in texts:\n # turn the text into lower case and split on whitespace\n tokens = text.lower().split()\n freq2.update(tokens)\n \n# show the top 10 most frequent words\nprint(freq2.most_common(10))", "Finally let's make the formatting a bit prettier by looping over the frequency list and producing a tab-separated table:", "for item in freq2.most_common(7):\n print(\"{}\\t\\t{}\".format(item[0],item[1]))", "Counting words in the LexisNexus 
documents\n\nFor a single LexisNexus document loaded in from the CSV file (the list of dictionaries), we select the document by index and then the body component:", "freq_list = Counter(docs2[0]['body'].lower().split())\n\nfreq_list.most_common()", "Find the frequency of the word vaping", "freq_list['vaping']", "Finally we can create a frequency list for all the documents with a loop, using the update method on the Counter object.", "freq_list_all = Counter()\nfor doc in docs2:\n body_text = doc['body']\n tokens = body_text.lower().split()\n freq_list_all.update(tokens)\n\nprint(freq_list_all.most_common())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sz2472/foundations-homework
homework_6_shengying_zhao.ipynb
mit
[ "# 1) Make a request from the Forecast.io API for where you were born (or lived, or want to visit!)\n\nimport requests\n\n!pip3 install requests\n\n#new york\nresponse = requests.get(\"https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889\")\n\ndata = response.json()\n\nprint(data)", "2) What's the current wind speed? How much warmer does it feel than it actually is?", "type(data)\n\ndata.keys()\n\nprint(data['currently'])\n\nprint(data['currently']['temperature']-data['currently']['apparentTemperature'])", "3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?", "print(data['daily'])\n\ntype(data['daily'])\n\ndata['daily'].keys()\n\nprint(data['daily']['data'][0])\n\ntype(data['daily']['data'])\n\nprint(data['daily']['data'][0]['moonPhase'])", "4) What's the difference between the high and low temperatures for today?", "weather_today = data['daily']['data'][0]\nprint(weather_today['temperatureMax']-weather_today['temperatureMin'])", "5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.", "print(data['daily']['data'])\ndaily_data = data['daily']['data']\n\nweather_next_week = data['daily']['data']\nfor weather in weather_next_week:\n print(weather['temperatureMax'])\n if weather['temperatureMax'] > 84:\n print(\"it's a hot day.\")\n elif weather['temperatureMax'] > 74: # anything between 74 and 84 counts as warm\n print(\"it's a warm day.\")\n else:\n print(\"it's a cold day.\")", "6) What's the weather looking like for the rest of today in Miami, Florida? 
I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say \"{temperature} and cloudy\" instead of just the temperature.", "import requests\n\nresponse = requests.get(\"https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/25.7738889, -80.1938889\")\n\ndata = response.json()\n\nprint(data['hourly'])\n\ndata['hourly'].keys()\n\ndata['hourly']['data']\n\nfor hour in data['hourly']['data']:\n if hour['cloudCover'] > 0.5:\n print(hour['temperature'], \"and cloudy\")\n else:\n print(hour['temperature'])", "7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?\nTip: You'll need to use UNIX time, which is the number of seconds since January 1, 1970. Google can help you convert a normal date!\nTip: You'll want to use Forecast.io's \"time machine\" API at https://developer.forecast.io/docs/v2", "import requests\n\nresponse = requests.get(\"https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889,346550400\")\ndata = response.json()\nprint(data['currently']['temperature'])\n\nresponse = requests.get(\"https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889,662083200\")\ndata = response.json()\nprint(data['currently']['temperature'])\n\nresponse = requests.get(\"https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889,977702400\")\ndata = response.json()\nprint(data['currently']['temperature'])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
PrACiDa/intro_ciencia_de_datos
01_arreglos_graficos_y_2D.ipynb
gpl-3.0
[ "NumPy\nNumPy (Numerical Python) es un módulo de Python para cómputo científico. Esta biblioteca contiene muchas funciones útiles en computación científica entre las que se encuentran manipulación de arreglos numéricos, operaciones de álgebra lineal y generación de números pseudo-aleatorios. \nPara poder usar NumPy debemos importarlo, la forma más común de importar NumPy es asignándole el alias np:", "import numpy as np", "Arreglos\nNumPy usa una estructura de datos llamada arreglos (arrays). Los arreglos de NumPy son similares a las listas de Python, pero son más eficientes para realizar tareas numéricas. La eficiencia deriva de las siguientes características:\n\n\nLas listas de Python son muy generales, pudiendo contener objetos de distinto tipo. Los arreglos de NumPy son homogéneos solo pueden contener objetos de un mismo tipo.\n\n\nEn una lista de Python los objetos son asignados dinamicamente, es decir el tamaño de una lista no está predefinidos, siempre podemos agregar más y más elementos. Por el contrario los arreglos de NumPy son estáticos. \n\n\nEstos dos primeros puntos permiten hacer uso eficiente de la memoria\n\nOtra razón por la cual los arreglos son más eficientes que las listas es que en Python todo es un objeto, incluso los números! Por ejemplo en C un entero es esencialmente un rótulo que conecta un lugar en la memoria de la computadora cuyos bytes se usan para codificar el valor de ese entero. Sin embargo en Python un entero es un objeto más complejo que contiene más información que simplemente el valor de un número. Esto da flexibilidad a Python, pero a cambio de pagar un costo en términos computacionales. Python es en general un lenguaje más lento que lenguages como C o Fortran. Este costo es aún mayor cuando combinamos muchos de estos objetos en un objeto más complejo, como por ejemplo enteros dentro de una lista.\n\nOtra ventaja de los arreglos es que se comportan de forma similar a los vectores y matrices usados en matemática. 
Esto facilita muchas de las tareas científicas, precisamente porque el álgebra lineal es el lenguaje usado para pensar y resolver muchos problemas científicos de forma eficiente.\n<a href=\"https://xkcd.com/1838/\">\n<img src='imagenes/ml_al.png' width=250 >\n</a>\nCreando arreglos\nExisten varias rutinas para crear arreglos de NumPy a partir de:\n\nListas o tuplas de Python\nRangos numéricos\nNúmeros aleatorios\nCeros y unos\nArchivos\n\nA partir de listas y tuplas\nPara crear arreglos a partir de listas (o tuplas) podemos usar la función array:", "v = np.array([1, 2, 3, 4 , 5, 6])\nv\n\nM = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])\nM", "El primer arreglo, v, lo creamos a partir de una lista y es por lo tanto unidimensional, mientras que el segundo M lo creamos a partir de una lista anidada (una lista de listas) y resulta en un arreglo bidimensional.\nLos arreglos tienen atributos como por ejemplo shape:", "v.shape, M.shape", "El shape nos indica la cantidad de elementos en cada eje (o axis). En 2 dimensiones podemos pensarlo como el número de (filas, columnas).\nTambién podemos preguntarle a un array cuál es su dimensión:", "v.ndim # v es unidimensional\n\nM.ndim # M es bidimensional", "A partir de un rango numérico\nOtra forma de crear arreglos es usando rangos. Por ejemplo podemos crear un arreglo conteniendo números igualmente espaciados en el intervalo [desde, hasta), usando arange.", "np.arange(0, 10, 1) # desde, hasta(sin incluir), paso (el paso es opcional!)", "Otra función para crear rangos es linspace que devuelve números igualmente espaciados en el intervalo [desde, hasta] (es decir incluyendo el hasta). Otra diferencia con arange es que no se especifica el paso sino la cantidad total de números que contendrá el arreglo.", "np.linspace(1, 10, 25) # desde, hasta, elementos (elementos es opcional)", "A partir de números aleatorios\nLos números aleatorios son usados en muchos problemas científicos. 
En la práctica las computadoras son solo capaces de generar números pseudo-aleatorios. Python usa un algoritmo llamado Mersenne Twister para generar números pseudo-aleatorios. Este algoritmo es más que suficiente para fines científicos, pero no es útil en caso de que necesitemos números pseudo-aleatorios para usar en criptografía. Resumiendo para nuestros fines podemos asumir que los números pseudo-aleatorios que generaremos a lo largo del curso son realmente números aleatorios. \nTodas las rutinas de NumPy para generar números aleatorios viven dentro del módulo random. \nLa función más simple es rand. Esta función crea un arreglo de números en el intervalo [0, 1). Dentro de ese intervalo los números son equiprobables, es decir es una distribución uniforme. El argumento de rand son las dimensiones del arreglo resultante.", "np.random.rand(2, 5) # arreglo con forma (2, 5)", "De forma similar randn (noten la n al final) devuelve muestras a partir de la distribución normal estándar (media = 0, desviación estándar = 1), según las dimensiones que especifiquemos.", "np.random.randn(10)", "Creando arreglos con ceros y unos\nEn Python es común crear una lista vacía que luego se llena de elementos en un loop. En NumPy es común crear un arreglo del tamaño necesario y luego llenarlo de valores. 
Para estas situaciones resulta conveniente tener a mano funciones que crean arreglos con ceros o unos.", "np.zeros((2, 4))\n\nnp.ones((2, 4))\n\nnp.zeros_like(M) # noten que los ceros no tienen \".\", es decir son enteros, ¿Por qué?\n\nnp.full((2, 4), 42.)", "Dado que en Python 0 evalúa como False y 1 como True, una forma de crear un arreglo de booleanos es:", "np.ones((2, 4), dtype=bool)", "Creando arreglos a partir de archivos\nExisten al menos un par de funciones para crear arreglos de NumPy desde archivos, la más versátil de ellas es genfromtxt:", "datos = np.genfromtxt('datos/microbioma.csv', delimiter=','\n , skip_header=1, usecols=(1,2,3,4), dtype='int')\n\ndatos.shape\n\ndatos[:4]", "Leamos el archivo microbioma.csv pero esta vez pasando menos argumentos que en el caso anterior:\nnp.genfromtxt('datos/microbioma.csv', delimiter=',')\n\n¿Cuáles son las diferencias entre ambos arreglos?\n¿Cómo se explican los nan?\n<br>\n<br>\n<br>\n<br>\nTip: al menos bajo Linux, es posible usar las celdas de una notebook para ejecutar comandos como si fuese una terminal (o línea de comandos), por ejemplo podemos ejecutar el comando head para ver el encabezado de un archivo.", "!head -4 datos/microbioma.csv", "Indexado y rebanado de arreglos\nLos arreglos de NumPy, al igual que las listas, se pueden indexar y se pueden rebanar (slices). Por ejemplo la siguiente sintaxis funciona de la misma manera sin importar si M es un arreglo o una lista.", "M[0] # el primer elemento de M", "La sintaxis usada para indexar y rebanar es una generalización de la usada para las listas de Python. Esta generalización facilita trabajar con arreglos de más de 1 dimensión. 
Trabajar con más de una o dos dimensiones puede ser un poco confuso, sobre todo al principio ¡Aunque no necesariamente solo al principio!\nLa siguiente expresión es válida para arreglos", "M[0, 1] # el elemento (0, 1) de M\n # o también la primer fila de M y de ella el segundo elemento", "Sin embargo esta expresión no es válida para listas.\n¿Cuál es la expresión equivalente que funciona con listas?\n<br>\n<br>\n<br>\n<br>\nEn la siguiente celda tenemos un ejemplo de una expresión que es común a listas y arreglos.", "M[1:] # a partir de la fila 1 todo", "Y este es un ejemplo de rebanado que funciona con arreglos, pero no con listas.", "M[1,:] # solo la fila 1 (o la fila 1 en el primer axis y todo en el resto de los axis)", "El poder rebanar/indexar en varias dimensiones en simultaneo nos da flexibilidad para trabajar con subconjuntos de datos contenidos en un arreglo. Veamos más ejemplos:", "M[:,1] # solo la columna 1 (o todo en la primer dimensión y la columna 1 en el resto de las dimensiones)\n\nM[:,1:] # todas las filas y todas las columnas a partir de la columna 1\n\nM[::-1] # los elementos de M en reversa", "Es importante notar que al tomar rebanadas NumPy NO genera un nuevo arreglo, sino una vista (view) del arreglo original. 
Por lo tanto si a una rebanada le asignamos un número, se lo estaremos asignando al arreglo original, como se puede ver en el siguiente ejemplo.", "M[0, 0] = 42\nM", "Distinto es asignar la rebanada a una variable y luego modificar esa variable:", "a = M[0, 0]\na = 1\nM", "Para crear copias se puede usar la función np.copy() o el método .copy().\nGenera una copia de M, llamada K, modifica K y comprueba que M no se modificó\n<br>\n<br>\n<br>\n<br>\nFunciones Universales (Ufunc)\nSi quisiéramos calcular la raíz cuadrada de todos los elementos de un array deberíamos hacer un loop sobre cada uno de los elementos del mismo, computar la raíz cuadrada y almacenar el resultado.\nUna opción sería:", "res = np.zeros_like(M, dtype=float)\n\nfor i in range(M.shape[0]):\n for j in range(M.shape[1]):\n res[i, j] = M[i, j] ** 0.5\n\nres", "Otra opción sería usar enumerate:", "res = np.zeros_like(M, dtype=float)\n\nfor i, fila_i in enumerate(M):\n for j, elemento_ij in enumerate(fila_i):\n res[i][j] = elemento_ij ** 0.5\n\nres", "NumPy permite vectorizar estas operaciones, es decir podemos calcular la raíz cuadrada de todos los elementos de un arreglo en una sola operación que se aplica a cada uno de los elementos del arreglo:", "M**0.5", "Como se ve en el ejemplo anterior vectorizar permite omitir los loops explícitos. Esta capacidad de\nvectorizar código no está restringida a operadores matemáticos como la potenciación **, funciona con otros operadores y con una gran cantidad de funciones. Por ejemplo", "np.sqrt(M)\n\nnp.log(M) # ojo, log es logaritmo natural", "Funciones como sqrt o log, que operan sobre arreglos elemento-a-elemento se conocen como funciones universales (usualmente abreviadas como ufunc).\nUna de las ventajas de usar ufuncs es que permite escribir código más breve. Otra ventaja es que los cómputos son más rápidos que usando loops de Python. 
Detrás de escena NumPy realiza las operaciones en un lenguaje como C o Fortran, por lo que hay una ganancia considerable en velocidad, respecto de código en Python puro. Además, el código usado por NumPy es código que suele estar optimizado gracias a los años de labor de programadores y científicos que colaboran con proyectos científicos.\nVeamos otro ejemplo, cómo sumar todos los elementos de un arreglo.", "np.sum(M)", "En el ejemplo anterior la suma se hizo sobre todos los números contenidos en el arreglo, sin respetar sus dimensiones. En muchas ocasiones es preferible hacer operaciones sobre alguna dimensión en particular, por ejemplo sumar a lo largo de las columnas:", "np.sum(M, axis=0)", "O sumar a lo largo de las filas:", "np.sum(M, axis=1)", "Un arreglo tendrá tantos axis como dimensiones. \nBroadcasting\nUna característica que facilita vectorizar código es la capacidad de operar sobre arreglos que no tienen las mismas dimensiones. Esto se llama broadcasting y no es más que un conjunto de reglas que permiten aplicar operaciones binarias (suma, multiplicación, etc.) a arreglos de distinto tamaño.\nConsideremos el siguiente ejemplo donde sumamos dos arreglos, elemento a elemento.", "a = np.array([0, 1, 2])\nb = np.array([2, 2, 2])\na + b", "Podemos ver que el arreglo b contiene 3 veces el número 2. El broadcasting hace que la siguiente operación también sea válida y brinde el mismo resultado que la celda anterior.", "a + np.array(2)", "Incluso la siguiente operación es válida:", "M + b", "En ambos casos lo que está sucediendo es como si antes de realizar la suma extendiéramos una de las partes hasta que las dimensiones coincidan, por ejemplo repetir 3 veces el número 2 o tres veces el vector b. \nEl broadcasting no funciona para cualquier par de arreglos. 
La siguiente operación funciona:", "M[1:,:] + b", "mientras que la siguiente dará un error", "M + b[:2]", "El mensaje de error nos dice que NumPy no sabe cómo hacer para encajar las dimensiones de estos dos arreglos. Considero que este es un error muy razonable ya que no es del todo claro cómo NumPy podría hacer la operación que le pedimos que haga, además creo que el error es bastante transparente ¿Opinan igual?\nMás detalles sobre broadcasting aquí.\nComparaciones y máscaras de booleanos\nAsí como es posible sumar un número a un arreglo, es posible hacer comparaciones elemento-a-elemento. Por ejemplo podemos preguntar qué valores de M son mayores a 6, el resultado será un arreglo con booleanos.", "M > 6", "Es muy común usar expresiones como la anterior para obtener, de un array dado, solo el subconjunto de valores que cumplen con cierto criterio, como:", "M[M > 6]", "o incluso combinando arreglos, como:", "M[a == 2]", "Medidas de centralidad y dispersión (usando NumPy)\nEn el capítulo anterior vimos cómo usar Python para calcular la media, la mediana y la varianza. NumPy incluye funciones (y métodos) ya definidos para calcular estas cantidades. Para calcular la media de los valores en un array simplemente hacemos.", "np.mean(v)", "Una forma alternativa es usar el método .mean()", "v.mean()\n\nprint('varianza {:.2f}'.format(np.var(v)))\nprint('desviación estándar {:.2f}'.format(np.std(v)))", "Cuantil\nAdemás de la varianza existen otras formas de describir la dispersión de los datos. Una de ellas es el rango. Es decir la diferencia entre el valor más grande y el más chico en un conjunto de datos. Un problema con el rango es que es muy sensible a los valores extremos, después de todo se define como la resta de los dos valores más extremos. Una alternativa es calcular un rango pero truncado, es decir dejando de lado valores hacia ambos extremos. 
Esto se puede hacer con los cuantiles.\nLos cuantiles son puntos de corte que dividen al conjunto de datos en grupos de igual tamaño. Existen varios nombres para los cuantiles según la cantidad de divisiones que nos interesen.\n\nLos cuartiles son los tres puntos que dividen a la distribución en 4 partes iguales, se corresponden con los cuantiles 0.25, 0.50 y 0.75.\nLos quintiles dividen a la distribución en cinco partes (corresponden a los cuantiles 0.20, 0.40, 0.60 y 0.80).\nLos deciles dividen a la distribución en diez partes.\nLos percentiles dividen a la distribución en cien partes.\nLa mediana es el percentil 50 y el cuartil 0.5.\n\nEn Python el cálculo de estos estadísticos puede realizarse fácilmente usando funciones predefinidas en NumPy.", "x = np.random.normal(0, 1, 100)\n'percentiles 25={:.2f}; 50={:.2f}; 75={:.2f}'.format(*(np.percentile(x, [25, 50, 75])))", "Un rango que se calcula usando cuantiles y que es muy usado es el rango intercuartil, el cual se calcula como:\n$$IQR = q_3 - q_1 = p_{75} - p_{25}$$\ny usando NumPy", "np.diff(np.percentile(x, [25, 75]))", "Gráficos\nLos gráficos ocupan un lugar central en la estadística moderna y en la ciencia de datos, ya sea en el análisis exploratorio de datos o en procesos de inferencia.\nExisten varias bibliotecas para hacer gráficos en Python, Matplotlib es una de las más usadas. 
La forma más común de importarla es:", "%matplotlib inline\nimport matplotlib.pyplot as plt", "La primer línea es para decirle a la Notebook que los gráficos queden embebidos en la notebook (si no estuvieramos usando la notebook no escribiríamos esta línea).\nLa segunda línea es la forma estándar de importar matplotlib.\nVeamos como hacer un gráfico sencillo.", "x = range(20)\ny = [i ** 0.5 for i in x]\nplt.plot(x, y)\nplt.xlabel('x')\nplt.ylabel(r'$\\sqrt{x}$', rotation=0, labelpad=15);", "En la primer y segunda línea estamos \"generando\" datos.\nEn la tercer línea decimos que queremos generar un gráfico del tipo plot (ya veremos que los hay de otros tipos), donde graficaremos x vs y.\nEn la cuarta y quinta línea agregamos rótulos a los ejes. En la quinta línea usamos la misma notación usada por $LaTeX$ para escribir fórmulas matemáticas. $LaTeX$ es un lenguaje para escribir textos que es muy usado en muchas disciplinas científicas para escribir papers, posters, diapositivas, libros, etc.\n\nVeamos otro ejemplo:", "x = range(20)\ny = [i ** 2 for i in x]\nz = [i ** 1.8 for i in x]\nplt.plot(x, y, label=r'$x^2$')\nplt.plot(x, z, label=r'$x^{1.8}$')\nplt.xlabel('x', fontsize=16)\nplt.ylabel(r'$f(x)$', fontsize=16)\nplt.legend(fontsize=16);", "Existen muchos tipos de gráficos para representar datos. A continuación veremos cinco representaciones comunes para datos unidimensionales:\n\nhistogramas\nkde plots\nstripplot\nbox plots\nviolin plots\n\nHistogramas\nEn un histograma se representa la frecuencia con la que aparecen los distintos valores en un conjunto de datos. Se utilizan barras contiguas para representar los datos. La superficie (y no la altura) de las barras es proporcional a la frecuencia de datos observados. Los datos son agrupados en bins, y suelen graficarse sin normalizar o normalizados. Normalizar implica que el área total del histograma suma 1. 
No hay que confundir los histogramas con los gráficos de barras que se utilizan para comparar valores discretos entre grupos, mientras que los histogramas se usan para representar distribuciones continuas.\nLos histogramas son sensibles a la cantidad de bins que se usan. Si usamos unos pocos bins no lograremos capturar la estructura de los datos, si usamos demasiados bins no solo estaremos representando la estructura de los datos sino también el ruido. Esto se ve más claramente si probamos valores extremos, en el extremo inferior tendríamos a todos los datos representados con una sola barra, en el superior una barra por cada dato.", "np.random.seed(440)\nx = np.random.gamma(2, 10, size=1000)\n\nplt.hist(x, bins=50, density=True, cumulative=False); # probá cambiar los bins, y los demás argumentos.", "Aprovechando lo que hemos aprendido hasta el momento generemos un gráfico que muestre la diferencia entre media y mediana", "plt.hist(x, bins=20)\nmedia = np.mean(x)\nmediana = np.median(x)\n\nplt.axvline(media, ymax=.9, c='C1', lw='3', label='{:.2f} media'.format(media))\nplt.axvline(mediana, ymax=.9, c='C3', lw='3', label='{:.2f} mediana'.format(mediana))\nplt.legend(fontsize=14);", "Kernel Density plot\nEste tipo de gráfico es similar a un histograma, pero en vez de usar un número discreto de bins para representar una distribución se usa una curva suave y continua.\nUn gráfico KDE se dibuja de la siguiente forma: se reemplaza cada dato por una distribución Gaussiana y luego se suman todas las Gaussianas. En vez de una distribución Gaussiana es posible usar otras distribuciones. El nombre genérico para esas distribuciones cuya suma se usa como aproximación de una función es el de kernel. La Gaussiana es uno de los kernels más usados.\nDe forma análoga a lo que sucede con los bins los KDE son sensibles a un parámetro llamado bandwidth. 
Existen varias heurísticas (reglas empíricas que suelen funcionar bien en la práctica) para ajustar el bandwidth de forma automática de acuerdo a los datos.\nEs posible usar matplotlib para graficar un kde, pero no existe una función que lo haga de forma automática. Es decir es posible pero requiere de cierto trabajo. Lo mismo sucede con otros tipos de gráficos usados para analizar datos, es por ello que existe una biblioteca llamada Seaborn, la cual no es más que una colección de funciones escritas usando matplotlib.", "import seaborn as sns", "Usando Seaborn, podemos hacer un kde de forma muy simple", "sns.kdeplot(x); #también ver la función sns.distplot()", "Uno de los problemas de los KDE, o mejor dicho de muchas de sus implementaciones, es que los bordes o límites de las distribuciones no son tenidos en cuenta. Si observamos con cuidado la figura anterior veremos que la curva azul cubre un rango que va aproximadamente desde -10 hasta 100. Pero si miramos los datos veremos que no hay valores negativos ni por encima de $\\approx$ 90, es decir el KDE ¡se está inventando datos! Algunas implementaciones, como la utilizada por PyMC3, aplican correcciones que permiten una mejor aproximación en los bordes de las distribuciones. Al usar paquetes como Seaborn que no tienen en cuenta estas consideraciones una forma algo burda de subsanar este problema es truncar el gráfico en los valores mínimos y máximos de los datos. 
Como Seaborn está escrito usando Matplotlib, podemos hacer esto, y muchas otras modificaciones, usando Matplotlib.", "sns.kdeplot(x)\nplt.xlim(x.min(), x.max())\nplt.xlabel('$x$')\nplt.ylabel('$density$');", "Los KDE se pueden utilizar para graficar más de una distribución en el mismo gráfico", "sns.kdeplot(x, label='$x$')\nsns.kdeplot(x**0.9, label='$x^{0.9}$')\nsns.kdeplot(10 + x*0.5, label='$10 + 0.5x$')\nplt.xlim(x.min(), x.max());\nplt.legend();", "Un efecto similar puede lograrse con los histogramas si solo graficamos el contorno de los mismos (histtype='step') o graficando histogramas semitransparentes (alpha menor a 1). Pero en general el efecto no es tan claro como cuando se usan KDEs.\nIntenta reproducir el gráfico anterior pero usando histogramas\n<br>\n<br>\n<br>\n<br>\nStripplot\nEste tipo de gráfico sirve para visualizar un conjunto de datos donde una variable es métrica y las demás son categóricas.\nPara visualizarlos podemos usar la función stripplot de seaborn (un gráfico similar es el swarmplot). Los stripplot se suelen graficar agregando un poco de ruido (jitter en inglés) a lo largo del eje de las $x$, esto es simplemente un truco para facilitar la visualización de los puntos, que en caso contrario caerían todos en una misma línea ya que las variables categóricas no tienen dispersión.\nPuede ser útil en sí mismo o puede ser usado superpuesto sobre un boxplot o violinplot (ver más adelante).", "y0 = np.random.normal(0, 10, size=42)\ny1 = np.random.normal(-1, 10, size=50)\nsns.stripplot(data=[y0, y1], jitter=True);", "Gráficos de cajas o de bigotes (Box plot o Whisker-plot)\nLos gráficos de caja están basados en cuartiles. La caja está delimitada (en sus bordes inferior y superior) por el primer y tercer cuartil, mientras que la línea dentro de la caja es el segundo cuartil (la mediana). 
Los bigotes pueden indicar varias medidas, por eso es siempre importante leer/escribir la leyenda o texto que acompaña a un boxplot, a veces se usa una desviación estándar, otras veces los percentiles 2 y 98, otras veces es una función del rango intercuartil (como en el siguiente gráfico). Los valores por fuera de los bigotes se suelen considerar como datos aberrantes (ver más adelante).", "sns.boxplot(data=[y0, y1]);", "Gráficos de violín (violin plot)\nLos gráficos de violín son una combinación de gráficos de caja con kde.", "sns.violinplot(data=[y0, y1]);", "Datos aberrantes\nLos datos aberrantes (outliers) son valores que están muy alejados de la mayoría de los valores de una distribución. Los datos aberrantes pueden ser errores de medición, errores al procesar los datos o incluso valores correctos pero inusuales (sobre todo cuando la muestra es pequeña). Siempre es buena idea revisar si nuestros datos contienen datos aberrantes y en muchos casos puede llegar a ser conveniente removerlos. Siempre que se remueve un dato aberrante deberá reportarse que fue removido y explicar cual fue el criterio usado para removerlos. Es importante destacar que la decisión de remover datos aberrantes no debe ser tomada a la ligera. Si un supuesto dato aberrante fuese un valor correcto quizá nos estaría indicando que no comprendemos del todo el fenómeno estudiado y al dejarlo de lado podríamos estar perdiéndo información importante!\nExisten varios criterios para identificar datos aberrantes. Dos muy usados son:\n * Todo valor por debajo de $\\mu$-n$\\sigma$ y por encima de $\\mu$+n$\\sigma$. 
Donde n = 1, 2, 3...\n * Se define el rango intercuartil como $IQR = q_3 - q_1 = p_{75} - p_{25}$ y se define como aberrante todo valor por debajo de $q_1 - 1.5 \\times IQR$ y por encima de $q_3 + 1.5 \\times IQR$\nEl primer criterio suele ser usado para distribuciones que se asemejan a Gaussianas, mientras que el segundo es más general ya que el rango intercuartil es una medida más robusta de la dispersión de una distribución que la desviación estándar. El valor de 1.5 es totalmente arbitrario y un valor que se viene usando desde que esta idea fue propuesta. Si nuestros datos son aproximadamente gaussianos, entonces este criterio excluye menos del 1% de los datos.\nSegún la desigualdad de Chebyshev (y bajo ciertas condiciones), al menos $1 - \\frac{1}{k^2}$ de los valores de una distribución están dentro de $k$ desviaciones estándar. Es decir casi todos los valores de una distribución de probabilidad están cerca de la media. Por lo tanto el 75% y el 89% de los valores de una distribución se encuentran dentro de 2 y 3 desviaciones estándar, respectivamente. La desigualdad de Chebyshev indica una cota, para varias distribuciones es posible que los valores se encuentren mucho más concentrados alrededor de la media. Por ejemplo esto sucede con las curvas Gaussianas. Para una curva Gaussiana se cumple la regla 68-95-99,7 es decir el 68 por ciento de los datos se encuentra dentro de 1 desviación estándar, el 95 dentro de 2 y el 99.7 dentro de 3.\n<a href=\"https://en.wikipedia.org/wiki/Box_plot\">\n<img src='imagenes/Boxplot_vs_pdf.png' width=350 >\n</a>\nRelación entre dos variables\nLos gráficos que hasta ahora hemos visto sirven para visualizar una variable por vez (aunque sns.kdeplot soporta la visualización de dos variables). Sin embargo, en muchos casos necesitamos entender la relación entre dos variables. 
Two variables are inter-related if knowing the value of one of them provides information about the value of the other.\nScatter plot\nA scatter plot uses Cartesian coordinates to display the values of two variables simultaneously. These plots are the simplest way to visualize the relationship between two variables.\nSuppose we have two variables, which we will creatively call $x$ and $y$.", "x = np.random.normal(size=1000)\ny = np.random.normal(loc=x, scale=1)", "Using matplotlib we can plot both variables with the scatter function.", "plt.scatter(x, y, alpha=1)\nplt.xlabel('x')\nplt.ylabel('y');", "Seaborn provides multiple options for visualizing relationships between two variables, several of them available through the jointplot function. In addition to the scatter plot, this function shows the marginal distributions of $x$ and $y$.", "sns.jointplot(x, y, kind='scatter', stat_func=None);", "When we have a large amount of data, the points of a scatter plot start to overlap and certain patterns may go unnoticed. In these cases it can be convenient to group the data in some way instead of looking at the raw data.\nThe following plot uses a kernel density estimation both for the marginal distributions and for the joint distribution.", "sns.jointplot(x, y, kind='kde', stat_func=None);", "An alternative to a two-dimensional KDE is the hexbin. This type of plot is a 2D version of a histogram. The name comes from the fact that the data are grouped into hexagonal cells. Why hexagons instead of squares or triangles? Simply because hexagonal cells distort the data less than the other options, for the following reasons:\n\nHexagons touch their neighbors along their sides (squares and triangles do so along vertices and sides). That is, they connect with their neighbors more symmetrically.
\nHexagons are the polygon with the largest number of sides that still tiles (tessellates) a flat surface.\nHexagons introduce less visual distortion than, for example, squares. A square grid makes us tend to look along the horizontal and vertical directions.\n\nHexbins are useful when we need to visualize a lot of data. By a lot I mean numbers above the hundreds of thousands of data points. One advantage of hexbins over KDEs is their lower computational cost.", "sns.jointplot(x, y, kind='hex', stat_func=None); # see also plt.hexbin();", "An alternative for preventing some points from hiding the rest, in an ordinary scatter plot, is to make the points semi-transparent. In matplotlib, the transparency of objects is controlled by a parameter called alpha that ranges between 0 and 1. This is a good moment to go back a few cells and see how this and other parameters can be used to modify the plots we have made.\nCorrelation\nWhen working with two variables, it is common to ask about the relationship between them. We say that two variables are related if one provides information about the other. If, on the other hand, one variable offers no information about the other, we say they are independent.\nCorrelation is a measure of the dependence between two variables. There are several correlation coefficients; the most commonly used is Pearson's correlation coefficient. This coefficient only measures linear relationships between variables.
Pearson's correlation coefficient is the result of dividing the covariance of the two variables ($E[(x - \bar x)(y - \bar y)]$) by the product of their standard deviations:\n$$\rho_{(x,y)}={E[(x - \bar x)(y - \bar y)] \over \sigma_x\sigma_y}$$\nIn words (which can be more obscure than formulas), Pearson's correlation coefficient indicates how one variable changes as the other changes, relative to the intrinsic variation of both variables.\nWhy use Pearson's coefficient instead of the covariance directly? Mainly because its interpretation is simpler. By dividing by the product of the standard deviations we are standardizing the covariance, obtaining a dimensionless coefficient that can only vary between -1 and 1, regardless of the units of our variables.\nThe jointplot function, which we saw in the previous section, returns by default the value of Pearson's correlation coefficient, together with a p-value whose meaning we will study in chapter 4.", "sns.jointplot(x, y, kind='scatter');", "Identifying correlations can be useful for understanding how two variables are related and for predicting one from the other. That is why, besides visualizing the relationship between variables, one often fits models to the data, such as straight lines. In the third course of this program we will see how to build linear and non-linear models. For now we will settle for letting seaborn fit a line to the data for us.", "sns.jointplot(x, y, kind='reg');", "The following image shows several data sets and their respective Pearson correlation coefficients. It is important to note that Pearson's correlation coefficient reflects the linearity and the direction of that linearity (first row), but not the slope of the relationship (middle row). Nor is it able to capture non-linear relationships.
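To make the covariance-over-standard-deviations formula concrete, here is a minimal, hypothetical implementation of Pearson's coefficient (the function name is ours, and this is essentially what one of the exercises at the end asks for):

```python
import numpy as np

def pearson(x, y):
    # covariance of x and y divided by the product of their standard deviations
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    cov = ((x - x.mean()) * (y - y.mean())).mean()
    return cov / (x.std() * y.std())

rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = a + rng.normal(size=500)

# the normalization (N vs N-1) cancels out, so this matches np.corrcoef
print(np.isclose(pearson(a, b), np.corrcoef(a, b)[0, 1]))  # → True
```

Note that if one variable is constant, its standard deviation is 0 and the division is undefined, which is exactly the zero-slope case discussed next.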
In the middle row, the line with zero slope has an undefined Pearson correlation coefficient, since the variance of the variable $y$ is 0.\n<img src='imagenes/Correlación.png' alt=\"correlación\", width=600, height=600>\nCorrelation and causation\nIf there is some kind of mechanism that makes one variable depend on another, there must be correlation (although not necessarily linear). But the opposite is not always true: two variables can be correlated without any mechanism linking them. Given the huge set of variables that can be measured, it should not be surprising that spurious correlations exist. For example, the following figure shows that the number of pirates and the global mean temperature are inversely correlated.\n<img src='imagenes/pirates_temp.png' alt=\"Pirates_temp\", width=600, height=600> \nThis plot was created on purpose to illustrate, among other points, that correlation does not imply causation (note also that the order of the data on the $x$ axis is wrong and the scale is problematic, to say the least). For more details on the origin of this plot, read this Wikipedia entry.\nThe apparent relationship between the variables mean temperature and number of pirates could be explained in several ways. Maybe it is pure coincidence, or maybe one could argue that the changes brought about by the industrial revolution ended up, on the one hand, increasing the amount of $CO_2$ (and other greenhouse gases) and, on the other, producing socio-cultural and technological changes that led (after a long chain of events) to the decline of pirates. But it is not true that we could counteract global warming simply by increasing the number of pirates!\nTo establish a causal relationship from a correlation, one needs to establish and test the existence of a mechanism linking both variables.
I hope this example has helped you understand that correlation does not imply causation.\n<img src='http://imgs.xkcd.com/comics/correlation.png' alt=\"xkcd\">\nSciPy\nNumPy together with SciPy form the core of the entire scientific Python ecosystem. When working in scientific computing, we will often need access to numerical routines to perform tasks such as interpolating, integrating, optimizing, running statistical analyses, processing audio, processing images, etc.\nSciPy is a scientific computing library built on top of NumPy that offers many of these functions. As with NumPy, SciPy provides many fast, reliable, and readily available routines. SciPy is also the name of a group of conferences attended by users and developers of scientific computing tools in Python.\nIt is not very common to import all of SciPy's functions; in general, submodules or even individual functions are imported. If we needed statistical functions, we would probably import them as follows:", "from scipy import stats", "And then we would use it like this:", "stats.describe(x)", "Or we might want to compute a linear regression:", "stats.linregress(x, y)", "In general, to use SciPy there is not much more to learn beyond what we already learned about NumPy, plus reading the documentation for each specific function we need. In the next chapters we will see some examples of its use as we go.\nExercises\nLet's explore what other functions NumPy offers; to do so we can use the introspection tools that Jupyter provides and the NumPy documentation.\n\n\nThere are many other operations that can be performed on arrays and many other functions that NumPy offers.
Let's explore the following:\n\nreshape\nconcatenate\nhstack\nvstack\nsplit\nflatten\nsort\nargsort\nloadtxt\n\n\n\nCreate an array called arr with dimensions 3x3 containing the integers from 0 to 8.\n\nExtract all the odd numbers from arr\n\nCreate an array like arr but with the odd numbers replaced by -1\n\n\nGenerate Gaussian data with np.random.randn(s), where s is 10, 100, or 1000, and for each case count how many points are outliers according to the interquartile-range rule (using the value 1.5) and how many values are outliers using 2 and 3 standard deviations. To make sure the numbers are reliable, repeat the exercise several times for each s and report the average number of outliers and its standard deviation.\n\n\nCompare the formulas for the variance and the covariance and explain why it is correct to say that the variance is the covariance of a variable with itself.\n\n\nWrite a function that computes Pearson's correlation coefficient. Check that the result is correct by comparing it with the result of stats.linregress\n\n\nGenerate 3 or 4 data sets using NumPy. You can try combining random methods, trigonometric functions, logarithms, etc.\n\n\nMake one-dimensional plots of the data generated in the previous point. Do at least one example using only matplotlib, another using only seaborn, and another using matplotlib together with seaborn.\n\n\nMake at least one two-dimensional plot that reflects the relationship between two variables generated in point 6. Include, as part of the legend, the value of Pearson's correlation coefficient (using both the function created in 5 and stats.linregress)\n\n\nFurther reading\n\nwikipedia :-)\nThink Stats\nData Analysis with Open Source Tools" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
oscarmore2/deep-learning-study
transfer-learning/Transfer_Learning_Solution.ipynb
mit
[ "Transfer Learning\nMost of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.\n<img src=\"assets/cnnarchitecture.jpg\" width=700px>\nVGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.\nYou can read more about transfer learning from the CS231n course notes.\nPretrained VGGNet\nWe'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. \nThis is a really nice implementation of VGGNet, quite easy to work with. 
The network has already been trained and the parameters are available from this link.", "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\nvgg_dir = 'tensorflow_vgg/'\n# Make sure vgg exists\nif not isdir(vgg_dir):\n raise Exception(\"VGG directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(vgg_dir + \"vgg16.npy\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',\n vgg_dir + 'vgg16.npy',\n pbar.hook)\nelse:\n print(\"Parameter file already exists!\")", "Flower power\nHere we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.", "import tarfile\n\ndataset_folder_path = 'flower_photos'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('flower_photos.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:\n urlretrieve(\n 'http://download.tensorflow.org/example_images/flower_photos.tgz',\n 'flower_photos.tar.gz',\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with tarfile.open('flower_photos.tar.gz') as tar:\n tar.extractall()\n tar.close()", "ConvNet Codes\nBelow, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. 
We can then write these to a file for later when we build our own classifier.\nHere we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \\times 224 \\times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):\n```\nself.conv1_1 = self.conv_layer(bgr, \"conv1_1\")\nself.conv1_2 = self.conv_layer(self.conv1_1, \"conv1_2\")\nself.pool1 = self.max_pool(self.conv1_2, 'pool1')\nself.conv2_1 = self.conv_layer(self.pool1, \"conv2_1\")\nself.conv2_2 = self.conv_layer(self.conv2_1, \"conv2_2\")\nself.pool2 = self.max_pool(self.conv2_2, 'pool2')\nself.conv3_1 = self.conv_layer(self.pool2, \"conv3_1\")\nself.conv3_2 = self.conv_layer(self.conv3_1, \"conv3_2\")\nself.conv3_3 = self.conv_layer(self.conv3_2, \"conv3_3\")\nself.pool3 = self.max_pool(self.conv3_3, 'pool3')\nself.conv4_1 = self.conv_layer(self.pool3, \"conv4_1\")\nself.conv4_2 = self.conv_layer(self.conv4_1, \"conv4_2\")\nself.conv4_3 = self.conv_layer(self.conv4_2, \"conv4_3\")\nself.pool4 = self.max_pool(self.conv4_3, 'pool4')\nself.conv5_1 = self.conv_layer(self.pool4, \"conv5_1\")\nself.conv5_2 = self.conv_layer(self.conv5_1, \"conv5_2\")\nself.conv5_3 = self.conv_layer(self.conv5_2, \"conv5_3\")\nself.pool5 = self.max_pool(self.conv5_3, 'pool5')\nself.fc6 = self.fc_layer(self.pool5, \"fc6\")\nself.relu6 = tf.nn.relu(self.fc6)\n```\nSo what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\nThis creates the vgg object, then builds the graph with vgg.build(input_). 
Then to get the values from the layer,\nfeed_dict = {input_: images}\ncodes = sess.run(vgg.relu6, feed_dict=feed_dict)", "import os\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_vgg import vgg16\nfrom tensorflow_vgg import utils\n\ndata_dir = 'flower_photos/'\ncontents = os.listdir(data_dir)\nclasses = [each for each in contents if os.path.isdir(data_dir + each)]", "Below I'm running images through the VGG network in batches.", "# Set the batch size higher if you can fit in in your GPU memory\nbatch_size = 10\ncodes_list = []\nlabels = []\nbatch = []\n\ncodes = None\n\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\n\n for each in classes:\n print(\"Starting {} images\".format(each))\n class_path = data_dir + each\n files = os.listdir(class_path)\n for ii, file in enumerate(files, 1):\n # Add images to the current batch\n # utils.load_image crops the input images for us, from the center\n img = utils.load_image(os.path.join(class_path, file))\n batch.append(img.reshape((1, 224, 224, 3)))\n labels.append(each)\n \n # Running the batch through the network to get the codes\n if ii % batch_size == 0 or ii == len(files):\n images = np.concatenate(batch)\n\n feed_dict = {input_: images}\n codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)\n \n # Here I'm building an array of the codes\n if codes is None:\n codes = codes_batch\n else:\n codes = np.concatenate((codes, codes_batch))\n \n # Reset to start building the next batch\n batch = []\n print('{} images processed'.format(ii))\n\n# write codes to file\nwith open('codes', 'w') as f:\n codes.tofile(f)\n \n# write labels to file\nimport csv\nwith open('labels', 'w') as f:\n writer = csv.writer(f, delimiter='\\n')\n writer.writerow(labels)", "Building the Classifier\nNow that we have codes for all the images, we can build a simple classifier on top of them. 
The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.", "# read codes and labels from file\nimport csv\n\nwith open('labels') as f:\n reader = csv.reader(f, delimiter='\\n')\n labels = np.array([each for each in reader if len(each) > 0]).squeeze()\nwith open('codes') as f:\n codes = np.fromfile(f, dtype=np.float32)\n codes = codes.reshape((len(labels), -1))", "Data prep\nAs usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!\n\nExercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.", "from sklearn.preprocessing import LabelBinarizer\n\nlb = LabelBinarizer()\nlb.fit(labels)\n\nlabels_vecs = lb.transform(labels)", "Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.\nYou can create the splitter like so:\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nThen split the data with \nsplitter = ss.split(x, y)\nss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. 
Be sure to read the documentation and the user guide.\n\nExercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.", "from sklearn.model_selection import StratifiedShuffleSplit\n\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\n\ntrain_idx, val_idx = next(ss.split(codes, labels_vecs))\n\nhalf_val_len = int(len(val_idx)/2)\nval_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]\n\ntrain_x, train_y = codes[train_idx], labels_vecs[train_idx]\nval_x, val_y = codes[val_idx], labels_vecs[val_idx]\ntest_x, test_y = codes[test_idx], labels_vecs[test_idx]\n\nprint(\"Train shapes (x, y):\", train_x.shape, train_y.shape)\nprint(\"Validation shapes (x, y):\", val_x.shape, val_y.shape)\nprint(\"Test shapes (x, y):\", test_x.shape, test_y.shape)", "If you did it right, you should see these sizes for the training sets:\nTrain shapes (x, y): (2936, 4096) (2936, 5)\nValidation shapes (x, y): (367, 4096) (367, 5)\nTest shapes (x, y): (367, 4096) (367, 5)\nClassifier layers\nOnce you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.\n\nExercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. 
Use the cross entropy to calculate the cost.", "inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])\nlabels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])\n\nfc = tf.contrib.layers.fully_connected(inputs_, 256)\n \nlogits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)\ncost = tf.reduce_mean(cross_entropy)\n\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\npredicted = tf.nn.softmax(logits)\ncorrect_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Batches!\nHere is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.", "def get_batches(x, y, n_batches=10):\n \"\"\" Return a generator that yields batches from arrays x and y. \"\"\"\n batch_size = len(x)//n_batches\n \n for ii in range(0, n_batches*batch_size, batch_size):\n # If we're not on the last batch, grab data with size batch_size\n if ii != (n_batches-1)*batch_size:\n X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] \n # On the last batch, grab the rest of the data\n else:\n X, Y = x[ii:], y[ii:]\n # I love generators\n yield X, Y", "Training\nHere, we'll train the network.\n\nExercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. 
Of course, you'll be able to see my solution if you need help.", "epochs = 10\niteration = 0\nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n \n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for x, y in get_batches(train_x, train_y):\n feed = {inputs_: x,\n labels_: y}\n loss, _ = sess.run([cost, optimizer], feed_dict=feed)\n print(\"Epoch: {}/{}\".format(e+1, epochs),\n \"Iteration: {}\".format(iteration),\n \"Training loss: {:.5f}\".format(loss))\n iteration += 1\n \n if iteration % 5 == 0:\n feed = {inputs_: val_x,\n labels_: val_y}\n val_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Epoch: {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Validation Acc: {:.4f}\".format(val_acc))\n saver.save(sess, \"checkpoints/flowers.ckpt\")", "Testing\nBelow you see the test accuracy. You can also see the predictions returned for images.", "with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: test_x,\n labels_: test_y}\n test_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Test accuracy: {:.4f}\".format(test_acc))\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom scipy.ndimage import imread", "Below, feel free to choose images and see how the trained classifier predicts the flowers in them.", "test_img_path = 'flower_photos/20151210160455_XPaji.jpg'\ntest_img = imread(test_img_path)\nplt.imshow(test_img)\n\n# Run this cell if you don't have a vgg graph built\nwith tf.Session() as sess:\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n vgg = vgg16.Vgg16()\n vgg.build(input_)\n\nwith tf.Session() as sess:\n img = utils.load_image(test_img_path)\n img = img.reshape((1, 224, 224, 3))\n\n feed_dict = {input_: img}\n code = sess.run(vgg.relu6, feed_dict=feed_dict)\n \nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: code}\n prediction = 
sess.run(predicted, feed_dict=feed).squeeze()\n\nplt.imshow(test_img)\n\nplt.barh(np.arange(5), prediction)\n_ = plt.yticks(np.arange(5), lb.classes_)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
machow/siuba
examples/architecture/004-user-defined-functions.ipynb
mit
[ "Pandas fast mutate architecture\nProblem: users may need to define their own functions for SQL or pandas\nIn siuba, much of what users do involves expressions using _.\nDepending on the backend they're using, these expressions are then transformed and executed.\nHowever, sometimes no translation exists for a method.\nThis is not so different from pandas or SQLAlchemy, where a limited number of methods are available to users.\nFor example, in pandas...\n\nyou can do some_data.cumsum()\nyou can't do some_data.cumany()\n\nMoreover, you can use .cummean() on an ungrouped, but not a grouped DataFrame. And as a final cruel twist, some methods are fast when grouped, while others (e.g. expanding().sum()) use the slow apply route.\nWhat's the way out?\nIn pandas, it's not totally clear how you would define something like .cumany(), and let it run on grouped or ungrouped data, without submitting a PR to pandas itself.\n(maybe by registering an accessor, but this doesn't apply to grouped DataFrames.)\nThis is the tyranny of methods. The object defining the method owns the method. To add or modify a method, you need to modify the class behind the object.\nNow, this isn't totally true--the class could provide a way for you to register your method (like accessors). But wouldn't it be nice if the actions we wanted to perform on data didn't have to check in with the data class itself? Why does the data class get to decide what we do with it, and why does it get privileged methods?\nEnter singledispatch\nRather than registering functions onto your class (i.e.
methods), singledispatch lets you register classes with your functions.\nIn singledispatch, this works by having the class of your first argument decide which version of a function to call.", "from functools import singledispatch\n\n# by default dispatches on object, which everything inherits from\n@singledispatch\ndef cool_func(x):\n print(\"Default dispatch over:\", type(x))\n \n@cool_func.register(int)\ndef _cool_func_int(x):\n print(\"Special dispatch for an integer!\")\n \ncool_func('x')\ncool_func(1)", "This concept is incredibly powerful for two reasons...\n\nmany people can define actions over a DataFrame, without a quorum of privileged methods.\nyou can use normal importing, so you don't have to worry about name conflicts\n\nsingledispatch in siuba\nsiuba uses singledispatch in two places:\n\ndispatching verbs like mutate, whose actions depend on the backend they're operating on (e.g. SQL vs pandas)\ncreating symbolic calls\n\nIt's worth looking at symbolic calls in detail", "from siuba.siu import symbolic_dispatch, _\nimport pandas as pd\n\n@symbolic_dispatch(cls = pd.Series)\ndef add2(x):\n return x + 2\n\nadd2(pd.Series([1,2,3]))", "One special property of symbolic_dispatch is that if we pass it a symbol, then it returns a symbol.", "sym = add2(_.astype(int))\n\nsym\n\nsym(pd.Series(['1', '2']))", "Note that in this case these two bits of code work the same...\n```python\nser = pd.Series(['1', '2'])\nsym = add2(_.astype(int))\nsym(ser)\nfunc = lambda ser: add2(ser.astype(int))\nfunc(ser)\n```\nsiuba knows that if the function's first argument is a symbolic expression, then the function needs to return a symbolic expression.\nWhat should we singledispatch over?\nIn essence, siuba needs to allow dispatching over the forms of data it can operate on, including...\n\nregular Series\ngrouped Series\n(maybe) sqlalchemy column mappings\n\nAre there any risks?\nI'm glad you asked! 
There is one very big risk with singledispatch, and it's this:\nsingledispatch will dispatch on the \"closest\" matching parent class it has registered.\n\nThis means that if it has object registered, then at the very least, it will dispatch on that.\nThis is a big problem since e.g. sqlalchemy column mappings and everything else is an object.\nIn order to mitigate this risk, there are two compelling options...\n\nPut an upper bound on dispatching classes (related concept in type annotations)\nRequire an explicit annotation on return type\n\nThe downsides are that (1) requires a custom dispatch implementation, and (2) requires that people know about type annotations.\nThat said, I'm curious to explore option (2), as this has an appealing logic: an appropriate function will be a subtype of the one we typically use.\nRequiring an annotation over return type\nIn order to fully contextualize the process, consider the stage where something may need to be pulled from the dispatcher: call shaping via CallTreeLocal.", "from siuba.siu import CallTreeLocal, strip_symbolic\n\ndef as_string(x):\n return x.astype(str)\n\nctl = CallTreeLocal(local = {'as_string': as_string})\n\ncall = ctl.enter(strip_symbolic(_.as_string()))\n\n# Call object holding function as first argument\ncall.__dict__\n\n# proof it's just the function\ntype(call.args[0])", "Now this setup is all well and good--but how is a user going to put their function on CallTreeLocal?\nRegister it? Nah. What they need is a clear interface.\nWe're already \"bouncing\" symbolic dispatch functions when they get a symbolic expression. We can use this mechanic to make CallTreeLocal more \"democratic\".\nNotice that when we \"bounce\" add2, it reports the function as a \"custom_func\".", "@symbolic_dispatch(cls = pd.Series)\ndef add2(x):\n return x + 2\n\nadd2(_)", "This is because it's a special call, called a FuncArg (name subject to change). 
We can modify CallTreeLocal to perform custom behavior when it enters / exits __custom_func__.", "class SpecialClass: pass\n\n@add2.register(SpecialClass)\ndef _add2_special(x):\n print(\"Wooweee!\")\n\nclass CallTree2(CallTreeLocal):\n # note: self.dispatch_cls already used in init for this very purpose\n \n def enter___custom_func__(self, node):\n # the function itself is the first arg\n dispatcher = node.args[0]\n # hardcoding for now...\n return dispatcher.dispatch(self.dispatch_cls)\n\nctl2 = CallTree2({}, dispatch_cls = SpecialClass)\n\nfunc = ctl2.enter(strip_symbolic(add2(_)))\n\nfunc\n\ntype(func)", "However, there's one major problem--CallTree2 may still dispatch the default function!", "@symbolic_dispatch\ndef add3(x):\n print(\"Calling add3 default\")\n\ncall3 = ctl2.enter(strip_symbolic(add3(_)))\n\ncall3(1)", "THIS MEANS THAT EVERY SINGLEDISPATCH FUNCTION WILL AT LEAST USE ITS DEFAULT\nImagine that someone defined the default, but then it gets fired for SQL, and for pandas, etc etc..\nWhat a headache.\nKeeping only when there's a compatible return type\nWe can check the result annotation of the function we'd dispatch, to know whether it will work. In this case, we assume it won't work if the result is not a subclass of the one our SQL tools expect: ClauseElement.\nWe can shut down the process early if we know the function won't return what we need.\nThis is because a function is a subtype of another function if its input is contravariant (e.g. a parent), and its output is covariant (e.g.
a subclass).", "# used to get type info\nimport inspect\n\n# the most basic of SQL classes\nfrom sqlalchemy.sql.elements import ClauseElement\n\nRESULT_CLS = ClauseElement\n\nclass CallTree3(CallTreeLocal):\n # note: self.dispatch_cls already used in init for this very purpose\n \n def enter___custom_func__(self, node):\n # the function itself is the first arg\n dispatcher = node.args[0]\n # hardcoding for now...\n f = dispatcher.dispatch(self.dispatch_cls)\n sig = inspect.signature(f)\n ret_type = sig.return_annotation\n \n if issubclass(ret_type, RESULT_CLS):\n return f\n \n raise TypeError(\"Return type, %s, not subclass of %s\" %(ret_type, RESULT_CLS))\n\nfrom sqlalchemy import sql\nsel = sql.select([sql.column('id'), sql.column('x'), sql.column('y')])\n\n# this is what siuba sql expressions operate on\ncol_class = sel.columns.__class__\n\nclt3 = CallTree3({}, dispatch_cls = col_class)\n\n@symbolic_dispatch\ndef f_bad(x):\n return x + 1\n\n@symbolic_dispatch\ndef f_good(x: ClauseElement) -> ClauseElement:\n return x.contains('woah')\n\n\n\n# here is the error for the first, without that pesky stack trace\ntry:\n clt3.enter(strip_symbolic(f_bad(_)))\nexcept TypeError as err:\n print(err)\n\n# here is the good one going through\nclt3.enter(strip_symbolic(f_good(_)))", "How do I get this in my life today?\nWell, runtime evaluation of result types isn't the most fleshed out process in python. And there are some edge cases.\nFor example, what should we do if the return type is a Union? Any?\nThere is also a bug with the Union implementation before 3.7, where if it receives 3 classes, and 1 is the parent of the others, it just returns the parent...", "from typing import Union\n\nclass A: pass\n\nclass B(A): pass\n\nclass C(B): pass\n\nUnion[A,B,C]", "To be honest--I think we can be optimistic for now that anyone using a Union as their return type knows what they're doing with siuba. 
I think the main behaviors we want to support are...\n\nCan create singledispatch, with potentially a default function\nDon't shoot yourself in the foot when the default is fired for SQL and pandas\n\nAnd even a crude result type check will ensure that. In some ways the existence of a result type is almost all the proof we need.\nTo decide\n\nWhat should siuba do when dispatch function doesn't qualify? Fall back to local?\nRelated: should local only look up methods? (makes sense to me)\nIf so, how do we implement SQL dialects? Have ImmutableColumnCollection >= SqlColumns >= PostgresqlColumns, etc..\n\nCould siuba allow static type checking?\nI think so. It would take a bit of work. Mostly PRs to the typing package to...\n\nImplement higher-kinded types\nSupport static checking of singledispatch (or stubbing with @overload)\nWait for pandas type annotations, or stub, so we can check the pipe, which uses __rshift__ 😅" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pombredanne/gensim
docs/notebooks/topic_coherence_tutorial.ipynb
lgpl-2.1
[ "Demonstration of the topic coherence pipeline in Gensim\nIntroduction\nWe will be using the u_mass and c_v coherence for two different LDA models: a \"good\" and a \"bad\" LDA model. The good LDA model will be trained over 50 iterations and the bad one for 1 iteration. Hence in theory, the good LDA model will be able to come up with better or more human-understandable topics. Therefore the coherence measure output for the good LDA model should be more (better) than that for the bad LDA model. This is because, simply, the good LDA model usually comes up with better topics that are more human interpretable.", "import numpy as np\nimport logging\nimport pyLDAvis.gensim\nimport json\nimport warnings\nwarnings.filterwarnings('ignore') # To ignore all warnings that arise here to enhance clarity\n\nfrom gensim.models.coherencemodel import CoherenceModel\nfrom gensim.models.ldamodel import LdaModel\nfrom gensim.models.wrappers import LdaVowpalWabbit, LdaMallet\nfrom gensim.corpora.dictionary import Dictionary\nfrom numpy import array", "Set up logging", "logger = logging.getLogger()\nlogger.setLevel(logging.DEBUG)\nlogging.debug(\"test\")", "Set up corpus\nAs stated in table 2 from this paper, this corpus essentially has two classes of documents. The first five are about human-computer interaction and the other four are about graphs. We will be setting up two LDA models. One with 50 iterations of training and the other with just 1. Hence the one with 50 iterations (\"better\" model) should be able to capture this underlying pattern of the corpus better than the \"bad\" LDA model.
Therefore, in theory, our topic coherence for the good LDA model should be greater than the one for the bad LDA model.", "texts = [['human', 'interface', 'computer'],\n ['survey', 'user', 'computer', 'system', 'response', 'time'],\n ['eps', 'user', 'interface', 'system'],\n ['system', 'human', 'system', 'eps'],\n ['user', 'response', 'time'],\n ['trees'],\n ['graph', 'trees'],\n ['graph', 'minors', 'trees'],\n ['graph', 'minors', 'survey']]\n\ndictionary = Dictionary(texts)\ncorpus = [dictionary.doc2bow(text) for text in texts]", "Set up two topic models\nWe'll be setting up two different LDA Topic models. A good one and bad one. To build a \"good\" topic model, we'll simply train it using more iterations than the bad one. Therefore the u_mass coherence should in theory be better for the good model than the bad one since it would be producing more \"human-interpretable\" topics.", "goodLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=50, num_topics=2)\nbadLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=1, num_topics=2)", "Using U_Mass Coherence", "goodcm = CoherenceModel(model=goodLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')\n\nbadcm = CoherenceModel(model=badLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')", "View the pipeline parameters for one coherence model\nFollowing are the pipeline parameters for u_mass coherence. By pipeline parameters, we mean the functions being used to calculate segmentation, probability estimation, confirmation measure and aggregation as shown in figure 1 in this paper.", "print goodcm", "Interpreting the topics\nAs we will see below using LDA visualization, the better model comes up with two topics composed of the following words:\n1. 
goodLdaModel:\n - Topic 1: More weightage assigned to words such as \"system\", \"user\", \"eps\", \"interface\" etc which captures the first set of documents.\n - Topic 2: More weightage assigned to words such as \"graph\", \"trees\", \"survey\" which captures the topic in the second set of documents.\n2. badLdaModel:\n - Topic 1: More weightage assigned to words such as \"system\", \"user\", \"trees\", \"graph\" which doesn't make the topic clear enough.\n - Topic 2: More weightage assigned to words such as \"system\", \"trees\", \"graph\", \"user\" which is similar to the first topic. Hence both topics are not human-interpretable.\nTherefore, the topic coherence for the goodLdaModel should be greater for this than the badLdaModel since the topics it comes up with are more human-interpretable. We will see this using u_mass and c_v topic coherence measures.\nVisualize topic models", "pyLDAvis.enable_notebook()\n\npyLDAvis.gensim.prepare(goodLdaModel, corpus, dictionary)\n\npyLDAvis.gensim.prepare(badLdaModel, corpus, dictionary)\n\nprint goodcm.get_coherence()\n\nprint badcm.get_coherence()", "Using C_V coherence", "goodcm = CoherenceModel(model=goodLdaModel, texts=texts, dictionary=dictionary, coherence='c_v')\n\nbadcm = CoherenceModel(model=badLdaModel, texts=texts, dictionary=dictionary, coherence='c_v')", "Pipeline parameters for C_V coherence", "print goodcm", "Print coherence values", "print goodcm.get_coherence()\n\nprint badcm.get_coherence()", "Support for wrappers\nThis API supports gensim's ldavowpalwabbit and ldamallet wrappers as input parameter to model.", "model1 = LdaVowpalWabbit('/home/devashish/vw-8', corpus=corpus, num_topics=2, id2word=dictionary, passes=50)\nmodel2 = LdaVowpalWabbit('/home/devashish/vw-8', corpus=corpus, num_topics=2, id2word=dictionary, passes=1)\n\ncm1 = CoherenceModel(model=model1, corpus=corpus, coherence='u_mass')\ncm2 = CoherenceModel(model=model2, corpus=corpus, coherence='u_mass')\n\nprint cm1.get_coherence()\nprint 
cm2.get_coherence()\n\nmodel1 = LdaMallet('/home/devashish/mallet-2.0.8RC3/bin/mallet',corpus=corpus , num_topics=2, id2word=dictionary, iterations=50)\nmodel2 = LdaMallet('/home/devashish/mallet-2.0.8RC3/bin/mallet',corpus=corpus , num_topics=2, id2word=dictionary, iterations=1)\n\ncm1 = CoherenceModel(model=model1, texts=texts, coherence='c_v')\ncm2 = CoherenceModel(model=model2, texts=texts, coherence='c_v')\n\nprint cm1.get_coherence()\nprint cm2.get_coherence()", "Conclusion\nHence as we can see, the u_mass and c_v coherence for the good LDA model is much more (better) than that for the bad LDA model. This is because, simply, the good LDA model usually comes up with better topics that are more human interpretable. The badLdaModel however fails to distinguish between these two topics and comes up with topics which are not clear to a human. The u_mass and c_v topic coherences capture this wonderfully by giving the interpretability of these topics a number as we can see above. Hence this coherence measure can be used to compare different topic models based on their human-interpretability." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tzoiker/gensim
docs/notebooks/annoytutorial.ipynb
lgpl-2.1
[ "Similarity Queries using Annoy Tutorial\nThis tutorial is about using the Annoy(Approximate Nearest Neighbors Oh Yeah) library for similarity queries in gensim\nWhy use Annoy?\nThe current implementation for finding k nearest neighbors in a vector space in gensim has linear complexity via brute force in the number of indexed documents, although with extremely low constant factors. The retrieved results are exact, which is an overkill in many applications: approximate results retrieved in sub-linear time may be enough. Annoy can find approximate nearest neighbors much faster.\nFor the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim)\nSee the Word2Vec tutorial for how to initialize and save this model.", "# Load the model\nimport gensim, os\nfrom gensim.models.word2vec import Word2Vec\n\n# Set file names for train and test data\ntest_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data']) + os.sep\nlee_train_file = test_data_dir + 'lee_background.cor'\n\nclass MyText(object):\n def __iter__(self):\n for line in open(lee_train_file):\n # assume there's one document per line, tokens separated by whitespace\n yield line.lower().split()\n\nsentences = MyText()\n \nmodel = Word2Vec(sentences, min_count=1)\n\nprint(model)", "Comparing the traditional implementation and the Annoy\nN.B. Running the timing cells below more than once gives subsequent timings close to zero, as cached objects are used. 
To get accurate timings, always run these cells from a freshly started kernel.", "#Set up the model and vector that we are using in the comparison\ntry:\n from gensim.similarities.index import AnnoyIndexer\nexcept ImportError:\n raise ValueError(\"SKIP: Please install the annoy indexer\")\n\nmodel.init_sims()\nvector = model.syn0norm[0]\nannoy_index = AnnoyIndexer(model, 500)\n\n%%time\n#Traditional implementation:\nmodel.most_similar([vector], topn=5)\n\n%%time\n#Annoy implementation:\nneighbors = model.most_similar([vector], topn=5, indexer=annoy_index)\nfor neighbor in neighbors:\n print(neighbor)", "A similarity query using Annoy is significantly faster than using the traditional brute force method\n\nNote: Initialization time for the annoy indexer was not included in the times. The optimal knn algorithm for you to use will depend on how many queries you need to make and the size of the corpus. If you are making very few similarity queries, the time taken to initialize the annoy indexer will be longer than the time it would take the brute force method to retrieve results. If you are making many queries however, the time it takes to initialize the annoy indexer will be made up for by the incredibly fast retrieval times for queries once the indexer has been initialized\n\nWhat is Annoy?\nAnnoy is an open source library to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data. For our purpose, it is used to find similarity between words or documents in a vector space. 
See the tutorial on similarity queries for more information on them.\nGetting Started\nFirst thing to do is to install annoy, by running the following in the command line:\nsudo pip install annoy\nAnd then set up the logger:", "# import modules & set up logging\nimport logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)", "Making a Similarity Query\nCreating an indexer\nAn instance of AnnoyIndexer needs to be created in order to use Annoy in gensim. The AnnoyIndexer class is located in gensim.similarities.index\nAnnoyIndexer() takes two parameters:\nmodel: A Word2Vec or Doc2Vec model\nnum_trees: A positive integer. num_trees effects the build time and the index size. A larger value will give more accurate results, but larger indexes. More information on what trees in Annoy do can be found here. The relationship between num_trees, build time, and accuracy will be investigated later in the tutorial.", "from gensim.similarities.index import AnnoyIndexer\n# 100 trees are being used in this example\nannoy_index = AnnoyIndexer(model,100)", "Now that we are ready to make a query, lets find the top 5 most similar words to \"army\" in the lee corpus. To make a similarity query we call Word2Vec.most_similar like we would traditionally, but with an added parameter, indexer. 
The only supported indexer in gensim as of now is Annoy.", "# Derive the vector for the word \"army\" in our model\nvector = model[\"science\"]\n# The instance of AnnoyIndexer we just created is passed \napproximate_neighbors = model.most_similar([vector], topn=5, indexer=annoy_index)\n# Neatly print the approximate_neighbors and their corresponding cosine similarity values\nfor neighbor in approximate_neighbors:\n print(neighbor)", "Analyzing the results\nThe closer the cosine similarity of a vector is to 1, the more similar that word is to our query, which was the vector for \"army\".\nPersisting Indexes\nYou can save and load your indexes from/to disk to prevent having to construct them each time. This will create two files on disk, fname and fname.d. Both files are needed to correctly restore all attributes. Before loading an index, you will have to create an empty AnnoyIndexer object.", "fname = 'index'\n\n# Persist index to disk\nannoy_index.save(fname)\n\n# Load index back\nif os.path.exists(fname):\n annoy_index2 = AnnoyIndexer()\n annoy_index2.load(fname)\n annoy_index2.model = model\n\n# Results should be identical to above\nvector = model[\"science\"]\napproximate_neighbors = model.most_similar([vector], topn=5, indexer=annoy_index2)\nfor neighbor in approximate_neighbors:\n print neighbor", "Be sure to use the same model at load that was used originally, otherwise you will get unexpected behaviors.\nSave memory by memory-mapping indices saved to disk\nAnnoy library has a useful feature that indices can be memory-mapped from disk. It saves memory when the same index is used by several processes.\nBelow are two snippets of code. First one has a separate index for each process. The second snipped shares the index between two processes via memory-mapping. The second example uses less total RAM as it is shared.", "%%time\n\n# Bad example. Two processes load the Word2vec model from disk and create there own Annoy indices from that model. 
\n\nfrom gensim import models\nfrom gensim.similarities.index import AnnoyIndexer\nfrom multiprocessing import Process\nimport os\nimport psutil\n\nmodel.save('/tmp/mymodel')\n\ndef f(process_id):\n print 'Process Id: ', os.getpid()\n process = psutil.Process(os.getpid())\n new_model = models.Word2Vec.load('/tmp/mymodel')\n vector = new_model[\"science\"]\n annoy_index = AnnoyIndexer(new_model,100)\n approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=annoy_index)\n for neighbor in approximate_neighbors:\n print neighbor\n print 'Memory used by process '+str(os.getpid())+'=', process.memory_info()\n\n# Creating and running two parallel process to share the same index file.\np1 = Process(target=f, args=('1',))\np1.start()\np1.join()\np2 = Process(target=f, args=('2',))\np2.start()\np2.join()\n\n%%time\n\n# Good example. Two processes load both the Word2vec model and index from disk and memory-map the index\n\nfrom gensim import models\nfrom gensim.similarities.index import AnnoyIndexer\nfrom multiprocessing import Process\nimport os\nimport psutil\n\nmodel.save('/tmp/mymodel')\n\ndef f(process_id):\n print 'Process Id: ', os.getpid()\n process = psutil.Process(os.getpid())\n new_model = models.Word2Vec.load('/tmp/mymodel')\n vector = new_model[\"science\"]\n annoy_index = AnnoyIndexer()\n annoy_index.load('index')\n annoy_index.model = new_model\n approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=annoy_index)\n for neighbor in approximate_neighbors:\n print neighbor\n print 'Memory used by process '+str(os.getpid()), process.memory_info()\n\n# Creating and running two parallel process to share the same index file.\np1 = Process(target=f, args=('1',))\np1.start()\np1.join()\np2 = Process(target=f, args=('2',))\np2.start()\np2.join()", "Relationship between num_trees and initialization time", "%matplotlib inline\nimport matplotlib.pyplot as plt, time\nx_cor = []\ny_cor = []\nfor x in range(100):\n start_time = time.time()\n 
AnnoyIndexer(model, x)\n y_cor.append(time.time()-start_time)\n x_cor.append(x)\n\nplt.plot(x_cor, y_cor)\nplt.title(\"num_trees vs initalization time\")\nplt.ylabel(\"Initialization time (s)\")\nplt.xlabel(\"num_tress\")\nplt.show()", "Initialization time of the annoy indexer increases in a linear fashion with num_trees. Initialization time will vary from corpus to corpus, in the graph above the lee corpus was used\nRelationship between num_trees and accuracy", "exact_results = [element[0] for element in model.most_similar([model.syn0norm[0]], topn=100)]\nx_axis = []\ny_axis = []\nfor x in range(1,30):\n annoy_index = AnnoyIndexer(model, x)\n approximate_results = model.most_similar([model.syn0norm[0]],topn=100, indexer=annoy_index)\n top_words = [result[0] for result in approximate_results]\n x_axis.append(x)\n y_axis.append(len(set(top_words).intersection(exact_results)))\n \nplt.plot(x_axis, y_axis)\nplt.title(\"num_trees vs accuracy\")\nplt.ylabel(\"% accuracy\")\nplt.xlabel(\"num_trees\")\nplt.show()", "This was again done with the lee corpus, a relatively small corpus. Results will vary from corpus to corpus" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mrustl/flopy
examples/Notebooks/flopy3_PEST.ipynb
bsd-3-clause
[ "FloPy\nParameter Estimation with FloPy\nThis notebook demonstrates the current parameter estimation functionality that is available with FloPy. The capability to write a simple template file for PEST is the only capability implemented so far. The plan is to develop functionality for creating PEST instruction files as well as the PEST control file.", "%matplotlib inline\nimport numpy as np\nimport flopy\nimport flopy.pest.templatewriter as tplwriter\nimport flopy.pest.params as params", "This notebook will work with a simple model using the dimensions below", "# Define the model dimensions\nnlay = 3\nnrow = 20\nncol = 20\n\n# Create the flopy model object and add the dis and lpf packages\nm = flopy.modflow.Modflow(modelname='mymodel', model_ws='./data')\ndis = flopy.modflow.ModflowDis(m, nlay, nrow, ncol)\nlpf = flopy.modflow.ModflowLpf(m, hk=10.)", "Simple One Parameter Example\nIn order to create a PEST template file, we first need to define a parameter. For example, let's say we want to parameterize hydraulic conductivity, which is a static variable in flopy and MODFLOW. As a first step, let's define a parameter called HK_LAYER_1 and assign it to all of layer 1. We will not parameterize hydraulic conductivity for layers 2 and 3 and instead leave HK at its value of 10. (as assigned in the block above this one). We can do this as follows.", "mfpackage = 'lpf'\npartype = 'hk'\nparname = 'HK_LAYER_1'\nidx = np.empty((nlay, nrow, ncol), dtype=np.bool)\nidx[0] = True\nidx[1:] = False\n\n# The span variable defines how the parameter spans the package\nspan = {'idx': idx}\n\n# These parameters have not affect yet, but may in the future\nstartvalue = 10.\nlbound = 0.001\nubound = 1000.\ntransform='log'\n\np = params.Params(mfpackage, partype, parname, startvalue, \n lbound, ubound, span)", "At this point, we have enough information to the write a PEST template file for the LPF package. 
We can do this using the following statement:", "tw = tplwriter.TemplateWriter(m, [p])\ntw.write_template()", "At this point, the lpf template file will have been created. The following block will print the template file.", "lines = open('./data/mymodel.lpf.tpl', 'r').readlines()\nfor l in lines:\n print(l.strip())", "The span variable will also accept 'layers', in which the parameter applies to the list of layers, as shown next. When 'layers' is specifed in the span dictionary, then the original hk value of 10. remains in the array, and the multiplier is specified on the array control line.", "mfpackage = 'lpf'\npartype = 'hk'\nparname = 'HK_LAYER_1-3'\n\n# Span indicates that the hk parameter applies as a multiplier to layers 0 and 2 (MODFLOW layers 1 and 3)\nspan = {'layers': [0, 2]}\n\n# These parameters have not affect yet, but may in the future\nstartvalue = 10.\nlbound = 0.001\nubound = 1000.\ntransform='log'\n\np = params.Params(mfpackage, partype, parname, startvalue, \n lbound, ubound, span)\ntw = tplwriter.TemplateWriter(m, [p])\ntw.write_template()\n\nlines = open('./data/mymodel.lpf.tpl', 'r').readlines()\nfor l in lines:\n print(l.strip())", "Multiple Parameter Zoned Approach\nThe params module has a helper function called zonearray2params that will take a zone array and some other information and create a list of parameters, which can then be passed to the template writer. 
This next example shows how to create a slightly more complicated LPF template file in which both HK and VKA are parameterized.", "# Create a zone array\nzonearray = np.ones((nlay, nrow, ncol), dtype=int)\nzonearray[0, 10:, 7:] = 2\nzonearray[0, 15:, 9:] = 3\nzonearray[1] = 4\n\n# Create a list of parameters for HK\nmfpackage = 'lpf'\nparzones = [2, 3, 4]\nparvals = [56.777, 78.999, 99.]\nlbound = 5\nubound = 500\ntransform = 'log'\nplisthk = params.zonearray2params(mfpackage, 'hk', parzones, lbound, \n ubound, parvals, transform, zonearray)", "In this case, Flopy will create three parameters: hk_2, hk_3, and hk_4, which will apply to the horizontal hydraulic conductivity for cells in zones 2, 3, and 4, respectively. Only those zone numbers listed in parzones will be parameterized. For example, many cells in zonearray have a value of 1. Those cells will not be parameterized. Instead, their hydraulic conductivity values will remain fixed at the value that was specified when the Flopy LPF package was created.", "# Create a list of parameters for VKA\nparzones = [1, 2]\nparvals = [0.001, 0.0005]\nzonearray = np.ones((nlay, nrow, ncol), dtype=int)\nzonearray[1] = 2\nplistvk = params.zonearray2params(mfpackage, 'vka', parzones, lbound, \n ubound, parvals, transform, zonearray)\n\n# Combine the HK and VKA parameters together\nplist = plisthk + plistvk\nfor p in plist:\n print(p.name, p.mfpackage, p.startvalue)\n\n# Write the template file\ntw = tplwriter.TemplateWriter(m, plist)\ntw.write_template()\n\n# Print contents of template file\nlines = open('./data/mymodel.lpf.tpl', 'r').readlines()\nfor l in lines:\n print(l.strip())", "Two-Dimensional Transient Arrays\nFlopy supports parameterization of transient two dimensional arrays, like recharge. This is similar to the approach for three dimensional static arrays, but there are some important differences in how span is specified. 
The parameter span here is also a dictionary, and it must contain a 'kper' key, which corresponds to a list of stress periods (zero based, of course) for which the parameter applies. The span dictionary must also contain an 'idx' key. If span['idx'] is None, then the parameter is a multiplier for those stress periods. If span['idx'] is a tuple (iarray, jarray), where iarray and jarray are a list of array indices, or a boolean array of shape (nrow, ncol), then the parameter applies only to the cells specified in idx.", "# Define the model dimensions (made smaller for easier viewing)\nnlay = 3\nnrow = 5\nncol = 5\nnper = 3\n\n# Create the flopy model object and add the dis and lpf packages\nm = flopy.modflow.Modflow(modelname='mymodel', model_ws='./data')\ndis = flopy.modflow.ModflowDis(m, nlay, nrow, ncol, nper=nper)\nlpf = flopy.modflow.ModflowLpf(m, hk=10.)\nrch = flopy.modflow.ModflowRch(m, rech={0: 0.001, 2: 0.003})", "Next, we create the parameters", "plist = []\n\n# Create a multiplier parameter for recharge\nmfpackage = 'rch'\npartype = 'rech'\nparname = 'RECH_MULT'\nstartvalue = None\nlbound = None\nubound = None\ntransform = None\n\n# For a recharge multiplier, span['idx'] must be None\nidx = None\nspan = {'kpers': [0, 1, 2], 'idx': idx}\np = params.Params(mfpackage, partype, parname, startvalue,\n lbound, ubound, span)\nplist.append(p)\n\n# Write the template file\ntw = tplwriter.TemplateWriter(m, plist)\ntw.write_template()\n\n# Print the results\nlines = open('./data/mymodel.rch.tpl', 'r').readlines()\nfor l in lines:\n print(l.strip())", "Multiplier parameters can also be combined with index parameters as follows.", "plist = []\n\n# Create a multiplier parameter for recharge\nmfpackage = 'rch'\npartype = 'rech'\nparname = 'RECH_MULT'\nstartvalue = None\nlbound = None\nubound = None\ntransform = None\n\n# For a recharge multiplier, span['idx'] must be None\nspan = {'kpers': [1, 2], 'idx': None}\np = params.Params(mfpackage, partype, parname, 
startvalue,\n lbound, ubound, span)\nplist.append(p)\n\n# Now create an index parameter\nmfpackage = 'rch'\npartype = 'rech'\nparname = 'RECH_ZONE'\nstartvalue = None\nlbound = None\nubound = None\ntransform = None\n\n# For a recharge index parameter, span['idx'] must be a boolean array or tuple of array indices\nidx = np.empty((nrow, ncol), dtype=np.bool)\nidx[0:3, 0:3] = True\nspan = {'kpers': [1], 'idx': idx}\np = params.Params(mfpackage, partype, parname, startvalue,\n lbound, ubound, span)\nplist.append(p)\n\n# Write the template file\ntw = tplwriter.TemplateWriter(m, plist)\ntw.write_template()\n\n# Print the results\nlines = open('./data/mymodel.rch.tpl', 'r').readlines()\nfor l in lines:\n print(l.strip())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
emiliom/stuff
odm2api_sample_fromsqlite.ipynb
cc0-1.0
[ "odm2api demo with Little Bear SQLite sample DB\nLargely from https://github.com/ODM2/ODM2PythonAPI/blob/master/Examples/Sample.py \n- 4/25/2016. Started testing with the new odm2 conda channel, based on the new 0.5.0-alpha odm2api release. See my odm2api_odm2channel env. Ran into problems b/c the SQLite database needed to be updated to have a SamplingFeature.FeatureGeometryWKT field; so I added and populated it manually with SQLite Manager.\n- 2/7/2016. Tested successfully with sfgeometry_em_1 branch, with my overhauls. Using odm2api_dev env.\n- 2/1 - 1/31. Errors with SamplingFeatures code, with latest odm2api from master (on env odm2api_jan31test). The code also fails the same way with the odm2api env, but it does still run fine with the odm2api_jan21 env! I'm investigating the differences between those two envs.\n- 1/22-20,9/2016.\nEmilio Mayorga", "%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom matplotlib import dates\n\nfrom odm2api.ODMconnection import dbconnection\nfrom odm2api.ODM2.services.readService import ReadODM2\n\n# Create a connection to the ODM2 database\n# ----------------------------------------\nodm2db_fpth = '/home/mayorga/Desktop/TylerYeats/ODM2-LittleBear1.sqlite'\nsession_factory = dbconnection.createConnection('sqlite', odm2db_fpth, 2.0)\nread = ReadODM2(session_factory)\n\n# Run some basic sample queries.\n# ------------------------------\n# Get all of the variables from the database and print their names to the console\nallVars = read.getVariables()\n\nfor x in allVars:\n print x.VariableCode + \": \" + x.VariableNameCV\n\n# Get all of the people from the database\nallPeople = read.getPeople()\n\nfor x in allPeople:\n print x.PersonFirstName + \" \" + x.PersonLastName\n\ntry:\n print \"\\n-------- Information about an Affiliation ---------\"\n allaff = read.getAffiliations()\n for x in allaff:\n print x.PersonObj.PersonFirstName + \": \" + str(x.OrganizationID)\nexcept Exception as e:\n print \"Unable to demo 
getAllAffiliations\", e\n\nallaff = read.getAffiliations()\ntype(allaff)", "SamplingFeatures tests", "# from odm2api.ODM2.models import SamplingFeatures\n# read._session.query(SamplingFeatures).filter_by(SamplingFeatureTypeCV='Site').all()\n\n# Get all of the SamplingFeatures from the database that are Sites\ntry:\n siteFeatures = read.getSamplingFeatures(type='Site')\n numSites = len(siteFeatures)\n\n for x in siteFeatures:\n print x.SamplingFeatureCode + \": \" + x.SamplingFeatureName\nexcept Exception as e:\n print \"Unable to demo getSamplingFeatures(type='Site')\", e\n\nread.getSamplingFeatures()\n\nread.getSamplingFeatures(codes=['USU-LBR-Mendon'])\n\n# Now get the SamplingFeature object for a SamplingFeature code\nsf_lst = read.getSamplingFeatures(codes=['USU-LBR-Mendon'])\nvars(sf_lst[0])\n\nsf = sf_lst[0]\n\nprint sf, \"\\n\"\nprint type(sf)\nprint type(sf.FeatureGeometryWKT), sf.FeatureGeometryWKT\nprint type(sf.FeatureGeometry)\n\nvars(sf.FeatureGeometry)\n\nsf.FeatureGeometry.__doc__\n\nsf.FeatureGeometry.geom_wkb, sf.FeatureGeometry.geom_wkt\n\n# 4/25/2016: Don't know why the shape is listed 4 times ...\ntype(sf.shape()), sf.shape().wkt", "Back to the rest of the demo", "read.getResults()\n\nfirstResult = read.getResults()[0]\nfirstResult.FeatureActionObj.ActionObj", "Foreign Key Example\nDrill down and get objects linked by foreign keys", "try:\n # Call getResults, but return only the first result\n firstResult = read.getResults()[0]\n action_firstResult = firstResult.FeatureActionObj.ActionObj\n print \"The FeatureAction object for the Result is: \", firstResult.FeatureActionObj\n print \"The Action object for the Result is: \", action_firstResult\n print (\"\\nThe following are some of the attributes for the Action that created the Result: \\n\" +\n \"ActionTypeCV: \" + action_firstResult.ActionTypeCV + \"\\n\" +\n \"ActionDescription: \" + action_firstResult.ActionDescription + \"\\n\" +\n \"BeginDateTime: \" + str(action_firstResult.BeginDateTime) 
+ \"\\n\" +\n \"EndDateTime: \" + str(action_firstResult.EndDateTime) + \"\\n\" +\n \"MethodName: \" + action_firstResult.MethodObj.MethodName + \"\\n\" +\n \"MethodDescription: \" + action_firstResult.MethodObj.MethodDescription)\nexcept Exception as e:\n print \"Unable to demo Foreign Key Example: \", e", "Example of Retrieving Attributes of a Time Series Result using a ResultID", "tsResult = read.getResults(ids=[1])[0]\ntype(tsResult), vars(tsResult)", "Why are ProcessingLevelObj, VariableObj and UnitsObj objects not shown in the above vars() listing!? They are actually available, as demonstrated in much of the code below.", "try:\n tsResult = read.getResults(ids=[1])[0]\n # Get the site information by drilling down\n sf_tsResult = tsResult.FeatureActionObj.SamplingFeatureObj\n print(\n \"Some of the attributes for the TimeSeriesResult retrieved using getResults(ids=[]): \\n\" +\n \"ResultTypeCV: \" + tsResult.ResultTypeCV + \"\\n\" +\n # Get the ProcessingLevel from the TimeSeriesResult's ProcessingLevel object\n \"ProcessingLevel: \" + tsResult.ProcessingLevelObj.Definition + \"\\n\" +\n \"SampledMedium: \" + tsResult.SampledMediumCV + \"\\n\" +\n # Get the variable information from the TimeSeriesResult's Variable object\n \"Variable: \" + tsResult.VariableObj.VariableCode + \": \" + tsResult.VariableObj.VariableNameCV + \"\\n\" +\n \"AggregationStatistic: \" + tsResult.AggregationStatisticCV + \"\\n\" +\n # Get the site information by drilling down\n \"Elevation_m: \" + str(sf_tsResult.Elevation_m) + \"\\n\" +\n \"SamplingFeature: \" + sf_tsResult.SamplingFeatureCode + \" - \" +\n sf_tsResult.SamplingFeatureName)\nexcept Exception as e:\n print \"Unable to demo Example of retrieving Attributes of a time Series Result: \", e", "Example of Retrieving Time Series Result Values, then plotting them", "# Get the values for a particular TimeSeriesResult\n\ntsValues = read.getResultValues(resultid=1) # Return type is a pandas dataframe\n# Print a few Time Series 
Values to the console\n# tsValues.set_index('ValueDateTime', inplace=True)\ntsValues.head()\n\n# Plot the time series\ntry:\n fig = plt.figure()\n ax = fig.add_subplot(111)\n tsValues.plot(x='ValueDateTime', y='DataValue', kind='line',\n title=tsResult.VariableObj.VariableNameCV + \" at \" + \n tsResult.FeatureActionObj.SamplingFeatureObj.SamplingFeatureName,\n ax=ax)\n \n ax.set_ylabel(tsResult.VariableObj.VariableNameCV + \" (\" + \n tsResult.UnitsObj.UnitsAbbreviation + \")\")\n ax.set_xlabel(\"Date/Time\")\n ax.xaxis.set_minor_locator(dates.MonthLocator())\n ax.xaxis.set_minor_formatter(dates.DateFormatter('%b'))\n ax.xaxis.set_major_locator(dates.YearLocator())\n ax.xaxis.set_major_formatter(dates.DateFormatter('\\n%Y'))\n ax.grid(True)\nexcept Exception as e:\n print \"Unable to demo plotting of tsValues: \", e" ]
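The `getResultValues` call above returns a pandas DataFrame with `ValueDateTime` and `DataValue` columns. A generic sketch of working with such a frame (synthetic stand-in data and assumed column names, not the actual ODM2 sample database):

```python
import pandas as pd

# Synthetic stand-in for the DataFrame returned by getResultValues (assumed columns).
tsValues = pd.DataFrame({
    'ValueDateTime': pd.date_range('2010-01-01', periods=5, freq='D'),
    'DataValue': [1.0, 1.5, 1.2, 2.0, 1.8],
})

# Index by time, as the commented-out set_index call above suggests.
tsValues = tsValues.set_index('ValueDateTime')
mean_value = tsValues['DataValue'].mean()
print(mean_value)
```

With the time index in place, date-based slicing such as `tsValues['2010-01-02':'2010-01-04']` works directly, which simplifies the plotting step shown above.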
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gfeiden/Notebook
Projects/mlt_calib/kde_mle_median.ipynb
mit
[ "Comparing Results from Kernel Density Estimates of Parameters", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np", "Load data for three different approaches to estimating stellar parameters from MCMC simulations.", "kde = np.genfromtxt('data/run08_kde_props.txt')\nmle = np.genfromtxt('data/run08_mle_props.txt')\nmed = np.genfromtxt('data/run08_median_props.txt')", "Compare KDE and MLE methods. MLE is a kernel density estimate with a constant bandwidth and should provide more detail about the underlying posterior distribution, whereas the KDE is a kernel density estimate with a \"rule-of-thumb\" determination of the bandwidth (Silverman's rule). They provide a modal estimate for the distributions.", "fig, ax = plt.subplots(3, 3, figsize=(12., 12.))\n\nfor i in range(9):\n row = i/3\n col = i%3\n axis = ax[row, col]\n \n # set axis labels and ranges\n axis.set_xlabel('KDE', fontsize=14.)\n axis.set_ylabel('MLE', fontsize=14.)\n axis.grid(True)\n axis.tick_params(which='major', axis='both', labelsize=14., length=12.)\n \n # 1-to-1 correlation\n axis.plot([min(kde[:, i]), max(kde[:, i])], [min(kde[:, i]), max(kde[:, i])], lw=2, dashes=(20., 5.), c='#b22222')\n axis.plot(kde[:, i], mle[:, i], 'o', markersize=7., c='#555555', alpha=0.7)\n\nfig.tight_layout()", "Analysis.\nNow, comparing KDE to estimate of distribution median.", "fig, ax = plt.subplots(3, 3, figsize=(12., 12.))\n\nfor i in range(9):\n row = i/3\n col = i%3\n axis = ax[row, col]\n \n # set axis labels and ranges\n axis.set_xlabel('KDE', fontsize=14.)\n axis.set_ylabel('Median', fontsize=14.)\n axis.grid(True)\n axis.tick_params(which='major', axis='both', labelsize=14., length=12.)\n \n # 1-to-1 correlation\n axis.plot([min(kde[:, i]), max(kde[:, i])], [min(kde[:, i]), max(kde[:, i])], lw=2, dashes=(20., 5.), c='#b22222')\n axis.plot(kde[:, i], med[:, i], 'o', markersize=7., c='#555555', alpha=0.7)\nfig.tight_layout()", "Analysis.\nHow does this choice affect the inferred 
relationship between Teff, [M/H], log(g) and the mixing length parameter?", "fig, ax = plt.subplots(2, 3, figsize=(12., 8.))\n\n# KDE\nax[0, 0].plot(10**kde[:, 6], kde[:, 5], 'o', markersize=7., c='#555555', alpha=0.7)\nax[0, 1].plot(kde[:, 1], kde[:, 5], 'o', markersize=7., c='#555555', alpha=0.7)\nax[0, 2].plot(kde[:, 0], kde[:, 5], 'o', markersize=7., c='#555555', alpha=0.7)\n\n# Median\nax[1, 0].plot(10**med[:, 6], med[:, 5], 'o', markersize=7., c='#555555', alpha=0.7)\nax[1, 1].plot(med[:, 1], med[:, 5], 'o', markersize=7., c='#555555', alpha=0.7)\nax[1, 2].plot(med[:, 0], med[:, 5], 'o', markersize=7., c='#555555', alpha=0.7)\n\nfig.tight_layout()", "To permit a better comparison with 3D RHD models, we should isolate stars that have a roughly solar metallicity. Since 3D RHD simulations are only performed at solar metallicity, it may bias the comparison given that we have a significantly larger spread in metallicity.", "solar_kde = np.array([star for star in kde if -0.1 < star[1] < 0.1])\n\nfig, ax = plt.subplots(1, 3, figsize=(12., 4.))\n\n# KDE\nax[0].plot(10**solar_kde[:, 6], solar_kde[:, 5], 'o', markersize=7., c='#555555', alpha=0.7)\nax[1].plot(solar_kde[:, 1], solar_kde[:, 5], 'o', markersize=7., c='#555555', alpha=0.7)\nax[2].plot(solar_kde[:, 0], solar_kde[:, 5], 'o', markersize=7., c='#555555', alpha=0.7)\n\nfig.tight_layout()", "Now try only a limited range of effective temperatures to mitigate effects related to the temperature sensitivity of various opacity sources.", "warm_kde = np.array([star for star in kde if 5000. <= 10**star[6] <= 6000. 
and star[1] > -0.35])\n\nfig, ax = plt.subplots(1, 3, figsize=(12., 4.))\n\n# KDE\nax[0].plot(10**warm_kde[:, 6], warm_kde[:, 5], 'o', markersize=7., c='#555555', alpha=0.7)\nax[1].plot(warm_kde[:, 1], warm_kde[:, 5], 'o', markersize=7., c='#555555', alpha=0.7)\nax[2].plot(warm_kde[:, 0], warm_kde[:, 5], 'o', markersize=7., c='#555555', alpha=0.7)\n\nfig.tight_layout()", "This subset is also more comparable to results from Bonaca et al. (2012), who used asteroseismic data to constrain how the convective mixing length parameter changes with stellar properties.", "import scipy.stats as stat", "Calculate correlation coefficients from Spearman r test.", "stat.spearmanr(10**warm_kde[:, 6], warm_kde[:, 5]), stat.spearmanr(warm_kde[:, 1], warm_kde[:, 5]), \\\n stat.spearmanr(warm_kde[:, 0], warm_kde[:, 5])", "We find similar trends of $\\alpha$ with stellar parameters as Bonaca et al. However, it should be noted that we find an anti-correlation of $\\alpha$ with metallicity, whereas they find a positive correlation coefficient." ]
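The rule-of-thumb KDE modal estimate discussed in this notebook can be sketched in plain NumPy. This is an illustration on synthetic samples (an assumption — the actual run08 chains are not reproduced here), using Silverman's bandwidth formula:

```python
import numpy as np

rng = np.random.RandomState(0)
samples = rng.normal(loc=1.7, scale=0.2, size=5000)  # stand-in for an MCMC chain

# Silverman's rule-of-thumb bandwidth.
n = samples.size
iqr = np.subtract(*np.percentile(samples, [75, 25]))
h = 0.9 * min(samples.std(ddof=1), iqr / 1.34) * n ** (-0.2)

# Gaussian KDE evaluated on a grid; the modal estimate is the argmax of the density.
grid = np.linspace(samples.min(), samples.max(), 1000)
density = np.exp(-0.5 * ((grid[:, None] - samples[None, :]) / h) ** 2).sum(axis=1)
mode = grid[np.argmax(density)]

print(mode, np.median(samples))
```

For a unimodal, roughly symmetric posterior the mode and median agree closely; the scatter plots above probe how far that agreement holds for the real chains.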
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
asurunis/CrisisMappingToolkit
ipython/CrisisMappingToolkitOverview.ipynb
apache-2.0
[ "Crisis Mapping Toolkit Documentation\nThis document provides a high level overview of how to use the Crisis Mapping Toolkit (CMT). The CMT is a set of tools built using Google's Earth Engine Python API so familiarity with that API will be extremely useful when working with the CMT.\nInstalling Earth Engine\nSee instructions from Google here: https://docs.google.com/document/d/1tvkSGb-49YlSqW3AGknr7T_xoRB1KngCD3f2uiwOS3Q/edit\n\"Hello Crisis Mapping Toolkit\"\nInitialize Earth Engine", "import sys\nimport os\nimport ee\n\n# This script assumes your authentication credentials are stored as operating system\n# environment variables.\n__MY_SERVICE_ACCOUNT = os.environ.get('MY_SERVICE_ACCOUNT')\n__MY_PRIVATE_KEY_FILE = os.environ.get('MY_PRIVATE_KEY_FILE')\n\n# Initialize the Earth Engine object, using your authentication credentials.\nee.Initialize()", "Load the Crisis Mapping Toolkit", "# Make sure that Python can find the CMT source files\nCMT_INSTALL_FOLDER = '/home/smcmich1/repo/earthEngine/CrisisMappingToolkit/'\nsys.path.append(CMT_INSTALL_FOLDER)\nimport cmt.util.evaluation\nfrom cmt.mapclient_qt import centerMap, addToMap", "Load a domain\nA domain is a geographic location associated with certain sensor images, global data sets, and other supporting files. A domain is described by a custom XML file and can easily be loaded in Python. Once the XML file is loaded all of the associated data can be easily accessed. Note that none of the images are stored locally; instead they have been uploaded to web storage locations where Earth Engine can access them.", "import cmt.domain\ndomainPath = os.path.join(CMT_INSTALL_FOLDER, 'config/domains/modis/kashmore_2010_8.xml')\nkashmore_domain = cmt.domain.Domain(domainPath)", "Display the domain", "import cmt.util.gui_util\ncmt.util.gui_util.visualizeDomain(kashmore_domain)", "A GUI should appear in a separate window displaying the domain location. 
If the GUI does not appear, try restarting the IPython kernel and trying again. This is the default GUI used by the CMT. It is an enhanced version of the GUI provided with the Earth Engine Python API and behaves similarly to the Earth Engine online \"playground\" interface.\nBasic GUI instructions:\n\nYou can move the view location by clicking and dragging.\nYou can zoom in and out using the mouse wheel.\nRight clicking the view brings up a context menu with the following:\nThe lat/lon coordinate where you clicked.\nThe list of currently loaded image layers.\nAn opacity slider for each image layer.\nThe value for each image layer at the location you clicked.\nA button which will save the current view as a geotiff file.\n\nCall a classification algorithm", "from cmt.modis.flood_algorithms import *\n\n# Select the algorithm to use and then call it\nalgorithm = DIFFERENCE\n(alg, result) = detect_flood(kashmore_domain, algorithm)\n\n# Get a color pre-associated with the algorithm, then draw it on the map\ncolor = get_algorithm_color(algorithm)\naddToMap(result.mask(result), {'min': 0, 'max': 1, 'opacity': 0.5, 'palette': '000000, ' + color}, alg, False)", "Classifier output\n\nThe algorithm output should have been added to the GUI as another image layer.\nEach classifier algorithm evaluates each pixel as flooded(1) or dry (0). Some algorithms will return a probability of being flooded ranging from 0 to 1.\n\nEvaluate classification results", "precision, recall, eval_count, quality = cmt.util.evaluation.evaluate_approach(result, kashmore_domain.ground_truth, kashmore_domain.bounds, is_algorithm_fractional(algorithm))\nprint('For algorithm \"%s\", precision = %f and recall = %f' % (alg, precision, recall) )\n ", "Interpreting results\nThe two main scores for evaluating an algorithm are \"precision\" and \"recall\".\n- Precision is a measure of how many false positives the algorithm has. 
It is calculated as: (number of pixels classified as flooded which are actually flooded) / (number of pixels classified as flooded)\n- Recall is a measure of how sensitive to flooding the algorithm is. It is calculated as: (number of pixels classified as flooded which are actually flooded) / (total number of flooded pixels)\nIn order for these measurements to be computed the domain must have a ground truth file associated with it which labels each pixel as flooded or dry.\nEnd of introduction\nThe documentation so far covers most of the code used to write a file such as the tool detect_flood_modis.py. The rest of the documentation covers different aspects of the CMT in more detail.\nSupported Sensor Data\nThe Crisis Mapping Toolkit has so far been used with the following types of data:\n- MODIS = 250m to 500m satellite imagery covering the globe daily.\n- LANDSAT = 30m satellite imagery with global coverage but infrequent images.\n- DEM = Earth Engine provides the SRTM90 and NED13 digital elevation maps.\n- Skybox = Google owned RGBN imaging satellites.\n- SAR = Cloud penetrating radar data. Several specific sources have been tested:\n - UAVSAR\n - Sentinel-1\n - Terrasar-X\nMODIS and LANDSAT data are the easiest types to work with because Earth Engine already has all of that data loaded and easily accessible. SAR data on the other hand can be difficult or expensive to get ahold of.\nMost of the processing algorithms currently in CMT are for processing MODIS or SAR data and are split between the modis and radar folders. 
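The precision and recall formulas from the "Interpreting results" section above can be sketched with plain Python counts (illustrative per-pixel labels, not actual CMT output):

```python
# Illustrative per-pixel labels; 1 = flooded, 0 = dry (made-up data).
predicted = [1, 1, 1, 0, 0, 1, 0, 1]
truth     = [1, 0, 1, 0, 1, 1, 0, 1]

true_pos = sum(1 for p, t in zip(predicted, truth) if p == 1 and t == 1)
precision = true_pos / sum(predicted)  # TP / pixels classified as flooded
recall = true_pos / sum(truth)         # TP / pixels actually flooded

print(precision, recall)
```

This mirrors what `cmt.util.evaluation.evaluate_approach` computes against the domain's ground truth, just reduced to the bare counting arithmetic.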
Some of the algorithms, such as the active contour, can also operate on other types of data.\nInstructions for how to load your own data are located in the \"Domains\" section of this documentation.\nAlgorithm Overviews\nThe algorithms currently implemented by the CMT fall into these categories:\nMODIS\n - Simple algorithms = Basic thresholding and small decision tree algorithms.\n - EE Classifiers = These algorithms are built around Earth Engine's classifier tool.\n - DNNS = Variants of the DNNS algorithm (http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6307841)\n - Adaboost = Uses multiple instances of the simple algorithms to build a more accurate composite classification.\n - Misc algorithms = A few other algorithms outside those categories.\nRADAR\n - Learning = Algorithms built around Earth Engine's classifier tool.\n - Matgen = An algorithm which attempts to detect water by finding a global histogram split. (http://www.sciencedirect.com/science/article/pii/S1474706510002160)\n - Martinis = Breaks up the region into sub-regions to try and obtain a more useful histogram to split (http://www.nat-hazards-earth-syst-sci.net/9/303/2009/nhess-9-303-2009.pdf)\n - Active Contour = A \"snake\" algorithm for finding water boundaries.\nSkybox\n - The MODIS EE Classifiers can incorporate Skybox imagery to improve their results.\n - The Active Contour algorithm can be used on Skybox data.\nThe Production GUI\nIn addition to the default GUI, the Crisis Mapping Toolkit has another GUI customized to perform a few useful operations. It is accessible by running the \"flood_detection_wizard.py\" tool. The main map portion of the production GUI is the same as in the default GUI but there are additional controls above and below the map window.\nWhy use the production GUI?\n\nEasily search through MODIS and Landsat data. The production GUI lets you quickly change the date and then searches for the closest Landsat data.\nQuickly perform basic MODIS flood detection. 
The controls at the bottom allow quick tuning of a simple flood detection algorithm on the currently displayed MODIS data.\nGenerate training data. You can use the production GUI to create labeled training polygons to load into several of the classifier algorithms.\n\n<br>\n<img src=\"production_gui_screenshot.png\">\n<center> A screenshot of the Production GUI </center>\nTop buttons from left to right\n\nDate Selector Button = Choose the date of interest. MODIS data will be loaded from that date and LANDSAT data will be searched for near that date.\nSet Processing Region = When clicked the current field of view in the map will be set as the region of interest. This region is used when searching for LANDSAT images and performing flood detection.\nLoad Images = Once the data and region have been set, press this button to search for MODIS and LANDSAT data. The data should be added to the main map display.\nDetect Flood = Run a flood detection algorithm using the values currently set by the sliders at the bottom of the GUI. Flood detection results will be displayed in the main map display.\nLoad Maps Engine Image = Paste the full Earth Engine ID from an image loaded in Google Maps Engine, then select the associated sensor type and click \"Ok\". The image will now be displayed on the main map display. 
Currently only one image at a time is supported.\nOpen Class Trainer = Opens another window for generating training regions.\nClear Map Button = Click this to remove all images from the main map display.\n\nHow to load MODIS/LANDSAT data\n\nClick the date select button and pick a date.\nPan and zoom to your region of interest and click \"Set Processing Region\".\nClick \"Load Images\"\n\nHow to detect floods\n\nPerform the three steps above to load MODIS and LANDSAT data.\nAdjust the two sliders at the bottom to set the algorithm parameters.\nChange Detection Threshold = Decrease this value to detect more pixels as flooded.\nWater Mask Threshold = Increase this value to detect more pixels as flooded.\n\n\nClick \"Detect Flood\"\n\nHow to generate training regions for classifiers\n\nLoad the imagery you want to look at while selecting regions, either MODIS/LANDSAT data or by clicking \"Load ME image\".\nClick \"Open Class Trainer\"\nUse the text editor box to enter the name of a region. Each name should contain either \"Land\" or \"Water\" to let the classifiers know how to use that region.\nPress \"Add New Class\" to add the named region to the class list.\nTo select a class, click its name in the list. When a class is selected you cannot drag the map view around!\nTo unselect a class (so you can reposition the map) click \"Deselect Class\"\nYou can delete a selected class from the list by clicking \"Delete Class\"\nTo set the region for a selected class just click on locations in the main map view. 
The points you click will form a polygon which should be drawn in the main map view.\nThe main map view should keep updated with the polygon of the currently selected class but you may see some transient drawing artifacts.\nClick \"Save Class File\" to write a json file storing the training data.\nClick \"Load Class File\" to load an existing json class file.\n\nWorking With Domains\nThe Domain Concept\nA Domain consists of a region, training information, and a list of descriptions of available sensor data. They can be easily loaded from XML files and the existing algorithms are all designed to take domain objects as input. MODIS and DEM data are almost always available in any domain. Instructions for creating a custom domain XML file are in the next section.\nAnatomy of a Domain File\nTo use a custom domain generally requires three files:\n- A sensor definition XML file. Only one of these is needed per sensor. It defines the bands, data characteristics, and possibly the data source.\n- A test domain XML file. This defines the geographic region, algorithm parameters, training and truth information, dates, and other source information.\n- A training domain XML file. 
This is similar to the test domain file except that it will specify a different date or location to collect training data from.\nFor more detailed descriptions of all the possible contents of a domain file, check out the domain_example and sensor_example XML files and all of the real config files that are included with the Crisis Mapping Toolkit.\nCode Examples\nHere are some examples of code working with the Domain class in Python:", "# Access a specific parameter listed in the domain file\nkashmore_domain.algorithm_parameters['modis_diff_threshold']\n\n# Call this function to get whatever digital elevation map is available.\ndem = kashmore_domain.get_dem()\n\n# All the sensors included in the domain are stored as a list\nfirst_sensor = kashmore_domain.sensor_list[0]\n\n# If you know the name of a sensor you can access it like this\nmodis_sensor = kashmore_domain.modis\n\n# Then you can access individual sensor bands like this\none_band = modis_sensor.sur_refl_b03\n\n# To get the EE image object containing all the bands, do this\nall_bands = modis_sensor.image\n\n# The sensor contains some other information,\n# but only if the information is present in the XML files\nfirst_band_name = modis_sensor.band_names[0]\nfirst_band_resolution = modis_sensor.band_resolutions[first_band_name]\n\n# Related domains have the same structure as the main domain\n# and can be accessed like this\nkashmore_domain.training_domain\nkashmore_domain.unflooded_domain\n", "Other Crisis Mapping Features\nThe Local Image Class\nEarth Engine is very powerful but it is not well suited for all tasks. In these cases you can use the LocalEEImage class to easily download image data from Earth Engine and work with it locally using whatever Python image processing method you prefer. You can see an example of doing this in the Active Contour algorithm." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bmeaut/python_nlp_2017_fall
course_material/01_Introduction/01_Python_introduction.ipynb
mit
[ "Introduction to Python and Natural Language Technologies\nLecture 01, Introduction to Python\nSeptember 6, 2017\nAbout this part of the course\nGoal\n\nupper intermediate level Python\nwill cover some advanced concepts\nfocus on string manipulation\n\nPrerequisites\n\nintermediate level in at least one object oriented programming language\nmust know: class, instance, method, operator overloading, basic IO handling\ngood to know: static method, property, mutability, garbage collection\n\nCourse material\nOfficial Github repository\n\nwill push the slideshow notebooks right before the lecture, so you can follow on your own notebook\n\nHomework\n\none homework for this part\nreleased on Week 4\ndeadline by the end of Week 7\n\nJupyter\n\nJupyter - formerly known as IPython Notebook - is a web application that allows you to create and share documents with live code, equations, visualizations etc.\nJupyter notebooks are JSON files with the extension .ipynb\ncan be converted to HTML, PDF, LaTeX etc.\n\ncan render images, tables, graphs, LaTeX equations\n\n\ncontent is organized into cells\n\n\nCell types\n\ncode cell: Python/R/Lua/etc. 
code\nraw cell: raw text\nmarkdown cell: formatted text using Markdown\n\nCode cell", "print(\"Hello world\")", "The last command's output is displayed", "2 + 3\n3 + 4", "This can be a tuple of multiple values", "2 + 3, 3 + 4, \"hello \" + \"world\"", "Markdown cell\nThis is in bold\nThis is in italics\n| This | is |\n| --- | --- |\n| a | table |\nand here is a pretty LaTeX equation:\n$$\n\\oint_{\\partial \\Omega} \\mathbf{E}\\cdot\\mathrm{d}\\mathbf{S} = \\frac{1}{\\varepsilon_0} \\iiint_\\Omega \\rho \\,\\mathrm{d}V\n$$\nUsing Jupyter\nCommand mode and edit mode\nJupyter has two modes: command mode and edit mode\n\nCommand mode: perform non-edit operations on selected cells (can select more than one cell)\nselected cells are marked blue\nEdit mode: edit a single cell\nthe cell being edited is marked green\n\nSwitching between modes\n\nEsc: Edit mode -> Command mode\nEnter or double click: Command mode -> Edit mode\n\nRunning cells\n\nCtrl + Enter: run cell\nShift + Enter: run cell and select next cell\nAlt + Enter: run cell and insert new cell below\n\nCell magic\nSpecial commands can modify a single cell's behavior, for example", "%%time\n\nfor x in range(100000):\n pass\n\n%%timeit\n\nx = 2\n\n%%writefile hello.py\n\nprint(\"Hello world from BME\")", "For a complete list of magic commands:", "%lsmagic", "Course material - Jupyter slides\nJupyter notebooks can be converted to slides and rendered with Reveal.js just like this course material.\nThis slideshow is a single Jupyter notebook which means:\n- you can view it as a notebook on Github\n- you can run and modify it on your own computer\n- you can render it using Reveal.js\n~~~\njupyter-nbconvert --to slides 01_Python_introduction.ipynb --reveal-prefix=reveal.js --post serve\n~~~\nMore on Jupyter slides:\n10 min video on Jupyter slides\n\ncells may be skipped during presentations\nsome extra material is skipped, they will not be covered in the exam\nall notebooks should run without errors using Kernel -&gt; Restart &amp; Run All\ncode 
samples that would raise an exception are commented\nthis live presentation uses the RISE jupyter extension\n\nUnder the hood\n\neach notebook is run by its own Kernel (Python interpreter)\nthe kernel can be interrupted or restarted through the Kernel menu\nalways run Kernel -&gt; Restart &amp; Run All before submitting homework to make sure that your notebook behaves as expected\nall cells share a single namespace\ncells can be run in arbitrary order, execution count is helpful", "print(\"this is run first\")\n\nprint(\"this is run afterwards. Note the execution count on the left.\")", "The input and output of code cells can be accessed\nPrevious output:", "42\n\n_", "Next-previous output:", "\"first\"\n\n\"second\"\n\n__\n\n__", "Next-next previous output:", "___", "N-th output can also be accessed as a variable _output_count. This is only defined if the N-th cell had an output.\nHere is a way to list all defined outputs (you will understand the code in 3 weeks):", "list(filter(lambda x: x.startswith('_') and x[1:].isdigit(), globals()))", "Inputs can be accessed similarly\nPrevious input:", "_i", "N-th input:", "_i2", "The Python programming language\nHistory of Python\n\nPython started as a hobby project of Dutch programmer Guido van Rossum in 1989.\nPython 1.0 in 1994\nPython 2.0 in 2000\ncycle-detecting garbage collector\nUnicode support\nPython 3.0 in 2008\nbackward incompatible\nPython2 End-of-Life (EOL) date was postponed from 2015 to 2020\n\n# Benevolent Dictator for Life\n<img width=\"400\" alt=\"portfolio_view\" src=\"https://upload.wikimedia.org/wikipedia/commons/6/66/Guido_van_Rossum_OSCON_2006.jpg\">\n Guido van Rossum at OSCON 2006. 
by Doc Searls licensed under CC BY 2.0\nPython community and development\n\nPython Software Foundation nonprofit organization based in Delaware, US\nmanaged through PEPs (Python Enhancement Proposal)\nstrong community inclusion\nlarge standard library\nvery large third-party module repository called PyPI (Python Package Index)\npip installer", "import antigravity", "Python neologisms\n\nthe Python community has a number of made-up expressions\nPythonic: following Python's conventions, Python-like\nPythonist or Pythonista: good Python programmer\n\nGeneral properties of Python\nWhitespaces\n\nwhitespace indentation instead of curly braces\nno semicolons", "n = 12\nif n % 2 == 0:\n print(\"n is even\")\nelse:\n print(\"n is odd\")", "Dynamic typing\n\ntype checking is performed at run-time as opposed to compile-time (C++)", "n = 2\nprint(type(n))\n\nn = 2.1\nprint(type(n))\n\nn = \"foo\"\nprint(type(n))", "Assignment\nassignment differs from other imperative languages:\n\nin C++ i = 2 translates to typed variable named i receives a copy of numeric value 2\nin Python i = 2 translates to name i receives a reference to object of numeric type of value 2\n\nthe built-in function id returns the object's id", "i = 2\nprint(id(i))\n\ni = 3\nprint(id(i))\n\ni = \"foo\"\nprint(id(i))\n\ns = i\nprint(id(s) == id(i))\n\ns += \"bar\"\nprint(id(s) == id(i))", "Simple statements\nif, elif, else", "#n = int(input())\nn = 12\n\nif n < 0:\n print(\"N is negative\")\nelif n > 0:\n print(\"N is positive\")\nelse:\n print(\"N is neither positive nor negative\")", "Conditional expressions\n\none-line if statements\nthe order of operands is different from C's ?: operator; the C version of abs would look like this\n\n~~~C\nint x = -2;\nint abs_x = x >= 0 ? x : -x;\n~~~\n- should only be used for very short statements\n&lt;expr1&gt; if &lt;condition&gt; else &lt;expr2&gt;", "n = -2\nabs_n = n if n >= 0 else -n\nabs_n", "Lists\n\nlists are the most frequently used built-in containers\nbasic operations: indexing, length, append, extend\nlists will be covered in detail next week", "l = [] # empty list\nl.append(2)\nl.append(2)\nl.append(\"foo\")\n\nlen(l), l\n\nl[1] = \"bar\"\nl.extend([-1, True])\nlen(l), l", "for, range\nIterating a list", "for e in [\"foo\", \"bar\"]:\n print(e)", "Iterating over a range of integers\nThe same in C++:\n~~~C++\nfor (int i=0; i<5; i++)\n cout << i << endl;\n~~~\nBy default range starts from 0.", "for i in range(5):\n print(i)", "specifying the start of the range:", "for i in range(2, 5):\n print(i)", "specifying the step. Note that in this case we need to specify all three positional arguments.", "for i in range(0, 10, 2):\n print(i)", "while", "i = 0\nwhile i < 5:\n print(i)\n i += 1\n ", "There is no do...while loop in Python.\nbreak and continue\n\nbreak: allows early exit from a loop\ncontinue: allows early jump to next iteration", "for i in range(10):\n if i % 2 == 0:\n continue\n print(i)\n\nfor i in range(10):\n if i > 4:\n break\n print(i)", "Functions\nDefining functions\nFunctions can be defined using the def keyword:", "def foo():\n print(\"this is a function\")\n \nfoo()", "Function arguments\n\npositional\nnamed or keyword arguments\n\nkeyword arguments must follow positional arguments", "def foo(arg1, arg2, arg3):\n print(\"arg1 \", arg1)\n print(\"arg2 \", arg2)\n print(\"arg3 \", arg3)\n \nfoo(1, 2, 3)\n\nfoo(1, arg3=2, arg2=29)", "Default arguments\n\narguments can have default values\ndefault arguments must follow non-default arguments", "def foo(arg1, arg2, arg3=3):\n print(\"arg1 \", arg1)\n print(\"arg2 \", arg2)\n print(\"arg3 \", arg3)\nfoo(1, 2)", "Default arguments need not be specified when calling the function", "foo(1, 2)\n\nfoo(arg1=1, arg3=33, arg2=222)", 
"If more than one value has default arguments, either can be skipped:", "def foo(arg1, arg2=2, arg3=3):\n print(\"arg1 \", arg1)\n print(\"arg2 \", arg2)\n print(\"arg3 \", arg3)\n \nfoo(11, arg3=33)", "This mechanism allows having a very large number of arguments.\nMany libraries have functions with dozens of arguments.\nThe popular data analysis library pandas has functions with dozens of arguments, for example:\n~~~python\n pandas.read_csv(filepath_or_buffer, sep=', ', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, iterator=False, chunksize=None, compression='infer', thousands=None, decimal=b'.', lineterminator=None, quotechar='\"', quoting=0, escapechar=None, comment=None, encoding=None, dialect=None, tupleize_cols=False, error_bad_lines=True, warn_bad_lines=True, skipfooter=0, skip_footer=0, doublequote=True, delim_whitespace=False, as_recarray=False, compact_ints=False, use_unsigned=False, low_memory=True, buffer_lines=None, memory_map=False, float_precision=None)\n ~~~\nThe return statement\n\nfunctions may return more than one value\na tuple of the values is returned\nwithout an explicit return statement None is returned\nan empty return statement returns None", "def foo(n):\n if n < 0:\n return \"negative\"\n if 0 <= n < 10:\n return \"positive\", n\n return\n\nprint(foo(-2))\nprint(foo(3))\nprint(foo(12))", "Zen of Python", "import this" ]
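The `id()`-based Assignment discussion in the slides above has a useful corollary for mutable objects; this extra illustration goes beyond the original slides (an addition, not course material):

```python
s = "foo"
before = id(s)
s += "bar"   # str is immutable: += rebinds the name to a brand-new object
changed = id(s) != before

l = [1, 2]
before = id(l)
l += [3]     # list is mutable: += extends the same object in place
same = id(l) == before

print(changed, same, s, l)
```

This is why `+=` on a list passed into a function is visible to the caller, while `+=` on a string is not.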
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
folivetti/PIPYTHON
ListaEX_04.ipynb
mit
[ "Exercise 01: Create a function ContaPalavras that receives the name of a text file as input and returns the frequency of each word it contains.", "# Word counter\nimport codecs\nfrom collections import defaultdict\n\ndef ContaPalavras(texto):\n \nfor palavra, valor in ContaPalavras('exemplo.txt').items():\n print (palavra, valor)", "Exercise 02: Create a function ConverteData() that receives a string in the format DAY-MONTH-YEAR and returns a string in the format DAY-MONTH_NUMBER-YEAR. Example:\n'01-MAI-2000' => '01-05-2000'.\nYou can split the string into a list of strings like this:\ndata = '01-MAI-2000'\nlista = data.split('-')\nprint(lista) # ['01','MAI','2000']\nAnd join it back together using join:\nlista = ['01','05', '2000']\ndata = '-'.join(lista)\nprint(data) # '01-05-2000'", "# convert a date in the format 01-MAI-2000 into 01-05-2000\ndef ConverteData(data):\n\nprint (ConverteData('01-MAI-2000'))", "Exercise 03: Create a dictionary called Dados whose keys are the numbers from 2 to 12 and whose values are lists containing every combination of two dice values that adds up to that key.", "# create a dictionary where the key is a number from 2 to 12\n# and the value is a list of two-dice combinations that add up to the key\nDados =...\n\n\nfor chave, valor in Dados.items():\n print (chave, valor)", "Exercise 04: Create a dictionary whose keys are words in Portuguese and whose values are their English translations. Use all the words from the text of Exercise 01.\nCreate a function Traduz() that receives the text file name as a parameter and returns a string with the translation.", "# create a small Portuguese-to-English dictionary and use it to translate simple sentences\nimport codecs\n\ndef Traduz(texto):\n \n\nprint (Traduz('exemplo.txt'))", "Exercise 05: The Caesar cipher is a simple way to encrypt a text. The procedure is simple:\n\ngiven a number $n$\nbuild a substitution map in which each letter is replaced by the n-th letter after it in the alphabet. E.g.:\n\nn = 1\nA -&gt; B\nB -&gt; C\n...\nn = 2\nA -&gt; C\nB -&gt; D\n...\nEncoding is done by replacing each letter of the sentence with its counterpart in the map.\nTo decode a sentence, just build a map using $-n$ instead of $n$.\nCreate a function ConstroiDic() that receives a value n as input and builds a substitution map. Use the constant string.ascii_letters to obtain all the letters of the alphabet.\nNote that the map is cyclic, i.e., for n=1 the letter Z must be replaced by the letter A. This can be done using the '%' operator.\nCreate a function Codifica() that receives as parameters a string containing a sentence and a value for n; this function must build the dictionary and return the encoded sentence.\nTo decode the text, just call Codifica() passing -n as the parameter.", "# Caesar cipher\nimport string\n\ndef ConstroiDic(n):\n\n \ndef Codifica(frase, n):\n \n \n\nl = Codifica('Vou tirar dez na proxima prova', 5)\nprint (l)\nprint (Codifica(l,-5))", "Exercise 06: Write a function that reads the periodic table from a file (you will build this file) and stores it in a dictionary.", "# periodic table", "Exercise 07: Watch the video below and create a list with the characters from the song's lyrics.\nThen, using two for loops, traverse this list and print out the lyrics.", "from IPython.display import YouTubeVideo\nYouTubeVideo('BZzNBNoae-Y', 640,480)\n\n# velha a fiar\n", "Exercise 08: Write a function that converts a decimal number to Roman numerals. To do so, build a dictionary whose keys are the decimal values and whose values are the Roman equivalents.\nThe algorithm works as follows:\n\nFor each decimal value in the dictionary, from largest to smallest\nWhile that value can still be subtracted from x\nsubtract the value from x and concatenate the Roman equivalent onto a string\n\nExercise 09: Write a function that converts a Roman numeral to decimal. To do so, build a dictionary that is the inverse of the one from the previous exercise. The algorithm goes like this:\n\nFor i from 0 up to the length of the Roman numeral string\nbuild the string formed by letters i and i+1, if i is less than the string length - 1\nbuild the string formed by letters i-1 and i, if i is greater than 0\nif the first string is in the dictionary, add its value to x\notherwise, if the second string is NOT in the dictionary, add the value of letter i to x", "# dec - roman - dec\ndef DecRoman(x):\n\n \n \n \ndef RomanDec(r):\n\n \n \nr = DecRoman(1345)\nx = RomanDec(r)\nprint (r,x)" ]
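One possible shape for the Caesar-cipher exercise above, hedged as a sketch: the exercise fixes the function names (`ConstroiDic`, `Codifica`) and the use of `string.ascii_letters` with the `%` operator, but wrapping around across the full 52-letter lower/upper sequence is our design choice, not something the exercise pins down:

```python
import string


def ConstroiDic(n):
    # Cyclic substitution map over the 52 letters of string.ascii_letters,
    # wrapping around with the % operator as the exercise suggests.
    letras = string.ascii_letters
    return {c: letras[(i + n) % len(letras)] for i, c in enumerate(letras)}


def Codifica(frase, n):
    mapa = ConstroiDic(n)
    # Characters outside the map (spaces, punctuation) pass through unchanged.
    return ''.join(mapa.get(c, c) for c in frase)


cifrada = Codifica('Vou tirar dez na proxima prova', 5)
print(cifrada)
print(Codifica(cifrada, -5))  # decoding with -n recovers the original
```

Because the map is a bijection, encoding with `n` and then with `-n` is always the identity, which is exactly the round trip the exercise's skeleton code tests.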
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
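The greedy decimal-to-Roman algorithm and its inverse, as described in Exercises 08 and 09 of the list above, can be sketched as follows; the function names come from the exercise skeleton, while the exact value table (including the subtractive pairs CM, XC, etc.) is our choice:

```python
def DecRoman(x):
    # Greedy conversion: walk the value table from largest to smallest,
    # subtracting while possible, exactly as the exercise describes.
    valores = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
               (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
               (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]
    romano = ''
    for valor, letra in valores:
        while x >= valor:
            x -= valor
            romano += letra
    return romano


def RomanDec(r):
    # A letter standing before a larger letter (as in IV or XC) is subtracted.
    valor = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}
    total = 0
    for i, letra in enumerate(r):
        if i + 1 < len(r) and valor[letra] < valor[r[i + 1]]:
            total -= valor[letra]
        else:
            total += valor[letra]
    return total


print(DecRoman(1345))             # → MCCCXLV
print(RomanDec(DecRoman(1345)))   # → 1345
```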
eds-uga/csci1360e-su17
lectures/L7.ipynb
mit
[ "Lecture 7: Functions I\nCSCI 1360E: Foundations for Informatics and Analytics\nOverview and Objectives\nIn this lecture, we'll introduce the concept of functions, critical abstractions in nearly every modern programming language. Functions are important for abstracting and categorizing large codebases into smaller, logical, and human-digestable components. By the end of this lecture, you should be able to:\n\nDefine a function that performs a specific task\nSet function arguments and return values\nWrite a function from scratch to answer questions in JupyterHub!\n\nPart 1: Defining Functions\nA function in Python is not very different from a function as you've probably learned since algebra.\n\"Let $f$ be a function of $x$\"...sound familiar? We're basically doing the same thing here.\nA function ($f$) will [usually] take something as input ($x$), perform some kind of operation on it, and then [usually] return a result ($y$). Which is why we usually see $f(x) = y$.\nA function, then, is composed of three main components:\n1: The function itself. A [good] function will have one very specific task it performs. This task is usually reflected in its name. Take the examples of print, or sqrt, or exp, or log; all these names are very clear about what the function does.\n2: Arguments (if any). Arguments (or parameters) are the input to the function. It's possible a function may not take any arguments at all, but often at least one is required. For example, print has 1 argument: a string.\n3: Return values (if any). Return values are the output of the function. It's possible a function may not return anything; technically, print does not return anything. 
But common math functions like sqrt or log have clear return values: the output of that math operation.\nPhilosophy\nA core tenet in writing functions is that functions should do one thing, and do it well (with apologies to the Unix Philosophy).\nWriting good functions makes code much easier to troubleshoot and debug, as the code is already logically separated into components that perform very specific tasks. Thus, if your application is breaking, you usually have a good idea where to start looking.\nIt's very easy to get caught up writing \"god functions\": one or two massive functions that essentially do everything you need your program to do. But if something breaks, this design is very difficult to debug.\nFunctions vs Methods\nYou've probably heard the term \"method\" before, in this class. Quite often, these two terms are used interchangeably, and for our purposes they are pretty much the same.\nBUT. These terms ultimately identify different constructs, so it's important to keep that in mind. Specifically:\n\n\nMethods are functions defined inside classes (sorry, not being covered in 1360E).\n\n\nFunctions are not inside classes.\n\n\nOtherwise, functions and methods work identically.\nSo how do we write functions? At this point in the course, you've probably already seen how this works, but we'll go through it step by step regardless.\nFirst, we define the function header. This is the portion of the function that defines the name of the function, the arguments, and uses the Python keyword def to make everything official:", "def our_function():\n pass", "That's everything we need for a working function! Let's walk through it:\n\ndef keyword: required before writing any function, to tell Python \"hey! 
this is a function!\"\nFunction name: one word (can \"fake\" spaces with underscores), which is the name of the function and how we'll refer to it later\nArguments: a comma-separated list of arguments the function takes to perform its task. If no arguments are needed (as above), then just open-paren-close-paren.\nColon: the colon indicates the end of the function header and the start of the actual function's code.\npass: since Python is sensitive to whitespace, we can't leave a function body blank; luckily, there's the pass keyword that does pretty much what it sounds like--no operation at all, just a placeholder.\n\nAdmittedly, our function doesn't really do anything interesting. It takes no parameters, and the function body consists exclusively of a placeholder keyword that also does nothing. Still, it's a perfectly valid function!", "# Call the function!\n\nour_function()\n\n# Nothing happens...no print statement, no computations, nothing.\n# But there's no error either...so, yay?", "Other notes on functions\n\n\nYou can define functions (as we did just before) almost anywhere in your code. Still, good coding practices behooves you to generally group your function definitions together, e.g. at the top of your Python file.\n\n\nInvoking or activating a function is referred to as calling the function. When you call a function, you type its name, an open parenthesis, any arguments you're sending to the function, and a closing parenthesis. 
If there are no arguments, then calling the function is as simple as typing the function name and an open-close pair of parentheses (as in our previous example).\n\n\nPart 2: Function Arguments\nArguments (or parameters), as stated before, are the function's input; the \"$x$\" to our \"$f$\", as it were.\nYou can specify as many arguments as you want, separating them by commas:", "def one_arg(arg1):\n print(arg1)\n\ndef two_args(arg1, arg2):\n print(arg1, arg2)\n\ndef three_args(arg1, arg2, arg3):\n print(arg1, arg2, arg3)\n\n# And so on...", "Like functions, you can name the arguments anything you want, though also like functions you'll probably want to give them more meaningful names besides arg1, arg2, and arg3. When these become just three functions among hundreds in a massive codebase written by dozens of different people, it's helpful when the code itself gives you hints as to what it does.\nWhen you call a function, you'll need to provide the same number of arguments in the function call as appear in the function header, otherwise Python will yell at you.", "one_arg(10) # \"one_arg\" takes only 1 argument\n\none_arg(10, 5) # \"one_arg\" won't take 2 arguments!\n\ntwo_args(10, 5) # \"two_args\", on the other hand, does take 2 arguments\n\ntwo_args(10, 5, 1) # ...but it doesn't take 3", "To be fair, it's a pretty easy error to diagnose, but still something to keep in mind--especially as we move beyond basic \"positional\" arguments (as they are so called in the previous error message) into optional arguments.\nDefault arguments\n\"Positional\" arguments--the only kind we've seen so far--are required whenever you call a function. If the function header specifies a positional argument, then every single call to that function needs to have that argument specified.\nIn our previous example, one_arg is defined with 1 positional argument, so every time you call one_arg, you HAVE to supply 1 argument. 
Same with two_args defining 2 arguments, and three_args defining 3 arguments. Calling any of these functions without exactly the right number of arguments will result in an error.\nThere are cases, however, where it can be helpful to have optional, or default, arguments. In this case, when the function is called, the programmer can decide whether or not they want to override the default values.\nYou can specify default arguments in the function header:", "def func_with_default_arg(positional, default = 10):\n print(positional, default)\n\nfunc_with_default_arg(\"pos_arg\")\n\nfunc_with_default_arg(\"pos_arg\", default = 999)", "Can you piece together what's happening here?\nNote that, in the function header, one of the arguments is set equal to a particular value:\ndef func_with_default_arg(positional, default = 10):\nThis means that you can call this function with only 1 argument, and if you do, the second argument will take its \"default\" value, aka the value that is assigned in the function header (in this case, 10).\nAlternatively, you can specify a different value for the second argument if you supply 2 arguments when you call the function.\nCan you think of examples where default arguments might be useful?\nLet's do one more small example before moving on to return values. 
Let's build a function which prints out a list of video games in someone's Steam library.", "def games_in_library(username, library):\n print(\"User '{}' owns: \".format(username))\n for game in library:\n print(\"\\t{}\".format(game))", "You can imagine how you might modify this function to include a default argument--perhaps a list of games that everybody owns by simply registering with Steam.", "games_in_library('fps123', ['DOTA 2', 'Left 4 Dead', 'Doom', 'Counterstrike', 'Team Fortress 2'])\n\ngames_in_library('rts456', ['Civilization V', 'Cities: Skylines', 'Sins of a Solar Empire'])\n\ngames_in_library('smrt789', ['Binding of Isaac', 'Monaco'])", "In this example, our function games_in_library has two positional arguments: username, which is the Steam username of the person, and library, which is a list of video game titles. The function simply prints out the username and the titles they own.\nPart 3: Return Values\nJust as functions [can] take input, they also [can] return output for the programmer to decide what to do with.\nAlmost any function you will ever write will most likely have a return value of some kind. If not, your function may not be \"well-behaved\", aka sticking to the general guideline of doing one thing very well.\nThere are certainly some cases where functions won't return anything--functions that just print things, functions that run forever (yep, they exist!), functions designed specifically to test other functions--but these are highly specialized cases we are not likely to encounter in this course. 
Keep this in mind as a \"rule of thumb\": if your function doesn't have a return statement, you may need to double-check your code.\nTo return a value from a function, just use the return keyword:", "def identity_function(in_arg):\n return in_arg\n\nx = \"this is the function input\"\nreturn_value = identity_function(x)\nprint(return_value)", "This is pretty basic: the function returns back to the programmer as output whatever was passed into the function as input. Hence, \"identity function.\"\nAnything you can pass in as function parameters, you can return as function output, including lists:", "def explode_string(some_string):\n list_of_characters = []\n for index in range(len(some_string)):\n list_of_characters.append(some_string[index])\n return list_of_characters\n\nwords = \"Blahblahblah\"\noutput = explode_string(words)\nprint(output)", "This function takes a string as input, uses a loop to \"explode\" the string, and returns a list of individual characters.\nYou can even return multiple values simultaneously from a function. They're just treated as tuples!", "def list_to_tuple(inlist):\n return 10, inlist # Two values, returned together as one tuple.\n\nprint(list_to_tuple([1, 2, 3]))\n\nprint(list_to_tuple([\"one\", \"two\", \"three\"]))", "This two-way communication that functions enable--arguments as input, return values as output--is an elegant and powerful way of allowing you to design modular and human-understandable code.\nReview Questions\nSome questions to discuss and consider:\n1: You're a software engineer for a prestigious web company named after a South American rain forest. You've been tasked with rewriting their web-based shopping cart functionality for users who purchase items through the site. Without going into too much detail, quickly list out a handful of functions you'd want to write with their basic arguments. 
Again, no need for excessive detail; just consider the workflow of navigating an online store and purchasing items with a shopping cart, and identify some of the key bits of functionality you'd want to write standalone functions for, as well as the inputs and outputs of those functions.\n2: From where do you think the term \"positional argument\" gets its name?\n3: Write a function, grade, which accepts a positional argument number (floating point) and returns a letter grade version of it (\"A\", \"B\", \"C\", \"D\", or \"F\"). Include a second, default argument that is a string and indicates whether there should be a \"+\", \"-\", or no suffix to the letter grade (default is no suffix).\n4: Name a couple of functions in your experience that would benefit from being implemented with default arguments (hint: mathematical functions).\nCourse Administrivia\n\n\nAssignment 1 was due yesterday. How did it go?\n\n\nAssignment 2 is due tomorrow. Post questions to #questions--I'm going to start answering questions there, because I am getting a lot of the same questions. Which is GOOD--keep asking!\n\n\nAdditional Resources\n\nMatthes, Eric. Python Crash Course. 2016. ISBN-13: 978-1593276034" ]
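Review question 3 above is concrete enough to sketch; note that the exact numeric cutoffs below (90/80/70/60) are an assumption, since the question leaves the grading scale open:

```python
def grade(number, suffix=""):
    # Map a numeric score to a letter; `suffix` defaults to no modifier.
    if number >= 90:
        letter = "A"
    elif number >= 80:
        letter = "B"
    elif number >= 70:
        letter = "C"
    elif number >= 60:
        letter = "D"
    else:
        letter = "F"
    return letter + suffix


print(grade(85.0))               # → B
print(grade(85.0, suffix="+"))   # → B+
```

The default argument means callers who don't care about plus/minus modifiers can ignore the second parameter entirely, which is exactly the convenience default arguments are for.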
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
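The wrong-argument-count errors shown in the lecture are ordinary `TypeError`s, so a caller can catch them like any other exception; a minimal sketch (not from the lecture itself):

```python
def one_arg(arg1):
    return arg1


try:
    one_arg(10, 5)  # too many arguments for a 1-parameter function
except TypeError as err:
    # The error message names the function and the argument counts involved.
    print("caught:", err)
```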
cvxopt/chompack
doc/source/examples.ipynb
gpl-3.0
[ "Examples\nSDP conversion\nThis example demonstrates the SDP conversion method. We first generate a random sparse SDP:", "\nfrom cvxopt import matrix, spmatrix, sparse, normal, solvers, blas\nimport chompack as cp\nimport random\n\n# Function for generating random sparse matrix\ndef sp_rand(m,n,a):\n \"\"\"\n Generates an m-by-n sparse 'd' matrix with round(a*m*n) nonzeros.\n \"\"\"\n if m == 0 or n == 0: return spmatrix([], [], [], (m,n))\n nnz = min(max(0, int(round(a*m*n))), m*n)\n nz = matrix(random.sample(range(m*n), nnz), tc='i')\n return spmatrix(normal(nnz,1), nz%m, nz//m, (m,n))\n\n# Generate random sparsity pattern and sparse SDP problem data\nrandom.seed(1)\nm, n = 50, 200\nA = sp_rand(n,n,0.015) + spmatrix(1.0,range(n),range(n))\nI = cp.tril(A)[:].I\nN = len(I)//50 # each data matrix has 1/50 of total nonzeros in pattern\nIg = []; Jg = []\nfor j in range(m):\n Ig += sorted(random.sample(I,N)) \n Jg += N*[j]\nG = spmatrix(normal(len(Ig),1),Ig,Jg,(n**2,m))\nh = G*normal(m,1) + spmatrix(1.0,range(n),range(n))[:]\nc = normal(m,1)\ndims = {'l':0, 'q':[], 's': [n]};\n", "The problem can be solved using CVXOPT's cone LP solver:", "\nprob = (c, G, matrix(h), dims)\nsol = solvers.conelp(*prob)\nZ1 = matrix(sol['z'], (n,n))\n", "An alternative is to convert the sparse SDP into a block-diagonal SDP using the conversion method and solve the converted problem using CVXOPT:", "\nprob2, blocks_to_sparse, symbs = cp.convert_conelp(*prob)\nsol2 = solvers.conelp(*prob2) \n", "The solution to the original SDP can be found by mapping the block-diagonal solution to a sparse positive semidefinite completable matrix and computing a positive semidefinite completion:", "\n# Map block-diagonal solution sol2['z'] to a sparse positive semidefinite completable matrix\nblki,I,J,bn = blocks_to_sparse[0]\nZ2 = spmatrix(sol2['z'][blki],I,J)\n\n# Compute completion \nsymb = cp.symbolic(Z2, p=cp.maxcardsearch)\nZ2c = cp.psdcompletion(cp.cspmatrix(symb)+Z2, reordered=False)\nY2 = 
cp.mrcompletion(cp.cspmatrix(symb)+Z2, reordered=False)\n", "The conversion can also be combined with clique-merging techniques in the symbolic factorization. This typically yields a block-diagonal SDP with fewer (but bigger) blocks than without clique-merging:", "\nmf = cp.merge_size_fill(5,5)\nprob3, blocks_to_sparse, symbs = cp.convert_conelp(*prob, coupling = 'full', merge_function = mf)\nsol3 = solvers.conelp(*prob3) \n", "Finally, we recover the solution to the original SDP:", "\n# Map block-diagonal solution sol3['z'] to a sparse positive semidefinite completable matrix\nblki,I,J,bn = blocks_to_sparse[0]\nZ3 = spmatrix(sol3['z'][blki],I,J)\n\n# Compute completion \nsymb = cp.symbolic(Z3, p=cp.maxcardsearch)\nZ3c = cp.psdcompletion(cp.cspmatrix(symb)+Z3, reordered=False)\n", "Euclidean distance matrix completion\nSuppose that $A$ is a partial EDM of order $n$ where the squared distance $A_{ij} = \\| p_i - p_j \\|_2^2$ between two points $p_i$ and $p_j$ is known if $p_i$ and $p_j$ are sufficiently close. We will assume that $A_{ij}$ is known if and only if\n$$\\| p_i - p_j \\|_2^2 \\leq \\delta $$ \nwhere $\\delta$ is a positive constant. Let us generate a random partial EDM based on points in $\\mathbb{R}^2$:", "\nfrom cvxopt import uniform, spmatrix, matrix\nimport chompack as cp\n\nd = 2 # dimension\nn = 100 # number of points (order of A)\ndelta = 0.15**2 # distance threshold\n\nP = uniform(d,n) # generate n points with independent and uniformly distributed coordinates\nY = P.T*P # Gram matrix\n\n# Compute true distances: At[i,j] = norm(P[:,i]-P[:,j])**2\n# At = diag(Y)*ones(1,n) + ones(n,1)*diag(Y).T - 2*Y\nAt = Y[::n+1]*matrix(1.0,(1,n)) + matrix(1.0,(n,1))*Y[::n+1].T - 2*Y\n\n# Generate matrix with \"observable distances\"\n# A[i,j] = At[i,j] if At[i,j] <= delta\nV,I,J = zip(*[(At[i,j],i,j) for j in range(n) for i in range(j,n) if At[i,j] <= delta])\nA = spmatrix(V,I,J,(n,n))\n", "The partial EDM $A$ may or may not be chordal. 
We can find a maximal chordal subgraph using the maxchord routine which returns a chordal matrix $A_{\\mathrm{c}}$ and a perfect elimination order $p$. Note that if $A$ is chordal, then $A_{\\mathrm{c}} = A$.", "\nAc,p = cp.maxchord(A)\n", "The points $p_i$ and the known distances can be visualized using Matplotlib:", "\nfrom pylab import plot,xlim,ylim,gca\n\n# Extract entries in Ac and entries dropped from A\nIJc = zip(Ac.I,Ac.J)\ntmp = A - Ac\nIJd = [(i,j) for i,j,v in zip(tmp.I,tmp.J,tmp.V) if v > 0]\n\n# Plot edges\nfor i,j in IJc: \n if i > j: plot([P[0,i],P[0,j]],[P[1,i],P[1,j]],'k-')\nfor i,j in IJd:\n if i > j: plot([P[0,i],P[0,j]],[P[1,i],P[1,j]],'r-')\n\n# Plot points\nplot(P[0,:].T,P[1,:].T, 'b.', ms=12)\nxlim([0.,1.])\nylim([0.,1.])\ngca().set_aspect('equal')\n", "The edges represent known distances. The red edges are edges that were removed to produce the maximal chordal subgraph, and the black edges are the edges of the chordal subgraph.\nNext we compute a symbolic factorization of the chordal matrix $A_{\\mathrm{c}}$ using the perfect elimination order $p$:", "\nsymb = cp.symbolic(Ac, p=p)\np = symb.p\n", "Now edmcompletion can be used to compute an EDM completion of the chordal matrix $A_{\\mathrm{c}}$:", "\nX = cp.edmcompletion(cp.cspmatrix(symb)+Ac, reordered = False)\n", "Symbolic factorization\nThis example demonstrates the symbolic factorization. 
We start by generating a test problem and computing a symbolic factorization using the approximate minimum degree (AMD) ordering heuristic:", "\nimport chompack as cp\nfrom cvxopt import spmatrix, amd\n\nL = [[0,2,3,4,14],[1,2,3],[2,3,4,14],[3,4,14],[4,8,14,15],[5,8,15],[6,7,8,14],[7,8,14],[8,14,15],[9,10,12,13,16],[10,12,13,16],[11,12,13,15,16],[12,13,15,16],[13,15,16],[14,15,16],[15,16],[16]]\nI = []\nJ = []\nfor k,l in enumerate(L):\n I.extend(l)\n J.extend(len(l)*[k])\n \nA = spmatrix(1.0,I,J,(17,17))\nsymb = cp.symbolic(A, p=amd.order)\n", "The sparsity graph can be visualized with the sparsity_graph routine if Matplotlib, NetworkX, and Graphviz are installed:", "\nfrom chompack.pybase.plot import sparsity_graph\nsparsity_graph(symb, node_size=50, with_labels=False)\n", "The sparsity_graph routine passes all optional keyword arguments to NetworkX to make it easy to customize the visualization.\nIt is also possible to visualize the sparsity pattern using the spy routine which requires the packages Matplotlib, Numpy, and Scipy:", "\nfrom chompack.pybase.plot import spy\nfig = spy(symb, reordered=True)\n", "The supernodes and the supernodal elimination tree can be extracted from the symbolic factorization as follows:", "\npar = symb.parent()\nsnodes = symb.supernodes()\n\nprint(\"Id Parent id Supernode\")\nfor k,sk in enumerate(snodes):\n print(\"%2i %2i \"%(k,par[k]), sk)\n ", "The supernodal elimination tree can be visualized with the etree_graph routine if Matplotlib, NetworkX, and Graphviz are installed:", "\nfrom chompack.pybase.plot import etree_graph\netree_graph(symb, with_labels=True, arrows=False, node_size=500, node_color='w', node_shape='s', font_size=14)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
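The `cp.maxcardsearch` ordering used in these examples rests on a classical fact: on a chordal graph, maximum cardinality search produces a perfect elimination order. A dependency-free sketch of that idea follows; this illustrates the concept only, it is not chompack's implementation, and tie-breaking between equally good vertices is arbitrary:

```python
# Graph as adjacency sets: a 4-cycle 0-1-2-3 with chord 1-3, which is chordal.
G = {0: {1, 3}, 1: {0, 2, 3}, 2: {1, 3}, 3: {0, 1, 2}}


def maxcardsearch(G):
    # Maximum cardinality search: repeatedly visit the vertex with the
    # most already-visited neighbors (ties broken by dict order).
    visited = []
    while len(visited) < len(G):
        rest = [v for v in G if v not in visited]
        v = max(rest, key=lambda u: len(G[u] & set(visited)))
        visited.append(v)
    return visited[::-1]  # reversed visit order = candidate elimination order


def is_perfect_elimination(G, order):
    # For each vertex, its neighbors appearing later in the order must form
    # a clique; some order satisfies this iff the graph is chordal.
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        later = [u for u in G[v] if pos[u] > pos[v]]
        for i in range(len(later)):
            for j in range(i + 1, len(later)):
                if later[j] not in G[later[i]]:
                    return False
    return True


order = maxcardsearch(G)
print(order, is_perfect_elimination(G, order))
```

Running the same check on the chordless 4-cycle (drop the 1-3 chord) fails the clique test, which is how one detects non-chordality with this pair of routines.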
abrupt-climate/notebooks
ERA5_atmospheric_rivers.ipynb
apache-2.0
[ "from scipy import ndimage\nfrom scipy import stats\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nimport matplotlib.cm as cm\nfrom matplotlib.ticker import FormatStrFormatter\nimport numpy as np\n\nimport cartopy.crs as ccrs\n\nimport os\nfrom pathlib import Path\n\nfrom hypercc.data.box import Box\nfrom hypercc.data.data_set import DataSet\nfrom hypercc.units import unit\nfrom hypercc.filters import (taper_masked_area, gaussian_filter, sobel_filter)\nfrom hypercc.plotting import (\n plot_mollweide, plot_orthographic_np, plot_plate_carree,\n plot_signal_histogram, earth_plot)\nfrom hypercc.calibration import (calibrate_sobel)\nfrom hypercc.workflow import write_netcdf_3d\nfrom hyper_canny import cp_edge_thinning, cp_double_threshold\n\nimport netCDF4\nfrom skimage.morphology import flood_fill", "Enter your settings here", "#data_folder = Path(\"/home/bathiany/Sebastian/datamining/edges/Abrupt/hypercc/evaluation/AtmosphericRivers\")\ndata_folder = Path(\"/media/bathiany/Elements/obsdata/qvi\")\n\nyear=1998\nmonths='04'\n#months='01-04'\n#months='05-08'\n#months='09-12'\n\n\n## smoothing scales\nsigma_d = unit('100 km') # space\nsigma_t = unit('1 hour') # time\n\n### aspect ratio: all weight on space\ngamma = 1e10\n\n## date choice for illustration: 25 April 1998\ntimeind=26*24 + 12 #hourly data\n", "select data based on settings above\nNo editing below this point required", "period = str(year) + '_' + months\n\nfile = 'ERA5_qvi_Pacific_hourly_' + period + '.nc'\ndata_set = DataSet([data_folder / file ], 'qvi')\n\nscaling_factor = gamma * unit('1 km/year')\nsobel_delta_t = unit('1 year')\nsobel_delta_d = sobel_delta_t * scaling_factor\nsobel_weights = [sobel_delta_t, sobel_delta_d, sobel_delta_d]", "Load and inspect the data\nNext we define a box. The box contains all information on the geometry of the data. 
It loads the latitudes and longitudes of the grid points from the NetCDF file and computes quantities like resolution.", "from datetime import date, timedelta\n\nbox = data_set.box\n\nprint(\"({:.6~P}, {:.6~P}, {:.6~P}) per pixel\".format(*box.resolution))\nfor t in box.time[:3]:\n print(box.date(t), end=', ')\nprint(\" ...\")\n\ndt = box.time[1:] - box.time[:-1]\nprint(\"time steps: max\", dt.max(), \"min\", dt.min())\n\ndata = data_set.data\n\nlons = box.lon.copy()\nlats = box.lat.copy()\n\n# a look at the event\n\nfig = plt.figure(figsize=(20, 10))\nax = fig.add_subplot(111, projection=ccrs.Mercator())\npcm = ax.pcolormesh(\nlons, lats, data_set.data[timeind,:,:])\ncbar = fig.colorbar(pcm)\ncbar.ax.tick_params(labelsize=16)\n#ax.coastlines()\n#fig.colorbar(pcm, labelsize=10)\n\n# Smoothing\nsmooth_data = gaussian_filter(box, data, [sigma_t, sigma_d, sigma_d])\n\ndel data", "Sobel filtering\nThe Sobel filter has the same problem as the Gaussian filter, but the solution is easier. We just correct for the magnitude of the Sobel response by multiplying the longitudinal component by the cosine of the latitude.", "sb = sobel_filter(box, smooth_data, weight=sobel_weights)\npixel_sb = sobel_filter(box, smooth_data, physical=False)\n\ndel smooth_data", "Determine hysteresis settings", "signal = 1/sb[3]\n\n### set thresholds\n\nperc_upper=95\nperc_lower=90\n\nupper_threshold=np.percentile(signal, perc_upper)\nlower_threshold=np.percentile(signal, perc_lower)\n\n\ndel signal\n\n# use directions of pixel based sobel transform and magnitudes from calibrated physical sobel.\ndat = pixel_sb.transpose([3,2,1,0]).astype('float32')\ndel pixel_sb\ndat[:,:,:,3] = sb[3].transpose([2,1,0])\n\nmask = cp_edge_thinning(dat)\n#thinned = mask.transpose([2, 1, 0])\ndat = sb.transpose([3,2,1,0]).copy().astype('float32')\n\nedges = cp_double_threshold(data=dat, mask=mask, a=1/upper_threshold, b=1/lower_threshold)\nm = edges.transpose([2, 1, 0])\n\ndel mask\n\n#plot_signal_histogram(box, 
signal, lower_threshold, upper_threshold);\n\ncmap_rev = cm.get_cmap('gray_r')\nfig = plt.figure(figsize=(20, 10))\nax = fig.add_subplot(111, projection=ccrs.Mercator())\npcm = ax.pcolormesh(\nlons, lats, m[timeind], cmap=cmap_rev)\nax.coastlines()\nfig.colorbar(pcm)\n\n## remove spurious boundary effects (resulting from imposed periodicity)\nm[:,0:1,:]=0\nm[:,:,0:1]=0\n#m[:,np.size(m, axis=1)-1,:]=0\n#m[:,:,np.size(m, axis=2)-1]=0\n\nm[:,-2:-1,:]=0\nm[:,:,-2:-1]=0\n\n\nfig = plt.figure(figsize=(20, 10))\nax = fig.add_subplot(111, projection=ccrs.Mercator())\npcm = ax.pcolormesh(\nlons, lats, m[timeind], cmap=cmap_rev)\nax.coastlines()\nfig.colorbar(pcm)\n\n# load lsm\n\nlsmfile = netCDF4.Dataset(data_folder / \"ERA5_lsm_Pacific.nc\", \"r\", format=\"NETCDF4\")\nlsm = lsmfile.variables['lsm'][:,:,:]\n\n\n# show edges with coastlines\n\n#fig = plt.figure(figsize=(20, 10))\n#ax = fig.add_subplot(111, projection=ccrs.Mercator())\n#pcm = ax.pcolormesh(\n#lons, lats, m[timeind,:,:])\n#ax.coastlines()\n#fig.colorbar(pcm)\n\nfig = plt.figure(figsize=(20, 10))\nax = fig.add_subplot(111, projection=ccrs.PlateCarree())\npcm = ax.pcolormesh(\nlons, lats, m[timeind,:,:], cmap=cmap_rev)\nax.coastlines()\n#plt.show()\nfig.colorbar(pcm)\n\n### the next lines cut parts of land according to criteria in Dettinger et al., 2011 (Table 1)\n\n## all cells with little bit of land become 1\nlsm[lsm>0]=1\n\n## remove Western parts (islands)\nlsm[:,:,0:70]=0\n\n## remove Southern parts of coast (Mexico)\nlsm[:,99:141:]=0\n\n\nfig = plt.figure(figsize=(20, 10))\nax = fig.add_subplot(111, projection=ccrs.Mercator())\npcm = ax.pcolormesh(\nlons, lats, lsm[0,:,:])\nax.coastlines()\nfig.colorbar(pcm)\n\n## shift coast to the West \n#(in order to not miss atmospheric rivers that are a few pixels away; smoothing can destroy such links)\n\n## use gaussian filter to do that:\nsigma_lon_lsm = unit('33 km') # space\nsigma_lat_lsm = unit('33 km') # space\nsigma_t_lsm = unit('0 hour') # time\n\nlsm = 
gaussian_filter(box, lsm, [sigma_t_lsm, sigma_lat_lsm, sigma_lon_lsm])\n\nlsm[lsm>0]=1\n\n# remove Western parts (islands)\nlsm[:,:,0:70]=0\n\n### remove most Eastern column to avoid circular connectivity between East and West:\n#lsm[:,:,239:240]=0\n\nfig = plt.figure(figsize=(20, 10))\nax = fig.add_subplot(111, projection=ccrs.Mercator())\npcm = ax.pcolormesh(\nlons, lats, lsm[0,:,:])\nax.coastlines()\nfig.colorbar(pcm)\n\n## add to mask of detected edges\nmask_sum=m+lsm\n\nmask_sum[mask_sum>1]=1\n\nmask_sum=mask_sum.astype(int)\n\ndel m\n\nfig = plt.figure(figsize=(20, 10))\nax = fig.add_subplot(111, projection=ccrs.Mercator())\npcm = ax.pcolormesh(\nlons, lats, mask_sum[timeind,:,:])\nax.coastlines()\nfig.colorbar(pcm)\n\n## floodfill\n\nmask_floodfilled=mask_sum*0\nfor timeind_flood in range(0,np.size(mask_floodfilled, axis=0)):\n mask_floodfilled[timeind_flood] = flood_fill(mask_sum[timeind_flood,:,:], (0,239), 2)\n\ndel mask_sum\n\nfig = plt.figure(figsize=(20, 10))\nax = fig.add_subplot(111, projection=ccrs.Mercator())\npcm = ax.pcolormesh(\nlons, lats, mask_floodfilled[timeind,:,:])\nax.coastlines()\nfig.colorbar(pcm)\n\n## suppress unconnected edges\n\nmask_floodfilled[mask_floodfilled<2]=0\nmask_floodfilled[mask_floodfilled==2]=1\n\nfig = plt.figure(figsize=(20, 10))\nax = fig.add_subplot(111, projection=ccrs.Mercator())\npcm = ax.pcolormesh(\nlons, lats, mask_floodfilled[timeind,:,:])\nax.coastlines()\nfig.colorbar(pcm)\n\n## subtract lsm again\nmask_result=mask_floodfilled-lsm\n\nfig = plt.figure(figsize=(20, 10))\nax = fig.add_subplot(111, projection=ccrs.Mercator())\npcm = ax.pcolormesh(\nlons, lats, mask_result[timeind,:,:])\nax.coastlines()\nfig.colorbar(pcm)\n\ndel mask_floodfilled\n\n### output of m\noutfilename = 'ERA5_qvi_Pacific_hourly_' + \"detected_rivers_sigmaS\" + str(sigma_d.magnitude) + \"_sigmaT\" + str(sigma_t.magnitude) + \"_percupper\" + str(perc_upper) + \"_perclower\" + str(perc_lower) + '_' + period + 
\".nc\"\ndummyfile='dummy_hourly_' + period + '.nc'\n!cp $data_folder/$dummyfile $data_folder/$outfilename\nwrite_netcdf_3d(mask_result, data_folder / outfilename)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
PythonFreeCourse/Notebooks
week04/5_Builtins.ipynb
mit
[ "<img src=\"images/logo.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\" alt=\"לוגו של מיזם לימוד הפייתון. נחש מצויר בצבעי צהוב וכחול, הנע בין האותיות של שם הקורס: לומדים פייתון. הסלוגן המופיע מעל לשם הקורס הוא מיזם חינמי ללימוד תכנות בעברית.\">\n<span style=\"text-align: right; direction: rtl; float: right;\">פונקציות מובנות</span>\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">הקדמה</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אנו פוגשים הרבה בעיות בתכנות לעיתים קרובות ובמגוון מצבים:\n</p>\n\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>מציאת הערך הקטן או הגדול ביותר ברשימת ערכים.</li>\n <li>יצירת רשימת מספרים המתחילה בערך מסוים ומסתיימת בערך אחר.</li>\n <li>שינוי מיקום הערכים לפי סדר מסוים.</li>\n <li>המרה של ערך כלשהו למספר, למחרוזת, לבוליאני או לרשימה (נשמע מוכר?)</li>\n</ul>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כדי שלא נצטרך לכתוב שוב ושוב את אותו פתרון, שפות רבות מצוידות בכלים מוכנים מראש שמטרתם לפתור בעיות נפוצות.<br>\n פייתון מתגאה בכך שהיא שפה ש\"הבטריות בה כלולות\" (batteries included), היגד שנועד לתאר את העובדה שהיא מכילה פתרונות מוכנים לאותן בעיות.<br>\n במחברת זו נכיר חלק מהפונקציות שפייתון מספקת לנו במטרה להקל על חיינו.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n השם הנפוץ עבור הפונקציות הללו הוא <dfn>builtins</dfn>, ואפשר למצוא את התיעוד של כולן <a href=\"https://docs.python.org/3/library/functions.html\">כאן</a>. 
\n</p>\n\n<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\"> \n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n חשבו על פונקציות כאלו שאתם כבר מכירים משיעורים קודמים.<br>\n חלק מהפתרונות האפשריים: <span style=\"background: black;\">len, int, float, str, list, type</span>\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>\n\n<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl;\">\n <div style=\"display: flex; width: 10%; float: right; \">\n <img src=\"images/tip.png\" style=\"height: 50px !important;\" alt=\"טיפ!\"> \n </div>\n <div style=\"width: 90%\">\n <p style=\"text-align: right; direction: rtl;\">\n בהמשך המחברת ישנן דוגמאות עבור כל פונקציה חדשה שנכיר.<br>\n למען הבהירות אני מדפיס דוגמה בודדת, ואז אני משתמש בלולאה כדי להדפיס כמה דוגמאות ברצף.<br>\n הדוגמאות ירווחו בצורה מוזרה כדי שיהיה נוח להסתכל על הפלט. 
אל תשתמשו בריווחים כאלו בקוד שלכם.\n </p>\n </div>\n</div>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">מתמטיקה</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ישנן פעולות מתמטיות שנצטרך לעיתים תכופות, שאת חלקן אפילו מימשנו שוב ושוב לאורך הקורס.<br>\n נראה מה יש לארגז הכלים של פייתון להציע לנו.\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">ערך מוחלט</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הפונקציה <code>abs</code> מחזירה את הערך המוחלט של המספר שנעביר לה.<br>\n אם נעביר לה כארגומנט מספר שלם או עשרוני, היא תחזיר את המרחק של המספר מ־0 על ציר המספרים:<br>\n</p>", "print(abs(-5))\n\nnumbers = [5, -5, 1.337, -1.337]\nfor number in numbers:\n print(f\"abs({number:>6}) = {abs(number)}\")", "<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl;\">\n <div style=\"display: flex; width: 10%; float: right; \">\n <img src=\"images/tip.png\" style=\"height: 50px !important;\" alt=\"טיפ!\"> \n </div>\n <div style=\"width: 90%\">\n <p style=\"text-align: right; direction: rtl;\">\n המשמעות של <code dir=\"ltr\" style=\"direction: ltr\">:>6</code> ב־fstring היא \"דאג שיהיו לפחות 6 תווים, וישר את הערך לימין\".<br>\n אפשר להחליף את <code dir=\"ltr\" style=\"direction: ltr\">&gt;</code> בתו <code dir=\"ltr\" style=\"direction: ltr\">^</code> לצורך יישור לאמצע, או בתו <code dir=\"ltr\" style=\"direction: ltr\">&lt;</code> לצורך יישור לשמאל.\n </p>\n </div>\n</div>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">מקסימום ומינימום</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הפונקציות <code>max</code> ו־<code>min</code> מקבלות iterable, ומחזירות את האיבר הגבוה או הנמוך ביותר ב־iterable (בהתאמה).\n</p>", "numbers = [6, 7, 3, 4, 5]\nwords = ['apple', 'ginger', 'tomato', 'sushi']\nprint(max(numbers))\n\nprint(f\"max(numbers) = 
{max(numbers)}\")\nprint(f\"min(numbers) = {min(numbers)}\")\nprint(f\"max(words) = {max(words)}\")\nprint(f\"min(words) = {min(words)}\")", "<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl;\">\n <div style=\"display: flex; width: 10%; float: right; \">\n <img src=\"images/warning.png\" style=\"height: 50px !important;\" alt=\"Warning!\"> \n </div>\n <div style=\"width: 90%\">\n <p style=\"text-align: right; direction: rtl;\">\n Strings in Python are compared by the numeric values of the characters they are made of.<br>\n When an iterable contains strings, Python finds the maximum or minimum according to the numeric representation of the letters in those strings.<br>\n For this reason, alphabetical ordering will not return the expected value when both uppercase and lowercase letters are present: \n </p>\n </div>\n</div>", "words.append('ZEBRA')\nprint(f\"Minimum in {words} is {min(words)}??\")"
style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ואפשר להחליט על מספר הספרות אחרי הנקודה שלפיו העיגול יתבצע: \n</p>", "pi = 3.141592653589793\nround(pi, 3) # הפרמטר השני פה, 3, מייצג את הדיוק\n\nnumbers = [6, 3.1415, 0.9, -0.9, 0.5, -0.5]\nround_options = [-1, 1, 2, 3]\nfor number in numbers:\n for round_argument in round_options:\n result = round(number, round_argument)\n print(f\"round({number:>6}, {round_argument}) = {result}\")", "<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">המרות</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n טוב, על המרות (casting) כבר למדנו.<br>\n אבל בואו בכל זאת נראה כמה דוגמאות מגניבות.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אפשר להמיר מחרוזת לרשימה. הפעולה הזו תפריד כל אחת מהאותיות לתא נפרד ברשימה:\n</p>", "list('hello') # Iterable -> List", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n עוד פרט טריוויה נחמד הוא שלכל ערך בפייתון יש ערך בוליאני שקול.<br>\n בדרך כלל, ערכים ריקים שקולים ל־<code>False</code> וערכים שאינם ריקים שקולים ל־<code>True</code>:\n</p>", "bool_checks = [\n 'hello', '', 0, 1, -1, 0.0, 0.1, 1000, '\\n', ' ', [], {}, [1],\n]\n\nfor check in bool_checks:\n print(f\"bool({check!r:>7}) is {bool(check)}\")", "<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl;\">\n <div style=\"display: flex; width: 10%; float: right; \">\n <img src=\"images/tip.png\" style=\"height: 50px !important;\" alt=\"טיפ!\"> \n </div>\n <div style=\"width: 90%\">\n <p style=\"text-align: right; direction: rtl;\">\n המשמעות של <code dir=\"ltr\" style=\"direction: ltr\">!r</code> ב־fstring היא \"הצג את הערך בתצורתו הגולמית\" (raw).<br>\n לדוגמה, מחרוזות יוצגו עם הגרשיים משני צידיהן.\n </p>\n </div>\n</div>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אפשר להמיר ערכים בוליאניים למספר שלם:\n</p>", 
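Converting booleans to integers (`int(True)` is 1, `int(False)` is 0) enables a common counting idiom: summing booleans counts how many of them are true. A small sketch, not part of the original notebook:

```python
grades = [55, 90, 72, 100, 48]

# Each comparison yields True or False; True counts as 1 when summed.
passing = sum(grade >= 60 for grade in grades)
print(passing)  # 3
```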
"print(int(True))\nprint(int(False))", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n מה בנוגע לטריקים על מילונים?<br>\n אם יש לנו iterable שמכיל זוגות של ערכים, אפשר להפוך אותו למילון:\n</p>", "stock = [('apples', 2), ('banana', 3), ('crembo', 4)]\nprint(dict(stock))", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n וכטיפ כללי, אפשר להמיר את סוג הערכים שלנו לטיפוס שונה, בעזרת הפונקציה שנקראת על שם הטיפוס שאליו אנחנו רוצים להמיר:\n</p>", "print(int(5.5))\nprint(float('5.5'))\nprint(str(True))\nprint(bool(0))\nprint(dict([('name', 'Yam'), ('age', 27)]))\ntuple_until_4 = (1, 2, 3, 4)\nprint(list(tuple_until_4))\nprint(tuple('yo!'))", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אבל צריך לזכור שלא כל טיפוס אפשר להמיר לטיפוס אחר:\n</p>", "print(list(5))", "<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">מאחורי הקלעים של פייתון</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ישנן פונקציות מובנות רבות שמטרתן להנגיש לנו דברים שקורים מאחורי הקלעים של פייתון. במחברת זו נסביר על שלוש מהן.\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">id</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n על הפונקציה <code>id</code> כבר למדנו בעבר. 
It takes a value as an argument and returns its \"identity\" – another value that represents that object alone.<br>\n In the Python version we are using, this is the address of the value in the computer's <a href=\"https://he.wikipedia.org/wiki/%D7%96%D7%99%D7%9B%D7%A8%D7%95%D7%9F_%D7%92%D7%99%D7%A9%D7%94_%D7%90%D7%A7%D7%A8%D7%90%D7%99%D7%AA\">memory</a>.<br>\n Anyone who runs the following code will get different values, but no two identical values will ever be printed in the same run.<br>\n The reason four different values are always printed is that all the items in the list <var>values</var> differ from one another.\n</p>", "values = ['1337', [1, 3, 3, 7], 1337, ['1', '3', '3', '7']]\n\nfor value in values:\n id_of_value = id(value)\n print(f'id({str(value):>20}) -> {id_of_value}')", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n If we create a situation where two names point to exactly the same value, <code>id</code> will return the same result for both of them:\n</p>", "song1 = [\"I've\", 'got', 'a', 'lovely', 'bunch', 'of', 'coconuts']\nsong2 = song1 # Both names currently point to the same place\nsong3 = song1[:] # Equal value, but we duplicated the list so this name points elsewhere\nprint(f\"id(song1): {id(song1)}\")\nprint(f\"id(song2): {id(song2)} # Same id!\")\nprint(f\"id(song3): {id(song3)} # Another id!\")\n# Note that they behave accordingly:\nsong2.append(\"🎵\")\nprint(f\"song1: {song1}\")\nprint(f\"song2: {song2}\")\nprint(f\"song3: {song3}\")", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n This function can help us gain insight into how Python works in all sorts of situations.<br>\n It is hard to understand, for example, what happens in the following code:\n</p>", "collections_of_numbers = [[0, 0, 0]] * 3 # Create 3 lists of 3 items each\nprint(collections_of_numbers)\ncollections_of_numbers[0].append(100) # Append 100 to the first list only\nprint(collections_of_numbers)", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n A check with <code>id</code> will help us see that the inner lists inside <var>collections_of_numbers</var> all point to the same place:\n</p>", 
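Closely related to `id`: the `is` operator compares object identities (the same numbers `id` returns), while `==` compares values. A short sketch, not part of the original notebook:

```python
a = [1, 2, 3]
b = a        # a second name for the same object
c = a[:]     # a copy: equal value, different object

print(a == c)          # True  - same contents
print(a is c)          # False - different objects
print(id(a) == id(b))  # True  - b is the very same object as a
```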
"print(id(collections_of_numbers[0]))\nprint(id(collections_of_numbers[1]))\nprint(id(collections_of_numbers[2]))", "<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\"> \n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נסו לכתוב קוד של שורה אחת או יותר, שיחליף את השורה הראשונה בקוד שלמעלה.<br>\n גרמו לכך שהקוד שמוסיף את הערך 100 לרשימה הראשונה לא ישפיע על שאר הרשימות.\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">dir</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n גם על הפונקציה <code>dir</code> כבר למדנו. 
ראיתם אותה לאחרונה במחברת שעסקה בדוקומנטציה.<br>\n הפונקציה <code>dir</code> תחזיר לנו את כל הפעולות שאפשר לבצע על משתנה מסוים או על טיפוס מסוים.<br>\n נוכל להעביר לה כארגומנט את הערך שנרצה להבין מה הפעולות שאפשר לבצע עליו:\n</p>", "dir('hello')", "<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">או שנוכל להעביר לה ממש שם של טיפוס:</span>", "dir(str)", "<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">eval</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n על <code>eval</code> אנחנו מלמדים בעיקר כדי להזהיר ממנה – זו פונקציה שאחראית למחדלי אבטחה בכל שפת תכנות שבה היא קיימת.<br>\n אתם לא אמורים להשתמש בה בקורס, וכדאי לזכור שכשהתשובה היא <code>eval</code>, ברוב המוחלט של המקרים שאלתם את השאלה הלא נכונה.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אחרי ההקדמה המלודרמטית, נספר לכם ש־<code>eval</code> (מלשון evaluation) פשוט מקבלת קוד כמחרוזת, ומריצה אותו בעזרת פייתון.\n</p>", "x = []\neval(\"x.append('So lonely')\")\nprint(x)", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n לעיתים קרובות הפיתוי להשתמש ב־<code>eval</code> הוא גדול.<br>\n אפשר לתכנת, לדוגמה, את המחשבון מהתרגול בשבוע השני בצורה הפשוטה הבאה:\n</p>", "# לנסות בבית עם כפפות – בסדר. 
לא לכתוב בשום קוד אחר\nprint(eval(input(\"Please enter any mathematical expression: \")))", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אבל כשחושבים על זה – המשתמש שמתבקש להכניס תרגיל מתמטי, יכול להריץ כל קוד שיתחשק לו, והקוד הזה יבוצע!<br>\n הוא יכול לקרוא קבצים שנמצאים על מחשב או למחוק אותם, לחטט בסיסמאות, ולמעשה – לעשות ככל העולה על רוחו.<br>\n לקריאה נוספת על הסכנות, חפשו על <a href=\"https://en.wikipedia.org/wiki/Code_injection\">code injection</a>.\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">כלים שימושיים נוספים</span>\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">טווח מספרים</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n פעמים רבות אנחנו נתקלים במצבים שבהם אנחנו רוצים לעבור על כל המספרים מ־0 ועד ערך מסוים.<br>\n לדוגמה, כך פתרנו את התרגיל שמחשב את סכום המספרים עד מספר שהתקבל כקלט:\n</p>", "max_number = int(input())\n\ncurrent_number = 0\ntotal = 0\n\nwhile current_number <= max_number:\n total = total + current_number\n current_number = current_number + 1\n\nprint(total)", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n מטרת הפונקציה <code>range</code> היא לפתור לנו את הבעיה הזו בקלות.<br>\n תנו ל־<code>range</code> מספר כארגומנט, והיא תחזיר לכם iterable שמכיל את כל המספרים הטבעיים עד המספר שנתתם לה, ללא המספר האחרון:\n</p>", "for i in range(5):\n print(i)", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הפונקציה יודעת גם לקבל ערך שממנו היא תתחיל לספור:\n</p>", "for i in range(12, 14):\n print(i)", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n וגם ערך דילוג, שקובע על כמה מספרים <code>range</code> תדלג בכל פעם:\n</p>", "for i in range(0, 101, 10):\n print(i)", "<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl;\">\n <div style=\"display: flex; width: 10%; float: right; \">\n <img 
src=\"images/warning.png\" style=\"height: 50px !important;\" alt=\"אזהרה!\"> \n </div>\n <div style=\"width: 90%\">\n <p style=\"text-align: right; direction: rtl;\">\n הפונקציה <code>range</code> מזכירה את פעולת החיתוך מהשבוע השלישי, שבה משתמשים בסוגריים מרובעים ובנקודתיים.<br>\n <code>range</code>, לעומת חיתוך, היא פונקציה – קוראים לה בעזרת סוגריים עגולים, ומפרידים בין הארגומנטים שנשלחים אליה בעזרת פסיקים.\n </p>\n </div>\n</div>\n\n<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\"> \n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כתבו בעצמכם קוד שמחשב את סכום המספרים מ־0 ועד המספר שהתקבל כקלט.<br>\n השתמשו ב־<code>range</code>.\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">סידור</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n עד כה ארגז הכלים שלנו כלל רק את פעולת הסידור ששייכת לרשימות, <code>sort</code>.<br>\n אף שמעשית רוב פעולות הסידור מבוצעות על רשימות, לפעמים יעלה הצורך לסדר טיפוסי נתונים אחרים.<br>\n במקרים האלה נשתמש בפונקציה המובנית <code>sorted</code>:\n</p>", "sorted('spoilage')", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <code>sorted</code> מקבלת iterable, ומחזירה רשימה מסודרת של האיברים ב־iterable.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נראה דוגמה נוספת של סידור tuple.<br>\n שימו לב שטיפוס הנתונים שמוחזר מ־<code>sorted</code> 
הוא תמיד רשימה:\n</p>", "numbers = (612314, 4113, 1, 11, 31)\nsorted(numbers)", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n גם ל־<code>sorted</code> וגם לפעולה <code>sort</code> שיכולה להתבצע על רשימות, יש 2 פרמטרים שלא למדנו עליהם.<br>\n לפרמטר הראשון קוראים <em>reverse</em>, והוא הפשוט להבנה מבין השניים – ברגע שמועבר אליו <var>True</var>, הוא מחזיר את הרשימה מסודרת בסדר יורד:</p>", "sorted('deflow', reverse=True)", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הפרמטר השני, <em>key</em>, מסובך קצת יותר להבנה.<br>\n כשנעביר לפרמטר הזה פונקציה, הסידור של איברי ה־iterable יתבצע לפי הערך שחוזר מהפונקציה הזו עבור כל אחד מהאיברים שצריך למיין.<br>\n מבולבלים? נראה, לדוגמה, את הרשימה הבאה, שמורכבת משמות אנשי הסגל של אחד מהמחזורים הקודמים:\n</p>", "staff = [\"Dafi\", \"Efrat\", \"Ido\", \"Itamar\", \"Yam\"]", "<table style=\"font-size: 2rem; border: 0px solid black; border-spacing: 0px;\">\n <tr>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;\">0</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">1</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">2</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">3</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">4</td>\n </tr>\n <tbody>\n <tr>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 
10px; vertical-align: bottom; border: 2px solid;\">\"Dafi\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Efrat\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Ido\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Itamar\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Yam\"</td>\n </tr>\n <tr style=\"background: #f5f5f5;\">\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right;\">-5</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;\">-4</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;\">-3</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;\">-2</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: right; border-left: 1px solid #555555;\">-1</td>\n </tr>\n </tbody>\n</table>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אם ארצה לסדר את הרשימה הזו לפי האורך (<code>len</code>) של שמות כל אחד מאנשי הסגל, אשתמש ב־key בצורה הבאה:\n</p>", "staff = sorted(staff, key=len)\nprint(staff)", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n מה קרה שם בפועל?<br>\n הפונקציה <code>len</code> הופעלה על כל אחד מהאיברים. 
התוצאות מופיעות בתרשים הבא:\n</p>\n\n<table style=\"font-size: 2rem; border: 0px solid black; border-spacing: 0px;\">\n <tr>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;\">0</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">1</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">2</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">3</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">4</td>\n </tr>\n <tbody>\n <tr>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Dafi\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Efrat\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Ido\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Itamar\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Yam\"</td>\n </tr>\n <tr style=\"background: #f5f5f5; color: black;\">\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: 
center;\"><strong>4</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>5</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>3</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>6</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>3</strong></td>\n </tr>\n </tbody>\n</table>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בפועל, אנחנו מחזיקים עכשיו רשימה ראשית (שמות אנשי הסגל), ועוד רשימה משנית שבה מאוחסנים אורכי האיברים שברשימה הראשית.<br>\n פייתון תמיין את הרשימה המשנית, ובכל פעם שהיא תזיז את אחד האיברים בה, היא תזיז איתו את האיבר התואם ברשימה המקורית:\n</p>\n\n<table style=\"font-size: 2rem; border: 0px solid black; border-spacing: 0px;\">\n <tr>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;\">0</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">1</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid; border-bottom: 1px solid; background: #d1d1d1;\">2</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">3</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; 
text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">4</td>\n </tr>\n <tbody>\n <tr>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Dafi\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Efrat\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid; background: #d1d1d1\">\"Ido\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Itamar\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Yam\"</td>\n </tr>\n <tr style=\"background: #f5f5f5; color: black;\">\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center;\"><strong>4</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>5</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555; background: #d1d1d1\"><strong>3</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>6</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>3</strong></td>\n </tr>\n </tbody>\n</table>\n\n<table style=\"font-size: 2rem; border: 0px solid black; border-spacing: 0px;\">\n <tr>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; 
color: #777; text-align: left; border-left: 1px solid; border-bottom: 1px solid; background: #d1d1d1;\">0</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid; border-left: 1px solid #555555;\">1</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">2</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">3</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">4</td>\n </tr>\n <tbody>\n <tr>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid; background: #d1d1d1\">\"Ido\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Dafi\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Efrat\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Itamar\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Yam\"</td>\n </tr>\n <tr style=\"background: #f5f5f5; color: black;\">\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555; background: #d1d1d1\"><strong>3</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; 
font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>4</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>5</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>6</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>3</strong></td>\n </tr>\n </tbody>\n</table>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">הרשימה המשנית עדיין לא מסודרת. נמשיך:</span>\n<table style=\"font-size: 2rem; border: 0px solid black; border-spacing: 0px;\">\n <tr>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;\">0</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid; border-left: 1px solid #555555;;\">1</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">2</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">3</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid; background: #d1d1d1;\">4</td>\n </tr>\n <tbody>\n <tr>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid; border-left: 2px solid #555555;\">\"Ido\"</td>\n <td 
style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Dafi\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Efrat\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Itamar\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid; background: #d1d1d1\">\"Yam\"</td>\n </tr>\n <tr style=\"background: #f5f5f5; color: black;\">\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 0px solid #555555;\"><strong>3</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>4</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>5</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>6</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555; background: #d1d1d1\"><strong>3</strong></td>\n </tr>\n </tbody>\n</table>\n\n<table style=\"font-size: 2rem; border: 0px solid black; border-spacing: 0px;\">\n <tr>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid;\">0</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-bottom: 1px solid; border-left: 1px 
solid #555555; background: #d1d1d1;\">1</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">2</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">3</td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; color: #777; text-align: left; border-left: 1px solid #555555; border-bottom: 1px solid;\">4</td>\n </tr>\n <tbody>\n <tr>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid; border-left: 2px solid #555555;\">\"Ido\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid; background: #d1d1d1;\">\"Yam\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Dafi\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Efrat\"</td>\n <td style=\"padding-top: 8px; padding-bottom: 8px; padding-left: 10px; padding-right: 10px; vertical-align: bottom; border: 2px solid;\">\"Itamar\"</td>\n </tr>\n <tr style=\"background: #f5f5f5; color: black;\">\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 0px solid #555555;\"><strong>3</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555; background: #d1d1d1\"><strong>3</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 
1px solid #555555;\"><strong>4</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>5</strong></td>\n <td style=\"padding-left: 4px; padding-top: 2px; padding-bottom: 3px; font-size: 1.3rem; text-align: center; border-left: 1px solid #555555;\"><strong>6</strong></td>\n </tr>\n </tbody>\n</table>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n עכשיו הרשימה המשנית מסודרת, ואפשר להפסיק את פעולת הסידור.<br>\n הקסם הוא שאפשר להכניס ל־<code>key</code> כל פונקציה, ולקבל iterable מסודר לפי ערכי ההחזרה של אותה פונקציה עבור הערכים ב־iterable.\n</p>\n\n<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\"> \n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כתבו פונקציה שמקבלת כפרמטר את רשימת השמות של המשתתפות בכיתה.<br>\n הפונקציה תחזיר רשימה המורכבת מאותם שמות, מסודרים לפי סדר האלף־בית.<br>\n סדרו נכונה גם מקרים שבהם חלק מהשמות מתחילים באות גדולה, וחלק – באות קטנה.\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">צימוד ערכים</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הרבה פעמים נגיע למצב שבו יש לנו 2 iterables ואנחנו מעוניינים לעבור על הערכים שלהם, זה לצד זה.<br>\n נניח שיש לנו רשימה של ציורים, ורשימה של הציירים שציירו אותם:\n</p>", "paintings = ['Mona Lisa', 'The Creation of Adam', 'The 
Scream', 'The Starry Night']\nartists = ['Leonardo da Vinci', 'Michelangelo', 'Edvard Munch', 'Vincent van Gogh']", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n צורה אחת לעבור על שתי הרשימות תהיה כזו:\n</p>", "i = 0\nmax_iterations = len(paintings)\n\nwhile i < max_iterations:\n artist = artists[i]\n painting = paintings[i]\n print(f\"{artist} painted '{painting}'.\")\n i = i + 1", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nאבל אפשר להסכים על כך שהדרך הזו לא נוחה במיוחד.<br>\nאפשרות אחרת היא להשתמש ב־<code>zip</code>, שיצמיד בין שני הערכים ויצור לנו את המבנה הבא:\n</p>", "zipped_values = zip(paintings, artists)\nprint(list(zipped_values))", "<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl;\">\n <div style=\"display: flex; width: 10%; float: right; \">\n <img src=\"images/warning.png\" style=\"height: 50px !important;\" alt=\"אזהרה!\"> \n </div>\n <div style=\"width: 90%\">\n <p style=\"text-align: right; direction: rtl;\">\n אם שתי הרשימות אינן זהות באורכן, <code>zip</code> לא יתייחס לאיברים העודפים של הרשימה הארוכה יותר.\n </p>\n </div>\n</div>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nאחד השימושים האפשריים ל־<code>zip</code> הוא unpacking בתוך לולאת <code>for</code>:\n</p>", "for artist, painting in zip(artists, paintings):\n print(f\"{artist} painted '{painting}'.\")", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nוהיא אפילו לא מוגבלת ל־2 ארגומנטים בלבד:\n</p>", "paintings = ['Mona Lisa', 'The Creation of Adam', 'The Scream', 'The Starry Night']\nartists = ['Leonardo da Vinci', 'Michelangelo', 'Edvard Munch', 'Vincent van Gogh']\nyears = [1503, 1512, 1893, 1889]\n\nfor artist, painting, year in zip(artists, paintings, years):\n print(f\"{artist} painted '{painting}' in {year}.\")", "<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div 
style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\"> \n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n יש טיפוס נתונים שהיה מתאים יותר למקרה הזה מאשר 2 רשימות. מהו לדעתכם?\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nשימוש נפוץ ל־<code>zip</code> הוא במקרה שיש לנו שתי רשימות שמתאימות זו לזו, ויחד יכולות ליצור מילון.<br>\nאם רשימה אחת מתאימה להיות המפתחות במילון, והאחרת להיות הערכים באותו מילון, נוכל להשתמש ב־<code>zip</code> כדי לבצע את ההמרה:\n</p>", "paintings = ['Mona Lisa', 'The Creation of Adam', 'The Scream', 'The Starry Night']\nartists = ['Leonardo da Vinci', 'Michelangelo', 'Edvard Munch', 'Vincent van Gogh']\n\nartists_from_paintings = dict(zip(paintings, artists))", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n עכשיו נוכל לשאול מי צייר את המונה ליזה:\n</p>", "print(artists_from_paintings.get('Mona Lisa'))", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n או לעבור על המילון ולהדפיס את הערכים, כמו שעשינו לפני כן:\n</p>", "for painting, artist in artists_from_paintings.items():\n print(f\"{artist} painted '{painting}'.\")", "<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">המרת תווים למספרים ולהפך</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כפי שהסברנו במחברת קבצים בשבוע שעבר, לכל תו מוצמד מספר שמזהה אותו.<br>\n בעזרת הפונקציה <code>ord</code> נוכל לאחזר את הערך הזה:\n</p>", "print(ord('A'))\n\nchars = 'a !א'\nfor char in chars:\n 
print(f\"ord({char!r}) = {ord(char)}\")", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ויש גם דרך לעשות את הפעולה ההפוכה!<br>\n בעזרת הפונקציה <code>chr</code> נוכל לקבל את התו לפי המספר שמייצג אותו:\n</p>", "ascii_numbers = [97, 32, 33, 1488]\nfor ascii_number in ascii_numbers:\n # המקפים שם כדי שתראו את הרווח :)\n print(f\"chr({ascii_number:>4}) = -{chr(ascii_number)}-\")", "<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\"> \n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כתבו תוכנית שמדפיסה זוג ערכים עבור כל מספר מ־9,760 ועד 10,100.<br>\n הערך הראשון יהיה המספר, והערך השני יהיה התו שאותו המספר מייצג.\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">מניית איברים</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נניח שאנחנו רוצים לעבור על כל השורות בקובץ מסוים, ולהדפיס ליד כל שורה את המספר הסידורי שלה:\n</p>", "with open('resources/haiku.txt') as haiku:\n haiku_text = haiku.readlines()\n\nline_number = 0\nfor line in haiku_text:\n print(f\"{line_number}:\\t{line.rstrip()}\")\n line_number = line_number + 1", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n חייבת להיות דרך טובה יותר!<br>\n הפונקציה המובנית <code>enumerate</code> מאפשרת למתכנת להצמיד מספר רץ ל־iterable:\n</p>", "haiku_text_enumerated = enumerate(haiku_text)\nprint(list(haiku_text_enumerated))", "<p 
style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n השימוש הנפוץ ביותר ל־<code>enumerate</code> הוא בלולאות <code>for</code>:\n</p>", "for line_number, line in enumerate(haiku_text):\n print(f\"{line_number}:\\t{line.rstrip()}\")\n", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n טריק מדליק: אם בא לנו להתחיל למספר ממספר שהוא לא 0, אפשר להעביר ל־<code>enumerate</code> את המספר הזה כפרמטר:\n</p>", "for line_number, line in enumerate(haiku_text, 1):\n print(f\"{line_number}:\\t{line.rstrip()}\")\n line_number = line_number + 1", "<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">דוגמאות לשימושים</span>\n<span style=\"text-align: right; direction: rtl; float: right;\">מפת צופן קיסר</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n צופן קיסר היא שיטת הצפנה, שבה כל תו מוחלף בתו שנמצא 3 תווים אחריו באלף־בית של השפה.<br>\n בעברית, לדוגמה, האות א' תוחלף באות ד', האות ג' באות ו' והאות ת' באות ג'.<br>\n נבנה מפת פענוח לצופן קיסר בעזרת הפונקציות שלמדנו במחברת:\n</p>", "def create_chars_from_numbers(numbers):\n chars = []\n for number in numbers:\n chars.append(chr(number))\n return chars\n\n\ndef get_all_english_letters():\n first_letter = ord('a')\n last_letter = ord('z')\n all_letters_by_number = range(first_letter, last_letter + 1)\n all_letters = create_chars_from_numbers(all_letters_by_number)\n return all_letters\n\n\ndef get_ceaser_map():\n letters = get_all_english_letters()\n shifted_letters = letters[3:] + letters[:3]\n return dict(zip(letters, shifted_letters))\n\nget_ceaser_map()", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ורק בשביל הכיף, נבנה קוד שיאפשר לנו להשתמש במפה כדי להצפין מסרים:\n</p>", "def encrypt(message, encryption_map):\n encrypted = ''\n for char in message.lower():\n # If we can't find the character, assume it is not encrypted.\n encrypted = encrypted + encryption_map.get(char, char)\n return 
encrypted\n\nencryption_map = get_ceaser_map()\nencrypted_message = encrypt('This is the encrypted message!', encryption_map)\nprint(encrypted_message)\n\ndef create_decryption_map(encryption_map):\n    \"\"\"Actually just flip the keys and the values of the dictionary\"\"\"\n    decryption_map = {}\n    for key, value in encryption_map.items():\n        decryption_map[value] = key\n    return decryption_map\n\ndef decrypt(message, decryption_map):\n    decrypted = ''\n    for char in message.lower():\n        # If we can't find the character, assume it is not encrypted.\n        decrypted = decrypted + decryption_map.get(char, char)\n    return decrypted\n\ndecryption_map = create_decryption_map(encryption_map)\ndecrypted_message = decrypt(encrypted_message, decryption_map)\nprint(decrypted_message)", "<span style=\"text-align: right; direction: rtl; float: right;\">סכימה מדורגת</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n    נבנה תוכנה קצרה שמאפשרת למשתמש להזין מספרים, מסדרת אותם בסדר הפוך, ומדפיסה עבור כל איבר את סכום האיברים עד המיקום של אותו איבר:\n</p>", "def convert_to_integers(numbers):\n    integers = []\n    for number in numbers:\n        integers.append(int(number))\n        \n    return integers\n\n\nnumbers = input(\"Please enter numbers separated by ',': \").split(',')\nintegers = convert_to_integers(numbers)\nintegers.sort(reverse=True)\n\nfor i, number in enumerate(integers):\n    current_sum = sum(integers[:i+1])\n    print(f\"The sum until {number} is {current_sum}\")", "<span style=\"text-align: right; direction: rtl; float: right;\">ממוצע ציונים</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n    קבלו רשימת שמות תלמידים, ועבור כל תלמיד את רשימת הציונים שלו.<br>\n    הדפיסו את שם התלמיד שממוצע הציונים שלו הוא הגבוה ביותר, לצד הציונים שלו.\n</p>", "def get_grades(student_name):\n    grades = []\n    grade = input(f'Please enter a grade for {student_name}: ')\n\n    while grade.isdecimal():\n        grades.append(int(grade))\n        grade = input(f'Please enter another 
grade for {student_name}: ')\n\n return grades\n\n\ndef get_students():\n students = []\n student = input('Please enter a student: ')\n\n while student != '':\n students.append(student)\n student = input('Please enter another student: ')\n\n return students\n\n\ndef get_students_and_grades():\n grades = []\n students = get_students()\n for student in students:\n student_grades = get_grades(student)\n grades.append(student_grades)\n\n return zip(students, grades)\n\n\ndef get_average_grade(student_and_his_grades):\n grades = student_and_his_grades[1]\n if len(grades) == 0:\n return 0\n\n return sum(grades) / len(grades)\n\n\nstudents_and_grades = get_students_and_grades()\nstudents_sorted_by_grades = sorted( # מפצלים שורה כדי שלא יהיו שורות ארוכות מדי\n students_and_grades, key=get_average_grade, reverse=True\n)\nbest_student_name, best_grade = students_sorted_by_grades[0]\nprint(f\"The best student is {best_student_name} with the grades: {best_grade}\")", "<span style=\"align: right; direction: rtl; float: right; clear: both;\">תרגילים</span>\n<span style=\"align: right; direction: ltr; float: right; clear: both;\">אותיות או לא להיות</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nכתבו תוכנה שמדפיסה את המספר הסידורי של כל אות באלף־בית האנגלי, מהסוף להתחלה.<br>\nהשתמשו בכמה שיותר פונקציות מובנות בדרך.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nהפלט אמור להיראות כך:\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nאנחנו הצלחנו להשתמש ב־5 פונקציות שנלמדו במחברת הזו, בקוד שאורכו 2 שורות.\n</p>\n\n<span style=\"align: right; direction: ltr; float: right; clear: both;\">מלחמה וזהו</span>\n<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/warning.png\" style=\"height: 50px !important;\" alt=\"אזהרה!\"> \n </div>\n 
<div style=\"width: 90%\">\n <p style=\"text-align: right; direction: rtl; clear: both;\">\n זהו תרגיל ברמת קושי גבוהה, שמערב נושאים רבים.<br>\n הרגישו בנוח להיעזר במתרגלים שלכם.\n </p>\n </div>\n</div>\n\n<p style=\"text-align: right; direction: rtl; clear: both;\">\n מצאו את עשר המילים הנפוצות ביותר בספר \"מלחמה ושלום\", והדפיסו אותן למסך מהמילה הנפוצה ביותר למילה הנפוצה הכי פחות.<br>\n ליד כל מילה הדפיסו את מספר המופעים שלה בספר.<br>\n הספר נמצא בקובץ war-and-peace.txt בתוך התיקייה resources.\n</p>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jrmontag/Data-Science-45min-Intros
embeddings/dense-tweets/dense-tweets_rst.ipynb
unlicense
[ "DenseTweets\nJosh Montague, 2017-09-29\nHow language embedding models might offer an opportunity for more robust and insightful modeling of short texts.", "import gzip\nimport json\nimport logging\nimport re\nimport sys\n\nfrom gensim.models import Word2Vec\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np\nfrom scipy.spatial import distance\nimport pandas as pd\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer\nfrom sklearn.datasets import make_blobs\nimport seaborn as sns\n\nfrom tweet_parser.tweet import Tweet\n\n%load_ext autoreload\n%autoreload 2\n\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', \n                    stream=sys.stderr, level=logging.DEBUG)", "To motivate a new approach to text modeling, consider a typical approach to modeling on text data...\nA classic approach to language modeling\nThe standard approach is to:\n- acquire data\n- tokenize and count tokens for each document\n- optionally transform those counts and end up with a document-term matrix\nThis matrix is the representation of the observed data; we can use it to surface patterns (cluster previously observed data), or apply it to new data as a label (assign a cluster label to new data).\nNote for readers: to get this to work with your own data, replace the input file string below with a file of your own newline-delimited Tweets from one of the Twitter APIs. 
Later, there will be additional file paths and files that you will need to modify or create.", "input = []\nwith gzip.open('/mnt3/archives/twitter/2017/09/30/13/twitter_2017-09-30_1304.gz','r') as infile:\n    for i,line in enumerate(infile):\n        try:\n            tw = Tweet(json.loads(line.decode()))\n        except json.JSONDecodeError:\n            continue\n        # strip URLs and numbers for demo purposes\n        # https://bit.ly/PyURLre\n        text = re.sub('(https?://)?(\\w*[.]\\w+)+([/?=&]+\\w+)*', ' ', tw.text) \n        text = re.sub('\\\\b[0-9]+\\\\b', ' ', text)\n        input.append(text)\n        # grab just a handful of tweets\n        if i == 1000:\n            break\n\ninput[:10]", "The vectorization step (specifically, the .fit_transform() step) is when we create a model of our corpus. The fit part creates the model and the transform part returns a new representation of the corpus according to that model. The arguments we choose in the vectorizer dictate things like tokenization choices as well as (optionally) the final dimensionality of the space. \nLater, we can (and will!) use the vectorizer to transform new text, and the document-term matrix to measure and group observations.\nIf we use the defaults of the vectorizer, we'll get things like lowercasing, \"word-boundary\" tokenization, and keep every 1-gram feature that is observed in the corpus.", "# default settings\nvec = CountVectorizer()\ndtm = vec.fit_transform(input)\n\n# what does the input space look like?\ndtm\n\nprint('data matrix is {:.1%} non-zero values'.format(dtm.count_nonzero()/(dtm.shape[0] * dtm.shape[1])))", "That low % above is what people mean when they say a feature matrix or the space of the data is sparse. \nAs an aside, note that the data structure is also technically called a \"sparse matrix,\" which is a little confusing. This particular data structure is usually an efficient optimization for models. So, while that's fine, the content sparsity of the matrix is not! 
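As a sanity check on that sparsity calculation, here is a tiny self-contained sketch (a hypothetical three-document corpus, not the Tweet data above) that reproduces the same measurement:

```python
from sklearn.feature_extraction.text import CountVectorizer

# hypothetical toy corpus, purely to illustrate the measurement
docs = ['the dog saw the cat',
        'the cat saw the dog',
        'a completely different sentence here']

toy_vec = CountVectorizer()
toy_dtm = toy_vec.fit_transform(docs)  # scipy sparse document-term matrix

# fraction of cells holding a non-zero token count
density = toy_dtm.count_nonzero() / (toy_dtm.shape[0] * toy_dtm.shape[1])
print('toy matrix is {:.1%} non-zero values'.format(density))
```

Even this toy corpus is half zeros; with thousands of Tweets and tens of thousands of token features, the non-zero fraction collapses toward zero.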
Models can learn poor representations of data when the dimensionality of the feature space is high and the amount of data is low (see also: this session).\nNevertheless, let's get a better mental model of the document-term matrix...", "# dataframes have nice reprs\ndtm_df = pd.DataFrame(dtm.todense(), columns=[x for x in vec.get_feature_names()])\ntweet_count = len(dtm_df)\ndtm_df.head()", "For now, put aside that we didn't strip stopwords or do much of the preprocessing that we sometimes do. The general approach is the same, while the specific features would vary a bit.\nIn this labeled matrix (technically a dataframe), the vector representation of each tweet (row) is now the linear combination of the corresponding set of word features, each with a coefficient that is the number of occurrences of the word. \nNote that this representation has no sense of word ordering - \"the dog saw a cat\" has the same vector representation as \"the cat saw a dog\".\nFor example, our first tweet:", "input[0]", "has a vector representation with word coefficients (counts) that look like:", "dtm_df.head(1).values[0][:500]", "Note: in this illustration, I'm suggesting that the Tweet vector is the linear combination of all the token vectors. That is one of many ways you can do this. For example, another common approach is for the Tweet vector to be the (arithmetic) mean of the token vectors.\nMost of the token coefficients are zero! This is really unhelpful. There isn't much useful information in a zero - we can't depend on a model to pick up meaning based on small changes around zero. 
And if many unrelated things are all \"zero,\" then we may end up calling them related on accident since they're at the same point.\nUltimately that means the only information we have about this tweet vector is encoded in the non-zero dimensions (and linear combination) below:", "# how does this represent some of our tweets?\n# compare sampled text array to sampled dtm to make sure they're aligned\n\nfor i,doc in enumerate(dtm.toarray()): \n idx = [int(x) for x in np.nonzero(doc)[0]]\n print(\"vector: (\", end='')\n for x in idx:\n print(vec.get_feature_names()[x] + ', ', end='')\n print(')', end='')\n print('\\n[doc: {}]'.format(input[i].replace('\\n',' ')))\n print()", "This situation gets even worse if we apply the \"100x\" rule of thumb to limit our feature count (pdf, Rule #21).", "# 100x obs for each feature\nfeats = dtm.shape[0] // 100\n\n# kwargs uses the max_features most common terms\nvec_small = CountVectorizer(max_features=feats)\ndtm_small = vec_small.fit_transform(input)\n\ndtm_small\n\ndtm_small_df = pd.DataFrame(dtm_small.todense(), columns=[x for x in vec_small.get_feature_names()])\n\ndtm_small_df.head()", "We can already see that this is going to turn out poorly. By choosing the \"biggest\" coefficients in our feature engineering, we've reduced the document vectors to approximately stopwords only. This is an exaggerated example, but the principle is still correct.", "# how does this represent some of our tweets?\n# compare sampled text array to sampled dtm to make sure they're aligned\n\nfor i,doc in enumerate(dtm_small.toarray()): \n idx = [int(x) for x in np.nonzero(doc)[0]]\n print(\"vector: (\", end='')\n for x in idx:\n print(vec_small.get_feature_names()[x] + ', ', end='')\n print(')', end='')\n print('\\n[doc: {}]'.format(input[i].replace('\\n',' ')))\n print()", "Note that the order of tokens has no impact on the ultimate tweet vector - only the tokens and their counts (or, if we used a tfidf vectorizer the normalized counts). 
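To make the order-insensitivity concrete, here is a minimal sketch (using the same toy sentences as earlier, not the Tweet corpus) showing that two documents containing the same tokens in different orders map to identical count vectors:

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ['the dog saw a cat',
        'the cat saw a dog']

vec_toy = CountVectorizer()
bow = vec_toy.fit_transform(docs).toarray()

# same multiset of tokens -> exactly the same bag-of-words vector
print((bow[0] == bow[1]).all())  # prints True
```

Any model built on these vectors literally cannot distinguish the two sentences.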
\nThis approach has two problems for unsupervised learning like clustering, and both are caused by the high sparsity of our observed document-term matrix:\n1. the model of the observed data is not robust \n - a) small changes in coefficients lead to very different vectors\n - b) we will typically further truncate the feature space which loses some of the already low amount of information\n2. the model doesn't apply to new observations well \n - a) the majority of our newly observed tokens will not be present in the model\nVisualizing sparsity\nWe can highlight this issue (and motivate the next steps) by appealing to our visual sense and intuition.\nImagine we want to use clustering to simplify the representation of our observations - instead of coordinate pairs for each observation, we want a single label. \nIntuitively, when grouping a set of real-world data points into clusters we expect there to be some local variation of non-zero values within higher-density regions, and then some sort of gap between these high-density regions. In two dimensions, we can visualize this well. Imagine that each point below represents an observation of something (like a weather measurement) in two dimensions (like temperature and humidity).", "scatter_kwargs=dict(s=100, alpha=0.5)\n\nX, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=42)\nX = pd.DataFrame(X, columns=['temp','humidity'])\n\n# might as well use some made up labels for our made up data!\nX.plot.scatter(x='temp', y='humidity', **scatter_kwargs);", "The specific algorithm we apply in an attempt to discover these clusters isn't too important. The point is that for data that looks like that above, most algorithms would identify those two blobs as unique clusters.\nIn our language model, the analogous plot to humidity vs. temp would be word1 vs. word2. 
But as we saw above, those observations appear mostly at (0,0) for any pair of word1 and word2.", "print('total tweets: {}'.format(tweet_count))\n\n# choose other slices for fun\ns = slice(500,505)\n\nsns.pairplot(dtm_df.iloc[:,s], diag_kind='kde', plot_kws=dict(s=200,alpha=0.5))", "That most of these data points are at (0,0) means that most documents don't include either of these words. Further, none of these words co-occur in a document (would be at (1,1), or (n,n)), and none of them appear more than one time.\nIf we truly believe that there are meaningful patterns in these Tweets (\"similarity,\" \"communities,\" \"patterns,\" however you might describe it), then we need a way to look at - and compare - these data points in a different space.\nEffectively, what we're seeking with a new language model is a way to smear out those observations that are currently mostly sitting at (0,0) in a way that has meaning (i.e. it won't be helpful to randomly distribute them in space).\nThis is the goal of #DenseTweets.\n#DenseTweets\nThe alternative approach proposed here makes the following assumption: we can use (or create) a new feature space for model training and inference where we'll have less risk of spurious results (\"curse of dimensionality\"), as well as obtain richer semantic structure. This should allow our model results to be more robust to variations in input, as well as be more robust in tasks such as unsupervised clustering.\nHow do we make this new feature space? \n\n\nFirst of all, we use as large a training corpus as we can (to see all of the words and uses of those words). This, by itself, doesn't solve the issues we highlighted above, however. 
\n\n\nSecond, instead of representing the model in the feature space of tokens (words), we use a new, abstract space and iteratively train a model that ultimately positions tokens (words) within that space in such a way as to encode their semantic relationship in terms of their distance and position relative to other words. \n\n\nFor example, dog and puppy should be relatively \"near\" each other in the lower-dimensional space, while dog and cabinet should not necessarily be near each other. Similarly, the positional difference between e.g. man and woman should be similar to that of boy and girl because they represent the same comparison (for a binary view of gender that is likely common in text), while differing only in age. These are the things we mean by semantic relationships, the relative \"meaning in language\".\nSure, but how do we do that, really?\nHow word vectors work should be it's own separate RST, but this is my favorite write up of it. For now, sit with this inadequate explanation: we assume that all our tokens (words) can be represented as a distribution over the fixed dimensionality space (typically a few hundred dimensions). The linear combination of weights in each dimension is the distributed representation of the word. The model is a shallow, dense neural network (multi-layer preceptron), which is optimized by e.g. SGD against a loss function which is roughly seeking to maximize the conditional probability of observing a specific word given the input of the surrounding (or preceding) words. \nWhat does it produce?\nIn the end, you basically end up with a giant matrix that represents the distributed weights of (most of) the observed tokens. If we previously thought of our data matrix representation as being Tweets (rows) by word features (columns), you can think of the new matrix representation as being words (rows) by calculated feature dimensions (columns). 
\nTo create the vector representation of a sentence (or Tweet), you now combine the individual word vectors in some way (like summing them, or taking their vector mean).\nThis lower-dimensionality representation of word relationships is called a \"word embedding.\" I'm intentionally glossing over a ton of detail here! Again, if you're curious, I recommend reading this edition of the morning paper. \nA library in two acts\nProject #DenseTweets works toward this goal in two phases:\n1. use a pre-trained language model from another data source (to project new tweets into a new dense space)\n2. create our own language model from a twitter data corpus that we can then use for both additional modeling and inference on new data \nWe'll walk through the 0.0.1 version of this code by demonstrating those two phases.\n1. Use an existing embedding model (on observed data)\nGoogle News corpus\nThe original paper on the word2vec family of algorithms included a pre-trained model based on Google News data (100 billion words) for 3 million words and phrases. You can simply download this file. \nImportantly, we must remember that the way language is used in news articles is different from that of Tweets. We'll work on this in step 2. Nevertheless, we think the overlap of the two languages is non-zero, so there should be some value in this approach. \nBecause the GNews dataset is so accessible, it has a helper function to load it.", "import densetweets as dent\n\n# this takes ~1 min to load\ngn_model = dent.load_GNews_model()\n\ngn_model", "Once we have this representation of language, we can take our previous observations (tweets) and project them into this new space. The method below that does this ( .get_summary_vector() ) makes some choices about how to split a Tweet into words, and how to combine those words into a Tweet summary vector. 
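The combination step can be sketched like this — note this is only a guess at the kind of logic behind .get_summary_vector(), not densetweets' actual implementation, and the toy_model dict and helper function below are hypothetical stand-ins:

```python
import numpy as np

# stand-in for a trained model: token -> vector lookups
# (a real model would be a gensim KeyedVectors object with 300 dimensions)
toy_model = {
    'dense':  np.array([1.0, 0.0]),
    'tweets': np.array([0.0, 1.0]),
}

def toy_summary_vector(model, tokens):
    # average the vectors of the tokens the model knows,
    # silently skipping out-of-vocabulary tokens
    known = [model[t] for t in tokens if t in model]
    if not known:
        return np.zeros(2)
    return np.mean(known, axis=0)

print(toy_summary_vector(toy_model, ['dense', 'tweets', 'oov_token']))
```

The out-of-vocabulary skip matters: as the logging below shows, many Tweet tokens never appeared in the Google News corpus.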
These are configurable but have sensible defaults.\nWith verbose logging, this also outputs some useful insight into the construction of our Tweet summary vector. In particular, we can see which words aren't in the model (remember this model wasn't created from a Twitter corpus), and also the fraction of tokens in the Tweet that contribute to the summary vector.", "for i,tw in enumerate(input[:10]):\n # tokenize input text\n tokens = dent.nltk_tweet_tokenizer(tw)\n # show first 5 dimensions of each summary vector\n summary = dent.get_summary_vector(model=gn_model, token_list=tokens)[:5]\n print('tweet #{}: {} ...'.format(i, summary))", "The current implementation uses either the KeyedVector or Word2Vec model from gensim so those docs are the best reference for methods and attributes. The main highlights are token lookup and similarity measurement.", "# single-word vector lookup is dict-like (only showing first 20 of 300 dimensions)\ngn_model['colorado'][:20]\n\n# we have to account for any words that were never seen in the GNews corpus\ntry:\n gn_model['adsfaksjdfhlakjshf']\nexcept KeyError as e:\n print(e)", "The model exposes a .most_similar() method, which returns the topn terms that are closest to the given word in the 300-dimensional feature space. These should be words that carry similar semantics to the given word.", "gn_model.most_similar('robot', topn=5)", "The densetweets library also provides access to a number of useful internal methods for passing data around. 
Most can be overridden, but most have sensible defaults and can be used without modification.", "# use a minimal fake tweet\ntiny_tw_s = \"\"\"\n{\"postedTime\": \"1999-07-18T23:25:04.000Z\", \n\"body\": \"N) A Tweet with explicit geo coordinates https://t.co/d7d7d7d7d7\", \n\"actor\": {\"displayName\": \"jk no\"}, \n\"id\": \"tag:search.twitter.com,2005:111111111111111\"}\n\"\"\"\n\n# tweet parsing is format agnostic and managed by a dedicated library\ntw = dent.parse_tweet(tiny_tw_s)\n\ntype(tw)\n\n# for now, we only parse the tweet text\nprint(tw.text)\nprint('-'*10)\n\n# the default tokenizer is NLTK's TweetTokenizer\ntokens = dent.extract_tokens(tw)\ntokens", "The .get_summary_vector() method encodes the specific mapping of tokens to summary. It's currently just the arithmetic vector mean.", "# calculate a summary vector in the dimensionality of the pretrained model\n# (only display the first few dimensions)\ndent.get_summary_vector(gn_model, tokens)[:10]", "Modeling on existing data\nOnce we can calculate summary vectors for any Tweet, we can also apply various models to the data we have on hand. \nFor example, we can calculate similarity between Tweets. In the tiny collection of data, the similarity metric often ends up presenting as a language classifier! \nIn this calculation, we use scipy's cosine distance - \"small distance\" is \"more similar.\"", "print('* calculating cosine distance from: \\n\\n{}\\n'.format(input[0]))\nprint('='*100 + '\\n')\nprint('dist. -- tweet')\n\nv1 = dent.get_summary_vector(gn_model, input[0].split()) \nfor text in input[:10]: \n v2 = dent.get_summary_vector(gn_model, text.split()) \n print('{:.3f} -- {}\\n'.format(distance.cosine(v1, v2), text.replace('\\n',' ')))", "Great! Now that we have this different representation there are many explorations to consider, like differences in user or tweet clustering. \nWhile we're setting up some tools for that work, that's not the goal for right now. 
First, we want to continue working on our points above. \n2. Training a new embedding model from our data\nIt's great to get up and running with a pre-trained model. But, we also want to make our own. \nThe specific reason here is that we have a strong hunch that the way language is used on twitter is somewhat different from that in news articles. If we can build a language model that incorporates the colloquialisms and nuance of twitter, our application of such a model to new data should lead to more robust results.\nTraining this kind of word embedding model is a relatively slow process, but it's achievable. For example, the last model I trained during hackweek was on about half a day of the 10% stream (in the ballpark of 25M Tweets) and it took about 12 hours to run. In the current implementation, I believe the main bottleneck is the JSON parsing and string tokenization. I'll work on making these faster in a later version :) \nTo demonstrate, we'll train a model on a very small sample of 10,000 Tweets. Note that this will be a pretty bad model for any task to which we might want to apply it! Ideally, we want a lot more data. But this will do for a demo.", "# 10k newline-delimited tweet records from the api\ntweet_file = 'rdata/10000-tweets.json'\n\nsm_model = dent.create_model(tweet_file)", "Now, we have another word embedding model for which we can use all the tools shown previously. Remember that this model isn't going to be very good - in particular, it won't have a very large vocabulary.", "# stopwords gonna stopword...\nsm_model.most_similar('the')\n\n# can serialize it for later\nsm_model.save('rdata/small.model')\n\n# can reinstantiate it from disk\nsm_model_2 = dent.load_model('rdata/small.model')\n\nsm_model_2.most_similar('the')", "But, the real utility of this technique is in training such a model on a large corpus. Due to some technical challenges, this is the largest model I was able to train during hackweek.
It's about half a day of the 10% stream.", "model = dent.load_model('rdata/2017-08-25-1_2_hr.model')\n\nmodel\n\nfor x in ['dog', 'cat', 'man', 'woman', '#MAGA', '#BLM', 'baseball', 'hockey']:\n print('-- {} --'.format(x))\n print(model.most_similar_cosmul(x, topn=5))\n print()", "Wrap up\nThis session doesn't include any mind-blowing results yet, but I'm betting that language modeling based on word (and other) embeddings will enable us to do new and valuable things with our text data. densetweets is hopefully a first, tooling-focused step in that direction! I plan to continue work on this code, and will open-source a version that can be pip-installed soon. Stay tuned." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Planet-Nine/cs207project
Paper/SGD algorithm paper.ipynb
mit
[ "Stochastic Gradient Descent and its Optimizations in Scikit Learn\nGroup: Planet-Nine\nAuthors: Harold Wang, Sarah Wellons, Rodrick Kuate Defo\nProblem Setting & Algorithm\nIntroduction\nStochastic Gradient Descent (SGD) is an approach to solving linear classification problems such as (linear) Support Vector Machines and Logistic Regression. Even though SGD has been around in the machine learning community for a long time, it has received a considerable amount of attention only recently, in the context of large-scale learning.\nThe advantages of Stochastic Gradient Descent are:\nEfficiency and \nEase of implementation.\nThe disadvantages of Stochastic Gradient Descent include:\nSGD requires a number of hyperparameters such as the regularization parameter and the number of iterations and\nsensitivity to feature scaling.\nProblem Setting\nGiven a set of training examples $(x_1,y_1),\\cdots,(x_n,y_n)$ where $x_i\\in R^m$ and $y_i\\in\\{-1,1\\}$, the goal is to learn a linear scoring function $f(x)=w^Tx+b$ with model parameters $w\\in R^m$ and intercept $b\\in R$. In order to make predictions, we simply look at the sign of $f(x)$. The regularized training error, which is the quantity stochastic gradient descent actually tries to minimize, is \n\\begin{align}\nE(w,b)&=\\frac{1}{n}\\sum_{i=1}^n L(y_i,f(x_i))+\\alpha R(w)\n\\end{align}\nwhere $L$ is a loss function and $R$ is a penalty term for model complexity; $\\alpha>0$ is a positive regularization parameter. Different choices for $L$ entail different classifiers such as\nHinge: (soft-margin) Support Vector Machines, Log: Logistic Regression, Least-Squares: Ridge Regression, Epsilon-Insensitive: (soft-margin) Support Vector Regression. All of the above loss functions can be regarded as an upper bound on the misclassification error (Zero-one loss).
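That upper-bound claim is easy to check numerically. The sketch below uses the base-2 logistic loss (as scikit-learn does in its loss-comparison plot, so the bound also holds at zero margin); the function names are ours, not scikit-learn's.

```python
import numpy as np

def zero_one(z):
    # z = y * f(x); a point counts as misclassified when the margin is <= 0
    return 1.0 if z <= 0 else 0.0

def hinge(z):
    return max(0.0, 1.0 - z)

def logistic(z):
    # base-2 logistic loss, so that logistic(0) = 1 = zero_one(0)
    return np.log2(1.0 + np.exp(-z))

# check the upper-bound property on a grid of margins
margins = np.linspace(-3.0, 3.0, 61)
assert all(hinge(z) >= zero_one(z) for z in margins)
assert all(logistic(z) >= zero_one(z) for z in margins)
print("hinge and logistic losses upper-bound the zero-one loss on this grid")
```

This is what makes minimizing these surrogate losses a sensible proxy for minimizing the (non-convex, non-differentiable) misclassification count.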
The choices for regularization terms include: L2 norm $R(w):=\\frac{1}{2}\\sum w_i^2$, L1 norm $R(w):=\\sum |w_i|$, and elastic net $R(w):=\\frac{\\rho}{2}\\sum w_i^2+(1-\\rho)\\sum|w_i|$.\nThe Stochastic Gradient Descent Algorithm\nThe SGD algorithm takes an initial weight vector $w$ and intercept $b$. At each time step of the SGD algorithm, $w$ and $b$ are updated using data from a single sample\n\\begin{align}\nw&\\leftarrow w-\\eta_t(\\alpha\\frac{\\partial R(w)}{\\partial w}+\\frac{\\partial L(w^Tx_i+b,y_i)}{\\partial w})\\\nb&\\leftarrow b-\\eta_t(\\frac{\\partial L(w^Tx_i+b,y_i)}{\\partial b})\n\\end{align}\nThe algorithm runs through the data set one sample at a time until it has gone through the whole dataset, and then repeats the process. SGD terminates after a fixed number of iterations through the dataset. Theoretically, the algorithm is guaranteed to converge if $\\eta_t$ satisfies the Robbins-Monro conditions\n\\begin{align}\n\\sum_{t=1}^\\infty \\eta_t =\\infty, \\sum_{t=1}^\\infty \\eta_t^2 <\\infty\n\\end{align}\nLetting the algorithm run for more iterations generally means a higher chance of convergence, but this is not guaranteed. A common practice is to shuffle the data samples after each run through the entire dataset. \nIn this paper we will assume a Hinge loss function, an L2 norm regularization term and \n\\begin{align}\n\\eta_t&=\\frac{1}{\\alpha(t_0+t)}\n\\end{align}\nIn scikit-learn, $t_0$ is based on a heuristic proposed by Leon Bottou. With these choices this becomes the standard SVM problem. \nIntuition through an example\nWe have a dataset of 400 samples; each sample has a two-dimensional X variable and a class variable Y which takes values -1 and 1.
The scatterplot of these data is as follows", "from numpy import loadtxt\ntrain = loadtxt('data_stdev2_train.csv')\nX = train[:,0:2]\nY = train[:,2:3]\nimport pylab as pl\n%matplotlib inline\npl.figure(0,figsize=(8, 6))\npl.ylabel('X1')\npl.xlabel('X0')\npl.scatter(X[:, 0], X[:, 1], c=(1.-Y), s=50, cmap = pl.cm.cool)", "The purpose of SVM is to find a weight vector $w$ and intercept $b$ such that the line described by $w_0X_0+w_1X_1+b=0$ best separates the two classes of points. Stochastic gradient descent will obtain the optimal $w$ and $b$ from the data. \nImplementation in Python", "class Lossfunction:\n def loss(self, p, y):\n return 0\n def _dloss(self, p, y):\n return 0\n def dloss(self, p, y):\n # delegate to _dloss, mirroring scikit-learn's LossFunction\n return self._dloss(p, y)\n\nclass Hinge(Lossfunction):\n def __init__(self, threshold=1):\n self.threshold=threshold\n def loss(self, p, y):\n z=p*y\n \n if z<=self.threshold:\n return (self.threshold-z)\n return 0\n def _dloss(self, p, y):\n z= p*y\n if z<=self.threshold:\n return -y\n return 0\n\nimport numpy as np\nclass sgddata:\n def __init__(self, X, Y, sample_weights=0, seed=None):\n if len(X)!=len(Y):\n raise IndexError('X, Y not same length')\n self.X = np.array(X)\n self.Y = np.array(Y)\n self.sample_weights = sample_weights\n self.n_samples = self.X.shape[0]\n self.n_features = self.X.shape[1]\n self.current_index = -1\n def __next__(self):\n self.current_index += 1\n return self.X[self.current_index], self.Y[self.current_index]\n def _reset(self):\n self.current_index = -1\n def shuffle(self, seed=None):\n np.random.seed(seed)\n idx = np.random.permutation(self.n_samples)\n self.X = self.X[idx]\n self.Y = self.Y[idx]\n def __str__(self):\n return '{},{}'.format(str(self.X), str(self.Y))", "from time import time\ndef sgd( weights,\n intercept,\n loss,\n penalty_type,\n alpha, \n dataset,\n n_iter, fit_intercept=True,\n verbose=False, shuffle=True, seed=None,\n weight_pos=1, weight_neg=1,\n eta0=0,\n t=1.0,\n intercept_decay=1.0):\n MAX_DLOSS = 1e12\n eta = eta0\n l1_ratio = 0.0\n sumlosslist = []\n 
n_samples=dataset.n_samples\n typw = np.sqrt(1.0 / np.sqrt(alpha))\n # computing eta0, the initial learning rate\n initial_eta0 = typw / max(1.0, loss.dloss(-typw, 1.0))\n # initialize t such that eta at first sample equals eta0\n optimal_init = 1.0 / (initial_eta0 * alpha)\n t_start = time()\n for epoch in range(n_iter):\n sumloss=0\n\n if shuffle:\n dataset.shuffle(seed)\n for i in range(n_samples):\n x_current,y_current = next(dataset)\n p = np.dot(x_current,weights) + intercept\n eta = 1.0 / (alpha * (optimal_init + t - 1))\n if y_current > 0.0:\n class_weight = weight_pos\n else:\n class_weight = weight_neg\n dloss = loss._dloss(p, y_current)\n # clip dloss with large values to avoid numerical\n # instabilities\n if dloss < -MAX_DLOSS:\n dloss = -MAX_DLOSS\n elif dloss > MAX_DLOSS:\n dloss = MAX_DLOSS\n update = -eta * dloss\n update *= class_weight\n weights *= (max(0, 1.0 - ((1.0 - l1_ratio) * eta * alpha)))\n if update != 0.0:\n weights += update*x_current\n if fit_intercept == 1:\n intercept += update * intercept_decay\n dataset._reset()\n t += 1\n if verbose > 0:\n sumloss=0\n for i in range(n_samples):\n x_current,y_current = next(dataset)\n p = np.dot(x_current,weights) + intercept\n sumloss+=loss.loss(p,y_current)\n# print(\"loss={}\".format(str(sumloss)))\n sumlosslist.append(sumloss)\n dataset._reset()\n print('time = {} second'.format(str(time()-t_start)))\n return weights, intercept,sumlosslist", "Running with Data\nWe use the initial weight $[0,0]$ and initial intercept $0$ to initialize the SGD. We let $\\alpha=0.01$ and run the algorithm through 1000 iterations. 
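Before looking at the results, here is a quick numerical illustration (suggestive, not a proof) of why the shrinking schedule is safe: with $\eta_t\propto 1/t$, partial sums of $\eta_t$ keep growing without bound while partial sums of $\eta_t^2$ stay bounded, which is exactly the Robbins-Monro condition stated earlier.

```python
# Step sizes proportional to 1/t; the constant factor 1/alpha and the
# offset t_0 shift the values but do not change convergence/divergence.
N = 100_000
sum_eta = sum(1.0 / t for t in range(1, N + 1))
sum_eta_sq = sum(1.0 / t**2 for t in range(1, N + 1))

print(sum_eta)     # ~ log(N) + 0.577..., about 12.09 here; grows without bound
print(sum_eta_sq)  # stays below its limit pi^2/6 = 1.6449...
```

So the steps shrink fast enough for the iterates to settle down, but not so fast that the algorithm stalls before reaching the optimum.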
Our results are shown below.", "weight_init=np.array([0.,0.])\nintercept_init = 0.\ndataset = sgddata(X,Y)\nloss=Hinge()\nn_iter = 1000\nverbose=True\nweights,intercept,sumlosslist=sgd(weight_init,intercept_init,loss,'L2',0.01,dataset,n_iter,verbose=verbose)\nimport pylab as pl\n%matplotlib inline\npl.figure(0,figsize=(8, 6))\npl.scatter(X[:, 0], X[:, 1], c=(1.-Y), s=50, cmap = pl.cm.cool)\nx1 = np.linspace(min(X[:, 0])*0.7,max(X[:, 0])*0.7,10)\nx2 = -(weights[0]*x1+intercept)/weights[1]\npl.plot(x1,x2)\nif verbose:\n pl.figure(1,figsize=(8, 6))\n pl.plot(range(n_iter),sumlosslist)", "We see from the top figure that the SGD algorithm arrived at a reasonable conclusion. The stochastic nature of the SGD algorithm is apparent in the lower plot, which shows the total loss function against iterations. This is in contrast to normal gradient descent, where the loss function typically goes down and stays down. The reason for this stems from the fact that SGD goes through the data one sample at a time, so it cannot guarantee that the loss function will go down after each iteration. However, this random nature also means that it is better at escaping from local minima. Setting $\alpha$ to a smaller value increases the step size and leads to more fluctuations. \nScikit Learn Optimizations\nThe SGD algorithm implemented in scikit-learn has many layers of function calls with many options. For clarity, we only discuss the parts relevant to the linear SVM problem. Scikit-learn optimizes SGD using Cython. The optimizations happen in four areas: 1. The loss function (<a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/sgd_fast.pyx#L138-L167\">sgd_fast.pyx</a>), 2. The data structure that stores the data (<a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/seq_dataset.pyx\">seq_dataset.pyx</a>), 3. 
The data structure that stores and updates the weight vector (<a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/weight_vector.pyx\">weight_vector.pyx</a>) and 4. The implementation of SGD itself (<a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/sgd_fast.pyx#L529-L700\">sgd_fast.pyx</a>). Throughout the Cython implementation, functions and variables are statically typed whenever possible. This ensures that the costs associated with Python dynamic typing are minimized. \nOptimizations in Loss Function\nThe hinge loss function is implemented in Cython at lines <a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/sgd_fast.pyx#L138-L167\">138-167</a>, copied below. Besides the init function that initializes the Hinge object, all other functions and all variables are statically typed.", "cdef class Hinge(Classification):\n \"\"\"Hinge loss for binary classification tasks with y in {-1,1}\n Parameters\n ----------\n threshold : float > 0.0\n Margin threshold. When threshold=1.0, one gets the loss used by SVM.\n When threshold=0.0, one gets the loss used by the Perceptron.\n \"\"\"\n\n cdef double threshold\n\n def __init__(self, double threshold=1.0):\n self.threshold = threshold\n\n cdef double loss(self, double p, double y) nogil:\n cdef double z = p * y\n if z <= self.threshold:\n return (self.threshold - z)\n return 0.0\n\n cdef double _dloss(self, double p, double y) nogil:\n cdef double z = p * y\n if z <= self.threshold:\n return -y\n return 0.0\n\n def __reduce__(self):\n return Hinge, (self.threshold,)", "Optimizations in Data Storage\nScikit-learn first converts the data into an ArrayDataset object, which is also implemented in Cython. The code that converts the raw input data to an ArrayDataset is linked <a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/base.py#L48-L67\">here</a>. 
\nFirst, we look at how ArrayDataset is initialized. At lines <a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/seq_dataset.pyx#L195-L196\">195-196</a> we see that ArrayDataset creates memoryviews of the numpy arrays X and Y and assigns their pointers to X_data_ptr and Y_data_ptr.", "self.X_data_ptr = <double *>X.data\nself.Y_data_ptr = <double *>Y.data", "The SGD algorithm samples through the dataset one sample at a time. This corresponds to lines <a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/sgd_fast.pyx#L606-L607\">606-607</a> in the SGD algorithm, copied below. We see that this is implemented within the ArrayDataset object", "dataset.next(&x_data_ptr, &x_ind_ptr, &xnnz,\n &y, &sample_weight)", "We take a closer look at the .next() function, which is implemented at lines <a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/seq_dataset.pyx#L20-L46\">20-46</a> of seq_dataset.pyx. The ArrayDataset object keeps an internal data index; .next() advances it by one and then calls the <a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/seq_dataset.pyx#L207-L217\">._sample</a> function,", "cdef void _sample(self, double **x_data_ptr, int **x_ind_ptr,\n int *nnz, double *y, double *sample_weight,\n int current_index) nogil:\n cdef long long sample_idx = self.index_data_ptr[current_index]\n cdef long long offset = sample_idx * self.X_stride\n\n y[0] = self.Y_data_ptr[sample_idx]\n x_data_ptr[0] = self.X_data_ptr + offset\n x_ind_ptr[0] = self.feature_indices_ptr\n nnz[0] = self.n_features\n sample_weight[0] = self.sample_weight_data[sample_idx]", "We see that the _sample function sets x_data_ptr to point to the start of $X_i$ and y to the corresponding label. x_data_ptr and y are then used by the rest of the SGD algorithm to access the current sample.
The main optimizations in this data storage step, aside from static typing, are the use of memoryviews and of pointers for data access. Memoryviews allow sharing memory between data structures without copying, while data access through pointers is much faster than Python-level access such as the __getitem__ method.\nOptimizations in Weights\nIn the SGD algorithm the weight vector is stored in a WeightVector object (<a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/sgd_fast.pyx#L551\">relevant line copied below</a>).", "cdef WeightVector w = WeightVector(weights, average_weights)", "The WeightVector object is defined <a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/weight_vector.pyx\">here</a>. Its mechanism of data storage is similar to ArrayDataset (<a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/weight_vector.pyx#L54-L60\">relevant lines copied below</a>).", "cdef double *wdata = <double *>w.data\n\n if w.shape[0] > INT_MAX:\n raise ValueError(\"More than %d features not supported; got %d.\"\n % (INT_MAX, w.shape[0]))\n self.w = w\n self.w_data_ptr = wdata", "All operations that update the weight vector are done within the WeightVector object (<a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/seq_dataset.pyx#L195-L196\">relevant lines in sgd_fast.pyx here</a>). In particular we look at the <a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/weight_vector.pyx#L71-L105\">add function</a>, which scales sample x by constant c and adds it to the weight vector.
We copy the add function below", "cdef void add(self, double *x_data_ptr, int *x_ind_ptr, int xnnz,\n double c) nogil:\n \"\"\"Scales sample x by constant c and adds it to the weight vector.\n This operation updates ``sq_norm``.\n Parameters\n ----------\n x_data_ptr : double*\n The array which holds the feature values of ``x``.\n x_ind_ptr : np.intc*\n The array which holds the feature indices of ``x``.\n xnnz : int\n The number of non-zero features of ``x``.\n c : double\n The scaling constant for the example.\n \"\"\"\n cdef int j\n cdef int idx\n cdef double val\n cdef double innerprod = 0.0\n cdef double xsqnorm = 0.0\n\n # the next two lines save a factor of 2!\n cdef double wscale = self.wscale\n cdef double* w_data_ptr = self.w_data_ptr\n\n for j in range(xnnz):\n idx = x_ind_ptr[j]\n val = x_data_ptr[j]\n innerprod += (w_data_ptr[idx] * val)\n xsqnorm += (val * val)\n w_data_ptr[idx] += val * (c / wscale)\n\n self.sq_norm += (xsqnorm * c * c) + (2.0 * innerprod * wscale * c)", "Getting values from $X_i$ is done using x_data_ptr[j], and modifying the weight vector is done similarly. The optimizations for this step include the aforementioned static typing, memoryviews and pointer data access. A further optimization here comes from the \"nogil\" argument after the function definition. This releases the global interpreter lock, allowing for higher efficiency through multithreading.\nOptimizations in SGD algorithm\nThe raw code for the SGD algorithm in scikit-learn is <a href=\"https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/sgd_fast.pyx#L529-L700\">here</a>. 
Since we're assuming OPTIMAL learning rate, hinge loss and L2 regularization, we copy the relevant parts of the code below", "def _plain_sgd(np.ndarray[double, ndim=1, mode='c'] weights,\n double intercept,\n np.ndarray[double, ndim=1, mode='c'] average_weights,\n double average_intercept,\n LossFunction loss,\n int penalty_type,\n double alpha, double C,\n double l1_ratio,\n SequentialDataset dataset,\n int n_iter, int fit_intercept,\n int verbose, bint shuffle, np.uint32_t seed,\n double weight_pos, double weight_neg,\n int learning_rate, double eta0,\n double power_t,\n double t=1.0,\n double intercept_decay=1.0,\n int average=0):\n\n # get the data information into easy vars\n cdef Py_ssize_t n_samples = dataset.n_samples\n cdef Py_ssize_t n_features = weights.shape[0]\n\n cdef WeightVector w = WeightVector(weights, average_weights)\n cdef double* w_ptr = &weights[0]\n cdef double *x_data_ptr = NULL\n cdef int *x_ind_ptr = NULL\n cdef double* ps_ptr = NULL\n\n # helper variables\n cdef bint infinity = False\n cdef int xnnz\n cdef double eta = 0.0\n cdef double p = 0.0\n cdef double update = 0.0\n cdef double sumloss = 0.0\n cdef double y = 0.0\n cdef double sample_weight\n cdef double class_weight = 1.0\n cdef unsigned int count = 0\n cdef unsigned int epoch = 0\n cdef unsigned int i = 0\n cdef int is_hinge = isinstance(loss, Hinge)\n cdef double optimal_init = 0.0\n cdef double dloss = 0.0\n cdef double MAX_DLOSS = 1e12\n\n # q vector is only used for L1 regularization\n cdef np.ndarray[double, ndim = 1, mode = \"c\"] q = None\n cdef double * q_data_ptr = NULL\n\n if penalty_type == L2:\n l1_ratio = 0.0\n\n\n if learning_rate == OPTIMAL:\n typw = np.sqrt(1.0 / np.sqrt(alpha))\n # computing eta0, the initial learning rate\n initial_eta0 = typw / max(1.0, loss.dloss(-typw, 1.0))\n # initialize t such that eta at first sample equals eta0\n optimal_init = 1.0 / (initial_eta0 * alpha)\n\n t_start = time()\n with nogil:\n for epoch in range(n_iter):\n if verbose 
> 0:\n with gil:\n print(\"-- Epoch %d\" % (epoch + 1))\n if shuffle:\n dataset.shuffle(seed)\n for i in range(n_samples):\n dataset.next(&x_data_ptr, &x_ind_ptr, &xnnz,\n &y, &sample_weight)\n\n p = w.dot(x_data_ptr, x_ind_ptr, xnnz) + intercept\n if learning_rate == OPTIMAL:\n eta = 1.0 / (alpha * (optimal_init + t - 1))\n\n if verbose > 0:\n sumloss += loss.loss(p, y)\n\n if y > 0.0:\n class_weight = weight_pos\n else:\n class_weight = weight_neg\n\n \n dloss = loss._dloss(p, y)\n # clip dloss with large values to avoid numerical\n # instabilities\n if dloss < -MAX_DLOSS:\n dloss = -MAX_DLOSS\n elif dloss > MAX_DLOSS:\n dloss = MAX_DLOSS\n \n update = -eta * dloss\n\n update *= class_weight * sample_weight\n\n if penalty_type >= L2:\n # do not scale to negative values when eta or alpha are too\n # big: instead set the weights to zero\n w.scale(max(0, 1.0 - ((1.0 - l1_ratio) * eta * alpha)))\n if update != 0.0:\n w.add(x_data_ptr, x_ind_ptr, xnnz, update)\n if fit_intercept == 1:\n intercept += update * intercept_decay\n\n\n t += 1\n count += 1\n\n # report epoch information\n if verbose > 0:\n with gil:\n print(\"Norm: %.2f, NNZs: %d, \"\n \"Bias: %.6f, T: %d, Avg. loss: %.6f\"\n % (w.norm(), weights.nonzero()[0].shape[0],\n intercept, count, sumloss / count))\n print(\"Total training time: %.2f seconds.\"\n % (time() - t_start))\n\n # floating-point under-/overflow check.\n if (not skl_isfinite(intercept)\n or any_nonfinite(<double *>weights.data, n_features)):\n infinity = True\n break\n\n if infinity:\n raise ValueError((\"Floating-point under-/overflow occurred at epoch\"\n \" #%d. Scaling input data with StandardScaler or\"\n \" MinMaxScaler might help.\") % (epoch + 1))\n\n w.reset_wscale()\n\n return weights, intercept, average_weights, average_intercept\n", "Besides the performance benefits from ArrayDataSet and WeightVector, the Cython implementation of SGD itself also benefits from static typing, memoryviews and the release of global interpreter lock. 
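A detail worth calling out in the loop above is the pair w.scale(...) / w.add(...): WeightVector does not actually multiply every coordinate when shrinking for the L2 penalty. It updates a single scalar wscale and lets add compensate by dividing the update by it. A pure-Python sketch of that trick (a hypothetical class, not the real Cython one) shows the equivalence:

```python
import numpy as np

class ScaledVector:
    """Pure-Python sketch of the wscale trick used in weight_vector.pyx
    (hypothetical class, not the real implementation)."""
    def __init__(self, n):
        self.w = np.zeros(n)
        self.wscale = 1.0

    def scale(self, c):
        # w <- c * w, in O(1): only the scalar is touched
        self.wscale *= c

    def add(self, x, c):
        # w <- w + c * x; divide by wscale to compensate for the lazy scale
        self.w += x * (c / self.wscale)

    def value(self):
        return self.w * self.wscale

# equivalence with naive dense updates
rng = np.random.default_rng(0)
naive = np.zeros(5)
sv = ScaledVector(5)
for _ in range(100):
    x = rng.standard_normal(5)
    naive *= 0.99          # explicit O(n) shrink
    naive += 0.1 * x
    sv.scale(0.99)         # lazy O(1) shrink
    sv.add(x, 0.1)
print(np.allclose(naive, sv.value()))  # True
```

This makes the per-sample L2 shrink O(1) instead of O(n_features); the real implementation also resets the scale periodically (w.reset_wscale() in the code above) to avoid numerical underflow of wscale.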
Compared to the previous three, the SGD algorithm itself is actually straightforward, which reflects the simplicity of the algorithm." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ContinualAI/avalanche
notebooks/from-zero-to-hero-tutorial/04_training.ipynb
mit
[ "description: Continual Learning Algorithms Prototyping Made Easy\nTraining\nWelcome to the \"Training\" tutorial of the \"From Zero to Hero\" series. In this part we will present the functionalities offered by the training module.\nFirst, let's install Avalanche. You can skip this step if you have installed it already.", "!pip install avalanche-lib==0.2.0", "💪 The Training Module\nThe training module in Avalanche is designed with modularity in mind. Its main goals are to:\n\nprovide a set of popular continual learning baselines that can be easily used to run experimental comparisons;\nprovide simple abstractions to create and run your own strategy as efficiently and easily as possible starting from a couple of basic building blocks we already prepared for you.\n\nAt the moment, the training module includes three main components:\n\nTemplates: these are high level abstractions used as a starting point to define the actual strategies. The templates contain already implemented basic utilities and functionalities shared by a group of strategies (e.g. the BaseSGDTemplate contains all the implemented methods to deal with strategies based on SGD).\nStrategies: these are popular baselines already implemented for you which you can use for comparisons or as base classes to define a custom strategy.\nPlugins: these are classes that allow you to add some specific behaviour to your own strategy. The plugin system allows you to define reusable components which can be easily combined (e.g. a replay strategy, a regularization strategy). They are also used to automatically manage logging and evaluation.\n\nKeep in mind that many Avalanche components are independent of Avalanche strategies.
If you already have your own strategy which does not use Avalanche, you can use Avalanche's benchmarks, models, data loaders, and metrics without ever looking at Avalanche's strategies!\n📈 How to Use Strategies & Plugins\nIf you want to compare your strategy with other classic continual learning algorithm or baselines, in Avalanche you can instantiate a strategy with a couple lines of code.\nStrategy Instantiation\nMost strategies require only 3 mandatory arguments:\n- model: this must be a torch.nn.Module.\n- optimizer: torch.optim.Optimizer already initialized on your model.\n- loss: a loss function such as those in torch.nn.functional.\nAdditional arguments are optional and allow you to customize training (batch size, number of epochs, ...) or strategy-specific parameters (memory size, regularization strength, ...).", "from torch.optim import SGD\nfrom torch.nn import CrossEntropyLoss\nfrom avalanche.models import SimpleMLP\nfrom avalanche.training.supervised import Naive, CWRStar, Replay, GDumb, Cumulative, LwF, GEM, AGEM, EWC # and many more!\n\nmodel = SimpleMLP(num_classes=10)\noptimizer = SGD(model.parameters(), lr=0.001, momentum=0.9)\ncriterion = CrossEntropyLoss()\ncl_strategy = Naive(\n model, optimizer, criterion,\n train_mb_size=100, train_epochs=4, eval_mb_size=100\n)", "Training & Evaluation\nEach strategy object offers two main methods: train and eval. 
Both of them accept either a single experience (Experience) or a list of them, for maximum flexibility.\nWe can train the model continually by iterating over the train_stream provided by the scenario.", "from avalanche.benchmarks.classic import SplitMNIST\n\n# scenario\nbenchmark = SplitMNIST(n_experiences=5, seed=1)\n\n# TRAINING LOOP\nprint('Starting experiment...')\nresults = []\nfor experience in benchmark.train_stream:\n print(\"Start of experience: \", experience.current_experience)\n print(\"Current Classes: \", experience.classes_in_this_experience)\n\n cl_strategy.train(experience)\n print('Training completed')\n\n print('Computing accuracy on the whole test set')\n results.append(cl_strategy.eval(benchmark.test_stream))", "Adding Plugins\nMost continual learning strategies follow roughly the same training/evaluation loops, i.e. a simple naive strategy (a.k.a. finetuning) augmented with additional behavior to counteract catastrophic forgetting. The plugin system in Avalanche is designed to easily augment continual learning strategies with custom behavior, without having to rewrite the training loop from scratch. Avalanche strategies accept an optional list of plugins that will be executed during the training/evaluation loops.\nFor example, early stopping is implemented as a plugin:", "from avalanche.training.plugins import EarlyStoppingPlugin\n\nstrategy = Naive(\n model, optimizer, criterion,\n plugins=[EarlyStoppingPlugin(patience=10, val_stream_name='train')])", "In Avalanche, most continual learning strategies are implemented using plugins, which makes it easy to combine them.
For example, it is extremely easy to create a hybrid strategy that combines replay and EWC together by passing the appropriate plugins list to the SupervisedTemplate:", "from avalanche.training.templates import SupervisedTemplate\nfrom avalanche.training.plugins import ReplayPlugin, EWCPlugin\n\nreplay = ReplayPlugin(mem_size=100)\newc = EWCPlugin(ewc_lambda=0.001)\nstrategy = SupervisedTemplate(\n model, optimizer, criterion,\n plugins=[replay, ewc])", "Beware that most strategy plugins modify the internal state. As a result, not all the strategy plugins can be combined together. For example, it does not make sense to use multiple replay plugins since they will try to modify the same strategy variables (mini-batches, dataloaders), and therefore they will be in conflict.\n📝 A Look Inside Avalanche Strategies\nIf you arrived at this point you already know how to use Avalanche strategies and are ready to use it. However, before making your own strategies you need to understand a little bit the internal implementation of the training and evaluation loops.\nIn Avalanche you can customize a strategy in 2 ways:\n\nPlugins: Most strategies can be implemented as additional code that runs on top of the basic training and evaluation loops (e.g. the Naive strategy). Therefore, the easiest way to define a custom strategy such as a regularization or replay strategy, is to define it as a custom plugin. The advantage of plugins is that they can be combined, as long as they are compatible, i.e. they do not modify the same part of the state. The disadvantage is that in order to do so you need to understand the strategy loop, which can be a bit complex at first.\nSubclassing: In Avalanche, continual learning strategies inherit from the appropriate template, which provides generic training and evaluation loops. The most high level template is the BaseTemplate, from which all the Avalanche's strategies inherit. 
Most template's methods can be safely overridden (with some caveats that we will see later).\n\nKeep in mind that if you already have a working continual learning strategy that does not use Avalanche, you can use most Avalanche components such as benchmarks, evaluation, and models without using Avalanche's strategies!\nTraining and Evaluation Loops\nAs we already mentioned, Avalanche strategies inherit from the appropriate template (e.g. continual supervised learning strategies inherit from the SupervisedTemplate). These templates provide:\n\nBasic Training and Evaluation loops which define a naive (finetuning) strategy.\nCallback points, which are used to call the plugins at a specific moments during the loop's execution.\nA set of variables representing the state of the loops (current model, data, mini-batch, predictions, ...) which allows plugins and child classes to easily manipulate the state of the training loop.\n\nThe training loop has the following structure:\n```text\ntrain\n before_training\nbefore_train_dataset_adaptation\ntrain_dataset_adaptation\nafter_train_dataset_adaptation\nmake_train_dataloader\nmodel_adaptation\nmake_optimizer\nbefore_training_exp # for each exp\n before_training_epoch # for each epoch\n before_training_iteration # for each iteration\n before_forward\n after_forward\n before_backward\n after_backward\n after_training_iteration\n before_update\n after_update\n after_training_epoch\nafter_training_exp\nafter_training\n\n```\nThe evaluation loop is similar:\ntext\neval\n before_eval\n before_eval_dataset_adaptation\n eval_dataset_adaptation\n after_eval_dataset_adaptation\n make_eval_dataloader\n model_adaptation\n before_eval_exp # for each exp\n eval_epoch # we have a single epoch in evaluation mode\n before_eval_iteration # for each iteration\n before_eval_forward\n after_eval_forward\n after_eval_iteration\n after_eval_exp\n after_eval\nMethods starting with before/after are the methods responsible for calling the 
plugins.\nNotice that before the start of each experience during training we have several phases:\n- dataset adaptation: This is the phase where the training data can be modified by the strategy, for example by adding other samples from a separate buffer.\n- dataloader initialization: Initialize the data loader. Many strategies (e.g. replay) use custom dataloaders to balance the data.\n- model adaptation: Here, the dynamic models (see the models tutorial) are updated by calling their adaptation method.\n- optimizer initialization: After the model has been updated, the optimizer should also be updated to ensure that the new parameters are optimized.\nStrategy State\nThe strategy state is accessible via several attributes. Most of these can be modified by plugins and subclasses:\n- self.clock: keeps track of several event counters.\n- self.experience: the current experience.\n- self.adapted_dataset: the data modified by the dataset adaptation phase.\n- self.dataloader: the current dataloader.\n- self.mbatch: the current mini-batch. For supervised classification problems, mini-batches have the form &lt;x, y, t&gt;, where x is the input, y is the target class, and t is the task label.\n- self.mb_output: the current model's output.\n- self.loss: the current loss.\n- self.is_training: True if the strategy is in training mode.\nHow to Write a Plugin\nPlugins provide a simple solution to define a new strategy by augmenting the behavior of another strategy (typically the Naive strategy). This approach reduces the overhead and code duplication, improving code readability and prototyping speed.\nCreating a plugin is straightforward. As with strategies, you have to create a class which inherits from the corresponding plugin template (BasePlugin, BaseSGDPlugin, SupervisedPlugin) and implements the callbacks that you need. The exact callback to use depend on the aim of your plugin. You can use the loop shown above to understand what callbacks you need to use. 
For example, we show below a simple replay plugin that uses after_training_exp to update the buffer after each training experience, and the before_training_exp to customize the dataloader. Notice that before_training_exp is executed after make_train_dataloader, which means that the Naive strategy already updated the dataloader. If we used another callback, such as before_train_dataset_adaptation, our dataloader would have been overwritten by the Naive strategy. Plugin methods always receive the strategy as an argument, so they can access and modify the strategy's state.", "from avalanche.benchmarks.utils.data_loader import ReplayDataLoader\nfrom avalanche.core import SupervisedPlugin\nfrom avalanche.training.storage_policy import ReservoirSamplingBuffer\n\n\nclass ReplayP(SupervisedPlugin):\n\n def __init__(self, mem_size):\n \"\"\" A simple replay plugin with reservoir sampling. \"\"\"\n super().__init__()\n self.buffer = ReservoirSamplingBuffer(max_size=mem_size)\n\n def before_training_exp(self, strategy: \"SupervisedTemplate\",\n num_workers: int = 0, shuffle: bool = True,\n **kwargs):\n \"\"\" Use a custom dataloader to combine samples from the current data and memory buffer. \"\"\"\n if len(self.buffer.buffer) == 0:\n # first experience. We don't use the buffer, no need to change\n # the dataloader.\n return\n strategy.dataloader = ReplayDataLoader(\n strategy.adapted_dataset,\n self.buffer.buffer,\n oversample_small_tasks=True,\n num_workers=num_workers,\n batch_size=strategy.train_mb_size,\n shuffle=shuffle)\n\n def after_training_exp(self, strategy: \"SupervisedTemplate\", **kwargs):\n \"\"\" Update the buffer. 
\"\"\"\n self.buffer.update(strategy, **kwargs)\n\n\nbenchmark = SplitMNIST(n_experiences=5, seed=1)\nmodel = SimpleMLP(num_classes=10)\noptimizer = SGD(model.parameters(), lr=0.01, momentum=0.9)\ncriterion = CrossEntropyLoss()\nstrategy = Naive(model=model, optimizer=optimizer, criterion=criterion, train_mb_size=128,\n plugins=[ReplayP(mem_size=2000)])\nstrategy.train(benchmark.train_stream)\nstrategy.eval(benchmark.test_stream)", "Check base plugin's documentation for a complete list of the available callbacks.\nHow to Write a Custom Strategy\nYou can always define a custom strategy by overriding the corresponding template methods.\nHowever, There is an important caveat to keep in mind. If you override a method, you must remember to call all the callback's handlers (the methods starting with before/after) at the appropriate points. For example, train calls before_training and after_training before and after the training loops, respectively. The easiest way to avoid mistakes is to start from the template's method that you want to override and modify it to your own needs without removing the callbacks handling.\nNotice that the EvaluationPlugin (see evaluation tutorial) uses the strategy callbacks.\nAs an example, the SupervisedTemplate, for continual supervised strategies, provides the global state of the loop in the strategy's attributes, which you can safely use when you override a method. For instance, the Cumulative strategy trains a model continually on the union of all the experiences encountered so far. 
To achieve this, the cumulative strategy overrides train_dataset_adaptation and updates `self.adapted_dataset` by concatenating all the previous experiences with the current one.", "from avalanche.benchmarks.utils import AvalancheConcatDataset\nfrom avalanche.training.templates import SupervisedTemplate\n\n\nclass Cumulative(SupervisedTemplate):\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.dataset = None  # cumulative dataset\n\n    def train_dataset_adaptation(self, **kwargs):\n        super().train_dataset_adaptation(**kwargs)\n        curr_data = self.experience.dataset\n        if self.dataset is None:\n            self.dataset = curr_data\n        else:\n            self.dataset = AvalancheConcatDataset([self.dataset, curr_data])\n        self.adapted_dataset = self.dataset.train()\n\nstrategy = Cumulative(model=model, optimizer=optimizer, criterion=criterion, train_mb_size=128)\nstrategy.train(benchmark.train_stream)", "Easy, isn't it? :-)\nIn general, we recommend implementing a strategy via plugins, if possible. This approach is the easiest to use and requires minimal knowledge of the strategy templates. It also allows other people to re-use your plugin and facilitates interoperability among different strategies.\nFor example, replay strategies can be implemented as a custom strategy or as plugins. However, creating a plugin allows using replay in conjunction with other strategies or plugins, making it possible to combine different approaches to build the ultimate continual learning algorithm!\nThis completes the \"Training\" chapter for the \"From Zero to Hero\" series. We hope you enjoyed it!\n🤝 Run it on Google Colab\nYou can run this chapter and play with it on Google Colaboratory:" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
zczapran/datascienceintensive
human_temp/sliderule_dsi_inferential_statistics_exercise_1.ipynb
mit
[ "What is the True Normal Human Body Temperature?\nBackground\nThe mean normal body temperature was held to be 37$^{\\circ}$C or 98.6$^{\\circ}$F for more than 120 years since it was first conceptualized and reported by Carl Wunderlich in a famous 1868 book. But, is this value statistically correct?\n<div class=\"span5 alert alert-info\">\n<h3>Exercises</h3>\n\n<p>In this exercise, you will analyze a dataset of human body temperatures and employ the concepts of hypothesis testing, confidence intervals, and statistical significance.</p>\n\n<p>Answer the following questions <b>in this notebook below and submit to your Github account</b>.</p> \n\n<ol>\n<li>  Is the distribution of body temperatures normal? \n    <ul>\n    <li> Although this is not a requirement for CLT to hold (read CLT carefully), it gives us some peace of mind that the population may also be normally distributed if we assume that this sample is representative of the population.\n    </ul>\n<li>  Is the sample size large? Are the observations independent?\n    <ul>\n    <li> Remember that this is a condition for the CLT, and hence the statistical tests we are using, to apply.\n    </ul>\n<li>  Is the true population mean really 98.6 degrees F?\n    <ul>\n    <li> Would you use a one-sample or two-sample test? Why?\n    <li> In this situation, is it appropriate to use the $t$ or $z$ statistic? \n    <li> Now try using the other test. How would the result be different?
Why?\n    </ul>\n<li>  At what temperature should we consider someone's temperature to be \"abnormal\"?\n    <ul>\n    <li> Start by computing the margin of error and confidence interval.\n    </ul>\n<li>  Is there a significant difference between males and females in normal temperature?\n    <ul>\n    <li> What test did you use and why?\n    <li> Write a story with your conclusion in the context of the original problem.\n    </ul>\n</ol>\n\nYou can include written notes in notebook cells using Markdown: \n    - In the control panel at the top, choose Cell > Cell Type > Markdown\n    - Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet\n\n#### Resources\n\n+ Information and data sources: http://www.amstat.org/publications/jse/datasets/normtemp.txt, http://www.amstat.org/publications/jse/jse_data_archive.htm\n+ Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet\n\n****\n</div>", "import pandas as pd\n\ndf = pd.read_csv('data/human_body_temperature.csv')\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline", "Is the distribution of body temperatures normal?", "sns.distplot(df['temperature'])", "From the histogram we can say that the distribution resembles normal.\nIs the sample size large? Are the observations independent?", "df.shape\n\ndf.head()", "Is the true population mean really 98.6 degrees F?", "sample_stddev = df.temperature.std() / np.sqrt(130)  # standard error of the sample mean\nsample_mean = df.temperature.mean()\n\nz_statistic = (sample_mean - 98.6) / sample_stddev\n(len(df), sample_mean, sample_stddev, z_statistic)", "As I'm comparing a sample mean to a fixed value (assumed population mean of 98.6F), I'm using a one-sample test. In this situation we have n >> 30, thus it's appropriate to use the z-statistic. There is a distance of 5.45 std dev between sample mean and the assumed population mean, therefore it gives us very high confidence (>99.9%) that 98.6F is not the true population mean.
In order to make a two-sample test I'm going to generate a second sample based on the assumed population mean and sample standard deviation.", "other = pd.Series(np.random.normal(98.6, df.temperature.std(), 130))\nother_mean = other.mean()\npooled_stddev = np.sqrt(sample_stddev * sample_stddev/130 + other.var()/130)\n(other_mean, pooled_stddev, other_mean - sample_mean)", "I assume there is no difference between two sample means (H0: sample_mean - other_mean = 0). I'm going to show 99% confidence, that H0 is not true and that there is in reality a difference between those two means. For that, the distance between means has to be >= 2.58 (z-value for 0.995 - two-sided test).", "z_statistic = (other_mean - sample_mean) / pooled_stddev\nz_statistic", "At what temperature should we consider someone's temperature to be \"abnormal\"?", "(sample_mean - 1.96*sample_stddev, sample_mean + 1.96*sample_stddev, 1.96*sample_stddev)", "Margin of error is 0.126F and the confidence interval (98.12F, 98.38F) which means any temperature below 98.12F or above 98.38F would be considered abnormal.\nIs there a significant difference between males and females in normal temperature?", "males = df.temperature[df.gender=='M']\nfemales = df.temperature[df.gender=='F']\n(males.size, females.size)\n\nmales_mean = males.mean()\nfemales_mean = females.mean()\nmales_std = males.std()\nfemales_std = females.std()\n(males_mean, females_mean)\n\n(males_mean - females_mean) / sample_stddev", "I have used a two-sided test with H0 that there is no difference between means of the male and female samples. I'm comparing difference of two sample means and computing z-statistic for it which equals -4.5, therefore it gives us very high confidence (>99.9%) that males have different mean temperature than females.\nI conclude that we should define two standard body temperatures (male and female) as there is a very high confidence that they are truly different." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
FRESNA/atlite
examples/create_cutout_SARAH.ipynb
gpl-3.0
[ "Creating a Cutout with the SARAH-2 dataset\nThis walkthrough describes the process of creating a cutout using the SARAH-2 dataset by EUMETSAT.\nThe SARAH-2 dataset contains extensive information on solar radiation variables, like surface incoming direct radiation (SID) or surface incoming shortwave radiation (SIS).\nIt serves as an addition to the ERA5 dataset and as such requires the cdsapi to be set up properly.\n\nRecommendation\nThis is a reduced version for cutout creation. Creating cutouts with ERA-5 is simpler and explained in more detail.\nWe therefore recommend you have a look at this example first.\nNote:\nFor creating a cutout from this dataset, you need to download large files and your computer's memory needs to be able to handle these as well.\n\nDownloading the data set\nTo download the dataset, head to the EUMETSAT website (the link points to the current 2.1 edition)\nhttps://wui.cmsaf.eu/safira/action/viewDoiDetails?acronym=SARAH_V002_01 \nAt the bottom, select the products you want to include in the cutout, i.e. for us:\n| variable | time span | time resolution | \n| --- | --- | --- |\n| Surface incoming direct radiation (SID) | 2013 | Instantaneous |\n| Surface incoming shortwave radiation (SIS) | 2013 | Instantaneous |\n\nAdd each product to your cart and register with the website.\nFollow the instructions to activate your account, confirm your order and wait for the download to be ready.\nYou will be notified by email with the download instructions.\nDownload the ordered files of your order into a directory, e.g. sarah-2.\nExtract the tar files (e.g.
for Linux systems tar -xvf * or with 7zip for Windows) into the same folder\n\nYou are now ready to create cutouts using the SARAH-2 dataset.\nSpecifying the cutout\nImport the package and set recommended logging settings:", "import atlite\n\nimport logging\nlogging.basicConfig(level=logging.INFO)\n\ncutout = atlite.Cutout(path=\"western-europe-2011-01.nc\",\n                       module=[\"sarah\", \"era5\"],\n                       sarah_dir=\"/home/vres-climate/data/sarah_v2\",\n                       x=slice(-13.6913, 1.7712),\n                       y=slice(49.9096, 60.8479),\n                       time=\"2013-01\",\n                       chunks={'time': 100}\n                       )", "Let's see what the available features, that is, the available weather data variables, are.", "cutout.available_features.to_frame()", "Preparing the Cutout\nNo matter which dataset you use, this is where all the work actually happens.\nThis can be fast or take some or a lot of time and resources, depending among other things on\nyour computer's resources (especially memory for SARAH-2).", "cutout.prepare()", "Querying the cutout gives us basic information on which data is contained and can already be used.\nInspecting the Cutout", "cutout # basic information\n\ncutout.data.attrs # cutout meta data\n\ncutout.prepared_features # included weather variables\n\ncutout.data # access to underlying xarray data" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
InsightSoftwareConsortium/SimpleITK-Notebooks
Python/04_Image_Display.ipynb
apache-2.0
[ "Image Display <a href=\"https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F04_Image_Display.ipynb\"><img style=\"float: right;\" src=\"https://mybinder.org/badge_logo.svg\"></a>\nThe native SimpleITK approach to displaying images is to use an external viewing program. In the notebook environment it is convenient to use matplotlib to display inline images and if the need arises we can implement some reasonably rich inline graphical user interfaces, combining control components from the ipywidgets package and matplotlib based display.\nIn this notebook we cover the usage of external programs and matplotlib for viewing images. We also instantiate a more involved inline interface that uses ipywidgets to control display. For the latter type of moderately complex display, used in many of the notebooks, take a look at the gui.py file.", "import SimpleITK as sitk\n\n%matplotlib notebook\nimport matplotlib.pyplot as plt\nimport gui\n\n# Utility method that either downloads data from the Girder repository or\n# if already downloaded returns the file name for reading from disk (cached data).\n%run update_path_to_download_script\nfrom downloaddata import fetch_data as fdata", "Image Display with An External Viewer\nSimpleITK provides two options for invoking an external viewer, use a procedural interface or an object oriented one. \nProcedural interface\nSimpleITK provides a built in Show method. This function writes the image out to disk and than launches a program for visualization. By default it is configured to use the Fiji program, because it readily supports many medical image formats and loads quickly. However, the Show visualization program is easily customizable via environment variables:\n<ul>\n<li>SITK_SHOW_COMMAND: Viewer to use (<a href=\"http://www.itksnap.org\">ITK-SNAP</a>, <a href=\"http://www.slicer.org\">3D Slicer</a>...) 
</li>\n<li>SITK_SHOW_COLOR_COMMAND: Viewer to use when displaying color images.</li>\n<li>SITK_SHOW_3D_COMMAND: Viewer to use for 3D images.</li>\n</ul>\n\nIn general, the Show command accepts three parameters: (1) image to display; (2) window title; (3) boolean specifying whether to print the invoked command and additional debugging information.", "mr_image = sitk.ReadImage(fdata(\"training_001_mr_T1.mha\"))\n\n?sitk.Show\n\ntry:\n sitk.Show(mr_image)\nexcept RuntimeError:\n print(\n \"SimpleITK Show method could not find the viewer (ImageJ not installed or \"\n + \"environment variable pointing to non existant viewer).\"\n )", "Use a different viewer by setting environment variable(s). Do this from within your Jupyter notebook using 'magic' functions, or set in a more permanent manner using your OS specific convention.", "%env SITK_SHOW_COMMAND /Applications/ITK-SNAP.app/Contents/MacOS/ITK-SNAP\n\ntry:\n sitk.Show(mr_image)\nexcept RuntimeError:\n print(\n \"SimpleITK Show method could not find the viewer (ITK-SNAP not installed or \"\n + \"environment variable pointing to non existant viewer).\"\n )\n\n%env SITK_SHOW_COMMAND '/Applications/ImageJ/ImageJ.app/Contents/MacOS/JavaApplicationStub'\ntry:\n sitk.Show(mr_image)\nexcept RuntimeError:\n print(\n \"SimpleITK Show method could not find the viewer (ImageJ not installed or \"\n + \"environment variable pointing to non existant viewer).\"\n )\n\n%env SITK_SHOW_COMMAND '/Applications/Slicer.app/Contents/MacOS/Slicer'\ntry:\n sitk.Show(mr_image)\nexcept RuntimeError:\n print(\n \"SimpleITK Show method could not find the viewer (Slicer not installed or \"\n + \"environment variable pointing to non existant viewer).\"\n )", "Object Oriented interface\nThe Image Viewer class provides a more standard approach to controlling image viewing by setting various instance variable values. 
Also, it ensures that all of your viewing settings are documented, as they are part of the code and not external environment variables.\nA caveat to this is that if you have set various environment variables to control SimpleITK settings, the image viewer will use these settings as the default ones and not the standard defaults (Fiji as viewer etc.).", "# Which external viewer will the image_viewer use if we don't specify the external viewing application?\n# (see caveat above)\nimage_viewer = sitk.ImageViewer()\nimage_viewer.SetApplication(\"/Applications/Fiji.app/Contents/MacOS/ImageJ-macosx\")\nimage_viewer.SetTitle(\"MR image\")\n\n# Use the default image viewer.\nimage_viewer.Execute(mr_image)\n\n# Change viewer, and display again.\nimage_viewer.SetApplication(\"/Applications/ITK-SNAP.app/Contents/MacOS/ITK-SNAP\")\nimage_viewer.Execute(mr_image)\n\n# Change the viewer command, (use ITK-SNAP -z option to open the image in zoomed mode)\nimage_viewer.SetCommand(\"/Applications/ITK-SNAP.app/Contents/MacOS/ITK-SNAP -z 3\")\nimage_viewer.Execute(mr_image)\n\nprint(\n \"Default format for saved file used in display: \" + image_viewer.GetFileExtension()\n)\n\n# Change the file format (possibly to make it compatible with your viewer of choice)\nimage_viewer.SetFileExtension(\".nrrd\")\nimage_viewer.Execute(mr_image)", "Inline display with matplotlib", "mr_image = sitk.ReadImage(fdata(\"training_001_mr_T1.mha\"))\nnpa = sitk.GetArrayViewFromImage(mr_image)\n\n# Display the image slice from the middle of the stack, z axis\nz = int(mr_image.GetDepth() / 2)\nnpa_zslice = sitk.GetArrayViewFromImage(mr_image)[z, :, :]\n\n# Three plots displaying the same data, how do we deal with the high dynamic range?\nfig = plt.figure(figsize=(10, 3))\n\nfig.add_subplot(1, 3, 1)\nplt.imshow(npa_zslice)\nplt.title(\"default colormap\", fontsize=10)\nplt.axis(\"off\")\n\nfig.add_subplot(1, 3, 2)\nplt.imshow(npa_zslice, cmap=plt.cm.Greys_r)\nplt.title(\"grey colormap\", 
fontsize=10)\nplt.axis(\"off\")\n\nfig.add_subplot(1, 3, 3)\nplt.title(\n \"grey colormap,\\n scaling based on volumetric min and max values\", fontsize=10\n)\nplt.imshow(npa_zslice, cmap=plt.cm.Greys_r, vmin=npa.min(), vmax=npa.max())\nplt.axis(\"off\");\n\n# Display the image slice in the middle of the stack, x axis\n\nx = int(mr_image.GetWidth() / 2)\n\nnpa_xslice = npa[:, :, x]\nplt.figure(figsize=(10, 2))\nplt.imshow(npa_xslice, cmap=plt.cm.Greys_r)\nplt.axis(\"off\")\n\nprint(f\"Image spacing: {mr_image.GetSpacing()}\")\n\n# Collapse along the x axis\nextractSliceFilter = sitk.ExtractImageFilter()\nsize = list(mr_image.GetSize())\nsize[0] = 0\nextractSliceFilter.SetSize(size)\n\nindex = (x, 0, 0)\nextractSliceFilter.SetIndex(index)\nsitk_xslice = extractSliceFilter.Execute(mr_image)\n\n# Resample slice to isotropic\noriginal_spacing = sitk_xslice.GetSpacing()\noriginal_size = sitk_xslice.GetSize()\n\nmin_spacing = min(sitk_xslice.GetSpacing())\nnew_spacing = [min_spacing, min_spacing]\nnew_size = [\n int(round(original_size[0] * (original_spacing[0] / min_spacing))),\n int(round(original_size[1] * (original_spacing[1] / min_spacing))),\n]\nresampleSliceFilter = sitk.ResampleImageFilter()\nresampleSliceFilter.SetSize(new_size)\nresampleSliceFilter.SetTransform(sitk.Transform())\nresampleSliceFilter.SetInterpolator(sitk.sitkNearestNeighbor)\nresampleSliceFilter.SetOutputOrigin(sitk_xslice.GetOrigin())\nresampleSliceFilter.SetOutputSpacing(new_spacing)\nresampleSliceFilter.SetOutputDirection(sitk_xslice.GetDirection())\nresampleSliceFilter.SetDefaultPixelValue(0)\nresampleSliceFilter.SetOutputPixelType(sitk_xslice.GetPixelID())\n\n# Why is the image pixelated?\nsitk_isotropic_xslice = resampleSliceFilter.Execute(sitk_xslice)\nplt.figure(figsize=(10, 2))\nplt.imshow(sitk.GetArrayViewFromImage(sitk_isotropic_xslice), cmap=plt.cm.Greys_r)\nplt.axis(\"off\")\nprint(f\"Image spacing: {sitk_isotropic_xslice.GetSpacing()}\")", "Inline display with matplotlib and 
ipywidgets\nDisplay two volumes side by side, with sliders to control the displayed slice. The menu on the bottom left allows you to home (return to original view), back and forward between views, pan, zoom and save a view. \nA variety of interfaces combining matplotlib display and ipywidgets can be found in the gui.py file.", "ct_image = sitk.ReadImage(fdata(\"training_001_ct.mha\"))\nct_window_level = [720, 80]\nmr_window_level = [790, 395]\n\ngui.MultiImageDisplay(\n [mr_image, ct_image],\n figure_size=(10, 3),\n window_level_list=[mr_window_level, ct_window_level],\n);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bryanfry/nyc-schools
nyc-schools_A.ipynb
gpl-3.0
[ "nyc-schools_A\nThis notebook performs the following functions:\n\nParses the NYC Public School Guide (2016) to find the physical addresses for each school\nUses the openstreetmap API to assign lat / lon values to each school, given addresses\nFinds the 50 closest Census Tracts to each school, given Lat / Lon coordinates of the centroids for each tract. The tract centroid file was obtained from the American Community Survey\nLoads a spreadsheet file with statistics on each school, obtained from Open Data NYC. This file is merged with the info on nearby census tracts on the 'DBN' field (a unique identifier for each school)\nSaves the merged file as a *.csv for later use.", "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom geopy.distance import vincenty\nfrom bs4 import BeautifulSoup\nimport os, string, requests, re, pickle\n%matplotlib inline\n\nbp_data = '/Users/bryanfry/projects/proj_nyc-schools/data_files' #basepath for input files", "Functions to parse ID and physical address information from the NYC High School Directory", "# Function to find all occurrences of the string 'Contact Information' in the doc.\n# The indices of these lines are returned, and serve as a starting point to locate\n# addresses.\ndef locate_CI_lines (content):\n    CI_line_indices = []\n    for i, line in enumerate (content):\n        if 'Contact Information' in line: CI_line_indices.append (i)\n    return CI_line_indices\n\n\n# Function parses the information for a single public high school in the guide.\n# Ingests 'content' (all text in the public school section of the guide), and the \n# line number with the contact info ('ci') for a single school.\n\ndef parse_school_info (content, CI_line_index):\n    # Parse the line after 'Contact Information'.
\n # This contatins the name and the DBN information (DBN is unique ID code for school)\n line = content [CI_line_index-1]\n i = string.find (line, 'DBN')\n name = line [0:i-5] \n DBN = line [i+4:-1]\n \n # Starting at the 'Contact' line, we look BACKWARDS to find first previous line with DBN.\n # This contains both the school name and DBN code.\n DBN_line_index = CI_line_index -1\n while not 'DBN' in content[DBN_line_index]:\n DBN_line_index -= 1\n line = content [DBN_line_index]\n i = string.find (line, 'DBN')\n name = line [0:i-5] \n DBN = line [i+4:-1] \n \n # Starting at the 'Contact' line, we need to locate the next line containing\n # string 'Address:' This is not always a fixed # of lines after the\n # DBN line.\n addy_line_index = CI_line_index +1 # Number of lines to look ahead of DBN line for 'Address:'\n while not 'Address' in content[addy_line_index]:\n addy_line_index += 1\n #Get street address and zipcode\n street = content [addy_line_index][9:-1]\n zipcode = content [addy_line_index+1][-6:-1]\n \n return name, DBN, street, zipcode\n ", "Functions for geolocation", "# Function to get the lat / lon and neighborhood, given street address and zipcode\n# This uses the free openstreetmap API for geolocation!\n# It typically takes ~ 4 min to work on all the schools.\n\ndef get_coords_and_hood (street, zipcode):\n zipcode = str (zipcode).zfill(5)\n try:\n street = street.replace (' ','+')\n s = 'http://nominatim.openstreetmap.org/search?format=xml'\n s = s + '&street=' + street\n s = s + '&postalcode=' + zipcode\n s = s + '&addressdetails=1'\n r = requests.get (s)\n soup = BeautifulSoup (r.content, 'lxml')\n lat = float (soup.place.attrs['lat'])\n lon = float (soup.place.attrs['lon'])\n county = soup.place.county.contents[0]\n hood = soup.place.neighbourhood.contents[0]\n display_name = soup.place.attrs['display_name']\n except:\n lat, lon, county, hood, display_name = None, None, None, None, None\n return lat, lon, county, hood, display_name\n\n# Given a 
specific input geocode and a dictionary of location strings ('tags'), each \n# with lat/lon, this function returns the location in the dictionary closest to the \n# input geocode. It uses 'vincenty distance' which accounts for global curvature to\n# compute distance. (Probably a simpler and faster 'flat earth' calc. would work just as\n# well given that all locations are within only a couple degrees.)\ndef find_closest_loc (loc_dict, loc_query):\n dist_dict = {}\n for k, loc in loc_dict.items(): # Loop over the dictionary entries and compute distance to each one, \n # populating the new dictionary dist_dict\n dist_dict[k] = vincenty (loc, loc_query).meters \n min_loc = min (dist_dict, key=dist_dict.get)\n min_dist = dist_dict [min_loc]\n return min_loc, min_dist\n\n\n# Given dictionary of tags and corresponding lat/lon tuples, find N locations in the dictionary\n# closest to the input loc_query\ndef find_closest_n_loc (loc_dict, loc_query, n):\n dist_dict = {}\n for k, loc in loc_dict.items():\n dist_dict[k] = vincenty (loc, loc_query).meters\n loc_sorted = sorted (dist_dict, key=dist_dict.get)\n dist_sorted = sorted (dist_dict.values())\n return loc_sorted[0:n], dist_sorted[0:n]\n ", "Functions to process school outcome data", "# Function to remove % from percentage strings, and return a number\ndef proc_percent (s):\n try:\n return float (s.replace ('%', ''))\n except:\n return np.nan\n \n# Function to add jitter -- this addresses problems of equal min/max for \n# bins in quintile calculation and in plotting histograms.\ndef add_jitter (x, amp = 0.00001):\n return x + (np.random.random (size = len (x)) * amp)\n \n#########################################\n\n# Split data into quintiles\n# NOTE: qcut will return error if data has non-unique edges (ex. more than 20% of data is 0%)\n# If qcut throws an error, we bypass this issue by adding a trivial amount of positive\n# noise to each value. Cheesy but works fine.\ndef quantile(column, quantile=5):\n try:\n q = pd.qcut(column, quantile)\n except: #Error -- add a little noise\n column = add_jitter (column)\n q = pd.qcut (column, quantile)\n return [i+1 for i in q.codes]\n\n########################################\n\n# This function calculates quintiles for a set of desired columns on each \n# school. It also combines the outcome dataframe with the nearest census tract\n# set for each school by merging the two dataframes on DBN.\n\ndef combine_data (df_outcomes, df_tract, percent_col_list):\n df = df_outcomes[df_outcomes.Cohort == '2006'] # Limit to 2006\n df = pd.merge (df_tract, df, how = 'inner', on = 'DBN') # Perform join on the DBNs\n\n for c in percent_col_list:\n df [c] = df [c].apply (proc_percent) \n #On each of the 'interesting' percent_col_list, compute the quantiles\n for c in percent_col_list:\n c_Q = 'Q_' + c\n df [c_Q] = quantile (df[c].tolist())\n return df\n\n", "Functions for Visualization", "# This wraps the Matplotlib hist function to do NaN-removal, and add\n# plot title and axis labels.\n\ndef plot_histogram (x, n_bins, title='', x_label='', y_label='', color = None):\n # First, use pandas or numpy to remove NaNs from the data. The\n # presence of NaN may cause the matplotlib histogram to fail.\n try:\n x = x.dropna() # Will work if x is a pandas DataFrame or Series\n except:\n x = np.array (x)[~np.isnan (x)] # Remove using numpy functions\n \n plt.figure()\n plt.hist (x, color = color)\n plt.title (title)\n plt.xlabel(x_label)\n plt.ylabel(y_label)\n \n### TEST HISTOGRAM ###\nx = np.random.normal(size = 1000)\nplot_histogram (x, 20, 'Test - Dummy','X_Label','Y_Label', 'Maroon')", "MAIN\nFirst, read the Public School section of the NYC High School guide to find physical addresses for the schools. These will be used later for geolocation, assigning the schools to census tracts.", "fp_hs_guide = os.path.join (bp_data, 'NY_Public_High_school_guide_FROM_PDF.txt')\nfp_tract_centroids = os.path.join (bp_data, 'tract_centroids.txt')\n#fp_hs_addresses = os.path.join (bp_data, 'HS_Addresses.csv')\n\nwith open(fp_hs_guide) as f:\n content = f.readlines()\nCI_index_list = locate_CI_lines (content) # Find locations of the text 'Contact Information'\n\n# Build list of physical addresses.\n# Each list element is a tuple with name, DBN, street address, and zipcode.\nschool_loc_list = [parse_school_info(content, i) for i in CI_index_list] \n", "Get geolocations for the schools", "school_geocode_list = [get_coords_and_hood (i[2], i[3]) for i in school_loc_list] # ~ 4 min\n", "Build a dataframe with the school location info (name, address, DBN, lat/lon, county, neighborhood)", "df_loc = pd.DataFrame()\ndf_loc['NAME'] = [i[0] for i in school_loc_list]\ndf_loc['DBN'] = [i[1] for i in school_loc_list]\ndf_loc['STREET'] = [i[2] for i in school_loc_list]\ndf_loc['ZIPCODE'] = [i[3] for i in school_loc_list]\ndf_loc['LAT'] = [i[0] for i in school_geocode_list]\ndf_loc['LON'] = [i[1] for i in school_geocode_list]\ndf_loc['COUNTY'] = [i[2] for i in school_geocode_list]\ndf_loc['HOOD'] = [i[3] for i in school_geocode_list]\ndf_loc['DISPLAY_NAME'] = [i[4] for i in school_geocode_list]\ndf_loc = df_loc.dropna()", "Assign each school to a census tract\nThis is done by loading a file from the American Community Survey that contains the centroid (lat/lon) of each census tract. We then calculate Vincenty distance from each school to the centroids of each tract, and take the tract with the shortest distance. This method is not exact (it assumes uniformly shaped tracts), but it is pretty close and should always result in identifying at least a CLOSE census tract. Code takes ~1 min to run (Vincenty distance is slow)", "# Load the file from ACS with centroids for each tract\nfp_tract_centroids = os.path.join (bp_data, 'tract_centroids.txt')\ndf_tracts = pd.read_csv (fp_tract_centroids)\ndf_tracts = df_tracts.drop ([df_tracts.columns[0]], axis = 1) # Drop first column, unused\n\n# Build dictionary of GEOIDS (keys) and Lat / Lon tuples (values)\ntract_dict = {df_tracts.GEOID[i]: (df_tracts.LAT[i], df_tracts.LON[i]) \\\n for i in range (0,len(df_tracts))}\n\n\n# Assign the 50 closest tracts to each school, based on centroid distances \ntract_list, dist_list = [],[] # Each element in these lists will be another n-element list\nn = 50 # Use 50 closest tract centroids\nfor lat, lon, name in zip (df_loc.LAT.tolist(), df_loc.LON.tolist(), df_loc.NAME):\n loc_query = [lat, lon]\n tracts, dists = find_closest_n_loc (tract_dict, loc_query, n)\n tract_list.append (tracts) # Append an n-element list to the list-of-lists\n dist_list.append (dists) # Append an n-element list to the list-of-lists\n\n# Add the geocode (name) of the n closest tracts to the dataframe\ntract_array = np.array (tract_list)\ncol_names_tract = ['GEOCODE' + str(i).zfill(2) for i in range (0,n)]\nfor i in range (n):\n df_loc ['GEOCODE' + str(i).zfill(2)] = tract_array[:,i]\n \ndf_loc.head()", "Now we load and process the *.csv file with info on school outcomes", "# Load file with school outcomes\ndf_sch_outcomes = pd.read_csv (os.path.join (bp_data, 'Graduation_Outcomes.csv'))\n\n\n# list of columns given as percentages.\n# We will compute quintiles for each of these.\npercent_col_list = ['Total Grads - % of cohort', \\\n 'Total Regents - % of cohort',\\\n 'Total Regents - % of grads',\\\n 'Advanced Regents - % of cohort',\\\n 'Advanced Regents - % of grads',\\\n 'Regents w/o Advanced - % of cohort',\\\n 'Regents w/o Advanced - % of grads',\\\n 'Local - % of cohort',\\\n 'Local - % of grads',\\\n 'Still Enrolled - % of cohort',\\\n 'Dropped Out - % of cohort']\n\n# Expand the dataframe to include quintiles on the 'interesting' school stats\ndf = combine_data (df_sch_outcomes[df_sch_outcomes.Demographic == 'Total Cohort'], \\\n df_loc, percent_col_list)\n\n# There are some schools with no data. dropna () to get rid of them\ndf = df.dropna()\n\ndf.to_csv (os.path.join (bp_data, 'df_A_school_info.csv'))", "Plot the first half of the 'interesting' percentage school stats as histograms", "for c in percent_col_list [0:6]:\n plot_histogram (df[c].tolist(), n_bins = 5, title = c, \\\n x_label = 'Percent', y_label = '# Schools', color = 'Maroon' )", "Plot the histograms for the remainder of the percentage statistics", "for c in percent_col_list [6:]:\n plot_histogram (df[c].tolist(), n_bins = 5, title = c, \\\n x_label = 'Percent', y_label = '# Schools', color = 'Maroon' )\n\ndf.head()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
unnikrishnankgs/va
venv/lib/python3.5/site-packages/matplotlib/backends/web_backend/nbagg_uat.ipynb
bsd-2-clause
[ "from __future__ import print_function\nfrom imp import reload", "UAT for NbAgg backend.\nThe first line simply reloads matplotlib, uses the nbagg backend and then reloads the backend, just to ensure we have the latest modification to the backend code. Note: The underlying JavaScript will not be updated by this process, so a refresh of the browser after clearing the output and saving is necessary to clear everything fully.", "import matplotlib\nreload(matplotlib)\n\nmatplotlib.use('nbagg')\n\nimport matplotlib.backends.backend_nbagg\nreload(matplotlib.backends.backend_nbagg)", "UAT 1 - Simple figure creation using pyplot\nShould produce a figure window which is interactive with the pan and zoom buttons. (Do not press the close button, but any others may be used).", "import matplotlib.backends.backend_webagg_core\nreload(matplotlib.backends.backend_webagg_core)\n\nimport matplotlib.pyplot as plt\nplt.interactive(False)\n\nfig1 = plt.figure()\nplt.plot(range(10))\n\nplt.show()", "UAT 2 - Creation of another figure, without the need to do plt.figure.\nAs above, a new figure should be created.", "plt.plot([3, 2, 1])\nplt.show()", "UAT 3 - Connection info\nThe printout should show that there are two figures which have active CommSockets, and no figures pending show.", "print(matplotlib.backends.backend_nbagg.connection_info())", "UAT 4 - Closing figures\nClosing a specific figure instance should turn the figure into a plain image - the UI should have been removed. In this case, scroll back to the first figure and assert this is the case.", "plt.close(fig1)", "UAT 5 - No show without plt.show in non-interactive mode\nSimply doing a plt.plot should not show a new figure, nor indeed update an existing one (easily verified in UAT 6).\nThe output should simply be a list of Line2D instances.", "plt.plot(range(10))", "UAT 6 - Connection information\nWe just created a new figure, but didn't show it. Connection info should no longer have \"Figure 1\" (as we closed it in UAT 4) and should have figure 2 and 3, with Figure 3 without any connections. There should be 1 figure pending.", "print(matplotlib.backends.backend_nbagg.connection_info())", "UAT 7 - Show of previously created figure\nWe should be able to show a figure we've previously created. The following should produce two figure windows.", "plt.show()\nplt.figure()\nplt.plot(range(5))\nplt.show()", "UAT 8 - Interactive mode\nIn interactive mode, creating a line should result in a figure being shown.", "plt.interactive(True)\nplt.figure()\nplt.plot([3, 2, 1])", "Subsequent lines should be added to the existing figure, rather than creating a new one.", "plt.plot(range(3))", "Calling connection_info in interactive mode should not show any pending figures.", "print(matplotlib.backends.backend_nbagg.connection_info())", "Disable interactive mode again.", "plt.interactive(False)", "UAT 9 - Multiple shows\nUnlike most of the other matplotlib backends, we may want to see a figure multiple times (with or without synchronisation between the views, though the former is not yet implemented). Assert that plt.gcf().canvas.manager.reshow() results in another figure window which is synchronised upon pan & zoom.", "plt.gcf().canvas.manager.reshow()", "UAT 10 - Saving notebook\nSaving the notebook (with CTRL+S or File->Save) should result in the saved notebook having static versions of the figures embedded within. The image should be the last update from user interaction and interactive plotting. (check by converting with ipython nbconvert <notebook>)\nUAT 11 - Creation of a new figure on second show\nCreate a figure, show it, then create a new axes and show it. The result should be a new figure.\nBUG: Sometimes this doesn't work - not sure why (@pelson).", "fig = plt.figure()\nplt.axes()\nplt.show()\n\nplt.plot([1, 2, 3])\nplt.show()", "UAT 12 - OO interface\nShould produce a new figure and plot it.", "from matplotlib.backends.backend_nbagg import new_figure_manager,show\n\nmanager = new_figure_manager(1000)\nfig = manager.canvas.figure\nax = fig.add_subplot(1,1,1)\nax.plot([1,2,3])\nfig.show()", "UAT 13 - Animation\nThe following should generate an animated line:", "import matplotlib.animation as animation\nimport numpy as np\n\nfig, ax = plt.subplots()\n\nx = np.arange(0, 2*np.pi, 0.01) # x-array\nline, = ax.plot(x, np.sin(x))\n\ndef animate(i):\n line.set_ydata(np.sin(x+i/10.0)) # update the data\n return line,\n\n#Init only required for blitting to give a clean slate.\ndef init():\n line.set_ydata(np.ma.array(x, mask=True))\n return line,\n\nani = animation.FuncAnimation(fig, animate, np.arange(1, 200), init_func=init,\n interval=32., blit=True)\nplt.show()", "UAT 14 - Keyboard shortcuts in IPython after close of figure\nAfter closing the previous figure (with the close button above the figure) the IPython keyboard shortcuts should still function.\nUAT 15 - Figure face colours\nThe nbagg honours all colours apart from that of the figure.patch. The two plots below should produce a figure with a transparent background and a red background respectively (check the transparency by closing the figure, and dragging the resulting image over other content). There should be no yellow figure.", "import matplotlib\nmatplotlib.rcParams.update({'figure.facecolor': 'red',\n 'savefig.facecolor': 'yellow'})\nplt.figure()\nplt.plot([3, 2, 1])\n\nwith matplotlib.rc_context({'nbagg.transparent': False}):\n plt.figure()\n\nplt.plot([3, 2, 1])\nplt.show()", "UAT 16 - Events\nPressing any keyboard key or mouse button (or scrolling) should cycle the line colour while the figure has focus. The figure should have focus by default when it is created and re-gain it by clicking on the canvas. Clicking anywhere outside of the figure should release focus, but moving the mouse out of the figure should not release focus.", "import itertools\nfig, ax = plt.subplots()\nx = np.linspace(0,10,10000)\ny = np.sin(x)\nln, = ax.plot(x,y)\nevt = []\ncolors = iter(itertools.cycle(['r', 'g', 'b', 'k', 'c']))\ndef on_event(event):\n if event.name.startswith('key'):\n fig.suptitle('%s: %s' % (event.name, event.key))\n elif event.name == 'scroll_event':\n fig.suptitle('%s: %s' % (event.name, event.step))\n else:\n fig.suptitle('%s: %s' % (event.name, event.button))\n evt.append(event)\n ln.set_color(next(colors))\n fig.canvas.draw()\n fig.canvas.draw_idle()\n\nfig.canvas.mpl_connect('button_press_event', on_event)\nfig.canvas.mpl_connect('button_release_event', on_event)\nfig.canvas.mpl_connect('scroll_event', on_event)\nfig.canvas.mpl_connect('key_press_event', on_event)\nfig.canvas.mpl_connect('key_release_event', on_event)\n\nplt.show()", "UAT 17 - Timers\nSingle-shot timers follow a completely different code path in the nbagg backend than regular timers (such as those used in the animation example above). The next set of tests ensures that both \"regular\" and \"single-shot\" timers work properly.\nThe following should show a simple clock that updates twice a second:", "import time\n\nfig, ax = plt.subplots()\ntext = ax.text(0.5, 0.5, '', ha='center')\n\ndef update(text):\n text.set(text=time.ctime())\n text.axes.figure.canvas.draw()\n \ntimer = fig.canvas.new_timer(500, [(update, [text], {})])\ntimer.start()\nplt.show()", "However, the following should only update once and then stop:", "fig, ax = plt.subplots()\ntext = ax.text(0.5, 0.5, '', ha='center') \ntimer = fig.canvas.new_timer(500, [(update, [text], {})])\n\ntimer.single_shot = True\ntimer.start()\n\nplt.show()", "And the next two examples should never show any visible text at all:", "fig, ax = plt.subplots()\ntext = ax.text(0.5, 0.5, '', ha='center')\ntimer = fig.canvas.new_timer(500, [(update, [text], {})])\n\ntimer.start()\ntimer.stop()\n\nplt.show()\n\nfig, ax = plt.subplots()\ntext = ax.text(0.5, 0.5, '', ha='center')\ntimer = fig.canvas.new_timer(500, [(update, [text], {})])\n\ntimer.single_shot = True\ntimer.start()\ntimer.stop()\n\nplt.show()", "UAT 18 - Stopping figure when removed from DOM\nWhen the div that contains the figure is removed from the DOM the figure should shut down its comm, and if the python-side figure has no more active comms, it should destroy the figure. Repeatedly running the cell below should always have the same figure number", "fig, ax = plt.subplots()\nax.plot(range(5))\nplt.show()", "Running the cell below will re-show the figure. After this, re-running the cell above should result in a new figure number.", "fig.canvas.manager.reshow()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mathinmse/mathinmse.github.io
Lecture-19-Spectral-Solutions.ipynb
mit
[ "Lecture 19: Numerical Solutions to the Diffusion Equation (Spectral Method)\nWhat to Learn?\n\nThe assumptions of the spectral method\nThe possible choices for basis vectors\nThe linear independence of resulting ODEs\nOne way to code the FFT/IFFT\nFinite differencing of the amplitudes\nUseful data structures and pointer swapping using tuples\n\nWhat to do?\n\nSolve the diffusion equation using a spectral method.\nAnalyze two example implementations and determine the difference scheme.\n\nIntroduction\n\nIn this course the term spectral method refers to a numerical solution composed of a finite set of basis functions and the time evolution of their amplitudes. Practically, the choice of basis functions will satisfy the boundary conditions and the initial amplitudes will satisfy the initial conditions. The evolution of the amplitudes can then be written as a finite difference in time. Although other choices for basis functions are possible$^*$, we will focus on Fourier functions.\n$*$ Chebyshev & Fourier Spectral Methods, Boyd, John P., Springer 1989\nDeveloping the Logic of the Spectral Method\n\nThe spectral method assumes that the solution to a PDE (the function $c(x,t)$) can be represented as a series expansion that contains time dependent amplitudes (the $a_k(t)$ factors) and spatially varying functions (the $\\phi_k(x)$ terms are unit Fourier vectors). Note the similarity to the approach taken in the separation of variables method in the following discussion. \nTo begin, we write:\n$$\nc(x,t) = \\sum_{k=0}^N a_k(t)\\phi_k(x)\n$$\nChoosing the unit vectors from an orthonormal set (like Fourier) permits decoupling the summation into $N$ independent equations that can be solved simultaneously. Furthermore, the assumption of the orthonormal set allows the time dependence to be placed within the amplitudes so that the independent variables are separated. Therefore we refer to the $a_k(t)$ as the set of amplitudes with one $a_k$ for each of the $k$-basis vectors, $\\phi_k(x)$. There is no restriction on the value of $N$ although practical considerations that relate to diffusive problems and the \"smoothness\" of the solutions will require just a few $N$ terms to achieve acceptable accuracy. The form of $\\phi$ depends on the problem being solved; the boundary conditions and the initial conditions will affect this choice. \nOur example problem:\n$$\n\\frac{\\partial c(x,t)}{\\partial t} = \\frac{\\partial^2 c(x,t)}{\\partial x^2}\n$$\nwith the initial and boundary conditions:\n$$\nc(0,t) = 0\\\\\\\\\nc(L,t) = 0\\\\\\\\\nc(x,0) = c_0(x)\n$$\nThe problem requires that our boundary values for the $c(x,t)$ at $0$ and $L$ are zero. One possible choice is a series of $\\sin$ terms. This satisfies the boundary conditions and enforces periodicity of the solution.\n$$\n\\phi_k(x) = \\sin \\left( \\frac{k\\pi x}{L} \\right)\n$$ \nBegin by importing relevant libraries and defining the symbols we expect to use.", "import sympy as sp\nx, y, z, t = sp.symbols('x y z t')\nk, m, n = sp.symbols('k m n', integer=True)\nf, g, h = sp.symbols('f g h', cls=sp.Function)\nsp.var('a_k, phi, c', cls=sp.Function);\nsp.var('L', real=True);\nsp.init_printing();", "The assumed form of the solution is:", "elementK = sp.Eq(c(x,t),a_k(t)*sp.sin(k*sp.pi*x/L))\nelementK", "Proceed by substituting the series expansion into the PDE and performing the differentiations as defined:", "spaceDeriv = elementK.rhs.diff(x,2)\nspaceDeriv\n\ntimeDeriv = elementK.rhs.diff(t,1)\ntimeDeriv", "Our final differential equation represented in $a(t)$ is therefore:\n$$\n\\sum_{k=0}^N \\sin{\\left (\\frac{\\pi x}{L} k \\right )} \\frac{d a_k{\\left (t \\right )}}{d t} = - \\sum_{k=0}^N \\frac{\\pi^{2} k^{2}}{L^{2}} a_k{\\left (t \\right )} \\sin{\\left (\\frac{\\pi x}{L} k \\right )}\n$$\nAs a reminder, a $\\sin$ series is orthogonal over $0 < x < 2\\pi$ if the following integral is zero:", "# m and n are symbols defined as integers\nsinIntegral = sp.Integral(sp.sin(n*x)*sp.sin(m*x),(x,0,2*sp.pi))\nsinIntegral\n\nsinIntegral.doit()", "Because the integral is zero for $m \\neq n$, then the series on the LHS:\n$$\n\\sum_{k=0}^N \\sin{\\left (\\frac{\\pi x}{L} k \\right )} \\frac{d a_k{\\left (t \\right )}}{d t}\n$$\nis a linear system. This is also true for the RHS:\n$$\n- \\sum_{k=0}^N \\frac{\\pi^{2} k^{2}}{L^{2}} a_k{\\left (t \\right )} \\sin{\\left (\\frac{\\pi x}{L} k \\right )}\n$$\nThe principle of superposition permits us to split this summation into N independent ordinary differential equations, solve each one, and then sum the solutions to produce the answer to the original PDE. To continue developing a solution for these N independent ODEs it is necessary to analyze the amplitude ODE and define a differencing scheme. This will be illustrated using SymPy:", "ai, aip1 = sp.symbols('a^{i}_k, a^{i+1}_k')\ndt = sp.Symbol(r'\\Delta t')\n\ndifferenceEquation = sp.Eq((ai-aip1)/dt,((sp.pi**2*k**2*ai)/L**2))\ndifferenceEquation\n\nodeSolution = sp.solveset(differenceEquation,aip1)\nodeSolution", "The above solution results in the following difference scheme:\n$$\na^{i+1}_k = a^{i}_k \\left( 1 - \\frac{\\pi^{2} k^{2}}{L^{2}} \\Delta t \\right)\n$$\nThe timestep, $dt$, should be chosen small enough such that the $a_k$ decay at each timestep.\nImplementation of the Spectral Method\n\nAn annotated implementation of the spectral method is developed in the next section. Unlike the development above, we use the full Fourier series for the basis functions. This is a basic implementation that could be improved with the addition of helper and visualization functions. SciPy provides $\\sin$ and $\\cos$ transforms for other boundary conditions where Fourier may be inappropriate.", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt", "The following quantities are defined:\n\nnumPoints: the number of points in the grid\nL: the total length of the domain\ndt: the numerical timestep\nsteps: the number of timesteps to compute\nx: a vector containing the x-position for each grid point in the domain from $0$ to $L$ with numPoints entries in the vector.", "numPoints = 20\nL = 1.0\ndt = 0.0001\nsteps = 1000\n# we have access to np.pi for $\\pi$\nx = np.linspace(0,L,numPoints)", "The following vectors are defined:\n\nc_new will hold the $c(x,t)$ values at the start and end of the numerical computation\na_old will hold the amplitudes at the current timestep (i.e. the $a^{i}_k$)\na_new will hold the amplitudes at the next timestep (i.e. the $a^{i+1}_k$)\n\nThey are initialized to zero at the start of the calculation. Each of these vectors is the same shape as x to hold the results of the FFT and IFFT. They are declared as complex128 to accommodate different initial conditions where the FFT would produce complex valued results.", "c_new = np.zeros((numPoints), dtype='complex128')\na_old = np.zeros((numPoints), dtype='complex128')\na_new = np.zeros((numPoints), dtype='complex128')", "In previous example calculations a set of basis vectors was created to illustrate Fourier transforms graphically and the terms in a Fourier series. That is not part of this calculation, however this could be easily added if visualizing the results in more detail is desirable.\nThe difference equation requires values for the square of the Fourier numbers, $k^2$:\n$$\na^{i+1}_k = a^{i}_k \\left( 1 - \\frac{\\pi^{2} k^{2}}{L^{2}} \\Delta t \\right)\n$$\nTo ensure appropriate matching between the $a_k(t)$ and $k$ the helper function fftfreq() is used. Using fftfreq with fft and ifft ensures that the bookkeeping of amplitudes and $k$ values is done correctly. In the equation for the time rate of change in the amplitudes there is a constant $k^2$ term - computing this first avoids having to repeatedly compute the quantity each time step.\nNumPy arrays are called by reference so it is necessary to perform an element-by-element \"deep copy\" of the data from one array into another array when building the initial condition. The helper function np.copyto provides this capability. a_new is then filled with the amplitudes corresponding to the initial condition.", "k = np.fft.fftfreq(numPoints, d=L/(numPoints-1))\nk2 = k**2\ninitialCondition = np.sin(np.pi*x/L)\n\n# create an initial condition (this could be a simple function like x**2)\nnp.copyto(c_new, initialCondition)\n# transform it (dft or sin transform)\nnp.copyto(a_new,np.fft.fft(c_new))", "Instabilities will occur if the amplitudes do not decay at each timestep. The problem is that the condition depends on the wavenumber - so a suitable $\\Delta t$ must be chosen that satisfies the most restrictive condition for the largest wavenumber. Using a Boolean expression it is possible to check to see if all of the wavenumbers result in a factor less than one:", "(dt*np.pi**2*k2)/L**2 < 1", "If any of the results are False then the numerical calculation will not converge, if they are all True then it is possible to complete the numerical calculation. The next code block performs the numerical iterations of the amplitudes. First the pointers to a_new and a_old are swapped and then a_new is filled with the new values based on the a_old values. This sequence of operations is performed for the number of timesteps given in steps:", "for i in range(steps):\n # swap pointers\n a_new, a_old = a_old, a_new\n # find new amplitudes\n np.copyto(a_new, a_old*(1-(dt*np.pi**2*k2)/L**2))", "When the requested number of steps have been computed, we use the inverse Fourier transform to compute the concentration profile and store those results in c_new:", "# inverse transform it\nnp.copyto(c_new, np.fft.ifft(a_new))", "After the computation, the concentration profile is displayed with a helper function:", "def makePlot():\n fig = plt.figure()\n axes = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # left, bottom, width, height (range 0 to 1)\n axes.plot(x, c_new.real, 'r')\n # Setting the y-limit cleans up the plot.\n axes.set_ylim([0.0,1.0])\n axes.set_xlabel('Distance $x$')\n axes.set_ylabel('Concentration $c(x,t)$')\n axes.set_title('Concentration Profile solved by Spectral Method')\n plt.show()\n return\n\nmakePlot()", "Another solution scheme could preserve the values of the concentration. I choose not to do that here for simplicity. Doing so would permit a small animation of the diffusion process, however. This is left to the student to implement.\nReading Assignments and Practice\n\nAnalyze the spectral difference schemes in the following two examples. For reference and attribution the codes can be found here. The materials are provided under a Creative Commons license with attribution to the original authors whose names can be found at the above link.", "# %load Heat_Eq_1D_Spectral_BE.py\n#!/usr/bin/env python\n\"\"\"\nSolving Heat Equation using pseudospectral methods with Backwards Euler:\nu_t= \\alpha*u_xx\nBC = u(0)=0 and u(2*pi)=0 (Periodic)\nIC=sin(x)\n\"\"\"\n\nimport math\nimport numpy\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\nfrom matplotlib.ticker import LinearLocator\n\n# Grid\nN = 64; h = 2*math.pi/N; x = [h*i for i in xrange(1,N+1)]\n\n# Initial conditions\nv = [math.sin(y) for y in x]\nalpha = 0.5\nt = 0 \ndt = .001 #Timestep size\n\n# (ik)^2 Vector\nI = complex(0,1)\nk = numpy.array([I*n for n in range(0,N/2) + [0] + range(-N/2+1,0)])\nk2=k**2;\n\n# Setting up Plot\ntmax = 5.0; tplot = 0.1\nplotgap= int(round(tplot/dt))\nnplots = int(round(tmax/tplot))\ndata = numpy.zeros((nplots+1,N))\ndata[0,:] = v\ntdata = [t]\n\nfor i in xrange(nplots):\n v_hat = numpy.fft.fft(v) # convert to fourier space\n for n in xrange(plotgap):\n v_hat = v_hat / (1-dt*alpha*k2) # backward Euler timestepping\n\n v = numpy.fft.ifft(v_hat) # convert back to real space\n data[i+1,:] = numpy.real(v) # records data\n\n t = t+plotgap*dt # records real time\n tdata.append(t)\n\n# Plot using mesh\nxx,tt = (numpy.mat(A) for A in (numpy.meshgrid(x,tdata)))\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\nsurf = ax.plot_surface(xx, tt, data,rstride=1, cstride=1, cmap=cm.jet,\n linewidth=0, antialiased=False)\nfig.colorbar(surf, shrink=0.5, aspect=5)\nplt.xlabel('x')\nplt.ylabel('t')\nplt.show()", "# %load Heat_Eq_1D_Spectral_FE.py\n#!/usr/bin/env python\n\"\"\"\nSolving Heat Equation using pseudo-spectral and Forward Euler\nu_t= \\alpha*u_xx\nBC= u(0)=0, u(2*pi)=0\nIC=sin(x)\n\"\"\"\n\nimport math\nimport numpy\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\nfrom matplotlib.ticker import LinearLocator\n\n# Grid\nN = 64 # Number of steps\nh = 2*math.pi/N # step size\nx = h*numpy.arange(0,N) # discretize x-direction\n\nalpha = 0.5 # Thermal Diffusivity constant\nt = 0\ndt = .001\n\n# Initial conditions \nv = numpy.sin(x)\nI = complex(0,1)\nk = numpy.array([I*y for y in range(0,N/2) + [0] + range(-N/2+1,0)])\nk2=k**2;\n\n# Setting up Plot\ntmax = 5; tplot = .1;\nplotgap = int(round(tplot/dt))\nnplots = int(round(tmax/tplot))\n\ndata = numpy.zeros((nplots+1,N))\ndata[0,:] = v\ntdata = [t]\n\nfor i in xrange(nplots):\n v_hat = numpy.fft.fft(v)\n\n for n in xrange(plotgap):\n v_hat = v_hat+dt*alpha*k2*v_hat # FE timestepping\n\n v = numpy.real(numpy.fft.ifft(v_hat)) # back to real space\n data[i+1,:] = v\n\n # real time vector\n t = t+plotgap*dt\n tdata.append(t)\n\n# Plot using mesh\nxx,tt = (numpy.mat(A) for A in (numpy.meshgrid(x,tdata)))\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\nsurf = ax.plot_surface(xx, tt, data,rstride=1, cstride=1, cmap=cm.jet,\n linewidth=0, antialiased=False)\nfig.colorbar(surf, shrink=0.5, aspect=5)\nplt.xlabel('x')\nplt.ylabel('t')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
iurilarosa/thesis
codici/Archiviati/Plots/plot funzioni.ipynb
gpl-3.0
[ "import numpy\nimport math\nimport pylab", "NS\n$$h(t) \\approx_0 \\frac{16 \\pi^2 G}{c^4 r}I \\epsilon (\\nu_0 + \\dot{\\nu}t)^2 cos(2\\pi(\\nu_0+\\dot{\\nu}t)t)$$", "G = 6.67408*1e-11\nc = 299792458\nr = 2.4377e+20\nI = 1e38\nepsilon = 1e-4\nnu0 = 1\nnudot = -5e-10\n\ncost = 16*math.pi**2*G/(c**4*r)*I*epsilon\nprint(cost)\nnmesi = 9\ntobs = nmesi*30*24*60*60\nprint(tobs)\ntempi = numpy.linspace(0,10,100000)\n\nleggeOraria = nu0+nudot*tempi\nampiezza = cost*numpy.power(leggeOraria,2)\n\nonda = ampiezza*numpy.cos(2*math.pi*leggeOraria*tempi)\n\n%matplotlib notebook\n#pylab.plot(tempi,ampiezza)\npylab.plot(tempi,onda)\npylab.show()\n\nt = numpy.linspace(1,24,100000)\nampiezza = 1e-19\nsd = 1e-9\nfreqIniz = 1\nondaNS = ampiezza*(sd/(freqIniz-sd*t))**(1/2)*numpy.cos(2*(freqIniz-sd*t)*t)\n\n%matplotlib notebook\npylab.plot(t,ondaNS)\npylab.show()\n\nampPerTempo = ampiezza*(sd/(freqIniz-sd*t))**(1/2)\n\n%matplotlib notebook\npylab.plot(t,ampPerTempo)\npylab.show()", "chirp", "t = numpy.linspace(1,2,1000)\nampiezza = 1e-2\ntcoal = 2.05\nfreqIniz = 20\nondaChirp = ampiezza*freqIniz*numpy.power((1-t/tcoal),-2/8)*numpy.cos(freqIniz*numpy.power((1-t/tcoal),-3/8)*t)\n\n%matplotlib notebook\npylab.plot(t,ondaChirp)\npylab.show()\n\n\nampPerTempo = ampiezza*freqIniz*numpy.power((1-t/tcoal),-2/8)\n\n%matplotlib notebook\npylab.plot(t,ampPerTempo)\npylab.show()" ]
[ "code", "markdown", "code", "markdown", "code" ]
zzsza/TIL
python/dask.ipynb
mit
[ "Dask\n\nDask official documentation\nCan be integrated with numpy, pandas, sklearn\n\nDask is a flexible parallel computing library for analytic computing.\n\n\nDask is composed of two components:\n\nDynamic task scheduling optimized for computation. This is similar to Airflow, Luigi, Celery, or Make, but optimized for interactive computational workloads.\n“Big Data” collections like parallel arrays, dataframes, and lists that extend common interfaces like NumPy, Pandas, or Python iterators to larger-than-memory or distributed environments. These parallel collections run on top of the dynamic task schedulers.\n\n\n\n\nRunning pip3 install dask installs only the dask basics. \nInstallation\n\npip install dask[complete]: Install everything\npip install dask[array]: Install dask and numpy\npip install dask[bag]: Install dask and cloudpickle\npip install dask[dataframe]: Install dask, numpy, and pandas\npip install dask: Install only dask, which depends only on the standard\n library. This is appropriate if you only want the task schedulers.\n\nWe do this so that users of the lightweight core dask scheduler aren’t required\nto download the more exotic dependencies of the collections (numpy, pandas,\netc.).", "import dask \nimport pandas as pd\n\ndf = pd.read_csv('./user_log_2018_01_01.csv')\n\ndf\n\nimport dask.dataframe as dd\n\ndask_df = dd.read_csv('./user_log_2018_01_01.csv')\n\ndask_df\n\ndir(dask_df)\n\ndask_df[\"0\"]\n\ndask_df.index\n\nlen(dask_df.index)\n\ndask_df.info", "Setup\n\n2 families of task scheduler\n1) Single machine scheduler : basic features, default, does not scale\n2) Distributed scheduler : sophisticated, more features, a bit more effort to set up", "dask_df.head()\n\ndask_df[\"user_id\"].sum().compute()\n\ndask_df[\"event_cnt\"].sum().compute()\n\ndask_df[dask_df[\"event_cnt\"]>1].sum().compute()\n\nfrom dask.distributed import Client\n\nclient2 = Client()\n# client = Client(process=False)\n\ndask_df[\"event_cnt\"].sum().compute(scheduler='client2')", "Single Machine\nDefault Scheduler : no-setup, local threads or processes for larger than memory processing\nDask.distributed : newer system on a single machine, advanced features\n\n\nDistributed computing\nManual Setup : dask-scheduler and dask-worker\nSSH \nHigh Performance Computers\nKubernetes\nPython API\nDocker\nCloud", "# with\nwith dask.config.set(scheduler='threads'):\n x.compute()\n y.compute()\n\n# global setting\ndask.config.set(scheduler='threads')", "LocalCluster", "from dask.distributed import Client, LocalCluster\n\ncluster = LocalCluster()", "class distributed.deploy.local.LocalCluster(n_workers=None, threads_per_worker=None, processes=True, loop=None, start=None, ip=None, scheduler_port=0, silence_logs=30, diagnostics_port=8787, services={}, worker_services={}, service_kwargs=None, asynchronous=False, **worker_kwargs)", "client = Client(cluster)\n\ncluster\n\nclient\n\n### Add a new worker to the cluster\n\nw = cluster.start_worker(ncores=2)\n\ncluster.stop_worker(w)", "Command line\ndask-worker tcp://192.0.0.100:8786\nSSH\npip3 install paramiko\n- dask-ssh 192.168.0.1 192.168.0.2 192.168.0.3 192.168.0.4\n- dask-ssh 192.168.0.{1,2,3,4}\n- dask-ssh --hostfile hostfile.txt" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
snegirigens/DLND
embeddings/Skip-Gram_word2vec.ipynb
mit
[ "Skip-gram word2vec\nIn this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.\nReadings\nHere are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.\n\nA really good conceptual overview of word2vec from Chris McCormick \nFirst word2vec paper from Mikolov et al.\nNIPS paper with improvements for word2vec also from Mikolov et al.\nAn implementation of word2vec from Thushan Ganegedara\nTensorFlow word2vec tutorial\n\nWord embeddings\nWhen you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. \n\nTo solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the \"on\" input unit.\n\nInstead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example \"heart\" is encoded as 958, \"mind\" as 18094. Then to get hidden layer values for \"heart\", you just take the 958th row of the embedding matrix. 
This process is called an embedding lookup and the number of hidden units is the embedding dimension.\n<img src='assets/tokenize_lookup.png' width=500>\nThere is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.\nEmbeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.\nWord2Vec\nThe word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as \"black\", \"white\", and \"red\" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.\n<img src=\"assets/word2vec_architectures.png\" width=\"500\">\nIn this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.\nFirst up, importing packages.", "import time\n\nimport numpy as np\nimport tensorflow as tf\n\nimport utils", "Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. 
Then you can extract it and delete the archive file to save storage space.", "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport zipfile\n\ndataset_folder_path = 'data'\ndataset_filename = 'text8.zip'\ndataset_name = 'Text8 Dataset'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(dataset_filename):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:\n urlretrieve(\n 'http://mattmahoney.net/dc/text8.zip',\n dataset_filename,\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with zipfile.ZipFile(dataset_filename) as zip_ref:\n zip_ref.extractall(dataset_folder_path)\n \nwith open('data/text8') as f:\n text = f.read()", "Preprocessing\nHere I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.", "words = utils.preprocess(text)\nprint(words[:30])\n\nprint(\"Total words: {}\".format(len(words)))\nprint(\"Unique words: {}\".format(len(set(words))))", "And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word (\"the\") is given the integer 0 and the next most frequent is 1 and so on.
The words are converted to integers and stored in the list int_words.", "vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)\nint_words = [vocab_to_int[word] for word in words]", "Subsampling\nWords that show up often such as \"the\", \"of\", and \"for\" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by \n$$ P(w_i) = 1 - \\sqrt{\\frac{t}{f(w_i)}} $$\nwhere $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.\nI'm going to leave this up to you as an exercise. This is more of a programming challenge than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.\n\nExercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.", "## Your code here\nfrom collections import Counter\nimport random\n\nword_counts = Counter(int_words)\n\nt = 1e-5\ntotal_words = len(int_words)\nfrequency = { word : float(count) / total_words for word, count in word_counts.items() }\np_drop = {word : 1 - np.sqrt(float(t)/frequency[word]) for word in word_counts }\ntrain_words = [w for w in int_words if p_drop[w] < random.random()] # The final subsampled word list\n\n#print (len(train_words))\n#print(train_words[:30])", "Making batches\nNow that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
\nFrom Mikolov et al.: \n\"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels.\"\n\nExercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.", "def get_target(words, idx, window_size=5):\n ''' Get a list of words in a window around an index. '''\n \n # Your code here\n R = random.randint (1, window_size) # or window_size + 1?\n start = idx - R if idx >= R else 0\n end = idx + R + 1\n return list (set(words[start:idx] + words[idx+1:end]))", "Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. 
This is a generator function, by the way, which helps save memory.", "def get_batches(words, batch_size, window_size=5):\n ''' Create a generator of word batches as a tuple (inputs, targets) '''\n \n n_batches = len(words)//batch_size\n \n # only full batches\n words = words[:n_batches*batch_size]\n \n for idx in range(0, len(words), batch_size):\n x, y = [], []\n batch = words[idx:idx+batch_size]\n for ii in range(len(batch)):\n batch_x = batch[ii]\n batch_y = get_target(batch, ii, window_size)\n y.extend(batch_y)\n x.extend([batch_x]*len(batch_y))\n yield x, y\n ", "Building the graph\nFrom Chris McCormick's blog, we can see the general structure of our network.\n\nThe input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.\nThe idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.\nI'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.\n\nExercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.", "train_graph = tf.Graph()\nwith train_graph.as_default():\n# with tf.name_scope('input'):\n inputs = tf.placeholder (tf.int32, shape=[None], name='inputs')\n# with tf.name_scope('targets'):\n labels = tf.placeholder (tf.int32, shape=[None,None], name='labels')", "Embedding\nThe embedding matrix has a size of the number of words by the number of units in the hidden layer.
So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \\times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.\n\nExercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using tf.random_uniform.", "n_vocab = len(int_to_vocab)\nn_embedding = 200 # Number of embedding features \nwith train_graph.as_default():\n# with tf.name_scope('embeddings'):\n embedding = tf.Variable (tf.random_uniform ([n_vocab, n_embedding], -1.0, 1.0, dtype=tf.float32), name='embedding') # create embedding weight matrix here\n embed = tf.nn.embedding_lookup (embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output\n tf.summary.histogram ('embedding', embedding)", "Negative sampling\nFor every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called \"negative sampling\". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.\n\nExercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss.
Be sure to read the documentation to figure out how it works.", "# Number of negative labels to sample\nn_sampled = 100\nwith train_graph.as_default():\n softmax_w = tf.Variable (tf.truncated_normal ([n_vocab, n_embedding], stddev=0.1, dtype=tf.float32), name='softmax_w') # create softmax weight matrix here\n softmax_b = tf.Variable (tf.zeros (n_vocab, dtype=tf.float32), name='softmax_b') # create softmax biases here\n \n tf.summary.histogram ('softmax_w', softmax_w)\n tf.summary.histogram ('softmax_b', softmax_b)\n \n # Calculate the loss using negative sampling\n loss = tf.nn.sampled_softmax_loss (softmax_w, softmax_b, labels, embed, n_sampled, n_vocab, name='loss')\n \n cost = tf.reduce_mean(loss)\n optimizer = tf.train.AdamOptimizer().minimize(cost)\n tf.summary.scalar ('cost', cost)", "Validation\nThis code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.", "with train_graph.as_default():\n ## From Thushan Ganegedara's implementation\n valid_size = 16 # Random set of words to evaluate similarity on.\n valid_window = 100\n # pick 8 samples from (0,100) and (1000,1100) each ranges. 
lower id implies more frequent \n valid_examples = np.array(random.sample(range(valid_window), valid_size//2))\n valid_examples = np.append(valid_examples, \n random.sample(range(1000,1000+valid_window), valid_size//2))\n\n valid_dataset = tf.constant(valid_examples, dtype=tf.int32)\n \n # We use the cosine distance:\n norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))\n normalized_embedding = embedding / norm\n valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)\n similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))\n\n# If the checkpoints directory doesn't exist:\n!mkdir checkpoints", "Training\nBelow is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.", "epochs = 1\nbatch_size = 1000\nwindow_size = 10\n\nwith train_graph.as_default():\n saver = tf.train.Saver()\n merged = tf.summary.merge_all()\n\nwith tf.Session(graph=train_graph) as sess:\n iteration = 1\n loss = 0\n sess.run(tf.global_variables_initializer())\n train_writer = tf.summary.FileWriter (\"./logs/2/train\", sess.graph)\n test_writer = tf.summary.FileWriter (\"./logs/2/test\")\n \n for e in range(1, epochs+1):\n batches = get_batches(train_words, batch_size, window_size)\n start = time.time()\n for x, y in batches:\n \n feed = {inputs: x,\n labels: np.array(y)[:, None]}\n summary, train_loss, _ = sess.run([merged, cost, optimizer], feed_dict=feed)\n \n loss += train_loss\n \n if iteration % 100 == 0: \n end = time.time()\n train_writer.add_summary (summary, iteration)\n print(\"Epoch {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Avg. 
Training loss: {:.4f}\".format(loss/100),\n \"{:.4f} sec/batch\".format((end-start)/100))\n loss = 0\n start = time.time()\n \n if iteration % 1000 == 0:\n ## From Thushan Ganegedara's implementation\n # note that this is expensive (~20% slowdown if computed every 500 steps)\n sim = similarity.eval()\n for i in range(valid_size):\n valid_word = int_to_vocab[valid_examples[i]]\n top_k = 8 # number of nearest neighbors\n nearest = (-sim[i, :]).argsort()[1:top_k+1]\n log = 'Nearest to %s:' % valid_word\n for k in range(top_k):\n close_word = int_to_vocab[nearest[k]]\n log = '%s %s,' % (log, close_word)\n print(log)\n \n iteration += 1\n save_path = saver.save(sess, \"checkpoints/text8.ckpt\")\n embed_mat = sess.run(normalized_embedding)", "Restore the trained network if you need to:", "with train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n embed_mat = sess.run(embedding)", "Visualizing the word vectors\nBelow we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt\nfrom sklearn.manifold import TSNE\n\nviz_words = 500\ntsne = TSNE()\nembed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])\n\nfig, ax = plt.subplots(figsize=(14, 14))\nfor idx in range(viz_words):\n plt.scatter(*embed_tsne[idx, :], color='steelblue')\n plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AllenDowney/ModSimPy
notebooks/chap02.ipynb
mit
[ "Modeling and Simulation in Python\nChapter 2\nCopyright 2017 Allen Downey\nLicense: Creative Commons Attribution 4.0 International", "# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim library\nfrom modsim import *\n\n# set the random number generator\nnp.random.seed(7)\n\n# If this cell runs successfully, it produces no output.", "Modeling a bikeshare system\nWe'll start with a State object that represents the number of bikes at each station.\nWhen you display a State object, it lists the state variables and their values:", "bikeshare = State(olin=10, wellesley=2)", "We can access the state variables using dot notation.", "bikeshare.olin\n\nbikeshare.wellesley", "Exercise: What happens if you spell the name of a state variable wrong? Edit the previous cell, change the spelling of wellesley, and run the cell again.\nThe error message uses the word \"attribute\", which is another name for what we are calling a state variable. \nExercise: Add a third attribute called babson with initial value 0, and display the state of bikeshare again.\nUpdating\nWe can use the update operators += and -= to change state variables.", "bikeshare.olin -= 1", "If we display bikeshare, we should see the change.", "bikeshare", "Of course, if we subtract a bike from olin, we should add it to wellesley.", "bikeshare.wellesley += 1\nbikeshare", "Functions\nWe can take the code we've written so far and encapsulate it in a function.", "def bike_to_wellesley():\n bikeshare.olin -= 1\n bikeshare.wellesley += 1", "When you define a function, it doesn't run the statements inside the function, yet. 
When you call the function, it runs the statements inside.", "bike_to_wellesley()\nbikeshare", "One common error is to omit the parentheses, which has the effect of looking up the function, but not calling it.", "bike_to_wellesley", "The output indicates that bike_to_wellesley is a function defined in a \"namespace\" called __main__, but you don't have to understand what that means.\nExercise: Define a function called bike_to_olin that moves a bike from Wellesley to Olin. Call the new function and display bikeshare to confirm that it works.", "# Solution goes here\n\n# Solution goes here", "Conditionals\nmodsim.py provides flip, which takes a probability and returns either True or False, which are special values defined by Python.\nThe Python function help looks up a function and displays its documentation.", "help(flip)", "In the following example, the probability is 0.7 or 70%. If you run this cell several times, you should get True about 70% of the time and False about 30%.", "flip(0.7)", "In the following example, we use flip as part of an if statement. If the result from flip is True, we print heads; otherwise we do nothing.", "if flip(0.7):\n print('heads')", "With an else clause, we can print heads or tails depending on whether flip returns True or False.", "if flip(0.7):\n print('heads')\nelse:\n print('tails')", "Step\nNow let's get back to the bikeshare state. Again let's start with a new State object.", "bikeshare = State(olin=10, wellesley=2)", "Suppose that in any given minute, there is a 50% chance that a student picks up a bike at Olin and rides to Wellesley. We can simulate that like this.", "if flip(0.5):\n bike_to_wellesley()\n print('Moving a bike to Wellesley')\n\nbikeshare", "And maybe at the same time, there is also a 40% chance that a student at Wellesley rides to Olin.", "if flip(0.4):\n bike_to_olin()\n print('Moving a bike to Olin')\n\nbikeshare", "We can wrap that code in a function called step that simulates one time step. 
In any given minute, a student might ride from Olin to Wellesley, from Wellesley to Olin, or both, or neither, depending on the results of flip.", "def step():\n if flip(0.5):\n bike_to_wellesley()\n print('Moving a bike to Wellesley')\n \n if flip(0.4):\n bike_to_olin()\n print('Moving a bike to Olin')", "Since this function takes no parameters, we call it like this:", "step()\nbikeshare", "Parameters\nAs defined in the previous section, step is not as useful as it could be, because the probabilities 0.5 and 0.4 are \"hard coded\".\nIt would be better to generalize this function so it takes the probabilities p1 and p2 as parameters:", "def step(p1, p2):\n if flip(p1):\n bike_to_wellesley()\n print('Moving a bike to Wellesley')\n \n if flip(p2):\n bike_to_olin()\n print('Moving a bike to Olin')", "Now we can call it like this:", "step(0.5, 0.4)\nbikeshare", "Exercise: At the beginning of step, add a print statement that displays the values of p1 and p2. Call it again with values 0.3 and 0.2, and confirm that the values of the parameters are what you expect.", "# Solution goes here", "For loop\nBefore we go on, I'll redefine step without the print statements.", "def step(p1, p2):\n if flip(p1):\n bike_to_wellesley()\n \n if flip(p2):\n bike_to_olin()", "And let's start again with a new State object:", "bikeshare = State(olin=10, wellesley=2)", "We can use a for loop to move 4 bikes from Olin to Wellesley.", "for i in range(4):\n bike_to_wellesley()\n \nbikeshare", "Or we can simulate 4 random time steps.", "for i in range(4):\n step(0.3, 0.2)\n \nbikeshare", "If each step corresponds to a minute, we can simulate an entire hour like this.", "for i in range(60):\n step(0.3, 0.2)\n\nbikeshare", "After 60 minutes, you might see that the number of bikes at Olin is negative.
We'll fix that problem in the next notebook.\nBut first, we want to plot the results.\nTimeSeries\nmodsim.py provides an object called a TimeSeries that can contain a sequence of values changing over time.\nWe can create a new, empty TimeSeries like this:", "results = TimeSeries()", "And we can add a value to the TimeSeries like this:", "results[0] = bikeshare.olin\nresults", "The 0 in brackets is an index that indicates that this value is associated with time step 0.\nNow we'll use a for loop to save the results of the simulation. I'll start one more time with a new State object.", "bikeshare = State(olin=10, wellesley=2)", "Here's a for loop that runs 10 steps and stores the results.", "for i in range(10):\n step(0.3, 0.2)\n results[i] = bikeshare.olin", "Now we can display the results.", "results", "A TimeSeries is a specialized version of a Pandas Series, so we can use any of the functions provided by Series, including several that compute summary statistics:", "results.mean()\n\nresults.describe()", "You can read the documentation of Series here.\nPlotting\nWe can also plot the results like this.", "plot(results, label='Olin')\n\ndecorate(title='Olin-Wellesley Bikeshare',\n xlabel='Time step (min)', \n ylabel='Number of bikes')\n\nsavefig('figs/chap02-fig01.pdf')", "decorate, which is defined in the modsim library, adds a title and labels the axes.", "help(decorate)", "savefig() saves a figure in a file.", "help(savefig)", "The suffix of the filename indicates the format you want. 
This example saves the current figure in a PDF file.\nExercise: Wrap the code from this section in a function named run_simulation that takes three parameters, named p1, p2, and num_steps.\nIt should:\n\nCreate a TimeSeries object to hold the results.\nUse a for loop to run step the number of times specified by num_steps, passing along the specified values of p1 and p2.\nAfter each step, it should save the number of bikes at Olin in the TimeSeries.\nAfter the for loop, it should plot the results and\nDecorate the axes.\n\nTo test your function:\n\nCreate a State object with the initial state of the system.\nCall run_simulation with appropriate parameters.\nSave the resulting figure.\n\nOptional:\n\nExtend your solution so it creates two TimeSeries objects, keeps track of the number of bikes at Olin and at Wellesley, and plots both series at the end.", "# Solution goes here\n\n# Solution goes here", "Opening the hood\nThe functions in modsim.py are built on top of several widely-used Python libraries, especially NumPy, SciPy, and Pandas. These libraries are powerful but can be hard to use. The intent of modsim.py is to give you the power of these libraries while making it easy to get started.\nIn the future, you might want to use these libraries directly, rather than using modsim.py. So we will pause occasionally to open the hood and let you see how modsim.py works.\nYou don't need to know anything in these sections, so if you are already feeling overwhelmed, you might want to skip them. But if you are curious, read on.\nPandas\nThis chapter introduces two objects, State and TimeSeries. 
Both are based on the Series object defined by Pandas, which is a library primarily used for data science.\nYou can read the documentation of the Series object here\nThe primary differences between TimeSeries and Series are:\n\n\nI made it easier to create a new, empty Series while avoiding a confusing inconsistency.\n\n\nI provide a function so the Series looks good when displayed in Jupyter.\n\n\nI provide a function called set that we'll use later.\n\n\nState has all of those capabilities; in addition, it provides an easier way to initialize state variables, and it provides functions called T and dt, which will help us avoid a confusing error later.\nPyplot\nThe plot function in modsim.py is based on the plot function in Pyplot, which is part of Matplotlib. You can read the documentation of plot here.\ndecorate provides a convenient way to call the pyplot functions title, xlabel, and ylabel, and legend. It also avoids an annoying warning message if you try to make a legend when you don't have any labelled lines.", "help(decorate)", "NumPy\nThe flip function in modsim.py uses NumPy's random function to generate a random number between 0 and 1.\nYou can get the source code for flip by running the following cell.", "source_code(flip)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tuanavu/coursera-university-of-washington
machine_learning/1_machine_learning_foundations/assignment/week6/Deep Features for Image Retrieval.ipynb
mit
[ "Building an image retrieval system with deep features\nFire up GraphLab Create", "import graphlab", "Load the CIFAR-10 dataset\nWe will use a popular benchmark dataset in computer vision called CIFAR-10. \n(We've reduced the data to just 4 categories = {'cat','bird','automobile','dog'}.)\nThis dataset is already split into a training set and test set. In this simple retrieval example, there is no notion of \"testing\", so we will only use the training data.", "image_train = graphlab.SFrame('image_train_data/')", "Computing deep features for our images\nThe two lines below allow us to compute deep features. This computation takes a little while, so we have already computed them and saved the results as a column in the data you loaded. \n(Note that if you would like to compute such deep features and have a GPU on your machine, you should use the GPU enabled GraphLab Create, which will be significantly faster for this task.)", "#deep_learning_model = graphlab.load_model('http://s3.amazonaws.com/GraphLab-Datasets/deeplearning/imagenet_model_iter45')\n#image_train['deep_features'] = deep_learning_model.extract_features(image_train)\n\nimage_train.head()", "Train a nearest-neighbors model for retrieving images using deep features\nWe will now build a simple image retrieval system that finds the nearest neighbors for any image.", "knn_model = graphlab.nearest_neighbors.create(image_train,features=['deep_features'],\n label='id')", "Use image retrieval model with deep features to find similar images\nLet's find similar images to this cat picture.", "graphlab.canvas.set_target('ipynb')\ncat = image_train[18:19]\ncat['image'].show()\n\nknn_model.query(cat)", "We are going to create a simple function to view the nearest neighbors to save typing:", "def get_images_from_ids(query_result):\n return image_train.filter_by(query_result['reference_label'],'id')\n\ncat_neighbors = get_images_from_ids(knn_model.query(cat))\n\ncat_neighbors['image'].show()", "Very cool results 
showing similar cats.\nFinding similar images to a car", "car = image_train[8:9]\ncar['image'].show()\n\nget_images_from_ids(knn_model.query(car))['image'].show()", "Just for fun, let's create a lambda to find and show nearest neighbor images", "show_neighbors = lambda i: get_images_from_ids(knn_model.query(image_train[i:i+1]))['image'].show()\n\nshow_neighbors(8)\n\nshow_neighbors(26)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ledeprogram/algorithms
class6/donow/wang_zhizhou_6_donow.ipynb
gpl-3.0
[ "1. Import the necessary packages to read in the data, plot, and create a linear regression model", "import pandas as pd\n%matplotlib inline\nimport matplotlib.pyplot as plt \nimport statsmodels.formula.api as smf ", "2. Read in the hanford.csv file", "df = pd.read_csv(\"../data/hanford.csv\")\ndf.head()", "<img src=\"images/hanford_variables.png\">\n3. Calculate the basic descriptive statistics on the data", "df.describe()", "4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?", "df.corr()\n\ndf.plot(kind='scatter',x='Exposure',y='Mortality')\n\nprint('Yes.')", "5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure", "lm = smf.ols(formula=\"Mortality~Exposure\",data=df).fit() \nlm.params\n\nintercept, slope = lm.params\n\ndf.plot(kind='scatter',x='Exposure',y='Mortality',color='steelblue',linewidth=0)\nplt.plot(df[\"Exposure\"],slope*df[\"Exposure\"]+intercept,\"-\",color=\"red\")", "6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)", "lm.summary()\n\nprint(\"R^2 equals to 0.858.\")", "7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10", "print(\"The mortality rate of exposure 10 is\", 10*slope+intercept)\n\ndef get_mr(exposure):\n rate = exposure*slope + intercept\n return rate\n\nget_mr(10)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/iree
samples/dynamic_shapes/dynamic_shapes.ipynb
apache-2.0
[ "Copyright 2021 The IREE Authors", "#@title Licensed under the Apache License v2.0 with LLVM Exceptions.\n# See https://llvm.org/LICENSE.txt for license information.\n# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception", "Dynamic Shapes\nThis notebook\n\nCreates a TensorFlow program with dynamic shapes\nImports that program into IREE's compiler\nCompiles the imported program to an IREE VM bytecode module\nTests running the compiled VM module using IREE's runtime\nDownloads compilation artifacts for use with the native (C API) sample application", "#@title General setup\n\nimport os\nimport tempfile\n\nARTIFACTS_DIR = os.path.join(tempfile.gettempdir(), \"iree\", \"colab_artifacts\")\nos.makedirs(ARTIFACTS_DIR, exist_ok=True)\nprint(f\"Using artifacts directory '{ARTIFACTS_DIR}'\")", "Create a program using TensorFlow and import it into IREE\nNOTE: as in other domains, providing more information to a compiler allows it\nto generate more efficient code. As a general rule, the slowest varying\ndimensions of program data like batch index or timestep are safer to treat as\ndynamic than faster varying dimensions like image x/y/channel. See\nthis paper for a discussion of the\nchallenges imposed by dynamic shapes and one project's approach to addressing\nthem.", "#@title Define a sample TensorFlow module using dynamic shapes\n\nimport tensorflow as tf\n\nclass DynamicShapesModule(tf.Module):\n # reduce_sum_1d (dynamic input size, static output size)\n # e.g. [1, 2, 3] -> 6\n @tf.function(input_signature=[tf.TensorSpec([None], tf.int32)])\n def reduce_sum_1d(self, values):\n return tf.math.reduce_sum(values)\n \n # reduce_sum_2d (partially dynamic input size, static output size)\n # e.g. [[1, 2, 3], [10, 20, 30]] -> [11, 22, 33]\n @tf.function(input_signature=[tf.TensorSpec([None, 3], tf.int32)])\n def reduce_sum_2d(self, values):\n return tf.math.reduce_sum(values, 0)\n\n # add_one (dynamic input size, dynamic output size)\n # e.g. 
[1, 2, 3] -> [2, 3, 4]\n @tf.function(input_signature=[tf.TensorSpec([None], tf.int32)])\n def add_one(self, values):\n return tf.math.add(values, tf.constant(1, dtype=tf.int32))\n\n%%capture\n!python -m pip install iree-compiler iree-tools-tf -f https://github.com/google/iree/releases\n\n#@title Import the TensorFlow program into IREE as MLIR\n\nfrom IPython.display import clear_output\n\nfrom iree.compiler import tf as tfc\n\ncompiler_module = tfc.compile_module(\n DynamicShapesModule(), import_only=True, \n output_mlir_debuginfo=False)\nclear_output() # Skip over TensorFlow's output.\n\n# Print the imported MLIR to see how the compiler views this program.\nprint(\"Dynamic Shapes MLIR:\\n```\\n%s```\\n\" % compiler_module.decode(\"utf-8\"))\n\n# Save the imported MLIR to disk.\nimported_mlir_path = os.path.join(ARTIFACTS_DIR, \"dynamic_shapes.mlir\")\nwith open(imported_mlir_path, \"wt\") as output_file:\n output_file.write(compiler_module.decode(\"utf-8\"))\nprint(f\"Wrote MLIR to path '{imported_mlir_path}'\")", "Test the imported program\nNote: you can stop after each step and use intermediate outputs with other tools outside of Colab.\nSee the README for more details and example command line instructions.\n\nThe \"imported MLIR\" can be used by IREE's generic compiler tools\nThe \"flatbuffer blob\" can be saved and used by runtime applications\n\nThe specific point at which you switch from Python to native tools will depend on your project.", "%%capture\n!python -m pip install iree-compiler -f https://github.com/google/iree/releases\n\n#@title Compile the imported MLIR further into an IREE VM bytecode module\n\nfrom iree.compiler import compile_str\n\n# Note: we'll use the cpu (LLVM) backend since it has the best support\n# for dynamic shapes among our compiler targets.\n\nflatbuffer_blob = compile_str(compiler_module, target_backends=[\"cpu\"], input_type=\"mhlo\")\n\n# Save the compiled program to disk.\nflatbuffer_path = os.path.join(ARTIFACTS_DIR, 
\"dynamic_shapes_cpu.vmfb\")\nwith open(flatbuffer_path, \"wb\") as output_file:\n output_file.write(flatbuffer_blob)\nprint(f\"Wrote compiled program to path '{flatbuffer_path}'\")\n\n%%capture\n!python -m pip install iree-runtime -f https://github.com/google/iree/releases\n\n#@title Test running the compiled VM module using IREE's runtime\n\nfrom iree import runtime as ireert\n\nvm_module = ireert.VmModule.from_flatbuffer(flatbuffer_blob)\nconfig = ireert.Config(\"local-task\")\nctx = ireert.SystemContext(config=config)\nctx.add_vm_module(vm_module)\n\nimport numpy as np\n\n# Our @tf.functions are accessible by name on the module named 'module'\ndynamic_shapes_program = ctx.modules.module\n\nprint(dynamic_shapes_program.reduce_sum_1d(np.array([1, 10, 100], dtype=np.int32)).to_host())\nprint(dynamic_shapes_program.reduce_sum_2d(np.array([[1, 2, 3], [10, 20, 30]], dtype=np.int32)).to_host())\nprint(dynamic_shapes_program.reduce_sum_2d(np.array([[1, 2, 3], [10, 20, 30], [100, 200, 300]], dtype=np.int32)).to_host())\nprint(dynamic_shapes_program.add_one(np.array([1, 10, 100], dtype=np.int32)).to_host())", "Download compilation artifacts", "ARTIFACTS_ZIP = \"/tmp/dynamic_shapes_colab_artifacts.zip\"\n\nprint(f\"Zipping '{ARTIFACTS_DIR}' to '{ARTIFACTS_ZIP}' for download...\")\n!cd {ARTIFACTS_DIR} && zip -r {ARTIFACTS_ZIP} .\n\n# Note: you can also download files using Colab's file explorer\ntry:\n from google.colab import files\n print(\"Downloading the artifacts zip file...\")\n files.download(ARTIFACTS_ZIP) \nexcept ImportError:\n print(\"Missing google_colab Python package, can't download files\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sellensr/Juice-Box-Viscometer
JuiceBox.ipynb
mit
[ "Measuring Viscosity with a Juice Box\nChoose Cell/Run All from the menu to get started and load the explanatory video.\nPython scripts usually start by importing some libraries to help with calculation and plotting. This one also imports a video to introduce what the learning module is about. Some of the cells are python code you can run one by one, and some of the cells like this one are markdown a simple way of formatting text and equations to help us describe the code.", "import numpy as np\nimport math\n\n\nfrom IPython.display import HTML\nHTML('<iframe width=\"853\" height=\"480\" \\\n src=\"https://www.youtube.com/embed/ktuVw9C_KNA\" \\\n frameborder=\"0\" allowfullscreen></iframe>')", "<img src=\"JuiceBox.png\" align = \"left\" style=\"margin-right:30px\" width=\"35%\"> \nV: Viscous forces are important for the friction inside the straw. \nP: Pressure is the same on all sides at atmospheric pressure. It will be a little higher in the middle of the juice box, but only due to gravity and hydrostatics. \nI: The velocity is changing significantly from zero to the flow velocity, so we need to consider inertial effects.\nG: Gravity is the driving force in the flow. \nS: Surface tension has a minor effect because non of the radii of curvature are really small.\nIn fluids, like everywhere else, things work out much better with consistent units. It's good practice to put all our quantities into mks units (metres/kilograms/seconds) before making calculations. \nThe dimensions in mks units for my juice box are: (You'll have to put in your own values)", "a = 0.04 # metres\nb = 0.05\nc = 0.04\nd = 0.004\nl = 0.08\ng = 9.81 # metres per second squared\ndz = c/2 + l", "Two limiting cases -- no friction or no inertia\n<img src=\"Model.png\" align = \"left\" style=\"margin-right:30px\" width=\"45%\">\nTwo limiting cases would be: \nNegligible inertia, with viscous friction much larger and consuming almost all the energy from the elevation change. 
We'll learn more about this idea with Laminar Pipe Flow in Module 6.\nNegligible friction, with almost all the potential energy being converted into kinetic energy with large inertial effects. We'll learn more about this case with Bernoulli's Equation in Module 5.\nIf we knew the fluid properties we could make a prediction of velocity. \n$\\mu$ (mu) is the viscosity in $\\frac{N\\cdot s}{m^2}$, 0.001 for water at room temperature\n$\\rho$ (rho) is the density in $\\frac{kg}{m^3}$, 998 for water at room temperature\n$\\nu$ (nu) is the kinematic viscosity in $\\frac{m^2}{s}$\nThe fastest it could be coming out is the no-friction case with the calculated velocity in m/s. Compare this velocity to the values we actually measured down below.", "Vi = (2 * g * dz)**0.5\nVi", "If the viscosity dominated it would be slower, as in this case with a viscosity 10 times higher than water.", "mu = 0.01\nrho = 998\nnu = mu/rho\nVf = rho * g * dz * d**2 / 32 / mu / l\nVf", "Measure the actual mean velocity from our times\n<img src=\"Measure.png\" align = \"left\" style=\"margin-right:30px\" width=\"55%\">\nIf we measure the time it takes, then we can get the average velocity over time from conservation of mass. \nWe could set t just equal to a scalar value like 5, but by making it an array of all of the time values, we can do all the calculations at once to get four different velocities in m/s.\nWe see that the velocity in the water case is getting close to our \"no-friction\" value, with inertia dominating. 
The much lower velocities with the other fluids suggest strong effects due to viscosity.", "#t= 5\nt = np.array([5,47,340,20])\nVol = a*b*c\nAreaStraw = math.pi*d**2 /4\nVm = Vol / AreaStraw / t\nVm", "Combine the effects to estimate viscosity\n<img src=\"Combo.png\" align = \"left\" style=\"margin-right:30px\" width=\"55%\">And if part of the elevation change drives the inertial increase in velocity and part of it is dissipated by viscous friction, then we can estimate the viscosity based on the time it takes. In this approach we would need the density to get the dynamic viscosity $\\mu$, but dividing by $\\rho$ lets us estimate the kinematic viscosity directly from the information we have.\nUse this approach to estimate kinematic viscosity from the measurements you made.", "nuM = (dz-Vm**2/2/g)*g*d**2/32/l/Vm\nnuM" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jdsanch1/SimRC
01. Parte 1/05. Clase 5/.ipynb_checkpoints/05Class NB-checkpoint.ipynb
mit
[ "Clase 5: Portafolios y riesgo - Selección\nJuan Diego Sánchez Torres, \nProfesor, MAF ITESO\n\nDepartamento de Matemáticas y Física\ndsanchez@iteso.mx\nTel. 3669-34-34 Ext. 3069\nOficina: Cubículo 4, Edificio J, 2do piso\n\n1. Motivación\nEn primer lugar, para poder bajar precios y información sobre opciones de Yahoo, es necesario cargar algunos paquetes de Python. En este caso, el paquete principal será Pandas. También, se usarán el Scipy y el Numpy para las matemáticas necesarias y, el Matplotlib y el Seaborn para hacer gráficos de las series de datos.", "#importar los paquetes que se van a usar\nimport pandas as pd\nimport pandas_datareader.data as web\nimport numpy as np\nfrom sklearn.cluster import KMeans\nimport datetime\nfrom datetime import datetime\nimport scipy.stats as stats\nimport scipy as sp\nimport scipy.optimize as optimize\nimport scipy.cluster.hierarchy as hac\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n#algunas opciones para Python\npd.set_option('display.notebook_repr_html', True)\npd.set_option('display.max_columns', 6)\npd.set_option('display.max_rows', 10)\npd.set_option('display.width', 78)\npd.set_option('precision', 3)", "1. Uso de Pandas para descargar datos de precios de cierre\nAhora, en forma de función", "def get_historical_closes(ticker, start_date, end_date):\n p = web.DataReader(ticker, \"yahoo\", start_date, end_date).sort_index('major_axis')\n d = p.to_frame()['Adj Close'].reset_index()\n d.rename(columns={'minor': 'Ticker', 'Adj Close': 'Close'}, inplace=True)\n pivoted = d.pivot(index='Date', columns='Ticker')\n pivoted.columns = pivoted.columns.droplevel(0)\n return pivoted", "Una vez cargados los paquetes, es necesario definir los tickers de las acciones que se usarán, la fuente de descarga (Yahoo en este caso, pero también se puede desde Google) y las fechas de interés. 
With this, the DataReader function of the pandas_datareader package will download the requested prices.\nNote: Python distributions usually do not include the pandas_datareader package by default, so it needs to be installed separately. The following command installs the package in Anaconda:\n*conda install -c conda-forge pandas-datareader *", "data=get_historical_closes(['AA','AAPL','AMZN','MSFT','KO','NVDA', '^GSPC'], '2011-01-01', '2016-12-31')\ncloses=data[['AA','AAPL','AMZN','MSFT','KO','NVDA']]\nsp=data[['^GSPC']]\ncloses.plot(figsize=(8,6));", "Note: To download data from the Mexican stock exchange (BMV), the ticker must have the MX extension. \nFor example: MEXCHEM.MX, LABB.MX, GFINBURO.MX and GFNORTEO.MX.\n2. Formulating portfolio risk", "def calc_daily_returns(closes):\n return np.log(closes/closes.shift(1))[1:]\n\ndaily_returns=calc_daily_returns(closes)\ndaily_returns.plot(figsize=(8,6));\n\ndaily_returns.corr()\n\ndef calc_annual_returns(daily_returns):\n grouped = np.exp(daily_returns.groupby(lambda date: date.year).sum())-1\n return grouped\n\nannual_returns = calc_annual_returns(daily_returns)\nannual_returns\n\ndef calc_portfolio_var(returns, weights=None):\n if (weights is None):\n weights = np.ones(returns.columns.size)/returns.columns.size\n sigma = np.cov(returns.T,ddof=0)\n var = (weights * sigma * weights.T).sum()\n return var\n\ncalc_portfolio_var(annual_returns)\n\ndef sharpe_ratio(returns, weights = None, risk_free_rate = 0.015):\n n = returns.columns.size\n if weights is None: weights = np.ones(n)/n\n var = calc_portfolio_var(returns, weights)\n means = returns.mean()\n return (means.dot(weights) - risk_free_rate)/np.sqrt(var)\n\nsharpe_ratio(annual_returns)", "3. 
Selección de activos", "daily_returns_mean=daily_returns.mean()\ndaily_returns_mean\n\ndaily_returns_std=daily_returns.std()\ndaily_returns_std\n\ndaily_returns_ms=pd.concat([daily_returns_mean, daily_returns_std], axis=1)\ndaily_returns_ms\n\nrandom_state = 10\ny_pred = KMeans(n_clusters=3, random_state=random_state).fit_predict(daily_returns_ms)\nplt.scatter(daily_returns_mean, daily_returns_std, c=y_pred);\nplt.axis([-0.01, 0.01, 0, 0.05]);\n\ncorr_mat=daily_returns.corr(method='spearman')\ncorr_mat\n\nZ = hac.linkage(corr_mat, 'single')\n\n# Plot the dendogram\nplt.figure(figsize=(25, 10))\nplt.title('Hierarchical Clustering Dendrogram')\nplt.xlabel('sample index')\nplt.ylabel('distance')\nhac.dendrogram(\n Z,\n leaf_rotation=90., # rotates the x axis labels\n leaf_font_size=8., # font size for the x axis labels\n)\nplt.show()\n\nselected=closes[['AAPL', 'AMZN']]\nselected.plot(figsize=(8,6));\n\ndaily_returns_sel=calc_daily_returns(selected)\ndaily_returns_sel.plot(figsize=(8,6));\n\nannual_returns_sel = calc_annual_returns(daily_returns_sel)\nannual_returns_sel", "4. 
Portfolio optimization", "def target_func(x, cov_matrix, mean_vector, r):\n f = float(-(x.dot(mean_vector) - r) / np.sqrt(x.dot(cov_matrix).dot(x.T)))\n return f\n\ndef optimal_portfolio(profits, r, allow_short=True):\n x = np.ones(len(profits.T))\n mean_vector = np.mean(profits)\n cov_matrix = np.cov(profits.T)\n cons = ({'type': 'eq','fun': lambda x: np.sum(x) - 1})\n if not allow_short:\n bounds = [(0, None,) for i in range(len(x))]\n else:\n bounds = None\n minimize = optimize.minimize(target_func, x, args=(cov_matrix, mean_vector, r), bounds=bounds,\n constraints=cons)\n return minimize\n\nopt=optimal_portfolio(annual_returns_sel, 0.015)\nopt\n\nannual_returns_sel.dot(opt.x)\n\nasp=calc_annual_returns(calc_daily_returns(sp))\nasp\n\ndef objfun(W, R, target_ret):\n stock_mean = np.mean(R,axis=0)\n port_mean = np.dot(W,stock_mean)\n cov=np.cov(R.T)\n port_var = np.dot(np.dot(W,cov),W.T)\n penalty = 2000*abs(port_mean-target_ret)\n return np.sqrt(port_var) + penalty\n\ndef calc_efficient_frontier(returns):\n result_means = []\n result_stds = []\n result_weights = []\n means = returns.mean()\n min_mean, max_mean = means.min(), means.max()\n nstocks = returns.columns.size\n for r in np.linspace(min_mean, max_mean, 150):\n weights = np.ones(nstocks)/nstocks\n bounds = [(0,1) for i in np.arange(nstocks)]\n constraints = ({'type': 'eq', 'fun': lambda W: np.sum(W) - 1})\n results = optimize.minimize(objfun, weights, (returns, r), method='SLSQP', constraints = constraints, bounds = bounds)\n if not results.success: # handle error\n raise Exception(results.message)\n result_means.append(np.round(r,4)) # 4 decimal places\n std_=np.round(np.std(np.sum(returns*results.x,axis=1)),6)\n result_stds.append(std_)\n result_weights.append(np.round(results.x, 5))\n return {'Means': result_means, 'Stds': result_stds, 'Weights': result_weights}\n\nfrontier_data = calc_efficient_frontier(annual_returns_sel)\n\ndef plot_efficient_frontier(ef_data):\n plt.figure(figsize=(12,8))\n 
plt.title('Efficient Frontier')\n plt.xlabel('Standard Deviation of the portfolio (Risk)')\n plt.ylabel('Return of the portfolio')\n plt.plot(ef_data['Stds'], ef_data['Means'], '--');\n\nplot_efficient_frontier(frontier_data)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
krosaen/ml-study
kaggle/predicting-red-hat-business-value2/predicting-red-hat-business-value.ipynb
mit
[ "Kaggle's Predicting Red Hat Business Value\nThis is a follow up attempt at Kaggle's Predicting Red Hat Business Value competition.\nSee my notebooks section for links to the first attempt and other kaggle competitions.\nThe focus of this iteration is exploring whether we can bring back the previously ignored categorical columns that have hundreds if not thousands of unique values, making it impractical to use one-hot encoding. \nTwo approaches are taken on categorical variables with a large amount of unique values:\n\nencoding the values ordinally; sorting the values lexicographically and assigning a sequence of numbers, and then treating them quantitatively from there\nencoding the most frequently occuring values using one-hot and then binary encoding the rest. As part of this I developed a new scikit-learn transformer\n\nThe end results: reincluding the columns boosted performance on the training set by only 0.5%, and surprisingly the binary / one-hot combo did hardly any better than the ordinal encoding.\nLoading in the data", "import pandas as pd\n\npeople = pd.read_csv('people.csv.zip')\npeople.head(3)\n\nactions = pd.read_csv('act_train.csv.zip')\nactions.head(3)", "Joining together to get dataset", "training_data_full = pd.merge(actions, people, how='inner', on='people_id', suffixes=['_action', '_person'], sort=False)\ntraining_data_full.head(5)\n\n(actions.shape, people.shape, training_data_full.shape)", "Building a preprocessing pipeline\nNotice the new OmniEncoder transformer and read more about its development in my learning log.", "# %load \"preprocessing_transforms.py\"\nfrom sklearn.base import TransformerMixin, BaseEstimator\nimport pandas as pd\nimport heapq\nimport numpy as np\n\nclass BaseTransformer(BaseEstimator, TransformerMixin):\n def fit(self, X, y=None, **fit_params):\n return self\n\n def transform(self, X, **transform_params):\n return self\n\n\nclass ColumnSelector(BaseTransformer):\n \"\"\"Selects columns from Pandas 
Dataframe\"\"\"\n\n def __init__(self, columns, c_type=None):\n self.columns = columns\n self.c_type = c_type\n\n def transform(self, X, **transform_params):\n cs = X[self.columns]\n if self.c_type is None:\n return cs\n else:\n return cs.astype(self.c_type)\n\n\nclass OmniEncoder(BaseTransformer):\n \"\"\"\n Encodes a categorical variable using no more than k columns. As many values as possible\n are one-hot encoded, the remaining are fit within a binary encoded set of columns.\n If necessary some are dropped (e.g if (#unique_values) > 2^k).\n\n In deciding which values to one-hot encode, those that appear more frequently are\n preferred.\n \"\"\"\n def __init__(self, max_cols=20):\n self.column_infos = {}\n self.max_cols = max_cols\n if max_cols < 3 or max_cols > 100:\n raise ValueError(\"max_cols {} not within range(3, 100)\".format(max_cols))\n\n def fit(self, X, y=None, **fit_params):\n self.column_infos = {col: self._column_info(X[col], self.max_cols) for col in X.columns}\n return self\n\n def transform(self, X, **transform_params):\n return pd.concat(\n [self._encode_column(X[col], self.max_cols, *self.column_infos[col]) for col in X.columns],\n axis=1\n )\n\n @staticmethod\n def _encode_column(col, max_cols, one_hot_vals, binary_encoded_vals):\n num_one_hot = len(one_hot_vals)\n num_bits = max_cols - num_one_hot if len(binary_encoded_vals) > 0 else 0\n\n # http://stackoverflow.com/a/29091970/231589\n zero_base = ord('0')\n def i_to_bit_array(i):\n return np.fromstring(\n np.binary_repr(i, width=num_bits),\n 'u1'\n ) - zero_base\n\n binary_val_to_bit_array = {val: i_to_bit_array(idx + 1) for idx, val in enumerate(binary_encoded_vals)}\n\n bit_cols = [np.binary_repr(2 ** i, width=num_bits) for i in reversed(range(num_bits))]\n\n col_names = [\"{}_{}\".format(col.name, val) for val in one_hot_vals] + [\"{}_{}\".format(col.name, bit_col) for bit_col in bit_cols]\n\n zero_bits = np.zeros(num_bits, dtype=np.int)\n\n def splat(v):\n v_one_hot = [1 if v == ohv 
else 0 for ohv in one_hot_vals]\n v_bits = binary_val_to_bit_array.get(v, zero_bits)\n\n return pd.Series(np.concatenate([v_one_hot, v_bits]))\n\n df = col.apply(splat)\n df.columns = col_names\n\n return df\n\n @staticmethod\n def _column_info(col, max_cols):\n \"\"\"\n\n :param col: pd.Series\n :return: {'val': 44, 'val2': 4, ...}\n \"\"\"\n val_counts = dict(col.value_counts())\n num_one_hot = OmniEncoder._num_onehot(len(val_counts), max_cols)\n return OmniEncoder._partition_one_hot(val_counts, num_one_hot)\n\n @staticmethod\n def _partition_one_hot(val_counts, num_one_hot):\n \"\"\"\n Partitions the values in val counts into a list of values that should be\n one-hot encoded and a list of values that should be binary encoded.\n\n The `num_one_hot` most popular values are chosen to be one-hot encoded.\n\n :param val_counts: {'val': 433}\n :param num_one_hot: the number of elements to be one-hot encoded\n :return: ['val1', 'val2'], ['val55', 'val59']\n \"\"\"\n one_hot_vals = [k for (k, count) in heapq.nlargest(num_one_hot, val_counts.items(), key=lambda t: t[1])]\n one_hot_vals_lookup = set(one_hot_vals)\n\n bin_encoded_vals = [val for val in val_counts if val not in one_hot_vals_lookup]\n\n return sorted(one_hot_vals), sorted(bin_encoded_vals)\n\n\n @staticmethod\n def _num_onehot(n, k):\n \"\"\"\n Determines the number of onehot columns we can have to encode n values\n in no more than k columns, assuming we will binary encode the rest.\n\n :param n: The number of unique values to encode\n :param k: The maximum number of columns we have\n :return: The number of one-hot columns to use\n \"\"\"\n num_one_hot = min(n, k)\n\n def num_bin_vals(num):\n if num == 0:\n return 0\n return 2 ** num - 1\n\n def capacity(oh):\n \"\"\"\n Capacity given we are using `oh` one hot columns.\n \"\"\"\n return oh + num_bin_vals(k - oh)\n\n while capacity(num_one_hot) < n and num_one_hot > 0:\n num_one_hot -= 1\n\n return num_one_hot\n\n\nclass EncodeCategorical(BaseTransformer):\n 
def __init__(self):\n self.categorical_vals = {}\n\n def fit(self, X, y=None, **fit_params):\n self.categorical_vals = {col: {label: idx + 1 for idx, label in enumerate(sorted(X[col].dropna().unique()))} for\n col in X.columns}\n return self\n\n def transform(self, X, **transform_params):\n return pd.concat(\n [X[col].map(self.categorical_vals[col]) for col in X.columns],\n axis=1\n )\n\n\nclass SpreadBinary(BaseTransformer):\n\n def transform(self, X, **transform_params):\n return X.applymap(lambda x: 1 if x == 1 else -1)\n\n\nclass DfTransformerAdapter(BaseTransformer):\n \"\"\"Adapts a scikit-learn Transformer to return a pandas DataFrame\"\"\"\n\n def __init__(self, transformer):\n self.transformer = transformer\n\n def fit(self, X, y=None, **fit_params):\n self.transformer.fit(X, y=y, **fit_params)\n return self\n\n def transform(self, X, **transform_params):\n raw_result = self.transformer.transform(X, **transform_params)\n return pd.DataFrame(raw_result, columns=X.columns, index=X.index)\n\n\nclass DfOneHot(BaseTransformer):\n \"\"\"\n Wraps helper method `get_dummies` making sure all columns get one-hot encoded.\n \"\"\"\n def __init__(self):\n self.dummy_columns = []\n\n def fit(self, X, y=None, **fit_params):\n self.dummy_columns = pd.get_dummies(\n X,\n prefix=[c for c in X.columns],\n columns=X.columns).columns\n return self\n\n def transform(self, X, **transform_params):\n return pd.get_dummies(\n X,\n prefix=[c for c in X.columns],\n columns=X.columns).reindex(columns=self.dummy_columns, fill_value=0)\n\n\nclass DfFeatureUnion(BaseTransformer):\n \"\"\"A dataframe friendly implementation of `FeatureUnion`\"\"\"\n\n def __init__(self, transformers):\n self.transformers = transformers\n\n def fit(self, X, y=None, **fit_params):\n for l, t in self.transformers:\n t.fit(X, y=y, **fit_params)\n return self\n\n def transform(self, X, **transform_params):\n transform_results = [t.transform(X, **transform_params) for l, t in self.transformers]\n return 
pd.concat(transform_results, axis=1)\n\n\nfor col in training_data_full.columns:\n print(\"in {} there are {} unique values\".format(col, len(training_data_full[col].unique())))\nNone", "Potential trouble with high dimensionality\nNotice that char_10_action, group_1 and others have a ton of unique values; one-hot encoding will result in a dataframe with thousands of columns. \nLet's explore 3 approaches to dealing with categorical columns with a lot of unique values and compare performance:\n\nignore them\nencode them ordinally, mapping every unique value to a different integer (assuming some ordered value that probably doesn't exist, at least not by our default lexicographical sorting)\nencode them with a combo of one-hot and binary", "from sklearn.pipeline import Pipeline\n\nfrom sklearn.preprocessing import Imputer, StandardScaler\n\ncat_columns = ['activity_category',\n 'char_1_action', 'char_2_action', 'char_3_action', 'char_4_action',\n 'char_5_action', 'char_6_action', 'char_7_action', 'char_8_action',\n 'char_9_action', 'char_1_person',\n 'char_2_person', 'char_3_person',\n 'char_4_person', 'char_5_person', 'char_6_person', 'char_7_person',\n 'char_8_person', 'char_9_person', 'char_10_person', 'char_11',\n 'char_12', 'char_13', 'char_14', 'char_15', 'char_16', 'char_17',\n 'char_18', 'char_19', 'char_20', 'char_21', 'char_22', 'char_23',\n 'char_24', 'char_25', 'char_26', 'char_27', 'char_28', 'char_29',\n 'char_30', 'char_31', 'char_32', 'char_33', 'char_34', 'char_35',\n 'char_36', 'char_37']\n\nhigh_dim_cat_columns = ['date_action', 'char_10_action', 'group_1', 'date_person']\n\nq_columns = ['char_38']\n\npreprocessor_ignore = Pipeline([\n ('features', DfFeatureUnion([\n ('quantitative', Pipeline([\n ('select-quantitative', ColumnSelector(q_columns, c_type='float')),\n ('impute-missing', DfTransformerAdapter(Imputer(strategy='median'))),\n ('scale', DfTransformerAdapter(StandardScaler()))\n ])),\n ('categorical', Pipeline([\n ('select-categorical', 
ColumnSelector(cat_columns)),\n ('apply-onehot', DfOneHot()),\n ('spread-binary', SpreadBinary())\n ])),\n ]))\n])\n\npreprocessor_lexico = Pipeline([\n ('features', DfFeatureUnion([\n ('quantitative', Pipeline([\n ('combine-q', DfFeatureUnion([\n ('highd', Pipeline([\n ('select-highd', ColumnSelector(high_dim_cat_columns)),\n ('encode-highd', EncodeCategorical()) \n ])),\n ('select-quantitative', ColumnSelector(q_columns, c_type='float')),\n ])),\n ('impute-missing', DfTransformerAdapter(Imputer(strategy='median'))),\n ('scale', DfTransformerAdapter(StandardScaler()))\n ])),\n ('categorical', Pipeline([\n ('select-categorical', ColumnSelector(cat_columns)),\n ('apply-onehot', DfOneHot()),\n ('spread-binary', SpreadBinary())\n ])),\n ]))\n])\n\npreprocessor_omni_20 = Pipeline([\n ('features', DfFeatureUnion([\n ('quantitative', Pipeline([\n ('select-quantitative', ColumnSelector(q_columns, c_type='float')),\n ('impute-missing', DfTransformerAdapter(Imputer(strategy='median'))),\n ('scale', DfTransformerAdapter(StandardScaler()))\n ])),\n ('categorical', Pipeline([\n ('select-categorical', ColumnSelector(cat_columns + high_dim_cat_columns)),\n ('apply-onehot', OmniEncoder(max_cols=20)),\n ('spread-binary', SpreadBinary())\n ])),\n ]))\n])\n\npreprocessor_omni_50 = Pipeline([\n ('features', DfFeatureUnion([\n ('quantitative', Pipeline([\n ('select-quantitative', ColumnSelector(q_columns, c_type='float')),\n ('impute-missing', DfTransformerAdapter(Imputer(strategy='median'))),\n ('scale', DfTransformerAdapter(StandardScaler()))\n ])),\n ('categorical', Pipeline([\n ('select-categorical', ColumnSelector(cat_columns + high_dim_cat_columns)),\n ('apply-onehot', OmniEncoder(max_cols=50)),\n ('spread-binary', SpreadBinary())\n ])),\n ]))\n])\n", "Sampling to reduce runtime in training large dataset\nIf we train models based on the entire test dataset provided it exhausts the memory on my laptop. 
Again, in the spirit of getting something quick and dirty working, we'll sample the dataset and train on that. We'll then evaluate our model by testing the accuracy on a larger sample.", "from sklearn.cross_validation import train_test_split\n\ntraining_frac = 0.01\ntest_frac = 0.05\n\ntraining_data, the_rest = train_test_split(training_data_full, train_size=training_frac, random_state=0)\ntest_data = the_rest.sample(frac=test_frac / (1-training_frac))\n\ntraining_data.shape\n\ntest_data.shape", "Reporting utilities\nSome utilities to make reporting progress easier", "import time\nimport subprocess\n\nclass time_and_log():\n \n def __init__(self, label, *, prefix='', say=False):\n self.label = label\n self.prefix = prefix\n self.say = say\n \n def __enter__(self):\n msg = 'Starting {}'.format(self.label)\n print('{}{}'.format(self.prefix, msg))\n if self.say:\n cmd_say(msg)\n self.start = time.process_time()\n return self\n\n def __exit__(self, *exc):\n self.interval = time.process_time() - self.start\n msg = 'Finished {} in {:.2f} seconds'.format(self.label, self.interval)\n print('{}{}'.format(self.prefix, msg))\n if self.say:\n cmd_say(msg)\n return False\n \ndef cmd_say(msg):\n subprocess.call(\"say '{}'\".format(msg), shell=True)\n\n\nwith time_and_log('wrangling training data', say=True, prefix=\" _\"):\n wrangled = preprocessor_omni_20.fit_transform(training_data)\n\nwrangled.head()", "Putting together classifiers", "from sklearn.ensemble import RandomForestClassifier\n\npipe_rf_ignore = Pipeline([\n ('wrangle', preprocessor_ignore),\n ('rf', RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=0))\n ])\n\npipe_rf_lexico = Pipeline([\n ('wrangle', preprocessor_lexico),\n ('rf', RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=0))\n ])\n\npipe_rf_omni_20 = Pipeline([\n ('wrangle', preprocessor_omni_20),\n ('rf', RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=0))\n ])\n\npipe_rf_omni_50 = 
Pipeline([\n ('wrangle', preprocessor_omni_50),\n ('rf', RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=0))\n ])\n\nfeature_columns = cat_columns + q_columns + high_dim_cat_columns\n\ndef extract_X_y(df):\n return df[feature_columns], df['outcome']\n\nX_train, y_train = extract_X_y(training_data)\nX_test, y_test = extract_X_y(test_data)", "Cross validation and full test set accuracy\nWe'll cross validate within the training set, and then train on the full training set and see how well it performs on the full test set.", "from sklearn.metrics import accuracy_score\nfrom sklearn.cross_validation import cross_val_score\nimport numpy as np\n\nmodels = [\n ('random forest ignore', pipe_rf_ignore), \n ('random forest ordinal', pipe_rf_lexico), \n ('random forest omni 20', pipe_rf_omni_20), \n ('random forest omni 50', pipe_rf_omni_50), \n]\n\nfor label, model in models:\n print('Evaluating {}'.format(label))\n cmd_say('Evaluating {}'.format(label))\n# with time_and_log('cross validating', say=True, prefix=\" _\"):\n# scores = cross_val_score(estimator=model,\n# X=X_train,\n# y=y_train,\n# cv=5,\n# n_jobs=1)\n# print(' CV accuracy: {:.3f} +/- {:.3f}'.format(np.mean(scores), np.std(scores)))\n with time_and_log('fitting full training set', say=True, prefix=\" _\"):\n model.fit(X_train, y_train) \n with time_and_log('evaluating on full test set', say=True, prefix=\" _\"):\n print(\" Full test accuracy ({:.2f} of dataset): {:.3f}\".format(\n test_frac, \n accuracy_score(y_test, model.predict(X_test)))) " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
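The sampling arithmetic in the first cell of the notebook above deserves a note: because the test set is drawn from the remainder *after* the 1% training sample is removed, the fraction passed to `.sample()` is rescaled by `1 / (1 - training_frac)` so that the final test set is `test_frac` of the *original* data. A quick self-contained check of that arithmetic (the row count is made up for illustration):

```python
# Verify that sampling frac = test_frac / (1 - training_frac) from the
# remainder yields test_frac of the ORIGINAL dataset.
n_total = 100_000          # hypothetical dataset size
training_frac = 0.01
test_frac = 0.05

n_training = round(n_total * training_frac)            # 1,000 rows
n_rest = n_total - n_training                          # 99,000 rows left over
n_test = round(n_rest * test_frac / (1 - training_frac))

print(n_test)              # 5000
print(n_test / n_total)    # 0.05 -- i.e. test_frac of the original data
```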
seg/2016-ml-contest
SHandPR/RandomForest.ipynb
apache-2.0
[ "Facies classification using Machine Learning - Random Forest\nContest entry by Priyanka Raghavan and Steve Hall\nThis notebook demonstrates how to train a machine learning algorithm to predict facies from well log data. The dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of one of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007). \nThe dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a random forest classifier to predict facies types, using scikit-learn.\nFirst we will explore the dataset. We will load the training data from 9 wells, and take a look at what we have to work with. We will plot the data from a couple of wells, and create cross plots to look at the variation within the data. \nNext we will condition the data set. We will remove the entries that have incomplete data. The data will be scaled to have zero mean and unit variance. We will also split the data into training and test sets.\nWe will then be ready to build the classifier. \nFinally, once we have built and tuned the classifier, we can apply the trained model to classify facies in wells which do not already have labels. We will apply the classifier to two wells, but in principle you could apply the classifier to any number of wells that have the same log data.\nExploring the dataset\nFirst, we will examine the data set we will use to train the classifier. The training data is contained in the file facies_vectors.csv. The dataset consists of 5 wireline log measurements, two indicator variables and a facies label at half-foot intervals. 
In machine learning terminology, each log measurement is a feature vector that maps a set of 'features' (the log measurements) to a class (the facies type). We will use the pandas library to load the data into a dataframe, which provides a convenient data structure to work with well log data.", "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nfrom sklearn.ensemble import RandomForestClassifier\nfrom pandas import set_option\nset_option(\"display.max_rows\", 10)\npd.options.mode.chained_assignment = None\n\nfilename = '../facies_vectors.csv'\ntraining_data = pd.read_csv(filename)\ntraining_data", "This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate. \nThe seven predictor variables are:\n* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),\nphotoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). 
Note, some wells do not have PE.\n* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)\nThe nine discrete facies (classes of rocks) are: \n1. Nonmarine sandstone\n2. Nonmarine coarse siltstone \n3. Nonmarine fine siltstone \n4. Marine siltstone and shale \n5. Mudstone (limestone)\n6. Wackestone (limestone)\n7. Dolomite\n8. Packstone-grainstone (limestone)\n9. Phylloid-algal bafflestone (limestone)\nThese facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.\nFacies |Label| Adjacent Facies\n:---: | :---: |:--:\n1 |SS| 2\n2 |CSiS| 1,3\n3 |FSiS| 2\n4 |SiSh| 5\n5 |MS| 4,6\n6 |WS| 5,7\n7 |D| 6,8\n8 |PS| 6,7,9\n9 |BS| 7,8\nLet's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type.", "training_data['Well Name'] = training_data['Well Name'].astype('category')\ntraining_data['Formation'] = training_data['Formation'].astype('category')\ntraining_data['Well Name'].unique()\n\ntraining_data.describe()", "This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set.\nRemove a single well to use as a blind test later.\nThese are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone. \nBefore we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. 
We also create the abbreviated facies labels, and add those to the facies_vectors dataframe.", "# 1=sandstone 2=c_siltstone 3=f_siltstone \n# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite\n# 8=packstone 9=bafflestone\nfacies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',\n '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']\n\nfacies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',\n 'WS', 'D','PS', 'BS']\n#facies_color_map is a dictionary that maps facies labels\n#to their respective colors\nfacies_color_map = {}\nfor ind, label in enumerate(facies_labels):\n facies_color_map[label] = facies_colors[ind]\n\ndef label_facies(row, labels):\n return labels[ row['Facies'] -1]\n \n#training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)\nfaciesVals = training_data['Facies'].values \nwell = training_data['Well Name'].values\nmpl.rcParams['figure.figsize'] = (20.0, 10.0)\nfor w_idx, w in enumerate(np.unique(well)): \n ax = plt.subplot(3, 4, w_idx+1)\n hist = np.histogram(faciesVals[well == w], bins=np.arange(len(facies_labels)+1)+.5)\n plt.bar(np.arange(len(hist[0])), hist[0], color=facies_colors, align='center')\n ax.set_xticks(np.arange(len(hist[0])))\n ax.set_xticklabels(facies_labels)\n ax.set_title(w)\n\nblind = training_data[training_data['Well Name'] == 'NEWBY']\ntraining_data = training_data[training_data['Well Name'] != 'NEWBY']\ntraining_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)\n\nPE_mask = training_data['PE'].notnull().values\ntraining_data = training_data[PE_mask]\n", "Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. 
The plots are based on those described in Alessandro Amato del Monte's excellent tutorial.", "def make_facies_log_plot(logs, facies_colors):\n    #make sure logs are sorted by depth\n    logs = logs.sort_values(by='Depth')\n    cmap_facies = colors.ListedColormap(\n            facies_colors[0:len(facies_colors)], 'indexed')\n    \n    ztop=logs.Depth.min(); zbot=logs.Depth.max()\n    \n    cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)\n    \n    f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))\n    ax[0].plot(logs.GR, logs.Depth, '-g')\n    ax[1].plot(logs.ILD_log10, logs.Depth, '-')\n    ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')\n    ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')\n    ax[4].plot(logs.PE, logs.Depth, '-', color='black')\n    im=ax[5].imshow(cluster, interpolation='none', aspect='auto',\n                    cmap=cmap_facies,vmin=1,vmax=9)\n    \n    divider = make_axes_locatable(ax[5])\n    cax = divider.append_axes(\"right\", size=\"20%\", pad=0.05)\n    cbar=plt.colorbar(im, cax=cax)\n    cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', \n                                'SiSh', ' MS ', ' WS ', ' D  ', \n                                ' PS ', ' BS ']))\n    cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')\n    \n    for i in range(len(ax)-1):\n        ax[i].set_ylim(ztop,zbot)\n        ax[i].invert_yaxis()\n        ax[i].grid()\n        ax[i].locator_params(axis='x', nbins=3)\n    \n    ax[0].set_xlabel(\"GR\")\n    ax[0].set_xlim(logs.GR.min(),logs.GR.max())\n    ax[1].set_xlabel(\"ILD_log10\")\n    ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())\n    ax[2].set_xlabel(\"DeltaPHI\")\n    ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())\n    ax[3].set_xlabel(\"PHIND\")\n    ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())\n    ax[4].set_xlabel(\"PE\")\n    ax[4].set_xlim(logs.PE.min(),logs.PE.max())\n    ax[5].set_xlabel('Facies')\n    \n    ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])\n    ax[4].set_yticklabels([]); ax[5].set_yticklabels([])\n    ax[5].set_xticklabels([])\n    f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)", "Placing the log plotting 
code in a function will make it easy to plot the logs from multiple wells, and can be reused later to view the results when we apply the facies classification model to other wells. The function was written to take a list of colors and facies labels as parameters. \nWe then show log plots for well SHRIMPLIN.", "make_facies_log_plot(\n    training_data[training_data['Well Name'] == 'SHRIMPLIN'],\n    facies_colors)", "In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class.", "#count the number of unique entries for each facies, sort them by\n#facies number (instead of by number of entries)\nfacies_counts = training_data['Facies'].value_counts().sort_index()\n#use facies labels to index each count\nfacies_counts.index = facies_labels\n\nfacies_counts.plot(kind='bar',color=facies_colors, \n                   title='Distribution of Training Data by Facies')\nfacies_counts", "This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies.\nCrossplots are a familiar tool in the geosciences to visualize how two properties vary with rock type. This dataset contains 5 log variables, and a scatter matrix can help to quickly visualize the variation between all the variables in the dataset. We can employ the very useful Seaborn library to quickly create a nice looking scatter matrix. Each pane in the plot shows the relationship between two of the variables on the x and y axes, with each point colored according to its facies. The same colormap is used to represent the 9 facies. \nConditioning the data set\nNow we extract just the feature variables we need to perform the classification. 
The predictor variables are the five wireline values and two geologic constraining variables. We also get a vector of the facies labels that correspond to each feature vector.", "correct_facies_labels = training_data['Facies'].values\n\nfeature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1)\nfeature_vectors.describe()", "Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed data (i.e. Gaussian with zero mean and unit variance). The factors used to standardize the training set must be applied to any subsequent feature set that will be input to the classifier. The StandardScaler class can be fit to the training set, and later used to standardize any subsequent data.", "from sklearn import preprocessing\n\nscaler = preprocessing.StandardScaler().fit(feature_vectors)\nscaled_features = scaler.transform(feature_vectors)\n\nfeature_vectors", "Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the classifier. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 10% of the data for the test set.", "from sklearn.cross_validation import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(\n        scaled_features, correct_facies_labels, test_size=0.1, random_state=42)", "Training the classifier using Random Forest\nNow we use the cleaned and conditioned training set to create a facies classifier. 
Let's try a random forest.", "\nclf = RandomForestClassifier(n_estimators=150, min_samples_leaf=50,\n                             class_weight=\"balanced\", oob_score=True,\n                             random_state=50)\n", "Now we can train the classifier using the training set we created above.", "clf.fit(X_train, y_train)", "Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set.", "predicted_labels = clf.predict(X_test)", "We need some metrics to evaluate how good our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels.\nThe confusion matrix is simply a 2D array. The entries of the confusion matrix C[i][j] are equal to the number of observations predicted to have facies j, but known to have facies i. \nTo simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file classification_utilities.py in this repo for the display_cm() function.", "from sklearn.metrics import confusion_matrix\nfrom classification_utilities import display_cm, display_adj_cm\n\nconf = confusion_matrix(y_test, predicted_labels)\ndisplay_cm(conf, facies_labels, hide_zeros=True)", "The rows of the confusion matrix correspond to the actual facies labels. The columns correspond to the labels assigned by the classifier. For example, consider the first row. For the feature vectors in the test set that actually have label SS, 18 were correctly identified as SS, 5 were classified as CSiS and 1 was classified as FSiS.\nThe entries along the diagonal are the facies that have been correctly classified. Below we define two functions that will give an overall value for how the algorithm is performing. 
The accuracy is defined as the number of correct classifications divided by the total number of classifications.", "def accuracy(conf):\n total_correct = 0.\n nb_classes = conf.shape[0]\n for i in np.arange(0,nb_classes):\n total_correct += conf[i][i]\n acc = total_correct/sum(sum(conf))\n return acc", "As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels.", "adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])\n\ndef accuracy_adjacent(conf, adjacent_facies):\n nb_classes = conf.shape[0]\n total_correct = 0.\n for i in np.arange(0,nb_classes):\n total_correct += conf[i][i]\n for j in adjacent_facies[i]:\n total_correct += conf[i][j]\n return total_correct / sum(sum(conf))\n\nprint('Facies classification accuracy = %f' % accuracy(conf))\nprint('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies))", "Applying the classification model to the blind data\nWe held a well back from the training, and stored it in a dataframe called blind:", "blind", "The label vector is just the Facies column:", "y_blind = blind['Facies'].values", "We can form the feature matrix by dropping some of the columns and making a new dataframe:", "well_features = blind.drop(['Facies', 'Formation', 'Well Name', 'Depth'], axis=1)", "Now we can transform this with the scaler we made before:", "X_blind = scaler.transform(well_features)", "Now it's a simple matter of making a prediction and storing it back in the dataframe:", "y_pred = clf.predict(X_blind)\nblind['Prediction'] = y_pred", "Let's see how we did with the confusion matrix:", "cv_conf = confusion_matrix(y_blind, y_pred)\n\nprint('Optimized facies classification accuracy = %.2f' % 
accuracy(cv_conf))\nprint('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))", "The results are 0.43 accuracy on facies classification of blind data and 0.87 adjacent facies classification.", "display_cm(cv_conf, facies_labels,\n display_metrics=True, hide_zeros=True)", "...but does remarkably well on the adjacent facies predictions.", "display_adj_cm(cv_conf, facies_labels, adjacent_facies,\n display_metrics=True, hide_zeros=True)\n\ndef compare_facies_plot(logs, compadre, facies_colors):\n #make sure logs are sorted by depth\n logs = logs.sort_values(by='Depth')\n cmap_facies = colors.ListedColormap(\n facies_colors[0:len(facies_colors)], 'indexed')\n \n ztop=logs.Depth.min(); zbot=logs.Depth.max()\n \n cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)\n cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1)\n \n f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 12))\n ax[0].plot(logs.GR, logs.Depth, '-g')\n ax[1].plot(logs.ILD_log10, logs.Depth, '-')\n ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')\n ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')\n ax[4].plot(logs.PE, logs.Depth, '-', color='black')\n im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto',\n cmap=cmap_facies,vmin=1,vmax=9)\n im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto',\n cmap=cmap_facies,vmin=1,vmax=9)\n \n divider = make_axes_locatable(ax[6])\n cax = divider.append_axes(\"right\", size=\"20%\", pad=0.05)\n cbar=plt.colorbar(im2, cax=cax)\n cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', \n 'SiSh', ' MS ', ' WS ', ' D ', \n ' PS ', ' BS ']))\n cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')\n \n for i in range(len(ax)-2):\n ax[i].set_ylim(ztop,zbot)\n ax[i].invert_yaxis()\n ax[i].grid()\n ax[i].locator_params(axis='x', nbins=3)\n \n ax[0].set_xlabel(\"GR\")\n ax[0].set_xlim(logs.GR.min(),logs.GR.max())\n ax[1].set_xlabel(\"ILD_log10\")\n 
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())\n ax[2].set_xlabel(\"DeltaPHI\")\n ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())\n ax[3].set_xlabel(\"PHIND\")\n ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())\n ax[4].set_xlabel(\"PE\")\n ax[4].set_xlim(logs.PE.min(),logs.PE.max())\n ax[5].set_xlabel('Facies')\n ax[6].set_xlabel(compadre)\n \n ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])\n ax[4].set_yticklabels([]); ax[5].set_yticklabels([])\n ax[5].set_xticklabels([])\n ax[6].set_xticklabels([])\n f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)\n\ncompare_facies_plot(blind, 'Prediction', facies_colors)", "Applying the classification model to new data\nNow that we have a trained facies classification model we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input.\nThis dataset is similar to the training data except it does not have facies labels. 
It is loaded into a dataframe called well_data.", "well_data = pd.read_csv('../validation_data_nofacies.csv')\nwell_data['Well Name'] = well_data['Well Name'].astype('category')\nwell_features = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)", "The data needs to be scaled using the same constants we used for the training data.", "X_unknown = scaler.transform(well_features)", "Finally we predict facies labels for the unknown data, and store the results in a Facies column of the well_data dataframe.", "#predict facies of unclassified data\ny_unknown = clf.predict(X_unknown)\nwell_data['Facies'] = y_unknown\nwell_data\n\nwell_data['Well Name'].unique()", "We can use the well log plot to view the classification results along with the well logs.", "make_facies_log_plot(\n    well_data[well_data['Well Name'] == 'STUART'],\n    facies_colors=facies_colors)\n\nmake_facies_log_plot(\n    well_data[well_data['Well Name'] == 'CRAWFORD'],\n    facies_colors=facies_colors)", "Finally we can write out a csv file with the well data along with the facies classification results.", "well_data.to_csv('SHPR_FirstAttempt_RandomForest_facies.csv')", "References\nAmato del Monte, A., 2015. Seismic Petrophysics: Part 1, The Leading Edge, 34 (4). doi:10.1190/tle34040440.1\nBohling, G. C., and M. K. Dubois, 2003. An Integrated Application of Neural Network and Markov Chain Techniques to Prediction of Lithofacies from Well Logs, KGS Open-File Report 2003-50, 6 pp. pdf\nDubois, M. K., G. C. Bohling, and S. Chakrabarti, 2007, Comparison of four approaches to a rock facies classification problem, Computers & Geosciences, 33 (5), 599-617 pp. doi:10.1016/j.cageo.2006.08.011" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
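The `accuracy` and `accuracy_adjacent` functions defined in the facies notebook above reduce to a trace over the confusion matrix plus, for the adjacent variant, the off-diagonal cells named by the adjacency table. A small self-contained sketch of that computation on a toy 3-class confusion matrix (the adjacency lists here are illustrative, not the facies ones):

```python
import numpy as np

def accuracy(conf):
    # Fraction of predictions on the diagonal, i.e. exactly correct.
    return np.trace(conf) / conf.sum()

def accuracy_adjacent(conf, adjacent):
    # Count a prediction as correct if it hits the true class or a neighbour.
    correct = np.trace(conf)
    for i, neighbours in enumerate(adjacent):
        for j in neighbours:
            correct += conf[i][j]
    return correct / conf.sum()

# Toy confusion matrix: rows = true class, columns = predicted class.
conf = np.array([[8, 2, 0],
                 [1, 7, 2],
                 [0, 3, 7]])
# Illustrative adjacency: class 0 borders 1, class 1 borders 0 and 2, etc.
adjacent = [[1], [0, 2], [1]]

print(accuracy(conf))                     # 22/30 ~= 0.733
print(accuracy_adjacent(conf, adjacent))  # (22 + 2 + 1 + 2 + 3)/30 = 1.0
```

Here every misclassification happens to land on a neighbouring class, so the adjacent accuracy reaches 1.0 while the strict accuracy is only 22/30 — the same pattern the notebook reports for the blind well (0.43 strict versus 0.87 adjacent).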
diegocavalca/Studies
phd-thesis/nilmtk/loading_data_into_memory.ipynb
cc0-1.0
[ "Loading data into memory\nThe loading API is central to a lot of NILMTK operations and provides a great deal of flexibility. Let's look at ways in which we can load data from a NILMTK DataStore into memory. To see the full range of possible queries, we'll use the iAWE data set (whose HDF5 file can be downloaded here).\nThe load function returns a generator of DataFrames loaded from the DataStore based on the conditions specified. If no conditions are specified, then all data from all the columns is loaded. (If you have not come across Python generators, it might be worth reading this quick guide to Python generators.)\nNOTE: If you are on Windows, remember to escape the back-slashes, use forward-slashes, or use raw strings when passing paths in Python, e.g. one of the following would work:\npython\niawe = DataSet('c:\\\\data\\\\iawe.h5')\niawe = DataSet('c:/data/iawe.h5')\niawe = DataSet(r'c:\\data\\iawe.h5')", "from nilmtk import DataSet\n\niawe = DataSet('/data/iawe.h5')\nelec = iawe.buildings[1].elec\nelec", "Let us see what measurements we have for the fridge:", "fridge = elec['fridge']\nfridge.available_columns()", "Loading data\nLoad all columns (default)", "df = next(fridge.load())\ndf.head()", "Load a single column of power data\nUse fridge.power_series() which returns a generator of 1-dimensional pandas.Series objects, each containing power data using the most 'sensible' AC type:", "series = next(fridge.power_series())\nseries.head()", "or, to get reactive power:", "series = next(fridge.power_series(ac_type='reactive'))\nseries.head()", "Specify physical_quantity or AC type", "df = next(fridge.load(physical_quantity='power', ac_type='reactive'))\ndf.head()", "To load voltage data:", "df = next(fridge.load(physical_quantity='voltage'))\ndf.head()\n\ndf = next(fridge.load(physical_quantity='power'))\ndf.head()", "Loading by specifying AC type", "df = next(fridge.load(ac_type='active'))\ndf.head()", "Loading by resampling to a specified period", "# resample to minutely (i.e. with a sample period of 60 secs)\ndf = next(fridge.load(ac_type='active', sample_period=60))\ndf.head()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
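The `sample_period=60` argument in the last cell of the NILMTK notebook above asks the loader to return one reading per 60 seconds. Without NILMTK installed, the effect can be approximated in plain pandas as a 60-second resample (a sketch only — the series below is synthetic, not iAWE data, and NILMTK's own resampling additionally handles gaps and metadata, so it is not exactly this one-liner):

```python
import numpy as np
import pandas as pd

# Synthetic 1 Hz "active power" readings for 5 minutes.
index = pd.date_range('2013-07-01', periods=300, freq='s')
power = pd.Series(50 + 10 * np.sin(np.arange(300) / 30.0), index=index)

# Roughly what sample_period=60 does: mean power over each 60 s window.
minutely = power.resample('60s').mean()
print(len(minutely))  # 5 one-minute bins from 300 seconds of data
```

As in the notebook, the downsampled series keeps a DatetimeIndex, so it can be plotted or aligned against other meters directly.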
duncanwp/python_for_climate_scientists
course_content/notebooks/matplotlib_intro.ipynb
gpl-3.0
[ "import matplotlib.pyplot as plt\nplt.rcParams['image.cmap'] = 'viridis'", "An introduction to matplotlib\nMatplotlib is a Python package used widely throughout the scientific Python community to produce high quality 2D publication graphics. It transparently supports a wide range of output formats including PNG (and other raster formats), PostScript/EPS, PDF and SVG and has interfaces for all of the major desktop GUI (Graphical User Interface) toolkits.\nMatplotlib comes with a convenience sub-package called pyplot. It is a general convention to import this module as plt:", "import matplotlib.pyplot as plt", "The matplotlib figure\nAt the heart of every matplotlib plot is the \"Figure\" object. The \"Figure\" object is the top level concept that can be drawn to one of the many output formats, or simply just to screen. Any object that can be drawn in this way is known as an \"Artist\" in matplotlib.\nLet's create our first artist using pyplot, and then show it:", "fig = plt.figure()\nplt.show()", "On its own, drawing the figure artist is uninteresting and will result in an empty piece of paper (that's why we didn't see anything above).\nBy far the most useful artist in matplotlib is the \"Axes\" artist. The Axes artist represents the \"data space\" of a typical plot. A rectangular axes (the most common axes, but not the only axes, e.g. polar plots) will have two Axis Artists with tick labels and tick marks.\nThere is no limit on the number of Axes artists that can exist on a Figure artist. Let's go ahead and create a figure with a single Axes Artist, and show it using pyplot:", "ax = plt.axes()\nplt.show()", "Matplotlib's pyplot module makes the process of creating graphics easier by allowing us to skip some of the tedious Artist construction. 
For example, we did not need to manually create the Figure artist with plt.figure because it was implicit that we needed a figure when we created the Axes artist.\nUnder the hood matplotlib still had to create a Figure artist; we just didn't need to capture it into a variable. We can access the created object with the \"state\" functions found in pyplot: plt.gcf() and plt.gca().\nExercise 1\nGo to matplotlib.org and search for what these strangely named functions do.\nHint: you will find multiple results so remember we are looking for the pyplot versions of these functions.\nWorking with the axes\nMost of your time building a graphic in matplotlib will be spent on the Axes artist. Whilst the matplotlib documentation for the Axes artist is very detailed, it is also rather difficult to navigate (though this is an area of ongoing improvement).\nAs a result, it is often easier to find new plot types by looking at the pyplot module's documentation.\nThe first and most common Axes method is plot. Go ahead and look at the plot documentation from the following sources:\n\nhttp://matplotlib.org/api/pyplot_summary.html\nhttp://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot\nhttp://matplotlib.org/api/axes_api.html?#matplotlib.axes.Axes.plot\n\nPlot can be used to draw one or more lines in axes data space:", "ax = plt.axes()\nline1, = ax.plot([0, 1, 2, 1.5], [3, 1, 2, 4])\nplt.show()", "Notice how the axes view limits (ax.viewLim) have been updated to include the whole of the line.\nShould we want to add some spacing around the edges of our axes we could set the axes margin using the Axes artist's margins method. Alternatively, we could manually set the limits with the Axes artist's set_xlim and set_ylim methods.\nExercise 2\nModify the previous example to produce three different figures that control the limits of the axes.\n1. Manually set the x and y limits to [0.5, 2] and [1, 5] respectively.\n2. 
Define a margin such that there is 10% whitespace inside the axes around the drawn line (Hint: numbers to margins are normalised such that 0% is 0.0 and 100% is 1.0).\n3. Set a 10% margin on the axes with the lower y limit set to 0. (Note: order is important here)\nThe previous example can be simplified to be even shorter. We are not using the line artist returned by ax.plot() so we don't need to store it in a variable. In addition, in exactly the same way that we didn't need to manually create a Figure artist when using the pyplot.axes method, we can remove the plt.axes if we use the plot function from pyplot. Our simple line example then becomes:", "plt.plot([0, 1, 2, 1.5], [3, 1, 2, 4])\nplt.show()", "The simplicity of this example shows how visualisations can be produced quickly and easily with matplotlib, but it is worth remembering that for full control of Figure and Axes artists we can mix the convenience of pyplot with the power of matplotlib's object oriented design.\nExercise 3\nBy calling plot multiple times, create a single axes showing the line plots of $y=sin(x)$ and $y=cos(x)$ in the interval $[0, 2\\pi]$ with 200 linearly spaced $x$ samples.\nMultiple axes on the same figure (aka subplot)\nMatplotlib makes it relatively easy to add more than one Axes artist to a figure. The add_subplot method on a Figure artist, which is wrapped by the subplot function in pyplot, adds an Axes artist in the grid position specified. 
To compute the position, we must tell matplotlib the number of rows and columns to separate the figure into, and which number the axes to be created is (1 based).\nFor example, to create axes at the top right and bottom left of a $3 x 2$ notional grid of Axes artists the grid specifications would be 2, 3, 3 and 2, 3, 4 respectively:", "top_right_ax = plt.subplot(2, 3, 3)\nbottom_left_ax = plt.subplot(2, 3, 4)\n\nplt.show()", "Exercise 3 continued: Copy the answer from the previous task (plotting $y=sin(x)$ and $y=cos(x)$) and add the appropriate plt.subplot calls to create a figure with two rows of Axes artists, one showing $y=sin(x)$ and the other showing $y=cos(x)$.\nFurther plot types\nMatplotlib comes with a huge variety of different plot types. Here is a quick demonstration of the more common ones.", "import numpy as np\n\nx = np.linspace(-180, 180, 60)\ny = np.linspace(-90, 90, 30)\nx2d, y2d = np.meshgrid(x, y)\ndata = np.cos(3 * np.deg2rad(x2d)) + np.sin(2 * np.deg2rad(y2d))\n\nplt.contourf(x, y, data)\nplt.show()\n\nplt.imshow(data, extent=[-180, 180, -90, 90],\n interpolation='nearest', origin='lower')\nplt.show()\n\nplt.pcolormesh(x, y, data)\nplt.show()\n\nplt.scatter(x2d, y2d, c=data, s=15)\nplt.show()\n\nplt.bar(x, data.sum(axis=0), width=np.diff(x)[0])\nplt.show()\n\nplt.plot(x, data.sum(axis=0), linestyle='--',\n marker='d', markersize=10, color='red', alpha=0.5)\nplt.show()", "Titles, Legends, colorbars and annotations\nMatplotlib has convenience functions for the addition of plot elements such as titles, legends, colorbars and text based annotation.\nThe suptitle pyplot function allows us to set the title of a figure, and the set_title method on an Axes artist allows us to set the title of an individual axes. Additionally Axes artists have methods named set_xlabel and set_ylabel to label the respective x and y Axis artists (that's Axis, not Axes). 
Finally, we can add text, located by data coordinates, with the text method on an Axes artist.", "fig = plt.figure()\nax = plt.axes()\n# Adjust the created axes so its topmost extent is 0.8 of the figure.\nfig.subplots_adjust(top=0.8)\nfig.suptitle('Figure title', fontsize=18, fontweight='bold')\nax.set_title('Axes title', fontsize=16)\nax.set_xlabel('The X axis')\nax.set_ylabel('The Y axis $y=f(x)$', fontsize=16)\nax.text(0.5, 0.5, 'Text centered at (0.5, 0.5)\\nin data coordinates.',\n horizontalalignment='center', fontsize=14)\nplt.show()", "The creation of a legend is as simple as adding a \"label\" to lines of interest. This can be done in the call to plt.plot and then followed up with a call to plt.legend:", "x = np.linspace(-3, 7, 200)\nplt.plot(x, 0.5 * x ** 3 - 3 * x ** 2, linewidth=2,\n label='$f(x)=0.5x^3-3x^2$')\nplt.plot(x, 1.5 * x ** 2 - 6 * x, linewidth=2, linestyle='--',\n label='Gradient of $f(x)$', )\nplt.legend(loc='lower right')\nplt.grid()\nplt.show()", "Colorbars are created with the plt.colorbar function:", "x = np.linspace(-180, 180, 60)\ny = np.linspace(-90, 90, 30)\nx2d, y2d = np.meshgrid(x, y)\ndata = np.cos(3 * np.deg2rad(x2d)) + np.sin(2 * np.deg2rad(y2d))\n\nplt.contourf(x, y, data)\nplt.colorbar(orientation='horizontal')\nplt.show()", "Matplotlib comes with powerful annotation capabilities, which are described in detail at http://matplotlib.org/users/annotations_intro.html.\nThe annotation's power can mean that the syntax is a little harder to read, which is demonstrated by one of the simplest examples of using annotate.", "x = np.linspace(-3, 7, 200)\nplt.plot(x, 0.5*x**3 - 3*x**2, linewidth=2)\nplt.annotate('Local minimum',\n xy=(4, -18),\n xytext=(-2, -40), fontsize=15,\n arrowprops={'facecolor': 'black', 'headlength': 10})\nplt.grid()\nplt.show()", "Saving your plots\nYou can save a figure using plt.savefig. This function accepts a filename as input, and saves the current figure to the given file. 
The format of the file is inferred from the file extension:", "plt.plot(range(10))\nplt.savefig('my_plot.png')\n\nfrom IPython.display import Image\nImage(filename='my_plot.png') ", "Matplotlib supports many output file formats, including most commonly used ones. You can see a list of the supported file formats including the filename extensions they are recognised by with:", "plt.gcf().canvas.get_supported_filetypes_grouped()", "Further steps\nMatplotlib has extremely comprehensive documentation at http://matplotlib.org/. Particularly useful parts for beginners are the pyplot summary and the example gallery:\n\npyplot summary: http://matplotlib.org/api/pyplot_summary.html\nexample gallery: http://matplotlib.org/examples/index.html\n\nExercise 4: random walks\nThis exercise requires the use of many of the elements we've discussed (and a few extra ones too, remember the documentation for matplotlib is comprehensive!). We'll start by defining a random walk and some statistical population data for us to plot:", "import matplotlib.pyplot as plt\nimport numpy as np\n\nnp.random.seed(1234)\n\nn_steps = 500\nt = np.arange(n_steps)\n\n# Probability distribution:\nmu = 0.002 # Mean\nsigma = 0.01 # Standard deviation\n\n# Generate a random walk, with position X as a function of time:\nS = mu + sigma * np.random.randn(n_steps)\nX = S.cumsum()\n\n# Calculate the 1 sigma upper and lower analytic population bounds:\nlower_bound = mu * t - sigma * np.sqrt(t)\nupper_bound = mu * t + sigma * np.sqrt(t)", "1. Plot the walker position X against time (t) using a solid blue line of width 2 and give it a label so that it will appear in a legend as \"walker position\".\n2. Plot the population mean (mu*t) against time (t) using a black dashed line of width 1 and give it a label so that it will appear in a legend as \"population mean\".\n3. 
Fill the space between the variables upper_bound and lower_bound using yellow with alpha (transparency) of 0.5, label this so that it will appear in a legend as \"1 sigma range\" (hint: see the fill_between method of an axes or pyplot.fill_between).\n4. Draw a legend in the upper left corner of the axes (hint: you should have already set the labels for each line when you created them).\n5. Label the x-axis \"num steps\" and the y-axis \"position\", and draw gridlines on the axes (hint: ax.grid toggles the state of the grid).\n6. (harder) Fill the area under the walker position curve that is above the upper bound of the population mean using blue with alpha 0.5 (hint: fill_between can take a keyword argument called where that allows you to limit where filling is drawn)." ]
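Hint 6 refers to fill_between's `where` keyword; here is a tiny illustration on synthetic data (not the exercise solution — the walker series and bound below are made up, and the Agg backend is assumed so the snippet runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend
import matplotlib.pyplot as plt
import numpy as np

t = np.arange(10)
x = np.array([0.0, 1, 3, 2, 4, 6, 5, 7, 8, 9])  # made-up "walker"
bound = 0.8 * t                                  # made-up upper bound

fig, ax = plt.subplots()
ax.plot(t, x, lw=2, label='walker position')
ax.plot(t, bound, 'k--', lw=1, label='upper bound')
# 'where' restricts the filled region to the samples meeting the condition
ax.fill_between(t, bound, x, where=x > bound, color='blue', alpha=0.5)
ax.legend(loc='upper left')
```

The fill is drawn only over the samples where `x > bound` holds, which is exactly the behaviour item 6 asks for.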
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
IBMDecisionOptimization/docplex-examples
examples/mp/jupyter/nurses_pandas.ipynb
apache-2.0
[ "The Nurse Assignment Problem\nThis tutorial includes everything you need to set up IBM Decision Optimization CPLEX Modeling for Python (DOcplex), build a Mathematical Programming model, and get its solution by solving the model on the cloud with IBM ILOG CPLEX Optimizer.\nWhen you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.\n\nThis notebook is part of Prescriptive Analytics for Python\nIt requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account\nand you can start using IBM Cloud Pak for Data as a Service right away).\nCPLEX is available on <i>IBM Cloud Pak for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:\n - <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:\n - <i>Python 3.x</i> runtime: Community edition\n - <i>Python 3.x + DO</i> runtime: full edition\n - <i>Cloud Pak for Data</i>: Community edition is installed by default. 
Please install DO addon in Watson Studio Premium for the full edition\n\nTable of contents:\n\nDescribe the business problem\nHow decision optimization (prescriptive analytics) can help\nUse decision optimization\nStep 1: Import the library\nStep 2: Model the data\nStep 3: Prepare the data\nStep 4: Set up the prescriptive model\nDefine the decision variables\nExpress the business constraints\nExpress the objective\nSolve with Decision Optimization\n\n\nStep 5: Investigate the solution and run an example analysis\n\n\nSummary\n\n\nDescribe the business problem\nThis notebook describes how to use CPLEX Modeling for Python together with pandas to\nmanage the assignment of nurses to shifts in a hospital.\nNurses must be assigned to hospital shifts in accordance with various skill and staffing constraints.\nThe goal of the model is to find an efficient balance between the different objectives:\n\nminimize the overall cost of the plan and\nassign shifts as fairly as possible.\n\nHow decision optimization can help\n\n\nPrescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes. \n\n\nPrescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. \n\n\nPrescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. 
Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.\n<br/>\n\n\n<u>With prescriptive analytics, you can:</u> \n\nAutomate the complex decisions and trade-offs to better manage your limited resources.\nTake advantage of a future opportunity or mitigate a future risk.\nProactively update recommendations based on changing events.\nMeet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.\n\nChecking minimum requirements\nThis notebook uses some features of pandas that are available in version 0.17.1 or above.", "import pip\nREQUIRED_MINIMUM_PANDAS_VERSION = '0.17.1'\ntry:\n import pandas as pd\n # compare (major, minor) numerically: a plain string comparison would mis-order versions (e.g. '0.9' > '0.17')\n assert tuple(int(v) for v in pd.__version__.split('.')[:2]) >= (0, 17)\nexcept:\n raise Exception(\"Version %s or above of Pandas is required to run this notebook\" % REQUIRED_MINIMUM_PANDAS_VERSION)", "Use decision optimization\nStep 1: Import the library\nRun the following code to import the Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming (docplex.mp) and Constraint Programming (docplex.cp).", "import sys\ntry:\n import docplex.mp\nexcept:\n raise Exception('Please install docplex. See https://pypi.org/project/docplex/')", "Step 2: Model the data\nThe input data consists of several tables:\n\nThe Departments table lists all departments in the scope of the assignment.\nThe Skills table lists all skills.\nThe Shifts table lists all shifts to be staffed. 
A shift contains a department, a day in the week, plus the start and end times.\nThe Nurses table lists all nurses, identified by their names.\nThe NurseSkills table gives the skills of each nurse.\nThe SkillRequirements table lists the minimum number of persons required for a given department and skill.\nThe NurseVacations table lists days off for each nurse.\nThe NurseAssociations table lists pairs of nurses who wish to work together.\nThe NurseIncompatibilities table lists pairs of nurses who do not want to work together.\n\nLoading data from Excel with pandas\nWe load the data from an Excel file using pandas.\nEach sheet is read into a separate pandas DataFrame.", "CSS = \"\"\"\nbody {\n margin: 0;\n font-family: Helvetica;\n}\ntable.dataframe {\n border-collapse: collapse;\n border: none;\n}\ntable.dataframe tr {\n border: none;\n}\ntable.dataframe td, table.dataframe th {\n margin: 0;\n border: 1px solid white;\n padding-left: 0.25em;\n padding-right: 0.25em;\n}\ntable.dataframe th:not(:empty) {\n background-color: #fec;\n text-align: left;\n font-weight: normal;\n}\ntable.dataframe tr:nth-child(2) th:empty {\n border-left: none;\n border-right: 1px dashed #888;\n}\ntable.dataframe td {\n border: 2px solid #ccf;\n background-color: #f4f4ff;\n}\n table.dataframe thead th:first-child {\n display: none;\n }\n table.dataframe tbody th {\n display: none;\n }\n\"\"\"\n\nfrom IPython.core.display import HTML\nHTML('<style>{}</style>'.format(CSS))\n\nfrom IPython.display import display\n\ntry:\n from StringIO import StringIO\nexcept ImportError:\n from io import StringIO\n\n# This notebook requires pandas to work\nimport pandas as pd\nfrom pandas import DataFrame\n\n# Make sure that xlrd package, which is a pandas optional dependency, is installed\n# This package is required for Excel I/O\ntry:\n import xlrd\nexcept:\n if hasattr(sys, 'real_prefix'):\n #we are in a virtual env.\n !pip install xlrd \n else:\n !pip install --user xlrd \n\n# Use pandas to read the file, 
one tab for each table.\ndata_url = \"https://github.com/IBMDecisionOptimization/docplex-examples/blob/master/examples/mp/jupyter/nurses_data.xls?raw=true\"\nnurse_xls_file = pd.ExcelFile(data_url)\n\ndf_skills = nurse_xls_file.parse('Skills')\ndf_depts = nurse_xls_file.parse('Departments')\ndf_shifts = nurse_xls_file.parse('Shifts')\n# Rename df_shifts index\ndf_shifts.index.name = 'shiftId'\n\n# Index is column 0: name\ndf_nurses = nurse_xls_file.parse('Nurses', header=0, index_col=0)\ndf_nurse_skills = nurse_xls_file.parse('NurseSkills')\ndf_vacations = nurse_xls_file.parse('NurseVacations')\ndf_associations = nurse_xls_file.parse('NurseAssociations')\ndf_incompatibilities = nurse_xls_file.parse('NurseIncompatibilities')\n\n# Display the nurses dataframe\nprint(\"#nurses = {}\".format(len(df_nurses)))\nprint(\"#shifts = {}\".format(len(df_shifts)))\nprint(\"#vacations = {}\".format(len(df_vacations)))", "In addition, we introduce some extra global data:\n\nThe maximum work time for each nurse.\nThe maximum and minimum number of shifts worked by a nurse in a week.", "# maximum work time (in hours)\nmax_work_time = 40\n\n# maximum number of shifts worked in a week.\nmax_nb_shifts = 5", "Shifts are stored in a separate DataFrame.", "df_shifts", "Step 3: Prepare the data\nWe need to precompute additional data for shifts. \nFor each shift, we need the start time and end time expressed in hours, counting from the beginning of the week: Monday 8am is converted to 8, Tuesday 8am is converted to 24+8 = 32, and so on.\nSub-step #1\nWe start by adding an extra column dow (day of week) which converts the string \"day\" into an integer in 0..6 (Monday is 0, Sunday is 6).", "days = [\"monday\", \"tuesday\", \"wednesday\", \"thursday\", \"friday\", \"saturday\", \"sunday\"]\nday_of_weeks = dict(zip(days, range(7)))\n\n# utility to convert a day string e.g. 
\"Monday\" to an integer in 0..6\ndef day_to_day_of_week(day):\n return day_of_weeks[day.strip().lower()]\n\n# for each day name, we normalize it by stripping whitespace and converting it to lowercase\n# \" Monday\" -> \"monday\"\ndf_shifts[\"dow\"] = df_shifts.day.apply(day_to_day_of_week)\ndf_shifts", "Sub-step #2 : Compute the absolute start time of each shift.\nComputing the start time in the week is easy: just add 24*dow to column start_time. The result is stored in a new column wstart.", "df_shifts[\"wstart\"] = df_shifts.start_time + 24 * df_shifts.dow", "Sub-step #3 : Compute the absolute end time of each shift.\nComputing the absolute end time is a little more complicated as certain shifts span across midnight. For example, Shift #3 starts on Monday at 18:00 and ends Tuesday at 2:00 AM. The absolute end time of Shift #3 is 26, not 2.\nThe general rule for computing absolute end time is:\nabs_end_time = end_time + 24 * dow + (start_time >= end_time ? 24 : 0)\nAgain, we use pandas to add a new calculated column wend. This is done by using the pandas apply method with an anonymous lambda function over rows. (On large data sets, apply's raw=True option, which avoids creating a pandas Series for each row, can improve performance significantly; it is not needed on this small table.)", "# an auxiliary function to calculate absolute end time of a shift\ndef calculate_absolute_endtime(start, end, dow):\n return 24*dow + end + (24 if start>=end else 0)\n\n# store the results in a new column\ndf_shifts[\"wend\"] = df_shifts.apply(lambda row: calculate_absolute_endtime(\n row.start_time, row.end_time, row.dow), axis=1)", "Sub-step #4 : Compute the duration of each shift.\nComputing the duration of each shift is now a straightforward difference of columns. 
The result is stored in column duration.", "df_shifts[\"duration\"] = df_shifts.wend - df_shifts.wstart", "Sub-step #5 : Compute the minimum demand for each shift.\nMinimum demand is the product of duration (in hours) by the minimum required number of nurses. Thus, in number of \nnurse-hours, this demand is stored in another new column min_demand.\nFinally, we display the updated shifts DataFrame with all calculated columns.", "# also compute minimum demand in nurse-hours\ndf_shifts[\"min_demand\"] = df_shifts.min_req * df_shifts.duration\n\n# finally check the modified shifts dataframe\ndf_shifts", "Step 4: Set up the prescriptive model\nCreate the DOcplex model\nThe model contains all the business constraints and defines the objective.\nWe now use CPLEX Modeling for Python to build a Mixed Integer Programming (MIP) model for this problem.", "from docplex.mp.model import Model\nmdl = Model(name=\"nurses\")", "Define the decision variables\nFor each (nurse, shift) pair, we create one binary variable that is equal to 1 when the nurse is assigned to the shift.\nWe use the binary_var_matrix method of class Model, as each binary variable is indexed by two objects: one nurse and one shift.", "# first global collections to iterate upon\nall_nurses = df_nurses.index.values\nall_shifts = df_shifts.index.values\n\n# the assignment variables.\nassigned = mdl.binary_var_matrix(keys1=all_nurses, keys2=all_shifts, name=\"assign_%s_%s\")", "Express the business constraints\nOverlapping shifts\nSome shifts overlap in time, and thus cannot be assigned to the same nurse.\nTo check whether two shifts overlap in time, we start by ordering all shifts with respect to their wstart and duration properties. Then, for each shift, we iterate over the subsequent shifts in this ordered list to easily compute the subset of overlapping shifts.\nWe use pandas operations to implement this algorithm. 
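Before moving on, the midnight wrap-around rule from Sub-step #3 can be checked in isolation with plain Python (a small sketch mirroring calculate_absolute_endtime; the first case is the Shift #3 example from the text):

```python
def absolute_end_time(start, end, dow):
    # add 24h when the shift wraps past midnight, i.e. start >= end
    return 24 * dow + end + (24 if start >= end else 0)

# Shift #3: Monday (dow=0), 18:00 -> 02:00 ends at absolute hour 26, not 2
assert absolute_end_time(18, 2, 0) == 26
# an ordinary Tuesday shift, 08:00 -> 12:00, ends at 24 + 12 = 36
assert absolute_end_time(8, 12, 1) == 36
print("wrap-around rule OK")
```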
But first, we organize all decision variables in a DataFrame.\nFor convenience, we also organize the decision variables in a pivot table with nurses as row index and shifts as columns. The pandas unstack operation does this.", "# Organize decision variables in a DataFrame\ndf_assigned = DataFrame({'assigned': assigned})\ndf_assigned.index.names=['all_nurses', 'all_shifts']\n\n# Re-organize the Data Frame as a pivot table with nurses as row index and shifts as columns:\ndf_assigned_pivot = df_assigned.unstack(level='all_shifts')\n\n# Create a pivot using nurses and shifts index as dimensions\n#df_assigned_pivot = df_assigned.reset_index().pivot(index='all_nurses', columns='all_shifts', values='assigned')\n\n# Display first rows of the pivot table\ndf_assigned_pivot.head()", "We create a DataFrame representing a list of shifts sorted by \"wstart\" and \"duration\".\nThis sorted list will be used to easily detect overlapping shifts.\nNote that indices are reset after sorting so that the DataFrame can be indexed with respect to\nthe index in the sorted list and not the original unsorted list. 
This is the purpose of the reset_index()\noperation which also adds a new column named \"shiftId\" with the original index.", "# Create a Data Frame representing a list of shifts sorted by wstart and duration.\n# One keeps only the three relevant columns: 'shiftId', 'wstart' and 'wend' in the resulting Data Frame \ndf_sorted_shifts = df_shifts.sort_values(['wstart','duration']).reset_index()[['shiftId', 'wstart', 'wend']]\n\n# Display the first rows of the newly created Data Frame\ndf_sorted_shifts.head()", "Next, we state that for any pair of shifts that overlap in time, a nurse can be assigned to only one of the two.", "number_of_incompatible_shift_constraints = 0\nfor shift in df_sorted_shifts.itertuples():\n # Iterate over following shifts\n # 'shift[0]' contains the index of the current shift in the df_sorted_shifts Data Frame\n for shift_2 in df_sorted_shifts.iloc[shift[0] + 1:].itertuples():\n if (shift_2.wstart < shift.wend):\n # Iterate over all nurses to force incompatible assignment for the current pair of overlapping shifts\n for nurse_assignments in df_assigned_pivot.iloc[:, [shift.shiftId, shift_2.shiftId]].itertuples():\n # this is actually a logical OR\n mdl.add_constraint(nurse_assignments[1] + nurse_assignments[2] <= 1)\n number_of_incompatible_shift_constraints += 1\n else:\n # No need to test overlap with following shifts\n break\nprint(\"#incompatible shift constraints: {}\".format(number_of_incompatible_shift_constraints))", "Vacations\nWhen the nurse is on vacation, he cannot be assigned to any shift starting that day.\nWe use the pandas merge operation to create a join between the \"df_vacations\", \"df_shifts\", and \"df_assigned\" DataFrames. 
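On a toy example, the vacations join pattern just described looks like this (made-up nurse names and day indices, pandas only; the real notebook joins a second time onto the decision-variable frame):

```python
import pandas as pd

# vacations: which nurse is off on which day of week (0 = Monday)
vacations = pd.DataFrame({'nurse': ['Anne', 'Bob'], 'dow': [0, 2]})
# shifts: which shift starts on which day of week
shifts = pd.DataFrame({'shiftId': [0, 1, 2], 'dow': [0, 1, 2]})

# the inner join on 'dow' yields exactly the (nurse, shift) pairs to forbid
forbidden = vacations.merge(shifts, on='dow')
```

Anne is off on Monday and shift 0 starts on Monday, so the pair (Anne, 0) appears in the join; likewise (Bob, 2).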
Each row of the resulting DataFrame contains the assignment decision variable corresponding to the matching (nurse, shift) pair.", "# Add 'day of week' column to vacations Data Frame\ndf_vacations['dow'] = df_vacations.day.apply(day_to_day_of_week)\n\n# Join 'df_vacations', 'df_shifts' and 'df_assigned' Data Frames to create the list of 'forbidden' assignments.\n# The 'reset_index()' function is invoked to move 'shiftId' index as a column in 'df_shifts' Data Frame, and\n# to move the index pair ('all_nurses', 'all_shifts') as columns in 'df_assigned' Data Frame.\n# 'reset_index()' is invoked so that a join can be performed between Data Frames, based on column names.\ndf_assigned_reindexed = df_assigned.reset_index()\ndf_vacation_forbidden_assignments = df_vacations.merge(df_shifts.reset_index()[['dow', 'shiftId']]).merge(\n df_assigned_reindexed, left_on=['nurse', 'shiftId'], right_on=['all_nurses', 'all_shifts'])\n\n# Here are the first few rows of the resulting Data Frames joins\ndf_vacation_forbidden_assignments.head()\n\nfor forbidden_assignment in df_vacation_forbidden_assignments.itertuples():\n # to forbid an assignment just set the variable to zero.\n mdl.add_constraint(forbidden_assignment.assigned == 0)\nprint(\"# vacation forbids: {} assignments\".format(len(df_vacation_forbidden_assignments)))", "Associations\nSome pairs of nurses get along particularly well, so we wish to assign them together as a team. 
In other words, for every such couple and for each shift, both assignment variables should always be equal.\nEither both nurses work the shift, or both do not.\nIn the same way we modeled vacations, we use the pandas merge operation to create a DataFrame for which each row contains the pair of nurse-shift assignment decision variables matching each association.", "# Join 'df_assignment' Data Frame twice, based on associations to get corresponding decision variables pairs for all shifts\n# The 'suffixes' parameter in the second merge indicates our preference for updating the name of columns that occur both\n# in the first and second argument Data Frames (in our case, these columns are 'all_nurses' and 'assigned').\ndf_preferred_assign = df_associations.merge(\n df_assigned_reindexed, left_on='nurse1', right_on='all_nurses').merge(\n df_assigned_reindexed, left_on=['nurse2', 'all_shifts'], right_on=['all_nurses', 'all_shifts'], suffixes=('_1','_2'))\n\n# Here are the first few rows of the resulting Data Frames joins\ndf_preferred_assign.head()", "The associations constraint can now easily be formulated by iterating on the rows of the \"df_preferred_assign\" DataFrame.", "for preferred_assign in df_preferred_assign.itertuples():\n mdl.add_constraint(preferred_assign.assigned_1 == preferred_assign.assigned_2)", "Incompatibilities\nSimilarly, certain pairs of nurses do not get along well, and we want to avoid having them together on a shift.\nIn other terms, for each shift, both nurses of an incompatible pair cannot be assigned together to the shift. 
Again, we state a logical OR between the two assignments: at most one nurse from the pair can work the shift.\nWe first create a DataFrame whose rows contain pairs of invalid assignment decision variables, using the same pandas merge operations as in the previous step.", "# Join assignment Data Frame twice, based on incompatibilities Data Frame to get corresponding decision variables pairs\n# for all shifts\ndf_incompatible_assign = df_incompatibilities.merge(\n df_assigned_reindexed, left_on='nurse1', right_on='all_nurses').merge(\n df_assigned_reindexed, left_on=['nurse2', 'all_shifts'], right_on=['all_nurses', 'all_shifts'], suffixes=('_1','_2'))\n\n# Here are the first few rows of the resulting Data Frames joins\ndf_incompatible_assign.head()", "The incompatibilities constraint can now easily be formulated, by iterating on the rows of the \"df_incompatible_assign\" DataFrame.", "for incompatible_assign in df_incompatible_assign.itertuples():\n mdl.add_constraint(incompatible_assign.assigned_1 + incompatible_assign.assigned_2 <= 1)", "Constraints on work time\nRegulations force constraints on the total work time over a week;\nand we compute this total work time in a new variable. We store the variable in an extra column in the nurse DataFrame.\nThe variable is declared as continuous though it contains only integer values. This is done to avoid adding unnecessary integer variables for the branch and bound algorithm. 
\nThese variables are not true decision variables; they are used to express work constraints.\nFrom a pandas perspective, we apply a function over the rows of the nurse DataFrame to create this variable and store it into a new column of the DataFrame.", "# auxiliary function to create worktime variable from a row\ndef make_var(row, varname_fmt):\n return mdl.continuous_var(name=varname_fmt % row.name, lb=0)\n\n# apply the function over nurse rows and store result in a new column\ndf_nurses[\"worktime\"] = df_nurses.apply(lambda r: make_var(r, \"worktime_%s\"), axis=1)\n\n# display nurse dataframe\ndf_nurses", "Define total work time\nWork time variables must be constrained to be equal to the sum of hours actually worked.\nWe use the pandas groupby operation to collect all assignment decision variables for each nurse in a separate series. Then, we iterate over nurses to post a constraint calculating the actual worktime for each nurse as the dot product of the series of nurse-shift assignments with the series of shift durations.", "# Use pandas' groupby operation to enforce constraint calculating worktime for each nurse as the sum of all assigned\n# shifts times the duration of each shift\nfor nurse, nurse_assignments in df_assigned.groupby(level='all_nurses'):\n mdl.add_constraint(df_nurses.worktime[nurse] == mdl.dot(nurse_assignments.assigned, df_shifts.duration))\n \n# print model information and check we now have 32 extra continuous variables\nmdl.print_information()", "Maximum work time\nFor each nurse, we add a constraint to enforce the maximum work time for a week.\nAgain we use the apply method, this time with an anonymous lambda function.", "# we use pandas' apply() method to set an upper bound on all worktime variables.\ndef set_max_work_time(v):\n v.ub = max_work_time\n # Optionally: return a string for fancy display of the constraint in the Output cell\n return str(v) + ' <= ' + str(v.ub)\n\ndf_nurses[\"worktime\"].apply(convert_dtype=False, 
func=set_max_work_time)", "Minimum requirement for shifts\nEach shift requires a minimum number of nurses. \nFor each shift, the sum over all nurses of assignments to this shift\nmust be greater than or equal to the minimum requirement.\nThe pandas groupby operation is invoked to collect all assignment decision variables for each shift in a separate series. Then, we iterate over shifts to post the constraint enforcing the minimum number of nurse assignments for each shift.", "# Use pandas' groupby operation to enforce minimum requirement constraint for each shift\nfor shift, shift_nurses in df_assigned.groupby(level='all_shifts'):\n mdl.add_constraint(mdl.sum(shift_nurses.assigned) >= df_shifts.min_req[shift])", "Express the objective\nThe objective mixes different (and contradictory) KPIs. \nThe first KPI is the total salary cost, computed as the sum of work times over all nurses, weighted by pay rate.\nWe compute this KPI as an expression from the variables we previously defined by using the pandas summation over the DOcplex objects.", "# again leverage pandas to create a series of expressions: costs of each nurse\ntotal_salary_series = df_nurses.worktime * df_nurses.pay_rate\n\n# compute global salary cost using pandas sum()\n# Note that the result is a DOcplex expression: DOcplex is fully compatible with pandas\ntotal_salary_cost = total_salary_series.sum()\nmdl.add_kpi(total_salary_cost, \"Total salary cost\")", "Minimizing salary cost\nIn a preliminary version of the model, we minimize the total salary cost. This is accomplished\nusing the Model.minimize() method.", "mdl.minimize(total_salary_cost)\nmdl.print_information()", "Solve with Decision Optimization\nNow we have everything we need to solve the model, using Model.solve(). 
The following cell solves using your local CPLEX (if any, and provided you have added it to your PYTHONPATH variable).", "# Set Cplex mipgap to 1e-5 to enforce precision to be of the order of a unit (objective value magnitude is ~1e+5).\nmdl.parameters.mip.tolerances.mipgap = 1e-5\n\ns = mdl.solve(log_output=True)\nassert s, \"solve failed\"\nmdl.report()", "Step 5: Investigate the solution and then run an example analysis\nWe take advantage of pandas to analyze the results. First we store the solution values of the assignment variables into a new pandas Series.\nCalling solution_value on a DOcplex variable returns its value in the solution (provided the model has been successfully solved).", "# Create a pandas Series containing actual shift assignment decision variables value\ns_assigned = df_assigned.assigned.apply(lambda v: v.solution_value)\n\n# Create a pivot table by (nurses, shifts), using pandas' \"unstack\" method to transform the 'all_shifts' row index\n# into columns\ndf_res = s_assigned.unstack(level='all_shifts')\n\n# Display the first few rows of the resulting pivot table\ndf_res.head()", "Analyzing how worktime is distributed\nLet's analyze how worktime is distributed among nurses. 
\nFirst, we compute the global average work time as the total minimum requirement in hours, divided by number of nurses.", "s_demand = df_shifts.min_req * df_shifts.duration\ntotal_demand = s_demand.sum()\navg_worktime = total_demand / float(len(all_nurses))\nprint(\"* theoretical average work time is {0:g} h\".format(avg_worktime))", "Let's analyze the series of deviations to the average, stored in a pandas Series.", "# a pandas series of worktimes solution values\ns_worktime = df_nurses.worktime.apply(lambda v: v.solution_value)\n\n# returns a new series computed as deviation from average\ns_to_mean = s_worktime - avg_worktime\n\n# take the absolute value\ns_abs_to_mean = s_to_mean.apply(abs)\n\n\ntotal_to_mean = s_abs_to_mean.sum()\nprint(\"* the sum of absolute deviations from mean is {}\".format(total_to_mean))", "To see how work time is distributed among nurses, print a histogram of work time values.\nNote that, as all time data are integers, work times in the solution can take only integer values.", "import matplotlib.pyplot as plt\n%matplotlib inline\n\n# we can also plot as a histogram the distribution of worktimes\ns_worktime.plot.hist(color='LightBlue')\nplt.xlabel(\"worktime\")", "How shifts are distributed\nLet's now analyze the solution from the number of shifts perspective.\nHow many shifts does each nurse work? Are these shifts fairly distributed amongst nurses?\nWe compute a new column in our result DataFrame for the number of shifts worked,\nby summing rows (the \"axis=1\" argument in the sum() call indicates to pandas that each sum is performed by row instead of column):", "# a pandas series of #shifts worked\ndf_worked = df_res[all_shifts].sum(axis=1)\ndf_res[\"worked\"] = df_worked\n\ndf_worked.plot.hist(color=\"gold\", xlim=(0,10))\nplt.ylabel(\"#shifts worked\")", "We see that one nurse works significantly fewer shifts than others do. What is the average number of shifts worked by a nurse? 
This is equal to the total demand divided by the number of nurses.\nOf course, this yields a fractional number of shifts that is not practical, but nonetheless will help us quantify\nthe fairness in shift distribution.", "avg_worked = df_shifts[\"min_req\"].sum() / float(len(all_nurses))\nprint(\"-- expected avg #shifts worked is {}\".format(avg_worked))\n\nworked_to_avg = df_res[\"worked\"] - avg_worked\ntotal_to_mean = worked_to_avg.apply(abs).sum()\nprint(\"-- total absolute deviation to mean #shifts is {}\".format(total_to_mean))", "Introducing a fairness goal\nAs the above diagram suggests, the distribution of shifts could be improved.\nWe implement this by adding one extra objective, fairness, which balances\nthe shifts assigned over nurses.\nNote that we can edit the model, that is add (or remove) constraints, even after it has been solved. \nStep #1 : Introduce three new variables per nurse to model the\nnumber of shifts worked and positive and negative deviations from the average.", "# add three new variables per nurse: #shifts worked, plus deviations above and below average\ndf_nurses[\"worked\"] = df_nurses.apply(lambda r: make_var(r, \"worked%s\"), axis=1)\ndf_nurses[\"overworked\"] = df_nurses.apply(lambda r: make_var(r, \"overw_%s\"), axis=1)\ndf_nurses[\"underworked\"] = df_nurses.apply(lambda r: make_var(r, \"underw_%s\"), axis=1)", "Step #2 : Post the constraint that links these variables together.", "# Use the pandas groupby operation to enforce the constraint calculating number of worked shifts for each nurse\nfor nurse, nurse_assignments in df_assigned.groupby(level='all_nurses'):\n # nb of worked shifts is sum of assigned shifts\n mdl.add_constraint(df_nurses.worked[nurse] == mdl.sum(nurse_assignments.assigned))\n\nfor nurse in df_nurses.itertuples():\n # nb worked is average + over - under\n mdl.add_constraint(nurse.worked == avg_worked + nurse.overworked - nurse.underworked)", "Step #3 : Define KPIs to measure the result after solve.", "# finally, define kpis for 
over and under average quantities\ntotal_overw = mdl.sum(df_nurses[\"overworked\"])\nmdl.add_kpi(total_overw, \"Total over-worked\")\ntotal_underw = mdl.sum(df_nurses[\"underworked\"])\nmdl.add_kpi(total_underw, \"Total under-worked\")", "Finally, let's modify the objective by adding the sum of over_worked and under_worked to the previous objective.\nNote: The definitions of over_worked and under_worked as described above are not sufficient to give them an unambiguous value. However, as all these variables are minimized, CPLEX ensures that these variables take the minimum possible values in the solution.", "mdl.minimize(total_salary_cost + total_overw + total_underw) # incorporate over_worked and under_worked in objective", "Our modified model is ready to solve.\nThe log_output=True parameter tells CPLEX to print the log on the standard output.", "sol2 = mdl.solve(log_output=True) # solve again and get a new solution\nassert sol2, \"Solve failed\"\nmdl.report()", "Analyzing new results\nLet's recompute the new total deviation from average on this new solution.", "# Create a pandas Series containing actual shift assignment decision variables value\ns_assigned2 = df_assigned.assigned.apply(lambda v: v.solution_value)\n\n# Create a pivot table by (nurses, shifts), using pandas' \"unstack\" method to transform the 'all_shifts' row index\n# into columns\ndf_res2 = s_assigned2.unstack(level='all_shifts')\n\n# Add a new column to the pivot table containing the #shifts worked by summing over each row\ndf_res2[\"worked\"] = df_res2[all_shifts].sum(axis=1)\n\n# total absolute deviation from average is directly read on expressions\nnew_total_to_mean = total_overw.solution_value + total_underw.solution_value\nprint(\"-- total absolute deviation to mean #shifts is now {0} down from {1}\".format(new_total_to_mean, total_to_mean))\n\n# Display the first few rows of the result Data Frame\ndf_res2.head()", "Let's print the new histogram of shifts worked.", 
"df_res2[\"worked\"].plot(kind=\"hist\", color=\"gold\", xlim=(3,8))", "The breakdown of shifts over nurses is much closer to the average than it was in the previous version.\nBut what would be the minimal fairness level, that is, the absolute minimum for the deviation from the ideal average number of shifts?\nCPLEX can tell us: simply minimize only the total deviation from average, ignoring the salary cost.\nOf course this is unrealistic, but it will help us quantify how far our fairness result is from the\nabsolute optimal fairness.\nWe modify the objective and solve for the third time (using the usual necessary update for DOcplexcloud credentials).", "mdl.minimize(total_overw + total_underw)\nassert mdl.solve(), \"solve failed\"\nmdl.report()", "In the fairness-optimal solution, we have zero under-average shifts and 4 over-average.\nSalary cost is now higher than the previous value of 28884 but this was expected as salary cost was not part of the objective.\nTo summarize, the absolute minimum for this measure of fairness is 4, and we have found a balance with fairness=7.\nFinally, we display the histogram for this optimal-fairness solution.", "# Create a pandas Series containing actual shift assignment decision variables value\ns_assigned_fair = df_assigned.assigned.apply(lambda v: v.solution_value)\n\n# Create a pivot table by (nurses, shifts), using pandas' \"unstack\" method to transform the 'all_shifts' row index\n# into columns\ndf_res_fair = s_assigned_fair.unstack(level='all_shifts')\n\n# Add a new column to the pivot table containing the #shifts worked by summing over each row\ndf_res_fair[\"solution_value_fair\"] = df_res_fair[all_shifts].sum(axis=1)\ndf_res_fair[\"worked\"] = df_res_fair[all_shifts].sum(axis=1)\ndf_res_fair[\"worked\"].plot.hist(color=\"plum\", xlim=(3,8))", "In the above figure, all nurses but one are assigned the average of 7 shifts, which is what we expected.\nSummary\nYou learned how to set up and use IBM Decision Optimization
CPLEX Modeling for Python to formulate a Mathematical Programming model and solve it with IBM Decision Optimization on Cloud.\nReferences\n\nCPLEX Modeling for Python documentation\nIBM Decision Optimization\nNeed help with DOcplex or to report a bug? Please go here.\nContact us at dofeedback@wwpdl.vnet.ibm.com.\n\nCopyright &copy; 2017-2021 IBM. IPLA licensed Sample Materials." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
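The fairness linearization used in the notebook above (worked = avg + over - under, with both deviation variables pushed down by the objective) can be checked outside CPLEX. The sketch below uses hypothetical shift counts and plain Python; it shows the split a minimizing solver would pick, where over + under equals the absolute deviation |worked - avg|.

```python
# Standalone sketch of the over/under deviation trick (hypothetical data,
# no CPLEX needed). Each worked count is written as avg + over - under with
# over, under >= 0; minimizing over + under forces one of the two to zero,
# so their sum equals the absolute deviation from the average.

avg_worked = 7.0
worked = [5, 7, 9, 8, 6]  # hypothetical per-nurse shift counts

def deviation_split(w, avg):
    """Return the (over, under) pair a minimizing solver would choose."""
    over = max(w - avg, 0.0)
    under = max(avg - w, 0.0)
    assert w == avg + over - under  # the linking constraint holds
    return over, under

splits = [deviation_split(w, avg_worked) for w in worked]
total_over = sum(o for o, _ in splits)
total_under = sum(u for _, u in splits)

# over + under reproduces |worked - avg| nurse by nurse
assert all(o + u == abs(w - avg_worked) for (o, u), w in zip(splits, worked))
print(total_over, total_under)  # 3.0 3.0
```

Any other split with the same difference (e.g. over = under = 1 for a nurse exactly at the average) also satisfies the linking constraint, which is why the notebook notes that the variables are only pinned to their minimum values because they appear in the minimized objective.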
qdev-dk/Majorana
examples/Qcodes example with Alazar ATS9360.ipynb
gpl-3.0
[ "Qcodes example notebook for Alazar card ATS9360 and acq controllers", "%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport qcodes as qc\nimport qcodes.instrument.parameter as parameter\nimport qcodes.instrument_drivers.AlazarTech.ATS9360 as ATSdriver\nfrom qdev_wrappers.alazar_controllers.ATSChannelController import ATSChannelController\nfrom qdev_wrappers.alazar_controllers.alazar_channel import AlazarChannel\n#import qcodes.instrument_drivers.AlazarTech.acq_helpers as helpers\nfrom qcodes.station import Station\n\nimport logging\nlogging.basicConfig(level=logging.INFO)\n\nfrom qcodes.instrument.parameter import ManualParameter\nimport qcodes", "NB: See ATS9360 example notebook for general commands", "# Create the ATS9360 instrument\nalazar = ATSdriver.AlazarTech_ATS9360(name='Alazar')\n# Print all information about this Alazar card\nalazar.get_idn()\n\n# Configure all settings in the Alazar card\nalazar.config(clock_source='INTERNAL_CLOCK',\n sample_rate=1_000_000_000,\n clock_edge='CLOCK_EDGE_RISING',\n decimation=1,\n coupling=['DC','DC'],\n channel_range=[.4,.4],\n impedance=[50,50],\n trigger_operation='TRIG_ENGINE_OP_J',\n trigger_engine1='TRIG_ENGINE_J',\n trigger_source1='EXTERNAL',\n trigger_slope1='TRIG_SLOPE_POSITIVE',\n trigger_level1=160,\n trigger_engine2='TRIG_ENGINE_K',\n trigger_source2='DISABLE',\n trigger_slope2='TRIG_SLOPE_POSITIVE',\n trigger_level2=128,\n external_trigger_coupling='DC',\n external_trigger_range='ETR_2V5',\n trigger_delay=0,\n timeout_ticks=0,\n aux_io_mode='AUX_IN_AUXILIARY', # AUX_IN_TRIGGER_ENABLE for seq mode on\n aux_io_param='NONE' # TRIG_SLOPE_POSITIVE for seq mode on\n )", "Example 1\nPulls the raw data the alazar acquires averaged over records and buffers.", "# Create the acquisition controller which will take care of the data handling and tell it which \n# alazar instrument to talk to. 
Explicitly pass the default options to the Alazar.\n# Don't integrate over samples but average over records\nmyctrl = ATSChannelController(name='my_controller', alazar_name='Alazar')", "Put the Alazar and the controller in a station so we ensure that all parameters are captured", "station = qc.Station(alazar, myctrl)", "This controller is designed to be high-level and it is not possible to directly set the number of records, buffers and samples. The number of samples is indirectly controlled by the integration time and integration delay, and the number of averages controls the number of buffers and records acquired.", "myctrl.int_time.set?\n\nmyctrl.int_time._latest\n\nmyctrl.int_delay(2e-7)\nmyctrl.int_time(2e-6)\nprint(myctrl.samples_per_record())\n#myctrl.num_avg(1000)", "By default the controller does not have any channels associated with it.", "myctrl.channels", "1D samples trace\nLet's define a channel where we average over buffers and records but not over samples. This will give us a time series with an x axis defined by int_time, int_delay and the sampling rate. First we create a channel and set the relevant parameters.
We may choose to append the channel to the controller's built-in list of channels for future reference.", "chan1 = AlazarChannel(myctrl, 'mychan', demod=False, integrate_samples=False)\nmyctrl.channels.append(chan1)\n\nchan1.num_averages(1000)\n\nchan1.alazar_channel('A')\nchan1.prepare_channel()\n\n# Measure this \ndata1 = qc.Measure(chan1.data).run()\nqc.MatPlot(data1.my_controller_mychan_data)", "We can measure the time taken to do a measurement", "%%time\nqc.Measure(chan1.data).run()", "Demodulation\nWe may optionally choose to demodulate the data that we acquire using a software demodulator", "chan1d = AlazarChannel(myctrl, 'mychan_demod_1', demod=True, integrate_samples=False)\nmyctrl.channels.append(chan1d)\n\nchan1d.num_averages(1000)\n\nchan1d.alazar_channel('A')\nchan1d.demod_freq(1e6)\nchan1d.demod_type('magnitude')\n\nchan1d.prepare_channel()\n\n# Measure this \ndata1d = qc.Measure(chan1d.data).run()\nqc.MatPlot(data1d.my_controller_mychan_demod_1_data)", "We are free to add more demodulators with different frequencies", "chan1d2 = AlazarChannel(myctrl, 'mychan_demod_2', demod=True, integrate_samples=False)\nmyctrl.channels.append(chan1d2)\n\nchan1d2.num_averages(1000)\n\nchan1d2.alazar_channel('A')\nchan1d2.demod_freq(2e6)\nchan1d2.demod_type('magnitude')\n\nchan1d2.prepare_channel()\n\n# Measure this \ndata1d = qc.Measure(chan1d2.data).run()\nqc.MatPlot(data1d.my_controller_mychan_demod_2_data)", "myctrl.channels", "We can get the data from multiple channels in one go provided that the shape (buffers, records, samples) is the same. The time overhead is fairly small as we are only capturing the data once.", "%%time\ndata = qc.Measure(myctrl.channels.data).run()\n\ndata1 = qc.Measure(myctrl.channels.data).run()\nplot = qc.MatPlot()\nplot.add(data.my_controller_mychan_data)\nplot.add(data.my_controller_mychan_demod_1_data)\nplot.add(data.my_controller_mychan_demod_2_data)", "1D records trace\nWe can also do a 1D trace of records", "chan2 = AlazarChannel(myctrl,
'myrecchan', demod=False, average_records=False)\nmyctrl.channels.append(chan2)\n\nchan2.num_averages(100)\nchan2.records_per_buffer(55)\nchan2.alazar_channel('A')\n\nchan2.prepare_channel()\n\n# Measure this \ndata2 = qc.Measure(myctrl.channels[-1].data).run()\nqc.MatPlot(data2.my_controller_myrecchan_data)", "Again it is possible to demodulate the data", "chan2d = AlazarChannel(myctrl, 'myrecchan_D', demod=True, average_records=False)\nmyctrl.channels.append(chan2d)\n\nprint(myctrl.int_delay())\nprint(myctrl.int_time())\n\nmyctrl.int_time._latest\n\nchan2d.alazar_channel('A')\nchan2d.demod_freq(1e6)\nchan2d.demod_type('magnitude')\n\nchan2d.num_averages(100)\nchan2d.records_per_buffer(55)\nchan2d.alazar_channel('A')\n\nchan2d.prepare_channel()\n\n# Measure this \ndata2d = qc.Measure(myctrl.channels[-1].data).run()\nqc.MatPlot(data2d.my_controller_myrecchan_D_data)\n\nmyctrl.channels\n\nmyctrl.channels[-2:]\n\ndata = qc.Measure(myctrl.channels[-2:].data).run()\nplot = qc.MatPlot()\nplot.add(data.my_controller_myrecchan_data)\nplot.add(data.my_controller_myrecchan_D_data)", "1D Buffer trace\nWe can also do a 1D trace over buffers in the same way", "chan3 = AlazarChannel(myctrl, 'myrecchan', demod=False, average_buffers=False)\nmyctrl.channels.append(chan3)\n\n\nchan3.num_averages(100)\nchan3.buffers_per_acquisition(100)\nchan3.alazar_channel('A')\nalazar.buffer_timeout._set(10000)\nalazar.buffer_timeout._set_updated()\nchan3.prepare_channel()\n\n# Measure this \ndata3 = qc.Measure(chan3.data).run()\nqc.MatPlot(data3.my_controller_myrecchan_data)\nprint(alazar.buffer_timeout())", "And demodulate this", "chan3d = AlazarChannel(myctrl, 'myrecchan_d', demod=True, average_buffers=False)\nmyctrl.channels.append(chan3d)\n\nchan3d.num_averages(100)\nchan3d.buffers_per_acquisition(100)\nchan3d.alazar_channel('A')\nchan3d.demod_freq(2e6)\nchan3d.demod_type('magnitude')\nalazar.buffer_timeout._set(10000)\nalazar.buffer_timeout._set_updated()\nchan3d.prepare_channel()\n\n#
Measure this \ndata3 = qc.Measure(chan3d.data).run()\nqc.MatPlot(data3.my_controller_myrecchan_d_data)\nprint(alazar.buffer_timeout())\n\ndata = qc.Measure(myctrl.channels[-2:].data).run()\nplot = qc.MatPlot()\nplot.add(data.my_controller_myrecchan_data)\nplot.add(data.my_controller_myrecchan_d_data)", "2D Samples vs records", "chan4 = AlazarChannel(myctrl, 'myrecvssamples', demod=False, average_records=False, integrate_samples=False)\nmyctrl.channels.append(chan4)\n\nchan4.num_averages(1)\nchan4.records_per_buffer(100)\nchan4.alazar_channel('A')\nchan4.prepare_channel()\n# Measure this \ndata4 = qc.Measure(chan4.data).run()\nqc.MatPlot(data4.my_controller_myrecvssamples_data)", "2D Buffers vs Records", "chan5 = AlazarChannel(myctrl, 'mybuffersvsrecs', demod=False, average_records=False, average_buffers=False)\nalazar.buffer_timeout._set(10000)\nchan5.records_per_buffer(72)\nchan5.buffers_per_acquisition(10)\nchan5.num_averages(1)\nchan5.alazar_channel('A')\nchan5.prepare_channel()\n# Measure this\ndata5 = qc.Measure(chan5.data).run()\nqc.MatPlot(data5.my_controller_mybuffersvsrecs_data)\nprint(alazar.buffer_timeout())", "2D Buffers vs Samples", "chan6 = AlazarChannel(myctrl, 'mybufvssamples', demod=False, average_buffers=False, integrate_samples=False)\nchan6.buffers_per_acquisition(100)\nchan6.num_averages(100)\nchan6.alazar_channel('A')\nchan6.prepare_channel()\n# Measure this \ndata6 = qc.Measure(chan6.data).run()\nplot = qc.MatPlot(data6.my_controller_mybufvssamples_data)\n", "Single point", "chan7 = AlazarChannel(myctrl, 'mybufvssamples', demod=False)\n\n\nchan7.num_averages(100)\nchan7.alazar_channel('A')\nchan7.prepare_channel()\n# Measure this\n\ndata7 = qc.Measure(chan7.data).run()", "As we are not integrating over samples, the setpoints (label, unit and tick values) are automatically set from the integration time and integration delay. Note that at the moment this does not cut off the int_delay from the plot.
It probably should\nMultiple channels", "chan1 = AlazarChannel(myctrl, 'mychan1', demod=False, integrate_samples=False)\nchan1.num_averages(1000)\nchan1.alazar_channel('A')\nchan1.prepare_channel()\nchan2 = AlazarChannel(myctrl, 'mychan2', demod=False, integrate_samples=False)\nchan2.num_averages(1000)\nchan2.alazar_channel('B')\nchan2.prepare_channel()\nmyctrl.channels.append(chan1)\nmyctrl.channels.append(chan2)\n\n\n#plot = qc.MatPlot(data6.my_controller_mybufvssamples_data)\n\ndata7 = qc.Measure(myctrl.channels[-2:].data).run()\nplot = qc.MatPlot(data7.my_controller_mychan1_data, data7.my_controller_mychan2_data)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
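The software demodulation performed by the demod channels above can be illustrated offline. The following sketch works on a synthetic trace with no Alazar hardware, and the exact normalization used by the qdev_wrappers demodulator is an assumption here: mix the trace with a complex reference at the demodulation frequency, average away the 2f component, and take the magnitude of the resulting IQ value.

```python
import numpy as np

# Offline sketch of 'magnitude' demodulation on a synthetic trace
# (hypothetical signal; normalization convention assumed, not verified
# against the qdev_wrappers implementation).
sample_rate = 1_000_000_000        # 1 GS/s, as configured on the card above
demod_freq = 1e6                   # 1 MHz, as in chan1d.demod_freq(1e6)
n_samples = 2_000                  # 2 us of data, an integer number of periods

t = np.arange(n_samples) / sample_rate
amplitude, phase = 0.3, 0.4        # hypothetical signal parameters
trace = amplitude * np.cos(2 * np.pi * demod_freq * t + phase)

# mix down: cos(wt + phi) * exp(-iwt) -> exp(i*phi)/2 plus a 2f term
ref = np.exp(-2j * np.pi * demod_freq * t)
iq = 2 * np.mean(trace * ref)      # the factor 2 restores the full amplitude

magnitude, demod_phase = np.abs(iq), np.angle(iq)
print(round(magnitude, 3), round(demod_phase, 3))  # 0.3 0.4
```

Averaging over an integer number of demodulation periods is what makes the 2f term vanish exactly; with an int_time that is not a multiple of 1/demod_freq there would be a small residual.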
Danghor/Algorithms
Python/Chapter-04/Insertion-Sort.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open('../style.css') as file:\n css = file.read()\nHTML(css)", "Insertion Sort\nThe function sort is specified via two equations:\n\n$\\mathtt{sort}([]) = []$\n$\\mathtt{sort}\\bigl([x] + R\\bigr) = \n \\mathtt{insert}\\bigl(x, \\mathtt{sort}(R)\\bigr)$\n\nThis is most easily implemented in a recursive fashion.", "def sort(L):\n if L == []:\n return []\n x, *R = L\n return insert(x, sort(R))", "The auxiliary function insert is specified as follows:\n\n$\\mathtt{insert}(x,[]) = [x]$\n$x \\preceq y \\rightarrow \\mathtt{insert}\\bigl(x, [y] + R\\bigr) = [x,y] + R$\n$\\neg x \\preceq y \\rightarrow \n \\mathtt{insert}\\bigl(x, [y] + R\\bigr) = [y] + \\mathtt{insert}(x,R)$\n\nAgain, a recursive implementation is straightforward.", "def insert(x, L):\n if L == []:\n return [x]\n y, *R = L\n if x <= y:\n return [x, y] + R\n else:\n return [y] + insert(x, R)\n\ninsert(5, [1, 3, 4, 7, 9])\n\nsort([7, 8, 11, 12, 2, 5, 3, 7, 9])" ]
[ "code", "markdown", "code", "markdown", "code" ]
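The recursive definition above copies list segments on every call. An equivalent iterative, in-place formulation (a sketch, not part of the notebook) shifts the larger elements of the sorted prefix one slot to the right until the insertion point is found:

```python
def insertion_sort(L):
    """Sort L in place; the prefix L[0:i] is sorted before each iteration."""
    for i in range(1, len(L)):
        x = L[i]
        j = i
        # shift elements of the sorted prefix that are bigger than x
        while j > 0 and L[j - 1] > x:
            L[j] = L[j - 1]
            j -= 1
        L[j] = x
    return L

print(insertion_sort([7, 8, 11, 12, 2, 5, 3, 7, 9]))
# [2, 3, 5, 7, 7, 8, 9, 11, 12]
```

Both versions satisfy the same specification; this one avoids the O(n) list copies incurred by expressions such as [y] + R.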
Yichuans/wilderness-wh
wilderness_analysis.ipynb
gpl-3.0
[ "Wilderness World Heritage analysis for the marine environment (no Antarctica)\n\nBased on the discussion with Bastian and various people.\nThe spatial analysis was done outside of this notebook. In a nutshell, the spatial component dealt with the question of how much cumulative marine pressure there is in each unit (see below for such a hypothetical biogeographic classification). The analysis was carried out in such a way that the aggregation happens at a later stage; if thresholds are to be changed (very likely, given the explorative nature of such an exercise), this requires minimal effort without having to re-run any spatial analyses, which are time-consuming and prone to error.\nNodata in the result (when converting rasters to numpy) is also removed, thus saving the effort of having to remove it manually here.\nconcise methodology here", "# load default libraries\nimport os, sys\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\n# make sure gdal is correctly installed\nfrom osgeo import gdal\nimport gc\n\n%matplotlib inline", "Get quantiles from the input raster data (global threshold from raw data)\nIt is necessary to load the original raster in order to calculate its quantiles. They are used to define thresholds to explore the extent of marine wilderness areas.", "def raster2array(rasterfn):\n raster = gdal.Open(rasterfn)\n band = raster.GetRasterBand(1)\n return band.ReadAsArray()\n\ng_array = raster2array('global_cumul_impact_2013_all_layers.tif')\n\ng_array_f = g_array.flatten()\n\n(g_array_f == 0).sum()\n\nprint('The total number of non-zero values in the raw raster dataset:', g_array_f.size - (g_array_f==0).sum())\n\n## in fact the following should be used for testing equality of float dtypes.
Because the result remains\n## the same, the simpler option is used.\n\n## (np.isclose(g_array_f, 0.0)).sum()", "The number of non-zero values is notably different from esri's calculation, which stands at 414,347,791, about 300,000 fewer non-zero cells than what is calculated here. This suggests esri may be using a bigger tolerance value, i.e. for what is considered small enough to be regarded as zero.\nNow, get the quantiles... this threshold is subject to change. For the time being, arbitrary values of 1%, 3%, 5% and 10% are used.", "## the percentile function applied to the sliced array, i.e., those with values greater than 0\nquantiles = [np.percentile(g_array_f[~(g_array_f == 0)], quantile) for quantile in [1,3,5,10]]\n\nquantiles\n\nprint('\\n'.join(['Threshold cut-off value: '+ str(threshold) for threshold in quantiles]))\n", "Overlap between biogeography and marine pressure (global threshold)\nThe hypothetical biogeographical classification of the marine environment within EEZ is described as a combination of MEOW (Marine Ecoregions of the World), its visual representation (called hereafter MEOW visual) up to 200 nautical miles, and the World's pelagic provinces. The spatial data was prepared in such a way that from the coastline outwards disjoint polygons represent: MEOW (up to 200 meter depth, inner/red), MEOW visual overlaps with pelagic provinces (middle/green), pelagic provinces that do not overlap with MEOW visual (outer/blue). This is purely a spatial aggregation based on the above data and the World Vector Shoreline EEZ. See below for an example.\n\nLoad the input_data table, which describes the intersection between the marine pressure layer and the marine ecoregion/pelagic provinces classification.
The input_attr table contains information on the relationship between OBJECTID and each raster pixel value.\n- OBJECTID (one) - pixel value (many)\n- OBJECTID (many) - attr: Province, Ecoregion, and Realm, categories (one)\nEach pixel is 934.478 m in both height and width, giving each pixel an area of 0.873 $km^2$", "# calculate cell-size in sq km\ncell_size = 934.478*934.478/1000000\nprint(cell_size)\n\n# the OBJECTID - ras_val table. This is a very big table and will take a long time.\ninput_data = pd.read_csv('result.csv')\n# print fields\ninput_data.columns\n\ninput_data.ras_val.min()\n\n# the attribute table containing information about province etc.\ninput_attr = pd.read_csv('attr.csv')\n# print fields\ninput_attr.columns\n\n# total count of pixels per OBJECTID, i.e. base\nresult_count = input_data.groupby('OBJECTID').count().reset_index()", "Here I created four result tables containing only pixels that meet the criteria specified by the different thresholds", "# filter result only in the top 1, 3, 5, 10 percentile (of least impacted marine areas)\nresult_1, result_3, result_5, result_10 = \\\n[input_data[input_data.ras_val <= threshold].groupby('OBJECTID').count().reset_index() for threshold in quantiles]", "The next step will be to join the input_attr table with the filtered pixel values. Replace the result_10 table if another threshold is used.", "# join base to the attribute\nattr_merge = pd.merge(input_attr, result_count, on = 'OBJECTID')\n\n# join result to the above table\nattr_merge_10 = pd.merge(attr_merge, result_10, how = 'left', on ='OBJECTID', suffixes = ('_base', '_result'))\n\n# fill ras_val_result's NaN with 0, province and realms with None.
This should happen earlier.\nattr_merge_10['ras_val_result'].fillna(0, inplace=True)\nattr_merge_10['PROVINCE'].fillna('None', inplace=True)\nattr_merge_10['PROVINCE_P'].fillna('None', inplace=True)\n\n# apply an aggregate function to each sub dataframe, as a result of grouping\ndef apply_func(group):\n overlap = group['ras_val_result'].sum()*cell_size # in sqkm\n base = group['ras_val_base'].sum()*cell_size\n per = overlap/base\n # can have multiple columns as a result, if returned as pd.Series\n return pd.Series([overlap, per, base], index=['less_than_threshold', 'per_ltt', 'base'])\n\n# code reuse: threshold\ndef calculate_wilderness_marine(threshold, groups):\n \"\"\"<threshold to consider wilderness value>, <a python list such as ['PROVINCE', 'PROVINCE_P', attr fields]>\"\"\"\n # filtered input data according to threshold merge\n input_data_filtered = input_data[input_data.ras_val <= threshold].groupby('OBJECTID').count().reset_index()\n \n # base merge\n base_merge = pd.merge(input_attr, result_count, on = 'OBJECTID')\n \n # merge the two above\n result = pd.merge(base_merge, input_data_filtered, how='left', on='OBJECTID', suffixes=('_base', '_result'))\n # solve no data issue\n result['ras_val_result'].fillna(0, inplace=True)\n result['PROVINCE'].fillna('None', inplace=True)\n result['PROVINCE_P'].fillna('None', inplace=True)\n \n return result.groupby(groups).apply(apply_func).reset_index()\n ", "Once all tables are joined (full attributes with pixel values), the attributes can be used to specify groupings", "# use 10% as threshold\ncalculate_wilderness_marine(quantiles[-1], ['PROVINCE', 'PROVINCE_P', 'category']).head(20)", "Further aggregation could be applied here, if needed.\n\nOverlap between biogeography and marine pressure (new threshold for within EEZ)\nThe World Heritage Convention currently operates only within areas under national jurisdiction, and thus the high seas/ABNJ are not to be considered.
It is sensible to reduce the scope of the area of interest to the extent of the EEZ, and accordingly adjust the wilderness threshold values. \nBy excluding Antarctica, where significant areas of wilderness exist, the bar is lowered for areas to be considered wilderness, i.e. the cumulative marine pressure threshold becomes higher and more areas become 'eligible' as wilderness.", "# check data integrity\ninput_data.OBJECTID.unique().size\n\n# no zeros in the result data\ninput_data.ras_val.size\n\n# it should not have 0, which indicates nodata in the raster data as it has been removed during the spatial analysis\ninput_data.ras_val.min()\n\n# percentage of EEZ water in relation to the entire ocean\ninput_data.ras_val.size/g_array_f[~(g_array_f==0)].size\n\n# all input_data are non-zero (zero indicates land and nodata)\ninput_data[~(input_data.ras_val == 0)].ras_val.count() == input_data.ras_val.count()\n\n# get threshold for 10%\nnew_threshold = np.percentile(input_data.ras_val, 10)\nold_threshold = np.percentile(g_array_f[~(g_array_f == 0)], 10)", "Use the new threshold (based on the EEZ) and the function defined in the previous section to output lists of:\n- all MEOW provinces (including both the 200 meter depth and 200 nautical mile views) - wilderness area and percentage cover\n- both provinces, MEOW and pelagic, within the EEZ - wilderness area and percentage cover\nOther combinations or a different threshold are possible if required.", "# export wilderness distribution by province or other groupings\ncalculate_wilderness_marine(new_threshold, ['PROVINCE']).to_csv('export_meow_province.csv')\ncalculate_wilderness_marine(new_threshold, ['PROVINCE', 'PROVINCE_P', 'category']).to_csv('export_province_full.csv')", "The distribution map of wilderness within the EEZ using the new threshold\n\n\nDistribution of the percentage of wilderness (less than threshold, ltt) by groups", "import seaborn as sns\n\n# small multiples: distribution of percentage of less than threshold (ltt)\ng
= sns.FacetGrid(calculate_wilderness_marine(new_threshold, ['PROVINCE', 'PROVINCE_P', 'category']), col=\"category\")\ng.map(plt.hist, 'per_ltt', bins=50, log=True)\n\n# MEOW province (200m and 200 nautical combined)\nsns.distplot(calculate_wilderness_marine(new_threshold, ['PROVINCE']).per_ltt)\n\n# pelagic province\nsns.distplot(calculate_wilderness_marine(new_threshold, ['PROVINCE_P']).per_ltt)", "From the graphs, it is obvious that most provinces/pelagic provinces have a very low percentage of marine wilderness area inside them.\nOverlap between marine World Heritage sites and marine pressure\nThe aim of this analysis is to understand the marine wilderness area, as identified using the methods in this study, inside the current WH sites", "# load data\nwh47 = pd.read_csv('wh47.csv')\nwh_attr = pd.read_csv('wh_attr.csv')\n\nprint(wh47.columns, wh_attr.columns)\n\n# check thresholds, use new threshold\nprint('Old threshold: {0}\\nNew threshold: {1}'.format(old_threshold, new_threshold))\n\n# get WH statistics\nwh_n_base = (wh47.groupby('wdpaid').ras_val.count()*cell_size).reset_index() # all marine area\nwh_n = (wh47[wh47.ras_val<new_threshold].groupby('wdpaid').ras_val.count()*cell_size).reset_index() # marine wild\n\n# merge in order to calculate percentage (% of marine wilderness in marine area of WH sites)\na = pd.merge(wh_n_base, wh_n, on='wdpaid', suffixes=('_all', '_wild'))\na = pd.merge(wh_attr, a, how='inner', on='wdpaid')\na['per'] = a.ras_val_wild/a.ras_val_all\n\n# export save\na.to_csv('export_wh_wilderness.csv')\n\n# distribution of WH wilderness percentage\nsns.distplot(a.per)\n\nsns.distplot(a.ras_val_wild)\ndel a", "Gap analysis\n1.
Mismatch of results using the WH boundary alone vs WH intersections with biogeography", "input_attr.columns, wh_attr.columns\n\nint_wh = pd.read_csv('wh_base_intersect.csv')\nint_wh_attr = pd.read_csv('wh_base_intersect_attr.csv')\n\nint_wh.columns, int_wh_attr.columns\n\nint_wh_attr[['wdpaid', 'en_name', 'gis_area', 'PROVINCE_P', 'PROVINCE', 'category']].to_csv('wh_biogeo_intersect.csv')\n\n# filter pixels that meet the new threshold (from EEZ)\nint_wh_filter = int_wh[int_wh.ras_val < new_threshold]\n\n# group value based on OBJECTID\nint_wh_filter_group = int_wh_filter.groupby('OBJECTID_12').count().reset_index()\n\n# attr join\nint_result = pd.merge(int_wh_attr, int_wh_filter_group, on='OBJECTID_12')\n\n# % wilderness area inside each PA within EEZ\nint_result.groupby(['wdpaid', 'en_name']).ras_val.sum()*cell_size", "Contrary to expectation, the wilderness in WH sites calculated from the intersection is slightly different from that obtained by directly using the WH boundary to cut out the marine cumulative impact data. This is due to boundary mismatches. The intersection of WH and EEZ (with biogeography attrs) removed all land area, where the marine pressure layer may have mapped pixels (see the highlighted pixels below, in Galapagos).\n\nVice versa, due to the nature of intersection (clipping in the strict sense), adjacent geometries having a long/shared boundary might pick up the same pixel from the base raster twice. Manual checking shows this occurrence is very rare, but it is possible to count the same pixel twice. This should not present a problem in most cases, although it could possibly be one if such a shared boundary is very long and complicated. \nIn order to address this issue in the future, one could revert to the old way of using an aggregated boundary for the result; however, every change will then mean a complete re-run.
I would still prefer the fine-grained approach, which far outweighs the shortcomings: do the spatial work once at the finest scale and the rest is non-spatial. Subpixel-level calculation is perhaps needed to determine whether or not an overlap should be counted or left out.\nFurthermore, the mismatching issue is further plagued by spatial data quality. See below the Natural System of Wrangel Island, where the blue part is the overlap between WH and biogeography\n\nThe only logical/sensible way to deal with this is to use the WH calculation by itself (i.e. how much wilderness is in the WH system), while using the WH intersection result for relations with biogeography.\n\nBelow is an in-depth investigation, but it is not part of the gap analysis", "# calculate total WH marine area, no filter applied\n# group value based on OBJECTID\nint_wh_group = int_wh.groupby('OBJECTID_12').count().reset_index()\n\n# base \nG_base = (pd.merge(int_wh_attr, int_wh_group, on='OBJECTID_12').groupby(['wdpaid', 'en_name']).ras_val.sum()*cell_size).reset_index()\nG_wh = (int_result.groupby(['wdpaid', 'en_name']).ras_val.sum()*cell_size).reset_index()\n\nG_base.columns, G_wh.columns\n\nG_result = pd.merge(G_base, G_wh, how='left', on=('wdpaid', 'en_name'))\nG_result.fillna(0, inplace=True)\nG_result.columns = ['wdpaid', 'en_name', 'marine_area', 'marine_wild_area']\nG_result['per'] = G_result.marine_wild_area/G_result.marine_area\n# G_result.to_csv('export_wh_per_.csv')\n\nG_result\n\nwh47.columns, int_wh.columns\n\nwh47_int = pd.merge(int_wh, int_wh_attr, on='OBJECTID_12')\nwh47_int.columns\n\n# compare differences from the two methods\na = wh47.groupby('wdpaid').ras_val.count().reset_index()\nb = wh47_int.groupby('wdpaid').ras_val.count().reset_index()\nc = pd.merge(a, b, on='wdpaid', suffixes=('_wh', '_int'))\nc['per'] = abs(c.ras_val_wh - c.ras_val_int)/c.ras_val_wh\n# c\n\ndel a, b, c", "There are considerable differences in percentage between the two methods to calculate marine areas within WH sites
at first glance; however, at the site scale, apart from Wrangel Island, the differences are quite negligible. \n2. The gap analysis\nBelow is an overlay map of existing marine WH sites on top of the identified wilderness.", "# the data to be used\n## wh intersection\n\n# filter pixels that meet the new threshold (from EEZ)\nint_wh_filter = int_wh[int_wh.ras_val < new_threshold]\n\n# group value based on OBJECTID\nint_wh_filter_group = int_wh_filter.groupby('OBJECTID_12').count().reset_index()\n\n# attr join\nint_result = pd.merge(int_wh_attr, int_wh_filter_group, on='OBJECTID_12')\nint_result.columns\n\n# get unique WDPAIDs for each province\nint_result.groupby('PROVINCE').wdpaid.unique()\n\n# get province MEOW (200m + 200nm)\nprovince = calculate_wilderness_marine(new_threshold, ['PROVINCE'])\n\n# provinces with WH sites, nunique() returns the number of unique WDPAIDs\nprovince_wh_number = pd.merge(province, int_result.groupby('PROVINCE').wdpaid.nunique().reset_index(), on='PROVINCE', how='left')", "The above does not say anything about wilderness, although by linking it with the provinces' wilderness values it could potentially identify priority provinces; however, the above does not address the question of how much of the wilderness is covered by WH sites.
It could be that a well 'represented' province has little of its vast wilderness enjoying WH status; thus it may still present a gap from the point of view of marine wilderness.", "# WH areas that are wilderness areas within provinces \nprovince_wh_wilderness = (int_result.groupby('PROVINCE').ras_val.sum() * cell_size).reset_index()\n\n# get province attributes and join\na = pd.merge(province, province_wh_wilderness, on='PROVINCE', how = 'left')\n\n# fill all NAs with 0\na.fillna(0,inplace=True)\n\n# calculate percentage of province wilderness covered by WH\na['per_wilderness_covered_by_WH'] = a.ras_val/a.less_than_threshold\na.columns = ['PROVINCE', 'wilderness_area', 'per_wilderness_area', 'total_area', 'wh_wilderness_area', a.columns[-1]]\n\n\n# ======== now get number of WH sites per Province into one single dataframe ==========\n\n## num of WH sites\nb = int_result.groupby('PROVINCE').wdpaid.nunique().reset_index()\nb.columns = ['PROVINCE', 'num_wh']\n\n## merge \na = pd.merge(a, b, how='left', on='PROVINCE')\na.fillna(0, inplace=True)\n\n# a.sort_values('num_wh')\na.to_csv('export_gap_meow_province.csv')\n\n# clear temp variable in case of polluting the global namespace\ndel a" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
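The per-unit aggregation that recurs throughout the wilderness notebook above (count pixels at or below a threshold per group, convert to area with the cell size, divide by the base area) can be reduced to a small reproducible sketch. The values below are toy data, not the study's pixels; only cell_size matches the notebook.

```python
import pandas as pd

cell_size = 0.873  # sq km per pixel, as in the notebook
threshold = 1.0    # hypothetical wilderness cut-off

pixels = pd.DataFrame({
    "PROVINCE": ["A", "A", "A", "B", "B"],
    "ras_val":  [0.5, 2.0, 3.0, 0.1, 0.2],
})

def apply_func(group):
    # area at or below the threshold, its share of the base, and the base area
    below = (group.ras_val <= threshold).sum() * cell_size
    base = len(group) * cell_size
    return pd.Series([below, below / base, base],
                     index=["less_than_threshold", "per_ltt", "base"])

summary = pixels.groupby("PROVINCE").apply(apply_func).reset_index()
print(summary)  # province A: per_ltt = 1/3, province B: per_ltt = 1.0
```

Returning a pd.Series from the applied function is what yields one labelled result column per index entry, exactly as in the notebook's apply_func.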