repo_name: string (lengths 6–77)
path: string (lengths 8–215)
license: string (15 classes)
cells: list
types: list
google/objax
docs/source/notebooks/Objax_Basics.ipynb
apache-2.0
[ "Objax Basics Tutorial\nThis tutorial introduces basic Objax concepts.\nPrerequisites\nObjax is a machine learning library written in Python which works on top of JAX. Readers should therefore have some familiarity with the following:\n\nPython. If you're new to Python or need a refresher, check the Python Tutorial or Python Beginner's Guide.\nNumPy is a library for mathematical computations. Many JAX primitives are based on NumPy and have the same syntax. NumPy is also useful for data manipulation outside of JAX/Objax. The NumPy quickstart covers most of the needed topics. More information can be found on the NumPy documentation site.\nJAX can be described as NumPy with gradients that runs on accelerators (GPU and TPU). The JAX quickstart covers most of the concepts needed to understand Objax.\n\nInstallation and imports\nLet's first install Objax:", "%pip --quiet install objax", "After Objax is installed, you can import all necessary modules:", "import jax.numpy as jn\nimport numpy as np\nimport objax", "Tensors\nTensors are essentially multi-dimensional arrays. In JAX and Objax, tensors can be placed on GPUs or TPUs to accelerate computations.\nObjax relies on the jax.numpy.ndarray primitive from JAX to represent tensors. 
In turn, this primitive has a very similar API to NumPy ndarray.\nCreating tensors\nTensor creation is very similar to NumPy and can be done in multiple ways:\n\nProvide explicit values to the tensor:", "# Providing explicit values\njn.array([[1.0, 2.0, 3.0],\n [4.0, 5.0, 6.0]])", "From a NumPy array:", "arr = np.array([1.0, 2.0, 3.0])\njn.array(arr)", "From another JAX tensor:", "another_tensor = jn.array([[1.0, 2.0, 3.0],\n [4.0, 5.0, 6.0]])\njn.array(another_tensor)", "Using ones or zeros:", "jn.ones((3, 4))\n\njn.zeros((4, 5))", "As a result of a mathematical operation performed on other tensors:", "t1 = jn.array([[1.0, 2.0, 3.0],\n [4.0, 5.0, 6.0]])\nt2 = jn.ones(t1.shape) * 3\nt1 + t2", "Tensor Properties\nSimilar to NumPy, one can explore various properties of tensors like shape, number of dimensions, or data type:", "t = jn.array([[1.0, 2.0, 3.0],\n [4.0, 5.0, 6.0]])\nprint('Number of dimensions: ', t.ndim)\nprint('Shape: ', t.shape)\nprint('Data type: ', t.dtype)", "Converting tensors to NumPy arrays\nObjax/JAX tensors can be converted to NumPy arrays when needed to perform computations with NumPy:", "np.array(t)", "Tensors are immutable\nOne big difference between JAX ndarray and NumPy ndarray is that JAX ndarray is immutable:", "print('Original tensor t:\\n', t)\n\ntry:\n t[0, 0] = -5.0 # This line will fail\nexcept Exception as e:\n print(f'Exception {e}')\n\nprint('Tensor t after failed attempt to update:\\n', t)", "Instead of updating an existing tensor, a new tensor should be created with updated elements. 
Updating individual tensor elements is done using\nindex_update, index_add and some other JAX primitives:", "import jax.ops\n\nprint('Original tensor t:\\n', t)\nnew_t = jax.ops.index_update(t, jax.ops.index[0, 0], -5.0)\nprint('Tensor t after update stays the same:\\n', t)\nprint('Tensor new_t has updated value:\\n', new_t)", "More details about per-element updates of tensors can be found in the JAX documentation.\nIn practice, most mathematical operations operate on tensors as a whole, while manual per-element updates are rarely needed.\nRandom numbers\nIt's very easy to generate random tensors in Objax:", "x = objax.random.normal((3, 4))\nprint(x)", "There are multiple primitives for doing so:", "print('Random integers:', objax.random.randint((4,), low=0, high=10))\nprint('Random normal:', objax.random.normal((4,), mean=1.0, stddev=2.0))\nprint('Random truncated normal: ', objax.random.truncated_normal((4,), stddev=2.0))\nprint('Random uniform: ', objax.random.uniform((4,)))", "Objax Variables and Modules\nObjax Variables store values of tensors. Unlike tensors, variables are mutable, i.e. the value which is stored in the variable can change.\nSince tensors are immutable, variables change their value by replacing it with new tensors.\nVariables are commonly used together with modules. The Module is a basic building block in Objax that stores variables and other modules. 
Most modules are also callable (i.e., implement the __call__ method) and, when called, perform some computation on their variables and sub-modules.\nHere is an example of a simple module which performs the dot product of one of its variables with an input tensor:", "class SimpleModule(objax.Module):\n\n def __init__(self, length):\n self.v1 = objax.TrainVar(objax.random.normal((length,)))\n self.v2 = objax.TrainVar(jn.ones((2,)))\n\n def __call__(self, x):\n return jn.dot(x, self.v1)\n \n\nm = SimpleModule(3)", "Modules keep track of all variables they own, including variables in sub-modules. The .vars() method lists all the module's variables.\nThe method returns an instance of VarCollection, which is a dictionary with several other useful methods.", "module_vars = m.vars()\n\nprint('type(module_vars): ', type(module_vars))\nprint('isinstance(module_vars, dict): ', isinstance(module_vars, dict))\nprint()\n\nprint('Variable names and shapes:')\nprint(module_vars)\nprint()\n\nprint('Variable names and values:')\nfor k, v in module_vars.items():\n print(f'{k} {v.value}')", "If the __call__ method of the module takes tensors as input and outputs tensors, then it can act as a mathematical function. In the general case, __call__ can be a multivariate vector-valued function.\nThe SimpleModule described above takes a vector of size length as input and outputs a scalar:", "x = jn.ones((3,))\ny = m(x)\nprint('Input: ', x)\nprint('Output: ', y)", "The way jn.dot works allows us to run code on 2D tensors as well. 
In this case SimpleModule will treat the input as a batch of vectors, perform the dot product on each of them and return a vector with the results:", "x = jn.array([[1., 1., 1.],\n [1., 0., 0.],\n [0., 1., 0.],\n [0., 0., 1.]])\ny = m(x)\nprint('Input:\\n', x)\nprint('Output:\\n', y)", "For comparison, here is the result of calling module m on each row of tensor x:", "print('Sequentially calling module on each row of 2D tensor:')\nfor idx in range(x.shape[0]):\n row_value = x[idx]\n out_value = m(row_value)\n print(f'm( {row_value} ) = {out_value}')", "How to compute gradients\nAs shown above, modules can act as mathematical functions. It's essential in machine learning to be able to compute gradients of functions, and Objax provides a simple way to do this.\nIt's important to keep in mind that gradients are usually defined for scalar-valued functions, while our modules can be vector-valued. In this case we need to define an additional function which converts the vector-valued output of the module into a scalar. Then we can compute gradients of the scalar-valued function with respect to all input variables.\nIn the example with SimpleModule above, let's first define a scalar-valued loss function:", "def loss_fn(x):\n return m(x).sum()\n\nprint('loss_fn(x) = ', loss_fn(x))", "Then we create an objax.GradValues module which computes the gradients of loss_fn. We need to pass the function itself to the constructor of objax.GradValues as well as a VarCollection with the variables that loss_fn depends on:", "# Construct a module which computes gradients\ngv = objax.GradValues(loss_fn, module_vars)", "gv is a module which returns the gradients of loss_fn and the values of loss_fn for the given input:", "# gv returns both gradients and values of original function\ngrads, value = gv(x)\n\nprint('Gradients:')\nfor g, var_name in zip(grads, module_vars.keys()):\n print(g, ' w.r.t. 
', var_name)\nprint()\nprint('Value: ', value)", "In the example above, grads is a list of gradients with respect to all variables from module_vars. The order of gradients in the grads list is the same as the order of corresponding variables in module_vars. So grads[0] is the gradient of the function w.r.t. m.v1 and grads[1] is the gradient w.r.t. m.v2.\nJust-in-time compilation (JIT)\nIn the examples shown so far, the Python interpreter executes all operations one by one. This mode of execution becomes slow for larger and more complicated code.\nObjax provides an easy and convenient way to compile a sequence of operations using objax.Jit:", "jit_m = objax.Jit(m)\ny = jit_m(x)\nprint('Input:\\n', x)\nprint('Output:\\n', y)", "objax.Jit can compile not only modules, but also functions and callables. In this case, a variable collection should be passed to objax.Jit:", "def loss_fn(x, y):\n return ((m(x) - y) ** 2).sum()\n\njit_loss_fn = objax.Jit(loss_fn, module_vars)\n\nx = objax.random.normal((2, 3))\ny = jn.array((-1.0, 1.0))\n\nprint('x:\\n ', x)\nprint('y:\\n', y)\n\nprint('loss_fn(x, y): ', loss_fn(x, y))\nprint('jit_loss_fn(x, y): ', jit_loss_fn(x, y))", "There is no need to use JIT if you only need to compute a single JAX operation. However, JIT can give significant speedups when multiple Objax/JAX operations are chained together. The next tutorial will show examples of how JIT is used in practice.\nNevertheless, the difference in execution speed with and without JIT is evident even in this simple example:", "x = objax.random.normal((100, 3))\n# gv is the module defined above which computes gradients\njit_gv = objax.Jit(gv)\nprint('Timing for jit_gv:')\n%timeit jit_gv(x)\nprint('Timing for gv:')\n%timeit gv(x)", "When to use JAX and when to use Objax primitives?\nAttentive readers will notice that while Objax works on top of JAX, it redefines quite a few concepts from JAX. 
Some examples are:\n\nobjax.GradValues vs jax.value_and_grad for computing gradients.\nobjax.Jit vs jax.jit for just-in-time compilation.\nobjax.random vs jax.random to generate random numbers.\n\nAll these differences originate from the fact that JAX is a stateless functional framework, while Objax provides a stateful, object-oriented way to use JAX.\nMixing OOP and functional code can be quite confusing, so we recommend using JAX primitives only for basic mathematical operations (defined in jax.numpy) and Objax primitives for everything else.\nNext: Logistic Regression Tutorial\nThis tutorial introduced the concepts necessary to build and train a machine learning classifier. The next tutorial shows how to apply them in logistic regression." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
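The gradient computation in the Objax notebook above can be sanity-checked with plain NumPy: for loss_fn(x) = m(x).sum() with m(x) = x · v1, the gradient with respect to v1 is the column sum of x. A minimal sketch (hypothetical values for v and x, not taken from the notebook) compares the analytic gradient against a central finite-difference estimate:

```python
import numpy as np

def loss(v, x):
    # Scalar-valued loss: sum of dot products of each row of x with v,
    # mirroring loss_fn(x) = m(x).sum() from the tutorial.
    return x.dot(v).sum()

def numerical_grad(v, x, eps=1e-6):
    # Central finite differences, one coordinate of v at a time.
    g = np.zeros_like(v)
    for i in range(v.size):
        dv = np.zeros_like(v)
        dv[i] = eps
        g[i] = (loss(v + dv, x) - loss(v - dv, x)) / (2 * eps)
    return g

x = np.array([[1., 1., 1.],
              [1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])
v = np.array([0.5, -1.0, 2.0])

analytic = x.sum(axis=0)   # d/dv of sum(x @ v) is the column sums of x
numeric = numerical_grad(v, x)
print(np.allclose(analytic, numeric))  # expect True
```

Because the loss is linear in v, the finite-difference estimate matches the analytic gradient essentially exactly; the same check applies to the gradients GradValues returns.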
phanrahan/magmathon
notebooks/advanced/coreir.ipynb
mit
[ "CoreIR\nThis notebook uses the \"coreir\" mantle backend on the icestick.\nWe begin by building a normal Magma circuit using Mantle \nand the Loam IceStick board.", "import magma as m\n# default mantle target is coreir, so no need to do this unless you want to be explicit\n# m.set_mantle_target(\"coreir\")\n\nfrom mantle import Counter\nfrom loam.boards.icestick import IceStick\n\nicestick = IceStick()\nicestick.Clock.on()\nicestick.D5.on()\n\nN = 22\nmain = icestick.main()\n\ncounter = Counter(N)\nm.wire(counter.O[N-1], main.D5)\n\nm.EndCircuit()", "To compile to coreir, we simply set the output parameter of the m.compile command to \"coreir\".", "m.compile(\"build/blink_coreir\", main, output=\"coreir\")", "We can inspect the generated .json file.", "%cat build/blink_coreir.json", "We can use the coreir command line tool to generate Verilog.", "%%bash\ncoreir -i build/blink_coreir.json -o build/blink_coreir.v", "Now we can inspect the Verilog generated by coreir; notice that it includes the Verilog implementations of all the coreir primitives.", "%cat build/blink_coreir.v\n\n%%bash\ncd build\nyosys -q -p 'synth_ice40 -top main -blif blink_coreir.blif' blink_coreir.v\narachne-pnr -q -d 1k -o blink_coreir.txt -p blink_coreir.pcf blink_coreir.blif \nicepack blink_coreir.txt blink_coreir.bin\n#iceprog blink_coreir.bin" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
njtwomey/ADS
01_data_ingress/02_dicts.ipynb
mit
[ "Define simple printing functions", "from __future__ import print_function\n\nimport json \n\ndef print_dict(dd): \n print(json.dumps(dd, indent=2))", "Constructing and allocating dictionaries\nThe syntax for dictionaries is that {} indicates an empty dictionary.", "d1 = dict() \nd2 = {}\n\nprint_dict(d1)\nprint_dict(d2)", "There are multiple ways to construct a dictionary when the key/value pairs are known beforehand. The following two snippets are equivalent.", "d3 = {\n 'one': 1, \n 'two': 2\n}\n\nprint_dict(d3)\n\nd4 = dict(one=1, two=2)\n\nprint_dict(d4)", "Often ordered lists of keys and values are available, and \nit is desirable to create a dictionary from these lists. There are a \nnumber of ways to do this, including:", "keys = ['one', 'two', 'three']\nvalues = [1, 2, 3]\n\nd5 = {key: value for key, value in zip(keys, values)}\nprint_dict(d5)", "Adding new data to the dict", "d1['key_1'] = 1\nd1['key_2'] = False\n\nprint_dict(d1)", "Dictionaries are a dynamic data type, and any object can be used as a value, including integers, floats, lists, and other dicts, for example:", "d1['list_key'] = [1, 2, 3]\nprint_dict(d1)\n\nd1['dict_key'] = {'one': 1, 'two': 2}\nprint_dict(d1)\n\ndel d1['key_1']\nprint_dict(d1)", "Accessing the data\nIt is always possible to get access to the key/value pairs that are contained in the dictionary, and the following \nfunctions help with this:", "print(d1.keys())\n\nfor item in d1:\n print(item)\n\nd1['dict_key']['one']", "Iterating over key/values\nThe following two cells are nearly equivalent. 
\nIn order to understand how they differ, it will be helpful to consult the Python documentation on iterators and generators\nhttp://anandology.com/python-practice-book/iterators.html", "for key, value in d1.items(): \n print(key, value)\n\nfor key, value in d1.iteritems(): # Only in Python2 (.items() returns an iterator in Python3)\n print(key, value)\n\nprint(d1.keys())\nprint(d1.values())", "Filtering and mapping dictionaries", "def dict_only(key_value): \n return type(key_value[1]) is dict\n\nprint('All dictionary elements:')\nprint(list(filter(dict_only, d1.items())))\n\nprint('Same as above, but with inline function (lambda):')\nprint(list(filter(lambda key_value: type(key_value[1]) is dict, d1.items())))  # list() materializes the lazy filter object in Python3" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
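The filtering cell in the dicts notebook above needs `list()` in Python 3 because `filter()` returns a lazy iterator; a dict comprehension gives back an actual dict directly. A small sketch (with a hypothetical `d1` shaped like the notebook's, after its deletions) showing both approaches:

```python
# Hypothetical dict mirroring the notebook's d1 after its updates/deletions.
d1 = {'key_2': False, 'list_key': [1, 2, 3], 'dict_key': {'one': 1, 'two': 2}}

# filter() returns a lazy iterator in Python 3; dict() materializes it.
filtered = dict(filter(lambda kv: isinstance(kv[1], dict), d1.items()))

# Equivalent dict comprehension, usually considered more idiomatic.
comprehended = {k: v for k, v in d1.items() if isinstance(v, dict)}

print(filtered)       # {'dict_key': {'one': 1, 'two': 2}}
print(comprehended)   # same result
```

Using `isinstance(v, dict)` rather than `type(v) is dict` also accepts dict subclasses such as `collections.OrderedDict`, which is usually the intended behaviour.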
ES-DOC/esdoc-jupyterhub
notebooks/miroc/cmip6/models/nicam16-8s/seaice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: MIROC\nSource ID: NICAM16-8S\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:40\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'miroc', 'nicam16-8s', 'seaice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --> Model\n2. Key Properties --> Variables\n3. Key Properties --> Seawater Properties\n4. Key Properties --> Resolution\n5. Key Properties --> Tuning Applied\n6. Key Properties --> Key Parameter Values\n7. Key Properties --> Assumptions\n8. Key Properties --> Conservation\n9. Grid --> Discretisation --> Horizontal\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Seaice Categories\n12. Grid --> Snow On Seaice\n13. Dynamics\n14. Thermodynamics --> Energy\n15. Thermodynamics --> Mass\n16. Thermodynamics --> Salt\n17. Thermodynamics --> Salt --> Mass Transport\n18. Thermodynamics --> Salt --> Thermodynamics\n19. Thermodynamics --> Ice Thickness Distribution\n20. Thermodynamics --> Ice Floe Size Distribution\n21. Thermodynamics --> Melt Ponds\n22. Thermodynamics --> Snow Processes\n23. Radiative Processes \n1. Key Properties --> Model\nName of seaice model used.\n1.1. 
Model Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of sea ice model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --> Variables\nList of prognostic variables in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nList of prognostic variables in the sea ice component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3. Key Properties --> Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. 
Ocean Freezing Point Value\nIs Required: FALSE    Type: FLOAT    Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --> Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Canonical Horizontal Resolution\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --> Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. 
Document the relative weight given to climate performance metrics versus process-oriented metrics, and any possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Target\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Simulations\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Metrics Used\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nList any observed metrics used in tuning model/parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.5. Variables\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nWhich variables were changed during the tuning process?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --> Key Parameter Values\nValues of key parameters\n6.1. 
Typical Parameters\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nWhat values were specified for the following parameters if used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Additional Parameters\nIs Required: FALSE    Type: STRING    Cardinality: 0.N\nIf you have used any additional parameterised values (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --> Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE    Type: STRING    Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. On Diagnostic Variables\nIs Required: TRUE    Type: STRING    Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. 
Missing Processes\nIs Required: TRUE    Type: STRING    Cardinality: 1.N\nList any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --> Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nProvide a general description of conservation methodology.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Properties\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Budget\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Was Flux Correction Used\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nDoes conservation involve flux correction?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Grid --> Discretisation --> Horizontal\nSea ice discretisation in the horizontal\n9.1. Grid\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nGrid on which sea ice is horizontally discretised?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Grid Type\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nWhat is the type of sea ice grid?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Scheme\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nWhat is the advection scheme?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.4. Thermodynamics Time Step\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.5. Dynamics Time Step\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.6. Additional Details\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nSpecify any additional horizontal discretisation details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --> Discretisation --> Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. 
Number Of Layers\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nIf using multi-layers specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "10.3. Additional Details\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nSpecify any additional vertical grid details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Grid --> Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11.2. Number Of Categories\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nIf using sea ice categories specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Category Limits\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. 
Ice Thickness Distribution Scheme\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Other\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example, models that parameterise the ice thickness distribution ITD (i.e. there is no explicit ITD) but there is an assumed distribution and fluxes are computed accordingly.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Grid --> Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs snow on ice represented in this model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Number Of Snow Levels\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nNumber of vertical levels of snow on ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.3. Snow Fraction\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.4. 
Additional Details\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nSpecify any additional details related to snow on ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Transport In Thickness Space\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Ice Strength Formulation\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nWhich method of sea ice strength formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Redistribution\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Rheology\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nRheology, what is the ice deformation formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Thermodynamics --> Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nWhat is the energy formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Thermal Conductivity\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nWhat type of thermal conductivity is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. 
Heat Diffusion\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nWhat is the method of heat diffusion?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.4. Basal Heat Flux\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.5. Fixed Salinity Value\nIs Required: FALSE    Type: FLOAT    Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.6. Heat Content Of Precipitation\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.7. 
Precipitation Effects On Salinity\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Thermodynamics --> Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Ice Vertical Growth And Melt\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Ice Lateral Melting\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nWhat is the method of sea ice lateral melting?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Ice Surface Sublimation\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.5. Frazil Ice\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the method of frazil ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Thermodynamics --> Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17. Thermodynamics --> Salt --> Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Constant Salinity Value\nIs Required: FALSE    Type: FLOAT    Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Additional Details\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Thermodynamics --> Salt --> Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Constant Salinity Value\nIs Required: FALSE    Type: FLOAT    Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.3. 
Additional Details\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Thermodynamics --> Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nHow is the sea ice thickness distribution represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Thermodynamics --> Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nHow is the sea ice floe-size represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Details\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Thermodynamics --> Melt Ponds\nCharacteristics of melt ponds.\n21.1. 
Are Included\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nAre melt ponds included in the sea ice model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21.2. Formulation\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nWhat method of melt pond formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.3. Impacts\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nWhat do melt ponds have an impact on?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Thermodynamics --> Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.2. Snow Aging Scheme\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Has Snow Ice Formation\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.4. Snow Ice Formation Scheme\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the snow ice formation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.5. Redistribution\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nWhat is the impact of ridging on snow cover?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.6. Heat Diffusion\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nMethod used to handle surface albedo.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Ice Radiation Transmission\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bwalrond/explore-notebooks
jupyter_notebooks/ScalableMachineLearning-CS190-1x_Module3-LinRegressLab.ipynb
mit
[ "<a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-nd/4.0/\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by-nc-nd/4.0/88x31.png\" /></a><br />This work is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-nd/4.0/\">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.\n\nLinear Regression Lab\nThis lab covers a common supervised learning pipeline, using a subset of the Million Song Dataset from the UCI Machine Learning Repository. Our goal is to train a linear regression model to predict the release year of a song given a set of audio features.\n This lab will cover: \n* Part 1: Read and parse the initial dataset\n * Visualization 1: Features\n * Visualization 2: Shifting labels\n* Part 2: Create and evaluate a baseline model\n * Visualization 3: Predicted vs. actual\n* Part 3: Train (via gradient descent) and evaluate a linear regression model\n * Visualization 4: Training error\n* Part 4: Train using MLlib and tune hyperparameters via grid search\n * Visualization 5: Best model's predictions\n * Visualization 6: Hyperparameter heat map\n* Part 5: Add interactions between features\n\nNote that, for reference, you can look up the details of:\n* the relevant Spark methods in Spark's Python API\n* the relevant NumPy methods in the NumPy Reference", "labVersion = 'cs190.1x-lab3-1.0.4'\nprint labVersion", "WARNING: Before executing the following commands for the first time, you need to follow the instructions in the mount_data notebook in this folder, or download and execute this notebook, to mount input datasets.\n Part 1: Read and parse the initial dataset \n (1a) Load and check the data \nThe raw data is currently stored in a text file. We will start by storing this raw data as an RDD, with each element of the RDD representing a data point as a comma-delimited string. 
Each string starts with the label (a year) followed by numerical audio features. Use the count method to check how many data points we have. Then use the take method to create and print out a list of the first 5 data points in their initial string format.\nWARNING: If test_helper, required in the cell below, is not installed, follow the instructions here.", "# load testing library\nfrom test_helper import Test\nimport os.path\nbaseDir = os.path.join('mnt', 'spark-mooc')\ninputPath = os.path.join('cs190', 'millionsong.txt')\nfileName = os.path.join(baseDir, inputPath)\n\nnumPartitions = 2\nrawData = sc.textFile(fileName, numPartitions)\n\n# TODO: Replace <FILL IN> with appropriate code\nnumPoints = rawData.count()\nprint numPoints\nsamplePoints = rawData.take(5)\nprint samplePoints\n\n# TEST Load and check the data (1a)\nTest.assertEquals(numPoints, 6724, 'incorrect value for numPoints')\nTest.assertEquals(len(samplePoints), 5, 'incorrect length for samplePoints')\n\nlen(samplePoints)", "(1b) Using LabeledPoint \nIn MLlib, labeled training instances are stored using the LabeledPoint object. Write the parsePoint function that takes as input a raw data point, parses it using Python's unicode.split method, and returns a LabeledPoint. Use this function to parse samplePoints (from the previous question). Then print out the features and label for the first training point, using the LabeledPoint.features and LabeledPoint.label attributes. Finally, calculate the number of features for this dataset.\n\nNote:\n* split() can be called directly on a unicode or str object. 
For example, u'split,me'.split(',') returns [u'split', u'me'].", "from pyspark.mllib.regression import LabeledPoint\nimport numpy as np\n\n# Here is a sample raw data point:\n# '2001.0,0.884,0.610,0.600,0.474,0.247,0.357,0.344,0.33,0.600,0.425,0.60,0.419'\n# In this raw data point, 2001.0 is the label, and the remaining values are features\n\n# TODO: Replace <FILL IN> with appropriate code\ndef parsePoint(line):\n \"\"\"Converts a comma separated unicode string into a `LabeledPoint`.\n\n Args:\n line (unicode): Comma separated unicode string where the first element is the label and the\n remaining elements are features.\n\n Returns:\n LabeledPoint: The line is converted into a `LabeledPoint`, which consists of a label and\n features.\n \"\"\"\n parts = line.split(',')\n return LabeledPoint(parts[0], parts[1:])\n\nparsedSamplePoints = map(parsePoint, samplePoints)\nfirstPointFeatures = parsedSamplePoints[0].features\nfirstPointLabel = parsedSamplePoints[0].label\nprint firstPointFeatures, firstPointLabel\n\nd = len(firstPointFeatures)\nprint d\n\n# TEST Using LabeledPoint (1b)\nTest.assertTrue(isinstance(firstPointLabel, float), 'label must be a float')\nexpectedX0 = [0.8841,0.6105,0.6005,0.4747,0.2472,0.3573,0.3441,0.3396,0.6009,0.4257,0.6049,0.4192]\nTest.assertTrue(np.allclose(expectedX0, firstPointFeatures, 1e-4, 1e-4),\n 'incorrect features for firstPointFeatures')\nTest.assertTrue(np.allclose(2001.0, firstPointLabel), 'incorrect label for firstPointLabel')\nTest.assertTrue(d == 12, 'incorrect number of features')", "Visualization 1: Features\nFirst we will load and setup the visualization library. Then we will look at the raw features for 50 data points by generating a heatmap that visualizes each feature on a grey-scale and shows the variation of each feature across the 50 sample data points. 
The features are all between 0 and 1, with values closer to 1 represented via darker shades of grey.", "import matplotlib.pyplot as plt\nimport matplotlib.cm as cm\n\nsampleMorePoints = rawData.take(50)\n# You can uncomment the line below to see randomly selected features. These will be randomly\n# selected each time you run the cell. Note that you should run this cell with the line commented\n# out when answering the lab quiz questions.\n# sampleMorePoints = rawData.takeSample(False, 50)\n\nparsedSampleMorePoints = map(parsePoint, sampleMorePoints)\ndataValues = map(lambda lp: lp.features.toArray(), parsedSampleMorePoints)\n\ndef preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',\n gridWidth=1.0):\n \"\"\"Template for generating the plot layout.\"\"\"\n plt.close()\n fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')\n ax.axes.tick_params(labelcolor='#999999', labelsize='10')\n for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:\n axis.set_ticks_position('none')\n axis.set_ticks(ticks)\n axis.label.set_color('#999999')\n if hideLabels: axis.set_ticklabels([])\n plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')\n map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])\n return fig, ax\n\n# generate layout and plot\nfig, ax = preparePlot(np.arange(.5, 11, 1), np.arange(.5, 49, 1), figsize=(8,7), hideLabels=True,\n gridColor='#eeeeee', gridWidth=1.1)\nimage = plt.imshow(dataValues,interpolation='nearest', aspect='auto', cmap=cm.Greys)\nfor x, y, s in zip(np.arange(-.125, 12, 1), np.repeat(-.75, 12), [str(x) for x in range(12)]):\n plt.text(x, y, s, color='#999999', size='10')\nplt.text(4.7, -3, 'Feature', color='#999999', size='11'), ax.set_ylabel('Observation')\ndisplay(fig) \npass", "(1c) Find the range \nNow let's examine the labels to find the range of song years. 
To do this, first parse each element of the rawData RDD, and then find the smallest and largest labels.", "# TODO: Replace <FILL IN> with appropriate code\nparsedDataInit = rawData.map(parsePoint)\nonlyLabels = parsedDataInit.map(lambda p: p.label)\nminYear = onlyLabels.min()\nmaxYear = onlyLabels.max()\nprint maxYear, minYear\n\n# TEST Find the range (1c)\nTest.assertEquals(len(parsedDataInit.take(1)[0].features), 12,\n 'unexpected number of features in sample point')\nsumFeatTwo = parsedDataInit.map(lambda lp: lp.features[2]).sum()\nTest.assertTrue(np.allclose(sumFeatTwo, 3158.96224351), 'parsedDataInit has unexpected values')\nyearRange = maxYear - minYear\nTest.assertTrue(yearRange == 89, 'incorrect range for minYear to maxYear')", "(1d) Shift labels \nAs we just saw, the labels are years in the 1900s and 2000s. In learning problems, it is often natural to shift labels such that they start from zero. Starting with parsedDataInit, create a new RDD consisting of LabeledPoint objects in which the labels are shifted such that smallest label equals zero.", "# TODO: Replace <FILL IN> with appropriate code\nparsedData = parsedDataInit.map(lambda p: LabeledPoint(p.label-1922,p.features))\n\n# Should be a LabeledPoint\nprint type(parsedData.take(1)[0])\n# View the first point\nprint '\\n{0}'.format(parsedData.take(1))\n\n# TEST Shift labels (1d)\noldSampleFeatures = parsedDataInit.take(1)[0].features\nnewSampleFeatures = parsedData.take(1)[0].features\nTest.assertTrue(np.allclose(oldSampleFeatures, newSampleFeatures),\n 'new features do not match old features')\nsumFeatTwo = parsedData.map(lambda lp: lp.features[2]).sum()\nTest.assertTrue(np.allclose(sumFeatTwo, 3158.96224351), 'parsedData has unexpected values')\nminYearNew = parsedData.map(lambda lp: lp.label).min()\nmaxYearNew = parsedData.map(lambda lp: lp.label).max()\nTest.assertTrue(minYearNew == 0, 'incorrect min year in shifted data')\nTest.assertTrue(maxYearNew == 89, 'incorrect max year in shifted data')", 
"Visualization 2: Shifting labels \nWe will look at the labels before and after shifting them. Both scatter plots below visualize tuples storing i) a label value and ii) the number of training points with this label. The first scatter plot uses the initial labels, while the second one uses the shifted labels. Note that the two plots look the same except for the labels on the x-axis.", "# get data for plot\noldData = (parsedDataInit\n .map(lambda lp: (lp.label, 1))\n .reduceByKey(lambda x, y: x + y)\n .collect())\nx, y = zip(*oldData)\n\n# generate layout and plot data\nfig, ax = preparePlot(np.arange(1920, 2050, 20), np.arange(0, 150, 20))\nplt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)\nax.set_xlabel('Year'), ax.set_ylabel('Count')\ndisplay(fig) \npass\n\n# get data for plot\nnewData = (parsedData\n .map(lambda lp: (lp.label, 1))\n .reduceByKey(lambda x, y: x + y)\n .collect())\nx, y = zip(*newData)\n\n# generate layout and plot data\nfig, ax = preparePlot(np.arange(0, 120, 20), np.arange(0, 120, 20))\nplt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)\nax.set_xlabel('Year (shifted)'), ax.set_ylabel('Count')\ndisplay(fig) \npass", "(1e) Training, validation, and test sets \nWe're almost done parsing our dataset, and our final task involves splitting the dataset into training, validation and test sets. Use the randomSplit method with the specified weights and seed to create RDDs storing each of these datasets. Next, cache each of these RDDs, as we will be accessing them multiple times in the remainder of this lab. 
Finally, compute the size of each dataset and verify that the sum of their sizes equals the value computed in Part (1a).", "# TODO: Replace <FILL IN> with appropriate code\nweights = [.8, .1, .1]\nseed = 42\nparsedTrainData, parsedValData, parsedTestData = parsedData.randomSplit(weights,seed)\nparsedTrainData.cache()\nparsedValData.cache()\nparsedTestData.cache()\nnTrain = parsedTrainData.count()\nnVal = parsedValData.count()\nnTest = parsedTestData.count()\n\nprint nTrain, nVal, nTest, nTrain + nVal + nTest\nprint parsedData.count()\n\n# TEST Training, validation, and test sets (1e)\nTest.assertEquals(parsedTrainData.getNumPartitions(), numPartitions,\n 'parsedTrainData has wrong number of partitions')\nTest.assertEquals(parsedValData.getNumPartitions(), numPartitions,\n 'parsedValData has wrong number of partitions')\nTest.assertEquals(parsedTestData.getNumPartitions(), numPartitions,\n 'parsedTestData has wrong number of partitions')\nTest.assertEquals(len(parsedTrainData.take(1)[0].features), 12,\n 'parsedTrainData has wrong number of features')\nsumFeatTwo = (parsedTrainData\n .map(lambda lp: lp.features[2])\n .sum())\nsumFeatThree = (parsedValData\n .map(lambda lp: lp.features[3])\n .reduce(lambda x, y: x + y))\nsumFeatFour = (parsedTestData\n .map(lambda lp: lp.features[4])\n .reduce(lambda x, y: x + y))\nTest.assertTrue(np.allclose([sumFeatTwo, sumFeatThree, sumFeatFour],\n [2526.87757656, 297.340394298, 184.235876654]),\n 'parsed Train, Val, Test data has unexpected values')\nTest.assertTrue(nTrain + nVal + nTest == 6724, 'unexpected Train, Val, Test data set size')\nTest.assertEquals(nTrain, 5371, 'unexpected value for nTrain')\nTest.assertEquals(nVal, 682, 'unexpected value for nVal')\nTest.assertEquals(nTest, 671, 'unexpected value for nTest')", "Part 2: Create and evaluate a baseline model \n(2a) Average label \nA very simple yet natural baseline model is one where we always make the same prediction independent of the given data point, using the average 
label in the training set as the constant prediction value. Compute this value, which is the average (shifted) song year for the training set. Use an appropriate method in the RDD API.", "# TODO: Replace <FILL IN> with appropriate code\naverageTrainYear = parsedTrainData.map(lambda p: p.label).mean()\nprint averageTrainYear\n\n# TEST Average label (2a)\nTest.assertTrue(np.allclose(averageTrainYear, 53.9316700801),\n 'incorrect value for averageTrainYear')", "(2b) Root mean squared error \nWe naturally would like to see how well this naive baseline performs. We will use root mean squared error (RMSE) for evaluation purposes. Implement a function to compute RMSE given an RDD of (label, prediction) tuples, and test out this function on an example.", "# TODO: Replace <FILL IN> with appropriate code\ndef squaredError(label, prediction):\n \"\"\"Calculates the squared error for a single prediction.\n\n Args:\n label (float): The correct value for this observation.\n prediction (float): The predicted value for this observation.\n\n Returns:\n float: The difference between the `label` and `prediction` squared.\n \"\"\"\n <FILL IN>\n\ndef calcRMSE(labelsAndPreds):\n \"\"\"Calculates the root mean squared error for an `RDD` of (label, prediction) tuples.\n\n Args:\n labelsAndPred (RDD of (float, float)): An `RDD` consisting of (label, prediction) tuples.\n\n Returns:\n float: The square root of the mean of the squared errors.\n \"\"\"\n <FILL IN>\n\nlabelsAndPreds = sc.parallelize([(3., 1.), (1., 2.), (2., 2.)])\n# RMSE = sqrt[((3-1)^2 + (1-2)^2 + (2-2)^2) / 3] = 1.291\nexampleRMSE = calcRMSE(labelsAndPreds)\nprint exampleRMSE\n\n# TEST Root mean squared error (2b)\nTest.assertTrue(np.allclose(squaredError(3, 1), 4.), 'incorrect definition of squaredError')\nTest.assertTrue(np.allclose(exampleRMSE, 1.29099444874), 'incorrect value for exampleRMSE')", "(2c) Training, validation and test RMSE \nNow let's calculate the training, validation and test RMSE of our baseline model. 
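Before wiring this up with RDDs, the `squaredError`/`calcRMSE` pair from (2b) can be sanity-checked locally. This is a minimal plain-Python sketch (an ordinary list of (label, prediction) tuples stands in for the RDD):

```python
import math

def squared_error(label, prediction):
    # Squared difference between the true label and the prediction
    return (label - prediction) ** 2

def calc_rmse(labels_and_preds):
    # Square root of the mean of the squared errors over all pairs
    n = len(labels_and_preds)
    return math.sqrt(sum(squared_error(l, p) for l, p in labels_and_preds) / n)

labels_and_preds = [(3., 1.), (1., 2.), (2., 2.)]
print(calc_rmse(labels_and_preds))  # sqrt((4 + 1 + 0) / 3) ~ 1.291
```

The RDD version replaces the list comprehension with a `map` followed by a `sum`/`count` on the RDD.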
To do this, first create RDDs of (label, prediction) tuples for each dataset, and then call calcRMSE. Note that each RMSE can be interpreted as the average prediction error for the given dataset (in terms of number of years).", "# TODO: Replace <FILL IN> with appropriate code\nlabelsAndPredsTrain = parsedTrainData.<FILL IN>\nrmseTrainBase = <FILL IN>\n\nlabelsAndPredsVal = parsedValData.<FILL IN>\nrmseValBase = <FILL IN>\n\nlabelsAndPredsTest = parsedTestData.<FILL IN>\nrmseTestBase = <FILL IN>\n\nprint 'Baseline Train RMSE = {0:.3f}'.format(rmseTrainBase)\nprint 'Baseline Validation RMSE = {0:.3f}'.format(rmseValBase)\nprint 'Baseline Test RMSE = {0:.3f}'.format(rmseTestBase)\n\n\n# TEST Training, validation and test RMSE (2c)\nTest.assertTrue(np.allclose([rmseTrainBase, rmseValBase, rmseTestBase],\n [21.305869, 21.586452, 22.136957]), 'incorrect RMSE value')", "Visualization 3: Predicted vs. actual \nWe will visualize predictions on the validation dataset. The scatter plots below visualize tuples storing i) the predicted value and ii) true label. The first scatter plot represents the ideal situation where the predicted value exactly equals the true label, while the second plot uses the baseline predictor (i.e., averageTrainYear) for all predicted values. 
Further note that the points in the scatter plots are color-coded, ranging from light yellow when the true and predicted values are equal to bright red when they drastically differ.", "from matplotlib.colors import ListedColormap, Normalize\nfrom matplotlib.cm import get_cmap\ncmap = get_cmap('YlOrRd')\nnorm = Normalize()\n\nactual = np.asarray(parsedValData\n .map(lambda lp: lp.label)\n .collect())\nerror = np.asarray(parsedValData\n .map(lambda lp: (lp.label, lp.label))\n .map(lambda (l, p): squaredError(l, p))\n .collect())\nclrs = cmap(np.asarray(norm(error)))[:,0:3]\n\nfig, ax = preparePlot(np.arange(0, 100, 20), np.arange(0, 100, 20))\nplt.scatter(actual, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=0.5)\nax.set_xlabel('Predicted'), ax.set_ylabel('Actual')\ndisplay(fig) \npass\n\npredictions = np.asarray(parsedValData\n .map(lambda lp: averageTrainYear)\n .collect())\nerror = np.asarray(parsedValData\n .map(lambda lp: (lp.label, averageTrainYear))\n .map(lambda (l, p): squaredError(l, p))\n .collect())\nnorm = Normalize()\nclrs = cmap(np.asarray(norm(error)))[:,0:3]\n\nfig, ax = preparePlot(np.arange(53.0, 55.0, 0.5), np.arange(0, 100, 20))\nax.set_xlim(53, 55)\nplt.scatter(predictions, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=0.3)\nax.set_xlabel('Predicted'), ax.set_ylabel('Actual')\ndisplay(fig) ", "Part 3: Train (via gradient descent) and evaluate a linear regression model \n (3a) Gradient summand \nNow let's see if we can do better via linear regression, training a model via gradient descent (we'll omit the intercept for now). 
Recall that the gradient descent update for linear regression is: \\[ \\scriptsize \\mathbf{w}_{i+1} = \\mathbf{w}_i - \\alpha_i \\sum_j (\\mathbf{w}_i^\\top\\mathbf{x}_j - y_j) \\mathbf{x}_j \\,.\\] where \\( \\scriptsize i \\) is the iteration number of the gradient descent algorithm, and \\( \\scriptsize j \\) identifies the observation.\nFirst, implement a function that computes the summand for this update, i.e., the summand equals \\( \\scriptsize (\\mathbf{w}^\\top \\mathbf{x} - y) \\mathbf{x} \\, ,\\) and test out this function on two examples. Use the DenseVector dot method.", "from pyspark.mllib.linalg import DenseVector\n\n# TODO: Replace <FILL IN> with appropriate code\ndef gradientSummand(weights, lp):\n \"\"\"Calculates the gradient summand for a given weight and `LabeledPoint`.\n\n Note:\n `DenseVector` behaves similarly to a `numpy.ndarray` and they can be used interchangably\n within this function. For example, they both implement the `dot` method.\n\n Args:\n weights (DenseVector): An array of model weights (betas).\n lp (LabeledPoint): The `LabeledPoint` for a single observation.\n\n Returns:\n DenseVector: An array of values the same length as `weights`. 
The gradient summand.\n \"\"\"\n <FILL IN>\n\nexampleW = DenseVector([1, 1, 1])\nexampleLP = LabeledPoint(2.0, [3, 1, 4])\n# gradientSummand = (dot([1 1 1], [3 1 4]) - 2) * [3 1 4] = (8 - 2) * [3 1 4] = [18 6 24]\nsummandOne = gradientSummand(exampleW, exampleLP)\nprint summandOne\n\nexampleW = DenseVector([.24, 1.2, -1.4])\nexampleLP = LabeledPoint(3.0, [-1.4, 4.2, 2.1])\nsummandTwo = gradientSummand(exampleW, exampleLP)\nprint summandTwo\n\n# TEST Gradient summand (3a)\nTest.assertTrue(np.allclose(summandOne, [18., 6., 24.]), 'incorrect value for summandOne')\nTest.assertTrue(np.allclose(summandTwo, [1.7304,-5.1912,-2.5956]), 'incorrect value for summandTwo')", "(3b) Use weights to make predictions \nNext, implement a getLabeledPredictions function that takes in weights and an observation's LabeledPoint and returns a (label, prediction) tuple. Note that we can predict by computing the dot product between weights and an observation's features.", "# TODO: Replace <FILL IN> with appropriate code\ndef getLabeledPrediction(weights, observation):\n \"\"\"Calculates predictions and returns a (label, prediction) tuple.\n\n Note:\n The labels should remain unchanged as we'll use this information to calculate prediction\n error later.\n\n Args:\n weights (np.ndarray): An array with one weight for each features in `trainData`.\n observation (LabeledPoint): A `LabeledPoint` that contain the correct label and the\n features for the data point.\n\n Returns:\n tuple: A (label, prediction) tuple.\n \"\"\"\n return <FILL IN>\n\nweights = np.array([1.0, 1.5])\npredictionExample = sc.parallelize([LabeledPoint(2, np.array([1.0, .5])),\n LabeledPoint(1.5, np.array([.5, .5]))])\nlabelsAndPredsExample = predictionExample.map(lambda lp: getLabeledPrediction(weights, lp))\nprint labelsAndPredsExample.collect()\n\n# TEST Use weights to make predictions (3b)\nTest.assertEquals(labelsAndPredsExample.collect(), [(2.0, 1.75), (1.5, 1.25)],\n 'incorrect definition for getLabeledPredictions')", 
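The two helpers from (3a) and (3b) can be checked against the worked examples using plain NumPy arrays, which stand in for `DenseVector` and for a `LabeledPoint`'s label/features pair. This is an illustrative sketch with snake_case names, not the graded solution:

```python
import numpy as np

def gradient_summand(weights, features, label):
    # One observation's contribution to the gradient: (w . x - y) * x
    return (weights.dot(features) - label) * features

def get_labeled_prediction(weights, features, label):
    # The prediction is the dot product of the weights and the features;
    # the label is passed through unchanged
    return (label, weights.dot(features))

w = np.array([1., 1., 1.])
x, y = np.array([3., 1., 4.]), 2.0
print(gradient_summand(w, x, y))  # (8 - 2) * [3 1 4] = [18, 6, 24]

w2 = np.array([1.0, 1.5])
# label stays 2.0; prediction is 1.0*1.0 + 1.5*0.5 = 1.75
print(get_labeled_prediction(w2, np.array([1.0, .5]), 2.0))
```

The second worked example from (3a) gives [1.7304, -5.1912, -2.5956], matching the notebook's test values.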
"(3c) Gradient descent \nNext, implement a gradient descent function for linear regression and test out this function on an example.", "# TODO: Replace <FILL IN> with appropriate code\ndef linregGradientDescent(trainData, numIters):\n \"\"\"Calculates the weights and error for a linear regression model trained with gradient descent.\n\n Note:\n `DenseVector` behaves similarly to a `numpy.ndarray` and they can be used interchangably\n within this function. For example, they both implement the `dot` method.\n\n Args:\n trainData (RDD of LabeledPoint): The labeled data for use in training the model.\n numIters (int): The number of iterations of gradient descent to perform.\n\n Returns:\n (np.ndarray, np.ndarray): A tuple of (weights, training errors). Weights will be the\n final weights (one weight per feature) for the model, and training errors will contain\n an error (RMSE) for each iteration of the algorithm.\n \"\"\"\n # The length of the training data\n n = trainData.count()\n # The number of features in the training data\n d = len(trainData.take(1)[0].features)\n w = np.zeros(d)\n alpha = 1.0\n # We will compute and store the training error after each iteration\n errorTrain = np.zeros(numIters)\n for i in range(numIters):\n # Use getLabeledPrediction from (3b) with trainData to obtain an RDD of (label, prediction)\n # tuples. Note that the weights all equal 0 for the first iteration, so the predictions will\n # have large errors to start.\n labelsAndPredsTrain = trainData.<FILL IN>\n errorTrain[i] = calcRMSE(labelsAndPredsTrain)\n\n # Calculate the `gradient`. 
Make use of the `gradientSummand` function you wrote in (3a).\n # Note that `gradient` should be a `DenseVector` of length `d`.\n gradient = <FILL IN>\n\n # Update the weights\n alpha_i = alpha / (n * np.sqrt(i+1))\n w -= <FILL IN>\n return w, errorTrain\n\n# create a toy dataset with n = 10, d = 3, and then run 5 iterations of gradient descent\n# note: the resulting model will not be useful; the goal here is to verify that\n# linregGradientDescent is working properly\nexampleN = 10\nexampleD = 3\nexampleData = (sc\n .parallelize(parsedTrainData.take(exampleN))\n .map(lambda lp: LabeledPoint(lp.label, lp.features[0:exampleD])))\nprint exampleData.take(2)\nexampleNumIters = 5\nexampleWeights, exampleErrorTrain = linregGradientDescent(exampleData, exampleNumIters)\nprint exampleWeights\n\n# TEST Gradient descent (3c)\nexpectedOutput = [48.88110449, 36.01144093, 30.25350092]\nTest.assertTrue(np.allclose(exampleWeights, expectedOutput), 'value of exampleWeights is incorrect')\nexpectedError = [79.72013547, 30.27835699, 9.27842641, 9.20967856, 9.19446483]\nTest.assertTrue(np.allclose(exampleErrorTrain, expectedError),\n 'value of exampleErrorTrain is incorrect')", "(3d) Train the model \nNow let's train a linear regression model on all of our training data and evaluate its accuracy on the validation set. Note that the test set will not be used here. 
If we evaluated the model on the test set, we would bias our final results.\nWe've already done much of the required work: we computed the number of features in Part (1b); we created the training and validation datasets and computed their sizes in Part (1e); and, we wrote a function to compute RMSE in Part (2b).", "# TODO: Replace <FILL IN> with appropriate code\nnumIters = 50\nweightsLR0, errorTrainLR0 = linregGradientDescent(<FILL IN>)\n\nlabelsAndPreds = parsedValData.<FILL IN>\nrmseValLR0 = calcRMSE(labelsAndPreds)\n\nprint 'Validation RMSE:\\n\\tBaseline = {0:.3f}\\n\\tLR0 = {1:.3f}'.format(rmseValBase,\n rmseValLR0)\n\n# TEST Train the model (3d)\nexpectedOutput = [22.64535883, 20.064699, -0.05341901, 8.2931319, 5.79155768, -4.51008084,\n 15.23075467, 3.8465554, 9.91992022, 5.97465933, 11.36849033, 3.86452361]\nTest.assertTrue(np.allclose(weightsLR0, expectedOutput), 'incorrect value for weightsLR0')", "Visualization 4: Training error \nWe will look at the log of the training error as a function of iteration. The first scatter plot visualizes the logarithm of the training error for all 50 iterations. 
The second plot shows the training error itself, focusing on the final 44 iterations.", "norm = Normalize()\nclrs = cmap(np.asarray(norm(np.log(errorTrainLR0))))[:,0:3]\n\nfig, ax = preparePlot(np.arange(0, 60, 10), np.arange(2, 6, 1))\nax.set_ylim(2, 6)\nplt.scatter(range(0, numIters), np.log(errorTrainLR0), s=14**2, c=clrs, edgecolors='#888888', alpha=0.75)\nax.set_xlabel('Iteration'), ax.set_ylabel(r'$\\log_e(errorTrainLR0)$')\ndisplay(fig) \npass\n\nnorm = Normalize()\nclrs = cmap(np.asarray(norm(errorTrainLR0[6:])))[:,0:3]\n\nfig, ax = preparePlot(np.arange(0, 60, 10), np.arange(17, 22, 1))\nax.set_ylim(17.8, 21.2)\nplt.scatter(range(0, numIters-6), errorTrainLR0[6:], s=14**2, c=clrs, edgecolors='#888888', alpha=0.75)\nax.set_xticklabels(map(str, range(6, 66, 10)))\nax.set_xlabel('Iteration'), ax.set_ylabel(r'Training Error')\ndisplay(fig) \npass", "Part 4: Train using MLlib and perform grid search \n(4a) LinearRegressionWithSGD \nWe're already doing better than the baseline model, but let's see if we can do better by adding an intercept, using regularization, and (based on the previous visualization) training for more iterations. MLlib's LinearRegressionWithSGD essentially implements the same algorithm that we implemented in Part (3b), albeit more efficiently and with various additional functionality, such as stochastic gradient approximation, including an intercept in the model and also allowing L1 or L2 regularization.\nFirst use LinearRegressionWithSGD to train a model with L2 regularization and with an intercept. This method returns a LinearRegressionModel. 
Next, use the model's weights and intercept attributes to print out the model's parameters.", "from pyspark.mllib.regression import LinearRegressionWithSGD\n# Values to use when training the linear regression model\nnumIters = 500 # iterations\nalpha = 1.0 # step\nminiBatchFrac = 1.0 # miniBatchFraction\nreg = 1e-1 # regParam\nregType = 'l2' # regType\nuseIntercept = True # intercept\n\n# TODO: Replace <FILL IN> with appropriate code\nfirstModel = LinearRegressionWithSGD.<FILL IN>\n\n# weightsLR1 stores the model weights; interceptLR1 stores the model intercept\nweightsLR1 = <FILL IN>\ninterceptLR1 = <FILL IN>\nprint weightsLR1, interceptLR1\n\n# TEST LinearRegressionWithSGD (4a)\nexpectedIntercept = 13.3763009811\nexpectedInterceptE = 13.3335907631\nexpectedWeights = [15.9789216525, 13.923582484, 0.781551054803, 6.09257051566, 3.91814791179, -2.30347707767,\n 10.3002026917, 3.04565129011, 7.23175674717, 4.65796458476, 7.98875075855, 3.1782463856]\nexpectedWeightsE = [16.682292427, 14.7439059559, -0.0935105608897, 6.22080088829, 4.01454261926, -3.30214858535,\n 11.0403027232, 2.67190962854, 7.18925791279, 4.46093254586, 8.14950409475, 2.75135810882]\nTest.assertTrue(np.allclose(interceptLR1, expectedIntercept) or np.allclose(interceptLR1, expectedInterceptE),\n 'incorrect value for interceptLR1')\nTest.assertTrue(np.allclose(weightsLR1, expectedWeights) or np.allclose(weightsLR1, expectedWeightsE),\n 'incorrect value for weightsLR1')", "(4b) Predict\nNow use the LinearRegressionModel.predict() method to make a prediction on a sample point. 
Pass the features from a LabeledPoint into the predict() method.", "# TODO: Replace <FILL IN> with appropriate code\nsamplePoint = parsedTrainData.take(1)[0]\nsamplePrediction = <FILL IN>\nprint samplePrediction\n\n# TEST Predict (4b)\nTest.assertTrue(np.allclose(samplePrediction, 56.5823796609) or np.allclose(samplePrediction, 56.8013380112),\n 'incorrect value for samplePrediction')", "(4c) Evaluate RMSE \nNext evaluate the accuracy of this model on the validation set. Use the predict() method to create a labelsAndPreds RDD, and then use the calcRMSE() function from Part (2b).", "# TODO: Replace <FILL IN> with appropriate code\nlabelsAndPreds = <FILL IN>\nrmseValLR1 = <FILL IN>\n\nprint ('Validation RMSE:\\n\\tBaseline = {0:.3f}\\n\\tLR0 = {1:.3f}' +\n '\\n\\tLR1 = {2:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1)\n\n# TEST Evaluate RMSE (4c)\nTest.assertTrue(np.allclose(rmseValLR1, 19.8730701066) or np.allclose(rmseValLR1, 19.6912473416),\n 'incorrect value for rmseValLR1')", "(4d) Grid search \nWe're already outperforming the baseline on the validation set by almost 2 years on average, but let's see if we can do better. Perform grid search to find a good regularization parameter. 
Try regParam values 1e-10, 1e-5, and 1.", "# TODO: Replace <FILL IN> with appropriate code\nbestRMSE = rmseValLR1\nbestRegParam = reg\nbestModel = firstModel\n\nnumIters = 500\nalpha = 1.0\nminiBatchFrac = 1.0\nfor reg in <FILL IN>:\n model = LinearRegressionWithSGD.train(parsedTrainData, numIters, alpha,\n miniBatchFrac, regParam=reg,\n regType='l2', intercept=True)\n labelsAndPreds = parsedValData.map(lambda lp: (lp.label, model.predict(lp.features)))\n rmseValGrid = calcRMSE(labelsAndPreds)\n print rmseValGrid\n\n if rmseValGrid < bestRMSE:\n bestRMSE = rmseValGrid\n bestRegParam = reg\n bestModel = model\nrmseValLRGrid = bestRMSE\n\nprint ('Validation RMSE:\\n\\tBaseline = {0:.3f}\\n\\tLR0 = {1:.3f}\\n\\tLR1 = {2:.3f}\\n' +\n '\\tLRGrid = {3:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1, rmseValLRGrid)\n\n\n\n# TEST Grid search (4d)\nTest.assertTrue(np.allclose(17.4831362704, rmseValLRGrid) or np.allclose(17.0171700716, rmseValLRGrid),\n 'incorrect value for rmseValLRGrid')", "Visualization 5: Best model's predictions\nNext, we create a visualization similar to 'Visualization 3: Predicted vs. actual' from Part 2 using the predictions from the best model from Part (4d) on the validation dataset. 
Specifically, we create a color-coded scatter plot visualizing tuples storing i) the predicted value from this model and ii) true label.", "predictions = np.asarray(parsedValData\n .map(lambda lp: bestModel.predict(lp.features))\n .collect())\nactual = np.asarray(parsedValData\n .map(lambda lp: lp.label)\n .collect())\nerror = np.asarray(parsedValData\n .map(lambda lp: (lp.label, bestModel.predict(lp.features)))\n .map(lambda (l, p): squaredError(l, p))\n .collect())\n\nnorm = Normalize()\nclrs = cmap(np.asarray(norm(error)))[:,0:3]\n\nfig, ax = preparePlot(np.arange(0, 120, 20), np.arange(0, 120, 20))\nax.set_xlim(15, 82), ax.set_ylim(-5, 105)\nplt.scatter(predictions, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=.5)\nax.set_xlabel('Predicted'), ax.set_ylabel(r'Actual')\ndisplay(fig) \npass", "(4e) Vary alpha and the number of iterations \nIn the previous grid search, we set alpha = 1 for all experiments. Now let's see what happens when we vary alpha. Specifically, try 1e-5 and 10 as values for alpha and also try training models for 500 iterations (as before) but also for 5 iterations. Evaluate all models on the validation set. 
Note that if we set alpha too small the gradient descent will require a huge number of steps to converge to the solution, and if we use too large of an alpha it can cause numerical problems, like you'll see below for alpha = 10.", "# TODO: Replace <FILL IN> with appropriate code\nreg = bestRegParam\nmodelRMSEs = []\n\nfor alpha in <FILL IN>:\n for numIters in <FILL IN>:\n model = LinearRegressionWithSGD.train(parsedTrainData, numIters, alpha,\n miniBatchFrac, regParam=reg,\n regType='l2', intercept=True)\n labelsAndPreds = parsedValData.map(lambda lp: (lp.label, model.predict(lp.features)))\n rmseVal = calcRMSE(labelsAndPreds)\n print 'alpha = {0:.0e}, numIters = {1}, RMSE = {2:.3f}'.format(alpha, numIters, rmseVal)\n modelRMSEs.append(rmseVal)\n\n# TEST Vary alpha and the number of iterations (4e)\nexpectedResults = sorted([56.972629385122502, 56.972629385122502, 355124752.22122133])\nexpectedResultsE = sorted([56.892948663998297, 56.96970493238036, 355124752.22122133])\nactualResults = sorted(modelRMSEs)[:3]\nTest.assertTrue(np.allclose(actualResults, expectedResults) or np.allclose(actualResults, expectedResultsE),\n 'incorrect value for modelRMSEs')", "Visualization 6: Hyperparameter heat map \nNext, we perform a visualization of hyperparameter search using a larger set of hyperparameters (with precomputed results). Specifically, we create a heat map where the brighter colors correspond to lower RMSE values. The first plot has a large area with brighter colors. 
In order to differentiate within the bright region, we generate a second plot corresponding to the hyperparameters found within that region.", "from matplotlib.colors import LinearSegmentedColormap\n\n# Saved parameters and results, to save the time required to run 36 models\nnumItersParams = [10, 50, 100, 250, 500, 1000]\nregParams = [1e-8, 1e-6, 1e-4, 1e-2, 1e-1, 1]\nrmseVal = np.array([[ 20.36769649, 20.36770128, 20.36818057, 20.41795354, 21.09778437, 301.54258421],\n [ 19.04948826, 19.0495 , 19.05067418, 19.16517726, 19.97967727, 23.80077467],\n [ 18.40149024, 18.40150998, 18.40348326, 18.59457491, 19.82155716, 23.80077467],\n [ 17.5609346 , 17.56096749, 17.56425511, 17.88442127, 19.71577117, 23.80077467],\n [ 17.0171705 , 17.01721288, 17.02145207, 17.44510574, 19.69124734, 23.80077467],\n [ 16.58074813, 16.58079874, 16.58586512, 17.11466904, 19.6860931 , 23.80077467]])\n\nnumRows, numCols = len(numItersParams), len(regParams)\nrmseVal = np.array(rmseVal)\nrmseVal.shape = (numRows, numCols)\n\nfig, ax = preparePlot(np.arange(0, numCols, 1), np.arange(0, numRows, 1), figsize=(8, 7), hideLabels=True,\n gridWidth=0.)\nax.set_xticklabels(regParams), ax.set_yticklabels(numItersParams)\nax.set_xlabel('Regularization Parameter'), ax.set_ylabel('Number of Iterations')\n\ncolors = LinearSegmentedColormap.from_list('blue', ['#0022ff', '#000055'], gamma=.2)\nimage = plt.imshow(rmseVal,interpolation='nearest', aspect='auto',\n cmap = colors)\ndisplay(fig) \n\n# Zoom into the bottom left\nnumItersParamsZoom, regParamsZoom = numItersParams[-3:], regParams[:4]\nrmseValZoom = rmseVal[-3:, :4]\n\nnumRows, numCols = len(numItersParamsZoom), len(regParamsZoom)\n\nfig, ax = preparePlot(np.arange(0, numCols, 1), np.arange(0, numRows, 1), figsize=(8, 7), hideLabels=True,\n gridWidth=0.)\nax.set_xticklabels(regParamsZoom), ax.set_yticklabels(numItersParamsZoom)\nax.set_xlabel('Regularization Parameter'), ax.set_ylabel('Number of Iterations')\n\ncolors = 
LinearSegmentedColormap.from_list('blue', ['#0022ff', '#000055'], gamma=.2)\nimage = plt.imshow(rmseValZoom,interpolation='nearest', aspect='auto',\n cmap = colors)\ndisplay(fig) \npass", "Part 5: Add interactions between features \n (5a) Add 2-way interactions \nSo far, we've used the features as they were provided. Now, we will add features that capture the two-way interactions between our existing features. Write a function twoWayInteractions that takes in a LabeledPoint and generates a new LabeledPoint that contains the old features and the two-way interactions between them.\n\nNote:\n* A dataset with three features would have nine ( \\( \\scriptsize 3^2 \\) ) two-way interactions.\n* You might want to use itertools.product to generate tuples for each of the possible 2-way interactions.\n* Remember that you can combine two DenseVector or ndarray objects using np.hstack.", "# TODO: Replace <FILL IN> with appropriate code\nimport itertools\n\ndef twoWayInteractions(lp):\n \"\"\"Creates a new `LabeledPoint` that includes two-way interactions.\n\n Note:\n For features [x, y] the two-way interactions would be [x^2, x*y, y*x, y^2] and these\n would be appended to the original [x, y] feature list.\n\n Args:\n lp (LabeledPoint): The label and features for this observation.\n\n Returns:\n LabeledPoint: The new `LabeledPoint` should have the same label as `lp`. 
Its features\n should include the features from `lp` followed by the two-way interaction features.\n \"\"\"\n <FILL IN>\n\nprint twoWayInteractions(LabeledPoint(0.0, [2, 3]))\n\n# Transform the existing train, validation, and test sets to include two-way interactions.\ntrainDataInteract = <FILL IN>\nvalDataInteract = <FILL IN>\ntestDataInteract = <FILL IN>\n\n# TEST Add two-way interactions (5a)\ntwoWayExample = twoWayInteractions(LabeledPoint(0.0, [2, 3]))\nTest.assertTrue(np.allclose(sorted(twoWayExample.features),\n sorted([2.0, 3.0, 4.0, 6.0, 6.0, 9.0])),\n 'incorrect features generatedBy twoWayInteractions')\ntwoWayPoint = twoWayInteractions(LabeledPoint(1.0, [1, 2, 3]))\nTest.assertTrue(np.allclose(sorted(twoWayPoint.features),\n sorted([1.0,2.0,3.0,1.0,2.0,3.0,2.0,4.0,6.0,3.0,6.0,9.0])),\n 'incorrect features generated by twoWayInteractions')\nTest.assertEquals(twoWayPoint.label, 1.0, 'incorrect label generated by twoWayInteractions')\nTest.assertTrue(np.allclose(sum(trainDataInteract.take(1)[0].features), 40.821870576035529),\n 'incorrect features in trainDataInteract')\nTest.assertTrue(np.allclose(sum(valDataInteract.take(1)[0].features), 45.457719932695696),\n 'incorrect features in valDataInteract')\nTest.assertTrue(np.allclose(sum(testDataInteract.take(1)[0].features), 35.109111632783168),\n 'incorrect features in testDataInteract')", "(5b) Build interaction model \nNow, let's build the new model. We've done this several times now. 
To implement this for the new features, we need to change a few variable names.\n\nNote:\n* Remember that we should build our model from the training data and evaluate it on the validation data.\n* You should re-run your hyperparameter search after changing features, as using the best hyperparameters from your prior model will not necessarily lead to the best model.\n* For this exercise, we have already preset the hyperparameters to reasonable values.", "# TODO: Replace <FILL IN> with appropriate code\nnumIters = 500\nalpha = 1.0\nminiBatchFrac = 1.0\nreg = 1e-10\n\nmodelInteract = LinearRegressionWithSGD.train(<FILL IN>, numIters, alpha,\n                                              miniBatchFrac, regParam=reg,\n                                              regType='l2', intercept=True)\nlabelsAndPredsInteract = <FILL IN>.map(lambda lp: (lp.label, <FILL IN>.predict(lp.features)))\nrmseValInteract = calcRMSE(labelsAndPredsInteract)\n\nprint ('Validation RMSE:\\n\\tBaseline = {0:.3f}\\n\\tLR0 = {1:.3f}\\n\\tLR1 = {2:.3f}\\n\\tLRGrid = ' +\n       '{3:.3f}\\n\\tLRInteract = {4:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1,\n                                                 rmseValLRGrid, rmseValInteract)\n\n# TEST Build interaction model (5b)\nTest.assertTrue(np.allclose(rmseValInteract, 15.9963259256) or np.allclose(rmseValInteract, 15.6894664683),\n                'incorrect value for rmseValInteract')", "(5c) Evaluate interaction model on test data \nOur final step is to evaluate the new model on the test dataset. Note that we haven't used the test set to evaluate any of our models. Because of this, our evaluation provides us with an unbiased estimate for how our model will perform on new data. If we had changed our model based on viewing its performance on the test set, our estimate of RMSE would likely be overly optimistic.\nWe'll also print the RMSE for both the baseline model and our new model. 
With this information, we can see how much better our model performs than the baseline model.", "# TODO: Replace <FILL IN> with appropriate code\nlabelsAndPredsTest = <FILL IN>\nrmseTestInteract = <FILL IN>\n\nprint ('Test RMSE:\\n\\tBaseline = {0:.3f}\\n\\tLRInteract = {1:.3f}'\n .format(rmseTestBase, rmseTestInteract))\n\n# TEST Evaluate interaction model on test data (5c)\nTest.assertTrue(np.allclose(rmseTestInteract, 16.5251427618) or np.allclose(rmseTestInteract, 16.3272040537),\n 'incorrect value for rmseTestInteract')" ]
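As an aside, the two-way interaction transform from (5a) can be sketched in plain NumPy using itertools.product, as suggested in the hints. The `LabeledPoint` here is an illustrative namedtuple stand-in for the MLlib class:

```python
import itertools
from collections import namedtuple

import numpy as np

# Illustrative stand-in for pyspark.mllib.regression.LabeledPoint
LabeledPoint = namedtuple('LabeledPoint', ['label', 'features'])

def two_way_interactions(lp):
    # All pairwise products x_i * x_j (including i == j and both orders),
    # appended after the original features
    interactions = [x * y for x, y in itertools.product(lp.features, repeat=2)]
    return LabeledPoint(lp.label, np.hstack((lp.features, interactions)))

lp = two_way_interactions(LabeledPoint(0.0, [2.0, 3.0]))
print(lp.features)  # [2. 3. 4. 6. 6. 9.]
```

For d original features this appends d^2 interaction terms, so the notebook's 12-feature points grow to 12 + 144 = 156 features.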
magwenelab/mini-term-2016
ode-modeling2-instructor.ipynb
cc0-1.0
[ "Feed Forward Loops\nWe're now going to use some of these tools to look at a class of network motifs, called Feed Forward Loops (FFLs), found in signaling and regulatory networks. FFLs involve interactions between three components, with the basic topology illustrated below. Depending on the signs of the edges (whether activating or repressing) we can classify FFLs as \"coherent\" or \"incoherent.\" We'll take a look at an example of each class.\nA Coherent FFL\nThe most common type of coherent FFL is illustrated in the figure below. In this system $X$ is an activator of $Y$ and both $X$ and $Y$ regulate the production of $Z$ with AND logic (i.e. both $X$ and $Y$ must be above particular thresholds in order to trigger the production of $Z$). \n\nUsing our logic approximation framework we will model the coherent FFL network illustrated above as follows.\nGene Y:\n\\begin{eqnarray}\nY = f(X) = \\beta_y\\ \\Theta(X > K_{xy})\n\\\n\\\n\\frac{dY}{dt} = \\beta_y\\ \\Theta(X > K_{xy}) - \\alpha_{y}Y\n\\end{eqnarray} \nGene Z:\n\\begin{eqnarray}\nZ = g(X,Y) = \\beta_z\\ \\Theta(X > K_{xz})\\Theta(Y > K_{yz})\n\\\n\\\n\\frac{dZ}{dt} = \\beta_z\\ \\Theta(X > K_{xz})\\Theta(Y > K_{yz}) - \\alpha_{z}Z\n\\end{eqnarray}", "# import statements to make numeric and plotting functions available\n%matplotlib inline\nimport numpy as np\nfrom numpy import *\nfrom matplotlib.pyplot import *\n\n## We'll specify the behavior of X as a series of pulses of different lengths\n## so we'll define a function to generate pulses\n\ndef pulse(ontime, offtime, ntimes, onval=1):\n    if ontime >= offtime:\n        raise Exception(\"Invalid on/off times.\")\n    signal = np.zeros(ntimes)\n    signal[ontime:offtime] = onval\n    return signal\n\n\nnsteps = 150\nshort_pulse = pulse(20, 23, nsteps)   # short (3-step) pulse\nlong_pulse = pulse(50, 100, nsteps)   # long (50-step) pulse\nX = short_pulse + long_pulse  # we can then add the pulses to create\n                              # a single time trace\n\nplot(X, color='black')\nxlabel('Time units')\nylabel('Amount of Gene 
Product')\nylim(0, 1.5)\npass", "Define Python functions for dY/dt and dZ/dt\nRecall from above that\n\\begin{eqnarray}\n\\frac{dY}{dt} & = & \\beta_y\\ \\Theta(X > K_{xy}) - \\alpha_{y}Y \\ \\\n\\frac{dZ}{dt} & = & \\beta_z\\ \\Theta(X > K_{xz})\\Theta(Y > K_{yz}) - \\alpha_{z}Z\n\\end{eqnarray}", "## Write Python functions for dY/dt and dZ/dt\n\ndef dY(B,K,a,X,Y):\n pass ## replace this line with your function definition\n\n\ndef dZ(B,Kx,Ky,a,X,Y,Z):\n pass ## replace this line with your function definition\n\n\ndef dY(B,K,a,X,Y):\n if X > K:\n theta = 1\n else:\n theta = 0\n return B * theta - a * Y\n\n\ndef dZ(B,Kx,Ky,a,X,Y,Z):\n theta = 0\n if (X > Kx) and (Y > Ky):\n theta = 1\n return B * theta - a * Z\n\n\n## Plot X, Y, and Z on the same time scale\n\nnsteps = 150\nshort_pulse = pulse(20, 23, nsteps) # 5 sec pulse\nlong_pulse = pulse(50, 100, nsteps) # 50 sec pulse\nX = short_pulse + long_pulse\n\n# setup parameters for Y and Z\nY = [0]\nbetay, alphay = 0.2, 0.1\nKxy = 0.5\n\nZ = [0]\nbetaz, alphaz = 0.2, 0.1\nKxz = 0.5\nKyz = 1\n\nfor i in range(nsteps):\n xnow = X[i]\n ynow, znow = Y[-1], Z[-1]\n \n ynew = ynow + dY(betay, Kxy, alphay, xnow, ynow)\n znew = znow + dZ(betaz, Kxz, Kyz, alphaz, xnow, ynow, znow)\n \n Y.append(ynew)\n Z.append(znew)\n \n\nplot(X, 'k--', label='X', linewidth=1.5)\nplot(Y, 'b', label='Y')\nplot(Z, 'r', label='Z')\nylim(-0.1, 2.5)\nxlabel(\"Time\")\nylabel(\"Concentration\")\nlegend()\npass", "<h3> <font color='firebrick'>Questions</font> </h3>\n\n\n\nHow do the dynamics of $Y$ and $Z$ differ in the simulation above?\n\n\nTry varying the length of the first (short) pulse? How does changing the length of the pulse affect the dynamics of $Y$ and $Z$?\n\n\nPerformance of the Coherent FFL under noisy inputs\nLet's further explore the behavior of the coherent FFL defined given noisy inputs. 
As before we're going to define an input signal, $X$, that has a short and long pulse, but now we're going to pollute $X$ with random noise.", "nsteps = 150\n\np1start = 10\np1duration = 5\n\np2start = 50\np2duration = 50\n\nshort_pulse = pulse(p1start, p1start + p1duration, nsteps) # short pulse\nlong_pulse = pulse(p2start, p2start + p2duration, nsteps) # long pulse\nX = short_pulse + long_pulse \n\n# change this `scale` argument to increase/decrease noise\nnoise = np.random.normal(loc=0, scale=0.2, size=nsteps) # mean=0, sd=0.2\n\nX = X + noise\n\n# setup parameters for Y and Z\nY = [0]\nbetay, alphay = 0.2, 0.1\nKxy = 0.5\n\nZ = [0]\nbetaz, alphaz = 0.2, 0.1\nKxz = 0.5\nKyz = 1\n\nfor i in range(nsteps):\n xnow = X[i]\n ynow, znow = Y[-1], Z[-1]\n ynew = ynow + dY(betay, Kxy, alphay, xnow, ynow)\n znew = znow + dZ(betaz, Kxz, Kyz, alphaz, xnow, ynow, znow)\n Y.append(ynew)\n Z.append(znew)\n\n# draw each trace as a subfigure\n# subfigures stacked in a vertical grid\n\nsubplot2grid((3,1),(0,0))\nplot(X, 'k', label='X', linewidth=1)\nlegend()\n\nsubplot2grid((3,1),(1,0))\nplot(Y, 'b', label='Y', linewidth=2)\nlegend()\n\nsubplot2grid((3,1),(2,0))\nplot(Z, 'r', label='Z', linewidth=2)\n\nvlines(p1start, min(Z),max(Z)*1.1,color='black',linestyle='dashed')\nannotate(\"pulse 1 on\", xy=(p1start,1),xytext=(40,20),\n textcoords='offset points',\n horizontalalignment=\"center\",\n verticalalignment=\"bottom\",\n arrowprops=dict(arrowstyle=\"->\",color='black',\n connectionstyle='arc3,rad=0.5',\n linewidth=1))\nvlines(p2start, min(Z),max(Z)*1.1,color='black',linestyle='dashed')\nannotate(\"pulse 2 on\", xy=(p2start,1),xytext=(-40,0),\n textcoords='offset points',\n horizontalalignment=\"center\",\n verticalalignment=\"bottom\",\n arrowprops=dict(arrowstyle=\"->\",color='black',\n connectionstyle='arc3,rad=0.5',\n linewidth=1))\nlegend()\npass", "To Explore\nIn the code cell above, try changing the duration of the first pulse and the scale of the noise (see comments in 
code) to get a sense of how good a filter the FFL is. Is there a bias to the filtering with respect to turn-on versus turn-off?\nOPTIONAL: Dynamics of Y and Z in the Coherent FFL\nAs before we can solve for Y as a function of time and calculate what its steady state value will be:\n$$\nY(t) = Y_{st}(1-e^{-\\alpha_{y}t})\n$$\nand \n$$\nY_{st}=\\frac{\\beta_y}{\\alpha_y}\n$$\nHow about $Z$?\nSince $Z$ is governed by an AND function it needs both $X$ and $Y$ to be above their respective thresholds, $K_{xz}$ and $K_{yz}$. For the sake of simplicity let's assume that both $Y$ and $Z$ have the same threshold with respect to $X$, i.e. $K_{xy} = K_{xz}$. This allows us just to consider how long it takes for $Y$ to reach the threshold value $K_{yz}$. Given this we can calculate the delay before $Z$ turns on, $T_{\\mathrm{on}}$, as follows.\n$$\nY(T_{\\mathrm{on}}) = Y_{st}(1-e^{-\\alpha_y T_{\\mathrm{on}}}) = K_{yz}\n$$\nand solving for $T_{\\mathrm{on}}$ we find:\n$$\nT_{\\mathrm{on}} = \\frac{1}{\\alpha_y} \\log\\left[\\frac{1}{(1-K_{yz}/Y_{st})}\\right]\n$$\nThus we see that the delay before $Z$ turns on is a function of the degradation rate of $Y$ and the ratio between $Y_{st}$ and $K_{yz}$. \nExploring the Parameter space of $Z$'s turn-on time\nFrom the above formula, we see that there are two parameters that affect the turn-on time of $Z$ -- $\\alpha_y$ (the decay rate of $Y$) and the compound parameter $K_{yz}/Y_{st}$ (the threshold concentration at which $Y$ activates $Z$, relative to the steady state of $Y$). 
To explore the two-dimensional parameter space of $Z$'s $T_{\\mathrm{on}}$ we can create a contour plot.", "from matplotlib.ticker import MaxNLocator # import explicitly; used to pick contour levels\n\ndef Ton(alpha, KYratio):\n return (1.0/alpha) * log(1.0/(1.0-KYratio))\n\n## Create a contour plot for a range of alpha and Kyz/Yst\nx = alpha = linspace(0.01, 0.2, 100)\ny = KYratio = linspace(0.01, 0.99, 100)\nX,Y = meshgrid(x, y)\n\nZ = Ton(X,Y)\nlevels = MaxNLocator(nbins=20).tick_values(Z.min(), Z.max())\n\nim = contourf(X,Y,Z, cmap=cm.inferno_r, levels=levels)\ncontour(X, Y, Z, levels,\n colors=('k',),\n linewidths=(0.5,))\ncolorbar(im)\nxlabel('alpha')\nylabel(\"Kyz/Yst\")\npass", "Type 1 Coherent FFLs can act as Sign-Sensitive Delays\nAs discussed in the article by Shen-Orr et al. a feed forward loop of the type we've just discussed can act as a type of filter -- a sign-sensitive delay that keeps $Z$ from firing in response to transient noisy signals from $X$, but shuts down $Z$ immediately once the signal from $X$ is removed. \nAn Incoherent FFL\nConsider the FFL illustrated in the figure below. \n\nIn this incoherent FFL, the logic function that regulates $Z$ is \"X AND NOT Y\". That is $Z$ turns on once $X$ is above a given threshold, but only stays on fully as long as $Y$ is below another threshold. Again for simplicity we assume $K_{xy} = K_{xz}$. \nDynamics of Y\nAs before, the dynamics of $Y$ are described by:\n$$\n\\frac{dY}{dt} = \\beta_y\\ \\Theta(X > K_{xy}) - \\alpha_{y}Y\n$$\nand \n$$\nY(t) = Y_{st}(1-e^{-\\alpha_{y}t})\n$$\nDynamics of Z\nTo describe $Z$ we consider two phases - 1) while $Y < K_{yz}$ and 2) while $Y > K_{yz}$. \nZ, Phase 1\nFor the first phase:\n$$\n\\frac{dZ}{dt} = \\beta_z\\ \\Theta(X > K_{xz}) - \\alpha_{z}Z\n$$\nand\n$$\nZ(t) = Z_{m}(1-e^{-\\alpha_{z}t})\n$$\nAs we did in the case of the coherent FFL, we can calculate the time until $Y$ reaches the threshold $K_{yz}$. 
We'll call this $T_{\\mathrm{rep}}$ and it is the same formula we found for $T_{\\mathrm{on}}$ previously.\n$$\nT_{\\mathrm{rep}} = \\frac{1}{\\alpha_y \\log[\\frac{1}{1-K_{yz}/Y_{st}}]}\n$$\nZ, Phase 2\nAfter a delay, $T_{\\mathrm{rep}}$, $Y$ starts to repress the transcription of $Z$ and $Z$ decays to a new lower steady state, $Z_{st} = \\beta_{z}^{'}/\\alpha$. The value of $\\beta_{z}^{'}$ depends on how leaky the repression of $Z$ is by $Y$. \nThe dynamics of $Z$ in Phase 2 is given by:\n$$\nZ(t) = Z_{st} + (Z_0 - Z_{st})e^{-\\alpha_{z}(t-T_{\\mathrm{rep}})}\n$$\nwhere\n$$\nZ_0 = Z_{m}(1-e^{-\\alpha_{z}T_{\\mathrm{rep}}})\n$$\nCombining the two phases of Z\nWe can combine the two phases of $Z$ into a single function:\n$$\nf(X,Y) = \\beta_z\\Theta(X > K_{xz} \\land Y < K_{yz}) + \\beta_{z}^{'}\\Theta(Y \\geq K_{yz}) - \\alpha_z Z\n$$", "## Write a Python function that represents dZ/dt for the Incoherent FFL\n## our dY function previously defined stays the same\n\ndef dZ_incoh(B1,B2,Kx,Ky,a,X,Y,Z):\n pass # define the function here\n\ndef dZ_incoh(B1,B2,Kx,Ky,a,X,Y,Z):\n theta = 0\n B = 0\n if (X > Kx) and (Y < Ky):\n theta = 1\n B = B1\n elif (X > Kx) and (Y >= Ky):\n theta = 1\n B = B2\n return B * theta - a * Z\n\n\n## Write your simulation here\n\nnsteps = 150\nshort_pulse = pulse(20, 25, nsteps) # 5 sec pulse\nlong_pulse = pulse(50, 100, nsteps) # 50 sec pulse\nX = short_pulse + long_pulse\n\n# setup parameters for Y and Z\nY = [0]\nbetay, alphay = 0.2, 0.1\nKxy = 0.5\n\nZ = [0]\nbetaz1, betaz2 = 0.2, 0.001\nalphaz = 0.1\nKxz = 0.5\nKyz = 0.5\n\nfor i in range(nsteps):\n xnow = X[i]\n ynow, znow = Y[-1], Z[-1]\n \n ynew = ynow + dY(betay, Kxy, alphay, xnow, ynow)\n znew = znow + dZ_incoh(betaz1, betaz2, Kxz, Kyz, alphaz, xnow, ynow, znow)\n \n Y.append(ynew)\n Z.append(znew)\n \n\n# draw each trace as a subfigure\n# subfigures stacked in a vertical grid\n\nsubplot2grid((3,1),(0,0))\nplot(X, 'k', label='X', 
linewidth=1)\nlegend()\nylim(0,1.1)\n\nsubplot2grid((3,1),(1,0))\nplot(Y, 'b', label='Y', linewidth=2)\nlegend()\nylim(0,2.1)\n\nsubplot2grid((3,1),(2,0))\nplot(Z, 'r', label='Z', linewidth=2)\nlegend()\nylim(0,0.7)\npass", "Dynamics of the Incoherent FFL\nNote that the stimulus amount of $Z$ in the system initially increases, but then decreases to a lower steady even when the initial stimulus persists. This system thus generates pulse-like dynamics to a persistent signal. How pulse-like the signal is depends on the ratio of $\\beta_z$ to $\\beta_{z}^{'}$. We define the repression factor, $F$, as follows:\n$$\nF = \\frac{\\beta_z}{\\beta_{z}^{'}} = \\frac{Z_m}{Z_{st}}\n$$\n<h1> <font color='firebrick'> Modeling Challenge </font> </h1>\n\nSee if you can come up with a reasonably small set of coupled ODEs for one of the signaling or regulatory networks you've learned about in this mini-term." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
yashdeeph709/Algorithms
PythonBootCamp/Complete-Python-Bootcamp-master/GUI/5 - Widget Styling.ipynb
apache-2.0
[ "Widget Styling\nIn this lecture we will learn about the various ways to style widgets!", "%%html\n<style>\n.example-container { background: #999999; padding: 2px; min-height: 100px; }\n.example-container.sm { min-height: 50px; }\n.example-box { background: #9999FF; width: 50px; height: 50px; text-align: center; vertical-align: middle; color: white; font-weight: bold; margin: 2px;}\n.example-box.med { width: 65px; height: 65px; } \n.example-box.lrg { width: 80px; height: 80px; } \n</style>\n\nimport ipywidgets as widgets\nfrom IPython.display import display", "Basic styling\nThe widgets distributed with IPython can be styled by setting the following traits:\n\nwidth \nheight \nbackground_color \nborder_color \nborder_width \nborder_style \nfont_style \nfont_weight \nfont_size \nfont_family \n\nThe example below shows how a Button widget can be styled:", "button = widgets.Button(\n description='Hello World!',\n width=100, # Integers are interpreted as pixel measurements.\n height='2em', # em is a valid HTML unit of measurement.\n color='lime', # Colors can be set by name,\n background_color='#0022FF', # and also by color code.\n border_color='cyan')\ndisplay(button)", "Parent/child relationships\nTo display widget A inside widget B, widget A must be a child of widget B. Widgets that can contain other widgets have a children attribute. This attribute can be set via a keyword argument in the widget's constructor or after construction. Calling display on an object with children automatically displays the children.", "from IPython.display import display\n\nfloat_range = widgets.FloatSlider()\nstring = widgets.Text(value='hi')\ncontainer = widgets.Box(children=[float_range, string])\n\ncontainer.border_color = 'red'\ncontainer.border_style = 'dotted'\ncontainer.border_width = 3\ndisplay(container) # Displays the `container` and all of its children.", "After the parent is displayed\nChildren can be added to parents after the parent has been displayed. 
The parent is responsible for rendering its children.", "container = widgets.Box()\ncontainer.border_color = 'red'\ncontainer.border_style = 'dotted'\ncontainer.border_width = 3\ndisplay(container)\n\nint_range = widgets.IntSlider()\ncontainer.children=[int_range]", "Fancy boxes\nIf you need to display a more complicated set of widgets, there are specialized containers that you can use. To display multiple sets of widgets, you can use an Accordion or a Tab in combination with one Box per set of widgets (as seen below). The \"pages\" of these widgets are their children. To set the titles of the pages, use set_title.\nAccordion", "name1 = widgets.Text(description='Location:')\nzip1 = widgets.BoundedIntText(description='Zip:', min=0, max=99999)\npage1 = widgets.Box(children=[name1, zip1])\n\nname2 = widgets.Text(description='Location:')\nzip2 = widgets.BoundedIntText(description='Zip:', min=0, max=99999)\npage2 = widgets.Box(children=[name2, zip2])\n\naccord = widgets.Accordion(children=[page1, page2], width=400)\ndisplay(accord)\n\naccord.set_title(0, 'From')\naccord.set_title(1, 'To')", "TabWidget", "name = widgets.Text(description='Name:', padding=4)\ncolor = widgets.Dropdown(description='Color:', padding=4, options=['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'])\npage1 = widgets.Box(children=[name, color], padding=4)\n\nage = widgets.IntSlider(description='Age:', padding=4, min=0, max=120, value=50)\ngender = widgets.RadioButtons(description='Gender:', padding=4, options=['male', 'female'])\npage2 = widgets.Box(children=[age, gender], padding=4)\n\ntabs = widgets.Tab(children=[page1, page2])\ndisplay(tabs)\n\ntabs.set_title(0, 'Name')\ntabs.set_title(1, 'Details')", "Alignment\nMost widgets have a description attribute, which allows a label for the widget to be defined.\nThe label of the widget has a fixed minimum width.\nThe text of the label is always right aligned and the widget is left aligned:", 
"display(widgets.Text(description=\"a:\"))\ndisplay(widgets.Text(description=\"aa:\"))\ndisplay(widgets.Text(description=\"aaa:\"))", "If a label is longer than the minimum width, the widget is shifted to the right:", "display(widgets.Text(description=\"a:\"))\ndisplay(widgets.Text(description=\"aa:\"))\ndisplay(widgets.Text(description=\"aaa:\"))\ndisplay(widgets.Text(description=\"aaaaaaaaaaaaaaaaaa:\"))", "If a description is not set for the widget, the label is not displayed:", "display(widgets.Text(description=\"a:\"))\ndisplay(widgets.Text(description=\"aa:\"))\ndisplay(widgets.Text(description=\"aaa:\"))\ndisplay(widgets.Text())", "Flex boxes\nWidgets can be aligned using the FlexBox, HBox, and VBox widgets.\nApplication to widgets\nWidgets display vertically by default:", "buttons = [widgets.Button(description=str(i)) for i in range(3)]\ndisplay(*buttons)", "Using hbox\nTo make widgets display horizontally, they can be children of an HBox widget.", "container = widgets.HBox(children=buttons)\ndisplay(container)", "Visibility\nThe visible property of widgets can be used to hide or show widgets that have already been displayed (as seen below). The visible property can be:\n* True - the widget is displayed\n* False - the widget is hidden, and the empty space where the widget would be is collapsed\n* None - the widget is hidden, and the empty space where the widget would be is shown", "w1 = widgets.Latex(value=\"First line\")\nw2 = widgets.Latex(value=\"Second line\")\nw3 = widgets.Latex(value=\"Third line\")\ndisplay(w1, w2, w3)\n\nw2.visible=None\n\nw2.visible=False\n\nw2.visible=True", "Another example\nIn the example below, a form is rendered, which conditionally displays widgets depending on the state of other widgets. 
Try toggling the student check-box.", "form = widgets.VBox()\nfirst = widgets.Text(description=\"First:\")\nlast = widgets.Text(description=\"Last:\")\n\nstudent = widgets.Checkbox(description=\"Student:\", value=False)\nschool_info = widgets.VBox(visible=False, children=[\n widgets.Text(description=\"School:\"),\n widgets.IntText(description=\"Grade:\", min=0, max=12)\n ])\n\npet = widgets.Text(description=\"Pet:\")\nform.children = [first, last, student, school_info, pet]\ndisplay(form)\n\ndef on_student_toggle(name, value):\n if value:\n school_info.visible = True\n else:\n school_info.visible = False\nstudent.on_trait_change(on_student_toggle, 'value')\n", "Conclusion\nYou should now have an understanding of how to style widgets!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/cnrm-cerfacs/cmip6/models/cnrm-cm6-1-hr/atmoschem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: CNRM-CERFACS\nSource ID: CNRM-CM6-1-HR\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:52\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-cm6-1-hr', 'atmoschem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n5. Key Properties --&gt; Tuning Applied\n6. Grid\n7. Grid --&gt; Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --&gt; Surface Emissions\n11. Emissions Concentrations --&gt; Atmospheric Emissions\n12. Emissions Concentrations --&gt; Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --&gt; Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric chemistry model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmospheric chemistry model code.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Chemistry Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "1.8. Coupling With Chemical Reactivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry transport scheme turbulence is couple with chemical reactivity?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemical species advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. 
Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Split Operator Chemistry Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemistry (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Split Operator Alternate Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\n?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.6. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.7. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.2. Convection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.3. Precipitation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.4. Emissions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.5. Deposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.6. Gas Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. 
This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.9. Photo Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.10. Aerosols\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. 
Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the atmospheric chemistry grid", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Does the atmospheric chemistry grid match the atmosphere grid?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview of transport implementation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Use Atmospheric Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric chemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Transport Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric chemistry emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Emissions Concentrations --&gt; Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. 
Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Emissions Concentrations --&gt; Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. 
Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. 
Emissions Concentrations --&gt; Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview gas phase atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. 
Number Of Bimolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.4. Number Of Termolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.7. Number Of Advected Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.8. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.9. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.10. Wet Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.11. Wet Oxidation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet oxidation included? 
Oxidation describes the loss of electrons or an increase in oxidation state by a molecule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry stratospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview stratospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n", "14.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n", "14.4. 
Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.5. Sedimentation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sedimentation included in the stratospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the stratospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview tropospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n", "15.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.5. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the tropospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric photo chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Number Of Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17. Photo Chemistry --&gt; Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nPhotolysis scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n", "17.2. 
Environmental Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ML4DS/ML4all
C4.Classification_SVM/svm_professor.ipynb
mit
[ "Support Vector Machines\nAuthors: Jesús Cid Sueiro (jcid@tsc.uc3m.es)\n Jerónimo Arenas García (jarenas@tsc.uc3m.es)\n\nThis notebook is a compilation of material taken from several sources:\n\nThe sklearn documentation\nA notebook by Jake Vanderplas\n\nWikipedia\nNotebook version: 1.0 (Oct 28, 2015)\n 1.1 (Oct 27, 2016)\n 2.0 (Nov 2, 2017)\n 2.1 (Oct 20, 2018)\n 2.2 (Oct 20, 2019)\nChanges: \nv.1.0 - First version\nv.1.1 - Typo correction and illustrative figures for linear SVM\nv.2.0 - Compatibility with Python 3 (backcompatible with Python 2.7)\nv.2.1 - Minor corrections on the notation\nv.2.2 - Minor equation errors. Reformatted hyperlinks. Restoring broken visualization of images in some Jupyter versions.\nv.2.3 - Notation revision", "from __future__ import print_function\n\n# To visualize plots in the notebook\n%matplotlib inline\n\n# Imported libraries\n#import csv\n#import random\n#import matplotlib\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits import mplot3d\n#import pylab\n\nimport numpy as np\n#from sklearn.preprocessing import PolynomialFeatures\nfrom sklearn import svm\n# (sklearn.datasets.samples_generator was removed in scikit-learn 0.24;\n# these generators now live directly in sklearn.datasets)\nfrom sklearn.datasets import make_blobs\nfrom sklearn.datasets import make_circles\n\nfrom ipywidgets import interact", "1. Introduction\n<small> <font color=\"blue\"> [Source: sklearn documentation]</font> </small>\nSupport vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outlier detection.\nThe advantages of support vector machines are:\n\nEffective in high dimensional spaces.\nStill effective in cases where the number of dimensions is greater than the number of samples.\nUses a subset of training points in the decision function (called support vectors), so it is also memory efficient.\nVersatile: different Kernel functions can be specified for the decision function.\n\nThe disadvantages of support vector machines include:\n\nSVMs do not directly provide probability estimates.\n\n2. 
Motivating Support Vector Machines\n<small> <font color=\"blue\"> [Source: A notebook by Jake Vanderplas] </font> </small>\nSupport Vector Machines (SVMs) are a kind of discriminative classifier: that is, they draw a boundary between clusters of data without making any explicit assumption about the probability model underlying the data generation process.\nLet's show a quick example of support vector classification. First we need to create a dataset:", "X, y = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60)\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')\nplt.xlabel(\"$x_0$\", fontsize=14)\nplt.ylabel(\"$x_1$\", fontsize=14)\nplt.axis('equal')\nplt.show()", "A discriminative classifier attempts to draw a line between the two sets of data. Immediately we see an inconvenience: such a problem is ill-posed! For example, we could come up with several possibilities which perfectly discriminate between the classes in this example:", "xfit = np.linspace(-1, 3.5)\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')\n\nfor m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:\n plt.plot(xfit, m * xfit + b, '-k')\n\nplt.xlim(-1, 3.5);\nplt.xlabel(\"$x_0$\", fontsize=14)\nplt.ylabel(\"$x_1$\", fontsize=14)\nplt.axis('equal')\nplt.show()", "These are three very different separators which perfectly discriminate between these samples. Depending on which you choose, a new data point will be classified almost entirely differently! How can we improve on this?\nSupport Vector Machines (SVM) select the decision boundary maximizing the margin. The margin of a classifier is defined as twice the minimum signed distance between the decision boundary and the training samples. By signed we mean that the distance to misclassified samples is counted negatively. Thus, if the classification problem is \"separable\" (i.e. 
if there exists a decision boundary with zero errors in the training set), the SVM will choose the zero-error decision boundary that is \"as far as possible\" from the training data.\nIn summary, what an SVM does is to not only draw a line, but consider the \"sample free\" region about the line. Here's an example of what it might look like:", "xfit = np.linspace(-1, 3.5)\nplt.scatter(X[:,0], X[:,1], c=y, s=50, cmap='copper')\n\nfor m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:\n yfit = m * xfit + b\n plt.plot(xfit, yfit, '-k')\n plt.fill_between(xfit, yfit-d, yfit+d, edgecolor='none', \n color='#AAAAAA', alpha=0.4)\n\nplt.xlim(-1, 3.5)\nplt.xlabel(\"$x_0$\", fontsize=14)\nplt.ylabel(\"$x_1$\", fontsize=14)\nplt.axis('equal')\nplt.show()", "Notice here that if we want to maximize this width, the middle fit is clearly the best. This is the intuition of the SVM, which optimizes a linear discriminant model in conjunction with a margin representing the perpendicular distance between the datasets.\n3. Linear SVM\n<small> <font color=\"blue\"> [Source: adapted from Wikipedia]</font> </small>\nIn order to present the SVM in a formal way, consider a training dataset $\\mathcal{D} = \\left\\{ (\\mathbf{x}_k, y_k) \\mid \\mathbf{x}_k\\in \\Re^M,\\, y_k \\in \\{-1,1\\}, k=0,\\ldots, {K-1}\\right\\}$, where the binary symmetric label $y_k\\in \\{-1,1\\}$ indicates the class to which the point $\\mathbf{x}_k$ belongs. Each $\\mathbf{x}_k$ is an $M$-dimensional real vector. We want to find the maximum-margin hyperplane that divides the points having $y_k=1$ from those having $y_k=-1$. \nAny hyperplane can be written as the set of points $\\mathbf{x}$ satisfying\n$$\n\\mathbf{w}^\\intercal \\mathbf{x} - b=0,\n$$\nwhere ${\\mathbf{w}}$ denotes the (not necessarily normalized) normal vector to the hyperplane. 
The parameter $\\tfrac{b}{\\|\\mathbf{w}\\|}$ determines the offset of the hyperplane from the origin along the normal vector ${\\mathbf{w}}$.\nIf the training data are linearly separable, we can select two parallel hyperplanes in a way that they separate the data and there are no points between them, and then try to maximize their distance. The region bounded by them is called \"the margin\". These hyperplanes can be described by the equations\n$$\\mathbf{w}^\\intercal \\mathbf{x} - b=1$$\nand\n$$\\mathbf{w}^\\intercal \\mathbf{x} - b=-1.$$\nNote that the two equations above can represent any two parallel hyperplanes in $\\Re^M$. Essentially, the direction of vector $\\mathbf{w}$ determines the orientation of the hyperplanes, whereas parameter $b$ and the norm of $\\mathbf{w}$ can be used to select their exact location.\nTo compute the distance between the hyperplanes, we can obtain the projection of vector ${\\mathbf x}_1 - {\\mathbf x}_2$, where ${\\mathbf x}_1$ and ${\\mathbf x}_2$ are points from each of the hyperplanes, onto a unitary vector orthonormal to the hyperplanes:\n<img src=\"./figs/margin_calculation.png\" width=\"500\">\n$$\\text{Distance between hyperplanes} = \\left[\\frac{\\mathbf{w}}{\\|\\mathbf{w}\\|}\\right]^\\top ({\\mathbf x}_1 - {\\mathbf x}_2) = \\frac{\\mathbf{w}^\\top {\\mathbf x}_1 - \\mathbf{w}^\\top {\\mathbf x}_2}{\\|\\mathbf{w}\\|} = \\frac{2}{\\|\\mathbf{w}\\|}.$$\nTherefore, to maximize the distance between the planes we want to minimize $\\|\\mathbf{w}\\|$.\nAs we also have to prevent data points from falling into the margin, we add the following constraints: for each $k$ either\n\\begin{align}\n\\mathbf{w}^\\top \\mathbf{x}_k - b &\\ge +1, \\qquad\\text{ if }\\;\\;y_k=1, \\qquad \\text{or} \\\n\\mathbf{w}^\\top \\mathbf{x}_k - b &\\le -1, \\qquad\\text{ if }\\;\\;y_k=-1.\n\\end{align}\nThis can be rewritten as:\n$$\ny_k(\\mathbf{w}^\\top \\mathbf{x}_k - b) \\ge 1, \\quad \\text{ for all } 0 \\le k \\le K-1.\n$$\nWe can put this 
together to get the optimization problem:\n$$\n(\\mathbf{w}^*,b^*) = \\arg\\min_{(\\mathbf{w},b)} \\|\\mathbf{w}\\| \\\n\\text{subject to: } \ny_k(\\mathbf{w}^\\top \\mathbf{x}_k - b) \\ge 1, \\, \\text{ for any } k = 0, \\dots, {K-1}\n$$\nThis optimization problem is difficult to solve because it depends on $\\|\\mathbf{w}\\|$, the norm of $\\mathbf{w}$, which involves a square root. Fortunately it is possible to alter the minimization objective $\\|\\mathbf{w}\\|$ by substituting it with $\\tfrac{1}{2}\\|\\mathbf{w}\\|^2$ (the factor of $\\frac{1}{2}$ being used for mathematical convenience) without changing the solution (the minimum of the original and the modified equation have the same $\\mathbf{w}$ and $b$):\n$$\n(\\mathbf{w}^*,b^*) = \\arg\\min_{(\\mathbf{w},b)} \\frac{1}{2}\\|\\mathbf{w}\\|^2 \\\n\\text{subject to: } \ny_k(\\mathbf{w}^\\top \\mathbf{x}_k - b) \\ge 1, \\, \\text{ for any } k = 0, \\dots, {K-1}\n$$\nThis is a particular case of a quadratic programming problem. \n3.1. Primal form\nThe optimization problem stated in the preceding section can be solved by means of a generalization of the Lagrange method of multipliers for inequality constraints, using the so called Karush–Kuhn–Tucker (KKT) multipliers $\\boldsymbol{\\alpha}$. 
According to it, the constrained problem can be expressed as\n$$(\\mathbf{w}^*,b^*, \\boldsymbol{\\alpha}^*) = \\arg\\min_{\\mathbf{w},b } \\max_{\\boldsymbol{\\alpha}\\geq 0 } \\left\\{ \\frac{1}{2}\\|\\mathbf{w}\\|^2 - \\sum_{k=0}^{K-1}{\\alpha_k\\left[y_k(\\mathbf{w}^\\top \\mathbf{x}_k - b)-1\\right]} \\right\\}\n$$\nthat is, we look for a saddle point.\nA key result in convex optimization theory is that, for the kind of optimization problems discussed here (see here, for instance), the max and min operators are interchangeable, so that\n$$\n(\\mathbf{w}^*,b^*, \\boldsymbol{\\alpha}^*) = \\arg\\max_{\\boldsymbol{\\alpha}\\geq 0 } \\min_{\\mathbf{w},b } \\left\\{ \\frac{1}{2}\\|\\mathbf{w}\\|^2 - \\sum_{k=0}^{K-1}{\\alpha_k\\left[y_k(\\mathbf{w}^\\top \\mathbf{x}_k - b)-1\\right]} \\right\\}\n$$\nNote that the inner minimization problem is now quadratic in $\\mathbf{w}$ and, thus, the minimum can be found by differentiation:\n$$\n\\mathbf{w}^* = \\sum_{k=0}^{K-1}{\\alpha_k y_k\\mathbf{x}_k}.\n$$\n3.1.1. Support Vectors\nIn view of the optimization problem, we can check that all the points which can be separated as $y_k(\\mathbf{w}^\\top \\mathbf{x}_k - b) - 1 > 0 $ do not matter since we must set the corresponding $\\alpha_k$ to zero. Therefore, only a few $\\alpha_k$ will be greater than zero. The corresponding $\\mathbf{x}_k$ are known as support vectors.\nIt can be seen that the optimum parameter vector $\\mathbf{w}^\\ast$ can be expressed in terms of the support vectors only:\n$$\n\\mathbf{w}^* = \\sum_{k\\in {\\cal{S}}_{SV}}{\\alpha_k y_k\\mathbf{x}_k}.\n$$\nwhere ${\\cal{S}}_{SV}$ is the set of indexes associated to support vectors.\n3.1.2. The computation of $b$\nSupport vectors lie on the margin and satisfy $y_k(\\mathbf{w}^\\top \\mathbf{x}_k - b) = 1$. 
From this condition, we can obtain the value of $b$, since for any support vector:\n$$\\mathbf{w}^\\top\\mathbf{x}_k - b = \\frac{1}{y_k} = y_k \\iff b = \\mathbf{w}^\\top\\mathbf{x}_k - y_k\n$$\nThis estimate of $b$, the centerpoint of the division, depends only on a single pair $y_k$ and $x_k$. We may get a more robust estimate of the center by averaging over all of the $N_{SV}$ support vectors, if we believe the population mean is a good estimate of the midpoint, so in practice, $b$ is often computed as:\n$$\nb = \\frac{1}{N_{SV}} \\sum_{\\mathbf{x}_k\\in {\\cal{S}}_{SV}}{(\\mathbf{w}^\\top\\mathbf{x}_k - y_k)}\n$$\n3.2. Dual form\nWriting the classification rule in its unconstrained dual form reveals that the maximum-margin hyperplane and therefore the classification task is only a function of the support vectors, the subset of the training data that lie on the margin.\nUsing the fact that $\\|\\mathbf{w}\\|^2 = \\mathbf{w}^\\top \\mathbf{w}$ and substituting $\\mathbf{w} = \\sum_{k=0}^{K-1}{\\alpha_k y_k\\mathbf{x}_k}$, we obtain\n\\begin{align}\n(b^*, \\boldsymbol{\\alpha}^*) \n &= \\arg\\max_{\\boldsymbol{\\alpha}\\geq 0 } \n \\min_b \\left\\{\n \\sum_{k=0}^{K-1}\\alpha_k - \n \\frac{1}{2}\n \\sum_{k=0}^{K-1} \\sum_{j=0}^{K-1} {\\alpha_k \\alpha_j y_k y_j \\mathbf{x}_k^\\top\\mathbf{x}_j}\n + b \\sum_{k=0}^{K-1}\\alpha_k y_k\n\\right\\}\n\\end{align}\nNote that, if $\\sum_{k=0}^{K-1}\\alpha_k y_k \\neq 0$ the optimal value of $b$ is $+\\infty$ or $-\\infty$, and \n\\begin{align}\n\\min_b \\left\\{\n\\sum_{k=0}^{K-1}\\alpha_k - \n\\frac{1}{2}\n\\sum_{k=0}^{K-1} \\sum_{j=0}^{K-1} {\\alpha_k \\alpha_j y_k y_j \\mathbf{x}_k^\\top\\mathbf{x}_j}\n+ b \\sum_{k=0}^{K-1}\\alpha_k y_k\n\\right\\} = -\\infty.\n\\end{align}\nTherefore, any $\\boldsymbol{\\alpha}$ satisfying $\\sum_{k=0}^{K-1}\\alpha_k y_k \\neq 0$ is suboptimal, so that the optimal multipliers must satisfy the condition $\\sum_{k=0}^{K-1}\\alpha_k y_k = 0$.\nSummarizing, the dual formulation of the optimization problem 
is\n$$\n\\boldsymbol{\\alpha}^* = \\arg\\max_{\\boldsymbol{\\alpha}\\geq 0} \\sum_{k=0}^{K-1} \\alpha_k - \n \\frac12 \\sum_{k,j} \\alpha_k \\alpha_j y_k y_j\\;\\kappa(\\mathbf{x}_k, \\mathbf{x}_j) \\\n\\text{subject to: } \\qquad \\sum_{k=0}^{K-1} \\alpha_k y_k = 0.\n$$\nwhere the kernel $\\kappa(\\cdot)$ is defined by $\\kappa(\\mathbf{x}_k,\\mathbf{x}_j)=\\mathbf{x}_k^\\top\\mathbf{x}_j$.\nMany implementations of the SVM use this dual formulation. They proceed in three steps:\n\n\nSolve the dual problem to obtain $\\boldsymbol{\\alpha}^*$. Usually, only a small number of $\\alpha_k^*$ are nonzero. The corresponding values of ${\\bf x}_k$ are called the support vectors.\n\n\nCompute $\\mathbf{w}^* = \\sum_{k=0}^{K-1} \\alpha_k^* y_k\\mathbf{x}_k$\n\n\nCompute $b^* = \\frac{1}{N_{SV}} \\sum_{\\alpha_k^*\\neq 0}{(\\mathbf{w}^{*\\top}\\mathbf{x}_k - y_k)}\n$\n\n\n4. Fitting a Support Vector Machine\n<small> <font color=\"blue\"> [Source: A notebook by Jake Vanderplas] </font> </small>\nNow we'll fit a Support Vector Machine Classifier to these points.", "clf = svm.SVC(kernel='linear')\nclf.fit(X, y)", "To better visualize what's happening here, let's create a quick convenience function that will plot SVM decision boundaries for us:", "def plot_svc_decision_function(clf, ax=None):\n \"\"\"Plot the decision function for a 2D SVC\"\"\"\n if ax is None:\n ax = plt.gca()\n x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)\n y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)\n Y, X = np.meshgrid(y, x)\n P = np.zeros_like(X)\n for i, xi in enumerate(x):\n for j, yj in enumerate(y):\n P[i, j] = clf.decision_function(np.array([xi, yj]).reshape(1,-1))\n # plot the margins\n ax.contour(X, Y, P, colors='k',\n levels=[-1, 0, 1], alpha=0.5,\n linestyles=['--', '-', '--'])\n\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')\nplot_svc_decision_function(clf);", "Notice that the dashed lines touch a couple of the points: these points are the support vectors. 
In scikit-learn, these are stored in the support_vectors_ attribute of the classifier:", "plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],\n s=200, marker='s');\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')\nplot_svc_decision_function(clf)\n", "Let's use IPython's interact functionality to explore how the distribution of points affects the support vectors and the discriminative fit.", "def plot_svm(N=10):\n X, y = make_blobs(n_samples=200, centers=2,\n random_state=0, cluster_std=0.60)\n X = X[:N]\n y = y[:N]\n clf = svm.SVC(kernel='linear')\n clf.fit(X, y)\n plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')\n plt.xlim(-1, 4)\n plt.ylim(-1, 6)\n plot_svc_decision_function(clf, plt.gca())\n plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],\n s=200, facecolors='none')\n \n# Note: plot_svm only accepts N; passing an extra kernel kwarg to interact\n# would make it call plot_svm(N=..., kernel=...) and raise a TypeError\ninteract(plot_svm, N=[10, 200])", "Notice the unique thing about SVM is that only the support vectors matter: that is, if you moved any of the other points without letting them cross the decision boundaries, they would have no effect on the classification results!\n5. Non-separable problems.\n<small> <font color=\"blue\"> [Source: adapted from Wikipedia]</font> </small>\nIn 1995, Corinna Cortes and Vladimir N. Vapnik suggested a modified maximum margin idea that allows for mislabeled examples. If there exists no hyperplane that can split the positive and negative samples, the Soft Margin method will choose a hyperplane that splits the examples as cleanly as possible, while still maximizing the distance to the nearest cleanly split examples. 
The method introduces non-negative slack variables, $\\xi_k$, which measure the degree of misclassification of the data $\\mathbf{x}_k$\n$$y_k(\\mathbf{w}^\\top\\mathbf{x}_k - b) \\ge 1 - \\xi_k \\quad k=0,\\ldots, K-1.\n$$\nThe objective function is then increased by a function which penalizes non-zero $\\xi_k$, and the optimization becomes a trade-off between a large margin and a small error penalty. If the penalty function is linear, the optimization problem becomes:\n$$(\\mathbf{w}^*,\\mathbf{\\xi}^*, b^*) = \\arg\\min_{\\mathbf{w},\\mathbf{\\xi}, b } \\left\\{\\frac{1}{2} \\|\\mathbf{w}\\|^2 + C \\sum_{k=0}^{K-1} \\xi_k \\right\\} \\\n\\text{subject to: } \\quad y_k(\\mathbf{w}^\\intercal\\mathbf{x}_k - b) \\ge 1 - \\xi_k, \\quad \\xi_k \\ge 0, \\quad k=0,\\ldots, K-1.\n$$\nThis constraint along with the objective of minimizing $\\|\\mathbf{w}\\|$ can be solved using KKT multipliers as done above. One then has to solve the following problem:\n$$\n\\arg\\min_{\\mathbf{w}, \\mathbf{\\xi}, b } \\max_{\\boldsymbol{\\alpha}, \\boldsymbol{\\beta} }\n\\left\\{ \\frac{1}{2}\\|\\mathbf{w}\\|^2 \n+ C \\sum_{k=0}^{K-1} \\xi_k\n- \\sum_{k=0}^{K-1} {\\alpha_k\\left[y_k(\\mathbf{w}^\\top \\mathbf{x}_k - b) -1 + \\xi_k\\right]}\n- \\sum_{k=0}^{K-1} \\beta_k \\xi_k \\right\\}\\\n\\text{subject to: } \\quad \n\\alpha_k, \\beta_k \\ge 0.\n$$\nA similar analysis to that in the separable case can be applied to show that the dual formulation of the optimization problem is\n$$\n \\boldsymbol{\\alpha}^* = \\arg\\max_{0 \\leq \\alpha_k \\leq C} \\sum_{k=0}^{K-1} \\alpha_k - \n \\frac12 \\sum_{k,j} \\alpha_k \\alpha_j y_k y_j \\;\\kappa(\\mathbf{x}_k, \\mathbf{x}_j) \\\n\\text{subject to: } \\qquad \\sum_{k=0}^{K-1} \\alpha_k y_k = 0.\n$$\nNote that the only difference with the separable case is given by the constraints $\\alpha_k \\leq C$.\n6. 
Nonlinear classification\n<small> <font color=\"blue\"> [Source: adapted from Wikipedia]</font> </small>\nThe original optimal hyperplane algorithm proposed by Vapnik in 1963 was a linear classifier. However, in 1992, Bernhard E. Boser, Isabelle M. Guyon and Vladimir N. Vapnik suggested a way to create nonlinear classifiers by applying the kernel trick to maximum-margin hyperplanes. The resulting algorithm is formally similar, except that every dot product is replaced by a nonlinear kernel function. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformation may be nonlinear and the transformed space high dimensional; thus though the classifier is a hyperplane in the high-dimensional feature space, it may be nonlinear in the original input space.\n<img src=\"./figs/kernel.png\" width=\"400\">\nThe kernel is related to the transform $\\phi(\\mathbf{x})$ by the equation $\\kappa(\\mathbf{x}, \\mathbf{x}') = \\phi(\\mathbf{x})^\\top \\phi(\\mathbf{x}')$. However, note that we do not need to explicitly compute $\\phi(\\mathbf{x})$, as long as we can express all necessary calculations in terms of the kernel function only, as is the case for the optimization problem in the dual form.\nThe predictions of the SVM classifier can also be expressed in terms of kernels only, so that we never need to explicitly compute $\\phi(\\mathbf{x})$.\n$$\\begin{align}\\hat y({\\mathbf{x}}) & = {\\mathbf {w^\\ast}}^\\top \\phi(\\mathbf{x}) - b^\\ast \\ \\\n& = \\left[\\sum_{k \\in {\\cal{S}}_{SV}} \\alpha_k^\\ast y_k \\phi(\\mathbf{x}_k)\\right]^\\top {\\phi(\\mathbf{x})} - b^\\ast \\ \\\n& = - b^\\ast + \\sum_{k \\in {\\cal{S}}_{SV}} \\alpha_k^\\ast y_k \\; \\kappa(\\mathbf{x}_k, {\\mathbf{x}})\n\\end{align}$$\nSome common kernels include:\n\nGaussian: $\\kappa(\\mathbf{x},\\mathbf{x}')=\\exp(-\\gamma \\|\\mathbf{x} - \\mathbf{x}'\\|^2)$, for $\\gamma > 0$. Sometimes parametrized using $\\gamma=\\dfrac{1}{2 \\sigma^2}$. 
This is by far the most widely used kernel.\nPolynomial (homogeneous): $\\kappa(\\mathbf{x},\\mathbf{x}')=(\\mathbf{x}^\\top \\mathbf{x}')^d$\nPolynomial (inhomogeneous): $\\kappa(\\mathbf{x},\\mathbf{x}') = (\\mathbf{x}^\\top \\mathbf{x}' + 1)^d$\nHyperbolic tangent: $\\kappa(\\mathbf{x},\\mathbf{x}') = \\tanh(\\gamma \\mathbf{x}^\\top \\mathbf{x}'+c)$, for some (not every) $\\gamma > 0$ and $c < 0$.\n\n6.1. Example.\n<small> <font color=\"blue\"> [Source: A notebook by Jake Vanderplas] </font> </small>\nWhere SVM gets incredibly exciting is when it is used in conjunction with kernels.\nTo motivate the need for kernels, let's look at some data which is not linearly separable:", "X, y = make_circles(100, factor=.1, noise=.1)\n\nclf = svm.SVC(kernel='linear').fit(X, y)\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')\nplot_svc_decision_function(clf);", "Clearly, no linear discrimination will ever separate these data.\nOne way we can adjust this is to apply a kernel, which is some functional transformation of the input data.\nFor example, one simple model we could use is a radial basis function", "r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))", "If we plot this along with our data, we can see the effect of it:", "def plot_3D(elev=30, azim=30):\n ax = plt.subplot(projection='3d')\n ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='spring')\n ax.view_init(elev=elev, azim=azim)\n ax.set_xlabel('x')\n ax.set_ylabel('y')\n ax.set_zlabel('r')\n\ninteract(plot_3D, elev=[-90, 90], azip=(-180, 180));", "We can see that with this additional dimension, the data becomes trivially linearly separable!\nThis is a relatively simple kernel; SVM has a more sophisticated version of this kernel built-in to the process. 
This is accomplished by using the Gaussian kernel (kernel='rbf'), short for radial basis function:", "clf = svm.SVC(kernel='rbf', C=10)\nclf.fit(X, y)\n\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='copper')\nplot_svc_decision_function(clf)\nplt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],\n s=200, facecolors='none');", "Here there are effectively $K$ basis functions: one centered at each point! Through a clever mathematical trick, this computation proceeds very efficiently using the \"Kernel Trick\", without actually constructing the matrix of kernel evaluations.\nExercise: Apply the linear SVM and the SVM with Gaussian kernel to the discrimination of classes Versicolor and Virginica in the Iris Dataset, using atributes $x_0$ and $x_1$ only. Plot the corresponding decision boundaries and the support vectors. \n7. Hyperparameters\nNote that the SVM formulation has several free parameters (hyperparameters) that must be selected out of the optimization problem:\n\nThe free parameter $C$ used to solve non-separable problems.\nThe kernel parameters (for instance, parameter $\\gamma$ for the Gaussian kernel).\n\nThese parameters are usually adjusted using a cross-validation procedure." ]
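The primal–dual relation derived in Section 3 ($\mathbf{w}^* = \sum_k \alpha_k y_k \mathbf{x}_k$) can be checked numerically. The following is a minimal sketch, not part of the original notebook: it relies on the facts that scikit-learn's `SVC` exposes `dual_coef_` (which stores $y_k \alpha_k$ for the support vectors), `support_vectors_`, and, for a linear kernel, `coef_`; the modern import path `sklearn.datasets.make_blobs` is assumed.

```python
import numpy as np
from sklearn import svm
from sklearn.datasets import make_blobs

# Reproduce the toy dataset used in the notebook
X, y = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60)

clf = svm.SVC(kernel='linear')
clf.fit(X, y)

# dual_coef_ holds alpha_k * y_k for each support vector, so the primal
# weight vector w* = sum_k alpha_k y_k x_k can be rebuilt from the duals
w_from_duals = clf.dual_coef_ @ clf.support_vectors_
assert np.allclose(clf.coef_, w_from_duals)

# The margin width is 2 / ||w||, as derived in Section 3
margin = 2.0 / np.linalg.norm(clf.coef_)
print('margin width:', margin)
```

Only a handful of the 50 training points end up as support vectors, which is the sparsity property discussed in Section 3.1.1.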
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
arcyfelix/Courses
18-11-22-Deep-Learning-with-PyTorch/07-Deploying PyTorch Models/Deploying_PyTorch_models_in_production.ipynb
apache-2.0
[ "import torch\nimport torchvision", "Requirements", "torch.__version__", "PyTorch deployment requires using an unstable version of PyTorch (1.0.0+).\nIn order to install this version, use \"Preview\" option when choosing PyTorch version.\nhttps://pytorch.org/", "# Let's create an example model using ResNet-18\nmodel = torchvision.models.resnet18()\n\nmodel\n\n# Creating a sample of the input\n# It will be used to pass it to the network to build the dimensions\nsample = torch.rand(size=(1, 3, 224, 224))\n\n# Creating so called \"traced Torch script\"\ntraced_script_module = torch.jit.trace(model, sample)\n\ntraced_script_module\n\n# The TracedModule is capable of making predictions\nsample_prediction = traced_script_module(torch.ones(size=(1, 3, 224, 224)))\n\nsample_prediction.shape\n\n# Serializing the the script module\ntraced_script_module.save('./models_deployment/model.pt')", "The module is ready to be loaded into C++ !\nThat requires:\n- LibTorch\n- CMake" ]
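The trace/save/load cycle above can be exercised end-to-end without downloading ResNet-18. The sketch below uses a tiny stand-in module (`TinyNet` is a hypothetical example, not part of the notebook) and round-trips the TorchScript module through an in-memory buffer instead of `model.pt` on disk; `torch.jit.save` and `torch.jit.load` accept file-like objects as well as paths.

```python
import io
import torch

# A tiny stand-in for the ResNet-18 used above (hypothetical example model)
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet().eval()
sample = torch.rand(1, 4)

# Tracing records the operations executed on the sample input
traced = torch.jit.trace(model, sample)

# Round-trip through an in-memory buffer (a file path works the same way)
buffer = io.BytesIO()
torch.jit.save(traced, buffer)
buffer.seek(0)
reloaded = torch.jit.load(buffer)

# The reloaded TorchScript module reproduces the original outputs
assert torch.allclose(model(sample), reloaded(sample))
```

The same serialized artifact is what LibTorch loads from C++ in the deployment step described above.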
[ "code", "markdown", "code", "markdown", "code", "markdown" ]
edhenry/notebooks
Hashing.ipynb
mit
[ "Hashing\nHashing can be useful in speeding up the search process for a specific item that is part of a larger collection of items. Depending on the implementation of the hashing algorithm, this can turn the computational complexity of our search algorithm from $O(n)$ to $O(1)$. We do this by building a specific data structure, which we'll dive into next.\nHash Table\nA hash table is a collection of items, stored in such a way as to make it easier to find them later. The table consists of slots that hold items and are named by a specific integer value, starting with 0.\nExample of a hash table (sorry for the poor formatting because markdown : \n| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |\n|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\n| None | None | None | None | None | None | None | None | None | None | None |\nEach entry in this hash table, is currently set to a value of None.\nA hash function is used when mapping values into the slots available within a Hash table. The hash function typically takes, as input, an item from a collection, and will return an integer in the range of slot names, between $0$ and $m-1$. There are many different hash functions, but the first we can discuss is the \"remainder method\" hash function. \nRemainder Hash Function\nThe remainder hash function takes an item from a collection, divides it by the table size, returning the remainder of it's hash value $h(item) = item \\% \\text{table_size}$. 
Typically modulo arithmetic is present in some form for all hash functions, as the result must be in the range of the total number of slots within the table.\nAssuming we have a set of integer items $\\{25,54,34,67,75,21,77,31\\}$, we can use our hash function to find slots for our values, accordingly.", "items = [25,54,34,67,75,21,77,31]\n\ndef hash(item_list, table_size):\n hash_table = dict([(i,None) for i,x in enumerate(range(table_size))])\n for item in item_list:\n i = item % table_size\n print(\"The hash for %s is %s\" % (item, i))\n hash_table[i] = item\n \n return hash_table\n\n# Execute the hash function\n# Create table with 11 entries to match example above\nhash_table = hash(items, 11)\n\n# Print the resulting hash table\nprint(hash_table)", "Once the hash values have been computed, we insert each item into the hash table at the designated position(s). We can now see that there are entries with corresponding hash values stored in a Python dictionary. This is obviously a very simple implementation of a hash table.\nThere is something interesting to note here when working through a simple hashing algorithm like the remainder method. We have items, in our case integers, which hash to the same value. Specifically, we can see that there are 2 items that hash to each of the 1, 9, and 10 slots. These are what are known as collisions.\nClearly these collisions can cause problems, as out of the 8 initial items that we'd started with, we only have 5 items actually stored in our hash table. This leads us into the next section we'll discuss, and that is hash functions that can help alleviate this collision problem.\nHash Functions\nA hash function that maps every item into its own unique slot in a hash table is known as a perfect hash function. 
If we knew the collection of items and that it would never change, it's possible to construct a perfect hash function specific to this collection, but we know that the dynamics of the real world tend to not allow something so simple.\nDynamically growing the hash table size so each possible item in the item range can be accommodated is one way to construct a perfect hash function. This guarantees each item will have its own slot. But this isn't feasible, as something as simple as tracking social security numbers would require over one billion slots within the hash table. And if we're only tracking a small subset of the full set of social security numbers, this would become horribly inefficient with respect to hardware resources available within the machine our code is running on.\nWith the goal of constructing a hash function that will minimize the number of collisions, has low computational complexity, and evenly distributes our items within the hash table, we can take a look at some common ways to extend this remainder method.\nFolding Method\nThe folding method for hashing an item begins by dividing the item into equal size pieces (though the last piece may not be of equal size to the rest). These pieces are then added together to create the resulting hash value. A good example of this is a phone number, such as 456-555-1234. 
We can break each pair of integers up into groups of 2, add them up, and use that resulting value as an input to our hashing function.", "def stringify(item_list):\n \"\"\"\n Convert each integer into a list of its component digits\n \"\"\"\n return [[int(c) for c in str(item)] for item in item_list]\n\ndef folding_hash(item_list):\n '''\n Quick hack at a folding hash algorithm\n '''\n hashes = []\n for item in item_list:\n hash_val = 0 # reset per item so each hash is independent\n while len(item) > 1:\n str_concat = str(item[0]) + str(item[1])\n hash_val += int(str_concat)\n item.pop(0)\n item.pop(0)\n if len(item) > 0: # odd number of digits: add the leftover digit\n hash_val += item[0]\n hashes.append(hash_val)\n return hashes\n\n# Example phone numbers\nphone_number = [4565551234, 4565557714, 9871542544, 4365554601]\n\n# String/Character-fy the phone numbers\nstr_pn = stringify(phone_number)\n\n# Hash the phone numbers\nfolded_hash = folding_hash(str_pn)\n\n# Input values into hash table\nfolding_hash_table = hash(folded_hash, 11)\n\n# Print the results\nprint(folding_hash_table)", "Ordinal Hash\nWhen dealing with strings, we can use the ordinal values of the constituent characters of a given word, to create a hash.\nIt's important to notice, however, that anagrams can produce hash collisions, as shown below.", "def ord_hash(string, table_size):\n hash_val = 0\n for position in range(len(string)):\n hash_val = hash_val + ord(string[position])\n \n return hash_val % table_size\n\nprint(ord_hash(\"cat\", 11))\nprint(ord_hash(\"tac\", 11))", "Weighted ordinal hashing\nIn the case above, just using ordinal values can cause hash collisions. We can actually use the positional structure of the word as a set of weights for generating a given hash. 
As seen below.\nA simple multiplication by the positional value of each character will cause anagrams to evaluate to different hash values.", "def weighted_ord_hash(string, table_size):\n hash_val = 0\n for position in range(len(string)):\n hash_val = hash_val + (ord(string[position]) * position)\n return hash_val % table_size\n\n# ord_hash\nprint(ord_hash(\"cat\", 11))\n\n# weighted_ord_hash\nprint(weighted_ord_hash(\"tac\", 11))", "Collision Resolution\nWhen there are hash collisions, like we've seen previously, it's important to understand ways that we can alleviate the collisions.\nOne simple way to handle the collision, should there already be an entry in our hash table with the same hash value, is to search sequentially through all slots near the original hash, for an empty slot. This may require us to circularly traverse the entire hash table to allow us to cover all possible slots. This process is known as open addressing and the technique within this process that we're using is called linear probing.\nIn the following code examples, we'll reuse the simple remainder method hash function that we've defined above. Along with the original set of integers we were hashing, as there were some collisions that occured.", "items = [25,54,34,67,75,21,77,31]\n\n# Execute the hash function\n# Create table with 11 entries to match example above\nhash_table = hash(items, 11)\n\n# Print the resulting hash table\nprint(hash_table)", "We can see there were multiple collisions within this dataset. Specifically hashes of 1, 9, and 10. 
And we can see in the resulting table that only the last computed hashes are stored in the respective table slots.\nBelow we'll implement an lp_hash function that will perform linear probing over the slots available within the table for any collisions that occur.", "items = [25,54,34,67,75,21,77,31]\n\ndef rehash(oldhash, table_size):\n return (oldhash+1) % table_size\n\ndef lp_hash(item_list, table_size):\n \n lp_hash_table = dict([(i,None) for i,x in enumerate(range(table_size))])\n\n for item in item_list:\n i = item % table_size\n print(\"%s hashed == %s \\n\" %(item, i))\n if lp_hash_table[i] == None:\n lp_hash_table[i] = item\n elif lp_hash_table[i] != None:\n print(\"Collision, attempting linear probe \\n\")\n next_slot = rehash(i, table_size)\n print(\"Setting next slot to %s \\n\" % next_slot)\n while lp_hash_table[next_slot] != None:\n next_slot = rehash(next_slot, len(lp_hash_table.keys()))\n print(\"Next slot was not empty, trying next slot %s \\n\" % next_slot)\n if lp_hash_table[next_slot] == None:\n lp_hash_table[next_slot] = item\n return lp_hash_table\n\nprint(lp_hash(items, 11))", "Used a little more interestingly, we can use the weighted ordinal hash function that we've defined above, combined with the lp_hash function that we've just defined, to store string(s) for later lookup.", "animal_items = [\"cat\", \"dog\", \"goat\", \n \"chicken\", \"pig\", \"horse\",\n \"ostrich\", \"lion\", \"puma\"]\n\ndef rehash(oldhash, table_size):\n return (oldhash+1) % table_size\n\ndef weighted_ord_hash(string, table_size):\n hash_val = 0\n for position in range(len(string)):\n hash_val = hash_val + (ord(string[position]) * position)\n return hash_val % table_size\n\n\ndef lp_hash(item_list, table_size):\n \n lp_hash_table = dict([(i,None) for i,x in enumerate(range(table_size))])\n \n for item in item_list:\n i = weighted_ord_hash(item, table_size)\n print(\"%s hashed == %s \\n\" %(item, i))\n if lp_hash_table[i] == None:\n lp_hash_table[i] = item\n elif 
lp_hash_table[i] != None:\n print(\"Collision, attempting linear probe \\n\")\n next_slot = rehash(i, table_size)\n print(\"Setting next slot to %s \\n\" % next_slot)\n while lp_hash_table[next_slot] != None:\n next_slot = rehash(next_slot, len(lp_hash_table.keys()))\n print(\"Next slot was not empty, trying next slot %s \\n\" % next_slot)\n if lp_hash_table[next_slot] == None:\n lp_hash_table[next_slot] = item\n return lp_hash_table\n\nprint(lp_hash(animal_items, 11))", "References\n\nhttp://interactivepython.org/courselib/static/pythonds/SortSearch/Hashing.html#tbl-hashvalues1" ]
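The cells above insert items with linear probing but never look one back up; a lookup must retrace the same probe sequence until it finds the key or an empty slot. Here is a minimal sketch of both halves (the helper names `lp_hash_insert` and `lp_hash_get` are illustrative, not from the notebook):

```python
def lp_hash_insert(table, key, table_size):
    """Insert an integer key using the remainder method plus linear probing."""
    i = key % table_size
    while table[i] is not None:
        i = (i + 1) % table_size   # probe the next slot, wrapping around
    table[i] = key

def lp_hash_get(table, key, table_size):
    """Return the slot holding key, or None if it is absent."""
    i = key % table_size
    start = i
    while table[i] is not None:
        if table[i] == key:
            return i
        i = (i + 1) % table_size   # follow the same probe sequence
        if i == start:             # wrapped all the way around: table is full
            break
    return None                    # hit an empty slot: key was never inserted

table_size = 11
table = {i: None for i in range(table_size)}
for item in [25, 54, 34, 67, 75, 21, 77, 31]:
    lp_hash_insert(table, item, table_size)

# 77 hashes to 0 but probes forward past 21, 34, 67, 25 into slot 4
assert lp_hash_get(table, 77, table_size) == 4
assert lp_hash_get(table, 99, table_size) is None
```

Note that lookups stop at the first empty slot, which is why this scheme only supports deletions via tombstone markers rather than simply clearing a slot.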
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
saga-survey/saga-code
ipython_notebooks/DECALS low-SB_completeness Residuals.ipynb
gpl-2.0
[ "from __future__ import print_function, division\n\n# This changes the current directory to the base saga directory - make sure to run this first!\n# This is necessary to be able to import the py files and use the right directories,\n# while keeping all the notebooks in their own directory.\nimport os\nimport sys\nfrom time import time\n\nif 'saga_base_dir' not in locals():\n saga_base_dir = os.path.abspath('..')\nif saga_base_dir not in sys.path:\n os.chdir(saga_base_dir)\n\nimport hosts\nimport targeting\n\nimport numpy as np\n\nfrom scipy import interpolate\n\nfrom astropy import units as u\nfrom astropy.coordinates import SkyCoord\nfrom astropy import table\nfrom astropy.table import Table\nfrom astropy.io import fits\n\nfrom astropy.utils.console import ProgressBar\n\nfrom collections import Counter\n\n%matplotlib inline\nfrom matplotlib import style, pyplot as plt\n\nplt.style.use('seaborn-deep')\nplt.rcParams['image.cmap'] = 'viridis'\nplt.rcParams['image.origin'] = 'lower'\nplt.rcParams['figure.figsize'] = (14, 8)\nplt.rcParams['axes.titlesize'] = plt.rcParams['axes.labelsize'] = 16\nplt.rcParams['xtick.labelsize'] = plt.rcParams['ytick.labelsize'] = 14\n\nfrom IPython import display\nfrom decals import make_cutout_comparison_table, fluxivar_to_mag_magerr, compute_sb, DECALS_AP_SIZES, band_to_idx, subselect_aperture", "Parts adapted from DECALS low-SB_completeness AnaK overlap.ipynb\nGet AnaK Info and download", "hsts = hosts.get_saga_hosts_from_google(clientsecretjsonorfn='client_secrets.json', useobservingsummary=False)\nanak = [h for h in hsts if h.name=='AnaK']\nassert len(anak)==1\nanak = anak[0]\n\nbricknames = []\nwith open('decals_dr3/anakbricks') as f:\n for l in f:\n l = l.strip()\n if l != '':\n bricknames.append(l)\nprint(bricknames)\n\nbase_url = 'http://portal.nersc.gov/project/cosmo/data/legacysurvey/dr3/tractor/{first3}/tractor-{brickname}.fits'\n\nfor brickname in ProgressBar(bricknames, ipython_widget=True):\n url = 
base_url.format(brickname=brickname, first3=brickname[:3])\n    target = os.path.join('decals_dr3/catalogs/', url.split('/')[-1])\n    if not os.path.isfile(target):\n        !wget $url -O $target\n    else:\n        print(target, 'already exists, not downloading')\n\nbricks = Table.read('decals_dr3/survey-bricks.fits.gz')\nbricksdr3 = Table.read('decals_dr3/survey-bricks-dr3.fits.gz')\n\ncatalog_fns = ['decals_dr3/catalogs/tractor-{}.fits'.format(bnm) for bnm in bricknames]\ndecals_catalogs = [Table.read(fn) for fn in catalog_fns]\ndcatall = table.vstack(decals_catalogs, metadata_conflicts='silent')\n\nsdss_catalog = Table.read('catalogs/base_sql_nsa{}.fits.gz'.format(anak.nsaid))", "basic photometric additions to the catalogs", "for dcat in [dcatall]:\n    for magnm, idx in zip('grz', [1, 2, 4]):\n        mag, mag_err = fluxivar_to_mag_magerr(dcat['decam_flux'][:, idx], dcat['decam_flux_ivar'][:, idx])\n        dcat[magnm] = mag\n        dcat[magnm + '_err'] = mag_err\n    \n    dcat['sb_r_0.5'] = compute_sb(0.5*u.arcsec, dcat['decam_apflux'][:, 2, :])\n    dcat['sb_r_0.75'] = compute_sb(0.75*u.arcsec, dcat['decam_apflux'][:, 2, :])\n    dcat['sb_r_1'] = compute_sb(1.0*u.arcsec, dcat['decam_apflux'][:, 2, :])\n    dcat['sb_r_2'] = compute_sb(2.0*u.arcsec, dcat['decam_apflux'][:, 2, :])", "Basic residual comparisons", "DECALS_AP_SIZES\n\napmag, apmagerr = fluxivar_to_mag_magerr(dcatall['decam_apflux'], dcatall['decam_apflux_ivar'])\napmagres, _ = fluxivar_to_mag_magerr(dcatall['decam_apflux_resid'], dcatall['decam_apflux_ivar'])\n\napdiff = subselect_aperture(apmagres - apmag, 'r')\napcolor = subselect_aperture(apmag, 'g') - subselect_aperture(apmag, 'r')\napmagx = subselect_aperture(apmag, 'r')\n\ngood = ~np.isnan(apdiff)&~np.isnan(apcolor)&~np.isnan(apmagx)\n\nplt.scatter(apcolor[good], apmagx[good], c=apdiff[good], cmap='viridis', alpha=.1, lw=0, s=1)\nplt.colorbar()\nplt.xlim(-2,5)\nplt.ylim(26,15)", "Inspect objects with failed residuals", "for band in 'ugrizy':\n    reses = subselect_aperture(apmagres, band, None)\n    print('Band', 
band)\n for ap, res in zip(DECALS_AP_SIZES, reses.T):\n print('Aperture', ap,'has', 100*np.sum(np.isfinite(res))/len(res),'% good')", "???? Why are so many of the residuals NaN/infs ???", "rs = subselect_aperture(apmagres, 'r')\ncatnotfin = dcatall[~np.isfinite(rs)]\ncatnotfin['apmagres_allaps'] = subselect_aperture(apmagres, 'r', None)[~np.isfinite(rs)]\n\nmake_cutout_comparison_table(catnotfin[np.random.permutation(len(catnotfin))[:10]], \n inclres=True, inclmod=True, inclsdss=False, doprint=False,\n add_annotation=['apmagres_allaps'])", "Inspect objects with high residuals vs. flux\nNote that this includes only those with r<22 to ensure there's not a flux effect", "dmag_of_ap_distr = {}\n\nfor ap in DECALS_AP_SIZES:\n rs = subselect_aperture(apmag, 'r', ap)\n rres = subselect_aperture(apmagres, 'r', ap)\n\n dmag_of_ap_distr[ap] = dmag = rs - rres\n plt.hist(dmag[np.isfinite(dmag)&(rs<22*u.mag)], bins=100, histtype='step', label=str(ap), normed=True)\n \nplt.legend(loc=0)\nplt.xlabel('r_flux - r_resid')\n\nap = 1.0*u.arcsec\n\nrs = subselect_aperture(apmag, 'r', ap)\nrres = subselect_aperture(apmagres, 'r', ap)\ndmag = rs - rres\n\nperc = 95\np = np.percentile(dmag[np.isfinite(dmag)&(rs<22*u.mag)], perc)\nprint('nobjs in', perc,'percentile:', np.sum(dmag.value>p), 'cutoff is', p)\n\nmsk = np.isfinite(dmag)&(dmag.value>p)&(rs<22*u.mag)\ndcatbadres = dcatall[msk]\ndcatbadres['dmag'] = dmag[msk]\ndcatbadres['r'] = rs[msk]\n\nmake_cutout_comparison_table(dcatbadres[:10], \n inclres=True, inclmod=True, inclsdss=False, doprint=False,\n add_annotation=['dmag', 'r'])", "X-matching to SDSS", "#cut out the non-overlap region\ndsc = SkyCoord(dcatall['ra'], dcatall['dec'], unit=u.deg)\ndcutall = dcatall[dsc.separation(anak.coords) < 1*u.deg]\n\ndsc = SkyCoord(dcutall['ra'], dcutall['dec'], unit=u.deg)\nssc = SkyCoord(sdss_catalog['ra'], sdss_catalog['dec'], unit=u.deg)\nthreshold = 1*u.arcsec\n\nidx, d2d, _ = ssc.match_to_catalog_sky(dsc)\nplt.hist(d2d.arcsec, bins=100, 
range=(0, 3),histtype='step', log=True)\nplt.axvline(threshold.to(u.arcsec).value, c='k')\nNone\n\ndmatchmsk = idx[d2d<threshold]\ndmatch = dcutall[dmatchmsk]\n\nsmatch = sdss_catalog[d2d<threshold]\n\nidx, d2d, _ = dsc.match_to_catalog_sky(ssc)\ndnomatchmsk = d2d>threshold\ndnomatch = dcutall[dnomatchmsk]\n\nplt.figure(figsize=(12, 10))\n\nxnm = 'r'\nynm = 'sb_r_0.5'\n\nap = 1*u.arcsec\n\napmag, apmagerr = fluxivar_to_mag_magerr(dnomatch['decam_apflux'], dnomatch['decam_apflux_ivar'])\napmagres, _ = fluxivar_to_mag_magerr(dnomatch['decam_apflux_resid'], dnomatch['decam_apflux_ivar'])\nrs = subselect_aperture(apmag, xnm, ap)\nrres = subselect_aperture(apmagres, xnm, ap)\ndmag = rs - rres\n\n\n\n\ndnstar = dnomatch['type']=='PSF '\ndnoext = -2.5*np.log10(dnomatch['decam_mw_transmission'][:, 2])\nr0 = (dnomatch[xnm] - dnoext)\nsb = dnomatch[ynm] - dnoext\nplt.scatter(r0[~dnstar], sb[~dnstar], \n            c=dmag[~dnstar], lw=0, alpha=1, s=3, label='Glx in DECALS, not in SDSS', vmax=0, vmin=-5,\n            cmap='viridis_r')\nplt.colorbar().set_label('r_ap - r_res [{}]'.format(ap))\n\nplt.axvline(20.75, color='k', ls=':')\n\nplt.xlim(17, 23)\nplt.ylim(18, 28)\n\nplt.xlabel(r'$r_{0, {\\rm DECaLS}}$', fontsize=28)\nplt.ylabel(r'$SB_{0.5^{\\prime \\prime}, {\\rm DECaLS}}$', fontsize=28)\nplt.xticks(fontsize=24)\nplt.yticks(fontsize=24)\n\nplt.legend(loc='lower right', fontsize=20)", "now inspect those that are in the upper-left of that plot", "msk = (r0[~dnstar]<20.75)&(sb[~dnstar]>24)\ncat = dnomatch[~dnstar][msk]\ncat['dmag'] = dmag[~dnstar][msk]\n\n\np = np.percentile(cat['dmag'][np.isfinite(cat['dmag'])], 10)\n\ncatlower = cat[cat['dmag']<p]\nprint(len(catlower))\nmake_cutout_comparison_table(catlower[np.random.permutation(len(catlower))[:10]], \n                             inclres=True, inclmod=True, inclsdss=False, doprint=False,\n                             add_annotation=['dmag', 'r'])\n\np = np.percentile(cat['dmag'][np.isfinite(cat['dmag'])], 90)\n\ncatupper = 
cat[cat['dmag']>p]\nprint(len(catupper))\nmake_cutout_comparison_table(catupper[np.random.permutation(len(catupper))[:10]], \n inclres=True, inclmod=True, inclsdss=False, doprint=False,\n add_annotation=['dmag', 'r'])\n\n# things we want to identify automatically\ndisky_things_from_marla = \"\"\"\nmag ra dec unk\n19.4785 354.157853072 0.102747198705 false\n19.3311 354.255069696 0.619242808225 false\n19.1127 354.284237813 0.160859730123 false\n18.6914 354.069400831 -0.115904590235 false\n19.3392 354.415038652 -0.0645766794736 false\n19.4624 354.114525096 0.00532483801292 false\n19.1087 354.534841322 0.436958919955 false\n19.0242 354.447705125 0.266811924681 false\n19.0354 354.136706143 0.691858529943 false\n19.0534 354.53888635 0.236989192453 false\n19.2452 354.568916976 0.837946572935 false\n19.3481 353.473924073 -0.0912764847886 false\n19.268 354.011615043 -0.445983276629 false\n19.3429 354.408722681 0.904017267882 true\n19.1914 354.851596507 0.401264434888 true\n19.2529 353.412613626 0.640755638979 false\n19.3848 354.878706758 0.364835196656 false\n\"\"\"\ndisky_things_from_marla = Table.read([disky_things_from_marla], format='ascii')\n\n\nsc_marla = SkyCoord(disky_things_from_marla['ra'], disky_things_from_marla['dec'], unit=u.deg)\nidx, d2d, _ = sc_marla.match_to_catalog_sky(SkyCoord(dcatall['ra'], dcatall['dec'], unit=u.deg))\nnp.sum(d2d < 1*u.arcsec)/len(d2d)\n\nmatchcat = dcatall[idx]\n\napmag, apmagerr = fluxivar_to_mag_magerr(matchcat['decam_apflux'], matchcat['decam_apflux_ivar'])\napmagres, _ = fluxivar_to_mag_magerr(matchcat['decam_apflux_resid'], matchcat['decam_apflux_ivar'])\nrs = subselect_aperture(apmag, xnm, None)\nrres = subselect_aperture(apmagres, xnm, None)\nmatchcat['dmag'] = rs - rres\n\ndmagsigs = []\nfor dmag, dmagdistr in zip(matchcat['dmag'].T, dmag_of_ap_distr.values()):\n msk = np.isfinite(dmagdistr)\n dmagsigs.append(np.mean(dmagdistr[msk].value)-dmag/np.std(dmagdistr[msk].value))\n print(np.mean(dmagdistr[msk].value).shape, 
dmag.shape, np.std(dmagdistr[msk].value).shape, dmagsigs[-1].shape)\nmatchcat['dmag_sig'] = np.array(dmagsigs).T\n\n\nmake_cutout_comparison_table(matchcat[np.random.permutation(len(matchcat))[:10]], \n inclres=True, inclmod=True, inclsdss=False, doprint=False,\n add_annotation=['dmag', 'r', 'dmag_sig'])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
TomTranter/OpenPNM
examples/simulations/Coupling Continuum with Pore Network.ipynb
mit
[ "import numpy as np\nimport scipy as sp\nimport openpnm as op\nimport openpnm.models.geometry as gm\nimport openpnm.models.physics as pm\nimport openpnm.models.misc as mm\nimport matplotlib.pyplot as plt\nnp.set_printoptions(precision=4)\nnp.random.seed(10)\nws = op.Workspace()\nws.settings[\"loglevel\"] = 40\n%matplotlib inline", "Generate Two Networks with Different Spacing", "spacing_lg = 0.00006\nlayer_lg = op.network.Cubic(shape=[10, 10, 1], spacing=spacing_lg)\n\nspacing_sm = 0.00002\nlayer_sm = op.network.Cubic(shape=[30, 5, 1], spacing=spacing_sm)", "Position Networks Appropriately, then Stitch Together", "# Start by assigning labels to each network for identification later\nlayer_sm.set_label(\"small\", pores=layer_sm.Ps, throats=layer_sm.Ts)\nlayer_lg.set_label(\"large\", pores=layer_lg.Ps, throats=layer_lg.Ts)\n# Next manually offset CL one full thickness relative to the GDL\nlayer_sm['pore.coords'] -= [0, spacing_sm*5, 0]\nlayer_sm['pore.coords'] += [0, 0, spacing_lg/2 - spacing_sm/2] # And shift up by 1/2 a lattice spacing\n# Finally, send both networks to stitch which will stitch CL onto GDL\nfrom openpnm.topotools import stitch\nstitch(network=layer_lg, donor=layer_sm,\n P_network=layer_lg.pores('left'),\n P_donor=layer_sm.pores('right'),\n len_max=0.00005)\ncombo_net = layer_lg\ncombo_net.name = 'combo'", "Quickly Visualize the Network\nLet's just make sure things are working as planned using OpenPNMs basic visualization tools:", "fig = op.topotools.plot_connections(network=combo_net)", "Create Geometry Objects for Each Layer", "Ps = combo_net.pores('small')\nTs = combo_net.throats('small')\ngeom_sm = op.geometry.GenericGeometry(network=combo_net, pores=Ps, throats=Ts)\nPs = combo_net.pores('large')\nTs = combo_net.throats('small', mode='not')\ngeom_lg = op.geometry.GenericGeometry(network=combo_net, pores=Ps, throats=Ts)", "Add Geometrical Properties to the Small Domain\nThe small domain will be treated as a continua, so instead of assigning pore 
sizes we want the 'pore' to be same size as the lattice cell.", "geom_sm['pore.diameter'] = spacing_sm\ngeom_sm['pore.area'] = spacing_sm**2\ngeom_sm['throat.diameter'] = spacing_sm\ngeom_sm['throat.area'] = spacing_sm**2\ngeom_sm['throat.length'] = 1e-12 # A very small number to represent nearly 0-length\ngeom_sm.add_model(propname='throat.endpoints',\n model=gm.throat_endpoints.circular_pores)\ngeom_sm.add_model(propname='throat.length',\n model=gm.throat_length.piecewise)\ngeom_sm.add_model(propname='throat.conduit_lengths',\n model=gm.throat_length.conduit_lengths)", "Add Geometrical Properties to the Large Domain", "geom_lg['pore.diameter'] = spacing_lg*np.random.rand(combo_net.num_pores('large'))\ngeom_lg.add_model(propname='pore.area',\n model=gm.pore_area.sphere)\ngeom_lg.add_model(propname='throat.diameter',\n model=mm.from_neighbor_pores,\n pore_prop='pore.diameter', mode='min')\ngeom_lg.add_model(propname='throat.area',\n model=gm.throat_area.cylinder)\ngeom_lg.add_model(propname='throat.endpoints',\n model=gm.throat_endpoints.circular_pores)\ngeom_lg.add_model(propname='throat.length',\n model=gm.throat_length.piecewise)\ngeom_lg.add_model(propname='throat.conduit_lengths',\n model=gm.throat_length.conduit_lengths)", "Create Phase and Physics Objects", "air = op.phases.Air(network=combo_net, name='air')\nphys_lg = op.physics.GenericPhysics(network=combo_net, geometry=geom_lg, phase=air)\nphys_sm = op.physics.GenericPhysics(network=combo_net, geometry=geom_sm, phase=air)", "Add pore-scale models for diffusion to each Physics:", "phys_lg.add_model(propname='throat.diffusive_conductance',\n model=pm.diffusive_conductance.ordinary_diffusion)\nphys_sm.add_model(propname='throat.diffusive_conductance',\n model=pm.diffusive_conductance.ordinary_diffusion)", "For the small layer we've used a normal diffusive conductance model, which when combined with the diffusion coefficient of air will be equivalent to open-air diffusion. 
If we want the small layer to have some tortuosity we must account for this:", "porosity = 0.5\ntortuosity = 2\nphys_sm['throat.diffusive_conductance'] *= (porosity/tortuosity)", "Note that this extra line is NOT a pore-scale model, so it will be over-written when the phys_sm object is regenerated.\nAdd a Reaction Term to the Small Layer\nA standard n-th order chemical reaction is $ r=k \\cdot x^b $, or more generally: $ r = A_1 \\cdot x^{A_2} + A_3 $. This model is available in OpenPNM.Physics.models.generic_source_terms, and we must specify values for each of the constants.", "# Set Source Term\nair['pore.A1'] = -1e-10 # Reaction pre-factor\nair['pore.A2'] = 1 # Reaction order\nair['pore.A3'] = 0 # A generic offset that is not needed so set to 0\nphys_sm.add_model(propname='pore.reaction',\n model=pm.generic_source_term.power_law,\n A1='pore.A1', A2='pore.A2', A3='pore.A3',\n X='pore.concentration', \n regen_mode='deferred')", "Perform a Diffusion Calculation", "Deff = op.algorithms.ReactiveTransport(network=combo_net, phase=air)\nPs = combo_net.pores(['large', 'right'], mode='intersection')\nDeff.set_value_BC(pores=Ps, values=1)\nPs = combo_net.pores('small')\nDeff.set_source(propname='pore.reaction', pores=Ps)\nDeff.settings['conductance'] = 'throat.diffusive_conductance'\nDeff.settings['quantity'] = 'pore.concentration'\nDeff.run()", "Visualize the Concentration Distribution\nAnd the result would look something like this:", "fig = op.topotools.plot_coordinates(network=combo_net, c=Deff['pore.concentration'], cmap='jet')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
feststelltaste/software-analytics
prototypes/Reading Git logs with Pandas 2.0 with bonus.ipynb
gpl-3.0
[ "Introduction\nThere are multiple reasons for analyzing a version control system like your Git repository. See for example Adam Tornhill's book \"Your Code as a Crime Scene\" or his upcoming book \"Software Design X-Rays\" for plenty of inspiration:\nYou can \n- analyze knowledge islands\n- distinguish often changing code from stable code parts\n- identify code that is temporally coupled\nHaving the necessary data for those analyses in a Pandas <tt>DataFrame</tt> gives you many possibilities to quickly gain insights about the evolution of your software system.\nThe idea\nIn another blog post I showed you a way to read in Git log data with Pandas's DataFrame and GitPython. Looking back, this was really complicated and tedious. So with a few tricks we can do it much better this time:\n\nWe use GitPython's feature to directly access an underlying Git installation. This is way faster than using GitPython's object representation of the repository and makes it possible to have everything we need in one notebook.\nWe use in-memory reading by using StringIO to avoid unnecessary file access. This avoids storing the Git output on disk and reading it from disk again. This method is way faster.\nWe also exploit Pandas's <tt>read_csv</tt> method even more. This makes the transformation of the Git log into a <tt>DataFrame</tt> as easy as pie.\n\nReading the Git log\nThe first step is to connect GitPython with the Git repo. If we have an instance of the repo, we can gain access to the underlying Git installation of the operating system via <tt>repo.git</tt>.\nIn this case, again, we tap the Spring Pet Clinic project, a small sample application for the Spring framework.", "import git \n\nGIT_REPO_PATH = r'../../spring-petclinic/'\nrepo = git.Repo(GIT_REPO_PATH)\ngit_bin = repo.git\ngit_bin", "With the <tt>git_bin</tt>, we can execute almost any Git command we like directly. 
In our hypothetical use case, we want to retrieve some information about the change frequency of files. For this, we need the complete history of the Git repo including statistics for the changed files (via <tt>--numstat</tt>).\nWe use a little trick to make sure that the format for the file's statistics fits nicely with the commit's metadata (SHA <tt>%h</tt>, UNIX timestamp <tt>%at</tt> and author's name <tt>%aN</tt>). The <tt>--numstat</tt> option provides data for additions, deletions and the affected file name in one line, separated by the tabulator character <tt>\\t</tt>: \n<p>\n<tt>1<b>\\t</b>1<b>\\t</b>some/file/name.ext</tt>\n</p>\n\nWe use the same tabulator separator <tt>\\t</tt> for the format string:\n<p>\n<tt>%h<b>\\t</b>%at<b>\\t</b>%aN</tt>\n</p>\n\nAnd here is the trick: additionally, we prepend as many tabulators as the file statistics contain, plus one more, to the format string to pretend that there is empty file statistics information in front of the commit metadata.\nThe result looks like this:\n<p>\n<tt>\\t\\t\\t%h\\t%at\\t%aN</tt>\n</p>\n\nNote: If you want to export the Git log on the command line into a file to read that file later, you need to use the tabulator character <tt>%x09</tt> as separator instead of <tt>\\t</tt> in the format string. Otherwise, the trick doesn't work.\nOK, let's first execute the Git log export:", "git_log = git_bin.execute('git log --numstat --pretty=format:\"\\t\\t\\t%h\\t%at\\t%aN\"')\ngit_log[:80]", "We now have the complete files' history in the <tt>git_log</tt> variable. Don't let all the <tt>\\t</tt> characters confuse you. \nLet's read the result into a Pandas <tt>DataFrame</tt> by using the <tt>read_csv</tt> method. 
Because we can't provide a file path but only in-memory CSV data, we have to use StringIO to read in our in-memory buffered content.\nPandas will read the first line of the tabular-separated \"file\", see the many tabular-separated columns and parse all other lines in the same format / column layout. Additionally, we set the <tt>header</tt> to <tt>None</tt> because we don't have one and provide nice names for all the columns that we read in.", "import pandas as pd\nfrom io import StringIO\n\ncommits_raw = pd.read_csv(StringIO(git_log), \n    sep=\"\\t\",\n    header=None, \n    names=['additions', 'deletions', 'filename', 'sha', 'timestamp', 'author']\n    )\ncommits_raw.head()", "We got two different kinds of content in the rows:\nFor every other row, we got some statistics about the modified files:\n<pre>\n2 0 src/main/asciidoc/appendices/bibliography.adoc\n</pre>\n\nIt contains the number of lines inserted, the number of lines deleted and the relative path of the file. With a little trick and a little bit of data wrangling, we can read that information into a nicely structured DataFrame.\nThe last steps are easy. We fill all the empty file statistics rows with the commit's metadata.", "commits = commits_raw.fillna(method='ffill')\ncommits.head()", "And drop all the commit metadata rows that don't contain file statistics.", "commits = commits.dropna()\ncommits.head()", "We are finished! This is it. \nIn summary, you'll need this \"one-liner\" for converting a Git log file output that was exported with\ngit log --numstat --pretty=format:\"%x09%x09%x09%h%x09%at%x09%aN\" &gt; git.log\ninto a <tt>DataFrame</tt>:", "pd.read_csv(\"../../spring-petclinic/git.log\", \n    sep=\"\\t\", \n    header=None,\n    names=[\n        'additions', \n        'deletions', \n        'filename', \n        'sha', \n        'timestamp', \n        'author']).fillna(method='ffill').dropna().head()", "Bonus section\nWe can now convert some columns to their correct data types. 
The <tt>additions</tt> and <tt>deletions</tt> columns represent the added or deleted lines of code respectively. But there are also a few exceptions for binary files like images. We skip these lines with the <tt>errors='coerce'</tt> option. This will lead to <tt>NaN</tt> values in the rows that will be dropped after the conversion. \nThe <tt>timestamp</tt> column is a UNIX timestamp with the seconds elapsed since January 1st, 1970, which we can easily convert with Pandas' <tt>to_datetime</tt> method.", "commits['additions'] = pd.to_numeric(commits['additions'], errors='coerce')\ncommits['deletions'] = pd.to_numeric(commits['deletions'], errors='coerce')\ncommits = commits.dropna()\ncommits['timestamp'] = pd.to_datetime(commits['timestamp'], unit=\"s\")\ncommits.head()\n\n%matplotlib inline\ncommits[commits['filename'].str.endswith(\".java\")]\\\n    .groupby('filename')\\\n    .count()['additions']\\\n    .hist()", "Summary\nIn this notebook, I showed you how to read a Git log output with a separator trick in only one line. This is a very handy method and a good base for further analysis!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
flyingfrog81/subrflettore
RotazioniSubreflettore.ipynb
gpl-3.0
[ "%pylab inline\nfrom sympy import *\nfrom IPython.display import Image\ninit_printing(use_latex=\"mathjax\")", "Rotation of the subreflector around the focus", "Image(\"files/rot_sub.png\")", "Consider the system described in the figure, where: \n* O represents the center of the subreflector in position 0\n* F represents the focus of the paraboloid\n* D is the distance between the subreflector and the focus in position 0\n* $ \\theta = \\widehat{OFP}$ , the rotation angle of the subreflector needed to point at a feed in vertex\n* $|FO| = |FP| = \\textbf{D}$\nWe want to find the Z and X compensations to apply to the subreflector movement in order to keep it centered on the focal axis without introducing pointing errors.\n<hr />\n\nSince the triangle $OFP$ is isosceles, we have:\n$$ \\alpha = \\frac{\\pi - \\theta}{2} = \\frac{\\pi}{2} - \\frac{\\theta}{2} $$\nfrom which we can derive the size of the angle $\\beta$:\n$$ \\beta = \\frac{\\pi}{2} - \\alpha = \\frac{\\pi}{2} - \\frac{\\pi}{2} + \\frac{\\theta}{2} = \\frac{\\theta}{2} $$\nWe can then compute X and Z knowing that:\n$$ |PO| = 2 * (D * \\sin{\\frac{\\theta}{2}}) $$\nAnd exploiting the fact that $PBO$ is right-angled at $B$:\n$$ X = |PO| * \\cos{\\beta} = 2 * D * \\sin{\\frac{\\theta}{2}} * \\cos{\\frac{\\theta}{2}} = D* \\sin{\\theta} $$\n$$ Z = |PO| * \\sin{\\beta} = 2 * D * \\sin{\\frac{\\theta}{2}} * \\sin{\\frac{\\theta}{2}} = 2 * D * (\\sin{\\frac{\\theta}{2}})^2$$", "d, t, z, x = symbols(\"D theta z x\")\nz = 2 * d * (sin(t/2)**2)\nx = d * sin(t)", "Knowing that $ D = 310.7mm $ , computed geometrically for the subreflector, we can compute how X and Z vary as a function of the angle in vertex", "xaxis = linspace(0, radians(8), 1000)\nxaxis_labels = linspace(0, 8, 1000)\n#z_at_fixed_d = 2 * 310.7 * np.sin(xaxis/2)**2\n#x_at_fixed_d = 310.7 * np.sin(xaxis)\nz_at_fixed_d = array([N(z.subs(d, 310.7).subs(t, _theta)) for _theta in xaxis])\nx_at_fixed_d = array([N(x.subs(d, 310.7).subs(t, _theta)) for _theta 
in xaxis])", "From the following plots we see that, at subreflector position 0, a rotation by an angle $\\theta$ around the focus corresponds to a small correction in Z and a more substantial correction in X", "pylab.title(\"Z correction at D = 310.7\")\npylab.xlabel(\"Theta deg.\")\npylab.ylabel(\"Z mm\")\npylab.grid(True)\npylab.plot(xaxis_labels, z_at_fixed_d, 'b-', \n           xaxis_labels[500], z_at_fixed_d[500], 'bo', \n           markersize = 12)\n\npylab.title(\"X correction at D = 310.7\")\npylab.xlabel(\"Theta deg.\")\npylab.ylabel(\"X mm\")\npylab.grid(True)\npylab.plot(xaxis_labels, x_at_fixed_d, 'g-', \n           xaxis_labels[500], x_at_fixed_d[500], 'go', \n           markersize = 12)", "Suppose now that we point at a receiver with an angle of $\\theta = 4^{\\circ}$ and want to perform a focusing by moving the subreflector towards the receiver by an amount that we let vary between 0 and 18cm, computed as $ 3\\lambda @ 5 GHz$ . In this case too we can check how the corrections applied in Z and X vary as a function of the distance of the subreflector from the focus.", "xaxis = linspace(0, 180, 1000)\nz_at_fixed_t = array([N(z.subs(t, radians(4)).subs(d, _d)) for _d in xaxis])\nx_at_fixed_t = array([N(x.subs(t, radians(4)).subs(d, _d)) for _d in xaxis])\n\npylab.title(\"Z correction at theta = 4 deg.\")\npylab.xlabel(\"D mm\")\npylab.ylabel(\"Z mm\")\npylab.grid(True)\npylab.plot(xaxis, z_at_fixed_t, 'b-')\n\npylab.title(\"X correction at theta = 4 deg.\")\npylab.xlabel(\"D mm\")\npylab.ylabel(\"X mm\")\npylab.grid(True)\npylab.plot(xaxis, x_at_fixed_t, 'g-')", "Tilt of the subreflector", "Image(\"files/tilt_sub2.png\")", "Consider the system in the figure, where:\n* $AB$ is half of one side of the subreflector triangle (view B in Fig. 3.1.2, p. 90, \"Matematica di sistema\" 4 of 4)\n* $|AB| = 2068mm / 2 $\n* $\\varphi$ is the angle by which we want to rotate the subreflector around an axis\n* $BC$ represents the position of the subreflector once the rotation has been performed\nNOTE: For simplicity we consider the case of a rotation about the X axis, applying an angle $\\theta_y$ which translates into the movement of only two actuators. In this case I am only trying to quantify a possible error.\nReasoning on the triangles and exploiting the fact that $|AB| = |BC|$ we can deduce that:\n * $\\omega = \\Omega$ because they are alternate interior angles \n$$ \\Omega = \\omega = \\frac{\\pi - \\varphi}{2} = \\frac{\\pi}{2} - \\frac{\\varphi}{2} $$\n$$ \\sin{\\Omega} = \\sin{\\omega} = \\cos{\\frac{\\varphi}{2}} $$\n$$ \\cos{\\Omega} = \\cos{\\omega} = \\sin{\\frac{\\varphi}{2}} $$\nReasoning now on the sides, we obtain:\n$$ |AC| = 2|AB|\\sin(\\frac{\\varphi}{2}) $$\n$$ |AE| = |AC| * \\sin{\\omega} = 2|AB|\\sin{\\frac{\\varphi}{2}} * \\cos{\\frac{\\varphi}{2}} = 2|AB|\\frac{\\sin{\\varphi}}{2} = |AB|\\sin{\\varphi}$$\n$$ |EC| = |AC| * \\cos{\\omega} = 2|AB|\\sin{\\frac{\\varphi}{2}} * \\sin{\\frac{\\varphi}{2}} = 2|AB|(\\sin{\\frac{\\varphi}{2}})^2 = |AB|(1 - \\cos{\\varphi}) $$\n$$ |ED| = |EC| * \\tan{\\varphi} = |AB| \\tan{\\varphi} - |AB|\\cos{\\varphi}*\\frac{\\sin{\\varphi}}{\\cos{\\varphi}} = |AB| * (\\tan{\\varphi} - \\sin{\\varphi})$$\n$$ |AD| = |AB| * \\tan{\\varphi} $$\nWhat interests us is that when we want to command a tilt of $\\varphi$ and give $|AD|$ as the commanded position, we should actually give a value of $|AC|$ , hence:\n$$ |AD| - |AC| = |AB| * \\tan{\\varphi} - 2|AB|\\sin{\\frac{\\varphi}{2}} = |AB|(\\tan{\\varphi} - 2\\sin{\\frac{\\varphi}{2}}) $$\nFixing therefore, as per hypothesis, $|AB| = 2098/2mm$, we compute the error as $\\varphi$ varies:", "xaxis = linspace(0, radians(8), 1000)\nxaxis_labels = linspace(0, 8, 1000)\nerror = (2098/2) * (np.tan(xaxis) - 
2*np.sin(xaxis/2))\n\npylab.title(\"Actuator correction vs. tilt angle\")\npylab.xlabel(\"Phi deg.\")\npylab.ylabel(\"Error mm\")\npylab.grid(True)\npylab.plot(xaxis_labels, error, 'b-', \n           xaxis_labels[500], error[500], 'bo', \n           markersize = 12)", "Reasoning on the actuator equations\nFrom \"matematica di sistema\" 4 of 4, p. 99, we take the equations that also account for the compensation related to the distance between the focus and the subreflector:\n$$ X = -D * \\tan{\\theta_y} $$\n$$ Y = D * \\frac{\\tan{\\theta_x}}{\\cos{8^{\\circ}}}$$\n$$ Z_1 = y * \\tan{8^{\\circ}} + z + (r-f)\\theta_x $$\n$$ Z_2 = y * \\tan{8^{\\circ}} + z -f\\theta_x + \\frac{1}{2} * \\theta_y $$\n$$ Z_3 = y * \\tan{8^{\\circ}} + z -f*\\theta_x - \\frac{1}{2} * \\theta_y $$\nLet us consider the simple case of performing only a rotation of $4.2^\\circ$ around the X axis, so we set:\n$$ D = 310.7 $$\n$$ \\theta_x = 4.2^\\circ $$\n$$ \\theta_y = 0^\\circ $$\n$$ y = 0 $$\nFrom the CAD simulations the result should be:\n$$ X = 0 $$\n$$ Y = 24.358 $$\n$$ Z_1 = -88.214 $$\n$$ Z_2 = 42.881 $$\n$$ Z_3 = Z_2 = 42.881 $$\nWhile applying the equations we obtain:", "azx, azy, z1, z2, z3 = symbols(\"X Y Z_1 Z_2 Z_3\")\nd, theta_x, theta_y, x, y, z, r, f = symbols(\"D theta_x theta_y x y z r f\")\nazx = - d * tan(theta_y)\nazy = d * tan(theta_x) / cos(radians(8))\nz1 = y * tan(radians(8)) + z + (r - f) * theta_x\nz2 = y * tan(radians(8)) + z - f * theta_x + 0.5 * theta_y \nz3 = y * tan(radians(8)) + z - f * theta_x - 0.5 * theta_y\n\nN(azx.subs(d, 310.7).subs(theta_y,radians(0)))\n\nN(azy.subs(d, 310.7).subs(theta_x,radians(4.2)))\n\nN(z1.subs(d, 310.7).subs(r, 1791).subs(f, 597).subs(theta_x, radians(4.2)).subs(y,0).subs(z,0))\n\nN(z2.subs(d, 310.7).subs(r, 1791).subs(f, 597).subs(theta_x, radians(4.2)).subs(y,0).subs(z,0).subs(theta_y,0))\n\nN(z3.subs(d, 310.7).subs(r, 1791).subs(f, 597).subs(theta_x, radians(4.2)).subs(y,0).subs(z,0).subs(theta_y,0))", "What causes these differences? Let us try to apply the previously computed corrections in Z and for the rotation of the hinge.\nAs for the Z correction, the new center of the subreflector will be at a Z height above 0, which we can obtain as:\n$$ Z = 2 * D * (\\sin{\\frac{\\theta}{2}})^2$$", "zn = 2 * 310.7 * (np.sin(np.radians(4.2/2))**2)\nprint(zn)\n\nN(z1.subs(d, 310.7).subs(r, 1791).subs(f, 597).subs(theta_x, radians(4.2)).subs(y,0).subs(z,zn))\n\nN(z2.subs(d, 310.7).subs(r, 1791).subs(f, 597).subs(theta_x, radians(4.2)).subs(y,0).subs(z,zn).subs(theta_y,0))", "Already in this way we can see that we are getting closer to the expected result." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
csaladenes/aviation
code/airport_dest_parser.ipynb
mit
[ "import pandas as pd, json, numpy as np\nimport matplotlib.pyplot as plt\nfrom bs4 import BeautifulSoup\n%matplotlib inline", "Load airports of each country", "L=json.loads(file('../json/L.json','r').read())\nM=json.loads(file('../json/M.json','r').read())\nN=json.loads(file('../json/N.json','r').read())\n\nimport requests\n\nAP={}\nfor c in M:\n if c not in AP:AP[c]={}\n for i in range(len(L[c])):\n AP[c][N[c][i]]=L[c][i]", "record schedules for 2 weeks, then augment count with weekly flight numbers.\nseasonal and seasonal charter will count as once per week for 3 months, so 12/52 per week. TGM separate, since its history is in the past.\nparse Departures", "baseurl='https://www.airportia.com/'\nimport requests, urllib2\n\ndef urlgetter(url):\n s = requests.Session()\n cookiesopen = s.get(url)\n cookies=str(s.cookies)\n fcookies=[[k[:k.find('=')],k[k.find('=')+1:k.find(' for ')]] for k in cookies[cookies.find('Cookie '):].split('Cookie ')[1:]]\n #push token\n opener = urllib2.build_opener()\n for k in fcookies:\n opener.addheaders.append(('Cookie', k[0]+'='+k[1]))\n #read html\n return s.get(url).content", "good dates", "SD={}\nSC=json.loads(file('../json/SC2.json','r').read())\n\n#pop out last - if applicable\ntry: SD.pop(c)\nexcept: pass\nfor h in range(len(AP.keys())):\n c=AP.keys()[h]\n #country not parsed yet\n if c in SC:\n if c not in SD:\n SD[c]=[]\n print h,c\n airportialinks=AP[c]\n sch={}\n #all airports of country, where there is traffic\n for i in airportialinks:\n if i in SC[c]:\n print i,\n if i not in sch:sch[i]={}\n url=baseurl+airportialinks[i]\n m=urlgetter(url)\n for d in range (3,17):\n #date not parsed yet\n if d not in sch[i]:\n url=baseurl+airportialinks[i]+'departures/201704'+str(d)\n m=urlgetter(url)\n soup = BeautifulSoup(m, \"lxml\")\n #if there are flights at all\n if len(soup.findAll('table'))>0:\n sch[i][d]=pd.read_html(m)[0] \n else: print '--W-',d,\n SD[c]=sch\n print ", "Save", "dbpath='E:/Dropbox/Public/datarepo/aviation/' 
#large file db path\nfile(dbpath+\"json/SD_dest.json\",'w').write(repr(SD))\n\nI3=json.loads(file('../json/I3.json','r').read())\n\nMDF=pd.DataFrame()\n\nfor c in SD:\n sch=SD[c]\n mdf=pd.DataFrame()\n for i in sch:\n for d in sch[i]:\n df=sch[i][d].drop(sch[i][d].columns[3:],axis=1).drop(sch[i][d].columns[0],axis=1)\n df['From']=i\n df['Date']=d\n mdf=pd.concat([mdf,df])\n if len(sch)>0:\n mdf['City']=[i[:i.rfind(' ')] for i in mdf['To']]\n mdf['Airport']=[i[i.rfind(' ')+1:] for i in mdf['To']]\n cpath=I3[c].lower()\n file('../countries/'+cpath+\"/json/mdf_dest.json\",'w').write(json.dumps(mdf.reset_index().to_json()))\n MDF=pd.concat([MDF,mdf])\n print c,\n\ndbpath='E:/Dropbox/Public/datarepo/aviation/' #large file db path\nMDF.reset_index().to_json(dbpath+'json/MDF_dest.json')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
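The counting rule stated in the notebook above ("seasonal and seasonal charter will count as once per week for 3 months, so 12/52 per week") can be written as a small helper. This is only an illustrative sketch: the flight-type labels `'scheduled'`, `'seasonal'`, and `'seasonal charter'` are assumptions, not strings taken from the scraped Airportia pages.

```python
# Hypothetical sketch of the weekly weighting rule described above:
# regular flights observed in the two-week window count as 1 per week,
# while seasonal and seasonal-charter flights count as 12/52 per week
# (once a week for roughly 3 months of the year).
def weekly_weight(flight_type):
    seasonal_types = {'seasonal', 'seasonal charter'}
    if flight_type.strip().lower() in seasonal_types:
        return 12.0 / 52.0
    return 1.0

weights = {t: weekly_weight(t) for t in ('scheduled', 'seasonal', 'seasonal charter')}
```

In the actual pipeline this weight would multiply the per-flight counts before aggregating the two-week schedule into weekly flight numbers.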
Diyago/Machine-Learning-scripts
DEEP LEARNING/Pytorch from scratch/MLP/mnist_mlp_with_val.ipynb
apache-2.0
[ "Multi-Layer Perceptron, MNIST\n\nIn this notebook, we will train an MLP to classify images from the MNIST database hand-written digit database.\nThe process will be broken down into the following steps:\n\n\nLoad and visualize the data\nDefine a neural network\nTrain the model\nEvaluate the performance of our trained model on a test dataset!\n\n\nBefore we begin, we have to import the necessary libraries for working with data and PyTorch.", "# import libraries\nimport torch\nimport numpy as np", "Load and Visualize the Data\nDownloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the batch_size if you want to load more data at a time.\nThis cell will create DataLoaders for each of our datasets.", "from torchvision import datasets\nimport torchvision.transforms as transforms\nfrom torch.utils.data.sampler import SubsetRandomSampler\n\n# number of subprocesses to use for data loading\nnum_workers = 0\n# how many samples per batch to load\nbatch_size = 20\n# percentage of training set to use as validation\nvalid_size = 0.2\n\n# convert data to torch.FloatTensor\ntransform = transforms.ToTensor()\n\n# choose the training and test datasets\ntrain_data = datasets.MNIST(root='data', train=True,\n download=True, transform=transform)\ntest_data = datasets.MNIST(root='data', train=False,\n download=True, transform=transform)\n\n# obtain training indices that will be used for validation\nnum_train = len(train_data)\nindices = list(range(num_train))\nnp.random.shuffle(indices)\nsplit = int(np.floor(valid_size * num_train))\ntrain_idx, valid_idx = indices[split:], indices[:split]\n\n# define samplers for obtaining training and validation batches\ntrain_sampler = SubsetRandomSampler(train_idx)\nvalid_sampler = SubsetRandomSampler(valid_idx)\n\n# prepare data loaders\ntrain_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,\n sampler=train_sampler, num_workers=num_workers)\nvalid_loader = 
torch.utils.data.DataLoader(train_data, batch_size=batch_size, \n sampler=valid_sampler, num_workers=num_workers)\ntest_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, \n num_workers=num_workers)", "Visualize a Batch of Training Data\nThe first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data.", "import matplotlib.pyplot as plt\n%matplotlib inline\n \n# obtain one batch of training images\ndataiter = iter(train_loader)\nimages, labels = dataiter.next()\nimages = images.numpy()\n\n# plot the images in the batch, along with the corresponding labels\nfig = plt.figure(figsize=(25, 4))\nfor idx in np.arange(20):\n ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])\n ax.imshow(np.squeeze(images[idx]), cmap='gray')\n # print out the correct label for each image\n # .item() gets the value contained in a Tensor\n ax.set_title(str(labels[idx].item()))", "View an Image in More Detail", "img = np.squeeze(images[1])\n\nfig = plt.figure(figsize = (12,12)) \nax = fig.add_subplot(111)\nax.imshow(img, cmap='gray')\nwidth, height = img.shape\nthresh = img.max()/2.5\nfor x in range(width):\n for y in range(height):\n val = round(img[x][y],2) if img[x][y] !=0 else 0\n ax.annotate(str(val), xy=(y,x),\n horizontalalignment='center',\n verticalalignment='center',\n color='white' if img[x][y]<thresh else 'black')", "Define the Network Architecture\nThe architecture will be responsible for seeing as input a 784-dim Tensor of pixel values for each image, and producing a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. 
This particular example uses two hidden layers and dropout to avoid overfitting.", "import torch.nn as nn\nimport torch.nn.functional as F\n\n# define the NN architecture\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n # number of hidden nodes in each layer (512)\n hidden_1 = 512\n hidden_2 = 512\n # linear layer (784 -> hidden_1)\n self.fc1 = nn.Linear(28 * 28, hidden_1)\n # linear layer (n_hidden -> hidden_2)\n self.fc2 = nn.Linear(hidden_1, hidden_2)\n # linear layer (n_hidden -> 10)\n self.fc3 = nn.Linear(hidden_2, 10)\n # dropout layer (p=0.2)\n # dropout prevents overfitting of data\n self.dropout = nn.Dropout(0.2)\n\n def forward(self, x):\n # flatten image input\n x = x.view(-1, 28 * 28)\n # add hidden layer, with relu activation function\n x = F.relu(self.fc1(x))\n # add dropout layer\n x = self.dropout(x)\n # add hidden layer, with relu activation function\n x = F.relu(self.fc2(x))\n # add dropout layer\n x = self.dropout(x)\n # add output layer\n x = self.fc3(x)\n return x\n\n# initialize the NN\nmodel = Net()\nprint(model)", "Specify Loss Function and Optimizer\nIt's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross entropy function applies a softmax funtion to the output layer and then calculates the log loss.", "# specify loss function (categorical cross-entropy)\ncriterion = nn.CrossEntropyLoss()\n\n# specify optimizer (stochastic gradient descent) and learning rate = 0.01\noptimizer = torch.optim.SGD(model.parameters(), lr=0.01)", "Train the Network\nThe steps for training/learning from a batch of data are described in the comments below:\n1. Clear the gradients of all optimized variables\n2. Forward pass: compute predicted outputs by passing inputs to the model\n3. Calculate the loss\n4. Backward pass: compute gradient of the loss with respect to model parameters\n5. Perform a single optimization step (parameter update)\n6. 
Update average training loss\nThe following loop trains for 50 epochs; take a look at how the values for the training loss decrease over time. We want it to decrease while also avoiding overfitting the training data.", "# number of epochs to train the model\nn_epochs = 50\n\n# initialize tracker for minimum validation loss\nvalid_loss_min = np.Inf # set initial \"min\" to infinity\n\nfor epoch in range(n_epochs):\n # monitor training loss\n train_loss = 0.0\n valid_loss = 0.0\n \n ###################\n # train the model #\n ###################\n model.train() # prep model for training\n for data, target in train_loader:\n # clear the gradients of all optimized variables\n optimizer.zero_grad()\n # forward pass: compute predicted outputs by passing inputs to the model\n output = model(data)\n # calculate the loss\n loss = criterion(output, target)\n # backward pass: compute gradient of the loss with respect to model parameters\n loss.backward()\n # perform a single optimization step (parameter update)\n optimizer.step()\n # update running training loss\n train_loss += loss.item()*data.size(0)\n \n ###################### \n # validate the model #\n ######################\n model.eval() # prep model for evaluation\n for data, target in valid_loader:\n # forward pass: compute predicted outputs by passing inputs to the model\n output = model(data)\n # calculate the loss\n loss = criterion(output, target)\n # update running validation loss \n valid_loss += loss.item()*data.size(0)\n \n # print training/validation statistics \n # calculate average loss over an epoch\n train_loss = train_loss/len(train_loader.dataset)\n valid_loss = valid_loss/len(valid_loader.dataset)\n \n print('Epoch: {} \\tTraining Loss: {:.6f} \\tValidation Loss: {:.6f}'.format(\n epoch+1, \n train_loss,\n valid_loss\n ))\n \n # save model if validation loss has decreased\n if valid_loss <= valid_loss_min:\n print('Validation loss decreased ({:.6f} --> {:.6f}). 
Saving model ...'.format(\n valid_loss_min,\n valid_loss))\n torch.save(model.state_dict(), 'model.pt')\n valid_loss_min = valid_loss", "Load the Model with the Lowest Validation Loss", "model.load_state_dict(torch.load('model.pt'))", "Test the Trained Network\nFinally, we test our best model on previously unseen test data and evaluate it's performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and take a look at how this model performs on each class as well as looking at its overall loss and accuracy.", "# initialize lists to monitor test loss and accuracy\ntest_loss = 0.0\nclass_correct = list(0. for i in range(10))\nclass_total = list(0. for i in range(10))\n\nmodel.eval() # prep model for evaluation\n\nfor data, target in test_loader:\n # forward pass: compute predicted outputs by passing inputs to the model\n output = model(data)\n # calculate the loss\n loss = criterion(output, target)\n # update test loss \n test_loss += loss.item()*data.size(0)\n # convert output probabilities to predicted class\n _, pred = torch.max(output, 1)\n # compare predictions to true label\n correct = np.squeeze(pred.eq(target.data.view_as(pred)))\n # calculate test accuracy for each object class\n for i in range(batch_size):\n label = target.data[i]\n class_correct[label] += correct[i].item()\n class_total[label] += 1\n\n# calculate and print avg test loss\ntest_loss = test_loss/len(test_loader.dataset)\nprint('Test Loss: {:.6f}\\n'.format(test_loss))\n\nfor i in range(10):\n if class_total[i] > 0:\n print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (\n str(i), 100 * class_correct[i] / class_total[i],\n np.sum(class_correct[i]), np.sum(class_total[i])))\n else:\n print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))\n\nprint('\\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (\n 100. 
* np.sum(class_correct) / np.sum(class_total),\n np.sum(class_correct), np.sum(class_total)))", "Visualize Sample Test Results\nThis cell displays test images and their labels in this format: predicted (ground-truth). The text will be green for accurately classified examples and red for incorrect predictions.", "# obtain one batch of test images\ndataiter = iter(test_loader)\nimages, labels = dataiter.next()\n\n# get sample outputs\noutput = model(images)\n# convert output probabilities to predicted class\n_, preds = torch.max(output, 1)\n# prep images for display\nimages = images.numpy()\n\n# plot the images in the batch, along with predicted and true labels\nfig = plt.figure(figsize=(25, 4))\nfor idx in np.arange(20):\n ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])\n ax.imshow(np.squeeze(images[idx]), cmap='gray')\n ax.set_title(\"{} ({})\".format(str(preds[idx].item()), str(labels[idx].item())),\n color=(\"green\" if preds[idx]==labels[idx] else \"red\"))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
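The train/validation index split used in the MNIST notebook above can be isolated and checked without touching PyTorch. A NumPy-only sketch of the same scheme (the explicit seed is an illustrative addition; the notebook shuffles without seeding):

```python
import numpy as np

def train_valid_indices(num_train, valid_size=0.2, seed=1):
    # same scheme as the notebook: shuffle all indices, then take the
    # first valid_size fraction for validation and the rest for training
    rng = np.random.RandomState(seed)
    indices = rng.permutation(num_train)
    split = int(np.floor(valid_size * num_train))
    return indices[split:], indices[:split]

# MNIST has 60000 training images; a 20% split gives 48000 / 12000
train_idx, valid_idx = train_valid_indices(60000)
```

The two index arrays are disjoint by construction, which is what makes passing them to separate `SubsetRandomSampler`s a valid train/validation split.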
SJSlavin/phys202-2015-work
assignments/assignment10/ODEsEx02.ipynb
mit
[ "Ordinary Differential Equations Exercise 1\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.integrate import odeint\nfrom IPython.html.widgets import interact, fixed", "Lorenz system\nThe Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:\n$$ \\frac{dx}{dt} = \\sigma(y-x) $$\n$$ \\frac{dy}{dt} = x(\\rho-z) - y $$\n$$ \\frac{dz}{dt} = xy - \\beta z $$\nThe solution vector is $[x(t),y(t),z(t)]$ and $\\sigma$, $\\rho$, and $\\beta$ are parameters that govern the behavior of the solutions.\nWrite a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.", "def lorentz_derivs(yvec, t, sigma, rho, beta):\n \"\"\"Compute the the derivatives for the Lorentz system at yvec(t).\"\"\"\n x = yvec[0]\n y = yvec[1]\n z = yvec[2]\n dx = sigma*(y - x)\n dy = x*(rho - z) - y\n dz = x*y - beta*z\n return np.array([dx, dy, dz])\n\nassert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])", "Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.", "def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):\n \"\"\"Solve the Lorenz system for a single initial condition.\n \n Parameters\n ----------\n ic : array, list, tuple\n Initial conditions [x,y,z].\n max_time: float\n The max time to use. Integrate with 250 points per time unit.\n sigma, rho, beta: float\n Parameters of the differential equation.\n \n Returns\n -------\n soln : np.ndarray\n The array of the solution. 
Each row will be the solution vector at that time.\n t : np.ndarray\n The array of time points used.\n \n \"\"\"\n # YOUR CODE HERE\n soln = odeint(ic, y0 = [1.0, 1.0, 1.0], t = np.linspace(0, max_time, 100), args=(sigma, rho, beta))\n return soln\n print(soln)\n\nprint(solve_lorentz((1, 1, 1), max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0))\n\nassert True # leave this to grade solve_lorenz", "Write a function plot_lorentz that:\n\nSolves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.\nPlot $[x(t),z(t)]$ using a line to show each trajectory.\nColor each line using the hot colormap from Matplotlib.\nLabel your plot and choose an appropriate x and y limit.\n\nThe following cell shows how to generate colors that can be used for the lines:", "N = 5\ncolors = plt.cm.hot(np.linspace(0,1,N))\nfor i in range(N):\n # To use these colors with plt.plot, pass them as the color argument\n print(colors[i])\n\ndef plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):\n \"\"\"Plot [x(t),z(t)] for the Lorenz system.\n \n Parameters\n ----------\n N : int\n Number of initial conditions and trajectories to plot.\n max_time: float\n Maximum time to use.\n sigma, rho, beta: float\n Parameters of the differential equation.\n \"\"\"\n # YOUR CODE HERE\n np.random.seed(1)\n \n plt.plot(solve_lorentz(np.random.rand(15, 1), max_time, sigma, rho, beta), np.linspace(0, 4.0, 100))\n\nplot_lorentz()\n\nassert True # leave this to grade the plot_lorenz function", "Use interact to explore your plot_lorenz function with:\n\nmax_time an integer slider over the interval $[1,10]$.\nN an integer slider over the interval $[1,50]$.\nsigma a float slider over the interval $[0.0,50.0]$.\nrho a float slider over the interval $[0.0,50.0]$.\nbeta fixed at a value of $8/3$.", "# YOUR 
CODE HERE\nraise NotImplementedError()", "Describe the different behaviors you observe as you vary the parameters $\\sigma$, $\\rho$ and $\\beta$ of the system:\nYOUR ANSWER HERE" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
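Note that the `solve_lorentz` in the notebook above passes the initial condition `ic` where `odeint` expects the derivative function, hard-codes `y0 = [1.0, 1.0, 1.0]`, and has an unreachable `print(soln)` after the `return`. A corrected sketch is shown below; the `odeint` call itself is left as a comment so the runnable part stays SciPy-free, and the check reuses the notebook's own assertion:

```python
import numpy as np

def lorentz_derivs(yvec, t, sigma, rho, beta):
    # Lorenz system: dx = sigma(y-x), dy = x(rho-z) - y, dz = xy - beta z
    x, y, z = yvec
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# A corrected solve, matching the "250 points per time unit" docstring,
# would look like (with scipy.integrate.odeint imported):
#   t = np.linspace(0.0, max_time, int(250 * max_time))
#   soln = odeint(lorentz_derivs, ic, t, args=(sigma, rho, beta))
#   return soln, t

# sanity check taken from the notebook's own assert
d = lorentz_derivs((1, 1, 1), 0, 1.0, 1.0, 2.0)
```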
Danghor/Algorithms
Python/Chapter-02/Power.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open('../style.css') as file:\n css = file.read()\nHTML(css)", "Efficient Computation of Powers\nThe function power takes two natural numbers $m$ and $n$ and computes $m^n$. Our first implementation is inefficient and takes $n-1$ multiplications to compute $m^n$.", "def power(m, n):\n r = 1\n for i in range(n):\n r *= m\n return r\n\npower(2, 3), power(3, 2)\n\n%%time\np = power(3, 500000)", "Next, we try a recursive implementation that is based on the following two equations:\n1. $m^0 = 1$\n2. $m^n = \left\{\begin{array}{ll}\n m^{n/2} \cdot m^{n/2} & \mbox{if $n$ is even}; \\\n m^{n/2} \cdot m^{n/2} \cdot m & \mbox{if $n$ is odd}.\n \end{array}\n \right.\n $", "def power(m, n):\n if n == 0:\n return 1\n p = power(m, n // 2)\n if n % 2 == 0:\n return p * p\n else:\n return p * p * m\n\n%%time\np = power(3, 500000)" ]
[ "code", "markdown", "code", "markdown", "code" ]
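The recursive version above needs only O(log n) multiplications. The same square-and-multiply idea also works iteratively, which avoids Python's recursion depth limit for extreme exponents; a minimal sketch:

```python
def power_iter(m, n):
    # iterative square-and-multiply: walk over the binary digits of n,
    # squaring m at each step and multiplying it into the result
    # whenever the current bit is set
    r = 1
    while n > 0:
        if n % 2 == 1:
            r *= m
        m *= m
        n //= 2
    return r

power_iter(2, 3), power_iter(3, 2)
```

This computes the same values as the recursive `power`, bit by bit instead of by halving recursively.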
dwcaraway/intro-to-python-talk
python-intermediate.ipynb
unlicense
[ "Python Course 2: Intermediate Python\nFunctions\nFunctions encapsulate repeatable code. They're defined with the def keyword", "def say_hello():\n print('hello, world!')", "Functions are invoked using parentheses (). Arguments are passed between the parentheses.", "say_hello()\n\ndef hi(name):\n print('hi', name)\n\nhi(\"pythonistas\")", "Functions can use the return keyword to stop execution and send a value back to the caller.", "def double(value):\n return value*2\n\nprint(double(4))", "exercise Write a function, stars that takes a number and returns that number of * (asterisk).\n```python\nprint(stars(5))\n\n```", "\"abcde\"[:2]", "Lists\nThis data type orders elements using a 0-based index.", "empty_list = []\nlist_with_numbers = [0, 1, 2, 3, 4, 5, 6]\nlist_with_mixed = [\"zero\", 1, \"TWO\", 3, 4, \"FIVE\", \"Six\"]", "Any type of element, including functions and other lists, can be stored in a list.", "list_with_lists = [\"this\", \"contains\", \"a\", [\"list\", \"of\", [\"lists\"] ]]\n\ndef some_func(message):\n print('message is ', message)\n\nlist_with_function = [some_func]\n\n# Let's call the function\nlist_with_function[0]('hello from a list')", "Lists are a class of data structure called iterables that allow for easily going over all the elements of the list", "iter_example = ['a', 'b', 'c', 'd', 'e']\n\nfor elem in iter_example:\n print(elem)", "List Comprehension\nLists can be created using list comprehension, which puts for loops inside brackets, capturing the results as a list.\nThe format is [x for x in iter_example]\nSlicing\nSlicing allows for the selection of elements from a list.\npython\na[start:end] # items start through end-1\na[start:] # items start through the rest of the array\na[:end] # items from the beginning through end-1\na[:] # a copy of the whole array\nThere is also the step value, which can be used with any of the above:\npython\na[start:end:step] # start through not past end, by step\nThe key point to remember is that the
:end value represents the first value that is not in the selected slice. So, the difference between end and start is the number of elements selected (if step is 1, the default).\nThe other feature is that start or end may be a negative number, which means it counts from the end of the array instead of the beginning. So:\npython\na[-1] # last item in the array\na[-2:] # last two items in the array\na[:-2] # everything except the last two items\nSimilarly, step may be a negative number:\npython\na[::-1] # all items in the array, reversed\na[1::-1] # the first two items, reversed\na[:-3:-1] # the last two items, reversed\na[-3::-1] # everything except the last two items, reversed\nexercise Write a function that takes a string and returns the reverse of it.\npython\nreverse('abcde') # returns `edcba`\nSets\nSets are like lists except they cannot have duplicate elements.", "list_with_dupes = [1, 2, 3, 3, 4, 5]\n\nprint(set(list_with_dupes)) #removes duplicate 3", "Dictionaries (dicts)\nDictionaries store key and value pairs. They let you look up a stored value by its key.\nThey're constructed using the curly brace {}", "a_dict = {'some': 'value', 'another': 'value'}\n\nprint(a_dict['another'])", "The dictionary works by calculating the hash of the key. You can see this using the __hash__ function.", "'another'.__hash__()", "Any value that is hashable can be used as a key, including numbers.", "mixed_keys = {4 : 'somevalue'}\nprint(mixed_keys[4])\n\nNew elements will be added or changed just by referencing them. The `del` word will delete entries.\n\nchanging_dict = {'foo': 'bar', 'goner': \"gone soon\"}\n\nchanging_dict['foo'] = 'biv' #update existing value\nchanging_dict['notfound'] = 'found now!'
#insert new value\ndel changing_dict['goner'] #removes key / value\n\nprint(changing_dict)\n\nYou can iterate over keys in the dictionary using a for loop\n\nstarter_dictionary = {'a': 1, 'b': 2, 'c': 3}\n\nfor key in starter_dictionary:\n print(key)", "You can iterate over the values using the values() method.", "for val in starter_dictionary.values():\n print(val)", "We'll cover these if there's time\nClasses\nClasses encapsulate data and functions. They're a handy abstraction in many languages such as Java and C++. They're used in python but not as often as functions. Create them using the class keyword.", "class ExampleClass:\n pass\n\nex = ExampleClass() #construct an instance of the ExampleClass", "Unit Testing", "# foo.py\ndef foo():\n return 42\n\n# test_foo.py\nimport unittest\n\nclass TestFoo(unittest.TestCase):\n \n def test_foo_returns_42(self):\n expected = 42\n actual = foo()\n self.assertTrue(expected == actual)\n\n# Below doesn't work in a jupyter notebook \n# if __name__ == '__main__':\n# unittest.main()\n\nif __name__ == '__main__':\n unittest.main(argv=['first-arg-is-ignored'], exit=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
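For reference, one possible set of solutions to the two exercises in the intermediate-Python notebook above (`stars` and `reverse`), using the string repetition and negative-step slicing it introduces:

```python
def stars(n):
    # string repetition: n copies of '*'
    return '*' * n

def reverse(s):
    # slice with step -1, as in the a[::-1] example from the slicing section
    return s[::-1]

print(stars(5))          # *****
print(reverse('abcde'))  # edcba
```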
kylemede/DS-ML-sandbox
KaggelChallenges/titanic/explore.ipynb
gpl-3.0
[ "My playing with the Kaggle titanic challenge.\nI COPIED THE INITIAL CODE and got lots of the ideas for this first Kaggle adventure from here.\nI will later compact the important stuff from here into a kernel on my Kaggle account.", "import pandas as pd \nfrom pandas import Series, DataFrame\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\nsns.set_style(\"whitegrid\")\n\n# machine learning\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import SVC, LinearSVC\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn import cross_validation\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.feature_selection import SelectFromModel\nfrom sklearn.grid_search import GridSearchCV\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.feature_selection import SelectKBest\nfrom sklearn.cross_validation import StratifiedKFold\nfrom sklearn.ensemble.gradient_boosting import GradientBoostingClassifier\nfrom sklearn.ensemble import ExtraTreesClassifier\n\nimport xgboost as xgb\nfrom xgboost import plot_importance\n\n\ntrain_df = pd.read_csv(\"train.csv\",dtype={\"Age\":np.float64},)\n#train_df.head()\n\n# find how many ages\ntrain_df['Age'].count()\n\n# how many ages are NaN?\ntrain_df['Age'].isnull().sum()\n\n# plot ages of training data set, with NaN's removed\nif False:\n train_df['Age'].dropna().astype(int).hist(bins=70)\nprint 'Mean age = ',train_df['Age'].dropna().astype(int).mean()", "Let's see where they got on", "#train_df['Embarked'].head()\n\n#train_df.info()\n\ntrain_df['Embarked'].isnull().sum()\n\ntrain_df[\"Embarked\"].count()\n\nif False:\n sns.countplot(x=\"Embarked\",data=train_df)\n\nif False:\n
sns.countplot(x='Survived',hue='Embarked',data=train_df,order=[0,1])", "OK, so clearly there were more people who got on at S, and it seems their survival is disproportional. Let's check that.", "if False:\n embark_survive_perc = train_df[[\"Embarked\", \"Survived\"]].groupby(['Embarked'],as_index=False).mean()\n sns.barplot(x='Embarked', y='Survived', data=embark_survive_perc,order=['S','C','Q'])", "Interesting, actually those from C had higher rate of survival. So, knowing more people from your home town didn't help.\nNext, did how much they paid have an effect?", "if False:\n train_df['Fare'].astype(int).plot(kind='hist',bins=100, xlim=(0,50))\n\n# get fare for survived & didn't survive passengers \nif False:\n fare_not_survived = train_df[\"Fare\"].astype(int)[train_df[\"Survived\"] == 0]\n fare_survived = train_df[\"Fare\"].astype(int)[train_df[\"Survived\"] == 1]\n\n # get average and std for fare of survived/not survived passengers\n avgerage_fare = DataFrame([fare_not_survived.mean(), fare_survived.mean()])\n std_fare = DataFrame([fare_not_survived.std(), fare_survived.std()])\n\n avgerage_fare.index.names = std_fare.index.names = [\"Survived\"]\n avgerage_fare.plot(yerr=std_fare,kind='bar',legend=False)", "Before digging into how the ages factor in, let's take the advice of others and replace NaN's with random values", "import scipy.stats as stats\n\n# column 'Age' has some NaN values\n# A simple approximation of the distribution of ages is a gaussian, but this is not commonly accurate.\n# lets make a vector of random ages centered on the mean, with a width of the std\nlower, upper = train_df['Age'].min(), train_df['Age'].max()\nmu, sigma = train_df[\"Age\"].mean(), train_df[\"Age\"].std()\n\n# number of rows\nn = train_df.shape[0]\n\nprint 'max: ',train_df['Age'].max()\nprint 'min: ',train_df['Age'].min()\n\n# vector of random values using the truncated normal distribution. 
\nX = stats.truncnorm((lower - mu) / sigma, (upper - mu) / sigma, loc=mu, scale=sigma)\nrands = X.rvs(n)\n\n# get the indexes of the elements in the original array that are NaN\nidx = np.isfinite(train_df['Age'])\n\n# use the indexes to replace the NON-NaNs in the random array with the good values from the original array\nrands[idx.values] = train_df[idx]['Age'].values\n\n## At this point rands is now the cleaned column of data we wanted, so push it in to the original df\ntrain_df['Age'] = rands\n\n\"\"\"\n## we will make a new column with Nan's replaced, then push that into the original df\nn = train_df.shape[0] # number of rows\n#randy = np.random.randint(average_age_train - std_age_train, average_age_train + std_age_train, size = n)\n# draw from a gaussian instead of simple uniform\n# note this uses a 'standard gauss' and that tneeds to have its var and mean shifted\nrandy = np.random.randn(n)*std_age_train + average_age_train\nidx = np.isfinite(train_df['Age']) # gives a boolean index for the NaNs in the df's column\nrandy[idx.values] = train_df[idx]['Age'].values ## idexing the values of randy with this\n#now have updated column, next push into original df\ntrain_df['Age'] = randy\n\"\"\"\n\nprint 'After this gaussian replacment, there are: ',train_df['Age'].isnull().sum()\nprint 'max: ',train_df['Age'].max()\nprint 'min: ',train_df['Age'].min()\n\n# plot new Age Values\nif False:\n train_df['Age'].hist(bins=70)\n# Compare this to that from a few cells up for the raw ages with the NaN's dropped. 
Not much different actually.", "lets perform the same NaN replacement for the 'Age' with the test data as well", "## let's pull in the test data\ntest_df = pd.read_csv(\"test.csv\",dtype={\"Age\":np.float64},)\n#test_df.head()\n\n#### Do the same for the test data\n# column 'Age' has some NaN values\n# A simple approximation of the distribution of ages is a gaussian, but this is not commonly accurate.\n# lets make a vector of random ages centered on the mean, with a width of the std\nlower, upper = test_df['Age'].min(), test_df['Age'].max()\nmu, sigma = test_df[\"Age\"].mean(), test_df[\"Age\"].std()\n\n# number of rows\nn = test_df.shape[0]\n\nprint 'max: ',test_df['Age'].max()\nprint 'min: ',test_df['Age'].min()\n\n# vector of random values using the truncated normal distribution. \nX = stats.truncnorm((lower - mu) / sigma, (upper - mu) / sigma, loc=mu, scale=sigma)\nrands = X.rvs(n)\n\n# get the indexes of the elements in the original array that are NaN\nidx = np.isfinite(test_df['Age'])\n\n# use the indexes to replace the NON-NaNs in the random array with the good values from the original array\nrands[idx.values] = test_df[idx]['Age'].values\n\n## At this point rands is now the cleaned column of data we wanted, so push it in to the original df\ntest_df['Age'] = rands\n\n#test_df['Age'].hist(bins=70)\n\n## Let's make a couple nice plots of survival vs age\n# peaks for survived/not survived passengers by their age\nif False:\n facet = sns.FacetGrid(train_df, hue=\"Survived\",aspect=4)\n #facet.map(sns.kdeplot,'Age',shade= True) # This keeps crashing the kernal, but I don't know why!!!!!!!!!!\n facet.set(xlim=(0, train_df['Age'].astype(int).max()))\n facet.add_legend()\n\n\n# average survived passengers by age\nif False:\n fig, axis1 = plt.subplots(1,1,figsize=(18,4))\n average_age = train_df[[\"Age\", \"Survived\"]].groupby(['Age'],as_index=False).mean()\n sns.barplot(x='Age', y='Survived', data=average_age)\n print 'max: ',train_df['Age'].astype(int).max()\n 
print 'min: ',train_df['Age'].astype(int).min()\n\n# Cabin\nif False:\n # It has a lot of NaN values, so it won't cause a remarkable impact on prediction\n train_df.drop(\"Cabin\",axis=1,inplace=True)\n test_df.drop(\"Cabin\",axis=1,inplace=True)\n## OR convert NaNs to 'U' meaning 'Unknown' and map all to new columns\nif True:\n # Code based on that here: http://ahmedbesbes.com/how-to-score-08134-in-titanic-kaggle-challenge.html\n # replacing missing cabins with U (for Uknown)\n train_df.Cabin.fillna('U',inplace=True)\n # mapping each Cabin value with the cabin letter\n train_df['Cabin'] = train_df['Cabin'].map(lambda c : c[0])\n # dummy encoding ...\n cabin_dummies = pd.get_dummies(train_df['Cabin'],prefix='Cabin')\n train_df = pd.concat([train_df,cabin_dummies],axis=1)\n train_df.drop('Cabin',axis=1,inplace=True)\n \n # replacing missing cabins with U (for Uknown)\n test_df.Cabin.fillna('U',inplace=True)\n # mapping each Cabin value with the cabin letter\n test_df['Cabin'] = test_df['Cabin'].map(lambda c : c[0])\n # dummy encoding ...\n cabin_dummies = pd.get_dummies(test_df['Cabin'],prefix='Cabin')\n test_df = pd.concat([test_df,cabin_dummies],axis=1)\n test_df.drop('Cabin',axis=1,inplace=True)\n \n\n#train_df.head()\n#test_df.head()\n\n#train_df.head()", "This function introduces 4 new features:\n\nFamilySize : the total number of relatives including the passenger (him/her)self.\nSigleton : a boolean variable that describes families of size = 1\nSmallFamily : a boolean variable that describes families of 2 <= size <= 4\nLargeFamily : a boolean variable that describes families of 5 < size", "# Family\n\n# Code based on that here: http://ahmedbesbes.com/how-to-score-08134-in-titanic-kaggle-challenge.html\n# introducing a new feature : the size of families (including the passenger)\ntrain_df['FamilySize'] = train_df['Parch'] + train_df['SibSp'] + 1\n# introducing other features based on the family size\ntrain_df['Singleton'] = train_df['FamilySize'].map(lambda s : 
1 if s == 1 else 0)\ntrain_df['SmallFamily'] = train_df['FamilySize'].map(lambda s : 1 if 2<=s<=4 else 0)\ntrain_df['LargeFamily'] = train_df['FamilySize'].map(lambda s : 1 if 5<=s else 0)\n\n# Code based on that here: http://ahmedbesbes.com/how-to-score-08134-in-titanic-kaggle-challenge.html\n# introducing a new feature : the size of families (including the passenger)\ntest_df['FamilySize'] = test_df['Parch'] + test_df['SibSp'] + 1\n# introducing other features based on the family size\ntest_df['Singleton'] = test_df['FamilySize'].map(lambda s : 1 if s == 1 else 0)\ntest_df['SmallFamily'] = test_df['FamilySize'].map(lambda s : 1 if 2<=s<=4 else 0)\ntest_df['LargeFamily'] = test_df['FamilySize'].map(lambda s : 1 if 5<=s else 0)\n\nif False:\n\n # Instead of having two columns Parch & SibSp, \n # we can have only one column represent if the passenger had any family member aboard or not,\n # Meaning, if having any family member(whether parent, brother, ...etc) will increase chances of Survival or not.\n train_df['Family'] = train_df[\"Parch\"] + train_df[\"SibSp\"]\n train_df['Family'].loc[train_df['Family'] > 0] = 1\n train_df['Family'].loc[train_df['Family'] == 0] = 0\n\n test_df['Family'] = test_df[\"Parch\"] + test_df[\"SibSp\"]\n test_df['Family'].loc[test_df['Family'] > 0] = 1\n test_df['Family'].loc[test_df['Family'] == 0] = 0\n\n # drop Parch & SibSp\n train_df = train_df.drop(['SibSp','Parch'], axis=1)\n test_df = test_df.drop(['SibSp','Parch'], axis=1)\n\n# plot\nif False:\n fig, (axis1,axis2) = plt.subplots(1,2,sharex=True,figsize=(10,5))\n\n # sns.factorplot('Family',data=train_df,kind='count',ax=axis1)\n sns.countplot(x='Family', data=train_df, order=[1,0], ax=axis1)\n\n # average of survived for those who had/didn't have any family member\n family_perc = train_df[[\"Family\", \"Survived\"]].groupby(['Family'],as_index=False).mean()\n sns.barplot(x='Family', y='Survived', data=family_perc, order=[1,0], ax=axis2)\n\n axis1.set_xticklabels([\"With 
Family\",\"Alone\"], rotation=0)\n\n# Sex\n\n# As we see, children(age < ~16) on aboard seem to have a high chances for Survival.\n# So, we can classify passengers as males, females, and child\ndef get_person(passenger):\n age,sex = passenger\n return 'child' if age < 16 else sex\n \ntrain_df['Person'] = train_df[['Age','Sex']].apply(get_person,axis=1)\ntest_df['Person'] = test_df[['Age','Sex']].apply(get_person,axis=1)\n\n# No need to use Sex column since we created Person column\ntrain_df.drop(['Sex'],axis=1,inplace=True)\ntest_df.drop(['Sex'],axis=1,inplace=True)\n\n# create dummy variables for Person column\nperson_dummies_titanic = pd.get_dummies(train_df['Person'])\nperson_dummies_titanic.columns = ['Child','Female','Male']\n#person_dummies_titanic.drop(['Male'], axis=1, inplace=True)\n\nperson_dummies_test = pd.get_dummies(test_df['Person'])\nperson_dummies_test.columns = ['Child','Female','Male']\n#person_dummies_test.drop(['Male'], axis=1, inplace=True)\n\ntrain_df = train_df.join(person_dummies_titanic)\ntest_df = test_df.join(person_dummies_test)\nif False:\n fig, (axis1,axis2) = plt.subplots(1,2,figsize=(10,5))\n\n # sns.factorplot('Person',data=train_df,kind='count',ax=axis1)\n sns.countplot(x='Person', data=train_df, ax=axis1)\n\n # average of survived for each Person(male, female, or child)\n person_perc = train_df[[\"Person\", \"Survived\"]].groupby(['Person'],as_index=False).mean()\n sns.barplot(x='Person', y='Survived', data=person_perc, ax=axis2, order=['male','female','child'])\n\ntrain_df.drop(['Person'],axis=1,inplace=True)\ntest_df.drop(['Person'],axis=1,inplace=True)", "Not surprising, woman and children had higher survival rates.", "# Pclass\n\n# sns.factorplot('Pclass',data=titanic_df,kind='count',order=[1,2,3])\nif False:\n sns.factorplot('Pclass','Survived',order=[1,2,3], data=train_df,size=5)\n\n# create dummy variables for Pclass column\npclass_dummies_titanic = pd.get_dummies(train_df['Pclass'])\npclass_dummies_titanic.columns = 
['Class_1','Class_2','Class_3']\n#pclass_dummies_titanic.drop(['Class_3'], axis=1, inplace=True)\n\npclass_dummies_test = pd.get_dummies(test_df['Pclass'])\npclass_dummies_test.columns = ['Class_1','Class_2','Class_3']\n#pclass_dummies_test.drop(['Class_3'], axis=1, inplace=True)\n\ntrain_df.drop(['Pclass'],axis=1,inplace=True)\ntest_df.drop(['Pclass'],axis=1,inplace=True)\n\ntrain_df = train_df.join(pclass_dummies_titanic)\ntest_df = test_df.join(pclass_dummies_test)\n\n# Ticket\n# Code based on that here: http://ahmedbesbes.com/how-to-score-08134-in-titanic-kaggle-challenge.html\n# a function that extracts each prefix of the ticket, returns 'XXX' if no prefix (i.e the ticket is a digit)\ndef cleanTicket(ticket):\n ticket = ticket.replace('.','')\n ticket = ticket.replace('/','')\n ticket = ticket.split()\n ticket = map(lambda t : t.strip() , ticket)\n ticket = list(filter(lambda t : not t.isdigit(), ticket))\n if len(ticket) > 0:\n return ticket[0]\n else: \n return 'XXX'\n \ntrain_df['Ticket'] = train_df['Ticket'].map(cleanTicket)\ntickets_dummies = pd.get_dummies(train_df['Ticket'],prefix='Ticket')\ntrain_df = pd.concat([train_df, tickets_dummies],axis=1)\ntrain_df.drop('Ticket',inplace=True,axis=1)\n\ntest_df['Ticket'] = test_df['Ticket'].map(cleanTicket)\ntickets_dummies = pd.get_dummies(test_df['Ticket'],prefix='Ticket')\ntest_df = pd.concat([test_df, tickets_dummies],axis=1)\ntest_df.drop('Ticket',inplace=True,axis=1)\n\ntrain_df.head()\n\n# Title\n# a map of more aggregated titles\nTitle_Dictionary = {\n \"Capt\": \"Officer\",\n \"Col\": \"Officer\",\n \"Major\": \"Officer\",\n \"Jonkheer\": \"Royalty\",\n \"Don\": \"Royalty\",\n \"Sir\" : \"Royalty\",\n \"Dr\": \"Officer\",\n \"Rev\": \"Officer\",\n \"the Countess\":\"Royalty\",\n \"Dona\": \"Royalty\",\n \"Mme\": \"Mrs\",\n \"Mlle\": \"Miss\",\n \"Ms\": \"Mrs\",\n \"Mr\" : \"Mr\",\n \"Mrs\" : \"Mrs\",\n \"Miss\" : \"Miss\",\n \"Master\" : \"Master\",\n \"Lady\" : \"Royalty\"\n\n }\n# we extract the title 
from each name\ntrain_df['Title'] = train_df['Name'].map(lambda name:name.split(',')[1].split('.')[0].strip())\n# we map each title\ntrain_df['Title'] = train_df.Title.map(Title_Dictionary)\n#train_df.head()\n# we extract the title from each name\ntest_df['Title'] = test_df['Name'].map(lambda name:name.split(',')[1].split('.')[0].strip())\n# we map each title\ntest_df['Title'] = test_df.Title.map(Title_Dictionary)\n#test_df.head()\n\n# encoding in dummy variable\ntitles_dummies = pd.get_dummies(train_df['Title'],prefix='Title')\ntrain_df = pd.concat([train_df,titles_dummies],axis=1)\ntitles_dummies = pd.get_dummies(test_df['Title'],prefix='Title')\ntest_df = pd.concat([test_df,titles_dummies],axis=1)\n# removing the title variable\ntrain_df.drop('Title',axis=1,inplace=True)\ntest_df.drop('Title',axis=1,inplace=True)\n\n# Convert categorical column values to ordinal for model fitting\nif False:\n le_title = LabelEncoder()\n # To convert to ordinal:\n train_df.Title = le_title.fit_transform(train_df.Title)\n test_df.Title = le_title.fit_transform(test_df.Title)\n # To convert back to categorical:\n #train_df.Title = le_title.inverse_transform(train_df.Title)\n #train_df.head()\n #test_df.head()", "Also unsurprising. 
The higher the booking class, the higher the chances of survival.\n\n\n\nNow let's get to actually training and building a model to make predictions with!\n\n\n\nRemaining problems with the raw data\n\na couple of NaNs in 'Embarked': fill them with the most common value (\"S\"), then dummy-encode the column\n'Name' strings have already had their useful part (the title) extracted, so drop the column\nreplace the missing 'Fare' value in the test set with the mean\n'Ticket' was already converted to dummy-encoded prefixes above\n'PassengerId' is kept, since it is needed for the submission file", "#train_df.drop(['Embarked'], axis=1,inplace=True)\n#test_df.drop(['Embarked'], axis=1,inplace=True)\n# only for test_df, since there is a missing \"Fare\" values\n# could use mean or median here.\ntest_df[\"Fare\"].fillna(test_df[\"Fare\"].mean(), inplace=True)\ntrain_df.drop(['Name'], axis=1,inplace=True)\ntest_df.drop(['Name'], axis=1,inplace=True)\n\n#train_df.drop(['Ticket'], axis=1,inplace=True)\n#test_df.drop(['Ticket'], axis=1,inplace=True)\n\n#train_df.drop(['PassengerId'], axis=1,inplace=True)\n#test_df.drop(['PassengerId'], axis=1,inplace=True)\n\n# only in titanic_df, fill the two missing values with the most occurred value, which is \"S\".\ntrain_df[\"Embarked\"] = train_df[\"Embarked\"].fillna(\"S\")\n# Either to consider Embarked column in predictions,\n# and remove \"S\" dummy variable, \n# and leave \"C\" & \"Q\", since they seem to have a good rate for Survival.\n\n# OR, don't create dummy variables for Embarked column, just drop it, \n# because logically, Embarked doesn't seem to be useful in prediction.\n\nembark_dummies_train = pd.get_dummies(train_df['Embarked'])\n#embark_dummies_train.drop(['S'], axis=1, inplace=True)\n\nembark_dummies_test = pd.get_dummies(test_df['Embarked'])\n#embark_dummies_test.drop(['S'], axis=1, inplace=True)\n\ntrain_df = train_df.join(embark_dummies_train)\ntest_df = test_df.join(embark_dummies_test)\n\ntrain_df.drop(['Embarked'], axis=1,inplace=True)\ntest_df.drop(['Embarked'], axis=1,inplace=True)", "With the data cleaned up, scale all features (except PassengerId) to the [0, 1] range", "## Scale all 
features except passengerID\nfeatures = list(train_df.columns)\nfeatures.remove('PassengerId')\ntrain_df[features] = train_df[features].apply(lambda x: x/x.max(), axis=0)\n\nfeatures = list(test_df.columns)\nfeatures.remove('PassengerId')\ntest_df[features] = test_df[features].apply(lambda x: x/x.max(), axis=0)\n\ntrain_df.head()\n\ntest_df.head()", "Match up the dataframe columns by removing extras that are not present in both sets.", "## Remove extra columns in training DF that are not in test DF\ntrain_cs = list(train_df.columns)\ntrain_cs.remove('Survived')\ntest_cs = list(test_df.columns)\nfor c in train_cs:\n if c not in test_cs:\n print(repr(c) + ' not in test columns, so removing it from training df')\n train_df.drop([c], axis=1,inplace=True)\nfor c in test_cs:\n if c not in train_cs:\n print(repr(c) + ' not in training columns, so removing it from test df')\n test_df.drop([c], axis=1,inplace=True)\n\nif False:\n print('\\nFor train_df:')\n for column in train_df:\n print(\"# NaNs in column '\" + column + \"' are: \" + str(train_df[column].isnull().sum()))\n print('min: ', train_df[column].min())\n print('max: ', train_df[column].max())\n\n print('\\nFor test_df:')\n for column in test_df:\n print(\"# NaNs in column '\" + column + \"' are: \" + str(test_df[column].isnull().sum()))\n print('min: ', test_df[column].min())\n print('max: ', test_df[column].max())\n\n# define training and testing sets\nX_train = train_df.drop(\"Survived\",axis=1)\nY_train = train_df[\"Survived\"]\nX_test = test_df.copy()\n\nX_train.head()\n\nX_test.head()", "Feature Selection", "clf = ExtraTreesClassifier(n_estimators=200)\nclf = clf.fit(X_train, Y_train)\nfeatures = pd.DataFrame()\nfeatures['feature'] = X_train.columns\nfeatures['importance'] = clf.feature_importances_\nfeatures.sort_values(['importance'], ascending=False)", "Select top features for use in models", "model = SelectFromModel(clf, prefit=True)\nX_train_new = model.transform(X_train)\nX_train_new.shape\n\nX_test_new = model.transform(X_test)\nX_test_new.shape\n\n# 
Logistic Regression\nlogreg = LogisticRegression()\n\nlogreg.fit(X_train_new, Y_train)\n\nY_pred = logreg.predict(X_test_new)\n\nprint('standard score ', logreg.score(X_train_new, Y_train))\nprint('cv score ',np.mean(cross_val_score(logreg, X_train_new, Y_train, cv=10)))\n\n\n# Support Vector Machines\nsvc = SVC()\n\nsvc.fit(X_train_new, Y_train)\n\nY_pred = svc.predict(X_test_new)\n\n#svc.score(X_train, Y_train)\nprint('standard score ', svc.score(X_train_new, Y_train))\nprint('cv score ',np.mean(cross_val_score(svc, X_train_new, Y_train, cv=10)))\n\n\n# Random Forests\nrandom_forest = RandomForestClassifier(n_estimators=300)\nrandom_forest.fit(X_train_new, Y_train)\nY_pred = random_forest.predict(X_test_new)\nprint('standard score ', random_forest.score(X_train_new, Y_train))\nprint('cv score ',np.mean(cross_val_score(random_forest, X_train_new, Y_train, cv=10)))\n\n\n\nacc = []\nmx_v = 0\nmx_e = 0\nests = range(10,500,10)\nif False:\n for est in ests:\n random_forest = RandomForestClassifier(n_estimators=est)\n random_forest.fit(X_train_new, Y_train)\n Y_pred = random_forest.predict(X_test_new)\n #predictions = model.predict(X_test)\n #accuracy = accuracy_score(y_test, predictions)\n accuracy = np.mean(cross_val_score(random_forest, X_train_new, Y_train, cv=5))* 100.0\n acc.append(accuracy)\n if acc[-1]>mx_v:\n mx_v = acc[-1]\n mx_e = est\n print(\"maxes were: \",(mx_e,mx_v))\n \n fig = plt.figure(figsize=(7,5)) \n subPlot = fig.add_subplot(111)\n subPlot.plot(ests,acc,linewidth=3)\n\n# From Comment by 'Ewald' at:\n# https://www.kaggle.com/c/job-salary-prediction/forums/t/4000/how-to-add-crossvalidation-to-scikit-randomforestregressor\nif True:\n num_folds = 10\n num_instances = len(X_train_new)\n seed = 7\n num_trees = 300\n max_features = 'auto'\n kfold = cross_validation.KFold(n=num_instances, n_folds=num_folds, random_state=seed)\n model = RandomForestClassifier(n_estimators=num_trees, max_features=max_features,\n min_samples_leaf=50)\n results= 
cross_validation.cross_val_score(model, X_train_new, Y_train, cv=kfold, n_jobs=-1)\n print(results.mean())\n\n# Another form of K-fold and hyperparameter tuning from:\n#http://ahmedbesbes.com/how-to-score-08134-in-titanic-kaggle-challenge.html\nforest = RandomForestClassifier(max_features='sqrt')\n\nparameter_grid = {\n 'max_depth' : [3,4,5,6,7],\n 'n_estimators': [50,100,130,175,200,210,240,250],\n 'criterion': ['gini','entropy']\n }\n\ncross_validation = StratifiedKFold(Y_train, n_folds=5)\nimport timeit\ntic=timeit.default_timer()\ngrid_search = GridSearchCV(forest,\n param_grid=parameter_grid,\n cv=cross_validation)\n\ngrid_search.fit(X_train_new, Y_train)\n\nprint('Best score: {}'.format(grid_search.best_score_))\nprint('Best parameters: {}'.format(grid_search.best_params_))\ntoc = timeit.default_timer()\nprint(\"It took: \",toc-tic)\n\n# K Nearest Neighbors \nknn = KNeighborsClassifier(n_neighbors = 50)\n\nknn.fit(X_train_new, Y_train)\n\nY_pred = knn.predict(X_test_new)\n\n#knn.score(X_train_new, Y_train)\n\nprint('standard score ', knn.score(X_train_new, Y_train))\nprint('cv score ',np.mean(cross_val_score(knn, X_train_new, Y_train, cv=10)))\n\n\n# Gaussian Naive Bayes\ngaussian = GaussianNB()\n\ngaussian.fit(X_train_new, Y_train)\n\nY_pred = gaussian.predict(X_test_new)\n\n#gaussian.score(X_train, Y_train)\nprint('standard score ', gaussian.score(X_train_new, Y_train))\nprint('cv score ',np.mean(cross_val_score(gaussian, X_train_new, Y_train, cv=10)))\n\n\n# get Correlation Coefficient for each feature using Logistic Regression\ncoeff_df = DataFrame(train_df.columns.delete(0))\ncoeff_df.columns = ['Features']\ncoeff_df[\"Coefficient Estimate\"] = pd.Series(logreg.coef_[0])\n\n# preview\n#coeff_df", "<font color='red'> NEXT: TRY TO PERFORM BOOSTING WITH SKLEARN, NOT XGBoost to see how it changes above results. 
THEN TRY TO BUILD SINGLE LAYER NEURAL NETWORKS TO SEE HOW THEY PERFORM, THEN TRY MULTI-LAYER NEURAL NETWORKS.</font>\n\n\n\nXGBoost stuff", "if False:\n submission = pd.DataFrame({\n \"PassengerId\": test_df[\"PassengerId\"],\n \"Survived\": Y_pred\n })\n submission.to_csv('submission.csv', index=False)\n\nif False:\n ### Using XGboost\n #X_train = train_df.drop(\"Survived\",axis=1)\n #train_X = train_df.drop(\"Survived\",axis=1).as_matrix()\n X_train_new\n #X_train_new, Y_train, X_test_new\n #train_y = train_df[\"Survived\"]\n Y_train\n #test_X = test_df.drop(\"PassengerId\",axis=1).copy().as_matrix()\n X_test_new\n model = xgb.XGBClassifier(max_depth=10, n_estimators=300, learning_rate=0.05)\n model.fit(X_train_new, Y_train)\n predictions = model.predict(X_test_new)\n # plot feature importance\n plot_importance(model)\n plt.show()\n #X_train, X_test, y_train, y_test = train_test_split(X_train_new, Y_train, test_size=0.33)\n #accuracy = accuracy_score(y_test, predictions)\n #print(\"Accuracy: %.2f%%\" % (accuracy * 100.0))\n\n\n# basic try at iterative training with XGboost\n#train_X = train_df.drop(\"Survived\",axis=1).as_matrix()\n#train_y = train_df[\"Survived\"]\n#test_X = test_df.drop(\"PassengerId\",axis=1).copy().as_matrix()\n# fit model on all training data\nacc = []\nmx_v = 0\nmx_e = 0\nests = range(10,500,10)\nif False:\n for est in ests:\n #print est\n model = xgb.XGBClassifier(max_depth=5, n_estimators=ests, learning_rate=0.05)\n X_train, X_test, y_train, y_test = train_test_split(X_train_new, Y_train, test_size=0.33)#, random_state=7)\n model.fit(X_train, y_train)\n predictions = model.predict(X_test)\n accuracy = accuracy_score(y_test, predictions)\n accuracy *= 100.0\n acc.append(accuracy)\n #print(\"Accuracy: %.2f%%\" % (accuracy))\n if acc[-1]>mx_v:\n mx_v = acc[-1]\n mx_e = est\n print(\"maxes were: \",(mx_e,mx_v))\n\n fig = plt.figure(figsize=(7,5)) \n subPlot = fig.add_subplot(111)\n subPlot.plot(ests,acc,linewidth=3)\n\n\nmodel = 
xgb.XGBClassifier(max_depth=5, n_estimators=300, learning_rate=0.05)\nfor i in range(10):\n print(\"Iteration: \" + str(i))\n # split data into train and test sets\n X_train, X_test, y_train, y_test = train_test_split(X_train_new, Y_train, test_size=0.33)#, random_state=7)\n model.fit(X_train, y_train)\n predictions = model.predict(X_test)\n accuracy = accuracy_score(y_test, predictions)\n print(\"Accuracy: %.2f%%\" % (accuracy * 100.0))\n\nprint(\"After rounds of training. Results on original training data:\")\npredictions = model.predict(X_train_new)\naccuracy = accuracy_score(Y_train, predictions)\nprint(\"Accuracy: %.2f%%\" % (accuracy * 100.0))\n\n\n# use feature importance for feature selection\nfrom numpy import loadtxt\nfrom numpy import sort\nfrom xgboost import XGBClassifier\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.feature_selection import SelectFromModel\nimport timeit\ntic=timeit.default_timer()\n# load data\n#dataset = loadtxt('pima-indians-diabetes.csv', delimiter=\",\")\n# split data into X and y\nX = X_train_new\nY = Y_train\n# split data into train and test sets\nX_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=7)\n# fit model on all training data\nmodel = XGBClassifier(max_depth=10, nthread=100, n_estimators=300, learning_rate=0.05)\nmodel.fit(X_train, y_train)\n# make predictions for test data and evaluate\ny_pred = model.predict(X_test)\npredictions = [round(value) for value in y_pred]\naccuracy = accuracy_score(y_test, predictions)\nprint(\"Accuracy: %.2f%%\" % (accuracy * 100.0))\ntoc = timeit.default_timer()\nprint(\"It took: \",toc-tic)\n\n# Fit model using each importance as a threshold\nif False:\n thresholds = sort(model.feature_importances_)\n for thresh in thresholds:\n # select features using threshold\n selection = SelectFromModel(model, threshold=thresh, prefit=True)\n select_X_train = selection.transform(X_train)\n # train model\n 
selection_model = XGBClassifier(max_depth=10, nthread=100, n_estimators=300, learning_rate=0.05)\n selection_model.fit(select_X_train, y_train)\n # eval model\n select_X_test = selection.transform(X_test)\n y_pred = selection_model.predict(select_X_test)\n predictions = [round(value) for value in y_pred]\n accuracy = accuracy_score(y_test, predictions)\n print(\"Thresh=%.3f, n=%d, Accuracy: %.2f%%\" % (thresh, select_X_train.shape[1], accuracy*100.0))\n\ncv_params = {'max_depth': [3], 'min_child_weight': [1]}\nind_params = {'learning_rate': 0.05, 'n_estimators': 100, 'nthread':100, 'seed':0, 'subsample': 0.8, 'colsample_bytree': 0.8, \n 'objective': 'binary:logistic'}\noptimized_GBM = GridSearchCV(xgb.XGBClassifier(**ind_params), cv_params, \n scoring = 'accuracy', cv = 2, n_jobs = -1) \n\nimport timeit\ntic=timeit.default_timer()\n#X_train_new, Y_train, X_test_new\n\n#optimized_GBM.fit(X_train_new, Y_train)\n\n#optimized_GBM.grid_scores_\ntoc = timeit.default_timer()\nprint(\"It took: \",toc-tic)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
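The family-size features defined in the notebook above reduce to a simple mapping; here is a plain-Python sketch of the same thresholds (the helper name `family_buckets` is made up for illustration, not part of the notebook):

```python
def family_buckets(parch, sibsp):
    """Mirror the FamilySize / Singleton / SmallFamily / LargeFamily features."""
    # FamilySize counts the passenger plus all listed relatives
    size = parch + sibsp + 1
    return {
        "FamilySize": size,
        "Singleton": 1 if size == 1 else 0,       # travelling alone
        "SmallFamily": 1 if 2 <= size <= 4 else 0,  # family of 2-4
        "LargeFamily": 1 if size >= 5 else 0,       # family of 5+
    }

print(family_buckets(0, 0))  # a passenger with no relatives aboard
print(family_buckets(2, 3))  # 2 parents/children + 3 siblings/spouses
```

The same logic applied row-wise via `map` on the `FamilySize` column gives the dummy columns used in the notebook.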
tcstewar/testing_notebooks
Sparsity and SSPs.ipynb
gpl-2.0
[ "%matplotlib inline\nimport nengo\nimport numpy as np\nimport scipy.special", "When working with learning and neural networks, it is useful to be able to control the sparsity of the network. In this case, I'll define sparsity to mean \"the proportion of the input space that causes a particular neuron to fire\". So if a neuron has sparsity = 0.1, then it will fire for 10% of its inputs.\nIn the NEF, the only parameter we have to control sparsity is the $x_{intercept}$. This is the value where if $x \\cdot e > x_{intercept}$, then the neuron will fire.\nSo, we need to compute the $x_{intercept}$ that gives a certain level of sparsity.\nTo do this, let's consider a hyperspherical cap: https://en.wikipedia.org/wiki/Spherical_cap#Hyperspherical_cap\n<img src=https://upload.wikimedia.org/wikipedia/commons/thumb/2/2f/Spherical_cap_diagram.tiff/lossless-page1-220px-Spherical_cap_diagram.tiff.png>\nThe volume is $V = {1 \\over 2} C_d r^d I_{2rh-h^2 \\over r^2}({d+1 \\over 2}, {1 \\over 2})$ where $C_d$ is the volume of a unit hyperball of dimension $d$ and $I_x(a,b)$ is the regularized incomplete beta function.\nThe surface area is $A = {1 \\over 2} A_d r^d I_{2rh-h^2 \\over r^2}({d-1 \\over 2}, {1 \\over 2})$ where $A_d$ is the surface area of a unit hypersphere of dimension $d$ and $I_x(a,b)$ is the regularized incomplete beta function.\nWhen we're working with SSPs, we are dealing with points on the surface of a hypersphere of radius 1, and $h=1-x_{intercept}$. 
So, the proportion of points inside the hyperspherical cap is $A/A_d$, which is\n$p = {1 \\over 2} I_{1-{x_{intercept}^2}}({{d-1} \\over 2}, {1 \\over 2})$\nIf we have this proportion, but we want to compute the $x_{intercept}$, then we need to invert this function:\n$2p = I_{1-{x_{intercept}^2}}({{d-1} \\over 2}, {1 \\over 2})$\n$1-{x_{intercept}^2} = I^{-1}_{2p}({{d-1} \\over 2}, {1 \\over 2})$\n${x_{intercept}^2} = 1-I^{-1}_{2p}({{d-1} \\over 2}, {1 \\over 2})$\n$x_{intercept} = \\sqrt{1-I^{-1}_{2p}({{d-1} \\over 2}, {1 \\over 2})}$\nOf course, this formula only works for $p<0.5$. For $p>0.5$ we can apply the same formula to $1-p$ and flip the sign of the result.", "def sparsity_to_x_intercept(d, p):\n sign = 1\n if p > 0.5:\n p = 1.0 - p\n sign = -1\n return sign * np.sqrt(1-scipy.special.betaincinv((d-1)/2.0, 0.5, 2*p))\n", "One thing to note is that if we want the same thing but for volume (i.e. for representing points that are inside the hypersphere), then we can do the same derivation but using the volume formula. The only difference is that instead of d-1, you get d+1. The d+1 version of this formula is what I used for the original derivation of intercepts that led to the CosineSimilarity(D-2) suggestion for initializing intercepts (if you want a uniform distribution of sparsity). For that derivation, see https://github.com/tcstewar/testing_notebooks/blob/master/Intercept%20Distribution%20.ipynb\nLet's test this formula. 
We'll do it by generating a neuron with the intercept computed with this function, and measuring its sparsity by randomly sampling points on the surface of the hypersphere.", "D = 32\nN = 1000000\nsparsity = 0.1\nintercept = sparsity_to_x_intercept(D, sparsity)\n\nmodel = nengo.Network()\nwith model:\n ens = nengo.Ensemble(n_neurons=1, dimensions=D,\n intercepts=[intercept])\nsim = nengo.Simulator(model)\n\n# generate samples just on the surface of the sphere\npts = nengo.dists.UniformHypersphere(surface=True).sample(N, D)\n\n_, A = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=pts)\n \nprint('Computed sparsity:', np.mean(A>0))" ]
[ "code", "markdown", "code", "markdown", "code" ]
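The same formula can also be sanity-checked without nengo, by sampling points on the sphere directly and counting how many land above the intercept along one axis (by symmetry any unit encoder works). This is a minimal sketch assuming numpy and scipy are available:

```python
import numpy as np
from scipy.special import betaincinv

def sparsity_to_x_intercept(d, p):
    # same inversion of the regularized incomplete beta as in the notebook
    sign = 1
    if p > 0.5:
        p = 1.0 - p
        sign = -1
    return sign * np.sqrt(1 - betaincinv((d - 1) / 2.0, 0.5, 2 * p))

d, p = 32, 0.1
rng = np.random.default_rng(0)
pts = rng.standard_normal((200000, d))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # uniform on the unit sphere surface

# a point is "active" when x . e > x_intercept; take e along the first axis
measured = float(np.mean(pts[:, 0] > sparsity_to_x_intercept(d, p)))
print("target sparsity:", p, " measured sparsity:", measured)
```

With 200,000 samples the measured sparsity should agree with the target to within roughly a third decimal place.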
kimkipyo/dss_git_kkp
Python 복습/14일차.금_pandas + SQL_2/14일차_1T_os, shutil 모듈을 이용한 파일,폴더 관리하기 (1) - 폴더 생성 및 제거.ipynb
mit
[ "1T: Managing files and folders with the os and shutil modules (1) - creating and removing folders\n\nRevenue per Film - this one is tricky; we are going to extract it\nTo store and manage the data we will use os and shutil - Python built-in libraries\nWe will save one file per country name (korea.csv / japan.csv...)\nIn 1T we will store, read, write and manage files, folders and compressed files (data) with the os and shutil modules, using Python only", "import os\n# Through the os module\n# we can create and delete files and folders at the operating-system level (e.g. an Ubuntu server)\n# Previously we typed paths in directly, like (\"../../~~\")\n\nos.listdir()\n# lists the files inside the current folder\n\nos.listdir(\"../\")\n\nfor csv_file in os.listdir(\"../\"):\n pass", "To fetch only the files ending with the ipynb extension", "[\n file_name\n for file_name\n in os.listdir(\"../01일차.수_입문/\")\n if file_name.endswith(\".ipynb\") # also used for fetching csv or Excel files\n]", "When building the path to a file\n\nThe path of \"data.csv\" inside the \"data\" folder of the current folder:\n\"data/data.csv\"\n\"./data/data.csv\" (typed in directly as a string) // always use this form\n\"/home/python/notebooks/dobestan/dss/....../data/data.csv\" - absolute path // rarely used", "os.path.join(\"data\", \"data.csv\")\n\nos.curdir\n\nos.path.join(os.curdir, \"data\", \"data.csv\")\n# This gives us the path. From now on we will always build paths this way.\n# os.path.join(os.curdir, \"data\", file_name)", "os.curdir # current directory\nos.path.join(...)\nos.listdir(...)", "os.makedirs(\"data\") # there is a potential problem here.\n\nos.listdir() # creating a folder is the easy part.\n\nos.rmdir(\"data\") # there is a potential problem here.\n\nos.listdir()", "When creating a folder, first check with os.listdir() whether it already exists; if it does, delete it and then create a fresh one.\nWhen deleting a folder, what if...", "os.makedirs(\"data\") # make a simple text file inside the \"data\" folder\n\nos.listdir(os.path.join(os.curdir,\"data\"))\n\nos.rmdir(\"data\")\n# deletion fails if the folder still contains files\n# you would have to look with os.listdir(), recurse into any subfolders,\n# delete the files, come back up to the parent folder, and then rmdir() ...", "When editing or deleting something like a configuration file\ne.g. .bash_profile => .bash_profile.tmp / ... (copy it first, then work on the copy.)\n\nOtherwise there is no way to recover.\n\n\nA flow like the one above is hard to do by hand\n\nSo we will use shutil, a Python built-in module", "import shutil", "os - manages files/folders/the operating system at a low level\nshutil - manages files/folders at a high level", "os.listdir(os.path.join(os.curdir, \"data\"))\n\nshutil.rmtree(os.path.join(os.curdir, \"data\"))\n\nos.listdir(os.path.join(os.curdir, \"data\"))\n\nos.makedirs(os.path.join(os.curdir, \"data\"))\n\nshutil.rmtree(os.path.join(os.curdir, \"data\"))", "1. Create one <country name>.csv file per country => compress into world.tar.gz (world.zip)\n2. Create <continent>/<country name>.csv files => compress into <continent>.tar.gz\ne.g. Angola.csv -- there should be one city-information csv file per country (about 200 files like \"data/world/____.csv\")", "os.makedirs(os.path.join(os.curdir, \"data\"))\nos.makedirs(os.path.join(os.curdir, \"data\", \"world\"))\n# TODO: if the \"data\" / \"world\" folders already exist, delete them first ...", "df.to_csv(os.path.join(, , ___.csv))\ndf.to_csv(\"./data/world/Angola.csv\")", "# Check whether the folder exists and, if so, delete it.\nif \"data\" in os.listdir():\n print(\"Deleting the ./data/ folder.\")\n shutil.rmtree(os.path.join(os.curdir, \"data\"))\n\n# Create a \"data\" folder, and a \"world\" folder inside it.\nprint(\"Creating the ./data/ folder.\")\nos.makedirs(os.path.join(os.curdir, \"data\"))\nos.makedirs(os.path.join(os.curdir, \"data\", \"world\"))\n\nimport pymysql\n\ndb = pymysql.connect(\n \"db.fastcamp.us\",\n \"root\",\n \"dkstncks\",\n \"world\",\n charset='utf8'\n)\n\ncountry_df = pd.read_sql(\"SELECT * FROM Country;\", db)\ncity_df = pd.read_sql(\"SELECT * FROM City;\", db)\n# We have to match City.CountryCode against Country.Code\n# Country.Name must be fetched as well, so it can be used as the file name\n\ncity_groups = city_df.groupby(\"CountryCode\")\n\nfor index, row in country_df.iterrows():\n country_code = row[\"Code\"]\n country_name = row[\"Name\"]\n \n city_df = city_groups.get_group(country_code)\n city_df.to_csv(os.path.join(\"data\", \"world\", \"{country_name}.csv\".format(country_name=country_name)))\n\n# \"ATA\" is reported as missing, so test it\nSQL_QUERY = \"\"\"\n SELECT *\n FROM City\n WHERE CountryCode = \"ATA\"\n ;\n\"\"\"\npd.read_sql(SQL_QUERY, db)\n\ncity_groups.get_group(\"ATA\")\n\n\"ATA\" in city_groups[\"CountryCode\"].unique()\n\n# Now that the missing code is confirmed, add an if guard\nfor index, row in country_df.iterrows():\n country_code = row[\"Code\"]\n country_name = row[\"Name\"]\n \n if country_code in city_df[\"CountryCode\"].unique():\n one_city_df = city_groups.get_group(country_code)\n one_city_df.to_csv(os.path.join(os.curdir, \"data\", \"world\", \"{country_name}.csv\".format(country_name=country_name)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
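The "check, remove, recreate" pattern from the notebook above can be sketched safely against a temporary directory, so running it cannot touch real data (the country names are just placeholders):

```python
import os
import shutil
import tempfile

base = tempfile.mkdtemp()                      # sandbox instead of the real ./data/
data_dir = os.path.join(base, "data", "world")

# if a previous "data" tree exists, remove it, then recreate it
if os.path.isdir(os.path.join(base, "data")):
    shutil.rmtree(os.path.join(base, "data"))
os.makedirs(data_dir)  # makedirs creates the intermediate "data" folder too

# write one file per "country", like the per-country CSV export in the notebook
for name in ["Angola", "Korea"]:
    with open(os.path.join(data_dir, name + ".csv"), "w") as f:
        f.write("id,name\n")

created = sorted(os.listdir(data_dir))
print(created)

shutil.rmtree(base)  # clean up the whole sandbox, files and all
```

Note the division of labour: `os.rmdir` refuses to delete a non-empty folder, while `shutil.rmtree` removes the whole tree recursively.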
josh-gree/maths-with-python
02-programs.ipynb
mit
[ "Programs\nUsing the Python console to type in commands works fine, but has serious drawbacks. It doesn't save the work for the future. It doesn't allow the work to be re-used. It's frustrating to edit when you make a mistake, or want to make a small change. Instead, we want to write a program.\nA program is a text file containing Python commands. It can be written in any text editor. Something like the editor in spyder is ideal: it has additional features for helping you write code. However, any plain text editor will work. A program like Microsoft Word will not work, as it will try and save additional information with the file.\nLet us use a simple pair of Python commands:", "import math\nx = math.sin(1.2)", "Go to the editor in spyder and enter those commands in a file:\npython\nimport math\nx = math.sin(1.2)\nSave this file in a suitable location and with a suitable name, such as lab1_basic.py (the rules and conventions for filenames are similar to those for variable names laid out above: descriptive, lower case names without spaces). The file extension should be .py: spyder should add this automatically.\nTo run this program, either\n\npress the green \"play\" button in the toolbar;\npress the function key F5;\nselect \"Run\" from the \"Run\" menu.\n\nIn the console you should see a line like\nrunfile('/Users/ih3/PythonLabs/lab1_basic.py', wdir='/Users/ih3/PythonLabs')\nappear, and nothing else. To check that the program has worked, check the value of x. In the console just type x:", "x", "Also, in the top right of the spyder window, select the \"Variable explorer\" tab. It shows the variables that it currently knows, which should include x, its type (float) and its value.\nIf there are many variables known, you may worry that your earlier tests had already set the value for x and that the program did not actually do anything. To get back to a clean state, type %reset in the console to delete all variables - you will need to confirm that you want to do this. 
You can then re-run the program to test that it worked.\nUsing programs and modules\nIn previous sections we have imported and used standard Python libraries, packages or modules, such as math. This is one way of using a program, or code, that someone else has written. To do this for ourselves, we use exactly the same syntax.\nSuppose we have the file lab1_basic.py exactly as above. Write a second file containing the lines\npython\nimport lab1_basic\nprint(lab1_basic.x)\nSave this file, in the same directory as lab1_basic.py, say as lab1_import.py. When we run this program, the console should show something like\nrunfile('/Users/ih3/PythonLabs/lab1_import.py', wdir='/Users/ih3/PythonLabs')\n0.9320390859672263\nThis shows what the import statement is doing. All the library imports, definitions and operations in the imported program (lab1_basic) are performed. The results are then available to us, using the dot notation, via lab1_basic.&lt;variable&gt;, or lab1_basic.&lt;function&gt;.\nTo build up a program, we write Python commands into plain text files. When we want to use, or re-use, those definitions or results, we use import on the name of the file to recover their values.\nNote\nWe saved both files - the original lab1_basic.py, and the program that imported lab1_basic.py, in the same directory. If they were in different directories then Python would not know where to find the file it was trying to import, and would give an error. The solution to this is to create a package, which is rather more work.\nFunctions\nWe have already seen and used some functions, such as the log and sin functions from the math package. However, in programming, a function is more general; it is any set of commands that acts on some input parameters and returns some output.\nFunctions are central to effective programming, as they stop you from having to repeat yourself and reduce the chances of making a mistake. 
Defining and using your own functions is the next step.\nLet us write a function that converts angles from degrees to radians. The formula is\n\\begin{equation}\n \\theta_r = \\frac{\\pi}{180} \\theta_d,\n\\end{equation}\nwhere $\\theta_r$ is the angle in radians, and $\\theta_d$ is the angle in degrees. If we wanted to do this for, eg, $\\theta_d = 30^{\\circ}$, we could use the commands", "from math import pi\ntheta_d = 30.0\ntheta_r = pi / 180.0 * theta_d\n\nprint(theta_r)", "This is effective for a single angle. If we want to repeat this for many angles, we could copy and paste the code. However, this is dangerous. We could make a mistake in editing the code. We could find a mistake in our original code, and then have to remember to modify every location where we copied it to. Instead we want to have a single piece of code that performs an action, and use that piece of code without modification whenever needed.\nThis is summarized in the \"DRY\" principle: do not repeat yourself. Instead, convert the code into a function and use the function.\nWe will define the function and show that it works, then discuss how:", "from math import pi\n\ndef degrees_to_radians(theta_d):\n \"\"\"\n Convert an angle from degrees to radians.\n \n Parameters\n ----------\n \n theta_d : float\n The angle in degrees.\n \n Returns\n -------\n \n theta_r : float\n The angle in radians.\n \"\"\"\n theta_r = pi / 180.0 * theta_d\n return theta_r", "We check that it works by printing the result for multiple angles:", "print(degrees_to_radians(30.0))\nprint(degrees_to_radians(60.0))\nprint(degrees_to_radians(90.0))", "How does the function definition work? \nFirst we need to use the def command:\npython\ndef degrees_to_radians(theta_d):\nThis command effectively says \"what follows is a function\". The first word after def will be the name of the function, which can be used to call it later. 
This follows similar rules and conventions to variables and files (no spaces, lower case, words separated by underscores, etc.).\nAfter the function name, inside brackets, is the list of input parameters. If there are no input parameters the brackets still need to be there. If there is more than one parameter, they should be separated by commas.\nAfter the bracket there is a colon :. The use of colons to denote special \"blocks\" of code happens frequently in Python code, and we will see it again later.\nAfter the colon, all the code is indented by four spaces or one tab. Most helpful text editors, such as the spyder editor, will automatically indent the code after a function is defined. If not, use the tab key to ensure the indentation is correct. In Python, whitespace and indentation is essential: it defines where blocks of code (such as functions) start and end. In other languages special keywords or characters may be used, but in Python the indentation of the code statements is the key.\nThe statement on the next few lines is the function documentation, or docstring. \n```python\n \"\"\"\n Convert an angle from degrees to radians.\n...\n\"\"\"\n\n```\nThis is in principle optional: it's not needed to make the code run. However, documentation is extremely useful for the next user of the code. As the next user is likely to be you in a week (or a month), when you'll have forgotten the details of what you did, documentation helps you first. In reality, you should always include documentation.\nThe docstring can be any string within quotes. Using \"triple quotes\" allows the string to go across multiple lines. The docstring can be rapidly printed using the help function:", "help(degrees_to_radians)", "This allows you to quickly use code correctly without having to look at the code. We can do the same with functions from packages, such as", "help(math.sin)", "You can put whatever you like in the docstring. 
The format used above in the degrees_to_radians function follows the numpydoc convention, but there are other conventions that work well. One reason for following this convention can be seen in spyder. Copy the function degrees_to_radians into the console, if you have not done so already. Then, in the top right part of the window, select the \"Object inspector\" tab. Ensure that the \"Source\" is \"Console\". Type degrees_to_radians into the \"Object\" box. You should see the help above displayed, but nicely formatted.\nYou can put additional comments in your code - anything after a \"#\" character is a comment. The advantage of the docstring is how it can be easily displayed and built upon by other programs and bits of code, and the conventions that make them easier to write and understand.\nGoing back to the function itself. After the docstring, the code to convert from degrees to radians starts. Compare it to the original code typed directly into the console. In the console we had\n```python\nfrom math import pi\ntheta_d = 30.0\ntheta_r = pi / 180.0 * theta_d\n```\nIn the function we have\n```python\n theta_r = pi / 180.0 * theta_d\n return theta_r\n```\nThe line\n```python\nfrom math import pi\n```\nis in the function file, but outside the definition of the function itself.\nThere are four differences.\n\nThe function code is indented by four spaces, or one tab.\nThe input parameter theta_d must be defined in the console, but not in the function. When the function is called the value of theta_d is given, but inside the function itself it is not: the function knows that the specific value of theta_d will be given as input.\nThe output of the function theta_r is explicitly returned, using the return statement.\nThe import statement is moved outside the function definition - this is the convention recommended by PEP8.\n\nAside from these points, the code is identical. A function, like a program, is a collection of Python statements exactly as you would type into a console. 
The first three differences above are the essential differences to keep in mind: the first is specific to Python (other programming languages have something similar), whilst the other differences are common to most programming languages.\nScope\nNames used internally by the function are not visible externally. Also, the name used for the output of the function need not be used externally. To see an example of this, start with a clean slate by typing %reset into the console.", "%reset", "Then copy and paste the function definition again:", "from math import pi\n\ndef degrees_to_radians(theta_d):\n \"\"\"\n Convert an angle from degrees to radians.\n \n Parameters\n ----------\n \n theta_d : float\n The angle in degrees.\n \n Returns\n -------\n \n theta_r : float\n The angle in radians.\n \"\"\"\n theta_r = pi / 180.0 * theta_d\n return theta_r", "(Alternatively you can use the history in the console by pressing the up arrow until the definition of the function you previously entered appears. Then click at the end of the function and press Return). Now call the function as", "angle = degrees_to_radians(45.0)\n\nprint(angle)", "But the variables used internally, theta_d and theta_r, are not known outside the function:", "theta_d", "This is an example of scope: the existence of variables, and their values, is restricted inside functions (and files).\nYou may note that above, we had a value of theta_d outside the function (from when we were working in the console), and a value of theta_d inside the function (as the input parameter). These do not have to match. If a variable is assigned a value inside the function then Python will take this \"local\" value. If not, Python will look outside the function. 
Two examples will illustrate this:", "x1 = 1.1\n\ndef print_x1():\n print(x1)\n\nprint(x1)\nprint_x1()\n\nx2 = 1.2\n\ndef print_x2():\n x2 = 2.3\n print(x2)\n\nprint(x2)\nprint_x2()", "In the first (x1) example, the variable x1 was not defined within the function, but it was used. When x1 is printed, Python has to look for the definition outside of the scope of the function, which it does successfully.\nIn the second (x2) example, the variable x2 is defined within the function. The value of x2 does not match the value of the variable with the same name defined outside the function, but that does not matter: within the function, its local value is used. When printed outside the function, the value of x2 uses the external definition, as the value defined inside the function is not known (it is \"not in scope\").\nSome care is needed with using scope in this way, as Python reads the whole function at the time it is defined when deciding scope. As an example:", "x3 = 1.3\n\ndef print_x3():\n print(x3)\n x3 = 2.4\n\nprint(x3)\nprint_x3()", "The only significant change from the second example is the order of the print statement and the assignment to x3 inside the function. Because x3 is assigned inside the function, Python wants to use the local value within the function, and will ignore the value defined outside the function. However, the print function is called before x3 has been set within the function, leading to an error.\nKeyword and default arguments\nOur original function degrees_to_radians only had one argument, the angle to be converted theta_d. Many functions will take more than one argument, and sometimes the function will take arguments that we don't always want to set. Python can make life easier in these cases.\nSuppose we wanted to know how long it takes an object released from a height $h$, in a gravitational field of strength $g$, with initial vertical speed $v$, to hit the ground. 
The answer is\n\\begin{equation}\n t = \\frac{1}{g} \\left( v + \\sqrt{v^2 + 2 h g} \\right).\n\\end{equation}\nWe can write this as a function:", "from math import sqrt\n\ndef drop_time(height, speed, gravity):\n \"\"\"\n Return how long it takes an object released from a height h, \n in a gravitational field of strength g, with initial vertical speed v, \n to hit the ground.\n \n Parameters\n ----------\n \n height : float\n Initial height h\n speed : float\n Initial vertical speed v\n gravity : float\n Gravitational field strength g\n \n Returns\n -------\n \n t : float\n Time the object hits the ground\n \"\"\"\n \n return (speed + sqrt(speed**2 + 2.0*height*gravity)) / gravity", "But when we start using it, it can be a bit confusing:", "print(drop_time(10.0, 0.0, 9.8))\nprint(drop_time(10.0, 1.0, 9.8))\nprint(drop_time(100.0, 9.8, 15.0))", "Is that last case correct? Did we really want to change the gravitational field, whilst at the same time using an initial velocity of exactly the value we expect for $g$?\nA far clearer use of the function comes from using keyword arguments. This is where we explicitly use the name of the function arguments. For example:", "print(drop_time(height=10.0, speed=0.0, gravity=9.8))", "The result is exactly the same, but now it's explicitly clear what we're doing. \nEven more useful: when using keyword arguments, we don't have to ensure that the order we use matches the order of the function definition:", "print(drop_time(height=100.0, gravity=9.8, speed=15.0))", "This is the same as the confusing case above, but now there is no ambiguity. Whilst it is good practice to match the order of the arguments to the function definition, it is only needed when you don't use the keywords. Using the keywords is always useful.\nWhat if we said that we were going to assume that the gravitational field strength $g$ is nearly always going to be that of Earth, $9.8$ms${}^{-2}$? 
We can re-define our function using a default argument:", "def drop_time(height, speed, gravity=9.8):\n \"\"\"\n Return how long it takes an object released from a height h, \n in a gravitational field of strength g, with initial vertical speed v, \n to hit the ground.\n \n Parameters\n ----------\n \n height : float\n Initial height h\n speed : float\n Initial vertical speed v\n gravity : float\n Gravitational field strength g\n \n Returns\n -------\n \n t : float\n Time the object hits the ground\n \"\"\"\n \n return (speed + sqrt(speed**2 + 2.0*height*gravity)) / gravity", "Note that there is only one difference here, in the very first line: we state that gravity=9.8. What this means is that if this function is called and the value of gravity is not specified, then it takes the value 9.8.\nFor example:", "print(drop_time(10.0, 0.0))\nprint(drop_time(height=50.0, speed=1.0))\nprint(drop_time(gravity=15.0, height=50.0, speed=1.0))", "So, we can still give a specific value for gravity when we don't want to use the value 9.8, but it isn't needed if we're happy for it to take the default value of 9.8. This works both if we use keyword arguments and if not, with certain restrictions.\nSome things to keep in mind:\n\nDefault arguments can only be used without specifying the keyword if they come after arguments without defaults. It is a very strong convention that arguments with a default come at the end of the argument list.\nThe value of default arguments can be pretty much anything, but care should be taken to get the behaviour you expect. In particular, it is strongly discouraged to allow the default value to be anything that might change, as this can lead to odd behaviour that is hard to find. For example, allowing a default value to be a container such as a list (seen below) can lead to unexpected behaviour. 
See, for example, this discussion, pointing out why, and that the value of the default argument is fixed when the function is defined, not when it's called.\n\nPrinting and strings\nWe have already seen the print function used multiple times. It displays its argument(s) to the screen when called, either from the console or from within a program. It prints some representation of what it is given in the form of a string: it converts simple numbers and other objects to strings that can be shown on the screen. For example:", "import math\nx = 1.2\nname = \"Alice\"\nprint(\"Hello\")\nprint(6)\nprint(name)\nprint(x)\nprint(math.pi)\nprint(math.sin(x))\nprint(math.sin)\nprint(math)", "We see that variables are converted to their values (such as name and math.pi) and functions are called to get values (such as math.sin(x)), which are then converted to strings displayed on screen. However, functions (math.sin) and modules (math) are also \"printed\", in that a string saying what they are, and where they come from, is displayed.\nOften we want to display useful information to the screen, which means building a message that is readable and printing that. There are many ways of doing this: here we will just look at the format command. Here is an example:", "print(\"Hello {}. We set x={}.\".format(name, x))", "The format command takes the string (here \"Hello {}. We set x={}.\") and replaces the {} with the values of the variables (here name and x in order).\nWe can use the format command in this way for anything that has a string representation. For example:", "print (\"The function {} applied to x={} gives {}\".format(math.sin, x, math.sin(x)))", "There are many more ways to use the format command which can be helpful.\nWe note that format is a function, but a function applied to the string before the dot. 
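For instance (a small illustrative sketch; the variables here are the same `name` and `x` used above), the braces can contain a format specification after a colon, controlling precision and padding, and fields can be numbered or named:

```python
x = 1.2
name = "Alice"

print("x = {:.3f}".format(x))                # fixed-point with 3 decimal places
print("x = {:10.3f}".format(x))              # padded to a total width of 10 characters
print("{0}, {1} and {0}".format(name, x))    # numbered fields can reuse an argument
print("Hello {person}".format(person=name))  # named fields read clearly at the call site
```

The full set of options is described in Python's format specification mini-language documentation.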
This type of function is called a method, and we shall return to them later.\nStrings\nWe have just printed a lot of strings out, but it is useful to briefly talk about what a string is.\nIn Python a string is not just a sequence of characters. It is a Python object that contains additional information that \"lives on it\". If this information is a constant property it is called an attribute. If it is a function it is called a method. We can access this information (using the \"dot\" notation as above) to tell us things about the string, and to manipulate it.\nHere are some basic string methods:", "name = \"Alice\"\nnumber = \"13\"\nsentence = \" a b c d e \"\nprint(name.upper())\nprint(name.lower())\nprint(name.isdigit())\nprint(number.isdigit())\nprint(sentence.strip())\nprint(sentence.split())", "The use of the \"dot\" notation appears here. We saw this with accessing functions in modules and packages above; now we see it with accessing attributes and methods. It appears repeatedly in Python. The format method used above is particularly important for our purposes, but there are a lot of methods available.\nThere are other ways of manipulating strings.\nWe can join two strings using the + operator.", "print(\"Hello\" + \"Alice\")", "We can repeat strings using the * operator.", "print(\"Hello\" * 3)", "We can convert numbers to strings using the str function.", "print(str(3.4))", "We can also access individual characters (starting from 0!), or a range of characters:", "print(\"Hello\"[0])\nprint(\"Hello\"[2])\nprint(\"Hello\"[1:3])", "We will come back to this notation when discussing lists and slicing.\nNote\nThere are big differences between how Python deals with strings in Python 2.X and Python 3.X. Whilst most of the commands above will produce identical output, string handling is one of the major reasons why Python 2.X doesn't always work in Python 3.X. 
The ways strings are handled in Python 3.X is much better than in 2.X.\nPutting it together\nWe can now combine the introduction of programs with functions. First, create a file called lab1_function.py containing the code\n```python\nfrom math import pi\ndef degrees_to_radians(theta_d):\n \"\"\"\n Convert an angle from degrees to radians.\n \n Parameters\n ----------\n \n theta_d : float\n The angle in degrees.\n \n Returns\n -------\n \n theta_r : float\n The angle in radians.\n \"\"\"\n theta_r = pi / 180.0 * theta_d\n return theta_r\n```\nThis is almost exactly the function as defined above.\nNext, write a second file lab1_use_function.py containing\n```python\nfrom lab1_function import degrees_to_radians\nprint(degrees_to_radians(15.0))\nprint(degrees_to_radians(30.0))\nprint(degrees_to_radians(45.0))\nprint(degrees_to_radians(60.0))\nprint(degrees_to_radians(75.0))\nprint(degrees_to_radians(90.0))\n```\nThis program uses our own function to convert from degrees to radians. To save typing we have used the from &lt;module&gt; import &lt;function&gt; notation. We could have instead written import lab1_function, but then every function call would need to use lab1_function.degrees_to_radians.\nThis program, when run, will print to the screen the angles $(n \\pi)/ 12$ for $n = 1, 2, \\dots, 6$.\nExercise: basic functions\nExercise 1\nWrite a function to calculate the volume of a cuboid with edge lengths $a, b, c$. Test your code on sample values such as\n\n$a=1, b=1, c=1$ (result should be $1$);\n$a=1, b=2, c=3.5$ (result should be $7.0$);\n$a=0, b=1, c=1$ (result should be $0$);\n$a=2, b=-1, c=1$ (what do you think the result should be?).\n\nExercise 2\nWrite a function to compute the time (in seconds) taken for an object to fall from a height $H$ (in metres) to the ground, using the formula\n\\begin{equation}\n h(t) = \\frac{1}{2} g t^2.\n\\end{equation}\nUse the value of the acceleration due to gravity $g$ from scipy.constants.g. 
Test your code on sample values such as\n\n$H = 1$m (result should be $\\approx 0.452$s);\n$H = 10$m (result should be $\\approx 1.428$s);\n$H = 0$m (result should be $0$s);\n$H = -1$m (what do you think the result should be?).\n\nExercise 3\nWrite a function that computes the area of a triangle with edge lengths $a, b, c$. You may use the formula\n\\begin{equation}\n A = \\sqrt{s (s - a) (s - b) (s - c)}, \\qquad s = \\frac{a + b + c}{2}.\n\\end{equation}\nConstruct your own test cases to cover a range of possibilities.\nExercise: Floating point numbers\nExercise 1\nComputers cannot, in principle, represent real numbers perfectly. This can lead to problems of accuracy. For example, if\n\\begin{equation}\n x = 1, \\qquad y = 1 + 10^{-14} \\sqrt{3}\n\\end{equation}\nthen it should be true that\n\\begin{equation}\n 10^{14} (y - x) = \\sqrt{3}.\n\\end{equation}\nCheck how accurately this equation holds in Python and see what this implies about the accuracy of subtracting two numbers that are close together.\nNote\nThe standard floating point number holds the first 16 significant digits of a real.\nExercise 2\nNote: no coding required\nThe standard quadratic formula gives the solutions to\n\\begin{equation}\n a x^2 + b x + c = 0\n\\end{equation}\nas\n\\begin{equation}\n x = \\frac{-b \\pm \\sqrt{b^2 - 4 a c}}{2 a}.\n\\end{equation}\nShow that, if $a = 10^{-n} = c$ and $b = 10^n$ then\n\\begin{equation}\n x = \\frac{10^{2 n}}{2} \\left( -1 \\pm \\sqrt{1 - 4 \\times 10^{-4n}} \\right).\n\\end{equation}\nUsing the expansion (from Taylor's theorem)\n\\begin{equation}\n \\sqrt{1 - 4 \\times 10^{-4 n}} \\simeq 1 - 2 \\times 10^{-4 n} + \\dots, \\qquad n \\gg 1,\n\\end{equation}\nshow that\n\\begin{equation}\n x \\simeq -10^{2 n} + 10^{-2 n} \\quad \\text{and} \\quad -10^{-2n}, \\qquad n \\gg 1.\n\\end{equation}\nExercise 3\nNote: no coding required\nBy multiplying and dividing by $-b \\mp \\sqrt{b^2 - 4 a c}$, check that we can also write the solutions to the quadratic 
equation as\n\\begin{equation}\n x = \\frac{2 c}{-b \\mp \\sqrt{b^2 - 4 a c}}.\n\\end{equation}\nExercise 4\nUsing Python, calculate both solutions to the quadratic equation\n\\begin{equation}\n 10^{-n} x^2 + 10^n x + 10^{-n} = 0\n\\end{equation}\nfor $n = 3$ and $n = 4$ using both formulas. What do you see? How has floating point accuracy caused problems here?\nExercise 5\nThe standard definition of the derivative of a function is\n\\begin{equation}\n \\left. \\frac{\\text{d} f}{\\text{d} x} \\right|_{x=X} = \\lim_{\\delta \\to 0} \\frac{f(X + \\delta) - f(X)}{\\delta}.\n\\end{equation}\nWe can approximate this by computing the result for a finite value of $\\delta$:\n\\begin{equation}\n g(x, \\delta) = \\frac{f(x + \\delta) - f(x)}{\\delta}.\n\\end{equation}\nWrite a function that takes as inputs a function of one variable, $f(x)$, a location $X$, and a step length $\\delta$, and returns the approximation to the derivative given by $g$.\nExercise 6\nThe function $f_1(x) = e^x$ has derivative with the exact value $1$ at $x=0$. Compute the approximate derivative using your function above, for $\\delta = 10^{-2 n}$ with $n = 1, \\dots, 7$. You should see the results initially improve, then get worse. Why is this?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/dwd/cmip6/models/sandbox-3/toplevel.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: DWD\nSource ID: SANDBOX-3\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:57\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'dwd', 'sandbox-3', 'toplevel')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Flux Correction\n3. Key Properties --&gt; Genealogy\n4. Key Properties --&gt; Software Properties\n5. Key Properties --&gt; Coupling\n6. Key Properties --&gt; Tuning Applied\n7. Key Properties --&gt; Conservation --&gt; Heat\n8. Key Properties --&gt; Conservation --&gt; Fresh Water\n9. Key Properties --&gt; Conservation --&gt; Salt\n10. Key Properties --&gt; Conservation --&gt; Momentum\n11. Radiative Forcings\n12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\n13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\n14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\n15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\n16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\n17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\n18. Radiative Forcings --&gt; Aerosols --&gt; SO4\n19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\n20. 
Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\n21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\n22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\n23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\n24. Radiative Forcings --&gt; Aerosols --&gt; Dust\n25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\n26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\n27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\n28. Radiative Forcings --&gt; Other --&gt; Land Use\n29. Radiative Forcings --&gt; Other --&gt; Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop level overview of coupled model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of coupled model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Genealogy\nGenealogy and history of the model\n3.1. 
Year Released\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nYear the model was released", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. CMIP3 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP3 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. CMIP5 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP5 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Previous Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPreviously known as", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.4. Components Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.5. Coupler\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nOverarching coupling framework for model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Coupling\n**\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of coupling in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. 
Atmosphere Double Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhere are the air-sea fluxes calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.4. Atmosphere Relative Winds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. 
In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.5. Energy Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.6. Fresh Water Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Conservation --&gt; Heat\nGlobal heat conservation properties of the model\n7.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4.
Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.6. Land Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation --&gt; Fresh Water\nGlobal fresh water conservation properties of the model\n8.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Runoff\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how runoff is distributed and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. 
Iceberg Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Endoreic Basins\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Snow Accumulation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Key Properties --&gt; Conservation --&gt; Salt\nGlobal salt conservation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Key Properties --&gt; Conservation --&gt; Momentum\nGlobal momentum conservation properties of the model\n10.1.
Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how momentum is conserved in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\nTropospheric ozone forcing\n15.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Equivalence Concentration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of any equivalence concentrations used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Radiative Forcings --&gt; Aerosols --&gt; SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. 
via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. 
Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.3. RFaci From Sulfate Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "24. Radiative Forcings --&gt; Aerosols --&gt; Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Radiative Forcings --&gt; Other --&gt; Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28.2. Crop Change Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLand use change represented via crop change only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Radiative Forcings --&gt; Other --&gt; Solar\nSolar forcing\n29.1. 
Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow solar forcing is provided", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jhjungCode/pytorch-tutorial
04_word2vec.ipynb
mit
[ "Word2Vec\nIn fact, Word2vec uses a noise contrastive estimation (NCE) loss.\nSince NCE is not yet implemented in pytorch and the vocabulary here is small, we implement this part with a plain softmax instead.\nBecause the embedding dimension is 2, the task can be viewed as a simple word classification problem, so this simplification is reasonable.\n※ Note: once the vocabulary grows large, NCE should be used to speed up training.", "import torch\nfrom torch.autograd import Variable\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.utils.data as data_utils", "1. Dataset preparation", "import numpy as np\n\nword_pair = [['고양이', '흰'],\n ['고양이', '동물'],\n ['국화', '흰'],\n ['국화', '식물'],\n ['선인장', '초록'],\n ['선인장', '식물'],\n ['강아지', '검은'],\n ['강아지', '동물'],\n ['타조', '회색'],\n ['타조', '동물'],\n ['코끼리', '회색'],\n ['코끼리', '동물'],\n ['장미', '빨간'],\n ['장미', '식물'],\n ['자동차', '빨간'],\n ['그릇', '빨간'],\n ['민들레', '식물'],\n ['민들레', '흰']]\n\nword_list = set(np.array(word_pair).flatten())\nword_dict = {w: i for i, w in enumerate(word_list)}\nskip_grams = [[word_dict[word[0]], word_dict[word[1]]] for word in word_pair]", "Dataset Loader setup", "label = torch.LongTensor(skip_grams)[:, 0].contiguous()\ncontext = torch.LongTensor(skip_grams)[:, 1].contiguous()\nskip_grams_dataset = data_utils.TensorDataset(label, context)\ntrain_loader = torch.utils.data.DataLoader(skip_grams_dataset, batch_size=8, shuffle=True)\ntest_loader = torch.utils.data.DataLoader(skip_grams_dataset, batch_size=1, shuffle=False)", "2. Preliminary setup\n* model\n* loss\n* optimizer", "class _model(nn.Module) :\n def __init__(self):\n super(_model, self).__init__()\n self.embedding = nn.Embedding(len(word_list), 2)\n self.linear = nn.Linear(2, len(word_list), bias=True)\n \n def forward(self, x):\n x = self.embedding(x)\n x = self.linear(x)\n return F.log_softmax(x)\n \nmodel = _model()\nloss_fn = nn.NLLLoss() \noptimizer = torch.optim.Adam(model.parameters(), lr=0.1)\n", "3. 
Training loop\n* (create inputs)\n* compute the model output\n* compute the loss\n* zero the gradients\n* backpropagation\n* optimizer step (update model parameters)", "model.train()\nfor epoch in range(100):\n for data, target in train_loader:\n data, target = Variable(data), Variable(target) # create inputs\n output = model(data) # compute the model output\n loss = F.nll_loss(output, target) # compute the loss\n optimizer.zero_grad() # zero the gradients\n loss.backward() # calc backward gradients\n optimizer.step() # update parameters", "4. Predict & Evaluate", "model.eval()\n\ninvDic = { i : w for w, i in word_dict.items()}\nprint('Input : true : pred')\n\nfor x, y in test_loader :\n x, y = Variable(x.squeeze()), y.squeeze()\n y_pred = model(x).max(1)[1].data[0][0]\n print('{:s} : {:s} : {:s}'.format(invDic[x.data[0]], invDic[y[0]], invDic[y_pred]))", "5. Plot embedding space", "import matplotlib.pyplot as plt\nimport matplotlib\n%matplotlib inline \nmatplotlib.rc('font', family=\"NanumGothic\") \n\nfor i in label :\n x = Variable(torch.LongTensor([i]))\n fx, fy = model.embedding(x).squeeze().data\n plt.scatter(fx, fy)\n plt.annotate(invDic[i], xy=(fx, fy), xytext=(5, 2),\n textcoords='offset points', ha='right', va='bottom')\n " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ethen8181/machine-learning
python/cohort/cohort.ipynb
mit
[ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Cohort-Analysis\" data-toc-modified-id=\"Cohort-Analysis-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Cohort Analysis</a></span><ul class=\"toc-item\"><li><span><a href=\"#Example-of-Monthly-Cohort\" data-toc-modified-id=\"Example-of-Monthly-Cohort-1.1\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Example of Monthly Cohort</a></span></li><li><span><a href=\"#Stack/Unstack\" data-toc-modified-id=\"Stack/Unstack-1.2\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Stack/Unstack</a></span></li></ul></li><li><span><a href=\"#Further-Work\" data-toc-modified-id=\"Further-Work-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Further Work</a></span></li><li><span><a href=\"#Reference\" data-toc-modified-id=\"Reference-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Reference</a></span></li></ul></div>", "# code for loading the format for the notebook\nimport os\n\n# path : store the current path to convert back to it later\npath = os.getcwd()\nos.chdir(os.path.join('..', '..', 'notebook_format'))\n\nfrom formats import load_style\nload_style(plot_style = False)\n\nos.chdir(path)\n# 1. magic for inline plot\n# 2. magic to print version\n# 3. magic so that the notebook will reload external python modules\n# 4. magic to enable retina (high resolution) plots\n# https://gist.github.com/minrk/3301035\n%matplotlib inline\n%load_ext watermark\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom datetime import datetime\n\n%watermark -a 'Ethen' -d -t -v -p numpy,pandas,seaborn,matplotlib", "Cohort Analysis\nWhat is Cohort Analysis? and why is it valuable? 
To begin with, a cohort is a group of users who share something in common, be it their sign-up date, first purchase month, birth date, acquisition channel, etc. Cohort analysis is the method by which these groups are tracked over time, helping you spot trends, understand repeat behaviors (purchases, engagement, amount spent, etc.) and monitor your customer and revenue retention.\nIt's common for cohorts to be created based on a customer's first usage of the platform, where \"usage\" is dependent on your business's key metrics. For Uber or Lyft, usage would be booking a trip through one of their apps. For GrubHub, it's ordering some food. For AirBnB, it's booking a stay. With these companies, a purchase is at their core, be it taking a trip or ordering dinner — their revenues are tied to their users' purchase behavior. In others, a purchase is not central to the business model and the business is more interested in \"engagement\" with the platform. Facebook and Twitter are examples of this - are you visiting their sites every day? Are you performing some action on them - maybe a \"like\" on Facebook or a \"favorite\" on a tweet? Thus when building up a cohort analysis, it's important to consider the relationship between the event or interaction you're tracking and your business model.\nExample of Monthly Cohort\nImagine we have the following dataset that has the standard purchase data with IDs for the order and user, order date and purchase amount. 
To create our monthly cohort, we'll first have to:\n\nConvert our date to a monthly-time basis.\nDetermine the user's cohort group based on their first order, which is the year and month in which the user's first purchase occurred.", "df = pd.read_csv('relay-foods.csv')\ndf.head()\n\ndef parse_dmy(s):\n \"\"\"\n convert string to datetime format\n \n References\n ----------\n http://chimera.labs.oreilly.com/books/1230000000393/ch03.html#_discussion_55\n \"\"\"\n month, day, year = s.split('/')\n return datetime(int('20' + year), int(month), int(day))\n\n\n# remove the $ sign from the TotalCharges column \n# and convert it to float\ndf['TotalCharges'] = (df['TotalCharges'].\n apply(lambda x: x.replace('$', '')).\n astype('float'))\n\n# strip out the year and month into a new column\ndf['OrderDate'] = df['OrderDate'].apply(parse_dmy)\ndf['OrderPeriod'] = df['OrderDate'].apply(lambda x: x.strftime('%Y-%m'))\ndf.head()\n\n# convert each user id into their cohort group\ndf = df.set_index('UserId')\ndf['CohortGroup'] = (df.\n groupby(level = 0)['OrderDate'].min().\n apply(lambda x: x.strftime('%Y-%m')))\ndf = df.reset_index()\ndf.head()", "Since we're looking at monthly cohorts, we need to aggregate users, orders, and amount spent by the CohortGroup within each month (OrderPeriod). After that, we wish to look at how each cohort has behaved in the months following their first purchase, so we'll need to index each cohort with respect to their first purchase month. For example, CohortPeriod = 1 will be the cohort's first month, CohortPeriod = 2 is their second, and so on. 
This allows us to compare cohorts across various stages of their lifetime.", "grouped = df.groupby(['CohortGroup', 'OrderPeriod'])\n\n# count the unique users, orders, and total revenue per Group + Period\ncohorts = grouped.agg({'UserId': pd.Series.nunique,\n 'OrderId': pd.Series.nunique,\n 'TotalCharges': np.sum})\n\n# make the column names more meaningful\nrenaming = {'UserId': 'TotalUsers', 'OrderId': 'TotalOrders'}\ncohorts = cohorts.rename(columns = renaming)\ncohorts.head()\n\ndef cohort_period(df):\n \"\"\"\n Creates a `CohortPeriod` column, \n which is the Nth period based on the user's first purchase.\n \"\"\"\n df['CohortPeriod'] = np.arange(len(df)) + 1\n return df\n\n\ncohorts = cohorts.groupby(level = 'CohortGroup').apply(cohort_period)\ncohorts.head()", "We're now halfway done. Before we proceed, we'll do some sanity checking with a few simple tests to make sure we did everything right. We'll test data points from the original DataFrame with their corresponding values in the new cohorts DataFrame to make sure all our data transformations worked as expected. As long as none of these raise an exception, we're good.", "# unit test code chunk\nmask = (df['CohortGroup'] == '2009-01') & (df['OrderPeriod'] == '2009-01')\nx = df[mask]\ny = cohorts.loc[('2009-01', '2009-01')]\n\nassert np.allclose(x['UserId'].nunique(), y['TotalUsers'])\nassert np.allclose(x['TotalCharges'].sum(), y['TotalCharges'])\nassert np.allclose(x['OrderId'].nunique(), y['TotalOrders'])", "To calculate the user retention by cohort group, we want to look at the percentage change of each CohortGroup over time -- not the absolute change. 
To do this, we'll first need to create a pandas Series containing each CohortGroup and its size.", "# use CohortPeriod as the index instead of OrderPeriod\ncohorts = cohorts.reset_index()\ncohorts = cohorts.set_index(['CohortGroup', 'CohortPeriod'])\ncohorts.head()\n\n# create a Series holding the total size of each CohortGroup\ncohorts_size = cohorts['TotalUsers'].groupby(level = 'CohortGroup').first()\ncohorts_size.head()", "Now, we'll need to divide the TotalUsers values in cohorts by cohorts_size. Since DataFrame operations are performed based on the indices of the objects, we'll use unstack on our cohorts DataFrame to create a matrix where each column represents a CohortGroup and each row is the CohortPeriod corresponding to that group.\nStack/Unstack\nIn case you're not familiar with what unstack and stack do:\nStacking a DataFrame means moving (also rotating or pivoting) the innermost column index to become the innermost row index. The inverse operation is called unstacking. It means moving the innermost row index to become the innermost column index. The following diagram depicts the operations:\n<img src=\"stack_unstack.png\" width=\"70%\" height=\"70%\">", "# applying it \nuser_retention = (cohorts['TotalUsers'].\n unstack('CohortGroup').\n divide(cohorts_size, axis = 1))\nuser_retention.head()", "The resulting DataFrame, user_retention, contains the percentage of users from the cohort purchasing within the given period. For instance, 38.4% of users in the 2009-03 cohort purchased again in month 3 (which would be May 2009).\nFinally, we can plot the cohorts over time in an effort to spot behavioral differences or similarities. 
Two common cohort charts are line graphs and heatmaps, both of which are shown below.", "# change default figure and font size\nplt.rcParams['figure.figsize'] = 10, 8\nplt.rcParams['font.size'] = 12\n\n\nuser_retention[['2009-06', '2009-07', '2009-08']].plot()\nplt.title('Cohorts: User Retention')\nplt.xticks(range(1, 13))\nplt.xlim(1, 12)\nplt.ylabel('% of Cohort Purchasing')\nplt.show()", "Notice that the first period of each cohort is always 100% -- this is because our cohorts are based on each user's first purchase. Meaning everyone in the cohort should have made a purchase in the first month.", "sns.set(style = 'white')\n\nplt.figure(figsize = (12, 8))\nplt.title('Cohorts: User Retention')\nsns.heatmap(user_retention.T,\n cmap = plt.cm.Blues,\n mask = user_retention.T.isnull(), # data will not be shown where it's True\n annot = True, # annotate the text on top\n fmt = '.0%') # string formatting when annot is True\nplt.show()", "Unsurprisingly, we can see from the above chart that fewer users tend to purchase as time goes on. However, we can also see that the 2009-01 cohort is the strongest, which enables us to ask targeted questions about this cohort compared to others -- What other attributes (besides first purchase month) do these users share which might be causing them to stick around? How were the majority of these users acquired? Was there a specific marketing campaign that brought them in? Did they take advantage of a promotion at sign-up? The answers to these questions would inform future marketing and product efforts.\nFurther Work\nUser retention is only one way of using cohorts to look at your business — we could have also looked at revenue retention. That is, the percentage of each cohort’s first month revenue returning in subsequent periods. 
User retention is important, but we shouldn’t lose sight of the revenue each cohort is bringing in (and how much of it is returning).\nTo sum it up, cohort analysis can be valuable when it comes to understanding your business's health and \"stickiness\" - the loyalty of your customers. Stickiness is critical since it’s far cheaper and easier to keep a current customer than to acquire a new one. For startups, it’s also a key indicator of product-market fit.\nAdditionally, your product evolves over time. New features are added and removed, the design changes, etc. Observing individual groups over time is a starting point to understanding how these changes affect user behavior. It’s also a good way to visualize your user retention/churn as well as formulating a basic understanding of their lifetime value.\nReference\n\nBlog: Cohort Analysis with Python\nBlog: Pivot, Pivot-Table, Stack and Unstack explained with Pictures" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/art_and_science_of_ml/solutions/neural_network.ipynb
apache-2.0
[ "Build a DNN using the Keras Functional API\nLearning objectives\n\nReview how to read in CSV file data using tf.data.\nSpecify input, hidden, and output layers in the DNN architecture.\nReview and visualize the final DNN shape.\nTrain the model locally and visualize the loss curves.\nDeploy and predict with the model using Cloud AI Platform. \n\nIntroduction\nIn this notebook, we will build a Keras DNN to predict the fare amount for NYC taxi cab rides.\nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.", "# You can use any Python source file as a module by executing an import statement in some other Python source file\n# The import statement combines two operations; it searches for the named module, then it binds the\n# results of that search to a name in the local scope.\nimport os, json, math\n# Import data processing libraries like Numpy and TensorFlow\nimport numpy as np\nimport tensorflow as tf\n# Python shutil module enables us to operate with file objects easily and without diving into file objects a lot.\nimport shutil\n# Show the currently installed version of TensorFlow\nprint(\"TensorFlow version: \",tf.version.VERSION)\n\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # SET TF ERROR LOG VERBOSITY", "Locating the CSV files\nWe will start with the CSV files that we wrote out in the other notebook. 
Just so you don't have to run the notebook, we saved a copy in ../data/toy_data", "# `ls` is a Linux shell command that lists directory contents\n# `l` flag list all the files with permissions and details\n!ls -l ../data/toy_data/*.csv", "Lab Task 1: Use tf.data to read the CSV files\nFirst let's define our columns of data, which column we're predicting for, and the default values.", "# Define columns of data\nCSV_COLUMNS = ['fare_amount', 'pickup_datetime',\n 'pickup_longitude', 'pickup_latitude', \n 'dropoff_longitude', 'dropoff_latitude', \n 'passenger_count', 'key']\nLABEL_COLUMN = 'fare_amount'\nDEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]", "Next, let's define our features we want to use and our label(s) and then load in the dataset for training.", "# Define features you want to use\ndef features_and_labels(row_data):\n for unwanted_col in ['pickup_datetime', 'key']:\n row_data.pop(unwanted_col)\n label = row_data.pop(LABEL_COLUMN)\n return row_data, label # features, label\n\n# load the training data\ndef load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):\n dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)\n .map(features_and_labels) # features, label\n )\n if mode == tf.estimator.ModeKeys.TRAIN:\n dataset = dataset.shuffle(1000).repeat()\n dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE\n return dataset", "Lab Task 2: Build a DNN with Keras\nNow let's build the Deep Neural Network (DNN) model in Keras and specify the input and hidden layers. 
We will print out the DNN architecture and then visualize it later on.", "# Build a simple Keras DNN using its Functional API\ndef rmse(y_true, y_pred):\n return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true))) \n\ndef build_dnn_model():\n INPUT_COLS = ['pickup_longitude', 'pickup_latitude', \n 'dropoff_longitude', 'dropoff_latitude', \n 'passenger_count']\n\n # TODO 2\n # input layer\n inputs = {\n colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')\n for colname in INPUT_COLS\n }\n # tf.feature_column.numeric_column() represents real valued or numerical features.\n feature_columns = {\n colname : tf.feature_column.numeric_column(colname)\n for colname in INPUT_COLS\n }\n \n # the constructor for DenseFeatures takes a list of numeric columns\n # The Functional API in Keras requires that you specify: LayerConstructor()(inputs)\n dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(inputs)\n\n # two hidden layers of [32, 8] just like in the BQML DNN\n h1 = tf.keras.layers.Dense(32, activation='relu', name='h1')(dnn_inputs)\n h2 = tf.keras.layers.Dense(8, activation='relu', name='h2')(h1)\n\n # final output is a linear activation because this is regression\n output = tf.keras.layers.Dense(1, activation='linear', name='fare')(h2)\n model = tf.keras.models.Model(inputs, output)\n model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])\n return model\n\nprint(\"Here is our DNN architecture so far:\\n\")\nmodel = build_dnn_model()\nprint(model.summary())", "Lab Task 3: Visualize the DNN\nWe can visualize the DNN using the Keras plot_model utility.", "# tf.keras.utils.plot_model() converts a Keras model to dot format and saves it to a file.\ntf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')", "Lab Task 4: Train the model\nTo train the model, simply call model.fit().\nNote that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). 
We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.", "TRAIN_BATCH_SIZE = 32\nNUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around\nNUM_EVALS = 32 # how many times to evaluate\nNUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down\n\ntrainds = load_dataset('../data/toy_data/taxi-traffic-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)\nevalds = load_dataset('../data/toy_data/taxi-traffic-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)\n\nsteps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)\n\n# Model Fit\nhistory = model.fit(trainds, \n validation_data=evalds,\n epochs=NUM_EVALS, \n steps_per_epoch=steps_per_epoch)", "Visualize the model loss curve\nNext, we will use matplotlib to draw the model's loss curves for training and validation.", "# plot\n# Use matplotlib for visualizing the model\nimport matplotlib.pyplot as plt\nnrows = 1\nncols = 2\n# The .figure() method will create a new figure, or activate an existing figure.\nfig = plt.figure(figsize=(10, 5))\n\nfor idx, key in enumerate(['loss', 'rmse']):\n ax = fig.add_subplot(nrows, ncols, idx+1)\n# The .plot() is a versatile function, and will take an arbitrary number of arguments. 
For example, to plot x versus y.\n plt.plot(history.history[key])\n plt.plot(history.history['val_{}'.format(key)])\n# The .title() method sets a title for the axes.\n plt.title('model {}'.format(key))\n plt.ylabel(key)\n plt.xlabel('epoch')\n# The .legend() method will place a legend on the axes.\n plt.legend(['train', 'validation'], loc='upper left');", "Lab Task 5: Predict with the model locally\nTo predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for.", "# TODO 5\n# Use the model to do prediction with `model.predict()`\nmodel.predict({\n 'pickup_longitude': tf.convert_to_tensor([-73.982683]),\n 'pickup_latitude': tf.convert_to_tensor([40.742104]),\n 'dropoff_longitude': tf.convert_to_tensor([-73.983766]),\n 'dropoff_latitude': tf.convert_to_tensor([40.755174]),\n 'passenger_count': tf.convert_to_tensor([3.0]), \n}, steps=1)", "Of course, this is not realistic, because we can't expect client code to have a model object in memory. We'll have to export our model to a file, and expect client code to instantiate the model from that exported file.\nCopyright 2021 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.21/_downloads/638c39682b0791ce4e430e4d2fcc4c45/plot_tf_dics.ipynb
bsd-3-clause
[ "%matplotlib inline", "Time-frequency beamforming using DICS\nCompute DICS source power [1]_ in a grid of time-frequency windows.\nReferences\n.. [1] Dalal et al. Five-dimensional neuroimaging: Localization of the\n time-frequency dynamics of cortical activity.\n NeuroImage (2008) vol. 40 (4) pp. 1686-1700", "# Author: Roman Goj <roman.goj@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.event import make_fixed_length_events\nfrom mne.datasets import sample\nfrom mne.time_frequency import csd_fourier\nfrom mne.beamformer import tf_dics\nfrom mne.viz import plot_source_spectrogram\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nnoise_fname = data_path + '/MEG/sample/ernoise_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'\nfname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\nsubjects_dir = data_path + '/subjects'\nlabel_name = 'Aud-lh'\nfname_label = data_path + '/MEG/sample/labels/%s.label' % label_name", "Read raw data", "raw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel\n\n# Pick a selection of magnetometer channels. A subset of all channels was used\n# to speed up the example. For a solution based on all MEG channels use\n# meg=True, selection=None and add mag=4e-12 to the reject dictionary.\nleft_temporal_channels = mne.read_selection('Left-temporal')\npicks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,\n stim=False, exclude='bads',\n selection=left_temporal_channels)\nraw.pick_channels([raw.ch_names[pick] for pick in picks])\nreject = dict(mag=4e-12)\n# Re-normalize our empty-room projectors, which should be fine after\n# subselection\nraw.info.normalize_proj()\n\n# Setting time windows. Note that tmin and tmax are set so that time-frequency\n# beamforming will be performed for a wider range of time points than will\n# later be displayed on the final spectrogram. 
This ensures that all time bins\n# displayed represent an average of an equal number of time windows.\ntmin, tmax, tstep = -0.5, 0.75, 0.05 # s\ntmin_plot, tmax_plot = -0.3, 0.5 # s\n\n# Read epochs\nevent_id = 1\nevents = mne.read_events(event_fname)\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax,\n baseline=None, preload=True, proj=True, reject=reject)\n\n# Read empty room noise raw data\nraw_noise = mne.io.read_raw_fif(noise_fname, preload=True)\nraw_noise.info['bads'] = ['MEG 2443'] # 1 bad MEG channel\nraw_noise.pick_channels([raw_noise.ch_names[pick] for pick in picks])\nraw_noise.info.normalize_proj()\n\n# Create noise epochs and make sure the number of noise epochs corresponds to\n# the number of data epochs\nevents_noise = make_fixed_length_events(raw_noise, event_id)\nepochs_noise = mne.Epochs(raw_noise, events_noise, event_id, tmin_plot,\n tmax_plot, baseline=None, preload=True, proj=True,\n reject=reject)\nepochs_noise.info.normalize_proj()\nepochs_noise.apply_proj()\n# then make sure the number of epochs is the same\nepochs_noise = epochs_noise[:len(epochs.events)]\n\n# Read forward operator\nforward = mne.read_forward_solution(fname_fwd)\n\n# Read label\nlabel = mne.read_label(fname_label)", "Time-frequency beamforming based on DICS", "# Setting frequency bins as in Dalal et al. 2008\nfreq_bins = [(4, 12), (12, 30), (30, 55), (65, 300)] # Hz\nwin_lengths = [0.3, 0.2, 0.15, 0.1] # s\n# Then set FFTs length for each frequency range.\n# Should be a power of 2 to be faster.\nn_ffts = [256, 128, 128, 128]\n\n# Subtract evoked response prior to computation?\nsubtract_evoked = False\n\n# Calculating noise cross-spectral density from empty room noise for each\n# frequency bin and the corresponding time window length. 
To calculate noise\n# from the baseline period in the data, change epochs_noise to epochs\nnoise_csds = []\nfor freq_bin, win_length, n_fft in zip(freq_bins, win_lengths, n_ffts):\n noise_csd = csd_fourier(epochs_noise, fmin=freq_bin[0], fmax=freq_bin[1],\n tmin=-win_length, tmax=0, n_fft=n_fft)\n noise_csds.append(noise_csd.sum())\n\n# Computing DICS solutions for time-frequency windows in a label in source\n# space for faster computation, use label=None for full solution\nstcs = tf_dics(epochs, forward, noise_csds, tmin, tmax, tstep, win_lengths,\n freq_bins=freq_bins, subtract_evoked=subtract_evoked,\n n_ffts=n_ffts, reg=0.05, label=label, inversion='matrix')\n\n# Plotting source spectrogram for source with maximum activity\n# Note that tmin and tmax are set to display a time range that is smaller than\n# the one for which beamforming estimates were calculated. This ensures that\n# all time bins shown are a result of smoothing across an identical number of\n# time windows.\nplot_source_spectrogram(stcs, freq_bins, tmin=tmin_plot, tmax=tmax_plot,\n source_index=None, colorbar=True)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sueiras/training
tensorflow/02-text/03-word_tagging/00_identify_tags_in_airline_database_embedings - EXERCISE.ipynb
gpl-3.0
[ "Identify tags in airline database\nMinimal code\n- Read dataset\n- transform data\n- Minimal model\n - Embeddings\n - Dense", "from __future__ import print_function\n\nimport os \nimport numpy as np \n\nimport tensorflow as tf \nprint(tf.__version__)\n\nos.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\"\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"\n\n#Show images\nimport matplotlib.pyplot as plt\n%matplotlib inline\n# plt configuration\nplt.rcParams['figure.figsize'] = (10, 10) # size of images\nplt.rcParams['image.interpolation'] = 'nearest' # show exact image\nplt.rcParams['image.cmap'] = 'gray' # use grayscale ", "Dataset\nATIS (Airline Travel Information System) dataset. Available in: https://github.com/mesnilgr/is13/blob/master/data/load.py\nExample:\nInput (words) show flights from Boston to New York today\nOutput (labels) O O O B-dept O B-arr I-arr B-date", "# Read data\nimport pickle\nimport sys\n\natis_file = '/home/ubuntu/data/training/text/atis/atis.pkl'\nwith open(atis_file,'rb') as f:\n if sys.version_info.major==2:\n train, test, dicts = pickle.load(f) #python2.7\n else:\n train, test, dicts = pickle.load(f, encoding='bytes') #python3\n", "train / test sets:\n- X: list of input sequences\n- label: List of target labels associated with each word in each sentence.\n\nDictionaries\n- labels2idx: To decode the labels\n- words2idx: To decode the sentences", "#Dictionaries and train test partition\nw2idx = dict()\nfor i in dicts[b'words2idx']:\n w2idx[i.decode(\"utf-8\")] = dicts[b'words2idx'][i]\n\nne2idx = dict()\nfor i in dicts[b'tables2idx']:\n ne2idx[i.decode(\"utf-8\")] = dicts[b'tables2idx'][i]\n\nlabels2idx = dict()\nfor i in dicts[b'labels2idx']:\n labels2idx[i.decode(\"utf-8\")] = dicts[b'labels2idx'][i] \nidx2w = dict((v,k) for k,v in w2idx.items())\nidx2la = dict((v,k) for k,v in labels2idx.items())\n\ntrain_x, _, train_label = train\ntest_x, _, test_label = test\n\n\n# Visualize data\nwlength = 35\nfor e in ['train','test']:\n print(e)\n for sw, sl
in zip(eval(e+'_x')[:2], eval(e+'_label')[:2]):\n print( 'WORD'.rjust(wlength), 'LABEL'.rjust(wlength))\n for wx, la in zip(sw, sl): print( idx2w[wx].rjust(wlength), idx2la[la].rjust(wlength))\n print( '\\n'+'**'*30+'\\n')\n\n\n#Select words for the label 48: b'B-fromloc.city_name' in train and test to check that they are different:\nfor e in ['train','test']:\n print(e)\n print('---------')\n for sw, sl in zip(eval(e+'_x')[:5], eval(e+'_label')[:5]):\n for wx, la in zip(sw, sl): \n if la==48:\n print( idx2w[wx])\n print('\\n')\n", "Data transformation\n- Convert the list of sequences of words into an array of words x characteristics.\n- The characteristics are the context of the word in the sentence.\n - For each word in the sentence, generate the context with the previous and the next words in the sentence.\n - For words at the beginning and the end, use padding to complete the context.", "# Max value of word coding to assign the ID_PAD\nID_PAD = np.max([np.max(tx) for tx in train_x]) + 1\nprint('ID_PAD: ', ID_PAD)\n\ndef context(l, size=3):\n l = list(l)\n lpadded = size // 2 * [ID_PAD] + l + size // 2 * [ID_PAD]\n out = [lpadded[i:(i + size)] for i in range(len(l))]\n return out\n\n#Example\nx = np.array([0, 1, 2, 3, 4], dtype=np.int32)\nprint('Context vectors: ', context(x))\n\n# Create train and test X y.\nX_trn=[]\nfor s in train_x:\n X_trn += context(s,size=10)\nX_trn = np.array(X_trn)\n\nX_tst=[]\nfor s in test_x:\n X_tst += context(s,size=10)\nX_tst = np.array(X_tst)\n\nprint('X trn shape: ', X_trn.shape)\nprint('X_tst shape: ',X_tst.shape)\n\n\ny_trn=[]\nfor s in train_label:\n y_trn += list(s)\ny_trn = np.array(y_trn)\nprint('y_trn shape: ',y_trn.shape)\n\ny_tst=[]\nfor s in test_label:\n y_tst += list(s)\ny_tst = np.array(y_tst)\nprint('y_tst shape: ',y_tst.shape)\n\n\nprint('Num labels: ',len(set(y_trn)))\nprint('Num words: ',len(set(idx2w)))", "First model\nArchitecture\n- tf.nn.embedding_lookup\n- Dense layer: tf.nn.relu(tf.matmul(x, W) + b)", "#General
parameters\nLOG_DIR = '/tmp/tensorboard/airline/embeddings/'\n\n# data attributes\ninput_seq_length = X_trn.shape[1]\ninput_vocabulary_size = len(set(idx2w)) + 1\noutput_length = 127\n\n#Model parameters\nembedding_size=64\n\n\n# build the model: simple model with embeddings\n\nfrom tensorflow.contrib.keras import layers, models, optimizers\n\nprint('Build model 1')\nseq_input = layers.Input(shape=([input_seq_length]), name='prev') \n\n#----------------------------------------\n# Put your embedding layer here\n#----------------------------------------\n\n#----------------------------------------\n# You need to do some transformation to connect the embedding out to the dense layer\n#----------------------------------------\n\n#----------------------------------------\n# Put your final dense layer here\n#----------------------------------------\noutput = \n\nmodel1 = models.Model(inputs=seq_input, outputs=output)\nmodel1.summary()\n\n# Optimizer\nadam_optimizer = optimizers.Adam()\nmodel1.compile(loss='sparse_categorical_crossentropy', optimizer=adam_optimizer, metrics=['accuracy'])\n\n\n#Plot the model graph\nfrom tensorflow.contrib.keras import utils\n\n# Create model image\nutils.plot_model(model1, '/tmp/model1.png')\n\n# Show image\nplt.imshow(plt.imread('/tmp/model1.png'))\n\n\n#Fit model\nhistory = model1.fit(X_trn, y_trn, batch_size=128, epochs=10,\n validation_data=(X_tst, y_tst))\n\n\n#Plot graphs in the notebook output\nplt.plot(history.history['acc'])\nplt.plot(history.history['val_acc'])\nplt.show()\n\n# Predict.
Score new paragraph \ndef score_paragraph(paragraph):\n #Preprocess data\n p_w = paragraph.split()\n p_w_c = [w2idx[w] for w in p_w]\n x_score = np.array(context(p_w_c, size=10))\n \n # Score\n pred_score = model1.predict(x_score)\n response = [idx2la[l] for l in np.argmax(pred_score,axis=1)]\n \n return response\n\n\nparagraph = 'i need a business ticket in any flight with departure from alaska to las vegas monday with breakfast'\nresponse = score_paragraph(paragraph)\nwlength = 35\nfor wx, la in zip(paragraph.split(), response): print( wx.rjust(wlength), la.rjust(wlength))\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
guyk1971/deep-learning
tv-script-generation/dlnd_tv_script_generation.ipynb
mit
[ "TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. 
Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)", "import numpy as np\nimport problem_unittests as tests\nfrom collections import Counter\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n word_counts = Counter(text) # creates a 'dictionary' word:count\n sorted_vocab = sorted(word_counts,key=word_counts.get, reverse=True)\n int_to_vocab = {ii: word for ii,word in enumerate(sorted_vocab)}\n vocab_to_int = {word:ii for ii,word in enumerate(sorted_vocab)}\n return (vocab_to_int, int_to_vocab)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to tokenize the symbols and add the delimiter (space) around it.
This separates the symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".", "def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n Tokenize=dict()\n Tokenize['.']='<PERIOD>'\n Tokenize[','] = '<COMMA>'\n Tokenize['\"'] = '<QUOTATION_MARK>'\n Tokenize[';'] = '<SEMICOLON>'\n Tokenize['!'] = '<EXCLAMATION_MARK>'\n Tokenize['?'] = '<QUESTION_MARK>'\n Tokenize['('] = '<LEFT_PAREN>'\n Tokenize[')'] = '<RIGHT_PAREN>'\n Tokenize['--'] = '<DASH>'\n Tokenize['\\n'] = '<RETURN>'\n return Tokenize\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here.
The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()", "Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)", "def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n inputs = tf.placeholder(dtype=tf.int32, shape=(None,None), name='input')\n targets = tf.placeholder(dtype=tf.int32, shape=(None,None), name='targets')\n learning_rate = tf.placeholder(dtype=tf.float32, name='learning_rate')\n \n return (inputs,targets,learning_rate)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)", "Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initalize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)", "def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n num_layers = 3\n lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n # todo: consider dropout\n # keep_prob = 0.5\n # drop = tf.contrib.rnn.DropoutWrapper(lstm,output_keep_prob=keep_prob)\n cell = tf.contrib.rnn.MultiRNNCell([lstm]*num_layers) # if dropout applied then replace 'lstm' with 'drop'\n initial_state=tf.identity(cell.zero_state(batch_size,tf.float32),name='initial_state')\n return (cell,initial_state)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)", "Word Embedding\nApply embedding to input_data using TensorFlow. 
Return the embedded sequence.", "def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n embedding = tf.Variable(tf.random_uniform((vocab_size,embed_dim),-1,1))\n embed = tf.nn.embedding_lookup(embedding,input_data)\n return embed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)", "Build RNN\nYou created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)", "def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n outputs,state = tf.nn.dynamic_rnn(cell,inputs,dtype=tf.float32) # why dtype=tf.float32 ? 
doesn't work with int32\n final_state = tf.identity(state,name='final_state')\n\n return (outputs,final_state)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)", "Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)", "def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :param embed_dim: Number of embedding dimensions\n :return: Tuple (Logits, FinalState)\n \"\"\"\n embed_words = get_embed(input_data, vocab_size, embed_dim) # shape : [None, None, 300]\n rnn_outputs, final_state = build_rnn(cell, embed_words) # shape: [None, None, 256]\n logits = tf.layers.dense(rnn_outputs,vocab_size)\n return (logits,final_state)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)", "Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length).
Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2], [ 7 8], [13 14]]\n # Batch of targets\n [[ 2 3], [ 8 9], [14 15]]\n ]\n# Second Batch\n [\n # Batch of Input\n [[ 3 4], [ 9 10], [15 16]]\n # Batch of targets\n [[ 4 5], [10 11], [16 17]]\n ]\n# Third Batch\n [\n # Batch of Input\n [[ 5 6], [11 12], [17 18]]\n # Batch of targets\n [[ 6 7], [12 13], [18 1]]\n ]\n]\n```\nNotice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.", "def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n # TODO: Implement Function\n num_batches = len(int_text)//(batch_size*seq_length)\n trimmed_text = int_text[:num_batches*(batch_size*seq_length)]\n inputs = trimmed_text\n targets = trimmed_text[1:]+[trimmed_text[0]]\n # the below code was inspired (copied with modification) from KaRNNa exercise - get_batches\n inputs = np.reshape(inputs,(batch_size,-1)) # now inputs.shape=[batch_size,num_batches*seq_length]\n targets = np.reshape(targets, (batch_size, -1))\n Batches=np.zeros((num_batches,2,batch_size,seq_length))\n for b,n in enumerate(range(0,inputs.shape[1],seq_length)):\n inp=np.expand_dims(inputs[:,n:n+seq_length],0)\n
tar=np.expand_dims(targets[:,n:n+seq_length],0)\n Batches[b]=np.vstack((inp,tar))\n return Batches\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet embed_dim to the size of the embedding.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.", "# Number of Epochs\nnum_epochs = 100\n# Batch Size\nbatch_size = 23 # s.t. 100 batches are 1 epoch \n# RNN Size\nrnn_size = 256\n# Embedding Dimension Size\nembed_dim = 200\n# Sequence Length\nseq_length = 30\n# Learning Rate\nlearning_rate = 0.005\n# Show stats for every n number of batches\nshow_every_n_batches = 50\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is 
not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')", "Save Parameters\nSave seq_length and save_dir for generating a new TV script.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()", "Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). 
Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)", "def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n InputTensor = loaded_graph.get_tensor_by_name(\"input:0\")\n InitialStateTensor = loaded_graph.get_tensor_by_name(\"initial_state:0\")\n FinalStateTensor = loaded_graph.get_tensor_by_name(\"final_state:0\")\n ProbsTensor = loaded_graph.get_tensor_by_name(\"probs:0\")\n return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)", "Choose Word\nImplement the pick_word() function to select the next word using probabilities.", "def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n sampled_int=np.random.choice(len(int_to_vocab),1,p=probabilities)[0]\n return int_to_vocab[sampled_int]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)", "Generate TV Script\nThis will generate the TV script for you. 
Set gen_length to the length of TV script you want to generate.", "gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)", "The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily there's more data! As we mentioned in the beginning of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data.
After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
hongguangguo/shogun
doc/ipython-notebooks/pca/pca_notebook.ipynb
gpl-3.0
[ "Principal Component Analysis in Shogun\nBy Abhijeet Kislay (GitHub ID: <a href='https://github.com/kislayabhi'>kislayabhi</a>)\nThis notebook is about finding Principal Components (<a href=\"http://en.wikipedia.org/wiki/Principal_component_analysis\">PCA</a>) of data (<a href=\"http://en.wikipedia.org/wiki/Unsupervised_learning\">unsupervised</a>) in Shogun. Its <a href=\"http://en.wikipedia.org/wiki/Dimensionality_reduction\">dimensional reduction</a> capabilities are further utilised to show its application in <a href=\"http://en.wikipedia.org/wiki/Data_compression\">data compression</a>, image processing and <a href=\"http://en.wikipedia.org/wiki/Facial_recognition_system\">face recognition</a>.", "%pylab inline\n%matplotlib inline\n# import all shogun classes\nfrom modshogun import *", "Some Formal Background (Skip if you just want code examples)\nPCA is a useful statistical technique that has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in data of high dimension.\nIn machine learning problems data is often high dimensional - images, bag-of-word descriptions etc. In such cases we cannot expect the training data to densely populate the space, meaning that there will be large parts in which little is known about the data. Hence it is expected that only a small number of directions are relevant for describing the data to a reasonable accuracy.\nThe data vectors may be very high dimensional, they will therefore typically lie closer to a much lower dimensional 'manifold'.\nHere we concentrate on linear dimensional reduction techniques. In this approach a high dimensional datapoint $\\mathbf{x}$ is 'projected down' to a lower dimensional vector $\\mathbf{y}$ by:\n$$\\mathbf{y}=\\mathbf{F}\\mathbf{x}+\\text{const}.$$\nwhere the matrix $\\mathbf{F}\\in\\mathbb{R}^{\\text{M}\\times \\text{D}}$, with $\\text{M}<\\text{D}$. 
Here $\\text{M}=\\dim(\\mathbf{y})$ and $\\text{D}=\\dim(\\mathbf{x})$.\nFrom the above scenario, we assume that\n\nThe number of principal components to use is $\\text{M}$.\nThe dimension of each data point is $\\text{D}$.\nThe number of data points is $\\text{N}$.\n\nWe express the approximation for datapoint $\\mathbf{x}^n$ as:$$\\mathbf{x}^n \\approx \\mathbf{c} + \\sum\\limits_{i=1}^{\\text{M}}y_i^n \\mathbf{b}^i \\equiv \\tilde{\\mathbf{x}}^n.$$\n* Here the vector $\\mathbf{c}$ is a constant and defines a point in the lower dimensional space.\n* The $\\mathbf{b}^i$ define vectors in the lower dimensional space (also known as 'principal component coefficients' or 'loadings').\n* The $y_i^n$ are the low dimensional co-ordinates of the data.\nOur motive is to find the reconstruction $\\tilde{\\mathbf{x}}^n$ given the lower dimensional representation $\\mathbf{y}^n$ (which has components $y_i^n,i = 1,...,\\text{M})$. For a data space of dimension $\\dim(\\mathbf{x})=\\text{D}$, we hope to accurately describe the data using only a small number $(\\text{M}\\ll \\text{D})$ of coordinates of $\\mathbf{y}$.\nTo determine the best lower dimensional representation it is convenient to use the square distance error between $\\mathbf{x}$ and its reconstruction $\\tilde{\\mathbf{x}}$:$$\\text{E}(\\mathbf{B},\\mathbf{Y},\\mathbf{c})=\\sum\\limits_{n=1}^{\\text{N}}\\sum\\limits_{i=1}^{\\text{D}}[x_i^n - \\tilde{x}_i^n]^2.$$\n* Here the basis vectors are defined as $\\mathbf{B} = [\\mathbf{b}^1,...,\\mathbf{b}^\\text{M}]$ (defining $[\\text{B}]_{i,j} = b_i^j$).\n* Corresponding low dimensional coordinates are defined as $\\mathbf{Y} = [\\mathbf{y}^1,...,\\mathbf{y}^\\text{N}].$\n* Also, $x_i^n$ and $\\tilde{x}_i^n$ represent the coordinates of the data points for the original and the reconstructed data respectively.\n* The bias $\\mathbf{c}$ is given by the mean of the data $\\sum_n\\mathbf{x}^n/\\text{N}$.\nTherefore, for simplification purposes we centre our data, so as to set
$\\mathbf{c}$ to zero. Now we concentrate on finding the optimal basis $\\mathbf{B}$( which has the components $\\mathbf{b}^i, i=1,...,\\text{M} $).\nDeriving the optimal linear reconstruction\nTo find the best basis vectors $\\mathbf{B}$ and corresponding low dimensional coordinates $\\mathbf{Y}$, we may minimize the sum of squared differences between each vector $\\mathbf{x}$ and its reconstruction $\\tilde{\\mathbf{x}}$:\n$\\text{E}(\\mathbf{B},\\mathbf{Y}) = \\sum\\limits_{n=1}^{\\text{N}}\\sum\\limits_{i=1}^{\\text{D}}\\left[x_i^n - \\sum\\limits_{j=1}^{\\text{M}}y_j^nb_i^j\\right]^2 = \\text{trace} \\left( (\\mathbf{X}-\\mathbf{B}\\mathbf{Y})^T(\\mathbf{X}-\\mathbf{B}\\mathbf{Y}) \\right)$\nwhere $\\mathbf{X} = [\\mathbf{x}^1,...,\\mathbf{x}^\\text{N}].$\nConsidering the above equation under the orthonormality constraint $\\mathbf{B}^T\\mathbf{B} = \\mathbf{I}$ (i.e the basis vectors are mutually orthogonal and of unit length), we differentiate it w.r.t $y_k^n$. The squared error $\\text{E}(\\mathbf{B},\\mathbf{Y})$ therefore has zero derivative when: \n$y_k^n = \\sum_i b_i^kx_i^n$\nBy substituting this solution in the above equation, the objective becomes\n$\\text{E}(\\mathbf{B}) = (\\text{N}-1)\\left[\\text{trace}(\\mathbf{S}) - \\text{trace}\\left(\\mathbf{S}\\mathbf{B}\\mathbf{B}^T\\right)\\right],$\nwhere $\\mathbf{S}$ is the sample covariance matrix of the data.\nTo minimise equation under the constraint $\\mathbf{B}^T\\mathbf{B} = \\mathbf{I}$, we use a set of Lagrange Multipliers $\\mathbf{L}$, so that the objective is to minimize: \n$-\\text{trace}\\left(\\mathbf{S}\\mathbf{B}\\mathbf{B}^T\\right)+\\text{trace}\\left(\\mathbf{L}\\left(\\mathbf{B}^T\\mathbf{B} - \\mathbf{I}\\right)\\right).$\nSince the constraint is symmetric, we can assume that $\\mathbf{L}$ is also symmetric. 
Differentiating with respect to $\\mathbf{B}$ and equating to zero we obtain that at the optimum \n$\\mathbf{S}\\mathbf{B} = \\mathbf{B}\\mathbf{L}$.\nThis is a form of eigen-equation, so that a solution is given by taking $\\mathbf{L}$ to be diagonal and $\\mathbf{B}$ as the matrix whose columns are the corresponding eigenvectors of $\\mathbf{S}$. In this case,\n$\\text{trace}\\left(\\mathbf{S}\\mathbf{B}\\mathbf{B}^T\\right) =\\text{trace}(\\mathbf{L}),$\nwhich is the sum of the eigenvalues corresponding to the eigenvectors forming $\\mathbf{B}$. Since we wish to minimise $\\text{E}(\\mathbf{B})$, we take the eigenvectors with the largest corresponding eigenvalues.\nWhilst the solution to this eigen-problem is unique, this only serves to define the solution subspace, since one may rotate and scale $\\mathbf{B}$ and $\\mathbf{Y}$ such that the value of the squared loss is exactly the same. The justification for choosing the non-rotated eigen solution is given by the additional requirement that the principal components correspond to directions of maximal variance.\nMaximum variance criterion\nWe aim to find that single direction $\\mathbf{b}$ such that, when the data is projected onto this direction, the variance of this projection is maximal amongst all possible such projections.\nThe projection of a datapoint onto a direction $\\mathbf{b}$ is $\\mathbf{b}^T\\mathbf{x}^n$ for a unit length vector $\\mathbf{b}$. Hence the sum of squared projections is: $$\\sum\\limits_{n}\\left(\\mathbf{b}^T\\mathbf{x}^n\\right)^2 = \\mathbf{b}^T\\left[\\sum\\limits_{n}\\mathbf{x}^n(\\mathbf{x}^n)^T\\right]\\mathbf{b} = (\\text{N}-1)\\mathbf{b}^T\\mathbf{S}\\mathbf{b} = \\lambda(\\text{N} - 1)$$ \nwhich, ignoring constants, is simply the negative of the equation for a single retained eigenvector $\\mathbf{b}$ (with $\\mathbf{S}\\mathbf{b} = \\lambda\\mathbf{b}$). 
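The maximum variance property derived above is easy to check numerically. The following is a minimal NumPy sketch (synthetic data, not part of the original notebook): no random unit direction attains a larger projection variance than the eigenvector of $\mathbf{S}$ with the largest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 centred points, stretched along the first axis.
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
X = X - X.mean(axis=0)                    # centre the data (c = 0)

S = X.T @ X / (len(X) - 1)                # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)      # eigenvalues in ascending order
b = eigvecs[:, -1]                        # eigenvector of the largest eigenvalue

def proj_var(direction):
    """Variance of the data projected onto a unit-length direction."""
    return np.var(X @ direction, ddof=1)

best = proj_var(b)                        # equals the largest eigenvalue of S
for _ in range(200):
    d = rng.normal(size=2)
    d /= np.linalg.norm(d)                # random unit direction
    assert proj_var(d) <= best + 1e-9     # never beats the principal direction
```

The Rayleigh-quotient argument guarantees the assertion can never fire: $\mathbf{b}^T\mathbf{S}\mathbf{b}$ is maximised over unit vectors exactly at the top eigenvector.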
Hence the optimal single $\\mathbf{b}$ which maximises the projection variance is given by the eigenvector corresponding to the largest eigenvalue of $\\mathbf{S}.$ The eigenvector with the second largest eigenvalue corresponds to the next orthogonal optimal direction, and so on. This explains why, despite the squared loss equation being invariant with respect to arbitrary rotation of the basis vectors, the ones given by the eigen-decomposition have the additional property that they correspond to directions of maximal variance. These maximal variance directions found by PCA are called the $\\text{principal}$ $\\text{directions}.$\nThere are two eigenvalue methods through which Shogun can perform PCA, namely\n* Eigenvalue Decomposition Method.\n* Singular Value Decomposition.\nEVD vs SVD\n\nThe EVD viewpoint requires that one compute the eigenvalues and eigenvectors of the covariance matrix, which is proportional to the product $\\mathbf{X}\\mathbf{X}^\\text{T}$, where $\\mathbf{X}$ is the data matrix. Since the covariance matrix is symmetric, the matrix is diagonalizable, and the eigenvectors can be normalized such that they are orthonormal:\n\n$\\mathbf{S}=\\frac{1}{\\text{N}-1}\\mathbf{X}\\mathbf{X}^\\text{T},$\nwhere the $\\text{D}\\times\\text{N}$ matrix $\\mathbf{X}$ contains all the data vectors: $\\mathbf{X}=[\\mathbf{x}^1,...,\\mathbf{x}^\\text{N}].$\nWriting the $\\text{D}\\times\\text{N}$ matrix of eigenvectors as $\\mathbf{E}$ and the eigenvalues as an $\\text{N}\\times\\text{N}$ diagonal matrix $\\mathbf{\\Lambda}$, the eigen-decomposition of the covariance $\\mathbf{S}$ is\n$\\mathbf{X}\\mathbf{X}^\\text{T}\\mathbf{E}=\\mathbf{E}\\mathbf{\\Lambda}\\Longrightarrow\\mathbf{X}^\\text{T}\\mathbf{X}\\mathbf{X}^\\text{T}\\mathbf{E}=\\mathbf{X}^\\text{T}\\mathbf{E}\\mathbf{\\Lambda}\\Longrightarrow\\mathbf{X}^\\text{T}\\mathbf{X}\\tilde{\\mathbf{E}}=\\tilde{\\mathbf{E}}\\mathbf{\\Lambda},$\nwhere we defined $\\tilde{\\mathbf{E}}=\\mathbf{X}^\\text{T}\\mathbf{E}$. 
The final expression above represents the eigenvector equation for $\\mathbf{X}^\\text{T}\\mathbf{X}.$ This is a matrix of dimensions $\\text{N}\\times\\text{N}$, so that calculating the eigen-decomposition takes $\\mathcal{O}(\\text{N}^3)$ operations, compared with $\\mathcal{O}(\\text{D}^3)$ operations in the original high-dimensional space. We can therefore calculate the eigenvectors $\\tilde{\\mathbf{E}}$ and eigenvalues $\\mathbf{\\Lambda}$ of this matrix more easily. Once found, we use the fact that the eigenvalues of $\\mathbf{S}$ are given by the diagonal entries of $\\mathbf{\\Lambda}$ and the eigenvectors by\n$\\mathbf{E}=\\mathbf{X}\\tilde{\\mathbf{E}}\\mathbf{\\Lambda}^{-1}$\n\nOn the other hand, applying SVD to the data matrix $\\mathbf{X}$ proceeds as follows:\n\n$\\mathbf{X}=\\mathbf{U}\\mathbf{\\Sigma}\\mathbf{V}^\\text{T}$\nwhere $\\mathbf{U}^\\text{T}\\mathbf{U}=\\mathbf{I}_\\text{D}$ and $\\mathbf{V}^\\text{T}\\mathbf{V}=\\mathbf{I}_\\text{N}$ and $\\mathbf{\\Sigma}$ is a diagonal matrix of the (positive) singular values. 
We assume that the decomposition has ordered the singular values so that the upper left diagonal element of $\\mathbf{\\Sigma}$ contains the largest singular value.\nAttempting to construct the covariance matrix $(\\mathbf{X}\\mathbf{X}^\\text{T})$ from this decomposition gives:\n$\\mathbf{X}\\mathbf{X}^\\text{T} = \\left(\\mathbf{U}\\mathbf{\\Sigma}\\mathbf{V}^\\text{T}\\right)\\left(\\mathbf{U}\\mathbf{\\Sigma}\\mathbf{V}^\\text{T}\\right)^\\text{T}$\n$\\mathbf{X}\\mathbf{X}^\\text{T} = \\left(\\mathbf{U}\\mathbf{\\Sigma}\\mathbf{V}^\\text{T}\\right)\\left(\\mathbf{V}\\mathbf{\\Sigma}\\mathbf{U}^\\text{T}\\right)$\nand since $\\mathbf{V}$ is an orthogonal matrix $\\left(\\mathbf{V}^\\text{T}\\mathbf{V}=\\mathbf{I}\\right),$\n$\\mathbf{X}\\mathbf{X}^\\text{T}=\\left(\\mathbf{U}\\mathbf{\\Sigma}^2\\mathbf{U}^\\text{T}\\right)$\nSince this is in the form of an eigen-decomposition, the PCA solution is given by performing the SVD decomposition of $\\mathbf{X}$, for which the eigenvectors are then given by $\\mathbf{U}$, and the corresponding eigenvalues by the square of the singular values.\nCPCA Class Reference (Shogun)\nThe CPCA class of Shogun inherits from the CPreprocessor class. Preprocessors are transformation functions that don't change the domain of the input features. Specifically, CPCA performs principal component analysis on the input vectors and keeps only the specified number of eigenvectors. 
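The EVD/SVD correspondence derived above can be verified numerically. Below is a minimal NumPy sketch (synthetic centred data, not part of the original notebook) showing that the two routes yield the same eigenvalues and, up to sign, the same eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(1)
D, N = 5, 40
X = rng.normal(size=(D, N))
X = X - X.mean(axis=1, keepdims=True)      # centred data, columns are datapoints

# EVD route: eigen-decompose the covariance S = X X^T / (N - 1).
S = X @ X.T / (N - 1)
eigvals, E = np.linalg.eigh(S)             # eigenvalues in ascending order

# SVD route: decompose the data matrix X directly.
U, sing, Vt = np.linalg.svd(X, full_matrices=False)

# Eigenvalues of S are the squared singular values of X divided by (N - 1).
assert np.allclose(eigvals[::-1], sing**2 / (N - 1))

# The eigenvectors agree with the columns of U up to sign.
for k in range(D):
    assert np.isclose(abs(U[:, k] @ E[:, D - 1 - k]), 1.0)
```

Since `np.linalg.eigh` orders eigenvalues ascending while `np.linalg.svd` orders singular values descending, one of the two orderings has to be reversed before comparing.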
On preprocessing, the stored covariance matrix is used to project vectors into eigenspace.\nThe performance of PCA depends on the algorithm used, chosen according to the situation at hand.\nOur PCA preprocessor class provides 3 method options to compute the transformation matrix:\n\n\n$\\text{PCA(EVD)}$ sets $\\text{PCAmethod == EVD}$ : Eigenvalue Decomposition of the Covariance Matrix $(\\mathbf{XX^T}).$\nThe covariance matrix $\\mathbf{XX^T}$ is first formed internally and then\nits eigenvectors and eigenvalues are computed using QR decomposition of the matrix.\nThe time complexity of this method is $\\mathcal{O}(D^3)$ and should be used when $\\text{N > D.}$\n\n\n$\\text{PCA(SVD)}$ sets $\\text{PCAmethod == SVD}$ : Singular Value Decomposition of the feature matrix $\\mathbf{X}$.\nThe transpose of the feature matrix, $\\mathbf{X^T}$, is decomposed using SVD. $\\mathbf{X^T = UDV^T}.$\nThe matrix V in this decomposition contains the required eigenvectors and\nthe diagonal entries of the diagonal matrix D correspond to the non-negative\neigenvalues. The time complexity of this method is $\\mathcal{O}(DN^2)$ and should be used when $\\text{N < D.}$\n\n\n$\\text{PCA(AUTO)}$ sets $\\text{PCAmethod == AUTO}$ : This mode automagically chooses one of the above modes for the user based on whether $\\text{N>D}$ (chooses $\\text{EVD}$) or $\\text{N<D}$ (chooses $\\text{SVD}$).\n\n\nPCA on 2D data\nStep 1: Get some data\nWe will generate the toy data by adding orthogonal noise to a set of points lying on an arbitrary 2d line. 
We expect PCA to recover this line, which is a one-dimensional linear sub-space.", "#number of data points.\nn=100\n\n#generate a random 2d line (y1 = m*x1 + c)\nm = random.randint(1,10)\nc = random.randint(1,10)\nx1 = random.random_integers(-20,20,n)\ny1=m*x1+c\n\n#generate the noise.\nnoise=random.random_sample([n]) * random.random_integers(-35,35,n)\n\n#make the noise orthogonal to the line y=mx+c and add it (the orthogonal direction is (-m,1)/sqrt(1+m^2)).\nx=x1 - noise*m/sqrt(1+square(m))\ny=y1 + noise/sqrt(1+square(m))\n\ntwoD_obsmatrix=array([x,y])\n\n#to visualise the data we must plot it.\n\nrcParams['figure.figsize'] = 7, 7 \nfigure,axis=subplots(1,1)\nxlim(-50,50)\nylim(-50,50)\naxis.plot(twoD_obsmatrix[0,:],twoD_obsmatrix[1,:],'o',color='green',markersize=6)\n\n#the line from which we generated the data is plotted in red\naxis.plot(x1[:],y1[:],linewidth=0.3,color='red')\ntitle('One-Dimensional sub-space with noise')\nxlabel(\"x axis\")\n_=ylabel(\"y axis\")", "Step 2: Subtract the mean.\nFor PCA to work properly, we must subtract the mean from each of the data dimensions. The mean subtracted is the average across each dimension. So, all the $x$ values have $\\bar{x}$ subtracted, and all the $y$ values have $\\bar{y}$ subtracted from them, where:$$\\bar{\\mathbf{x}} = \\frac{\\sum\\limits_{i=1}^{n}x_i}{n}$$ Here $\\bar{\\mathbf{x}}$ denotes the mean of the $x_i$'s.\nShogun's way of doing things:\nPreprocessor PCA performs principal component analysis on input feature vectors/matrices. 
It provides an interface to set the target dimension via the $\\text{set_target_dim}$ method. When the $\\text{init()}$ method in $\\text{PCA}$ is called with a proper\nfeature matrix $\\text{X}$ (with, say, $\\text{N}$ vectors of feature dimension $\\text{D}$), a transformation matrix is computed and stored internally. It inherently also centralizes the data by subtracting the mean from it.", "#convert the observation matrix into a dense feature matrix.\ntrain_features = RealFeatures(twoD_obsmatrix)\n\n#PCA(EVD) is chosen since N=100 and D=2 (N>D).\n#However we can also use PCA(AUTO) as it will automagically choose the appropriate method. \npreprocessor = PCA(EVD)\n\n#since we are projecting down the 2d data, the target dim is 1. But here the exhaustive method is detailed by\n#setting the target dimension to 2 to visualize both the eigenvectors.\n#However, in future examples we will get rid of this step by implementing it directly.\npreprocessor.set_target_dim(2)\n\n#Centralise the data by subtracting its mean from it.\npreprocessor.init(train_features)\n\n#get the mean for the respective dimensions.\nmean_datapoints=preprocessor.get_mean()\nmean_x=mean_datapoints[0]\nmean_y=mean_datapoints[1]", "Step 3: Calculate the covariance matrix\nTo understand the relationship between two dimensions, we define $\\text{covariance}$. 
It is a measure to find out how much the dimensions vary from the mean $with$ $respect$ $to$ $each$ $other.$$$cov(X,Y)=\\frac{\\sum\\limits_{i=1}^{n}(X_i-\\bar{X})(Y_i-\\bar{Y})}{n-1}$$\nA useful way to get all the possible covariance values between all the different dimensions is to calculate them all and put them in a matrix.\nExample: For a 3d dataset with the usual dimensions $x,y$ and $z$, the covariance matrix has 3 rows and 3 columns, and the values are:\n$$\\mathbf{S} = \\quad\\begin{pmatrix}cov(x,x)&cov(x,y)&cov(x,z)\\\\cov(y,x)&cov(y,y)&cov(y,z)\\\\cov(z,x)&cov(z,y)&cov(z,z)\\end{pmatrix}$$\nStep 4: Calculate the eigenvectors and eigenvalues of the covariance matrix\nFind the eigenvectors $\\mathbf{e}^1,...,\\mathbf{e}^M$ of the covariance matrix $\\mathbf{S}$.\nShogun's way of doing things:\nStep 3 and Step 4 are directly implemented by the PCA preprocessor of the Shogun toolbox. The transformation matrix is essentially a $\\text{D}\\times\\text{M}$ matrix, the columns of which correspond to the eigenvectors of the covariance matrix $(\\text{X}\\text{X}^\\text{T})$ having the top $\\text{M}$ eigenvalues.", "#Get the eigenvectors (We will get two of these since we set the target to 2). 
\nE = preprocessor.get_transformation_matrix()\n\n#Get all the eigenvalues returned by PCA.\neig_value=preprocessor.get_eigenvalues()\n\ne1 = E[:,0]\ne2 = E[:,1]\neig_value1 = eig_value[0]\neig_value2 = eig_value[1]", "Step 5: Choosing components and forming a feature vector.\nLet's visualize the eigenvectors and decide which to choose as the $principal$ $component$ of the data set.", "#find out the M eigenvectors corresponding to the top M eigenvalues and store them in E\n#Here M=1\n\n#slope of e1 & e2\nm1=e1[1]/e1[0]\nm2=e2[1]/e2[0]\n\n#generate the two lines\nx1=range(-50,50)\nx2=x1\ny1=multiply(m1,x1)\ny2=multiply(m2,x2)\n\n#plot the data along with those two eigenvectors\nfigure, axis = subplots(1,1)\nxlim(-50, 50)\nylim(-50, 50)\naxis.plot(x[:], y[:],'o',color='green', markersize=5, label=\"green\")\naxis.plot(x1[:], y1[:], linewidth=0.7, color='black')\naxis.plot(x2[:], y2[:], linewidth=0.7, color='blue')\np1 = Rectangle((0, 0), 1, 1, fc=\"black\")\np2 = Rectangle((0, 0), 1, 1, fc=\"blue\")\nlegend([p1,p2],[\"1st eigenvector\",\"2nd eigenvector\"],loc='center left', bbox_to_anchor=(1, 0.5))\ntitle('Eigenvectors selection')\nxlabel(\"x axis\")\n_=ylabel(\"y axis\")", "In the above figure, the blue line is a good fit of the data. It shows the most significant relationship between the data dimensions.\nIt turns out that the eigenvector with the $highest$ eigenvalue is the $principal$ $component$ of the data set.\nForm the matrix $\\mathbf{E}=[\\mathbf{e}^1,...,\\mathbf{e}^M].$\nHere $\\text{M}$ represents the target dimension of our final projection", "#The eigenvector corresponding to the higher eigenvalue (i.e. eig_value2) is chosen (i.e. e2).\n#E is the feature vector.\nE=e2", "Step 6: Projecting the data to its Principal Components.\nThis is the final step in PCA. 
Once we have chosen the components (eigenvectors) that we wish to keep in our data and formed a feature vector, we simply take the vector and multiply it on the left of the original dataset.\nThe lower dimensional representation of each data point $\\mathbf{x}^n$ is given by \n$\\mathbf{y}^n=\\mathbf{E}^T(\\mathbf{x}^n-\\mathbf{m})$\nHere $\\mathbf{E}^T$ is the matrix with the eigenvectors in rows, with the most significant eigenvector at the top. The mean-adjusted data, with data items in columns and each row holding a separate dimension, is multiplied by it.\nShogun's way of doing things:\nStep 6 can be performed by Shogun's PCA preprocessor as follows:\nThe transformation matrix that we got after $\\text{init()}$ is used to transform all $\\text{D-dim}$ feature matrices (with $\\text{D}$ feature dimensions) supplied, via the $\\text{apply_to_feature_matrix}$ method. This transformation outputs the $\\text{M-Dim}$ approximation of all these input vectors and matrices (where $\\text{M}$ $\\leq$ $\\text{min(D,N)}$).", "#transform all 2-dimensional feature matrices to target-dimensional approximations.\nyn=preprocessor.apply_to_feature_matrix(train_features)\n\n#Since here we are manually trying to find the eigenvector corresponding to the top eigenvalue,\n#the 2nd row of yn is chosen as it corresponds to the required eigenvector e2.\nyn1=yn[1,:]", "Step 5 and Step 6 can be applied directly with Shogun's PCA preprocessor (from next example). 
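For reference, the projection and reconstruction steps of the pipeline above can be sketched in plain NumPy, independent of Shogun. This is a minimal sketch on synthetic data; the variable names are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy points around the line y = 2x; columns are datapoints.
t = rng.uniform(-20, 20, 100)
X = np.vstack([t, 2 * t + rng.normal(scale=2.0, size=100)])

m = X.mean(axis=1, keepdims=True)          # data mean
Xc = X - m                                 # centred data

# Principal direction: top eigenvector of the sample covariance matrix.
_, vecs = np.linalg.eigh(Xc @ Xc.T / (X.shape[1] - 1))
E = vecs[:, -1:]                           # D x M matrix with M = 1

y = E.T @ Xc                               # low dimensional coordinates y = E^T(x - m)
X_rec = m + E @ y                          # approximate reconstruction x~ = m + E y

# Projecting onto the principal direction can only shrink the squared error.
assert np.sum((X - X_rec) ** 2) <= np.sum(Xc ** 2)
```

The last assertion holds because `E @ E.T` is an orthogonal projection, so the residual norm never exceeds the norm of the centred data.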
It has been done manually here to show the exhaustive nature of Principal Component Analysis.\nStep 7: Form the approximate reconstruction of the original data $\\mathbf{x}^n$\nThe approximate reconstruction of the original datapoint $\\mathbf{x}^n$ is given by: $\\tilde{\\mathbf{x}}^n\\approx\\mathbf{m}+\\mathbf{E}\\mathbf{y}^n$", "x_new=(yn1 * E[0]) + tile(mean_x,[n,1]).T[0]\ny_new=(yn1 * E[1]) + tile(mean_y,[n,1]).T[0]", "The new data is plotted below.", "figure, axis = subplots(1,1)\nxlim(-50, 50)\nylim(-50, 50)\n\naxis.plot(x[:], y[:],'o',color='green', markersize=5, label=\"green\")\naxis.plot(x_new, y_new, 'o', color='blue', markersize=5, label=\"red\")\ntitle('PCA Projection of 2D data into 1D subspace')\nxlabel(\"x axis\")\nylabel(\"y axis\")\n\n#add some legend for information\np1 = Rectangle((0, 0), 1, 1, fc=\"r\")\np2 = Rectangle((0, 0), 1, 1, fc=\"g\")\np3 = Rectangle((0, 0), 1, 1, fc=\"b\")\nlegend([p1,p2,p3],[\"normal projection\",\"2d data\",\"1d projection\"],loc='center left', bbox_to_anchor=(1, 0.5))\n\n#plot the projections in red:\nfor i in range(n):\n axis.plot([x[i],x_new[i]],[y[i],y_new[i]] , color='red')", "PCA on 3D data\nStep 1: Get some data\nWe generate points from a plane and then add random noise orthogonal to it. 
The general equation of a plane is: $$\\text{a}\\mathbf{x}+\\text{b}\\mathbf{y}+\\text{c}\\mathbf{z}+\\text{d}=0$$", "rcParams['figure.figsize'] = 8,8 \n#number of points\nn=100\n\n#generate the data\na=random.randint(1,20)\nb=random.randint(1,20)\nc=random.randint(1,20)\nd=random.randint(1,20)\n\nx1=random.random_integers(-20,20,n)\ny1=random.random_integers(-20,20,n)\nz1=-(a*x1+b*y1+d)/c\n\n#generate the noise\nnoise=random.random_sample([n])*random.random_integers(-30,30,n)\n\n#the normal unit vector is [a,b,c]/magnitude\nmagnitude=sqrt(square(a)+square(b)+square(c))\nnormal_vec=array([a,b,c]/magnitude)\n\n#add the noise orthogonally\nx=x1+noise*normal_vec[0]\ny=y1+noise*normal_vec[1]\nz=z1+noise*normal_vec[2]\nthreeD_obsmatrix=array([x,y,z])\n\n#to visualize the data, we must plot it.\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfig = pyplot.figure()\nax=fig.add_subplot(111, projection='3d')\n\n#plot the noisy data generated by distorting a plane\nax.scatter(x, y, z,marker='o', color='g')\n\nax.set_xlabel('x label')\nax.set_ylabel('y label')\nax.set_zlabel('z label')\nlegend([p2],[\"3d data\"],loc='center left', bbox_to_anchor=(1, 0.5))\ntitle('Two dimensional subspace with noise')\nxx, yy = meshgrid(range(-30,30), range(-30,30))\nzz=-(a * xx + b * yy + d) / c", "Step 2: Subtract the mean.", "#convert the observation matrix into a dense feature matrix.\ntrain_features = RealFeatures(threeD_obsmatrix)\n\n#PCA(EVD) is chosen since N=100 and D=3 (N>D).\n#However we can also use PCA(AUTO) as it will automagically choose the appropriate method. 
\npreprocessor = PCA(EVD)\n\n#If we set the target dimension to 2, Shogun will automagically preserve the required 2 eigenvectors (out of 3) according to their\n#eigenvalues.\npreprocessor.set_target_dim(2)\npreprocessor.init(train_features)\n\n#get the mean for the respective dimensions.\nmean_datapoints=preprocessor.get_mean()\nmean_x=mean_datapoints[0]\nmean_y=mean_datapoints[1]\nmean_z=mean_datapoints[2]", "Step 3 & Step 4: Calculate the eigenvectors of the covariance matrix", "#get the required eigenvectors corresponding to the top 2 eigenvalues.\nE = preprocessor.get_transformation_matrix()", "Step 5: Choosing components and forming a feature vector.\nSince we performed PCA with target $\\dim = 2$ for the $3 \\dim$ data, we are directly given \nthe two required eigenvectors in $\\mathbf{E}$.\nE is automagically filled by setting target dimension = M. This is different from the 2d data example where we implemented this step manually.\nStep 6: Projecting the data to its Principal Components.", "#This can be performed by Shogun's PCA preprocessor as follows:\nyn=preprocessor.apply_to_feature_matrix(train_features)", "Step 7: Form the approximate reconstruction of the original data $\\mathbf{x}^n$\nThe approximate reconstruction of the original datapoint $\\mathbf{x}^n$ is given by: $\\tilde{\\mathbf{x}}^n\\approx\\mathbf{m}+\\mathbf{E}\\mathbf{y}^n$", "new_data=dot(E,yn)\n\nx_new=new_data[0,:]+tile(mean_x,[n,1]).T[0]\ny_new=new_data[1,:]+tile(mean_y,[n,1]).T[0]\nz_new=new_data[2,:]+tile(mean_z,[n,1]).T[0]\n\n#all the above points lie on the same plane. 
To make it more clear we will plot the projection also.\n\nfig=pyplot.figure()\nax=fig.add_subplot(111, projection='3d')\nax.scatter(x, y, z,marker='o', color='g')\nax.set_xlabel('x label')\nax.set_ylabel('y label')\nax.set_zlabel('z label')\nlegend([p1,p2,p3],[\"normal projection\",\"3d data\",\"2d projection\"],loc='center left', bbox_to_anchor=(1, 0.5))\ntitle('PCA Projection of 3D data into 2D subspace')\n\nfor i in range(100):\n ax.scatter(x_new[i], y_new[i], z_new[i],marker='o', color='b')\n ax.plot([x[i],x_new[i]],[y[i],y_new[i]],[z[i],z_new[i]],color='r') ", "PCA Performance\nUp till now we were using the Eigenvalue Decomposition method to compute the transformation matrix $\\text{(N>D)}$, but for the next example $\\text{(N<D)}$ we will be using Singular Value Decomposition.\nPractical Example: Eigenfaces\nThe problem with the image representation we are given is its high dimensionality. Two-dimensional $\\text{p} \\times \\text{q}$ grayscale images span a $\\text{m=pq}$ dimensional vector space, so an image with $\\text{100}\\times\\text{100}$ pixels lies in a $\\text{10,000}$ dimensional image space already. \nThe question is, are all dimensions really useful for us?\n$\\text{Eigenfaces}$ are based on the dimensional reduction approach of $\\text{Principal Component Analysis (PCA)}$. The basic idea is to treat each image as a vector in a high dimensional space. Then, $\\text{PCA}$ is applied to the set of images to produce a new reduced subspace that captures most of the variability between the input images. The $\\text{Principal Component Vectors}$ (eigenvectors of the sample covariance matrix) are called the $\\text{Eigenfaces}$. Every input image can be represented as a linear combination of these eigenfaces by projecting the image onto the new eigenfaces space. Thus, we can perform the identification process by matching in this reduced space. 
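The "image as a vector" construction described above can be sketched in a few lines of NumPy. The batch below is hypothetical random data, used only to show the resulting shapes:

```python
import numpy as np

# A hypothetical batch of 8 grayscale images, each 100 x 100 pixels.
images = np.random.default_rng(4).integers(0, 256, size=(8, 100, 100)).astype(float)

# Each image I_i is flattened into one column vector Gamma_i,
# giving a D x N observation matrix with D = p*q = 10000 and N = 8.
obs_matrix = images.reshape(8, -1).T
assert obs_matrix.shape == (10000, 8)      # D >> N: the regime where SVD is preferred
```

With far more pixels than images, the covariance matrix would be 10000 x 10000, which is exactly why the SVD (or the small N x N eigen-problem) route is used for eigenfaces.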
An input image is transformed into the $\\text{eigenspace,}$ and the nearest face is identified using a $\\text{Nearest Neighbour approach.}$\nStep 1: Get some data.\nHere, data means the images which will be used for training purposes.", "rcParams['figure.figsize'] = 10, 10 \nimport os\ndef get_imlist(path):\n \"\"\" Returns a list of filenames for all pgm images in a directory\"\"\"\n return [os.path.join(path,f) for f in os.listdir(path) if f.endswith('.pgm')]\n\n#set path of the training images\npath_train='../../../data/att_dataset/training/'\n#set no. of rows to which the images will be resized.\nk1=100\n#set no. of columns to which the images will be resized.\nk2=100\n\nfilenames = get_imlist(path_train)\nfilenames = array(filenames)\n\n#n is the total number of images that have to be analysed.\nn=len(filenames)", "Let's have a look at the data:", "# we will be using this often to visualize the images out there.\ndef showfig(image):\n imgplot=imshow(image, cmap='gray')\n imgplot.axes.get_xaxis().set_visible(False)\n imgplot.axes.get_yaxis().set_visible(False)\n \nimport Image\nfrom scipy import misc\n\n# to get a hang of the data, let's see some part of the dataset images.\nfig = pyplot.figure()\ntitle('The Training Dataset')\n\nfor i in range(49):\n fig.add_subplot(7,7,i+1)\n train_img=array(Image.open(filenames[i]).convert('L'))\n train_img=misc.imresize(train_img, [k1,k2])\n showfig(train_img)", "Represent every image $I_i$ as a vector $\\Gamma_i$", "#To form the observation matrix obs_matrix.\n#read the 1st image.\ntrain_img = array(Image.open(filenames[0]).convert('L'))\n\n#resize it to k1 rows and k2 columns\ntrain_img=misc.imresize(train_img, [k1,k2])\n\n#since RealFeatures accepts only data of float64 datatype, we do a type conversion\ntrain_img=array(train_img, dtype='double')\n\n#flatten it to make it a row vector.\ntrain_img=train_img.flatten()\n\n# repeat the above for all images and stack all those vectors together in a matrix\nfor i in range(1,n):\n 
temp=array(Image.open(filenames[i]).convert('L')) \n temp=misc.imresize(temp, [k1,k2])\n temp=array(temp, dtype='double')\n temp=temp.flatten()\n train_img=vstack([train_img,temp])\n\n#form the observation matrix \nobs_matrix=train_img.T", "Step 2: Subtract the mean\nIt is very important that the face images $I_1,I_2,...,I_M$ are $centered$ and of the $same$ size.\nWe observe here that the number of dimensions of each image is far greater than the number of training images. This calls for the use of $\\text{SVD}$.\nSetting the $\\text{PCA}$ in the $\\text{AUTO}$ mode does this automagically according to the situation.", "train_features = RealFeatures(obs_matrix)\npreprocessor=PCA(AUTO)\n\npreprocessor.set_target_dim(100)\npreprocessor.init(train_features)\n\nmean=preprocessor.get_mean()", "Step 3 & Step 4: Calculate the eigenvectors and eigenvalues of the covariance matrix.", "#get the required eigenvectors corresponding to the top 100 eigenvalues\nE = preprocessor.get_transformation_matrix()\n\n#let's see what these eigenfaces/eigenvectors look like:\nfig1 = pyplot.figure()\ntitle('Top 20 Eigenfaces')\n\nfor i in range(20):\n a = fig1.add_subplot(5,4,i+1)\n eigen_faces=E[:,i].reshape([k1,k2])\n showfig(eigen_faces)\n \n", "These 20 eigenfaces are not sufficient for a good image reconstruction. Having more eigenvectors gives us the most flexibility in the number of faces we can reconstruct. Though we are adding vectors with low variance, they are in directions of change nonetheless, and an external image that is not in our database could in fact need these eigenvectors to get even relatively close to it. 
But at the same time we must also keep in mind that adding excessive eigenvectors results in the addition of little or no variance, slowing down the process.\nClearly a tradeoff is required.\nHere we set M=100.\nStep 5: Choosing components and forming a feature vector.\nSince we set target $\\dim = 100$ for this $n \\dim$ data, we are directly given the $100$ required eigenvectors in $\\mathbf{E}$.\nE is automagically filled. This is different from the 2d data example where we implemented this step manually.\nStep 6: Projecting the data to its Principal Components.\nThe lower dimensional representation of each data point $\\mathbf{x}^n$ is given by $$\\mathbf{y}^n=\\mathbf{E}^T(\\mathbf{x}^n-\\mathbf{m})$$", "#we perform the required dot product.\nyn=preprocessor.apply_to_feature_matrix(train_features)", "Step 7: Form the approximate reconstruction of the original image $I_n$\nThe approximate reconstruction of the original datapoint $\\mathbf{x}^n$ is given by: $\\tilde{\\mathbf{x}}^n\\approx\\mathbf{m}+\\mathbf{E}\\mathbf{y}^n$", "re=tile(mean,[n,1]).T[0] + dot(E,yn)\n\n#let's plot the reconstructed images.\nfig2 = pyplot.figure()\ntitle('Reconstructed Images from 100 eigenfaces')\nfor i in range(1,50):\n re1 = re[:,i].reshape([k1,k2])\n fig2.add_subplot(7,7,i)\n showfig(re1)", "Recognition part.\nIn our face recognition process using the Eigenfaces approach, in order to recognize an unseen image, we proceed with the same preprocessing steps as applied to the training images.\nTest images are represented in terms of eigenface coefficients by projecting them into face space $\\text{(eigenspace)}$ calculated during training. A test sample is recognized by measuring the similarity distance between the test sample and all samples in the training set. The similarity measure is a metric of distance calculated between two vectors. 
The traditional Eigenface approach utilizes the $\\text{Euclidean distance}$.", "#set path of the testing images\npath_train='../../../data/att_dataset/testing/'\ntest_files=get_imlist(path_train)\ntest_img=array(Image.open(test_files[0]).convert('L'))\n\nrcParams.update({'figure.figsize': (3, 3)})\n#we plot the test image, for which we have to identify a good match from the training images we already have\nfig = pyplot.figure()\ntitle('The Test Image')\nshowfig(test_img)\n\n#We flatten out our test image just the way we have done for the other images\ntest_img=misc.imresize(test_img, [k1,k2])\ntest_img=array(test_img, dtype='double')\ntest_img=test_img.flatten()\n\n#We centralise the test image by subtracting the mean from it.\ntest_f=test_img-mean", "Here we have to project our training images as well as the test image onto the PCA subspace.\nThe Eigenfaces method then performs face recognition by:\n1. Projecting all training samples into the PCA subspace.\n2. Projecting the query image into the PCA subspace.\n3. Finding the nearest neighbour between the projected training images and the projected query image.", "#We have already projected our training images into pca subspace as yn.\ntrain_proj = yn\n\n#Projecting our test image into pca subspace\ntest_proj = dot(E.T, test_f)", "Shogun's way of doing things:\nShogun uses the CEuclideanDistance class to compute the familiar Euclidean distance for real valued features. 
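The nearest-neighbour step above can be sketched without Shogun, in plain NumPy. The projected coordinates below are hypothetical, constructed so that the test projection lies near one known training column:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical projected coordinates: 40 training faces in a 25-dim eigenspace.
train_proj = rng.normal(size=(25, 40))
# A test projection lying very close to training face number 7.
test_proj = train_proj[:, 7] + rng.normal(scale=0.01, size=25)

# Euclidean distance d(x, x') = sqrt(sum_i (x_i - x'_i)^2) to every column.
d = np.sqrt(((train_proj - test_proj[:, None]) ** 2).sum(axis=0))
min_distance_index = int(d.argmin())       # index of the nearest training face
```

Broadcasting `test_proj[:, None]` against the 25 x 40 matrix computes all 40 distances at once, replacing the per-column loop used with Shogun's distance object.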
It computes the square root of the sum of squared differences between the corresponding feature dimensions of two data points.\n$d(\\mathbf{x},\\mathbf{x}')=\\sqrt{\\sum\\limits_{i=1}^{n}|x_i-x'_i|^2}$", "#To get Euclidean Distance as the distance measure use EuclideanDistance.\nworkfeat = RealFeatures(mat(train_proj))\ntestfeat = RealFeatures(mat(test_proj).T)\nRaRb=EuclideanDistance(testfeat, workfeat)\n\n#The distance of one test image w.r.t. all the training images is stacked in matrix d.\nd=empty([n,1])\nfor i in range(n):\n d[i]= RaRb.distance(0,i)\n \n#The one having the minimum distance is found out\nmin_distance_index = d.argmin()\niden=array(Image.open(filenames[min_distance_index]))\ntitle('Identified Image')\nshowfig(iden)", "References:\n[1] David Barber. Bayesian Reasoning and Machine Learning.\n[2] Lindsay I Smith. A tutorial on Principal Component Analysis.\n[3] Philipp Wagner. Face Recognition with GNU Octave/MATLAB." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
xiongzhenggang/xiongzhenggang.github.io
AI/ML/week4反向传播实现.ipynb
gpl-3.0
[ "Implementing forward and backward propagation\n\nThe backpropagation algorithm\n\nEarlier, when computing the predictions of a neural network, we used forward propagation: starting from the first layer we compute layer by layer until we reach $h_{\\theta}\\left(x\\right)$ at the final layer.\nNow, in order to compute the partial derivatives of the cost function $\\frac{\\partial}{\\partial\\Theta^{(l)}_{ij}}J\\left(\\Theta\\right)$, we need a backpropagation algorithm: we first compute the error of the last layer and then work backwards, layer by layer, computing the error of each layer until the second layer.\nVisualizing the data\nUsing last week's data, we first run the network's forward propagation to compute the outputs, which provide the predictions needed by backpropagation", "\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib\nfrom scipy.io import loadmat\nfrom sklearn.preprocessing import OneHotEncoder\n\ndata = loadmat('../data/andrew_ml_ex33507/ex3data1.mat')\ndata\n\nX = data['X']\ny = data['y']\n\nX.shape, y.shape#check the dimensions\n\n# The inputs are the pixel values of 20*20 images, so there are 400 input-layer units, not counting the extra bias term that has to be added. The material already provides trained network parameters: 25 hidden units and 10 output units (10 outputs)\nweight = loadmat(\"../data/andrew_ml_ex33507/ex3weights.mat\")\ntheta1, theta2 = weight['Theta1'], weight['Theta2']\ntheta1.shape, theta2.shape\n\nsample_idx = np.random.choice(np.arange(data['X'].shape[0]), 100)\nsample_images = data['X'][sample_idx, :]\n#display the sampled images in grayscale\nfig, ax_array = plt.subplots(nrows=5, ncols=5, sharey=True, sharex=True, figsize=(8, 8))\nfor r in range(5):\n for c in range(5):\n ax_array[r, c].matshow(np.array(sample_images[5 * r + c].reshape((20, 20))).T,cmap=matplotlib.cm.binary)\n plt.xticks(np.array([]))\n plt.yticks(np.array([])) ", "Model representation\nBy default we design one input layer, one hidden layer, and one output layer\nForward propagation and the cost function\nIn logistic regression we had only one output variable, a scalar, and only one dependent variable $y$. In a neural network we can have many output variables: our $h_\\theta(x)$ is a vector of dimension $K$, and the dependent variables in our training set are vectors of the same dimension, so the cost function is somewhat more complex than for logistic regression:$\\newcommand{\\subk}[1]{ #1_k }$ $$h_\\theta\\left(x\\right)\\in \\mathbb{R}^{K}$$ $${\\left({h_\\theta}\\left(x\\right)\\right)}_{i}={i}^{th} \\text{output}$$\n$J(\\Theta) = -\\frac{1}{m} \\left[ \\sum\\limits_{i=1}^{m} \\sum\\limits_{k=1}^{K} {y_k}^{(i)} \\log \\subk{(h_\\Theta(x^{(i)}))} + \\left( 1 - y_k^{(i)} \\right) \\log \\left( 1- \\subk{\\left( h_\\Theta \\left( x^{(i)} \\right) \\right)} \\right) \\right] + \\frac{\\lambda}{2m} \\sum\\limits_{l=1}^{L-1} \\sum\\limits_{i=1}^{s_l} \\sum\\limits_{j=1}^{s_{l+1}} \\left( 
\\Theta_{ji}^{(l)} \\right)^2$", "def sigmoid(z):\n return 1 / (1 + np.exp(-z))\n\n# Following the propagation rule above: build the first layer, compute the second (hidden) layer values, and add the bias entries\ndef forward_propagate(X,theta1,theta2):\n m= X.shape[0]\n a1 = np.insert(X,0, values=np.ones(m), axis=1)\n Z2 = a1*theta1.T\n a2= np.insert(sigmoid(Z2),0, values=np.ones(m), axis=1)\n Z3= a2*theta2.T\n h= sigmoid(Z3)\n return a1,Z2,a2,Z3,h \n\n# Cost function (without the regularization, i.e. weight-decay, term). Y is 5000*10; we use matrix operations directly instead of accumulating in loops\ndef cost(X,Y,theta1,theta2):\n m = X.shape[0]\n X = np.matrix(X)\n Y = np.matrix(Y)\n _,_,_,_,h=forward_propagate(X,theta1,theta2)\n # multiply: element-wise product of equally sized matrices\n first = np.multiply(Y,np.log(h))\n second = np.multiply((1-Y),np.log((1-h)))\n J= np.sum(first+second)\n J = (-1/m)*J\n return J\n\n# One-hot encode the y labels. Initially y is a 5000*1 vector, which we encode as a matrix: e.g. an original y=2 becomes the row [0,1,0...0], and the last class becomes [0,0...0,1]\n# scikit-learn has a built-in encoder for this, which we use here.\nencoder = OneHotEncoder(sparse=False)\ny_onehot = encoder.fit_transform(y)\ny_onehot.shape\n\ny[0], y_onehot[0,:] # y[0] is the digit 0\n\n# initialization settings\ninput_size = 400\nnum_labels = 10\n\ncost(X, y_onehot,theta1, theta2)\n\n# add the regularization term\ndef cost_reg(X,Y,theta1,theta2,learning_rate):\n m = X.shape[0]\n X = np.matrix(X)\n Y = np.matrix(Y)\n _,_,_,_,h=forward_propagate(X,theta1,theta2)\n first = np.multiply(Y,np.log(h))\n second = np.multiply((1-Y),np.log((1-h)))\n J= np.sum(first+second)\n # the bias column is excluded from the regularization term\n J = (-1/m)*J + (float(learning_rate) / (2 * m))*(np.sum(np.power(theta1[:,1:],2))+np.sum(np.power(theta2[:,1:],2)))\n return J\n\n# theta1.shape,theta2.shape\ncost_reg(X, y_onehot,theta1, theta2,1)", "Backpropagation\nIn this part you implement the backpropagation algorithm to compute the gradient of the neural network cost function. Once we have the gradients, we can use a library optimizer to find the minimum of the cost function.", "# derivative of the sigmoid function\ndef sigmoid_gradient(z):\n return np.multiply(sigmoid(z) ,(1-sigmoid(z)))\n# sanity check\nsigmoid_gradient(0)", "Initializing the parameters\nSo far we have always initialized all parameters to 0. That works for logistic regression, but not for neural networks: if all initial parameters are 0, every activation unit in the second layer computes the same value. Likewise, initializing all parameters to any single non-zero number gives the same result.\nWe usually initialize the parameters to random values between -ε and ε. For example, to randomly initialize a 10×11 parameter matrix:\nTheta1 = rand(10, 11) * (2*eps) - eps", "# initialization settings\ninput_size = 400 
# number of input units\nhidden_size = 25 # number of hidden units\nnum_labels = 10 # number of output units\nepsilon = 0.001\ntheta01=np.random.rand(hidden_size,input_size+1) * 2*epsilon - epsilon # +1 adds the bias unit\ntheta02 =np.random.rand(num_labels,hidden_size+1)* 2*epsilon - epsilon\ntheta01.shape,theta02.shape", "Backpropagation\nThe procedure is: given the training set, first run forward propagation; then, for each node in each layer, compute an error term that measures how much that node \"contributed\" to the error of the final output. For each output node we can compute the difference between the output value and the target value directly, and define it as δ. For each hidden node, the error is computed from the current weights and the errors of layer (l+1).\n Steps:\n\nRandomly initialize the weights theta\nImplement forward propagation so that h(xi) can be obtained for any xi\nImplement the computation of Jθ", "# forward pass that returns the z and a values of every layer\ndef forward_propagateNEW(X,thetalist):\n m= X.shape[0]\n a = np.insert(X,0, values=np.ones(m), axis=1)\n alist=[a]\n zlist=[]\n for i in range(len(thetalist)):\n theta= thetalist[i]\n z = a * theta\n # a= np.insert(sigmoid(z),0, values=np.ones(m), axis=1)\n a=sigmoid(z)\n if(i<len(thetalist)-1):\n a= np.insert(a,0, values=np.ones(m), axis=1)\n zlist.append(z)\n alist.append(a)\n return zlist,alist\n\n# The accumulators Δ are stored in the list Delta (instead of separate delta1 and delta2)\ndef backpropRegSelf(input_size, hidden_size, num_labels, X, y, learning_rate,L=3): # L=3 layers here \n m = X.shape[0]\n X = np.matrix(X)\n y = np.matrix(y)\n # initialize the parameters\n theta1 = (np.random.random((input_size+1,hidden_size))- 0.5)* 0.24\n theta2 = (np.random.random((hidden_size+1,num_labels))- 0.5)* 0.24\n encoder = OneHotEncoder(sparse=False)\n y_onehot = encoder.fit_transform(y) # one-hot encode y \n # forward pass: values of every layer\n theta = [theta1, theta2]\n zlist,alist = forward_propagateNEW(X, theta)# returns a1, z2, a2, ...\n # initialize Delta\n Delta=[]\n for th in theta:\n Delta.append(np.zeros(th.shape))\n for i in range(m):\n # a and z were computed above\n for l in range(L,1,-1): # l = 3, 2 are the layer indices; the last layer is handled separately\n # last layer\n if l==L:\n delta=alist[-1][i,:]-y_onehot[i,:] # δ of the last layer\n Delta[l-2] = Delta[l-2] + alist[l-2][i,:].T * delta\n else:\n zl = zlist[l-2][i,:]\n zl = np.insert(zl, 0, values=np.ones(1)) # (1, 26) add the bias term\n # d2t = np.multiply((theta2.T * d3t.T).T, sigmoid_gradient(z2t)) # (1, 26)\n # delta1 = delta1 + (d2t[:,1:]).T * a1t\n delta = np.multiply(delta*theta[l-1].T, sigmoid_gradient(zl)) # \n # arrays are zero-indexed: Delta starts at layer 1, delta starts at layer 2 # (25, 401) # (10, 
26)\n Delta[l-2] = Delta[l-2] + alist[l-2][i,:].T * delta[:,1:] \n # add the gradient regularization term\n # divide the whole accumulator by m; regularize only the non-bias columns\n gradAll = None\n for j in range(len(Delta)):\n Delta[j] = Delta[j]/m\n Delta[j][:,1:] = Delta[j][:,1:] + (theta[j][:,1:] * learning_rate) / m\n if gradAll is None:\n gradAll = np.ravel(Delta[j])\n else:\n tmp=np.ravel(Delta[j]) \n gradAll = np.concatenate([gradAll,tmp])\n # Delta[:,:,1:] = Delta[:,:,1:] + (theta[:,:,1:] * learning_rate) / m\n return gradAll\n\n\ngrad2= backpropRegSelf(input_size, hidden_size, num_labels, X, y, 1)\nprint(grad2.shape)\n\n\ndef backpropReg(params, input_size, hidden_size, num_labels, X, y, learning_rate):\n m = X.shape[0]\n X = np.matrix(X)\n y = np.matrix(y)\n # reshape the parameter array into parameter matrices for each layer\n theta1 = np.matrix(np.reshape(params[:hidden_size * (input_size + 1)], (hidden_size, (input_size + 1))))\n theta2 = np.matrix(np.reshape(params[hidden_size * (input_size + 1):], (num_labels, (hidden_size + 1))))\n \n # run the feed-forward pass\n a1, z2, a2, z3, h = forward_propagate(X, theta1, theta2)\n \n # initializations\n J = 0\n delta1 = np.zeros(theta1.shape) # (25, 401)\n delta2 = np.zeros(theta2.shape) # (10, 26)\n \n # compute the cost\n for i in range(m):\n first_term = np.multiply(-y[i,:], np.log(h[i,:]))\n second_term = np.multiply((1 - y[i,:]), np.log(1 - h[i,:]))\n J += np.sum(first_term - second_term)\n \n J = J / m\n \n # add the cost regularization term\n J += (float(learning_rate) / (2 * m)) * (np.sum(np.power(theta1[:,1:], 2)) + np.sum(np.power(theta2[:,1:], 2)))\n \n # perform backpropagation\n for t in range(m):\n a1t = a1[t,:] # (1, 401)\n z2t = z2[t,:] # (1, 25)\n a2t = a2[t,:] # (1, 26)\n ht = h[t,:] # (1, 10)\n yt = y[t,:] # (1, 10)\n \n d3t = ht - yt # (1, 10)\n \n z2t = np.insert(z2t, 0, values=np.ones(1)) # (1, 26)\n d2t = np.multiply((theta2.T * d3t.T).T, sigmoid_gradient(z2t)) # (1, 26)\n \n delta1 = delta1 + (d2t[:,1:]).T * a1t\n delta2 = delta2 + d3t.T * a2t\n \n delta1 = delta1 / m\n delta2 
= delta2 / m\n \n # add the gradient regularization term\n delta1[:,1:] = delta1[:,1:] + (theta1[:,1:] * learning_rate) / m\n delta2[:,1:] = delta2[:,1:] + (theta2[:,1:] * learning_rate) / m\n \n # unravel the gradient matrices into a single array\n grad = np.concatenate((np.ravel(delta1), np.ravel(delta2)))\n \n return J, grad\n\n# np.random.random(size) returns `size` random floats in [0, 1)\nparams = (np.random.random(size=hidden_size * (input_size + 1) + num_labels * (hidden_size + 1)) - 0.5) * 0.24\nj,grad = backpropReg(params, input_size, hidden_size, num_labels, X, y_onehot, 1) # backpropReg expects one-hot labels\nprint(j,grad.shape)\n# j2,grad2= backpropRegSelf(input_size, hidden_size, num_labels, X, y, 1)\n# print(j2,grad2[0:10])", "Gradient checking\nTo estimate a gradient numerically, we pick two points very close together on the cost function along the tangent direction and use the slope between them. That is, for a particular $\\theta$ we evaluate the cost at $\\theta-\\varepsilon$ and at $\\theta+\\varepsilon$ (where $\\varepsilon$ is a very small value, typically 0.001), and then use $\\left(J(\\theta+\\varepsilon)-J(\\theta-\\varepsilon)\\right)/(2\\varepsilon)$ to estimate the gradient at $\\theta$.", "# J(θ)\n# input_size = 400 # number of input units\n# hidden_size = 25 # number of hidden units\n# num_labels = 10 # number of output units\ndef jcost(X, y,input_size, hidden_size, output_size,theta):\n m = X.shape[0]\n X = np.matrix(X)\n y = np.matrix(y)\n theta1 = np.reshape(theta[0:hidden_size*(input_size+1)],(hidden_size,input_size+1))#(25,401)\n theta2 = np.reshape(theta[hidden_size*(input_size+1):],(output_size,hidden_size+1))#(10,26)\n _,_,_,_,h=forward_propagate(X,theta1,theta2)\n # np.multiply is element-wise multiplication of same-sized matrices\n first = np.multiply(y,np.log(h))\n second = np.multiply((1-y),np.log((1-h)))\n J= np.sum(first+second)\n J = (-1/m)*J\n return J\n\n\ndef check(X,y,theta1,theta2,eps):\n theta = np.concatenate((np.ravel(theta1), np.ravel(theta2)))\n gradapprox=np.zeros(len(theta))\n for i in range(len(theta)):\n thetaplus = theta.copy() # copy, not an alias, so each perturbation stays local\n thetaplus[i] = thetaplus[i] + eps\n thetaminus = theta.copy()\n thetaminus[i] = thetaminus[i] - eps\n gradapprox[i] = (jcost(X,y,input_size,hidden_size,num_labels,thetaplus) - jcost(X,y,input_size,hidden_size,num_labels,thetaminus)) / (2 * eps)\n return gradapprox\n\n# theta01.shape , theta02.shape \n# 
this is slow to compute\ngradapprox = check(X,y_onehot,theta1,theta2,0.001)\nnumerator = np.linalg.norm(grad2-gradapprox, ord=2) # Step 1'\ndenominator = np.linalg.norm(grad2, ord=2) + np.linalg.norm(gradapprox, ord=2) # Step 2'\ndifference = numerator / denominator\nprint(difference)\n\n# use an optimization library to find the optimal parameters\nfrom scipy.optimize import minimize\n# opt.fmin_tnc(func=cost, x0=theta, fprime=gradient, args=(X, y))\nlearning_rate = 1 # regularization strength passed to backpropReg\nfmin = minimize(fun=backpropReg, x0=(params), args=(input_size, hidden_size, num_labels, X, y_onehot, learning_rate), \n method='TNC', jac=True, options={'maxiter': 250})\nfmin\n\nX = np.matrix(X)\nthetafinal1 = np.matrix(np.reshape(fmin.x[:hidden_size * (input_size + 1)], (hidden_size, (input_size + 1))))\nthetafinal2 = np.matrix(np.reshape(fmin.x[hidden_size * (input_size + 1):], (num_labels, (hidden_size + 1))))\n\nprint(thetafinal1[0,1],grad2[1])\n\n# compute predictions using the optimized θ\na1, z2, a2, z3, h = forward_propagate(X, thetafinal1, thetafinal2 )\ny_pred = np.array(np.argmax(h, axis=1) + 1)\ny_pred\n\n# Finally, compute the accuracy to see how well the trained network performs.\n# compare the predictions with the actual values\nfrom sklearn.metrics import classification_report # evaluation report\nprint(classification_report(y, y_pred))\n\nhidden_layer = thetafinal1[:, 1:] \nhidden_layer.shape\n\nfig, ax_array = plt.subplots(nrows=5, ncols=5, sharey=True, sharex=True, figsize=(12, 12))\nfor r in range(5):\n for c in range(5):\n ax_array[r, c].matshow(np.array(hidden_layer[5 * r + c].reshape((20, 20))),cmap=matplotlib.cm.binary)\n plt.xticks(np.array([]))\n plt.yticks(np.array([])) \n" ]
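The two-sided difference check used above can also be exercised end-to-end on a tiny model. This is a minimal standalone sketch: the single logistic unit, its cost, and the random data are illustrative stand-ins (not the exercise's network), chosen so that every parameter can be checked in milliseconds.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Cross-entropy cost of a single logistic unit -- a stand-in model,
# small enough to gradient-check every parameter quickly.
def cost(theta, x, y):
    h = sigmoid(x @ theta)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

# Analytic gradient of the mean cross-entropy: X^T (h - y) / m.
def analytic_grad(theta, x, y):
    h = sigmoid(x @ theta)
    return x.T @ (h - y) / len(y)

# Numeric gradient via (J(theta+eps) - J(theta-eps)) / (2*eps).
def numeric_grad(theta, x, y, eps=1e-4):
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()   # copies, not aliases
        tp[i] += eps
        tm[i] -= eps
        grad[i] = (cost(tp, x, y) - cost(tm, x, y)) / (2 * eps)
    return grad

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 3))
y = (rng.random(20) > 0.5).astype(float)
theta = rng.normal(size=3)

ga, gn = analytic_grad(theta, x, y), numeric_grad(theta, x, y)
# Same relative-difference measure as in the notebook above.
diff = np.linalg.norm(ga - gn) / (np.linalg.norm(ga) + np.linalg.norm(gn))
print(diff)
```

A relative difference many orders of magnitude below 1 indicates the analytic gradient is consistent with the numeric estimate; the same recipe applies unchanged to the unrolled network gradients.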
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
eigendreams/TensorFlow-Tutorials
05_Ensemble_Learning.ipynb
mit
[ "TensorFlow Tutorial #05\nEnsemble Learning\nby Magnus Erik Hvass Pedersen\n/ GitHub / Videos on YouTube\nIntroduction\nThis tutorial shows how to use a so-called ensemble of convolutional neural networks. Instead of using a single neural network, we use several neural networks and average their outputs.\nThis is used on the MNIST data-set for recognizing hand-written digits. The ensemble improves the classification accuracy slightly on the test-set, but the difference is so small that it is possibly random. Furthermore, the ensemble mis-classifies some images that are correctly classified by some of the individual networks.\nThis tutorial builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text here is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials.\nFlowchart\nThe following chart shows roughly how the data flows in a single Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. 
See Tutorial #02 for a more detailed description of this network and convolution in general.\nThis tutorial implements an ensemble of 5 such neural networks, where the network structure is the same but the weights and other variables are different for each network.", "from IPython.display import Image\nImage('images/02_network_flowchart.png')", "Imports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport numpy as np\nfrom sklearn.metrics import confusion_matrix\nimport time\nfrom datetime import timedelta\nimport math\nimport os\n\n# Use PrettyTensor to simplify Neural Network construction.\nimport prettytensor as pt", "This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:", "tf.__version__", "Load Data\nThe MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.", "from tensorflow.examples.tutorials.mnist import input_data\ndata = input_data.read_data_sets('data/MNIST/', one_hot=True)", "The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets, but we will make random training-sets further below.", "print(\"Size of:\")\nprint(\"- Training-set:\\t\\t{}\".format(len(data.train.labels)))\nprint(\"- Test-set:\\t\\t{}\".format(len(data.test.labels)))\nprint(\"- Validation-set:\\t{}\".format(len(data.validation.labels)))", "Class numbers\nThe class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. 
We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now.", "data.test.cls = np.argmax(data.test.labels, axis=1)\ndata.validation.cls = np.argmax(data.validation.labels, axis=1)", "Helper-function for creating random training-sets\nWe will train 5 neural networks on different training-sets that are selected at random. First we combine the original training- and validation-sets into one big set. This is done for both the images and the labels.", "combined_images = np.concatenate([data.train.images, data.validation.images], axis=0)\ncombined_labels = np.concatenate([data.train.labels, data.validation.labels], axis=0)", "Check that the shape of the combined arrays is correct.", "print(combined_images.shape)\nprint(combined_labels.shape)", "Size of the combined data-set.", "combined_size = len(combined_images)\ncombined_size", "Define the size of the training-set used for each neural network. You can try and change this.", "train_size = int(0.8 * combined_size)\ntrain_size", "We do not use a validation-set during training, but this would be the size.", "validation_size = combined_size - train_size\nvalidation_size", "Helper-function for splitting the combined data-set into a random training- and validation-set.", "def random_training_set():\n # Create a randomized index into the full / combined training-set.\n idx = np.random.permutation(combined_size)\n\n # Split the random index into training- and validation-sets.\n idx_train = idx[0:train_size]\n idx_validation = idx[train_size:]\n\n # Select the images and labels for the new training-set.\n x_train = combined_images[idx_train, :]\n y_train = combined_labels[idx_train, :]\n\n # Select the images and labels for the new validation-set.\n x_validation = combined_images[idx_validation, :]\n y_validation = combined_labels[idx_validation, :]\n\n # Return the new training- and validation-sets.\n return x_train, y_train, x_validation, y_validation", "Data Dimensions\nThe data 
dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.", "# We know that MNIST images are 28 pixels in each dimension.\nimg_size = 28\n\n# Images are stored in one-dimensional arrays of this length.\nimg_size_flat = img_size * img_size\n\n# Tuple with height and width of images used to reshape arrays.\nimg_shape = (img_size, img_size)\n\n# Number of colour channels for the images: 1 channel for gray-scale.\nnum_channels = 1\n\n# Number of classes, one class for each of 10 digits.\nnum_classes = 10", "Helper-function for plotting images\nFunction used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.", "def plot_images(images, # Images to plot, 2-d array.\n cls_true, # True class-no for images.\n ensemble_cls_pred=None, # Ensemble predicted class-no.\n best_cls_pred=None): # Best-net predicted class-no.\n\n assert len(images) == len(cls_true)\n \n # Create figure with 3x3 sub-plots.\n fig, axes = plt.subplots(3, 3)\n\n # Adjust vertical spacing if we need to print ensemble and best-net.\n if ensemble_cls_pred is None:\n hspace = 0.3\n else:\n hspace = 1.0\n fig.subplots_adjust(hspace=hspace, wspace=0.3)\n\n # For each of the sub-plots.\n for i, ax in enumerate(axes.flat):\n\n # There may not be enough images for all sub-plots.\n if i < len(images):\n # Plot image.\n ax.imshow(images[i].reshape(img_shape), cmap='binary')\n\n # Show true and predicted classes.\n if ensemble_cls_pred is None:\n xlabel = \"True: {0}\".format(cls_true[i])\n else:\n msg = \"True: {0}\\nEnsemble: {1}\\nBest Net: {2}\"\n xlabel = msg.format(cls_true[i],\n ensemble_cls_pred[i],\n best_cls_pred[i])\n\n # Show the classes as the label on the x-axis.\n ax.set_xlabel(xlabel)\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n 
plt.show()", "Plot a few images to see if data is correct", "# Get the first images from the test-set.\nimages = data.test.images[0:9]\n\n# Get the true classes for those images.\ncls_true = data.test.cls[0:9]\n\n# Plot the images and labels using our helper-function above.\nplot_images(images=images, cls_true=cls_true)", "TensorFlow Graph\nThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.\nTensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.\nTensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.\nA TensorFlow graph consists of the following parts which will be detailed below:\n\nPlaceholder variables used for inputting data to the graph.\nVariables that are going to be optimized so as to make the convolutional network perform better.\nThe mathematical formulas for the neural network.\nA loss measure that can be used to guide the optimization of the variables.\nAn optimization method which updates the variables.\n\nIn addition, the TensorFlow graph may also contain various debugging statements e.g. 
for logging data to be displayed using TensorBoard, which is not covered in this tutorial.\nPlaceholder variables\nPlaceholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.\nFirst we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.", "x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')", "The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:", "x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])", "Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.", "y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')", "We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. 
Note that this is a TensorFlow operator so nothing is calculated at this point.", "y_true_cls = tf.argmax(y_true, dimension=1)", "Neural Network\nThis section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial #03.\nThe basic idea is to wrap the input tensor x_image in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc.", "x_pretty = pt.wrap(x_image)", "Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.\nNote that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers.", "with pt.defaults_scope(activation_fn=tf.nn.relu):\n y_pred, loss = x_pretty.\\\n conv2d(kernel=5, depth=16, name='layer_conv1').\\\n max_pool(kernel=2, stride=2).\\\n conv2d(kernel=5, depth=36, name='layer_conv2').\\\n max_pool(kernel=2, stride=2).\\\n flatten().\\\n fully_connected(size=128, name='layer_fc1').\\\n softmax_classifier(class_count=10, labels=y_true)", "Optimization Method\nPretty Tensor gave us the predicted class-label (y_pred) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images.\nIt is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the AdamOptimizer to minimize the loss.\nNote that optimization is not performed at this point. 
In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.", "optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)", "Performance Measures\nWe need a few more performance measures to display the progress to the user.\nFirst we calculate the predicted class number from the output of the neural network y_pred, which is a vector with 10 elements. The class number is the index of the largest element.", "y_pred_cls = tf.argmax(y_pred, dimension=1)", "Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.", "correct_prediction = tf.equal(y_pred_cls, y_true_cls)", "The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.", "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))", "Saver\nIn order to save the variables of the neural network, we now create a Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. 
Nothing is actually saved at this point, which will be done further below.\nNote that if you have more than 100 neural networks in the ensemble then you must increase max_to_keep accordingly.", "saver = tf.train.Saver(max_to_keep=100)", "This is the directory used for saving and retrieving the data.", "save_dir = 'checkpoints/'", "Create the directory if it does not exist.", "if not os.path.exists(save_dir):\n os.makedirs(save_dir)", "This function returns the save-path for the data-file with the given network number.", "def get_save_path(net_number):\n return save_dir + 'network' + str(net_number)", "TensorFlow Run\nCreate TensorFlow session\nOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.", "session = tf.Session()", "Initialize variables\nThe variables for weights and biases must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it several times below.", "def init_variables():\n session.run(tf.initialize_all_variables())", "Helper-function to create a random training batch.\nThere are thousands of images in the training-set. It takes a long time to calculate the gradient of the model using all these images. 
We therefore only use a small batch of images in each iteration of the optimizer.\nIf your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.", "train_batch_size = 64", "Function for selecting a random training-batch of the given size.", "def random_batch(x_train, y_train):\n # Total number of images in the training-set.\n num_images = len(x_train)\n\n # Create a random index into the training-set.\n idx = np.random.choice(num_images,\n size=train_batch_size,\n replace=False)\n\n # Use the random index to select random images and labels.\n x_batch = x_train[idx, :] # Images.\n y_batch = y_train[idx, :] # Labels.\n\n # Return the batch.\n return x_batch, y_batch", "Helper-function to perform optimization iterations\nFunction for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. 
The progress is printed every 100 iterations.", "def optimize(num_iterations, x_train, y_train):\n # Start-time used for printing time-usage below.\n start_time = time.time()\n\n for i in range(num_iterations):\n\n # Get a batch of training examples.\n # x_batch now holds a batch of images and\n # y_true_batch are the true labels for those images.\n x_batch, y_true_batch = random_batch(x_train, y_train)\n\n # Put the batch into a dict with the proper names\n # for placeholder variables in the TensorFlow graph.\n feed_dict_train = {x: x_batch,\n y_true: y_true_batch}\n\n # Run the optimizer using this batch of training data.\n # TensorFlow assigns the variables in feed_dict_train\n # to the placeholder variables and then runs the optimizer.\n session.run(optimizer, feed_dict=feed_dict_train)\n\n # Print status every 100 iterations and after the last iteration.\n if (i % 100 == 0) or (i == num_iterations - 1):\n\n # Calculate the accuracy on the training-batch.\n acc = session.run(accuracy, feed_dict=feed_dict_train)\n \n # Status-message for printing.\n msg = \"Optimization Iteration: {0:>6}, Training Batch Accuracy: {1:>6.1%}\"\n\n # Print it.\n print(msg.format(i + 1, acc))\n\n # Ending time.\n end_time = time.time()\n\n # Difference between start and end-times.\n time_dif = end_time - start_time\n\n # Print the time-usage.\n print(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))", "Create ensemble of neural networks\nNumber of neural networks in the ensemble.", "num_networks = 5", "Number of optimization iterations for each neural network.", "num_iterations = 10000", "Create the ensemble of neural networks. All networks use the same TensorFlow graph that was defined above. For each neural network the TensorFlow weights and variables are initialized to random values and then optimized. 
The variables are then saved to disk so they can be reloaded later.\nYou may want to skip this computation if you just want to re-run the Notebook with different analysis of the results.", "if True:\n # For each of the neural networks.\n for i in range(num_networks):\n print(\"Neural network: {0}\".format(i))\n\n # Create a random training-set. Ignore the validation-set.\n x_train, y_train, _, _ = random_training_set()\n\n # Initialize the variables of the TensorFlow graph.\n session.run(tf.initialize_all_variables())\n\n # Optimize the variables using this training-set.\n optimize(num_iterations=num_iterations,\n x_train=x_train,\n y_train=y_train)\n\n # Save the optimized variables to disk.\n saver.save(sess=session, save_path=get_save_path(i))\n\n # Print newline.\n print()", "Helper-functions for calculating and predicting classifications\nThis function calculates the predicted labels of images, that is, for each image it calculates a vector of length 10 indicating which of the 10 classes the image is.\nThe calculation is done in batches because it might use too much RAM otherwise. 
If your computer crashes then you can try and lower the batch-size.", "# Split the data-set in batches of this size to limit RAM usage.\nbatch_size = 256\n\ndef predict_labels(images):\n # Number of images.\n num_images = len(images)\n\n # Allocate an array for the predicted labels which\n # will be calculated in batches and filled into this array.\n pred_labels = np.zeros(shape=(num_images, num_classes),\n dtype=np.float)\n\n # Now calculate the predicted labels for the batches.\n # We will just iterate through all the batches.\n # There might be a more clever and Pythonic way of doing this.\n\n # The starting index for the next batch is denoted i.\n i = 0\n\n while i < num_images:\n # The ending index for the next batch is denoted j.\n j = min(i + batch_size, num_images)\n\n # Create a feed-dict with the images between index i and j.\n feed_dict = {x: images[i:j, :]}\n\n # Calculate the predicted labels using TensorFlow.\n pred_labels[i:j] = session.run(y_pred, feed_dict=feed_dict)\n\n # Set the start-index for the next batch to the\n # end-index of the current batch.\n i = j\n\n return pred_labels", "Calculate a boolean array whether the predicted classes for the images are correct.", "def correct_prediction(images, labels, cls_true):\n # Calculate the predicted labels.\n pred_labels = predict_labels(images=images)\n\n # Calculate the predicted class-number for each image.\n cls_pred = np.argmax(pred_labels, axis=1)\n\n # Create a boolean array whether each image is correctly classified.\n correct = (cls_true == cls_pred)\n\n return correct", "Calculate a boolean array whether the images in the test-set are classified correctly.", "def test_correct():\n return correct_prediction(images = data.test.images,\n labels = data.test.labels,\n cls_true = data.test.cls)", "Calculate a boolean array whether the images in the validation-set are classified correctly.", "def validation_correct():\n return correct_prediction(images = data.validation.images,\n labels = 
data.validation.labels,\n cls_true = data.validation.cls)", "Helper-functions for calculating the classification accuracy\nThis function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. classification_accuracy([True, True, False, False, False]) = 2/5 = 0.4", "def classification_accuracy(correct):\n # When averaging a boolean array, False means 0 and True means 1.\n # So we are calculating: number of True / len(correct) which is\n # the same as the classification accuracy.\n return correct.mean()", "Calculate the classification accuracy on the test-set.", "def test_accuracy():\n # Get the array of booleans whether the classifications are correct\n # for the test-set.\n correct = test_correct()\n \n # Calculate the classification accuracy and return it.\n return classification_accuracy(correct)", "Calculate the classification accuracy on the original validation-set.", "def validation_accuracy():\n # Get the array of booleans whether the classifications are correct\n # for the validation-set.\n correct = validation_correct()\n \n # Calculate the classification accuracy and return it.\n return classification_accuracy(correct)", "Results and analysis\nFunction for calculating the predicted labels for all the neural networks in the ensemble. 
The labels are combined further below.", "def ensemble_predictions():\n # Empty list of predicted labels for each of the neural networks.\n pred_labels = []\n\n # Classification accuracy on the test-set for each network.\n test_accuracies = []\n\n # Classification accuracy on the validation-set for each network.\n val_accuracies = []\n\n # For each neural network in the ensemble.\n for i in range(num_networks):\n # Reload the variables into the TensorFlow graph.\n saver.restore(sess=session, save_path=get_save_path(i))\n\n # Calculate the classification accuracy on the test-set.\n test_acc = test_accuracy()\n\n # Append the classification accuracy to the list.\n test_accuracies.append(test_acc)\n\n # Calculate the classification accuracy on the validation-set.\n val_acc = validation_accuracy()\n\n # Append the classification accuracy to the list.\n val_accuracies.append(val_acc)\n\n # Print status message.\n msg = \"Network: {0}, Accuracy on Validation-Set: {1:.4f}, Test-Set: {2:.4f}\"\n print(msg.format(i, val_acc, test_acc))\n\n # Calculate the predicted labels for the images in the test-set.\n # This is already calculated in test_accuracy() above but\n # it is re-calculated here to keep the code a bit simpler.\n pred = predict_labels(images=data.test.images)\n\n # Append the predicted labels to the list.\n pred_labels.append(pred)\n \n return np.array(pred_labels), \\\n np.array(test_accuracies), \\\n np.array(val_accuracies)\n\npred_labels, test_accuracies, val_accuracies = ensemble_predictions()", "Summarize the classification accuracies on the test-set for the neural networks in the ensemble.", "print(\"Mean test-set accuracy: {0:.4f}\".format(np.mean(test_accuracies)))\nprint(\"Min test-set accuracy: {0:.4f}\".format(np.min(test_accuracies)))\nprint(\"Max test-set accuracy: {0:.4f}\".format(np.max(test_accuracies)))", "The predicted labels of the ensemble are stored in a 3-dim array: the first dim is the network-number, the second dim is the image-number, and the third dim 
is the classification vector.", "pred_labels.shape", "Ensemble predictions\nThere are different ways to calculate the predicted labels for the ensemble. One way is to calculate the predicted class-number for each neural network, and then select the class-number with most votes. But this requires a large number of neural networks relative to the number of classes.\nThe method used here is instead to take the average of the predicted labels for all the networks in the ensemble. This is simple to calculate and does not require a large number of networks in the ensemble.", "ensemble_pred_labels = np.mean(pred_labels, axis=0)\nensemble_pred_labels.shape", "The ensemble's predicted class number is then the index of the highest number in the label, which is calculated using argmax as usual.", "ensemble_cls_pred = np.argmax(ensemble_pred_labels, axis=1)\nensemble_cls_pred.shape", "Boolean array whether each of the images in the test-set was correctly classified by the ensemble of neural networks.", "ensemble_correct = (ensemble_cls_pred == data.test.cls)", "Negate the boolean array so we can use it to lookup incorrectly classified images.", "ensemble_incorrect = np.logical_not(ensemble_correct)", "Best neural network\nNow we find the single neural network that performed best on the test-set.\nFirst list the classification accuracies on the test-set for all the neural networks in the ensemble.", "test_accuracies", "The index of the neural network with the highest classification accuracy.", "best_net = np.argmax(test_accuracies)\nbest_net", "The best neural network's classification accuracy on the test-set.", "test_accuracies[best_net]", "Predicted labels of the best neural network.", "best_net_pred_labels = pred_labels[best_net, :, :]", "The predicted class-number.", "best_net_cls_pred = np.argmax(best_net_pred_labels, axis=1)", "Boolean array whether the best neural network classified each image in the test-set correctly.", "best_net_correct = (best_net_cls_pred == 
data.test.cls)", "Boolean array whether each image is incorrectly classified.", "best_net_incorrect = np.logical_not(best_net_correct)", "Comparison of ensemble vs. the best single network\nThe number of images in the test-set that were correctly classified by the ensemble.", "np.sum(ensemble_correct)", "The number of images in the test-set that were correctly classified by the best neural network.", "np.sum(best_net_correct)", "Boolean array whether each image in the test-set was correctly classified by the ensemble and incorrectly classified by the best neural network.", "ensemble_better = np.logical_and(best_net_incorrect,\n ensemble_correct)", "Number of images in the test-set where the ensemble was better than the best single network:", "ensemble_better.sum()", "Boolean array whether each image in the test-set was correctly classified by the best single network and incorrectly classified by the ensemble.", "best_net_better = np.logical_and(best_net_correct,\n ensemble_incorrect)", "Number of images in the test-set where the best single network was better than the ensemble.", "best_net_better.sum()", "Helper-functions for plotting and printing comparisons\nFunction for plotting images from the test-set and their true and predicted class-numbers.", "def plot_images_comparison(idx):\n plot_images(images=data.test.images[idx, :],\n cls_true=data.test.cls[idx],\n ensemble_cls_pred=ensemble_cls_pred[idx],\n best_cls_pred=best_net_cls_pred[idx])", "Function for printing the predicted labels.", "def print_labels(labels, idx, num=1):\n # Select the relevant labels based on idx.\n labels = labels[idx, :]\n\n # Select the first num labels.\n labels = labels[0:num, :]\n \n # Round numbers to 2 decimal points so they are easier to read.\n labels_rounded = np.round(labels, 2)\n\n # Print the rounded labels.\n print(labels_rounded)", "Function for printing the predicted labels for the ensemble of neural networks.", "def print_labels_ensemble(idx, **kwargs):\n 
print_labels(labels=ensemble_pred_labels, idx=idx, **kwargs)", "Function for printing the predicted labels for the best single network.", "def print_labels_best_net(idx, **kwargs):\n print_labels(labels=best_net_pred_labels, idx=idx, **kwargs)", "Function for printing the predicted labels of all the neural networks in the ensemble. This only prints the labels for the first image.", "def print_labels_all_nets(idx):\n for i in range(num_networks):\n print_labels(labels=pred_labels[i, :, :], idx=idx, num=1)", "Examples: Ensemble is better than the best network\nPlot examples of images that were correctly classified by the ensemble and incorrectly classified by the best single network.", "plot_images_comparison(idx=ensemble_better)", "The ensemble's predicted labels for the first of these images (top left image):", "print_labels_ensemble(idx=ensemble_better, num=1)", "The best network's predicted labels for the first of these images:", "print_labels_best_net(idx=ensemble_better, num=1)", "The predicted labels of all the networks in the ensemble, for the first of these images:", "print_labels_all_nets(idx=ensemble_better)", "Examples: Best network is better than ensemble\nNow plot examples of images that were incorrectly classified by the ensemble but correctly classified by the best single network.", "plot_images_comparison(idx=best_net_better)", "The ensemble's predicted labels for the first of these images (top left image):", "print_labels_ensemble(idx=best_net_better, num=1)", "The best single network's predicted labels for the first of these images:", "print_labels_best_net(idx=best_net_better, num=1)", "The predicted labels of all the networks in the ensemble, for the first of these images:", "print_labels_all_nets(idx=best_net_better)", "Close TensorFlow Session\nWe are now done using TensorFlow, so we close the session to release its resources.", "# This has been commented out in case you want to modify and experiment\n# with the Notebook without having to 
restart it.\n# session.close()", "Conclusion\nThis tutorial created an ensemble of 5 convolutional neural networks for classifying hand-written digits in the MNIST data-set. The ensemble worked by averaging the predicted class-labels of the 5 individual neural networks. This resulted in slightly improved classification accuracy on the test-set, with the ensemble having an accuracy of 99.1% compared to 98.9% for the best individual network.\nHowever, the ensemble did not always perform better than the individual neural networks, which sometimes classified images correctly while the ensemble misclassified those images. This suggests that the effect of using an ensemble of neural networks is somewhat random and may not provide a reliable way of improving the performance over a single neural network.\nThe form of ensemble learning used here is called bagging (or Bootstrap Aggregating), which is mainly useful for avoiding overfitting and may not be necessary for this particular neural network and data-set. So it is still possible that ensemble learning may work in other settings.\nTechnical Note\nThis implementation of ensemble learning used the TensorFlow Saver()-object to save and reload the variables of the neural network. But this functionality was really designed for another purpose and becomes very awkward to use for ensemble learning with different types of neural networks, or if you want to load multiple neural networks at the same time. There's an add-on package for TensorFlow called sk-flow which makes this much easier, but it is still in the early stages of development as of August 2016.\nExercises\nThese are a few suggestions for exercises that may help improve your skills with TensorFlow. 
It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.\nYou may want to backup this Notebook before making any changes.\n\nChange different aspects of this program to see how it affects the performance:\nUse more neural networks in the ensemble.\nChange the size of the training-sets.\nChange the number of optimization iterations, try both more and less.\n\n\nExplain to a friend how the program works.\nDo you think Ensemble Learning is worth more research effort, or should you rather focus on improving a single neural network?\n\nLicense (MIT)\nCopyright (c) 2016 by Magnus Erik Hvass Pedersen\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE." ]
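The averaging scheme used by the ensemble notebook above can be sketched in isolation. The array shapes mirror pred_labels (network × image × class), but the values below are random placeholders rather than real network outputs:

```python
import numpy as np

# Hypothetical predicted labels for 3 networks, 4 images and 10 classes,
# laid out like pred_labels above: [network, image, class].
rng = np.random.RandomState(0)
pred_labels = rng.rand(3, 4, 10)

# Ensemble prediction: average the labels over the networks (axis 0) ...
ensemble_pred_labels = np.mean(pred_labels, axis=0)

# ... and pick the class with the highest averaged label for each image.
ensemble_cls_pred = np.argmax(ensemble_pred_labels, axis=1)
```

Averaging works even with very few networks because it uses each network's full label vector, whereas majority voting only uses the winning class of each network and therefore needs many voters to be meaningful.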
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
karlstroetmann/Artificial-Intelligence
Python/6 Classification/Support-Vector-Machines.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open (\"../style.css\", \"r\") as file:\n css = file.read()\nHTML(css)", "Support Vector Machines\nThis notebook discusses <em style=\"color:blue;\">support vector machines</em>. In order to understand why we need support vector machines (abbreviated as SVMs), we \nwill first demonstrate that classifiers constructed with <em style=\"color:blue;\">logistic regression</em> sometimes behave unintuitively.\nThe Problem with Logistic Regression\nIn this section of the notebook we discuss an example that demonstrates that logistic regression is not necessarily the best classifier we can get.", "import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport sklearn.linear_model as lm", "We construct a small data set containing just three points.", "X = np.array([[1.00, 2.00],\n [2.00, 1.00],\n [3.50, 3.50]])\nY = np.array([0, \n 0, \n 1])", "To proceed, we will plot the data points using a scatter plot. Furthermore, we plot a green line that intuitively makes the best decision boundary.", "plt.figure(figsize=(12, 12))\nCorner = np.array([[0.0, 5.0], [5.0, 0.0]])\nX_pass = X[Y == 1]\nX_fail = X[Y == 0]\nsns.set(style='darkgrid')\nplt.title('A Simple Classification Problem')\nplt.axvline(x=0.0, c='k')\nplt.axhline(y=0.0, c='k')\nplt.xlabel('x1')\nplt.ylabel('x2')\nplt.xticks(np.arange(0.0, 5.1, step=0.5))\nplt.yticks(np.arange(0.0, 5.1, step=0.5))\nX1 = np.arange(0, 5.05, 0.05)\nX2 = 5 - X1\nplt.plot(X1, X2, color='green', linestyle='-')\nX1 = np.arange(0, 3.05, 0.05)\nX2 = 3 - X1\nplt.plot(X1, X2, color='cyan', linestyle=':')\nX1 = np.arange(2.0, 5.05, 0.05)\nX2 = 7 - X1\nplt.plot(X1, X2, color='cyan', linestyle=':')\nplt.scatter(Corner[:,0], Corner[:,1], color='white', marker='.')\nplt.scatter(X_pass[:,0], X_pass[:,1], color='b', marker='o') # class 1 is blue\nplt.scatter(X_fail[:,0], X_fail[:,1], color='r', marker='x') # class 2 is red
from the blue bullet at $(3.5, 3.5)$, then the decision boundary that would create the \nwidest margin between these points would be given by the green line. The road separating these points would have a width of $4/\\sqrt{2} = 2 \\cdot \\sqrt{2}$. \nLet us classify these data using logistic regression and see what we get. We will plot the <b style=\"color:blue;\">decision boundary</b>. If $\\vartheta_0$, $\\vartheta_1$, and $\\vartheta_2$ are the parameters of the logistic model, then the decision boundary is given by the linear equation\n$$ \\vartheta_0 + \\vartheta_1 \\cdot x_1 + \\vartheta_2 \\cdot x_2 = 0. $$\nThis can be rewritten as\n$$ x_2 = - \\frac{\\vartheta_0 + \\vartheta_1 \\cdot x_1}{\\vartheta_2}. $$\nThe function $\\texttt{plot_data_and_boundary}(X, Y, \\vartheta_0, \\vartheta_1, \\vartheta_2)$ takes the data $X$, their classes $Y$ and the parameters \n$\\vartheta_0$, $\\vartheta_1$, and $\\vartheta_2$ of the logistic model as inputs and plots the data and the decision boundary.", "def plot_data_and_boundary(X, Y, ϑ0, ϑ1, ϑ2):\n Corner = np.array([[0.0, 5.0], [5.0, 0.0]])\n X_pass = X[Y == 1]\n X_fail = X[Y == 0]\n plt.figure(figsize=(12, 12))\n sns.set(style='darkgrid')\n plt.title('A Simple Classification Problem')\n plt.axvline(x=0.0, c='k')\n plt.axhline(y=0.0, c='k')\n plt.xlabel('x1')\n plt.ylabel('x2')\n plt.xticks(np.arange(0.0, 5.1, step=0.5))\n plt.yticks(np.arange(0.0, 5.1, step=0.5))\n plt.scatter(Corner[:,0], Corner[:,1], color='white', marker='.')\n plt.scatter(X_pass[:,0], X_pass[:,1], color='blue' , marker='o') \n plt.scatter(X_fail[:,0], X_fail[:,1], color='red' , marker='x') \n a = max(- (ϑ0 + ϑ2 * 5)/ϑ1, 0.0)\n b = min(- ϑ0/ϑ1 , 5.0)\n a, b = min(a, b), max(a, b)\n X1 = np.arange(a-0.1, b+0.02, 0.05)\n X2 = -(ϑ0 + ϑ1 * X1)/ϑ2\n print('slope of decision boundary', -ϑ1/ϑ2)\n plt.plot(X1, X2, color='green')", "The function $\\texttt{train_and_plot}(X, Y)$ takes a design matrix $X$ and a vector $Y$ containing zeros and ones. 
It builds a regression model and plots the data together with the decision boundary.", "def train_and_plot(X, Y):\n M = lm.LogisticRegression(C=1, solver='lbfgs')\n M.fit(X, Y)\n ϑ0 = M.intercept_[0]\n ϑ1, ϑ2 = M.coef_[0]\n plot_data_and_boundary(X, Y, ϑ0, ϑ1, ϑ2)\n\ntrain_and_plot(X, Y)", "The decision boundary is closer to the blue data point than to the red data points. This is not optimal.\nThe function $\\texttt{gen_X_Y}(n)$ takes a natural number $n$ and generates additional data. The number $n$ is the number of blue data points.\nConcretely, it will add $n-1$ data points to the right of the blue dot shown above. This should not really change the decision boundary as the data \ndo not provide any new information. After all, these data are to the right of the first blue dot and hence should share the class of this data point.", "def gen_X_Y(n):\n X = np.array([[1.0, 2.0], [2.0, 1.0]] +\n [[3.5 + k*0.0015, 3.5] for k in range(n)])\n Y = np.array([0, 0] + [1] * n)\n return X, Y\n\nX, Y = gen_X_Y(1000)\ntrain_and_plot(X, Y)", "When we test logistic regression with this data set, we see that the slope of the decision boundary is much steeper now and the separation of the blue dots from the red crosses is far worse than it needs to be, had the optimal decision boundary been computed.\nLet us see how <em style=\"color:blue;\">support vector machines</em> deal with these data.", "import sklearn.svm as svm", "First, we construct a support vector machine with a linear kernel and next to no regularization and train it with the data.", "M = svm.SVC(kernel='linear', C=10000)\nM.fit(X, Y)\nM.score(X, Y)", "The following function is used for plotting.", "def plot_data_and_boundary(X, Y, M, title):\n Corner = np.array([[0.0, 5.0], [5.0, 0.0]])\n X0, X1 = X[:, 0], X[:, 1]\n XX, YY = np.meshgrid(np.arange(0, 5, 0.005), np.arange(0, 5, 0.005))\n Z = M.predict(np.c_[XX.ravel(), YY.ravel()])\n Z = Z.reshape(XX.shape)\n plt.figure(figsize=(10, 10))\n sns.set(style='darkgrid')\n 
plt.contour(XX, YY, Z)\n plt.scatter(Corner[:,0], Corner[:,1], color='black', marker='.')\n plt.scatter(X0, X1, c=Y, edgecolors='k')\n plt.xlim(XX.min(), XX.max())\n plt.ylim(YY.min(), YY.max())\n plt.xlabel('x1')\n plt.ylabel('x2')\n plt.xticks()\n plt.yticks()\n plt.title(title)\n\nplot_data_and_boundary(X, Y, M, 'some data')", "The decision boundary separates the data perfectly because it maximizes the distance of the data from the boundary.", "X = np.array([[1.00, 2.00],\n [2.00, 1.00],\n [3.50, 3.50]])\nY = np.array([0, \n 0, \n 1])\n\nplot_data_and_boundary(X, Y, M, 'three points')\n\nimport pandas as pd", "Let's load some strange data that I have found somewhere.", "DF = pd.read_csv('strange-data.csv')\nDF.head()\n\nX = np.array(DF[['x1', 'x2']])\nY = np.array(DF['y'])\nRed = X[Y == 1]\nBlue = X[Y == 0]\n\nM = svm.SVC(kernel='rbf', gamma=400.0, C=10000)\nM.fit(X, Y)\nM.score(X, Y)\n\nX0, X1 = X[:, 0], X[:, 1]\nXX, YY = np.meshgrid(np.arange(0.0, 1.1, 0.001), np.arange(0.3, 1.0, 0.001))\nZ = M.predict(np.c_[XX.ravel(), YY.ravel()])\nZ = Z.reshape(XX.shape)\nplt.figure(figsize=(12, 12))\nplt.contour(XX, YY, Z, colors='green')\nplt.scatter(Blue[:, 0], Blue[:, 1], color='blue')\nplt.scatter(Red [:, 0], Red [:, 1], color='red')\nplt.xlabel('x1')\nplt.ylabel('x2')\nplt.title('Strange Data')", "This example shows that support vector machines with Gaussian kernel can describe very complicated structures." ]
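Those "very complicated structures" are possible because the Gaussian (RBF) kernel $k(x, z) = \exp(-\gamma \cdot \lVert x - z\rVert^2)$ decays very quickly with distance: for a large $\gamma$ such as the 400 used above, even nearby points have almost no kernel similarity. A small plain-Python sketch (the point coordinates are made up for illustration):

```python
import math

def rbf_kernel(x, z, gamma):
    # Gaussian kernel: k(x, z) = exp(-gamma * ||x - z||^2).
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

same = rbf_kernel((0.5, 0.5), (0.5, 0.5), gamma=400.0)  # identical points
near = rbf_kernel((0.5, 0.5), (0.6, 0.5), gamma=400.0)  # distance 0.1
```

With $\gamma$ this large, the decision function is dominated by the support vectors closest to the query point, which is why the boundary can wrap tightly around individual samples — and also why such models overfit easily.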
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
do-mpc/do-mpc
documentation/source/example_gallery/industrial_poly.ipynb
lgpl-3.0
[ "Industrial polymerization reactor\nIn this Jupyter Notebook we illustrate the example industrial_poly.\nOpen an interactive online Jupyter Notebook with this content on Binder:\n\nThe example consists of the three modules template_model.py, which describes the system model, template_mpc.py, which defines the settings for the control and template_simulator.py, which sets the parameters for the simulator.\nThe modules are used in main.py for the closed-loop execution of the controller.\nIn the following the different parts are presented. But first, we start by importing basic modules and do-mpc.", "import numpy as np\nimport matplotlib.pyplot as plt\nimport sys\nfrom casadi import *\n\n# Add do_mpc to path. This is not necessary if it was installed via pip\nsys.path.append('../../../')\n\n# Import do_mpc package:\nimport do_mpc", "Model\nIn the following we will present the configuration, setup and connection between these blocks, starting with the model.\nThe considered model of the industrial reactor is continuous and has 10 states and 3 control inputs.\nThe model is initiated by:", "model_type = 'continuous' # either 'discrete' or 'continuous'\nmodel = do_mpc.model.Model(model_type)", "System description\nThe system consists of a reactor into which monomer is fed.\nThe monomer turns into a polymer via a very exothermic chemical reaction.\nThe reactor is equipped with a jacket and with an External Heat Exchanger (EHE) that can both be used to control the temperature inside the reactor.\nA schematic representation of the system is presented below:\n\nThe process is modeled by a set of 8 ordinary differential equations (ODEs):\n\\begin{align}\n\\dot{m}_{\\text{W}} &= \\ \\dot{m}_{\\text{F}}\\, \\omega_{\\text{W,F}} \\\\\n\\dot{m}_{\\text{A}} &= \\ \\dot{m}_{\\text{F}} \\omega_{\\text{A,F}}-k_{\\text{R1}}\\, m_{\\text{A,R}}-k_{\\text{R2}}\\, m_{\\text{AWT}}\\, m_{\\text{A}}/m_{\\text{ges}} , \\\\\n\\dot{m}_{\\text{P}} &= \\ k_{\\text{R1}} \\, m_{\\text{A,R}}+p_{1}\\,
k_{\\text{R2}}\\, m_{\\text{AWT}}\\, m_{\\text{A}}/ m_{\\text{ges}}, \\\\\n\\dot{T}_{\\text{R}} &= \\ 1/(c_{\\text{p,R}} m_{\\text{ges}})\\; [\\dot{m}_{\\text{F}} \\; c_{\\text{p,F}}\\left(T_{\\text{F}}-T_{\\text{R}}\\right) +\\Delta H_{\\text{R}} k_{\\text{R1}} m_{\\text{A,R}}-k_{\\text{K}} A\\left(T_{\\text{R}}-T_{\\text{S}}\\right) \\\\\n&- \\dot{m}_{\\text{AWT}} \\,c_{\\text{p,R}}\\left(T_{\\text{R}}-T_{\\text{EK}}\\right)],\\notag\\\\\n\\dot{T}_{S} &= 1/(c_{\\text{p,S}} m_{\\text{S}}) \\;[k_{\\text{K}} A\\left(T_{\\text{R}}-T_{\\text{S}}\\right)-k_{\\text{K}} A\\left(T_{\\text{S}}-T_{\\text{M}}\\right)], \\notag\\\\\n\\dot{T}_{\\text{M}} &= 1/(c_{\\text{p,W}} m_{\\text{M,KW}})\\;[\\dot{m}_{\\text{M,KW}}\\, c_{\\text{p,W}}\\left(T_{\\text{M}}^{\\text{IN}}-T_{\\text{M}}\\right) \\\\\n&+ k_{\\text{K}} A\\left(T_{\\text{S}}-T_{\\text{M}}\\right)], \\\\\n\\dot{T}_{\\text{EK}}&= 1/(c_{\\text{p,R}} m_{\\text{AWT}})\\;[\\dot{m}_{\\text{AWT}} c_{\\text{p,W}}\\left(T_{\\text{R}}-T_{\\text{EK}}\\right)-\\alpha\\left(T_{\\text{EK}}-T_{\\text{AWT}}\\right) \\\\ \n&+ k_{\\text{R2}}\\, m_{\\text{A}}\\, m_{\\text{AWT}}\\Delta H_{\\text{R}}/m_{\\text{ges}}], \\notag\\\\\n\\dot{T}_{\\text{AWT}} &= [\\dot{m}_{\\text{AWT,KW}} \\,c_{\\text{p,W}}\\,(T_{\\text{AWT}}^{\\text{IN}}-T_{\\text{AWT}})-\\alpha\\left(T_{\\text{AWT}}-T_{\\text{EK}}\\right)]/(c_{\\text{p,W}} m_{\\text{AWT,KW}}),\n\\end{align}\nwhere\n\\begin{align}\nU &= m_{\\text{P}}/(m_{\\text{A}}+m_{\\text{P}}), \\\\\nm_{\\text{ges}} &= \\ m_{\\text{W}}+m_{\\text{A}}+m_{\\text{P}}, \\\\\nk_{\\text{R1}} &= \\ k_{0} e^{\\frac{-E_{a}}{R (T_{\\text{R}}+273.15)}}\\left(k_{\\text{U1}}\\left(1-U\\right)+k_{\\text{U2}} U\\right), \\\\\nk_{\\text{R2}} &= \\ k_{0} e^{\\frac{-E_{a}}{R (T_{\\text{EK}}+273.15)}}\\left(k_{\\text{U1}}\\left(1-U\\right)+k_{\\text{U2}} U\\right), \\\\\nk_{\\text{K}} &= (m_{\\text{W}}k_{\\text{WS}}+m_{\\text{A}}k_{\\text{AS}}+m_{\\text{P}}k_{\\text{PS}})/m_{\\text{ges}},\\\\\nm_{\\text{A,R}} 
&= m_\\text{A}-m_\\text{A} m_{\\text{AWT}}/m_{\\text{ges}}.\n\\end{align}\nThe model includes mass balances for the water, monomer and product hold-ups ($m_\\text{W}$, $m_\\text{A}$, $m_\\text{P}$) and energy balances for the reactor ($T_\\text{R}$), the vessel ($T_\\text{S}$), the jacket ($T_\\text{M}$), the mixture in the external heat exchanger ($T_{\\text{EK}}$) and the coolant leaving the external heat exchanger ($T_{\\text{AWT}}$).\nThe variable $U$ denotes the polymer-monomer ratio in the reactor, $m_{\\text{ges}}$ represents the total mass, $k_{\\text{R1}}$ is the reaction rate inside the reactor and $k_{\\text{R2}}$ is the reaction rate in the external heat exchanger. The total heat transfer coefficient of the mixture inside the reactor is denoted as $k_{\\text{K}}$ and $m_{\\text{A,R}}$ represents the current amount of monomer inside the reactor.\nThe available control inputs are the feed flow $\\dot{m}_{\\text{F}}$, the coolant temperature at the inlet of the jacket $T^{\\text{IN}}_{\\text{M}}$ and the coolant temperature at the inlet of the external heat exchanger $T^{\\text{IN}}_{\\text{AWT}}$.\nAn overview of the parameters is given below:\n\nImplementation\nFirst, we set the certain parameters:", "# Certain parameters\nR = 8.314\t\t\t#gas constant\nT_F = 25 + 273.15\t#feed temperature\nE_a = 8500.0\t\t#activation energy\ndelH_R = 950.0*1.00\t#sp reaction enthalpy\nA_tank = 65.0\t\t\t#area heat exchanger surface jacket 65\n\nk_0 = 7.0*1.00\t\t#sp reaction rate\nk_U2 = 32.0\t\t\t#reaction parameter 1\nk_U1 = 4.0\t\t\t#reaction parameter 2\nw_WF = .333\t\t\t#mass fraction water in feed\nw_AF = .667\t\t\t#mass fraction of A in feed\n\nm_M_KW = 5000.0\t\t#mass of coolant in jacket\nfm_M_KW = 300000.0\t\t#coolant flow in jacket 300000;\nm_AWT_KW = 1000.0\t\t#mass of coolant in EHE\nfm_AWT_KW = 100000.0\t\t#coolant flow in EHE\nm_AWT = 200.0\t\t\t#mass of product in EHE\nfm_AWT = 20000.0\t\t#product flow in EHE\nm_S = 39000.0\t\t#mass of reactor 
steel\n\nc_pW = 4.2\t\t\t#sp heat cap coolant\nc_pS = .47\t\t\t#sp heat cap steel\nc_pF = 3.0\t\t\t#sp heat cap feed\nc_pR = 5.0\t\t\t#sp heat cap reactor contents\n\nk_WS = 17280.0\t\t#heat transfer coeff water-steel\nk_AS = 3600.0\t\t#heat transfer coeff monomer-steel\nk_PS = 360.0\t\t\t#heat transfer coeff product-steel\n\nalfa = 5*20e4*3.6\n\np_1 = 1.0", "and afterwards the uncertain parameters:", "# Uncertain parameters:\ndelH_R = model.set_variable('_p', 'delH_R')\nk_0 = model.set_variable('_p', 'k_0')", "The 10 states of the control problem stem from the 8 ODEs, accum_monom models the amount that has been fed to the reactor via $\\dot{m}_{\\text{F}}^{\\text{acc}} = \\dot{m}_{\\text{F}}$ and T_adiab ($T_{\\text{adiab}}=\\frac{\\Delta H_{\\text{R}}}{c_{\\text{p,R}}} \\frac{m_{\\text{A}}}{m_{\\text{ges}}} + T_{\\text{R}}$, hence $\\dot{T}_{\\text{adiab}}=\\frac{\\Delta H_{\\text{R}}}{m_{\\text{ges}} c_{\\text{p,R}}}\\dot{m}_{\\text{A}}-\n\\left(\\dot{m}_{\\text{W}}+\\dot{m}_{\\text{A}}+\\dot{m}_{\\text{P}}\\right)\\left(\\frac{m_{\\text{A}} \\Delta H_{\\text{R}}}{m_{\\text{ges}}^2c_{\\text{p,R}}}\\right)+\\dot{T}_{\\text{R}}$) is a virtual variable that is important for safety aspects, as we will explain later.\nAll states are created in do-mpc via:", "# States struct (optimization variables):\nm_W = model.set_variable('_x', 'm_W')\nm_A = model.set_variable('_x', 'm_A')\nm_P = model.set_variable('_x', 'm_P')\nT_R = model.set_variable('_x', 'T_R')\nT_S = model.set_variable('_x', 'T_S')\nTout_M = model.set_variable('_x', 'Tout_M')\nT_EK = model.set_variable('_x', 'T_EK')\nTout_AWT = model.set_variable('_x', 'Tout_AWT')\naccum_monom = model.set_variable('_x', 'accum_monom')\nT_adiab = model.set_variable('_x', 'T_adiab')", "and the control inputs via:", "# Input struct (optimization variables):\nm_dot_f = model.set_variable('_u', 'm_dot_f')\nT_in_M = model.set_variable('_u', 'T_in_M')\nT_in_EK = model.set_variable('_u', 'T_in_EK')", "Before defining the ODE for each state 
variable, we create auxiliary terms:", "# algebraic equations\nU_m = m_P / (m_A + m_P)\nm_ges = m_W + m_A + m_P\nk_R1 = k_0 * exp(- E_a/(R*T_R)) * ((k_U1 * (1 - U_m)) + (k_U2 * U_m))\nk_R2 = k_0 * exp(- E_a/(R*T_EK))* ((k_U1 * (1 - U_m)) + (k_U2 * U_m))\nk_K = ((m_W / m_ges) * k_WS) + ((m_A/m_ges) * k_AS) + ((m_P/m_ges) * k_PS)", "The auxiliary terms are used for the more readable definition of the ODEs:", "# Differential equations\ndot_m_W = m_dot_f * w_WF\nmodel.set_rhs('m_W', dot_m_W)\ndot_m_A = (m_dot_f * w_AF) - (k_R1 * (m_A-((m_A*m_AWT)/(m_W+m_A+m_P)))) - (p_1 * k_R2 * (m_A/m_ges) * m_AWT)\nmodel.set_rhs('m_A', dot_m_A)\ndot_m_P = (k_R1 * (m_A-((m_A*m_AWT)/(m_W+m_A+m_P)))) + (p_1 * k_R2 * (m_A/m_ges) * m_AWT)\nmodel.set_rhs('m_P', dot_m_P)\n\ndot_T_R = 1./(c_pR * m_ges) * ((m_dot_f * c_pF * (T_F - T_R)) - (k_K *A_tank* (T_R - T_S)) - (fm_AWT * c_pR * (T_R - T_EK)) + (delH_R * k_R1 * (m_A-((m_A*m_AWT)/(m_W+m_A+m_P)))))\nmodel.set_rhs('T_R', dot_T_R)\nmodel.set_rhs('T_S', 1./(c_pS * m_S) * ((k_K *A_tank* (T_R - T_S)) - (k_K *A_tank* (T_S - Tout_M))))\nmodel.set_rhs('Tout_M', 1./(c_pW * m_M_KW) * ((fm_M_KW * c_pW * (T_in_M - Tout_M)) + (k_K *A_tank* (T_S - Tout_M))))\nmodel.set_rhs('T_EK', 1./(c_pR * m_AWT) * ((fm_AWT * c_pR * (T_R - T_EK)) - (alfa * (T_EK - Tout_AWT)) + (p_1 * k_R2 * (m_A/m_ges) * m_AWT * delH_R)))\nmodel.set_rhs('Tout_AWT', 1./(c_pW * m_AWT_KW)* ((fm_AWT_KW * c_pW * (T_in_EK - Tout_AWT)) - (alfa * (Tout_AWT - T_EK))))\nmodel.set_rhs('accum_monom', m_dot_f)\nmodel.set_rhs('T_adiab', delH_R/(m_ges*c_pR)*dot_m_A-(dot_m_A+dot_m_W+dot_m_P)*(m_A*delH_R/(m_ges*m_ges*c_pR))+dot_T_R)", "Finally, the model setup is completed:", "# Build the model\nmodel.setup()", "Controller\nNext, the model predictive controller is configured (in template_mpc.py).\nFirst, one member of the mpc class is generated with the prediction model defined above:", "mpc = do_mpc.controller.MPC(model)", "Real processes are also subject to important safety constraints that are 
incorporated to account for possible failures of the equipment. In this case, the maximum temperature that the reactor would reach in the case of a cooling failure is constrained to be below $109 ^\\circ$C.\nThe temperature that the reactor would achieve in the case of a complete cooling failure is $T_{\\text{adiab}}$, hence it needs to stay beneath $109 ^\\circ$C.\nWe choose the prediction horizon n_horizon, set the robust horizon n_robust to 1. The time step t_step is set to one second and parameters of the applied discretization scheme orthogonal collocation are as seen below:", "setup_mpc = {\n 'n_horizon': 20,\n 'n_robust': 1,\n 'open_loop': 0,\n 't_step': 50.0/3600.0,\n 'state_discretization': 'collocation',\n 'collocation_type': 'radau',\n 'collocation_deg': 2,\n 'collocation_ni': 2,\n 'store_full_solution': True,\n # Use MA27 linear solver in ipopt for faster calculations:\n #'nlpsol_opts': {'ipopt.linear_solver': 'MA27'}\n}\n\nmpc.set_param(**setup_mpc)", "Objective\nThe goal of the economic NMPC controller is to produce $20680~\\text{kg}$ of $m_{\\text{P}}$ as fast as possible.\nAdditionally, we add a penalty on input changes for all three control inputs, to obtain a smooth control performance.", "_x = model.x\nmterm = - _x['m_P'] # terminal cost\nlterm = - _x['m_P'] # stage cost\n\nmpc.set_objective(mterm=mterm, lterm=lterm)\n\nmpc.set_rterm(m_dot_f=0.002, T_in_M=0.004, T_in_EK=0.002) # penalty on control input changes", "Constraints\nThe temperature at which the polymerization reaction takes place strongly influences the properties of the resulting polymer. For this reason, the temperature of the reactor should be maintained in a range of $\\pm 2.0 ^\\circ$C around the desired reaction temperature $T_{\\text{set}}=90 ^\\circ$C in order to ensure that the produced polymer has the required properties. 
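All temperature limits in the configuration below are given in Kelvin. As a quick sanity check (plain Python, with the values restated from the text), the ±2.0 °C band around the 90 °C set-point and the 109 °C adiabatic safety limit translate to:

```python
T_set = 90.0 + 273.15   # reaction temperature set-point [K]
temp_range = 2.0        # allowed deviation from the set-point [K]

T_R_lower = T_set - temp_range   # lower bound on T_R, 361.15 K
T_R_upper = T_set + temp_range   # upper bound on T_R, 365.15 K
T_adiab_upper = 109.0 + 273.15   # adiabatic safety limit, 382.15 K
```

These are exactly the bound values that appear in the mpc.bounds and set_nl_cons calls below.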
\nThe initial conditions and the bounds for all states are summarized in:\n\nand set via:", "# auxiliary term\ntemp_range = 2.0\n\n# lower bound states\nmpc.bounds['lower','_x','m_W'] = 0.0\nmpc.bounds['lower','_x','m_A'] = 0.0\nmpc.bounds['lower','_x','m_P'] = 26.0\n\nmpc.bounds['lower','_x','T_R'] = 363.15 - temp_range\nmpc.bounds['lower','_x','T_S'] = 298.0\nmpc.bounds['lower','_x','Tout_M'] = 298.0\nmpc.bounds['lower','_x','T_EK'] = 288.0\nmpc.bounds['lower','_x','Tout_AWT'] = 288.0\nmpc.bounds['lower','_x','accum_monom'] = 0.0\n\n# upper bound states\nmpc.bounds['upper','_x','T_S'] = 400.0\nmpc.bounds['upper','_x','Tout_M'] = 400.0\nmpc.bounds['upper','_x','T_EK'] = 400.0\nmpc.bounds['upper','_x','Tout_AWT'] = 400.0\nmpc.bounds['upper','_x','accum_monom'] = 30000.0\nmpc.bounds['upper','_x','T_adiab'] = 382.15", "The upper bound of the reactor temperature is set via a soft-constraint:", "mpc.set_nl_cons('T_R_UB', _x['T_R'], ub=363.15+temp_range, soft_constraint=True, penalty_term_cons=1e4)", "The bounds of the inputs are summarized below:\n\nand set via:", "# lower bound inputs\nmpc.bounds['lower','_u','m_dot_f'] = 0.0\nmpc.bounds['lower','_u','T_in_M'] = 333.15\nmpc.bounds['lower','_u','T_in_EK'] = 333.15\n\n# upper bound inputs\nmpc.bounds['upper','_u','m_dot_f'] = 3.0e4\nmpc.bounds['upper','_u','T_in_M'] = 373.15\nmpc.bounds['upper','_u','T_in_EK'] = 373.15", "Scaling\nBecause the magnitudes of the states and inputs are very different, the performance of the optimizer can be enhanced by properly scaling the states and inputs:", "# states\nmpc.scaling['_x','m_W'] = 10\nmpc.scaling['_x','m_A'] = 10\nmpc.scaling['_x','m_P'] = 10\nmpc.scaling['_x','accum_monom'] = 10\n\n# control inputs\nmpc.scaling['_u','m_dot_f'] = 100", "Uncertain values\nIn a real system, usually the model parameters cannot be determined exactly, which represents an important source of uncertainty. 
In this work, we consider that two of the most critical parameters of the model are not precisely known and vary with respect to their nominal value. In particular, we assume that the specific reaction enthalpy $\\Delta H_{\\text{R}}$ and the specific reaction rate $k_0$ are constant but uncertain, having values that can vary $\\pm 30 \\%$ with respect to their nominal values.", "delH_R_var = np.array([950.0, 950.0 * 1.30, 950.0 * 0.70])\nk_0_var = np.array([7.0 * 1.00, 7.0 * 1.30, 7.0 * 0.70])\n\nmpc.set_uncertainty_values(delH_R = delH_R_var, k_0 = k_0_var)", "This means that with n_robust=1, 9 different scenarios are considered.\nThe setup of the MPC controller is concluded by:", "mpc.setup()", "Estimator\nWe assume that all states can be directly measured (state-feedback):", "estimator = do_mpc.estimator.StateFeedback(model)", "Simulator\nTo create a simulator in order to run the MPC in a closed-loop, we create an instance of the do-mpc simulator which is based on the same model:", "simulator = do_mpc.simulator.Simulator(model)", "For the simulation, we use the same time step t_step as for the optimizer:", "params_simulator = {\n 'integration_tool': 'cvodes',\n 'abstol': 1e-10,\n 'reltol': 1e-10,\n 't_step': 50.0/3600.0\n}\n\nsimulator.set_param(**params_simulator)", "Realizations of uncertain parameters\nFor the simulation, it is necessary to define the numerical realizations of the uncertain parameters in p_num.\nFirst, we get the structure of the uncertain parameters:", "p_num = simulator.get_p_template()\ntvp_num = simulator.get_tvp_template()", "We define a function which is called in each simulation step, which returns the current realizations of the parameters with respect to defined inputs (in this case t_now):", "# uncertain parameters\np_num['delH_R'] = 950 * np.random.uniform(0.75,1.25)\np_num['k_0'] = 7 * np.random.uniform(0.75,1.25)\ndef p_fun(t_now):\n return p_num\nsimulator.set_p_fun(p_fun)", "By defining p_fun as above, the function will return 
a constant value for both uncertain parameters within a range of $\\pm 25\\%$ of the nominal value.\nTo finish the configuration of the simulator, call:", "simulator.setup()", "Closed-loop simulation\nFor the simulation of the MPC configured for the CSTR, we inspect the file main.py.\nWe define the initial state of the system and set it for all parts of the closed-loop configuration:", "# Set the initial state of the controller and simulator:\n# assume nominal values of uncertain parameters as initial guess\ndelH_R_real = 950.0\nc_pR = 5.0\n\n# x0 is a property of the simulator - we obtain it and set values.\nx0 = simulator.x0\n\nx0['m_W'] = 10000.0\nx0['m_A'] = 853.0\nx0['m_P'] = 26.5\n\nx0['T_R'] = 90.0 + 273.15\nx0['T_S'] = 90.0 + 273.15\nx0['Tout_M'] = 90.0 + 273.15\nx0['T_EK'] = 35.0 + 273.15\nx0['Tout_AWT'] = 35.0 + 273.15\nx0['accum_monom'] = 300.0\nx0['T_adiab'] = x0['m_A']*delH_R_real/((x0['m_W'] + x0['m_A'] + x0['m_P']) * c_pR) + x0['T_R']\n\nmpc.x0 = x0\nsimulator.x0 = x0\nestimator.x0 = x0\n\nmpc.set_initial_guess()", "Now, we simulate the closed-loop for 100 steps (and suppress the output of the cell with the magic command %%capture):", "%%capture\nfor k in range(100):\n u0 = mpc.make_step(x0)\n y_next = simulator.make_step(u0)\n x0 = estimator.make_step(y_next)", "Animating the results\nTo animate the results, we first configure the do-mpc graphics object, which is initiated with the respective data object:", "mpc_graphics = do_mpc.graphics.Graphics(mpc.data)", "We quickly configure Matplotlib.", "from matplotlib import rcParams\nrcParams['axes.grid'] = True\nrcParams['font.size'] = 18", "We then create a figure, configure which lines to plot on which axis and add labels.", "%%capture\nfig, ax = plt.subplots(5, sharex=True, figsize=(16,12))\nplt.ion()\n# Configure plot:\nmpc_graphics.add_line(var_type='_x', var_name='T_R', axis=ax[0])\nmpc_graphics.add_line(var_type='_x', var_name='accum_monom', axis=ax[1])\nmpc_graphics.add_line(var_type='_u', 
var_name='m_dot_f', axis=ax[2])\nmpc_graphics.add_line(var_type='_u', var_name='T_in_M', axis=ax[3])\nmpc_graphics.add_line(var_type='_u', var_name='T_in_EK', axis=ax[4])\n\nax[0].set_ylabel('T_R [K]')\nax[1].set_ylabel('acc. monom')\nax[2].set_ylabel('m_dot_f')\nax[3].set_ylabel('T_in_M [K]')\nax[4].set_ylabel('T_in_EK [K]')\nax[4].set_xlabel('time')\n\nfig.align_ylabels()", "After importing the necessary package:", "from matplotlib.animation import FuncAnimation, ImageMagickWriter", "We obtain the animation with:", "def update(t_ind):\n print('Writing frame: {}.'.format(t_ind), end='\\r')\n mpc_graphics.plot_results(t_ind=t_ind)\n mpc_graphics.plot_predictions(t_ind=t_ind)\n mpc_graphics.reset_axes()\n lines = mpc_graphics.result_lines.full\n return lines\n\nn_steps = mpc.data['_time'].shape[0]\n\n\nanim = FuncAnimation(fig, update, frames=n_steps, blit=True)\n\ngif_writer = ImageMagickWriter(fps=5)\nanim.save('anim_poly_batch.gif', writer=gif_writer)", "We are displaying recorded values as solid lines and predicted trajectories as dashed lines. Multiple dashed lines exist for different realizations of the uncertain scenarios.\nThe most interesting behavior here can be seen in the state T_R, which has the upper bound:", "mpc.bounds['upper', '_x', 'T_R']", "Due to robust control, we are approaching this value but keep a certain distance as some possible trajectories predict a temperature increase. As the reaction finishes, we can safely increase the temperature because a rapid temperature change due to uncertainty is impossible." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
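The do-mpc notebook above states that three values for each of the two uncertain parameters with `n_robust=1` yield 9 scenarios. A minimal standalone sketch (plain Python, no do-mpc required) confirming that combinatorics — the multi-stage scenario tree branches over the Cartesian product of the supplied uncertainty values:

```python
from itertools import product

# Candidate values per uncertain parameter, as in the notebook:
# nominal, +30 %, and -30 % of the nominal value.
delH_R_var = [950.0, 950.0 * 1.30, 950.0 * 0.70]
k_0_var = [7.0, 7.0 * 1.30, 7.0 * 0.70]

# With n_robust = 1, the controller branches once over every
# combination of the uncertain parameter values: 3 * 3 = 9 scenarios.
scenarios = list(product(delH_R_var, k_0_var))
print(len(scenarios))  # 9
```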
phanrahan/magmathon
notebooks/advanced/firrtl.ipynb
mit
[ "First, install Firrtl by following their installation instructions \nBe sure to add the directory containing the firrtl command line tool (typically firrtl/utils/bin) to your $PATH\nThe FIRRTL backend for magma is experimental and woefully lacking in support for standard mantle circuits. The core functionality has been implemented to demonstrate the capability of compiling magma circuits to FIRRTL. Pull requests that expand support for the FIRRTL backend are welcome.", "import magma as m\nm.set_mantle_target(\"coreir\")\nimport mantle\n\nmain = m.DefineCircuit('main', \"a\", m.In(m.Bit), \"b\", m.In(m.Bit), \"c\", m.In(m.Bit), \"d\", m.Out(m.Bit))\nd = (main.a & main.b) ^ main.c\nm.wire(d, main.d)\nm.compile(\"build/main\", main, output=\"firrtl\")\n\nwith open(\"build/main.fir\", \"r\") as f:\n print(f.read())", "Note: the ! syntax used in the next cell is jupyter notebook syntax sugar for executing a shell command", "!firrtl -i build/main.fir -o build/main.v -X verilog\n\nwith open(\"build/main.v\", \"r\") as f:\n print(f.read())\n\nwith open(\"build/sim_main.cpp\", \"w\") as sim_main_f:\n sim_main_f.write(\"\"\"\n#include \"Vmain.h\"\n#include \"verilated.h\"\n#include <cassert>\n#include <iostream>\n\nint main(int argc, char **argv, char **env) {\n Verilated::commandArgs(argc, argv);\n Vmain* top = new Vmain;\n int tests[8][4] = {\n {0, 0, 0, 0},\n {1, 0, 0, 0},\n {0, 1, 0, 0},\n {1, 1, 0, 1},\n {0, 0, 1, 1},\n {1, 0, 1, 1},\n {0, 1, 1, 1},\n {1, 1, 1, 0},\n };\n for(int i = 0; i < 8; i++) {\n int* test = tests[i];\n int a = test[0];\n int b = test[1];\n int c = test[2];\n int d = test[3];\n\n top->a = a;\n top->b = b;\n top->c = c;\n\n top->eval();\n assert(top->d == d);\n }\n\n delete top;\n std::cout << \"Success\" << std::endl;\n exit(0);\n} \n\"\"\")", "Note: The %%bash statement is a jupyter notebook magic operator that treats the cell as a bash script", "%%bash\ncd build\nverilator -Wall -Wno-DECLFILENAME --cc main.v --exe sim_main.cpp\nmake -C obj_dir -j -f 
Vmain.mk Vmain\n./obj_dir/Vmain" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
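The C++ testbench in the FIRRTL notebook above hard-codes the truth table of the circuit `d = (a & b) ^ c`. A quick Python sketch (not part of the notebook) regenerating that table, in the same row order as the testbench, to confirm the expected `d` column:

```python
# Regenerate the 8-row truth table for d = (a & b) ^ c, with a toggling
# fastest, then b, then c — matching the C++ testbench's row order.
tests = [[a, b, c, (a & b) ^ c]
         for c in (0, 1) for b in (0, 1) for a in (0, 1)]
for a, b, c, d in tests:
    assert d == (a & b) ^ c
print(tests[3], tests[7])  # [1, 1, 0, 1] [1, 1, 1, 0]
```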
shirishr/My-Progress-at-Machine-Learning
Udacity_Machine_Learning/finding_donors/finding_donors.ipynb
mit
[ "Machine Learning Engineer Nanodegree\nSupervised Learning\nProject: Finding Donors for CharityML\nWelcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. \n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\nGetting Started\nIn this project, you will employ several supervised algorithms of your choice to accurately model individuals' income using data collected from the 1994 U.S. Census. You will then choose the best candidate algorithm from preliminary results and further optimize this algorithm to best model the data. Your goal with this implementation is to construct a model that accurately predicts whether an individual makes more than $50,000. This sort of task can arise in a non-profit setting, where organizations survive on donations. 
Understanding an individual's income can help a non-profit better understand how large of a donation to request, or whether or not they should reach out to begin with. While it can be difficult to determine an individual's general income bracket directly from public sources, we can (as we will see) infer this value from other publicly available features. \nThe dataset for this project originates from the UCI Machine Learning Repository. The dataset was donated by Ron Kohavi and Barry Becker, after being published in the article \"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid\". You can find the article by Ron Kohavi online. The data we investigate here consists of small changes to the original dataset, such as removing the 'fnlwgt' feature and records with missing or ill-formatted entries.\n\nExploring the Data\nRun the code cell below to load necessary Python libraries and load the census data. Note that the last column from this dataset, 'income', will be our target label (whether an individual makes more than, or at most, $50,000 annually). All other columns are features about each individual in the census database.", "# Import libraries necessary for this project\nimport numpy as np\nimport pandas as pd\nfrom time import time\nfrom IPython.display import display # Allows the use of display() for DataFrames\n\n# Import supplementary visualization code visuals.py\nimport visuals as vs\n\n# Pretty display for notebooks\n%matplotlib inline\n\n# Load the Census dataset\ndata = pd.read_csv(\"census.csv\")\ndisplay(len(data))\n# Success - Display the first record\ndisplay(data.head(n=5))", "Implementation: Data Exploration\nA cursory investigation of the dataset will determine how many individuals fit into either group, and will tell us about the percentage of these individuals making more than \\$50,000. 
In the code cell below, you will need to compute the following:\n- The total number of records, 'n_records'\n- The number of individuals making more than \\$50,000 annually, 'n_greater_50k'.\n- The number of individuals making at most \\$50,000 annually, 'n_at_most_50k'.\n- The percentage of individuals making more than \\$50,000 annually, 'greater_percent'.\nHint: You may need to look at the table above to understand how the 'income' entries are formatted.", "# TODO: Total number of records\nn_records = len(data)\n\n# TODO: Number of records where individual's income is more than $50,000\nn_greater_50k = len(np.where(data['income']=='>50K')[0])\n\n# TODO: Number of records where individual's income is at most $50,000\nn_at_most_50k = len(np.where(data['income']=='<=50K')[0])\n\n# TODO: Percentage of individuals whose income is more than $50,000\ngreater_percent = float(n_greater_50k)/float(n_records)*100\n\n# Print the results\nprint \"Total number of records: {}\".format(n_records)\nprint \"Individuals making more than $50,000: {}\".format(n_greater_50k)\nprint \"Individuals making at most $50,000: {}\".format(n_at_most_50k)\nprint \"Percentage of individuals making more than $50,000: {:.2f}%\".format(greater_percent)", "Preparing the Data\nBefore data can be used as input for machine learning algorithms, it often must be cleaned, formatted, and restructured — this is typically known as preprocessing. Fortunately, for this dataset, there are no invalid or missing entries we must deal with, however, there are some qualities about certain features that must be adjusted. This preprocessing can help tremendously with the outcome and predictive power of nearly all learning algorithms.\nTransforming Skewed Continuous Features\nA dataset may sometimes contain at least one feature whose values tend to lie near a single number, but will also have a non-trivial number of vastly larger or smaller values than that single number. 
Algorithms can be sensitive to such distributions of values and can underperform if the range is not properly normalized. With the census dataset two features fit this description: 'capital-gain' and 'capital-loss'. \nRun the code cell below to plot a histogram of these two features. Note the range of the values present and how they are distributed.", "# Split the data into features and target label\nincome_raw = data['income']\nfeatures_raw = data.drop('income', axis = 1)\n\n# Visualize skewed continuous features of original data\nvs.distribution(data)", "For highly-skewed feature distributions such as 'capital-gain' and 'capital-loss', it is common practice to apply a <a href=\"https://en.wikipedia.org/wiki/Data_transformation_(statistics)\">logarithmic transformation</a> on the data so that the very large and very small values do not negatively affect the performance of a learning algorithm. Using a logarithmic transformation significantly reduces the range of values caused by outliers. Care must be taken when applying this transformation however: The logarithm of 0 is undefined, so we must translate the values by a small amount above 0 to apply the logarithm successfully.\nRun the code cell below to perform a transformation on the data and visualize the results. Again, note the range of values and how they are distributed.", "# Log-transform the skewed features\nskewed = ['capital-gain', 'capital-loss']\nfeatures_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))\n\n# Visualize the new log distributions\nvs.distribution(features_raw, transformed = True)", "Normalizing Numerical Features\nIn addition to performing transformations on features that are highly skewed, it is often good practice to perform some type of scaling on numerical features. 
Applying a scaling to the data does not change the shape of each feature's distribution (such as 'capital-gain' or 'capital-loss' above); however, normalization ensures that each feature is treated equally when applying supervised learners. Note that once scaling is applied, observing the data in its raw form will no longer have the same original meaning, as demonstrated below.\nRun the code cell below to normalize each numerical feature. We will use sklearn.preprocessing.MinMaxScaler for this.", "# Import sklearn.preprocessing.MinMaxScaler\nfrom sklearn.preprocessing import MinMaxScaler\n\n# Initialize a scaler, then apply it to the features\nscaler = MinMaxScaler()\nnumerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']\n# Scale features_raw (not data) so the log transform above is preserved;\n# MinMaxScaler computes (X-Xmin)/(Xmax-Xmin)\nfeatures_raw[numerical] = scaler.fit_transform(features_raw[numerical])\n\n# Show an example of a record with scaling applied\ndisplay(features_raw.head(n = 1))", "Implementation: Data Preprocessing\nFrom the table in Exploring the Data above, we can see there are several features for each record that are non-numeric. Typically, learning algorithms expect input to be numeric, which requires that non-numeric features (called categorical variables) be converted. One popular way to convert categorical variables is by using the one-hot encoding scheme. One-hot encoding creates a \"dummy\" variable for each possible category of each non-numeric feature. For example, assume someFeature has three possible entries: A, B, or C. We then encode this feature into someFeature_A, someFeature_B and someFeature_C.\n| | someFeature | | someFeature_A | someFeature_B | someFeature_C |\n| :-: | :-: | | :-: | :-: | :-: |\n| 0 | B | | 0 | 1 | 0 |\n| 1 | C | ----> one-hot encode ----> | 0 | 0 | 1 |\n| 2 | A | | 1 | 0 | 0 |\nAdditionally, as with the non-numeric features, we need to convert the non-numeric target label, 'income' to numerical values for the learning algorithm to work. 
Since there are only two possible categories for this label (\"<=50K\" and \">50K\"), we can avoid using one-hot encoding and simply encode these two categories as 0 and 1, respectively. In the code cell below, you will need to implement the following:\n - Use pandas.get_dummies() to perform one-hot encoding on the 'features_raw' data.\n - Convert the target label 'income_raw' to numerical entries.\n - Set records with \"<=50K\" to 0 and records with \">50K\" to 1.", "# TODO: One-hot encode the 'features_raw' data using pandas.get_dummies()\nfeatures = pd.get_dummies(features_raw)\n\n# TODO: Encode the 'income_raw' data to numerical values\n\nincome = pd.get_dummies(income_raw)['>50K']\n#display (income)\n# Print the number of features after one-hot encoding\nencoded = list(features.columns)\nprint \"{} total features after one-hot encoding.\".format(len(encoded))\n\n# The following line prints the encoded feature names\nprint encoded", "Shuffle and Split Data\nNow all categorical variables have been converted into numerical features, and all numerical features have been normalized. As always, we will now split the data (both features and their labels) into training and test sets. 80% of the data will be used for training and 20% for testing.\nRun the code cell below to perform this split.", "# Import train_test_split\nfrom sklearn.cross_validation import train_test_split\n\n# Split the 'features' and 'income' data into training and testing sets\nX_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0)\n\n# Show the results of the split\nprint \"Training set has {} samples.\".format(X_train.shape[0])\nprint \"Testing set has {} samples.\".format(X_test.shape[0])", "Evaluating Model Performance\nIn this section, we will investigate four different algorithms, and determine which is best at modeling the data. 
Three of these algorithms will be supervised learners of your choice, and the fourth algorithm is known as a naive predictor.\nMetrics and the Naive Predictor\nCharityML, equipped with their research, knows individuals that make more than \\$50,000 are most likely to donate to their charity. Because of this, CharityML is particularly interested in predicting who makes more than \\$50,000 accurately. It would seem that using accuracy as a metric for evaluating a particular model's performance would be appropriate. Additionally, identifying someone that does not make more than \\$50,000 as someone who does would be detrimental to CharityML, since they are looking to find individuals willing to donate. Therefore, a model's ability to precisely predict those that make more than \\$50,000 is more important than the model's ability to recall those individuals. We can use F-beta score as a metric that considers both precision and recall:\n$$ F_{\\beta} = (1 + \\beta^2) \\cdot \\frac{precision \\cdot recall}{\\left( \\beta^2 \\cdot precision \\right) + recall} $$\nIn particular, when $\\beta = 0.5$, more emphasis is placed on precision. This is called the F$_{0.5}$ score (or F-score for simplicity).\nLooking at the distribution of classes (those who make at most \\$50,000, and those who make more), it's clear most individuals do not make more than \\$50,000. This can greatly affect accuracy, since we could simply say \"this person does not make more than \\$50,000\" and generally be right, without ever looking at the data! Making such a statement would be called naive, since we have not considered any information to substantiate the claim. It is always important to consider the naive prediction for your data, to help establish a benchmark for whether a model is performing well. That being said, using that prediction would be pointless: If we predicted all people made less than \\$50,000, CharityML would identify no one as donors. 
\nQuestion 1 - Naive Predictor Performance\nIf we chose a model that always predicted an individual made more than \\$50,000, what would that model's accuracy and F-score be on this dataset?\nNote: You must use the code cell below and assign your results to 'accuracy' and 'fscore' to be used later.", "# TODO: Calculate accuracy\naccuracy = float(n_greater_50k)/float(n_records)\n\n# TODO: Calculate F-score using the formula above for beta = 0.5\nbeta = 0.5\nrecall = 1.0\nfscore = (1+beta**2)*(accuracy*recall)/(beta**2*accuracy+recall)\n\n# Print the results \nprint \"Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]\".format(accuracy, fscore)", "Supervised Learning Models\nThe following supervised learning models are currently available in scikit-learn that you may choose from:\n- Gaussian Naive Bayes (GaussianNB)\n- Decision Trees\n- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)\n- K-Nearest Neighbors (KNeighbors)\n- Stochastic Gradient Descent Classifier (SGDC)\n- Support Vector Machines (SVM)\n- Logistic Regression\nQuestion 2 - Model Application\nList three of the supervised learning models above that are appropriate for this problem that you will test on the census data. For each model chosen:\n- Describe one real-world application in industry where the model can be applied. 
(You may need to do research for this — give references!)\n- What are the strengths of the model; when does it perform well?\n- What are the weaknesses of the model; when does it perform poorly?\n- What makes this model a good candidate for the problem, given what you know about the data?\nAnswer: \nmodel | real-world application | strength | weakness | why it's a good candidate\n--- | --- | --- | --- | ---\nLogistic Regression | Cancer prediction based on patient characteristics | Predictions on small datasets with a small number of features can be efficient and fast | When data contains complex features, it may suffer from under- or over-fitting unless the features are carefully selected and fine-tuned | This model may be suitable since the expected output is categorical and binary on top\nGaussian Naive Bayes | e.g. automatic sorting of incoming e-mail into different categories | Simple model that works in many complex real-world situations and requires only a small amount of training data | Assumes features are independent of each other, so it performs poorly when features are correlated | Its simplicity is its power\nk-Nearest Neighbor | Applications that call for pattern recognition to determine outcome (including recognizing faces of people in pictures) | Simple and yet effective (facial recognition!) | It is an instance-based, lazy learner and is sensitive to the local structure of the data | With 102 features and 45,222 observations, k-NN will be manageable computationally \nImplementation - Creating a Training and Predicting Pipeline\nTo properly evaluate the performance of each model you've chosen, it's important that you create a training and predicting pipeline that allows you to quickly and effectively train models using various sizes of training data and perform predictions on the testing data. 
Your implementation here will be used in the following section.\nIn the code block below, you will need to implement the following:\n - Import fbeta_score and accuracy_score from sklearn.metrics.\n - Fit the learner to the sampled training data and record the training time.\n - Perform predictions on the test data X_test, and also on the first 300 training points X_train[:300].\n - Record the total prediction time.\n - Calculate the accuracy score for both the training subset and testing set.\n - Calculate the F-score for both the training subset and testing set.\n - Make sure that you set the beta parameter!", "# TODO: Import two metrics from sklearn - fbeta_score and accuracy_score\nfrom sklearn.metrics import fbeta_score, accuracy_score\nfrom time import time\n\ndef train_predict(learner, sample_size, X_train, y_train, X_test, y_test): \n '''\n inputs:\n - learner: the learning algorithm to be trained and predicted on\n - sample_size: the size of samples (number) to be drawn from training set\n - X_train: features training set\n - y_train: income training set\n - X_test: features testing set\n - y_test: income testing set\n '''\n \n results = {}\n \n # TODO: Fit the learner to the training data using slicing with 'sample_size'\n X_train = X_train[:sample_size]\n y_train = y_train[:sample_size]\n \n #Several comments below talk about first 300 training samples. 
The metrics below use the first 300 training samples, as the project spec requires.\n \n start = time() # Get start time\n learner = learner.fit(X_train, y_train)\n end = time() # Get end time\n \n # TODO: Calculate the training time\n results['train_time'] = end - start\n \n # TODO: Get the predictions on the test set,\n # then get predictions on the first 300 training samples\n start = time() # Get start time\n predictions_test = learner.predict(X_test)\n predictions_train = learner.predict(X_train[:300])\n end = time() # Get end time\n \n # TODO: Calculate the total prediction time\n results['pred_time'] = end - start\n \n # TODO: Compute accuracy on the first 300 training samples\n results['acc_train'] = accuracy_score(y_train[:300], predictions_train)\n \n # TODO: Compute accuracy on test set\n results['acc_test'] = accuracy_score(y_test, predictions_test)\n \n # TODO: Compute F-score on the first 300 training samples\n results['f_train'] = fbeta_score(y_train[:300], predictions_train, beta=beta)\n \n # TODO: Compute F-score on the test set\n results['f_test'] = fbeta_score(y_test, predictions_test, beta=beta)\n \n # Success\n print \"{} trained on {} samples.\".format(learner.__class__.__name__, sample_size)\n \n # Return the results\n return results", "Implementation: Initial Model Evaluation\nIn the code cell, you will need to implement the following:\n- Import the three supervised learning models you've discussed in the previous section.\n- Initialize the three models and store them in 'clf_A', 'clf_B', and 'clf_C'.\n - Use a 'random_state' for each model you use, if provided.\n - Note: Use the default settings for each model — you will tune one specific model in a later section.\n- Calculate the number of records equal to 1%, 10%, and 100% of the training data.\n - Store those values in 'samples_1', 'samples_10', and 'samples_100' respectively.\nNote: Depending on which algorithms you chose, the following implementation may take some time to run!", "# TODO: Import the three supervised learning models from 
sklearn\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.naive_bayes import GaussianNB\n\n# TODO: Initialize the three models\nseed=7\nclf_A = LogisticRegression(random_state=seed)\nclf_B = GaussianNB()\nclf_C = KNeighborsClassifier()\n\n# TODO: Calculate the number of samples for 1%, 10%, and 100% of the training data\nn_train = len(y_train)\nsamples_1 = n_train*1/100\nsamples_10 = n_train*1/10\nsamples_100 = n_train*1/1\n\n# Collect results on the learners\nresults = {}\nfor clf in [clf_A, clf_B, clf_C]:\n clf_name = clf.__class__.__name__\n results[clf_name] = {}\n for i, samples in enumerate([samples_1, samples_10, samples_100]):\n results[clf_name][i] = \\\n train_predict(clf, samples, X_train, y_train, X_test, y_test)\n\n# Run metrics visualization for the three supervised learning models chosen\nvs.evaluate(results, accuracy, fscore)\n\nfrom sklearn.metrics import confusion_matrix\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n\n\nfor clf in [clf_A, clf_B, clf_C]:\n model = clf\n cm = confusion_matrix(y_test.values, model.predict(X_test))\n # view with a heatmap\n fig = plt.figure()\n sns.heatmap(cm, annot=True, cmap='Blues', xticklabels=['no', 'yes'], yticklabels=['no', 'yes'])\n plt.ylabel('True label')\n plt.xlabel('Predicted label')\n plt.title('Confusion matrix for:\\n{}'.format(model.__class__.__name__));", "Improving Results\nIn this final section, you will choose from the three supervised learning models the best model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (X_train and y_train) by tuning at least one parameter to improve upon the untuned model's F-score. 
\nQuestion 3 - Choosing the Best Model\nBased on the evaluation you performed earlier, in one to two paragraphs, explain to CharityML which of the three models you believe to be most appropriate for the task of identifying individuals that make more than \\$50,000.\nHint: Your answer should include discussion of the metrics, prediction/training time, and the algorithm's suitability for the data.\nAnswer: \nI believe \"Logistic Regression\" is the most appropriate model. It trains and tests quickly, and its accuracy/F-scores on test data are consistent across different training set sizes. Training scores and testing scores are similar, suggesting no overfitting. Its prediction precision for high-income is 1300/1790 = 72.63% and its recall is 1300/2180 = 59.63% (false positives and false negatives are low).\nAlthough \"k-Nearest Neighbor\" has slightly higher accuracy and F-score, it takes a lot longer in terms of processing time. If time were not a critical factor (as it would be for a fly-by-wire airplane or a self-driving car), I would prefer this model. Its prediction precision for high-income is 1300/2240 = 58.04% and its recall is 1300/1690 = 76.92% (false positives and false negatives are low). \nThe \"Naive Bayes with Gaussian\" model is the fastest, with the least variance of scores between training and testing; however, it does poorly when the dataset is smaller. Its prediction precision for high-income is 2100/3600 = 58.33% and its recall is 2100/5500 = 38.18% (false positives are not low).\nQuestion 4 - Describing the Model in Layman's Terms\nIn one to two paragraphs, explain to CharityML, in layman's terms, how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical or technical jargon, such as describing equations or discussing the algorithm implementation.\nAnswer: \nThe final model chosen is called Logistic Regression. 
Logistic regression works with odds. What are the odds that someone's income is more than $50,000 (high-income)? It depends on what we know about the person. It also depends on what we have learnt about persons that are high-income and low-income. We have 45,222 persons, of which 11,208 are known to be high-income.\nOur model needs to learn. It needs to figure out the odds of a person being high-income given the fact that the person is a college graduate. It needs to figure out the odds of a person being high-income given that the person is young, or given that the person is married, or given that the person has a job or owns a business. We will let our model read 80% of our 45,222 observations that are randomly selected so that they will cover married as well as not married or divorced, full-time workers (40 hours per week) as well as part-time workers, young as well as old. See the picture of a cube below: it is an imaginary space for a model that considers age (one edge of the cube), marital_status (second edge of the cube) and, say, education (the third edge of the cube). This model will read the data and place it as blue balls for low-income and red balls for high-income. \n<img src=\"files/separable.png\">\nSee a plane that separates the red and the blue balls. This smart model has figured out where that plane should be, what its shape should be and what its angle should be to best separate the red and the blue balls. It did that in what we call the training phase, after it read the data and analyzed it.\nWe need to know how well the model will perform when it has to figure out the income level of a new person. This is the purpose of the 20% data that we held back. Now, in the second phase, which we call validation, we ask it to make its guess without telling it the actual income level of these 20%. Are these persons high-income or low-income? This is how we estimate its accuracy and also its reliability / dependability. 
\nWe don't want our model to falsely identify someone as high-income when they are not, or to falsely identify someone as low-income when they are not. We also don't want someone exactly matching the attributes of a known high-income person to be identified as low-income, and vice versa. See below a 2 x 2 matrix where one side is true high-income (yes/no) and the other side is predicted high-income (yes/no).\n<img src=\"files/Confusion_LR.png\">\nThe way to read this figure is that of the 1790 times our model predicts someone as high-income, it is correct 1300 times. Of the 7180 times it predicts someone as low-income, it is correct 6300 times. If someone is truly high-income, it correctly recognizes them 1300 times out of 2180. If someone is truly low-income, it correctly recognizes them 6300 times out of 6790. \nEventually we will decide how many edges (attributes such as age, education, and marital status of a person) we should have for our decision space. Remember our cube had only 3 edges, but our mathematical model can have more. However, it is best to not have too many. Not all of them are important, and we can afford to drop some without significant loss in accuracy, which would also make our prediction process faster.\nOnce finalized, these specific attributes such as age, education, and marital status of a person will be provided to the model whenever a new person is to be examined, and our model will then predict whether the person is high-income (or not).\nImplementation: Model Tuning\nFine-tune the chosen model. Use grid search (GridSearchCV) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. 
In the code cell below, you will need to implement the following:\n- Import sklearn.grid_search.GridSearchCV and sklearn.metrics.make_scorer.\n- Initialize the classifier you've chosen and store it in clf.\n - Set a random_state if one is available to the same state you set before.\n- Create a dictionary of parameters you wish to tune for the chosen model.\n - Example: parameters = {'parameter' : [list of values]}.\n - Note: Avoid tuning the max_features parameter of your learner if that parameter is available!\n- Use make_scorer to create an fbeta_score scoring object (with $\\beta = 0.5$).\n- Perform grid search on the classifier clf using the 'scorer', and store it in grid_obj.\n- Fit the grid search object to the training data (X_train, y_train), and store it in grid_fit.\nNote: Depending on the algorithm chosen and the parameter list, the following implementation may take some time to run!", "# TODO: Import 'GridSearchCV', 'make_scorer', and any other necessary libraries\nfrom sklearn.grid_search import GridSearchCV\nfrom sklearn.metrics import make_scorer\n\n# TODO: Initialize the classifier\nmodel = LogisticRegression(random_state=7)\n\n# TODO: Create the parameters list you wish to tune\nparam_grid = {'solver': ['sag', 'lbfgs', 'newton-cg'],\n 'C': [0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]}\n\"\"\"\nAlgorithm to use in the optimization problem.\nFor small datasets, ‘liblinear’ is a good choice, whereas ‘sag’ is faster for large ones.\nFor multiclass problems, only ‘newton-cg’, ‘sag’ and ‘lbfgs’ handle multinomial loss; ‘liblinear’ is limited to one-versus-rest schemes.\n‘newton-cg’, ‘lbfgs’ and ‘sag’ only handle L2 penalty.(L1 regularization and L2 regularization are two closely related techniques that can be used by machine learning (ML) training algorithms to reduce model overfitting)\n‘liblinear’ might be slower in LogisticRegressionCV because it does\nnot handle warm-starting.\nNote that ‘sag’ fast convergence is only guaranteed on features with 
approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing.\n\"\"\"\n# TODO: Make an fbeta_score scoring object\nscorer = make_scorer(fbeta_score, beta=0.5)\n\n# TODO: Perform grid search on the classifier using 'scorer' as the scoring method\ngrid_obj = GridSearchCV(estimator=model, param_grid=param_grid, scoring=scorer)\n\n# TODO: Fit the grid search object to the training data and find the optimal parameters\ngrid_fit = grid_obj.fit(X_train, y_train)\n\n# Get the estimator\nbest_clf = grid_fit.best_estimator_\n\n# Make predictions using the unoptimized model\npredictions = (model.fit(X_train, y_train)).predict(X_test)\nbest_predictions = best_clf.predict(X_test)\n\n# Report the before-and-after scores\nprint \"Unoptimized model\\n------\"\nprint \"Accuracy score on testing data: {:.4f}\".format(accuracy_score(y_test, predictions))\nprint \"F-score on testing data: {:.4f}\".format(fbeta_score(y_test, predictions, beta = 0.5))\nprint \"\\nOptimized Model\\n------\"\nprint \"Final accuracy score on the testing data: {:.4f}\".format(accuracy_score(y_test, best_predictions))\nprint \"Final F-score on the testing data: {:.4f}\".format(fbeta_score(y_test, best_predictions, beta = 0.5))", "Question 5 - Final Model Evaluation\nWhat is your optimized model's accuracy and F-score on the testing data? Are these scores better or worse than the unoptimized model? How do the results from your optimized model compare to the naive predictor benchmarks you found earlier in Question 1?\nNote: Fill in the table below with your results, and then provide discussion in the Answer box.\nResults:\n| Metric | Benchmark Predictor | Unoptimized Model | Optimized Model |\n| :------------: | :-----------------: | :---------------: | :-------------: | \n| Accuracy Score | 0.2478 | 0.8201 | 0.8494 |\n| F-score | 0.2917 | 0.6317 | 0.7008 |\nAnswer: \nThe optimized model has better accuracy and F-score than the unoptimized model. 
The improvement is small, but an improvement nonetheless!\nThe optimized model is a clear improvement over the benchmark predictor (the naive predictor) in terms of accuracy and F-score.\n\nFeature Importance\nAn important task when performing supervised learning on a dataset like the census data we study here is determining which features provide the most predictive power. By focusing on the relationship between only a few crucial features and the target label we simplify our understanding of the phenomenon, which is almost always a useful thing to do. In the case of this project, that means we wish to identify a small number of features that most strongly predict whether an individual makes at most or more than \\$50,000.\nChoose a scikit-learn classifier (e.g., adaboost, random forests) that has a feature_importances_ attribute, which ranks the importance of features according to the chosen classifier. In the next python cell, fit this classifier to the training set and use this attribute to determine the top 5 most important features for the census dataset.\nQuestion 6 - Feature Relevance Observation\nWhen Exploring the Data, it was shown there are thirteen available features for each individual on record in the census data.\nOf these thirteen features, which five do you believe to be most important for prediction, and in what order would you rank them and why?\nAnswer:\nMy guess is based on what I see in my church. People who donate significantly (High Income >$50K) are:\nelderly, happily married, professional, 9-5 jobs (40 hr week) and well-educated.\nHence in my mind the features in order of diminishing priority are:\nage, marital-status, occupation, hours-per-week and education_level\nImplementation - Extracting Feature Importance\nChoose a scikit-learn supervised learning algorithm that has a feature_importances_ attribute available for it. 
This attribute is a function that ranks the importance of each feature when making predictions based on the chosen algorithm.\nIn the code cell below, you will need to implement the following:\n - Import a supervised learning model from sklearn if it is different from the three used earlier.\n - Train the supervised model on the entire training set.\n - Extract the feature importances using '.feature_importances_'.", "# TODO: Import a supervised learning model that has 'feature_importances_'\nfrom sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, GradientBoostingClassifier\n# TODO: Train the supervised model on the training set \nmodel = GradientBoostingClassifier().fit(X_train, y_train)\n\n# TODO: Extract the feature importances\nimportances = model.feature_importances_\n\n# Plot\nvs.feature_plot(importances, X_train, y_train)", "Question 7 - Extracting Feature Importance\nObserve the visualization created above which displays the five most relevant features for predicting if an individual makes at most or above \\$50,000.\nHow do these five features compare to the five features you discussed in Question 6? If you were close to the same answer, how does this visualization confirm your thoughts? If you were not close, why do you think these features are more relevant?\nAnswer:\nWow !! 
This is an eye-opener.\nEducation is the biggest factor that determines if someone is a donor, next is age, happily married and then capital-loss and capital-gain !\noccupation is immaterial !!\nHours worked is immaterial !!!\nWhat matters is whether you suffered a wall-street loss or gain !!!!\nMy takeaway from this is that you are scarred if you experience a wall-street loss OR you are generous when you experience a wall-street gain (is a lottery a capital-gain?...maybe). I did expect occupation to play a role in this but I have been proven wrong.\nMaybe the people in my church are capital-gainers (which I would not know) and they get clubbed into the >$50K group\nFeature Selection\nHow does a model perform if we only use a subset of all the available features in the data? With fewer features required to train, the expectation is that training and prediction time is much lower, at the cost of performance metrics. From the visualization above, we see that the top five most important features contribute more than half of the importance of all features present in the data. This hints that we can attempt to reduce the feature space and simplify the information required for the model to learn. 
The code cell below will use the same optimized model you found earlier, and train it on the same training set with only the top five important features.", "# Import functionality for cloning a model\nfrom sklearn.base import clone\n\n# Reduce the feature space\nX_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]\nX_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]\n\n# Train on the \"best\" model found from grid search earlier\nclf = (clone(best_clf)).fit(X_train_reduced, y_train)\n\n# Make new predictions\nreduced_predictions = clf.predict(X_test_reduced)\n\n# Report scores from the final model using both versions of data\nprint \"Final Model trained on full data\\n------\"\nprint \"Accuracy on testing data: {:.4f}\".format(accuracy_score(y_test, best_predictions))\nprint \"F-score on testing data: {:.4f}\".format(fbeta_score(y_test, best_predictions, beta = 0.5))\nprint \"\\nFinal Model trained on reduced data\\n------\"\nprint \"Accuracy on testing data: {:.4f}\".format(accuracy_score(y_test, reduced_predictions))\nprint \"F-score on testing data: {:.4f}\".format(fbeta_score(y_test, reduced_predictions, beta = 0.5))", "Question 8 - Effects of Feature Selection\nHow does the final model's F-score and accuracy score on the reduced data using only five features compare to those same scores when all features are used?\nIf training time was a factor, would you consider using the reduced data as your training set?\nAnswer:\nThe final model trained on full data has better accuracy and F-score compared to the model trained on reduced data.\nIt did not look like the full data took any significant time to arrive at the results.\nHowever, if time were a factor, the final model with reduced data is an acceptable compromise" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
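The precision and recall percentages quoted in the Question 3 answer above come from simple ratios of confusion-matrix counts. A minimal sketch of that arithmetic (the counts below are the ones quoted in the answer, read off the notebook's confusion-matrix figures, so they are approximate; the helper name is illustrative, not from the notebook):

```python
# Precision = TP / all predicted positive; Recall = TP / all actual positive.
def precision_recall(tp, predicted_pos, actual_pos):
    return tp / predicted_pos, tp / actual_pos

# (true positives, predicted high-income, actual high-income)
models = {
    'Logistic Regression': (1300, 1790, 2180),
    'k-Nearest Neighbors': (1300, 2240, 1690),
    'Gaussian Naive Bayes': (2100, 3600, 5500),
}

for name, counts in models.items():
    p, r = precision_recall(*counts)
    # e.g. Logistic Regression -> precision ~72.63%, recall ~59.63%,
    # matching the figures quoted in the answer above
    print('{}: precision={:.2%}, recall={:.2%}'.format(name, p, r))
```

This makes the trade-off in the answer explicit: Logistic Regression has the highest precision, k-Nearest Neighbors trades precision for recall, and Gaussian Naive Bayes lags on both.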
bmorris3/salter
stats_vis.ipynb
mit
[ "%matplotlib inline\n%config InlineBackend.figure_format = \"retina\"\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom astropy.io import ascii\n\ntable = ascii.read('data/stats_table.csv', format='csv', delimiter=',')\ntable.add_index('kepid')", "If the mean flux before transit is significantly different from the mean flux after transit, mask those results.", "[n for n in table.colnames if n.startswith('ks')]\n\np = table['ttest:out_of_transit&before_midtransit-vs-out_of_transit&after_midtransit']\n\npoorly_normalized_oot_threshold = -1 \n\nmask_poorly_normalized_oot = np.log(p) > poorly_normalized_oot_threshold\n\nplt.hist(np.log(p[~np.isnan(p)]))\nplt.axvline(poorly_normalized_oot_threshold, color='r')\nplt.ylabel('freq')\nplt.xlabel('log( Ttest(before-transit, after-transit) )')\nplt.show()", "If the distribution of fluxes before transit is significantly different from the distribution of fluxes after transit, mask those results.", "p = table['ks:out_of_transit&before_midtransit-vs-out_of_transit&after_midtransit']\n\nmask_different_rms_before_vs_after_thresh = -1.5\nmask_different_rms_before_vs_after = np.log(p) > mask_different_rms_before_vs_after_thresh\n\nplt.hist(np.log(p[~np.isnan(p)]))\nplt.axvline(mask_different_rms_before_vs_after_thresh, color='r')\nplt.ylabel('freq')\nplt.xlabel('log( KS(before-transit, after-transit) )')\nplt.show()\n\ncombined_mask = mask_poorly_normalized_oot | mask_different_rms_before_vs_after\n\nprint(\"stars left after cuts:\", np.count_nonzero(table['kepid'][combined_mask]))\n\nks_in_out = table['ks:in_transit-vs-out_of_transit']\nb = table['B']\n\nthresh = 0.001\nmask_notable_intransit = ks_in_out < thresh\n\nplt.scatter(np.log(ks_in_out), b)\nplt.axvline(np.log(thresh), color='r')\n\nks_in_in = table['ks:in_transit&before_midtransit-vs-in_transit&after_midtransit']\nanderson_in_in = table['anderson:in_transit&before_midtransit-vs-in_transit&after_midtransit']\n\nb = table['B']\n\nthresh = 0.05\nmask_asymmetric_in 
= (ks_in_in < thresh) & (anderson_in_in < thresh)\n\nprint(table['kepid'][mask_asymmetric_in])\n\nplt.scatter(np.log(ks_in_in), b)\nplt.axvline(np.log(thresh), color='r')\n\nlarge_planets = table['R'].data > 0.1\nclose_in_planets = table['PER'] < 10\n\nclose_in_large_planets = (large_planets & close_in_planets) & combined_mask\nfar_out_small_planets = np.logical_not(close_in_large_planets) & combined_mask\n\nnp.count_nonzero(close_in_large_planets.data), np.count_nonzero(far_out_small_planets)\n\nplt.hist(np.log(table['ks:in_transit-vs-out_of_transit'])[close_in_large_planets], \n label='close in/large', alpha=0.4, normed=True)\nplt.hist(np.log(table['ks:in_transit-vs-out_of_transit'])[far_out_small_planets],\n label='far out/small', alpha=0.4, normed=True)\nplt.legend()\nplt.xlabel('log( KS(in vs. out) )')\nplt.ylabel('Fraction of stars')\nplt.title(\"Total activity\")\nplt.show()", "It seems that close-in, large exoplanets orbit more active stars (with larger in-transit RMS) than far out/small planets", "plt.hist(np.log(table['ks:in_transit&before_midtransit-vs-in_transit&after_midtransit'])[close_in_large_planets], \n label='close in/large', alpha=0.4, normed=True)\nplt.hist(np.log(table['ks:in_transit&before_midtransit-vs-in_transit&after_midtransit'])[far_out_small_planets],\n label='far out/small', alpha=0.4, normed=True)\nplt.legend()\nplt.xlabel('log( KS(in-transit (first half) vs. 
in-transit (second half)) )')\nplt.ylabel('Fraction of stars')\nplt.title(\"Residual asymmetry\")\nplt.show()", "Transit residuals are more asymmetric for far-out, small exoplanets.", "plt.loglog(table['ks:in_transit-vs-out_of_transit'], \n table['PER'], '.')\nplt.xlabel('transit depth scatter: log(ks)')\nplt.ylabel('period [d]')\n\nplt.loglog(table['PER'][close_in_large_planets], \n table['ks:in_transit-vs-out_of_transit'][close_in_large_planets], 'k.', label='close in & large')\nplt.loglog(table['PER'][far_out_small_planets], \n table['ks:in_transit-vs-out_of_transit'][far_out_small_planets], 'r.', label='far out | small')\nplt.legend()\nplt.ylabel('transit depth scatter: log(ks)')\nplt.xlabel('period [d]')\nax = plt.gca()\nax.invert_yaxis()", "Stars with short period planets have disproportionately larger scatter in transit", "plt.semilogx(table['ks:in_transit-vs-out_of_transit'][close_in_large_planets], \n table['B'][close_in_large_planets], 'k.', label='close in/large')\nplt.semilogx(table['ks:in_transit-vs-out_of_transit'][far_out_small_planets], \n table['B'][far_out_small_planets], 'r.', label='far out/small')\nplt.legend()\nax = plt.gca()\nax.set_xlabel('transit depth scatter: log(ks)')\nax.set_ylabel('impact parameter $b$')\n\nax2 = ax.twinx()\ny2 = 1 - np.linspace(0, 1, 5)\ny2labels = np.degrees(np.arccos(y2))[::-1]\nax2.set_yticks(y2)\nax2.set_yticklabels([int(round(i)) for i in y2labels])\n#ax2.set_ylim([0, 90])\nax2.set_ylabel('abs( latitude )')\n\ndef b_to_latitude_deg(b):\n return 90 - np.degrees(np.arccos(b))\n\nabs_latitude = b_to_latitude_deg(table['B'])\n\nplt.semilogx(table['ks:in_transit-vs-out_of_transit'][close_in_large_planets], \n abs_latitude[close_in_large_planets], 'k.', label='close in/large')\nplt.semilogx(table['ks:in_transit-vs-out_of_transit'][far_out_small_planets], \n abs_latitude[far_out_small_planets], 'r.', label='far out/small')\nplt.legend()\nax = plt.gca()\nax.set_xlabel('in-transit asymmetry: 
log(ks)')\nax.set_ylabel('stellar latitude (assume aligned)')\n\nfrom scipy.stats import binned_statistic\n\nbs = binned_statistic(abs_latitude[far_out_small_planets], \n np.log(table['ks:in_transit-vs-out_of_transit'][far_out_small_planets]),\n statistic='median', bins=10)\nbincenter = 0.5 * (bs.bin_edges[:-1] + bs.bin_edges[1:])\n\nfig, ax = plt.subplots(1, 2, figsize=(10, 5))\n\nax[0].plot(abs_latitude[far_out_small_planets], \n np.log(table['ks:in_transit-vs-out_of_transit'][far_out_small_planets]),\n 'k.', label='far out/small')\n\nax[0].plot(bincenter, bs.statistic, label='median')\n\nax[0].invert_yaxis()\nax[0].set_ylabel('transit depth scatter: log(ks)')\nax[0].set_xlabel('stellar latitude (assume aligned)')\n\n\nbs = binned_statistic(abs_latitude[close_in_large_planets], \n np.log(table['ks:in_transit-vs-out_of_transit'][close_in_large_planets]),\n statistic='median', bins=10)\nbincenter = 0.5 * (bs.bin_edges[:-1] + bs.bin_edges[1:])\n\nax[1].plot(abs_latitude[close_in_large_planets], \n np.log(table['ks:in_transit-vs-out_of_transit'][close_in_large_planets]),\n 'k.', label='close in/large')\n\nax[1].plot(bincenter, bs.statistic, label='median')\n\nax[1].invert_yaxis()\nax[1].set_ylabel('transit depth scatter: log(ks)')\nax[1].set_xlabel('stellar latitude (assume aligned)')\n\nax[0].set_title('Small | far out')\nax[1].set_title('large & close in')\n\nfrom scipy.stats import binned_statistic\n\nbs = binned_statistic(abs_latitude[far_out_small_planets], \n np.log(table['ks:in_transit-vs-out_of_transit'][far_out_small_planets]),\n statistic='median', bins=10)\nbincenter = 0.5 * (bs.bin_edges[:-1] + bs.bin_edges[1:])\n\nfig, ax = plt.subplots(1, 1, figsize=(10, 8))\n\nax.plot(abs_latitude[far_out_small_planets], \n np.log(table['ks:in_transit-vs-out_of_transit'][far_out_small_planets]),\n 'k.', label='far out | small')\n\nax.plot(bincenter, bs.statistic, 'k', label='median(far out | small)')\n\nax.set_ylabel('transit depth scatter: 
log(ks)')\nax.set_xlabel('stellar latitude (assume aligned)')\n\n\nbs = binned_statistic(abs_latitude[close_in_large_planets], \n np.log(table['ks:in_transit-vs-out_of_transit'][close_in_large_planets]),\n statistic='median', bins=10)\nbincenter = 0.5 * (bs.bin_edges[:-1] + bs.bin_edges[1:])\n\nax.plot(abs_latitude[close_in_large_planets], \n np.log(table['ks:in_transit-vs-out_of_transit'][close_in_large_planets]),\n 'r.', label='close in & large')\n\nax.plot(bincenter, bs.statistic, 'r', label='median(close in & large)')\n\n# ax.set_ylabel('transit depth scatter: log(ks)')\n# ax.set_xlabel('stellar latitude (assume aligned)')\nax.legend()\nax.invert_yaxis()\n\nax.set_ylim([0, -150])\nplt.show()\n#ax.set_title('Small | far out')\n#ax.set_title('large & close in')", "In the above plot, the vertical axis is the significance of the scatter in-transit vs. out-of-transit (upward = more significant scatter).\nA transiting planet with a higher $b$ (high latitudes) will occult a larger fraction of the stellar surface (per instant) than transiting planets at lower $b$ (low latitudes). Does that mean that we should expect more scatter at higher latitudes?" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
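The cuts in the notebook above all threshold on two-sample Kolmogorov-Smirnov p-values (columns like `ks:in_transit-vs-out_of_transit`). A minimal, self-contained sketch of what one such comparison looks like, using synthetic residuals rather than the notebook's Kepler light curves (the scales and sizes here are made up for illustration):

```python
import numpy as np
from scipy.stats import ks_2samp

# Toy stand-ins for in-transit and out-of-transit flux residuals:
# same mean, but larger scatter outside of transit.
rng = np.random.RandomState(42)
in_transit = rng.normal(loc=0.0, scale=1.0, size=500)
out_of_transit = rng.normal(loc=0.0, scale=1.5, size=500)

# Two-sample KS test: a small p-value means the two flux
# distributions differ significantly, which is the quantity the
# notebook's log(p) thresholds are cutting on.
stat, p_value = ks_2samp(in_transit, out_of_transit)
print('KS statistic = {:.3f}, p-value = {:.2e}'.format(stat, p_value))
```

Note that the KS test is sensitive to any difference between the distributions (location or scale), which is why the notebook pairs it with a t-test (means only) and an Anderson-Darling test when deciding which stars to mask.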
ikegwukc/INFO597-DeepLearning-GameTheory
NPlayerGames/NPlayerGames.ipynb
mit
[ "N Player Game\nIterated Peace War Game\n\nI've created a version of the peace war game. The scenario for this game is that members of lcdm own companies across the United States.\nCompanies in this game have 3 actions: peace, compromise, or war. If a company (or agent) chooses peace they are looking for peaceful solutions with rival agents. If an agent chooses compromise they are willing to find some middle ground but have taken certain precautions to ensure that they are in a better position if a rival agent chooses war. If an agent chooses war they are willing to inflict harm on other agents in order to achieve a greater payoff than the other 2 actions. The Payoff matrix for 2 players:\n\nThe Payoff matrices for 3 players are below:\n\nThis payoff matrix contains the utility values of all possible outcomes if the 3rd agent chooses the peace action.\n\nThis payoff matrix contains the utility values of all possible outcomes if the 3rd agent chooses the compromise action.\n\nThis payoff matrix contains the utility values of all possible outcomes if the 3rd agent chooses the war action.\nI could continue to write payoff matrices up to N players; however, it becomes tedious to represent all possible states using payoff matrices, ergo game trees are usually used for iterated or N Player Games.", "import gmaps, os # Used for interactive visualizations\nfrom game_types import NPlayerGame\nimport tensorflow as tf\nimport pandas as pd", "Configuring stuff for visualizations", "gmaps.configure(api_key=os.environ[\"GOOGLE_API_KEY\"])\nlocs = [\n [(37.760851, -122.443118), (37.760853, -122.443120)], # Silicon Valley\n [(40.092034, -88.238687), (40.092035, -88.238688)], # Urbana\n [(25.777052, -80.194957), (25.777054, -80.194959)], # Florida\n [(40.705773, -74.010861), (40.705774, -74.010863)], # Manhattan\n [(35.898512, -78.865059), (35.898513, -78.865060)], # NC\n [(42.278052, -83.738997), (42.278053, -83.738998)], # Michigan\n [(35.844058, -106.287484), (35.844059, 
-106.287485)], # New Mexico\n [(33.745074, -84.390840), (33.745076, -84.390842)], # Georgia\n [(32.758009, -96.805532), (32.758011, -96.805534)], # Texas\n [(47.653022, -122.305569), (47.653024, -122.305571)], # Washington\n [(47.653532, -100.347697), (47.653533, -100.347698)], # ND\n [(34.069110, -118.246972), (34.069112, -118.246974)], # Sol Cal\n [(44.723362, -111.071472), (44.723363, -111.071473)] # WY\n]\nnames = ['Silicon Valley', 'Illinois', 'Florida', 'Manhattan', 'North Carolina',\n 'Michigan', 'New Mexico', 'Georgia', 'Texas', 'Washington', 'North Dakota', 'Sol Cal', 'Wyoming']", "Playing Peace War Game with 14 Players for 650,000 iterations", "tf.reset_default_graph()\ngame = NPlayerGame(n_players=14) # Create 14 agents of random type\ngame.play(650000) # Play 650,000 iterations", "Grabbing scores of each agent", "agent_name, agent_score = [], []\nfor agent in game.data:\n if agent != 'id':\n agent_name.append(agent)\n agent_score.append(sum(game.data[agent]))", "Converting the current score range to a smaller range while maintaining the ratio", "old_range = max(agent_score) - min(agent_score)\nnew_range = 35\nnew_agent_scores = []\nfor old_val in agent_score:\n new_agent_scores.append( (((old_val-min(agent_score))*new_range)/old_range) + 10 ) \n#print(old_range)", "Displaying Scores and Summary of Game:", "layers = []\nfig = gmaps.Map()\n#print('Agent Name \\t\\t\\t| Location \\t\\t\\t| Total Score')\nfor i, loc in enumerate(locs):\n #print('{0} \\t\\t\\t| {1} \\t\\t\\t| \\t\\t{2}'.format(agent_name[i], names[i], new_agent_scores[i]))\n _layer = gmaps.heatmap_layer(loc, point_radius=int(new_agent_scores[i]))\n fig.add_layer(_layer)\nd = {'Agent Name': agent_name, 'Location': names, 'Total Score': agent_score, 'Radius Size': new_agent_scores}\nfig", "(The Interactive Map may not be rendered on Github)", "pd.DataFrame(d)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
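The score-rescaling step in the notebook above maps raw agent scores onto a small display range so they can be used as heatmap point radii. The same min-max transformation, factored into a reusable helper (a sketch; the function name and the sample scores are illustrative, not from the notebook):

```python
def rescale(values, new_min=10, new_span=35):
    """Map values onto [new_min, new_min + new_span], preserving ratios,
    like the notebook's (x - min) * new_range / old_range + 10 step."""
    old_min, old_max = min(values), max(values)
    old_range = old_max - old_min
    return [(v - old_min) * new_span / old_range + new_min for v in values]

scores = [60, 120, 300, 480]
print(rescale(scores))  # smallest score maps to 10, largest to 45
```

One caveat the notebook shares: if every agent ends up with the same score, `old_range` is zero and the division fails, so a guard for that degenerate case would be needed in practice.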
IBMDecisionOptimization/docplex-examples
examples/mp/jupyter/logical_cts.ipynb
apache-2.0
[ "Use logical constraints with decision optimization\nThis tutorial includes everything you need to set up decision optimization engines and build a mathematical programming model leveraging logical constraints.\nWhen you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.\n\nThis notebook is part of Prescriptive Analytics for Python\nIt requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account\nand you can start using IBM Cloud Pak for Data as a Service right away).\nCPLEX is available on <i>IBM Cloud Pak for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:\n - <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:\n - <i>Python 3.x</i> runtime: Community edition\n - <i>Python 3.x + DO</i> runtime: full edition\n - <i>Cloud Pak for Data</i>: Community edition is installed by default. Please install the DO addon in Watson Studio Premium for the full edition\n\nTable of contents:\n\nDescribe the business problem\nHow decision optimization (prescriptive analytics) can help\nUse decision optimization\nStep 1: Import the library\nStep 2: Learn about constraint truth values\nStep 3: Learn about equivalence constraints\n\n\nSummary\n\n\nLogical constraints let you use the truth value of constraints inside the model. The truth value of a constraint \nis a binary variable equal to 1 when the constraint is satisfied, and equal to 0 when not. Adding a constraint to a model ensures that it is always satisfied. \nWith logical constraints, one can use the truth value of a constraint inside the model, allowing you to choose dynamically whether a constraint is to be satisfied (or not).\nHow decision optimization can help\n\n\nPrescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. 
With this insight, your organization can make better decisions and have greater control of business outcomes. \n\n\nPrescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. \n\n\nPrescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.\n<br/>\n\n\n<u>With prescriptive analytics, you can:</u> \n\nAutomate the complex decisions and trade-offs to better manage your limited resources.\nTake advantage of a future opportunity or mitigate a future risk.\nProactively update recommendations based on changing events.\nMeet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.\n\nUse decision optimization\nStep 1: Import the library\nRun the following code to import the Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.", "import sys\ntry:\n import docplex.mp\nexcept:\n raise Exception('Please install docplex. See https://pypi.org/project/docplex/')", "A restart of the kernel might be needed.\nStep 2: Learn about constraint truth values\nAny discrete linear constraint can be associated with a binary variable that holds the truth value of the constraint. \nBut first, let's explain what a discrete constraint is.\nDiscrete linear constraint\nA discrete linear constraint is built from discrete coefficients and discrete variables, that is, variables with type integer or binary. 
\nFor example, assuming x and y are integer variables:\n\n2x+3y == 1 is discrete\nx+y = 3.14 is not (because of 3.14)\n1.1 x + 2.2 y &lt;= 3 is not because of the non-integer coefficients 1.1 and 2.2\n\nThe truth value of an added constraint is always 1\nThe truth value of a linear constraint is accessed by the status_var property. This property returns a binary variable which can be used anywhere a variable can. The value of the truth value variable and the constraint are linked, both ways:\n\na constraint is satisfied if and only if its truth value variable equals 1\na constraint is not satisfied if and only if its truth value variable equals 0.\n\nIn the following small model, we show that the truth value of a constraint which has been added to a model is always equal to 1.", "from docplex.mp.model import Model\n\nm1 = Model()\nx = m1.integer_var(name='ix')\ny = m1.integer_var(name='iy')\nct = m1.add(x + y <= 3)\n# access the truth value of a linear constraint\nct_truth = ct.status_var\nm1.maximize(x+y)\nassert m1.solve()\nprint('the truth value of [{0!s}] is {1}'.format(ct, ct_truth.solution_value))", "The truth value of a constraint not added to a model is free\nA constraint that is not added to a model has no effect. 
Its truth value is free: it can be either 1 or 0.\nIn the following example, both x and y are set to their upper bound, so that the constraint is not satisfied; hence the truth value is 0.", "m2 = Model(name='logical2')\nx = m2.integer_var(name='ix', ub=4)\ny = m2.integer_var(name='iy', ub=4)\nct = (x + y <= 3)\nct_truth = ct.status_var # not m2.add() here!\nm2.maximize(x+y)\nassert m2.solve()\nm2.print_solution()\nprint('the truth value of [{0!s}] is {1}'.format(ct, ct_truth.solution_value))", "Using constraint truth values in modeling\nWe have learned about the truth value variable of linear constraints, but there's more.\nLinear constraints can be freely used in expressions: Docplex will then substitute the constraint's truth value \nvariable in the expression. \nLet's experiment again with a toy model: in this model,\nwe want to express that when x == 2 is false, then y == 4 must also be false.\nTo express this, it suffices to say that the truth value of y == 4 is less than or equal \nto the truth value of x == 2. When x == 2 is false, its truth value is 0, hence the truth value of y == 4 is also zero, and y cannot be equal to 4.\nHowever, as shown in the model below, it is not necessary to use the status_var property: using\nthe constraints in a comparison expression works fine.\nAs we maximize y, y has value 4 in the optimal solution (it is the upper bound), and consequently the constraint ct_y4 is satisfied. 
From the inequality between truth values,\nit follows that the truth value of ct_x2 equals 1 and x is equal to 2.\nUsing the constraints in the inequality has silently converted each constraint into its truth value.", "m3 = Model(name='logical3')\nx = m3.integer_var(name='ix', ub=4)\ny = m3.integer_var(name='iy', ub=4)\nct_x2 = (x == 2)\nct_y4 = (y == 4)\n# use constraints in comparison\nm3.add( ct_y4 <= ct_x2 )\nm3.maximize(y)\nassert m3.solve()\n# expected solution x==2, and y==4.\nm3.print_solution()", "Constraint truth values can be used with arithmetic operators, just as variables can. In the next model, we express a (slightly) more complex constraint:\n\neither x is equal to 3, or both y and z are equal to 5\n\nLet's see how we can express this easily with truth values:", "m31 = Model(name='logical31')\nx = m31.integer_var(name='ix', ub=4)\ny = m31.integer_var(name='iy', ub=10)\nz = m31.integer_var(name='iz', ub=10)\nct_x3 = (x == 3)\nct_y5 = (y == 5)\nct_z5 = (z == 5)\n#either ct_x3 is true or -both- ct_y5 and ct_z5 must be true\nm31.add( 2 * ct_x3 + (ct_y5 + ct_z5) == 2)\n# force x to be at most 2: it cannot be equal to 3!\nm31.add(x <= 2)\n# maximize sum of x,y,z\nm31.maximize(x+y+z)\nassert m31.solve()\n# the expected solution is: x=2, y=5, z=5\nassert m31.objective_value == 12\nm31.print_solution()", "As we have seen, constraints can be used in expressions. This includes the Model.sum() and Model.dot() aggregation methods.\nIn the next model, we define ten variables, one of which must be equal to 3 (we don't care which one, for now). As we maximize the sum of all xs variables, all will end up equal to their upper bound, except for one.", "m4 = Model(name='logical4')\nxs = m4.integer_var_list(10, ub=100)\ncts = [xi==3 for xi in xs]\nm4.add( m4.sum(cts) == 1)\nm4.maximize(m4.sum(xs))\nassert m4.solve()\nm4.print_solution()", "As we can see, all variables but one are set to their upper bound of 100. We cannot predict which variable will be set to 3. 
\nHowever, let's imagine that we prefer variables with lower indices to be set to 3; how can we express this preference? \nThe answer is to add an extra term to the objective, using a scalar product of constraint truth values", "preference = m4.dot(cts, (k+1 for k in range(len(xs))))\n# we prefer lower indices for satisfying the x==3 constraint\n# so the final objective is a maximize of sum of xs -minus- the preference\nm4.maximize(m4.sum(xs) - preference)\nassert m4.solve()\nm4.print_solution()", "As expected, the x variable set to 3 now is the first one.\nUsing truth values to negate a constraint\nTruth values can be used to negate a complex constraint, by forcing its truth value to be equal to 0.\nIn the next model, we illustrate how an inequality constraint can be negated by forcing its truth value to zero. This negation prevents x + y from reaching 7, as it would without it.\nFinally, the objective is 6 instead of 8.", "m5 = Model(name='logical5')\nx = m5.integer_var(name='ix', ub=4)\ny = m5.integer_var(name='iy', ub=4)\n# this is the inequality constraint we want to negate\nct_xy7 = (y + x >= 7)\n# forcing its truth value to zero means the constraint is not satisfied.\n# note how we use a constraint in an expression\nnegation = m5.add( ct_xy7 == 0)\n# maximize x+y should yield both variables at 4, but x+y cannot be greater than or equal to 7\nm5.maximize(x + y)\nassert m5.solve()\nm5.print_solution()\n# expecting 6 as objective, not 8\nassert m5.objective_value == 6\n\n# now remove the negation\nm5.remove_constraint(negation)\n# and solve again\nassert m5.solve()\n# the objective is 8 as expected: both x and y are equal to 4\nassert m5.objective_value == 8\nm5.print_solution()", "Summary\nWe have seen that linear constraints have an associated binary variable, its truth value, whose value is linked to whether or not the constraint is satisfied. 
\nSecond, linear constraints can be freely mixed with variables in expressions to express meta-constraints, that is, constraints\nabout constraints. As an example, we have shown how to use truth values to negate constraints.\nNote: the != (not_equals) operator\nSince version 2.9, Docplex provides a 'not_equal' operator between discrete expressions. Of course, this is implemented using truth values, but the operator provides a convenient way to express this constraint.", "m6 = Model(name='logical6')\nx = m6.integer_var(name='ix', ub=4)\ny = m6.integer_var(name='iy', ub=4)\n# x must be strictly less than y\nm6.add(x +1 <= y)\nm6.add(x != 3)\nm6.add(y != 4)\n# bound the sum of x and y\nm6.add(x+y <= 7)\n# maximize x+y should yield both variables at 4, \n# but here: x < y, y cannot be 4 thus x cannot be 3 either so we get x=2, y=3\nm6.maximize(x + y)\nassert m6.solve()\nm6.print_solution()\n# expecting 5 as objective, not 8\nassert m6.objective_value == 5\n", "Step 3: Learn about equivalence constraints\nAs we have seen, using a constraint in expressions automatically generates a truth value variable, whose value is linked to the status of the constraint. \nHowever, in some cases, it can be useful to relate the status of a constraint to an existing binary variable. This is the purpose of equivalence constraints.\nAn equivalence constraint relates an existing binary variable to the status of a discrete linear constraint, in both directions. The syntax is:\n`Model.add_equivalence(bvar, linear_ct, active_value, name)`\n\n\nbvar is the existing binary variable\nlinear-ct is a discrete linear constraint\nactive_value can take values 1 or 0 (the default is 1)\nname is an optional string to name the equivalence.\n\nIf the binary variable bvar equals 1, then the constraint is satisfied. 
Conversely, if the constraint is satisfied, the binary variable is set to 1.", "m7 = Model(name='logical7')\nsize = 7\nil = m7.integer_var_list(size, name='i', ub=10)\njl = m7.integer_var_list(size, name='j', ub=10)\nbl = m7.binary_var_list(size, name='b')\nfor k in range(size):\n # for each k, relate bl_k to il_k == 5 *and* jl_k == 7\n m7.add_equivalence(bl[k], il[k] == 5)\n m7.add_equivalence(bl[k], jl[k] == 7)\n# now maximize sum of bs\nm7.maximize(m7.sum(bl))\nassert m7.solve()\nm7.print_solution()", "Step 4: Learn about indicator constraints\nThe equivalence constraint described in the previous section links the value of an existing binary variable to the satisfaction of a linear constraint. In certain cases, it is sufficient to link from an existing binary variable to the constraint, but not the other way. This is what indicator constraints do.\nThe syntax is very similar to equivalence:\n`Model.add_indicator(bvar, linear_ct, active_value=1, name=None)`\n\n\nbvar is the existing binary variable\nlinear-ct is a discrete linear constraint\nactive_value can take values 1 or 0 (the default is 1)\nname is an optional string to name the indicator.\n\nThe indicator constraint works as follows: if the binary variable is set to 1, the constraint is satisfied; if the binary variable is set to 0, anything can happen.\nOne noteworthy difference between indicators and equivalences is that, for indicators, the linear constraint need not be discrete.\nIn the following small model, we first solve without the indicator: both b and x are set to their upper bound, and the final objective is 200.\nThen we add an indicator stating that when b equals 1, then x must be at most 3.14; the resulting objective is 103.14, as b is set to 1, which triggers the x &lt;= 3.14 constraint.\nNote that the right-hand side constraint is not discrete (because of 3.14).", "m8 = Model(name='logical8')\nx = m8.continuous_var(name='x', ub=100)\nb = m8.binary_var(name='b')\n\nm8.maximize(100*b +x)\nassert 
m8.solve()\nassert m8.objective_value == 200\nm8.print_solution()\nind_pi = m8.add_indicator(b, x <= 3.14)\nassert m8.solve()\nassert m8.objective_value <= 104\nm8.print_solution()", "Step 5: Learn about if-then\nIn this section we explore the Model.add_if_then construct which links the truth values of two constraints:\nModel.add_if_then(if_ct, then_ct) ensures that, when constraint if_ct is satisfied, then then_ct is also satisfied.\nWhen if_ct is not satisfied, then_ct is free to be satisfied or not.\nThe syntax is:\n`Model.add_if_then(if_ct, then_ct, negate=False)`\n\n\nif_ct is a discrete linear constraint\nthen_ct is any linear constraint (not necessarily discrete),\nnegate is an optional flag to reverse the logic, that is, satisfy then_ct when if_ct is not satisfied (more on this later)\n\nAs for indicators, the then_ct need not be discrete.\nModel.add_if_then(if_ct, then_ct) is roughly equivalent to Model.add_indicator(if_ct.status_var, then_ct).", "m9 = Model(name='logical9')\nx = m9.continuous_var(name='x', ub=100)\ny = m9.integer_var(name='iy', ub = 11)\nz = m9.integer_var(name='iz', ub = 13)\n\nm9.add_if_then(y+z >= 10, x <= 3.14)\n\n# y and z are pushed to their ub, so x is down to 3.14\nm9.maximize(x + 100*(y + z))\nm9.solve()\nm9.print_solution()", "In this second variant, the objective coefficient for (y+z) is 2 instead of 100, so x dominates the objective and reaches its upper bound, while (y+z) must be at most 9, which is what we observe.", "# now x dominates: it reaches its ub, while y+z stays at most 9\nm9.maximize(x + 2 *(y + z))\nm9.solve()\nm9.print_solution()\n\nassert abs(m9.objective_value - 118) <= 1e-2", "Summary\nWe have seen that linear constraints have an associated binary variable, its truth value, whose value is linked to whether or not the constraint is satisfied. \nSecond, linear constraints can be freely mixed with variables in expressions to express meta-constraints, that is, constraints\nabout constraints. 
As an example, we have shown how to use truth values to negate constraints.\nIn addition, we have learned to use equivalence, indicator and if_then constraints.\nYou learned how to set up and use the IBM Decision Optimization CPLEX Modeling for Python to formulate a Mathematical Programming model with logical constraints.\nReferences\n\nDecision Optimization CPLEX Modeling for Python documentation\nIBM Decision Optimization\nNeed help with DOcplex or to report a bug? Please go here\nContact us at dofeedback@wwpdl.vnet.ibm.com\n\nCopyright &copy; 2017-2019 IBM. Sample Materials." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
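The truth-value arithmetic used in model logical31 above can be sanity-checked without a CPLEX installation. The sketch below brute-forces the same small model in plain Python; the `truth` helper is a hypothetical stand-in for Docplex's status variables, not part of the library.

```python
# Brute-force check of the truth-value arithmetic in model 'logical31':
# either x == 3, or both y == 5 and z == 5, with x forced to be at most 2.

def truth(ct):
    # a constraint's truth value: 1 if satisfied, 0 otherwise
    return 1 if ct else 0

best = None
for x in range(5):           # ub=4
    for y in range(11):      # ub=10
        for z in range(11):  # ub=10
            # the meta-constraint: 2*t(x==3) + t(y==5) + t(z==5) == 2
            if 2 * truth(x == 3) + (truth(y == 5) + truth(z == 5)) != 2:
                continue
            if x > 2:        # the extra constraint x <= 2
                continue
            if best is None or x + y + z > best[0]:
                best = (x + y + z, x, y, z)

print(best)  # (12, 2, 5, 5): the same optimum the CPLEX model reports
```

Since x cannot equal 3, the only way to reach 2 on the left-hand side is to satisfy both y == 5 and z == 5, which is exactly what the enumeration finds.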
feststelltaste/software-analytics
courses/20190918_Uni_Leipzig/Parsing and Analysing vmstat Data the Hard Way (Demo Notebook).ipynb
gpl-3.0
[ "This notebook shows you step by step how you can transform text data from a vmstat output file into a pandas DataFrame.", "%less ../datasets/vmstat_loadtest.log", "Data Input\nIn this version, I'll guide you through data parsing step by step.", "import pandas as pd\n\nraw = pd.read_csv(\"../datasets/vmstat_loadtest.log\", skiprows=1)\nraw.head()\n\ncolumns = raw.columns.str.split().values[0]\nprint(columns)\n\ndata = raw.iloc[:,0].str.split(n=len(columns)-1).apply(pd.Series)\ndata.head()\n\ndata.columns = columns\ndata.head()\n\nvmstat = data.iloc[:,:-1].apply(pd.to_numeric)\nvmstat['UTC'] = pd.to_datetime(data['UTC'])\nvmstat.head()", "Data Selection", "cpu = vmstat[['us','sy','id','wa', 'st']]\ncpu.head()", "Visualization", "%matplotlib inline\ncpu.plot.area();" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
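The same split-header / split-rows / convert steps can be reproduced with nothing but the standard library. The two-row sample below is made up to mimic vmstat's layout (the real log file is not available here), so the numbers are assumptions.

```python
# A library-free sketch of the parsing steps above: skip the group-header line,
# split the column-name line, then split each data row into one field per column
# and convert the numeric fields.
sample = """\
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu----- -----timestamp-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st                 UTC
 1  0      0 403128  69416 407176    0    0     1     2   30   60  2  1 96  1  0 2017-09-27 06:47:22
 0  0      0 403120  69416 407176    0    0     0     0   25   45  1  0 98  1  0 2017-09-27 06:47:23
"""

lines = sample.splitlines()
columns = lines[1].split()                   # like raw.columns.str.split()
rows = []
for line in lines[2:]:
    # like data.str.split(n=len(columns)-1): the last field keeps its inner space
    parts = line.split(None, len(columns) - 1)
    record = {col: (val if col == 'UTC' else int(val))
              for col, val in zip(columns, parts)}
    rows.append(record)

# like the 'Data Selection' step: keep only the cpu columns
cpu = [{col: row[col] for col in ('us', 'sy', 'id', 'wa', 'st')} for row in rows]
print(cpu)
```

The `maxsplit` trick is the key design choice: splitting into `len(columns) - 1` pieces leaves the date and time of the timestamp glued together in the final `UTC` field.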
adukic/nd101
seq2seq/sequence_to_sequence_implementation.ipynb
mit
[ "Character Sequence to Sequence\nIn this notebook, we'll build a model that takes in a sequence of letters, and outputs a sorted version of that sequence. We'll do that using what we've learned so far about Sequence to Sequence models.\n<img src=\"images/sequence-to-sequence.jpg\"/>\nDataset\nThe dataset lives in the /data/ folder. At the moment, it is made up of the following files:\n * letters_source.txt: The list of input letter sequences. Each sequence is its own line. \n * letters_target.txt: The list of target sequences we'll use in the training process. Each sequence here is a response to the input sequence in letters_source.txt with the same line number.", "import helper\n\nsource_path = 'data/letters_source.txt'\ntarget_path = 'data/letters_target.txt'\n\nsource_sentences = helper.load_data(source_path)\ntarget_sentences = helper.load_data(target_path)", "Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.", "source_sentences[:50].split('\\n')", "target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the line from source_sentences. 
target_sentences contains the sorted characters of the corresponding line.", "target_sentences[:50].split('\\n')", "Preprocess\nTo do anything useful with it, we'll need to turn the characters into a list of integers:", "def extract_character_vocab(data):\n special_words = ['<pad>', '<unk>', '<s>', '<\\s>']\n\n set_words = set([character for line in data.split('\\n') for character in line])\n int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}\n vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}\n\n return int_to_vocab, vocab_to_int\n\n# Build int2letter and letter2int dicts\nsource_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)\ntarget_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)\n\n# Convert characters to ids\nsource_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<unk>']) for letter in line] for line in source_sentences.split('\\n')]\ntarget_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<unk>']) for letter in line] for line in target_sentences.split('\\n')]\n\nprint(\"Example source sequence\")\nprint(source_letter_ids[:3])\nprint(\"\\n\")\nprint(\"Example target sequence\")\nprint(target_letter_ids[:3])", "The last step in the preprocessing stage is to determine the longest sequence size in the dataset we'll be using, then pad all the sequences to that length.", "def pad_id_sequences(source_ids, source_letter_to_int, target_ids, target_letter_to_int, sequence_length):\n new_source_ids = [sentence + [source_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \\\n for sentence in source_ids]\n new_target_ids = [sentence + [target_letter_to_int['<pad>']] * (sequence_length - len(sentence)) \\\n for sentence in target_ids]\n\n return new_source_ids, new_target_ids\n\n\n# Use the longest sequence as sequence length\nsequence_length = max(\n [len(sentence) for sentence in source_letter_ids] + 
[len(sentence) for sentence in target_letter_ids])\n\n# Pad all sequences up to sequence length\nsource_ids, target_ids = pad_id_sequences(source_letter_ids, source_letter_to_int, \n target_letter_ids, target_letter_to_int, sequence_length)\n\nprint(\"Sequence Length\")\nprint(sequence_length)\nprint(\"\\n\")\nprint(\"Input sequence example\")\nprint(source_ids[:3])\nprint(\"\\n\")\nprint(\"Target sequence example\")\nprint(target_ids[:3])", "This is the final shape we need them to be in. We can now proceed to building the model.\nModel\nCheck the Version of TensorFlow\nThis will check to make sure you have the correct version of TensorFlow", "from distutils.version import LooseVersion\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))", "Hyperparameters", "# Number of Epochs\nepochs = 60\n# Batch Size\nbatch_size = 128\n# RNN Size\nrnn_size = 50\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 13\ndecoding_embedding_size = 13\n# Learning Rate\nlearning_rate = 0.001", "Input", "input_data = tf.placeholder(tf.int32, [batch_size, sequence_length])\ntargets = tf.placeholder(tf.int32, [batch_size, sequence_length])\nlr = tf.placeholder(tf.float32)", "Sequence to Sequence\nThe decoder is probably the most complex part of this model. We need to declare a decoder for the training phase, and a decoder for the inference/prediction phase. These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used when we deploy the model).\nFirst, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM.\nThen, we'll need to hook up a fully connected layer to the output of the decoder. 
The output of this layer tells us which word the RNN is choosing to output at each time step.\nLet's first look at the inference/prediction decoder. It is the one we'll use when we deploy our model in the wild (even though it comes second in the actual code).\n<img src=\"images/sequence-to-sequence-inference-decoder.png\"/>\nWe'll hand our encoder hidden state to the inference decoder and have it process its output. TensorFlow handles most of the logic for us. We just have to use tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder and supply them with the appropriate inputs.\nNotice that the inference decoder feeds the output of each time step as an input to the next.\nAs for the training decoder, we can think of it as looking like this:\n<img src=\"images/sequence-to-sequence-training-decoder.png\"/>\nThe training decoder does not feed the output of each time step to the next. Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters).\nEncoding\n\nEmbed the input data using tf.contrib.layers.embed_sequence\nPass the embedded input into a stack of RNNs. 
Save the RNN state and ignore the output.", "source_vocab_size = len(source_letter_to_int)\n\n# Encoder embedding\nenc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)\n\n# Encoder\nenc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)\n_, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, dtype=tf.float32)", "Process Decoding Input", "import numpy as np\n\n# Process the input we'll feed to the decoder\nending = tf.strided_slice(targets, [0, 0], [batch_size, -1], [1, 1])\ndec_input = tf.concat([tf.fill([batch_size, 1], target_letter_to_int['<s>']), ending], 1)\n\ndemonstration_outputs = np.reshape(range(batch_size * sequence_length), (batch_size, sequence_length))\n\nsess = tf.InteractiveSession()\nprint(\"Targets\")\nprint(demonstration_outputs[:2])\nprint(\"\\n\")\nprint(\"Processed Decoding Input\")\nprint(sess.run(dec_input, {targets: demonstration_outputs})[:2])", "Decoding\n\nEmbed the decoding input\nBuild the decoding RNNs\nBuild the output layer in the decoding scope, so the weight and bias can be shared between the training and inference decoders.", "target_vocab_size = len(target_letter_to_int)\n\n# Decoder Embedding\ndec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\ndec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n\n# Decoder RNNs\ndec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)\n\nwith tf.variable_scope(\"decoding\") as decoding_scope:\n # Output Layer\n output_fn = lambda x: tf.contrib.layers.fully_connected(x, target_vocab_size, None, scope=decoding_scope)", "Decoder During Training\n\nBuild the training decoder using tf.contrib.seq2seq.simple_decoder_fn_train and tf.contrib.seq2seq.dynamic_rnn_decoder.\nApply the output layer to the output of the training decoder", "with tf.variable_scope(\"decoding\") as decoding_scope:\n # Training 
Decoder\n train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(enc_state)\n train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(\n dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)\n \n # Apply output function\n train_logits = output_fn(train_pred)", "Decoder During Inference\n\nReuse the weights and biases from the training decoder using tf.variable_scope(\"decoding\", reuse=True)\nBuild the inference decoder using tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder.\nThe output function is applied to the output in this step", "with tf.variable_scope(\"decoding\", reuse=True) as decoding_scope:\n # Inference Decoder\n infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(\n output_fn, enc_state, dec_embeddings, target_letter_to_int['<s>'], target_letter_to_int['<\\s>'], \n sequence_length - 1, target_vocab_size)\n inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)", "Optimization\nOur loss function is tf.contrib.seq2seq.sequence_loss provided by the TensorFlow seq2seq module. It calculates a weighted cross-entropy loss for the output logits.", "# Loss function\ncost = tf.contrib.seq2seq.sequence_loss(\n train_logits,\n targets,\n tf.ones([batch_size, sequence_length]))\n\n# Optimizer\noptimizer = tf.train.AdamOptimizer(lr)\n\n# Gradient Clipping\ngradients = optimizer.compute_gradients(cost)\ncapped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\ntrain_op = optimizer.apply_gradients(capped_gradients)", "Train\nWe're now ready to train our model. 
If you run into OOM (out of memory) issues during training, try to decrease the batch_size.", "import numpy as np\n\ntrain_source = source_ids[batch_size:]\ntrain_target = target_ids[batch_size:]\n\nvalid_source = source_ids[:batch_size]\nvalid_target = target_ids[:batch_size]\n\nsess.run(tf.global_variables_initializer())\n\nfor epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch) in enumerate(\n helper.batch_data(train_source, train_target, batch_size)):\n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch, targets: target_batch, lr: learning_rate})\n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch})\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_source})\n\n train_acc = np.mean(np.equal(target_batch, np.argmax(batch_train_logits, 2)))\n valid_acc = np.mean(np.equal(valid_target, np.argmax(batch_valid_logits, 2)))\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'\n .format(epoch_i, batch_i, len(source_ids) // batch_size, train_acc, valid_acc, loss))", "Prediction", "input_sentence = 'hello'\n\n\ninput_sentence = [source_letter_to_int.get(word, source_letter_to_int['<unk>']) for word in input_sentence.lower()]\ninput_sentence = input_sentence + [0] * (sequence_length - len(input_sentence))\nbatch_shell = np.zeros((batch_size, sequence_length))\nbatch_shell[0] = input_sentence\nchatbot_logits = sess.run(inference_logits, {input_data: batch_shell})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in input_sentence]))\nprint(' Input Words: {}'.format([source_int_to_letter[i] for i in input_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in np.argmax(chatbot_logits, 1)]))\nprint(' Chatbot Answer Words: {}'.format([target_int_to_letter[i] for i in np.argmax(chatbot_logits, 1)]))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
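The preprocessing above depends on the helper module, the data files, and TF 1.x. The vocabulary-and-padding steps themselves can be sketched self-containedly; the three input lines below are assumptions standing in for letters_source.txt, not the real dataset.

```python
# A self-contained sketch of the vocabulary + padding preprocessing steps.
special_words = ['<pad>', '<unk>', '<s>', '<\\s>']

def extract_character_vocab(data):
    # sorted() makes the ids deterministic across runs
    set_words = sorted(set(ch for line in data.split('\n') for ch in line))
    int_to_vocab = dict(enumerate(special_words + set_words))
    vocab_to_int = {ch: i for i, ch in int_to_vocab.items()}
    return int_to_vocab, vocab_to_int

source = "bsaqq\nnpy\nlbwuj"       # made-up stand-in for letters_source.txt
_, letter_to_int = extract_character_vocab(source)

# convert characters to ids, falling back to <unk> for unseen characters
ids = [[letter_to_int.get(ch, letter_to_int['<unk>']) for ch in line]
       for line in source.split('\n')]

# pad every sequence with the <pad> id up to the longest sequence length
sequence_length = max(len(s) for s in ids)
padded = [s + [letter_to_int['<pad>']] * (sequence_length - len(s)) for s in ids]
print(sequence_length, [len(s) for s in padded])
```

Because the special tokens come first, `<pad>` always maps to id 0, which is why padded positions show up as zeros in the printed id sequences above.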
Kaggle/learntools
notebooks/game_ai/raw/tut3.ipynb
apache-2.0
[ "Introduction\nIn the previous tutorial, you learned how to build an agent with one-step lookahead. This agent performs reasonably well, but definitely still has room for improvement! For instance, consider the potential moves in the figure below. (Note that we use zero-based numbering for the columns, so the leftmost column corresponds to col=0, the next column corresponds to col=1, and so on.)\n<center>\n<img src=\"https://i.imgur.com/aAYyy2I.png\" width=90%><br/>\n</center>\nWith one-step lookahead, the red player picks one of column 5 or 6, each with 50% probability. But, column 5 is clearly a bad move, as it lets the opponent win the game in only one more turn. Unfortunately, the agent doesn't know this, because it can only look one move into the future. \nIn this tutorial, you'll use the minimax algorithm to help the agent look farther into the future and make better-informed decisions.\nMinimax\nWe'd like to leverage information from deeper in the game tree. For now, assume we work with a depth of 3. This way, when deciding its move, the agent considers all possible game boards that can result from\n1. the agent's move, \n2. the opponent's move, and \n3. the agent's next move. \nWe'll work with a visual example. For simplicity, we assume that at each turn, both the agent and opponent have only two possible moves. Each of the blue rectangles in the figure below corresponds to a different game board.\n<center>\n<img src=\"https://i.imgur.com/BrRe7Bu.png\" width=90%><br/>\n</center>\nWe have labeled each of the \"leaf nodes\" at the bottom of the tree with the score from the heuristic. (We use made-up scores in the figure. In the code, we'll use the same heuristic from the previous tutorial.) As before, the current game board is at the top of the figure, and the agent's goal is to end up with a score that's as high as possible. 
\nBut notice that the agent no longer has complete control over its score -- after the agent makes its move, the opponent selects its own move. And, the opponent's selection can prove disastrous for the agent! In particular, \n- If the agent chooses the left branch, the opponent can force a score of -1.\n- If the agent chooses the right branch, the opponent can force a score of +10. \nTake the time now to check this in the figure, to make sure it makes sense to you!\nWith this in mind, you might argue that the right branch is the better choice for the agent, since it is the less risky option. Sure, it gives up the possibility of getting the large score (+40) that can only be accessed on the left branch, but it also guarantees that the agent gets at least +10 points.\nThis is the main idea behind the minimax algorithm: the agent chooses moves to get a score that is as high as possible, and it assumes the opponent will counteract this by choosing moves to force the score to be as low as possible. That is, the agent and opponent have opposing goals, and we assume the opponent plays optimally.\nSo, in practice, how does the agent use this assumption to select a move? We illustrate the agent's thought process in the figure below.\n<center>\n<img src=\"https://i.imgur.com/bWezUC3.png\" width=90%><br/>\n</center>\nIn the example, minimax assigns the move on the left a score of -1, and the move on the right is assigned a score of +10. So, the agent will select the move on the right. \nCode\nWe'll use several functions from the previous tutorial. These are defined in the hidden code cell below. 
(Click on the \"Code\" button below if you'd like to view them.)", "#$HIDE_INPUT$\nimport random\nimport numpy as np\n\n# Gets board at next step if agent drops piece in selected column\ndef drop_piece(grid, col, mark, config):\n next_grid = grid.copy()\n for row in range(config.rows-1, -1, -1):\n if next_grid[row][col] == 0:\n break\n next_grid[row][col] = mark\n return next_grid\n\n# Helper function for get_heuristic: checks if window satisfies heuristic conditions\ndef check_window(window, num_discs, piece, config):\n return (window.count(piece) == num_discs and window.count(0) == config.inarow-num_discs)\n \n# Helper function for get_heuristic: counts number of windows satisfying specified heuristic conditions\ndef count_windows(grid, num_discs, piece, config):\n num_windows = 0\n # horizontal\n for row in range(config.rows):\n for col in range(config.columns-(config.inarow-1)):\n window = list(grid[row, col:col+config.inarow])\n if check_window(window, num_discs, piece, config):\n num_windows += 1\n # vertical\n for row in range(config.rows-(config.inarow-1)):\n for col in range(config.columns):\n window = list(grid[row:row+config.inarow, col])\n if check_window(window, num_discs, piece, config):\n num_windows += 1\n # positive diagonal\n for row in range(config.rows-(config.inarow-1)):\n for col in range(config.columns-(config.inarow-1)):\n window = list(grid[range(row, row+config.inarow), range(col, col+config.inarow)])\n if check_window(window, num_discs, piece, config):\n num_windows += 1\n # negative diagonal\n for row in range(config.inarow-1, config.rows):\n for col in range(config.columns-(config.inarow-1)):\n window = list(grid[range(row, row-config.inarow, -1), range(col, col+config.inarow)])\n if check_window(window, num_discs, piece, config):\n num_windows += 1\n return num_windows", "We'll also need to slightly modify the heuristic from the previous tutorial, since the opponent is now able to modify the game board.\n<center>\n<img 
src=\"https://i.imgur.com/vQ8b1aX.png\" width=70%><br/>\n</center>\nIn particular, we need to check if the opponent has won the game by playing a disc. The new heuristic looks at each group of four adjacent locations in a (horizontal, vertical, or diagonal) line and assigns:\n- 1000000 (1e6) points if the agent has four discs in a row (the agent won), \n- 1 point if the agent filled three spots, and the remaining spot is empty (the agent wins if it fills in the empty spot), \n- -100 points if the opponent filled three spots, and the remaining spot is empty (the opponent wins by filling in the empty spot), and\n- -10000 (-1e4) points if the opponent has four discs in a row (the opponent won).\nThis is defined in the code cell below.", "# Helper function for minimax: calculates value of heuristic for grid\ndef get_heuristic(grid, mark, config):\n num_threes = count_windows(grid, 3, mark, config)\n num_fours = count_windows(grid, 4, mark, config)\n num_threes_opp = count_windows(grid, 3, mark%2+1, config)\n num_fours_opp = count_windows(grid, 4, mark%2+1, config)\n score = num_threes - 1e2*num_threes_opp - 1e4*num_fours_opp + 1e6*num_fours\n return score", "In the next code cell, we define a few additional functions that we'll need for the minimax agent.", "# Uses minimax to calculate value of dropping piece in selected column\ndef score_move(grid, col, mark, config, nsteps):\n next_grid = drop_piece(grid, col, mark, config)\n score = minimax(next_grid, nsteps-1, False, mark, config)\n return score\n\n# Helper function for minimax: checks if agent or opponent has four in a row in the window\ndef is_terminal_window(window, config):\n return window.count(1) == config.inarow or window.count(2) == config.inarow\n\n# Helper function for minimax: checks if game has ended\ndef is_terminal_node(grid, config):\n # Check for draw \n if list(grid[0, :]).count(0) == 0:\n return True\n # Check for win: horizontal, vertical, or diagonal\n # horizontal \n for row in 
range(config.rows):\n for col in range(config.columns-(config.inarow-1)):\n window = list(grid[row, col:col+config.inarow])\n if is_terminal_window(window, config):\n return True\n # vertical\n for row in range(config.rows-(config.inarow-1)):\n for col in range(config.columns):\n window = list(grid[row:row+config.inarow, col])\n if is_terminal_window(window, config):\n return True\n # positive diagonal\n for row in range(config.rows-(config.inarow-1)):\n for col in range(config.columns-(config.inarow-1)):\n window = list(grid[range(row, row+config.inarow), range(col, col+config.inarow)])\n if is_terminal_window(window, config):\n return True\n # negative diagonal\n for row in range(config.inarow-1, config.rows):\n for col in range(config.columns-(config.inarow-1)):\n window = list(grid[range(row, row-config.inarow, -1), range(col, col+config.inarow)])\n if is_terminal_window(window, config):\n return True\n return False\n\n# Minimax implementation\ndef minimax(node, depth, maximizingPlayer, mark, config):\n is_terminal = is_terminal_node(node, config)\n valid_moves = [c for c in range(config.columns) if node[0][c] == 0]\n if depth == 0 or is_terminal:\n return get_heuristic(node, mark, config)\n if maximizingPlayer:\n value = -np.Inf\n for col in valid_moves:\n child = drop_piece(node, col, mark, config)\n value = max(value, minimax(child, depth-1, False, mark, config))\n return value\n else:\n value = np.Inf\n for col in valid_moves:\n child = drop_piece(node, col, mark%2+1, config)\n value = min(value, minimax(child, depth-1, True, mark, config))\n return value", "We won't describe the minimax implementation in detail, but if you want to read more technical pseudocode, here's the description from Wikipedia. (Note that the pseudocode can be safely skipped!)\n<center>\n<img src=\"https://i.imgur.com/BwP9tMD.png\" width=60%>\n</center>\nFinally, we implement the minimax agent in the competition format. 
The N_STEPS variable is used to set the depth of the tree.", "# How deep to make the game tree: higher values take longer to run!\nN_STEPS = 3\n\ndef agent(obs, config):\n # Get list of valid moves\n valid_moves = [c for c in range(config.columns) if obs.board[c] == 0]\n # Convert the board to a 2D grid\n grid = np.asarray(obs.board).reshape(config.rows, config.columns)\n # Use the heuristic to assign a score to each possible board in the next step\n scores = dict(zip(valid_moves, [score_move(grid, col, obs.mark, config, N_STEPS) for col in valid_moves]))\n # Get a list of columns (moves) that maximize the heuristic\n max_cols = [key for key in scores.keys() if scores[key] == max(scores.values())]\n # Select at random from the maximizing columns\n return random.choice(max_cols)", "In the next code cell, we see the outcome of one game round against a random agent.", "from kaggle_environments import make, evaluate\n\n# Create the game environment\nenv = make(\"connectx\")\n\n# The minimax agent plays one game round against a random agent\nenv.run([agent, \"random\"])\n\n# Show the game\nenv.render(mode=\"ipython\")", "And we check how we can expect it to perform on average.", "#$HIDE_INPUT$\ndef get_win_percentages(agent1, agent2, n_rounds=100):\n # Use default Connect Four setup\n config = {'rows': 6, 'columns': 7, 'inarow': 4}\n # Agent 1 goes first (roughly) half the time \n outcomes = evaluate(\"connectx\", [agent1, agent2], config, [], n_rounds//2)\n # Agent 2 goes first (roughly) half the time \n outcomes += [[b,a] for [a,b] in evaluate(\"connectx\", [agent2, agent1], config, [], n_rounds-n_rounds//2)]\n print(\"Agent 1 Win Percentage:\", np.round(outcomes.count([1,-1])/len(outcomes), 2))\n print(\"Agent 2 Win Percentage:\", np.round(outcomes.count([-1,1])/len(outcomes), 2))\n print(\"Number of Invalid Plays by Agent 1:\", outcomes.count([None, 0]))\n print(\"Number of Invalid Plays by Agent 2:\", outcomes.count([0, None]))\n\nget_win_percentages(agent1=agent, agent2=\"random\", 
n_rounds=50)", "Not bad!\nYour turn\nContinue to check your understanding and submit your own agent to the competition." ]
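The minimax code above calls a `drop_piece` helper that was defined earlier in the tutorial and is not shown in this excerpt. For reference, a minimal sketch of such a helper (the `SimpleConfig` namedtuple here is an illustrative stand-in for the Kaggle `config` object, which exposes the same attributes):

```python
from collections import namedtuple

import numpy as np

# Stand-in for the Kaggle config object (attribute access, like config.rows).
SimpleConfig = namedtuple("SimpleConfig", ["rows", "columns", "inarow"])

def drop_piece(grid, col, mark, config):
    # Return a copy of the board with `mark` dropped into column `col`.
    # Pieces fall to the lowest empty row, as in Connect Four.
    next_grid = grid.copy()
    for row in range(config.rows - 1, -1, -1):
        if next_grid[row][col] == 0:
            next_grid[row][col] = mark
            break
    return next_grid
```

Because it returns a copy instead of mutating the board, the same parent grid can be expanded into all of its child nodes inside the minimax recursion.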
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
seblabbe/MATH2010-Logiciels-mathematiques
ipynb-cours/16-02-23.ipynb
gpl-3.0
[ "Announcement\nThe IEEE student branch is offering, this Thursday, February 25, from 1:30 pm to 5:30 pm, a Mathematica training session covering the basics of the software. Registration is required.\nhttp://ieee.aees.be/fr/accueil/25-francais/activites/conferences/151-formation-mathematica\nInitialization\nSo that division behaves as in Python 3:", "from __future__ import division", "Import all the functions and a few variables from Sympy:", "from sympy import *\nfrom sympy.abc import a,b,c,k,n,t,u,v,w,x,y,z\ninit_printing(pretty_print=True, use_latex='mathjax')", "Several things that are not in the course notes\nCompute the gcd of $$p(x)=x^5 - 20x^4 + 140x^3 - 430x^2 + 579x - 270$$ and $$q(x)=x^6 - 25x^5 + 243x^4 - 1163x^3 + 2852x^2 - 3348x + 1440$$", "p = x**5 - 20*x**4 + 140*x**3 - 430*x**2 + 579*x - 270\nq = x**6 - 25*x**5 + 243*x**4 - 1163*x**3 + 2852*x**2 - 3348*x + 1440\n\ngcd(4,6)\n\ngcd(p,q)", "7 Differential and integral calculus\n7.1 Limits", "from sympy.abc import x\n\nlimit(1/x, x, 0, dir='+')\n\noo\n\nlimit(1/x, x, oo)", "7.2 Sums\nCompute the sum of the numbers 134,245,325,412,57.", "sum([134, 245, 325, 412, 57])\n\nsum([134, 245, 325, 412, 57, x])\n\nsum([134, 245, 325, 412, 57, x, []])", "Compute the sum \n$$\sum_{i=0}^n i$$", "from sympy.abc import i\nsummation(i, (i,0,n))\n\nsummation(i**2, (i,0,2016))", "Compute the sum \n$$\sum_{k=1}^\infty {1 \over k^6}$$", "summation(1/k**6, (k, 1, oo))", "7.3 Product\nCompute the product \n$$\prod_{n=1}^{2016} 2n+1$$", "product(2*n+1, (n,1,2016))", "7.4 Differential calculus\nCompute the derivative of $$x^5+bx$$", "b,x\n\ndiff(x**5+b*x, b)\n\ndiff(x**5+b*x, x)", "Compute the derivative of $$\arcsin(x)$$", "diff(asin(x), x)", "7.5 Integral calculus\nCompute the integral $$\int\log(x)\, dx$$", "integrate(log(x), x)", "Compute the integral $$\int a^x\, dx$$", "integrate(a**x, x)", "Compute the integral $$\int x^a\, dx$$", "integrate(x**a, x)\n\nlog(100, 10)\n\nlog?", "Compute the integral \n$$\int \sec^2(x)\,dx$$", "integrate(sec(x)**2, x)\n\nintegrate(integrate(x**2*y, x), y)", "$$\int_0^5\int_0^2 x^2y\,dx\,dy$$", "integrate(x**2*y, (x,0,2), (y,0,5))", "7.6 Unevaluated sums, products, derivatives and integrals", "A = Sum(1/k**6, (k,1,oo))\nB = Product(2*n+1, (n,1,21))\nC = Derivative(asin(x), x)\nD = Integral(log(x), x)\n\nEq(A, A.doit())\n\nEq(B, B.doit())\n\nEq(C, C.doit())\n\nEq(D, D.doit())", "7.7 Series expansions\nCompute the Taylor series of $\tan(x)$ at $x_0=0$ up to order 14.", "series(tan(x), x, 0, 14)\n\nseries(sin(x), x, 0, 10)", "7.8 Differential equations", "from sympy import E\nA = Derivative(E**x, x)\nEq(A, A.doit())", "Find a function $f(x)$ such that ${d\over dx} f(x) = f(x)$", "f = Function('f')\n\nf(x)\n\neq = Eq(Derivative(f(x),x), f(x))\n\ndsolve(eq)", "Find a function $f(x)$ such that ${d^2\over dx^2} f(x) = -f(x)$", "eq2 = Eq(Derivative(f(x),x,x), -f(x))\n\ndsolve(eq2)\n\nDerivative(f(x),x,x,x,x,x)\n\nDerivative(f(x),x,5)\n\nf(x).diff(x,5)", "Solve $$y''-4y'+5y=0$$.", "from sympy.abc import x,y\neq = Eq(y(x).diff(x,x)-4*y(x).diff(x)+5*y(x),0)\ndsolve(eq, y(x))", "8 Linear algebra\n8.1 Defining a matrix\nDefine the matrix\n$$M=\begin{bmatrix}\n2& 9& 3\\ 4& 5& 10\\ 2& 0& 3\n\end{bmatrix}$$", "Matrix([[2, 9, 3], [4, 5, 10], [2, 0, 3]])\n\nM = Matrix(3,3,[2, 9, 3, 4, 5, 10, 2, 0, 3])", "Define the matrix\n$$N=\begin{bmatrix}\n2& 9& 3\\ 4& 5& 10\\ -6& -1& -17\n\end{bmatrix}$$", "N = Matrix(3,3,[2,9,3,4,5,10,-6,-1,-17]); N", "Define the vector\n$$v=\begin{bmatrix}\n5\\ 2\\ 1\n\end{bmatrix}$$", "v = Matrix([5,2,1]); v", "8.2 Basic operations", "M, N\n\nM + N\n\nM * 3\n\nM * N\n\nM * v\n\nM ** -1\n\nN ** -1\n\nM.transpose()", "8.3 Accessing the entries", "M\n\nM[1,1]\n\nM[2,1]\n\nM.row(0)\n\nM.col(1)\n\nM\n\nM[0,0] = pi", "8.4 Constructing special matrices", "zeros(4,6)\n\nones(3,8)\n\neye(5)\n\ndiag(3,4,5)\n\ndiag(3,4,5,M)", "8.5 Reduced row echelon form\nCompute the reduced row echelon form of $M$ and $N$.", "M\n\nM.rref()\n\nN\n\nN.rref()", "8.6 Kernel\nCompute the kernel of the matrices $M$ and $N$.", "M.nullspace()\n\nN.nullspace()", "8.7 Determinant\nCompute the determinant of the matrices $M$ and $N$.", "M.det()\n\nN.det()", "8.8 Characteristic polynomial\nCompute the characteristic polynomial of the matrices $M$ and $N$.", "from sympy.abc import x\n\nM.charpoly(x)\n\nM.charpoly(x).as_expr()\n\nN.charpoly(x).as_expr()", "8.9 Eigenvalues and eigenvectors\nCompute the eigenvalues and eigenvectors of\n$$K=\begin{bmatrix}\n93& 27& -57\\ -40& 180& -140\\ -15& 27& 51\n\end{bmatrix}$$", "K = Matrix(3,3,[93,27,-57,-40,180,-140,-15,27,51])\nK\n\nK.eigenvals()\n\nK.eigenvects()", "In general, the roots can be more complicated:", "M.eigenvals()" ]
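As a quick sanity check on the closed form SymPy returns for the infinite sum $\sum_{k=1}^\infty 1/k^6$ (namely $\pi^6/945$, the value of $\zeta(6)$), the partial sums can be compared against it with only the standard library:

```python
import math

# Partial sum of 1/k**6; the series converges very fast,
# so a few hundred terms already agree with pi**6/945.
partial = sum(1 / k**6 for k in range(1, 1000))
closed_form = math.pi**6 / 945
assert abs(partial - closed_form) < 1e-12
```

The tail beyond $k=1000$ is bounded by roughly $1/(5 \cdot 1000^5)$, which is far below the tolerance used here.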
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
xianjunzhengbackup/code
python/timeit_time.ipynb
mit
[ "import time\nimport timeit\n\nhelp(timeit.Timer)\n\nhelp(timeit.timeit)", "repeat(1, 10000) repeats once, running the statement 10000 times in that single repeat", "setup_sum='sum=0'\nrun_sum=\"\"\"\nfor i in range(1,1000):\n if i % 3 ==0:\n sum = sum + i\n\"\"\"\nprint(timeit.Timer(run_sum, setup=\"sum=0\").repeat(1,10000))", "This one is not repeated many times: just once, with 10000 runs in that single pass.\nNote the signature of timeit.timeit here: the third positional parameter is timer, so to skip over it we declare number explicitly as a keyword argument.", "t=timeit.timeit(run_sum,setup_sum,number=10000)\n\nprint(\"Time for built-in sum(): {}\".format(t))\n\nstart=time.time()\n\nsum=0\nfor i in range(1,10000):\n if i % 3==0:\n sum+=i\n\nend=time.time()\n\nprint(\"Time for the traditional way of timing: %f\"%(end-start))" ]
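The string-based API shown above is not the only option: `timeit.timeit` also accepts a callable directly, which avoids quoting the setup and the statement as source strings. A small sketch of the same computation written that way:

```python
import timeit

def sum_of_multiples_of_3(n=1000):
    # Same computation as run_sum above, written as a function.
    return sum(i for i in range(1, n) if i % 3 == 0)

# Passing the callable itself; no setup string is needed.
elapsed = timeit.timeit(sum_of_multiples_of_3, number=100)
print("Time for callable version: {}".format(elapsed))
```

Passing a callable also sidesteps the namespace issues that can arise when setup strings and statement strings must share variables.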
[ "code", "markdown", "code", "markdown", "code" ]
survey-methods/samplics
docs/source/tutorial/ttest.ipynb
mit
[ "T-test\nThe t-test module allows comparing means of continuous variables of interest to known means or across two groups. There are four main types of comparisons. \n- Comparison of one-sample mean to a known mean\n- Comparison of two groups from the same sample\n- Comparison of two means from two different samples\n- Comparison of two paired means\nTtest() is the class that implements all four types of comparisons. To run a comparison, the user calls the method compare() with the appropriate parameters.", "import numpy as np\nimport pandas as pd\n\nfrom pprint import pprint\n\nfrom samplics.datasets import Auto\nfrom samplics.categorical.comparison import Ttest", "Comparison of one-sample mean to a known mean\nFor this comparison, the mean of a continuous variable, i.e. mpg, is compared to a known mean. In the example below, the user is testing whether the average mpg is equal to 20. Hence, the null hypothesis is H0: mean(mpg) = 20. There are three possible alternatives for this null hypothesis:\n- Ha: mean(mpg) < 20 (less_than alternative)\n- Ha: mean(mpg) > 20 (greater_than alternative)\n- Ha: mean(mpg) != 20 (not_equal alternative)\nAll three alternatives are automatically computed by the method compare(). This behavior is similar across the four types of comparisons.", "# Load Auto sample data\nauto_cls = Auto()\nauto_cls.load_data()\nauto = auto_cls.data\nmpg = auto[\"mpg\"]\n\none_sample_known_mean = Ttest(samp_type=\"one-sample\")\none_sample_known_mean.compare(y=mpg, known_mean=20)\n\nprint(one_sample_known_mean)", "The print below shows the information encapsulated in the object. point_est provides the sample mean. Similarly, stderror, stddev, lower_ci, and upper_ci provide the standard error, standard deviation, lower bound confidence interval (CI), and upper bound CI, respectively. The class member stats provides the statistics related to the three t-tests (for the three alternative hypotheses). 
There is additional information encapsulated in the object as shown below.", "pprint(one_sample_known_mean.__dict__)", "Comparison of two groups from the same sample\nThis type of comparison is used when the two groups are from the same sample. For example, after running a survey, the user wants to know if the domestic cars have the same mpg on average compared to the foreign cars. The parameter group indicates the categorical variable. NB: note that, at this point, Ttest() does not take into account potential dependencies between groups.", "foreign = auto[\"foreign\"]\n\none_sample_two_groups = Ttest(samp_type=\"one-sample\")\none_sample_two_groups.compare(y=mpg, group=foreign)\n\nprint(one_sample_two_groups)", "Since there are two groups for this comparison, the sample mean, standard error, standard deviation, lower bound CI, and upper bound CI are provided by group as Python dictionaries. The class member stats provides statistics for the comparison assuming both equal and unequal variances.", "print(f\"\\nThese are the group means for mpg: {one_sample_two_groups.point_est}\\n\")\n\nprint(f\"These are the group standard error for mpg: {one_sample_two_groups.stderror}\\n\")\n\nprint(f\"These are the group standard deviation for mpg: {one_sample_two_groups.stddev}\\n\")\n\nprint(\"These are the computed statistics:\\n\")\npprint(one_sample_two_groups.stats)", "Comparison of two means from two different samples\nThis type of comparison should be used when the two groups come from different samples or different strata. The groups are assumed independent. Otherwise, the information is similar to the previous test. 
Note that, when instantiating the class, we used samp_type=\"two-sample\".", "two_samples_unpaired = Ttest(samp_type=\"two-sample\", paired=False)\ntwo_samples_unpaired.compare(y=mpg, group=foreign)\n\nprint(two_samples_unpaired)\n\nprint(f\"\\nThese are the group means for mpg: {two_samples_unpaired.point_est}\\n\")\n\nprint(f\"These are the group standard error for mpg: {two_samples_unpaired.stderror}\\n\")\n\nprint(f\"These are the group standard deviation for mpg: {two_samples_unpaired.stddev}\\n\")\n\nprint(\"These are the computed statistics:\\n\")\npprint(two_samples_unpaired.stats)", "Comparison of two paired means\nWhen two measures are taken from the same observations, the paired t-test is appropriate for comparing the means.", "two_samples_paired = Ttest(samp_type=\"two-sample\", paired=True)\ntwo_samples_paired.compare(y=auto[[\"y1\", \"y2\"]], group=foreign)\n\nprint(two_samples_paired)", "varnames can be used to rename the variables", "y1 = auto[\"y1\"]\ny2 = auto[\"y2\"]\n\ntwo_samples_paired = Ttest(samp_type=\"two-sample\", paired=True)\ntwo_samples_paired.compare(y=[y1, y2], varnames=[\"group_1\", \"group_2\"], group=foreign)\n\nprint(two_samples_paired)" ]
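For reference, the statistic behind the one-sample comparison against a known mean is the classic Student t statistic. A minimal standard-library sketch of it (an illustration only, not samplics's actual implementation, which also handles survey design features):

```python
import math

def one_sample_t(sample, known_mean):
    # t = (xbar - mu0) / (s / sqrt(n)), with s the sample standard deviation.
    n = len(sample)
    xbar = sum(sample) / n
    s2 = sum((x - xbar) ** 2 for x in sample) / (n - 1)
    return (xbar - known_mean) / math.sqrt(s2 / n)

# A sample centered exactly on the hypothesized mean gives t = 0.
print(one_sample_t([19, 20, 21], known_mean=20))  # prints 0.0
```

The sign of t then determines which of the less_than, greater_than, and not_equal alternatives receives a small p-value.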
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
huangziwei/pyMF3
pymf3/examples/01_using_nmf_to_factorize_face_images.ipynb
mit
[ "Using NMF to Factorize Face Images\nIn this notebook, we try to reproduce the result in Lee & Seung (1999) using the standard NMF algorithm.", "import seaborn as sns\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n%matplotlib inline\n\nimport sys\nsys.path.append('/home/Repos/pyMF3/')\n\nimport pymf3\nfrom pymf3.datasets import CBCL_faces\n\nfrom matplotlib import rcParams\nrcParams['font.family'] = 'DejaVu Sans'\n\nfaces = CBCL_faces.get_CBCL_faces(scale_images=True)", "The dataset we used here is the Face Data, the same as the one used in Lee & Seung (1999). The function CBCL_faces.get_CBCL_faces() will download and load the data into a 2D matrix, in which each row contains all pixels from one image. All 2429 face images in the training set have been read by PIL and converted into a NumPy array. Here are some examples:", "rand_imgs = np.random.randint(0, faces.shape[0], size=7)\nfig, ax = plt.subplots(1, len(rand_imgs), figsize=(14, 2))\nfor i, img in enumerate(rand_imgs):\n ax[i].imshow(faces[img].reshape(19,19))\n ax[i].axes.get_xaxis().set_visible(False)\n ax[i].axes.get_yaxis().set_visible(False)", "In this example we didn't scale the greyscale intensities to mean and std 0.25, as Lee & Seung did in their paper. Here we only normalized them to [0,1].\nAfter 12000 iterations, we get:", "nmf = pymf3.NMF(data=faces, num_bases=49)\nnmf.factorize(niter=12000, show_progress=False)\nnmf.plot_modules(module_ndims=(19,19), num_per_row=7)\n\nnmf.plot_cost()" ]
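The standard NMF algorithm referred to above is based on the multiplicative update rules of Lee & Seung (1999), which keep both factors nonnegative and never increase the reconstruction error. A minimal NumPy sketch of those updates (an illustration, not pymf3's actual implementation):

```python
import numpy as np

def nmf_multiplicative(V, k, n_iter=200, seed=0):
    # Factor a nonnegative n x m matrix V into W (n x k) and H (k x m)
    # by minimizing ||V - W H||_F^2 with Lee & Seung multiplicative updates.
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-4
    H = rng.random((k, m)) + 1e-4
    for _ in range(n_iter):
        # Elementwise multiply-and-divide; small epsilon avoids division by zero.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

For the face data, each column of W would be one of the 49 part-like basis images that `plot_modules` displays.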
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
AllenDowney/ThinkBayes2
examples/world_cup01.ipynb
mit
[ "Think Bayes\nThis notebook presents example code and exercise solutions for Think Bayes.\nCopyright 2018 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT", "# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import classes from thinkbayes2\nfrom thinkbayes2 import Pmf, Suite\n\nimport thinkbayes2\nimport thinkplot\n\nimport numpy as np\nfrom scipy.special import gamma", "The World Cup Problem, Part One\n\nIn the 2014 FIFA World Cup, Germany played Brazil in a semifinal match. Germany scored after 11 minutes and again at the 23 minute mark. At that point in the match, how many goals would you expect Germany to score after 90 minutes? What was the probability that they would score 5 more goals (as, in fact, they did)?\n\nLet's assume that Germany has some hypothetical goal-scoring rate, λ, in goals per game.\nTo represent the prior distribution of λ, I'll use a Gamma distribution with mean 1.3, which is the average number of goals per team per game in World Cup play.\nHere's what the prior looks like.", "from thinkbayes2 import MakeGammaPmf\n\nxs = np.linspace(0, 8, 101)\npmf = MakeGammaPmf(xs, 1.3)\nthinkplot.Pdf(pmf)\nthinkplot.decorate(title='Gamma PDF',\n xlabel='Goals per game',\n ylabel='PDF')\npmf.Mean()", "Exercise: Write a class called Soccer that extends Suite and defines Likelihood, which should compute the probability of the data (the time between goals in minutes) for a hypothetical goal-scoring rate, lam, in goals per game.\nHint: For a given value of lam, the time between goals is distributed exponentially.\nHere's an outline to get you started:", "class Soccer(Suite):\n \"\"\"Represents hypotheses about goal-scoring rates.\"\"\"\n\n def Likelihood(self, data, hypo):\n \"\"\"Computes the likelihood of the data under the hypothesis.\n\n hypo: scoring rate in goals 
per game\n data: interarrival time in minutes\n \"\"\"\n return 1\n\n# Solution goes here", "Now we can create a Soccer object and initialize it with the prior Pmf:", "soccer = Soccer(pmf)\nthinkplot.Pdf(soccer)\nthinkplot.decorate(title='Gamma prior',\n xlabel='Goals per game',\n ylabel='PDF')\nsoccer.Mean()", "Here's the update after first goal at 11 minutes.", "thinkplot.Pdf(soccer, color='0.7')\nsoccer.Update(11)\nthinkplot.Pdf(soccer)\nthinkplot.decorate(title='Posterior after 1 goal',\n xlabel='Goals per game',\n ylabel='PDF')\nsoccer.Mean()", "Here's the update after the second goal at 23 minutes (the time between first and second goals is 12 minutes).", "thinkplot.Pdf(soccer, color='0.7')\nsoccer.Update(12)\nthinkplot.Pdf(soccer)\nthinkplot.decorate(title='Posterior after 2 goals',\n xlabel='Goals per game',\n ylabel='PDF')\nsoccer.Mean()", "This distribution represents our belief about lam after two goals.\nEstimating the predictive distribution\nNow to predict the number of goals in the remaining 67 minutes. 
There are two sources of uncertainty:\n\n\nWe don't know the true value of λ.\n\n\nEven if we did we wouldn't know how many goals would be scored.\n\n\nWe can quantify both sources of uncertainty at the same time, like this:\n\n\nChoose a random value from the posterior distribution of λ.\n\n\nUse the chosen value to generate a random number of goals.\n\n\nIf we run these steps many times, we can estimate the distribution of goals scored.\nWe can sample a value from the posterior like this:", "lam = soccer.Random()\nlam", "Given lam, the number of goals scored in the remaining 67 minutes comes from the Poisson distribution with parameter lam * t, with t in units of games.\nSo we can generate a random value like this:", "t = 67 / 90\nnp.random.poisson(lam * t)", "If we generate a large sample, we can see the shape of the distribution:", "sample = np.random.poisson(lam * t, size=10000)\npmf = Pmf(sample)\nthinkplot.Hist(pmf)\nthinkplot.decorate(title='Distribution of goals, known lambda',\n xlabel='Goals scored', \n ylabel='PMF')\npmf.Mean()", "But that's based on a single value of lam, so it doesn't take into account both sources of uncertainty. 
Instead, we should sample values from the posterior distribution and generate one prediction for each.\nExercise: Write a few lines of code to\n\n\nUse Pmf.Sample to generate a sample with n=10000 from the posterior distribution soccer.\n\n\nUse np.random.poisson to generate a random number of goals from the Poisson distribution with parameter $\\lambda t$, where t is the remaining time in the game (in units of games).\n\n\nPlot the distribution of the predicted number of goals, and print its mean.\n\n\nWhat is the probability of scoring 5 or more goals in the remainder of the game?", "# Solution goes here\n\n# Solution goes here", "Computing the predictive distribution\nAlternatively, we can compute the predictive distribution by making a mixture of Poisson distributions.\nMakePoissonPmf makes a Pmf that represents a Poisson distribution.", "from thinkbayes2 import MakePoissonPmf", "If we assume that lam is the mean of the posterior, we can generate a predictive distribution for the number of goals in the remainder of the game.", "lam = soccer.Mean()\nrem_time = 90 - 23\nlt = lam * rem_time / 90\npred = MakePoissonPmf(lt, 10)\nthinkplot.Hist(pred)\nthinkplot.decorate(title='Distribution of goals, known lambda',\n xlabel='Goals scored', \n ylabel='PMF')", "The predictive mean is about 2 goals.", "pred.Mean()", "And the chance of scoring 5 more goals is still small.", "pred.ProbGreater(4)", "But that answer is only approximate because it does not take into account our uncertainty about lam.\nThe correct method is to compute a weighted mixture of Poisson distributions, one for each possible value of lam.\nThe following figure shows the different predictive distributions for the different values of lam.", "for lam, prob in soccer.Items():\n lt = lam * rem_time / 90\n pred = MakePoissonPmf(lt, 14)\n thinkplot.Pdf(pred, color='gray', alpha=0.3, linewidth=0.5)\n\nthinkplot.decorate(title='Distribution of goals, all lambda',\n xlabel='Goals scored', \n ylabel='PMF')", "We 
can compute the mixture of these distributions by making a Meta-Pmf that maps from each Poisson Pmf to its probability.", "metapmf = Pmf()\n\nfor lam, prob in soccer.Items():\n lt = lam * rem_time / 90\n pred = MakePoissonPmf(lt, 15)\n metapmf[pred] = prob", "MakeMixture takes a Meta-Pmf (a Pmf that contains Pmfs) and returns a single Pmf that represents the weighted mixture of distributions:", "def MakeMixture(metapmf, label='mix'):\n \"\"\"Make a mixture distribution.\n\n Args:\n metapmf: Pmf that maps from Pmfs to probs.\n label: string label for the new Pmf.\n\n Returns: Pmf object.\n \"\"\"\n mix = Pmf(label=label)\n for pmf, p1 in metapmf.Items():\n for x, p2 in pmf.Items():\n mix[x] += p1 * p2\n return mix", "Here's the result for the World Cup problem.", "mix = MakeMixture(metapmf)\nmix.Print()", "And here's what the mixture looks like.", "thinkplot.Hist(mix)\nthinkplot.decorate(title='Posterior predictive distribution',\n xlabel='Goals scored', \n ylabel='PMF')", "Exercise: Compute the predictive mean and the probability of scoring 5 or more additional goals.", "# Solution goes here" ]
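One way to fill in the Likelihood hint from the earlier exercise: with lam in goals per game and the interarrival time converted from minutes to games, the exponential pdf gives the likelihood. A standalone sketch of just that computation (the Suite plumbing is omitted):

```python
import math

def soccer_likelihood(data_minutes, lam):
    # Exponential interarrival times: convert minutes to games,
    # then evaluate the exponential pdf  lam * exp(-lam * t).
    t = data_minutes / 90
    return lam * math.exp(-lam * t)
```

Note that an early goal (small `data_minutes`) makes high scoring rates more likely, which is why the posterior mean rises after each update.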
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ceos-seo/data_cube_notebooks
notebooks/water/detection/water_interoperability_similarity.ipynb
apache-2.0
[ "<a id=\"water_interoperability_similarity_top\"></a>\nWater Interoperability Similarity\n<hr>\n\nBackground\nThere are a few water classifiers for Landsat, Sentinel-1, and Sentinel-2. We will examine WOfS for Landsat, thresholding for Sentinel-1, and WOfS for Sentinel-2.\nAlthough WOfS performs well on clear water bodies, it can misclassify murky water bodies as not water. WASARD or Sentinel-1 thresholding generally perform equally well or better than WOfS – especially on murky water bodies.\nBecause WOfS uses an optical data source (Landsat), it often does not have data to make water classifications due to cloud occlusion. The same limitation applies to Sentinel-2 water detection.\nThe main reasons to use multiple data sources in the same water detection analysis are to increase temporal resolution and account for missing data.\n<hr>\n\nNotebook Description\nThis notebook checks how similar water classifications are among a selected set of sources (e.g. WOfS for Landsat, thresholding for Sentinel-1, etc.).\nThese are the steps followed:\n\nDetermine the dates of coincidence of data for the selected sensors using the CEOS COVE tool.\nAcquire water classifications for each sensor.\nShow the RGB representation of Time Slices and Water Classifications\nShow the per-time-slice percent of cloud according to each sensor as a line plot.\nShow the per-time-slice percent of water (masked with the intersected clean mask) according to each sensor as a line plot.\nShow the per-time-slice similarity (% of matching pixels) of each pair of sensors as a line plot.\n\n<hr>\n\nIndex\n\nImport Dependencies and Connect to the Data Cube\nDefine the Extents of the Analysis\nDetermine Dates of Coincidence for the Selected Sensors Using the COVE Tool\nGet Water Classifications for Each Sensor\nDetermine the time range of overlapping data for all sensors.\nDetermine the dates of close scenes among the sensors.\nGet Landsat 8 water classifications\nGet Sentinel-1 water 
classifications\nGet Sentinel-2 water classifications\n\n\nShow the RGB Representation of Time Slices and Water Classifications\nShow the Per-time-slice Percent of Water According to Each Sensor as a Line Plot\nShow the Per-time-slice Similarity (% of Matching Pixels) of Each Pair of Sensors as a Line Plot\n\nGetting started\nTo run this analysis, run all the cells in the notebook, starting with the \"Load packages\" cell.\nAfter finishing the analysis, return to the \"Analysis parameters\" cell, modify some values (e.g. choose a different location or time period to analyse) and re-run the analysis.\n<span id=\"water_interoperability_similarity_import\">Import Dependencies and Connect to the Data Cube &#9652;</span>\nLoad packages\nLoad key Python packages and supporting functions for the analysis.", "import sys\nimport os\nsys.path.append(os.environ.get('NOTEBOOK_ROOT'))\n\n%matplotlib inline\n\nimport sys\nimport datacube\nimport numpy\nimport numpy as np\nimport xarray as xr\nfrom xarray.ufuncs import isnan as xr_nan\nimport pandas as pd\nimport matplotlib.pyplot as plt", "Connect to the datacube\nActivate the datacube database, which provides functionality for loading and displaying stored Earth observation data.", "dc = datacube.Datacube(app=\"water_interoperability_similarity\")", "<span id=\"water_interoperability_similarity_define_extents\">Define the Extents of the Analysis &#9652;</span>\nAnalysis parameters\nThe following cell sets the parameters, which define the area of interest and the length of time to conduct the analysis over.\nThe parameters are\n\nlatitude: The latitude range to analyse (e.g. (-11.288, -11.086)).\nFor reasonable loading times, make sure the range spans less than ~0.1 degrees.\nlongitude: The longitude range to analyse (e.g. 
(130.324, 130.453)).\nFor reasonable loading times, make sure the range spans less than ~0.1 degrees.\n\nIf running the notebook for the first time, keep the default settings below.\nThis will demonstrate how the analysis works and provide meaningful results.\nThe example covers an area around Obuasi, Ghana.\nTo run the notebook for a different area, make sure Landsat 8, Sentinel-1, and Sentinel-2 data is available for the chosen area.", "# Define the area of interest\n# Obuasi, Ghana\n# latitude = (6.10, 6.26)\n# longitude = (-1.82, -1.66)\n# latitude = (6.1582, 6.2028)\n# longitude = (-1.7295, -1.6914)\n\n# DEBUG - small area of Obuasi for quick loading\n# latitude = (6.1982, 6.2028)\n# longitude = (-1.7295, -1.6914)\n\n# Tono Dam, Ghana\nlatitude = (10.8600, 10.9150) \nlongitude = (-1.1850, -1.1425)\n\n# The time range in which we want to determine \n# dates of close scenes among sensors.\ntime_extents = ('2014-01-01', '2018-12-31')\n\nfrom utils.data_cube_utilities.dc_display_map import display_map\n\ndisplay_map(longitude, latitude)", "<span id=\"water_interoperability_similarity_determine_coincidence\">Determine Dates of Coincidence for the Selected Sensors Using the COVE Tool &#9652;</span>\nWe used a tool from the Committee on Earth Observation Satellites (CEOS) called the CEOS Visualization Environment (COVE). This tool has several applications, such as the Acquisition Forecaster, which predicts when and where future acquisitions (images) will occur, and the Coverage Analyzer, which shows when and where acquisitions have occurred in the past.\nFor this analysis, we used the Coincident Calculator to determine when Landsat 8, Sentinel-1, and Sentinel-2 have close dates so we can compare them on a per-time-slice basis.\nThe COVE Coincident Calculator allows users to specify the sensors to determine coincidence for. For this analysis, we first determined the dates of coincidence of Landsat 8 and Sentinel-2. 
We then determined dates which are close to those which have Sentinel-1 data.\nWe first found dates for which both Landsat 8 and Sentinel-2 data is available for the time range and area of interest, which were the following 8 dates:\n[April 22, 2017, July 11, 2017, September 29, 2017, December 18, 2017, March 8, 2018, May 27, 2018, August 15, 2018, November 3, 2018]\nThen we found dates for which Landsat 8 and Sentinel-1 data is available for the time range and area of interest, and then found the subset of closely matching dates, which were the following 6 dates: [July 12, 2017 (off 1), September 29, 2017, December 15, 2017 (off 3), March 9, 2018 (off 1), May 27, 2018, August 12, 2018 (off 3)]. These are the dates we use in this analysis.\n<span id=\"water_interoperability_similarity_get_water_classifications\">Get Water Classifications for Each Sensor &#9652;</span>", "common_load_params = \\\n dict(latitude=latitude, longitude=longitude, \n group_by='solar_day', \n output_crs=\"epsg:4326\",\n resolution=(-0.00027,0.00027),\n dask_chunks={'latitude': 2000, 'longitude':2000, 'time':1})\n\n# The minimum percent of data that a time slice must have\n# to be kept in this analysis\nMIN_PCT_DATA = 0", "Determine the time range of overlapping data for all sensors.", "metadata = {}\n\nmetadata['Landsat 8'] = \\\n dc.load(**common_load_params,\n product='ls8_lasrc_ghana', \n time=time_extents)\n\nmetadata['Sentinel-1'] = \\\n dc.load(**common_load_params,\n product='s1monthly_gamma0_ghana', \n time=time_extents)\n\ns2a_meta = dc.load(**common_load_params,\n product='s2a_msil2a', \n time=time_extents)\ns2b_meta = dc.load(**common_load_params,\n product='s2b_msil2a', \n time=time_extents)\nmetadata['Sentinel-2'] = xr.concat((s2a_meta, s2b_meta), dim='time').sortby('time')\ndel s2a_meta, s2b_meta\n\nls8_time_rng = metadata['Landsat 8'].time.values[[0,-1]]\ns2_time_rng = metadata['Sentinel-2'].time.values[[0,-1]]\n\ntime_rng = np.stack((ls8_time_rng, 
s2_time_rng))\noverlapping_time = time_rng[:,0].max(), time_rng[:,1].min()", "Limit the metadata to check for close scenes to the overlapping time range.", "for sensor in metadata:\n metadata[sensor] = metadata[sensor].sel(time=slice(*overlapping_time))", "Determine the dates of close scenes among the sensors", "# Constants #\n# The maximum number of days of difference between scenes\n# from sensors for those scenes to be considered approximately coincident.\n# The Sentinel-1 max date diff is set high enough to allow any set of dates \n# from the other sensors to match with one of its dates since we will \n# select its matching dates with special logic later.\nMAX_NUM_DAYS_DIFF = {'Landsat 8':4, 'Sentinel-1':30}\n# End Constants #\n\n# all_times\nnum_datasets = len(metadata)\nds_names = list(metadata.keys())\nfirst_ds_name = ds_names[0]\n# All times for each dataset.\nds_times = {ds_name: metadata[ds_name].time.values for ds_name in ds_names}\n# The time indices for each dataset's sorted time dimension \n# currently being compared.\ntime_inds = {ds_name: 0 for ds_name in ds_names}\ncorresponding_times = {ds_name: [] for ds_name in ds_names}\n\n# The index of the dataset in `metadata` to compare times against the first.\noth_ds_ind = 1\noth_ds_name = ds_names[oth_ds_ind]\noth_ds_time_ind = time_inds[oth_ds_name]\n# For each time in the first dataset, find any \n# closely matching dates in the other datasets.\nfor first_ds_time_ind, first_ds_time in enumerate(ds_times[first_ds_name]):\n time_inds[first_ds_name] = first_ds_time_ind\n # Find a corresponding time in this other dataset.\n while True:\n oth_ds_name = ds_names[oth_ds_ind]\n oth_ds_time_ind = time_inds[oth_ds_name]\n # If we've checked all dates for the other dataset, \n # check the next first dataset time.\n if oth_ds_time_ind == len(ds_times[oth_ds_name]):\n break\n oth_ds_time = metadata[ds_names[oth_ds_ind]].time.values[oth_ds_time_ind]\n time_diff = (oth_ds_time - 
first_ds_time).astype('timedelta64[D]').astype(int)\n \n # If this other dataset time is too long before this\n # first dataset time, check the next other dataset time.\n if time_diff <= -MAX_NUM_DAYS_DIFF[oth_ds_name]:\n oth_ds_time_ind += 1\n time_inds[ds_names[oth_ds_ind]] = oth_ds_time_ind\n continue\n # If this other dataset time is within the acceptable range\n # of the first dataset time...\n elif abs(time_diff) <= MAX_NUM_DAYS_DIFF[oth_ds_name]:\n # If there are more datasets to find a corresponding date for\n # these current corresponding dates, check those datasets.\n if oth_ds_ind < len(ds_names)-1:\n oth_ds_ind += 1\n continue\n else: # Otherwise, record this set of corresponding dates.\n for ds_name in ds_names:\n corresponding_times[ds_name].append(ds_times[ds_name][time_inds[ds_name]])\n # Don't use these times again.\n time_inds[ds_name] = time_inds[ds_name] + 1\n oth_ds_ind = 1\n break\n # If this other dataset time is too long after this\n # first dataset time, go to the next first dataset time.\n else:\n oth_ds_ind -= 1\n break\n\n\n# convert to pandas datetime\nfor sensor in corresponding_times:\n for ind in range(len(corresponding_times[sensor])):\n corresponding_times[sensor][ind] = \\\n pd.to_datetime(corresponding_times[sensor][ind])", "The Sentinel-1 data is a monthly composite, so we need special logic for choosing data from it.", "ls8_pd_datetimes = corresponding_times['Landsat 8'] \ns1_pd_datetimes = pd.to_datetime(metadata['Sentinel-1'].time.values)\nfor time_ind, ls8_time in enumerate(ls8_pd_datetimes):\n matching_s1_time_ind = [s1_time_ind for (s1_time_ind, s1_time) \n in enumerate(s1_pd_datetimes) if \n s1_time.month == ls8_time.month][0]\n matching_s1_time = metadata['Sentinel-1'].time.values[matching_s1_time_ind]\n corresponding_times['Sentinel-1'][time_ind] = pd.to_datetime(matching_s1_time)", "Get Landsat 8 water classifications\nLoad the data", "ls8_times = corresponding_times['Landsat 8']\ns1_times = 
corresponding_times['Sentinel-1']\ns2_times = corresponding_times['Sentinel-2']\n\nls8_data = dc.load(**common_load_params,\n product='ls8_usgs_sr_scene', \n time=(ls8_times[0], ls8_times[-1]),\n dask_chunks = {'time': 1})\nls8_data = ls8_data.sel(time=corresponding_times['Landsat 8'], method='nearest')\nprint(f\"Subset the data to {len(ls8_data.time)} times of near coincidence.\")", "Acquire the clean mask", "from water_interoperability_utils.clean_mask import ls8_unpack_qa\n\nls8_data_mask = (ls8_data != -9999).to_array().any('variable')\nls8_clear_mask = ls8_unpack_qa(ls8_data.pixel_qa, 'clear')\nls8_water_mask = ls8_unpack_qa(ls8_data.pixel_qa, 'water')\nls8_clean_mask = (ls8_clear_mask | ls8_water_mask) & ls8_data_mask \ndel ls8_clear_mask, ls8_water_mask", "Acquire water classifications", "from water_interoperability_utils.dc_water_classifier import wofs_classify\nimport warnings\n\nwith warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", category=Warning)\n ls8_water = wofs_classify(ls8_data).wofs\nls8_water = ls8_water.where(ls8_clean_mask)", "Get Sentinel-1 water classifications\nLoad the data", "s1_data = dc.load(**common_load_params,\n product='sentinel1_ghana_monthly', \n time=(s1_times[0], s1_times[-1]),\n dask_chunks = {'time': 1})\ns1_data = s1_data.sel(time=corresponding_times['Sentinel-1'], method='nearest')\nprint(f\"Subset the data to {len(s1_data.time)} times of near coincidence.\")", "Acquire the clean mask", "s1_not_nan_da = ~xr_nan(s1_data).to_array()\ns1_clean_mask = s1_not_nan_da.min('variable')\ndel s1_not_nan_da", "Acquire water classifications", "# Threshold the VV and VH backscatter with Otsu's method.\nfrom skimage.filters import threshold_otsu\n\nthresh_vv = threshold_otsu(s1_data.vv.values)\nthresh_vh = threshold_otsu(s1_data.vh.values)\n\nbinary_vv = s1_data.vv.values < thresh_vv\nbinary_vh = s1_data.vh.values < thresh_vh\n\ns1_water = xr.DataArray(binary_vv & binary_vh, coords=s1_data.vv.coords, \n 
dims=s1_data.vv.dims, attrs=s1_data.vv.attrs)\ns1_water = s1_water.where(s1_clean_mask)", "Get Sentinel-2 water classifications\nAcquire the data", "s2a_data = dc.load(**common_load_params,\n product='s2a_msil2a', \n time=(s2_times[0], s2_times[-1]),\n dask_chunks = {'time': 1})\ns2b_data = dc.load(**common_load_params,\n product='s2b_msil2a', \n time=(s2_times[0], s2_times[-1]),\n dask_chunks = {'time': 1})\ns2_data = xr.concat((s2a_data, s2b_data), dim='time').sortby('time')\ns2_data = s2_data.sel(time=corresponding_times['Sentinel-2'], method='nearest')\nprint(f\"Subsetting the data to {len(s2_data.time)} times of near coincidence.\")", "Acquire the clean mask", "# See figure 3 on this page for more information about the\n# values of the scl data for Sentinel-2: \n# https://earth.esa.int/web/sentinel/technical-guides/sentinel-2-msi/level-2a/algorithm\ns2_clean_mask = s2_data.scl.isin([1, 2, 3, 4, 5, 6, 7, 10, 11]) ", "Acquire water classifications", "with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", category=Warning)\n s2_water = wofs_classify(s2_data.rename(\n {'nir_1': 'nir', 'swir_1': 'swir1', 'swir_2': 'swir2'})).wofs\ns2_water = s2_water.where(s2_clean_mask)\n\nls8_data = ls8_data.compute()\nls8_clean_mask = ls8_clean_mask.compute()\n\ns1_data = s1_data.compute()\ns1_clean_mask = s1_clean_mask.compute()\n\ns2_data = s2_data.compute()\ns2_clean_mask = s2_clean_mask.compute()", "<span id=\"water_interoperability_similarity_images\">Show the RGB Representation of Time Slices and Water Classifications &#9652;</span>\nObtain the intersected clean mask for the sensors.", "intersected_clean_mask = xr.DataArray((ls8_clean_mask.values & \n s1_clean_mask.values & \n s2_clean_mask.values), \n coords=ls8_clean_mask.coords, \n dims=ls8_clean_mask.dims)\n\n# Mask the water classes.\nls8_water = ls8_water.where(intersected_clean_mask.values)\ns1_water = s1_water.where(intersected_clean_mask.values)\ns2_water = 
s2_water.where(intersected_clean_mask.values)\n\n# Remove any times with no data for any sensor.\ntimes_to_keep_mask = (intersected_clean_mask.sum(['latitude', 'longitude']) / \\\n intersected_clean_mask.count(['latitude', 'longitude'])) > MIN_PCT_DATA\n# The time indices to keep for visualization.\ntime_inds_subset = np.arange(len(ls8_data.time))[times_to_keep_mask.values]\n\nintersected_clean_mask_subset = \\\n intersected_clean_mask.isel(time=time_inds_subset)\n\nls8_data_subset = ls8_data.isel(time=time_inds_subset)\nls8_clean_mask_subset = ls8_clean_mask.isel(time=time_inds_subset)\nls8_water_subset = ls8_water.isel(time=time_inds_subset)\n\ns1_data_subset = s1_data.isel(time=time_inds_subset)\ns1_clean_mask_subset = s1_clean_mask.isel(time=time_inds_subset)\ns1_water_subset = s1_water.isel(time=time_inds_subset)\n\ns2_data_subset = s2_data.isel(time=time_inds_subset)\ns2_clean_mask_subset = s2_clean_mask.isel(time=time_inds_subset)\ns2_water_subset = s2_water.isel(time=time_inds_subset)", "Show the data and water classifications for each sensor as the data will be compared among them (an intersection).", "water_alpha = 0.9\n\nfor time_ind in range(len(ls8_data_subset.time)):\n fig, ax = plt.subplots(1, 3, figsize=(12, 4))\n \n # Mask out the water from the RGB so that its background segment is white instead of the RGB.\n ls8_data_subset.where(ls8_water_subset != 1)[['red', 'green', 'blue']].isel(time=time_ind).to_array().plot.imshow(ax=ax[0], vmin=0, vmax=1750)\n ls8_only_water = ls8_water_subset.where(ls8_water_subset == 1)\n ls8_only_water.isel(time=time_ind).plot.imshow(ax=ax[0], cmap='Blues', alpha=water_alpha, \n vmin=0, vmax=1, add_colorbar=False)\n ax[0].set_xlabel('Longitude')\n ax[0].set_ylabel('Latitude')\n ax[0].set_title(f\"Landsat 8 \" \\\n f\"({numpy.datetime_as_string(ls8_data_subset.time.values[time_ind], unit='D')})\")\n \n s1_data_subset.where(s1_water_subset != 1).vv.isel(time=time_ind).plot.imshow(ax=ax[1], cmap='gray', vmin=-30, vmax=-0, 
add_colorbar=False)\n s1_only_water = s1_water_subset.where(s1_water_subset == 1)\n s1_only_water.isel(time=time_ind).plot.imshow(ax=ax[1], cmap='Blues', alpha=water_alpha, \n vmin=0, vmax=1, add_colorbar=False)\n ax[1].set_xlabel('Longitude')\n ax[1].set_ylabel('Latitude')\n ax[1].set_title(f\"Sentinel-1 \" \\\n f\"({numpy.datetime_as_string(s1_data_subset.time.values[time_ind], unit='D')})\")\n \n s2_data_subset.where(s2_water_subset != 1)[['red', 'green', 'blue']].isel(time=time_ind).to_array().plot.imshow(ax=ax[2], vmin=0, vmax=2500)\n s2_only_water = s2_water_subset.where(s2_water_subset == 1)\n s2_only_water.isel(time=time_ind).plot.imshow(ax=ax[2], cmap='Blues', alpha=water_alpha, \n vmin=0, vmax=1, add_colorbar=False)\n ax[2].set_xlabel('Longitude')\n ax[2].set_ylabel('Latitude')\n ax[2].set_title(f\"Sentinel-2 \" \\\n f\"({numpy.datetime_as_string(s2_data_subset.time.values[time_ind], unit='D')})\")\n \n plt.tight_layout()\n plt.show()", "<span id=\"water_interoperability_similarity_pct_water_line_plot\">Show the Per-time-slice Percent of Water According to Each Sensor as a Line Plot &#9652;</span>", "ls8_water_subset_pct = \\\n ls8_water_subset.sum(['latitude', 'longitude']) / \\\n ls8_water_subset.count(['latitude', 'longitude']).compute()\n\ns1_water_subset_pct = \\\n s1_water_subset.sum(['latitude', 'longitude']) / \\\n s1_water_subset.count(['latitude', 'longitude']).compute()\ns1_water_subset_pct.time.values = ls8_water_subset_pct.time.values\n\ns2_water_subset_pct = \\\n s2_water_subset.sum(['latitude', 'longitude']) / \\\n s2_water_subset.count(['latitude', 'longitude']).compute()\ns2_water_subset_pct.time.values = ls8_water_subset_pct.time.values\n\nimport matplotlib.ticker as mtick\n\nax = plt.gca()\n\nplot_format = dict(ms=6, marker='o', alpha=0.5)\n\n(ls8_water_subset_pct*100).plot(ax=ax, **plot_format, label='Landsat 8')\n(s1_water_subset_pct*100).plot(ax=ax, **plot_format, label='Sentinel-1')\n(s2_water_subset_pct*100).plot(ax=ax, 
**plot_format, label='Sentinel-2')\n\nplt.ylim(0,50)\n\nax.set_xlabel('Time')\nax.set_ylabel('Percent of Intersecting Data That Is Water')\nax.yaxis.set_major_formatter(mtick.PercentFormatter())\nplt.legend()\nplt.title('Water %')\nplt.show()", "<span id=\"water_interoperability_similarity_pct_similarity_line_plot\">Show the Per-time-slice Similarity (% of Matching Pixels) of Each Pair of Sensors as a Line Plot &#9652;</span>", "from itertools import combinations\n\nax = plt.gca()\n\nwater_das = [('Landsat_8', ls8_water_subset), \n ('Sentinel-1', s1_water_subset), \n ('Sentinel-2', s2_water_subset)]\nfor i, ((sensor_1, water_1), (sensor_2, water_2)) in enumerate(combinations(water_das, 2)):\n lat_dim_ind = np.argmax(np.array(water_1.dims) == 'latitude')\n lon_dim_ind = np.argmax(np.array(water_1.dims) == 'longitude')\n \n similarity = (water_1.values == water_2.values).sum(axis=(lat_dim_ind, lon_dim_ind)) / \\\n intersected_clean_mask_subset.sum(['latitude', 'longitude'])\n (similarity*100).plot.line(ax=ax, **plot_format, label=f'{sensor_1} vs {sensor_2}')\n \n ax.set_xlabel('Time')\n ax.set_ylabel('Percent of Same Classifications')\n ax.yaxis.set_major_formatter(mtick.PercentFormatter())\n plt.legend()\n plt.title('Similarity')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mgalardini/2017_python_course
notebooks/[1a]-Exercises-solutions.ipynb
gpl-2.0
[ "Basic Python and native data structures: exercises\nDefine a function max() that takes two numbers as arguments and returns the largest of them.", "def max(number1, number2):\n if number1 > number2:\n return number1\n else:\n return number2\n \nmax(1, 2)\n\nmax(100, 10)", "Write a function find_longest_word() that takes a list of words and returns the length of the longest one", "def find_longest_word(word_list):\n \n max_word=0\n \n for w in word_list:\n if len(w) > max_word:\n max_word = len(w)\n return max_word\n\nl = ['uydfguyfg', 'ffefu', \"kuurhr\", \"hggug\", \"hgrhggrel\"]\n\nfind_longest_word(l)", "Write a function filter_long_words() that takes a list of words and an integer n and returns the list of words that are longer than n.", "words_list = ['banana', 'apple', 'orange', 'elephant', 'raspberry']\nmin_length = 7\n# with list comprehension: elegant and concise\n[word for word in words_list if len(word) >= min_length]\n\n# with a function: more explicit and simpler\ndef filter_long_words(words_list, min_length):\n output = []\n for word in words_list:\n if len(word) >= min_length:\n output.append(word)\n return output\n\nfilter_long_words(words_list, 7)", "Define a function generate_n_chars() that takes an integer n and a character c and returns a string, n characters long, consisting only of the chosen character. For example, generate_n_chars(5,\"x\") should return the string \"xxxxx\".", "def generate_n_chars(n, c):\n return c * n\n\ngenerate_n_chars(5, 'x')", "Write a program that takes list of words and returns a dictionary with the words as keys and their length as values.", "def generate_dictionary(words):\n dictionary = {}\n for word in words:\n dictionary[word] = len(word)\n return dictionary\n\ngenerate_dictionary(['python', 'blast', 'banana'])", "A pangram is a sentence that contains all the letters of the English alphabet at least once, for example: The quick brown fox jumps over the lazy dog. 
Your task here is to write a function to check a sentence to see if it is a pangram or not.", "# you can create a set from a string\nset('abcdef')\n\na = set('abc')\na.issubset(set('abcd'))\n\ndef check_pangram(sentence):\n letters = set('abcdefghijklmnopqrstuvwxyz')\n found = set()\n for char in sentence.lower():\n found.add(char)\n # check whether every english letter is among the found characters\n if letters <= found:\n return True\n else:\n return False\n\ncheck_pangram('The quick brown fox jumps over the lazy dog')", "\"99 Bottles of Beer\" is a traditional song in the United States and Canada. It is popular to sing on long trips, as it has a very repetitive format which is easy to memorize, and can take a long time to sing. The song's simple lyrics are as follows:\n99 bottles of beer on the wall, 99 bottles of beer.\nTake one down, pass it around, 98 bottles of beer on the wall.\n\nThe same verse is repeated, each time with one fewer bottle. The song is completed when the singer or singers reach zero.\nYour task here is to write a function capable of generating all the verses of the song.", "def sing():\n verse1 = '{0} bottles of beer on the wall, {0} bottles of beer.'\n verse2 = 'Take one down, pass it around, {0} bottles of beer on the wall.'\n bottles = 99\n while bottles > 0:\n print(verse1.format(bottles))\n print(verse2.format(bottles-1))\n bottles -= 1\n\nsing()", "Write a function char_freq() that takes a string and builds a frequency listing of the characters contained in it. Represent the frequency listing as a dictionary. 
Try it with something like char_freq(\"abbabcbdbabdbdbabababcbcbab\").", "def char_freq(word):\n dictionary = {}\n for letter in word:\n if letter not in dictionary:\n dictionary[letter] = 1\n else:\n dictionary[letter] += 1\n return dictionary\nchar_freq('abbabcbdbabdbdbabababcbcbab')\n\ndef char_freq(word):\n dictionary = {}\n for letter in set(word):\n dictionary[letter] = word.count(letter)\n return dictionary\nchar_freq('abbabcbdbabdbdbabababcbcbab')", "Slightly more difficult ones\nWrite a function that will calculate the average word length of a text stored in a file (i.e. the sum of all the lengths of the word tokens in the text, divided by the number of word tokens). Add an option to exclude blank lines or chapter headers from the computation.\nUse the aristotle.txt file contained in the data directory", "def average_word_length(input_file, skip=False):\n words = []\n for line in open(input_file, 'r'):\n # remove the newline character\n line = line.rstrip()\n if skip is True:\n # skip blank lines and separator lines made of one repeated character\n if len(set(line)) <= 1:\n continue\n # go word by word\n for word in line.split():\n # skip \"empty\" words\n if len(word) == 0:\n continue\n words.append(len(word))\n total_words_length = 0\n for word_len in words:\n total_words_length += word_len\n return float(total_words_length)/len(words)\n\naverage_word_length('../data/aristotle.txt')\n\naverage_word_length('../data/aristotle.txt', skip=True)", "If all the previous exercises sounded boring to you\nAn anagram is a type of word play, the result of rearranging the letters of a word or phrase to produce a new word or phrase, using all the original letters exactly once; e.g., orchestra = carthorse. 
Using the word list in the unixdict.txt file (data directory), write a program that finds the largest sets of words that share the same characters.", "def anagrams(infile):\n words = {}\n for line in open(infile):\n word = line.rstrip()\n # dictionary is KEY:VALUE\n # key is the tuple of unique letters\n # value is a set of words\n words[tuple(set(word))] = words.get(tuple(set(word)), set())\n words[tuple(set(word))].add(word)\n # return the longest set of words\n longest = set()\n for key, value in words.items():\n if len(value) > len(longest):\n longest = value\n return longest\n\ndef anagrams(infile):\n words = {}\n for line in open(infile):\n word = line.rstrip()\n # dictionary is KEY:VALUE\n # key is the tuple of unique letters\n # value is a set of words\n words[tuple(set(word))] = words.get(tuple(set(word)), set())\n words[tuple(set(word))].add(word)\n # return the longest set of words\n return sorted(words.values(), key=lambda x: len(x))[-1]\n\nanagrams('../data/unixdict.txt')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
OpenWeavers/openanalysis
doc/Langauge/15 - Exception and Exception handling.ipynb
gpl-3.0
[ "Exceptions\nIn an ideal situation, our program runs smoothly without any errors. However it is not always the case. Errors may be due to developer's fault or programmer's mistake or of computer. Source of some errors might be hard to undertsand. However it is the task of Good Programmer to handle all kinds of errors that might occur in his program. If some error condition escapes from the developer and user catches it, It is a bug in the program. Developers must update the programs periodically to fix the bugs in the software. You may remember that recent ransomware attack which caused the loss of enormous amount of data, was due to a bug in Microsoft Windows.\nFacing a first exception\nLet's write a lambda to divide 2 numbers", "div = lambda x,y : x/y\n\ndiv(8,2)\n\ndiv(0/0)", "Oh No!... It was a error. Let's handle it.\ntry-except-finally\ntry-except-finally provides an easy way to handle errors that can arise during program execution. It works similar to try-catch-finally blocks in Java and C#\nSyntax:\npython\n try:\n &lt;statement 1&gt;\n &lt;statement 2&gt;\n ...\n &lt;statement n&gt;\n except (Exception List): # Refer note\n &lt;statement 1&gt;\n &lt;statement 2&gt;\n ...\n &lt;statement n&gt;\n finally:\n &lt;cleanup 1&gt;\n &lt;cleanup 2&gt;\n ...\n &lt;cleanup n&gt;\n<div class=\"alert alert-info\">\n**Note:**\n\n\n- `finally` block is optional\n- If Exception List is empty all exceptions are handled by `except` block\n- If catching a single exception, it can be referred with its name.\n\n```python\n except RangeError as e:\n <do-something-with-e>\n```\n- Base Exception classes must be captured at last, if catching exceptions in hierarchy\n\n</div>\n\ndiv with exception handling", "def div_good(x,y):\n try:\n return x/y\n except ZeroDivisionError:\n print(\"Division by zero\")\n\ndiv_good(8,2)\n\ndiv_good(0,0)", "Note how the exception was handled\nCleaning the things up\nIn this version of div, we will return a NaN if a ZeroDivisionError occures. 
'NaN' stands for Not a Number; 'Inf' refers to infinity", "def div_clean(x,y):\n try:\n value = x/y\n except ZeroDivisionError:\n value = float('NaN')\n return value\n\ndiv_clean(4,3)\n\ndiv_clean(8,0)", "Raising Exceptions\nThe raise statement allows the programmer to force a specified exception to occur. For example:", "raise NameError('HiThere')", "The sole argument to raise indicates the exception to be raised. This must be either an exception instance or an exception class (a class that derives from Exception). If an exception class is passed, it will be implicitly instantiated by calling its constructor with no arguments:", "raise ValueError # shorthand for 'raise ValueError()'", "If you need to determine whether an exception was raised but don’t intend to handle it, a simpler form of the raise statement allows you to re-raise the exception:", "try:\n raise NameError('HiThere')\nexcept NameError:\n print('An exception flew by!')\n raise", "User-defined Exceptions\nPrograms may name their own exceptions by creating a new exception class. Exceptions should typically be derived from the Exception class, either directly or indirectly.\nException classes can be defined which do anything any other class can do, but are usually kept simple, often only offering a number of attributes that allow information about the error to be extracted by handlers for the exception. 
When creating a module that can raise several distinct errors, a common practice is to create a base class for exceptions defined by that module, and subclass that to create specific exception classes for different error conditions:", "class Error(Exception):\n \"\"\"Base class for exceptions in this module.\"\"\"\n pass\n\nclass InputError(Error):\n \"\"\"Exception raised for errors in the input.\n\n Attributes:\n expression -- input expression in which the error occurred\n message -- explanation of the error\n \"\"\"\n\n def __init__(self, expression, message):\n self.expression = expression\n self.message = message\n\nclass TransitionError(Error):\n \"\"\"Raised when an operation attempts a state transition that's not\n allowed.\n\n Attributes:\n previous -- state at beginning of transition\n next -- attempted new state\n message -- explanation of why the specific transition is not allowed\n \"\"\"\n\n def __init__(self, previous, next, message):\n self.previous = previous\n self.next = next\n self.message = message", "Most exceptions are defined with names that end in Error, similar to the naming of the standard exceptions." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CristinaFoltea/pythonD3
IPythonD3.ipynb
bsd-2-clause
[ "IPython & D3\nLet's start with a few techniques for working with data in ipython and then build a d3 network graph.", "# import requirments \nfrom IPython.display import Image\nfrom IPython.display import display\nfrom IPython.display import HTML\nfrom datetime import *\nimport json\nfrom copy import *\nfrom pprint import *\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport json\nfrom ggplot import *\nimport networkx as nx\nfrom networkx.readwrite import json_graph\n#from __future__ import http_server\nfrom BaseHTTPServer import BaseHTTPRequestHandler\nfrom IPython.display import IFrame\nimport rpy2\n%load_ext rpy2.ipython\n%R require(\"ggplot2\")\n% matplotlib inline\nrandn = np.random.randn", "JS with IPython?\nThe nice thing about IPython is that we can write in almost any lanaguage. For example, we can use javascript below and pull in the D3 library.", "%%javascript\nrequire.config({\n paths: {\n //d3: \"http://d3js.org/d3.v3.min\" //<-- url \n d3: 'd3/d3.min.js' //<-- local path \n }\n});", "Python data | D3 Viz\nA basic method is to serialze your results and then render html that pulls in the data. In this example, we save a json file and then load the html doc in an IFrame. We're now using D3 in ipython! \nThe example below is adapted from: \n* Hagberg, A & Schult, D. & Swart, P. Networkx (2011). 
Github repository, https://github.com/networkx/networkx/tree/master/examples/javascript/force", "import json\nimport networkx as nx\nfrom networkx.readwrite import json_graph\nfrom IPython.display import IFrame\n\n\nG = nx.barbell_graph(6,3)\n# this d3 example uses the name attribute for the mouse-hover value,\n# so add a name to each node\nfor n in G:\n G.node[n]['name'] = n\n\n# write json formatted data\nd = json_graph.node_link_data(G) # node-link format to serialize\n\n# write json\njson.dump(d, open('force/force.json','w'))\n\n# render html inline\nIFrame('force/force.html', width=700, height=350)\n#print('Or copy all files in force/ to webserver and load force/force.html')", "Passing data from IPython to JS\nLet's create some random numbers and render them in js (see the stackoverflow explanation and discussion).", "from IPython.display import Javascript\n\nimport numpy as np\nmu, sig = 0.05, 0.2\nrnd = np.random.normal(loc=mu, scale=sig, size=4)\n\n## Use the variable rnd above in Javascript:\n\njavascript = 'element.append(\"{}\");'.format(str(rnd))\n\nJavascript(javascript)", "Passing data from JS to IPython\nWe can also interact with js to define python variables (see this example).", "from IPython.display import HTML\n\ninput_form = \"\"\"\n<div style=\"background-color:gainsboro; border:solid black; width:300px; padding:20px;\">\nName: <input type=\"text\" id=\"var_name\" value=\"foo\"><br>\nValue: <input type=\"text\" id=\"var_value\" value=\"bar\"><br>\n<button onclick=\"set_value()\">Set Value</button>\n</div>\n\"\"\"\n\njavascript = \"\"\"\n<script type=\"text/Javascript\">\n function set_value(){\n var var_name = document.getElementById('var_name').value;\n var var_value = document.getElementById('var_value').value;\n var command = var_name + \" = '\" + var_value + \"'\";\n console.log(\"Executing Command: \" + command);\n \n var kernel = IPython.notebook.kernel;\n kernel.execute(command);\n }\n</script>\n\"\"\"\n\nHTML(input_form + javascript)", 
"Click \"Set Value\" then run the cell below.", "print foo", "Custom D3 module.\nNow we're having fun. The simplicity of this process wins. We can pass data to javascript via a module called visualize that contains an attribute plot_circle, which uses jinja to render our js template. The advantage of using jinja to read our html is apparent: we can pass variables directly from python!", "from pythonD3 import visualize\ndata = [{'x': 10, 'y': 20, 'r': 15, 'name': 'circle one'}, \n {'x': 40, 'y': 40, 'r': 5, 'name': 'circle two'},\n {'x': 20, 'y': 30, 'r': 8, 'name': 'circle three'},\n {'x': 25, 'y': 10, 'r': 10, 'name': 'circle four'}]\n\nvisualize.plot_circle(data, id=2)\n\nvisualize.plot_chords(id=5)", "What's next?\nNow we just need to learn how to write useful javacript!\n<br>\n(enter ...the Jason)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nholtz/structural-analysis
Devel/Old/frame2d-v03/example-frame-1.ipynb
cc0-1.0
[ "Matrix Methods Example - Frame 1\nThis is the same frame as solved using the method of slope-deflection\nhere. All of the data are provided\nin CSV form directly in the cells, below.", "from __future__ import division, print_function\nfrom IPython import display\nimport salib.nbloader # so that we can directly import other notebooks\n\nimport Frame2D_v03 as f2d", "", "frame = f2d.Frame2D()\n\n%%frame_data frame nodes\nID,X,Y\na,0,0\nb,0,3000\nc,6000,3000\nd,6000,1000\n\n%%frame_data frame members\nID,NODEJ,NODEK\nab,a,b\nbc,b,c\ncd,c,d\n\n%%frame_data frame supports\nID,C0,C1,C2\na,FX,FY,\nd,FX,FY,MZ\n\n%%frame_data frame releases\nID,R", "Use very large areas so that axial deformations will be very small so as to more closely replicate\nthe slope deflection analysis.", "%%frame_data frame properties\nID,SIZE,Ix,A\nbc,,200E6,100E10\nab,,100E6,\ncd,,,\n\n%%frame_data frame node_loads\nID,DIRN,F\nb,FX,60000\n\n%%frame_data frame member_loads\nID,TYPE,W1,W2,A,B,C\nbc,UDL,-36,,,,\n\nframe.doall()", "Compare to slope-deflection solution:\nThe solutions as given in the\nslope-deflection example:\nMember end forces:\n{'Mab': 0,\n 'Mba': 54.36,\n 'Mbc': -54.36,\n 'Mcb': 97.02,\n 'Mcd': -97.02,\n 'Mdc': -59.22,\n 'Vab': -18.12,\n 'Vdc': 78.12}\nExcept for a sign change, these seem consistent. We might have a different sign convention here - I'll check into that.\nReactions:\n[v.subs(soln).n(4) for v in [Ra,Ha,Rd,Hd,Md]]\n [100.9,18.12,115.1,−78.12,−59.22]\nand except for sign, these are OK as well.\nAs for deflection, in $kN m^2$, the product $EI$ used here is:", "EI = 200000. * 100E6 / (1000*1000**2)\nEI", "and the lateral deflection of nodes $b$ and $c$ computed by slope-deflection, in $mm$, was:", "(3528/(247*EI)) * 1000", "which agrees with the displayed result, above. 
(note that units in the slope-deflection example were\n$kN$ and $m$ and here they are $N$ and $mm$).\n$P-\\Delta$ Analysis\nWe wouldn't expect much difference as sidesway is pretty small:", "frame.doall(pdelta=True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
probml/pyprobml
notebooks/book1/14/conv2d_torch.ipynb
mit
[ "Please find jax implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/14/conv2d_jax.ipynb\nFoundations of Convolutional neural nets\nBased on sec 6.2 of\nhttp://d2l.ai/chapter_convolutional-neural-networks/conv-layer.html", "import numpy as np\nimport matplotlib.pyplot as plt\n\nnp.random.seed(seed=1)\nimport math\n\ntry:\n import torch\nexcept ModuleNotFoundError:\n %pip install -qq torch\n import torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\n!mkdir figures # for saving plots\n\nimport warnings\n\nwarnings.filterwarnings(\"ignore\")\n\n# For reproducibility on different runs\ntorch.backends.cudnn.deterministic = True\ntorch.manual_seed(hash(\"by removing stochasticity\") % 2**32 - 1)\ntorch.cuda.manual_seed_all(hash(\"so runs are repeatable\") % 2**32 - 1)", "Cross correlation\n<img src=\"https://github.com/probml/pyprobml/blob/master/images/d2l-correlation.png?raw=true\" height=200>\n<img src=\"https://github.com/probml/probml-notebooks/blob/main/images/d2l-correlation.png?raw=true\" height=200>", "# Cross correlation\n\n\ndef corr2d(X, K):\n \"\"\"Compute 2D cross-correlation.\"\"\"\n h, w = K.shape\n Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))\n for i in range(Y.shape[0]):\n for j in range(Y.shape[1]):\n Y[i, j] = (X[i : i + h, j : j + w] * K).sum()\n return Y\n\n\nX = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])\nK = torch.tensor([[0.0, 1.0], [2.0, 3.0]])\nprint(corr2d(X, K))", "Edge detection\nWe make a small image X of 1s, with a vertical stripe (of width 4) of 0s in the middle.", "X = torch.ones((6, 8))\nX[:, 2:6] = 0\nX", "Now we apply a vertical edge detector. 
It fires on the 1-0 and 0-1 boundaries.", "K = torch.tensor([[1.0, -1.0]])\nY = corr2d(X, K)\nprint(Y)", "It fails to detect horizontal edges.", "corr2d(X.t(), K)", "Convolution as matrix multiplication", "# K = torch.tensor([[0, 1], [2, 3]])\nK = torch.tensor([[1, 2], [3, 4]])\n\nprint(K)\n\n\ndef kernel2matrix(K):\n k, W = torch.zeros(5), torch.zeros((4, 9))\n k[:2], k[3:5] = K[0, :], K[1, :]\n W[0, :5], W[1, 1:6], W[2, 3:8], W[3, 4:] = k, k, k, k\n return W\n\n\nW = kernel2matrix(K)\nprint(W)\n\nX = torch.arange(9.0).reshape(3, 3)\nY = corr2d(X, K)\nprint(Y)\n\nY2 = torch.mv(W, X.reshape(-1)).reshape(2, 2)\nassert np.allclose(Y, Y2)", "Optimizing the kernel parameters\nLet's learn a kernel to match the output of our manual edge detector.", "# Construct a two-dimensional convolutional layer with 1 output channel and a\n# kernel of shape (1, 2). For the sake of simplicity, we ignore the bias here\nconv2d = nn.Conv2d(1, 1, kernel_size=(1, 2), bias=False)\n\n# The two-dimensional convolutional layer uses four-dimensional input and\n# output in the format of (example channel, height, width), where the batch\n# size (number of examples in the batch) and the number of channels are both 1\n# Defining X and Y again.\nX = torch.ones((6, 8))\nX[:, 2:6] = 0\n\nK = torch.tensor([[1.0, -1.0]])\nY = corr2d(X, K)\n\nX = X.reshape((1, 1, 6, 8))\nY = Y.reshape((1, 1, 6, 7))\n\nfor i in range(10):\n Y_hat = conv2d(X)\n l = (Y_hat - Y) ** 2\n conv2d.zero_grad()\n l.sum().backward()\n # Update the kernel\n conv2d.weight.data[:] -= 3e-2 * conv2d.weight.grad\n if (i + 1) % 2 == 0:\n print(f\"batch {i + 1}, loss {l.sum():.3f}\")\n\nprint(conv2d.weight.data.reshape((1, 2)))", "Multiple input channels\n<img src=\"https://github.com/probml/probml-notebooks/blob/main/images/d2l-conv-multi-in.png?raw=true\" height=200>", "def corr2d(X, K):\n \"\"\"Compute 2D cross-correlation.\"\"\"\n h, w = K.shape\n Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))\n for i in range(Y.shape[0]):\n 
for j in range(Y.shape[1]):\n Y[i, j] = torch.sum((X[i : i + h, j : j + w] * K))\n return Y\n\ndef corr2d_multi_in(X, K):\n # First, iterate through the 0th dimension (channel dimension) of `X` and\n # `K`. Then, add them together\n return sum(corr2d(x, k) for x, k in zip(X, K))\n\n\nX = torch.tensor(\n [[[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]], [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]\n)\nK = torch.tensor([[[0.0, 1.0], [2.0, 3.0]], [[1.0, 2.0], [3.0, 4.0]]])\n\nprint(X.shape) # 2 channels, each 3x3\nprint(K.shape) # 2 sets of 2x2 filters\nout = corr2d_multi_in(X, K)\nprint(out.shape)\nprint(out)", "Multiple output channels", "def corr2d_multi_in_out(X, K):\n # Iterate through the 0th dimension of `K`, and each time, perform\n # cross-correlation operations with input `X`. All of the results are\n # stacked together\n return torch.stack([corr2d_multi_in(X, k) for k in K], 0)\n\n\nK = torch.stack((K, K + 1, K + 2), 0)\nprint(K.shape)\nout = corr2d_multi_in_out(X, K)\nprint(out.shape)", "1x1 convolution", "# 1x1 conv is same as multiplying each feature column at each pixel\n# by a fully connected matrix\ndef corr2d_multi_in_out_1x1(X, K):\n c_i, h, w = X.shape\n c_o = K.shape[0]\n X = X.reshape((c_i, h * w))\n K = K.reshape((c_o, c_i))\n Y = torch.matmul(K, X) # Matrix multiplication in the fully-connected layer\n return Y.reshape((c_o, h, w))\n\n\nX = torch.normal(0, 1, (3, 3, 3)) # 3 channels per pixel\nK = torch.normal(0, 1, (2, 3, 1, 1)) # map from 3 channels to 2\n\nY1 = corr2d_multi_in_out_1x1(X, K)\nY2 = corr2d_multi_in_out(X, K)\nprint(Y2.shape)\nassert float(torch.abs(Y1 - Y2).sum()) < 1e-6", "Pooling", "def pool2d(X, pool_size, mode=\"max\"):\n p_h, p_w = pool_size\n Y = torch.zeros((X.shape[0] - p_h + 1, X.shape[1] - p_w + 1))\n for i in range(Y.shape[0]):\n for j in range(Y.shape[1]):\n if mode == \"max\":\n Y[i, j] = X[i : i + p_h, j : j + p_w].max()\n elif mode == \"avg\":\n Y[i, j] = X[i : i + p_h, j : j + p_w].mean()\n return 
Y\n\n# X = torch.arange(16, dtype=torch.float32).reshape((1, 1, 4, 4))\nX = torch.arange(16, dtype=torch.float32).reshape((4, 4))\nprint(X)\nprint(X.shape)\nprint(pool2d(X, (3, 3), \"max\"))\n\nX = torch.arange(16, dtype=torch.float32).reshape((1, 1, 4, 4))\npool2d = nn.MaxPool2d(3, padding=0, stride=1)\nprint(pool2d(X))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mtchem/ETL-MarchMadness-data
data_to_SQLite_DB.ipynb
mit
[ "This notebook will walk you through each step of creating a SQLite Database for the kaggle.com March Machine Learning Mania 2017 competition.\nIn order to use this notebook you need to obtain the data from kaggle.com.\nCreate a Database\n1. Download DB Browser for SQLite at http://sqlitebrowser.org/ \n2. Open the program and make a New Database, call it Data.db\n\nImport the sqlite3 and pandas modules", "import sqlite3 as sql\nimport pandas as pd", "Make a connection to your database", "# make a variable containing the file path (as a raw string) to your data.db file, below is an example\ndb = r'C:\\Users\\...\\data.db'\n# the variable conn contains the connection to your database\nconn = sql.connect(db)", "Create pandas dataframes for each .csv data file provided by kaggle.com, then add each dataframe to the database as a table.", "# variables containing the file paths to the .csv data (this will vary depending on where you saved the files)\nfile1 = r'C:\\Users\\user\\Desktop\\Teams.csv'\nfile2 = r'C:\\Users\\user\\Desktop\\Seasons.csv'\nfile3 = r'C:\\Users\\user\\Desktop\\RegularSeasonCompactResults.csv'\nfile4 = r'C:\\Users\\user\\Desktop\\RegularSeasonDetailedResults.csv'\nfile5 = r'C:\\Users\\user\\Desktop\\TourneyCompactResults.csv'\nfile6 = r'C:\\Users\\user\\Desktop\\TourneyDetailedResults.csv'\nfile7 = r'C:\\Users\\user\\Desktop\\TourneySeeds.csv'\nfile8 = r'C:\\Users\\user\\Desktop\\TourneySlots.csv'\n\n\n# list of files\nfiles = [file1, file2, file3, file4, file5, file6, file7, file8]\n# list of the database table names\ntable_names = ['Teams','Seasons','RegularSeasonCompactResults','RegularSeasonDetailedResults',\n    'TourneyCompactResults', 'TourneyDetailedResults','TourneySeeds','TourneySlots']\n\n# adds the csv data to the database as tables\nfor i in range(0,8):\n    # create a dataframe for each of the .csv files\n    file_name = files[i]\n    df = pd.read_csv(file_name)\n    # adds the dataframe to the database\n    table_name = table_names[i]\n    df.to_sql(table_name, conn, if_exists='append', index=False)\n\n# closes the connection to the database\nconn.close() ", "That's it! You did it!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
statsmodels/statsmodels.github.io
v0.13.2/examples/notebooks/generated/theta-model.ipynb
bsd-3-clause
[ "The Theta Model\nThe Theta model of Assimakopoulos & Nikolopoulos (2000) is a simple method for forecasting that involves fitting two $\\theta$-lines, forecasting the lines using a Simple Exponential Smoother, and then combining the forecasts from the two lines to produce the final forecast. The model is implemented in steps:\n\nTest for seasonality\nDeseasonalize if seasonality detected\nEstimate $\\alpha$ by fitting a SES model to the data and $b_0$ by OLS.\nForecast the series\nReseasonalize if the data was deseasonalized.\n\nThe seasonality test examines the ACF at the seasonal lag $m$. If this lag is significantly different from zero then the data is deseasonalized using statsmodels.tsa.seasonal_decompose with either a multiplicative (default) or additive method. \nThe parameters of the model are $b_0$ and $\\alpha$ where $b_0$ is estimated from the OLS regression\n$$\nX_t = a_0 + b_0 (t-1) + \\epsilon_t\n$$\nand $\\alpha$ is the SES smoothing parameter in\n$$\n\\tilde{X}_t = (1-\\alpha) X_t + \\alpha \\tilde{X}_{t-1}\n$$\nThe forecasts are then \n$$\n  \\hat{X}_{T+h|T} = \\frac{\\theta-1}{\\theta} \\hat{b}_0\n    \\left[h - 1 + \\frac{1}{\\hat{\\alpha}}\n    - \\frac{(1-\\hat{\\alpha})^T}{\\hat{\\alpha}} \\right]\n  + \\tilde{X}_{T+h|T}\n$$\nUltimately $\\theta$ only plays a role in determining how much the trend is damped. If $\\theta$ is very large, then the forecast of the model is identical to that from an Integrated Moving Average with a drift,\n$$\nX_t = X_{t-1} + b_0 + (\\alpha-1)\\epsilon_{t-1} + \\epsilon_t.\n$$\nFinally, the forecasts are reseasonalized if needed.\nThis module is based on:\n\nAssimakopoulos, V., & Nikolopoulos, K. (2000). The theta model: a decomposition\n  approach to forecasting. International Journal of Forecasting, 16(4), 521-530.\nHyndman, R. J., & Billah, B. (2003). Unmasking the Theta method.\n  International Journal of Forecasting, 19(2), 287-290.\nFioruci, J. A., Pellegrini, T. R., Louzada, F., & Petropoulos, F.\n  (2015). The optimized theta method. arXiv preprint arXiv:1503.03529.\n\nImports\nWe start with the standard set of imports and some tweaks to the default matplotlib style.", "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport pandas_datareader as pdr\nimport seaborn as sns\n\nplt.rc(\"figure\", figsize=(16, 8))\nplt.rc(\"font\", size=15)\nplt.rc(\"lines\", linewidth=3)\nsns.set_style(\"darkgrid\")", "Load some Data\nWe will first look at housing starts using US data. This series is clearly seasonal but does not have a clear trend during this period.", "reader = pdr.fred.FredReader([\"HOUST\"], start=\"1980-01-01\", end=\"2020-04-01\")\ndata = reader.read()\nhousing = data.HOUST\nhousing.index.freq = housing.index.inferred_freq\nax = housing.plot()", "We specify the model without any options and fit it. The summary shows that the data was deseasonalized using the multiplicative method. The drift is modest and negative, and the smoothing parameter is fairly low.", "from statsmodels.tsa.forecasting.theta import ThetaModel\n\ntm = ThetaModel(housing)\nres = tm.fit()\nprint(res.summary())", "The model is first and foremost a forecasting method. Forecasts are produced using the forecast method from the fitted model. Below we produce a hedgehog plot by forecasting 2 years ahead every 2 years. \nNote: the default $\\theta$ is 2.", "forecasts = {\"housing\": housing}\nfor year in range(1995, 2020, 2):\n    sub = housing[: str(year)]\n    res = ThetaModel(sub).fit()\n    fcast = res.forecast(24)\n    forecasts[str(year)] = fcast\nforecasts = pd.DataFrame(forecasts)\nax = forecasts[\"1995\":].plot(legend=False)\nchildren = ax.get_children()\nchildren[0].set_linewidth(4)\nchildren[0].set_alpha(0.3)\nchildren[0].set_color(\"#000000\")\nax.set_title(\"Housing Starts\")\nplt.tight_layout(pad=1.0)", "We could alternatively fit the log of the data. Here it makes more sense to force the deseasonalizing to use the additive method, if needed. We also fit the model parameters using MLE. This method fits the IMA\n$$ X_t = X_{t-1} + \\gamma\\epsilon_{t-1} + \\epsilon_t $$\nwhere $\\hat{\\alpha} = \\min(\\hat{\\gamma}+1, 0.9998)$ using statsmodels.tsa.SARIMAX. The parameters are similar although the drift is closer to zero.", "tm = ThetaModel(np.log(housing), method=\"additive\")\nres = tm.fit(use_mle=True)\nprint(res.summary())", "The forecast only depends on the forecast trend component,\n$$\n\\hat{b}_0\n    \\left[h - 1 + \\frac{1}{\\hat{\\alpha}}\n    - \\frac{(1-\\hat{\\alpha})^T}{\\hat{\\alpha}} \\right],\n$$\nthe forecast from the SES (which does not change with the horizon), and the seasonal component. These three components are available using the forecast_components method. This allows forecasts to be constructed using multiple choices of $\\theta$ using the weight expression above.", "res.forecast_components(12)", "Personal Consumption Expenditure\nWe next look at personal consumption expenditure. This series has a clear seasonal component and a drift.", "reader = pdr.fred.FredReader([\"NA000349Q\"], start=\"1980-01-01\", end=\"2020-04-01\")\npce = reader.read()\npce.columns = [\"PCE\"]\npce.index.freq = \"QS-OCT\"\n_ = pce.plot()", "Since this series is always positive, we model the $\\ln$.", "mod = ThetaModel(np.log(pce))\nres = mod.fit()\nprint(res.summary())", "Next we explore differences in the forecast as $\\theta$ changes. When $\\theta$ is close to 1, the drift is nearly absent. As $\\theta$ increases, the drift becomes more obvious.", "forecasts = pd.DataFrame(\n    {\n        \"ln PCE\": np.log(pce.PCE),\n        \"theta=1.2\": res.forecast(12, theta=1.2),\n        \"theta=2\": res.forecast(12),\n        \"theta=3\": res.forecast(12, theta=3),\n        \"No damping\": res.forecast(12, theta=np.inf),\n    }\n)\n_ = forecasts.tail(36).plot()\nplt.title(\"Forecasts of ln PCE\")\nplt.tight_layout(pad=1.0)", "Finally, plot_predict can be used to visualize the predictions and prediction intervals which are constructed assuming the IMA is true.", "ax = res.plot_predict(24, theta=2)", "We conclude by producing a hedgehog plot using 3-year non-overlapping samples.", "ln_pce = np.log(pce.PCE)\nforecasts = {\"ln PCE\": ln_pce}\nfor year in range(1995, 2020, 3):\n    sub = ln_pce[: str(year)]\n    res = ThetaModel(sub).fit()\n    fcast = res.forecast(12)\n    forecasts[str(year)] = fcast\nforecasts = pd.DataFrame(forecasts)\nax = forecasts[\"1995\":].plot(legend=False)\nchildren = ax.get_children()\nchildren[0].set_linewidth(4)\nchildren[0].set_alpha(0.3)\nchildren[0].set_color(\"#000000\")\nax.set_title(\"ln PCE\")\nplt.tight_layout(pad=1.0)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/tensorflow_extended/labs/reading_data_from_bigquery_with_TFX_and_vertex_pipelines.ipynb
apache-2.0
[ "Reading Data from BigQuery with TFX and Vertex Pipelines\nLearning objectives\n\nSet up variables.\nCreate a pipeline.\nRun the pipeline on Vertex Pipelines.\n\nIntroduction\nIn this notebook, you will use the BigQueryExampleGen component which reads data from BigQuery into TFX pipelines. This notebook-based tutorial will use Google Cloud BigQuery as a data source to train an ML model. The ML pipeline will be constructed using TFX and run on Google Cloud Vertex Pipelines.\nThis notebook is based on the TFX pipeline you built in Simple TFX Pipeline for Vertex Pipelines Tutorial. If you have not read that tutorial yet, you should read it before proceeding with this notebook.\nBigQuery is a serverless, highly scalable, and cost-effective multi-cloud data warehouse designed for business agility. TFX can be used to read training data from BigQuery and to\npublish the trained model to BigQuery.\nSet up\nIf you have completed\nSimple TFX Pipeline for Vertex Pipelines Tutorial,\nyou will have a working GCP project and a GCS bucket and that is all you need\nfor this notebook. Please read the preliminary tutorial first if you missed it.\nNote: By default, Vertex Pipelines uses the default GCE VM service account of\nformat [project-number]-compute@developer.gserviceaccount.com. You need to\ngrant this account permission to use BigQuery so that it can access BigQuery in the\npipeline. 
Add BigQuery User role to the account.\nPlease see\nVertex documentation\nto learn more about service accounts and IAM configuration.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.\nInstall Python packages\nYou will install the required Python packages, including TFX and KFP, to author ML\npipelines and submit jobs to Vertex Pipelines.", "# Use the latest version of pip.\n!pip install --upgrade pip\n!pip install --upgrade \"tfx[kfp]<2\"", "Restart the kernel\nPlease ignore any incompatibility warnings and errors. Restart the kernel to use updated packages. (On the Notebook menu, select Kernel > Restart Kernel > Restart).\nCheck the package versions.", "import tensorflow as tf\nprint('TensorFlow version: {}'.format(tf.__version__))\nfrom tfx import v1 as tfx\nprint('TFX version: {}'.format(tfx.__version__))\nimport kfp\nprint('KFP version: {}'.format(kfp.__version__))", "Set up variables\nYou will set up some variables used to customize the pipelines below. The following\ninformation is required:\n\nGCP Project id and number. See\nIdentifying your project id and number.\nGCP Region to run pipelines. 
For more information about the regions that\nVertex Pipelines is available in, see the\nVertex AI locations guide.\nGoogle Cloud Storage Bucket to store pipeline outputs.\n\nEnter required values in the cell below before running it.", "GOOGLE_CLOUD_PROJECT = '' # <--- ENTER THIS\nGOOGLE_CLOUD_PROJECT_NUMBER = '' # <--- ENTER THIS\nGOOGLE_CLOUD_REGION = '' # <--- ENTER THIS\nGCS_BUCKET_NAME = '' # <--- ENTER THIS\n\nif not (GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_PROJECT_NUMBER and GOOGLE_CLOUD_REGION and GCS_BUCKET_NAME):\n from absl import logging\n logging.error('Please set all required parameters.')", "Set gcloud to use your project.", "!gcloud config set project {GOOGLE_CLOUD_PROJECT}\n\nPIPELINE_NAME = 'penguin-bigquery'\n\n# Path to various pipeline artifact.\nPIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(\n GCS_BUCKET_NAME, PIPELINE_NAME)\n\n# Paths for users' Python module.\nMODULE_ROOT = 'gs://{}/pipeline_module/{}'.format(\n GCS_BUCKET_NAME, PIPELINE_NAME)\n\n# Paths for users' data.\nDATA_ROOT = # TODO 1: Your code here\n\n# This is the path where your model will be pushed for serving.\nSERVING_MODEL_DIR = 'gs://{}/serving_model/{}'.format(\n GCS_BUCKET_NAME, PIPELINE_NAME)\n\nprint('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))", "Create a pipeline\nTFX pipelines are defined using Python APIs as you did in\nSimple TFX Pipeline for Vertex Pipelines Tutorial.\nYou previously used CsvExampleGen which reads data from a CSV file. In this\nnotebook, you will use\nBigQueryExampleGen\ncomponent which reads data from BigQuery.\nPrepare BigQuery query\nYou will use the same\nPalmer Penguins dataset. However, you will read it from a BigQuery table\ntfx-oss-public.palmer_penguins.palmer_penguins which is populated using the\nsame CSV file.", "%%bigquery --project {GOOGLE_CLOUD_PROJECT}\nSELECT *\nFROM `tfx-oss-public.palmer_penguins.palmer_penguins`\nLIMIT 5", "All features were already normalized to 0~1 except species which is the\nlabel. 
You will build a classification model which predicts the species of\npenguins.\nBigQueryExampleGen requires a query to specify which data to fetch. Because\nyou will use all the fields of all rows in the table, the query is quite simple.\nYou can also specify field names and add WHERE conditions as needed according\nto the\nBigQuery Standard SQL syntax.", "QUERY = \"SELECT * FROM `tfx-oss-public.palmer_penguins.palmer_penguins`\"", "Write model code.\nYou will use the same model code as in the\nSimple TFX Pipeline Tutorial.", "_trainer_module_file = 'penguin_trainer.py'\n\n%%writefile {_trainer_module_file}\n\n# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple\n\nfrom typing import List\nfrom absl import logging\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow_transform.tf_metadata import schema_utils\n\nfrom tfx import v1 as tfx\nfrom tfx_bsl.public import tfxio\n\nfrom tensorflow_metadata.proto.v0 import schema_pb2\n\n_FEATURE_KEYS = [\n    'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'\n]\n_LABEL_KEY = 'species'\n\n_TRAIN_BATCH_SIZE = 20\n_EVAL_BATCH_SIZE = 10\n\n# Since you're not generating or creating a schema, you will instead create\n# a feature spec. 
Since there are a fairly small number of features this is\n# manageable for this dataset.\n_FEATURE_SPEC = {\n **{\n feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)\n for feature in _FEATURE_KEYS\n },\n _LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)\n}\n\n\ndef _input_fn(file_pattern: List[str],\n data_accessor: tfx.components.DataAccessor,\n schema: schema_pb2.Schema,\n batch_size: int) -> tf.data.Dataset:\n \"\"\"Generates features and label for training.\n\n Args:\n file_pattern: List of paths or patterns of input tfrecord files.\n data_accessor: DataAccessor for converting input to RecordBatch.\n schema: schema of the input data.\n batch_size: representing the number of consecutive elements of returned\n dataset to combine in a single batch\n\n Returns:\n A dataset that contains (features, indices) tuple where features is a\n dictionary of Tensors, and indices is a single Tensor of label indices.\n \"\"\"\n return data_accessor.tf_dataset_factory(\n file_pattern,\n tfxio.TensorFlowDatasetOptions(\n batch_size=batch_size, label_key=_LABEL_KEY),\n schema=schema).repeat()\n\n\ndef _make_keras_model() -> tf.keras.Model:\n \"\"\"Creates a DNN Keras model for classifying penguin data.\n\n Returns:\n A Keras Model.\n \"\"\"\n # The model below is built with Functional API, please refer to\n # https://www.tensorflow.org/guide/keras/overview for all API options.\n inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]\n d = keras.layers.concatenate(inputs)\n for _ in range(2):\n d = keras.layers.Dense(8, activation='relu')(d)\n outputs = keras.layers.Dense(3)(d)\n\n model = keras.Model(inputs=inputs, outputs=outputs)\n model.compile(\n optimizer=keras.optimizers.Adam(1e-2),\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=[keras.metrics.SparseCategoricalAccuracy()])\n\n model.summary(print_fn=logging.info)\n return model\n\n\n# TFX Trainer will call this function.\ndef run_fn(fn_args: 
tfx.components.FnArgs):\n  \"\"\"Train the model based on given args.\n\n  Args:\n    fn_args: Holds args used to train the model as name/value pairs.\n  \"\"\"\n\n  # This schema is usually either an output of SchemaGen or a manually-curated\n  # version provided by the pipeline author. A schema can also be derived from the\n  # TFT graph if a Transform component is used. In the case when either is missing,\n  # `schema_from_feature_spec` could be used to generate a schema from a very simple\n  # feature_spec, but the schema returned would be very primitive.\n  schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)\n\n  train_dataset = _input_fn(\n      fn_args.train_files,\n      fn_args.data_accessor,\n      schema,\n      batch_size=_TRAIN_BATCH_SIZE)\n  eval_dataset = _input_fn(\n      fn_args.eval_files,\n      fn_args.data_accessor,\n      schema,\n      batch_size=_EVAL_BATCH_SIZE)\n\n  model = _make_keras_model()\n  model.fit(\n      train_dataset,\n      steps_per_epoch=fn_args.train_steps,\n      validation_data=eval_dataset,\n      validation_steps=fn_args.eval_steps)\n\n  # The result of the training should be saved in the `fn_args.serving_model_dir`\n  # directory.\n  model.save(fn_args.serving_model_dir, save_format='tf')", "Copy the module file to GCS so that it can be accessed by the pipeline components.\nBecause model training happens on GCP, you need to upload this model definition.\nOtherwise, you might want to build a container image including the module file\nand use the image to run the pipeline.", "!gsutil cp {_trainer_module_file} {MODULE_ROOT}/", "Write a pipeline definition\nYou will define a function to create a TFX pipeline. You need to use\nBigQueryExampleGen which takes a query as an argument. One more change from\nthe previous notebook is that you need to pass beam_pipeline_args which is\npassed to components when they are executed. 
You will use beam_pipeline_args\nto pass additional parameters to BigQuery.", "from typing import List, Optional\n\ndef _create_pipeline(pipeline_name: str, pipeline_root: str, query: str,\n module_file: str, serving_model_dir: str,\n beam_pipeline_args: Optional[List[str]],\n ) -> tfx.dsl.Pipeline:\n \"\"\"Creates a TFX pipeline using BigQuery.\"\"\"\n\n # NEW: Query data in BigQuery as a data source.\n example_gen = # TODO 2: Your code here\n\n # Uses user-provided Python function that trains a model.\n trainer = tfx.components.Trainer(\n module_file=module_file,\n examples=example_gen.outputs['examples'],\n train_args=tfx.proto.TrainArgs(num_steps=100),\n eval_args=tfx.proto.EvalArgs(num_steps=5))\n\n # Pushes the model to a file destination.\n pusher = tfx.components.Pusher(\n model=trainer.outputs['model'],\n push_destination=tfx.proto.PushDestination(\n filesystem=tfx.proto.PushDestination.Filesystem(\n base_directory=serving_model_dir)))\n\n components = [\n example_gen,\n trainer,\n pusher,\n ]\n\n return tfx.dsl.Pipeline(\n pipeline_name=pipeline_name,\n pipeline_root=pipeline_root,\n components=components,\n # NEW: `beam_pipeline_args` is required to use BigQueryExampleGen.\n beam_pipeline_args=beam_pipeline_args)", "Run the pipeline on Vertex Pipelines.\nYou will use Vertex Pipelines to run the pipeline as you did in\nSimple TFX Pipeline for Vertex Pipelines Tutorial.\nYou also need to pass beam_pipeline_args for the BigQueryExampleGen. It\nincludes configs like the name of the GCP project and the temporary storage for\nthe BigQuery execution.", "import os\n\n# You need to pass some GCP related configs to BigQuery. 
This is currently done\n# using `beam_pipeline_args` parameter.\nBIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS = [\n '--project=' + GOOGLE_CLOUD_PROJECT,\n '--temp_location=' + os.path.join('gs://', GCS_BUCKET_NAME, 'tmp'),\n ]\n\nPIPELINE_DEFINITION_FILE = PIPELINE_NAME + '_pipeline.json'\n\nrunner = tfx.orchestration.experimental.KubeflowV2DagRunner(\n config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),\n output_filename=PIPELINE_DEFINITION_FILE)\n_ = runner.run(\n _create_pipeline(\n pipeline_name=PIPELINE_NAME,\n pipeline_root=PIPELINE_ROOT,\n query=QUERY,\n module_file=os.path.join(MODULE_ROOT, _trainer_module_file),\n serving_model_dir=SERVING_MODEL_DIR,\n beam_pipeline_args=BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS))", "The generated definition file can be submitted using kfp client.", "# docs_infra: no_execute\nfrom google.cloud import aiplatform\nfrom google.cloud.aiplatform import pipeline_jobs\nimport logging\nlogging.getLogger().setLevel(logging.INFO)\n\naiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=GOOGLE_CLOUD_REGION)\n\njob = pipeline_jobs.PipelineJob(template_path=PIPELINE_DEFINITION_FILE,\n display_name=PIPELINE_NAME)\n# Submit the job\n# TODO 3: Your code here", "Now you can visit the link in the output above or visit 'Vertex AI > Pipelines'\nin Google Cloud Console to see the\nprogress." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/ko/r1/tutorials/keras/basic_classification.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n#@title MIT License\n#\n# Copyright (c) 2017 François Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.", "첫 번째 신경망 훈련하기: 기초적인 분류 문제\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/r1/tutorials/keras/basic_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />구글 코랩(Colab)에서 실행하기</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/r1/tutorials/keras/basic_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />깃허브(GitHub) 소스 보기</a>\n </td>\n</table>\n\nNote: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도\n불구하고 공식 영문 문서의 내용과 일치하지 않을 수 있습니다.\n이 번역에 개선할 부분이 있다면\ntensorflow/docs 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다.\n문서 번역이나 리뷰에 참여하려면\ndocs-ko@tensorflow.org로\n메일을 보내주시기 바랍니다.\n이 튜토리얼에서는 운동화나 셔츠 같은 옷 이미지를 분류하는 신경망 모델을 훈련합니다. 상세 내용을 모두 이해하지 못해도 괜찮습니다. 여기서는 완전한 텐서플로(TensorFlow) 프로그램을 빠르게 살펴 보겠습니다. 자세한 내용은 앞으로 배우면서 더 설명합니다.\n여기에서는 텐서플로 모델을 만들고 훈련할 수 있는 고수준 API인 tf.keras를 사용합니다.", "# tensorflow와 tf.keras를 임포트합니다\nimport tensorflow.compat.v1 as tf\n\nfrom tensorflow import keras\n\n# 헬퍼(helper) 라이브러리를 임포트합니다\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nprint(tf.__version__)", "패션 MNIST 데이터셋 임포트하기\n10개의 범주(category)와 70,000개의 흑백 이미지로 구성된 패션 MNIST 데이터셋을 사용하겠습니다. 
이미지는 해상도(28x28 픽셀)가 낮고 다음처럼 개별 옷 품목을 나타냅니다:\n<table>\n <tr><td>\n <img src=\"https://tensorflow.org/images/fashion-mnist-sprite.png\"\n alt=\"Fashion MNIST sprite\" width=\"600\">\n </td></tr>\n <tr><td align=\"center\">\n <b>그림 1.</b> <a href=\"https://github.com/zalandoresearch/fashion-mnist\">패션-MNIST 샘플</a> (Zalando, MIT License).<br/>&nbsp;\n </td></tr>\n</table>\n\n패션 MNIST는 컴퓨터 비전 분야의 \"Hello, World\" 프로그램격인 고전 MNIST 데이터셋을 대신해서 자주 사용됩니다. MNIST 데이터셋은 손글씨 숫자(0, 1, 2 등)의 이미지로 이루어져 있습니다. 여기서 사용하려는 옷 이미지와 동일한 포맷입니다.\n패션 MNIST는 일반적인 MNIST 보다 조금 더 어려운 문제이고 다양한 예제를 만들기 위해 선택했습니다. 두 데이터셋은 비교적 작기 때문에 알고리즘의 작동 여부를 확인하기 위해 사용되곤 합니다. 코드를 테스트하고 디버깅하는 용도로 좋습니다.\n네트워크를 훈련하는데 60,000개의 이미지를 사용합니다. 그다음 네트워크가 얼마나 정확하게 이미지를 분류하는지 10,000개의 이미지로 평가하겠습니다. 패션 MNIST 데이터셋은 텐서플로에서 바로 임포트하여 적재할 수 있습니다:", "fashion_mnist = keras.datasets.fashion_mnist\n\n(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()", "load_data() 함수를 호출하면 네 개의 넘파이(NumPy) 배열이 반환됩니다:\n\ntrain_images와 train_labels 배열은 모델 학습에 사용되는 훈련 세트입니다.\ntest_images와 test_labels 배열은 모델 테스트에 사용되는 테스트 세트입니다.\n\n이미지는 28x28 크기의 넘파이 배열이고 픽셀 값은 0과 255 사이입니다. 레이블(label)은 0에서 9까지의 정수 배열입니다. 이 값은 이미지에 있는 옷의 클래스(class)를 나타냅니다:\n<table>\n <tr>\n <th>레이블</th>\n <th>클래스</th>\n </tr>\n <tr>\n <td>0</td>\n <td>T-shirt/top</td>\n </tr>\n <tr>\n <td>1</td>\n <td>Trouser</td>\n </tr>\n <tr>\n <td>2</td>\n <td>Pullover</td>\n </tr>\n <tr>\n <td>3</td>\n <td>Dress</td>\n </tr>\n <tr>\n <td>4</td>\n <td>Coat</td>\n </tr>\n <tr>\n <td>5</td>\n <td>Sandal</td>\n </tr>\n <tr>\n <td>6</td>\n <td>Shirt</td>\n </tr>\n <tr>\n <td>7</td>\n <td>Sneaker</td>\n </tr>\n <tr>\n <td>8</td>\n <td>Bag</td>\n </tr>\n <tr>\n <td>9</td>\n <td>Ankle boot</td>\n </tr>\n</table>\n\n각 이미지는 하나의 레이블에 매핑되어 있습니다. 데이터셋에 클래스 이름이 들어있지 않기 때문에 나중에 이미지를 출력할 때 사용하기 위해 별도의 변수를 만들어 저장합니다:", "class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',\n 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']", "데이터 탐색\n모델을 훈련하기 전에 데이터셋 구조를 살펴보죠. 
다음 코드는 훈련 세트에 60,000개의 이미지가 있다는 것을 보여줍니다. 각 이미지는 28x28 픽셀로 표현됩니다:", "train_images.shape", "비슷하게 훈련 세트에는 60,000개의 레이블이 있습니다:", "len(train_labels)", "각 레이블은 0과 9사이의 정수입니다:", "train_labels", "테스트 세트에는 10,000개의 이미지가 있습니다. 이 이미지도 28x28 픽셀로 표현됩니다:", "test_images.shape", "테스트 세트는 10,000개의 이미지에 대한 레이블을 가지고 있습니다:", "len(test_labels)", "데이터 전처리\n네트워크를 훈련하기 전에 데이터를 전처리해야 합니다. 훈련 세트에 있는 첫 번째 이미지를 보면 픽셀 값의 범위가 0~255 사이라는 것을 알 수 있습니다:", "plt.figure()\nplt.imshow(train_images[0])\nplt.colorbar()\nplt.grid(False)\nplt.show()", "신경망 모델에 주입하기 전에 이 값의 범위를 0~1 사이로 조정하겠습니다. 이렇게 하려면 255로 나누어야 합니다. 훈련 세트와 테스트 세트를 동일한 방식으로 전처리하는 것이 중요합니다:", "train_images = train_images / 255.0\n\ntest_images = test_images / 255.0", "훈련 세트에서 처음 25개 이미지와 그 아래 클래스 이름을 출력해 보죠. 데이터 포맷이 올바른지 확인하고 네트워크 구성과 훈련할 준비를 마칩니다.", "plt.figure(figsize=(10,10))\nfor i in range(25):\n plt.subplot(5,5,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(train_images[i], cmap=plt.cm.binary)\n plt.xlabel(class_names[train_labels[i]])\nplt.show()", "모델 구성\n신경망 모델을 만들려면 모델의 층을 구성한 다음 모델을 컴파일합니다.\n층 설정\n신경망의 기본 구성 요소는 층(layer)입니다. 층은 주입된 데이터에서 표현을 추출합니다. 아마도 문제를 해결하는데 더 의미있는 표현이 추출될 것입니다.\n대부분 딥러닝은 간단한 층을 연결하여 구성됩니다. tf.keras.layers.Dense와 같은 층들의 가중치(parameter)는 훈련하는 동안 학습됩니다.", "model = keras.Sequential([\n keras.layers.Flatten(input_shape=(28, 28)),\n keras.layers.Dense(128, activation=tf.nn.relu),\n keras.layers.Dense(10, activation=tf.nn.softmax)\n])", "이 네트워크의 첫 번째 층인 tf.keras.layers.Flatten은 2차원 배열(28 x 28 픽셀)의 이미지 포맷을 28 * 28 = 784 픽셀의 1차원 배열로 변환합니다. 이 층은 이미지에 있는 픽셀의 행을 펼쳐서 일렬로 늘립니다. 이 층에는 학습되는 가중치가 없고 데이터를 변환하기만 합니다.\n픽셀을 펼친 후에는 두 개의 tf.keras.layers.Dense 층이 연속되어 연결됩니다. 이 층을 밀집 연결(densely-connected) 또는 완전 연결(fully-connected) 층이라고 부릅니다. 첫 번째 Dense 층은 128개의 노드(또는 뉴런)를 가집니다. 두 번째 (마지막) 층은 10개의 노드의 소프트맥스(softmax) 층입니다. 이 층은 10개의 확률을 반환하고 반환된 값의 전체 합은 1입니다. 각 노드는 현재 이미지가 10개 클래스 중 하나에 속할 확률을 출력합니다.\n모델 컴파일\n모델을 훈련하기 전에 필요한 몇 가지 설정이 모델 컴파일 단계에서 추가됩니다:\n\n손실 함수(Loss function)-훈련 하는 동안 모델의 오차를 측정합니다. 
You need to minimize this function to steer the model's training in the right direction.\nOptimizer - decides how the model is updated based on the data it sees and its loss function.\nMetrics - used to monitor the training and testing steps. The following example uses accuracy, the fraction of images that are correctly classified.", "model.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])", "Train the model\nTraining the neural network model requires the following steps:\n\nFeed the training data to the model - in this example, the train_images and train_labels arrays.\nThe model learns to associate images and labels.\nAsk the model to make predictions about a test set - in this example, the test_images array - and verify that the predictions match the labels in the test_labels array.\n\nTo start training, call the model.fit method so the model learns from the training data:", "model.fit(train_images, train_labels, epochs=5)", "As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.88 (88%) on the training data.\nEvaluate accuracy\nNext, compare how the model performs on the test dataset:", "test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)\n\nprint('Test accuracy:', test_acc)", "The accuracy on the test dataset is a little lower than on the training dataset. This gap between training accuracy and test accuracy is due to overfitting. Overfitting is when a machine learning model performs worse on new data than on its training data.\nMake predictions\nWith the model trained, you can use it to make predictions about images.", "predictions = model.predict(test_images)", "Here, the model has predicted the label for each image in the test set. Let's take a look at the first prediction:", "predictions[0]", "A prediction is an array of 10 numbers. They represent the model's confidence that the image corresponds to each of the 10 articles of clothing. Let's find the label with the highest confidence value:", "np.argmax(predictions[0])", "The model is most confident that this image is an ankle boot (class_names[9]). 
Let's check the test label to see whether this is correct:", "test_labels[0]", "Let's graph all 10 confidence values:", "def plot_image(i, predictions_array, true_label, img):\n predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n\n plt.imshow(img, cmap=plt.cm.binary)\n\n predicted_label = np.argmax(predictions_array)\n if predicted_label == true_label:\n color = 'blue'\n else:\n color = 'red'\n\n plt.xlabel(\"{} {:2.0f}% ({})\".format(class_names[predicted_label],\n 100*np.max(predictions_array),\n class_names[true_label]),\n color=color)\n\ndef plot_value_array(i, predictions_array, true_label):\n predictions_array, true_label = predictions_array[i], true_label[i]\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n thisplot = plt.bar(range(10), predictions_array, color=\"#777777\")\n plt.ylim([0, 1])\n predicted_label = np.argmax(predictions_array)\n\n thisplot[predicted_label].set_color('red')\n thisplot[true_label].set_color('blue')", "Let's look at the image, prediction, and confidence score array for the 0th element.", "i = 0\nplt.figure(figsize=(6,3))\nplt.subplot(1,2,1)\nplot_image(i, predictions, test_labels, test_images)\nplt.subplot(1,2,2)\nplot_value_array(i, predictions, test_labels)\nplt.show()\n\ni = 12\nplt.figure(figsize=(6,3))\nplt.subplot(1,2,1)\nplot_image(i, predictions, test_labels, test_images)\nplt.subplot(1,2,2)\nplot_value_array(i, predictions, test_labels)\nplt.show()", "Let's plot several images with their predictions. Correctly predicted labels are blue and incorrectly predicted labels are red. The number gives the confidence (out of 100) for the predicted label. 
Even when the confidence score is high, the prediction can be wrong.", "# Plot the first X test images with their predicted labels and true labels\n# Color correct predictions blue and incorrect predictions red\nnum_rows = 5\nnum_cols = 3\nnum_images = num_rows*num_cols\nplt.figure(figsize=(2*2*num_cols, 2*num_rows))\nfor i in range(num_images):\n plt.subplot(num_rows, 2*num_cols, 2*i+1)\n plot_image(i, predictions, test_labels, test_images)\n plt.subplot(num_rows, 2*num_cols, 2*i+2)\n plot_value_array(i, predictions, test_labels)\nplt.show()", "Finally, use the trained model to make a prediction about a single image.", "# Grab an image from the test dataset\nimg = test_images[0]\n\nprint(img.shape)", "tf.keras models are optimized to make predictions on a batch, or collection, of examples at once. So even when using a single image, you need to add it to a 2-dimensional array:", "# Add the image to a batch where it's the only member\nimg = (np.expand_dims(img,0))\n\nprint(img.shape)", "Now predict the label for this image:", "predictions_single = model.predict(img)\n\nprint(predictions_single)\n\nplot_value_array(0, predictions_single, test_labels)\nplt.xticks(range(10), class_names, rotation=45)\nplt.show()", "model.predict returns a 2-dimensional NumPy array, so we select the prediction for the first image:", "prediction_result = np.argmax(predictions_single[0])\nprint(prediction_result)", "As before, the model predicts label 9." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
anshbansal/anshbansal.github.io
udacity_data_science_notes/Data_Wrangling_with_MongoDB/project.ipynb
mit
[ "Data Wrangling Project\nMap Area: New Delhi, India\n\nTaken from https://mapzen.com/data/metro-extracts/metro/new-delhi_india/\n\nOverview of the data\n\nnew-delhi_india.osm 710 MB\nnew-delhi_india.osm.json 816 MB\nNumber of records = 4063611\n\nNumber of unique users 953\n\n\nThe schema of the data found in MongoDB after inserting the data using https://github.com/variety/variety", "r'''<pre>\n+----------------------------------------------------------------------------+\n| key | types | occurrences | percents |\n| ---------------------- | -------- | ----------- | ------------------------ |\n| _id | ObjectId | 4063611 | 100.00000000000000000000 |\n| created | Object | 4063611 | 100.00000000000000000000 |\n| created.changeset | String | 4063611 | 100.00000000000000000000 |\n| created.timestamp | String | 4063611 | 100.00000000000000000000 |\n| created.uid | String | 4063611 | 100.00000000000000000000 |\n| created.user | String | 4063611 | 100.00000000000000000000 |\n| created.version | String | 4063611 | 100.00000000000000000000 |\n| id | String | 4063611 | 100.00000000000000000000 |\n| type | String | 4063611 | 100.00000000000000000000 |\n| pos | Array | 3374750 | 83.04805750353564519628 |\n| node_refs | Array | 688861 | 16.95194249646435125101 |\n| address | Object | 2733 | 0.06725545333940674553 |\n| address.housenumber | String | 1759 | 0.04328662364581649380 |\n| address.street | String | 1022 | 0.02515004511996842343 |\n| address.city | String | 922 | 0.02268917964834724424 |\n| address.postcode | String | 766 | 0.01885022951261821136 |\n| address.interpolation | String | 533 | 0.01311641296374086926 |\n| address.country | String | 388 | 0.00954815802989016429 |\n| address.housename | String | 180 | 0.00442955784891811699 |\n| address.state | String | 89 | 0.00219017026974284695 |\n| address.full | String | 60 | 0.00147651928297270574 |\n| address.inclusion | String | 28 | 0.00068904233205392934 |\n| address.buildingnumber | String | 23 | 
0.00056599905847287051 |\n| address.suburb | String | 12 | 0.00029530385659454114 |\n| address.place | String | 8 | 0.00019686923772969410 |\n| address.locality | String | 3 | 0.00007382596414863528 |\n| address.district | String | 2 | 0.00004921730943242352 |\n| address.area | String | 1 | 0.00002460865471621176 |\n| address.block_number | String | 1 | 0.00002460865471621176 |\n| address.city_1 | String | 1 | 0.00002460865471621176 |\n| address.province | String | 1 | 0.00002460865471621176 |\n| address.street_1 | String | 1 | 0.00002460865471621176 |\n| address.street_2 | String | 1 | 0.00002460865471621176 |\n| address.street_3 | String | 1 | 0.00002460865471621176 |\n| address.subdistrict | String | 1 | 0.00002460865471621176 |\n| address.unit | String | 1 | 0.00002460865471621176 |\n+----------------------------------------------------------------------------+'''\n\nNone", "Other ideas about the datasets\nAnalysis and code start here\nI was stuck on getting started with this project for a while, so I will follow a train-of-thought approach to this project's code. 
All final thoughts will be summarised before this heading.\nLet me start with adding some general functions that I will use for SAX iterating and making a sample file to work with.", "from collections import defaultdict\n\nimport xml.etree.cElementTree as ET\nimport re\n\ndef get_element(osm_file, tags=('node', 'way', 'relation')):\n \"\"\"Yield element if it is the right type of tag\n\n Reference:\n http://stackoverflow.com/questions/3095434/inserting-newlines-in-xml-file-generated-via-xml-etree-elementtree-in-python\n \"\"\"\n context = iter(ET.iterparse(osm_file, events=('start', 'end')))\n _, root = next(context)\n for event, elem in context:\n if tags is not None and elem.tag not in tags:\n continue\n if event == 'end':\n yield elem\n root.clear()\n\ndef take_sample(k, osm_file, sample_file):\n with open(sample_file, 'wb') as output:\n output.write('<?xml version=\"1.0\" encoding=\"UTF-8\"?>\\n')\n output.write('<osm>\\n ')\n\n # Write every kth top level element\n for i, element in enumerate(get_element(osm_file)):\n if i % k == 0:\n # print \"i is {}\".format(i)\n output.write(ET.tostring(element, encoding='utf-8'))\n\n output.write('</osm>')\n\n#take_sample(10, \"new-delhi_india.osm\", \"sample_10.osm\")", "Now that we have sample files let me try and understand exactly what kind of data we have in our tags", "#OSM_FILE = \"new-delhi_india.osm\"\nOSM_FILE = \"sample_100.osm\"\n\ndef get_tag_types():\n tag_types = set()\n for element in get_element(OSM_FILE, tags=None):\n tag_types.add(element.tag)\n return tag_types\n\n#get_tag_types()\n\ndef tag_attributes(osm_file, tags):\n for element in get_element(osm_file, tags):\n print element.attrib\n\n#tag_attributes(OSM_FILE, ('node',))\n\n#tag_attributes(OSM_FILE, ('nd',))\n\n#tag_attributes(OSM_FILE, ('member',))\n\n#tag_attributes(OSM_FILE, ('tag',))\n\n#tag_attributes(OSM_FILE, ('relation',))\n\n#tag_attributes(OSM_FILE, ('way',))", "Now that we have an idea about what kind of data we have in our sample file 
let us start by checking whether the keys that we have are fine or not", "import re\n\nlower = re.compile(r'^([a-z]|_)*$')\nlower_colon = re.compile(r'^([a-z]|_)*:([a-z]|_)*$')\nproblem_chars = re.compile(r'[=\+/&<>;\'\"\?%#$@\,\. \t\r\n]')\n\n\"\"\"\nYour task is to explore the data a bit more.\nBefore you process the data and add it into your database, you should check the\n\"k\" value for each \"<tag>\" and see if there are any potential problems.\n\nWe have provided you with 3 regular expressions to check for certain patterns\nin the tags. As we saw in the quiz earlier, we would like to change the data\nmodel and expand the \"addr:street\" type of keys to a dictionary like this:\n{\"address\": {\"street\": \"Some value\"}}\nSo, we have to see if we have such tags, and if we have any tags with\nproblematic characters.\n\nPlease complete the function 'key_type', such that we have a count of each of\nfour tag categories in a dictionary:\n \"lower\", for tags that contain only lowercase letters and are valid,\n \"lower_colon\", for otherwise valid tags with a colon in their names,\n \"problemchars\", for tags with problematic characters, and\n \"other\", for other tags that do not fall into the other three categories.\nSee the 'process_map' and 'test' functions for examples of the expected format.\n\"\"\"\n\ndef _key_type(element, keys):\n if element.tag == \"tag\":\n k = element.attrib['k']\n if problem_chars.search(k):\n print \"problemchars {}\".format(k)\n keys['problemchars'] += 1\n elif lower_colon.search(k):\n keys['lower_colon'] += 1\n elif lower.search(k):\n keys['lower'] += 1\n else:\n #print \"other {}\".format(k)\n keys['other'] += 1\n \n return keys\n\ndef keys_type():\n keys = {\"lower\": 0, \"lower_colon\": 0, \"problemchars\": 0, \"other\": 0}\n for element in get_element(OSM_FILE, ('tag',)):\n keys = _key_type(element, keys)\n \n return keys\n\nkeys_type()\n\n\"\"\"\nYour task is to explore the data a bit more.\nThe first task is a fun one 
- find out how many unique users\nhave contributed to the map in this particular area!\n\nThe function process_map should return a set of unique user IDs (\"uid\")\n\"\"\"\ndef unique_user_contributed(tags = ('node','relation',)):\n users = set()\n for element in get_element(OSM_FILE, tags):\n users.add(element.attrib['user'])\n return users\n \n#len(unique_user_contributed())\n\nCREATED = [\"version\", \"changeset\", \"timestamp\", \"user\", \"uid\"]\n\ndef ensure_key_value(_dict, key, val):\n if key not in _dict:\n _dict[key] = val\n return _dict[key]\n\nSTATE_MAPPING = {\n 'delhi': 'DL',\n 'uttar pradesh': 'UP',\n 'u.p.': 'UP',\n 'ncr': 'DL'\n}\n\nCITY_MAPPING = {\n 'gurugram': 'Gurgaon',\n 'gurgram': 'Gurgaon',\n 'faridabad': 'Faridabad',\n 'delh': 'Delhi',\n 'new delhi': 'Delhi',\n 'neew delhi': 'Delhi',\n 'delhi': 'Delhi',\n 'old delhi': 'Delhi',\n 'noida': 'Noida',\n 'greater noida': 'Noida',\n 'ghaziabad': 'Ghaziabad',\n 'bahadurgarh': 'Bahadurgarh',\n 'meerut': 'Meerut'\n}\n\n\n\nCITY_TO_STATE = {\n 'Gurgaon': 'HR',\n 'Faridabad': 'HR',\n 'Delhi': 'DL',\n 'Noida': 'UP',\n 'Ghaziabad': 'UP',\n 'Bahadurgarh': 'HR',\n 'Meerut': 'UP'\n}\n\n\ndef fix_address_value(address_type, value):\n \n def if_lower_in_mapping_then_replace(value, mapping):\n if value.lower() in mapping:\n value = mapping[value.lower()]\n \n if value not in set(mapping.values()):\n #print \"{} = {}\".format(address_type, value)\n pass\n return value\n \n if address_type == 'state':\n value = if_lower_in_mapping_then_replace(value, STATE_MAPPING)\n elif address_type == 'city':\n value = if_lower_in_mapping_then_replace(value, CITY_MAPPING)\n \n return value\n\n\ndef ensure_address(element_map):\n if 'address' not in element_map:\n element_map['address'] = {\n 'country': 'IN'\n }\n return element_map['address']\n\n\ndef map_city_to_states(address_map):\n if 'city' in address_map:\n city = address_map['city']\n if city in CITY_TO_STATE:\n address_map['state'] = CITY_TO_STATE[city]\n \n\ndef 
fix_address(element_map):\n \"\"\"\n After we are done with general processing of individual address fields\n we process it as a whole\n \"\"\"\n address_map = ensure_address(element_map)\n \n map_city_to_states(address_map)\n\n\ndef process_tags(element, node):\n for tag in element.iter('tag'):\n key = tag.attrib['k']\n value = tag.attrib['v']\n\n if problem_chars.search(key):\n continue\n\n if key.startswith(\"addr:\"):\n _parts = key.split(\":\")\n if len(_parts) > 2:\n continue\n\n obj = ensure_key_value(node, 'address', {})\n\n address_type = _parts[1]\n value = fix_address_value(address_type, value)\n\n obj[address_type] = value\n else:\n node[key] = value\n\n fix_address(node)\n\ndef shape_element(element):\n \"\"\"\n Takes an element and shapes it to be ready for insertion into the database\n \"\"\"\n node = {}\n\n if element.tag == \"node\" or element.tag == \"way\":\n\n node['type'] = element.tag\n process_tags(element, node)\n \n for nd in element.iter('nd'):\n obj = ensure_key_value(node, 'node_refs', [])\n obj.append(nd.attrib['ref'])\n\n for key, value in element.attrib.iteritems():\n if key in CREATED:\n ensure_key_value(node, 'created', {})\n node['created'][key] = value\n elif key == 'lat':\n ensure_key_value(node, 'pos', [0, 0])\n node['pos'][0] = float(value)\n elif key == 'lon':\n ensure_key_value(node, 'pos', [0, 0])\n node['pos'][1] = float(value)\n else:\n node[key] = value\n\n return node\n else:\n return None\n\nfor element in get_element(OSM_FILE):\n node = shape_element(element)\n\nimport pprint\n\ndef get_client():\n from pymongo import MongoClient\n return MongoClient('mongodb://localhost:27017/')\n \ndef get_collection(): \n collection = get_client().examples.osm\n return collection", "Let's load data", "import codecs\nimport json\n\ndef process_map(file_in, pretty = False):\n \"\"\"\n Saves file as a json ready for insertion into mongoDB using mongoimport\n \"\"\"\n # You do not need to change this file\n file_out = 
\"{0}.json\".format(file_in)\n #data = []\n with codecs.open(file_out, \"w\") as fo:\n for element in get_element(file_in):\n el = shape_element(element)\n if el:\n #data.append(el)\n if pretty:\n fo.write(json.dumps(el, indent=2)+\"\\n\")\n else:\n fo.write(json.dumps(el) + \"\\n\")\n #return data\n\nprocess_map(OSM_FILE)\n\ncollection = get_collection()\ncollection.count()", "Number of unique users", "len(collection.distinct(\"created.user\"))", "Not a lot of users seem to be contributing to India's map\nAnalysis start", "# some helper functions for running mongo DB queries\n\ndef aggregate_to_list(collection, query):\n result = collection.aggregate(query)\n return list(r for r in result)\n\ndef aggregate_and_show(collection, query, limit = True):\n _query = query[:]\n if limit:\n _query.append({\"$limit\": 5})\n\n pprint.pprint(aggregate_to_list(collection, query))\n \ndef aggregate(query):\n aggregate_and_show(collection, query, False)\n \ndef aggregate_distincts(field, limit = False):\n query = [\n {\"$match\": {field: {\"$exists\": 1}}},\n {\"$group\": {\"_id\": \"$\" + field,\n \"count\": {\"$sum\": 1}}},\n {\"$sort\": {\"count\": -1}}\n ]\n if limit:\n query.append({\"$limit\": 10})\n aggregate(query)", "How much people are contributing", "def contribution_of_top(n):\n result = aggregate_to_list(collection, [\n {\"$group\": {\"_id\": \"$created.user\",\n \"count\": {\"$sum\": 1}}},\n {\"$sort\": {\"count\": -1}},\n {\"$limit\": n},\n {\"$group\": {\"_id\": 1,\n \"count\": {\"$sum\": \"$count\"}}}\n ])\n \n return result[0]['count']\n\ndef contributions_of(top):\n \"\"\"\n Given a list of numbers returns a dictionary of contributions of those number of user \n \"\"\"\n result = {}\n for count in top:\n result[count] = float(contribution_of_top(count) * 100) / collection.count()\n return result\n\npprint.pprint(contributions_of([1, 5, 15, 30, 50]))", "Out of 953 total users \n- top 5 users have contributed 20%\n- top 15 users have contributed 50%\n- top 40 
users have contributed 75%\n- top 50 users have contributed 90%\nTop contributing users", "aggregate([\n {\"$group\": {\"_id\": \"$created.user\",\n \"count\": {\"$sum\": 1}}},\n {\"$sort\": {\"count\": -1}},\n {\"$limit\": 10}\n ])", "Number of users making only 1 contribution", "aggregate([\n {\"$group\": {\"_id\": \"$created.user\",\n \"count\": {\"$sum\": 1}}},\n {\"$group\": {\"_id\": \"$count\", \n \"num_users\": {\"$sum\": 1}}},\n {\"$sort\": {\"_id\": 1}},\n {\"$limit\": 1}\n ])", "Number of nodes and ways", "collection.count({\"type\":\"node\"})\n\ncollection.count({\"type\":\"way\"})", "Looking at the other problems in the data\nCountry", "collection.distinct(\"address.country\")", "Country is correct. \nState\nIt is possible that the state is not a single one for New Delhi because the map of what we call \"New Delhi\" is usually of \"National Capital Region\" which includes New Delhi and some adjoining cities.", "collection.distinct(\"address.state\")", "Delhi, DL mean the same thing\nUP, uttar pradesh, U.P. mean the same thing\nNCR is not a city but a region encompassing many cities\n\nLet's look at the cases where state is mentioned as NCR. That will need to fixed on a case to case basis rather than a simple mapping", "ncr_cases = list(r for r in collection.find({\"address.state\": \"NCR\"}))\n\nncr_cases\n\nlen(ncr_cases)", "These are small number of cases so probably done by the same user", "set(element['created']['user'] for element in ncr_cases)", "My thoughts were right. Looking at the data, all the cases are in New Delhi so I can map these to New Delhi.\nSo to clean this data I can map \n- [u'Delhi', u'DL', u'NCR'] => u'DL'\n- [u'UP', u'uttar pradesh', u'U.P.'] => u'UP'", "collection.distinct(\"address.city\")", "I was hoping that as this is a map of Delhi the city will be Delhi, Gurgaon and Faridabad. Maybe the spelling and case would be different but still only these.\nBut this data needs to be cleaned. 
There are sector names, area names, state names etc. which should not have been there.", "collection.distinct(\"address.street\")\n\naggregate_distincts(\"address.country\")\n\naggregate_distincts(\"address.state\")\n\naggregate_distincts(\"address.city\", True)\n\naggregate_distincts(\"address.street\", True)\n\naggregate_to_list(collection, [\n {\"$match\": {\"address.city\": 'Hira Colony, Siraspur, Delhi'}}\n ])\n\naggregate_distincts(\"amenity\", True)\n\naggregate_distincts(\"landuse\", True)\n\naggregate_distincts(\"place\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
boya-zhou/kaggle_bimbo_reformat
notebooks/1_predata_whole.ipynb
mit
[ "import pandas as pd\nimport numpy as np\nimport tensorflow as tf\nfrom sklearn.cross_validation import train_test_split\nimport xgboost as xgb\nfrom scipy import sparse\nfrom sklearn.feature_extraction import FeatureHasher\nfrom scipy.sparse import coo_matrix,csr_matrix,csc_matrix, hstack\nfrom sklearn.preprocessing import normalize\nfrom sklearn.utils import shuffle\nfrom sklearn import linear_model\nimport gc\n\npd.set_option('display.max_columns', 500)\n\n%pwd\n\n%ls\n\ndtypes = {'Semana' : 'int32',\n 'Agencia_ID' :'int32',\n 'Canal_ID' : 'int32',\n 'Ruta_SAK' : 'int32',\n 'Cliente-ID' : 'int32',\n 'Producto_ID':'int32',\n 'Venta_hoy':'float32',\n 'Venta_uni_hoy': 'int32',\n 'Dev_uni_proxima':'int32',\n 'Dev_proxima':'float32',\n 'Demanda_uni_equil':'int32'}\n\ntrain_dataset = pd.read_csv('origin/train.csv',\n usecols =['Semana','Agencia_ID','Canal_ID','Ruta_SAK','Cliente_ID','Producto_ID','Demanda_uni_equil'],\n dtype = dtypes)\ntrain_dataset['log_demand'] = train_dataset['Demanda_uni_equil'].apply(np.log1p)\ntrain_dataset.drop(['Demanda_uni_equil'],axis = 1,inplace = True)\ntrain_dataset.head()\n\npivot_train = pd.read_pickle('pivot_train_with_nan.pickle')\npivot_train = pivot_train.rename(columns={3: 'Sem3', 4: 'Sem4',5: 'Sem5', 6: 'Sem6',7: 'Sem7', 8: 'Sem8',9: 'Sem9'})\npivot_train.head()\n\npivot_train_zero = pivot_train.fillna(0)\npivot_train_zero.head()\n\npivot_test = pd.read_pickle('pivot_test.pickle')\npivot_test.rename(columns = {'Semana':'sem10_sem11'},inplace = True)\npivot_test.shape\n\npivot_test['Cliente_ID'] = pivot_test['Cliente_ID'].astype(np.int32)\npivot_test['Producto_ID'] = pivot_test['Producto_ID'].astype(np.int32)\n\npivot_test.head()\n\npivot_test.columns.values", "make the train_pivot, duplicate exist when index = ['Cliente','Producto']\nfor each cliente & producto, first find its most common Agencia_ID, Canal_ID, Ruta_SAK", "agencia_for_cliente_producto = train_dataset[['Cliente_ID','Producto_ID'\n 
,'Agencia_ID']].groupby(['Cliente_ID',\n 'Producto_ID']).agg(lambda x:x.value_counts().index[0]).reset_index()\ncanal_for_cliente_producto = train_dataset[['Cliente_ID',\n 'Producto_ID','Canal_ID']].groupby(['Cliente_ID',\n 'Producto_ID']).agg(lambda x:x.value_counts().index[0]).reset_index()\nruta_for_cliente_producto = train_dataset[['Cliente_ID',\n 'Producto_ID','Ruta_SAK']].groupby(['Cliente_ID',\n 'Producto_ID']).agg(lambda x:x.value_counts().index[0]).reset_index()\n\ngc.collect()\n\nagencia_for_cliente_producto.to_pickle('agencia_for_cliente_producto.csv')\ncanal_for_cliente_producto.to_pickle('canal_for_cliente_producto.csv')\nruta_for_cliente_producto.to_pickle('ruta_for_cliente_producto.csv')\n\nagencia_for_cliente_producto = pd.read_pickle('agencia_for_cliente_producto.csv')\ncanal_for_cliente_producto = pd.read_pickle('canal_for_cliente_producto.csv')\nruta_for_cliente_producto = pd.read_pickle('ruta_for_cliente_producto.csv')\n\n# train_dataset['log_demand'] = train_dataset['Demanda_uni_equil'].apply(np.log1p)\npivot_train = pd.pivot_table(data= train_dataset[['Cliente_ID','Producto_ID','log_demand','Semana']],\n values='log_demand', index=['Cliente_ID','Producto_ID'],\n columns=['Semana'], aggfunc=np.mean,fill_value = 0).reset_index()\n\npivot_train.head()\n\npivot_train = pd.merge(left = pivot_train, right = agencia_for_cliente_producto, how = 'inner', on = ['Cliente_ID','Producto_ID'])\npivot_train = pd.merge(left = pivot_train, right = canal_for_cliente_producto, how = 'inner', on = ['Cliente_ID','Producto_ID'])\npivot_train = pd.merge(left = pivot_train, right = ruta_for_cliente_producto, how = 'inner', on = ['Cliente_ID','Producto_ID'])\n\n\npivot_train.to_pickle('pivot_train_with_zero.pickle')\n\npivot_train = pd.read_pickle('pivot_train_with_zero.pickle')\n\npivot_train.to_pickle('pivot_train_with_nan.pickle')\n\npivot_train = pd.read_pickle('pivot_train_with_nan.pickle')\n\npivot_train = pivot_train.rename(columns={3: 'Sem3', 4: 'Sem4',5: 
'Sem5', 6: 'Sem6',7: 'Sem7', 8: 'Sem8',9: 'Sem9'})\n\npivot_train.head()\n\npivot_train.columns.values", "make pivot table of test", "test_dataset = pd.read_csv('origin/test.csv')\ntest_dataset.head()\n\ntest_dataset[test_dataset['Semana'] == 10].shape\n\ntest_dataset[test_dataset['Semana'] == 11].shape\n\npivot_test = pd.merge(left=pivot_train, right = test_dataset[['id','Cliente_ID','Producto_ID','Semana']],\n on =['Cliente_ID','Producto_ID'],how = 'inner' )\npivot_test.head()\n\npivot_test_new = pd.merge(pivot_train[['Cliente_ID', 'Producto_ID', 'Sem3', 'Sem4', 'Sem5', 'Sem6', 'Sem7',\n 'Sem8', 'Sem9']],right = test_dataset, on = ['Cliente_ID','Producto_ID'],how = 'right')\n\npivot_test_new.head()\n\npivot_test_new.to_pickle('pivot_test.pickle')\n\npivot_test.to_pickle('pivot_test.pickle')\n\npivot_test = pd.read_pickle('pivot_test.pickle')\npivot_test.head()", "groupby use Agencia_ID, Ruta_SAK, Cliente_ID, Producto_ID", "train_dataset.head()\n\nimport itertools\ncol_list = ['Agencia_ID', 'Ruta_SAK', 'Cliente_ID', 'Producto_ID']\nall_combine = itertools.combinations(col_list,2)\n\n\nlist_2element_combine = [list(tuple) for tuple in all_combine]\n\ncol_1elm_2elm = col_list + list_2element_combine\ncol_1elm_2elm\n\ntrain_dataset_test = train_dataset[train_dataset['Semana'] < 8].copy()", "if predict week 8, use data from 3,4,5,6,7\nif predict week 9, use data from 3,4,5,6,7", "def categorical_useful(train_dataset,pivot_train):\n# if is_train:\n# train_dataset_test = train_dataset[train_dataset['Semana'] < 8].copy()\n# elif is_train == False:\n train_dataset_test = train_dataset.copy()\n \n log_demand_by_agen = train_dataset_test[['Agencia_ID','log_demand']].groupby('Agencia_ID').mean().reset_index()\n log_demand_by_ruta = train_dataset_test[['Ruta_SAK','log_demand']].groupby('Ruta_SAK').mean().reset_index()\n log_demand_by_cliente = train_dataset_test[['Cliente_ID','log_demand']].groupby('Cliente_ID').mean().reset_index()\n log_demand_by_producto = 
train_dataset_test[['Producto_ID','log_demand']].groupby('Producto_ID').mean().reset_index()\n \n log_demand_by_agen_ruta = train_dataset_test[['Agencia_ID', 'Ruta_SAK',\n 'log_demand']].groupby(['Agencia_ID', 'Ruta_SAK']).mean().reset_index()\n log_demand_by_agen_cliente = train_dataset_test[['Agencia_ID', 'Cliente_ID',\n 'log_demand']].groupby(['Agencia_ID', 'Cliente_ID']).mean().reset_index()\n log_demand_by_agen_producto = train_dataset_test[['Agencia_ID', 'Producto_ID',\n 'log_demand']].groupby(['Agencia_ID', 'Producto_ID']).mean().reset_index()\n log_demand_by_ruta_cliente = train_dataset_test[['Ruta_SAK', 'Cliente_ID',\n 'log_demand']].groupby(['Ruta_SAK', 'Cliente_ID']).mean().reset_index()\n log_demand_by_ruta_producto = train_dataset_test[['Ruta_SAK', 'Producto_ID',\n 'log_demand']].groupby(['Ruta_SAK', 'Producto_ID']).mean().reset_index()\n log_demand_by_cliente_producto = train_dataset_test[['Cliente_ID', 'Producto_ID',\n 'log_demand']].groupby(['Cliente_ID', 'Producto_ID']).mean().reset_index()\n \n log_demand_by_cliente_producto_agen = train_dataset_test[[\n 'Cliente_ID','Producto_ID','Agencia_ID','log_demand']].groupby(['Cliente_ID',\n 'Agencia_ID','Producto_ID']).mean().reset_index()\n \n log_sum_by_cliente = train_dataset_test[['Cliente_ID','log_demand']].groupby('Cliente_ID').sum().reset_index()\n \n ruta_freq_semana = train_dataset[['Semana','Ruta_SAK']].groupby(['Ruta_SAK']).count().reset_index()\n clien_freq_semana = train_dataset[['Semana','Cliente_ID']].groupby(['Cliente_ID']).count().reset_index()\n agen_freq_semana = train_dataset[['Semana','Agencia_ID']].groupby(['Agencia_ID']).count().reset_index()\n prod_freq_semana = train_dataset[['Semana','Producto_ID']].groupby(['Producto_ID']).count().reset_index()\n \n \n pivot_train = pd.merge(left = pivot_train,right = ruta_freq_semana,\n how = 'left', on = ['Ruta_SAK']).rename(columns={'Semana': 'ruta_freq'})\n pivot_train = pd.merge(left = pivot_train,right = clien_freq_semana,\n how = 'left', 
on = ['Cliente_ID']).rename(columns={'Semana': 'clien_freq'})\n pivot_train = pd.merge(left = pivot_train,right = agen_freq_semana,\n how = 'left', on = ['Agencia_ID']).rename(columns={'Semana': 'agen_freq'})\n pivot_train = pd.merge(left = pivot_train,right = prod_freq_semana,\n how = 'left', on = ['Producto_ID']).rename(columns={'Semana': 'prod_freq'})\n \n pivot_train = pd.merge(left = pivot_train,\n right = log_demand_by_agen,\n how = 'left', on = ['Agencia_ID']).rename(columns={'log_demand': 'agen_for_log_de'})\n pivot_train = pd.merge(left = pivot_train,\n right = log_demand_by_ruta,\n how = 'left', on = ['Ruta_SAK']).rename(columns={'log_demand': 'ruta_for_log_de'})\n pivot_train = pd.merge(left = pivot_train,\n right = log_demand_by_cliente,\n how = 'left', on = ['Cliente_ID']).rename(columns={'log_demand': 'cliente_for_log_de'})\n pivot_train = pd.merge(left = pivot_train,\n right = log_demand_by_producto,\n how = 'left', on = ['Producto_ID']).rename(columns={'log_demand': 'producto_for_log_de'})\n pivot_train = pd.merge(left = pivot_train,\n right = log_demand_by_agen_ruta,\n how = 'left', on = ['Agencia_ID', 'Ruta_SAK']).rename(columns={'log_demand': 'agen_ruta_for_log_de'})\n pivot_train = pd.merge(left = pivot_train,\n right = log_demand_by_agen_cliente,\n how = 'left', on = ['Agencia_ID', 'Cliente_ID']).rename(columns={'log_demand': 'agen_cliente_for_log_de'})\n pivot_train = pd.merge(left = pivot_train,\n right = log_demand_by_agen_producto,\n how = 'left', on = ['Agencia_ID', 'Producto_ID']).rename(columns={'log_demand': 'agen_producto_for_log_de'})\n pivot_train = pd.merge(left = pivot_train,\n right = log_demand_by_ruta_cliente,\n how = 'left', on = ['Ruta_SAK', 'Cliente_ID']).rename(columns={'log_demand': 'ruta_cliente_for_log_de'})\n pivot_train = pd.merge(left = pivot_train,\n right = log_demand_by_ruta_producto,\n how = 'left', on = ['Ruta_SAK', 'Producto_ID']).rename(columns={'log_demand': 'ruta_producto_for_log_de'})\n pivot_train = 
pd.merge(left = pivot_train,\n right = log_demand_by_cliente_producto,\n how = 'left', on = ['Cliente_ID', 'Producto_ID']).rename(columns={'log_demand': 'cliente_producto_for_log_de'})\n \n pivot_train = pd.merge(left = pivot_train,\n right = log_sum_by_cliente,\n how = 'left', on = ['Cliente_ID']).rename(columns={'log_demand': 'cliente_for_log_sum'})\n \n pivot_train = pd.merge(left = pivot_train,\n right = log_demand_by_cliente_producto_agen,\n how = 'left', on = ['Cliente_ID', 'Producto_ID',\n 'Agencia_ID']).rename(columns={'log_demand': 'cliente_producto_agen_for_log_sum'})\n \n \n pivot_train['corr'] = pivot_train['producto_for_log_de'] * pivot_train['cliente_for_log_de'] / train_dataset_test['log_demand'].median()\n\n return pivot_train\n\ndef define_time_features(df, to_predict = 't_plus_1' , t_0 = 8):\n if(to_predict == 't_plus_1' ):\n df['t_min_1'] = df['Sem'+str(t_0-1)]\n if(to_predict == 't_plus_2' ):\n df['t_min_6'] = df['Sem'+str(t_0-6)]\n \n df['t_min_2'] = df['Sem'+str(t_0-2)]\n df['t_min_3'] = df['Sem'+str(t_0-3)]\n df['t_min_4'] = df['Sem'+str(t_0-4)]\n df['t_min_5'] = df['Sem'+str(t_0-5)]\n \n \n if(to_predict == 't_plus_1' ):\n df['t1_min_t2'] = df['t_min_1'] - df['t_min_2']\n df['t1_min_t3'] = df['t_min_1'] - df['t_min_3']\n df['t1_min_t4'] = df['t_min_1'] - df['t_min_4']\n df['t1_min_t5'] = df['t_min_1'] - df['t_min_5']\n \n if(to_predict == 't_plus_2' ):\n df['t2_min_t6'] = df['t_min_2'] - df['t_min_6']\n df['t3_min_t6'] = df['t_min_3'] - df['t_min_6']\n df['t4_min_t6'] = df['t_min_4'] - df['t_min_6']\n df['t5_min_t6'] = df['t_min_5'] - df['t_min_6']\n\n df['t2_min_t3'] = df['t_min_2'] - df['t_min_3']\n df['t2_min_t4'] = df['t_min_2'] - df['t_min_4']\n df['t2_min_t5'] = df['t_min_2'] - df['t_min_5']\n\n df['t3_min_t4'] = df['t_min_3'] - df['t_min_4']\n df['t3_min_t5'] = df['t_min_3'] - df['t_min_5']\n\n df['t4_min_t5'] = df['t_min_4'] - df['t_min_5']\n \n return df \n\ndef lin_regr(row, to_predict, t_0, semanas_numbers):\n row = row.copy()\n 
row.index = semanas_numbers\n row = row.dropna()\n if(len(row) > 2):\n X = np.ones(shape=(len(row), 2))\n X[:,1] = row.index\n y = row.values\n regr = linear_model.LinearRegression()\n regr.fit(X, y)\n if(to_predict == 't_plus_1'):\n return regr.predict([[1,t_0+1]])[0]\n elif(to_predict == 't_plus_2'):\n return regr.predict([[1,t_0+2]])[0]\n else:\n return None\n\ndef lin_regr_features(pivot_df,to_predict, semanas_numbers,t_0):\n pivot_df = pivot_df.copy()\n semanas_names = ['Sem%i' %i for i in semanas_numbers]\n columns = ['Sem%i' %i for i in semanas_numbers]\n columns.append('Producto_ID')\n pivot_grouped = pivot_df[columns].groupby('Producto_ID').aggregate('mean')\n pivot_grouped['LR_prod'] = np.zeros(len(pivot_grouped))\n pivot_grouped['LR_prod'] = pivot_grouped[semanas_names].apply(lin_regr, axis = 1,\n to_predict = to_predict,\n t_0 = t_0, semanas_numbers = semanas_numbers )\n pivot_df = pd.merge(pivot_df, pivot_grouped[['LR_prod']], how='left', left_on = 'Producto_ID', right_index=True)\n pivot_df['LR_prod_corr'] = pivot_df['LR_prod'] * pivot_df['cliente_for_log_sum'] / 100\n return pivot_df\n\ncliente_tabla = pd.read_csv('origin/cliente_tabla.csv')\ntown_state = pd.read_csv('origin/town_state.csv')\n\ntown_state['town_id'] = town_state['Town'].str.split()\ntown_state['town_id'] = town_state['Town'].str.split(expand = True)\n\ndef add_pro_info(dataset):\n train_basic_feature = dataset[['Cliente_ID','Producto_ID','Agencia_ID']].copy()\n train_basic_feature.drop_duplicates(inplace = True)\n\n cliente_per_town = pd.merge(train_basic_feature,cliente_tabla,on = 'Cliente_ID',how= 'inner' )\n# print cliente_per_town.shape\n cliente_per_town = pd.merge(cliente_per_town,town_state[['Agencia_ID','town_id']],on = 'Agencia_ID',how= 'inner' )\n\n# print cliente_per_town.shape\n\n cliente_per_town_count = cliente_per_town[['NombreCliente','town_id']].groupby('town_id').count().reset_index()\n# print cliente_per_town_count.head()\n\n cliente_per_town_count_final = 
pd.merge(cliente_per_town[['Cliente_ID','Producto_ID','town_id','Agencia_ID']],\n cliente_per_town_count,on = 'town_id',how = 'inner')\n\n# print cliente_per_town_count_final.head()\n cliente_per_town_count_final.drop_duplicates(inplace = True)\n \n dataset_final = pd.merge(dataset,cliente_per_town_count_final[['Cliente_ID','Producto_ID','NombreCliente','Agencia_ID']],\n on = ['Cliente_ID','Producto_ID','Agencia_ID'],how = 'left')\n return dataset_final\n\npre_product = pd.read_csv('preprocessed_products.csv',index_col = 0)\n\npre_product['weight_per_piece'] = pd.to_numeric(pre_product['weight_per_piece'], errors='coerce')\npre_product['weight'] = pd.to_numeric(pre_product['weight'], errors='coerce')\npre_product['pieces'] = pd.to_numeric(pre_product['pieces'], errors='coerce')\n\ndef add_product(dataset):\n\n dataset = pd.merge(dataset,pre_product[['ID','weight','weight_per_piece','pieces']],\n left_on = 'Producto_ID',right_on = 'ID',how = 'left')\n return dataset", "data for predict week [34567----9], time plus 2 week", "train_34567 = train_dataset.loc[train_dataset['Semana'].isin([3,4,5,6,7]), :].copy()\ntrain_pivot_34567_to_9 = pivot_train_zero.loc[(pivot_train['Sem9'].notnull()),:].copy()\n\ntrain_pivot_34567_to_9 = categorical_useful(train_34567,train_pivot_34567_to_9)\n\ndel train_34567\ngc.collect()\n\ntrain_pivot_34567_to_9 = define_time_features(train_pivot_34567_to_9, to_predict = 't_plus_2' , t_0 = 9)\ntrain_pivot_34567_to_9 = lin_regr_features(train_pivot_34567_to_9,to_predict ='t_plus_2',\n semanas_numbers = [3,4,5,6,7],t_0 = 9)\n\ntrain_pivot_34567_to_9['target'] = train_pivot_34567_to_9['Sem9']\ntrain_pivot_34567_to_9.drop(['Sem8','Sem9'],axis =1,inplace = True)\n\n#add cum_sum\ntrain_pivot_cum_sum = train_pivot_34567_to_9[['Sem3','Sem4','Sem5','Sem6','Sem7']].cumsum(axis = 1)\n\ntrain_pivot_34567_to_9.drop(['Sem3','Sem4','Sem5','Sem6','Sem7'],axis =1,inplace = True)\n\ntrain_pivot_34567_to_9 = 
pd.concat([train_pivot_34567_to_9,train_pivot_cum_sum],axis =1)\n\ntrain_pivot_34567_to_9 = train_pivot_34567_to_9.rename(columns={'Sem3': 't_m_6_cum',\n 'Sem4': 't_m_5_cum','Sem5': 't_m_4_cum',\n 'Sem6': 't_m_3_cum','Sem7': 't_m_2_cum'})\n# add geo_info\ntrain_pivot_34567_to_9 = add_pro_info(train_pivot_34567_to_9)\n\n#add product info\ntrain_pivot_34567_to_9 = add_product(train_pivot_34567_to_9)\ntrain_pivot_34567_to_9.drop(['ID'],axis = 1,inplace = True)\n\ngc.collect()\n\ntrain_pivot_34567_to_9.head()\n\ntrain_pivot_34567_to_9.columns.values\n\nlen(train_pivot_34567_to_9.columns.values)\n\ntrain_pivot_34567_to_9.to_csv('train_pivot_34567_to_9.csv')\n\ntrain_pivot_34567_to_9 = pd.read_csv('train_pivot_34567_to_9.csv',index_col = 0)", "test_for private data, week 11", "pivot_test.head()\n\npivot_test_week11 = pivot_test.loc[pivot_test['sem10_sem11'] == 11]\npivot_test_week11.reset_index(drop=True,inplace = True)\npivot_test_week11 = pivot_test_week11.fillna(0)\npivot_test_week11.head()\n\npivot_test_week11.shape\n\ntrain_56789 = train_dataset.loc[train_dataset['Semana'].isin([5,6,7,8,9]), :].copy()\ntrain_pivot_56789_to_11 = pivot_test_week11.copy()\n\ntrain_pivot_56789_to_11 = categorical_useful(train_56789,train_pivot_56789_to_11)\n\ndel train_56789\ngc.collect()\n\ntrain_pivot_56789_to_11 = define_time_features(train_pivot_56789_to_11, to_predict = 't_plus_2' , t_0 = 11)\n\ntrain_pivot_56789_to_11 = lin_regr_features(train_pivot_56789_to_11,to_predict ='t_plus_2' , \n semanas_numbers = [5,6,7,8,9],t_0 = 9)\n\ntrain_pivot_56789_to_11.drop(['Sem3','Sem4'],axis =1,inplace = True)\n\n#add cum_sum\ntrain_pivot_cum_sum = train_pivot_56789_to_11[['Sem5','Sem6','Sem7','Sem8','Sem9']].cumsum(axis = 1)\n\ntrain_pivot_56789_to_11.drop(['Sem5','Sem6','Sem7','Sem8','Sem9'],axis =1,inplace = True)\n\ntrain_pivot_56789_to_11 = pd.concat([train_pivot_56789_to_11,train_pivot_cum_sum],axis =1)\n\n\ntrain_pivot_56789_to_11 = train_pivot_56789_to_11.rename(columns={'Sem5': 
't_m_6_cum',\n 'Sem6': 't_m_5_cum','Sem7': 't_m_4_cum',\n 'Sem8': 't_m_3_cum','Sem9': 't_m_2_cum'})\n# add product_info\ntrain_pivot_56789_to_11 = add_pro_info(train_pivot_56789_to_11)\n\n#\ntrain_pivot_56789_to_11 = add_product(train_pivot_56789_to_11)\n\ntrain_pivot_56789_to_11.drop(['ID'],axis =1,inplace = True)\n\nfor col in train_pivot_56789_to_11.columns.values:\n train_pivot_56789_to_11[col] = train_pivot_56789_to_11[col].astype(np.float32)\n\ntrain_pivot_56789_to_11.head()\n\ntrain_pivot_56789_to_11.columns.values\n\ntrain_pivot_56789_to_11.shape\n\nnew_feature = ['id', 'ruta_freq', 'clien_freq', 'agen_freq',\n 'prod_freq', 'agen_for_log_de', 'ruta_for_log_de',\n 'cliente_for_log_de', 'producto_for_log_de', 'agen_ruta_for_log_de',\n 'agen_cliente_for_log_de', 'agen_producto_for_log_de',\n 'ruta_cliente_for_log_de', 'ruta_producto_for_log_de',\n 'cliente_producto_for_log_de', 'cliente_for_log_sum',\n 'cliente_producto_agen_for_log_sum', 'corr', 't_min_6', 't_min_2',\n 't_min_3', 't_min_4', 't_min_5', 't2_min_t6', 't3_min_t6',\n 't4_min_t6', 't5_min_t6', 't2_min_t3', 't2_min_t4', 't2_min_t5',\n 't3_min_t4', 't3_min_t5', 't4_min_t5', 'LR_prod', 'LR_prod_corr',\n 't_m_6_cum', 't_m_5_cum', 't_m_4_cum', 't_m_3_cum', 't_m_2_cum',\n 'NombreCliente', 'weight', 'weight_per_piece', 'pieces']\n\nlen(new_feature)\n\ntrain_pivot_56789_to_11 = train_pivot_56789_to_11[new_feature]\ntrain_pivot_56789_to_11.head()\n\ntrain_pivot_56789_to_11['id'] = train_pivot_56789_to_11['id'].astype(int) \n\ntrain_pivot_56789_to_11.head()\n\ntrain_pivot_56789_to_11.to_csv('train_pivot_56789_to_11_private.csv',index = False)", "for two week ahead 45678 to 10", "pivot_test_week10 = pivot_test.loc[pivot_test['sem10_sem11'] == 10]\npivot_test_week10.reset_index(drop=True,inplace = True)\npivot_test_week10 = pivot_test_week10.fillna(0)\npivot_test_week10.head()\n\ntrain_45678 = train_dataset.loc[train_dataset['Semana'].isin([4,5,6,7,8]), :].copy()\ntrain_pivot_45678_to_10 = 
pivot_test_week10.copy()\n\ntrain_pivot_45678_to_10 = categorical_useful(train_45678,train_pivot_45678_to_10)\n\ndel train_45678\ngc.collect()\n\ntrain_pivot_45678_to_10 = define_time_features(train_pivot_45678_to_10, to_predict = 't_plus_2' , t_0 = 10)\n\ntrain_pivot_45678_to_10 = lin_regr_features(train_pivot_45678_to_10,to_predict ='t_plus_2' , \n semanas_numbers = [4,5,6,7,8],t_0 = 8)\n\ntrain_pivot_45678_to_10.drop(['Sem3','Sem9'],axis =1,inplace = True)\n\n#add cum_sum\ntrain_pivot_cum_sum = train_pivot_45678_to_10[['Sem4','Sem5','Sem6','Sem7','Sem8']].cumsum(axis = 1)\n\ntrain_pivot_45678_to_10.drop(['Sem4','Sem5','Sem6','Sem7','Sem8'],axis =1,inplace = True)\n\ntrain_pivot_45678_to_10 = pd.concat([train_pivot_45678_to_10,train_pivot_cum_sum],axis =1)\n\n\ntrain_pivot_45678_to_10 = train_pivot_45678_to_10.rename(columns={'Sem4': 't_m_6_cum',\n 'Sem5': 't_m_5_cum','Sem6': 't_m_4_cum',\n 'Sem7': 't_m_3_cum','Sem8': 't_m_2_cum'})\n# add product_info\ntrain_pivot_45678_to_10 = add_pro_info(train_pivot_45678_to_10)\n\n#\ntrain_pivot_45678_to_10 = add_product(train_pivot_45678_to_10)\n\ntrain_pivot_45678_to_10.drop(['ID'],axis =1,inplace = True)\n\nfor col in train_pivot_45678_to_10.columns.values:\n train_pivot_45678_to_10[col] = train_pivot_45678_to_10[col].astype(np.float32)\n\ntrain_pivot_45678_to_10.head()\n\ntrain_pivot_45678_to_10.columns.values\n\ntrain_pivot_45678_to_10 = train_pivot_45678_to_10[new_feature]\ntrain_pivot_45678_to_10['id'] = train_pivot_45678_to_10['id'].astype(int)\ntrain_pivot_45678_to_10.head()\n\ntrain_pivot_45678_to_10.to_pickle('validation_45678_10.pickle')", "data for predict week 8&9, time plus 1 week\n\ntrain_45678 for 8+1 =9", "train_45678 = train_dataset.loc[train_dataset['Semana'].isin([4,5,6,7,8]), :].copy()\ntrain_pivot_45678_to_9 = pivot_train_zero.loc[(pivot_train['Sem9'].notnull()),:].copy()\n\n\ntrain_pivot_45678_to_9 = categorical_useful(train_45678,train_pivot_45678_to_9)\ntrain_pivot_45678_to_9 = 
define_time_features(train_pivot_45678_to_9, to_predict = 't_plus_1' , t_0 = 9)\n\n\ndel train_45678\ngc.collect()\n\ntrain_pivot_45678_to_9 = lin_regr_features(train_pivot_45678_to_9,to_predict ='t_plus_1',\n semanas_numbers = [4,5,6,7,8],t_0 = 8)\n\ntrain_pivot_45678_to_9['target'] = train_pivot_45678_to_9['Sem9']\ntrain_pivot_45678_to_9.drop(['Sem3','Sem9'],axis =1,inplace = True)\n\n#add cum_sum\ntrain_pivot_cum_sum = train_pivot_45678_to_9[['Sem4','Sem5','Sem6','Sem7','Sem8']].cumsum(axis = 1)\n\ntrain_pivot_45678_to_9.drop(['Sem4','Sem5','Sem6','Sem7','Sem8'],axis =1,inplace = True)\n\ntrain_pivot_45678_to_9 = pd.concat([train_pivot_45678_to_9,train_pivot_cum_sum],axis =1,copy = False)\n\ntrain_pivot_45678_to_9 = train_pivot_45678_to_9.rename(columns={'Sem4': 't_m_5_cum',\n 'Sem5': 't_m_4_cum','Sem6': 't_m_3_cum', 'Sem7': 't_m_2_cum','Sem8': 't_m_1_cum'})\n# add geo_info\ntrain_pivot_45678_to_9 = add_pro_info(train_pivot_45678_to_9)\n\n#add product info\ntrain_pivot_45678_to_9 = add_product(train_pivot_45678_to_9)\ntrain_pivot_45678_to_9.drop(['ID'],axis = 1,inplace = True)\n\nfor col in train_pivot_45678_to_9.columns.values:\n train_pivot_45678_to_9[col] = train_pivot_45678_to_9[col].astype(np.float32)\n\ngc.collect()\n\ntrain_pivot_45678_to_9.head()\n\n\n\ntrain_pivot_45678_to_9.columns.values\n\ntrain_pivot_45678_to_9 = train_pivot_45678_to_9[['ruta_freq', 'clien_freq', 'agen_freq', 'prod_freq',\n 'agen_for_log_de', 'ruta_for_log_de', 'cliente_for_log_de',\n 'producto_for_log_de', 'agen_ruta_for_log_de',\n 'agen_cliente_for_log_de', 'agen_producto_for_log_de',\n 'ruta_cliente_for_log_de', 'ruta_producto_for_log_de',\n 'cliente_producto_for_log_de', 'cliente_for_log_sum',\n 'cliente_producto_agen_for_log_sum', 'corr', 't_min_1', 't_min_2',\n 't_min_3', 't_min_4', 't_min_5', 't1_min_t2', 't1_min_t3',\n 't1_min_t4', 't1_min_t5', 't2_min_t3', 't2_min_t4', 't2_min_t5',\n 't3_min_t4', 't3_min_t5', 't4_min_t5', 'LR_prod', 'LR_prod_corr',\n 'target', 't_m_5_cum', 
't_m_4_cum', 't_m_3_cum', 't_m_2_cum',\n 't_m_1_cum', 'NombreCliente', 'weight', 'weight_per_piece', 'pieces']] \n\ntrain_pivot_45678_to_9.shape\n\ntrain_pivot_45678_to_9.to_csv('train_pivot_45678_to_9_whole_zero.csv')\n\n# train_pivot_45678_to_9_old = pd.read_csv('train_pivot_45678_to_9.csv',index_col = 0)\n\nsum(train_pivot_45678_to_9['target'].isnull())", "train_34567 7+1 = 8", "train_34567 = train_dataset.loc[train_dataset['Semana'].isin([3,4,5,6,7]), :].copy()\ntrain_pivot_34567_to_8 = pivot_train_zero.loc[(pivot_train['Sem8'].notnull()),:].copy()\n\ntrain_pivot_34567_to_8 = categorical_useful(train_34567,train_pivot_34567_to_8)\ntrain_pivot_34567_to_8 = define_time_features(train_pivot_34567_to_8, to_predict = 't_plus_1' , t_0 = 8)\n\ndel train_34567\ngc.collect()\n\ntrain_pivot_34567_to_8 = lin_regr_features(train_pivot_34567_to_8,to_predict = 't_plus_1',\n semanas_numbers = [3,4,5,6,7],t_0 = 7)\n\ntrain_pivot_34567_to_8['target'] = train_pivot_34567_to_8['Sem8']\ntrain_pivot_34567_to_8.drop(['Sem8','Sem9'],axis =1,inplace = True)\n\n#add cum_sum\ntrain_pivot_cum_sum = train_pivot_34567_to_8[['Sem3','Sem4','Sem5','Sem6','Sem7']].cumsum(axis = 1)\n\ntrain_pivot_34567_to_8.drop(['Sem3','Sem4','Sem5','Sem6','Sem7'],axis =1,inplace = True)\n\ntrain_pivot_34567_to_8 = pd.concat([train_pivot_34567_to_8,train_pivot_cum_sum],axis =1)\n\ntrain_pivot_34567_to_8 = train_pivot_34567_to_8.rename(columns={'Sem3': 't_m_5_cum','Sem4': 't_m_4_cum',\n 'Sem5': 't_m_3_cum','Sem6': 't_m_2_cum',\n 'Sem7': 't_m_1_cum'})\n# add product_info\ntrain_pivot_34567_to_8 = add_pro_info(train_pivot_34567_to_8)\n\n#add product\n\ntrain_pivot_34567_to_8 = add_product(train_pivot_34567_to_8)\n\ntrain_pivot_34567_to_8.drop(['ID'],axis = 1,inplace = True)\n\nfor col in train_pivot_34567_to_8.columns.values:\n train_pivot_34567_to_8[col] = 
train_pivot_34567_to_8[col].astype(np.float32)\n\ngc.collect()\n\ntrain_pivot_34567_to_8.head()\n\ntrain_pivot_34567_to_8.shape\n\ntrain_pivot_34567_to_8.columns.values\n\ntrain_pivot_34567_to_8.to_csv('train_pivot_34567_to_8.csv')\n\ntrain_pivot_34567_to_8 = pd.read_csv('train_pivot_34567_to_8.csv',index_col = 0)\n\ngc.collect()", "concat train_pivot_45678_to_9 & train_pivot_34567_to_8 to perform t_plus_1, train_data is over", "train_pivot_xgb_time1 = pd.concat([train_pivot_45678_to_9, train_pivot_34567_to_8],axis = 0,copy = False)\n\ntrain_pivot_xgb_time1 = train_pivot_xgb_time1[['ruta_freq', 'clien_freq', 'agen_freq', 'prod_freq',\n 'agen_for_log_de', 'ruta_for_log_de', 'cliente_for_log_de',\n 'producto_for_log_de', 'agen_ruta_for_log_de',\n 'agen_cliente_for_log_de', 'agen_producto_for_log_de',\n 'ruta_cliente_for_log_de', 'ruta_producto_for_log_de',\n 'cliente_producto_for_log_de', 'cliente_for_log_sum',\n 'cliente_producto_agen_for_log_sum', 'corr', 't_min_1', 't_min_2',\n 't_min_3', 't_min_4', 't_min_5', 't1_min_t2', 't1_min_t3',\n 't1_min_t4', 't1_min_t5', 't2_min_t3', 't2_min_t4', 't2_min_t5',\n 't3_min_t4', 't3_min_t5', 't4_min_t5', 'LR_prod', 'LR_prod_corr',\n 'target', 't_m_5_cum', 't_m_4_cum', 't_m_3_cum', 't_m_2_cum',\n 't_m_1_cum', 'NombreCliente', 'weight', 'weight_per_piece', 'pieces']]\n\ntrain_pivot_xgb_time1.columns.values\n\ntrain_pivot_xgb_time1.shape\n\nnp.sum(train_pivot_xgb_time1.memory_usage())/(1024**3)\n\ntrain_pivot_xgb_time1.to_csv('train_pivot_xgb_time1_44fea_zero.csv',index = False)\n\ntrain_pivot_xgb_time1.to_csv('train_pivot_xgb_time1.csv')\n\ndel train_pivot_xgb_time1\ndel train_pivot_45678_to_9\ndel train_pivot_34567_to_8\ngc.collect()", "prepare for test data, for week 10, we use 5,6,7,8,9", "pivot_test.head()\n\npivot_test_week10 = pivot_test.loc[pivot_test['sem10_sem11'] == 10]\npivot_test_week10.reset_index(drop=True,inplace = True)\npivot_test_week10 = 
pivot_test_week10.fillna(0)\npivot_test_week10.head()\n\npivot_test_week10.shape\n\ntrain_56789 = train_dataset.loc[train_dataset['Semana'].isin([5,6,7,8,9]), :].copy()\ntrain_pivot_56789_to_10 = pivot_test_week10.copy()\n\ntrain_pivot_56789_to_10 = categorical_useful(train_56789,train_pivot_56789_to_10)\n\ndel train_56789\ngc.collect()\n\ntrain_pivot_56789_to_10 = define_time_features(train_pivot_56789_to_10, to_predict = 't_plus_1' , t_0 = 10)\n\ntrain_pivot_56789_to_10 = lin_regr_features(train_pivot_56789_to_10,to_predict ='t_plus_1' , \n semanas_numbers = [5,6,7,8,9],t_0 = 9)\n\ntrain_pivot_56789_to_10.drop(['Sem3','Sem4'],axis =1,inplace = True)\n\n#add cum_sum\ntrain_pivot_cum_sum = train_pivot_56789_to_10[['Sem5','Sem6','Sem7','Sem8','Sem9']].cumsum(axis = 1)\n\ntrain_pivot_56789_to_10.drop(['Sem5','Sem6','Sem7','Sem8','Sem9'],axis =1,inplace = True)\n\ntrain_pivot_56789_to_10 = pd.concat([train_pivot_56789_to_10,train_pivot_cum_sum],axis =1)\n\n\ntrain_pivot_56789_to_10 = train_pivot_56789_to_10.rename(columns={'Sem5': 't_m_5_cum',\n 'Sem6': 't_m_4_cum','Sem7': 't_m_3_cum',\n 'Sem8': 't_m_2_cum','Sem9': 't_m_1_cum'})\n# add product_info\ntrain_pivot_56789_to_10 = add_pro_info(train_pivot_56789_to_10)\n\n#\ntrain_pivot_56789_to_10 = add_product(train_pivot_56789_to_10)\n\ntrain_pivot_56789_to_10.drop(['ID'],axis =1,inplace = True)\n\nfor col in train_pivot_56789_to_10.columns.values:\n train_pivot_56789_to_10[col] = train_pivot_56789_to_10[col].astype(np.float32)\n\ntrain_pivot_56789_to_10.head()\n\ntrain_pivot_56789_to_10 = train_pivot_56789_to_10[['id','ruta_freq', 'clien_freq', 'agen_freq',\n 'prod_freq', 'agen_for_log_de', 'ruta_for_log_de',\n 'cliente_for_log_de', 'producto_for_log_de', 'agen_ruta_for_log_de',\n 'agen_cliente_for_log_de', 'agen_producto_for_log_de',\n 'ruta_cliente_for_log_de', 'ruta_producto_for_log_de',\n 'cliente_producto_for_log_de', 'cliente_for_log_sum',\n 'cliente_producto_agen_for_log_sum', 'corr', 't_min_1', 't_min_2',\n 
't_min_3', 't_min_4', 't_min_5', 't1_min_t2', 't1_min_t3',\n 't1_min_t4', 't1_min_t5', 't2_min_t3', 't2_min_t4', 't2_min_t5',\n 't3_min_t4', 't3_min_t5', 't4_min_t5', 'LR_prod', 'LR_prod_corr',\n 't_m_5_cum', 't_m_4_cum', 't_m_3_cum', 't_m_2_cum', 't_m_1_cum',\n 'NombreCliente', 'weight', 'weight_per_piece', 'pieces']] \n\ntrain_pivot_56789_to_10.head()\n\ntrain_pivot_56789_to_10.shape\n\nlen(train_pivot_56789_to_10.columns.values)\n\ntrain_pivot_56789_to_10.to_pickle('train_pivot_56789_to_10_44fea_zero.pickle')", "begin predict for week 11\n\ntrain_3456 for 6+2 = 8", "train_3456 = train_dataset.loc[train_dataset['Semana'].isin([3,4,5,6]), :].copy()\ntrain_pivot_3456_to_8 = pivot_train.loc[(pivot_train['Sem8'].notnull()),:].copy()\n\ntrain_pivot_3456_to_8 = categorical_useful(train_3456,train_pivot_3456_to_8)\n\ndel train_3456\ngc.collect()\n\ntrain_pivot_3456_to_8 = define_time_features(train_pivot_3456_to_8, to_predict = 't_plus_2' , t_0 = 8)\n\n#notice that the t_0 means different\ntrain_pivot_3456_to_8 = lin_regr_features(train_pivot_3456_to_8,to_predict = 't_plus_2', semanas_numbers = [3,4,5,6],t_0 = 6)\n\ntrain_pivot_3456_to_8['target'] = train_pivot_3456_to_8['Sem8']\ntrain_pivot_3456_to_8.drop(['Sem7','Sem8','Sem9'],axis =1,inplace = True)\n\n#add cum_sum\ntrain_pivot_cum_sum = train_pivot_3456_to_8[['Sem3','Sem4','Sem5','Sem6']].cumsum(axis = 1)\n\ntrain_pivot_3456_to_8.drop(['Sem3','Sem4','Sem5','Sem6'],axis =1,inplace = True)\n\ntrain_pivot_3456_to_8 = pd.concat([train_pivot_3456_to_8,train_pivot_cum_sum],axis =1)\n\ntrain_pivot_3456_to_8 = train_pivot_3456_to_8.rename(columns={'Sem4': 't_m_4_cum',\n 'Sem5': 't_m_3_cum','Sem6': 't_m_2_cum', 'Sem3': 't_m_5_cum'})\n# add product_info\ntrain_pivot_3456_to_8 = add_pro_info(train_pivot_3456_to_8)\n\ntrain_pivot_3456_to_8 = add_product(train_pivot_3456_to_8)\n\ntrain_pivot_3456_to_8.drop(['ID'],axis =1,inplace = 
True)\n\ntrain_pivot_3456_to_8.head()\n\ntrain_pivot_3456_to_8.columns.values\n\ntrain_pivot_3456_to_8.to_csv('train_pivot_3456_to_8.csv')", "train_4567 for 7 + 2 = 9", "train_4567 = train_dataset.loc[train_dataset['Semana'].isin([4,5,6,7]), :].copy()\ntrain_pivot_4567_to_9 = pivot_train.loc[(pivot_train['Sem9'].notnull()),:].copy()\n\ntrain_pivot_4567_to_9 = categorical_useful(train_4567,train_pivot_4567_to_9)\n\ndel train_4567\ngc.collect()\n\ntrain_pivot_4567_to_9 = define_time_features(train_pivot_4567_to_9, to_predict = 't_plus_2' , t_0 = 9)\n\n#notice that the t_0 means different\ntrain_pivot_4567_to_9 = lin_regr_features(train_pivot_4567_to_9,to_predict = 't_plus_2', \n semanas_numbers = [4,5,6,7],t_0 = 7)\n\ntrain_pivot_4567_to_9['target'] = train_pivot_4567_to_9['Sem9']\ntrain_pivot_4567_to_9.drop(['Sem3','Sem8','Sem9'],axis =1,inplace = True)\n\n#add cum_sum\ntrain_pivot_cum_sum = train_pivot_4567_to_9[['Sem7','Sem4','Sem5','Sem6']].cumsum(axis = 1)\n\ntrain_pivot_4567_to_9.drop(['Sem7','Sem4','Sem5','Sem6'],axis =1,inplace = True)\n\ntrain_pivot_4567_to_9 = pd.concat([train_pivot_4567_to_9,train_pivot_cum_sum],axis =1)\n\ntrain_pivot_4567_to_9 = train_pivot_4567_to_9.rename(columns={'Sem4': 't_m_5_cum',\n 'Sem5': 't_m_4_cum','Sem6': 't_m_3_cum', 'Sem7': 't_m_2_cum'})\n# add product_info\ntrain_pivot_4567_to_9 = add_pro_info(train_pivot_4567_to_9)\n\ntrain_pivot_4567_to_9 = add_product(train_pivot_4567_to_9)\n\ntrain_pivot_4567_to_9.drop(['ID'],axis =1,inplace = True)\n\ntrain_pivot_4567_to_9.head()\n\ntrain_pivot_4567_to_9.columns.values\n\ntrain_pivot_4567_to_9.to_csv('train_pivot_4567_to_9.csv')", "concat", "train_pivot_xgb_time2 = pd.concat([train_pivot_3456_to_8, train_pivot_4567_to_9],axis = 0,copy = False)\n\ntrain_pivot_xgb_time2.columns.values\n\ntrain_pivot_xgb_time2.shape\n\ntrain_pivot_xgb_time2.to_csv('train_pivot_xgb_time2_38fea.csv')\n\ntrain_pivot_xgb_time2 = pd.read_csv('train_pivot_xgb_time2.csv',index_col = 
0)\ntrain_pivot_xgb_time2.head()\n\ndel train_pivot_3456_to_8\ndel train_pivot_4567_to_9\ndel train_pivot_xgb_time2\ndel train_pivot_34567_to_8\ndel train_pivot_45678_to_9\ndel train_pivot_xgb_time1\ngc.collect()", "for test data week 11, we use 6,7,8,9", "pivot_test_week11 = pivot_test_new.loc[pivot_test_new['Semana'] == 11]\npivot_test_week11.reset_index(drop=True,inplace = True)\npivot_test_week11.head()\n\npivot_test_week11.shape\n\ntrain_6789 = train_dataset.loc[train_dataset['Semana'].isin([6,7,8,9]), :].copy()\ntrain_pivot_6789_to_11 = pivot_test_week11.copy()\n\ntrain_pivot_6789_to_11 = categorical_useful(train_6789,train_pivot_6789_to_11)\n\ndel train_6789\ngc.collect()\n\ntrain_pivot_6789_to_11 = define_time_features(train_pivot_6789_to_11, to_predict = 't_plus_2' , t_0 = 11)\ntrain_pivot_6789_to_11 = lin_regr_features(train_pivot_6789_to_11,to_predict ='t_plus_2' ,\n semanas_numbers = [6,7,8,9],t_0 = 9)\n\ntrain_pivot_6789_to_11.drop(['Sem3','Sem4','Sem5'],axis =1,inplace = True)\n\n#add cum_sum\ntrain_pivot_cum_sum = train_pivot_6789_to_11[['Sem6','Sem7','Sem8','Sem9']].cumsum(axis = 1)\n\ntrain_pivot_6789_to_11.drop(['Sem6','Sem7','Sem8','Sem9'],axis =1,inplace = True)\n\ntrain_pivot_6789_to_11 = pd.concat([train_pivot_6789_to_11,train_pivot_cum_sum],axis =1)\n\ntrain_pivot_6789_to_11 = train_pivot_6789_to_11.rename(columns={'Sem6': 't_m_5_cum',\n 'Sem7': 't_m_4_cum', 'Sem8': 't_m_3_cum','Sem9': 't_m_2_cum'})\n# add product_info\ntrain_pivot_6789_to_11 = add_pro_info(train_pivot_6789_to_11)\n\ntrain_pivot_6789_to_11 = add_product(train_pivot_6789_to_11)\n\ntrain_pivot_6789_to_11.drop(['ID'],axis = 1,inplace = True)\n\ntrain_pivot_6789_to_11.head()\n\ntrain_pivot_6789_to_11.shape\n\ntrain_pivot_6789_to_11.to_pickle('train_pivot_6789_to_11_new.pickle')", "over", "% time pivot_train_categorical_useful = categorical_useful(train_dataset,pivot_train,is_train = True)\n\n% time pivot_train_categorical_useful = 
categorical_useful(train_dataset,pivot_train,is_train = True)\n\npivot_train_categorical_useful_train.to_csv('pivot_train_categorical_useful_with_nan.csv')\n\npivot_train_categorical_useful_train = pd.read_csv('pivot_train_categorical_useful_with_nan.csv',index_col = 0)\npivot_train_categorical_useful_train.head()", "create time feature", "pivot_train_categorical_useful.head()\n\npivot_train_categorical_useful_time = define_time_features(pivot_train_categorical_useful,\n to_predict = 't_plus_1' , t_0 = 8)\n\npivot_train_categorical_useful_time.head()\n\npivot_train_categorical_useful_time.columns", "fit mean feature on target", "# Linear regression features \npivot_train_categorical_useful_time_LR = lin_regr_features(pivot_train_categorical_useful_time, semanas_numbers = [3,4,5,6,7])\npivot_train_categorical_useful_time_LR.head()\n\npivot_train_categorical_useful_time_LR.columns\n\npivot_train_categorical_useful_time_LR.to_csv('pivot_train_categorical_useful_time_LR.csv')\n\npivot_train_categorical_useful_time_LR = pd.read_csv('pivot_train_categorical_useful_time_LR.csv',index_col = 0)\n\npivot_train_categorical_useful_time_LR.head()", "add dummy feature", "# pivot_train_canal = pd.get_dummies(pivot_train_categorical_useful_train['Canal_ID'])\n\n# pivot_train_categorical_useful_train = pivot_train_categorical_useful_train.join(pivot_train_canal)\n# pivot_train_categorical_useful_train.head()", "add product feature", "%ls\n\npre_product = pd.read_csv('preprocessed_products.csv',index_col = 0)\npre_product.head()\n\n\npre_product['weight_per_piece'] = pd.to_numeric(pre_product['weight_per_piece'], errors='coerce')\npre_product['weight'] = pd.to_numeric(pre_product['weight'], errors='coerce')\npre_product['pieces'] = pd.to_numeric(pre_product['pieces'], errors='coerce')\n\npivot_train_categorical_useful_time_LR_weight = pd.merge(pivot_train_categorical_useful_time_LR,\n pre_product[['ID','weight','weight_per_piece']],\n left_on = 'Producto_ID',right_on = 'ID',how = 
'left')\npivot_train_categorical_useful_time_LR_weight.head()\n\npivot_train_categorical_useful_time_LR_weight = pd.merge(pivot_train_categorical_useful_time_LR,\n pre_product[['ID','weight','weight_per_piece']],\n left_on = 'Producto_ID',right_on = 'ID',how = 'left')\npivot_train_categorical_useful_time_LR_weight.head()\n\npivot_train_categorical_useful_time_LR_weight.to_csv('pivot_train_categorical_useful_time_LR_weight.csv')\n\npivot_train_categorical_useful_time_LR_weight = pd.read_csv('pivot_train_categorical_useful_time_LR_weight.csv',index_col = 0)\npivot_train_categorical_useful_time_LR_weight.head()", "add town feature", "%cd '/media/siyuan/0009E198000CD19B/bimbo/origin'\n%ls\n\ncliente_tabla = pd.read_csv('cliente_tabla.csv')\ntown_state = pd.read_csv('town_state.csv')\n\ntown_state['town_id'] = town_state['Town'].str.split()\ntown_state['town_id'] = town_state['Town'].str.split(expand = True)\n\ntrain_basic_feature = pivot_train_categorical_useful_time_LR_weight[['Cliente_ID','Producto_ID','Agencia_ID']]\n\ncliente_per_town = pd.merge(train_basic_feature,cliente_tabla,on = 'Cliente_ID',how= 'inner' )\ncliente_per_town = pd.merge(cliente_per_town,town_state[['Agencia_ID','town_id']],on = 'Agencia_ID',how= 'inner' )\n\ncliente_per_town_count = cliente_per_town[['NombreCliente','town_id']].groupby('town_id').count().reset_index()\ncliente_per_town_count['NombreCliente'] = cliente_per_town_count['NombreCliente']/float(100000)\n\ncliente_per_town_count_final = pd.merge(cliente_per_town[['Cliente_ID','Producto_ID','Agencia_ID','town_id']],\n cliente_per_town_count,on = 'town_id',how = 'left')\n\npivot_train_categorical_useful_time_LR_weight_town = pd.merge(pivot_train_categorical_useful_time_LR_weight,\n cliente_per_town_count_final[['Cliente_ID','Producto_ID','NombreCliente']],\n on = ['Cliente_ID','Producto_ID'],how = 'left')\n\ncliente_tabla.head()\n\ntown_state.head()\n\ntown_state['town_id'] = town_state['Town'].str.split()\ntown_state['town_id'] = 
town_state['Town'].str.split(expand = True)\n\ntown_state.head()\n\npivot_train_categorical_useful_time_LR_weight.columns.values\n\ntrain_basic_feature = pivot_train_categorical_useful_time_LR_weight[['Cliente_ID','Producto_ID','Agencia_ID']]\n\ncliente_per_town = pd.merge(train_basic_feature,cliente_tabla,on = 'Cliente_ID',how= 'inner' )\ncliente_per_town = pd.merge(cliente_per_town,town_state[['Agencia_ID','town_id']],on = 'Agencia_ID',how= 'inner' )\n\ncliente_per_town.head()\n\ncliente_per_town_count = cliente_per_town[['NombreCliente','town_id']].groupby('town_id').count().reset_index()\ncliente_per_town_count['NombreCliente'] = cliente_per_town_count['NombreCliente']/float(100000)\n\ncliente_per_town_count.head()\n\ncliente_per_town_count_final = pd.merge(cliente_per_town[['Cliente_ID','Producto_ID','Agencia_ID','town_id']],\n cliente_per_town_count,on = 'town_id',how = 'left')\ncliente_per_town_count_final.head()\n\npivot_train_categorical_useful_time_LR_weight_town = pd.merge(pivot_train_categorical_useful_time_LR_weight,\n cliente_per_town_count_final[['Cliente_ID','Producto_ID','NombreCliente']],\n on = ['Cliente_ID','Producto_ID'],how = 'left')\npivot_train_categorical_useful_time_LR_weight_town.head()\n\npivot_train_categorical_useful_time_LR_weight_town.columns.values", "begin xgboost training", "train_pivot_xgb_time1.columns.values\n\ntrain_pivot_xgb_time1 = train_pivot_xgb_time1.drop(['Cliente_ID','Producto_ID','Agencia_ID',\n 'Ruta_SAK','Canal_ID'],axis = 1)\n\npivot_train_categorical_useful_train_time_no_nan = pivot_train_categorical_useful_train[pivot_train_categorical_useful_train['Sem8'].notnull()]\n# pivot_train_categorical_useful_train_time_no_nan = pivot_train_categorical_useful_train[pivot_train_categorical_useful_train['Sem9'].notnull()]\n\npivot_train_categorical_useful_train_time_no_nan_sample = pivot_train_categorical_useful_train_time_no_nan.sample(1000000)\n\n\ntrain_feature = 
pivot_train_categorical_useful_train_time_no_nan_sample.drop(['Sem8','Sem9'],axis = 1)\ntrain_label = pivot_train_categorical_useful_train_time_no_nan_sample[['Sem8','Sem9']]\n\n#seperate train and test data\n# datasource: sparse_week_Agencia_Canal_Ruta_normalized_csr label:train_label\n%time train_set, valid_set, train_labels, valid_labels = train_test_split(train_feature,\\\n train_label, test_size=0.10)\n\n# dtrain = xgb.DMatrix(train_feature,label = train_label['Sem8'],missing=NaN)\ndtrain = xgb.DMatrix(train_feature,label = train_label['Sem8'],missing=NaN)\n\nparam = {'booster':'gbtree',\n 'nthread': 7,\n 'max_depth':6, \n 'eta':0.2,\n 'silent':0,\n 'subsample':0.7, \n 'objective':'reg:linear',\n 'eval_metric':'rmse',\n 'colsample_bytree':0.7}\n# param = {'eta':0.1, 'eval_metric':'rmse','nthread': 8}\n\n\n# evallist = [(dvalid,'eval'), (dtrain,'train')]\n\nnum_round = 1000\n# plst = param.items()\n\n# bst = xgb.train( plst, dtrain, num_round, evallist )\ncvresult = xgb.cv(param, dtrain, num_round, nfold=5,show_progress=True,show_stdv=False,\n seed = 0, early_stopping_rounds=10)\nprint(cvresult.tail())", "for 1 week later\n\n\ncv rmse 0.451181 with dummy canal, time regr,\ncv rmse 0.450972 without dummy canal, time regr,\ncv rmse 0.4485676 without dummy canal, time regr, producto info\ncv rmse 0.4487434 without dummy canal, time regr, producto info, cliente_per_town\n\nfor 2 week later\n\n\ncv rmse 0.4513236 without dummy canal, time regr, producto info", "# xgb.plot_importance(cvresult)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jegibbs/phys202-2015-work
assignments/assignment02/ProjectEuler4.ipynb
mit
[ "Project Euler: Problem 4\nhttps://projecteuler.net/problem=4\nA palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.\nFind the largest palindrome made from the product of two 3-digit numbers.\nCreate empty list of products", "products = []", "Add the products of all 3-digit numbers to the list", "for i in range(1, 1000):\n for n in range(1,1000):\n products.append(i * n)", "Create empty list of palindromes", "pals = []", "Check if number is a palindrome, return list of palindromes", "def check_for_palindromes(products):\n for number in products:\n if number == int(str(number)[::-1]):\n pals.append(number)\n return(pals)", "Print maximum from list of palindromes", "print(max(check_for_palindromes(products)))", "Success!", "# This cell will be used for grading, leave it at the end of the notebook." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sadahanu/Capstone
NLP/review_breed_analysis.ipynb
mit
[ "import boto3\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport pickle\nfrom nltk.stem.snowball import SnowballStemmer\nfrom collections import Counter\nimport numpy as np\nimport time\n%matplotlib inline\n\nsnowball = SnowballStemmer('english')\n\ndf = pd.read_csv(\"s3://dogfaces/reviews/extract_breed_review.csv\")\n\ndf.head()", "Score mechanism\ninput: a probability vector of dog breeds top 3: \n\ntoy -> breed score(averaged score for that breed)\n return probability weighted review scores", "# 52 base classes:\n# source 2: classified dog names\nbreed_classes = pd.read_csv(\"s3://dogfaces/tensor_model/output_labels_20170907.txt\",names=['breed'])\nbase_breeds = breed_classes['breed'].values\n\nbase_breeds\n\nwith open('breed_lookup.pickle', 'rb') as handle:\n rev_to_breed = pickle.load(handle)\nlen(rev_to_breed)\n\nwith open('breed_dict.pickle', 'rb') as handle:\n breed_to_rev = pickle.load(handle)\nlen(breed_to_rev)\n\n# sanity check\nnot_found = 0\nfor breed in base_breeds:\n if breed not in breed_to_rev:\n if snowball.stem(breed) in breed_to_rev:\n print \"only need to stem \"+breed\n elif snowball.stem(breed) in rev_to_breed:\n print \"need to look up extened dict \"+ breed +\" : \"+str(rev_to_breed[snowball.stem(breed)])\n else:\n print \"not found \" + breed\n not_found += 1\nprint not_found", "Get each base breeds score", "mini_set = df.sample(10).copy()\nbase_breeds_set = set(base_breeds)\n\n# review_id, toy_id, breeds.....\ndef get_breed_score(df):\n score_df = []\n for idx, row in df.iterrows():\n score_row = {}\n score_row['review_id'] = row['review_id']\n score_row['toy_id'] = row['toy_id']\n score_row['rating'] = row['rating']\n try:\n breed_extract = row['breed_extract'].split(',')\n matched_item = {}\n \n for b in breed_extract:\n if b in base_breeds_set:\n matched_item[b] = matched_item.get(b,0)+1\n max_p = max(matched_item.values())\n total_base = 0\n \n for k, v in matched_item.iteritems():\n if v== max_p:\n total_base += 1\n \n 
for k, v in matched_item.iteritems():\n if v == max_p:\n score_row[k] = 1.0/total_base\n except:\n pass\n score_df.append(score_row) \n return score_df\n\nscored_lst = get_breed_score(df)\n\nscored_df = pd.DataFrame(scored_lst)\n\nscored_df.info()\n\nscored_df.fillna(0, inplace=True)\n\nscored_df.head()\n\nsave_data = scored_df.to_csv(index=False)\ns3_res = boto3.resource('s3')\ns3_res.Bucket('dogfaces').put_object(Key='reviews/scored_breed_review.csv', Body=save_data)\n\n# sanity check\nscored_df = pd.read_csv(\"s3://dogfaces/reviews/scored_breed_review.csv\")\n\nscored_df.info()", "Model version 1: average", "# calculating each toy's score\n#df_scored = scored_df.copy()\ndf_scored = scored_df.copy()\ndf_scored.pop('review_id')\ndf_scored.pop('rating')\ndef non_zero_count(x):\n return np.sum(x[x>0])\ndf_breed_count = df_scored.groupby('toy_id').agg(non_zero_count).reset_index()\ndf_breed_count.head()\n\nbreed_columns = [x for x in scored_df.columns if x not in ['toy_id', 'rating', 'review_id']]\nmat_scored2 = scored_df[breed_columns].copy().values\n\nmat_scored2 = scored_df['rating'].values.reshape((61202,1))*mat_scored2\n\ndf_scored_sum = pd.DataFrame(data=mat_scored2, columns=breed_columns)\ndf_scored_sum = pd.concat([scored_df['toy_id'].copy(), df_scored_sum], axis=1)\n\ndf_breed_wet_sum = df_scored_sum.groupby('toy_id').sum().reset_index()\ndf_breed_wet_sum.head()\n\ndf_breed_wet_sum.sort_values(by='toy_id', axis=0, inplace=True)\ndf_breed_count.sort_values(by='toy_id', axis=0, inplace=True)\n\nweighted_mat = df_breed_count[breed_columns].values\nweighted_sum = df_breed_wet_sum[breed_columns].values\nwith np.errstate(divide='ignore', invalid='ignore'):\n res_mat = np.true_divide(weighted_sum, weighted_mat)\n res_mat[res_mat==np.inf]=0\n res_mat = np.nan_to_num(res_mat)\n\ndf_scored_finalscore = pd.DataFrame(data=res_mat, columns=breed_columns)\ndf_scored_finalscore = pd.concat([df_breed_count['toy_id'].copy(), df_scored_finalscore], 
axis=1)\n\ndf_scored_finalscore.head()\n\ndf_toy = pd.read_csv(\"s3://dogfaces/reviews/toys.csv\")\ndf_toy.head(3)\n\n# make recommendations:\ndef getRecommendations(probs, score_df, toy_df, k, add_info=None):\n # probs is a dictionary\n keys = probs.keys()\n D = score_df.shape[1]-1\n prob_v = np.array(probs.values()).reshape((D,1))\n score_mat = score_df[keys].values\n fscore_mat = score_mat.dot(prob_v)\n top_ind = np.argsort(-fscore_mat[:,0])[:k]\n top_toy = score_df['toy_id'].values[top_ind]\n likely_ratings = pd.DataFrame({\"likely rating\":fscore_mat[:,0][top_ind]}, index=None)\n if not add_info:\n toy_info = toy_df[toy_df['toy_id'].isin(top_toy)][['toy_id','toy_name','price']].copy()\n else:\n add_info.extend(['toy_id','toy_name','price'])\n toy_info = toy_df[toy_df['toy_id'].isin(top_toy)][add_info].copy()\n return pd.concat([toy_info.reset_index(), likely_ratings], axis=1)\ndef getRecommendedToys():\n pass\ndef getToyDislie():\n pass\n\n# get recommendations\nfor i in xrange(53):\n probs = [0]*53\n ind = i#np.random.randint(53)\n probs[ind]=1\n print breed_columns[ind]\n test_input = dict(zip(breed_columns, probs))\n print getRecommendations(test_input,df_scored_finalscore, df_toy, 3, ['toy_link'] )\n time.sleep(2)\n\nsave_data = df_scored_finalscore.to_csv(index=False)\ns3_res = boto3.resource('s3')\ns3_res.Bucket('dogfaces').put_object(Key='reviews/scored_breed_toy.csv', Body=save_data)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
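The `get_breed_score` function in the record above splits a unit weight equally among the base breeds mentioned most often in a review. A minimal Python 3 sketch of that rule — the helper name `breed_weights` is mine, not from the notebook:

```python
from collections import Counter

def breed_weights(extracted, base_breeds):
    """Split a unit weight equally among the most frequently
    mentioned base breeds, mirroring the notebook's scoring rule."""
    counts = Counter(b for b in extracted if b in base_breeds)
    if not counts:
        return {}
    max_count = max(counts.values())
    top = [b for b, c in counts.items() if c == max_count]
    return {b: 1.0 / len(top) for b in top}

print(breed_weights(["poodle", "poodle", "beagle"], {"poodle", "beagle"}))
```

Ties split the weight evenly, matching the `1.0/total_base` assignment in the notebook's loop.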
jerkos/cobrapy
documentation_builder/solvers.ipynb
lgpl-2.1
[ "Solver Interface\nEach cobrapy solver must expose the following API. The solvers will all have their own distinct LP object types, but each can be manipulated by these functions. This API can be used directly when implementing algorithms efficiently on linear programs because it has 2 primary benefits:\n\n\nAvoid the overhead of creating and destroying LP's for each operation\n\n\nMany solver objects preserve the basis between subsequent LP's, making each subsequent LP solve faster\n\n\nWe will walk through the API with the cglpk solver, which links the cobrapy solver API with GLPK's C API.", "import cobra.test\n\nmodel = cobra.test.create_test_model(\"textbook\")\nsolver = cobra.solvers.cglpk", "Attributes and functions\nEach solver has some attributes:\nsolver_name\nThe name of the solver. This is the name which will be used to select the solver in cobrapy functions.", "solver.solver_name\n\nmodel.optimize(solver=\"cglpk\")", "_SUPPORTS_MILP\nThe presence of this attribute tells cobrapy that the solver supports mixed-integer linear programming.", "solver._SUPPORTS_MILP", "solve\nModel.optimize is a wrapper for each solver's solve function. It takes in a cobra model and returns a solution.", "solver.solve(model)", "create_problem\nThis creates the LP object for the solver.", "lp = solver.create_problem(model, objective_sense=\"maximize\")\nlp", "solve_problem\nSolve the LP object and return the solution status.", "solver.solve_problem(lp)", "format_solution\nExtract a cobra.Solution object from a solved LP object.", "solver.format_solution(lp, model)", "get_objective_value\nExtract the objective value from a solved LP object.", "solver.get_objective_value(lp)", "get_status\nGet the solution status of a solved LP object.", "solver.get_status(lp)", "change_variable_objective\nChange the objective coefficient of a reaction at a particular index. This does not change any of the other objectives which have already been set. This example will double and then revert the biomass coefficient.", "model.reactions.index(\"Biomass_Ecoli_core\")\n\nsolver.change_variable_objective(lp, 12, 2)\nsolver.solve_problem(lp)\nsolver.get_objective_value(lp)\n\nsolver.change_variable_objective(lp, 12, 1)\nsolver.solve_problem(lp)\nsolver.get_objective_value(lp)", "change_variable_bounds\nChange the lower and upper bounds of a reaction at a particular index. This example will set the lower bound of the biomass to an infeasible value, then revert it.", "solver.change_variable_bounds(lp, 12, 1000, 1000)\nsolver.solve_problem(lp)\n\nsolver.change_variable_bounds(lp, 12, 0, 1000)\nsolver.solve_problem(lp)", "change_coefficient\nChange a coefficient in the stoichiometric matrix. In this example, we will set the entry for ATP in the ATPM reaction to an infeasible value, then reset it.", "model.metabolites.index(\"atp_c\")\n\nmodel.reactions.index(\"ATPM\")\n\nsolver.change_coefficient(lp, 16, 10, -10)\nsolver.solve_problem(lp)\n\nsolver.change_coefficient(lp, 16, 10, -1)\nsolver.solve_problem(lp)", "set_parameter\nSet a solver parameter. Each solver will have its own particular set of unique parameters. However, some have unified names. For example, all solvers should accept \"tolerance_feasibility.\"", "solver.set_parameter(lp, \"tolerance_feasibility\", 1e-9)\n\nsolver.set_parameter(lp, \"objective_sense\", \"minimize\")\nsolver.solve_problem(lp)\nsolver.get_objective_value(lp)\n\nsolver.set_parameter(lp, \"objective_sense\", \"maximize\")\nsolver.solve_problem(lp)\nsolver.get_objective_value(lp)", "Example with FVA\nConsider flux variability analysis (FVA), which requires maximizing and minimizing every reaction with the original biomass value fixed at its optimal value. If we used the cobra Model API in a naive implementation, we would do the following:", "%%time\n# work on a copy of the model so the original is not changed\nfva_model = model.copy()\n\n# set the lower bound on the objective to be the optimal value\nf = fva_model.optimize().f\nfor objective_reaction, coefficient in fva_model.objective.items():\n objective_reaction.lower_bound = coefficient * f\n\n# now maximize and minimize every reaction to find its bounds\nfva_result = {}\nfor r in fva_model.reactions:\n fva_model.change_objective(r)\n fva_result[r.id] = {}\n fva_result[r.id][\"maximum\"] = fva_model.optimize(objective_sense=\"maximize\").f\n fva_result[r.id][\"minimum\"] = fva_model.optimize(objective_sense=\"minimize\").f", "Instead, we could use the solver API to do this more efficiently. This is roughly how cobrapy implements FVA. It reuses the same LP object and repeatedly maximizes and minimizes it. This allows the solver to preserve the basis, and is much faster. The speed increase is even more noticeable the larger the model gets.", "%%time\n# create the LP object\nlp = solver.create_problem(model)\n\n# set the lower bound on the objective to be the optimal value\nsolver.solve_problem(lp)\nf = solver.get_objective_value(lp)\nfor objective_reaction, coefficient in model.objective.items():\n objective_index = model.reactions.index(objective_reaction)\n # old objective is no longer the objective\n solver.change_variable_objective(lp, objective_index, 0.)\n solver.change_variable_bounds(lp, objective_index, f * coefficient, objective_reaction.upper_bound)\n\n# now maximize and minimize every reaction to find its bounds\nfva_result = {}\nfor index, r in enumerate(model.reactions):\n solver.change_variable_objective(lp, index, 1.)\n fva_result[r.id] = {}\n solver.solve_problem(lp, objective_sense=\"maximize\")\n fva_result[r.id][\"maximum\"] = solver.get_objective_value(lp)\n solver.solve_problem(lp, objective_sense=\"minimize\")\n fva_result[r.id][\"minimum\"] = solver.get_objective_value(lp)\n solver.change_variable_objective(lp, index, 0.)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
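The FVA pattern in the record above — keep one LP, flip one objective coefficient at a time, solve in both senses — can be sketched without GLPK or cobra installed. Here `toy_solve` is a hypothetical stand-in that optimizes a linear objective over box bounds only (no stoichiometric constraints); it exists purely to show the loop structure:

```python
def toy_solve(bounds, objective, sense="maximize"):
    """Stand-in for solver.solve_problem + get_objective_value:
    optimizes c.x subject to simple per-variable box bounds."""
    total = 0.0
    for i, c in enumerate(objective):
        lb, ub = bounds[i]
        if c == 0:
            continue
        # pick the bound that helps the chosen sense
        best = ub if (c > 0) == (sense == "maximize") else lb
        total += c * best
    return total

# the FVA loop shape: one coefficient "on" at a time, max then min
bounds = [(0, 10), (-5, 5)]
n = len(bounds)
fva = {}
for i in range(n):
    objective = [0.0] * n
    objective[i] = 1.0
    fva[i] = {"maximum": toy_solve(bounds, objective, "maximize"),
              "minimum": toy_solve(bounds, objective, "minimize")}
print(fva)
```

A real solver additionally preserves the basis between the repeated solves, which is where the speedup in the record comes from.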
junhwanjang/DataSchool
Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/5) 행렬의 연산과 성질.ipynb
mit
[ "Matrix Operations and Properties\nBeyond multiplication and transposition, a variety of other operations, such as the matrix exponential, can be defined on matrices. Let's look at the definition and properties of each.\nThe Sign of a Matrix\nSince a matrix holds multiple real values, a sign for the matrix as a whole cannot be defined. However, there is a property of matrices that plays a role similar to the sign of a real number: positive definiteness.\nA matrix $A$ is called positive definite if the following inequality holds for every nonzero vector $x \\in \\mathbb{R}^n$:\n$$ x^T A x > 0 $$\nIf the inequality is allowed to include equality, the matrix is called positive semi-definite:\n$$ x^T A x \\geq 0 $$\nFor example, the identity matrix is positive definite:\n$$ x^T I x = x^T\n\\begin{bmatrix}\n1&0&\\cdots&0\\\\\n0&1&\\cdots&0\\\\\n\\vdots&\\vdots&\\ddots&\\vdots\\\\\n0&0&\\cdots&1\\\\\n\\end{bmatrix}\nx\n= x_1^2 + x_2^2 + \\cdots + x_n^2 > 0\n$$\nThe following matrix is also positive definite:\n$$ M = \\begin{bmatrix} 2&-1&0\\\\-1&2&-1\\\\0&-1&2 \\end{bmatrix} $$\n$$ \n\\begin{align} \nx^{\\mathrm{T}}M x \n&= \\begin{bmatrix} (2x_1-x_2)&(-x_1+2x_2-x_3)&(-x_2+2x_3) \\end{bmatrix} \\begin{bmatrix} x_1\\\\x_2\\\\x_3 \\end{bmatrix} \\\\\n&= 2{x_1}^2 - 2x_1x_2 + 2{x_2}^2 - 2x_2x_3 + 2{x_3}^2 \\\\\n&= {x_1}^2+(x_1 - x_2)^{2} + (x_2 - x_3)^{2}+{x_3}^2\n\\end{align}\n$$\nThe Size of a Matrix\nAnalogous to the notion of magnitude, there are several definitions that associate a single real number with a matrix: the norm, the trace, and the determinant.\nMatrix Norm\nThere are many definitions of a matrix norm; a commonly used one, the entrywise p-norm, is defined as follows:\n$$ \\Vert A \\Vert_p = \\left( \\sum_{i=1}^m \\sum_{j=1}^n |a_{ij}|^p \\right)^{1/p} $$\nThe case $p=2$ is called the Frobenius norm and is written as:\n$$ \\Vert A \\Vert_F = \\sqrt{\\sum_{i=1}^m \\sum_{j=1}^n a_{ij}^2} $$\nIn NumPy, the norm command in the linalg subpackage computes the Frobenius norm.", "A = (np.arange(9) - 4).reshape((3, 3))\nA\n\nnp.linalg.norm(A)", "Trace\nThe trace is one of the numbers that characterize a matrix. It is defined only for square matrices and is computed as the sum of the diagonal entries:\n$$ \\operatorname{tr}(A) = a_{11} + a_{22} + \\dots + a_{nn}=\\sum_{i=1}^{n} a_{ii} $$\nThe trace has the following properties:\n$$ \\text{tr} (cA) = c\\text{tr} (A) $$\n$$ \\text{tr} (A^T) = \\text{tr} (A) $$\n$$ \\text{tr} (A + B) = \\text{tr} (A) + \\text{tr} (B)$$\n$$ \\text{tr} (AB) = \\text{tr} (BA) $$\n$$ \\text{tr} (ABC) = \\text{tr} (BCA) = \\text{tr} (CAB) $$\nThe last property in particular is known as the trace trick and is useful for evaluating quadratic forms:\n$$ x^TAx = \\text{tr}(x^TAx) = \\text{tr}(Axx^T) = \\text{tr}(xx^TA) $$\nIn NumPy, the trace command computes the trace.", "np.trace(np.eye(3))", "Determinant\nThe determinant $\\det (A)$ of a square matrix $A$ is defined recursively by what is called the Laplace expansion or cofactor expansion. \nIn this formula, $a_{i,j}$ is the element in row $i$ and column $j$ of $A$, and $M_{i,j}$ is the determinant of the matrix obtained by deleting row $i$ and column $j$ from $A$. Any row or column may be chosen for the expansion. \n$$ \\det(A) = \\sum_{i=1}^n (-1)^{i+j} a_{i,j} M_{i,j} = \\sum_{j=1}^n (-1)^{i+j} a_{i,j} M_{i,j} $$\nThe determinant satisfies the following properties:\n$$ \\det(I) = 1 $$\n$$ \\det(A^{\\rm T}) = \\det(A) $$\n$$ \\det(A^{-1}) = \\frac{1}{\\det(A)}=\\det(A)^{-1} $$\n$$ \\det(AB) = \\det(A)\\det(B) $$\n$$ A \\in \\mathbf{R}^{n \\times n} \\;\\;\\; \\rightarrow \\;\\;\\; \\det(cA) = c^n\\det(A) $$\nThe inverse matrix is related to the determinant through the cofactors $C_{i,j} = (-1)^{i+j} M_{i,j}$:\n$$ A^{-1} = \\dfrac{1}{\\det A} \n\\begin{bmatrix}\nC_{1,1}&\\cdots&C_{n,1}\\\\\n\\vdots&\\ddots&\\vdots\\\\\nC_{1,n}&\\cdots&C_{n,n}\\\\\n\\end{bmatrix}\n$$\nIn NumPy, the det command in the linalg subpackage computes the determinant.", "A = np.array([[1, 2], [3, 4]])\nA\n\nnp.linalg.det(A)", "Transpose and Symmetric Matrices\nThe matrix obtained by the transpose operation is called the transpose matrix:\n$$ [\\mathbf{A}^\\mathrm{T}]_{ij} = [\\mathbf{A}]_{ji} $$ \nIf the transpose equals the original matrix, the matrix is called symmetric:\n$$ A^\\mathrm{T} = A $$ \nThe transpose operation satisfies the following properties:\n$$ ( A^\\mathrm{T} ) ^\\mathrm{T} = A $$\n$$ (A+B) ^\\mathrm{T} = A^\\mathrm{T} + B^\\mathrm{T} $$\n$$ \\left( A B \\right) ^\\mathrm{T} = B^\\mathrm{T} A^\\mathrm{T} $$\n$$ \\det(A^\\mathrm{T}) = \\det(A) $$\n$$ (A^\\mathrm{T})^{-1} = (A^{-1})^\\mathrm{T} $$\nMatrix Exponential\nFor a square matrix $X$, the matrix $e^X=\\exp X$ defined by the following series is called the matrix exponential:\n$$ e^X = \\sum_{k=0}^\\infty \\dfrac{X^k}{k!} = I + X + \\dfrac{1}{2}X^2 + \\dfrac{1}{3!}X^3 + \\cdots $$ \nThe matrix exponential satisfies the following properties:\n$$ e^0 = I $$\n$$ e^{aX} e^{bX} = e^{(a+b)X} $$\n$$ e^X e^{-X} = I $$\n$$ XY = YX \\;\\; \\rightarrow \\;\\; e^Xe^Y = e^Ye^X = e^{X+Y} $$\nMatrix Logarithm\nWhen a matrix $A$ with $B=e^A$ exists, $A$ is called the matrix logarithm of $B$ and is written as\n$$ A = \\log B $$\nThe matrix logarithm satisfies the following properties.\nIf the matrices $A$ and $B$ are both positive definite and $AB=BA$, then\n$$ AB = e^{\\ln(A)+\\ln(B)} $$\nIf the matrix $A$ is invertible, then\n$$ A^{-1} = e^{-\\ln(A)} $$\nThe matrix exponential and logarithm cannot be computed with NumPy; use the expm and logm commands in SciPy's linalg subpackage.", "A = np.array([[1.0, 3.0], [1.0, 4.0]])\nA\n\nB = sp.linalg.logm(A)\nB\n\nsp.linalg.expm(B)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
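The trace trick stated in the record above, $x^TAx = \text{tr}(Axx^T) = \text{tr}(xx^TA)$, is easy to check numerically; a quick NumPy sketch with random data (not from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
x = rng.standard_normal((3, 1))

quad = (x.T @ A @ x).item()   # quadratic form x^T A x, a scalar
t1 = np.trace(x.T @ A @ x)    # trace of the 1x1 matrix
t2 = np.trace(A @ x @ x.T)    # trace trick: tr(A x x^T)
t3 = np.trace(x @ x.T @ A)    # trace trick: tr(x x^T A)
print(np.allclose([quad, t1, t2], t3))
```

All four quantities agree up to floating-point error, which is what makes the trick useful for rearranging quadratic forms.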
constantlearning/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
Chapter2_MorePyMC/Chapter2.ipynb
mit
[ "Chapter 2\n\nThis chapter introduces more PyMC syntax and design patterns, and ways to think about how to model a system from a Bayesian perspective. It also contains tips and data visualization techniques for assessing goodness-of-fit for your Bayesian model.\nA little more on PyMC\nParent and Child relationships\nTo assist with describing Bayesian relationships, and to be consistent with PyMC's documentation, we introduce parent and child variables. \n\n\nparent variables are variables that influence another variable. \n\n\nchild variables are variables that are affected by other variables, i.e. are the subject of parent variables. \n\n\nA variable can be both a parent and child. For example, consider the PyMC code below.", "import pymc as pm\n\n\nparameter = pm.Exponential(\"poisson_param\", 1)\ndata_generator = pm.Poisson(\"data_generator\", parameter)\ndata_plus_one = data_generator + 1", "parameter controls the parameter of data_generator, hence influences its values. The former is a parent of the latter. By symmetry, data_generator is a child of parameter.\nLikewise, data_generator is a parent to the variable data_plus_one (hence making data_generator both a parent and child variable). Although it does not look like one, data_plus_one should be treated as a PyMC variable as it is a function of another PyMC variable, hence is a child variable to data_generator.\nThis nomenclature is introduced to help us describe relationships in PyMC modeling. You can access a variable's children and parent variables using the children and parents attributes attached to variables.", "print \"Children of `parameter`: \"\nprint parameter.children\nprint \"\\nParents of `data_generator`: \"\nprint data_generator.parents\nprint \"\\nChildren of `data_generator`: \"\nprint data_generator.children", "Of course a child can have more than one parent, and a parent can have many children.\nPyMC Variables\nAll PyMC variables also expose a value attribute. 
This attribute returns the current (possibly random) internal value of the variable. If the variable is a child variable, its value changes given the variable's parents' values. Using the same variables from before:", "print \"parameter.value =\", parameter.value\nprint \"data_generator.value =\", data_generator.value\nprint \"data_plus_one.value =\", data_plus_one.value", "PyMC is concerned with two types of programming variables: stochastic and deterministic.\n\n\nstochastic variables are variables that are not deterministic, i.e., even if you knew all the values of the variables' parents (if it even has any parents), it would still be random. Included in this category are instances of classes Poisson, DiscreteUniform, and Exponential.\n\n\ndeterministic variables are variables that are not random if the variables' parents were known. This might be confusing at first: a quick mental check is if I knew all of variable foo's parent variables, I could determine what foo's value is. \n\n\nWe will detail each below.\nInitializing Stochastic variables\nInitializing a stochastic variable requires a name argument, plus additional parameters that are class specific. For example:\nsome_variable = pm.DiscreteUniform(\"discrete_uni_var\", 0, 4)\nwhere 0, 4 are the DiscreteUniform-specific lower and upper bounds on the random variable. The PyMC docs contain the specific parameters for stochastic variables. (Or use object??, for example pm.DiscreteUniform?? if you are using IPython!)\nThe name attribute is used to retrieve the posterior distribution later in the analysis, so it is best to use a descriptive name. Typically, I use the Python variable's name as the name.\nFor multivariable problems, rather than creating a Python array of stochastic variables, addressing the size keyword in the call to a Stochastic variable creates a multivariate array of (independent) stochastic variables. 
The array behaves like a Numpy array when used like one, and references to its value attribute return Numpy arrays. \nThe size argument also solves the annoying case where you may have many variables $\\beta_i, \\; i = 1,...,N$ you wish to model. Instead of creating arbitrary names and variables for each one, like:\nbeta_1 = pm.Uniform(\"beta_1\", 0, 1)\nbeta_2 = pm.Uniform(\"beta_2\", 0, 1)\n...\n\nwe can instead wrap them into a single variable:\nbetas = pm.Uniform(\"betas\", 0, 1, size=N)\n\nCalling random()\nWe can also call on a stochastic variable's random() method, which (given the parent values) will generate a new, random value. Below we demonstrate this using the texting example from the previous chapter.", "lambda_1 = pm.Exponential(\"lambda_1\", 1) # prior on first behaviour\nlambda_2 = pm.Exponential(\"lambda_2\", 1) # prior on second behaviour\ntau = pm.DiscreteUniform(\"tau\", lower=0, upper=10) # prior on behaviour change\n\nprint \"lambda_1.value = %.3f\" % lambda_1.value\nprint \"lambda_2.value = %.3f\" % lambda_2.value\nprint \"tau.value = %.3f\" % tau.value\nprint\n\nlambda_1.random(), lambda_2.random(), tau.random()\n\nprint \"After calling random() on the variables...\"\nprint \"lambda_1.value = %.3f\" % lambda_1.value\nprint \"lambda_2.value = %.3f\" % lambda_2.value\nprint \"tau.value = %.3f\" % tau.value", "The call to random stores a new value into the variable's value attribute. In fact, this new value is stored in the computer's cache for faster recall and efficiency.\nWarning: Don't update stochastic variables' values in-place.\nStraight from the PyMC docs, we quote [4]:\n\nStochastic objects' values should not be updated in-place. This confuses PyMC's caching scheme... 
The only way a stochastic variable's value should be updated is using statements of the following form:\n\n A.value = new_value\n\n\nThe following are in-place updates and should never be used:\n\n A.value += 3\n A.value[2,1] = 5\n A.value.attribute = new_attribute_value\n\nDeterministic variables\nSince most variables you will be modeling are stochastic, we distinguish deterministic variables with a pymc.deterministic wrapper. (If you are unfamiliar with Python wrappers (also called decorators), that's no problem. Just prepend the pymc.deterministic decorator before the variable declaration and you're good to go. No need to know more. ) The declaration of a deterministic variable uses a Python function:\n@pm.deterministic\ndef some_deterministic_var(v1=v1,):\n #jelly goes here.\n\nFor all purposes, we can treat the object some_deterministic_var as a variable and not a Python function. \nPrepending with the wrapper is the easiest way, but not the only way, to create deterministic variables: elementary operations, like addition, exponentials etc. implicitly create deterministic variables. For example, the following returns a deterministic variable:", "type(lambda_1 + lambda_2)", "The use of the deterministic wrapper was seen in the previous chapter's text-message example. Recall the model for $\\lambda$ looked like: \n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\nAnd in PyMC code:", "import numpy as np\nn_data_points = 5 # in CH1 we had ~70 data points\n\n\n@pm.deterministic\ndef lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):\n out = np.zeros(n_data_points)\n out[:tau] = lambda_1 # lambda before tau is lambda1\n out[tau:] = lambda_2 # lambda after tau is lambda2\n return out", "Clearly, if $\\tau, \\lambda_1$ and $\\lambda_2$ are known, then $\\lambda$ is known completely, hence it is a deterministic variable. 
\nInside the deterministic decorator, the Stochastic variables passed in behave like scalars or Numpy arrays (if multivariable), and not like Stochastic variables. For example, running the following:\n@pm.deterministic\ndef some_deterministic(stoch=some_stochastic_var):\n return stoch.value**2\n\nwill return an AttributeError detailing that stoch does not have a value attribute. It simply needs to be stoch**2. During the learning phase, it's the variable's value that is repeatedly passed in, not the actual variable. \nNotice in the creation of the deterministic function we added defaults to each variable used in the function. This is a necessary step, and all variables must have default values. \nIncluding observations in the Model\nAt this point, it may not look like it, but we have fully specified our priors. For example, we can ask and answer questions like \"What does my prior distribution of $\\lambda_1$ look like?\"", "%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nfrom matplotlib import pyplot as plt\nfigsize(12.5, 4)\n\n\nsamples = [lambda_1.random() for i in range(20000)]\nplt.hist(samples, bins=70, normed=True, histtype=\"stepfilled\")\nplt.title(\"Prior distribution for $\\lambda_1$\")\nplt.xlim(0, 8);", "To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model. \nPyMC stochastic variables have a keyword argument observed which accepts a boolean (False by default). The keyword observed has a very simple role: fix the variable's current value, i.e. make value immutable. We have to specify an initial value in the variable's creation, equal to the observations we wish to include, typically an array (and it should be an Numpy array for speed). 
For example:", "data = np.array([10, 5])\nfixed_variable = pm.Poisson(\"fxd\", 1, value=data, observed=True)\nprint \"value: \", fixed_variable.value\nprint \"calling .random()\"\nfixed_variable.random()\nprint \"value: \", fixed_variable.value", "This is how we include data into our models: initializing a stochastic variable to have a fixed value. \nTo complete our text message example, we fix the PyMC variable observations to the observed dataset.", "# We're using some fake data here\ndata = np.array([10, 25, 15, 20, 35])\nobs = pm.Poisson(\"obs\", lambda_, value=data, observed=True)\nprint obs.value", "Finally...\nWe wrap all the created variables into a pm.Model class. With this Model class, we can analyze the variables as a single unit. This is an optional step, as the fitting algorithms can be sent an array of the variables rather than a Model class. I may or may not use this class in future examples ;)", "model = pm.Model([obs, lambda_, lambda_1, lambda_2, tau])", "Modeling approaches\nA good starting thought to Bayesian modeling is to think about how your data might have been generated. Position yourself in an omniscient position, and try to imagine how you would recreate the dataset. \nIn the last chapter we investigated text message data. We begin by asking how our observations may have been generated:\n\n\nWe started by thinking \"what is the best random variable to describe this count data?\" A Poisson random variable is a good candidate because it can represent count data. So we model the number of sms's received as sampled from a Poisson distribution.\n\n\nNext, we think, \"Ok, assuming sms's are Poisson-distributed, what do I need for the Poisson distribution?\" Well, the Poisson distribution has a parameter $\\lambda$. \n\n\nDo we know $\\lambda$? No. In fact, we have a suspicion that there are two $\\lambda$ values, one for the earlier behaviour and one for the latter behaviour. 
We don't know when the behaviour switches, though, so call the switchpoint $\\tau$.\n\n\nWhat is a good distribution for the two $\\lambda$s? The exponential is good, as it assigns probabilities to positive real numbers. Well, the exponential distribution has a parameter too, call it $\\alpha$.\n\n\nDo we know what the parameter $\\alpha$ might be? No. At this point, we could continue and assign a distribution to $\\alpha$, but it's better to stop once we reach a set level of ignorance: whereas we have a prior belief about $\\lambda$, (\"it probably changes over time\", \"it's likely between 10 and 30\", etc.), we don't really have any strong beliefs about $\\alpha$. So it's best to stop here. \nWhat is a good value for $\\alpha$ then? We think that the $\\lambda$s are between 10-30, so if we set $\\alpha$ really low (which corresponds to larger probability on high values) we are not reflecting our prior well. Similarly, a too-high alpha misses our prior belief as well. A good idea for $\\alpha$, so as to reflect our belief, is to set the value so that the mean of $\\lambda$, given $\\alpha$, is equal to our observed mean. This was shown in the last chapter.\n\n\nWe have no expert opinion of when $\\tau$ might have occurred. So we will suppose $\\tau$ is from a discrete uniform distribution over the entire timespan.\n\n\nBelow we give a graphical visualization of this, where arrows denote parent-child relationships. (provided by the Daft Python library )\n<img src=\"http://i.imgur.com/7J30oCG.png\" width = 700/>\nPyMC, and other probabilistic programming languages, have been designed to tell these data-generation stories. More generally, B. Cronin writes [5]:\n\nProbabilistic programming will unlock narrative explanations of data, one of the holy grails of business analytics and the unsung hero of scientific persuasion. People think in terms of stories - thus the unreasonable power of the anecdote to drive decision-making, well-founded or not. 
But existing analytics largely fails to provide this kind of story; instead, numbers seemingly appear out of thin air, with little of the causal context that humans prefer when weighing their options.\n\nSame story; different ending.\nInterestingly, we can create new datasets by retelling the story.\nFor example, if we reverse the above steps, we can simulate a possible realization of the dataset.\n1. Specify when the user's behaviour switches by sampling from $\\text{DiscreteUniform}(0, 80)$:", "tau = pm.rdiscrete_uniform(0, 80)\nprint tau", "2. Draw $\\lambda_1$ and $\\lambda_2$ from an $\\text{Exp}(\\alpha)$ distribution:", "alpha = 1. / 20.\nlambda_1, lambda_2 = pm.rexponential(alpha, 2)\nprint lambda_1, lambda_2", "3. For days before $\\tau$, represent the user's received SMS count by sampling from $\\text{Poi}(\\lambda_1)$, and sample from $\\text{Poi}(\\lambda_2)$ for days after $\\tau$. For example:", "data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]\n", "4. Plot the artificial dataset:", "plt.bar(np.arange(80), data, color=\"#348ABD\")\nplt.bar(tau - 1, data[tau - 1], color=\"r\", label=\"user behaviour changed\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Artificial dataset\")\nplt.xlim(0, 80)\nplt.legend();", "It is okay that our fictional dataset does not look like our observed dataset: the probability is incredibly small it indeed would. PyMC's engine is designed to find good parameters, $\\lambda_i, \\tau$, that maximize this probability. \nThe ability to generate artificial datasets is an interesting side effect of our modeling, and we will see that this ability is a very important method of Bayesian inference. We produce a few more datasets below:", "def plot_artificial_sms_dataset():\n tau = pm.rdiscrete_uniform(0, 80)\n alpha = 1. 
/ 20.\n lambda_1, lambda_2 = pm.rexponential(alpha, 2)\n data = np.r_[pm.rpoisson(lambda_1, tau), pm.rpoisson(lambda_2, 80 - tau)]\n plt.bar(np.arange(80), data, color=\"#348ABD\")\n plt.bar(tau - 1, data[tau - 1], color=\"r\", label=\"user behaviour changed\")\n plt.xlim(0, 80)\n\nfigsize(12.5, 5)\nplt.title(\"More example of artificial datasets\")\nfor i in range(1, 5):\n plt.subplot(4, 1, i)\n plot_artificial_sms_dataset()", "Later we will see how we use this to make predictions and test the appropriateness of our models.\nExample: Bayesian A/B testing\nA/B testing is a statistical design pattern for determining the difference of effectiveness between two different treatments. For example, a pharmaceutical company is interested in the effectiveness of drug A vs drug B. The company will test drug A on some fraction of their trials, and drug B on the other fraction (this fraction is often 1/2, but we will relax this assumption). After performing enough trials, the in-house statisticians sift through the data to determine which drug yielded better results. \nSimilarly, front-end web developers are interested in which design of their website yields more sales or some other metric of interest. They will route some fraction of visitors to site A, and the other fraction to site B, and record if the visit yielded a sale or not. The data is recorded (in real-time), and analyzed afterwards. \nOften, the post-experiment analysis is done using something called a hypothesis test like difference of means test or difference of proportions test. This involves often misunderstood quantities like a \"Z-score\" and even more confusing \"p-values\" (please don't ask). If you have taken a statistics course, you have probably been taught this technique (though not necessarily learned this technique). And if you were like me, you may have felt uncomfortable with their derivation -- good: the Bayesian approach to this problem is much more natural. 
\nA Simple Case\nAs this is a hacker book, we'll continue with the web-dev example. For the moment, we will focus on the analysis of site A only. Assume that there is some true $0 \\lt p_A \\lt 1$ probability that users who, upon shown site A, eventually purchase from the site. This is the true effectiveness of site A. Currently, this quantity is unknown to us. \nSuppose site A was shown to $N$ people, and $n$ people purchased from the site. One might conclude hastily that $p_A = \\frac{n}{N}$. Unfortunately, the observed frequency $\\frac{n}{N}$ does not necessarily equal $p_A$ -- there is a difference between the observed frequency and the true frequency of an event. The true frequency can be interpreted as the probability of an event occurring. For example, the true frequency of rolling a 1 on a 6-sided die is $\\frac{1}{6}$. Knowing the true frequency of events like:\n\nfraction of users who make purchases, \nfrequency of social attributes, \npercent of internet users with cats etc. \n\nare common requests we ask of Nature. Unfortunately, often Nature hides the true frequency from us and we must infer it from observed data.\nThe observed frequency is then the frequency we observe: say rolling the die 100 times you may observe 20 rolls of 1. The observed frequency, 0.2, differs from the true frequency, $\\frac{1}{6}$. We can use Bayesian statistics to infer probable values of the true frequency using an appropriate prior and observed data.\nWith respect to our A/B example, we are interested in using what we know, $N$ (the total trials administered) and $n$ (the number of conversions), to estimate what $p_A$, the true frequency of buyers, might be. \nTo set up a Bayesian model, we need to assign prior distributions to our unknown quantities. A priori, what do we think $p_A$ might be? 
For this example, we have no strong conviction about $p_A$, so for now, let's assume $p_A$ is uniform over [0,1]:", "import pymc as pm\n\n# The parameters are the bounds of the Uniform.\np = pm.Uniform('p', lower=0, upper=1)", "Had we had stronger beliefs, we could have expressed them in the prior above.\nFor this example, consider $p_A = 0.05$, and $N = 1500$ users shown site A, and we will simulate whether the user made a purchase or not. To simulate this from $N$ trials, we will use a Bernoulli distribution: if $X\\ \\sim \\text{Ber}(p)$, then $X$ is 1 with probability $p$ and 0 with probability $1 - p$. Of course, in practice we do not know $p_A$, but we will use it here to simulate the data.", "# set constants\np_true = 0.05 # remember, this is unknown.\nN = 1500\n\n# sample N Bernoulli random variables from Ber(0.05).\n# each random variable has a 0.05 chance of being a 1.\n# this is the data-generation step\noccurrences = pm.rbernoulli(p_true, N)\n\nprint occurrences # Remember: Python treats True == 1, and False == 0\nprint occurrences.sum()", "The observed frequency is:", "# Occurrences.mean is equal to n/N.\nprint \"What is the observed frequency in Group A? %.4f\" % occurrences.mean()\nprint \"Does this equal the true frequency? 
%s\" % (occurrences.mean() == p_true)", "We combine the observations into the PyMC observed variable, and run our inference algorithm:", "# include the observations, which are Bernoulli\nobs = pm.Bernoulli(\"obs\", p, value=occurrences, observed=True)\n\n# To be explained in chapter 3\nmcmc = pm.MCMC([p, obs])\nmcmc.sample(18000, 1000)", "We plot the posterior distribution of the unknown $p_A$ below:", "figsize(12.5, 4)\nplt.title(\"Posterior distribution of $p_A$, the true effectiveness of site A\")\nplt.vlines(p_true, 0, 90, linestyle=\"--\", label=\"true $p_A$ (unknown)\")\nplt.hist(mcmc.trace(\"p\")[:], bins=25, histtype=\"stepfilled\", normed=True)\nplt.legend()", "Our posterior distribution puts most weight near the true value of $p_A$, but also some weights in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, N, and observe how the posterior distribution changes.\nA and B Together\nA similar analysis can be done for site B's response data to determine the analogous $p_B$. But what we are really interested in is the difference between $p_A$ and $p_B$. Let's infer $p_A$, $p_B$, and $\\text{delta} = p_A - p_B$, all at once. We can do this using PyMC's deterministic variables. 
(We'll assume for this exercise that $p_B = 0.04$, so $\\text{delta} = 0.01$, $N_B = 750$ (significantly less than $N_A$) and we will simulate site B's data like we did for site A's data )", "import pymc as pm\nfigsize(12, 4)\n\n# these two quantities are unknown to us.\ntrue_p_A = 0.05\ntrue_p_B = 0.04\n\n# notice the unequal sample sizes -- no problem in Bayesian analysis.\nN_A = 1500\nN_B = 750\n\n# generate some observations\nobservations_A = pm.rbernoulli(true_p_A, N_A)\nobservations_B = pm.rbernoulli(true_p_B, N_B)\nprint \"Obs from Site A: \", observations_A[:30].astype(int), \"...\"\nprint \"Obs from Site B: \", observations_B[:30].astype(int), \"...\"\n\nprint observations_A.mean()\nprint observations_B.mean()\n\n# Set up the pymc model. Again assume Uniform priors for p_A and p_B.\np_A = pm.Uniform(\"p_A\", 0, 1)\np_B = pm.Uniform(\"p_B\", 0, 1)\n\n\n# Define the deterministic delta function. This is our unknown of interest.\n@pm.deterministic\ndef delta(p_A=p_A, p_B=p_B):\n return p_A - p_B\n\n# Set of observations, in this case we have two observation datasets.\nobs_A = pm.Bernoulli(\"obs_A\", p_A, value=observations_A, observed=True)\nobs_B = pm.Bernoulli(\"obs_B\", p_B, value=observations_B, observed=True)\n\n# To be explained in chapter 3.\nmcmc = pm.MCMC([p_A, p_B, delta, obs_A, obs_B])\nmcmc.sample(20000, 1000)", "Below we plot the posterior distributions for the three unknowns:", "p_A_samples = mcmc.trace(\"p_A\")[:]\np_B_samples = mcmc.trace(\"p_B\")[:]\ndelta_samples = mcmc.trace(\"delta\")[:]\n\nfigsize(12.5, 10)\n\n# histogram of posteriors\n\nax = plt.subplot(311)\n\nplt.xlim(0, .1)\nplt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85,\n label=\"posterior of $p_A$\", color=\"#A60628\", normed=True)\nplt.vlines(true_p_A, 0, 80, linestyle=\"--\", label=\"true $p_A$ (unknown)\")\nplt.legend(loc=\"upper right\")\nplt.title(\"Posterior distributions of $p_A$, $p_B$, and delta unknowns\")\n\nax = plt.subplot(312)\n\nplt.xlim(0, 
.1)\nplt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85,\n label=\"posterior of $p_B$\", color=\"#467821\", normed=True)\nplt.vlines(true_p_B, 0, 80, linestyle=\"--\", label=\"true $p_B$ (unknown)\")\nplt.legend(loc=\"upper right\")\n\nax = plt.subplot(313)\nplt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of delta\", color=\"#7A68A6\", normed=True)\nplt.vlines(true_p_A - true_p_B, 0, 60, linestyle=\"--\",\n label=\"true delta (unknown)\")\nplt.vlines(0, 0, 60, color=\"black\", alpha=0.2)\nplt.legend(loc=\"upper right\");", "Notice that as a result of N_B &lt; N_A, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$. \nWith respect to the posterior distribution of $\text{delta}$, we can see that the majority of the distribution is above $\text{delta}=0$, implying that site A's response is likely better than site B's response. The probability this inference is incorrect is easily computable:", "# Count the number of samples less than 0, i.e. the area under the curve\n# before 0, which represents the probability that site A is worse than site B.\nprint \"Probability site A is WORSE than site B: %.3f\" % \\\n (delta_samples < 0).mean()\n\nprint \"Probability site A is BETTER than site B: %.3f\" % \\\n (delta_samples > 0).mean()", "If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has fewer samples to begin with, each additional data point for site B contributes more inferential \"power\" than each additional data point for site A). \nTry playing with the parameters true_p_A, true_p_B, N_A, and N_B, to see what the posterior of $\text{delta}$ looks like. 
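One way to explore those parameters quickly, without rerunning MCMC, is a conjugacy shortcut: a Uniform(0,1) prior with Bernoulli data yields a Beta(n + 1, N - n + 1) posterior, so we can sample both posteriors directly (a sketch for cross-checking only; the conversion counts below are illustrative stand-ins, not the book's simulated data):

```python
import numpy as np

rng = np.random.RandomState(0)

# illustrative stand-in counts: n conversions out of N trials per site
N_A, n_A = 1500, 75
N_B, n_B = 750, 30

# Uniform(0,1) prior + Bernoulli data -> Beta(n + 1, N - n + 1) posterior
pA_draws = rng.beta(n_A + 1, N_A - n_A + 1, size=100000)
pB_draws = rng.beta(n_B + 1, N_B - n_B + 1, size=100000)
delta_draws = pA_draws - pB_draws

print("Probability site A is WORSE than site B: %.3f" % (delta_draws < 0).mean())
print("Probability site A is BETTER than site B: %.3f" % (delta_draws > 0).mean())
```

The numbers closely track the MCMC answers; this is the flavor of single-equation speedup alluded to later in the chapter.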
Notice in all this, the difference in sample sizes between site A and site B was never mentioned: it naturally fits into Bayesian analysis.\nI hope the readers feel this style of A/B testing is more natural than hypothesis testing, which has probably confused more than helped practitioners. Later in this book, we will see two extensions of this model: the first to help dynamically adjust for bad sites, and the second will improve the speed of this computation by reducing the analysis to a single equation. \nAn algorithm for human deceit\nSocial data has an additional layer of interest as people are not always honest with responses, which adds a further complication into inference. For example, simply asking individuals \"Have you ever cheated on a test?\" will surely contain some rate of dishonesty. What you can say for certain is that the true rate is less than your observed rate (assuming individuals lie only about not cheating; I cannot imagine one who would admit \"Yes\" to cheating when in fact they hadn't cheated). \nTo present an elegant solution to circumventing this dishonesty problem, and to demonstrate Bayesian modeling, we first need to introduce the binomial distribution.\nThe Binomial Distribution\nThe binomial distribution is one of the most popular distributions, mostly because of its simplicity and usefulness. Unlike the other distributions we have encountered thus far in the book, the binomial distribution has 2 parameters: $N$, a positive integer representing $N$ trials or number of instances of potential events, and $p$, the probability of an event occurring in a single trial. Like the Poisson distribution, it is a discrete distribution, but unlike the Poisson distribution, it only weighs integers from $0$ to $N$. 
The mass distribution looks like:\n$$P( X = k ) = {{N}\\choose{k}} p^k(1-p)^{N-k}$$\nIf $X$ is a binomial random variable with parameters $p$ and $N$, denoted $X \\sim \\text{Bin}(N,p)$, then $X$ is the number of events that occurred in the $N$ trials (obviously $0 \\le X \\le N$), and $p$ is the probability of a single event. The larger $p$ is (while still remaining between 0 and 1), the more events are likely to occur. The expected value of a binomial is equal to $Np$. Below we plot the mass probability distribution for varying parameters.", "figsize(12.5, 4)\n\nimport scipy.stats as stats\nbinomial = stats.binom\n\nparameters = [(10, .4), (10, .9)]\ncolors = [\"#348ABD\", \"#A60628\"]\n\nfor i in range(2):\n N, p = parameters[i]\n _x = np.arange(N + 1)\n plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i],\n edgecolor=colors[i],\n alpha=0.6,\n label=\"$N$: %d, $p$: %.1f\" % (N, p),\n linewidth=3)\n\nplt.legend(loc=\"upper left\")\nplt.xlim(0, 10.5)\nplt.xlabel(\"$k$\")\nplt.ylabel(\"$P(X = k)$\")\nplt.title(\"Probability mass distributions of binomial random variables\");", "The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \\sim \\text{Binomial}(N, p )$.\nThe expected value of a Bernoulli random variable is $p$. This can be seen by noting the more general Binomial random variable has expected value $Np$ and setting $N=1$.\nExample: Cheating among students\nWe will use the binomial distribution to determine the frequency of students cheating during an exam. If we let $N$ be the total number of students who took the exam, and assuming each student is interviewed post-exam (answering without consequence), we will receive integer $X$ \"Yes I did cheat\" answers. 
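As a quick aside, the Bernoulli-to-Binomial connection stated above is easy to verify numerically (a NumPy/SciPy sketch, independent of the survey model; the parameters N = 10, p = 0.4 echo the PMF plot):

```python
import numpy as np
import scipy.stats as stats

rng = np.random.RandomState(42)
N, p = 10, 0.4

# Z = X_1 + ... + X_N, where each X_i ~ Ber(p), should behave like Bin(N, p)
Z = (rng.rand(100000, N) < p).sum(axis=1)

print("empirical E[Z] = %.3f vs Np = %.1f" % (Z.mean(), N * p))
print("empirical P(Z = 4) = %.4f vs binomial pmf %.4f"
      % ((Z == 4).mean(), stats.binom.pmf(4, N, p)))
```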
We then find the posterior distribution of $p$, given $N$, some specified prior on $p$, and observed data $X$. \nThis is a completely absurd model. No student, even with a free-pass against punishment, would admit to cheating. What we need is a better algorithm to ask students if they had cheated. Ideally the algorithm should encourage individuals to be honest while preserving privacy. The following proposed algorithm is a solution I greatly admire for its ingenuity and effectiveness:\n\nIn the interview process for each student, the student flips a coin, hidden from the interviewer. The student agrees to answer honestly if the coin comes up heads. Otherwise, if the coin comes up tails, the student (secretly) flips the coin again, and answers \"Yes, I did cheat\" if the coin flip lands heads, and \"No, I did not cheat\", if the coin flip lands tails. This way, the interviewer does not know if a \"Yes\" was the result of a guilty plea, or a Heads on a second coin toss. Thus privacy is preserved and the researchers receive honest answers. \n\nI call this the Privacy Algorithm. One could of course argue that the interviewers are still receiving false data since some Yes's are not confessions but instead randomness, but an alternative perspective is that the researchers are discarding approximately half of their original dataset since half of the responses will be noise. But they have gained a systematic data generation process that can be modeled. Furthermore, they do not have to incorporate (perhaps somewhat naively) the possibility of deceitful answers. We can use PyMC to dig through this noisy model, and find a posterior distribution for the true frequency of liars. \nSuppose 100 students are being surveyed for cheating, and we wish to find $p$, the proportion of cheaters. There are a few ways we can model this in PyMC. I'll demonstrate the most explicit way, and later show a simplified version. Both versions arrive at the same inference. 
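Before the PyMC versions, the mechanics of the Privacy Algorithm can be simulated with plain NumPy to build intuition (a sketch; the true cheating rates below are made up purely for illustration):

```python
import numpy as np

rng = np.random.RandomState(0)
n_students = 200000  # a large crowd, so the proportions are stable

for p in (0.0, 0.5, 1.0):  # assumed true cheating rates
    cheated = rng.rand(n_students) < p
    first_heads = rng.rand(n_students) < 0.5   # heads: answer truthfully
    second_heads = rng.rand(n_students) < 0.5  # heads on second flip: forced "Yes"
    says_yes = np.where(first_heads, cheated, second_heads)
    print("true p = %.1f  ->  proportion answering Yes: %.3f" % (p, says_yes.mean()))
```

Even with no cheaters at all, roughly a quarter of the answers are "Yes"; with universal cheating, roughly three quarters.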
In our data-generation model, we sample $p$, the true proportion of cheaters, from a prior. Since we are quite ignorant about $p$, we will assign it a $\\text{Uniform}(0,1)$ prior.", "import pymc as pm\n\nN = 100\np = pm.Uniform(\"freq_cheating\", 0, 1)", "Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students: 1 implies they cheated and 0 implies they did not.", "true_answers = pm.Bernoulli(\"truths\", p, size=N)", "If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$: denote a 1 as a Heads and 0 a Tails.", "first_coin_flips = pm.Bernoulli(\"first_flips\", 0.5, size=N)\nprint first_coin_flips.value", "Although not everyone flips a second time, we can still model the possible realization of second coin-flips:", "second_coin_flips = pm.Bernoulli(\"second_flips\", 0.5, size=N)\n", "Using these variables, we can return a possible realization of the observed proportion of \"Yes\" responses. We do this using a PyMC deterministic variable:", "@pm.deterministic\ndef observed_proportion(t_a=true_answers,\n fc=first_coin_flips,\n sc=second_coin_flips):\n\n observed = fc * t_a + (1 - fc) * sc\n return observed.sum() / float(N)", "The line fc*t_a + (1-fc)*sc contains the heart of the Privacy algorithm. Elements in this array are 1 if and only if i) the first toss is heads and the student cheated or ii) the first toss is tails, and the second is heads, and are 0 else. Finally, the last line sums this vector and divides by float(N), produces a proportion.", "observed_proportion.value", "Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 \"Yes\" responses. 
To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a \"Yes\" (half chance of having first coin land Tails, and another half chance of having second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if all students cheated, we should expect to see approximately 3/4 of all responses be \"Yes\". \nThe researchers observe a Binomial random variable, with N = 100 and p = observed_proportion with value = 35:", "X = 35\n\nobservations = pm.Binomial(\"obs\", N, observed_proportion, observed=True,\n value=X)", "Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.", "model = pm.Model([p, true_answers, first_coin_flips,\n second_coin_flips, observed_proportion, observations])\n\n# To be explained in Chapter 3!\nmcmc = pm.MCMC(model)\nmcmc.sample(40000, 15000)\n\nfigsize(12.5, 3)\np_trace = mcmc.trace(\"freq_cheating\")[:]\nplt.hist(p_trace, histtype=\"stepfilled\", normed=True, alpha=0.85, bins=30,\n label=\"posterior distribution\", color=\"#348ABD\")\nplt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3)\nplt.xlim(0, 1)\nplt.legend();", "With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 to 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad since there is a .3 length window the true value most likely lives in. Have we even gained anything, or are we still too uncertain about the true frequency? \nI would argue, yes, we have discovered something. It is implausible, according to our posterior, that there are no cheaters, i.e. the posterior assigns low probability to $p=0$. 
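A back-of-the-envelope point estimate agrees with this: the expected "Yes" rate moves linearly from 1/4 (no cheaters) to 3/4 (all cheaters), so an observed rate $\hat{q}$ inverts to $p \approx (\hat{q} - 1/4)/(1/2)$ (a moment-matching sketch that discards all the uncertainty the posterior captures):

```python
X, N = 35, 100  # observed "Yes" count and number of students, as above
q_hat = X / float(N)

# the "Yes" rate is 1/4 at p = 0 and 3/4 at p = 1, and linear in between
p_estimate = (q_hat - 0.25) / 0.5
print("moment-matching estimate of p: %.2f" % p_estimate)
```

The value 0.20 sits inside the 0.05-0.35 window, but unlike the posterior it says nothing about how confident we should be in it.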
We started with a uniform prior, treating all values of $p$ as equally plausible, but the data ruled out $p=0$ as a possibility, so we can be confident that there were cheaters. \nThis kind of algorithm can be used to gather private information from users and be reasonably confident that the data, though noisy, is truthful. \nAlternative PyMC Model\nGiven a value for $p$ (which from our god-like position we know), we can find the probability the student will answer yes: \n\begin{align}\nP(\text{"Yes"}) &= P( \text{Heads on first coin} )P( \text{cheater} ) + P( \text{Tails on first coin} )P( \text{Heads on second coin} ) \\\n& = \frac{1}{2}p + \frac{1}{2}\frac{1}{2}\\\n& = \frac{p}{2} + \frac{1}{4}\n\end{align}\nThus, knowing $p$ we know the probability a student will respond "Yes". In PyMC, we can create a deterministic function to evaluate the probability of responding "Yes", given $p$:", "p = pm.Uniform(\"freq_cheating\", 0, 1)\n\n\n@pm.deterministic\ndef p_skewed(p=p):\n return 0.5 * p + 0.25", "I could have typed p_skewed = 0.5*p + 0.25 instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a deterministic variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake. \nIf we know the probability of respondents saying "Yes", which is p_skewed, and we have $N=100$ students, the number of "Yes" responses is a binomial random variable with parameters N and p_skewed.\nThis is where we include our observed 35 "Yes" responses. 
In the declaration of the pm.Binomial, we include value = 35 and observed = True.", "yes_responses = pm.Binomial(\"number_cheaters\", 100, p_skewed,\n value=35, observed=True)", "Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.", "model = pm.Model([yes_responses, p_skewed, p])\n\n# To Be Explained in Chapter 3!\nmcmc = pm.MCMC(model)\nmcmc.sample(25000, 2500)\n\nfigsize(12.5, 3)\np_trace = mcmc.trace(\"freq_cheating\")[:]\nplt.hist(p_trace, histtype=\"stepfilled\", normed=True, alpha=0.85, bins=30,\n label=\"posterior distribution\", color=\"#348ABD\")\nplt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2)\nplt.xlim(0, 1)\nplt.legend();", "More PyMC Tricks\nProtip: Lighter deterministic variables with Lambda class\nSometimes writing a deterministic function using the @pm.deterministic decorator can seem like a chore, especially for a small function. I have already mentioned that elementary math operations can produce deterministic variables implicitly, but what about operations like indexing or slicing? Built-in Lambda functions can handle this with the elegance and simplicity required. For example, \nbeta = pm.Normal(\"coefficients\", 0, size=(N, 1))\nx = np.random.randn((N, 1))\nlinear_combination = pm.Lambda(lambda x=x, beta=beta: np.dot(x.T, beta))\n\nProtip: Arrays of PyMC variables\nThere is no reason why we cannot store multiple heterogeneous PyMC variables in a Numpy array. Just remember to set the dtype of the array to object upon initialization. For example:", "N = 10\nx = np.empty(N, dtype=object)\nfor i in range(0, N):\n x[i] = pm.Exponential('x_%i' % i, (i + 1) ** 2)", "The remainder of this chapter examines some practical examples of PyMC and PyMC modeling:\nExample: Challenger Space Shuttle Disaster <span id=\"challenger\"/>\nOn January 28, 1986, the twenty-fifth flight of the U.S. 
space shuttle program ended in disaster when one of the rocket boosters of the Shuttle Challenger exploded shortly after lift-off, killing all seven crew members. The presidential commission on the accident concluded that it was caused by the failure of an O-ring in a field joint on the rocket booster, and that this failure was due to a faulty design that made the O-ring unacceptably sensitive to a number of factors including outside temperature. Of the previous 24 flights, data were available on failures of O-rings on 23, (one was lost at sea), and these data were discussed on the evening preceding the Challenger launch, but unfortunately only the data corresponding to the 7 flights on which there was a damage incident were considered important and these were thought to show no obvious trend. The data are shown below (see [1]):", "figsize(12.5, 3.5)\nnp.set_printoptions(precision=3, suppress=True)\nchallenger_data = np.genfromtxt(\"data/challenger_data.csv\", skip_header=1,\n usecols=[1, 2], missing_values=\"NA\",\n delimiter=\",\")\n# drop the NA values\nchallenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]\n\n# plot it, as a function of temperature (the first column)\nprint \"Temp (F), O-Ring failure?\"\nprint challenger_data\n\nplt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color=\"k\",\n alpha=0.5)\nplt.yticks([0, 1])\nplt.ylabel(\"Damage Incident?\")\nplt.xlabel(\"Outside temperature (Fahrenheit)\")\nplt.title(\"Defects of the Space Shuttle O-Rings vs temperature\")", "It looks clear that the probability of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask \"At temperature $t$, what is the probability of a damage incident?\". 
The goal of this example is to answer that question.\nWe need a function of temperature, call it $p(t)$, that is bounded between 0 and 1 (so as to model a probability) and changes from 1 to 0 as we increase temperature. There are actually many such functions, but the most popular choice is the logistic function.\n$$p(t) = \\frac{1}{ 1 + e^{ \\;\\beta t } } $$\nIn this model, $\\beta$ is the variable we are uncertain about. Below is the function plotted for $\\beta = 1, 3, -5$.", "figsize(12, 3)\n\n\ndef logistic(x, beta):\n return 1.0 / (1.0 + np.exp(beta * x))\n\nx = np.linspace(-4, 4, 100)\nplt.plot(x, logistic(x, 1), label=r\"$\\beta = 1$\")\nplt.plot(x, logistic(x, 3), label=r\"$\\beta = 3$\")\nplt.plot(x, logistic(x, -5), label=r\"$\\beta = -5$\")\nplt.legend();", "But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. We need to add a bias term to our logistic function:\n$$p(t) = \\frac{1}{ 1 + e^{ \\;\\beta t + \\alpha } } $$\nSome plots are below, with differing $\\alpha$.", "def logistic(x, beta, alpha=0):\n return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha))\n\nx = np.linspace(-4, 4, 100)\n\nplt.plot(x, logistic(x, 1), label=r\"$\\beta = 1$\", ls=\"--\", lw=1)\nplt.plot(x, logistic(x, 3), label=r\"$\\beta = 3$\", ls=\"--\", lw=1)\nplt.plot(x, logistic(x, -5), label=r\"$\\beta = -5$\", ls=\"--\", lw=1)\n\nplt.plot(x, logistic(x, 1, 1), label=r\"$\\beta = 1, \\alpha = 1$\",\n color=\"#348ABD\")\nplt.plot(x, logistic(x, 3, -2), label=r\"$\\beta = 3, \\alpha = -2$\",\n color=\"#A60628\")\nplt.plot(x, logistic(x, -5, 7), label=r\"$\\beta = -5, \\alpha = 7$\",\n color=\"#7A68A6\")\n\nplt.legend(loc=\"lower left\");", "Adding a constant term $\\alpha$ amounts to shifting the curve left or right (hence why it is called a bias).\nLet's start modeling this in PyMC. 
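Before doing so, one more sanity check on the logistic function: solving $p(t) = 1/2$ gives $t = -\alpha/\beta$, which makes precise the claim that $\alpha$ shifts the curve left or right (a sketch reusing the $\beta = 3, \alpha = -2$ pair from the plot above):

```python
import numpy as np

def logistic(x, beta, alpha=0.0):
    return 1.0 / (1.0 + np.exp(beta * x + alpha))

beta, alpha = 3.0, -2.0
midpoint = -alpha / beta  # where the curve crosses 1/2

print("p(%.3f) = %.3f" % (midpoint, logistic(midpoint, beta, alpha)))

# and the output stays strictly inside (0, 1), as a probability must
t = np.linspace(-10, 10, 1001)
p = logistic(t, beta, alpha)
print("min %.2e, max %.8f" % (p.min(), p.max()))
```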
The $\beta, \alpha$ parameters have no reason to be positive, bounded or relatively large, so they are best modeled by a Normal random variable, introduced next.\nNormal distributions\nA Normal random variable, denoted $X \sim N(\mu, 1/\tau)$, has a distribution with two parameters: the mean, $\mu$, and the precision, $\tau$. Those familiar with the Normal distribution have probably seen $\sigma^2$ instead of $\tau^{-1}$. They are in fact reciprocals of each other. The change was motivated by simpler mathematical analysis and is an artifact of older Bayesian methods. Just remember: the smaller $\tau$, the larger the spread of the distribution (i.e. we are more uncertain); the larger $\tau$, the tighter the distribution (i.e. we are more certain). Regardless, $\tau$ is always positive. \nThe probability density function of a $N( \mu, 1/\tau)$ random variable is:\n$$ f(x | \mu, \tau) = \sqrt{\frac{\tau}{2\pi}} \exp\left( -\frac{\tau}{2} (x-\mu)^2 \right) $$\nWe plot some different density functions below. (Note that the standard deviation is $\sigma = 1/\sqrt{\tau}$, which is what scipy's scale parameter expects.)", "import scipy.stats as stats\n\nnor = stats.norm\nx = np.linspace(-8, 7, 150)\nmu = (-2, 0, 3)\ntau = (.7, 1, 2.8)\ncolors = [\"#348ABD\", \"#A60628\", \"#7A68A6\"]\nparameters = zip(mu, tau, colors)\n\nfor _mu, _tau, _color in parameters:\n plt.plot(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)),\n label=\"$\mu = %d,\;\\tau = %.1f$\" % (_mu, _tau), color=_color)\n plt.fill_between(x, nor.pdf(x, _mu, scale=1. / np.sqrt(_tau)), color=_color,\n alpha=.33)\n\nplt.legend(loc=\"upper right\")\nplt.xlabel(\"$x$\")\nplt.ylabel(\"density function at $x$\")\nplt.title(\"Probability distribution of three different Normal random \\\nvariables\");", "A Normal random variable can take on any real number, but the variable is very likely to be relatively close to $\mu$. 
In fact, the expected value of a Normal is equal to its $\\mu$ parameter:\n$$ E[ X | \\mu, \\tau] = \\mu$$\nand its variance is equal to the inverse of $\\tau$:\n$$Var( X | \\mu, \\tau ) = \\frac{1}{\\tau}$$\nBelow we continue our modeling of the Challenger space craft:", "import pymc as pm\n\ntemperature = challenger_data[:, 0]\nD = challenger_data[:, 1] # defect or not?\n\n# notice the`value` here. We explain why below.\nbeta = pm.Normal(\"beta\", 0, 0.001, value=0)\nalpha = pm.Normal(\"alpha\", 0, 0.001, value=0)\n\n\n@pm.deterministic\ndef p(t=temperature, alpha=alpha, beta=beta):\n return 1.0 / (1. + np.exp(beta * t + alpha))", "We have our probabilities, but how do we connect them to our observed data? A Bernoulli random variable with parameter $p$, denoted $\\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 else. Thus, our model can look like:\n$$ \\text{Defect Incident, $D_i$} \\sim \\text{Ber}( \\;p(t_i)\\; ), \\;\\; i=1..N$$\nwhere $p(t)$ is our logistic function and $t_i$ are the temperatures we have observations about. Notice in the above code we had to set the values of beta and alpha to 0. The reason for this is that if beta and alpha are very large, they make p equal to 1 or 0. Unfortunately, pm.Bernoulli does not like probabilities of exactly 0 or 1, though they are mathematically well-defined probabilities. So by setting the coefficient values to 0, we set the variable p to be a reasonable starting value. This has no effect on our results, nor does it mean we are including any additional information in our prior. 
It is simply a computational caveat in PyMC.", "p.value\n\n# connect the probabilities in `p` with our observations through a\n# Bernoulli random variable.\nobserved = pm.Bernoulli(\"bernoulli_obs\", p, value=D, observed=True)\n\nmodel = pm.Model([observed, beta, alpha])\n\n# Mysterious code to be explained in Chapter 3\nmap_ = pm.MAP(model)\nmap_.fit()\nmcmc = pm.MCMC(model)\nmcmc.sample(120000, 100000, 2)", "We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\\alpha$ and $\\beta$:", "alpha_samples = mcmc.trace('alpha')[:, None] # best to make them 1d\nbeta_samples = mcmc.trace('beta')[:, None]\n\nfigsize(12.5, 6)\n\n# histogram of the samples:\nplt.subplot(211)\nplt.title(r\"Posterior distributions of the variables $\\alpha, \\beta$\")\nplt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85,\n label=r\"posterior of $\\beta$\", color=\"#7A68A6\", normed=True)\nplt.legend()\n\nplt.subplot(212)\nplt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85,\n label=r\"posterior of $\\alpha$\", color=\"#A60628\", normed=True)\nplt.legend();", "All samples of $\\beta$ are greater than 0. If instead the posterior was centered around 0, we may suspect that $\\beta = 0$, implying that temperature has no effect on the probability of defect. \nSimilarly, all $\\alpha$ posterior values are negative and far away from 0, implying that it is correct to believe that $\\alpha$ is significantly less than 0. \nRegarding the spread of the data, we are very uncertain about what the true parameters might be (though considering the low sample size and the large overlap of defects-to-nondefects this behaviour is perhaps expected). \nNext, let's look at the expected probability for a specific value of the temperature. 
That is, we average over all samples from the posterior to get a likely value for $p(t_i)$.", "t = np.linspace(temperature.min() - 5, temperature.max() + 5, 50)[:, None]\np_t = logistic(t.T, beta_samples, alpha_samples)\n\nmean_prob_t = p_t.mean(axis=0)\n\nfigsize(12.5, 4)\n\nplt.plot(t, mean_prob_t, lw=3, label=\"average posterior \\nprobability \\\nof defect\")\nplt.plot(t, p_t[0, :], ls=\"--\", label=\"realization from posterior\")\nplt.plot(t, p_t[-2, :], ls=\"--\", label=\"realization from posterior\")\nplt.scatter(temperature, D, color=\"k\", s=50, alpha=0.5)\nplt.title(\"Posterior expected value of probability of defect; \\\nplus realizations\")\nplt.legend(loc=\"lower left\")\nplt.ylim(-0.1, 1.1)\nplt.xlim(t.min(), t.max())\nplt.ylabel(\"probability\")\nplt.xlabel(\"temperature\");", "Above we also plotted two possible realizations of what the actual underlying system might be. Both are equally likely as any other draw. The blue line is what occurs when we average all the 20000 possible dotted lines together.\nAn interesting question to ask is for what temperatures are we most uncertain about the defect-probability? Below we plot the expected value line and the associated 95% intervals for each temperature.", "from scipy.stats.mstats import mquantiles\n\n# vectorized bottom and top 2.5% quantiles for \"confidence interval\"\nqs = mquantiles(p_t, [0.025, 0.975], axis=0)\nplt.fill_between(t[:, 0], *qs, alpha=0.7,\n color=\"#7A68A6\")\n\nplt.plot(t[:, 0], qs[0], label=\"95% CI\", color=\"#7A68A6\", alpha=0.7)\n\nplt.plot(t, mean_prob_t, lw=1, ls=\"--\", color=\"k\",\n label=\"average posterior \\nprobability of defect\")\n\nplt.xlim(t.min(), t.max())\nplt.ylim(-0.02, 1.02)\nplt.legend(loc=\"lower left\")\nplt.scatter(temperature, D, color=\"k\", s=50, alpha=0.5)\nplt.xlabel(\"temp, $t$\")\n\nplt.ylabel(\"probability estimate\")\nplt.title(\"Posterior probability estimates given temp. 
$t$\");", "The 95% credible interval, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75.\nMore generally, we can see that as the temperature nears 60 degrees, the CI's spread out over [0,1] quickly. As we pass 70 degrees, the CI's tighten again. This can give us insight about how to proceed next: we should probably test more O-rings around 60-65 temperature to get a better estimate of probabilities in that range. Similarly, when reporting to scientists your estimates, you should be very cautious about simply telling them the expected probability, as we can see this does not reflect how wide the posterior distribution is.\nWhat about the day of the Challenger disaster?\nOn the day of the Challenger disaster, the outside temperature was 31 degrees Fahrenheit. What is the posterior distribution of a defect occurring, given this temperature? The distribution is plotted below. It looks almost guaranteed that the Challenger was going to be subject to defective O-rings.", "figsize(12.5, 2.5)\n\nprob_31 = logistic(31, beta_samples, alpha_samples)\n\nplt.xlim(0.995, 1)\nplt.hist(prob_31, bins=1000, normed=True, histtype='stepfilled')\nplt.title(\"Posterior distribution of probability of defect, given $t = 31$\")\nplt.xlabel(\"probability of defect occurring in O-ring\");", "Is our model appropriate?\nThe skeptical reader will say \"You deliberately chose the logistic function for $p(t)$ and the specific priors. Perhaps other functions or priors will give different results. How do I know I have chosen a good model?\" This is absolutely true. To consider an extreme situation, what if I had chosen the function $p(t) = 1,\\; \\forall t$, which guarantees a defect always occurring: I would have again predicted disaster on January 28th. Yet this is clearly a poorly chosen model. 
On the other hand, if I did choose the logistic function for $p(t)$, but specified all my priors to be very tight around 0, likely we would have very different posterior distributions. How do we know our model is an expression of the data? This encourages us to measure the model's goodness of fit.\nWe can think: how can we test whether our model is a bad fit? An idea is to compare observed data (which if we recall is a fixed stochastic variable) with an artificial dataset which we can simulate. The rationale is that if the simulated dataset does not appear similar, statistically, to the observed dataset, then likely our model does not accurately represent the observed data. \nPreviously in this Chapter, we simulated artificial datasets for the SMS example. To do this, we sampled values from the priors. We saw how varied the resulting datasets looked, and rarely did they mimic our observed dataset. In the current example, we should sample from the posterior distributions to create very plausible datasets. Luckily, our Bayesian framework makes this very easy. We only need to create a new Stochastic variable that is exactly the same as our variable that stored the observations, but minus the observations themselves. 
If you recall, our Stochastic variable that stored our observed data was:\nobserved = pm.Bernoulli( \"bernoulli_obs\", p, value=D, observed=True)\n\nHence we create:\nsimulated_data = pm.Bernoulli(\"simulation_data\", p)\n\nLet's simulate 10 000:", "simulated = pm.Bernoulli(\"bernoulli_sim\", p)\nN = 10000\n\nmcmc = pm.MCMC([simulated, alpha, beta, observed])\nmcmc.sample(N)\n\nfigsize(12.5, 5)\n\nsimulations = mcmc.trace(\"bernoulli_sim\")[:]\nprint simulations.shape\n\nplt.title(\"Simulated dataset using posterior parameters\")\nfigsize(12.5, 6)\nfor i in range(4):\n ax = plt.subplot(4, 1, i + 1)\n plt.scatter(temperature, simulations[1000 * i, :], color=\"k\",\n s=50, alpha=0.6)", "Note that the above plots are different (if you can think of a cleaner way to present this, please send a pull request and answer here!).\nWe wish to assess how good our model is. \"Good\" is a subjective term of course, so results must be relative to other models. \nWe will be doing this graphically as well, which may seem like an even less objective method. The alternative is to use Bayesian p-values. These are still subjective, as the proper cutoff between good and bad is arbitrary. Gelman emphasises that the graphical tests are more illuminating [7] than p-value tests. We agree.\nThe following graphical test is a novel data-viz approach to logistic regression. The plots are called separation plots[8]. For a suite of models we wish to compare, each model is plotted on an individual separation plot. I leave most of the technical details about separation plots to the very accessible original paper, but I'll summarize their use here.\nFor each model, we calculate the proportion of times the posterior simulation proposed a value of 1 for a particular temperature, i.e. compute $P( \\;\\text{Defect} = 1 | t, \\alpha, \\beta )$ by averaging. This gives us the posterior probability of a defect at each data point in our dataset. 
For example, for the model we used above:", "posterior_probability = simulations.mean(axis=0)\nprint \"posterior prob of defect | realized defect \"\nfor i in range(len(D)):\n print \"%.2f | %d\" % (posterior_probability[i], D[i])", "Next we sort each column by the posterior probabilities:", "ix = np.argsort(posterior_probability)\nprint \"prob | defect \"\nfor i in range(len(D)):\n print \"%.2f | %d\" % (posterior_probability[ix[i]], D[ix[i]])", "We can present the above data better in a figure: I've wrapped this up into a separation_plot function.", "from separation_plot import separation_plot\n\n\nfigsize(11., 1.5)\nseparation_plot(posterior_probability, D)", "The snaking line is the sorted probabilities, blue bars denote defects, and empty space (or grey bars for the optimistic readers) denote non-defects. As the probability rises, we see more and more defects occur. On the right-hand side, the plot suggests that when the posterior probability is large (line close to 1), more defects are realized. This is good behaviour. Ideally, all the blue bars should be close to the right-hand side, and deviations from this reflect missed predictions. \nThe black vertical line is the expected number of defects we should observe, given this model. This allows the user to see how the total number of events predicted by the model compares to the actual number of events in the data.\nIt is much more informative to compare this to separation plots for other models. Below we compare our model (top) versus three others:\n\nthe perfect model, which predicts the posterior probability to be equal to 1 if a defect did occur.\na completely random model, which predicts random probabilities regardless of temperature.\na constant model: where $P(D = 1 \\; | \\; t) = c, \\;\\; \\forall t$. 

The best choice for $c$ is the observed frequency of defects, in this case 7/23.", "figsize(11., 1.25)\n\n# Our temperature-dependent model\nseparation_plot(posterior_probability, D)\nplt.title(\"Temperature-dependent model\")\n\n# Perfect model\n# i.e. the probability of defect is equal to if a defect occurred or not.\np = D\nseparation_plot(p, D)\nplt.title(\"Perfect model\")\n\n# random predictions\np = np.random.rand(23)\nseparation_plot(p, D)\nplt.title(\"Random model\")\n\n# constant model\nconstant_prob = 7. / 23 * np.ones(23)\nseparation_plot(constant_prob, D)\nplt.title(\"Constant-prediction model\")", "In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. The same holds for the constant model.\nIn the perfect model, the probability line is not well shown, as it is stuck to the bottom and top of the figure. Of course the perfect model is only for demonstration, and we cannot draw any scientific inference from it.\nExercises\n1. Try putting in extreme values for our observations in the cheating example. What happens if we observe 25 affirmative responses? 10? 50? \n2. Try plotting $\\alpha$ samples versus $\\beta$ samples. Why might the resulting plot look like this?", "# type your code here.\nfigsize(12.5, 4)\n\nplt.scatter(alpha_samples, beta_samples, alpha=0.1)\nplt.title(\"Why does the plot look like this?\")\nplt.xlabel(r\"$\\alpha$\")\nplt.ylabel(r\"$\\beta$\")", "References\n\n[1] Dalal, Fowlkes and Hoadley (1989), JASA, 84, 945-957.\n[2] German Rodriguez. Datasets. In WWS509. Retrieved 30/01/2013, from http://data.princeton.edu/wws509/datasets/#smoking.\n[3] McLeish, Don, and Cyntha Struthers. STATISTICS 450/850 Estimation and Hypothesis Testing. Winter 2012. Waterloo, Ontario: 2012. Print.\n[4] Fonnesbeck, Christopher. \"Building Models.\" PyMC-Devs. N.p., n.d. Web. 26 Feb 2013. http://pymc-devs.github.com/pymc/modelbuilding.html.\n[5] Cronin, Beau. 

\"Why Probabilistic Programming Matters.\" 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. https://plus.google.com/u/0/107971134877020469960/posts/KpeRdJKR6Z1.\n[6] S.P. Brooks, E.A. Catchpole, and B.J.T. Morgan. Bayesian animal survival estimation. Statistical Science, 15: 357–376, 2000\n[7] Gelman, Andrew. \"Philosophy and the practice of Bayesian statistics.\" British Journal of Mathematical and Statistical Psychology. (2012): n. page. Web. 2 Apr. 2013.\n[8] Greenhill, Brian, Michael D. Ward, and Audrey Sacks. \"The Separation Plot: A New Visual Method for Evaluating the Fit of Binary Models.\" American Journal of Political Science. 55.No.4 (2011): n. page. Web. 2 Apr. 2013.", "from IPython.core.display import HTML\n\n\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
muratcemkose/cy-rest-python
cytoscape-js/CytoscapeJs_and_igraph.ipynb
mit
[ "Network analysis and visualization with py2cytoscape and igraph\n\nWhat is Cytoscape?\n- An open source platform for graph analysis and visualization\n- Free! (for both academic and commercial use)\n- De-facto standard platform in biotech community (11k+ publications)\nCytoscape Ecosystem\n\nCytoscape - A Java desktop application with plugin support\nCytoscape App Store - Central repository of all applications build on top of Cytoscape API\nCytoscape.js - JavaScript library for graph data visualization", "from py2cytoscape.cytoscapejs import viewer as cyjs\nfrom py2cytoscape import util\nimport json\nimport igraph as ig", "Quick Introduction: Graph libraries and py2cytoscape\nVisualize networks generated by igraph", "import matplotlib.pyplot as plt\n\ng = ig.Graph.Barabasi(200)\npositions = g.layout_kamada_kawai()\ng_cyjs = util.from_igraph(g, layout=positions, scale=120)\n\ncyjs.render(g_cyjs, style='default2')\n\n%matplotlib inline\n\nimport networkx as nx\nfrom py2cytoscape.util.util_networkx import *\n\ndef nx_layout(graph):\n pos = nx.graphviz_layout(graph, prog='dot')\n pos2 = map(lambda position: {'x': position[0], 'y':position[1] }, pos.values())\n for node_idx in graph.nodes():\n graph.node[node_idx]['position'] = pos2[node_idx]\n\n\nsf100 = nx.barabasi_albert_graph(200, 4, 0.8)\nnx_layout(sf100)\ng3 = from_networkx(sf100)\n\ncyjs.render(g3, style='default2')\n\nyeast_network = json.load(open('../basic/sample_data/yeast.json'))\nyeast_nx = to_networkx(yeast_network)\n\nyeast_nx.nodes()\nyeast_nx.nodes()\n\n\ndef nx_layout2(graph):\n pos = nx.graphviz_layout(graph, prog='dot')\n pos2 = map(lambda position: {'x': position[0], 'y':position[1] }, pos.values())\n nodes = graph.nodes()\n for i, nodeid in enumerate(nodes):\n graph.node[nodeid]['position'] = pos2[i]\n\n\nnx_layout2(yeast_nx)\ng4 = from_networkx(yeast_nx)\n\ncyjs.render(g4, style='Directed')", "Static image\nInteraction is hard (or no interaction)\nLimited visual properties\n\n\n1. 
Visualize network data created in Cytoscape desktop\nLoad network JSON files generated in Cytoscape desktop", "networks = {}\nlayouts = cy.get_layouts()\n\n# Load local network files\nyeast_network = json.load(open('yeast2.cyjs'))\nnetworks['Yeast PPI Network'] = yeast_network\n\nkegg_pathway = json.load(open('kegg_tca.cyjs'))\nnetworks['KEGG: TCA Cycle Human'] = kegg_pathway\n\n# Load Visual Style file\nvs_collection = json.load(open('kegg_style.json'))\n\nstyles = {}\nfor style in vs_collection:\n style_settings = style['style']\n title = style['title']\n styles[title] = style_settings\n \nprint(styles['default'])", "Visualization with Cytoscape.js module", "def render_graph(Network, Style, Layout):\n cy.render(Network, Style, Layout)\n \ninteract(render_graph, \n Network = DropdownWidget(values=networks, value=yeast_network), \n Style = DropdownWidget(values=styles, value=styles['default']), \n Layout = DropdownWidget(values=layouts, value=layouts['Preset'])\n)", "Render KEGG Pathway as an interactive visualization", "interact(render_graph, \n Network = DropdownWidget(values=networks, value=kegg_pathway), \n Style = DropdownWidget(values=styles, value=styles['KEGG Style']), \n Layout = DropdownWidget(values=layouts, value=layouts['Preset'])\n)\n\nprint(json.dumps(kegg_pathway, indent=2))", "2. 
Generate graph data with NetworkX\nUse BA model to generate graph and calculate some metrics", "ba1 = nx.barabasi_albert_graph(100,3)\nclustering = nx.clustering(ba1)\ndegrees = nx.degree(ba1)\n\nnx.set_node_attributes(ba1, 'degree', degrees)\nnx.set_node_attributes(ba1, 'clustering', clustering)\n\nmin_degree = min(degrees.values())\nmax_degree = max(degrees.values())\nmin_cl = min(clustering.values())\nmax_cl = max(clustering.values())", "Customize the visualization based on network statistics (Visual Mapping)", "# Convert to Cytoscape.js compatible format\nba1_cyjs = cy.from_networkx(ba1) \n\n# Create custom Visual Style programatically\nnew_directed = styles['Directed']\nnew_directed.append({\n 'selector':'node', \n 'css':{\n 'width': 'mapData(degree,' + str(min_degree) + ',' + str(max_degree) + ', 20, 80)',\n 'height': 'mapData(degree,' + str(min_degree) + ',' + str(max_degree) + ', 20, 80)',\n 'font-size': 'mapData(degree,' + str(min_degree) + ',' + str(max_degree) + ', 10, 50)',\n 'border-width': 1,\n 'border-width': 0,\n 'opacity': 0.9,\n 'color': '#222222',\n 'background-color': 'mapData(clustering,' + str(min_cl) + ',' + str(max_cl) + ', white, red)'\n }\n})\n\nnew_directed.append({\n 'selector':'edge', \n 'css':{\n 'width':0.5,\n 'opacity': 0.5,\n 'line-color': '#aaaaaa'\n }\n})\n\nnetworks['BA Graph 1'] = ba1_cyjs\n\ninteract(render_graph, \n Network = DropdownWidget(values=networks, value=ba1_cyjs), \n Style = DropdownWidget(values=styles, value=styles['Directed']), \n Layout = DropdownWidget(values=layouts, value=layouts['Breadthfirst'])\n)", "3. 
Generate & layout data with igraph, and visualize it in Cytoscape.js", "# Generate with igraph\ngenerated1 = Graph.Watts_Strogatz(1,600, 4, 0.15)\nlayout = generated1.layout(\"lgl\")\n\n# Community detection\ncommunities = generated1.community_label_propagation()\ncom_count = len(communities)\nprint('Communities = ' + str(com_count))\nrainbow = RainbowPalette(n=com_count)\n\ngenerated1.vs['community'] = communities.membership\nprint(generated1.vs[100]['community'])\n\n# Assign color\nfor node in generated1.vs:\n assigned_color = rainbow[node['community']]\n node['color'] = 'rgba(' + str(assigned_color[0]*255) +',' + str(assigned_color[1]*255) + ',' + str(assigned_color[2]*255) + ')'\n\nprint(generated1.vs.attributes())\n\ngenerated1_cyjs = cy.from_igraph(generated1, layout, 20)\n\n# Override the existing style\nnew_style = styles['default']\nnew_style.append({\n 'selector':'node', \n 'css':{\n 'width':30,\n 'height':30,\n 'border-width': 0,\n 'content': 'data(community)',\n 'font-size': 22,\n 'background-color': 'data(color)'\n }\n})\n\nnew_style.append({\n 'selector':'edge', \n 'css':{\n 'width':2,\n 'opacity': 0.4\n }\n})\n\nnetworks['WS200'] = generated1_cyjs\n\ninteract(render_graph, \n Network = DropdownWidget(values=networks, value=generated1_cyjs), \n Style = DropdownWidget(values=styles, value=styles['default']), \n Layout = DropdownWidget(values=layouts, value=layouts['Preset'])\n)\n\nstart = generated1.vs[1]\nend = generated1.vs[156]\n\npaths = generated1.get_all_shortest_paths(start, end)\n\ncommunities = generated1.community_label_propagation()\nprint(len(communities))\ngenerated1.vs['community'] = communities.membership\n\n\nprint(generated1.vs['community'])\n\nedges = []\nfor path in paths:\n for index, v in enumerate(path):\n if index < len(path)-1:\n edge = (v, path[index+1])\n edges.append(edge)\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.18/_downloads/ad203e57e0d21d6623eb90e7bd84fa3c/plot_morph_surface_stc.ipynb
bsd-3-clause
[ "%matplotlib inline", "Morph surface source estimate\nThis example demonstrates how to morph an individual subject's\n:class:mne.SourceEstimate to a common reference space. We achieve this using\n:class:mne.SourceMorph. Pre-computed data will be morphed based on\na spherical representation of the cortex computed using the spherical\nregistration of FreeSurfer &lt;tut-freesurfer&gt;\n(https://surfer.nmr.mgh.harvard.edu/fswiki/SurfaceRegAndTemplates) [1]_. This\ntransform will be used to morph the surface vertices of the subject towards the\nreference vertices. Here we will use 'fsaverage' as a reference space (see\nhttps://surfer.nmr.mgh.harvard.edu/fswiki/FsAverage).\nThe transformation will be applied to the surface source estimate. A plot\ndepicting the successful morph will be created for the spherical and inflated\nsurface representation of 'fsaverage', overlaid with the morphed surface\nsource estimate.\nReferences\n.. [1] Greve D. N., Van der Haegen L., Cai Q., Stufflebeam S., Sabuncu M.\n R., Fischl B., Brysbaert M.\n A Surface-based Analysis of Language Lateralization and Cortical\n Asymmetry. 
Journal of Cognitive Neuroscience 25(9), 1477-1492, 2013.\n<div class=\"alert alert-info\"><h4>Note</h4><p>For a tutorial about morphing see:\n `ch_morph`.</p></div>", "# Author: Tommy Clausner <tommy.clausner@gmail.com>\n#\n# License: BSD (3-clause)\nimport os\n\nimport mne\nfrom mne.datasets import sample\n\nprint(__doc__)", "Setup paths", "sample_dir_raw = sample.data_path()\nsample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')\nsubjects_dir = os.path.join(sample_dir_raw, 'subjects')\n\nfname_stc = os.path.join(sample_dir, 'sample_audvis-meg')", "Load example data", "# Read stc from file\nstc = mne.read_source_estimate(fname_stc, subject='sample')", "Setting up SourceMorph for SourceEstimate\nIn MNE surface source estimates represent the source space simply as\nlists of vertices (see\ntut-source-estimate-class).\nThis list can either be obtained from\n:class:mne.SourceSpaces (src) or from the stc itself.\nSince the default spacing (resolution of surface mesh) is 5 and\nsubject_to is set to 'fsaverage', :class:mne.SourceMorph will use\ndefault ico-5 fsaverage vertices to morph, which are the special\nvalues [np.arange(10242)] * 2.\n<div class=\"alert alert-info\"><h4>Note</h4><p>This is not generally true for other subjects! The set of vertices\n used for ``fsaverage`` with ico-5 spacing was designed to be\n special. ico-5 spacings for other subjects (or other spacings\n for fsaverage) must be calculated and will not be consecutive\n integers.</p></div>\n\nIf src was not defined, the morph will actually not be precomputed, because\nwe lack the vertices from that we want to compute. 
Instead the morph will\nbe set up and when applying it, the actual transformation will be computed on\nthe fly.\nInitialize SourceMorph for SourceEstimate", "morph = mne.compute_source_morph(stc, subject_from='sample',\n subject_to='fsaverage',\n subjects_dir=subjects_dir)", "Apply morph to (Vector) SourceEstimate\nThe morph will be applied to the source estimate data, by giving it as the\nfirst argument to the morph we computed above.", "stc_fsaverage = morph.apply(stc)", "Plot results", "# Define plotting parameters\nsurfer_kwargs = dict(\n hemi='lh', subjects_dir=subjects_dir,\n clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',\n initial_time=0.09, time_unit='s', size=(800, 800),\n smoothing_steps=5)\n\n# As spherical surface\nbrain = stc_fsaverage.plot(surface='sphere', **surfer_kwargs)\n\n# Add title\nbrain.add_text(0.1, 0.9, 'Morphed to fsaverage (spherical)', 'title',\n font_size=16)", "As inflated surface", "brain_inf = stc_fsaverage.plot(surface='inflated', **surfer_kwargs)\n\n# Add title\nbrain_inf.add_text(0.1, 0.9, 'Morphed to fsaverage (inflated)', 'title',\n font_size=16)", "Reading and writing SourceMorph from and to disk\nAn instance of SourceMorph can be saved, by calling\n:meth:morph.save &lt;mne.SourceMorph.save&gt;.\nThis method allows for specification of a filename under which the morph\nwill be saved in \".h5\" format. If no file extension is provided, \"-morph.h5\"\nwill be appended to the respective defined filename::\n&gt;&gt;&gt; morph.save('my-file-name')\n\nReading a saved source morph can be achieved by using\n:func:mne.read_source_morph::\n&gt;&gt;&gt; morph = mne.read_source_morph('my-file-name-morph.h5')\n\nOnce the environment is set up correctly, no information such as\nsubject_from or subjects_dir needs to be provided, since it can be\ninferred from the data, and the morph will target 'fsaverage' by default. SourceMorph\ncan further be used without creating an instance and assigning it to a\nvariable. 

Instead :func:mne.compute_source_morph and\n:meth:mne.SourceMorph.apply can be\neasily chained into a handy one-liner. Taking this together the shortest\npossible way to morph data directly would be:", "stc_fsaverage = mne.compute_source_morph(stc,\n subjects_dir=subjects_dir).apply(stc)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
thempel/adaptivemd
examples/tutorial/1_example_setup_project.ipynb
lgpl-2.1
[ "First we cover some basics about adaptive sampling to get you going.\nWe will briefly talk about\n\nresources\nfiles\ngenerators\nhow to run a simple trajectory\n\nImports", "import sys, os", "Alright, let's load the package and pick the Project since we want to start a project", "from adaptivemd import Project", "Let's open a project with a UNIQUE name. This will be the name used in the DB so make sure it is new and not too short. Opening a project will always create a non-existing project and reopen an existing one. You cannot choose between opening types as you would with a file. This is a precaution to not accidentally delete your project.", "# Use this to completely remove the example-worker project from the database.\nProject.delete('tutorial')\n\nproject = Project('tutorial')", "Now we have a handle for our project. First thing is to set it up to work on a resource.\nThe Resource\nWhat is a resource?\nA Resource specifies a shared filesystem with one or more clusters attached to it. This can be your local machine or just a regular cluster or even a group of clusters that can access the same FS (like Titan, Eos and Rhea do).\nOnce you have chosen your place to store your results it is set for the project and can (at least should) not be altered since all file references are made to match this resource.\nLet us pick a local resource on your laptop or desktop machine for now. No cluster / HPC involved for now.", "from adaptivemd import LocalResource", "We now create the Resource object", "resource = LocalResource()", "Since this object defines the path where all files will be placed, let's get the path to the shared folder. The one that can be accessed from all workers. On your local machine this is trivially the case.", "resource.shared_path", "Okay, files will be placed in $HOME/adaptivemd/. 

You can change this using an option when creating the Resource \npython\nLocalResource(shared_path='$HOME/my/adaptive/folder/')\nIf you are interested in more information about Resource setup consult the documentation about Resource\nLast, we save our configured Resource and initialize our empty project with it. This is done once for a project and should not be altered.", "project.initialize(resource)", "Files", "from adaptivemd import File, Directory", "First we define a File object. Instead of just a string, these are used to represent files anywhere, on the cluster or your local application. There are some subclasses or extensions of File that have additional meta information like Trajectory or Frame. The underlying base object of a File is called a Location.\nWe start with a first PDB file that is located on this machine at a relative path", "pdb_file = File('file://../files/alanine/alanine.pdb')", "A File, like any complex object in adaptivemd, can have a .name attribute that makes it easier to find later. You can either set the .name property after creation, or use a little helper method .named() to get a one-liner. This function will set .name and return itself.\nFor more information about the possibilities to specify file locations consult the documentation for File", "pdb_file.name = 'initial_pdb'", "The .load() at the end is important. It causes the File object to load the content of the file and if you save the File object, the actual file is stored with it. This way it can simply be rewritten on the cluster or anywhere else.", "pdb_file.load()", "Generators\nTaskGenerators are instances whose purpose is to create tasks to be executed. This is similar to the\nway Kernels work. A TaskGenerator will generate Task objects for you which will be translated into a ComputeUnitDescription and executed. 

In simple terms:\nThe task generator creates the bash scripts for you that run a simulation or run pyemma.\nA task generator will be initialized with all parameters needed to make it work and it will know what needs to be staged to be used.\nThe engine", "from adaptivemd.engine.openmm import OpenMMEngine", "A task generator that will create jobs to run simulations. Currently it uses a little python script that will execute OpenMM. It requires conda to be added to the PATH variable or at least openmm to be installed on the cluster. If you set up your resource correctly then this should all happen automatically.\nSo let's do an example for an OpenMM engine. This is simply a small python script that makes OpenMM look like an executable. It runs a simulation by providing an initial frame, OpenMM specific system.xml and integrator.xml files and some additional parameters like the platform name, how often to store simulation frames, etc.", "engine = OpenMMEngine(\n pdb_file=pdb_file,\n system_file=File('file://../files/alanine/system.xml').load(),\n integrator_file=File('file://../files/alanine/integrator.xml').load(),\n args='-r --report-interval 1 -p CPU'\n).named('openmm')", "We have now an OpenMMEngine which uses the previously made pdb File object and uses the location defined in there. The same for the OpenMM XML files and some args to run using the CPU kernel, etc.\nLast we name the engine openmm to find it later.", "engine.name", "Next, we need to set the output types we want the engine to generate. We chose a stride of 10 for the master trajectory without selection and a second trajectory with only protein atoms and native stride.\nNote that the stride and all frame numbers ALWAYS refer to the native steps used in the engine. In our example the engine uses 2fs time steps. 

So master stores every 20fs and protein every 2fs", "engine.add_output_type('master', 'master.dcd', stride=10)\nengine.add_output_type('protein', 'protein.dcd', stride=1, selection='protein')", "The modeller", "from adaptivemd.analysis.pyemma import PyEMMAAnalysis", "This is the instance used to compute an MSM model of the existing trajectories that you pass to it. It is initialized with a .pdb file that is used to create features between the $c_\\alpha$ atoms. This implementation requires a PDB but in general this is not necessary. It is specific to my PyEMMAAnalysis showcase.", "modeller = PyEMMAAnalysis(\n engine=engine,\n outtype='protein',\n features={'add_inverse_distances': {'select_Backbone': None}}\n).named('pyemma')", "Again we name it pyemma for later reference.\nThe other two options choose which output type from the engine we want to analyse. We chose the protein trajectories since these are faster to load and have better time resolution.\nThe features dict expresses which features to use. In our case we use all inverse distances between backbone c_alpha atoms.\nAdd generators to project\nNext step is to add these to the project for later usage. We pick the .generators store and just add it. Consider a store to work like a set() in python. It contains objects only once and is not ordered. Therefore we need a name to find the objects later. Of course you can always iterate over all objects, but the order is not given.\nTo be precise there is an order in the time of creation of the object, but it is only accurate to seconds and it really is the time it was created and not stored.", "project.generators.add(engine)\nproject.generators.add(modeller)", "Note that you cannot add the same engine twice. But if you create a new engine it will be considered different and hence you can store it again. \nCreate one initial trajectory\nFinally we are ready to run a first trajectory that we will store as a point of reference in the project. 

Also it is nice to see how it works in general.\nWe are using a Worker approach. This means simply that someone (in our case the user from inside a script or a notebook) creates a list of tasks to be done and some other instance (the worker) will actually do the work.\nCreate a Trajectory object\nFirst we create the parameters for the engine to run the simulation. Since it seemed appropriate we use a Trajectory object (a special File with initial frame and length) as the input. You could of course pass these things separately, but this way, we can actually reference the not yet existing trajectory and do stuff with it.\nA Trajectory should have a unique name and so there is a project function to get you one. It uses numbers and makes sure that this number has not been used yet in the project.", "trajectory = project.new_trajectory(engine['pdb_file'], 100, engine)\ntrajectory", "This says, initial is alanine.pdb run for 100 frames and is named xxxxxxxx.dcd.\nWhy do we need a trajectory object?\nYou might wonder why a Trajectory object is necessary. You could just build a function that will take these parameters and run a simulation. At the end it will return the trajectory object. The same object we created just now.\nThe main reason is to familiarize you with the general concept of asynchronous execution and so-called Promises. The trajectory object we built is similar to a Promise so what is that exactly?\nA Promise is a value (or an object) that represents the result of a function at some point in the future. In our case it represents a trajectory at some point in the future. Normal promises have specific functions to deal with the unknown result, for us this is a little different but the general concept stands. 

We create an object that represents the specifications of a Trajectory and so, regardless of whether it exists yet, we can use the trajectory as if it already existed:\nGet the length", "print trajectory.length", "and since the length is fixed, we know how many frames there are and can access them", "print trajectory[20]", "ask for a way to extend the trajectory", "print trajectory.extend(100)", "ask for a way to run the trajectory", "print trajectory.run()", "We can ask to extend it, we can save it. We can reference specific frames in it before running a simulation. You could even build a whole set of related simulations this way without running a single frame. You might understand that this is pretty powerful especially in the context of running asynchronous simulations.\nLast, we did not answer why we have two separate steps: Create the trajectory first and then a task from it. The main reason is educational:\n\nIt needs to be clear that a Trajectory can exist before running some engine or creating a task for it. The Trajectory is not a result of a simulation action.\n\nCreate a Task object\nNow we want this trajectory to actually exist, so we have to create it. This requires a Task object that knows how to describe a simulation. Since Task objects are very flexible and can be complex there are helper functions (i.e. factories) to get these in an easy manner, like the ones we already created just before. Let's use the openmm engine to create an openmm task now.", "task = engine.run(trajectory)", "As an alternative you can directly use the trajectory (which knows its engine) and call .run()", "task = trajectory.run()", "That's it, just take a trajectory description and turn it into a task that contains the shell commands and needed files, etc. \nSubmit the task to the queue\nFinally we need to add this task to the things we want to be done. This is easy and only requires saving the task to the project. 

This is done to the project.tasks bundle and once it has been stored it can be picked up by any worker to execute it.", "project.queue(task) # shortcut for project.tasks.add(task)", "That is all we can do from here. To execute the tasks you need to run a worker using\nbash\nadaptivemdworker -l tutorial --verbose\nOnce this is done, come back here and check your results. If you want you can execute the next cell which will block until the task has been completed.", "print project.files\nprint project.trajectories", "and close the project.", "project.close()", "The final project.close() will close the DB connection." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
doudon/pymks_overview
notebooks/elasticity_2D_Multiphase.ipynb
mit
[ "Linear Elasticity in 2D for 3 Phases\nIntroduction\nThis example provides a demonstration of using PyMKS to compute the linear strain field for a three phase composite material. It demonstrates how to generate data for delta microstructures and then use this data to calibrate the first order MKS influence coefficients. The calibrated influence coefficients are used to predict the strain response for a random microstructure and the results are compared with those from finite element. Finally, the influence coefficients are scaled up and the MKS results are again compared with the finite element data for a large problem.\nPyMKS uses the finite element tool SfePy to generate both the strain fields to fit the MKS model and the verification data to evaluate the MKS model's accuracy.\nElastostatics Equations and Boundary Conditions\nThe governing equations for elastostatics and the boundary conditions used in this example are the same as those provided in the Linear Elasticity in 2D example. \nNote that an inappropriate boundary condition is used in this example because the current version of SfePy is unable to implement a periodic plus displacement boundary condition. This leads to some issues near the edges of the domain and introduces errors into the resizing of the coefficients. We are working to fix this issue, but note that the problem is not with the MKS regression itself, but with the calibration data used. The finite element package ABAQUS includes the displaced periodic boundary condition and can be used to calibrate the MKS regression correctly.\nModeling with MKS\nCalibration Data and Delta Microstructures\nThe first order MKS influence coefficients are all that is needed to compute a strain field of a random microstructure as long as the ratio between the elastic moduli (also known as the contrast) is less than 1.5. 
If this condition is met we can expect a mean absolute error of 2% or less when comparing the MKS results with those computed using finite element methods [1]. \nBecause we are using distinct phases and the contrast is low enough to only need the first order coefficients, delta microstructures and their strain fields are all that we need to calibrate the first order influence coefficients [2]. \nHere we use the make_delta_microstructure function from pymks.datasets to create the delta microstructures needed to calibrate the first order influence coefficients for a two phase microstructure. The make_delta_microstructure function uses SfePy to generate the data.", "%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom pymks.datasets import make_delta_microstructures\nn = 21\nn_phases = 3\n\n", "Let's take a look at a few of the delta microstructures by importing draw_microstructures from pymks.tools.", "from pymks.tools import draw_microstructures\n\n", "Using delta microstructures for the calibration of the first order influence coefficients is essentially the same as using a unit impulse response to find the kernel of a system in signal processing. Any given delta microstructure is composed of only two phases with the center cell having an alternative phase from the remainder of the domain. The number of delta microstructures that are needed to calibrate the first order coefficients is $N(N-1)$ where $N$ is the number of phases; therefore in this example we need 6 delta microstructures.\nGenerating Calibration Data\nThe make_elasticFEstrain_delta function from pymks.datasets provides an easy interface to generate delta microstructures and their strain fields, which can then be used for calibration of the influence coefficients. 
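To make this concrete, a delta microstructure can be built by hand with NumPy (a sketch independent of pymks, which generates the real calibration data): a uniform field of one phase with only the center cell set to another phase, one such field per ordered pair of distinct phases.

```python
import numpy as np
from itertools import permutations

def delta_microstructure(n, phase_bg, phase_center):
    # Uniform background phase with a single different phase in the center cell
    X = np.full((n, n), phase_bg, dtype=int)
    X[n // 2, n // 2] = phase_center
    return X

n, n_phases = 21, 3
# One delta microstructure per ordered pair of distinct phases: N(N-1) in total
deltas = [delta_microstructure(n, a, b)
          for a, b in permutations(range(n_phases), 2)]
assert len(deltas) == n_phases * (n_phases - 1)  # 6 for three phases
```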
The function calls the ElasticFESimulation class to compute the strain fields.\nIn this example, let's look at a three phase microstructure with elastic moduli values of 80, 100 and 120 and Poisson's ratio values all equal to 0.3. Let's also set the macroscopic imposed strain equal to 0.02. All of these parameters used in the simulation must be passed into the make_elasticFEstrain_delta function. The number of Poisson's ratio values and elastic moduli values indicates the number of phases. Note that make_elasticFEstrain_delta does not take a number of samples argument as the number of samples to calibrate the MKS is fixed by the number of phases.", "from pymks.datasets import make_elastic_FE_strain_delta\n\nelastic_modulus = (80, 100, 120)\npoissons_ratio = (0.3, 0.3, 0.3)\nmacro_strain = 0.02\nsize = (n, n)\n\n= make_elastic_FE_strain_delta(elastic_modulus=elastic_modulus,\n poissons_ratio=poissons_ratio,\n size=size, macro_strain=macro_strain)", "Let's take a look at one of the delta microstructures and the $\varepsilon_{xx}$ strain field using the draw_microstructure_strain from pymks.tools.", "from pymks.tools import draw_microstructure_strain\n", "Because slice(None) (the default slice operator in Python, equivalent to array[:]) was passed in to the make_elasticFEstrain_delta function as the argument for strain_index, the function returns all the strain fields. Let's also take a look at the $\varepsilon_{yy}$ and $\varepsilon_{xy}$ strain fields.\nCalibrating First Order Influence Coefficients\nNow that we have the delta microstructures and their strain fields, we can calibrate the influence coefficients by creating an instance of the MKSLocalizationModel class and PrimitiveBasis class. 
Because we have 3 phases and we know their values range from [0, 2], we will create an instance of PrimitiveBasis with n_states equal to 3 and domain equal to [0, 2].\nNext we will create an instance of the MKSLocalizationModel with basis set equal to the instance of PrimitiveBasis we created.", "from pymks import MKSLocalizationModel\nfrom pymks import PrimitiveBasis\n", "Now, pass the delta microstructures and their strain fields into the fit method to calibrate the first order influence coefficients.\nThat's it, the influence coefficients have been calibrated. Let's take a look at them.", "from pymks.tools import draw_coeff\n", "The influence coefficients for $l=0$ and $l = 1$ have a Gaussian-like shape, while the influence coefficients for $l=2$ are constant-valued. The constant-valued influence coefficients may seem superfluous, but are equally as important. They are equivalent to the constant term in multiple linear regression with categorical variables.\nPrediction of the Strain Field for a Random Microstructure\nLet's now use our instance of the MKSLocalizationModel class with calibrated influence coefficients to compute the strain field for a random three phase microstructure and compare it with the results from a finite element simulation. 
\nThe make_elasticFEstrain_random function from pymks.datasets is an easy way to generate a random microstructure and its strain field results from finite element analysis.", "from pymks.datasets import make_elastic_FE_strain_random\n\nnp.random.seed(101)", "Note that the calibrated influence coefficients can only be used to reproduce the simulation with the same boundary conditions that they were calibrated with.\nLet's take a look at the microstructure and the strain field using the draw_microstructure_strain function.\nNow, to get the strain field from the MKSLocalizationModel, just pass the same microstructure to the predict method.\nFinally, let's compare the results from the finite element simulation and the MKS model, using draw_strains_compare from pymks.tools.", "from pymks.tools import draw_strains_compare\n\n", "Let's plot the difference between the two strain fields, using draw_differences from pymks.tools.", "from pymks.tools import draw_differences\n", "The MKS model is able to capture the strain field for the random microstructure after being calibrated with delta microstructures.\nResizing the Coefficients to use on Larger Microstructures\nThe influence coefficients that were calibrated on a smaller microstructure can be used to predict the strain field on a larger microstructure through spectral interpolation [3], but the accuracy of the MKS model drops slightly. To demonstrate how this is done, let's generate a new larger random microstructure and its strain field.", "m = 3 * n \nsize = (m, m)\nprint(size)\n\n= make_elastic_FE_strain_random(n_samples=1, elastic_modulus=elastic_modulus,\n poissons_ratio=poissons_ratio, size=size,\n macro_strain=macro_strain)", "The influence coefficients that have already been calibrated on $n$ by $n$ delta microstructures need to be resized to match the shape of the new larger $m$ by $m$ microstructure that we want to compute the strain field for. 
This can be done by passing the shape of the new larger microstructure into the resize_coeff method.\nLet's now take a look at the resized influence coefficients, using the draw_coeff function.\nBecause the coefficients have been resized, they will no longer work for our original $n$ by $n$ sized microstructures they were calibrated on, but they can now be used on the $m$ by $m$ microstructures. Just like before, pass the microstructure as the argument of the predict method to get the strain field.\nLet's compare the predicted strain field with the finite element result using draw_strains_compare.\nAgain, let's plot the difference between the two strain fields, using draw_differences.\nAs you can see, the results from the strain field computed with the resized influence coefficients are not as accurate as they were before the resizing. This decrease in accuracy is expected when using spectral interpolation [4].\nReferences\n[1] Binci M., Fullwood D., Kalidindi S.R., A new spectral framework for establishing localization relationships for elastic behavior of composites and their calibration to finite-element models. Acta Materialia, 2008. 56 (10) p. 2272-2282 doi:10.1016/j.actamat.2008.01.017.\n[2] Landi, G., S.R. Niezgoda, S.R. Kalidindi, Multi-scale modeling of elastic response of three-dimensional voxel-based microstructure datasets using novel DFT-based knowledge systems. Acta Materialia, 2009. 58 (7): p. 2716-2725 doi:10.1016/j.actamat.2010.01.007.\n[3] Marko, K., Kalidindi S.R., Fullwood D., Computationally efficient database and spectral interpolation for fully plastic Taylor-type crystal plasticity calculations of face-centered cubic polycrystals. International Journal of Plasticity 24 (2008) 1264–1276 doi:10.1016/j.ijplas.2007.12.002.\n[4] Marko, K. Al-Harbi H. F. , Kalidindi S.R., Crystal plasticity simulations using discrete Fourier transforms. Acta Materialia 57 (2009) 1777–1784 doi:10.1016/j.actamat.2008.12.017." ]
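At its core, the first order MKS localization used throughout this example is a sum of circular convolutions of the phase indicator fields with the influence coefficients. A minimal NumPy sketch of that prediction step (an illustration of the idea, not the pymks implementation):

```python
import numpy as np

def mks_predict(microstructure, coeffs, n_states):
    # One-hot (primitive basis) encoding of the integer phase labels
    m = np.stack([(microstructure == s).astype(float) for s in range(n_states)])
    # Sum over states of the circular convolution of each indicator field
    # with its influence coefficients, computed in Fourier space
    M = np.fft.fftn(m, axes=(1, 2))
    A = np.fft.fftn(coeffs, axes=(1, 2))
    return np.real(np.fft.ifftn((A * M).sum(axis=0)))

# Sanity check: delta-function coefficients simply relabel each cell
micro = np.array([[0, 1], [2, 1]])
coeffs = np.zeros((3, 2, 2))
for s in range(3):
    coeffs[s, 0, 0] = s + 1.0  # coefficient kernel = (s + 1) * delta
field = mks_predict(micro, coeffs, 3)  # equals micro + 1 everywhere
```

Resizing the coefficients for a larger domain then amounts to interpolating `A` onto a finer frequency grid, which is why the method scales so cheaply.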
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
decisionstats/pythonfordatascience
text+mining.ipynb
apache-2.0
[ "import nltk\n\n\n#nltk.download()\n\nfrom nltk.book import *\n\n\ntexts()\n\nsents()\n\n!pip install textblob\n\n!pip install tweepy\n\nfrom urllib import request\nurl = \"http://www.gutenberg.org/files/2554/2554-0.txt\"\nresponse = request.urlopen(url)\nraw = response.read().decode('utf8')\ntype(raw)\n\nlen(raw)\n\n\nraw[:75]\n\n\ntokens = nltk.word_tokenize(raw)\ntype(tokens)\n\ntokens[1:10]\n\n!pip install textmining", "!pip install stemmer\nFor Python 3 from https://stackoverflow.com/questions/15717752/python3-3-importerror-with-textmining-1-0\nConverting the textmining code to python3 solved the problem for me. To do so, I manually download the text mining package from here:\nhttps://pypi.python.org/pypi/textmining/1.0\nunzipped it:\nunzip textmining-1.0.zip\nconverted the folder to python 3:\n2to3 --output-dir=textmining-1.0_v3 -W -n textmining-1.0\nand installed it:\ncd textmining-1.0_v3\nsudo python3 setup.py install", "import textmining\n\n\ntdm = textmining.TermDocumentMatrix()\n\n\ntdm.add_doc(raw)\n\nfor row in tdm.rows(cutoff=1):\n print(row)", "also see https://stackoverflow.com/questions/15899861/efficient-term-document-matrix-with-nltk", "type(row)\n\nrow[1:10]\n\nimport pandas as pd\nfrom sklearn.feature_extraction.text import CountVectorizer\n\n \nvec = CountVectorizer()\nX = vec.fit_transform(tokens)\ndf = pd.DataFrame(X.toarray(), columns=vec.get_feature_names())\nprint(df)\n\nprint(df)\n\ndf[['zossimov']]\n\ntype(tokens)\n\ntokens2=pd.DataFrame(tokens)\n\ntokens2.columns=['Words']\n\ntokens2.head()\n\ntokens2.Words.value_counts().head()" ]
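The term-document matrix built above with the textmining package can also be sketched with the standard library alone (a minimal illustration of the data structure, not the package's internals):

```python
from collections import Counter

def term_document_matrix(docs):
    # One Counter of term frequencies per document
    counts = [Counter(doc.lower().split()) for doc in docs]
    # Vocabulary = union of all terms, sorted for a stable column order
    vocab = sorted(set().union(*counts))
    # Rows: one term-frequency vector per document
    rows = [[c[t] for t in vocab] for c in counts]
    return vocab, rows

vocab, rows = term_document_matrix(["to be or not to be", "be quick"])
```

Each row is a document, each column a vocabulary term, and each entry a raw term count, which is the same shape of result that CountVectorizer produces.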
[ "code", "markdown", "code", "markdown", "code" ]
mdeff/ntds_2017
projects/reports/stackoverflow_network/StackOverflowNetworkAnalysis.ipynb
mit
[ "Stack Overflow Network Analysis\nClaas Brüß, Simon Romanski and Maximilian Rünz\nNOTE: Originally we claimed that the data visualization used in this project would be credited for the Data Visualization course. As we finally chose a very different approach in Data Visualization, this claim does not hold anymore. Apart from using the same data set, these are two clearly separated projects.\n1 Introduction\nThere are comprehensive studies on how groups form and work together in a face-to-face team work scenario. However, even though the development of the internet has allowed open platforms for collaboration on a massive scale, less research has been conducted on patterns of collaboration in that domain.\nThis project analyzes the stackoverflow community, representing it as a graph. We are applying network analysis methods to subcommunities for libraries like Numpy in order to understand the structure of the community. \nIn a next stage we are comparing the communities to theoretical network models as well as to real network models to obtain insights about work patterns in the community. Finally we will compare those insights with proven psychological models of group work theory to gain intuition about knowledge transfer and work in these communities. We will show that the shape of group work and knowledge transfer changes in the setting of online communities.", "%matplotlib inline\n\nimport os\nimport networkx as nx\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import sparse\n\n# Own modules\nimport DataProcessing as proc\nimport DataCleaning as clean\nimport NetworkAnalysis as analysis\nimport Classification as classification\nimport NetworkEvolution as evol", "2 Data processing\nThe data behind this project was provided by Stack Overflow itself. They release frequent data dumps on archive.org. We have analyzed all posts from Stack Overflow with the stackoverflow.com-Posts.7z file. 
The compressed file contains a list of all questions and answers formatted as xml.\nNote: As the analyzed xml file is more than 50 GB in size, the data processing takes several hours. The data processing part can be skipped; the uploaded zip contains the constructed edge lists.", "# Paths that will be used\nposts_path = os.path.join(\"Posts.xml\")\nquestions_path = os.path.join(\"Questions.json\")\nanswers_path = os.path.join(\"Answers.json\")\nedge_list_path = os.path.join(\"Edges.json\")\nedge_list_tag_path = os.path.join(\"Tags\")", "2.1 Extract meaningful features\nBefore we started our analysis we extracted the features of the posts which are interesting for us. The provided features can be looked up in this text file. We selected the following features for our analysis:\n* PostTypeId (Question/Answer)\n* Id\n* ParentId\n* AcceptedAnswerId\n* CreationDate\n* Score\n* OwnerUserId\n* Tags\nBased on the PostType the posts are then stored in the questions or answers JSON file.", "%%time\n# Create JSON for questions and answers\nproc.split_qa_json_all(questions_path, answers_path, posts_path)", "2.2 Create edge list\nHaving selected the meaningful features, we started to create the graph. Each answer is matched with its corresponding question. The result is two nodes and one edge: the nodes represent Stack Overflow users, and the edge connects those users from the inquirer to the respondent.", "%%time\n# create edge list\nproc.create_edge_list_all(questions_path, answers_path, edge_list_path)", "2.3 Split networks by tags\nAfter we tried to analyze the whole Stack Overflow network and came to the conclusion that it is simply too big, we decided to split the network by tags. 
Therefore we created one edge list per tag (e.g. numpy).", "%%time\n# split in file for each tag\nproc.split_edge_list_tags(edge_list_tag_path, edge_list_path)", "2.4 Order edges by time\nIn order to simplify the analysis of the network evolution, we ordered the edges based on the creation date of the answer.", "%%time\n# order by time\nproc.order_edge_lists_tags_time(edge_list_tag_path)", "2.5 Format edge list to txt files\nLast but not least, the json files are converted into txt edge lists, so that they can be read easily by networkx.", "%%time\nproc.edge_lists_to_txt(edge_list_tag_path)", "3 Data Cleaning\nDuring the network analysis we noticed that it makes sense to clean the created network. Therefore, we implemented several filters:\n* Filter by attribute\n* Filter by degree\n* Filter by component\n* Remove self loops\nThe most frequently used filter is the filter for attributes. Using this filter we remove questions and answers with negative votes, as they are not helpful for the community.\nFurthermore, this filter will be used to analyse the evolution of the network.", "network_path = os.path.join(\"Tags\", \"numpy_complete_ordered_list.txt\")\nnetwork = nx.read_edgelist(network_path,nodetype=int, data=(('time',int),('votes_q', int),('votes_a', int),('accepted', bool)))\nnetwork_directed = nx.read_edgelist(network_path, create_using=nx.DiGraph(), nodetype=int, data=(('time',int),('votes_q', int),('votes_a', int),('accepted', bool)))", "The whole stackoverflow community has more than 8 million users, 15 million questions and 23 million answers on different aspects of different libraries, programming languages and operating systems.\nHence, we decided to focus on specific widely-used libraries for our investigation. In our case we perform data analysis for the commonly used python library Numpy and compare it to another python library, Matplotlib, as well as to a heavily used C++ library called Eigen. 
Ultimately we will compare the entire Python community with the Numpy community.\nFor numpy we then create two networks, one directed and one undirected, for different analysis purposes.", "# in epoche\nmin_time = -1\nmax_time = -1\n\nmin_q_votes = 0\nmax_q_votes = -1\nmin_a_votes = 0\nmax_a_votes = -1\naccepted = -1\n\nmin_degree = -1\nmax_degree = -1\n\nonly_gc = False\n\nno_self_loops = True", "Filter by attributes", "network_cleaned = clean.filter_network_attributes(network, min_time, max_time,\\\n min_q_votes, max_q_votes, min_a_votes, max_a_votes, accepted)\nnetwork_direted_cleaned = clean.filter_network_attributes(network_directed, min_time, max_time,\\\n min_q_votes, max_q_votes, min_a_votes, max_a_votes, accepted, directed=True)", "Filter by node degree", "network_cleaned = clean.filter_network_node_degree(network_cleaned, min_degree, max_degree)\nnetwork_direted_cleaned = clean.filter_network_node_degree(network_direted_cleaned, min_degree, max_degree)", "Only use giant component", "if only_gc:\n network_cleaned = clean.filter_network_gc(network_cleaned)\n network_direted_cleaned = clean.filter_network_gc(network_direted_cleaned)", "Remove self loops", "if no_self_loops:\n network_cleaned = clean.filter_selfloops(network_cleaned)\n network_direted_cleaned = clean.filter_selfloops(network_direted_cleaned)", "3 Data Exploration: Network properties\nWe are starting with some basic properties of the subcommunity. Each node in our graph represents one user.", "analysis.get_number_nodes(network_cleaned)", "We can see that we have roughly 20,000 users.", "analysis.get_number_edges(network_cleaned)", "And we have roughly 37,231 edges, each corresponding to one answer.", "analysis.get_number_connected_components(network_cleaned)", "The number of connected components describes the number of completely separated groups in the community that do not interfere with each other. 
A deeper analysis shows that there is in fact one big group and many very small groups.", "analysis.get_size_giant_component(network_cleaned)\n\nanalysis.plot_ranking_component_size(network_cleaned)\n\nanalysis.get_number_self_loops(network_cleaned)", "The self-loops represent answers that users have given to their own questions. As this seems to be counterintuitive, we have removed those in our data cleaning.", "analysis.get_avg_degree(network_cleaned)", "The average degree for the numpy network is 3.5; we will evaluate this in the exploitation part of this report.", "analysis.get_cluster_coefficient(network_cleaned)", "The cluster coefficient is relatively low due to the model of our network. More details on that are also following in the data exploitation chapter.", "analysis.get_max_degree(network_cleaned)", "The maximal degree of a node is tremendously higher than the average degree. This is a little suspicious. Therefore, we will have a look at the distribution of the node degrees.\nDegree distribution\nTo get insights about user behaviour, i.e., how many questions users ask and answer, we are plotting a degree distribution in the following:", "analysis.plot_degree_hist(network_cleaned)\n\nanalysis.plot_degree_scatter(network_cleaned)", "These plots show a distribution with a high number of highly connected nodes. This superlinear distribution implies a hub-and-spoke topology of this network. Note the double-logarithmic scaling of the scatter plot.", "analysis.plot_in_degree_hist(network_direted_cleaned)\n\nanalysis.plot_in_degree_scatter(network_direted_cleaned)\n\nanalysis.plot_out_degree_hist(network_direted_cleaned)\n\nanalysis.plot_out_degree_scatter(network_direted_cleaned)", "Like before, we can observe that the incoming and outgoing degree distributions both demonstrate behavior associated with networks with hub-and-spoke topology. 
The outgoing distribution exhibits this even more strongly than the incoming distribution, indicating that certain members of the community carry an overproportional workload in answering questions posed on the platform.\nAttribute Distribution", "analysis.analyze_attribute_q_votes(network_cleaned)\n\nanalysis.analyze_attribute_a_votes(network_cleaned)", "Another interesting property is the votes per edge, as their distribution is another valuable metric to understand how user activity is concentrated on certain areas.", "analysis.analyze_attribute_time(network_cleaned)", "In general it is interesting to see the evolution of a network over time. We can see that in this case the number of edges is continuously increasing.\nComparison\nIt is instructive to compare the characteristics of the different subcommunities:", "for file in [\"python_complete_ordered_list.txt\",\\\n \"matplotlib_complete_ordered_list.txt\",\\\n \"eigen_complete_ordered_list.txt\"]:\n print(file)\n analysis.analyze_basic_file(file)\n print()", "At first one can notice that the ratio between the number of nodes and the size of the giant component is quite large for all networks. Only a few inactive users are not connected to the real community. Furthermore, the number of connected components is correlated with the number of nodes of the given network. \nFor the average degree one can notice that the libraries have an average degree around 2-3, but python as a programming language has an average degree of more than 6!\nThe cluster coefficient does not seem to be related to network types.\nAll in all we can say that libraries behave similarly even if the programming language is different (eigen is written in c++). Networks built for a programming language have a higher degree.\n4 Data Exploitation\nIt is possible to create different graph models based on the actual graph to understand how our real world model fits into these theoretical concepts. 
This would be a first helpful step to understand the underlying network model. We were planning to build an Erdős–Rényi and a Barabási-Albert graph based on the following assumptions:\nIf $N$ denotes the number of nodes and $L$ the number of edges,\nwe can calculate the Erdős–Rényi graph parameter $p$ as follows: \n$p = \frac{2L}{N(N-1)}$\nRespectively, we can calculate the parameter $m$ for the Barabási-Albert graph.\n$m = \frac{L}{N} + 1$\nWe noticed that building the graphs for all the subcommunities takes a lot of time as our graphs are relatively big. Hence, we are aiming for a more efficient approach to understand the network model.\nThe key difference between the two models is that the Barabási-Albert model describes a scale-free network and the Erdős–Rényi model describes a random network. Hence, the degree distribution is more likely to show how similar each model is to the original graph.\nThis is the abstraction that we would like to draw from the comparison. As a result we can also compare the degree distribution with the two distributions representing the random network model and the scale free model, that is, the Poisson distribution and the power-law distribution respectively.\n<img src=\"files/imgs/RandomNetworkDegreeDistribution.png\"/>\n<img src=\"files/imgs/ScaleFreeNetworkDegreeDistribution.png\"/>\nThey can be a bit hard to distinguish when we are handling real data in a linearly scaled graph. However, if we take the logarithm of both axes, they are clearly distinguishable.", "analysis.plot_degree_scatter(network_cleaned)", "The plots yield that the underlying graph of the Numpy subcommunity is neither a scale-free network nor a random network. Its degree distribution shows superlinear behavior. 
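Plugging the rough size of the Numpy network reported earlier (about 20,000 nodes and 37,231 edges) into these formulas gives the would-be model parameters:

```python
# Model parameters from the formulas above, using the Numpy network's
# approximate size reported earlier (N nodes, L edges)
N = 20_000
L = 37_231

# Erdős–Rényi edge probability: p = 2L / (N (N - 1))
p = 2 * L / (N * (N - 1))

# Barabási-Albert attachment parameter: m = L / N + 1
m = L / N + 1

print(p)  # ~1.86e-4: the network is very sparse
print(m)  # ~2.86: each new node would attach with roughly 3 edges
```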
\nWe can still calculate the γ of the distribution to compare it to other networks.\nScale Free Networks\nIn scale free networks there exists a linear dependency between the logarithm of the probability and the logarithm of the degree:\n$\log(p(k)) \sim -\gamma \log(k)$\nGamma can then be calculated by fitting a linear regression between $\log(p(k))$ and $\log(k)$. The magnitude of the slope of the regression line is the gamma of the scale free network.", "analysis.get_gamma_power_law(network_cleaned)\n\nanalysis.get_in_gamma_power_law(network_direted_cleaned)\n\nanalysis.get_out_gamma_power_law(network_direted_cleaned)", "For now we will keep these numbers in mind. We will come back to them later.\nNetwork evolution\nIn most cases the growth of a network is clearly correlated with time. Most models simply regard the time until a new node joins the network as a time step. In real networks this time can differ widely, and therefore we decided to plot over time to look at the changes in the network in a certain timeframe rather than at a certain magnitude of change. Nonetheless it is implied that more and more nodes join the network over the weeks.", "networks = evol.split_network(network)\n\nevol.plot_t_n(networks)", "<img src=\"files/imgs/WWWYearNodes.png\"/>\nNumber of Nodes over Time\nIn order to gain a better understanding of how these networks evolve over time, we observed various network attributes over time and compared them to network models such as the Barabási-Albert model.\nThese two curves depict the number of nodes present in the network plotted over time. 
It’s clear that the growth of the network is accelerating in both cases.", "evol.plot_t_k_avg(networks)", "⟨k⟩ over time\nThe average degree of the nodes within the network closely follows a logistic curve converging at an average degree value of about 3.6.", "evol.plot_t_k_max(networks)", "k<sub>max</sub> over Time\nThe growing number of nodes accelerates the rise in maximum degree. The acceleration, or stronger than linear growth, of the maximum degree suggests that new nodes show a tendency to connect to already highly connected nodes. This superlinear preferential attachment indicates that we will see high values of α in the following plots.", "evol.plot_n_c(networks)", "<img src=\"files/imgs/EvolClustering.png\"/>", "evol.plot_k_avg_k_max(networks)", "<img src=\"files/imgs/EvolHubs.png\"/>\nk<sub>max</sub> over ⟨k⟩\nAs indicated by the superlinear growth of the maximum degree over time, we also see superlinear behavior in the plot of maximum degree over average degree, with values of α > 2.5.", "evol.DegreeDynamics(network_cleaned, 20)\n\n# n = 100\nanalysis.plot_degree_scatter(networks[1246320000000])\n\n# n = 1000\nanalysis.plot_degree_scatter(networks[1302566400000])\n\n# n = 10000\nanalysis.plot_degree_scatter(networks[1430179200000])\n\n# all\nanalysis.plot_degree_scatter(network_cleaned)", "<img src=\"files/imgs/EvolDegree.png\"/>\nDegree Dynamic and Degree Distribution in different Stages\nIn this section we compare the degree dynamics of the network nodes between the network implied by the StackOverflow data and a network closely following the behaviour of the Barabási-Albert Model. For this we selected degree distributions with similar node counts N. While the Barabási-Albert network shows linearity in the degree distribution plots and in the rise of the node degree plot lines, the distribution plots of the StackOverflow community network show clear superlinear behaviour. 
The degree dynamic plot is prefiltered and only shows the plot lines for nodes that eventually reach a degree higher than 100. In these highly connected nodes we see a very quick development towards becoming hubs in the network, instigating a topology that leans towards hub-and-spoke. This contrasts with the Barabási-Albert network, with its scale-free topology and power-law dominated distributions.\nComparison to other real world networks\n<img src=\"files/imgs/OtherNetworks.png\"/>\nRegarding the ratio between edges and nodes, the Stack Overflow Numpy network behaves similarly to smaller networks such as the Power Grid and E. Coli Metabolism. Its gamma is also similar to theirs. \nBut if one has a look at the parameters for the Python network, they are much closer to communication networks such as the Science Collaboration and Citation networks. This is most probably due to our tag selection. It is likely that the complete Stack Overflow network behaves very similarly to the communication networks, with a gamma of up to 5.\nClassifier\nIn order to automatically detect super users we will train an unsupervised k-means clustering. 
This clustering is based on the attributes of nodes like:\n* in degree\n* out degree\n* average question votes\n* average answer votes\nK-means follows an iterative approach:\nIn the first step, the data points are assigned the label of the nearest cluster.\n$S_i^{(t)} = \big\{ x_p : \big\| x_p - \mu^{(t)}_i \big\|^2 \le \big\| x_p - \mu^{(t)}_j \big\|^2 \ \forall j, 1 \le j \le k \big\}$\nAfter that a new center for each cluster is computed:\n$\mu^{(t+1)}_i = \frac{1}{|S^{(t)}_i|} \sum_{x_j \in S^{(t)}_i} x_j$ \nThis procedure is repeated until the cluster centers stop changing.", "classification.classify_users(network_directed)", "We can see that there are very active users with label 0, who have answered ten questions and asked fourteen questions on average.\nThe users from label 4 are also asking and answering several times, but they ask very good questions which score 85 votes on average.\nUsers with label 1 are very inactive. They were only active once. These users are colored in green, located very close to the coordinate origin.\nThe super users are further away from the origin.\nGroup work models\nIn the beginning of this report we were aiming for an intuitive understanding of how collaboration networks like Stackoverflow work. Consequently, we will try to compare our extracted information with two common models from group work theory.\nFirst of all we have to define what exactly “work” is on Stackoverflow. As it is impossible to infer information about the actual projects people are working on, we cannot measure the actual project work outcomes of individuals. However, we can define the knowledge transfer, i.e. the answering of questions, as actual work.\nThe Belbin Team Inventory describes different roles of people that emerge from the formation of the group and was presented in Management Teams: Why They Succeed or Fail (1981). 
The extended Belbin Team Inventory consists of the following types:\nPlants are creative generators of ideas. \nResource Investigators provide enthusiasm at the start of a project and seize contacts and opportunities. \nCoordinators have a talent for seeing the big picture and are therefore likely to become the leader of the team. \nShapers are driven by a lot of energy and the urge to perform. Therefore they usually make sure that all possibilities are considered and shake things up if necessary. \nMonitor Evaluators are unemotional observers of the project and team. \nTeamworkers ensure that the team is running effectively and without friction. \nImplementers take suggestions and ideas and turn them into action. \nCompleters are perfectionists and double-check the final outcome of the work. \nSpecialists are experts in their own particular field and typically transfer this knowledge to others. Usually they stick to their domain of expertise. \nWhile it is hard to identify some of the types, e.g. the identification of a Completer would require analysis of the whole answer text, it is possible to find some similarities among our users.\nWe calculated separate γ values for incoming and outgoing degrees, i.e. how many hubs we have for users asking questions and how many hubs we have for users answering questions.", "print(\"Gamma total: {}\".format(analysis.get_gamma_power_law(network_cleaned)))\nprint(\"Gamma in: {}\".format(analysis.get_in_gamma_power_law(network_direted_cleaned)))\nprint(\"Gamma out: {}\".format(analysis.get_out_gamma_power_law(network_direted_cleaned)))\n", "We can see that there are in fact more hubs for answering questions than for asking questions. This can be interpreted as experts in their domain helping the average user out. This can be mapped to the role of the Specialist in the Belbin model. 
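As a rough illustration of how such a power-law exponent can be obtained, here is the standard continuous maximum-likelihood (Hill-style) estimator, $\gamma \approx 1 + n / \sum_i \ln(k_i / k_{min})$. This is a sketch only; the actual fitting procedure inside the project's `analysis.get_gamma_power_law` may differ:

```python
import numpy as np

def estimate_gamma(degrees, k_min=1.0):
    """Continuous maximum-likelihood estimate of a power-law exponent,
    computed over the degree values k >= k_min."""
    k = np.asarray([d for d in degrees if d >= k_min], dtype=float)
    return 1.0 + len(k) / np.sum(np.log(k / k_min))

# Sanity check on synthetic power-law data with known gamma = 2.5,
# drawn by inverse-CDF sampling with k_min = 1:
rng = np.random.default_rng(42)
u = rng.random(50_000)
samples = (1.0 - u) ** (-1.0 / (2.5 - 1.0))
print(round(estimate_gamma(samples), 2))
```

On the synthetic sample the estimate recovers an exponent close to the true value of 2.5, which is the same order as the γ values reported for the network.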
In fact, even in the classification we can see a distinct group of people who answer more than a thousand questions without ever asking one themselves. \nOur graph is built based only on questions and answers. However, another graph could be created for votes, and there we could identify the role of the Monitor Evaluator as people who are hubs for voting, and Completers as hubs of users who receive good votes. \nSo one insight is that the people in our network who are specialists in their domains are significantly more likely to answer questions than to ask them. \nThe second model we can use is the Tuckman model, which describes the task-related performance and the formation of the team over time. \n<img src=\"imgs/Teamwork.png\">\nIn this case we have to understand that both dimensions converge into one for online forums like Stackoverflow. Unlike Facebook, there is no friendship system implemented. Thus, the only metric of relationship is the number of edges between users. At the same time this is the metric used to measure our work outcome as defined above. This assumption complies with the analysis of our edges over time.\nFurther Notes\nWe tried to find clusters using spectral analysis. Due to the size of the networks this took too long, so unfortunately we were not able to obtain any insights. \nThe same applies to the visualisation of the network.\nConclusion\nIn this project we have analyzed the structure of the Stackoverflow community as an example of a commonly used online platform. As the entire network was too large for meaningful examination with our means, we decided to focus on the subcommunity of Numpy instead. We showed that the size of subcommunities differs, but the degree distribution plots show superlinear behavior, and therefore the underlying graph structure demonstrates strong tendencies towards a hub-and-spoke topology. 
The comparison provides evidence for our assumption that we can generalize from the Numpy subcommunity to other communities on Stackoverflow.\nLast but not least, one can additionally note that the network analysis comes to similar results compared to a simple user-based analysis done for the Data Visualization Course: https://maxruenzepfl.github.io/StackoverflowAnalysis/indexReloaded.html\nReferences\nThe scatter plots for the data exploration were taken from the Network Tour of Data Science Course. \nThe teamwork graphic was retrieved from here.\nThe teamwork theory papers are the following:\n* R. Meredith Belbin. Management Teams: Why They Succeed or Fail. Heinemann, 1981.\n* Zoltan, Raluca. (2016). Work group development models - the evolution from simple group to effective team. Ecoforum. 5.\n* https://www.researchgate.net/publication/299563074_Work_group_development_models_-_the_evolution_from_simple_group_to_effective_team" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gpagliuca/pyfas
docs/notebooks/OLGA_ppl.ipynb
gpl-3.0
[ "import pyfas as fa\nimport pandas as pd\nimport matplotlib.pyplot as plt\npd.options.display.max_colwidth = 120", "OLGA ppl files, examples and howto\nFor a ppl file the following methods are available:\n\n<b>filter_data</b> - return a filtered subset of profiles\n<b>extract</b> - extract a single profile variable\n<b>to_excel</b> - dump all the data to an excel file\n\nThe usual workflow should be:\n\nLoad the correct ppl\nSelect the desired variable(s)\nExtract the results or dump all the variables to an excel file\nPost-process your data in Excel or in the notebook itself\n\nPpl loading\nTo load a specific ppl file the correct path and filename have to be provided:", "ppl_path = '../../pyfas/test/test_files/'\nfname = 'FC1_rev01.ppl'\nppl = fa.Ppl(ppl_path+fname)", "Profile selection\nAs for tpl files, a ppl file may contain hundreds of profiles, in particular for complex networks. For this reason a filtering method is quite useful.\nThe easiest way is to filter all the profiles using patterns: the command ppl.filter_data(\"PT\") filters all the pressure profiles (or better, all the profiles with \"PT\" in the description; if you have defined a temperature profile at the position \"PTTOPSIDE\", for example, this profile will be selected too).\nThe resulting Python dictionary will have a unique index for each filtered profile that can be used to identify the interesting profile(s).\nIn case of an empty pattern all the available profiles will be reported.", "ppl.filter_data('PT')", "The same output can be reported as a pandas dataframe:", "pd.DataFrame(ppl.filter_data('PT'), index=(\"Profiles\",)).T", "Dump to excel\nTo dump all the variables to an excel file use ppl.to_excel().\nIf no path is provided an excel file with the same name as the ppl file is generated in the working folder. 
Depending on the ppl size this may take a while.\nExtract a specific variable\nOnce you know the index of the variable(s) you are interested in (see the filtering paragraph above for more info) you can extract it (or them) and use the data directly in Python.\nLet's assume you are interested in the pressure and the temperature profile of the branch riser:", "pd.DataFrame(ppl.filter_data(\"TM\"), index=(\"Profiles\",)).T\n\npd.DataFrame(ppl.filter_data(\"PT\"), index=(\"Profiles\",)).T", "Our targets are:\n<i>variable 13</i> for the temperature\nand \n<i>variable 12</i> for the pressure\nNow we can proceed with the data extraction:", "ppl.extract(13)\nppl.extract(12)", "The ppl object now has the two profiles available in the data attribute:", "ppl.data.keys()", "while the label attribute stores the variable type:", "ppl.label[13]", "Ppl data structure\nThe ppl data structure at the moment contains:\n\nthe geometry profile of the branch as ppl.data[variable_index][0]\nthe selected profile at timestep 0 as ppl.data[variable_index][1][0]\nthe selected profile at the last timestep as ppl.data[variable_index][1][-1]\n\nIn other words, the first index is the variable, the second is 0 for the geometry and 1 for the data, and the last one identifies the timestep.\nData processing\nThe results available in the data attribute are numpy arrays and can be easily manipulated and plotted:", "%matplotlib inline\n\ngeometry = ppl.data[12][0]\npt_riser = ppl.data[12][1]\ntm_riser = ppl.data[13][1]\n\ndef ppl_plot(geo, v0, v1, ts):\n fig, ax0 = plt.subplots(figsize=(12, 7));\n ax0.grid(True)\n p0, = ax0.plot(geo, v0[ts])\n ax0.set_ylabel(\"[C]\", fontsize=16)\n ax0.set_xlabel(\"[m]\", fontsize=16)\n ax1 = ax0.twinx()\n p1, = ax1.plot(geo, v1[ts]/1e5, 'r')\n ax1.grid(False)\n ax1.set_ylabel(\"[bara]\", fontsize=16)\n ax0.tick_params(axis=\"both\", labelsize=16)\n ax1.tick_params(axis=\"both\", labelsize=16)\n plt.legend((p0, p1), (\"Temperature profile\", \"Pressure profile\"), loc=3, 
fontsize=16)\n plt.title(\"P and T for case FC1\", size=20);", "To plot the last timestep:", "ppl_plot(geometry, tm_riser, pt_riser, -1)", "The time can also be used as a parameter:", "import ipywidgets.widgets as widgets\nfrom ipywidgets import interact\n\ntimesteps=len(tm_riser)-1 \n\n@interact\ndef ppl_plot(ts=widgets.IntSlider(min=0, max=timesteps)):\n fig, ax0 = plt.subplots(figsize=(12, 7));\n ax0.grid(True)\n p0, = ax0.plot(geometry, tm_riser[ts])\n ax0.set_ylabel(\"[C]\", fontsize=16)\n ax0.set_xlabel(\"[m]\", fontsize=16)\n ax0.set_ylim(10, 12)\n ax1 = ax0.twinx()\n ax1.set_ylim(90, 130)\n p1, = ax1.plot(geometry, pt_riser[ts]/1e5, 'r')\n ax1.grid(False)\n ax1.set_ylabel(\"[bara]\", fontsize=16)\n ax0.tick_params(axis=\"both\", labelsize=16)\n ax1.tick_params(axis=\"both\", labelsize=16)\n plt.legend((p0, p1), (\"Temperature profile\", \"Pressure profile\"), loc=3, fontsize=16)\n plt.title(\"P and T for case FC1 @ timestep {}\".format(ts), size=20);", "<i>The above plot has an interactive widget if executed</i>\nAdvanced data processing\nAn example of advanced data processing for Python enthusiasts and professional flow assurance engineers.\nThe script below extracts variable profiles along given branches at given time steps. Usage instructions:\n- Consecutive branches are joined together and extracted profiles are written into a CSV file.\n- For unit conversion, multiplication factors can be given for every variable.\n- Global variables can be redefined before every call of main(), which allows for multiple extractions in a single script run. 
No need to modify the functions, unless you know what you are doing.\n- Only few global variables and lists have to be defined, see below main(), there is no need to edit the functions.\n- The script does not perform error checks, make sure that all variables, branches and times (time steps) are present in the simulation file.", "import os\nimport sys\nimport time\nimport pyfas as fa\n\n\ndef getVarsInds(ppl, emptyLst):\n\n\tfor _, var in enumerate(allVar):\n\t\n\t\tlst = []\n\t\n\t\t# dictionary of the following kind:\n\t\t# {4: \"PT 'SECTION:' 'BRANCH:' 'old_offshore' '(PA)' 'Pressure'\\n\",\n\t\t# 12: \"PT 'SECTION:' 'BRANCH:' 'riser' '(PA)' 'Pressure'\\n\"}\n\t\tmyDic = ppl.filter_data(var)\n\t\n\t\tfor _, pos in enumerate(allPos):\n\n\t\t\tfor _, (k, v) in enumerate(myDic.items()):\n\t\t\t\n\t\t\t\tlstStr = v.split(\"' '\")\n\t\t\t\tlstStr1 = v.split(\" \")\n\t\t\t\t\n\t\t\t\tif lstStr1[0] == var and lstStr[2] == pos: # my var and branch\n\t\t\t\t\n\t\t\t\t\tlst.append( int(k) )\n\t\t\t\t\tbreak\n\t\t\n\t\temptyLst.append(lst)\n\n\ndef getData(ppl, pplFileName, fullLst):\n\n\tfilterTimesLstLoc = filterTimesLst\n\tif filterTime and (not filterTimesLstLoc):\n\t\tfilterTimesLstLoc = [round( ppl.time[myTS-1] / 3600.0, 3 ) for myTS in filterTimesInd]\n\t\n\tfor i, _ in enumerate(allVar):\n\t\tfor j, _ in enumerate(allPos): ppl.extract( fullLst[i][j] )\n\t\n\tfout = open(\"{0}.csv\".format(pplFileName), 'w')\n\t\n\t# write header\n\toutLine = \"\"\n\tfor i, _ in enumerate(allVar):\n\t\tfor ts in range( len(ppl.time) ):\n\t\t\toutStr = \"Pipe L [km],{0},\".format( allNam[i] )\n\t\t\tif filterTime:\n\t\t\t\tif round(ppl.time[ts] / 3600.0, 3) in filterTimesLstLoc: outLine += outStr\n\t\t\telse: outLine += outStr\n\tfout.write(\"{0}\\n\".format(outLine))\n\t\n\toutLine = \"\"\n\tfor i, _ in enumerate(allVar):\n\t\tfor ts in range( len(ppl.time) ):\n\t\t\toutStr = \"Time [hr],{0},\".format( float(ppl.time[ts]) / 3600.0 )\n\t\t\tif filterTime:\n\t\t\t\tif 
round(ppl.time[ts] / 3600.0, 3) in filterTimesLstLoc: outLine += outStr\n\t\t\telse: outLine += outStr\n\tfout.write(\"{0}\\n\".format(outLine))\n\t\n\t# write profiles\n\tlastGeomPoint = 0.0\n\tfor j, _ in enumerate(allPos):\n\t\t\n\t\tgeomPrfl = ppl.data[ fullLst[0][j] ][ 0 ] + lastGeomPoint # geometry profile\n\t\t\n\t\tfor p in range( len( ppl.data[ fullLst[0][j] ][ 1 ][ 0 ] ) ): # for p in range( len(geomPrfl) ): # loop over profile points\n\t\t\t\n\t\t\toutLine = \"\"\n\t\t\t\n\t\t\tfor i, _ in enumerate(allVar):\n\t\t\t\t\n\t\t\t\tfor ts in range( len(ppl.time) ): # loop over timesteps\n\t\t\t\t\t\n\t\t\t\t\tvarPrfl = ppl.data[ fullLst[i][j] ][ 1 ][ ts ] # var profile at the timestep ts\n\t\t\t\t\t\n\t\t\t\t\toutStr = \"{0},{1},\".format( float(geomPrfl[p]) / 1000.0, float(varPrfl[p]) * allMul[i] )\n\t\t\t\t\t\n\t\t\t\t\tif filterTime:\n\t\t\t\t\t\tif round(ppl.time[ts] / 3600.0, 3) in filterTimesLstLoc: outLine += outStr\n\t\t\t\t\telse:\n\t\t\t\t\t\toutLine += outStr\n\t\t\t\t\n\t\t\tfout.write(\"{0}\\n\".format(outLine))\n\t\t\n\t\tif doSpecialGeomJoin:\n\t\t\tlastGeomPoint = geomPrfl[0] - lastGeomPoint + geomPrfl[-1] # check it for the genral case with more than two sections!!!\n\t\telse:\n\t\t\tlastGeomPoint = geomPrfl[-1]\n\t\n\tfout.close()\n\n\ndef main():\n\n\tprint( \"{0} initialization\".format(time.strftime(\"%H:%M:%S\", time.localtime())) )\n\tfname = myPPLFile\n\tppl = fa.Ppl(fname)\n\t\n\tvarIndLst = [] # separate list for every var in order (all postions/branches for every var)\n\tgetVarsInds(ppl, varIndLst)\n\t\n\tprint( \"{0} extraction\".format(time.strftime(\"%H:%M:%S\", time.localtime())) )\n\t\n\tgetData(ppl, fname, varIndLst)\n\t\n\tprint( \"{0} done\".format(time.strftime(\"%H:%M:%S\", time.localtime())) )\n\n\n# global variables\nallMul = [1.0, 1.0e-5, 1.0]\nallVar = [\"TM\", \"PT\", \"ROF\"]\nallNam = [\"Temperature, degC\", \"Pressure, bara\", \"Mixture Density, kg/m3\"]\n\ndoSpecialGeomJoin = False\n\nmyPPLFile = 
\"OLGA_Simulation.ppl\"\n\nfilterTime = True\nfilterTimesLst = [] # as an option (if the list is not empty), time in hours rounded to three decimal points\n\n\n# extract data (i)\nfilterTimesInd = [1, 2, 3, 4, 5, 10, 50, 100] # the first time step is one (not zero!)\nallPos = [\"E_RISER\", \"E_FLOWLINE\"]\nmain()\nos.rename( \"{0}.csv\".format(myPPLFile), \"{0}_East.csv\".format(myPPLFile) )\n\n# extract data (ii)\nfilterTimesInd = [1, 2, 3, 4, 5, 10, 50, 100] # the first time step is one (not zero!)\nallPos = [\"W_RISER\", \"W_FLOWLINE\"]\nmain()\nos.rename( \"{0}.csv\".format(myPPLFile), \"{0}_West.csv\".format(myPPLFile) )" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Santana9937/language-translation
.ipynb_checkpoints/dlnd_language_translation-checkpoint.ipynb
mit
[ "Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. 
This will help the neural network predict when the sentence should end.\nYou can get the &lt;EOS&gt; word id by doing:\npython\ntarget_vocab_to_int['&lt;EOS&gt;']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.", "def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n # TODO: Implement Function\n ###source_sent = [ sent for sent in source_text.split(\"\\n\") ]\n ###target_sent = [ sent + ' <EOS>' for sent in target_text.split(\"\\n\") ]\n \n ###source_ids = [ [ source_vocab_to_int[word] for word in sent.split() ] for sent in source_sent ]\n ###target_ids = [ [ target_vocab_to_int[word] for word in sent.split() ] for sent in target_sent ]\n \n # Advice from Udacity Reviewer\n target_ids = [[target_vocab_to_int[w] for w in s.split()] + [target_vocab_to_int['<EOS>']] for s in target_text.split('\\n')]\n source_ids = [[source_vocab_to_int[w] for w in s.split()] for s in source_text.split('\\n')]\n \n return source_ids, target_ids\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. 
The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()", "Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoder_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\nTarget sequence length placeholder named \"target_sequence_length\" with rank 1\nMax target sequence length tensor named \"max_target_len\" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.\nSource sequence length placeholder named \"source_sequence_length\" with rank 1\n\nReturn the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)", "def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.\n :return: Tuple (input, targets, learning rate, keep probability, target sequence length,\n max target sequence length, source sequence length)\n \"\"\"\n # TODO: Implement Function\n \n input_ = tf.placeholder( tf.int32, [None, None], name = \"input\" )\n target_ = tf.placeholder( tf.int32, [None, None], name = \"target\" )\n learn_rate_ = tf.placeholder( tf.float32, None, name = \"learn_rate\" )\n keep_prob_ = tf.placeholder( tf.float32, None, name = \"keep_prob\" )\n target_sequence_length = tf.placeholder( tf.int32, [None], name=\"target_sequence_length\" )\n max_target_sequence_length = tf.reduce_max( target_sequence_length )\n source_sequence_length = tf.placeholder( tf.int32, [None], name=\"source_sequence_length\" )\n \n return input_, target_, learn_rate_, keep_prob_, target_sequence_length, max_target_sequence_length, source_sequence_length\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)", "Process Decoder Input\nImplement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.", "def process_decoder_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for encoding\n :param target_data: Target Placehoder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n # TODO: Implement Function\n \n go_id = source_vocab_to_int[ '<GO>' ]\n ending_text = tf.strided_slice( target_data, [0, 
0], [batch_size, -1], [1, 1] )\n decoded_text = tf.concat( [ tf.fill([batch_size, 1], go_id), ending_text ], 1) \n \n return decoded_text\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_encoding_input(process_decoder_input)", "Encoding\nImplement encoding_layer() to create a Encoder RNN layer:\n * Embed the encoder input using tf.contrib.layers.embed_sequence\n * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper\n * Pass cell and embedded input to tf.nn.dynamic_rnn()", "from imp import reload\nreload(tests)\n\ndef encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, \n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :param source_sequence_length: a list of the lengths of each sequence in the batch\n :param source_vocab_size: vocabulary size of source data\n :param encoding_embedding_size: embedding size of source data\n :return: tuple (RNN output, RNN state)\n \"\"\"\n # TODO: Implement Function\n encod_inputs = tf.contrib.layers.embed_sequence( rnn_inputs, source_vocab_size, encoding_embedding_size )\n \n rnn_cell = tf.contrib.rnn.MultiRNNCell( [ tf.contrib.rnn.LSTMCell( rnn_size ) for _ in range(num_layers) ] )\n # Adding dropout layer\n rnn_cell = tf.contrib.rnn.DropoutWrapper( rnn_cell, output_keep_prob = keep_prob )\n rnn_output, rnn_state = tf.nn.dynamic_rnn( rnn_cell, encod_inputs, source_sequence_length, dtype = tf.float32 )\n \n return rnn_output, rnn_state\n \n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)", "Decoding - Training\nCreate a training decoding layer:\n* Create a tf.contrib.seq2seq.TrainingHelper \n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs 
from tf.contrib.seq2seq.dynamic_decode", "\ndef decoding_layer_train(encoder_state, dec_cell, dec_embed_input, \n target_sequence_length, max_summary_length, \n output_layer, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_summary_length: The length of the longest sequence in the batch\n :param output_layer: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing training logits and sample_id\n \"\"\"\n # TODO: Implement Function\n \n decode_helper = tf.contrib.seq2seq.TrainingHelper( dec_embed_input, target_sequence_length )\n decoder = tf.contrib.seq2seq.BasicDecoder( dec_cell, decode_helper, encoder_state, output_layer )\n decoder_outputs, decoder_state = tf.contrib.seq2seq.dynamic_decode( decoder, impute_finished=True, \n maximum_iterations= max_summary_length )\n return decoder_outputs\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)", "Decoding - Inference\nCreate inference decoder:\n* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper\n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode", "def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,\n end_of_sequence_id, max_target_sequence_length,\n vocab_size, output_layer, batch_size, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS Id\n :param max_target_sequence_length: Maximum length of target sequences\n :param vocab_size: Size of decoder/target 
vocabulary\n :param decoding_scope: TenorFlow Variable Scope for decoding\n :param output_layer: Function to apply the output layer\n :param batch_size: Batch size\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing inference logits and sample_id\n \"\"\"\n # TODO: Implement Function \n start_tokens = tf.tile( tf.constant( [start_of_sequence_id], dtype=tf.int32), \n [ batch_size ], name = \"start_tokens\" )\n decode_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper( dec_embeddings, start_tokens, end_of_sequence_id )\n decoder = tf.contrib.seq2seq.BasicDecoder( dec_cell, decode_helper, encoder_state, output_layer = output_layer )\n decoder_outputs, decoder_state = tf.contrib.seq2seq.dynamic_decode( decoder, impute_finished=True,\n maximum_iterations = max_target_sequence_length )\n \n return decoder_outputs\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)", "Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nEmbed the target sequences\nConstruct the decoder LSTM cell (just like you constructed the encoder cell above)\nCreate an output layer to map the outputs of the decoder to the elements of our vocabulary\nUse the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.", "from tensorflow.python.layers import core as layers_core\n\ndef decoding_layer(dec_input, encoder_state,\n target_sequence_length, max_target_sequence_length,\n rnn_size,\n num_layers, 
target_vocab_to_int, target_vocab_size,\n batch_size, keep_prob, decoding_embedding_size):\n \"\"\"\n Create decoding layer\n :param dec_input: Decoder input\n :param encoder_state: Encoder state\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_target_sequence_length: Maximum length of target sequences\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param target_vocab_size: Size of target vocabulary\n :param batch_size: The size of the batch\n :param keep_prob: Dropout keep probability\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n # TODO: Implement Function\n \n decode_embed = tf.Variable( tf.random_uniform( [ target_vocab_size, decoding_embedding_size ] ) )\n decode_embed_input = tf.nn.embedding_lookup( decode_embed, dec_input )\n \n decode_cell = tf.contrib.rnn.MultiRNNCell( [ tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers) ] )\n # Adding dropout layer\n decode_cell = tf.contrib.rnn.DropoutWrapper( decode_cell, output_keep_prob = keep_prob )\n \n output_layer = layers_core.Dense( target_vocab_size, \n kernel_initializer = tf.truncated_normal_initializer( mean = 0.0, stddev=0.1 ) )\n \n with tf.variable_scope( \"decoding\" ) as decoding_scope:\n decode_outputs_train = decoding_layer_train( encoder_state, decode_cell, decode_embed_input, \n target_sequence_length, max_target_sequence_length, output_layer, keep_prob )\n\n SOS_id = target_vocab_to_int[ \"<GO>\" ]\n EOS_id = target_vocab_to_int[ \"<EOS>\" ]\n \n with tf.variable_scope( \"decoding\", reuse=True) as decoding_scope:\n decode_outputs_infer = decoding_layer_infer( encoder_state, decode_cell, decode_embed, SOS_id,EOS_id, \n max_target_sequence_length,target_vocab_size, output_layer, batch_size, keep_prob )\n \n return decode_outputs_train, decode_outputs_infer\n \n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL 
THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)", "Build the Neural Network\nApply the functions you implemented above to:\n\nApply embedding to the input data for the encoder.\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).\nProcess target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.\nApply embedding to the target data for the decoder.\nDecode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.", "def seq2seq_model(input_data, target_data, keep_prob, batch_size,\n source_sequence_length, target_sequence_length,\n max_target_sentence_length,\n source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size,\n rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param source_sequence_length: Sequence Lengths of source sequences in the batch\n :param target_sequence_length: Sequence Lengths of target sequences in the batch\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Decoder embedding size\n :param dec_embedding_size: Encoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n # TODO: Implement Function\n \n encode_output, encode_state = encoding_layer( input_data, rnn_size, num_layers, 
keep_prob, \n source_sequence_length, source_vocab_size, enc_embedding_size )\n \n decode_input = process_decoder_input( target_data, target_vocab_to_int, batch_size ) \n \n decode_outputs_train, decode_outputs_infer = decoding_layer( decode_input, encode_state,\n target_sequence_length, tf.reduce_max( target_sequence_length ), rnn_size, num_layers, \n target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size )\n \n return decode_outputs_train, decode_outputs_infer\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability\nSet display_step to state how many steps between each debug output statement", "# Number of Epochs\nepochs = 10\n# Batch Size\nbatch_size = 256\n# RNN Size\nrnn_size = 256\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 128\ndecoding_embedding_size = 128\n# Learning Rate\nlearning_rate = 0.01\n# Dropout Keep Probability\nkeep_probability = 0.8\ndisplay_step = 10", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_target_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, 
source_sequence_length = model_inputs()\n\n #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n\n train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),\n targets,\n keep_prob,\n batch_size,\n source_sequence_length,\n target_sequence_length,\n max_target_sequence_length,\n len(source_vocab_to_int),\n len(target_vocab_to_int),\n encoding_embedding_size,\n decoding_embedding_size,\n rnn_size,\n num_layers,\n target_vocab_to_int)\n\n\n training_logits = tf.identity(train_logits.rnn_output, name='logits')\n inference_logits = tf.identity(inference_logits.sample_id, name='predictions')\n\n masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')\n\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n training_logits,\n targets,\n masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)\n", "Batch and pad the source and target sequences", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef pad_sentence_batch(sentence_batch, pad_int):\n \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same length\"\"\"\n max_sentence = max([len(sentence) for sentence in sentence_batch])\n return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n\n\ndef get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):\n \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n for batch_i in range(0, len(sources)//batch_size):\n start_i = batch_i * batch_size\n\n # Slice the right amount for the batch\n sources_batch = 
sources[start_i:start_i + batch_size]\n targets_batch = targets[start_i:start_i + batch_size]\n\n # Pad\n pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))\n\n # Need the lengths for the _lengths parameters\n pad_targets_lengths = []\n for target in pad_targets_batch:\n pad_targets_lengths.append(len(target))\n\n pad_source_lengths = []\n for source in pad_sources_batch:\n pad_source_lengths.append(len(source))\n\n yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths\n", "Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1])],\n 'constant')\n\n return np.mean(np.equal(target, logits))\n\n# Split data to training and validation sets\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\nvalid_source = source_int_text[:batch_size]\nvalid_target = target_int_text[:batch_size]\n(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,\n valid_target,\n batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])) \nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(\n get_batches(train_source, train_target, batch_size,\n source_vocab_to_int['<PAD>'],\n 
target_vocab_to_int['<PAD>'])):\n\n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n target_sequence_length: targets_lengths,\n source_sequence_length: sources_lengths,\n keep_prob: keep_probability})\n\n\n if batch_i % display_step == 0 and batch_i > 0:\n\n\n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch,\n source_sequence_length: sources_lengths,\n target_sequence_length: targets_lengths,\n keep_prob: 1.0})\n\n\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_sources_batch,\n source_sequence_length: valid_sources_lengths,\n target_sequence_length: valid_targets_lengths,\n keep_prob: 1.0})\n\n train_acc = get_accuracy(target_batch, batch_train_logits)\n\n valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)\n\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')", "Save Parameters\nSave the batch_size and save_path parameters for inference.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()", "Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. 
Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary, to the &lt;UNK&gt; word id.", "def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n # TODO: Implement Function\n \n sequence = [ vocab_to_int.get( word, vocab_to_int[ \"<UNK>\"] ) for word in sentence.lower().split() ]\n \n return sequence\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)", "Translate\nThis will translate translate_sentence from English to French.", "translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('predictions:0')\n target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,\n target_sequence_length: [len(translate_sentence)*2]*batch_size,\n source_sequence_length: [len(translate_sentence)]*batch_size,\n keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: 
{}'.format([i for i in translate_logits]))\nprint(' French Words: {}'.format(\" \".join([target_int_to_vocab[i] for i in translate_logits])))\n", "Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words out of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has a larger vocabulary and is richer in the topics discussed. However, this will take you days to train, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as an HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
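The `get_batches` generator serialized in the record above pads every sentence in a batch to the length of the batch's longest sentence. A minimal pure-Python sketch of that padding step (an illustrative reconstruction mirroring the notebook's `pad_sentence_batch`, not the graded TensorFlow code):

```python
def pad_sentence_batch(sentence_batch, pad_int):
    # Pad each sentence with pad_int so all sentences share the batch's max length
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

batch = [[4, 5], [7, 8, 9], [1]]
padded = pad_sentence_batch(batch, 0)
print(padded)  # [[4, 5, 0], [7, 8, 9], [1, 0, 0]]
```

Padding per batch (rather than globally) keeps sequences as short as possible for each training step.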
jorgemauricio/INIFAP_Course
ejercicios/Pandas/1_Series.ipynb
mit
[ "<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nSeries\nThe first data type we are going to learn in pandas is the Series\nA Series is very similar to a NumPy array; the difference is that a Series has labels on its axis, so we can index by label instead of by number.", "# libraries\nimport numpy as np\nimport pandas as pd", "Creating a Series\nSeries can be created from lists, NumPy arrays, and dictionaries", "labels = ['a','b','c']\nmy_list = [10,20,30]\narr = np.array([10,20,30])\nd = {'a':10,'b':20,'c':30}", "Using lists", "pd.Series(data=my_list)\n\npd.Series(data=my_list,index=labels)\n\npd.Series(my_list,labels)", "NumPy arrays", "pd.Series(arr)\n\npd.Series(arr,labels)", "Dictionaries", "pd.Series(d)", "Data in a Series\nA pandas Series can hold many types of objects", "pd.Series(data=labels)\n\n# Even functions\npd.Series([sum,print,len])", "Using an index\nThe key to using a Series is understanding how its index is used, since pandas uses the index for fast lookups of information", "ser1 = pd.Series([1,2,3,4],index = ['USA', 'Germany','USSR', 'Japan']) \n\nser1\n\nser2 = pd.Series([1,2,5,4],index = ['USA', 'Germany','Italy', 'Japan']) \n\nser2\n\nser1['USA']", "Operations are then also done based off of index:", "ser1 + ser2" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
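The final cell of the Series notebook above (`ser1 + ser2`) relies on index alignment: values are added label by label, and labels present in only one Series produce NaN. A rough pure-Python sketch of that behaviour, using `None` where pandas would produce NaN (function name and structure are illustrative, not the pandas implementation):

```python
def add_series(s1, s2):
    # Align on the union of labels; a label missing from either side yields None
    keys = sorted(set(s1) | set(s2))
    return {k: (s1[k] + s2[k]) if k in s1 and k in s2 else None for k in keys}

s1 = {'USA': 1, 'Germany': 2, 'USSR': 3, 'Japan': 4}
s2 = {'USA': 1, 'Germany': 2, 'Italy': 5, 'Japan': 4}
print(add_series(s1, s2))
# {'Germany': 4, 'Italy': None, 'Japan': 8, 'USA': 2, 'USSR': None}
```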
sjobeek/robostats_mcl
mcl_demonstration.ipynb
mit
[ "import matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport montecarlo_localization as mcl\n%load_ext autoreload\n%autoreload 2\n#%matplotlib inline", "Monte Carlo localization\nThis notebook presents a demonstration of Erik Sjoberg's implementation of Monte Carlo localization (particle filter) on a dataset of 2d laser scans\nExample laser scan data\nNote the legs of a person which appear in the dataset; this dynamic obstacle will increase the difficulty of the localization.\n<img src=\"data/robotmovie1.gif\"/>\nCorresponding relative odometry log data", "logdata = mcl.load_log('data/log/robotdata2.log.gz')\nlogdata['x_rel'] = logdata['x'] - logdata.ix[0,'x']\nlogdata['y_rel'] = logdata['y'] - logdata.ix[0,'y']\nplt.plot(logdata['x_rel'], logdata['y_rel'])\nplt.title('Relative Odometry (x, y) in m')", "Note the significant drift in the path according to the odometry data above, which should have returned to it's initial position\nMap to localize within", "global_map = mcl.occupancy_map('data/map/wean.dat.gz')\nmcl.draw_map_state(global_map, rotate=True)", "Initialize valid particles uniformly on map\nParticles are initialized uniformy over the entire 8000cm x 8000cm area with random heading, but are re-sampled if they end up in a grid cell which is not clear with high confidence (map value > 0.8).", "sensor = mcl.laser_sensor() # Defines sensor measurement model\nparticle_list = [mcl.robot_particle(global_map, sensor)\n for _ in range(1000)]\nmcl.draw_map_state(global_map, particle_list, rotate=True)\nplt.show()", "Localization Program Execution", "from tempfile import NamedTemporaryFile\n\nVIDEO_TAG = \"\"\"<video controls>\n <source src=\"data:video/x-m4v;base64,{0}\" type=\"video/mp4\">\n Your browser does not support the video tag.\n</video>\"\"\"\n\ndef anim_to_html(anim):\n if not hasattr(anim, '_encoded_video'):\n with NamedTemporaryFile(suffix='.mp4') as f:\n anim.save(f.name, dpi=400, fps=10, extra_args=['-vcodec', 'libx264', 
'-pix_fmt', 'yuv420p'])\n video = open(f.name, \"rb\").read()\n anim._encoded_video = video.encode(\"base64\")\n \n return VIDEO_TAG.format(anim._encoded_video)\n\nclass ParticleMap(object):\n def __init__(self, ax, global_map, particle_list, target_particles=300, draw_max=2000, resample_period=10):\n self.ax = ax\n self.draw_max = draw_max\n self.global_map = global_map\n self.particle_list = particle_list\n mcl.draw_map_state(global_map, particle_list, ax=self.ax, draw_max=self.draw_max)\n self.i = 1\n self.target_particles = target_particles\n self.resample_period = resample_period\n\n def update(self, message):\n if self.i % self.resample_period == 0:# Resample and plot state\n self.particle_list = mcl.mcl_update(self.particle_list, message, resample=True,\n target_particles=self.target_particles) # Update\n plt.cla() \n mcl.draw_map_state(self.global_map, self.particle_list, self.ax, draw_max=self.draw_max)\n #print(pd.Series([p.weight for p in self.particle_list]).describe())\n else: # Just update particle weights / locations - do not resample\n self.particle_list = mcl.mcl_update(self.particle_list, message, \n target_particles=self.target_particles) # Update\n self.i += 1\n\nimport matplotlib.animation as animation\nimport matplotlib.animation as animation\nnp.random.seed(5)\nwean_hall_map = mcl.occupancy_map('data/map/wean.dat')\nlogdata = mcl.load_log('data/log/robotdata5.log.gz')\nlogdata_scans = logdata.query('type > 0.1').values\n\n#Initialize 100 particles uniformly in valid locations on the map\nlaser = mcl.laser_sensor(stdv_cm=100, uniform_weight=0.2)\nparticle_list = [mcl.robot_particle(wean_hall_map, laser, log_prob_descale=2000,\n sigma_fwd_pct=0.2, sigma_theta_pct=0.1)\n for _ in range(50000)]\n\nfig, ax = plt.subplots(figsize=(16,9))\n\npmap = ParticleMap(ax, wean_hall_map, particle_list,\n target_particles=300, draw_max=2000, resample_period=10)\n\n# pass a generator in \"emitter\" to produce data for the update func\nani = 
animation.FuncAnimation(fig, pmap.update, logdata_scans, interval=50,\n blit=False, repeat=False)\n\nani.save('./mcl_log5_50k_success.mp4', dpi=100, fps=10, extra_args=['-vcodec', 'libx264', '-pix_fmt', 'yuv420p'])\nplt.close('all')\n#plt.show()\n#anim_to_html(ani)\n\nplt.close('all')\n\nmcl.mp4_to_html('./mcl_log1_50k_success.mp4')", "View log data", "logdata = mcl.load_log('data/log/robotdata1.log.gz')\nlogdata['x_rel'] = logdata['x'] - logdata.ix[0,'x']\nlogdata['y_rel'] = logdata['y'] - logdata.ix[0,'y']\nplt.plot(logdata['x_rel'], logdata['y_rel'])\n\n\nlogdata['theta_rel'] = logdata['theta'] - logdata.ix[0,'theta']\nlogdata['xl_rel'] = logdata['xl'] - logdata.ix[0,'xl']\nlogdata['yl_rel'] = logdata['yl'] - logdata.ix[0,'yl']\nlogdata['thetal_rel'] = logdata['thetal'] - logdata.ix[0,'thetal']\n\nlogdata['dt'] = logdata['ts'].shift(-1) - logdata['ts']\nlogdata['dx'] = logdata['x'].shift(-1) - logdata['x']\nlogdata['dy'] = logdata['y'].shift(-1) - logdata['y']\nlogdata['dtheta'] = logdata['theta'].shift(-1) - logdata['theta']\nlogdata['dxl'] = logdata['xl'].shift(-1) - logdata['xl']\nlogdata['dyl'] = logdata['yl'].shift(-1) - logdata['yl']\nlogdata['dthetal'] = logdata['thetal'].shift(-1) - logdata['thetal']" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
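The core of the particle-filter update driven by `mcl.mcl_update` in the record above is weight-proportional resampling: particles that explain the laser scan well are duplicated, poor ones die out. A toy sketch of multinomial resampling, independent of the `mcl` module (names and parameters are illustrative):

```python
import random

def resample(particles, weights, n):
    # Draw n particles with probability proportional to their weights
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(particles, weights=probs, k=n)

random.seed(0)
new = resample(['a', 'b', 'c'], [0.1, 0.8, 0.1], 1000)
print(new.count('b'))  # the high-weight particle dominates the new population
```

Production filters often prefer low-variance (systematic) resampling, but the weighting idea is the same.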
aufziehvogel/kaggle
quora-question-pairs/notebooks/2.0-sk-feature-engineering.ipynb
mit
[ "Feature Engineering / Analysis\nWe can approach this problem from several points of view. The more apparent approach is to try to find a relation between two questions so that they are considered duplicates. On the other hand, maybe we can also find a duplicate likelihood of a question itself. There might be some features which make a question more likely to be a duplicate than another (one idea coming to mind: short question titles, because short questions are more simple and more simple questions have already been asked before).\nTherefore, this problem is more about crafting features from the relation of two texts (which are basically two data sets) and then deducing a boolean decision from these features than it is about crafting features from one data set (as most other competitions are).\nIndividual questions\nAs the latter one is more simple, let's start with those and see if some titles are more likely to be considered duplicates. For this, we first need to create a new data set which contains only the question title and a ratio factor, how often this question was rated as duplicate of another question. 
E.g., if a question is in the original data set five times and out of those five times it was rated duplicate three times, then it has a duplicate likelihood of 0.6.", "import pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n%matplotlib inline\n\ndf = pd.read_csv('../data/raw/train.csv')\ndf['question1'] = df['question1'].apply(str)\ndf['question2'] = df['question2'].apply(str)\n\ndf2 = df.copy(deep=True)\n\ndf = df.rename(columns={'question1': 'question', 'qid1': 'qid'})\ndf = df.drop(['id', 'qid2', 'question2'], axis=1)\n\ndf2 = df2.rename(columns={'question2': 'question', 'qid2': 'qid'})\ndf2 = df2.drop(['id', 'qid1', 'question1'], axis=1)\n\ndf_full = df.append(df2)\n\ndf_likelihood = df_full.groupby('qid').mean()\ndf_unique = df_full.drop_duplicates(subset='question')\ndf_unique = df_unique.set_index('qid')\ndf_unique = df_unique.drop('is_duplicate', axis=1)\ndf_likelihood = df_likelihood.join(df_unique)\ndf_likelihood = df_likelihood.rename(columns={'is_duplicate': 'dup_llh'})\ndf_likelihood['question'] = df_likelihood['question'].apply(str)\n\ndf_likelihood[(df_likelihood['dup_llh'] < 1) & (df_likelihood['dup_llh'] > 0)].head()", "Now we can go and see if there is a relation between question length and duplicate likelihood.", "df_likelihood['question_length'] = df_likelihood['question'].apply(len)\n\nsns.jointplot(x='question_length', y='dup_llh', data=df_likelihood);", "There is not much relation between question length and duplicate likelihood, but at least a very small one which we might take into consideration. Questions with a very long length above 500 characters are not considered duplicates.\nWhat happens, if we ignore all question which have an absolute 0 or absolute 1 (most of those occur only one time in the data)?", "df_relative = df_likelihood[(df_likelihood['dup_llh'] < 1) & (df_likelihood['dup_llh'] > 0)]\nsns.jointplot(x='question_length', y='dup_llh', data=df_relative);", "No correlation here." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
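The groupby-mean computation in the notebook above (the ratio of duplicate labels per question id) can be sketched without pandas; the 3-out-of-5 → 0.6 example from the text becomes:

```python
from collections import defaultdict

def duplicate_likelihood(rows):
    # rows: (qid, is_duplicate) pairs; returns qid -> mean duplicate label
    sums = defaultdict(int)
    counts = defaultdict(int)
    for qid, dup in rows:
        sums[qid] += dup
        counts[qid] += 1
    return {qid: sums[qid] / counts[qid] for qid in sums}

rows = [(1, 1), (1, 1), (1, 1), (1, 0), (1, 0), (2, 0)]
print(duplicate_likelihood(rows))  # {1: 0.6, 2: 0.0}
```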
ktmud/deep-learning
student-admissions-keras/StudentAdmissionsKeras.ipynb
mit
[ "Predicting Student Admissions with Neural Networks in Keras\nIn this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:\n- GRE Scores (Test)\n- GPA Scores (Grades)\n- Class rank (1-4)\nThe dataset originally came from here: http://www.ats.ucla.edu/\nLoading the data\nTo load the data and format it nicely, we will use two very useful packages called Pandas and Numpy. You can read on the documentation here:\n- https://pandas.pydata.org/pandas-docs/stable/\n- https://docs.scipy.org/", "# Importing pandas and numpy\nimport pandas as pd\nimport numpy as np\n\n# Reading the csv file into a pandas DataFrame\ndata = pd.read_csv('student_data.csv')\n\n# Printing out the first 10 rows of our data\ndata[:10]", "Plotting the data\nFirst let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ingore the rank.", "# Importing matplotlib\nimport matplotlib.pyplot as plt\n\n# Function to help us plot\ndef plot_points(data):\n X = np.array(data[[\"gre\",\"gpa\"]])\n y = np.array(data[\"admit\"])\n admitted = X[np.argwhere(y==1)]\n rejected = X[np.argwhere(y==0)]\n plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')\n plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')\n plt.xlabel('Test (GRE)')\n plt.ylabel('Grades (GPA)')\n \n# Plotting the points\nplot_points(data)\nplt.show()", "Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would. Maybe it would help to take the rank into account? 
Let's make 4 plots, each one for each rank.", "# Separating the ranks\ndata_rank1 = data[data[\"rank\"]==1]\ndata_rank2 = data[data[\"rank\"]==2]\ndata_rank3 = data[data[\"rank\"]==3]\ndata_rank4 = data[data[\"rank\"]==4]\n\n# Plotting the graphs\nplot_points(data_rank1)\nplt.title(\"Rank 1\")\nplt.show()\nplot_points(data_rank2)\nplt.title(\"Rank 2\")\nplt.show()\nplot_points(data_rank3)\nplt.title(\"Rank 3\")\nplt.show()\nplot_points(data_rank4)\nplt.title(\"Rank 4\")\nplt.show()", "This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it.\nOne-hot encoding the rank\nFor this, we'll use the get_dummies function in pandas.", "# Make dummy variables for rank\none_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix='rank')], axis=1)\n\n# Drop the previous rank column\none_hot_data = one_hot_data.drop('rank', axis=1)\n\n# Print the first 10 rows of our data\none_hot_data[:10]", "Scaling the data\nThe next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our data is skewed, and that makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800.", "# Copying our data\nprocessed_data = one_hot_data[:]\n\n# Scaling the columns\nprocessed_data['gre'] = processed_data['gre']/800\nprocessed_data['gpa'] = processed_data['gpa']/4.0\nprocessed_data[:10]", "Splitting the data into Training and Testing\nIn order to test our algorithm, we'll split the data into a Training and a Testing set. 
The size of the testing set will be 10% of the total data.", "sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)\ntrain_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)\n\nprint(\"Number of training samples is\", len(train_data))\nprint(\"Number of testing samples is\", len(test_data))\nprint(train_data[:10])\nprint(test_data[:10])", "Splitting the data into features and targets (labels)\nNow, as a final step before the training, we'll split the data into features (X) and targets (y).\nAlso, in Keras, we need to one-hot encode the output. We'll do this with the to_categorical function.", "import keras\n\n# Separate data and one-hot encode the output\n# Note: We're also turning the data into numpy arrays, in order to train the model in Keras\nfeatures = np.array(train_data.drop('admit', axis=1))\ntargets = np.array(keras.utils.to_categorical(train_data['admit'], 2))\nfeatures_test = np.array(test_data.drop('admit', axis=1))\ntargets_test = np.array(keras.utils.to_categorical(test_data['admit'], 2))\n\nprint(features[:10])\nprint(targets[:10])", "Defining the model architecture\nHere's where we use Keras to build our neural network.", "# Imports\nimport numpy as np\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Dropout, Activation\nfrom keras.optimizers import SGD\nfrom keras.utils import np_utils\n\n# Building the model\nmodel = Sequential()\nmodel.add(Dense(128, activation='relu', input_shape=(6,)))\nmodel.add(Dropout(.2))\nmodel.add(Dense(64, activation='relu'))\nmodel.add(Dropout(.1))\nmodel.add(Dense(2, activation='softmax'))\n\n# Compiling the model\nmodel.compile(loss = 'categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\nmodel.summary()", "Training the model", "# Training the model\nmodel.fit(features, targets, epochs=200, batch_size=100, verbose=0)", "Scoring the model", "# Evaluating the model on the training and testing set\nscore = 
model.evaluate(features, targets)\nprint(\"\\n Training Accuracy:\", score[1])\nscore = model.evaluate(features_test, targets_test)\nprint(\"\\n Testing Accuracy:\", score[1])", "Challenge: Play with the parameters!\nYou can see that we made several decisions in our training. For instance, the number of layers, the sizes of the layers, the number of epochs, etc.\nIt's your turn to play with parameters! Can you improve the accuracy? The following are other suggestions for these parameters. We'll learn the definitions later in the class:\n- Activation function: relu and sigmoid\n- Loss function: categorical_crossentropy, mean_squared_error\n- Optimizer: rmsprop, adam, ada" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
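The `keras.utils.to_categorical` call used in the admissions notebook above one-hot encodes integer labels. A minimal stand-in showing the transformation (an illustrative sketch, not the Keras implementation, which returns a NumPy array):

```python
def to_categorical(labels, num_classes):
    # One-hot encode integer labels: label k becomes a vector with 1.0 at index k
    return [[1.0 if i == label else 0.0 for i in range(num_classes)]
            for label in labels]

print(to_categorical([0, 1, 1], 2))  # [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
```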
blowekamp/SimpleITK-Notebook-Answers
ConnectedThresholdAndOtherFilterPerformance.ipynb
apache-2.0
[ "import matplotlib.pyplot as plt\n%matplotlib inline\n\nimport SimpleITK as sitk\nprint( sitk.Version() )\n\n# Download data to work on\nfrom downloaddata import fetch_midas_data as fdata\n\nfrom myshow import myshow\n\nfrom __future__ import print_function\n", "Demonstrate Reporting of Progress With ConnectedThresholdImageFilter", "img = sitk.Image(100,100, sitk.sitkUInt8)\n\nctFilter = sitk.ConnectedThresholdImageFilter()\nctFilter.SetSeed([0,0])\nctFilter.SetUpper(1)\nctFilter.SetLower(0)\nctFilter.AddCommand(sitk.sitkProgressEvent, lambda: print(\"\\rProgress: {0:03.1f}%...\".format(100*ctFilter.GetProgress())))", "Demonstrate reporting of Progress", "ctFilter.Execute(img)", "Performance comparison of Median and Rank image filters", "img = sitk.Image(100,100,100,sitk.sitkFloat32)\nimg = sitk.AdditiveGaussianNoise(img)\nmyshow(img)\n\nradius = [3,3,3]\n\n%timeit -n 1 sitk.Median(img,radius=radius)\n\n%timeit -n 1 sitk.FastApproximateRank(img, rank=0.5, radius=radius)\n\n%timeit -n 1 sitk.Rank(img, rank=0.5, radius=radius)\n\n%timeit -n 1 sitk.Mean(img,radius=radius)", "Compare Binary and GrayScale Dilate", "img = sitk.ReadImage(fdata(\"cthead1.png\"))\nmyshow(img)\n\n# First create a simple binary image (0 and 1s) for the common cthead1.png test data set\nbimg = img >100\nmyshow(bimg)", "Visually show that for binary images the output will be the same.", "myshow(sitk.BinaryDilate(bimg, radius, sitk.sitkBall), title=\"BinaryDilate\")\nmyshow(sitk.GrayscaleDilate(bimg, radius, sitk.sitkBall), title=\"GrayscaleDilate\")", "Briefly show that there is a performance difference. These images are only 2D and small, so the timings may not be very accurate, but the filters use different algorithms and GrayscaleDilate chooses which algorithm to use based on the structuring element.", "%timeit -n 100 sitk.BinaryDilate(bimg, 10, sitk.sitkBall)\n%timeit -n 100 sitk.GrayscaleDilate(bimg, 10, sitk.sitkBall)\n\n%timeit -n 100 sitk.BinaryDilate(bimg, 10, sitk.sitkBox)\n%timeit -n 100 sitk.GrayscaleDilate(bimg, 10, sitk.sitkBox)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
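The Median and Rank filters timed in the SimpleITK notebook above both compute a neighborhood rank statistic. A naive 1-D sliding-window median sketch illustrates the operation (the truncated window at the edges is an arbitrary choice for this illustration; SimpleITK's boundary handling differs):

```python
def median_filter_1d(signal, radius):
    # Sort each window of width 2*radius+1 and take its middle element
    out = []
    for i in range(len(signal)):
        window = sorted(signal[max(0, i - radius): i + radius + 1])
        out.append(window[len(window) // 2])
    return out

print(median_filter_1d([1, 9, 1, 1, 9, 1], 1))  # [9, 1, 1, 1, 1, 9]
```

The quadratic cost of sorting every window is why specialized rank-filter algorithms (as compared in the notebook) matter for large radii.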
KUrushi/knocks
01/chapter1.ipynb
mit
[ "#-*- coding:utf-8", "00 Reversed string\nObtain the string in which the characters of the string \"stressed\" are arranged in reverse order", "strings = \"stressed\"\nprint(strings[::-1])", "01 \"パタトクカシーー\"\nExtract the 1st, 3rd, 5th, and 7th characters of the string \"パタトクカシーー\" and concatenate them into a single string", "strings1 = u\"パタトクカシーー\"\nprint(strings1[::2])", "02 \"パトカー\" + \"タクシー\" = \"パタトクカシーー\"\nObtain the string \"パタトクカシーー\" by alternately concatenating, from the beginning, the characters of \"パトカー\" and \"タクシー\"", "strings_p = u\"パトカー\"\nstrings_t = u\"タクシー\"\nstrings_sum = ''\nfor p, t in zip(strings_p, strings_t):\n strings_sum += p + t\nprint(strings_sum)", "03. Pi\nSplit the sentence \"Now I need a drink, alcoholic of course, after the heavy lectures involving quantum mechanics.\" into words, and create a list of the number of (alphabetical) characters in each word, in order of appearance.", "strings3 = \"Now I need a drink, alcoholic of course, after the heavy lectures involving quantum mechanics.\"\ncount_list = [len(i) for i in strings3.split(' ')]\ncount_list", "04. Element symbols\nSplit the sentence \"Hi He Lied Because Boron Could Not Oxidize Fluorine. New Nations Might Also Sign Peace Security Clause. Arthur King Can.\" into words, take the first character of the 1st, 5th, 6th, 7th, 8th, 9th, 15th, 16th, and 19th words and the first two characters of every other word, and create an associative array (dict or map type) from the extracted string to the position of the word (its index from the beginning).", "strings4 = \"Hi He Lied Because Boron Could Not Oxidize Fluorine. New Nations Might Also Sign Peace Security Clause. Arthur King Can.\"\nstrings4 = strings4.replace('.', '')\nstrings4 = strings4.split(' ')\ndictionary = {}\nfor i in range(len(strings4)):\n if i+1 in [1, 5, 6, 7, 8, 9, 15, 16, 19]:\n dictionary.update({strings4[i][:1]: strings4[i]})\n else:\n dictionary.update({strings4[i][:2]: strings4[i]})\nprint(dictionary)", "05 n-gram\nWrite a function that creates n-grams from a given sequence (a string, a list, etc.).\nUse this function to obtain the word bi-grams and the character bi-grams of the sentence \"I am an NLPer\"", "def ngram(sequence, n, mode='c'):\n if mode == 'c':\n return [sequence[i:i+n] for i in range(len(sequence)-n+1)]\n elif mode == 'w':\n sequence = [s.strip(',.') for s in sequence.split(' ')] # build a word list with spaces and punctuation stripped\n return [tuple(sequence[i:i+n]) for i in range(len(sequence)-n+1)]\n \n\nsequence = \"I am an NLPer\"\nprint(ngram(sequence, 2))\nprint(ngram(sequence, 2, 'w'))", "06. Sets\nLet X and Y be the sets of character bi-grams contained in \"paraparaparadise\" and \"paragraph\", respectively, and obtain the union, intersection, and difference of X and Y. In addition, check whether the bi-gram 'se' is contained in X and in Y.", "X = set(ngram('paraparaparadise', 2))\nY = set(ngram('paragraph', 2))\nprint(X.intersection(Y))\nprint(X.union(Y))\nprint(X.difference(Y))\nprint('se' in X)\nprint('se' in Y)", "07. Sentence generation by template\nImplement a function that takes arguments x, y, and z and returns the string \"x時のyはz\" (\"y at x o'clock is z\"). In addition, check the result with x=12, y=\"気温\" (temperature), z=22.4.", "#-*- coding:utf-8 -*-\ndef print_template(x, y, z):\n return u'%s時の%sは%s' % (x, y, z)\n\ntemplate = print_template(12, u'気温', 22.4)\nprint(template)", "08. Cipher text\nImplement a function cipher that converts each character of a given string according to the following specification:\nIf the character is a lowercase letter, replace it with the character whose code is (219 - character code)\nOutput all other characters as they are\nUse this function to encrypt and decrypt an English message.", "def cipher(sequence):\n return \"\".join((map(str, [chr(219-ord(i)) if i.islower() else i for i in sequence])))\n\nstrings = \"I am an NLPer\"\nencryption = cipher(strings)\ndecryption = cipher(encryption)\nprint(encryption)\nprint(decryption)", "09. Typoglycemia\nWrite a program that, for a sequence of space-separated words, keeps the first and last character of each word and randomly reorders the remaining characters.\nHowever, words of length 4 or less are not reordered.\nGive it an appropriate English sentence (for example \"I couldn't believe that I could actually understand what I was reading : the phenomenal power of the human mind .\") and check the result.", "import random\nsequence = \"I couldn't believe that I could actually understand what I was reading : the phenomenal power of the human mind.\"\n[s[0]+\"\".join(map(str, random.sample(s[1:-1], len(s)-2)))+s[-1] if len(s) >= 5 \\\n else str(s) for s in sequence.split(' ')]\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tipsybear/actors-simulation
notebooks/distributions.ipynb
mit
[ "Distribution Analysis\nThis notebook visualizes the distribution dynamos that subclass gvas.dynamo.Distribution. This is partly to have a debug of the distribution, but also to provide an entry point to vizualizations and simulation analysis later on.", "%matplotlib inline\n\nfrom gvas.viz import *\nfrom gvas.dynamo import Uniform, Normal\nfrom gvas.dynamo import Stream", "Uniform Distribution", "Uniform(0, 100).plot(n=100000, context='paper')", "Normal Distribution", "Normal(0, 12).plot(n=100000, context='paper')", "Streaming Data\nSimulates the flow of streaming data.", "Stream(100, 24, 10, 0.015, 15).plot(n=200, context='paper')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
EconForge/dolo
examples/notebooks/rbc_perfect_foresight.ipynb
bsd-2-clause
[ "from numpy import *\nfrom matplotlib import pyplot as plt\n\nfrom dolo import *\n\nfilename = '../models/rbc_taxes.yaml'\n\nmodel = yaml_import(filename)", "The model defined in rbc_taxes.yaml is the rbc model, with an agregate tax g that is proportional to income.", "model.calibration\n\nmodel.residuals()", "We want to compute the adjustment of the economy when this tax, goes back progressively from 10% to 0%, over 10 periods.", "exo_g = linspace(0.1,0,10) # this is a vector of size 10\nexo_g = atleast_2d(exo_g).T # the solver expects a 1x10 vector\nprint(exo_g.shape)\n\nexo_g\n\n# Let's solve for the optimal adjustment by assuming that the\n# economy returns to steady-state after T=50 periods.\nfrom dolo.algos.perfect_foresight import deterministic_solve\nsim = deterministic_solve(model, shocks=exo_g, T=50)\ndisplay(sim) # it returns a timeseries object\n\nmodel\n\nplt.plot(figsize=(10,10))\nplt.subplot(221)\nplt.plot(sim['k'], label='capital')\nplt.plot(sim['y'], label='production')\nplt.legend()\nplt.subplot(222)\nplt.plot(sim['g'], label='gvt. spending')\nplt.plot(sim['c'], label='consumption')\nplt.legend()\nplt.subplot(223)\nplt.plot(sim['n'], label='work')\nplt.plot(sim['i'], label='investment')\nplt.legend()\nplt.subplot(224)\nplt.plot(sim['w'], label='wages')\nplt.legend()" ]
[ "code", "markdown", "code", "markdown", "code" ]
msanterre/deep_learning
intro-to-rnns/Anna_KaRNNa.ipynb
mit
[ "Anna KaRNNa\nIn this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\nThis network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.\n<img src=\"assets/charseq.jpeg\" width=\"500\">", "import time\nfrom collections import namedtuple\n\nimport numpy as np\nimport tensorflow as tf", "First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.", "with open('anna.txt', 'r') as f:\n text=f.read()\nvocab = sorted(set(text))\nvocab_to_int = {c: i for i, c in enumerate(vocab)}\nint_to_vocab = dict(enumerate(vocab))\nencoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)", "Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.", "text[:100]", "And we can see the characters encoded as integers.", "encoded[:100]", "Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.", "len(vocab)", "Making training mini-batches\nHere is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:\n<img src=\"assets/sequence_batching@1x.png\" width=500px>\n<br>\nWe have our text encoded as integers as one long array in encoded. 
Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.\nThe first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \\times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.\nAfter that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \\times (M * K)$ where $K$ is the number of batches.\nNow that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \\times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:\npython\ny[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]\nwhere x is the input batch and y is the target batch.\nThe way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. 
That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.", "def get_batches(arr, n_seqs, n_steps):\n '''Create a generator that returns batches of size\n n_seqs x n_steps from arr.\n \n Arguments\n ---------\n arr: Array you want to make batches from\n n_seqs: Batch size, the number of sequences per batch\n n_steps: Number of sequence steps per batch\n '''\n # Get the number of characters per batch and number of batches we can make\n characters_per_batch = n_seqs * n_steps\n n_batches = len(arr)//characters_per_batch\n \n # Keep only enough characters to make full batches\n arr = arr[:n_batches * characters_per_batch]\n \n # Reshape into n_seqs rows\n arr = arr.reshape((n_seqs, -1))\n \n for n in range(0, arr.shape[1], n_steps):\n # The features\n x = arr[:, n:n+n_steps]\n # The targets, shifted by one\n y = np.zeros_like(x)\n y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]\n yield x, y", "Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.", "batches = get_batches(encoded, 10, 50)\nx, y = next(batches)\n\nprint('x\\n', x[:10, :10])\nprint('\\ny\\n', y[:10, :10])", "If you implemented get_batches correctly, the above output should look something like \n```\nx\n [[55 63 69 22 6 76 45 5 16 35]\n [ 5 69 1 5 12 52 6 5 56 52]\n [48 29 12 61 35 35 8 64 76 78]\n [12 5 24 39 45 29 12 56 5 63]\n [ 5 29 6 5 29 78 28 5 78 29]\n [ 5 13 6 5 36 69 78 35 52 12]\n [63 76 12 5 18 52 1 76 5 58]\n [34 5 73 39 6 5 12 52 36 5]\n [ 6 5 29 78 12 79 6 61 5 59]\n [ 5 78 69 29 24 5 6 52 5 63]]\ny\n [[63 69 22 6 76 45 5 16 35 35]\n [69 1 5 12 52 6 5 56 52 29]\n [29 12 61 35 35 8 64 76 78 28]\n [ 5 24 39 45 29 12 56 5 63 29]\n [29 6 5 29 78 28 5 78 29 45]\n [13 6 5 36 69 78 35 52 12 43]\n [76 12 5 18 52 1 76 5 58 52]\n [ 5 73 39 6 5 12 52 36 5 78]\n [ 5 29 78 12 79 6 61 5 59 63]\n [78 69 29 24 5 6 52 5 63 76]]\n```\nalthough the exact numbers will be different. 
Check to make sure the data is shifted over one step for `y`.\nBuilding the model\nBelow is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.\n<img src=\"assets/charRNN.png\" width=500px>\nInputs\nFirst off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.", "def build_inputs(batch_size, num_steps):\n ''' Define placeholders for inputs, targets, and dropout \n \n Arguments\n ---------\n batch_size: Batch size, number of sequences per batch\n num_steps: Number of sequence steps in a batch\n \n '''\n # Declare placeholders we'll feed into the graph\n inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')\n targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')\n \n # Keep probability placeholder for drop out layers\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n return inputs, targets, keep_prob", "LSTM Cell\nHere we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.\nWe first create a basic LSTM cell with\npython\nlstm = tf.contrib.rnn.BasicLSTMCell(num_units)\nwhere num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with \npython\ntf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\nYou pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. 
Previously with TensorFlow 1.0, you could do this\npython\ntf.contrib.rnn.MultiRNNCell([cell]*num_layers)\nThis might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like\n```python\ndef build_cell(num_units, keep_prob):\n lstm = tf.contrib.rnn.BasicLSTMCell(num_units)\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\nreturn drop\n\ntf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])\n```\nEven though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.\nWe also need to create an initial cell state of all zeros. This can be done like so\npython\ninitial_state = cell.zero_state(batch_size, tf.float32)\nBelow, we implement the build_lstm function to create these LSTM cells and the initial state.", "def build_lstm(lstm_size, num_layers, batch_size, keep_prob):\n ''' Build LSTM cell.\n \n Arguments\n ---------\n keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability\n lstm_size: Size of the hidden layers in the LSTM cells\n num_layers: Number of LSTM layers\n batch_size: Batch size\n\n '''\n ### Build the LSTM Cell\n \n def build_cell(lstm_size, keep_prob):\n # Use a basic LSTM cell\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n \n # Add dropout to the cell\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n return drop\n \n \n # Stack up multiple LSTM layers, for deep learning\n cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])\n initial_state = cell.zero_state(batch_size, tf.float32)\n \n return cell, initial_state", "RNN Output\nHere we'll create the output layer. 
We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.\nIf our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \\times M \\times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \\times M \\times L$.\nWe are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \\times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.\nOnce we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. 
To avoid this, we wrap the variables in a variable scope so we can give them unique names.", "def build_output(lstm_output, in_size, out_size):\n ''' Build a softmax layer, return the softmax output and logits.\n \n Arguments\n ---------\n \n x: Input tensor\n in_size: Size of the input tensor, for example, size of the LSTM cells\n out_size: Size of this softmax layer\n \n '''\n\n # Reshape output so it's a bunch of rows, one row for each step for each sequence.\n # That is, the shape should be batch_size*num_steps rows by lstm_size columns\n\n seq_output = tf.concat(lstm_output, axis=1)\n print(\"Outputs: \", lstm_output)\n x = tf.reshape(seq_output, [-1, in_size])\n print(\"Shape x: \", x)\n # Connect the RNN outputs to a softmax layer\n with tf.variable_scope('softmax'):\n softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))\n softmax_b = tf.Variable(tf.zeros(out_size))\n \n # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch\n # of rows of logit outputs, one for each step and sequence\n logits = tf.matmul(x, softmax_w) + softmax_b\n \n # Use softmax to get the probabilities for predicted characters\n out = tf.nn.softmax(logits, name='predictions')\n \n return out, logits", "Training loss\nNext up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \\times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. 
So our logits will also have size $(MN) \\times C$.\nThen we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.", "def build_loss(logits, targets, lstm_size, num_classes):\n ''' Calculate the loss from the logits and the targets.\n \n Arguments\n ---------\n logits: Logits from final fully connected layer\n targets: Targets for supervised learning\n lstm_size: Number of LSTM hidden units\n num_classes: Number of classes in targets\n \n '''\n \n # One-hot encode targets and reshape to match logits, one row per batch_size per step\n y_one_hot = tf.one_hot(targets, num_classes)\n y_reshaped = tf.reshape(y_one_hot, logits.get_shape())\n \n # Softmax cross entropy loss\n loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)\n loss = tf.reduce_mean(loss)\n return loss", "Optimizer\nHere we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing-gradient problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.", "def build_optimizer(loss, learning_rate, grad_clip):\n ''' Build optimizer for training, using gradient clipping.\n \n Arguments:\n loss: Network loss\n learning_rate: Learning rate for optimizer\n \n '''\n \n # Optimizer for training, using gradient clipping to control exploding gradients\n tvars = tf.trainable_variables()\n grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)\n train_op = tf.train.AdamOptimizer(learning_rate)\n optimizer = train_op.apply_gradients(zip(grads, tvars))\n \n return optimizer", "Build the network\nNow we can put all the pieces together and build a class for the network. 
To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.", "class CharRNN:\n \n def __init__(self, num_classes, batch_size=64, num_steps=50, \n lstm_size=128, num_layers=2, learning_rate=0.001, \n grad_clip=5, sampling=False):\n \n # When we're using this network for sampling later, we'll be passing in\n # one character at a time, so providing an option for that\n if sampling:\n batch_size, num_steps = 1, 1\n\n tf.reset_default_graph()\n \n # Build the input placeholder tensors\n self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)\n\n # Build the LSTM cell\n cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)\n\n ### Run the data through the RNN layers\n # First, one-hot encode the input tokens\n x_one_hot = tf.one_hot(self.inputs, num_classes)\n \n # Run each sequence step through the RNN and collect the outputs\n outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)\n self.final_state = state\n \n # Get softmax predictions and logits\n self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)\n \n # Loss and optimizer (with gradient clipping)\n self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)\n self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)", "Hyperparameters\nHere I'm defining the hyperparameters for the 
network. \n\nbatch_size - Number of sequences running through the network in one pass.\nnum_steps - Number of characters in the sequence the network is trained on. Larger is typically better; the network will learn more long-range dependencies, but it takes longer to train. 100 is typically a good number here.\nlstm_size - The number of units in the hidden layers.\nnum_layers - Number of hidden LSTM layers to use\nlearning_rate - Learning rate for training\nkeep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.\n\nHere's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.\n\nTips and Tricks\nMonitoring Validation Loss vs. Training Loss\nIf you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:\n\nIf your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.\nIf your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)\n\nApproximate number of parameters\nThe two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:\n\nThe number of parameters in your model. 
This is printed when you start training.\nThe size of your dataset. 1MB file is approximately 1 million characters.\n\nThese two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:\n\nI have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.\nI have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.\n\nBest models strategy\nThe winning strategy for obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.\nIt is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.\nBy the way, the sizes of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.", "batch_size = 100 # Sequences per batch\nnum_steps = 100 # Number of sequence steps per batch\nlstm_size = 512 # Size of hidden layers in LSTMs\nnum_layers = 2 # Number of LSTM layers\nlearning_rate = 0.001 # Learning rate\nkeep_prob = 0.5 # Dropout keep probability", "Time for training\nThis is typical training code, passing inputs and targets into the network, then running the optimizer. 
Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.\nHere I'm saving checkpoints with the format\ni{iteration number}_l{# hidden layer units}.ckpt", "epochs = 20\n# Save every N iterations\nsave_every_n = 200\n\nmodel = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,\n lstm_size=lstm_size, num_layers=num_layers, \n learning_rate=learning_rate)\n\nsaver = tf.train.Saver(max_to_keep=100)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n # Use the line below to load a checkpoint and resume training\n #saver.restore(sess, 'checkpoints/______.ckpt')\n counter = 0\n for e in range(epochs):\n # Train network\n new_state = sess.run(model.initial_state)\n loss = 0\n for x, y in get_batches(encoded, batch_size, num_steps):\n counter += 1\n start = time.time()\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: keep_prob,\n model.initial_state: new_state}\n batch_loss, new_state, _ = sess.run([model.loss, \n model.final_state, \n model.optimizer], \n feed_dict=feed)\n \n end = time.time()\n print('Epoch: {}/{}... '.format(e+1, epochs),\n 'Training Step: {}... '.format(counter),\n 'Training loss: {:.4f}... '.format(batch_loss),\n '{:.4f} sec/batch'.format((end-start)))\n \n if (counter % save_every_n == 0):\n saver.save(sess, \"checkpoints/i{}_l{}.ckpt\".format(counter, lstm_size))\n \n saver.save(sess, \"checkpoints/i{}_l{}.ckpt\".format(counter, lstm_size))", "Saved checkpoints\nRead up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables", "tf.train.get_checkpoint_state('checkpoints')", "Sampling\nNow that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. 
We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\nThe network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.", "def pick_top_n(preds, vocab_size, top_n=5):\n p = np.squeeze(preds)\n p[np.argsort(p)[:-top_n]] = 0\n p = p / np.sum(p)\n c = np.random.choice(vocab_size, 1, p=p)[0]\n return c\n\ndef sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n samples = [c for c in prime]\n model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)\n saver = tf.train.Saver()\n with tf.Session() as sess:\n saver.restore(sess, checkpoint)\n new_state = sess.run(model.initial_state)\n for c in prime:\n x = np.zeros((1, 1))\n x[0,0] = vocab_to_int[c]\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.prediction, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n\n for i in range(n_samples):\n x[0,0] = c\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.prediction, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n \n return ''.join(samples)", "Here, pass in the path to a checkpoint and sample from the network.", "tf.train.latest_checkpoint('checkpoints')\n\ncheckpoint = tf.train.latest_checkpoint('checkpoints')\nsamp = sample(checkpoint, 2000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = 'checkpoints/i200_l512.ckpt'\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = 'checkpoints/i600_l512.ckpt'\nsamp = sample(checkpoint, 1000, 
lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = 'checkpoints/i1200_l512.ckpt'\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
theandygross/HIV_Methylation
Review/Reviews_Computation.ipynb
mit
[ "Reviewer comments", "cd ..\n\nimport NotebookImport\nfrom Parallel.Age_HIV_Features import *", "It would be useful to know how many of the CpGs associated with age (26,927), and HIV (81,361) are also strongly differentially methylated across cell-types. One could for example, plot a cell-type association p-value histogram for these sites using p-values from this Jaffe and Irizarry table: http://www.genomebiology.com/2014/15/2/R31/suppl/S3\nIf it is highly enriched for small p-values, I would be cautious about claiming that the cell type effects have been fully corrected for as the authors believe.", "dm_cell = pd.read_csv('./data/Jaffee_Supplementary_Table_2.csv', index_col=0,\n low_memory=False, skiprows=[0])", "HIV association with cell-type specific probes", "g_hiv.value_counts()\n\nvv = features['HIV (BH)']\n\nr_hiv = pd.DataFrame({c: fisher_exact_test(vv, dm_cell['p.value'] < c)\n for c in [.1, .05,.01,.001,.0001,.000001]}).T\nr_hiv.index.name = 'p-value cutoff'\ncount = pd.Series({c: sum(dm_cell['p.value'].ix[vv.index] < c)\n for c in [.1, .05,.01,.001,.0001,.000001]}).T\nr_hiv['count'] = count\nr_hiv = r_hiv[['count','odds_ratio','p']]\n\nr_hiv\n\nr_hiv = pd.DataFrame({c: fisher_exact_test(g_hiv, dm_cell['p.value'] < c)\n for c in [.1, .05,.01,.001,.0001,.000001]}).T\nr_hiv.index.name = 'p-value cutoff'\ncount = pd.Series({c: sum(dm_cell['p.value'].ix[g_hiv.index] < c)\n for c in [.1, .05,.01,.001,.0001,.000001]}).T\nr_hiv['count'] = count\nr_hiv = r_hiv[['count','odds_ratio','p']]\n\nr_hiv\n\nfc = ((dm_cell.CD4T_mean - dm_cell.CD8T_mean).abs() / \n .5*(dm_cell.CD4T_mean + dm_cell.CD8T_mean))\nr_hiv_fc = pd.DataFrame({p: fisher_exact_test(g_hiv, fc > c)\n for p,c in fc.quantile([.05,.1,.25,.5,.75,.9,.95]).iteritems()}).T\nr_hiv_fc", "Age association with cell-type specific probes", "g_age.value_counts()\n\nr_age = pd.DataFrame({c: fisher_exact_test(g_age, dm_cell['p.value'] < c)\n for c in [.1, .05,.01,.001,.0001,.000001]}).T\nr_age.index.name = 'p-value 
cutoff'\ncount = pd.Series({c: sum(dm_cell['p.value'].ix[g_age.index] < c)\n for c in [.1, .05,.01,.001,.0001,.000001]}).T\nr_age['count'] = count\nr_age = r_age[['count','odds_ratio','p']]\n\nr_age\n\ncount\n\nfc = ((dm_cell.CD4T_mean - dm_cell.CD8T_mean).abs() / \n .5*(dm_cell.CD4T_mean + dm_cell.CD8T_mean))\nr_age_fc = pd.DataFrame({p: fisher_exact_test(g_age, fc > c)\n for p,c in fc.quantile([.05,.1,.25,.5,.75,.9,.95]).iteritems()}).T\n\ncount = pd.Series({p: sum(fc > c)\n for p,c in fc.quantile([.05,.1,.25,.5,.75,.9,.95]).iteritems()})\nr_age_fc['count'] = count\nr_age_fc = r_age_fc[['count','odds_ratio','p']]\nr_age_fc", "It would be reassuring if the link between the HIV infection and aging signatures (from \"Unsupervised analysis shows shared phenotypes of HIV and age\") still holds when analysis is restricted to CpGs with no cell-type dependency (e.g. cell type composition p-value>0.1).", "df_hiv2 = df_hiv.ix[:, duration.index]\ndf_hiv2 = df_hiv2.dropna(1, how='all')\n\ndd = logit_adj(df_hiv2)\nm = dd.mean(1)\ns = dd.std(1)\ndf_norm = dd.subtract(m, axis=0).divide(s, axis=0)\n\nidx = ti(dm_cell['p.value'] > .1).intersection(ti(g_age))\nlen(idx)\n\nU,S,vH = frame_svd(df_norm.ix[probe_idx].ix[idx])\np = S ** 2 / sum(S ** 2)\np[:5]\n\nfig, ax = subplots(1,1, figsize=(4,3))\nrr = 1*vH[0]\nk = pred_c.index\nhiv = duration != 'Control'\nage = age\n\nsns.regplot(*match_series(age.ix[k], rr.ix[ti(hiv==0)]),\n ax=ax, label='Control', ci=None)\nsns.regplot(*match_series(age.ix[k], rr.ix[ti(hiv>0)]),\n ax=ax, label='HIV+', ci=None)\nax.set_ylabel('First PC of validated markers', size=12)\nax.set_xlabel('Chronological age (years)', size=14)\n\nax.set_yticks([0])\nax.axhline(0, ls='--', lw=2.5, color='grey', zorder=-1)\nax.set_xbound(23,70)\nax.set_ybound(-.25,.28)\nprettify_ax(ax)\nfig.tight_layout()\n\nimport statsmodels.api as sm\n\ny = vH[0]\nintercept = pd.Series(1, index=y.index)\n\nX = pd.concat([intercept, age, hiv], axis=1, keys=['Intercept', 'age', 'HIV'])\nX 
= X.dropna().ix[y.index]\nm1 = sm.OLS(y, X).fit()\n\nX = pd.concat([intercept, age, hiv, hiv*age], axis=1, keys=['Intercept', 'age', \n 'HIV', 'int'])\nX = X.dropna().ix[y.index]\nm2 = sm.OLS(y, X).fit()\n\nm2.compare_lr_test(m1)\n\nm1.pvalues\n\nm2.pvalues", "In Figure 1A how many sites identified from EPIC as age associated were not identified in Hannum et al? This data would be more informative if represented as a Venn diagram to show the overlapping 26,927 sites between the two analyses.", "rr1 = bhCorrection(p_vals.in_set_s1) < .01\nrr1.value_counts()\n\nrr2 = bhCorrection(p_vals.in_set_s3) < .01\nrr2.value_counts()\n\nfisher_exact_test(rr1, rr2)\n\nrr1.name = 'Hannum'\nrr2.name = 'EPIC'\nvenn_pandas(rr1, rr2)", "It seems as though the odds ratios for HIV and age are mostly opposite except in CpG islands and gene bodies. Wouldn't this argue for the effects of HIV and aging being separate except for at these two locations?", "fisher_exact_test(g_age, g_hiv)\n\nimport Setup.DX_Imports as dx\n\nby_annotation = pd.DataFrame({k: fisher_exact_test(g_age, g_hiv.ix[ti(v)])\n for k,v in dx.probe_sets.iteritems()}).T\n\nby_annotation.sort('odds_ratio')\n\na2 = age[(age > 25) & (age < 69)]\nd = pd.DataFrame({s: {c: pearson_pandas(v.ix[ti(study==s)], a2).rho\n for c,v in cell_counts.iteritems()}\n for s in ['s1','s3','HIV Control','HIV Long','HIV Short']})\nd\n\na2 = age[(age > 25) & (age < 69)]\nidx = ti(study.isin(['s1','HIV Control','s3']))\nd = pd.Series({c: pearson_pandas(v.ix[idx], a2).rho\n for c,v in cell_counts.iteritems()})\nd\n\nfig, axs = subplots(1,3, figsize=(12,4))\n\nseries_scatter(a2, cell_counts.CD8T.ix[idx], s=20, ax=axs[0], ann=None)\naxs[0].annotate('$r={0:.2f}$'.format(d['CD8T']), (50, -.04), size=18)\nseries_scatter(a2, cell_counts.CD4T.ix[idx], s=20, ax=axs[1], ann=None)\naxs[1].annotate('$r={0:.2f}$'.format(d['CD4T']), (50, -.04), size=18)\nratio = np.log((cell_counts.CD4T + .01) / (cell_counts.CD8T + .01))\nratio.name = 'log 
ratio'\nseries_scatter(a2.ix[idx], ratio.clip(-5,5), s=20, ax=axs[2], ann=None)\naxs[2].annotate('$r={0:.2f}$'.format(pearson_pandas(ratio, a2).rho), \n (50, -2.8), size=18)\nfor ax in axs:\n prettify_ax(ax)\n ax.set_xbound(25,70)\nfig.tight_layout()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
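The reviewer-response notebook above repeatedly thresholds adjusted p-values with `bhCorrection`, a helper from its private `Setup` imports that is not shown. As a point of reference (an illustrative sketch, not the authors' implementation), Benjamini-Hochberg adjustment can be written in plain Python:

```python
def bh_adjust(p_values):
    """Benjamini-Hochberg adjusted p-values (pure-Python sketch).

    For the p-value of rank k (1-based, ascending), the adjusted value is
    min over ranks >= k of p * n / rank, which keeps the results monotone.
    """
    n = len(p_values)
    order = sorted(range(n), key=lambda i: p_values[i])
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(n - 1, -1, -1):
        i = order[rank]
        running_min = min(running_min, p_values[i] * n / (rank + 1))
        adjusted[i] = running_min
    return adjusted
```

Thresholding the adjusted values at .01, as the notebook does, then controls the false discovery rate rather than the per-test error rate.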
JarnoRFB/qtpyvis
notebooks/caffe/inference.ipynb
mit
[ "import caffe\ncaffe.set_mode_cpu()\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.rcParams['image.cmap'] = 'gray'\n%matplotlib inline\nfrom keras.datasets import mnist\n", "Preprocess dataset.", "data = mnist.load_data()[1][0]\n# Normalize data.\ndata = data / data.max()\nplt.imshow(data[0, :, :])", "Reshape the data so it fits into the caffe net.", "seven = data[0, :, :]\nprint(seven.shape)\nseven = seven[np.newaxis, ...]\nprint(seven.shape)\n", "In Caffe the inference model is often separate from the model used for training and suffixed with _deploy. It usually has an Input layer that can be freely filled with data, instead of a Data layer with a fixed dataset.\nLoad the trained model and check that the inference is correct.", "model_def = 'example_caffe_mnist_model_deploy.prototxt'\nmodel_weights = 'mnist.caffemodel'\ndeploy_net = caffe.Net(model_def,\n model_weights,\n caffe.TEST)\ndeploy_net.blobs\n\ndeploy_net.blobs['data'].data[...] = seven\noutput = deploy_net.forward()\noutput['prob'][0].argmax()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
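The notebook's final step reduces to taking the argmax over the `prob` blob. A dependency-free sketch of that last step (plain Python, no Caffe or NumPy required; the probability vector below is made up for illustration):

```python
def argmax(probs):
    """Index of the largest class probability, as output['prob'][0].argmax() does."""
    return max(range(len(probs)), key=lambda i: probs[i])

# A hypothetical 10-way softmax output peaking at the digit 7:
prob = [0.01, 0.0, 0.01, 0.0, 0.0, 0.01, 0.0, 0.95, 0.01, 0.01]
predicted_digit = argmax(prob)  # 7
```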
stevetjoa/stanford-mir
basic_feature_extraction.ipynb
mit
[ "%matplotlib inline\nfrom pathlib import Path\nimport numpy, scipy, matplotlib.pyplot as plt, sklearn, urllib, IPython.display as ipd\nimport librosa, librosa.display\nimport stanford_mir; stanford_mir.init()", "&larr; Back to Index\nBasic Feature Extraction\nSomehow, we must extract the characteristics of our audio signal that are most relevant to the problem we are trying to solve. For example, if we want to classify instruments by timbre, we will want features that distinguish sounds by their timbre and not their pitch. If we want to perform pitch detection, we want features that distinguish pitch and not timbre.\nThis process is known as feature extraction.\nLet's begin with twenty audio files: ten kick drum samples, and ten snare drum samples. Each audio file contains one drum hit.\nRead and store each signal:", "kick_signals = [\n librosa.load(p)[0] for p in Path().glob('audio/drum_samples/train/kick_*.mp3')\n]\nsnare_signals = [\n librosa.load(p)[0] for p in Path().glob('audio/drum_samples/train/snare_*.mp3')\n]\n\nlen(kick_signals)\n\nlen(snare_signals)", "Display the kick drum signals:", "plt.figure(figsize=(15, 6))\nfor i, x in enumerate(kick_signals):\n plt.subplot(2, 5, i+1)\n librosa.display.waveplot(x[:10000])\n plt.ylim(-1, 1)", "Display the snare drum signals:", "plt.figure(figsize=(15, 6))\nfor i, x in enumerate(snare_signals):\n plt.subplot(2, 5, i+1)\n librosa.display.waveplot(x[:10000])\n plt.ylim(-1, 1)", "Constructing a Feature Vector\nA feature vector is simply a collection of features. 
Here is a simple function that constructs a two-dimensional feature vector from a signal:", "def extract_features(signal):\n return [\n librosa.feature.zero_crossing_rate(signal)[0, 0],\n librosa.feature.spectral_centroid(signal)[0, 0],\n ]", "If we want to aggregate all of the feature vectors among signals in a collection, we can use a list comprehension as follows:", "kick_features = numpy.array([extract_features(x) for x in kick_signals])\nsnare_features = numpy.array([extract_features(x) for x in snare_signals])", "Visualize the differences in features by plotting separate histograms for each of the classes:", "plt.figure(figsize=(14, 5))\nplt.hist(kick_features[:,0], color='b', range=(0, 0.2), alpha=0.5, bins=20)\nplt.hist(snare_features[:,0], color='r', range=(0, 0.2), alpha=0.5, bins=20)\nplt.legend(('kicks', 'snares'))\nplt.xlabel('Zero Crossing Rate')\nplt.ylabel('Count')\n\nplt.figure(figsize=(14, 5))\nplt.hist(kick_features[:,1], color='b', range=(0, 4000), bins=30, alpha=0.6)\nplt.hist(snare_features[:,1], color='r', range=(0, 4000), bins=30, alpha=0.6)\nplt.legend(('kicks', 'snares'))\nplt.xlabel('Spectral Centroid (frequency bin)')\nplt.ylabel('Count')", "Feature Scaling\nThe features that we used in the previous example included zero crossing rate and spectral centroid. These two features are expressed using different units. This discrepancy can pose problems when performing classification later. Therefore, we will normalize each feature vector to a common range and store the normalization parameters for later use. \nMany techniques exist for scaling your features. For now, we'll use sklearn.preprocessing.MinMaxScaler. 
MinMaxScaler returns an array of scaled values such that each feature dimension is in the range -1 to 1.\nLet's concatenate all of our feature vectors into one feature table:", "feature_table = numpy.vstack((kick_features, snare_features))\nprint(feature_table.shape)", "Scale each feature dimension to be in the range -1 to 1:", "scaler = sklearn.preprocessing.MinMaxScaler(feature_range=(-1, 1))\ntraining_features = scaler.fit_transform(feature_table)\nprint(training_features.min(axis=0))\nprint(training_features.max(axis=0))", "Plot the scaled features:", "plt.scatter(training_features[:10,0], training_features[:10,1], c='b')\nplt.scatter(training_features[10:,0], training_features[10:,1], c='r')\nplt.xlabel('Zero Crossing Rate')\nplt.ylabel('Spectral Centroid')", "&larr; Back to Index" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
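The MinMaxScaler step in the notebook above can be reproduced by hand. A minimal pure-Python sketch of scaling one feature column into [-1, 1] (the same arithmetic as `MinMaxScaler(feature_range=(-1, 1))` applied per feature, not scikit-learn's actual implementation):

```python
def minmax_scale(values, lo=-1.0, hi=1.0):
    """Scale one feature column into [lo, hi] using its own min and max."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        # Degenerate column: every value identical; map everything to lo.
        return [lo] * len(values)
    return [lo + (hi - lo) * (v - vmin) / (vmax - vmin) for v in values]

scaled = minmax_scale([0.0, 5.0, 10.0])  # [-1.0, 0.0, 1.0]
```

Remember that, as in the notebook, the same min/max learned on the training set must be reused to transform any later test data.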
lemonyhermit/CodingYoga
python-for-developers/Chapter5/.ipynb_checkpoints/Chapter5_Types-checkpoint.ipynb
gpl-2.0
[ "Python for Developers\nFirst Edition\nChapter 5: Types\n\nVariables in the Python interpreter are created by assignment and destroyed by the garbage collector, when there are no more references to them.\nVariable names must start with a letter or underscore (_) and be followed by letters, digits or underscores (_).  Uppercase and lowercase letters are considered different.\nThere are several pre-defined simple types of data in Python, such as:\n\nNumbers (integer, real, complex, ... )\nText\n\nFurthermore, there are types that function as collections. The main ones are:\n\nList\nTuple\nDictionary\n\nPython types can be:\n\nMutable: allow the contents of the variables to be changed.\nImmutable: do not allow the contents of variables to be changed.\n\nIn Python, variable names are references that can be changed at execution time.\nThe most common types and routines are implemented in the form of builtins, i.e. they are always available at runtime, without the need to import any library.\nNumbers\nPython provides some numeric types as builtins:\n\nInteger (int): i = 1\nFloating Point real (float): f = 3.14\nComplex (complex): c = 3 + 4j\n\nIn addition to the conventional integers, there are also long integers, whose dimensions are arbitrary and limited by the available memory. Conversions between integer and long are performed automatically. 
The builtin function int() can be used to convert other types to integer, including base changes.\nExample:", "# Converting real to integer\nprint 'int(3.14) =', int(3.14)\n\n# Converting integer to real\nprint 'float(5) =', float(5)\n\n# Calculation between integer and real results in real\nprint '5.0 / 2 + 3 = ', 5.0 / 2 + 3\n\n# Integers in other bases\nprint \"int('20', 8) =\", int('20', 8) # base 8\nprint \"int('20', 16) =\", int('20', 16) # base 16\n\n# Operations with complex numbers\nc = 3 + 4j\nprint 'c =', c\nprint 'Real Part:', c.real\nprint 'Imaginary Part:', c.imag\nprint 'Conjugate:', c.conjugate()", "The real numbers can also be represented in scientific notation, for example: 1.2e22.\nPython has a number of defined operators for handling numbers through arithmetic calculations, comparison operations (that test whether a condition is true or false) or bitwise processing (where the numbers are processed in binary form).\nArithmetic Operations:\n\nSum (+)\nDifference (-)\nMultiplication (&#42;)\nDivision (/): between two integers the result is equal to the integer division. In other cases, the result is real.\nInteger Division (//): the result is truncated to the next lower integer, even when applied to real numbers, but in this case the result type is real too.\nModulo (%): returns the remainder of the division.\nPower (&#42;&#42;): can be used to calculate roots, through fractional exponents (e.g. 100 ** 0.5).\nPositive (+)\nNegative (-)\n\nComparison Operations:\n\nLess than (<)\nGreater than (>)\nLess than or equal to (<=)\nGreater than or equal to (>=)\nEqual to (==)\nNot equal to (!=)\n\nBitwise Operations:\n\nLeft Shift (<<)\nRight Shift (>>)\nAnd (&)\nOr (|)\nExclusive Or (^)\nInversion (~)\n\nDuring the operations, numbers are converted appropriately (e.g. 
(1.5+4j) + 3 gives 4.5+4j).\nBesides operators, there are also some builtin features to handle numeric types: abs(), which returns the absolute value of the number, oct(), which converts to octal, hex(), which converts to hexadecimal, pow(), which raises a number to the power of another, and round(), which returns a real number with the specified rounding.\nText\nStrings are Python builtins for handling text. As they are immutable, you cannot add, remove or change any character in a string. To perform these operations, Python needs to create a new string.\nTypes:\n\nStandard String: s = 'Led Zeppelin'\nUnicode String: u = u'Björk'\n\nThe standard string can be converted to unicode by using the function unicode().\nString initializations can be made:\n\nWith single or double quotes.\nOn several consecutive lines, provided that it's between three single or double quotes.\nWithout escape-sequence expansion (example: s = r'\\n', where s will contain the characters \\ and n).\n\nString Operations:", "s = 'Camel'\n\n# Concatenation\nprint 'The ' + s + ' ran away!'\n\n# Interpolation\nprint 'Size of %s => %d' % (s, len(s))\n\n# String processed as a sequence\nfor ch in s: print ch\n\n# Strings are objects\nif s.startswith('C'): print s.upper()\n\n# what will happen? \nprint 3 * s\n# 3 * s is consistent with s + s + s", "The operator % is used for string interpolation. The interpolation is more efficient in use of memory than the conventional concatenation.\nSymbols used in the interpolation:\n\n%s: string.\n%d: integer.\n%o: octal.\n%x: hexadecimal.\n%f: real.\n%e: real exponential.\n%%: percent sign.\n\nSymbols can be used to display numbers in various formats.\nExample:", "# Leading zeros\nprint 'Now is %02d:%02d.' 
% (16, 30)\n\n# Real (the number after the decimal point specifies how many decimal digits)\nprint 'Percent: %.1f%%, Exponential:%.2e' % (5.333, 0.00314)\n\n# Octal and hexadecimal\nprint 'Decimal: %d, Octal: %o, Hexadecimal: %x' % (10, 10, 10)", "Since version 2.6, in addition to the interpolation operator %, the string method format() and the builtin function format() are available.\nExamples:", "musicians = [('Page', 'guitarist', 'Led Zeppelin'),\n('Fripp', 'guitarist', 'King Crimson')]\n\n# Parameters are identified by order\nmsg = '{0} is {1} of {2}'\n\nfor name, function, band in musicians:\n print(msg.format(name, function, band))\n\n# Parameters are identified by name\nmsg = '{greeting}, it is {hour:02d}:{minute:02d}'\n\nprint msg.format(greeting='Good Morning', hour=7, minute=30)\n\n# Builtin function format()\nprint 'Pi =', format(3.14159, '.3e')", "The function format() can be used only to format one piece of data each time.\nSlices of strings can be obtained by adding indexes between brackets after a string.\n\nPython indexes:\n\nStart with zero.\nCount from the end if they are negative.\nCan be defined as sections, in the form [start: end + 1: step]. If the start is not set, it defaults to zero. If end + 1 is not set, it defaults to the size of the object. 
The step (between characters), if not set, is 1.\n\nIt is possible to reverse strings by using a negative step:", "print 'Python'[::-1]\n# shows: nohtyP", "Various functions for dealing with text are implemented in the module string.", "import string\n\n# the alphabet\na = string.ascii_letters\n\n# Shifting the alphabet left by one\nb = a[1:] + a[0]\n\n# The function maketrans() creates a translation table\n# from the characters of both strings it receives as parameters.\n# The characters not present in the table will be \n# copied to the output.\ntab = string.maketrans(a, b)\n\n# The message...\nmsg = '''This text will be translated..\nIt will become very strange.\n'''\n# The function translate() uses the translation table\n# created by maketrans() to translate the string\nprint string.translate(msg, tab)", "The module also implements a type called Template, which is a model string that can be filled through a dictionary. Identifiers begin with a dollar sign ($) and may be surrounded by curly braces, to avoid ambiguity.\nExample:", "import string\n\n# Creates a template string\nst = string.Template('$warning occurred in $when')\n\n# Fills the model with a dictionary\ns = st.substitute({'warning': 'Lack of electricity',\n 'when': 'April 3, 2002'})\n\n# Shows:\n# Lack of electricity occurred in April 3, 2002\nprint s", "It is possible to use mutable strings in Python through the UserString module, which defines the MutableString type:", "import UserString\n\ns = UserString.MutableString('Python')\ns[0] = 'p'\n\nprint s # shows \"python\"", "Mutable strings are less efficient than immutable strings, as they are more complex structures, which is reflected in increased consumption of resources (CPU and memory).\nUnicode strings can be converted to conventional strings through the encode() method, and the reverse path can be done with the decode() method.\nExample:", "# Unicode String \nu = u'Hüsker Dü'\n# Convert to str\ns = u.encode('latin1')\nprint 
s, '=>', type(s)\n\n# String str\ns = 'Hüsker Dü'\nu = s.decode('latin1')\n\nprint repr(u), '=>', type(u)", "To use both methods, it is necessary to pass the desired encoding as an argument. The most used are \"latin1\" and \"utf8\".\nLists\nLists are collections of heterogeneous objects, which can be of any type, including other lists.\nLists in Python are mutable and can be changed at any time. Lists can be sliced in the same way as strings, but as lists are mutable, it is possible to make assignments to the list items.\nSyntax:\nlist = [a, b, ..., z]\n\nCommon operations with lists:", "# a new list: 70s Brit Progs\nprogs = ['Yes', 'Genesis', 'Pink Floyd', 'ELP']\n\n# processing the entire list\nfor prog in progs:\n print prog\n\n# Changing the last element\nprogs[-1] = 'King Crimson'\n\n# Including\nprogs.append('Camel')\n\n# Removing\nprogs.remove('Pink Floyd')\n\n# Ordering \nprogs.sort()\n\n# Reversing\nprogs.reverse()\n\n# prints with number order\nfor i, prog in enumerate(progs):\n print i + 1, '=>', prog\n\n# prints from the second item\nprint progs[1:]", "The function enumerate() returns a tuple of two elements in each iteration: a sequence number and an item from the corresponding sequence.\nThe list has a pop() method that helps the implementation of queues and stacks:", "my_list = ['A', 'B', 'C']\nprint 'list:', my_list\n\n# The empty list is evaluated as false\nwhile my_list:\n # In queues, the first item is the first to go out\n # pop(0) removes and returns the first item \n print 'Left', my_list.pop(0), ', remain', len(my_list)\n\n# More items on the list\nmy_list += ['D', 'E', 'F']\nprint 'list:', my_list\n\nwhile my_list:\n # On stacks, the first item is the last to go out\n # pop() removes and returns the last item\n print 'Left', my_list.pop(), ', remain', len(my_list)", "The sort (sort) and reversal (reverse) operations are performed in place and do not create new lists.\nTuples\nSimilar to lists, but immutable: it's not possible to 
append, delete or make assignments to the items.\nSyntax:\nmy_tuple = (a, b, ..., z)\n\nThe parentheses are optional.\nFeature: a tuple with only one element is represented as:\nt1 = (1,)\nThe tuple elements can be referenced the same way as the elements of a list:\nfirst_element = tuple[0]\n\nLists can be converted into tuples:\nmy_tuple = tuple(my_list)\n\nAnd tuples can be converted into lists:\nmy_list = list(my_tuple)\n\nWhile a tuple can contain mutable elements, those elements cannot be replaced by assignment, as this would change the reference to the object.\nExample (using the interactive mode):\n&gt;&gt;&gt; t = ([1, 2], 4)\n&gt;&gt;&gt; t[0].append(3)\n&gt;&gt;&gt; t\n([1, 2, 3], 4)\n&gt;&gt;&gt; t[0] = [1, 2, 3]\nTraceback (most recent call last):\n File \"&lt;input&gt;\", line 1, in ?\nTypeError: object does not support item assignment\n&gt;&gt;&gt;\n\nTuples are more efficient than conventional lists, as they consume less computing resources (memory) because they are simpler structures, in the same way that immutable strings are simpler than mutable strings.\nOther types of sequences\nAlso in the builtins, Python provides:\n\nset: a mutable, unordered collection of unique elements (no repetitions).\nfrozenset: an immutable, unordered collection of unique elements.\n\nBoth types implement set operations, such as: union, intersection and difference.\nExample:", "# Data sets\ns1 = set(range(3))\ns2 = set(range(10, 7, -1))\ns3 = set(range(2, 10, 2))\n\n# Shows the data\nprint 's1:', s1, '\\ns2:', s2, '\\ns3:', s3\n\n# Union\ns1s2 = s1.union(s2)\nprint 'Union of s1 and s2:', s1s2\n\n# Difference\nprint 'Difference with s3:', s1s2.difference(s3)\n\n# Intersection\nprint 'Intersection with s3:', s1s2.intersection(s3)\n\n# Tests if a set includes the other\nif s1.issuperset([1, 2]):\n print 's1 includes 1 and 2'\n\n# Tests if there are no common elements\nif s1.isdisjoint(s2):\n print 's1 and s2 have no common elements'", "When one list is converted to a set, the repetitions are discarded.\nIn version 
2.6, a builtin type for a mutable list of characters, called bytearray, is also available.\nDictionaries\nA dictionary is a collection of associations between a unique key and a corresponding value. Dictionaries are mutable, like lists.\nThe key must be of an immutable type, usually strings, but can also be tuples or numeric types. On the other hand, the items of dictionaries can be either mutable or immutable. The Python dictionary provides no guarantee that the keys are ordered.\nSyntax:\ndictionary = {'a': a, 'b': b, ..., 'z': z}\n\nStructure:\n\nExample of a dictionary:\ndic = {'name': 'Shirley Manson', 'band': 'Garbage'}\n\nAccessing elements:\nprint dic['name']\n\nAdding elements:\ndic['album'] = 'Version 2.0'\n\nRemoving one element from a dictionary:\ndel dic['album']\n\nGetting the items, keys and values:\nitems = dic.items()\nkeys = dic.keys()\nvalues = dic.values()\n\nExamples with dictionaries:", "# Progs and their albums\nprogs = {'Yes': ['Close To The Edge', 'Fragile'],\n 'Genesis': ['Foxtrot', 'The Nursery Crime'],\n 'ELP': ['Brain Salad Surgery']}\n\n# More progs\nprogs['King Crimson'] = ['Red', 'Discipline']\n\n# items() returns a list of \n# tuples with key and value \nfor prog, albums in progs.items():\n print prog, '=>', albums\n\n# If 'ELP' is present, remove it\nif progs.has_key('ELP'):\n del progs['ELP']", "Sparse matrix example:", "# Sparse Matrix implemented\n# with dictionary\n\n# Sparse Matrix is a structure\n# that only stores values that are\n# present in the matrix\n\ndim = 6, 12\nmat = {}\n\n# Tuples are immutable\n# Each tuple represents\n# a position in the matrix\nmat[3, 7] = 3\nmat[4, 6] = 5\nmat[6, 3] = 7\nmat[5, 4] = 6\nmat[2, 9] = 4\nmat[1, 0] = 9\n\nfor lin in range(dim[0]):\n for col in range(dim[1]):\n # Method get(key, value)\n # returns the key value\n # in dictionary or \n # if the key doesn't exist\n # returns the second argument\n print mat.get((lin, col), 0),\n print", "Generating the sparse matrix:", "# Matrix in form of 
string\nmatrix = '''0 0 0 0 0 0 0 0 0 0 0 0\n9 0 0 0 0 0 0 0 0 0 0 0\n0 0 0 0 0 0 0 0 0 4 0 0\n0 0 0 0 0 0 0 3 0 0 0 0\n0 0 0 0 0 0 5 0 0 0 0 0\n0 0 0 0 6 0 0 0 0 0 0 0'''\n\nmat = {}\n\n# Split the matrix into lines\nfor row, line in enumerate(matrix.splitlines()):\n\n # Split the line into columns\n for col, column in enumerate(line.split()):\n\n column = int(column)\n # Place the column in the result,\n # if it is different from zero\n if column:\n mat[row, col] = column\n\nprint mat\n# The counting starts with zero\nprint 'Complete matrix size:', (row + 1) * (col + 1)\nprint 'Sparse matrix size:', len(mat)", "The sparse matrix is a good solution for processing structures in which most of the items remain empty, like spreadsheets for example.\nTrue, False and Null\nIn Python, the boolean type (bool) is a specialization of the integer type (int). The True value is equal to 1, while the False value is equal to zero.\nThe following values are considered false:\n\nFalse.\nNone (null).\n0 (zero).\n'' (empty string).\n[] (empty list).\n() (empty tuple).\n{} (empty dictionary).\nOther structures whose size equals zero.\n\nAll other objects are considered true.\nThe object None, which is of type NoneType, represents null in Python and is evaluated as false by the interpreter.\nBoolean Operators\nWith boolean operators it is possible to build more complex conditions to control conditional jumps and loops.\nThe boolean operators in Python are: and, or, not, is, in.\n\nand: returns a true value if and only if both expressions it receives are true.\nor: returns a false value if and only if both expressions it receives are false.\nnot: returns false if it receives a true expression and vice versa.\nis: returns true if it receives two references to the same object, false otherwise.\nin: returns true if it receives an item and a list and the item occurs one or more times in the list, false otherwise.\n\nThe result of the operator and is evaluated 
as follows: if the first expression is true, the result will be the second expression, otherwise it will be the first. \nAs for the operator or, if the first expression is false, the result will be the second expression, otherwise it will be the first. For the other operators, the return will be of type bool (True or False).\nExamples:", "print 0 and 3 # Shows 0\nprint 2 and 3 # Shows 3\n\nprint 0 or 3 # Shows 3\nprint 2 or 3 # Shows 2\n\nprint not 0 # Shows True\nprint not 2 # Shows False\nprint 2 in (2, 3) # Shows True\nprint 2 is 3 # Shows False", "Besides boolean operators, there are the functions all(), which returns true when all of the items in the sequence passed as parameters are true, and any(), which returns true if any item is true." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
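The chapter above targets Python 2 throughout (print statements, string.maketrans, has_key, UserString). For readers on Python 3, the translation-table example would read roughly as follows; note that maketrans moved to a static method on str, and translate() is called directly on the string:

```python
import string

# Python 3 port of the chapter's Caesar-style translation example.
a = string.ascii_letters      # the alphabet, lower and upper case
b = a[1:] + a[0]              # the same alphabet shifted left by one
tab = str.maketrans(a, b)     # translation table: a->b, b->c, ..., Z->a

msg = 'This text will be translated..'
print(msg.translate(tab))     # Uijt ufyu xjmm cf usbotmbufe..
```

The other Python 2 constructs change similarly: has_key(k) becomes k in dic, and UserString.MutableString was removed entirely in Python 3.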
therealAJ/python-sandbox
data-science/learning/ud1/DataScience/ItemBasedCF.ipynb
gpl-3.0
[ "Item-Based Collaborative Filtering\nAs before, we'll start by importing the MovieLens 100K data set into a pandas DataFrame:", "import pandas as pd\n\nr_cols = ['user_id', 'movie_id', 'rating']\nratings = pd.read_csv('e:/sundog-consult/udemy/datascience/ml-100k/u.data', sep='\\t', names=r_cols, usecols=range(3))\n\nm_cols = ['movie_id', 'title']\nmovies = pd.read_csv('e:/sundog-consult/udemy/datascience/ml-100k/u.item', sep='|', names=m_cols, usecols=range(2))\n\nratings = pd.merge(movies, ratings)\n\nratings.head()", "Now we'll pivot this table to construct a nice matrix of users and the movies they rated. NaN indicates missing data, or movies that a given user did not watch:", "userRatings = ratings.pivot_table(index=['user_id'],columns=['title'],values='rating')\nuserRatings.head()", "Now the magic happens - pandas has a built-in corr() method that will compute a correlation score for every column pair in the matrix! This gives us a correlation score between every pair of movies (where at least one user rated both movies - otherwise NaN's will show up.) That's amazing!", "corrMatrix = userRatings.corr()\ncorrMatrix.head()", "However, we want to avoid spurious results that stem from just a handful of users who happened to rate the same pair of movies. In order to restrict our results to movies that lots of people rated together - and also give us more popular results that are more easily recognizable - we'll use the min_periods argument to throw out results where fewer than 100 users rated a given movie pair:", "corrMatrix = userRatings.corr(method='pearson', min_periods=100)\ncorrMatrix.head()", "Now let's produce some movie recommendations for user ID 0, who I manually added to the data set as a test case. This guy really likes Star Wars and The Empire Strikes Back, but hated Gone with the Wind. 
I'll extract his ratings from the userRatings DataFrame, and use dropna() to get rid of missing data (leaving me only with a Series of the movies I actually rated:)", "myRatings = userRatings.loc[0].dropna()\nmyRatings", "Now, let's go through each movie I rated one at a time, and build up a list of possible recommendations based on the movies similar to the ones I rated.\nSo for each movie I rated, I'll retrieve the list of similar movies from our correlation matrix. I'll then scale those correlation scores by how well I rated the movie they are similar to, so movies similar to ones I liked count more than movies similar to ones I hated:", "simCandidates = pd.Series()\nfor i in range(0, len(myRatings.index)):\n print \"Adding sims for \" + myRatings.index[i] + \"...\"\n # Retrieve similar movies to this one that I rated\n sims = corrMatrix[myRatings.index[i]].dropna()\n # Now scale its similarity by how well I rated this movie\n sims = sims.map(lambda x: x * myRatings[i])\n # Add the score to the list of similarity candidates\n simCandidates = simCandidates.append(sims)\n \n#Glance at our results so far:\nprint \"sorting...\"\nsimCandidates.sort_values(inplace = True, ascending = False)\nprint simCandidates.head(10)", "This is starting to look like something useful! Note that some of the same movies came up more than once, because they were similar to more than one movie I rated. We'll use groupby() to add together the scores from movies that show up more than once, so they'll count more:", "simCandidates = simCandidates.groupby(simCandidates.index).sum()\n\nsimCandidates.sort_values(inplace = True, ascending = False)\nsimCandidates.head(10)", "The last thing we have to do is filter out movies I've already rated, as recommending a movie I've already watched isn't helpful:", "filteredSims = simCandidates.drop(myRatings.index)\nfilteredSims.head(10)", "There we have it!\nExercise\nCan you improve on these results? 
Perhaps a different method or min_periods value on the correlation computation would produce more interesting results.\nAlso, it looks like some movies similar to Gone with the Wind - which I hated - made it through to the final list of recommendations. Perhaps movies similar to ones the user rated poorly should actually be penalized, instead of just scaled down?\nThere are also probably some outliers in the user rating data set - some users may have rated a huge number of movies and have a disproportionate effect on the results. Go back to earlier lectures to learn how to identify these outliers, and see if removing them improves things.\nFor an even bigger project: we're evaluating the result qualitatively here, but we could actually apply train/test and measure our ability to predict user ratings for movies they've already watched. Whether that's actually a measure of a \"good\" recommendation is debatable, though!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
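Stripped of pandas, the scoring loop in the notebook above is just: weight each neighbour's correlation by the user's rating, sum over duplicates, and drop what was already rated. A dictionary-based sketch of the same idea (illustrative toy data, not the MovieLens results):

```python
def recommend(my_ratings, similarity, top_n=5):
    """my_ratings: {movie: rating}; similarity: {movie: {neighbour: correlation}}."""
    scores = {}
    for movie, rating in my_ratings.items():
        for neighbour, corr in similarity.get(movie, {}).items():
            # Scale similarity by how well I rated this movie, summing duplicates
            # (the pandas version does this with map() and groupby().sum()).
            scores[neighbour] = scores.get(neighbour, 0.0) + corr * rating
    for movie in my_ratings:          # don't recommend what I already rated
        scores.pop(movie, None)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

sims = {'Star Wars': {'Empire Strikes Back': 0.75, 'Gone with the Wind': -0.2},
        'Gone with the Wind': {'Empire Strikes Back': -0.1}}
picks = recommend({'Star Wars': 5.0, 'Gone with the Wind': 1.0}, sims)
# roughly [('Empire Strikes Back', 3.65)]
```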
RyanAlberts/Springbaord-Capstone-Project
Mini_Project_Linear_Regression.ipynb
mit
[ "Regression in Python\n\nThis is a very quick run-through of some basic statistical concepts, adapted from Lab 4 in Harvard's CS109 course. Please feel free to try the original lab if you're feeling ambitious :-) The CS109 git repository also has the solutions if you're stuck.\n\nLinear Regression Models\nPrediction using linear regression\nSome re-sampling methods \nTrain-Test splits\nCross Validation\n\n\n\nLinear regression is used to model and predict continuous outcomes while logistic regression is used to model binary outcomes. We'll see some examples of linear regression as well as Train-test splits.\nThe packages we'll cover are: statsmodels, seaborn, and scikit-learn. While we don't explicitly teach statsmodels and seaborn in the Springboard workshop, those are great libraries to know.\n\n<img width=600 height=300 src=\"https://imgs.xkcd.com/comics/sustainable.png\"/>", "# special IPython command to prepare the notebook for matplotlib and other libraries\n%pylab inline \n\nimport numpy as np\nimport pandas as pd\nimport scipy.stats as stats\nimport matplotlib.pyplot as plt\nimport sklearn\n\nimport seaborn as sns\n\n# special matplotlib argument for improved plots\nfrom matplotlib import rcParams\nsns.set_style(\"whitegrid\")\nsns.set_context(\"poster\")\n", "Part 1: Linear Regression\nPurpose of linear regression\n\n<div class=\"span5 alert alert-info\">\n\n<p> Given a dataset $X$ and $Y$, linear regression can be used to: </p>\n<ul>\n <li> Build a <b>predictive model</b> to predict future values of $Y$ for new observations $X_i$ whose $Y$ value is unknown. </li>\n <li> Model the <b>strength of the relationship</b> between each independent variable $X_i$ and $Y$</li>\n <ul>\n <li> Sometimes not all $X_i$ will have a relationship with $Y$</li>\n <li> Need to figure out which $X_i$ contributes most information to determine $Y$ </li>\n </ul>\n <li>Linear regression is used in so many applications that I won't warrant this with examples. 
It is in many cases, the first pass prediction algorithm for continuous outcomes. </li>\n</ul>\n</div>\n\nA brief recap (feel free to skip if you don't care about the math)\n\nLinear Regression is a method to model the relationship between a set of independent variables $X$ (also known as explanatory variables, features, predictors) and a dependent variable $Y$. This method assumes that each predictor $X$ is linearly related to the dependent variable $Y$. \n$$ Y = \\beta_0 + \\beta_1 X + \\epsilon$$\nwhere $\\epsilon$ is considered as an unobservable random variable that adds noise to the linear relationship. This is the simplest form of linear regression (one variable); we'll call this the simple model. \n\n\n$\\beta_0$ is the intercept of the linear model\n\n\nMultiple linear regression is when you have more than one independent variable\n\n$X_1$, $X_2$, $X_3$, $\\ldots$\n\n\n\n$$ Y = \\beta_0 + \\beta_1 X_1 + \\ldots + \\beta_p X_p + \\epsilon$$ \n\nBack to the simple model. The model in linear regression is that the conditional mean of $Y$ given the values in $X$ is expressed as a linear function. \n\n$$ y = f(x) = E(Y | X = x)$$ \n\nhttp://www.learner.org/courses/againstallodds/about/glossary.html\n\nThe goal is to estimate the coefficients (e.g. $\\beta_0$ and $\\beta_1$). We represent the estimates of the coefficients with a \"hat\" on top of the letter. \n\n$$ \\hat{\\beta}_0, \\hat{\\beta}_1 $$\n\nOnce you estimate the coefficients $\\hat{\\beta}_0$ and $\\hat{\\beta}_1$, you can use these to predict new values of $Y$\n\n$$\\hat{y} = \\hat{\\beta}_0 + \\hat{\\beta}_1 x_1$$\n\nHow do you estimate the coefficients? 
\nThere are many ways to fit a linear regression model\nThe method called least squares is one of the most common methods\nWe will discuss least squares today\n\n\n\nEstimating $\\hat\\beta$: Least squares\n\nLeast squares is a method that estimates the coefficients of a linear model by minimizing the following sum: \n$$ S = \\sum_{i=1}^N r_i^2 = \\sum_{i=1}^N (y_i - (\\beta_0 + \\beta_1 x_i))^2 $$\nwhere $N$ is the number of observations. \n\nWe will not go into the mathematical details, but the least squares estimates $\\hat{\\beta}_0$ and $\\hat{\\beta}_1$ minimize the sum of the squared residuals $r_i = y_i - (\\beta_0 + \\beta_1 x_i)$ in the model (i.e. they make the difference between the observed $y_i$ and the linear model $\\beta_0 + \\beta_1 x_i$ as small as possible). \n\nThe solution can be written in compact matrix notation as\n$$\\hat\\beta = (X^T X)^{-1}X^T Y$$ \nWe wanted to show you this in case you remember linear algebra: in order for this solution to exist we need $X^T X$ to be invertible. Of course this requires a few extra assumptions, $X$ must be full rank so that $X^T X$ is invertible, etc. This is important for us because it means that having redundant features in our regression models will lead to poorly fitting (and unstable) models. We'll see an implementation of this in the extra linear regression example.\nNote: The \"hat\" means it is an estimate of the coefficient. \n\nPart 2: Boston Housing Data Set\nThe Boston Housing data set contains information about the housing values in suburbs of Boston. This dataset was originally taken from the StatLib library, which is maintained at Carnegie Mellon University, and is now available on the UCI Machine Learning Repository. 
\nLoad the Boston Housing data set from sklearn\n\nThis data set is available in the sklearn python module which is how we will access it today.", "from sklearn.datasets import load_boston\nboston = load_boston()\n\nboston.keys()\n\nboston.data.shape\n\n# Print column names\nprint boston.feature_names\n\n# Print description of Boston housing data set\nprint boston.DESCR", "Now let's explore the data set itself.", "bos = pd.DataFrame(boston.data)\nbos.head()", "There are no column names in the DataFrame. Let's add those.", "bos.columns = boston.feature_names\nbos.head()", "Now we have a pandas DataFrame called bos containing all the data we want to use to predict Boston Housing prices. Let's create a variable called PRICE which will contain the prices. This information is contained in the target data.", "print boston.target.shape\n\nbos['PRICE'] = boston.target\nbos.head()", "EDA and Summary Statistics\n\nLet's explore this data set. First we use describe() to get basic summary statistics for each of the columns.", "bos.describe()", "Scatter plots\n\nLet's look at some scatter plots for three variables: 'CRIM', 'RM' and 'PTRATIO'. \nWhat kind of relationship do you see? e.g. positive, negative? linear? non-linear?", "plt.scatter(bos.CRIM, bos.PRICE)\nplt.xlabel(\"Per capita crime rate by town (CRIM)\")\nplt.ylabel(\"Housing Price\")\nplt.title(\"Relationship between CRIM and Price\")", "Your turn: Create scatter plots between RM and PRICE, and PTRATIO and PRICE. What do you notice?", "#your turn: scatter plot between *RM* and *PRICE*\n\nplt.scatter(bos.RM, bos.PRICE)\nplt.xlabel(\"Average Number of Rooms per Dwelling\")\nplt.ylabel(\"Housing Price\")\nplt.title(\"Relationship between No. 
of Rooms and Price\");\n\n#your turn: scatter plot between *PTRATIO* and *PRICE*\n\nplt.scatter(bos.PTRATIO, bos.PRICE)\nplt.xlabel(\"Pupil-Teacher Ratio by town\")\nplt.ylabel(\"Housing Price\")\nplt.title(\"Relationship between PTRatio and Price\");", "Your turn: What are some other numeric variables of interest? Plot scatter plots with these variables and PRICE.", "#your turn: create some other scatter plots\n", "Scatter Plots using Seaborn\n\nSeaborn is a cool Python plotting library built on top of matplotlib. It provides convenient syntax and shortcuts for many common types of plots, along with better-looking defaults.\nWe can also use seaborn regplot for the scatterplot above. This provides automatic linear regression fits (useful for data exploration later on). Here's one example below.", "sns.regplot(y=\"PRICE\", x=\"RM\", data=bos, fit_reg = True)", "Histograms\n\nHistograms are a useful way to visually summarize the statistical properties of numeric variables. They can give you an idea of the mean and the spread of the variables as well as outliers.", "plt.hist(bos.CRIM)\nplt.title(\"CRIM\")\nplt.xlabel(\"Crime rate per capita\")\nplt.ylabel(\"Frequency\")\nplt.show()", "Your turn: Plot separate histograms and one for RM, one for PTRATIO. Any interesting observations?", "#your turn\n\nplt.hist(bos.PTRATIO)\nplt.title(\"PTRATIO\")\nplt.xlabel(\"Pupil-Teacher Ratio\")\nplt.ylabel(\"Frequency\")\nplt.show()", "Linear regression with Boston housing data example\n\nHere, \n$Y$ = boston housing prices (also called \"target\" data in python)\nand\n$X$ = all the other features (or independent variables)\nwhich we will use to fit a linear regression model and predict Boston housing prices. We will use the least squares method as the way to estimate the coefficients. \nWe'll use two ways of fitting a linear regression. 
We recommend the first but the second is also powerful in its features.\nFitting Linear Regression using statsmodels\n\nStatsmodels is a great Python library for a lot of basic and inferential statistics. It also provides basic regression functions using an R-like syntax, so it's commonly used by statisticians. While we don't cover statsmodels officially in the Data Science Intensive, it's a good library to have in your toolbox. Here's a quick example of what you could do with it.", "# Import regression modules\n# ols - stands for Ordinary least squares, we'll use this\nimport statsmodels.api as sm\nfrom statsmodels.formula.api import ols\n\n# statsmodels works nicely with pandas dataframes\n# The thing inside the \"quotes\" is called a formula, a bit on that below\nm = ols('PRICE ~ RM',bos).fit()\nprint m.summary()", "Interpreting coefficients\nThere is a ton of information in this output. But we'll concentrate on the coefficient table (middle table). We can interpret the RM coefficient (9.1021) by first noticing that the p-value (under P&gt;|t|) is so small, basically zero. We can interpret the coefficient as follows: if we compare two groups of towns, one where the average number of rooms is say $5$ and the other group is the same except that they all have $6$ rooms, then for these two groups the average difference in house prices is about $9.1$ (in thousands) so about $\\$9,100$ difference. The confidence interval gives us a range of plausible values for this difference, about ($\\$8,279, \\$9,925$), definitely not chump change. \nstatsmodels formulas\n\nThis formula notation will seem familiar to R users, but will take some getting used to for people coming from other languages or who are new to statistics.\nThe formula gives instructions for a general structure for a regression call. For statsmodels (ols or logit) calls you need to have a Pandas dataframe with column names that you will add to your formula. 
In the below example you need a pandas data frame that includes the columns named (Outcome, X1,X2, ...), but you don't need to build a new dataframe for every regression. Use the same dataframe with all these things in it. The structure is very simple:\nOutcome ~ X1\nBut of course we want to be able to handle more complex models, for example multiple regression is done like this:\nOutcome ~ X1 + X2 + X3\nThis is the very basic structure but it should be enough to get you through the homework. Things can get much more complex; for a quick run-down of further uses see the statsmodels help page.\nLet's see how our model actually fits our data. We can see below that there is a ceiling effect, we should probably look into that. Also, for large values of $Y$ we get underpredictions, most predictions are below the 45-degree gridlines. \nYour turn: Create a scatterplot between the predicted prices, available in m.fittedvalues, and the original prices. How does the plot look?", "# your turn\nplt.scatter(m.fittedvalues, bos.PRICE)\nplt.xlabel(\"Predicted Price\")\nplt.ylabel(\"Original Housing Price\")\nplt.title(\"Predicted vs. Original Prices\");", "Fitting Linear Regression using sklearn", "from sklearn.linear_model import LinearRegression\nX = bos.drop('PRICE', axis = 1)\n\n# This creates a LinearRegression object\nlm = LinearRegression()\nlm", "What can you do with a LinearRegression object?\n\nCheck out the scikit-learn docs here. We have listed the main functions here.\nMain functions | Description\n--- | --- \nlm.fit() | Fit a linear model\nlm.predict() | Predict Y using the linear model with estimated coefficients\nlm.score() | Returns the coefficient of determination (R^2). 
A measure of how well observed outcomes are replicated by the model, as the proportion of total variation of outcomes explained by the model\nWhat output can you get?", "# Look inside lm object\n#lm.", "Output | Description\n--- | --- \nlm.coef_ | Estimated coefficients\nlm.intercept_ | Estimated intercept \nFit a linear model\n\nThe lm.fit() function estimates the coefficients of the linear regression using least squares.", "# Use all 13 predictors to fit linear regression model\nlm.fit(X, bos.PRICE)", "Your turn: How would you change the model to not fit an intercept term? Would you recommend not having an intercept?\nEstimated intercept and coefficients\nLet's look at the estimated coefficients from the linear model using lm.intercept_ and lm.coef_. \nAfter we have fit our linear regression model using the least squares method, we want to see what are the estimates of our coefficients $\\beta_0$, $\\beta_1$, ..., $\\beta_{13}$: \n$$ \\hat{\\beta}_0, \\hat{\\beta}_1, \\ldots, \\hat{\\beta}_{13} $$", "print 'Estimated intercept coefficient:', lm.intercept_\n\nprint 'Number of coefficients:', len(lm.coef_)\n\n# The coefficients\npd.DataFrame(zip(X.columns, lm.coef_), columns = ['features', 'estimatedCoefficients'])", "Predict Prices\nWe can calculate the predicted prices ($\\hat{Y}_i$) using lm.predict. 
\n$$ \\hat{Y}_i = \\hat{\\beta}_0 + \\hat{\\beta}_1 X_1 + \\ldots + \\hat{\\beta}_{13} X_{13} $$", "# first five predicted prices\nlm.predict(X)[0:5]", "Your turn: \n\nHistogram: Plot a histogram of all the predicted prices\nScatter Plot: Let's plot the true prices compared to the predicted prices to see how they disagree (we did this with statsmodels before).", "# your turn\n\nplt.hist(lm.predict(X))\nplt.title(\"Predicted Prices\")\nplt.xlabel(\"Price\")\nplt.ylabel(\"Frequency\")\nplt.show()", "Residual sum of squares\nLet's calculate the residual sum of squares \n$$ S = \\sum_{i=1}^N r_i^2 = \\sum_{i=1}^N (y_i - (\\beta_0 + \\beta_1 x_i))^2 $$", "print np.sum((bos.PRICE - lm.predict(X)) ** 2)", "Mean squared error\n\nThis is simply the mean of the squared residuals.\nYour turn: Calculate the mean squared error and print it.", "#your turn\nprint np.mean((bos.PRICE - lm.predict(X)) ** 2)", "Relationship between PTRATIO and housing price\n\nTry fitting a linear regression model using only the 'PTRATIO' (pupil-teacher ratio by town)\nCalculate the mean squared error.", "lm = LinearRegression()\nlm.fit(X[['PTRATIO']], bos.PRICE)\n#pd.DataFrame(zip(X.columns, lm.coef_), columns = ['features', 'estimatedCoefficients'])\n\nprint np.mean((bos.PRICE - lm.predict(X[['PTRATIO']])) ** 2)\n\nmsePTRATIO = np.mean((bos.PRICE - lm.predict(X[['PTRATIO']])) ** 2)\nprint msePTRATIO", "We can also plot the fitted linear regression line.", "plt.scatter(bos.PTRATIO, bos.PRICE)\nplt.xlabel(\"Pupil-to-Teacher Ratio (PTRATIO)\")\nplt.ylabel(\"Housing Price\")\nplt.title(\"Relationship between PTRATIO and Price\")\n\nplt.plot(bos.PTRATIO, lm.predict(X[['PTRATIO']]), color='blue', linewidth=3)\nplt.show()", "Your turn\n\nTry fitting a linear regression model using three independent variables\n\n'CRIM' (per capita crime rate by town)\n'RM' (average number of rooms per dwelling)\n'PTRATIO' (pupil-teacher ratio by town)\n\nCalculate the mean squared error.", "# your turn\nlm = 
LinearRegression()\nlm.fit(X[['CRIM']], bos.PRICE)\n\nprint np.mean((bos.PRICE - lm.predict(X[['CRIM']])) ** 2)\n\nplt.scatter(bos.CRIM, bos.PRICE)\nplt.xlabel(\"Crime per Capita\")\nplt.ylabel(\"Housing Price\")\nplt.title(\"Relationship between CRIM and Price\")\n\nplt.plot(bos.CRIM, lm.predict(X[['CRIM']]), color='blue', linewidth=3)\nplt.show()", "Other important things to think about when fitting a linear regression model\n\n<div class=\"span5 alert alert-danger\">\n<ul>\n <li>**Linearity**. The dependent variable $Y$ is a linear combination of the regression coefficients and the independent variables $X$. </li>\n <li>**Constant standard deviation**. The SD of the dependent variable $Y$ should be constant for different values of X. \n <ul>\n <li>e.g. PTRATIO\n </ul>\n </li>\n <li> **Normal distribution for errors**. The $\\epsilon$ term we discussed at the beginning is assumed to be normally distributed. \n $$ \\epsilon_i \\sim N(0, \\sigma^2)$$\nSometimes the distributions of responses $Y$ may not be normally distributed at any given value of $X$. e.g. skewed positively or negatively. </li>\n<li> **Independent errors**. The observations are assumed to be obtained independently.\n <ul>\n <li>e.g. Observations across time may be correlated\n </ul>\n</li>\n</ul> \n\n</div>\n\nPart 3: Training and Test Data sets\nPurpose of splitting data into Training/testing sets\n\n<div class=\"span5 alert alert-info\">\n\n<p> Let's stick to the linear regression example: </p>\n<ul>\n <li> We built our model with the requirement that the model fit the data well. </li>\n <li> As a side-effect, the model will fit <b>THIS</b> dataset well. What about new data? </li>\n <ul>\n <li> We wanted the model for predictions, right?</li>\n </ul>\n <li> One simple solution: leave out some data (for <b>testing</b>) and <b>train</b> the model on the rest </li>\n <li> This also leads directly to the idea of cross-validation, next section. 
</li> \n</ul>\n</div>\n\n\nOne way of doing this is to create training and testing data sets manually.", "X_train = X[:-50]\nX_test = X[-50:]\nY_train = bos.PRICE[:-50]\nY_test = bos.PRICE[-50:]\nprint X_train.shape\nprint X_test.shape\nprint Y_train.shape\nprint Y_test.shape\nX_train.head()", "Another way is to split the data into random train and test subsets using the function train_test_split in sklearn.cross_validation. Here's the documentation.", "#X_train, X_test, Y_train, Y_test = sklearn.cross_validation.train_test_split(\n# X, bos.PRICE, test_size=0.33, random_state = 5)\nprint X_train.shape\nprint X_test.shape\nprint Y_train.shape\nprint Y_test.shape\nX_train", "Your turn: Let's build a linear regression model using our new training data sets. \n\nFit a linear regression model to the training set\nPredict the output on the test set", "# your turn\nlm = LinearRegression()\nlm.fit(X_train[['RM']], Y_train)\n\nprint np.mean((Y_train - lm.predict(X_train[['RM']])) ** 2)\n\nplt.scatter(X_train.RM, Y_train)\nplt.xlabel(\"No. of Rooms\")\nplt.ylabel(\"Housing Price\")\nplt.title(\"Relationship between No. of Rooms and Price\")\n\nplt.plot(X_train.RM, lm.predict(X_train[['RM']]), color='blue', linewidth=3)\nplt.show()", "Your turn:\nCalculate the mean squared error \n\nusing just the test data\nusing just the training data\n\nAre they pretty similar or very different? What does that mean?", "# your turn\nlm = LinearRegression()\nlm.fit(X_test[['RM']], Y_test)\n\nprint np.mean((Y_test - lm.predict(X_test[['RM']])) ** 2)\n\nplt.scatter(X_test.RM, Y_test)\nplt.xlabel(\"No. of Rooms\")\nplt.ylabel(\"Housing Price\")\nplt.title(\"Relationship between No. 
of Rooms and Price\")\n\nplt.plot(X_test.RM, lm.predict(X_test[['RM']]), color='blue', linewidth=3)\nplt.show()", "Residual plots", "plt.scatter(lm.predict(X_train), lm.predict(X_train) - Y_train, c='b', s=40, alpha=0.5)\nplt.scatter(lm.predict(X_test), lm.predict(X_test) - Y_test, c='g', s=40)\nplt.hlines(y = 0, xmin=0, xmax = 50)\nplt.title('Residual Plot using training (blue) and test (green) data')\nplt.ylabel('Residuals')", "Your turn: Do you think this linear regression model generalizes well on the test data?\nK-fold Cross-validation as an extension of this idea\n\n<div class=\"span5 alert alert-info\">\n\n<p> A simple extension of the Test/train split is called K-fold cross-validation. </p>\n\n<p> Here's the procedure:</p>\n<ul>\n <li> randomly assign your $n$ samples to one of $K$ groups. They'll each have about $n/K$ samples</li>\n <li> For each group $k$: </li>\n <ul>\n <li> Fit the model (e.g. run regression) on all data excluding the $k^{th}$ group</li>\n <li> Use the model to predict the outcomes in group $k$</li>\n <li> Calculate your prediction error for each observation in the $k^{th}$ group (e.g. $(Y_i - \\hat{Y}_i)^2$ for regression, $\\mathbb{1}(Y_i = \\hat{Y}_i)$ for logistic regression). </li>\n </ul>\n <li> Calculate the average prediction error across all samples $Err_{CV} = \\frac{1}{n}\\sum_{i=1}^n (Y_i - \\hat{Y}_i)^2$ </li>\n</ul>\n</div>\n\n\nLuckily you don't have to do this entire process all by hand (for loops, etc.) every single time; scikit-learn has a very nice implementation of this, have a look at the documentation.\nYour turn (extra credit): Implement K-Fold cross-validation using the procedure above and the Boston Housing data set using $K=4$. How does the average prediction error compare to the train-test split above?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
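The closed-form least-squares solution $\hat\beta = (X^T X)^{-1}X^T Y$ quoted in the notebook above can be checked directly with NumPy. This is a minimal sketch on a tiny noise-free synthetic dataset (not the notebook's Boston data), so the recovered coefficients are known in advance:

```python
import numpy as np

# Noise-free data generated from y = 2 + 3x
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 + 3.0 * x

# Design matrix with a leading column of ones for the intercept beta_0
X = np.column_stack([np.ones_like(x), x])

# Normal equations: beta_hat = (X^T X)^{-1} X^T y
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y

print(beta_hat)  # approximately [2, 3]
```

In practice `np.linalg.lstsq` (or the sklearn/statsmodels fits shown in the notebook) is preferred over forming the inverse explicitly, for exactly the numerical-stability reasons the $X^T X$ invertibility discussion hints at.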
google-research/google-research
action_gap_rl/notebooks/mode_regression.ipynb
apache-2.0
[ "Copyright 2020 Google LLC.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "#@title Default title text\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Goal\nWe want to build a model $h_\\theta(s) \\rightarrow a^*$ which predicts the mode $a^*$ of some target distribution, for which we have unnormalized log-probabilities, $y$.\nStated another way, we want to find an appropriate loss L s.t. it is tractable to solve\n$$\n\\text{argmin}_\\theta \\sum\\limits_{i=1}^N L(\\pi(a_i \\mid h_\\theta(s_i)), y_i)\n$$\nwhere $\\theta$ are model parameters, $h$ is a model that depends on context $s$ and outputs the mode of $\\pi$, and $\\pi(a \\mid \\text{mode})$ is the predicted probability of action $a$. $y$ is information about the target distribution. 
We want the mode of $\\pi$ to correspond to areas where $y$ is maximized.\nCandidate\n$$\nL(a, h_\\theta(s), y) = \\left\\lvert -\\lvert h_\\theta(s) - a\\rvert^p - y \\right\\rvert^{1/p}\n$$\nwhere $p$ is close to 0.", "import numpy as np\nimport tensorflow.compat.v2 as tf\nimport matplotlib.pyplot as plt\n\ntf.enable_v2_behavior()\n\nn = 20\nlower, upper = -2, 2\nmodes_x, modes_y = zip(*[ # (x, y)\n (-1., -20),\n (0.0, -20),\n (1., -20),\n])\n\nx = np.concatenate((modes_x, np.random.uniform(lower, upper, size=n-len(modes_x))))\ny = np.random.uniform(-1000, -300, size=n)\ny[:len(modes_x)] = modes_y\ny /= np.max(np.abs(y))\n\nplt.scatter(x, y)\nplt.show()", "Generalize the loss above to\n$$\nL(a, h_\\theta(s), y) = \\left\\lvert -\\lvert h_\\theta(s) - a\\rvert^p - y \\right\\rvert^{q}\n$$\nwhere $q = 1/p$. Bringing $p$ arbitrarily close to 0 converges to the non-differentiable $L^0$ loss. Choosing $p=0.1$ seems to be small enough for practical purposes.
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
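The candidate loss described in the notebook above can be probed numerically without TensorFlow. This sketch uses a made-up four-point toy dataset (not the notebook's random one) and checks that with small $p$ the minimizer lands on the sample carrying the largest unnormalized log-probability:

```python
import numpy as np

# Toy samples x with unnormalized log-probabilities y;
# the "mode" is at x = 1.0, where y is largest
x = np.array([-1.0, 0.0, 1.0, 2.0])
y = np.array([-0.9, -0.8, -0.1, -0.7])

def loss(t, p):
    """Mean of | -|t - x|^p - y |^(1/p) over the samples."""
    q = 1.0 / p
    return np.mean(np.abs(-np.abs(t - x) ** p - y) ** q)

# Scan candidate modes; with p = 0.1 the minimum sits at the sample
# whose log-probability is closest to zero (the largest y)
candidates = np.linspace(-2.0, 2.0, 401)
values = [loss(t, p=0.1) for t in candidates]
best = candidates[int(np.argmin(values))]
print(round(best, 6))  # → 1.0
```

With $p=2, q=2$ the same scan instead recovers a mean-like minimizer, matching the mode/median/mean progression plotted in the notebook.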
root-mirror/training
INSIGHTS2018/Exercises/WorkingWithFiles/WritingOnFilesExercise.ipynb
gpl-2.0
[ "Writing on files\nThis is a Python notebook in which you will practice the concepts learned during the lectures.\nStartup ROOT\nImport the ROOT module: this will activate the integration layer with the notebook automatically", "import ROOT", "Writing histograms\nCreate a TFile containing three histograms filled with random numbers distributed according to a Gaussian, an exponential and a uniform distribution.\nClose the file: you will reopen it later.", "rndm = ROOT.TRandom3(1)\n\nfilename = \"histos.root\"\n\n# Here open a file and create three histograms\n\nfor i in xrange(1024):\n # Use the following lines to feed the Fill method of the histograms in order to fill\n rndm.Gaus()\n rndm.Exp(1)\n rndm.Uniform(-4,4)\n\n# Here write the three histograms on the file and close the file \n ", "Now, you can invoke the ls command from within the notebook to list the files in this directory. Check that the file is there. You can invoke the rootls command to see what's inside the file.", "! ls .\n! echo Now listing the content of the file\n! rootls -l #filename here", "Access the histograms and draw them in Python. Remember that you need to create a TCanvas before and draw it too in order to inline the plots in the notebooks.\nYou can switch to the interactive JavaScript visualisation using the %jsroot on \"magic\" command.", "%jsroot on\nf = ROOT.TFile(filename)\nc = ROOT.TCanvas()\nc.Divide(2,2)\nc.cd(1)\nf.gaus.Draw()\n# finish the drawing in each pad\n# Draw the Canvas", "You can now repeat the exercise above using C++. Transform the cell into a C++ cell using the %%cpp \"magic\".", "%%cpp\nTFile f(\"histos.root\");\nTH1F *hg, *he, *hu;\nf.GetObject(\"gaus\", hg);\n// ... read the histograms and draw them in each pad", "Inspect the content of the file: TXMLFile\nROOT provides a different kind of TFile, TXMLFile. 
It has the same interface and it's very useful to better understand how objects are written in files by ROOT.\nRepeat the exercise above, either in Python or C++ - your choice, using a TXMLFile rather than a TFile and then display its content with the cat command. Can you see how the content of the individual bins of the histograms is stored? And the colour of their markers?\nDo you understand why the xml file is bigger than the root one even if they have the same content?", "f = ROOT.TXMLFile(\"histos.xml\",\"RECREATE\")\n\nhg = ROOT.TH1F(\"gaus\",\"Gaussian numbers\", 64, -4, 4)\nhe = ROOT.TH1F(\"expo\",\"Exponential numbers\", 64, -4, 4)\nhu = ROOT.TH1F(\"unif\",\"Uniform numbers\", 64, -4, 4)\nfor i in xrange(1024):\n hg.Fill(rndm.Gaus())\n # ... Same as above!\n\n! ls -l histos.xml histos.root\n\n! cat histos.xml" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
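The closing question in the exercise above (why the XML file is bigger than the binary ROOT one) can be illustrated without ROOT at all. This is a hedged analogy, not ROOT's actual on-disk schema: it compares a binary packing of 1024 doubles with a text/XML encoding of the same values, where the element names are invented for illustration:

```python
import random
import struct
import xml.etree.ElementTree as ET

random.seed(1)
values = [random.gauss(0, 1) for _ in range(1024)]

# Binary: exactly 8 bytes per double, the kind of packing a .root file uses
binary = struct.pack('%dd' % len(values), *values)

# Text/XML: every digit costs a byte, plus per-entry tag overhead,
# roughly what TXMLFile pays for being human-readable
root = ET.Element('histogram')
for v in values:
    ET.SubElement(root, 'entry').text = repr(v)
xml_bytes = ET.tostring(root)

print(len(binary), len(xml_bytes))  # the XML is several times larger
```

The same trade-off shows up in the exercise's `ls -l histos.xml histos.root` comparison: identical content, very different sizes.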
metpy/MetPy
dev/_downloads/4211928bfede6cdca0afdb2d06bea2d1/Find_Natural_Neighbors_Verification.ipynb
bsd-3-clause
[ "%matplotlib inline", "Find Natural Neighbors Verification\nFinding natural neighbors in a triangulation\nA triangle is a natural neighbor of a point if that point is within a circumscribed\ncircle (\"circumcircle\") containing the triangle.", "import matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.spatial import Delaunay\n\nfrom metpy.interpolate.geometry import circumcircle_radius, find_natural_neighbors\n\n# Create test observations, test points, and plot the triangulation and points.\ngx, gy = np.meshgrid(np.arange(0, 20, 4), np.arange(0, 20, 4))\npts = np.vstack([gx.ravel(), gy.ravel()]).T\ntri = Delaunay(pts)\n\nfig, ax = plt.subplots(figsize=(15, 10))\nfor i, inds in enumerate(tri.simplices):\n pts = tri.points[inds]\n x, y = np.vstack((pts, pts[0])).T\n ax.plot(x, y)\n ax.annotate(i, xy=(np.mean(x), np.mean(y)))\n\ntest_points = np.array([[2, 2], [5, 10], [12, 13.4], [12, 8], [20, 20]])\n\nfor i, (x, y) in enumerate(test_points):\n ax.plot(x, y, 'k.', markersize=6)\n ax.annotate('test ' + str(i), xy=(x, y))", "Since finding natural neighbors already calculates circumcenters, return\nthat information for later use.\nThe key of the neighbors dictionary refers to the test point index, and the list of integers\nare the triangles that are natural neighbors of that particular test point.\nSince point 4 is far away from the triangulation, it has no natural neighbors.\nPoint 3 is at the confluence of several triangles so it has many natural neighbors.", "neighbors, circumcenters = find_natural_neighbors(tri, test_points)\nprint(neighbors)", "We can plot all of the triangles as well as the circles representing the circumcircles", "fig, ax = plt.subplots(figsize=(15, 10))\nfor i, inds in enumerate(tri.simplices):\n pts = tri.points[inds]\n x, y = np.vstack((pts, pts[0])).T\n ax.plot(x, y)\n ax.annotate(i, xy=(np.mean(x), np.mean(y)))\n\n# Using circumcenters and calculated circumradii, plot the circumcircles\nfor idx, cc in enumerate(circumcenters):\n 
ax.plot(cc[0], cc[1], 'k.', markersize=5)\n circ = plt.Circle(cc, circumcircle_radius(*tri.points[tri.simplices[idx]]),\n edgecolor='k', facecolor='none', transform=fig.axes[0].transData)\n ax.add_artist(circ)\n\nax.set_aspect('equal', 'datalim')\n\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
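The natural-neighbor test in the notebook above hinges on the circumcircle: a point is a natural neighbor of a triangle when it falls inside that triangle's circumscribed circle. As a hedged sketch of the underlying geometry (using the classic identity $R = abc/(4\,\mathrm{Area})$, not MetPy's own `circumcircle_radius` implementation), the radius can be computed from the three vertices alone:

```python
import numpy as np

def circumradius(p0, p1, p2):
    """Circumcircle radius of a triangle: R = abc / (4 * area)."""
    a = np.linalg.norm(p1 - p0)
    b = np.linalg.norm(p2 - p1)
    c = np.linalg.norm(p0 - p2)
    # Shoelace formula for the triangle's area
    area = 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                     - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    return a * b * c / (4.0 * area)

# Right triangle with legs 3 and 4: the hypotenuse (5) is a diameter,
# so the circumradius is 2.5
r = circumradius(np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([0.0, 4.0]))
print(r)  # → 2.5
```

Comparing the distance from a test point to the circumcenter against this radius is then the containment check that `find_natural_neighbors` performs for every (point, triangle) pair.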
whitead/numerical_stats
unit_9/lectures/lecture_2.ipynb
gpl-3.0
[ "Linear Algebra in NumPy\nUnit 9, Lecture 2\nNumerical Methods and Statistics\n\nProf. Andrew White, March 30, 2020", "import random\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom math import sqrt, pi, erf\nimport scipy.stats\nimport numpy.linalg", "Working with Matrices in Numpy\nWe saw earlier in the class how to create numpy matrices. Let's review that and learn about explicit initialization.\nExplicit Initialization\nYou can explicitly set the values in your matrix by first creating a list and then converting it into a numpy array.", "matrix = [ [4,3], [6, 2] ]\nprint('As Python list:')\nprint(matrix)\n\nnp_matrix = np.array(matrix)\n\nprint('The shape of the array:', np.shape(np_matrix))\nprint('The numpy matrix/array:')\nprint(np_matrix)", "You can use multiple lines in python to specify your list. This can make the formatting cleaner.", "np_matrix_2 = np.array([\n [ 4, 3], \n [ 1, 2], \n [-1, 4], \n [ 4, 2] \n ])\nprint(np_matrix_2)", "Create and Set\nYou can also create an array and then set the elements.", "np_matrix_3 = np.zeros( (2, 10) )\n\nprint(np_matrix_3)\n\nnp_matrix_3[:, 1] = 2\nprint(np_matrix_3)\n\nnp_matrix_3[0, :] = -1\nprint(np_matrix_3)\n\nnp_matrix_3[1, 6] = 43\nprint(np_matrix_3)\n\nrows, columns = np.shape(np_matrix_3) #get the number of rows and columns\nfor i in range(columns): #Do a for loop over columns\n np_matrix_3[1, i] = i ** 2 #Set the value of the 2nd row, ith column to be i^2\n \nprint(np_matrix_3)", "Linear Algebra\nThe linear algebra routines for python are in the numpy.linalg library. See here\nMatrix Multiplication\nMatrix multiplication is done with the dot method. 
Let's compare that with *", "np_matrix_1 = np.random.random( (2, 4) ) #create a random 2 x 4 array\nnp_matrix_2 = np.random.random( (4, 1) ) #create a random 4 x 1 array\n\nprint(np_matrix_1.dot(np_matrix_2))", "So, dot correctly gives us a 2x1 matrix as expected for the two shapes.\nUsing the special @ character:", "print(np_matrix_1 @ np_matrix_2)", "The element-by-element multiplication, *, doesn't work on different sized arrays.", "print(np_matrix_1 * np_matrix_2)", "Method vs Function\nInstead of using dot as a method (it comes after a .), you can use the dot function as well. Let's see an example:", "print(np_matrix_1.dot(np_matrix_2))\n\nprint(np.dot(np_matrix_1, np_matrix_2))", "Matrix Rank\nThe rank of a matrix can be found with singular value decomposition. In numpy, we can do this simply with a call to linalg.matrix_rank.", "import numpy.linalg as linalg\n\nmatrix = [ [1, 0], [0, 0] ]\nnp_matrix = np.array(matrix)\n\nprint(linalg.matrix_rank(np_matrix))\n\n", "Matrix Inverse\nThe inverse of a matrix can be found using the linalg.inv command. 
Consider the following system of equations:\n$$\\begin{array}{lr}\n3 x + 2 y + z & = 5\\\n2 x - y & = 4 \\\nx + y - 2z & = 12 \\\n\\end{array}$$\nWe can encode it as a matrix equation:\n$$\\left[\\begin{array}{lcr}\n3 & 2 & 1\\\n2 & -1 & 0\\\n1 & 1 & -2\\\n\\end{array}\\right]\n\\left[\\begin{array}{l}\nx\\\ny\\\nz\\\n\\end{array}\\right]\n=\n\\left[\\begin{array}{l}\n5\\\n4\\\n12\\\n\\end{array}\\right]$$\n$$\\mathbf{A}\\mathbf{x} = \\mathbf{b}$$\n$$\\mathbf{A}^{-1}\\mathbf{b} = \\mathbf{x}$$", "\n#Enter the data as lists\na_matrix = [[3, 2, 1],\n [2,-1,0],\n [1,1,-2]]\nb_matrix = [5, 4, 12]\n\n#convert them to numpy arrays/matrices\nnp_a_matrix = np.array(a_matrix)\nnp_b_matrix = np.array(b_matrix).transpose()\n\n#Solve the problem\nnp_a_inv = linalg.inv(np_a_matrix)\nnp_x_matrix = np_a_inv @ np_b_matrix\n\n#print the solution\nprint(np_x_matrix)\n\n#check to make sure the answer works\nprint(np_a_matrix @ np_x_matrix)", "Computation cost for inverse\nComputing a matrix inverse can be VERY expensive for large matrices. Do not exceed about 500 x 500 matrices\nEigenvectors/Eigenvalues\nBefore trying to understand what an eigenvector is, let's try to understand their analogue, a stationary point.\nA stationary point of a function $f(x)$ is an $x$ such that:\n$$x = f(x)$$\nConsider this function:\n$$f(x) = x - \\frac{x^2 - 612}{2x}$$\nIf we found a stationary point, that would be mean that\n$$x = x - \\frac{x^2 - 612}{2x} $$\nor \n$$ x^2 = 612 $$\nMore generally, you can find a square root of $A$ by finding a stationary point to:\n$$f(x) = x - \\frac{x^2 - A}{2x} $$\nIn this case, you can find the stationary point by just doing $x_{i+1} = f(x_i)$ until you are stationary", "x = 1\nfor i in range(10):\n x = x - (x**2 - 612) / (2 * x)\n print(i, x)", "Eigenvectors/Eigenvalues\nMatrices are analogues of functions. 
They take in a vector and return a vector.\n$$\\mathbf{A}\\mathbf{x} = \\mathbf{y}$$\nJust like stationary points, there is sometimes a special vector which has this property:\n$$\\mathbf{A}\\mathbf{x} = \\mathbf{x}$$\nSuch a vector is called an eigenvector. It turns out that such a vector rarely exists. If we instead allow a scalar, we can find a whole bunch like this:\n$$\\mathbf{A}\\mathbf{v} = \\lambda\\mathbf{v}$$\nThese are like the stationary points above, except we are getting back our input times a constant. That means it's a particular direction that is unchanged, not the value. \nFinding Eigenvectors/Eigenvalues\nEigenvalues/eigenvectors can also be found easily in python, including for complex numbers and sparse matrices. The command linalg.eigh will return only the real eigenvalues/eigenvectors. That assumes your matrix is Hermitian, meaning it is symmetric (if your matrix is real). Use eig to get general, possibly complex eigenvalues. Here's an easy example:\nLet's consider this matrix: \n$$\nA = \\left[\\begin{array}{lr}\n3 & 1\\\n1 & 3\\\n\\end{array}\\right]$$\nImagine it as a geometry operator. It takes in a 2D vector and morphs it into another 2D vector.\n$$\\vec{x} = [1, 0]$$\n$$A \\,\\vec{x}^T = [3, 1]^T$$\nNow, is there a particular direction that $\\mathbf{A}$ cannot affect?", "A = np.array([[3,1], [1,3]])\n\ne_values, e_vectors = np.linalg.eig(A)\n\nprint(e_vectors)\nprint(e_values)", "So that means $v_1 = [0.7, 0.7]$ and $v_2 = [-0.7, 0.7]$. Let's find out:", "v1 = e_vectors[:,0]\nv2 = e_vectors[:,1]\n\nA @ v1", "Yes, that is the same direction! And notice that it's 4 times as much as the input vector, which is what the eigenvalue is telling us.\nA random matrix will almost never be Hermitian, so look out for complex numbers. 
In engineering, your matrices will commonly be Hermitian.", "A = np.random.normal(size=(3,3))\ne_values, e_vectors = linalg.eig(A)\nprint(e_values)\nprint(e_vectors)\n", "Notice that there are complex eigenvalues, so eigh would not have been correct to use" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
esa-as/2016-ml-contest
MandMs/Facies_classification-M&Ms_SVM_rbf_kernel_optimal.ipynb
apache-2.0
[ "Facies classification using an SVM classifier with RBF kernel\nContest entry by: <a href=\"https://github.com/mycarta\">Matteo Niccoli</a> and <a href=\"https://github.com/dahlmb\">Mark Dahl</a>\nOriginal contest notebook by Brendon Hall, Enthought\n<a rel=\"license\" href=\"http://creativecommons.org/licenses/by/4.0/\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by/4.0/88x31.png\" /></a><br /><span xmlns:dct=\"http://purl.org/dc/terms/\" property=\"dct:title\">The code and ideas in this notebook,</span> by <span xmlns:cc=\"http://creativecommons.org/ns#\" property=\"cc:attributionName\">Matteo Niccoli and Mark Dahl,</span> are licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by/4.0/\">Creative Commons Attribution 4.0 International License</a>.\nIn this notebook we will train a machine learning algorithm to predict facies from well log data. The dataset comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007). \nThe dataset consists of log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a support vector machine to classify facies types. 
\nThe plan\nAfter a quick exploration of the dataset, we will:\n- run cross-validated grid search (with stratified k-fold) for parameter tuning\n- look at validation curves of RMS error versus regularization parameter to ensure we are not over fitting\n- train a new classifier with tuned parameters using leave-one-well-out as a method of testing\nExploring the dataset\nFirst, we will examine the data set we will use to train the classifier.", "%matplotlib inline\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\nfrom sklearn import preprocessing\nfrom sklearn.metrics import f1_score, accuracy_score, make_scorer\n\nfrom sklearn.model_selection import LeaveOneGroupOut, validation_curve\nimport pandas as pd\nfrom pandas import set_option\nset_option(\"display.max_rows\", 10)\npd.options.mode.chained_assignment = None\n\nfilename = 'facies_vectors.csv'\ntraining_data = pd.read_csv(filename)\ntraining_data", "This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate. 
\nThe seven predictor variables are:\n* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),\nphotoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.\n* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)\nThe nine discrete facies (classes of rocks) are: \n1. Nonmarine sandstone\n2. Nonmarine coarse siltstone \n3. Nonmarine fine siltstone \n4. Marine siltstone and shale \n5. Mudstone (limestone)\n6. Wackestone (limestone)\n7. Dolomite\n8. Packstone-grainstone (limestone)\n9. Phylloid-algal bafflestone (limestone)\nThese facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.\nFacies |Label| Adjacent Facies\n:---: | :---: |:--:\n1 |SS| 2\n2 |CSiS| 1,3\n3 |FSiS| 2\n4 |SiSh| 5\n5 |MS| 4,6\n6 |WS| 5,7\n7 |D| 6,8\n8 |PS| 6,7,9\n9 |BS| 7,8\nLet's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type.", "training_data['Well Name'] = training_data['Well Name'].astype('category')\ntraining_data['Formation'] = training_data['Formation'].astype('category')\ntraining_data['Well Name'].unique()", "These are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone. \nBefore we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. 
We also create the abbreviated facies labels, and add those to the facies_vectors dataframe.", "# 1=sandstone 2=c_siltstone 3=f_siltstone # 4=marine_silt_shale \n#5=mudstone 6=wackestone 7=dolomite 8=packstone 9=bafflestone\nfacies_colors = ['#F4D03F', '#F5B041', '#DC7633','#A569BD',\n '#000000', '#000080', '#2E86C1', '#AED6F1', '#196F3D']\n\nfacies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',\n 'WS', 'D','PS', 'BS']\n#facies_color_map is a dictionary that maps facies labels\n#to their respective colors\nfacies_color_map = {}\nfor ind, label in enumerate(facies_labels):\n facies_color_map[label] = facies_colors[ind]\n\ndef label_facies(row, labels):\n return labels[ row['Facies'] -1]\n \ntraining_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)\ntraining_data.describe()", "This is a quick view of the statistical distribution of the input variables. Looking at the count values, most values have 4149 valid values except for PE, which has 3232. We will drop the feature vectors that don't have a valid PE entry.", "PE_mask = training_data['PE'].notnull().values\ntraining_data = training_data[PE_mask]\n\ntraining_data.describe()", "Now we extract just the feature variables we need to perform the classification. The predictor variables are the five log values and two geologic constraining variables, and we are also using depth. 
We also get a vector of the facies labels that correspond to each feature vector.", "y = training_data['Facies'].values\nprint y[25:40]\nprint np.shape(y)\n\nX = training_data.drop(['Formation', 'Well Name','Facies','FaciesLabels'], axis=1)\nprint np.shape(X)\nX.describe(percentiles=[.05, .25, .50, .75, .95])\n\nscaler = preprocessing.StandardScaler().fit(X)\nX = scaler.transform(X)", "Make performance scorers\nUsed to evaluate performance.", "Fscorer = make_scorer(f1_score, average = 'micro')\n# Ascorer = make_scorer(accuracy_score)", "Stratified K-fold validation to evaluate model performance\nOne of the key steps in machine learning is to estimate a model's performance on data that it has not seen before.\nScikit-learn provides a simple utility (train_test_split) to partition the data into a training and a test set, but the disadvantage with that is that we ignore a portion of our dataset during training. An additional disadvantage of a simple split, inherent to log data, is that there's a depth dependence. \nA possible strategy to avoid this is cross-validation. With k-fold cross-validation we randomly split the data into k folds without replacement, where k-1 folds are used for training and one fold for testing. The process is repeated k times, and the performance is obtained by taking the average of the k individual performances.\nStratified k-fold is an improvement over standard k-fold in that the class proportions are preserved in each fold to ensure that each fold is representative of the class proportions in the data.\nGrid search for parameter tuning\nAnother important aspect of machine learning is the search for the optimal model parameters (i.e. those that will yield the best performance). 
This tuning is done using grid search.\nThe above short summary is based on Sebastian Raschka's <a href=\"https://github.com/rasbt/python-machine-learning-book\"> Python Machine Learning</a> book.", "from sklearn.model_selection import GridSearchCV", "Putting it all together: cross-validated grid search\nFirst things first, how many samples do we have for each leave-one-well-out split?\nSince this will be our validation method, it is good to select a number of k-fold splits so that training uses, on average, the same number of samples as it would with leave-one-well-out validation.", "wells = training_data[\"Well Name\"].values\nlogo = LeaveOneGroupOut()\n\nn_samples =[]\nfor train, test in logo.split(X, y, groups=wells):\n well_name = wells[test[0]]\n print well_name, 'out: ', np.shape(train)[0], 'training samples - ', np.shape(test)[0], 'test samples'\n n_samples.append(np.shape(train)[0])", "Let's get the average number of training samples (with the exclusion of Recruit F9). This is going to be the desired size for the folds.", "n_samples = np.delete(n_samples,5,0)\nave_n = (n_samples.sum()/len(n_samples))", "From that and the total number of samples we can estimate the appropriate number of folds for cross validation, knowing that:\nsize_folds = n_samples/n_splits \ngives: \nn_splits = n_samples/size_folds", "(len(y)/((len(y)-ave_n)))", "SVM classifier\nSimilar to the classifier in the article (but, as you will see, it uses a different kernel).", "from sklearn import svm\nSVC_classifier = svm.SVC(kernel = 'rbf', cache_size = 2400, random_state=1)", "We've found in our previous notebook that the rbf kernel performs better than the linear kernel.\nWe will now run grid search with a reasonable range for C (based on our experience) and a large range of gamma values. 
The intention is to ensure we do not settle on a local minimum for gamma, and then to revisit the value of C with validation curves.", "parm_grid={'C': [0.1, 0.05, 0.1, 0.5, 1, 5, 10, 15, 20, 25, 50],\n 'gamma':[0.00001, 0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000]}\n\ngrid_search = GridSearchCV(SVC_classifier,\n param_grid=parm_grid,\n scoring = Fscorer, \n n_jobs = 7, cv = 7)\n \ngrid_search.fit(X, y)\n\nprint('Best score: {}'.format(grid_search.best_score_))\nprint('Best parameters: {}'.format(grid_search.best_params_))\n\ngrid_search.best_estimator_", "Leave one well out validation curves against C parameter to prevent overfitting\nTheir use is very nicely illustrated in this tutorial:\nhttp://www.astroml.org/sklearn_tutorial/practical.html#learning-curves\nhttps://github.com/jakevdp/sklearn_pycon2015/blob/master/notebooks/05-Validation.ipynb", "def rms_error(model, X, y):\n y_pred = model.predict(X)\n return np.sqrt(np.mean((y - y_pred) ** 2))\n\nfrom sklearn import svm\nSVC_classifier_LOWO_VC = svm.SVC(cache_size=2400, class_weight=None, coef0=0.0,\n decision_function_shape=None, degree=3, gamma=0.01, kernel='rbf',\n max_iter=-1, probability=False, random_state=1, shrinking=True,\n tol=0.001, verbose=False)\n\nparm_range1 = np.logspace(-2, 6, 9)\n\ntrain_scores1, test_scores1 = validation_curve(SVC_classifier_LOWO_VC, X, y, \"C\", parm_range1, \n cv =logo.split(X, y, groups=wells), \n scoring = rms_error, n_jobs = 9)\n\ntrain_scores_mean1 = np.mean(train_scores1, axis=1)\n#train_scores_std1= np.std(train_scores1, axis=1)\ntest_scores_mean1 = np.mean(test_scores1, axis=1)\n#test_scores_std1 = np.std(test_scores1, axis=1)\n\nprint test_scores_mean1\nprint np.amin(test_scores_mean1) # optimal (minimum) average RMS test error\nprint np.logspace(-2, 6, 9)[test_scores_mean1.argmin(axis=0)] # optimal C parameter value\n\nplt.figure(figsize=(10,10))\n\nplt.title(\"Validation Curve with SVM\")\nplt.xlabel(\"C\")\nplt.ylabel(\"RMS\")\n#plt.ylim(0.2, 1)\nlw = 
2\n\nplt.semilogx(parm_range1, train_scores_mean1, label=\"Training error\", color=\"darkorange\", lw=lw)\nplt.semilogx(parm_range1, test_scores_mean1, label=\"Cross-validation error\", color=\"navy\", lw=lw)\n\nplt.legend(loc=\"best\")\n#plt.gca().invert_xaxis()\nplt.show()", "Confusion matrix and average test F1 score\nLet's see how we do with predicting the actual facies, by looking at a confusion matrix. We do this by keeping the parameters from the previous section, but creating a brand new classifier. So when we fit the data, it won't have seen it before.", "from sklearn import svm\nSVC_classifier_conf = svm.SVC(C = 100, cache_size=2400, class_weight=None, coef0=0.0,\n decision_function_shape=None, degree=3, gamma=0.01, kernel='rbf',\n max_iter=-1, probability=False, random_state=1, shrinking=True,\n tol=0.001, verbose=False)\n\nSVC_classifier_conf.fit(X,y)\nsvc_pred = SVC_classifier_conf.predict(X)\n\nfrom sklearn.metrics import confusion_matrix\nfrom classification_utilities import display_cm, display_adj_cm\n\nconf = confusion_matrix(svc_pred, y)\ndisplay_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)\n\nf1_SVC = []\n\nfor train, test in logo.split(X, y, groups=wells):\n well_name = wells[test[0]]\n SVC_classifier_conf.fit(X[train], y[train])\n pred = SVC_classifier_conf.predict(X[test])\n sc = f1_score(y[test], pred, labels = np.arange(10), average = 'micro')\n print(\"{:>20s} {:.3f}\".format(well_name, sc))\n f1_SVC.append(sc)\n \nprint \"-Average leave-one-well-out F1 Score: %6f\" % (sum(f1_SVC)/(1.0*(len(f1_SVC))))", "Predicting, displaying, and saving facies for blind wells\nFor the plot we will use a function from the original notebook.", "blind = pd.read_csv('validation_data_nofacies.csv') \nX_blind = np.array(blind.drop(['Formation', 'Well Name'], axis=1)) \nX_blind = scaler.transform(X_blind) \ny_pred = SVC_classifier_conf.fit(X, y).predict(X_blind) \nblind['Facies'] = y_pred\n\ndef make_facies_log_plot(logs, facies_colors):\n 
#make sure logs are sorted by depth\n logs = logs.sort_values(by='Depth')\n cmap_facies = colors.ListedColormap(\n facies_colors[0:len(facies_colors)], 'indexed')\n \n ztop=logs.Depth.min(); zbot=logs.Depth.max()\n \n cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)\n \n f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))\n ax[0].plot(logs.GR, logs.Depth, '-g')\n ax[1].plot(logs.ILD_log10, logs.Depth, '-')\n ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')\n ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')\n ax[4].plot(logs.PE, logs.Depth, '-', color='black')\n im=ax[5].imshow(cluster, interpolation='none', aspect='auto',\n cmap=cmap_facies,vmin=1,vmax=9)\n \n divider = make_axes_locatable(ax[5])\n cax = divider.append_axes(\"right\", size=\"20%\", pad=0.05)\n cbar=plt.colorbar(im, cax=cax)\n cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', \n 'SiSh', ' MS ', ' WS ', ' D ', \n ' PS ', ' BS ']))\n cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')\n \n for i in range(len(ax)-1):\n ax[i].set_ylim(ztop,zbot)\n ax[i].invert_yaxis()\n ax[i].grid()\n ax[i].locator_params(axis='x', nbins=3)\n \n ax[0].set_xlabel(\"GR\")\n ax[0].set_xlim(logs.GR.min(),logs.GR.max())\n ax[1].set_xlabel(\"ILD_log10\")\n ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())\n ax[2].set_xlabel(\"DeltaPHI\")\n ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())\n ax[3].set_xlabel(\"PHIND\")\n ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())\n ax[4].set_xlabel(\"PE\")\n ax[4].set_xlim(logs.PE.min(),logs.PE.max())\n ax[5].set_xlabel('Facies')\n \n ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])\n ax[4].set_yticklabels([]); ax[5].set_yticklabels([])\n ax[5].set_xticklabels([])\n f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)\n\nmake_facies_log_plot(blind[blind['Well Name'] == 'STUART'], facies_colors)\nmake_facies_log_plot(blind[blind['Well Name'] == 'CRAWFORD'], 
facies_colors)\n\nnp.save('ypred.npy', y_pred)", "Displaying predicted versus original facies in the training data\nThis is a nice display to finish up with, as it gives us a visual idea of the predicted facies where we have facies from the core observations.\nFor the plot we will use a function from the original notebook. Let's look at the well with the lowest F1 from the previous code block, CROSS H CATTLE, and the one with the highest F1 (excluding Recruit F9), which is SHRIMPLIN.", "def compare_facies_plot(logs, compadre, facies_colors):\n #make sure logs are sorted by depth\n logs = logs.sort_values(by='Depth')\n cmap_facies = colors.ListedColormap(\n facies_colors[0:len(facies_colors)], 'indexed')\n \n ztop=logs.Depth.min(); zbot=logs.Depth.max()\n \n cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)\n cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1)\n \n f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 12))\n ax[0].plot(logs.GR, logs.Depth, '-g')\n ax[1].plot(logs.ILD_log10, logs.Depth, '-')\n ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')\n ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')\n ax[4].plot(logs.PE, logs.Depth, '-', color='black')\n im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto',\n cmap=cmap_facies,vmin=1,vmax=9)\n im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto',\n cmap=cmap_facies,vmin=1,vmax=9)\n \n divider = make_axes_locatable(ax[6])\n cax = divider.append_axes(\"right\", size=\"20%\", pad=0.05)\n cbar=plt.colorbar(im2, cax=cax)\n cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', \n 'SiSh', ' MS ', ' WS ', ' D ', \n ' PS ', ' BS ']))\n cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')\n \n for i in range(len(ax)-2):\n ax[i].set_ylim(ztop,zbot)\n ax[i].invert_yaxis()\n ax[i].grid()\n ax[i].locator_params(axis='x', nbins=3)\n \n ax[0].set_xlabel(\"GR\")\n ax[0].set_xlim(logs.GR.min(),logs.GR.max())\n ax[1].set_xlabel(\"ILD_log10\")\n 
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())\n ax[2].set_xlabel(\"DeltaPHI\")\n ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())\n ax[3].set_xlabel(\"PHIND\")\n ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())\n ax[4].set_xlabel(\"PE\")\n ax[4].set_xlim(logs.PE.min(),logs.PE.max())\n ax[5].set_xlabel('Facies')\n ax[6].set_xlabel(compadre)\n \n ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])\n ax[4].set_yticklabels([]); ax[5].set_yticklabels([])\n ax[5].set_xticklabels([])\n ax[6].set_xticklabels([])\n f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)\n\nSVC_classifier_conf.fit(X,y)\npred = SVC_classifier_conf.predict(X)\n\nX = training_data\nX['Prediction'] = pred\n\ncompare_facies_plot(X[X['Well Name'] == 'CROSS H CATTLE'], 'Prediction', facies_colors)\ncompare_facies_plot(X[X['Well Name'] == 'SHRIMPLIN'], 'Prediction', facies_colors)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
astro313/REU2017
Intro_4.ipynb
mit
[ "One way of running programs in python is by executing a script, with run &lt;script.py&gt; in python or python &lt;script.py&gt; in terminal. \nWhat if you realize that something in the script is wrong after you have executed the file, or for whatever reason you want to interrupt the program?\nYou can use ctrl+c to abort the program which, essentially, is throwing an \"exception\" -- the KeyboardInterrupt exception. We will briefly talk about Exceptions later in this notebook.\nIf you are writing some new code (to a python script) and you are unsure whether or not it will work, instead of doing run &lt;script.py&gt; and then manually interrupting your code with ctrl+c, there are other more elegant ways. In this notebook, we will go over some ways of debugging in python.\n1) basic level: using print statements", "# Let's first define a broken function\ndef blah(a, b):\n c = 10\n return a/b - c\n\n# call the function \n# define some variables to pass to the function \naa = 5\nbb = 10\nprint blah(aa, bb) # call the function", "As we know, 5/10 - 10 = -9.5 and not -10, so something must be wrong inside the function. In this simple example, it may be super obvious that we are dividing an integer by an integer, and will get back an integer. (Division between integers is defined as returning the integer part of the result, throwing away the remainder. The same division operator does real division when operating with floats, very confusing, right?).\nBut this is good enough to show why simple print statements will suffice in some cases.\nOk, assuming that we didn't know what the problem was, we will use print to find out what went wrong.", "def blah(a, b):\n c = 10\n print \"a: \", a\n print \"b: \", b\n print \"c: \", c\n print \"a/b = %d/%d = %f\" %(a,b,a/b)\n print \"output:\", a/b - c\n return a/b - c\n\nblah(aa, bb) ", "From this, it's clear that a/b is the problem since a/b should be 1/2 and not 0. And you can quickly go and fix that step. 
For example by using float(b), or by multiplying by 1. (the trailing dot makes it a float).\nBut using print statements may be inconvenient if the code takes a long time to run, and also you may want to check the values of other variables to diagnose the problem. If you use ctrl+C at this point, you will lose the values stored inside all variables. Perhaps you would want to go back and put another print statement inside the code and run it again to check another variable. But this goes on... and you may have to go back many times! \nAlternatively, you can use the pdb module, which is an interactive source debugger. The variables are preserved at the breakpoint, and you can interactively step through each line of your code.\n(see more at https://pymotw.com/2/pdb/)\nTo use it, you can enable pdb either before an Exception is caught, or after. In Jupyter notebook, pdb can be enabled with the magic command %pdb, %pdb on, %pdb 1, and disabled with %pdb, %pdb off or %pdb 0.", "%pdb 0\n\n% pdb off\n\n% pdb on\n\n%pdb\n\n%pdb", "After you've enabled pdb, type help to show available commands. Some commands are e.g. step, quit, restart.\nIf you have set pdb on before an exception is triggered, (I)python can call the interactive pdb debugger after the traceback printout.\nIf you want to activate the debugger AFTER an exception is caught/fired, without having to rerun your code, you can use the %debug magic (or debug in ipython).\nIf you are running some python scripts, where instead of running code line by line you want to run a large chunk of code before checking the variables or stepping through the code line-by-line, it's useful to use import pdb; pdb.set_trace(). 
\nGo to ipython or terminal and execute pdb1.py to see how it is used in practice inside python scripts.\nIf you know where you want to exit the code a priori, you can use sys.exit().", "import sys\n\na = [1,2,3]\nprint a\nsys.exit()\n\nb = 'hahaha'\nprint b", "Catching Errors\nSome common types of errors:\n\nNameError:\nundefined variables\n\n\nLogic error:\nharder to debug\nusually associate with the equation missing something\n\n\nIOError\nTypeError", "# Way to handle errors inside scripts\n\ntry:\n # what we want the code to do\nexcept: # when the above lines generate errors, will immediately jump to exception handler here, not finishing all the lines in try\n # Do something else\n\n# Some Example usage of try…except:\n# use default behavior if encounter IOError\ntry:\n import astropy \nexcept ImportError:\n print(\"Astropy not installed...\")\n\n# Slightly more complex:\n# Try, raise, except, else, finally\ntry:\n print ('blah')\n raise ValueError() # throws an error\nexcept ValueError, Err: # only catches 0 division errors\n print (\"We caught an error! \")\nelse:\n print (\"here, if it didn't go through except...no errors are caught\")\nfinally:\n print (\"literally, finally... Useful for cleaning files, or closing files.\")\n\n# If we didn't have an error...\n# \ntry:\n print ('blah')\n# raise ValueError() # throws an error\nexcept ValueError, Err: # only catches 0 division errors\n print (\"We caught an error! \")\nelse:\n print (\"here, if it didn't go through except... no errors are caught\")\nfinally:\n print (\"literally, finally... Useful for cleaning files, or closing files.\")", "But sometimes you may want to use if... else instead of try...except.\n\nIf the program knows how to fall back to a default, that's not an unexpected event\nExceptions should only be used to handle exceptional cases\ne.g. 
something requiring users' attention\n\n\n\nConditions\nBooleans are equivalent to 0 (False) and 1 (True) inside python", "import numpy as np\n\nmask = [True, True, False]\n\nprint np.sum(mask) # same as counting number where mask == True\n\ndebug = False\n\nif debug:\n print \"...\"\n\ndebug = True\n\nif debug:\n print \"...\"\n\n# define a number\nx = 33\n\n# print it if it is greater than 30 but smaller than 50\nif x > 30 and x < 50:\n print x\n\n# print if number not np.nan\nif not np.isnan(x):\n print x\n\n# Introducing numpy.where()\nimport numpy as np\nnp.where?\n\n# Example 1\n\na = [0.1, 1, 3, 10, 100]\na = np.array(a) # so we can use np.where\n\n# one way..\nconditionIdx = ((a<=10) & (a>=1))\nprint conditionIdx # boolean\nnew = a[conditionIdx]\n\n# or directly\nnew_a = a[((a <= 10) & (a>=1))]\n\n\n# you can also use np.where\nnew_a = a[np.where((a <= 10) & (a>=1))]\n\n# Example 2 -- replacement using np.where\nbeam_ga = np.where(tz > 0, img[tx, ty, tz], 0)\n\n# np.where(if condition is TRUE, then TRUE operation, else)\n# Here, to mask out beam value for z<0", "Now let's try some exercises! Exercises.ipynb" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sdpython/ensae_teaching_cs
_doc/notebooks/td1a_home/2020_graph.ipynb
mit
[ "Algo - Graphe - Composantes connexes\nLes graphes sont un outil très utilisé pour modéliser un ensemble de relations entre personnes, entre produits, entre consommateurs. Les utilisations sont nombreuses, systèmes de recommandations, modélisation de la propagation d'une épidémie, modélisation de flux, plus court chemin dans un graphe.\nUn graphe est décrit par un ensemble V de noeuds (ou vertices en anglais) et un ensemble d'arcs (ou edges en anglais) reliant deux noeuds entre eux. Chaque arc peut être orienté - un seul chemin d'une extrémité à l'autre est possible - ou pas.\nL'algorithme proposé dans ce notebook calcule les composantes connexes dans un graphe non orienté. Un graphe connexe vérifie la propriété suivante : pour tout couple de noeuds, il existe un chemin - une séquence d'arcs - reliant ces deux noeuds. Si un graphe est connexe, il contient une seule composante connexe, s'il ne l'est pas, alors on peut le diviser en sous-graphe connexe de telle sorte qu'il n'existe aucun arc reliant deux noeuds appartenant à des sous-graphes distincts.\nUn graphe est entièrement défini par sa matrice d'adjacence $M=(m_{ij})$ : $m_{ij} = 1$ si les noeuds $V_i$ et $V_j$ sont reliés entre eux, 0 sinon. 
Si le graphe n'est pas orienté alors $m_{ij} = m_{ji}$ : la matrice est symétrique.", "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()\n\n%matplotlib inline", "Enoncé\nQ1 : construire une matrice d'adjacence symétrique aléatoire", "def random_adjacency_matrix(n, alpha=0.3):\n # alpha est le taux de remplissage, plus il est faible, plus la probabilité\n # d'avoir plusieurs composantes connexes est grande.\n # ...\n return None", "Q2 : calculer les valeurs propres et les trier par ordre croissant\nIl faudra recommencer avec plusieurs valeurs de alpha différentes pour avoir une idée de ce qui se passe.\nQ3 : que fait l'algorithme suivant\nOn crée un tableau T=list(range(n)) où n est le nombre de noeuds.\nPour tous les arcs $V=(E_i, E_j)$ faire T[i] = T[j] = min(T[i], T[j]).\nRecommencer tant qu'une valeur de T est mise à jour.\nQ4 : construire un algorithme qui retourne les composantes connexes d'un graphe\nRéponses\nQ1 : construire une matrice d'adjacence symétrique aléatoire\nOn change un peu l'énoncé et on remplace les valeurs nulles sur la diagonale par l'opposé du degré de chaque noeud. Le degré d'un noeud est le nombre d'arcs reliés à ce noeud. De cette façon, la somme des coefficients sur une ligne est nulle. 
Donc il existe un vecteur propre associé à la valeur propre 0.", "import numpy\n\ndef random_symmetric_adjacency_matrix(n, alpha=0.3):\n rnd = numpy.random.rand(n, n)\n rnd = (rnd + rnd.T) / 2 # symétrique\n rnd2 = rnd.copy() # copie\n rnd2[rnd <= alpha] = 1\n rnd2[rnd > alpha] = 0\n for i in range(n):\n rnd2[i, i] = 0 # 0 sur la diagonale\n rnd2[i, i] = - rnd2[i, :].sum()\n return rnd2\n\nrandom_symmetric_adjacency_matrix(5, alpha=0.5)\n\nfrom tqdm import tqdm\nimport pandas\n\nN = 2000\nobs = []\nfor alpha in tqdm([0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.]):\n total_nb1 = 0\n for i in range(0, N):\n mat = random_symmetric_adjacency_matrix(10, alpha=alpha)\n nb1 = (mat.ravel() == 1).sum()\n total_nb1 += nb1\n obs.append(dict(alpha=alpha, emptyness=total_nb1 / (mat.size - mat.shape[0]) / N))\n\ndf = pandas.DataFrame(obs)\ndf.plot(x=\"alpha\", y=\"emptyness\", title='proportion de coefficient nuls');", "Q2 : calculer les valeurs propres et les trier par ordre croissant", "w, v = numpy.linalg.eig(mat)\nw\n\nsum(numpy.abs(w) < 1e-7)\n\nN = 1000\nobs = []\nfor alpha in tqdm([0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.]):\n total_null = 0\n for i in range(0, N):\n mat = random_symmetric_adjacency_matrix(10, alpha=alpha)\n w, v = numpy.linalg.eig(mat)\n nb_null = sum(numpy.abs(w) < 1e-7)\n total_null += nb_null\n obs.append(dict(alpha=alpha, null=total_null / N))\n\ndf = pandas.DataFrame(obs)\ndf.plot(x=\"alpha\", y=\"null\", title='nombre de valeurs propres nulles');", "On peut lire ce graphe de plusieurs façons. Tout d'abord, si alpha=0, il n'y a aucun arc et la matrice est nulle, toutes les valeurs propres sont nulles. 
If alpha is small, there are few nonzero coefficients and it is impossible to compress the information the matrix contains into a lower-rank matrix.\nQ3: what does the following algorithm do\nWe create an array T=list(range(n)) where n is the number of nodes.\nFor every edge $V=(E_i, E_j)$ do T[i] = T[j] = min(T[i], T[j]).\nRepeat as long as a value of T is updated.", "def connex_components(mat):\n N = mat.shape[0]\n T = numpy.arange(N)\n \n modifications = True\n while modifications:\n modifications = False\n for i in range(N):\n for j in range(i+1, N):\n if mat[i, j] == 1 and T[i] != T[j]:\n T[i] = T[j] = min(T[i], T[j])\n modifications = True\n return T\n\nmat = random_symmetric_adjacency_matrix(10, alpha=0.2)\nres = connex_components(mat)\nres", "The number of connected components corresponds to the number of distinct labels in the array the function returns.", "import matplotlib.pyplot as plt\n\nN = 100\nobs = []\nfor alpha in tqdm([0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.]):\n total_null = 0\n total_cnx = 0\n for i in range(0, N):\n mat = random_symmetric_adjacency_matrix(10, alpha=alpha)\n cnx = len(set(connex_components(mat)))\n w, v = numpy.linalg.eig(mat)\n nb_null = sum(numpy.abs(w) < 1e-7)\n total_null += nb_null\n total_cnx += cnx\n obs.append(dict(alpha=alpha, null=total_null / N, cnx=total_cnx / N))\n\ndf = pandas.DataFrame(obs)\nfig, ax = plt.subplots(1, 2, figsize=(14, 4))\ndf.plot(\n x=\"alpha\", y=[\"null\", \"cnx\"], ax=ax[0],\n title='number of zero eigenvalues\\nand number of connected components')\ndf.plot(\n x=\"null\", y=[\"cnx\"], ax=ax[1],\n title='x=zero eigenvalues\\ny=connected components');", "The number of connected components appears to equal the number of zero eigenvalues of the adjacency matrix in which the diagonal coefficient is minus the degree of the node.\nQ4: build an algorithm that returns the connected components of a graph\nWe build a dictionary that accumulates the elements into lists associated with each connected-component label.", "def connex_components_indices(mat):\n cnx = connex_components(mat)\n res = {}\n for i, c in enumerate(cnx):\n if c not in res:\n res[c] = []\n res[c].append(i)\n return res\n\nmat = random_symmetric_adjacency_matrix(10, alpha=0.3)\nconnex_components_indices(mat)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
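The Q3 label-propagation scheme in the notebook above (repeat `T[i] = T[j] = min(T[i], T[j])` over every edge until nothing changes) can be isolated as a dependency-free sketch; `n_nodes` and `edges` are hypothetical inputs standing in for the notebook's adjacency matrix:

```python
def label_components(n_nodes, edges):
    # Start with each node in its own component: T[i] = i.
    T = list(range(n_nodes))
    changed = True
    while changed:
        changed = False
        for i, j in edges:
            # Propagate the smaller label across each edge.
            m = min(T[i], T[j])
            if T[i] != m or T[j] != m:
                T[i] = T[j] = m
                changed = True
    return T

# Two components: {0, 1, 2} and {3, 4}.
print(label_components(5, [(0, 1), (1, 2), (3, 4)]))  # → [0, 0, 0, 3, 3]
```

Each node ends up labelled with the smallest node index in its component, so the number of distinct values in `T` is the number of connected components.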
tayden/titanic-death-decider
titanic-death-decider.ipynb
mit
[ "First, the data is loaded into Pandas data frames", "import numpy as np\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\n\n# Read the input datasets\ntrain_data = pd.read_csv('../input/train.csv')\ntest_data = pd.read_csv('../input/test.csv')\n\n# Fill missing numeric values with the mean of that column\ntrain_data['Age'].fillna(train_data['Age'].mean(), inplace=True)\ntest_data['Age'].fillna(test_data['Age'].mean(), inplace=True)\ntest_data['Fare'].fillna(test_data['Fare'].mean(), inplace=True)\n\nprint(train_data.info())\nprint(test_data.info())", "Next, select a subset of our train_data to use for training the model", "# Encode sex as int 0=female, 1=male\ntrain_data['Sex'] = train_data['Sex'].apply(lambda x: int(x == 'male'))\n\n# Extract the features we want to use\nX = train_data[['Pclass', 'Sex', 'Age', 'Fare', 'SibSp', 'Parch']].as_matrix()\nprint(np.shape(X))\n\n# Extract survival target\ny = train_data[['Survived']].values.ravel()\nprint(np.shape(y))", "Now train the SVM classifier and get validation accuracy using K-fold cross-validation", "from sklearn.svm import SVC\nfrom sklearn.model_selection import KFold, cross_val_score\nfrom sklearn.preprocessing import MinMaxScaler\n\n# Build the classifier\nkf = KFold(n_splits=3)\nmodel = SVC(kernel='rbf', C=300)\n\nscores = []\nfor train, test in kf.split(X):\n # Normalize training and test data using train data norm parameters\n normalizer = MinMaxScaler().fit(X[train])\n X_train = normalizer.transform(X[train])\n X_test = normalizer.transform(X[test])\n \n scores.append(model.fit(X_train, y[train]).score(X_test, y[test]))\n \nprint(\"Mean 3-fold cross validation accuracy: %s\" % np.mean(scores))", "Make predictions on the test data and output the results", "# Create model with all training data\nnormalizer = MinMaxScaler().fit(X)\nX = normalizer.transform(X)\nclassifier = model.fit(X, y)\n\n# Encode sex as int 0=female, 1=male\ntest_data['Sex'] = test_data['Sex'].apply(lambda x: 
int(x == 'male'))\n\n# Extract desired features\nX_ = test_data[['Pclass', 'Sex', 'Age', 'Fare', 'SibSp', 'Parch']].as_matrix()\nX_ = normalizer.transform(X_)\n\n# Predict if passengers survived using model\ny_ = classifier.predict(X_)\n\n# Append the survived attribute to the test data\ntest_data['Survived'] = y_\npredictions = test_data[['PassengerId', 'Survived']]\nprint(predictions)\n\n# Save the output for submission\npredictions.to_csv('submission.csv', index=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
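The cross-validation loop above fits `MinMaxScaler` on each training split only and reuses those parameters on the held-out split, which avoids leaking test statistics into the normalization. A dependency-free sketch of that fit/transform split (toy numbers, not the Titanic features):

```python
def fit_minmax(rows):
    # Learn per-column min and range from the training rows only.
    cols = list(zip(*rows))
    lows = [min(c) for c in cols]
    spans = [(max(c) - lo) or 1.0 for c, lo in zip(cols, lows)]  # avoid /0
    return lows, spans

def transform(rows, lows, spans):
    # Apply the training-set parameters to any rows (train or test).
    return [[(v - lo) / s for v, lo, s in zip(row, lows, spans)] for row in rows]

train = [[1.0, 10.0], [3.0, 30.0]]
test = [[2.0, 50.0]]  # held-out values may fall outside [0, 1]
lows, spans = fit_minmax(train)
print(transform(test, lows, spans))  # → [[0.5, 2.0]]
```

Fitting the scaler on all of `X` before splitting would let the test fold influence the learned min and range, inflating the cross-validation score.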
OSGeo-live/CesiumWidget
GSOC/notebooks/Projects/CESIUM/CesiumWidget Example.ipynb
apache-2.0
[ "Cesium Widget Example\nThis is an example notebook to show how to bind Cesiumjs with the IPython interactive widget system.", "from CesiumWidget import CesiumWidget\nfrom IPython import display\nfrom czml_example import simple_czml, complex_czml", "The code:\nfrom czml_example import simple_czml, complex_czml\nSimply import some CZML data for the viewer to display.\nCreate widget object", "cesiumExample = CesiumWidget(width=\"100%\",czml=simple_czml, enable_lighting=True)", "Display the widget:", "#cesiumExample", "Add some data to the viewer\n\nA simple czml", "#cesiumExample.czml = simple_czml", "A more complex CZML example", "#cesiumExample.czml = complex_czml", "Now let's make some interactive widgets:", "from __future__ import print_function\nfrom ipywidgets import interact, interactive, fixed\nfrom ipywidgets import widgets", "store the CZML objects in a dictionary and use their names as keys\ndefine a function to switch between CZML\nbind the IPython interact class to the function", "myczml = {'simple_czml':simple_czml, 'complex_czml':complex_czml}\n\nmyplace = {'Eboli, IT':'', 'Woods Hole, MA':'', 'Durham, NH':''}\n\nimport geocoder\nimport time\nfor i in myplace.keys():\n g = geocoder.google(i)\n print(g.latlng)\n myplace[i]=g.latlng\n\nmyplace\n\ndef f(CZML):\n cesiumExample.czml = myczml[CZML]\n\ndef z(Location,z=(0,20000000)):\n cesiumExample.zoom_to(myplace[Location][1],myplace[Location][0],z)\n\ninteract(f, CZML=('simple_czml','complex_czml')), interact(z, Location=('Eboli, IT','Woods Hole, MA','Durham, NH'));\n\ncesiumExample" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
MikeLing/shogun
doc/ipython-notebooks/neuralnets/neuralnets_digits.ipynb
gpl-3.0
[ "Neural Nets for Digit Classification\nby Khaled Nasr as a part of a <a href=\"https://www.google-melange.com/gsoc/project/details/google/gsoc2014/khalednasr92/5657382461898752\">GSoC 2014 project</a> mentored by Theofanis Karaletsos and Sergey Lisitsyn\nThis notebook illustrates how to use the NeuralNets module to teach a neural network to recognize digits. It also explores the different optimization and regularization methods supported by the module. Convolutional neural networks are also discussed.\nIntroduction\nAn Artificial Neural Network is a machine learning model that is inspired by the way biological nervous systems, such as the brain, process information. The building block of neural networks is called a neuron. All a neuron does is take a weighted sum of its inputs and pass it through some non-linear function (activation function) to produce its output. A (feed-forward) neural network is a bunch of neurons arranged in layers, where each neuron in layer i takes its input from all the neurons in layer i-1. For more information on how neural networks work, follow this link.\nIn this notebook, we'll look at how a neural network can be used to recognize digits. We'll train the network on the USPS dataset of handwritten digits.\nWe'll start by loading the data and dividing it into a training set, a validation set, and a test set. The USPS dataset has 9298 examples of handwritten digits. We'll intentionally use just a small portion (1000 examples) of the dataset for training. 
This is to keep training time small and to illustrate the effects of different regularization methods.", "%pylab inline\n%matplotlib inline\nimport os\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\nfrom scipy.io import loadmat\nfrom shogun import RealFeatures, MulticlassLabels, Math\n\n# load the dataset\ndataset = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))\n\nXall = dataset['data']\n# the usps dataset has the digits labeled from 1 to 10 \n# we'll subtract 1 to make them in the 0-9 range instead\nYall = np.array(dataset['label'].squeeze(), dtype=np.double)-1 \n\n# 1000 examples for training\nXtrain = RealFeatures(Xall[:,0:1000])\nYtrain = MulticlassLabels(Yall[0:1000])\n\n# 4000 examples for validation\nXval = RealFeatures(Xall[:,1001:5001])\nYval = MulticlassLabels(Yall[1001:5001])\n\n# the rest for testing\nXtest = RealFeatures(Xall[:,5002:-1])\nYtest = MulticlassLabels(Yall[5002:-1])\n\n# initialize the random number generator with a fixed seed, for repeatability\nMath.init_random(10)", "Creating the network\nTo create a neural network in shogun, we'll first create an instance of NeuralNetwork and then initialize it by telling it how many inputs it has and what type of layers it contains. To specify the layers of the network, a DynamicObjectArray is used. The array contains instances of NeuralLayer-based classes that determine the type of neurons each layer consists of. Some of the supported layer types are: NeuralLinearLayer, NeuralLogisticLayer and\nNeuralSoftmaxLayer.\nWe'll create a feed-forward, fully connected (every neuron is connected to all neurons in the layer below) neural network with 2 logistic hidden layers and a softmax output layer. The network will have 256 inputs, one for each pixel (16*16 image). The first hidden layer will have 256 neurons, the second will have 128 neurons, and the output layer will have 10 neurons, one for each digit class. 
Note that we're using a big network, compared with the size of the training set. This is to emphasize the effects of different regularization methods. We'll try training the network with:\n\nNo regularization\nL2 regularization\nL1 regularization\nDropout regularization\n\nTherefore, we'll create 4 versions of the network, train each one of them differently, and then compare the results on the validation set.", "from shogun import NeuralNetwork, NeuralInputLayer, NeuralLogisticLayer, NeuralSoftmaxLayer\nfrom shogun import DynamicObjectArray\n\n# setup the layers\nlayers = DynamicObjectArray()\nlayers.append_element(NeuralInputLayer(256)) # input layer, 256 neurons\nlayers.append_element(NeuralLogisticLayer(256)) # first hidden layer, 256 neurons\nlayers.append_element(NeuralLogisticLayer(128)) # second hidden layer, 128 neurons\nlayers.append_element(NeuralSoftmaxLayer(10)) # output layer, 10 neurons\n\n# create the networks\nnet_no_reg = NeuralNetwork(layers)\nnet_no_reg.quick_connect()\nnet_no_reg.initialize_neural_network()\n\nnet_l2 = NeuralNetwork(layers)\nnet_l2.quick_connect()\nnet_l2.initialize_neural_network()\n\nnet_l1 = NeuralNetwork(layers)\nnet_l1.quick_connect()\nnet_l1.initialize_neural_network()\n\nnet_dropout = NeuralNetwork(layers)\nnet_dropout.quick_connect()\nnet_dropout.initialize_neural_network()", "We can also visualize what the network would look like. To do that we'll draw a smaller network using networkx. The network we'll draw will have 8 inputs (labeled X), 8 neurons in the first hidden layer (labeled H), 4 neurons in the second hidden layer (labeled U), and 6 neurons in the output layer (labeled Y). 
Each neuron will be connected to all neurons in the layer that precedes it.", "# import networkx, install if necessary\ntry:\n import networkx as nx\nexcept ImportError:\n import pip\n pip.main(['install', '--user', 'networkx'])\n import networkx as nx\n \nG = nx.DiGraph()\npos = {}\n\nfor i in range(8):\n pos['X'+str(i)] = (i,0) # 8 neurons in the input layer\n pos['H'+str(i)] = (i,1) # 8 neurons in the first hidden layer\n \n for j in range(8): G.add_edge('X'+str(j),'H'+str(i))\n \n if i<4:\n pos['U'+str(i)] = (i+2,2) # 4 neurons in the second hidden layer\n for j in range(8): G.add_edge('H'+str(j),'U'+str(i))\n \n if i<6:\n pos['Y'+str(i)] = (i+1,3) # 6 neurons in the output layer\n for j in range(4): G.add_edge('U'+str(j),'Y'+str(i))\n\nnx.draw(G, pos, node_color='y', node_size=750)", "Training\nNeuralNetwork supports two methods for training: LBFGS (default) and mini-batch gradient descent.\nLBFGS is a full-batch optimization method: it looks at the entire training set each time before it changes the network's parameters. This makes it slow with large datasets. However, it works very well with small/medium size datasets and is very easy to use as it requires no parameter tuning.\nMini-batch Gradient Descent looks at only a small portion of the training set (a mini-batch) before each step, which makes it suitable for large datasets. 
However, it's a bit harder to use than LBFGS because it requires some tuning for its parameters (learning rate, learning rate decay,..)\nTraining in NeuralNetwork stops when:\n\nNumber of epochs (iterations over the entire training set) exceeds max_num_epochs\nThe (percentage) difference in error between the current and previous iterations is smaller than epsilon, i.e the error is not anymore being reduced by training\n\nTo see all the options supported for training, check the documentation\nWe'll first write a small function to calculate the classification accuracy on the validation set, so that we can compare different models:", "from shogun import MulticlassAccuracy\n\ndef compute_accuracy(net, X, Y):\n predictions = net.apply_multiclass(X)\n\n evaluator = MulticlassAccuracy()\n accuracy = evaluator.evaluate(predictions, Y)\n return accuracy*100", "Training without regularization\nWe'll start by training the first network without regularization using LBFGS optimization. Note that LBFGS is suitable because we're using a small dataset.", "net_no_reg.set_epsilon(1e-6)\nnet_no_reg.set_max_num_epochs(600)\n\n# uncomment this line to allow the training progress to be printed on the console\n#from shogun import MSG_INFO; net_no_reg.io.set_loglevel(MSG_INFO)\n\nnet_no_reg.set_labels(Ytrain)\nnet_no_reg.train(Xtrain) # this might take a while, depending on your machine\n\n# compute accuracy on the validation set\nprint \"Without regularization, accuracy on the validation set =\", compute_accuracy(net_no_reg, Xval, Yval), \"%\"", "Training with L2 regularization\nWe'll train another network, but with L2 regularization. This type of regularization attempts to prevent overfitting by penalizing large weights. 
This is done by adding $\\frac{1}{2} \\lambda \\Vert W \\Vert_2^2$ to the optimization objective that the network tries to minimize, where $\\lambda$ is the regularization coefficient.", "# turn on L2 regularization\nnet_l2.set_l2_coefficient(3e-4)\n\nnet_l2.set_epsilon(1e-6)\nnet_l2.set_max_num_epochs(600)\n\nnet_l2.set_labels(Ytrain)\nnet_l2.train(Xtrain) # this might take a while, depending on your machine\n\n# compute accuracy on the validation set\nprint \"With L2 regularization, accuracy on the validation set =\", compute_accuracy(net_l2, Xval, Yval), \"%\"", "Training with L1 regularization\nWe'll now try L1 regularization. It works by adding $\\lambda \\Vert W \\Vert_1$ to the optimization objective. This has the effect of penalizing all non-zero weights, therefore pushing all the weights to be close to 0.", "# turn on L1 regularization\nnet_l1.set_l1_coefficient(3e-5)\n\nnet_l1.set_epsilon(1e-6)\nnet_l1.set_max_num_epochs(600)\n\nnet_l1.set_labels(Ytrain)\nnet_l1.train(Xtrain) # this might take a while, depending on your machine\n\n# compute accuracy on the validation set\nprint \"With L1 regularization, accuracy on the validation set =\", compute_accuracy(net_l1, Xval, Yval), \"%\"", "Training with dropout\nThe idea behind dropout is very simple: randomly ignore some neurons during each training iteration. When used on neurons in the hidden layers, it has the effect of forcing each neuron to learn to extract features that are useful in any context, regardless of what the other hidden neurons in its layer decide to do. 
Dropout can also be used on the inputs to the network by randomly omitting a small fraction of them during each iteration.\nWhen using dropout, it's usually useful to limit the L2 norm of a neuron's incoming weight vector to some constant value.\nDue to the stochastic nature of dropout, LBFGS optimization doesn't work well with it, so we'll use mini-batch gradient descent instead.", "from shogun import NNOM_GRADIENT_DESCENT\n\n# set the dropout probability for neurons in the hidden layers\nnet_dropout.set_dropout_hidden(0.5)\n# set the dropout probability for the inputs\nnet_dropout.set_dropout_input(0.2)\n# limit the maximum incoming weight vector length for neurons\nnet_dropout.set_max_norm(15)\n\nnet_dropout.set_epsilon(1e-6)\nnet_dropout.set_max_num_epochs(600)\n\n# use gradient descent for optimization\nnet_dropout.set_optimization_method(NNOM_GRADIENT_DESCENT)\nnet_dropout.set_gd_learning_rate(0.5)\nnet_dropout.set_gd_mini_batch_size(100)\n\nnet_dropout.set_labels(Ytrain)\nnet_dropout.train(Xtrain) # this might take a while, depending on your machine\n\n# compute accuracy on the validation set\nprint \"With dropout, accuracy on the validation set =\", compute_accuracy(net_dropout, Xval, Yval), \"%\"", "Convolutional Neural Networks\nNow we'll look at a different type of network, namely convolutional neural networks. A convolutional net operates on two principles:\n\nLocal connectivity: Convolutional nets work with inputs that have some sort of spatial structure, where the order of the input features matters, e.g. images. Local connectivity means that each neuron will be connected only to a small neighbourhood of pixels.\nWeight sharing: Different neurons use the same set of weights. This greatly reduces the number of free parameters, and therefore makes the optimization process easier and acts as a good regularizer. \n\nWith that in mind, each layer in a convolutional network consists of a number of feature maps. 
Each feature map is produced by convolving a small filter with the layer's inputs, adding a bias, and then applying some non-linear activation function. The convolution operation satisfies the local connectivity and the weight sharing constraints. Additionally, a max-pooling operation can be performed on each feature map by dividing it into small non-overlapping regions and taking the maximum over each region. This adds some translation invariance and improves the performance.\nConvolutional nets in Shogun are handled through the CNeuralNetwork class along with the CNeuralConvolutionalLayer class. A CNeuralConvolutionalLayer represents a convolutional layer with multiple feature maps, optional max-pooling, and support for different types of activation functions.\nNow we'll create a convolutional neural network with two convolutional layers and a softmax output layer. We'll use the rectified linear activation function for the convolutional layers:", "from shogun import NeuralConvolutionalLayer, CMAF_RECTIFIED_LINEAR\n\n# prepare the layers\nlayers_conv = DynamicObjectArray()\n\n# input layer, a 16x16 single-channel image\nlayers_conv.append_element(NeuralInputLayer(16,16,1)) \n\n# the first convolutional layer: 10 feature maps, filters with radius 2 (5x5 filters)\n# and max-pooling in a 2x2 region: its output will be 10 8x8 feature maps\nlayers_conv.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 10, 2, 2, 2, 2))\n\n# the second convolutional layer: 15 feature maps, filters with radius 2 (5x5 filters)\n# and max-pooling in a 2x2 region: its output will be 15 4x4 feature maps\nlayers_conv.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 15, 2, 2, 2, 2))\n\n# output layer\nlayers_conv.append_element(NeuralSoftmaxLayer(10))\n\n# create and initialize the network\nnet_conv = NeuralNetwork(layers_conv)\nnet_conv.quick_connect()\nnet_conv.initialize_neural_network()", "Now we can train the network. 
Like in the previous section, we'll use gradient descent with dropout and max-norm regularization:", "# 50% dropout in the input layer\nnet_conv.set_dropout_input(0.5)\n\n# max-norm regularization\nnet_conv.set_max_norm(1.0)\n\n# set gradient descent parameters\nnet_conv.set_optimization_method(NNOM_GRADIENT_DESCENT)\nnet_conv.set_gd_learning_rate(0.01)\nnet_conv.set_gd_mini_batch_size(100)\nnet_conv.set_epsilon(0.0)\nnet_conv.set_max_num_epochs(100)\n\n# start training\nnet_conv.set_labels(Ytrain)\nnet_conv.train(Xtrain)\n\n# compute accuracy on the validation set\nprint \"With a convolutional network, accuracy on the validation set =\", compute_accuracy(net_conv, Xval, Yval), \"%\"", "Evaluation\nAccording to the accuracy on the validation set, the convolutional network works best in our case. Now we'll measure its performance on the test set:", "print \"Accuracy on the test set using the convolutional network =\", compute_accuracy(net_conv, Xtest, Ytest), \"%\"", "We can also look at some of the images and the network's response to each of them:", "predictions = net_conv.apply_multiclass(Xtest)\n\n_=figure(figsize=(10,12))\n# plot some images, with the predicted label as the title of each image\n# this code is borrowed from the KNN notebook by Chiyuan Zhang and Sören Sonnenburg \nfor i in range(100):\n ax=subplot(10,10,i+1)\n title(int(predictions[i]))\n ax.imshow(Xtest[:,i].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r)\n ax.set_xticks([])\n ax.set_yticks([])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
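The dropout scheme described in the notebook above amounts to multiplying each layer's activations by a random binary mask. A stdlib-only sketch of the "inverted dropout" variant (an illustration only, not Shogun's implementation, which rescales at test time instead):

```python
import random

def dropout(activations, p_drop, rng):
    # Zero each unit with probability p_drop and scale the survivors by
    # 1/(1 - p_drop), so the expected value of each activation is
    # unchanged and no rescaling is needed at test time.
    keep = 1.0 - p_drop
    return [a / keep if rng.random() >= p_drop else 0.0 for a in activations]

rng = random.Random(0)
print(dropout([1.0, 2.0, 3.0, 4.0], 0.5, rng))
```

With `p_drop=0.5` every surviving activation is doubled and the rest are exactly zero, which is what forces each hidden neuron to learn features that remain useful whichever of its neighbours happen to be active.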
mne-tools/mne-tools.github.io
0.19/_downloads/d5764d6befb13ad52368247a508e45f6/plot_3d_to_2d.ipynb
bsd-3-clause
[ "%matplotlib inline", "====================================================\nHow to convert 3D electrode positions to a 2D image.\n====================================================\nSometimes we want to convert a 3D representation of electrodes into a 2D\nimage. For example, if we are using electrocorticography it is common to\ncreate scatterplots on top of a brain, with each point representing an\nelectrode.\nIn this example, we'll show two ways of doing this in MNE-Python. First,\nif we have the 3D locations of each electrode then we can use Mayavi to\ntake a snapshot of a view of the brain. If we do not have these 3D locations,\nand only have a 2D image of the electrodes on the brain, we can use the\n:class:mne.viz.ClickableImage class to choose our own electrode positions\non the image.", "# Authors: Christopher Holdgraf <choldgraf@berkeley.edu>\n#\n# License: BSD (3-clause)\nfrom scipy.io import loadmat\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom os import path as op\n\nimport mne\nfrom mne.viz import ClickableImage # noqa\nfrom mne.viz import (plot_alignment, snapshot_brain_montage,\n set_3d_view)\n\n\nprint(__doc__)\n\nsubjects_dir = mne.datasets.sample.data_path() + '/subjects'\npath_data = mne.datasets.misc.data_path() + '/ecog/sample_ecog.mat'\n\n# We've already clicked and exported\nlayout_path = op.join(op.dirname(mne.__file__), 'data', 'image')\nlayout_name = 'custom_layout.lout'", "Load data\nFirst we'll load a sample ECoG dataset which we'll use for generating\na 2D snapshot.", "mat = loadmat(path_data)\nch_names = mat['ch_names'].tolist()\nelec = mat['elec'] # electrode coordinates in meters\n# Now we make a montage stating that the sEEG contacts are in head\n# coordinate system (although they are in MRI). 
This is compensated\n# by the fact that below we do not specify a trans file so the Head<->MRI\n# transform is the identity.\nmontage = mne.channels.make_dig_montage(ch_pos=dict(zip(ch_names, elec)),\n coord_frame='head')\ninfo = mne.create_info(ch_names, 1000., 'ecog', montage=montage)\nprint('Created %s channel positions' % len(ch_names))", "Project 3D electrodes to a 2D snapshot\nBecause we have the 3D location of each electrode, we can use the\n:func:mne.viz.snapshot_brain_montage function to return a 2D image along\nwith the electrode positions on that image. We use this in conjunction with\n:func:mne.viz.plot_alignment, which visualizes electrode positions.", "fig = plot_alignment(info, subject='sample', subjects_dir=subjects_dir,\n surfaces=['pial'], meg=False)\nset_3d_view(figure=fig, azimuth=200, elevation=70)\nxy, im = snapshot_brain_montage(fig, montage)\n\n# Convert from a dictionary to array to plot\nxy_pts = np.vstack([xy[ch] for ch in info['ch_names']])\n\n# Define an arbitrary \"activity\" pattern for viz\nactivity = np.linspace(100, 200, xy_pts.shape[0])\n\n# This allows us to use matplotlib to create arbitrary 2d scatterplots\nfig2, ax = plt.subplots(figsize=(10, 10))\nax.imshow(im)\nax.scatter(*xy_pts.T, c=activity, s=200, cmap='coolwarm')\nax.set_axis_off()\n# fig2.savefig('./brain.png', bbox_inches='tight') # For ClickableImage", "Manually creating 2D electrode positions\nIf we don't have the 3D electrode positions then we can still create a\n2D representation of the electrodes. Assuming that you can see the electrodes\non the 2D image, we can use :class:mne.viz.ClickableImage to open the image\ninteractively. You can click points on the image and the x/y coordinate will\nbe stored.\nWe'll open an image file, then use ClickableImage to\nreturn 2D locations of mouse clicks (or load a file already created).\nThen, we'll return these xy positions as a layout for use with plotting topo\nmaps.", "# This code opens the image so you can click on it. 
Commented out\n# because we've stored the clicks as a layout file already.\n\n# # The click coordinates are stored as a list of tuples\n# im = plt.imread('./brain.png')\n# click = ClickableImage(im)\n# click.plot_clicks()\n\n# # Generate a layout from our clicks and normalize by the image\n# print('Generating and saving layout...')\n# lt = click.to_layout()\n# lt.save(op.join(layout_path, layout_name)) # To save if we want\n\n# # We've already got the layout, load it\nlt = mne.channels.read_layout(layout_name, path=layout_path, scale=False)\nx = lt.pos[:, 0] * float(im.shape[1])\ny = (1 - lt.pos[:, 1]) * float(im.shape[0]) # Flip the y-position\nfig, ax = plt.subplots()\nax.imshow(im)\nax.scatter(x, y, s=120, color='r')\nplt.autoscale(tight=True)\nax.set_axis_off()\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
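The last cell above maps normalized layout positions to pixel coordinates with `x = pos * width` and `y = (1 - pos) * height`. That transform can be isolated as a small helper (a hypothetical sketch, not part of MNE's API):

```python
def layout_to_pixels(positions, width, height):
    # Layout positions are normalized to [0, 1] with the origin at the
    # bottom-left; image pixel coordinates put the origin at the
    # top-left, so the y axis must be flipped.
    return [(x * width, (1.0 - y) * height) for x, y in positions]

# The normalized centre of a 200x100 image maps to pixel (100, 50).
print(layout_to_pixels([(0.5, 0.5), (0.0, 1.0)], 200, 100))
# → [(100.0, 50.0), (0.0, 0.0)]
```

The y flip is the easy part to get wrong: a layout point at the top of the head (normalized y near 1) must land near pixel row 0, i.e. the top of the image.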