tensorflow/model-optimization
tensorflow_model_optimization/g3doc/guide/pruning/comprehensive_guide.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Pruning comprehensive guide\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/model_optimization/guide/pruning/comprehensive_guide\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/pruning/comprehensive_guide.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/pruning/comprehensive_guide.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/model-optimization/tensorflow_model_optimization/g3doc/guide/pruning/comprehensive_guide.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nWelcome to the comprehensive guide for Keras weight pruning.\nThis page documents various use cases and shows how to use the API for each one. 
Once you know which APIs you need, find the parameters and the low-level details in the\nAPI docs.\n\nIf you want to see the benefits of pruning and what's supported, see the overview.\nFor a single end-to-end example, see the pruning example.\n\nThe following use cases are covered:\n* Define and train a pruned model.\n * Sequential and Functional.\n * Keras model.fit and custom training loops.\n* Checkpoint and deserialize a pruned model.\n* Deploy a pruned model and see compression benefits.\nFor configuration of the pruning algorithm, refer to the tfmot.sparsity.keras.prune_low_magnitude API docs.\nSetup\nTo find the APIs you need and to understand the setup, you can run this section but skip reading it.", "! pip install -q tensorflow-model-optimization\n\nimport tensorflow as tf\nimport numpy as np\nimport tensorflow_model_optimization as tfmot\n\n%load_ext tensorboard\n\nimport tempfile\n\ninput_shape = [20]\nx_train = np.random.randn(1, 20).astype(np.float32)\ny_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20)\n\ndef setup_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Dense(20, input_shape=input_shape),\n tf.keras.layers.Flatten()\n ])\n return model\n\ndef setup_pretrained_weights():\n model = setup_model()\n\n model.compile(\n loss=tf.keras.losses.categorical_crossentropy,\n optimizer='adam',\n metrics=['accuracy']\n )\n\n model.fit(x_train, y_train)\n\n _, pretrained_weights = tempfile.mkstemp('.tf')\n\n model.save_weights(pretrained_weights)\n\n return pretrained_weights\n\ndef get_gzipped_model_size(model):\n # Returns size of gzipped model, in bytes.\n import os\n import zipfile\n\n _, keras_file = tempfile.mkstemp('.h5')\n model.save(keras_file, include_optimizer=False)\n\n _, zipped_file = tempfile.mkstemp('.zip')\n with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:\n f.write(keras_file)\n\n return os.path.getsize(zipped_file)\n\nsetup_model()\npretrained_weights = 
setup_pretrained_weights()", "Define model\nPrune whole model (Sequential and Functional)\nTips for better model accuracy:\n* Try \"Prune some layers\" to skip pruning the layers that reduce accuracy the most.\n* It's generally better to finetune with pruning as opposed to training from scratch.\nTo make the whole model train with pruning, apply tfmot.sparsity.keras.prune_low_magnitude to the model.", "base_model = setup_model()\nbase_model.load_weights(pretrained_weights) # optional but recommended.\n\nmodel_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)\n\nmodel_for_pruning.summary()", "Prune some layers (Sequential and Functional)\nPruning a model can have a negative effect on accuracy. You can selectively prune layers of a model to explore the trade-off between accuracy, speed, and model size.\nTips for better model accuracy:\n* It's generally better to finetune with pruning as opposed to training from scratch.\n* Try pruning the later layers instead of the first layers.\n* Avoid pruning critical layers (e.g. 
attention mechanism).\nMore:\n* The tfmot.sparsity.keras.prune_low_magnitude API docs provide details on how to vary the pruning configuration per layer.\nIn the example below, prune only the Dense layers.", "# Create a base model\nbase_model = setup_model()\nbase_model.load_weights(pretrained_weights) # optional but recommended for model accuracy\n\n# Helper function uses `prune_low_magnitude` to make only the \n# Dense layers train with pruning.\ndef apply_pruning_to_dense(layer):\n if isinstance(layer, tf.keras.layers.Dense):\n return tfmot.sparsity.keras.prune_low_magnitude(layer)\n return layer\n\n# Use `tf.keras.models.clone_model` to apply `apply_pruning_to_dense` \n# to the layers of the model.\nmodel_for_pruning = tf.keras.models.clone_model(\n base_model,\n clone_function=apply_pruning_to_dense,\n)\n\nmodel_for_pruning.summary()", "While this example used the type of the layer to decide what to prune, the easiest way to prune a particular layer is to set its name property, and look for that name in the clone_function.", "print(base_model.layers[0].name)", "More readable but potentially lower model accuracy\nThis is not compatible with fine-tuning with pruning, which is why it may be less accurate than the above examples which\nsupport fine-tuning.\nWhile prune_low_magnitude can be applied while defining the initial model, loading the weights after does not work in the below examples.\nFunctional example", "# Use `prune_low_magnitude` to make the `Dense` layer train with pruning.\ni = tf.keras.Input(shape=(20,))\nx = tfmot.sparsity.keras.prune_low_magnitude(tf.keras.layers.Dense(10))(i)\no = tf.keras.layers.Flatten()(x)\nmodel_for_pruning = tf.keras.Model(inputs=i, outputs=o)\n\nmodel_for_pruning.summary()", "Sequential example", "# Use `prune_low_magnitude` to make the `Dense` layer train with pruning.\nmodel_for_pruning = tf.keras.Sequential([\n tfmot.sparsity.keras.prune_low_magnitude(tf.keras.layers.Dense(20, input_shape=input_shape)),\n 
tf.keras.layers.Flatten()\n])\n\nmodel_for_pruning.summary()", "Prune custom Keras layer or modify parts of layer to prune\nCommon mistake: pruning the bias usually harms model accuracy too much.\ntfmot.sparsity.keras.PrunableLayer serves two use cases:\n1. Prune a custom Keras layer.\n2. Modify parts of a built-in Keras layer to prune.\nFor example, the API defaults to only pruning the kernel of the\nDense layer. The example below prunes the bias also.", "class MyDenseLayer(tf.keras.layers.Dense, tfmot.sparsity.keras.PrunableLayer):\n\n def get_prunable_weights(self):\n # Prune bias also, though that usually harms model accuracy too much.\n return [self.kernel, self.bias]\n\n# Use `prune_low_magnitude` to make the `MyDenseLayer` layer train with pruning.\nmodel_for_pruning = tf.keras.Sequential([\n tfmot.sparsity.keras.prune_low_magnitude(MyDenseLayer(20, input_shape=input_shape)),\n tf.keras.layers.Flatten()\n])\n\nmodel_for_pruning.summary()\n", "Train model\nModel.fit\nCall the tfmot.sparsity.keras.UpdatePruningStep callback during training. 
\nTo help debug training, use the tfmot.sparsity.keras.PruningSummaries callback.", "# Define the model.\nbase_model = setup_model()\nbase_model.load_weights(pretrained_weights) # optional but recommended for model accuracy\nmodel_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)\n\nlog_dir = tempfile.mkdtemp()\ncallbacks = [\n tfmot.sparsity.keras.UpdatePruningStep(),\n # Log sparsity and other metrics in Tensorboard.\n tfmot.sparsity.keras.PruningSummaries(log_dir=log_dir)\n]\n\nmodel_for_pruning.compile(\n loss=tf.keras.losses.categorical_crossentropy,\n optimizer='adam',\n metrics=['accuracy']\n)\n\nmodel_for_pruning.fit(\n x_train,\n y_train,\n callbacks=callbacks,\n epochs=2,\n)\n\n#docs_infra: no_execute\n%tensorboard --logdir={log_dir}", "For non-Colab users, you can see the results of a previous run of this code block on TensorBoard.dev.\nCustom training loop\nCall the tfmot.sparsity.keras.UpdatePruningStep callback during training. \nTo help debug training, use the tfmot.sparsity.keras.PruningSummaries callback.", "# Define the model.\nbase_model = setup_model()\nbase_model.load_weights(pretrained_weights) # optional but recommended for model accuracy\nmodel_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)\n\n# Boilerplate\nloss = tf.keras.losses.categorical_crossentropy\noptimizer = tf.keras.optimizers.Adam()\nlog_dir = tempfile.mkdtemp()\nunused_arg = -1\nepochs = 2\nbatches = 1 # example is hardcoded so that the number of batches cannot change.\n\n# Non-boilerplate.\nmodel_for_pruning.optimizer = optimizer\nstep_callback = tfmot.sparsity.keras.UpdatePruningStep()\nstep_callback.set_model(model_for_pruning)\nlog_callback = tfmot.sparsity.keras.PruningSummaries(log_dir=log_dir) # Log sparsity and other metrics in Tensorboard.\nlog_callback.set_model(model_for_pruning)\n\nstep_callback.on_train_begin() # run pruning callback\nfor _ in range(epochs):\n log_callback.on_epoch_begin(epoch=unused_arg) # run pruning callback\n 
for _ in range(batches):\n step_callback.on_train_batch_begin(batch=unused_arg) # run pruning callback\n\n with tf.GradientTape() as tape:\n logits = model_for_pruning(x_train, training=True)\n loss_value = loss(y_train, logits)\n grads = tape.gradient(loss_value, model_for_pruning.trainable_variables)\n optimizer.apply_gradients(zip(grads, model_for_pruning.trainable_variables))\n\n step_callback.on_epoch_end(batch=unused_arg) # run pruning callback\n\n#docs_infra: no_execute\n%tensorboard --logdir={log_dir}", "For non-Colab users, you can see the results of a previous run of this code block on TensorBoard.dev.\nImprove pruned model accuracy\nFirst, look at the tfmot.sparsity.keras.prune_low_magnitude API docs\nto understand what a pruning schedule is and the math of\neach type of pruning schedule.\nTips:\n\n\nHave a learning rate that's not too high or too low when the model is pruning. Consider the pruning schedule to be a hyperparameter.\n\n\nAs a quick test, try experimenting with pruning a model to the final sparsity at the beginning of training by setting begin_step to 0 with a tfmot.sparsity.keras.ConstantSparsity schedule. You might get lucky with good results.\n\n\nDo not prune very frequently to give the model time to recover. The pruning schedule provides a decent default frequency.\n\n\nFor general ideas to improve model accuracy, look for tips for your use case(s) under \"Define model\".\n\n\nCheckpoint and deserialize\nYou must preserve the optimizer step during checkpointing. 
This means while you can use Keras HDF5 models for checkpointing, you cannot use Keras HDF5 weights.", "# Define the model.\nbase_model = setup_model()\nbase_model.load_weights(pretrained_weights) # optional but recommended for model accuracy\nmodel_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)\n\n_, keras_model_file = tempfile.mkstemp('.h5')\n\n# Checkpoint: saving the optimizer is necessary (include_optimizer=True is the default).\nmodel_for_pruning.save(keras_model_file, include_optimizer=True)", "The above applies generally. The code below is only needed for the HDF5 model format (not HDF5 weights and other formats).", "# Deserialize model.\nwith tfmot.sparsity.keras.prune_scope():\n loaded_model = tf.keras.models.load_model(keras_model_file)\n\nloaded_model.summary()", "Deploy pruned model\nExport model with size compression\nCommon mistake: both strip_pruning and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression\nbenefits of pruning.", "# Define the model.\nbase_model = setup_model()\nbase_model.load_weights(pretrained_weights) # optional but recommended for model accuracy\nmodel_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model)\n\n# Typically you train the model here.\n\nmodel_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)\n\nprint(\"final model\")\nmodel_for_export.summary()\n\nprint(\"\\n\")\nprint(\"Size of gzipped pruned model without stripping: %.2f bytes\" % (get_gzipped_model_size(model_for_pruning)))\nprint(\"Size of gzipped pruned model with stripping: %.2f bytes\" % (get_gzipped_model_size(model_for_export)))", "Hardware-specific optimizations\nOnce different backends enable pruning to improve latency, using block sparsity can improve latency for certain hardware.\nIncreasing the block size will decrease the peak sparsity that's achievable for a target model accuracy. 
Despite this, latency can still improve.\nFor details on what's supported for block sparsity, see\nthe tfmot.sparsity.keras.prune_low_magnitude API docs.", "base_model = setup_model()\n\n# For using intrinsics on a CPU with 128-bit registers, together with 8-bit\n# quantized weights, a 1x16 block size is nice because the block perfectly\n# fits into the register.\npruning_params = {'block_size': [1, 16]}\nmodel_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(base_model, **pruning_params)\n\nmodel_for_pruning.summary()" ]
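The block-sparsity idea above can be sketched in plain NumPy. The helper below is a conceptual illustration (not tfmot's implementation): it tiles a 2-D weight matrix into 1x16 blocks, scores each block by its mean absolute value, and zeroes out the lowest-scoring blocks until the target sparsity is reached.

```python
import numpy as np

def prune_blocks(weights, block_size=(1, 16), sparsity=0.5):
    """Zero out the lowest-magnitude blocks of a 2-D weight matrix.

    Sketch of block-sparse magnitude pruning: the matrix is tiled into
    `block_size` blocks, blocks are ranked by mean absolute value, and
    the smallest fraction `sparsity` of blocks is set to zero. Assumes
    the matrix dimensions are divisible by the block size.
    """
    rows, cols = weights.shape
    br, bc = block_size
    # View the matrix as a grid of (br x bc) blocks and score each block.
    blocks = weights.reshape(rows // br, br, cols // bc, bc)
    scores = np.abs(blocks).mean(axis=(1, 3))        # one score per block
    k = int(sparsity * scores.size)                  # number of blocks to drop
    threshold = np.sort(scores, axis=None)[k - 1] if k else -np.inf
    mask = (scores > threshold)[:, None, :, None]    # broadcast back to blocks
    return (blocks * mask).reshape(rows, cols)

np.random.seed(0)
w = np.random.randn(4, 32).astype(np.float32)
pruned = prune_blocks(w, block_size=(1, 16), sparsity=0.5)
print("sparsity:", np.mean(pruned == 0.0))
```

Surviving weights are untouched; only whole 1x16 blocks become zero, which is what lets SIMD kernels skip entire register-sized chunks.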
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
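The point that pruning only shows size benefits after a standard compression algorithm is applied can be demonstrated in miniature without TensorFlow: a dense random weight matrix barely compresses, while the same matrix pruned to 80% sparsity compresses well, because the runs of zero bytes are what gzip exploits. The matrix here is made up purely for illustration.

```python
import gzip
import numpy as np

def gzipped_size(array):
    # Size in bytes of the array's raw buffer after gzip compression.
    return len(gzip.compress(array.tobytes()))

np.random.seed(0)
dense = np.random.randn(256, 256).astype(np.float32)

# Magnitude pruning to 80% sparsity: keep only the largest 20% of weights.
threshold = np.quantile(np.abs(dense), 0.8)
pruned = np.where(np.abs(dense) >= threshold, dense, 0.0).astype(np.float32)

# Both arrays have the same uncompressed size; only the pruned one shrinks.
print("dense :", gzipped_size(dense), "bytes")
print("pruned:", gzipped_size(pruned), "bytes")
```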
johnnyliu27/openmc
examples/jupyter/tally-arithmetic.ipynb
mit
[ "This notebook shows how tallies can be combined (added, subtracted, multiplied, etc.) using the Python API in order to create derived tallies. Since no covariance information is obtained, it is assumed that tallies are completely independent of one another when propagating uncertainties. The target problem is a simple pin cell.", "import glob\n\nfrom IPython.display import Image\nimport numpy as np\nimport openmc", "Generate Input Files\nFirst we need to define materials that will be used in the problem. We'll create three materials for the fuel, water, and cladding of the fuel pin.", "# 1.6% enriched fuel\nfuel = openmc.Material(name='1.6% Fuel')\nfuel.set_density('g/cm3', 10.31341)\nfuel.add_nuclide('U235', 3.7503e-4)\nfuel.add_nuclide('U238', 2.2625e-2)\nfuel.add_nuclide('O16', 4.6007e-2)\n\n# borated water\nwater = openmc.Material(name='Borated Water')\nwater.set_density('g/cm3', 0.740582)\nwater.add_nuclide('H1', 4.9457e-2)\nwater.add_nuclide('O16', 2.4732e-2)\nwater.add_nuclide('B10', 8.0042e-6)\n\n# zircaloy\nzircaloy = openmc.Material(name='Zircaloy')\nzircaloy.set_density('g/cm3', 6.55)\nzircaloy.add_nuclide('Zr90', 7.2758e-3)", "With our three materials, we can now create a materials file object that can be exported to an actual XML file.", "# Instantiate a Materials collection\nmaterials_file = openmc.Materials([fuel, water, zircaloy])\n\n# Export to \"materials.xml\"\nmaterials_file.export_to_xml()", "Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. 
The first step is to create the bounding surfaces -- in this case two cylinders and six planes.", "# Create cylinders for the fuel and clad\nfuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)\nclad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)\n\n# Create boundary planes to surround the geometry\n# Use both reflective and vacuum boundaries to make life interesting\nmin_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')\nmax_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')\nmin_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')\nmax_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')\nmin_z = openmc.ZPlane(z0=-100., boundary_type='vacuum')\nmax_z = openmc.ZPlane(z0=+100., boundary_type='vacuum')", "With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.", "# Create a Universe to encapsulate a fuel pin\npin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')\n\n# Create fuel Cell\nfuel_cell = openmc.Cell(name='1.6% Fuel')\nfuel_cell.fill = fuel\nfuel_cell.region = -fuel_outer_radius\npin_cell_universe.add_cell(fuel_cell)\n\n# Create a clad Cell\nclad_cell = openmc.Cell(name='1.6% Clad')\nclad_cell.fill = zircaloy\nclad_cell.region = +fuel_outer_radius & -clad_outer_radius\npin_cell_universe.add_cell(clad_cell)\n\n# Create a moderator Cell\nmoderator_cell = openmc.Cell(name='1.6% Moderator')\nmoderator_cell.fill = water\nmoderator_cell.region = +clad_outer_radius\npin_cell_universe.add_cell(moderator_cell)", "OpenMC requires that there is a \"root\" universe. 
Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.", "# Create root Cell\nroot_cell = openmc.Cell(name='root cell')\nroot_cell.fill = pin_cell_universe\n\n# Add boundary planes\nroot_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z\n\n# Create root Universe\nroot_universe = openmc.Universe(universe_id=0, name='root universe')\nroot_universe.add_cell(root_cell)", "We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.", "# Create Geometry and set root Universe\ngeometry = openmc.Geometry(root_universe)\n\n# Export to \"geometry.xml\"\ngeometry.export_to_xml()", "With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 5 inactive batches and 15 active batches each with 2500 particles.", "# OpenMC simulation parameters\nbatches = 20\ninactive = 5\nparticles = 2500\n\n# Instantiate a Settings object\nsettings_file = openmc.Settings()\nsettings_file.batches = batches\nsettings_file.inactive = inactive\nsettings_file.particles = particles\nsettings_file.output = {'tallies': True}\n\n# Create an initial uniform spatial source distribution over fissionable zones\nbounds = [-0.63, -0.63, -100., 0.63, 0.63, 100.]\nuniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)\nsettings_file.source = openmc.source.Source(space=uniform_dist)\n\n# Export to \"settings.xml\"\nsettings_file.export_to_xml()", "Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.", "# Instantiate a Plot\nplot = openmc.Plot(plot_id=1)\nplot.filename = 'materials-xy'\nplot.origin = [0, 0, 0]\nplot.width = [1.26, 1.26]\nplot.pixels = [250, 250]\nplot.color_by = 'material'\n\n# Instantiate a Plots collection and export to \"plots.xml\"\nplot_file = openmc.Plots([plot])\nplot_file.export_to_xml()", "With the plots.xml file, we can 
now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.", "# Run openmc in plotting mode\nopenmc.plot_geometry(output=False)\n\n# Convert OpenMC's funky ppm to png\n!convert materials-xy.ppm materials-xy.png\n\n# Display the materials plot inline\nImage(filename='materials-xy.png')", "As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a variety of tallies.", "# Instantiate an empty Tallies object\ntallies_file = openmc.Tallies()\n\n# Create Tallies to compute microscopic multi-group cross-sections\n\n# Instantiate energy filter for multi-group cross-section Tallies\nenergy_filter = openmc.EnergyFilter([0., 0.625, 20.0e6])\n\n# Instantiate flux Tally in moderator and fuel\ntally = openmc.Tally(name='flux')\ntally.filters = [openmc.CellFilter([fuel_cell, moderator_cell])]\ntally.filters.append(energy_filter)\ntally.scores = ['flux']\ntallies_file.append(tally)\n\n# Instantiate reaction rate Tally in fuel\ntally = openmc.Tally(name='fuel rxn rates')\ntally.filters = [openmc.CellFilter(fuel_cell)]\ntally.filters.append(energy_filter)\ntally.scores = ['nu-fission', 'scatter']\ntally.nuclides = ['U238', 'U235']\ntallies_file.append(tally)\n\n# Instantiate reaction rate Tally in moderator\ntally = openmc.Tally(name='moderator rxn rates')\ntally.filters = [openmc.CellFilter(moderator_cell)]\ntally.filters.append(energy_filter)\ntally.scores = ['absorption', 'total']\ntally.nuclides = ['O16', 'H1']\ntallies_file.append(tally)\n\n# Instantiate a tally mesh\nmesh = openmc.Mesh(mesh_id=1)\nmesh.type = 'regular'\nmesh.dimension = [1, 1, 1]\nmesh.lower_left = [-0.63, -0.63, -100.]\nmesh.width = [1.26, 1.26, 200.]\nmeshsurface_filter = openmc.MeshSurfaceFilter(mesh)\n\n# Instantiate thermal, fast, and total leakage tallies\nleak = 
openmc.Tally(name='leakage')\nleak.filters = [meshsurface_filter]\nleak.scores = ['current']\ntallies_file.append(leak)\n\nthermal_leak = openmc.Tally(name='thermal leakage')\nthermal_leak.filters = [meshsurface_filter, openmc.EnergyFilter([0., 0.625])]\nthermal_leak.scores = ['current']\ntallies_file.append(thermal_leak)\n\nfast_leak = openmc.Tally(name='fast leakage')\nfast_leak.filters = [meshsurface_filter, openmc.EnergyFilter([0.625, 20.0e6])]\nfast_leak.scores = ['current']\ntallies_file.append(fast_leak)\n\n# K-Eigenvalue (infinity) tallies\nfiss_rate = openmc.Tally(name='fiss. rate')\nabs_rate = openmc.Tally(name='abs. rate')\nfiss_rate.scores = ['nu-fission']\nabs_rate.scores = ['absorption']\ntallies_file += (fiss_rate, abs_rate)\n\n# Resonance Escape Probability tallies\ntherm_abs_rate = openmc.Tally(name='therm. abs. rate')\ntherm_abs_rate.scores = ['absorption']\ntherm_abs_rate.filters = [openmc.EnergyFilter([0., 0.625])]\ntallies_file.append(therm_abs_rate)\n\n# Thermal Flux Utilization tallies\nfuel_therm_abs_rate = openmc.Tally(name='fuel therm. abs. rate')\nfuel_therm_abs_rate.scores = ['absorption']\nfuel_therm_abs_rate.filters = [openmc.EnergyFilter([0., 0.625]),\n openmc.CellFilter([fuel_cell])]\ntallies_file.append(fuel_therm_abs_rate)\n\n# Fast Fission Factor tallies\ntherm_fiss_rate = openmc.Tally(name='therm. fiss. 
rate')\ntherm_fiss_rate.scores = ['nu-fission']\ntherm_fiss_rate.filters = [openmc.EnergyFilter([0., 0.625])]\ntallies_file.append(therm_fiss_rate)\n\n# Instantiate energy filter to illustrate Tally slicing\nfine_energy_filter = openmc.EnergyFilter(np.logspace(np.log10(1e-2), np.log10(20.0e6), 10))\n\n# Instantiate flux Tally in moderator and fuel\ntally = openmc.Tally(name='need-to-slice')\ntally.filters = [openmc.CellFilter([fuel_cell, moderator_cell])]\ntally.filters.append(fine_energy_filter)\ntally.scores = ['nu-fission', 'scatter']\ntally.nuclides = ['H1', 'U238']\ntallies_file.append(tally)\n\n# Export to \"tallies.xml\"\ntallies_file.export_to_xml()", "Now we have a complete set of inputs, so we can go ahead and run our simulation.", "# Run OpenMC!\nopenmc.run()", "Tally Data Processing\nOur simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis here by loading the statepoint file and 'reading' the results. By default, the tally results are not read into memory because they might be large, even large enough to exceed the available memory on a computer.", "# Load the statepoint file\nsp = openmc.StatePoint('statepoint.20.h5')", "We have a tally of the total fission rate and the total absorption rate, so we can calculate k-eff as:\n$$k_{eff} = \\frac{\\langle \\nu \\Sigma_f \\phi \\rangle}{\\langle \\Sigma_a \\phi \\rangle + \\langle L \\rangle}$$\nIn this notation, $\\langle \\cdot \\rangle^a_b$ represents an OpenMC tally that is integrated over region $a$ and energy range $b$. If $a$ or $b$ is not reported, it means the value represents an integral over all space or all energy, respectively.", "# Get the fission and absorption rate tallies\nfiss_rate = sp.get_tally(name='fiss. rate')\nabs_rate = sp.get_tally(name='abs. 
rate')\n\n# Get the leakage tally\nleak = sp.get_tally(name='leakage')\nleak = leak.summation(filter_type=openmc.MeshSurfaceFilter, remove_filter=True)\n\n# Compute k-effective using tally arithmetic\nkeff = fiss_rate / (abs_rate + leak)\nkeff.get_pandas_dataframe()", "Notice that even though the neutron production rate, absorption rate, and current are separate tallies, we still get a first-order estimate of the uncertainty on the quotient of them automatically!\nOften in textbooks you'll see k-eff represented using the six-factor formula $$k_{eff} = p \\epsilon f \\eta P_{FNL} P_{TNL}.$$ Let's analyze each of these factors, starting with the resonance escape probability which is defined as $$p=\\frac{\\langle\\Sigma_a\\phi\\rangle_T + \\langle L \\rangle_T}{\\langle\\Sigma_a\\phi\\rangle + \\langle L \\rangle_T}$$ where the subscript $T$ means thermal energies.", "# Compute resonance escape probability using tally arithmetic\ntherm_abs_rate = sp.get_tally(name='therm. abs. rate')\nthermal_leak = sp.get_tally(name='thermal leakage')\nthermal_leak = thermal_leak.summation(filter_type=openmc.MeshSurfaceFilter, remove_filter=True)\nres_esc = (therm_abs_rate + thermal_leak) / (abs_rate + thermal_leak)\nres_esc.get_pandas_dataframe()", "The fast fission factor can be calculated as\n$$\\epsilon=\\frac{\\langle\\nu\\Sigma_f\\phi\\rangle}{\\langle\\nu\\Sigma_f\\phi\\rangle_T}$$", "# Compute fast fission factor using tally arithmetic\ntherm_fiss_rate = sp.get_tally(name='therm. fiss. 
rate')\ntherm_util = fuel_therm_abs_rate / therm_abs_rate\ntherm_util.get_pandas_dataframe()", "The next factor is the number of fission neutrons produced per absorption in fuel, calculated as $$\\eta = \\frac{\\langle \\nu\\Sigma_f\\phi \\rangle_T}{\\langle \\Sigma_a \\phi \\rangle^F_T}$$", "# Compute neutrons produced per absorption (eta) using tally arithmetic\neta = therm_fiss_rate / fuel_therm_abs_rate\neta.get_pandas_dataframe()", "There are two leakage factors to account for fast and thermal leakage. The fast non-leakage probability is computed as $$P_{FNL} = \\frac{\\langle \\Sigma_a\\phi \\rangle + \\langle L \\rangle_T}{\\langle \\Sigma_a \\phi \\rangle + \\langle L \\rangle}$$", "p_fnl = (abs_rate + thermal_leak) / (abs_rate + leak)\np_fnl.get_pandas_dataframe()", "The final factor is the thermal non-leakage probability and is computed as $$P_{TNL} = \\frac{\\langle \\Sigma_a\\phi \\rangle_T}{\\langle \\Sigma_a \\phi \\rangle_T + \\langle L \\rangle_T}$$", "p_tnl = therm_abs_rate / (therm_abs_rate + thermal_leak)\np_tnl.get_pandas_dataframe()", "Now we can calculate $k_{eff}$ using the product of the factors from the six-factor formula.", "keff = res_esc * fast_fiss * therm_util * eta * p_fnl * p_tnl\nkeff.get_pandas_dataframe()", "We see that the value we've obtained here has exactly the same mean as before. However, because of the way it was calculated, the standard deviation appears to be larger.\nLet's move on to a more complicated example now. Earlier, we set up tallies to get reaction rates in the fuel and moderator in two energy groups for two different nuclides. 
We can use tally arithmetic to divide each of these reaction rates by the flux to get microscopic multi-group cross sections.", "# Compute microscopic multi-group cross-sections\nflux = sp.get_tally(name='flux')\nflux = flux.get_slice(filters=[openmc.CellFilter], filter_bins=[(fuel_cell.id,)])\nfuel_rxn_rates = sp.get_tally(name='fuel rxn rates')\nmod_rxn_rates = sp.get_tally(name='moderator rxn rates')\n\nfuel_xs = fuel_rxn_rates / flux\nfuel_xs.get_pandas_dataframe()", "We see that when the two tallies with multiple bins were divided, the derived tally contains the outer product of the combinations. If the filters/scores are the same, no outer product is needed. The get_values(...) method allows us to obtain a subset of tally scores. In the following example, we obtain just the neutron production microscopic cross sections.", "# Show how to use Tally.get_values(...) with a CrossScore\nnu_fiss_xs = fuel_xs.get_values(scores=['(nu-fission / flux)'])\nprint(nu_fiss_xs)", "The same idea can be used not only for scores but also for filters and nuclides.", "# Show how to use Tally.get_values(...) with a CrossScore and CrossNuclide\nu235_scatter_xs = fuel_xs.get_values(nuclides=['(U235 / total)'], \n scores=['(scatter / flux)'])\nprint(u235_scatter_xs)\n\n# Show how to use Tally.get_values(...) with a CrossFilter and CrossScore\nfast_scatter_xs = fuel_xs.get_values(filters=[openmc.EnergyFilter], \n filter_bins=[((0.625, 20.0e6),)], \n scores=['(scatter / flux)'])\nprint(fast_scatter_xs)", "A more advanced method is to use get_slice(...) to create a new derived tally that is a subset of an existing tally. 
This has the benefit that we can use get_pandas_dataframe() to see the tallies in a more human-readable format.", "# \"Slice\" the nu-fission data into a new derived Tally\nnu_fission_rates = fuel_rxn_rates.get_slice(scores=['nu-fission'])\nnu_fission_rates.get_pandas_dataframe()\n\n# \"Slice\" the H-1 scatter data in the moderator Cell into a new derived Tally\nneed_to_slice = sp.get_tally(name='need-to-slice')\nslice_test = need_to_slice.get_slice(scores=['scatter'], nuclides=['H1'],\n filters=[openmc.CellFilter], filter_bins=[(moderator_cell.id,)])\nslice_test.get_pandas_dataframe()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
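The automatic first-order uncertainty estimate mentioned above can be made concrete. OpenMC assumes tallies are independent, so for a quotient the relative variances add. The helper below is a conceptual sketch (not OpenMC's internal code) and the tally means and standard deviations are hypothetical, not taken from a real statepoint:

```python
import numpy as np

def divide_with_uncertainty(num, num_sd, den, den_sd):
    """First-order uncertainty propagation for a quotient k = num / den
    of two independent tally results:

        (sd_k / k)^2 ~= (sd_num / num)^2 + (sd_den / den)^2

    This is the same independence assumption stated at the top of the
    notebook; correlations between tallies are ignored.
    """
    mean = num / den
    rel_var = (num_sd / num) ** 2 + (den_sd / den) ** 2
    return mean, abs(mean) * np.sqrt(rel_var)

# e.g. a nu-fission rate divided by an absorption-plus-leakage rate
k, k_sd = divide_with_uncertainty(1.19, 0.004, 1.00, 0.003)
print(f"k-eff = {k:.5f} +/- {k_sd:.5f}")
```

This also suggests why the six-factor product shows a larger standard deviation than the direct ratio: the same tallies enter several factors, and each occurrence is treated as independent, inflating the propagated variance.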
HaFl/ufldl-tutorial-python
Logistic_Regression.ipynb
mit
[ "%matplotlib inline\n\nimport scipy.optimize\nimport time\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.datasets import fetch_mldata\n\ndef normalize_features(train, test):\n \"\"\"Normalizes train set features to a standard normal distribution\n (zero mean and unit variance). The same procedure is then applied\n to the test set features.\n \"\"\"\n train_mean = train.mean(axis=0)\n # +0.1 to avoid division by zero in this specific case\n train_std = train.std(axis=0) + 0.1\n \n train = (train - train_mean) / train_std\n test = (test - train_mean) / train_std\n return train, test", "First get and preprocess the data.", "# get data: contains 70k samples of which the last 10k are meant for testing\nmnist = fetch_mldata('MNIST original', data_home='./data')\n\n# prepare for concat\ny_all = mnist.target[:, np.newaxis]\n\n# intercept term to be added\nintercept = np.ones_like(y_all)\n\n# normalize the data (zero mean and unit variance)\ntrain_normalized, test_normalized = normalize_features(\n mnist.data[:60000, :],\n mnist.data[60000:, :],\n)\n\n# concat intercept, X, and y so that shuffling is easier in a next step\ntrain_all = np.hstack((\n intercept[:60000],\n train_normalized,\n y_all[:60000],\n))\ntest_all = np.hstack((\n intercept[60000:],\n test_normalized,\n y_all[60000:],\n))", "I don't think this randomization step is really needed in our case, but let's stick with the ufldl tutorial here.", "np.random.shuffle(train_all)\nnp.random.shuffle(test_all)", "Now prepare the final train and test datasets. 
Let's only pick the data for the digits 0 and 1.", "# train data\ntrain_X = train_all[np.logical_or(train_all[:, -1] == 0, train_all[:, -1] == 1), :-1]\ntrain_y = train_all[np.logical_or(train_all[:, -1] == 0, train_all[:, -1] == 1), -1]\n\n# test data\ntest_X = test_all[np.logical_or(test_all[:, -1] == 0, test_all[:, -1] == 1), :-1] \ntest_y = test_all[np.logical_or(test_all[:, -1] == 0, test_all[:, -1] == 1), -1]\n\ndef sigmoid(z):\n return 1 / (1 + np.exp(-z))\n\ndef cost_function(theta, X, y):\n h = sigmoid(X.dot(theta))\n return -sum(y * np.log(h) + (1 - y) * np.log(1 - h))\n\ndef gradient(theta, X, y):\n errors = sigmoid(X.dot(theta)) - y\n return errors.dot(X)\n\nJ_history = []\n\nt0 = time.time()\nres = scipy.optimize.minimize(\n fun=cost_function,\n x0=np.random.rand(train_X.shape[1]) * 0.001,\n args=(train_X, train_y),\n method='L-BFGS-B',\n jac=gradient,\n options={'maxiter': 100, 'disp': True},\n callback=lambda x: J_history.append(cost_function(x, train_X, train_y)),\n)\nt1 = time.time()\n\nprint('Optimization took {s} seconds'.format(s=t1 - t0))\noptimal_theta = res.x\n\nplt.plot(J_history, marker='o')\nplt.xlabel('Iterations')\nplt.ylabel('J(theta)')\n\ndef accuracy(theta, X, y):\n correct = np.sum(np.equal(y, (sigmoid(X.dot(theta))) > 0.5))\n return correct / y.size\n\nprint('Training accuracy: {acc}'.format(acc=accuracy(res.x, train_X, train_y)))\nprint('Test accuracy: {acc}'.format(acc=accuracy(res.x, test_X, test_y)))", "Looking good, right? Well, look closer...\nI actually had to use the L-BFGS-B optimization method for it to work.<br>\nHad I used the expected BFGS method, nan and inf values due to log(0) would have made trouble.<br>\nWhy? I can think of two reasons:\n1. Even if being multiplied with 0, the log(0) expression is still evaluated by numpy. And unfortunately: 0 * np.nan = np.nan.\n2. 
Floating point arithmetic limits which don't exist in Mathematics.\nOne way to counteract those issues is to substitute the troubling values:", "def safe_log(x, nan_substitute=-1e+4):\n l = np.log(x)\n l[np.logical_or(np.isnan(l), np.isinf(l))] = nan_substitute\n return l\n\ndef cost_function_safe(theta, X, y):\n h = sigmoid(X.dot(theta))\n return -sum(y * safe_log(h) + (1 - y) * safe_log(1 - h))\n\nJ_history = []\n\nt0 = time.time()\nres = scipy.optimize.minimize(\n fun=cost_function_safe,\n x0=np.random.rand(train_X.shape[1]) * 0.001,\n args=(train_X, train_y),\n method='BFGS',\n jac=gradient,\n options={'maxiter': 100, 'disp': True},\n callback=lambda x: J_history.append(cost_function_safe(x, train_X, train_y)),\n)\nt1 = time.time()\n\nprint('Optimization took {s} seconds'.format(s=t1 - t0))\noptimal_theta = res.x", "<br>\nNotice that the above optimization procedure doesn't converge due to the substitutions (which don't allow the gradients to further improve (= get smaller) at some point). Therefore, it used all 100 allowed iterations.", "plt.plot(J_history, marker='o')\nplt.xlabel('Iterations')\nplt.ylabel('J(theta)')\n\nprint('Training accuracy: {acc}'.format(acc=accuracy(res.x, train_X, train_y)))\nprint('Test accuracy: {acc}'.format(acc=accuracy(res.x, test_X, test_y)))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Caranarq/01_Dmine
Datasets/INERE/INERE.ipynb
gpl-3.0
[ "Estandarizacion de datos del Inventario Nacional de Energias Renovables\n1. Introduccion\nParámetros que salen de esta fuente\nID |Descripción\n---|:----------\nP0009|Potencial de aprovechamiento energía solar\nP0010|Potencial de aprovechamiento energía eólica\nP0011|Potencial de aprovechamiento energía geotérmica\nP0012|Potencial de aprovechamiento energía de biomasa\nP0606|Generación mediante fuentes renovables de energía\nP0607|Potencial de fuentes renovables de energía\nP0608|Capacidad instalada para aprovechar fuentes renovables de energía", "descripciones = {\n'P0009' : 'Potencial de aprovechamiento energía solar',\n'P0010' : 'Potencial de aprovechamiento energía eólica',\n'P0011' : 'Potencial de aprovechamiento energía geotérmica',\n'P0012' : 'Potencial de aprovechamiento energía de biomasa',\n'P0606' : 'Generación mediante fuentes renovables de energía',\n'P0607' : 'Potencial de fuentes renovables de energía',\n'P0608' : 'Capacidad instalada para aprovechar fuentes renovables de energía'\n}\n\n# Librerias utilizadas\nimport pandas as pd\nimport sys\nimport urllib\nimport os\nimport csv\n\n# Configuracion del sistema\nprint('Python {} on {}'.format(sys.version, sys.platform))\nprint('Pandas version: {}'.format(pd.__version__))\nimport platform; print('Running on {} {}'.format(platform.system(), platform.release()))", "Descarga de datos\nLos datos se encuentran en la plataforma del Inventario Nacional de Energias Renovables (INERE) ubicada en https://dgel.energia.gob.mx/inere/, y tienen que descargarse manualmente porque su página está elaborada en Flash y no permite la descarga sistematizada de datos. A veces ni funciona por sí misma, manda errores al azar.\nSe descargaron dos datasets, uno que contiene el Inventario Actual y otro con el Inventario Potencial de energías renovables a nivel nacional.\nComo la base de datos no incluye claves geoestadísticas, estas tienen que asignarse manualmente. 
A continuacion se muestra el encabezado del archivo que se procesó a mano.", "# Lectura del dataset de energia renovable actual como descargado\ndirectorio = r'D:\\PCCS\\00_RawData\\01_CSV\\INERE\\\\'\narchivo = directorio+'Actual Energia Renovable.xls'\nraw_actual = pd.read_excel(archivo).dropna()\nraw_actual.head()\n# El dataset envía error cuando se intenta leer directamente", "Ninguno de los dos datasets puede ser leido por python tal como fue descargado, por lo que tienen que abrirse en excel y guardarse nuevamente en formato xlsx.\nDataset energia renovable actual", "# Lectura del dataset de energia renovable actual después de ser re-guardado en excel\ndirectorio = r'D:\\PCCS\\00_RawData\\01_CSV\\INERE\\\\'\narchivo = directorio+'Actual Energia Renovable.xlsx'\nraw_actual = pd.read_excel(archivo).dropna()\nraw_actual.head()", "Se asignó CVE_MUN manualmente a la mayoría de los registros. No fue posible encontrar una clave geoestadística para las siguientes combinaciones de estado/municipio \nESTADO |MUNICIPIO\n-------|:----------\nVeracruz|Jiotepec\nChiapas|Atizapan\nOaxaca|Motzorongo\nGuerrero|La Venta\nJalisco|Santa Rosa\nPara los siguientes registros, la CVE_MUN fue intuida desde el nombre de la población o el nombre del proyecto:\nESTADO |MUNICIPIO|CVE_MUN|PROYECTO\n-------|:----------|-------|------\nPuebla|Atencingo|21051|Ingenio de Atencingo\nPuebla|Tatlahuquitepec|21186|Mazatepec\nA continuación se presenta el encabezado del dataset procesado manualmente, incluyendo columnas que se utilizaron como auxiliares para la identificación de municipios", "# Lectura del dataset de energia renovable actual procesado manualmente\ndirectorio = r'D:\\PCCS\\00_RawData\\01_CSV\\INERE\\\\'\narchivo = directorio+'Actual CVE_GEO.xlsx'\nactual_proc = pd.read_excel(archivo, dtype={'CVE_MUN': 'str'}).dropna()\nactual_proc.head()", "Para guardar el dataset y utilizarlo en la construcción del parámetro, se eliminarán algunas columnas.", "list(actual_proc)\n\n# Eliminacion de 
columnas redundantes y temporales\ndel(actual_proc['ESTADO'])\ndel(actual_proc['MUNICIPIO'])\ndel(actual_proc['3EDO3'])\ndel(actual_proc['3MUN3'])\ndel(actual_proc['GEO_EDO'])\ndel(actual_proc['GEOEDO_3MUN'])\ndel(actual_proc['GEO_MUN_Nom'])\n\n# Nombre Unico de Columnas\nactual_proc = actual_proc.rename(columns = {\n 'NOMBRE' : 'NOMBRE PROYECTO', \n 'PRODUCTOR': 'SECTOR PRODUCCION', \n 'TIPO': 'TIPO FUENTE ENER', \n 'UNIDADES': 'UNIDADES GEN'})\n\n# Asignacion de CVE_MUN como indice \nactual_proc.set_index('CVE_MUN', inplace=True)\nactual_proc.head()\n\n# Metadatos estándar\nmetadatos = {\n 'Nombre del Dataset': 'Inventario Actual de Energias Renovables',\n 'Descripcion del dataset': 'Plantas de generación de energía a partir de fuentes renovables en la República Mexicana',\n 'Disponibilidad Temporal': '2014',\n 'Periodo de actualizacion': 'No Determinada',\n 'Nivel de Desagregacion': 'Localidad, Municipal, Estatal, Nacional',\n 'Notas': None,\n 'Fuente': 'SENER',\n 'URL_Fuente': 'https://dgel.energia.gob.mx/inere/',\n 'Dataset base': None,\n}\n\n# Convertir metadatos a dataframe\nactualmeta = pd.DataFrame.from_dict(metadatos, orient='index', dtype=None)\nactualmeta.columns = ['Descripcion']\nactualmeta = actualmeta.rename_axis('Metadato')\nactualmeta\n\nlist(actual_proc)\n\n# Descripciones de columnas\nvariables = {\n 'NOMBRE PROYECTO': 'Nombre del proyecto de produccion de energia',\n 'SECTOR PRODUCCION': 'Sector al que pertenece el proyecto de produccion de energia',\n 'TIPO FUENTE ENER': 'Tipo de fuente de donde se obtiene la energía',\n 'UNIDADES GEN': 'Numero de generadores instalados por proyecto',\n 'CAPACIDAD INSTALADA (MW)': 'Capacidad Instalada en Megawatts',\n 'GENERACIÓN (GWh/a) ' : 'Generación de Gigawatts/hora al año'\n}\n\n# Convertir descripciones a dataframe\nactualvars = pd.DataFrame.from_dict(variables, orient='index', dtype=None)\nactualvars.columns = ['Descripcion']\nactualvars = actualvars.rename_axis('Mnemonico')\nactualvars\n\n# Guardar 
dataset limpio para creacion de parametro.\nfile = r'D:\\PCCS\\01_Dmine\\Datasets\\INERE\\ER_Actual.xlsx'\nwriter = pd.ExcelWriter(file)\nactual_proc.to_excel(writer, sheet_name = 'DATOS')\nactualmeta.to_excel(writer, sheet_name = 'METADATOS')\nactualvars.to_excel(writer, sheet_name = 'VARIABLES')\nwriter.save()\nprint('---------------TERMINADO---------------')", "Dataset Potencial de Energia Renovable", "# Lectura del dataset de potencial de energia renovable después de ser re-guardado en excel\ndirectorio = r'D:\\PCCS\\00_RawData\\01_CSV\\INERE\\\\'\narchivo = directorio+'Potencial Energia Renovable.xlsx'\nraw_potencial = pd.read_excel(archivo, dtype={'CVE_MUN': 'str'}).dropna()\nraw_potencial.head()\n\n# Eliminacion de columnas redundantes y temporales\npotencial_proc = raw_potencial\ndel(potencial_proc['ESTADO'])\ndel(potencial_proc['MUNICIPIO'])\ndel(potencial_proc['3EDO3'])\ndel(potencial_proc['3MUN3'])\ndel(potencial_proc['GEO_EDO'])\ndel(potencial_proc['GEOEDO_3MUN'])\ndel(potencial_proc['GEO_MUN_Nom'])\n\npotencial_proc.head()\n\npotencial_proc['SUBCLASIFICACIÓN'].unique()\n\n# Nombre Unico de Columnas\npotencial_proc = potencial_proc.rename(columns = {\n 'PROYECTO' : 'NOMBRE PROYECTO', \n 'CLASIFICACIÓN': 'PROBABILIDAD', \n 'TIPO': 'TIPO FUENTE ENER',\n 'SUBCLASIFICACIÓN': 'NOTAS'})\n\n# Asignacion de CVE_MUN como indice \npotencial_proc.set_index('CVE_MUN', inplace=True)\npotencial_proc.head()\n\n# Metadatos estándar\nmetadatos = {\n 'Nombre del Dataset': 'Inventario Potencial de Energias Renovables',\n 'Descripcion del dataset': 'listado de Proyectos con potencial para generar energía a partir de fuentes renovables',\n 'Disponibilidad Temporal': '2014',\n 'Periodo de actualizacion': 'No Determinada',\n 'Nivel de Desagregacion': 'Localidad, Municipal, Estatal, Nacional',\n 'Notas': None,\n 'Fuente': 'SENER',\n 'URL_Fuente': 'https://dgel.energia.gob.mx/inere/',\n 'Dataset base': None,\n}\n\n# Convertir metadatos a dataframe\npotenmeta = 
pd.DataFrame.from_dict(metadatos, orient='index', dtype=None)\npotenmeta.columns = ['Descripcion']\npotenmeta = potenmeta.rename_axis('Metadato')\npotenmeta\n\nlist(potencial_proc)\n\npotencial_proc['FUENTE'].unique()\n\n# Descripciones de columnas\nvariables = {\n 'NOMBRE PROYECTO': 'Nombre del proyecto de produccion de energia',\n 'TIPO FUENTE ENER': 'Tipo de fuente de donde se obtiene la energía',\n 'PROBABILIDAD': 'Certeza respecto al proyecto de produccion de energía',\n 'NOTAS': 'Notas',\n 'CAPACIDAD INSTALABLE (MW)': 'Capacidad Instalable en Megawatts',\n 'POTENCIAL (GWh/a) ' : 'Potencial de Generación de Gigawatts/hora al año',\n 'FUENTE': 'Fuente de información'\n}\n\n# Convertir descripciones a dataframe\npotencialvars = pd.DataFrame.from_dict(variables, orient='index', dtype=None)\npotencialvars.columns = ['Descripcion']\npotencialvars = potencialvars.rename_axis('Mnemonico')\npotencialvars\n\n# Guardar dataset limpio para creacion de parametro.\nfile = r'D:\\PCCS\\01_Dmine\\Datasets\\INERE\\ER_Potencial.xlsx'\nwriter = pd.ExcelWriter(file)\npotencial_proc.to_excel(writer, sheet_name = 'DATOS')\npotenmeta.to_excel(writer, sheet_name = 'METADATOS')\npotencialvars.to_excel(writer, sheet_name = 'VARIABLES')\nwriter.save()\nprint('---------------TERMINADO---------------')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AnthonyHorton/huntsman-specs
Huntsman Telephoto Array specifications.ipynb
mit
[ "Huntsman Telephoto Array specifications\nIntroduction\nThe Huntsman Telephoto Array is an astronomical imaging system consisting of 1-10 imaging units attached to a telescope mount. The concept is closely based on the Dragonfly Telephoto Array, pictured below.\n\nImaging units\nEach imaging unit comprises a Canon EF 400mm f/2.8L IS II USM camera lens, an SBIG STF-8300M ccd camera and an adaptor from Birger Engineering. The prototype imaging unit ('the Huntsman Eye') is shown below. The tripod adaptor bracket is bolted to the main lens body, this bracket could be removed and the bolt holes used for direct attachment to a support structure.\n\nTelescope mount\nThe Huntsman Telephoto Array will use a Software Bisque Paramount ME II telescope mount. A Solidworks eDrawing is available as part of the mount documentation set.\n\nSupport structure\nThe Huntsman Telephoto Array support structure must enable the imaging units to be assembled into an array and attached to the telescope mount. The Dragonfly Telephoto Array has adopted a modular solution based on tubular structures around each lens, as can be seen in the photo below.\n\nEnclosure\nThe Huntsman Telephoto Array is expected to be housed in an Astro Dome 4500, a 4.5 metre diameter telescope dome with a 1.2 metre wide aperture. 
A 21 year old example at Mt Kent Observatory is shown below.\n\nScience requirements\nSpatial sampling\nDerived from the specifications of the chosen hardware.", "import math\nfrom astropy import units as u\n\npixel_pitch = 5.4 * u.micron / u.pixel # STF-8300M pixel pitch \nfocal_length = 400 * u.millimeter # Canon EF 400 mm f/2.8L IS II USM focal length \nresolution = (3326, 2504) * u.pixel # STF-8300M resolution in pixels, (x, y)\n\nsampling = (pixel_pitch / focal_length).to(u.radian/u.pixel, equivalencies = u.equivalencies.dimensionless_angles())\nsampling.to(u.arcsec/u.pixel)", "Each imaging unit shall deliver an on-sky spatial sampling of $2.8\\pm 0.1'' /$ pixel\n\nField of view\nDerived from the specifications of the chosen hardware.", "fov = resolution * sampling\nfov.to(u.degree)", "Each imaging unit shall deliver an instantaneous field of view of $2.6 \\pm 0.1 \\times 1.9 \\pm 0.1$ degrees \n\nExposure time\nIndividual exposure times of 5-30 minutes are anticipated (5-10 minutes for broadband observations, 30 minutes for narrowband).", "exposure_times = ((5, 10, 30) * u.minute)\nexposure_times", "The system shall meet all requirements with exposure times of up to 30 minutes\n\nNumber of imaging units\nThe maximum number of imaging units per telescope mount is really determined by the mount payload mass limit and the aperture size of the enclosure. 
The Dragonfly Telephoto Array is currently operating with 10 imaging units on a single mount; the Huntsman Telephoto Array should be capable of at least matching this.", "n_units = (1, 4, 10)\nn_units", "The system shall support up to at least 10 imaging units per telescope mount\n\nImaging unit alignment\nGiven the large field of view, tight coalignment of individual imaging units is not required, or even particularly desirable.", "coalignment_tolerance = 5 * u.arcminute\ncoalignment_tolerance", "All imaging units should point in the same direction to within a tolerance of 5 arcminutes radius on sky (TBC)\n\nAll data will be resampled prior to combination so some relative rotation between imaging units is acceptable.", "north_alignment_tolerance = 2.5 * u.degree\nnorth_alignment_tolerance", "All imaging units shall have the camera y axis aligned with the North-South axis to within a tolerance of $\\pm$2.5 degrees (TBC) \n\nImage quality\nAbraham & van Dokkum (2014) report that imaging units of the design proposed for the Huntsman Telephoto Array are capable of producing a point spread function (PSF) with full width at half maximum (FWHM) of $\\sim1.5''$, as measured by (undersampled) 3rd order polynomial fitting by SExtractor. When image sensor tilts (PSF degradation $<0.4''$) and imperfect telescope tracking are taken into account average FWHM of $< 2''$ were still achieved across the entire field of view. 
The Huntsman Telephoto Array should at least match this.", "central_fwhm = 1.5 * u.arcsecond\ntilt_fwhm_degradation = 0.4 * u.arcsecond\nmax_fwhm = 2 * u.arcsecond\nmax_fwhm", "The system shall deliver a PSF with average FWHM $< 2''$ over the full field of view, as measured using a 3rd order polynomial fit performed with the SExtractor software\n\nFilters\nFor the primary science project we anticipate using SDSS-type g & r bandpass filters, typically with half of the imaging units equipped with one and half with the other though there may be targets for which we would want to use a different mix of filters. During bright of Moon it will not be possible to make useful observations for the primary science project and so during these times we may use narrowband filters, e.g. H-$\\alpha$. To do this it must be possible to change filters between nights but it is not necessary that this be a motorised/automated process.\n\nEach imaging unit shall be equipped with an optical bandpass filter \nIt must be possible to change filters between nights\nThe set of filters shall contain at least one SDSS-type filter of either g or r band for each imaging unit\n\nSky coverage\nThe system should allow the observation of targets at any position on the sky that corresponds to a reasonable airmass, i.e. $<2$.", "max_zenith_distance = 60 * u.degree\nmax_zenith_distance", "The system shall satisfy all functional requirements (e.g. image quality, alignment) while observing any sky position with a zenith distance less than 60 degrees. 
The system is not required to meet functional requirements if observing a sky position with a zenith distance of greater than 60 degrees\n\nMechanical requirements\nSupport structure(s)\n\nThe mechanical support structure(s) shall allow the number of imaging units specified in the science requirements to be attached to the telescope mount", "n_units", "Imaging unit interface\n\nThe support structure(s) shall attach to the imaging units via the Canon EF 400mm f/2.8L IS II USM camera lens tripod mount bolt hole pattern and/or clamping of the camera lens body\n\nTelescope mount interface\n\nThe support structure(s) shall attach to the telescope mount via the standard interface plate, the Paramount ME II Versa-Plate (drawing here)\n\nAlignment\n\nThe support structure(s) shall ensure that the imaging units are aligned to within the tolerances specified in the science requirements", "coalignment_tolerance\n\nnorth_alignment_tolerance", "Flexure\nThe support structure(s) must be rigid enough so that flexure will not prevent the system from achieving the image quality specification from the science requirements. 
This requires the pointing of all imaging units to remain constant relative to either the telescope mount axes (if not autoguiding) or the autoguider pointing (if using autoguiding) to within a set tolerance for the duration of any individual exposure.\nThe tolerance can be calculated from the delivered image quality specification and expected imaging unit image quality.", "fwhm_to_rms = (2 * (2 * math.log(2))**0.5)**-1\nmax_flexure_rms = fwhm_to_rms * (max_fwhm**2 - (central_fwhm + tilt_fwhm_degradation)**2)**0.5\nmax_flexure_rms", "A given exposure time corresponds to an angle of rotation about the telescope mount hour angle axis.", "ha_angles = (exposure_times.to(u.hour) * (u.hourangle / u.hour)).to(u.degree)\nha_angles", "The support structure(s) shall ensure that the pointing of all imaging units shall remain fixed relative to the telescope mount axes to within 0.27 arcseconds rms while the hour angle axis rotates through any 7.5 degree angle, for any position of the declination axis, within the sky coverage requirement's zenith distance range", "max_zenith_distance", "Mass\nThe telescope mount is rated for a maximum payload (not including counterweights) of 109 kg, therefore the total mass of imaging units plus support structure(s) should not exceed this value. The mass of the lens is 4.1 kg (source here), the mass of the CCD camera is 0.8 kg (source here) and the mass of the adaptor is estimated to be no more than 0.2 kg.", "lens_mass = 4.1 * u.kilogram\ncamera_mass = 0.8 * u.kilogram\nadaptor_mass = 0.2 * u.kilogram\nimaging_unit_mass = lens_mass + camera_mass + adaptor_mass\n\nmax_payload_mass = 109 * u.kilogram\n\nmax_struture_mass = max_payload_mass - max(n_units) * imaging_unit_mass\nmax_struture_mass", "The total mass of all support structure(s) shall be less than 58 kg\n\nFootprint\nThe support structure(s) needs to position the imaging units such that their combined beam footprint will pass through the dome aperture without vignetting. 
Translating this requirement into an allowed space envelope for the imaging units is not straightforward as the geometry is complicated: the axes of the telescope mount will be offset from each other, from the geometric centre of the dome and from the centre of the imaging unit array. 3D modelling of the mount, imaging unit array and enclosure will be required to verify this for all sky positions however as a general principle the imaging units should be as closely packed as possible to minimise the overall size of their combined beam footprint.\nEnvironmental\nThe system is intended to be placed at a moderate altitude site in mainland Australia. The expected ranges of environmental conditions during operation and storage are as follows." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
betatim/studyGroupJupyter
polyglot-python.ipynb
mit
[ "IPython speaks many languages\nInsert a joke about Glossolalia and snakes.\nRun commands in bash and get useful representations back:", "files = !ls\nfiles", "or run multiple lines in a cell that is all bash", "%%bash\nuname -a\necho $PWD", "Cython\nCython is an optimising static compiler for both the Python programming language and the extended Cython programming language (based on Pyrex). It makes writing C extensions for Python as easy as Python itself.\nQuasi random numbers, Tim's knowledge on wrapping C/C++ libraries for easy use from python\nTL;DR: use python syntax, get C speed.\nLoad the cython extension. Read more about ipython extensions", "# requires `conda install cython`\n%load_ext Cython\n\ndef f(x):\n return x**2-x\n\ndef integrate_f(a, b, N):\n s = 0; dx = (b-a)/N\n for i in range(N):\n s += f(a+i*dx)\n return s * dx\n\n%%cython\ncdef double fcy(double x) except? -2:\n return x**2-x\n\ndef integrate_fcy(double a, double b, int N):\n cdef int i\n cdef double s, dx\n s = 0; dx = (b-a)/N\n for i in range(N):\n s += fcy(a+i*dx)\n return s * dx\n\n%timeit integrate_f(0, 1, 100)\n%timeit integrate_fcy(0, 1, 100)", "Getting help\nWhat options does the cython magic have?", "%%cython?\n\n%%cython -lm\n# Link the m library (like g++ linker argument)\nfrom libc.math cimport sin\nprint 'sin(1)=', sin(1)\n\n%%cython -a\ncdef double fcy(double x) except? -2:\n return x**2-x\n\ndef integrate_fcy(double a, double b, int N):\n cdef int i\n cdef double s, dx\n s = 0; dx = (b-a)/N\n for i in range(N):\n s += fcy(a+i*dx)\n return s * dx" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
LxMLS/lxmls-toolkit
labs/notebooks/non_linear_sequence_classifiers/exercise_2.ipynb
mit
[ "WSJ Data", "%load_ext autoreload\n%autoreload 2\n\n# Load Part-of-Speech data \nfrom lxmls.readers.pos_corpus import PostagCorpusData\ndata = PostagCorpusData()", "Check Numpy and Pytorch Gradients match\nAs we did with the feed-forward network, we will no implement a Recurrent Neural Network (RNN) in Pytorch. For this complete the log forward() method in \nlxmls/deep_learning/pytorch_models/rnn.py\n\nLoad the RNN model in numpy and Python for comparison", "from lxmls.deep_learning.numpy_models.rnn import NumpyRNN\nnumpy_model = NumpyRNN(\n input_size=data.input_size,\n embedding_size=50,\n hidden_size=20,\n output_size=data.output_size,\n learning_rate=0.1\n)\n\nfrom lxmls.deep_learning.pytorch_models.rnn import PytorchRNN\nmodel = PytorchRNN(\n input_size=data.input_size,\n embedding_size=50,\n hidden_size=20,\n output_size=data.output_size,\n learning_rate=0.1\n)", "To debug your code you can compare the numpy and Pytorch gradients using", "# Get gradients for both models\nbatch = data.batches('train', batch_size=1)[0]\ngradient_numpy = numpy_model.backpropagation(batch['input'], batch['output'])\ngradient = model.backpropagation(batch['input'], batch['output'])\n\ngradient[0].shape, gradient_numpy[0].shape", "and then plotting them with matplotlib", "%matplotlib inline\nimport matplotlib.pyplot as plt\n# Gradient for word embeddings in the example\nplt.subplot(2,2,1)\nplt.imshow(gradient_numpy[0][batch['input'], :], aspect='auto', interpolation='nearest')\nplt.colorbar()\nplt.subplot(2,2,2)\nplt.imshow(gradient[0].numpy()[batch['input'], :], aspect='auto', interpolation='nearest')\nplt.colorbar()\n# Gradient for word embeddings in the example\nplt.subplot(2,2,3)\nplt.imshow(gradient_numpy[1], aspect='auto', interpolation='nearest')\nplt.colorbar()\nplt.subplot(2,2,4)\nplt.imshow(gradient[1].numpy(), aspect='auto', interpolation='nearest')\nplt.colorbar()\nplt.show()\n\n# Alterbative native CuDNN native implementation of RNNs\nfrom 
lxmls.deep_learning.pytorch_models.rnn import FastPytorchRNN\nfast_model = FastPytorchRNN(\n input_size=data.input_size,\n embedding_size=50,\n hidden_size=20,\n output_size=data.output_size,\n learning_rate=0.1\n)", "Train model\nOnce you are confident that your implementation is working correctly you can run it on the POS task using the Pytorch code from the Exercise 6.1.", "num_epochs = 10\n\nmodel = model\n\nimport numpy as np\nimport time\n\n# Get batch iterators for train and test\ntrain_batches = data.batches('train', batch_size=1)\ndev_set = data.batches('dev', batch_size=1)\ntest_set = data.batches('test', batch_size=1)\n\n# Epoch loop\nstart = time.time()\nfor epoch in range(num_epochs):\n\n # Batch loop\n for batch in train_batches:\n model.update(input=batch['input'], output=batch['output'])\n\n # Evaluation dev\n is_hit = []\n for batch in dev_set:\n is_hit.extend(model.predict(input=batch['input']) == batch['output'])\n accuracy = 100*np.mean(is_hit)\n\n # Inform user\n print(\"Epoch %d: dev accuracy %2.2f %%\" % (epoch+1, accuracy))\n\nprint(\"Training took %2.2f seconds per epoch\" % ((time.time() - start)/num_epochs)) \n \n# Evaluation test\nis_hit = []\nfor batch in test_set:\n is_hit.extend(model.predict(input=batch['input']) == batch['output'])\naccuracy = 100*np.mean(is_hit)\n\n# Inform user\nprint(\"Test accuracy %2.2f %%\" % accuracy)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
deeplycloudy/lmaworkshop
TRACER-2021/flash_processing.ipynb
bsd-2-clause
[ "From VHF sources to lightning flashes\nUsing space-time criteria, we will group LMA data into flashes. We will also create a 2D gridded version of these flash data to look at VHF source density, flash extent density, and average flash area. \nBackground reading: Bruning and MacGorman (2013, JAS) show how flash area is defined from the convex hull of the VHF point sources (Fig. 2) and show what flash extent density and average flash area look like for a supercell (Fig. 4). We'll use the same definitions here.", "import glob\nimport numpy as np\nimport datetime\nimport xarray as xr\nimport pandas as pd\nimport pyproj as proj4\n\nfrom pyxlma.lmalib.io import read as lma_read\nfrom pyxlma.lmalib.flash.cluster import cluster_flashes\nfrom pyxlma.lmalib.flash.properties import flash_stats, filter_flashes\nfrom pyxlma.lmalib.grid import create_regular_grid, assign_regular_bins, events_to_grid\nfrom pyxlma.plot.xlma_plot_feature import color_by_time, plot_points, setup_hist, plot_3d_grid, subset\nfrom pyxlma.plot.xlma_base_plot import subplot_labels, inset_view, BlankPlot\n\nimport sys, glob\n\nfrom lmalibtracer.coords.sbu import get_sbu_proj as get_coord_proj\n\n%matplotlib widget\nimport matplotlib.pyplot as plt", "Configurable Parameters\nThese are the parameters that will be used by the flash sorting and gridding algorithms - it's a fairly small list.\nThese parameters are the ones we would expect to adjust when controlling for noise. We also require that a flash have more than one point!", "filenames = glob.glob('/data/Houston/130619/LYLOUT_130619_2[0-1]*.dat.gz')\n\n# Adjust this to match the length of the dataset we read in. 
It is used to set up \n# the total duration of the gridding along the time dimension.\nduration_min = 120\n\n# Source to flash \nchi2max = 1.0\nstationsmin = 6\nmin_events_per_flash = 5\n\n# There is a parameter to change the gridding from the 1 min default time resolution.\ngrid_time_delta_sec = 60\nresolution_m = 1000\nlatlon_grid=False # False uses the stereographic coordinate grid.", "Read VHF source data and identify flashes\nIncludes filtering by flash event count.", "print(\"Reading files\")\nlma_data, starttime = lma_read.dataset(filenames)\ngood_events = (lma_data.event_stations >= stationsmin) & (lma_data.event_chi2 <= chi2max)\nlma_data = lma_data[{'number_of_events':good_events}]\n\ndttuple = [starttime, starttime+datetime.timedelta(minutes=duration_min)]\n# dttuple = lma_data.Datetime.min(), lma_data.Datetime.max()\ntstring = 'LMA {}-{}'.format(dttuple[0].strftime('%H%M'),\n                             dttuple[1].strftime('%H%M UTC %d %B %Y '))\nprint(tstring)\n\nprint(\"Clustering flashes\")\nds = cluster_flashes(lma_data)\nprint(\"Calculating flash stats\")\nds = flash_stats(ds)\nds = filter_flashes(ds, flash_event_count=(min_events_per_flash, None))\n# ds0 = ds.copy()\nprint(ds)", "Contents of the flash-sorted LMA data structure.\nOnce the cell above finishes running, you'll see that our LMA data structure has grown. There is a new number_of_flashes dimension, and associated flash data variables. We still have all of the events, but they have been filtered to be just the ones that meet the stations and chi_2 criteria.\nPerhaps the most important two variables in terms of understanding the data are the event_parent_flash_id and flash_id variables. Their dimensions are number_of_events and number of flashes, respectively. Each flash has a unique flash_id, an unsigned integer, and then each event in that flash is labeled with that integer. 
Therefore, those two variables define which events go with which flashes, and let us pair up flash-level statistics (such as flash_area) with the events used to calculate those statistics.", "print(ds.event_parent_flash_id)\nprint('-----')\nprint('-----')\nprint(ds.flash_id)", "Let's see how many events are in our flashes. We can also check for an expected correlation: do the number of events increase with flash area?", "event_count_bins = 2**np.arange(12)-0.5\nprint(event_count_bins.astype(int))\nfig, ax = plt.subplots(1,1)\nart = ds.flash_event_count.plot.hist(bins=event_count_bins, ax=ax)\nax.semilogx()\n\nevent_count_bins = 2**np.arange(12)-0.5\nprint(event_count_bins.astype(int))\nfig, ax = plt.subplots(1,1)\nart = ds.plot.scatter('flash_event_count', 'flash_area', marker='s', s=1, ax=ax)\nax.semilogx()", "Plot the VHF event data\nxlma-python has built-in plotting capabilities to make a standard plot style that has a plan view, two vertical projections, and a time height view of the events.\nWe don't actually need the flash-sorted data, but it doesn't hurt to have it in the dataset.", "alt_data = ds.event_altitude.values/1000.0\nlon_data = ds.event_longitude.values\nlat_data = ds.event_latitude.values\ntime_data = pd.Series(ds.event_time) # because time comparisons\nchi_data = ds.event_chi2.values\nstation_data = ds.event_stations.values\n\n# Plot color map and marker size\nplot_cmap = 'plasma'\nplot_s = 5\n\n\ntlim_sub = [pd.to_datetime(starttime), pd.to_datetime(pd.to_datetime(starttime) + np.asarray(60, 'timedelta64[m]'))]\ntstring = 'LMA {}-{}'.format(tlim_sub[0].strftime('%H%M'),\n                             tlim_sub[1].strftime('%H%M UTC %d %B %Y '))\n\nclat, clon = float(lma_data.network_center_latitude), float(lma_data.network_center_longitude)\nxlim = [clon-0.75, clon+0.75]\nylim = [clat-0.75, clat+0.75]\nzlim = [0, 21]\nxchi = 1.0\nstationmin = 6.0\n\nlon_set, lat_set, alt_set, time_set, selection = subset(\n    lon_data, lat_data, alt_data, time_data, chi_data, station_data,\n    
xlim, ylim, zlim, tlim_sub, xchi, stationmin)\n\nbk_plot = BlankPlot(pd.to_datetime(tlim_sub[0]), bkgmap=True, \n xlim=xlim, ylim=ylim, zlim=zlim, tlim=tlim_sub, title=tstring)\n\n# Add a view of where the subset is\nxdiv = ydiv = 0.1\ninset_view(bk_plot, lon_data, lat_data, xlim, ylim, xdiv, ydiv,\n buffer=0.5, inset_size=0.15, plot_cmap = 'plasma', bkgmap = True)\n# Add some subplot labels\nsubplot_labels(bk_plot)\n# Add a range ring\nbk_plot.ax_plan.tissot(rad_km=40.0, lons=clon, lats=clat, n_samples=80,\n facecolor='none',edgecolor='k')\n# Add the station locations\nstn_art = bk_plot.ax_plan.plot(lma_data['station_longitude'], \n lma_data['station_latitude'], 'wD', mec='k', ms=5)\n\nif len(lon_set)==0:\n bk_plot.ax_hist.text(0.02,1,'No Sources',fontsize=12)\nelse:\n plot_vmin, plot_vmax, plot_c = color_by_time(time_set, tlim_sub)\n plot_points(bk_plot, lon_set, lat_set, alt_set, time_set,\n plot_cmap, plot_s, plot_vmin, plot_vmax, plot_c)\n\nplt.show()\n\n# We can save a publication-ready plot using this line … and you can change to .pdf to get a vector plot.\n# plt.savefig('./images/' + dttuple[0].strftime('%y%m%d') +\n# '/relampago_points_' + dttuple[0].strftime('%Y%m%d_%H%M.png'))", "Saving the data … or not\nAt this stage we could save the flash-sorted data to a NetCDF file. However, we're going to pass on that step right now, in favor of adding a basic gridded dataset to our saved file.", "if False:\n print(\"Writing data\")\n duration_sec = (dttuple[1]-dttuple[0]).total_seconds()\n date_fmt = \"LYLOUT_%y%m%d_%H%M%S_{0:04d}_flash.nc\".format(int(duration_sec))\n outfile = dttuple[0].strftime(date_fmt)\n\n # Compress the variables.\n comp = dict(zlib=True, complevel=5)\n encoding = {var: comp for var in ds.data_vars}\n ds.to_netcdf(outfile, encoding=encoding)", "Above, we set latlon_grid = False; instead, let's use the 500 m stereographic grid defined by Stony Brook University's cell-tracking team. It's defined in lmalibtracer and was imported above. 
The definition is not too complicated - it only requires specifying the radius of a spherical earth and a coordinate center location. All the other coordinate transformations are handled for us by pyproj.\nThe next block of code is a bit long, mainly because we have repeated the blocks for defining the grid two ways. Note that we could uncomment a few lines and create a 3D grid, too. But it's useful to have flash extent density as a 2D instead of 3D analysis for quick-look visualizations, and it's also sufficient for most of our science.", "print(\"Setting up grid spec\")\ngrid_dt = np.asarray(grid_time_delta_sec, dtype='m8[s]')\ngrid_t0 = np.asarray(dttuple[0]).astype('datetime64[ns]')\ngrid_t1 = np.asarray(dttuple[1]).astype('datetime64[ns]')\ntime_range = (grid_t0, grid_t1+grid_dt, grid_dt)\n\n# Change the dictionaries below to a consistent set of coordinates\n# and adjust grid_spatial_coords in the call to events_to_grid to\n# change what is gridded (various time series of 1D, 2D, 3D grids)\n\nif latlon_grid:\n # Houston\n # center = 29.7600000, -95.3700000\n lat_range = (27.75, 31.75, 0.025)\n lon_range = (-97.37, -93.37, 0.025)\n alt_range = (0, 18e3, 1.0e3)\n\n\n grid_edge_ranges ={\n 'grid_latitude_edge':lat_range,\n 'grid_longitude_edge':lon_range,\n # 'grid_altitude_edge':alt_range,\n 'grid_time_edge':time_range,\n }\n grid_center_names ={\n 'grid_latitude_edge':'grid_latitude',\n 'grid_longitude_edge':'grid_longitude',\n # 'grid_altitude_edge':'grid_altitude',\n 'grid_time_edge':'grid_time',\n }\n\n event_coord_names = {\n 'event_latitude':'grid_latitude_edge',\n 'event_longitude':'grid_longitude_edge',\n # 'event_altitude':'grid_altitude_edge',\n 'event_time':'grid_time_edge',\n }\n\n flash_ctr_names = {\n 'flash_init_latitude':'grid_latitude_edge',\n 'flash_init_longitude':'grid_longitude_edge',\n # 'flash_init_altitude':'grid_altitude_edge',\n 'flash_time_start':'grid_time_edge',\n }\n flash_init_names = {\n 
'flash_center_latitude':'grid_latitude_edge',\n 'flash_center_longitude':'grid_longitude_edge',\n # 'flash_center_altitude':'grid_altitude_edge',\n 'flash_time_start':'grid_time_edge',\n }\nelse:\n # Project lon, lat to SBU map projection\n sbu_lla, sbu_map, x_edge, y_edge = get_coord_proj()\n sbu_dx = x_edge[1] - x_edge[0]\n sbu_dy = y_edge[1] - y_edge[0]\n lma_sbu_xratio = resolution_m/sbu_dx\n lma_sbu_yratio = resolution_m/sbu_dy\n trnsf_to_map = proj4.Transformer.from_crs(sbu_lla, sbu_map)\n trnsf_from_map = proj4.Transformer.from_crs(sbu_map, sbu_lla)\n lmax, lmay = trnsf_to_map.transform(#sbu_lla, sbu_map,\n ds.event_longitude.data,\n ds.event_latitude.data)\n lma_initx, lma_inity = trnsf_to_map.transform(#sbu_lla, sbu_map,\n ds.flash_init_longitude.data,\n ds.flash_init_latitude.data)\n lma_ctrx, lma_ctry = trnsf_to_map.transform(#sbu_lla, sbu_map,\n ds.flash_center_longitude.data,\n ds.flash_center_latitude.data)\n ds['event_x'] = xr.DataArray(lmax, dims='number_of_events')\n ds['event_y'] = xr.DataArray(lmay, dims='number_of_events')\n ds['flash_init_x'] = xr.DataArray(lma_initx, dims='number_of_flashes')\n ds['flash_init_y'] = xr.DataArray(lma_inity, dims='number_of_flashes')\n ds['flash_ctr_x'] = xr.DataArray(lma_ctrx, dims='number_of_flashes')\n ds['flash_ctr_y'] = xr.DataArray(lma_ctry, dims='number_of_flashes')\n\n grid_edge_ranges ={\n 'grid_x_edge':(x_edge[0],x_edge[-1]+.001,sbu_dx*lma_sbu_xratio),\n 'grid_y_edge':(y_edge[0],y_edge[-1]+.001,sbu_dy*lma_sbu_yratio),\n # 'grid_altitude_edge':alt_range,\n 'grid_time_edge':time_range,\n }\n grid_center_names ={\n 'grid_x_edge':'grid_x',\n 'grid_y_edge':'grid_y',\n # 'grid_altitude_edge':'grid_altitude',\n 'grid_time_edge':'grid_time',\n }\n\n event_coord_names = {\n 'event_x':'grid_x_edge',\n 'event_y':'grid_y_edge',\n # 'event_altitude':'grid_altitude_edge',\n 'event_time':'grid_time_edge',\n }\n\n flash_ctr_names = {\n 'flash_init_x':'grid_x_edge',\n 'flash_init_y':'grid_y_edge',\n # 
'flash_init_altitude':'grid_altitude_edge',\n 'flash_time_start':'grid_time_edge',\n }\n flash_init_names = {\n 'flash_ctr_x':'grid_x_edge',\n 'flash_ctr_y':'grid_y_edge',\n # 'flash_center_altitude':'grid_altitude_edge',\n 'flash_time_start':'grid_time_edge',\n }\n\n\nprint(\"Creating regular grid\")\ngrid_ds = create_regular_grid(grid_edge_ranges, grid_center_names)\nif latlon_grid:\n pass\nelse:\n ctrx, ctry = np.meshgrid(grid_ds.grid_x, grid_ds.grid_y)\n hlon, hlat = trnsf_from_map.transform(ctrx, ctry)\n # Add lon lat to the dataset, too.\n ds['lon'] = xr.DataArray(hlon, dims=['grid_y', 'grid_x'],\n attrs={'standard_name':'longitude'})\n ds['lat'] = xr.DataArray(hlat, dims=['grid_y', 'grid_x'],\n attrs={'standard_name':'latitude'})\n\nprint(\"Finding grid position for flashes\")\npixel_id_var = 'event_pixel_id'\nds_ev = assign_regular_bins(grid_ds, ds, event_coord_names,\n pixel_id_var=pixel_id_var, append_indices=True)\n# ds_flctr = assign_regular_bins(grid_ds, ds, flash_ctr_names,\n# pixel_id_var='flash_ctr_pixel_id', append_indices=True)\n# flctr_gb = ds.groupby('flash_ctr_pixel_id')\n# ds_flini = assign_regular_bins(grid_ds, ds, flash_init_names,\n# pixel_id_var='flash_init_pixel_id', append_indices=True)\n# flini_gb = ds.groupby('flash_init_pixel_id')\n\n# print('===== ev_gb')\n# for event_pixel_id, dsegb in ev_gb:\n# print(dsegb)\n# break\n# print('===== flctr_gb')\n# for event_pixel_id, dsfgb in flctr_gb:\n# print(dsfgb)\n# break\n\nprint(\"Gridding data\")\nif latlon_grid:\n grid_spatial_coords=['grid_time', None, 'grid_latitude', 'grid_longitude']\n event_spatial_vars = ('event_altitude', 'event_latitude', 'event_longitude')\nelse:\n grid_spatial_coords=['grid_time', None, 'grid_y', 'grid_x']\n event_spatial_vars = ('event_altitude', 'event_y', 'event_x')\n\n# print(ds_ev)\n# print(grid_ds)\ngrid_ds = events_to_grid(ds_ev, grid_ds, min_points_per_flash=3,\n pixel_id_var=pixel_id_var,\n event_spatial_vars=event_spatial_vars,\n 
grid_spatial_coords=grid_spatial_coords)\n\n# Let's combine the flash and event data with the gridded data into one giant data structure.\nboth_ds = xr.combine_by_coords((grid_ds, ds))\nprint(both_ds)", "Looking at the gridded data\nOnce the cells above have been run, a new data structure (both_ds) will have the events, flashes, and their gridded versions. Note the new dimensions for the center and edges of the grid boxes (grid_time: 24, grid_time_edge: 25, grid_x: 250, grid_x_edge: 251, grid_y: 250, grid_y_edge: 251), and the new variables like flash_extent_density with dimensions (grid_time, grid_y, grid_x).\nLet's plot flash extent density for the 42nd time step!", "time_idx = 42\nfig, ax = plt.subplots(1,1)\nboth_ds.flash_extent_density[time_idx, :, :].plot.imshow(ax=ax)\nprint(both_ds.grid_time_edge[time_idx:time_idx+2].data)", "Add widget interactivity\nIt's a bit tedious to change the time index manually. Let's interact! It's possible to build much nicer interfaces, but this is enough for a quick look.", "both_ds.dims['grid_time']\n\nfrom ipywidgets import interact #, interactive, fixed, interact_manual\nfig = plt.figure()\n\nn_times = both_ds.dims['grid_time']\n@interact(time_idx=(0, n_times-1))\ndef plot(time_idx=0):\n fig.clear()\n ax = fig.add_subplot(1,1,1)\n both_ds.flash_extent_density[time_idx, :, :].plot.imshow(ax=ax, vmin=0, vmax=5)", "Aggregating in time\nWith these low flash rates, it's hard to get a sense of everything in the dataset, so let's sum across the time dimension. (xarray also has a rolling window function.)", "fig = plt.figure()\nax = fig.add_subplot(1,1,1)\nfrom matplotlib.colors import LogNorm\nboth_ds.flash_extent_density.sum('grid_time').plot.imshow(ax=ax, norm=LogNorm(1, 300))", "Finally, write the data.\nOnce we save the data to disk, we can reload the data and re-run any of the plots above without reprocessing everything. 
We'll make files like this from the post-processed LMA data for each day during ESCAPE/TRACER, and they will be one of our data deliverables to the NCAR EOL catalog, in accordance with the ESCAPE data management plan as proposed in the grant.", "if True:\n print(\"Writing data\")\n duration_sec = (dttuple[1]-dttuple[0]).total_seconds()\n if latlon_grid:\n date_fmt = \"LYLOUT_%y%m%d_%H%M%S_{0:04d}_grid.nc\".format(int(duration_sec))\n else:\n date_fmt = \"LYLOUT_%y%m%d_%H%M%S_{0:04d}_map{1:d}m.nc\".format(\n int(duration_sec), resolution_m)\n outfile = dttuple[0].strftime(date_fmt)\n\n comp = dict(zlib=True, complevel=5)\n encoding = {var: comp for var in both_ds.data_vars}\n both_ds.to_netcdf(outfile, encoding=encoding)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bhargavchippada/randomfun
NeuralEquationFinder/NeuralEquationFinder_Part_2.ipynb
mit
[ "In part 1, we ourselves created the additional features (x^2, x^3). Wouldn't it be nice if we create activation functions to do just that and let the neural network decide the weights for connections during training.", "import torch\nfrom torch.autograd import Variable\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\n# Setup the training and test tensors\n# Let's generate 400 examples\nN = 400\nx = np.random.uniform(low=-75, high=100, size=N)\ny = 2*x\n\nX_tensor = Variable(torch.FloatTensor(x), requires_grad=False)\ny_tensor = Variable(torch.FloatTensor(y), requires_grad=False)\n\n# Test set initialization\nx_test = np.array([-2.5, 0.0, 19])\nX_test_tsr = Variable(torch.FloatTensor(x_test), requires_grad=False)\n\n# Normalized features\nX_min = torch.min(X_tensor)\nX_max = torch.max(X_tensor)\nX_mean = torch.mean(X_tensor)\nX_sub_mean = X_tensor-X_mean.expand_as(X_tensor)\nX_max_min = X_max-X_min + 1e-7\nX_norm_tsr = X_sub_mean/X_max_min.expand_as(X_sub_mean)\n\nX_test_sub_mean = X_test_tsr-X_mean.expand_as(X_test_tsr)\nX_test_norm_tsr = X_test_sub_mean/X_max_min.expand_as(X_test_sub_mean)\n\n# Implement version-2 neural network\nimport math\nfrom time import time\nfrom collections import OrderedDict\n\ndef RunV2NNTraining(X, y, model, learning_rate=1e-5, epochs=5000, batch_size=None, X_test=None, \n use_optimizer=None, adam_betas=(0.9, 0.999)):\n # Neural Net\n X_size = X.size()\n N = X_size[0]\n \n loss_fn = torch.nn.MSELoss(size_average=True)\n \n # Choose Optimizer\n optimizer = None\n if use_optimizer:\n if use_optimizer == 'SGD':\n optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)\n elif use_optimizer == 'Adam':\n optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, betas=adam_betas)\n elif use_optimizer == 'Adadelta':\n optimizer = torch.optim.Adadelta(model.parameters(), lr=learning_rate)\n elif use_optimizer == 'ASGD':\n optimizer = torch.optim.ASGD(model.parameters(), lr=learning_rate)\n elif 
use_optimizer == 'RMSprop':\n optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)\n elif use_optimizer == 'Adagrad':\n optimizer = torch.optim.Adagrad(model.parameters(), lr=learning_rate)\n else:\n print(\"Invalid Optimizer\")\n use_optimizer=None\n \n losses = []\n loss = None\n start_time = time()\n for t in range(epochs):\n num_batches = 1\n X_batch = None\n y_batch = None\n if batch_size:\n num_batches = math.ceil(N/batch_size)\n else:\n batch_size = N\n \n shuffle = torch.randperm(N)\n \n for b in range(num_batches):\n lower_index = b*batch_size\n upper_index = min(lower_index+batch_size, N)\n indices = shuffle[lower_index:upper_index]\n X_batch = X[indices]\n y_batch = y[indices]\n \n y_pred = model(X_batch)\n loss = loss_fn(y_pred, y_batch)\n \n if use_optimizer:\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n else:\n # Zero the gradients before running the backward pass.\n model.zero_grad()\n loss.backward()\n\n # Update the weights using gradient descent. 
Each parameter is a Variable, so\n # we can access its data and gradients like we did before.\n for param in model.parameters():\n param.data -= learning_rate * param.grad.data\n losses.append(loss.data[0])\n\n end_time = time()\n time_taken = end_time - start_time\n print(\"Time Taken = %.2f seconds \" % time_taken)\n print(\"Final Loss: \", loss.data[0])\n print(\"Parameters [w_1, w_2, w_3, b]: \")\n \n for name, param in model.named_parameters():\n print(name)\n print(param.data)\n\n # plot Loss vs Iterations\n plt.plot(losses)\n plt.title('Loss history')\n plt.xlabel('Iteration')\n plt.ylabel('Loss')\n plt.show()\n \n # Predictions on Test set\n if X_test:\n print(\"Test:\")\n print(\"X_test: \", X_test.data)\n print(\"y_pred: \", model(X_test))\n \ndef GetV2NNLoss(X, y, model):\n loss_fn = torch.nn.MSELoss(size_average=True)\n y_pred = model(X)\n loss = loss_fn(y_pred, y)\n return loss.data[0]", "Time to create our x^n activation function", "class PowerNet(torch.nn.Module):\n def __init__(self, n):\n super(PowerNet, self).__init__()\n self.n = n\n self.linear = torch.nn.Linear(1, 1)\n\n def forward(self, x):\n return self.linear(x).pow(self.n)\n\nPow123Net_Mask = Variable(torch.FloatTensor([1.0,1.0,1.0]), requires_grad=False)\nclass Pow123Net(torch.nn.Module):\n def __init__(self):\n super(Pow123Net, self).__init__()\n self.p1 = PowerNet(1)\n self.p2 = PowerNet(2)\n self.p3 = PowerNet(3)\n \n def forward(self, x):\n x1 = self.p1.forward(x)\n x2 = self.p2.forward(x)\n x3 = self.p3.forward(x)\n xc = torch.cat((x1, x2, x3), 1)\n return xc*Pow123Net_Mask.expand_as(xc)", "Unnormalized features", "# use_optimizer can be Adam, RMSprop, Adadelta, ASGD, SGD, Adagrad\nmodel = torch.nn.Sequential(OrderedDict([\n (\"Pow123Net\", Pow123Net()),\n (\"FC\", torch.nn.Linear(3, 1))]\n ))\nRunV2NNTraining(X=X_tensor.view(-1,1), y=y_tensor, model=model, batch_size=None, epochs=25000, learning_rate=5e-3, \n X_test=X_test_tsr.view(-1,1), use_optimizer='Adam')\n\n# Now, how do we 
find the equation?\n# One way to find is to see the effect of each activation on the loss\nprint(\"Final Loss: \", GetV2NNLoss(X=X_tensor.view(-1,1), y=y_tensor, model=model))\n\n# mask f(x) = x\nPow123Net_Mask[0] = 0.0\nprint(\"Loss with x masked: \", GetV2NNLoss(X=X_tensor.view(-1,1), y=y_tensor, model=model))\nPow123Net_Mask[0] = 1.0\n\n# mask f(x) = x^2\nPow123Net_Mask[1] = 0.0\nprint(\"Loss with x^2 masked: \", GetV2NNLoss(X=X_tensor.view(-1,1), y=y_tensor, model=model))\nPow123Net_Mask[1] = 1.0\n\n# mask f(x) = x^3\nPow123Net_Mask[2] = 0.0\nprint(\"Loss with x^3 masked: \", GetV2NNLoss(X=X_tensor.view(-1,1), y=y_tensor, model=model))\nPow123Net_Mask[2] = 1.0\n\n# Clearly activations X^2 and X^3 are not important\n# Now what is the final equation?\np1_w = None\np1_b = None\nfc1_w = None\nfc_b = None\nfor name, param in model.named_parameters():\n if name == 'Pow123Net.p1.linear.weight':\n p1_w = param.data[0]\n if name == 'Pow123Net.p1.linear.bias':\n p1_b = param.data[0]\n if name == 'FC.weight':\n fc1_w = param.data[0,0]\n if name == 'FC.bias':\n fc_b = param.data[0]\n \ncoeff_x = p1_w*fc1_w\nconst = p1_b*fc1_w+fc_b\nprint(\"Finally the equation is y = \",coeff_x[0],\"*x + \", const)\nprint(\"Pretty close to y = 2*x\")", "Normalized features\nAfter normalizing the features, SGD is not converging! what? and there was no performance advantage compared to unnormalized features.", "model = torch.nn.Sequential(OrderedDict([\n (\"Pow123Net\", Pow123Net()),\n (\"FC\", torch.nn.Linear(3, 1))]\n ))\nRunV2NNTraining(X=X_norm_tsr.view(-1,1), y=y_tensor, model=model, batch_size=None, epochs=25000, learning_rate=1e-1, \n X_test=X_test_norm_tsr.view(-1,1), use_optimizer='Adam')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bwgref/nustar_pysolar
notebooks/20180928/Mosaic 20180928.ipynb
mit
[ "from nustar_pysolar import planning, io\nimport astropy.units as u\nimport warnings\nwarnings.filterwarnings('ignore')", "Download the list of occultation periods from the MOC at Berkeley.\nNote that the occultation periods typically only are stored at Berkeley for the future and not for the past. So this is only really useful for observation planning.", "fname = io.download_occultation_times(outdir='../data/')\nprint(fname)", "Download the NuSTAR TLE archive.\nThis contains every two-line element (TLE) that we've received for the whole mission. We'll expand on how to use this later.\nThe times, line1, and line2 elements are now the TLE elements for each epoch.", "tlefile = io.download_tle(outdir='../data')\nprint(tlefile)\ntimes, line1, line2 = io.read_tle_file(tlefile)", "Here is where we define the observing window that we want to use.\nNote that tstart and tend must be in the future otherwise you won't find any occultation times and sunlight_periods will return an error.", "tstart = '2018-09-27T12:00:00'\ntend = '2018-09-29T12:10:00'\norbits = planning.sunlight_periods(fname, tstart, tend)", "We want to know how to orient NuSTAR for the Sun.\nWe can more or less pick any angle that we want. But this angle has to be specified a little in advance so that the NuSTAR SOC can plan the \"slew in\" maneuvers. 
Below puts DET0 in the top left corner (north-east with respect to RA/Dec coordinates).\nThis is what you tell the SOC you want the \"Sky PA angle\" to be.", "pa = planning.get_nustar_roll(tstart, 0)\nprint(\"NuSTAR Roll angle for Det0 in NE quadrant: {}\".format(pa))", "Set up the offset you want to use here:\nThe first element is the direction +WEST of the center of the Sun, the second is the offset +NORTH of the center of the Sun.\nIf you want multiple pointing locations you can either specify an array of offsets or do this \"by hand\" below.", "offset = [0., 0.]*u.arcsec", "Loop over each orbit and correct the pointing for the same heliocentric pointing position.\nNote that you may want to update the pointing for solar rotation. That's up to the user to decide and is not done here.", "for ind, orbit in enumerate(orbits):\n midTime = (0.5*(orbit[1] - orbit[0]) + orbit[0])\n sky_pos = planning.get_skyfield_position(midTime, offset, parallax_correction=True)\n print(\"Orbit: {}\".format(ind))\n print(\"Orbit start: {} Orbit end: {}\".format(orbit[0].isoformat(), orbit[1].isoformat()))\n print('Aim time: {} RA (deg): {} Dec (deg): {}'.format(midTime.isoformat(), sky_pos[0], sky_pos[1]))\n print(\"\")", "This is where you actually make the Mosaic", "# Just use the first orbit...or choose one. This may download a ton of deltat.preds, which is a known \n# bug to be fixed.\n\norbit = orbits[20]\nplanning.make_mosaic(orbit, write_output=True, make_regions=True)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Soil-Carbon-Coalition/atlasdata
Length of green season from time-series Landsat NDVI.ipynb
mit
[ "Length of green season from cloud-filtered Landsat SR\nThis takes as input a time-series csv file generated by Google Earth Engine of cloud-filtered Landsat Surface Reflectance NDVI from Landsats 5, 7, and 8 on a geometry such as point or polygon. The getLogs function counts the days between the first and last observations above a threshold value, for each year of data. Thus for winter growing seasons this will not be adequate.\nData can be sparse for some years due to cloudiness or missing Landsat data.", "import pandas as pd, numpy as np\nfrom datetime import datetime, timedelta\nimport matplotlib.pyplot as plt\n%matplotlib inline", "get the data\nfrom Earth Engine. For example, this script generates a time series from Landsat 5, 7, and 8 Surface Reflectance products on plot GBRO1 (pasture) or GBRO2 (cropland) in North Dakota.\nhttps://code.earthengine.google.com/070a15b53111ad808b01e0ace1de433b\nplot locations: \nGBRO1: https://atlasbiowork.com/sites/615\nGBRO2: https://atlasbiowork.com/sites/616", "#function to count days above threshold NDVI from time series data\ndef getLogs(self, th): #args: dataframe, threshold NDVI value\n df=self\n df['next']=df.nd.shift(-1)\n df['prev']=df.nd.shift(1)\n #find first above threshold\n df['first'] = ((df.nd>=th)&((df.prev<th)|(df.prev.isnull())))\n #find last above threshold\n df['last'] = ((df.nd>=th)&((df.next<th)|(df.next.isnull())))\n #credit 16 days for single observations above threshold\n singles = df[(df['first']==True) & (df['last']==True)].nd.count()*16\n #now remove these\n df = df[~((df['first']==True) & (df['last']==True))]\n #remove all but first and last\n df = df[((df['first']==True) | (df['last']==True))]\n #get intervals between first and last\n df['nextdate'] = df.date.shift(-1)\n df['inc'] = (df['nextdate']-df['date']).dt.days #increment in days\n return int(df[df['first']==True].inc.sum()+singles)\n ", "Crop field plot", "df = pd.read_csv('/Users/Peter/Downloads/gbro2.csv')\nthreshold = 
.25\nlocation= 'Brown Ranch crop field, GBRO2'\n\n\ndf['date'] = pd.to_datetime(df['system:time_start'])\ndf.index = df['date']\ndel df['system:time_start']\ndf = df.dropna() # drop the rows without observations (masked)\ndf = df[df['date'].dt.year<2018] #chop off 2018 which EE didn't do\nlogs = df.groupby(df['date'].dt.year).apply(getLogs,th=threshold)\ncount = df.groupby(df['date'].dt.year).agg({'nd':'count'})\ndf = logs.to_frame().join(count)\ndf.columns = ['days of green','number of observations']\ndf['rolling mean']=df['days of green'].rolling(5).mean()\ndf\n\ndf.plot(figsize=(15,10), grid=True, lw=5, title='Days of Landsat NDVI above '+str(threshold)+' at '+location)", "Pasture plot", "df = pd.read_csv('/Users/Peter/Downloads/gbro1.csv')\nthreshold = .25\nlocation= 'Brown Ranch pasture, GBRO1'\n\ndf['date'] = pd.to_datetime(df['system:time_start'])\ndf.index = df['date']\ndel df['system:time_start']\ndf = df.dropna() # drop the rows without observations (masked)\ndf = df[df['date'].dt.year<2018] #chop off 2018 which EE didn't do\nlogs = df.groupby(df['date'].dt.year).apply(getLogs,th=threshold)\ncount = df.groupby(df['date'].dt.year).agg({'nd':'count'})\ndf = logs.to_frame().join(count)\ndf.columns = ['days of green','number of observations']\ndf['rolling mean']=df['days of green'].rolling(5).mean()\ndf\n\ndf.plot(figsize=(15,10), grid=True, lw=5, title='Days of Landsat NDVI above '+str(threshold)+' at '+location)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/tensorflow-without-a-phd
tensorflow-mnist-tutorial/keras_02_mnist_dense.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/tensorflow-without-a-phd/blob/master/tensorflow-mnist-tutorial/keras_02_mnist_dense.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nParameters", "BATCH_SIZE = 128\nEPOCHS = 10\n\ntraining_images_file = 'gs://mnist-public/train-images-idx3-ubyte'\ntraining_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'\nvalidation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'\nvalidation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'", "Imports", "import os, re, math, json, shutil, pprint\nimport PIL.Image, PIL.ImageFont, PIL.ImageDraw\nimport IPython.display as display\nimport numpy as np\nimport tensorflow as tf\nfrom matplotlib import pyplot as plt\nprint(\"Tensorflow version \" + tf.__version__)\n\n#@title visualization utilities [RUN ME]\n\"\"\"\nThis cell contains helper functions used for visualization\nand downloads only. You can skip reading it. There is very\nlittle useful Keras/Tensorflow code here.\n\"\"\"\n\n# Matplotlib config\nplt.ioff()\nplt.rc('image', cmap='gray_r')\nplt.rc('grid', linewidth=1)\nplt.rc('xtick', top=False, bottom=False, labelsize='large')\nplt.rc('ytick', left=False, right=False, labelsize='large')\nplt.rc('axes', facecolor='F8F8F8', titlesize=\"large\", edgecolor='white')\nplt.rc('text', color='a8151a')\nplt.rc('figure', facecolor='F0F0F0', figsize=(16,9))\n# Matplotlib fonts\nMATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), \"mpl-data/fonts/ttf\")\n\n# pull a batch from the datasets. 
This code is not very nice, it gets much better in eager mode (TODO)\ndef dataset_to_numpy_util(training_dataset, validation_dataset, N):\n \n # get one batch from each: 10000 validation digits, N training digits\n batch_train_ds = training_dataset.unbatch().batch(N)\n \n # eager execution: loop through datasets normally\n if tf.executing_eagerly():\n for validation_digits, validation_labels in validation_dataset:\n validation_digits = validation_digits.numpy()\n validation_labels = validation_labels.numpy()\n break\n for training_digits, training_labels in batch_train_ds:\n training_digits = training_digits.numpy()\n training_labels = training_labels.numpy()\n break\n \n else:\n v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()\n t_images, t_labels = batch_train_ds.make_one_shot_iterator().get_next()\n # Run once, get one batch. Session.run returns numpy results\n with tf.Session() as ses:\n (validation_digits, validation_labels,\n training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])\n \n # these were one-hot encoded in the dataset\n validation_labels = np.argmax(validation_labels, axis=1)\n training_labels = np.argmax(training_labels, axis=1)\n \n return (training_digits, training_labels,\n validation_digits, validation_labels)\n\n# create digits from local fonts for testing\ndef create_digits_from_local_fonts(n):\n font_labels = []\n img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1\n font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)\n font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)\n d = PIL.ImageDraw.Draw(img)\n for i in range(n):\n font_labels.append(i%10)\n d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)\n font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 
(discarded)\n font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])\n return font_digits, font_labels\n\n# utility to display a row of digits with their predictions\ndef display_digits(digits, predictions, labels, title, n):\n fig = plt.figure(figsize=(13,3))\n digits = np.reshape(digits, [n, 28, 28])\n digits = np.swapaxes(digits, 0, 1)\n digits = np.reshape(digits, [28, 28*n])\n plt.yticks([])\n plt.xticks([28*x+14 for x in range(n)], predictions)\n plt.grid(b=None)\n for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):\n if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red\n plt.imshow(digits)\n plt.grid(None)\n plt.title(title)\n display.display(fig)\n \n# utility to display multiple rows of digits, sorted by unrecognized/recognized status\ndef display_top_unrecognized(digits, predictions, labels, n, lines):\n idx = np.argsort(predictions==labels) # sort order: unrecognized first\n for i in range(lines):\n display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],\n \"{} sample validation digits out of {} with bad predictions in red and sorted first\".format(n*lines, len(digits)) if i==0 else \"\", n)\n\ndef plot_learning_rate(lr_func, epochs):\n xx = np.arange(epochs+1, dtype=np.float)\n y = [lr_decay(x) for x in xx]\n fig, ax = plt.subplots(figsize=(9, 6))\n ax.set_xlabel('epochs')\n ax.set_title('Learning rate\\ndecays from {:0.3g} to {:0.3g}'.format(y[0], y[-2]))\n ax.minorticks_on()\n ax.grid(True, which='major', axis='both', linestyle='-', linewidth=1)\n ax.grid(True, which='minor', axis='both', linestyle=':', linewidth=0.5)\n ax.step(xx,y, linewidth=3, where='post')\n display.display(fig)\n\nclass PlotTraining(tf.keras.callbacks.Callback):\n def __init__(self, sample_rate=1, zoom=1):\n self.sample_rate = sample_rate\n self.step = 0\n self.zoom = zoom\n self.steps_per_epoch = 60000//BATCH_SIZE\n\n def on_train_begin(self, logs={}):\n 
self.batch_history = {}\n self.batch_step = []\n self.epoch_history = {}\n self.epoch_step = []\n self.fig, self.axes = plt.subplots(1, 2, figsize=(16, 7))\n plt.ioff()\n\n def on_batch_end(self, batch, logs={}):\n if (batch % self.sample_rate) == 0:\n self.batch_step.append(self.step)\n for k,v in logs.items():\n # do not log \"batch\" and \"size\" metrics that do not change\n # do not log training accuracy \"acc\"\n if k=='batch' or k=='size':# or k=='acc':\n continue\n self.batch_history.setdefault(k, []).append(v)\n self.step += 1\n\n def on_epoch_end(self, epoch, logs={}):\n plt.close(self.fig)\n self.axes[0].cla()\n self.axes[1].cla()\n \n self.axes[0].set_ylim(0, 1.2/self.zoom)\n self.axes[1].set_ylim(1-1/self.zoom/2, 1+0.1/self.zoom/2)\n \n self.epoch_step.append(self.step)\n for k,v in logs.items():\n # only log validation metrics\n if not k.startswith('val_'):\n continue\n self.epoch_history.setdefault(k, []).append(v)\n\n display.clear_output(wait=True)\n \n for k,v in self.batch_history.items():\n self.axes[0 if k.endswith('loss') else 1].plot(np.array(self.batch_step) / self.steps_per_epoch, v, label=k)\n \n for k,v in self.epoch_history.items():\n self.axes[0 if k.endswith('loss') else 1].plot(np.array(self.epoch_step) / self.steps_per_epoch, v, label=k, linewidth=3)\n \n self.axes[0].legend()\n self.axes[1].legend()\n self.axes[0].set_xlabel('epochs')\n self.axes[1].set_xlabel('epochs')\n self.axes[0].minorticks_on()\n self.axes[0].grid(True, which='major', axis='both', linestyle='-', linewidth=1)\n self.axes[0].grid(True, which='minor', axis='both', linestyle=':', linewidth=0.5)\n self.axes[1].minorticks_on()\n self.axes[1].grid(True, which='major', axis='both', linestyle='-', linewidth=1)\n self.axes[1].grid(True, which='minor', axis='both', linestyle=':', linewidth=0.5)\n display.display(self.fig)", "tf.data.Dataset: parse files and prepare training and validation datasets\nPlease read the best practices for building input pipelines with 
tf.data.Dataset", "AUTO = tf.data.experimental.AUTOTUNE\n\ndef read_label(tf_bytestring):\n label = tf.io.decode_raw(tf_bytestring, tf.uint8)\n label = tf.reshape(label, [])\n label = tf.one_hot(label, 10)\n return label\n \ndef read_image(tf_bytestring):\n image = tf.io.decode_raw(tf_bytestring, tf.uint8)\n image = tf.cast(image, tf.float32)/256.0\n image = tf.reshape(image, [28*28])\n return image\n \ndef load_dataset(image_file, label_file):\n imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)\n imagedataset = imagedataset.map(read_image, num_parallel_calls=16)\n labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)\n labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)\n dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))\n return dataset \n \ndef get_training_dataset(image_file, label_file, batch_size):\n dataset = load_dataset(image_file, label_file)\n dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset\n dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)\n dataset = dataset.repeat() # Mandatory for Keras for now\n dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed\n dataset = dataset.prefetch(AUTO) # fetch next batches while training on the current one (-1: autotune prefetch buffer size)\n return dataset\n \ndef get_validation_dataset(image_file, label_file):\n dataset = load_dataset(image_file, label_file)\n dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset\n dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch\n dataset = dataset.repeat() # Mandatory for Keras for now\n return dataset\n\n# instantiate the datasets\ntraining_dataset = 
get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)\nvalidation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)\n\n# For TPU, we will need a function that returns the dataset\ntraining_input_fn = lambda: get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)\nvalidation_input_fn = lambda: get_validation_dataset(validation_images_file, validation_labels_file)", "Let's have a look at the data", "N = 24\n(training_digits, training_labels,\n validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)\ndisplay_digits(training_digits, training_labels, training_labels, \"training digits and their labels\", N)\ndisplay_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], \"validation digits and their labels\", N)\nfont_digits, font_labels = create_digits_from_local_fonts(N)", "Keras model\nIf you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: Tensorflow and deep learning without a PhD", "model = tf.keras.Sequential(\n [\n tf.keras.layers.Input(shape=(28*28,)),\n tf.keras.layers.Dense(200, activation='relu'),\n tf.keras.layers.Dense(60, activation='relu'),\n tf.keras.layers.Dense(10, activation='softmax')\n ])\n\nmodel.compile(optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n# print model layers\nmodel.summary()\n\n# utility callback that displays training curves\nplot_training = PlotTraining(sample_rate=10, zoom=1)", "Train and validate the model", "steps_per_epoch = 60000//BATCH_SIZE # 60,000 items in this dataset\nprint(\"Steps per epoch: \", steps_per_epoch)\n\nhistory = model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,\n validation_data=validation_dataset, validation_steps=1, callbacks=[plot_training])", "Visualize predictions", "# recognize digits from local fonts\nprobabilities = model.predict(font_digits, 
steps=1)\npredicted_labels = np.argmax(probabilities, axis=1)\ndisplay_digits(font_digits, predicted_labels, font_labels, \"predictions from local fonts (bad predictions in red)\", N)\n\n# recognize validation digits\nprobabilities = model.predict(validation_digits, steps=1)\npredicted_labels = np.argmax(probabilities, axis=1)\ndisplay_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)", "License\n\nauthor: Martin Gorner<br>\ntwitter: @martin_gorner\n\nCopyright 2019 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\nThis is not an official Google product but sample code provided for an educational purpose" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
supergis/git_notebook
geospatial/openstreetmap/osm-overpass-node.ipynb
gpl-3.0
[ "#!/usr/bin/python\n#coding=utf-8", "Using the OSM-overpass service interface to query the OpenStreetMap open spatial database online.\n by openthings@163.com, 2016-04-23. \n\noverpy is a Python library for the overpass API; here the returned result set is saved in JSON format.\n\nInstall: $ pip install overpy\nDocs: http://python-overpy.readthedocs.org/en/latest/example.html#basic-example \nAPI: http://wiki.openstreetmap.org/wiki/Overpass_API\nThis example tool is written based on the documentation examples above.", "import os, sys, gc\nimport time\nimport json\n\nimport overpy\nfrom pprint import *", "Call the overpass interface and obtain the result data set.\n\nBecause the results come back over the network, the call is easily interrupted, and the data is processed in memory, so it is not suitable for building large query sets.", "#extent: lat1, lon1, lat2, lon2\n#returns: result\ndef get_osm():\n query = \"[out:json];node(50.745,7.17,50.75,7.18);out;\"\n osm_op_api = overpy.Overpass()\n result = osm_op_api.query(query)\n\n print(\"Nodes: \",len(result.nodes))\n print(\"Ways: \",len(result.ways))\n print(\"Relations: \",len(result.relations))\n return result", "Fetch the osm data online.", "result = get_osm()", "Show the attribute information of the nodes (only the first 3 nodes are shown).", "nodeset = result.nodes[0:3]\npprint(nodeset)", "Iterate over the subset of nodes produced in the previous step.", "for n in nodeset:\n print(n.id,n.lat,n.lon)", "Convert the queried data set to JSON and write it to a JSON file.\n( This format can be loaded directly by Spark: SQLContext.read.json() ).", "def node2json(node):\n jsonNode=\"{\\\"id\\\":\\\"%s\\\", \\\"lat\\\":\\\"%s\\\", \\\"lon\\\":\\\"%s\\\"}\"%(node.id,node.lat,node.lon)\n return jsonNode\n\ndef node2jsonfile(fname,nodeset):\n fnode = open(fname,\"w+\")\n for n in nodeset:\n jn = node2json(n) + \"\\n\"\n fnode.write(jn)\n fnode.close()\n print(\"Nodes:\",len(nodeset),\", Write to: \",fname)", "Save the result to a JSON file.", "node2jsonfile(\"overpass.osm_node.json\",result.nodes) ", "Have a look at the file.", "!ls -l -h" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kungsik/text_fabric_sample
tutorial_3.ipynb
gpl-3.0
[ "Text-Fabric API usage example 3", "from tf.fabric import Fabric\n\nETCBC = 'hebrew/etcbc4c'\nPHONO = 'hebrew/phono'\nTF = Fabric( modules=[ETCBC, PHONO], silent=False )\n\napi = TF.load('''\n book chapter verse\n sp nu gn ps vt vs st\n otype\n det\n g_word_utf8 trailer_utf8\n lex_utf8 lex voc_utf8\n g_prs_utf8 g_uvf_utf8\n prs_gn prs_nu prs_ps g_cons_utf8\n gloss \n''')\n\napi.makeAvailableIn(globals())", "Print the biblical text split into its individual word elements rather than as large word-node units\nGenesis 1:2 practice", "verseNode = T.nodeFromSection(('Genesis', 1, 2))\nwordsNode = L.d(verseNode, otype='word')\nprint(wordsNode)", "Print the first word node using the value of g_word_utf8, a feature other than the Text feature", "F.g_word_utf8.v(wordsNode[0])", "Applying the above, print the whole of Genesis 1:2 with a loop\nF.trailer_utf8 is the value that indicates whether there is a space or a special character between words, so it is essential when joining the text together", "\"\"\"add the verse number\"\"\"\nverse = str(T.sectionFromNode(verseNode)[2])\n\nfor w in wordsNode:\n verse += F.g_word_utf8.v(w)\n if F.trailer_utf8.v(w):\n verse += F.trailer_utf8.v(w)\n\nprint(verse)", "Print the whole of Genesis chapter 1", "chpNode = T.nodeFromSection(('Genesis', 1))\nverseNode = L.d(chpNode, otype='verse')\nverse = \"\"\n\nfor v in verseNode:\n verse += str(T.sectionFromNode(v)[2])\n wordsNode = L.d(v, otype='word')\n for w in wordsNode:\n verse += F.g_word_utf8.v(w)\n if F.trailer_utf8.v(w):\n verse += F.trailer_utf8.v(w)\n\nprint(verse)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sdpython/ensae_teaching_cs
_doc/notebooks/td1a/j2048_correction.ipynb
mit
[ "1A.2 - 2048 - winning strategy - correction\nThe 2048 game is fairly simple and was popular in its day. How can we devise a strategy that gets past 2048?", "from jyquickhelper import add_notebook_menu\nadd_notebook_menu()\n\nfrom pyquickhelper.helpgen import NbImage\nNbImage(\"images/2048.png\", width=200)", "Exercise 1: implement the rules of the game\nWe want to be able to chain moves and simulate games. We create several functions, testing each one before moving on to the next. The first function, create_game, creates an empty game.", "import numpy\n\ndef create_game():\n return numpy.zeros((4,4), dtype=int)\n\ncreate_game()", "The second draws a random number and adds it to an empty cell chosen at random, if any remain. If none remain, the game is over. We use the ravel function to turn the matrix into a flat array and check whether it contains any zero entries. That is the gameover1 function; alternatively we can use the masked_not_equal function, which gives the gameover function.", "import random\n\ndef gameover1(game):\n arr = game.ravel()\n arr = game[game==0]\n return len(arr) == 0\n\ndef gameover(game):\n return numpy.ma.masked_not_equal(game, 0).count() == 0\n\ndef joue(game):\n if gameover(game):\n raise Exception(\"Game Over\\n\" + str(game))\n else:\n while True:\n i = random.randint(0, game.shape[0]-1)\n j = random.randint(0, game.shape[1]-1)\n if game[i,j] == 0:\n n = random.randint(0,3)\n game[i,j] = 4 if n == 0 else 2\n break\n return game\n\ngame = create_game()\njoue(game) ", "We play a second move.", "joue(game)", "We check that after 16 moves the function raises an exception.", "game = create_game()\n\niter = 0\nwhile True:\n try:\n joue(game)\n except Exception as e:\n print(\"iteration\", iter)\n print(game)\n break\n iter += 1", "To play a move, the numbers have to fall. The logic is the same for every column and row. 
We create a function for that.", "def process_line(line):\n res = []\n for n in line:\n if n == 0:\n # A 0, skip it.\n continue\n if len(res) == 0:\n # First number, append it to the result.\n res.append(n)\n else:\n prev = res[-1]\n if prev == n:\n # If the number is the same, merge them.\n res[-1] = 2*n\n else:\n # Otherwise append it.\n res.append(n)\n while len(res) < len(line):\n res.append(0)\n return res\n\nprocess_line([2,2,4,0])\n\nprocess_line([0,2,0,0])", "We write the update function for the 4 directions.", "def update_game(game, direction):\n if direction == 0:\n lines = [process_line(game[i,:]) for i in range(game.shape[0])]\n game = numpy.array(lines)\n elif direction == 1:\n lines = [process_line(game[:,i]) for i in range(game.shape[1])]\n game = numpy.array(lines).T\n elif direction == 2:\n lines = [list(reversed(process_line(game[i,::-1]))) for i in range(game.shape[0])]\n game = numpy.array(lines)\n elif direction == 3:\n lines = [list(reversed(process_line(game[::-1,i]))) for i in range(game.shape[1])]\n game = numpy.array(lines).T\n return game", "We test it for 5 moves.", "game = create_game()\nfor i in range(0,5):\n game = joue(game)\n print('-------------')\n print(game)\n direction = i % 4\n game = update_game(game, direction)\n print(\"direction=\",direction)\n print(game) ", "Exercise 2: implement a strategy\n(to come)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dietmarw/EK5312_ElectricalMachines
Chapman/Ch4-Problem_4-03.ipynb
unlicense
[ "Exercises Electric Machinery Fundamentals\nChapter 4\nProblem 4-3", "%pylab notebook", "Description\nAssume that the field current of the generator in Problem 4-2 has been adjusted to a value of $5\\,A$.\n(a)\n\nWhat will the terminal voltage of this generator be if it is connected to a $\\Delta$-connected load with an impedance of $24\\,\\Omega\\angle 25°$?\n\n(b)\n\nSketch the phasor diagram of this generator.\n\n(c)\n\nWhat is the efficiency of the generator at these conditions?\n\n(d)\nNow assume that another identical $\\Delta$-connected load is to be paralleled with the first one. \n\nWhat happens to the phasor diagram for the generator?\n\n(e)\n\nWhat is the new terminal voltage after the load has been added?\n\n(f)\n\nWhat must be done to restore the terminal voltage to its original value?", "If = 5.0 # [A]\nPF = 0.9\nXs = 2.5 # [Ohm]\nRa = 0.2 # [Ohm]\nZload = 24 * (cos(25/180.0 * pi) + sin(25/180.0 * pi)*1j)\nP = 50e6 # [W]\nPf_w = 1.0e6 # [W]\nPcore = 1.5e6 # [W]\nPstray = 0 # [W]\nn_m = 1800 # [r/min]", "SOLUTION\n(a)\nIf the field current is $5.0 A$, the open-circuit terminal voltage will be about $16,500\\,V$, and the open-circuit phase voltage in the generator (and hence $E_A$ ) will be $\\frac{16,500\\,V}{\\sqrt{3}}$ .", "Vl = 16.5e3 #[V]\nia = P / (sqrt(3) * Vl)\nIa_angle = -arccos(PF)\nIa = ia * (cos(Ia_angle) + sin(Ia_angle)*1j)\n\nEa = Vl / sqrt(3)\nprint('Ea = {:.0f} V'.format(Ea))", "The load is $\\Delta$-connected with three impedances of $24\\,\\Omega \\angle 25^\\circ$ . 
From the Y-$\\Delta$ transform, this load is equivalent to a Y-connected load with three impedances of:", "Z = Zload/3\nZ_angle = arctan(Z.imag/Z.real)\nprint('Z = {:.0f} Ω ∠{:.0f}° '.format(abs(Z), Z_angle / pi*180))", "The resulting per-phase equivalent circuit is shown below:\n<img src=\"figs/Problem_4-03a.jpg\" width=\"70%\">\nThe magnitude of the phase current flowing in this generator is:\n$$I_A = \\frac{E_A}{|R_A + jX_S +Z|}$$", "ia = Ea / (abs(Ra + Xs*1j + Z))\nprint('ia = {:.0f} A'.format(ia))\nIa = ia * (cos(-Z_angle) + sin(-Z_angle)*1j)\nIa_angle = arctan(Ia.imag/Ia.real)", "Therefore, the magnitude of the phase voltage is:\n$$V_\\phi = I_AZ$$", "V_phase = ia * abs(Z)\nprint('V_phase = {:.0f} V'.format(V_phase))", "and the terminal voltage is:\n$$V_T = \\sqrt{3}V_\\phi$$", "Vt = sqrt(3) * V_phase\nprint('''\nVt = {:.0f} V\n============'''.format(Vt))", "(b)\nArmature current is $I_A = 1004\\,A\\angle -25°$ , and the phase voltage is $V_\\phi = 8032\\,V\\angle 0°$. Therefore, the internal generated voltage is:\n$$\\vec{E}_A = \\vec{V}_\\phi + R_A\\vec{I}_A + jX_S\\vec{I}_A$$", "EA = V_phase + Ra*Ia + Xs*1j*Ia\nEA_angle = arctan(EA.imag/EA.real)\nprint('EA = {:.0f} V ∠{:.1f}°'.format(abs(EA), EA_angle / pi *180))", "The resulting phasor diagram is shown below (not to scale and with some round-off errors):\n<img src=\"figs/Problem_4-03b.jpg\" width=\"70%\">\n(c)\nThe efficiency of the generator under these conditions can be found as follows:\n$$P_\\text{out} = 3 V_\\phi I_A \\cos{\\theta}$$", "Pout = 3 * V_phase * abs(Ia) * cos(Ia_angle)\nprint('Pout = {:.1f} MW'.format(Pout/1e6))", "$$P_\\text{CU} = 3I^2_AR_A$$", "Pcu = 3 * abs(Ia)**2 * Ra\nprint('Pcu = {:.1f} kW'.format(Pcu/1e3))", "$P_\\text{F\\&W} = 1\\,MW$:", "Pf_w = 1e6 # [W] ", "$P_\\text{core} = 1.5\\,MW$:", "Pcore = 1.5e6 # [W]", "$P_\\text{stray} \\approx 0$ (assumed):", "Pstray = 0 # [W]", "$$P_\\text{in} = P_\\text{out} + P_\\text{CU} + P_\\text{F\\&W} + P_\\text{core} + P_\\text{stray}$$", "Pin = 
Pout + Pcu + Pf_w + Pcore + Pstray\nprint('Pin = {:.1f} MW'.format(Pin/1e6))", "$$\\eta = \\frac{P_\\text{out}}{P_\\text{in}} \\cdot 100\\%$$", "eta = Pout / Pin\nprint('''\nη = {:.1f} %\n=========='''.format(eta*100))", "(d)\nTo get the basic idea of what happens, we will ignore the armature resistance for the moment. If the field current and the rotational speed of the generator are constant, then the magnitude of $E_A( = K \\phi\\omega)$ is constant. The quantity $jX_S \\vec{I}_A$ increases in length at the same angle, while the magnitude of $\\vec{E}_A$ must remain constant. Therefore, $\\vec{E}_A$ “swings” out along the arc of constant magnitude until the new $jX_S \\vec{I}_S$ fits exactly between $\\vec{V}_\\phi$ and $\\vec{E}_A$ .\n<img src=\"figs/Problem_4-03c.jpg\" width=\"60%\">\n(e)\nThe new impedance per phase will be half of the old value, so", "Znew = Z * 0.5\nZnew_angle = arctan(Z.imag/Z.real)\nprint('Z = {:.0f} Ω ∠{:.0f}° '.format(abs(Znew), Znew_angle /pi*180))", "The magnitude of the phase current flowing in this generator is:\n$$I_A = \\frac{E_A}{|R_A + jX_S +Z_\\text{new}|}$$", "ia = Ea / (abs(Ra + Xs*1j + Znew))\nprint('ia = {:.1f} A'.format(ia))", "Therefore, the magnitude of the phase voltage is:\n$$V_\\phi = I_AZ_\\text{new}$$", "V_phase = ia * abs(Znew)\nprint('V_phase = {:.1f} V'.format(V_phase))", "and the terminal voltage is:\n$$V_T = \\sqrt{3}V_\\phi$$", "Vt = sqrt(3) * V_phase\nprint('''\nVt = {:.1f} V\n=============='''.format(Vt))", "(f)\nTo restore the terminal voltage to its original value, increase the field current: $I_F\\uparrow$ ." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
ex07-Interpolate 2D field on regular and irregular grids.ipynb
mit
[ "%load_ext load_style\n%load_style talk.css", "Interpolate 2D field on regular and irregular grids\nFrom https://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html, we know There are several general interpolation facilities available in SciPy, for data in 1, 2, and higher dimensions:\n\nA class representing an interpolant (interp1d) in 1-D, offering several interpolation methods.\nConvenience function griddata offering a simple interface to interpolation in N dimensions (N = 1, 2, 3, 4, ...). Object-oriented interface for the underlying routines is also available.\nFunctions for 1- and 2-dimensional (smoothed) cubic-spline interpolation, based on the FORTRAN library FITPACK. There are both procedural and object-oriented interfaces for the FITPACK library.\nInterpolation using Radial Basis Functions.\n\nIn this notebook, we mainly use interp2d for regular grids while griddata for irregular grids.\n1. Load basic libs", "% matplotlib inline\n\nimport numpy as np\nfrom scipy.interpolate import interp2d, griddata\nimport matplotlib.pyplot as plt\nimport numpy.ma as ma\nfrom numpy.random import uniform, seed\n\nfrom netCDF4 import Dataset as netcdf # netcdf4-python module", "2. 
Work on regular grids\n2.1 Extract data from skt data", "ncset= netcdf(r'data\\skt.mon.mean.nc')\n\nlon = ncset['lon'][:] \nlat = ncset['lat'][:] \nskt = ncset['skt'][0,:,:] \nskt.shape", "2.2 Prepare new grids of longitude and latitude\nKeep longitude and latitude in a monotonic increasing manner", "lat_new = np.linspace(np.min(lat), np.max(lat), skt.shape[0]*2)\nlon_new = np.linspace(np.min(lon), np.max(lon), skt.shape[1]*2)", "2.3 Contruct interploate function and apply to new grids", "func = interp2d(lon, lat, skt, kind='cubic')\n# apply to new level and latitude\nsktnew = func(lon_new, lat_new)", "2.4 Have a comparision plot", "f, axarr = plt.subplots(2)\nf.set_figwidth(12)\nf.set_figheight(9)\n\n[lons, lats] = np.meshgrid(lon, lat)\naxarr[0].pcolormesh(lons, lats, skt)\naxarr[0].set_title('SKT in Jan, 1948 [$^oC$]')\n\n[lons_new, lats_new] = np.meshgrid(lon_new, lat_new)\naxarr[1].pcolormesh(lons_new, lats_new, sktnew)", "3. Work on irregular grids\nA commonly asked question on the matplotlib mailing lists is \"how do I make a contour plot of my irregularly spaced data?\". The answer is, first you interpolate it to a regular grid. As of version 0.98.3, matplotlib provides a griddata function that behaves similarly to the matlab version. 
It performs \"natural neighbor interpolation\" of irregularly spaced data to a regular grid, which you can then plot with contour, imshow or pcolor.\n3.1 Have a try on fake data\nAn example from http://scipy-cookbook.readthedocs.io/items/Matplotlib_Gridding_irregularly_spaced_data.html", "# make up some randomly distributed data\nseed(1234)\nnpts = 200\nx = uniform(-2,2,npts)\ny = uniform(-2,2,npts)\nz = x*np.exp(-x**2-y**2)\n# define grid.\nxi = np.linspace(-2.1,2.1,100)\nyi = np.linspace(-2.1,2.1,100)\n# grid the data.\nzi = griddata((x, y), z, (xi[None,:], yi[:,None]), method='cubic')\n# contour the gridded data, plotting dots at the randomly spaced data points.\nCS = plt.contour(xi,yi,zi,15,linewidths=0.5,colors='k')\nCS = plt.contourf(xi,yi,zi,15,cmap=plt.cm.jet)\nplt.colorbar() # draw colorbar\n# plot data points.\nplt.scatter(x,y,marker='o',c='b',s=5)\nplt.xlim(-2,2)\nplt.ylim(-2,2)\nplt.title('griddata test (%d points)' % npts)", "3.2 Try skt data\nHere the interpolation method of griddata is used. Before using it, we have to use ravel to transform the data from a 2D into a 1D array (i.e., like scatter dots).", "# grid the data.\nskt_new = griddata((lons.ravel(), lats.ravel()), skt.ravel(), (lon_new[None,:], lat_new[:,None]), method='cubic')\n\nf, axarr = plt.subplots(2)\nf.set_figwidth(12)\nf.set_figheight(9)\n\n[lons, lats] = np.meshgrid(lon, lat)\naxarr[0].pcolormesh(lons, lats, skt)\naxarr[0].set_title('SKT in Jan, 1948 [$^oC$]')\n\n[lons_new, lats_new] = np.meshgrid(lon_new, lat_new)\naxarr[1].pcolormesh(lons_new, lats_new, skt_new)", "References\nhttp://unidata.github.io/netcdf4-python/\nJohn D. Hunter. Matplotlib: A 2D Graphics Environment, Computing in Science & Engineering, 9, 90-95 (2007), DOI:10.1109/MCSE.2007.55\nStéfan van der Walt, S. Chris Colbert and Gaël Varoquaux. 
The NumPy Array: A Structure for Efficient Numerical Computation, Computing in Science & Engineering, 13, 22-30 (2011), DOI:10.1109/MCSE.2011.37\nKalnay et al.,The NCEP/NCAR 40-year reanalysis project, Bull. Amer. Meteor. Soc., 77, 437-470, 1996." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.24/_downloads/3bb9354e99617f5fdf32e50748fc566d/15_inplace.ipynb
bsd-3-clause
[ "%matplotlib inline", "Modifying data in-place\nMany of MNE-Python's data objects (~mne.io.Raw, ~mne.Epochs, ~mne.Evoked,\netc) have methods that modify the data in-place (either optionally or\nobligatorily). This can be advantageous when working with large datasets\nbecause it reduces the amount of computer memory needed to perform the\ncomputations. However, it can lead to unexpected results if you're not aware\nthat it's happening. This tutorial provides a few examples of in-place\nprocessing, and how and when to avoid it.\nAs usual we'll start by importing the modules we need and\nloading some example data &lt;sample-dataset&gt;:", "import os\nimport mne\n\nsample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\n# the preload flag loads the data into memory now\nraw = mne.io.read_raw_fif(sample_data_raw_file, preload=True)\nraw.crop(tmax=10.) # raw.crop() always happens in-place", "Signal processing\nMost MNE-Python data objects have built-in methods for filtering, including\nhigh-, low-, and band-pass filters (~mne.io.Raw.filter), band-stop filters\n(~mne.io.Raw.notch_filter),\nHilbert transforms (~mne.io.Raw.apply_hilbert),\nand even arbitrary or user-defined functions (~mne.io.Raw.apply_function).\nThese typically always modify data in-place, so if we want to preserve\nthe unprocessed data for comparison, we must first make a copy of it. For\nexample:", "original_raw = raw.copy()\nraw.apply_hilbert()\nprint(f'original data type was {original_raw.get_data().dtype}, after '\n f'apply_hilbert the data type changed to {raw.get_data().dtype}.')", "Channel picking\nAnother group of methods where data is modified in-place are the\nchannel-picking methods. 
For example:", "print(f'original data had {original_raw.info[\"nchan\"]} channels.')\noriginal_raw.pick('eeg') # selects only the EEG channels\nprint(f'after picking, it has {original_raw.info[\"nchan\"]} channels.')", "Note also that when picking only EEG channels, projectors that affected only\nthe magnetometers were dropped, since there are no longer any magnetometer\nchannels.\nThe copy parameter\nAbove we saw an example of using the ~mne.io.Raw.copy method to facilitate\ncomparing data before and after processing. This is not needed when using\ncertain MNE-Python functions, because they have a function parameter\nwhere you can specify copy=True (return a modified copy of the data) or\ncopy=False (operate in-place). For example, mne.set_eeg_reference is\none such function; notice that here we plot original_raw after the\nrereferencing has been done, but original_raw is unaffected because\nwe specified copy=True:", "rereferenced_raw, ref_data = mne.set_eeg_reference(original_raw, ['EEG 003'],\n copy=True)\noriginal_raw.plot()\nrereferenced_raw.plot()", "Another example is the picking function mne.pick_info, which operates on\nmne.Info dictionaries rather than on data objects. See\ntut-info-class for details.\nSummary\nGenerally speaking, you should expect that methods of data objects will\noperate in-place, and functions that take a data object as a parameter will\noperate on a copy of the data (unless the function has a copy parameter\nand it defaults to False or you specify copy=False).\nDuring the exploratory phase of your analysis, where you might want\nto try out the effects of different data cleaning approaches, you should get\nused to patterns like raw.copy().filter(...).plot() or\nraw.copy().apply_proj().plot_psd() if you want to avoid having to re-load\ndata and repeat earlier steps each time you change a computation (see the\nsect-meth-chain section for more info on method chaining)." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
hunterherrin/phys202-2015-work
assignments/assignment07/AlgorithmsEx01.ipynb
mit
[ "Algorithms Exercise 1\nImports", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np", "Word counting\nWrite a function tokenize that takes a string of English text returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic:\n\nSplit the string into lines using splitlines.\nSplit each line into a list of words and merge the lists for each line.\nUse Python's builtin filter function to remove all punctuation.\nIf stop_words is a list, remove all occurences of the words in the list.\nIf stop_words is a space delimeted string of words, split them and remove them.\nRemove any remaining empty words.\nMake all words lowercase.", "s=\"\"\"\nAPRIL--this is the cruellest month, breeding\nLilacs out of the dead land, mixing\nMemory and desire, stirring\nDull roots with spring rain.\n\"\"\"\nstop_words='the is'\ns=s.splitlines()\ny=[]\nfor i in s:\n c=i.split()\n y.append(c)\ny\nz=[]\nfor j in range(len(y)):\n z=z+y[j]\nb=' '.join(z)\nu=list(filter(punctuation_split, b))\nv=''.join(u)\nif isinstance(stop_words, str)== True:\n stop_words=stop_words.split()\n for i in range(len(stop_words)):\n v=v.replace(' '+stop_words[i],'')\n v=v.replace(' ','')\nelse:\n for i in range(len(stop_words)):\n v=v.replace(stop_words[i],'')\n v=v.replace(' ','')\nv=v.lower()\nu\n\ndef punctuation_split(x):\n if x == \"'\" or x == '`' or x == '~' or x == '!' or x == '@' or x == '#' or x == '$' or x == '%' or x == '^' or x == '&' or x == '*' or x == '(' or x == ')' or x == '-' or x == '_' or x == '=' or x == '+' or x == '[' or x == ']' or x == '{' or x == '}' or x == '|' or x == '\\\\' or x == '\"' or x == ':' or x == ';' or x == '<' or x == '>' or x == ',' or x == '.' or x == '?' 
or x == '/':\n return False\n return True\nu=list(filter(punctuation_split, b))\n''.join(u)\n\ndef tokenize(s, stop_words=None, punctuation='`~!@#$%^&*()_-+={[}]|\\:;\"<,>.?/}\\\\'):\n \"\"\"Split a string into a list of words, removing punctuation and stop words.\"\"\"\n s=s.replace('-',' ')\n s=s.replace('--',' ')\n s=s.splitlines() #Collaborated with Kevin Phung\n y=[]\n for i in s:\n c=i.split()\n y.append(c)\n z=[]\n for j in range(len(y)):\n z=z+y[j]\n b=' '.join(z)\n u=list(filter(punctuation_split, b))\n v=''.join(u)\n if stop_words==None:\n v=v.replace(' ','')\n elif isinstance(stop_words, str)== True:\n stop_words=stop_words.split()\n for i in range(len(stop_words)):\n v=v.replace(' '+stop_words[i]+' ',' ')\n \n else:\n for i in range(len(stop_words)):\n v=v.replace(' '+stop_words[i],'')\n v=v.replace(' ','')\n v=v.lower()\n return(v.split())\n\nwasteland = \"\"\"\nAPRIL is the cruellest month, breeding\nLilacs out of the dead land, mixing\nMemory and desire, stirring\nDull roots with spring rain.\n\"\"\"\ntokenize(wasteland, stop_words='is the of and')\n\nassert tokenize(\"This, is the way; that things will end\", stop_words=['the', 'is']) == \\\n ['this', 'way', 'that', 'things', 'will', 'end']\nwasteland = \"\"\"\nAPRIL is the cruellest month, breeding\nLilacs out of the dead land, mixing\nMemory and desire, stirring\nDull roots with spring rain.\n\"\"\"\n\nassert tokenize(wasteland, stop_words='is the of and') == \\\n ['april','cruellest','month','breeding','lilacs','out','dead','land',\n 'mixing','memory','desire','stirring','dull','roots','with','spring',\n 'rain']\n\ntokenize(wasteland, stop_words='is the of and')\n\ntokenize('this and the this from and a a a')", "Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.", "def count_words(data):\n \"\"\"Return a word count dictionary from the list of words in data.\"\"\"\n 
word_dictionary={}\n for i in data:\n if i not in word_dictionary:\n word_dictionary[i]=1\n else:\n word_dictionary[i]=word_dictionary[i]+1\n return word_dictionary\n\nassert count_words(tokenize('this and the this from and a a a')) == \\\n {'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}\n\nsorted", "Write a function sort_word_counts that return a list of sorted word counts:\n\nEach element of the list should be a (word, count) tuple.\nThe list should be sorted by the word counts, with the higest counts coming first.\nTo perform this sort, look at using the sorted function with a custom key and reverse\n argument.", "def sort_word_counts(wc):\n \"\"\"Return a list of 2-tuples of (word, count), sorted by count descending.\"\"\"\n x=sorted(wc, key=wc.get, reverse=True)\n y=sorted(wc.values(), reverse=True)\n return list(zip(x,y))\n\n\nsort_word_counts(count_words(tokenize('this and a the this this and a a a')))\n\n\nassert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \\\n [('a', 4), ('this', 3), ('and', 2), ('the', 1)]", "Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt:\n\nRead the file into a string.\nTokenize with stop words of 'the of and a to in is it that as'.\nPerform a word count, the sort and save the result in a variable named swc.", "\nnnn=open('mobydick_chapter1.txt')\nmobypenis=nnn.read()\n\nswc=sort_word_counts(count_words(tokenize(mobypenis, 'the of and a to in is it that as')))\nswc\n\nassert swc[0]==('i',43)\nassert len(swc)==848", "Create a \"Cleveland Style\" dotplot of the counts of the top 50 words using Matplotlib. 
If you don't know what a dotplot is, you will have to do some research...", "ff=np.array(swc)\ndd=ff[range(50),0]\ndd\ncc=ff[range(50),1]\ncc\n\nplt.figure(figsize=(10,10))\nplt.scatter(cc, range(50))\nplt.yticks(range(50), dd)\nplt.title('Most Common Words in Moby Dick First Chapter')\nplt.xlabel('Number of times word appears')\nplt.tight_layout()\n\nff\n\nassert True # use this for grading the dotplot" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
owlas/magpy
docs/source/notebooks/single-particle-equilibrium.ipynb
bsd-3-clause
[ "Thermal equilibrium of a single particle\nIn a large ensemble of identical systems each member will have a different state due to thermal fluctuations, even if all the systems were initialised in the same initial state.\nAs we integrate the dynamics of the ensemble we will have a distribution of states (i.e. the states of each member of the system). However, as the ensemble evolves, the distribution over the states eventually reaches a stationary distribution: the Boltzmann distribution. Even though the state of each member in the ensemble contitues to fluctuate, the ensemble as a whole is in a stastistical equilibrium (thermal equilibrium).\nFor an ensemble of single particles, we can compute the Boltzmann distribution by hand. In this example, we compare the analytical solution with the result of simulating an ensemble with Magpy.\nProblem setup\nA single particle has a uniaxial anisotropy axis $K$ and a magnetic moment of three components (x,y,z components). The angle $\\theta$ is the angle between the magnetic moment and the anisotropy axis.\n\nBoltzmann distribution\nThe Boltzmann distribution represents of states over the ensemble; here the state is the solid angle $\\phi=\\sin(\\theta)$ (i.e. the distribution over the surface of the sphere). 
The distribution is parameterised by the temperature of the system and the energy landscape of the problem.\n$$p(\\theta) = \\frac{\\sin(\\theta)e^{-E(\\theta)/(K_BT)}}{Z}$$\nwhere $Z$ is called the partition function:\n$$Z=\\int_\\theta \\sin(\\theta)e^{-E(\\theta)/(K_BT)}\\mathrm{d}\\theta$$\nStoner-Wohlfarth model\nThe energy function for a single domain magnetic nanoparticle is given by the Stoner-Wohlfarth equation:\n$$\\frac{E\\left(\\theta\\right)}{K_BT}=-\\sigma\\cos^2\\theta$$\nwhere $\\sigma$ is called the normalised anisotropy strength:\n$$\\sigma=\\frac{KV}{K_BT}$$\nFunctions for analytic solution", "import numpy as np\n\n# anisotropy energy of the system\ndef anisotropy_e(theta, sigma):\n return -sigma*np.cos(theta)**2\n\n# numerator of the Boltzmann distribution\n# (i.e. without the partition function Z)\ndef p_unorm(theta, sigma):\n return np.sin(theta)*np.exp(-anisotropy_e(theta, sigma))", "We use the quadrature rule to numerically evaluate the partition function $Z$.", "from scipy.integrate import quad\n\n# The analytic Boltzmann distribution\ndef boltzmann(thetas, sigma):\n Z = quad(lambda t: p_unorm(t, sigma), 0, thetas[-1])[0]\n distribution = np.array([\n p_unorm(t, sigma) / Z for t in thetas\n ])\n return distribution", "Energy landscape\nWe can plot the energy landscape (energy as a function of the system variables).", "import matplotlib.pyplot as plt\n%matplotlib inline\n\nthetas = np.linspace(0, np.pi, 1000)\nsigmas = [1, 3, 5, 7]\n\ne_landscape = [anisotropy_e(thetas, s) for s in sigmas]\nfor s, e in zip(sigmas, e_landscape):\n plt.plot(thetas, e, label='$\\sigma={}$'.format(s))\nplt.legend(); plt.xlabel('Angle (radians)'); plt.ylabel('Energy')\nplt.title('Energy landscape for a single particle');", "We observe that:\n - The energy of the system has two minima: one alongside each direction of the anisotropy axis.\n - The minima are separated by a maximum: perpendicular to the anisotropy axis.\n - Stronger anisotropy increases the size of the 
energy barrier between the two minima.\nEquilibrium distribution (Boltzmann)\nWe can also plot the equilibrium distribution of the system, which is the probability distribution over the system states in a large ensemble of systems.", "p_dist = [boltzmann(thetas, s) for s in sigmas]\n\nfor s, p in zip(sigmas, p_dist):\n plt.plot(thetas, p, label='$\\sigma={}$'.format(s))\nplt.legend(); plt.xlabel('Angle (radians)')\nplt.ylabel('Probability of angle')\nplt.title('Probability distribution of angle');", "What does this mean? If we had an ensemble of single particles, the distribution of the states of those particles varies greatly depending on $\\sigma$. Remember we can decrease $\\sigma$ by reducing the anisotropy strength or particle size or by increasing the temperature.\n - When $\\sigma$ is high, most of the particles in the ensemble will be found closely aligned with the anisotropy axis.\n - When $\\sigma$ is low, the states of the particles are more evenly distributed.\nMagpy equilibrium\nUsing Magpy, we can simulate the dynamics of the state of a single nanoparticle. If we simulate a large ensemble of these systems for 'long enough', the distribution of states will reach equilibrium. 
If Magpy is implemented correctly, we should recover the analytical distribution from above.\nSet up the model\nSelect the parameters for the single particle", "import magpy as mp\n\n# These parameters will determine the distribution\nK = 1e5\nr = 7e-9\nT = 300\nkdir = [0., 0., 1.]\n\n# These parameters affect the dynamics but\n# have no effect on the equilibrium\nMs = 400e3\nlocation = [0., 0., 0.]\nalpha=1.0\ninitial_direction = [0., 0., 1.]\n\n# Normalised anisotropy strength KV/KB/T\nV = 4./3 * np.pi * r**3\nkb = mp.core.get_KB()\nsigma = K * V / kb / T\nprint(sigma)\n\nsingle_particle = mp.Model(\n anisotropy=[K],\n anisotropy_axis=[kdir],\n damping=alpha,\n location=[location],\n magnetisation=Ms,\n magnetisation_direction=[initial_direction],\n radius=[r],\n temperature=T\n)", "Create an ensemble\nFrom the single particle we create an ensemble of 10,000 identical particles.", "particle_ensemble = mp.EnsembleModel(\n base_model=single_particle, N=10000\n)", "Simulate\nNow we simulate! We don't need to simulate for very long because $\\sigma$ is very high and the system will reach equilibrium quickly.", "res = particle_ensemble.simulate(\n end_time=1e-9, time_step=1e-12, max_samples=50,\n random_state=1001, implicit_solve=True\n)", "Check that we have equilibrated", "plt.plot(res.time, res.ensemble_magnetisation())\nplt.title('10,000 single particles - ensemble magnetisation')\nplt.xlabel('Time'); plt.ylabel('Magnetisation');", "We can see that the system has reached a local minimum. We could let the simulation run until the ensemble relaxes into both minima but it would take a very long time because the energy barrier is so high in this example.\nCompute theta\nThe results of the simulation are x,y,z coordinates of the magnetisation of each particle in the ensemble. 
We need to convert these into angles.", "M_z = np.array([state['z'][0] for state in res.final_state()])\nm_z = M_z / Ms\nsimulated_thetas = np.arccos(m_z)", "Compare to analytical solution\nNow we compare our empirical distribution of states to the analytical distribution that we computed above.", "theta_grid = np.linspace(0.0, simulated_thetas.max(), 100)\nanalytical_probability = boltzmann(theta_grid, sigma)\n\nplt.hist(simulated_thetas, normed=True, bins=80, label='Simulated');\nplt.plot(theta_grid, analytical_probability, label='Analytical')\nplt.title('Simulated and analytical distributions')\nplt.xlabel('Angle (radians)'); plt.ylabel('Probability of angle');", "The results look good! We could simulate an even bigger ensemble to produce a smoother empirical distribution." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bollwyvl/ip-cad
CAD Widget Example.ipynb
bsd-3-clause
[ "Widget: CAD\n\n<i class=\"fa fa-info-circle fa-2x text-primary\"></i> Execute each of these cells in order, such as with <label class=\"label label-default\">Shift+Enter</label>\n\nFirst, load CAD from your module:", "from ipcad.widgets import CAD", "Then, create an instance of CAD:", "cadExample = CAD(assembly_url=\"examples/data/cutter/index.json\", height=500)", "Display the widget:", "cadExample\n\nfrom IPython.html.widgets import interact\n@interact(near=(1, 100), far=(100, 400))\ndef cam(near, far):\n cadExample.camera_near, cadExample.camera_far = near, far", "<i class=\"fa fa-image fa-2x\"></i> You should see a little text box with some controls in it.\n\nNow, you can update the value:" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.19/_downloads/2784a8d5822ed9797c0330f973573c10/plot_stats_cluster_erp.ipynb
bsd-3-clause
[ "%matplotlib inline", "Visualising statistical significance thresholds on EEG data\nMNE-Python provides a range of tools for statistical hypothesis testing\nand the visualisation of the results. Here, we show a few options for\nexploratory and confirmatory tests - e.g., targeted t-tests, cluster-based\npermutation approaches (here with Threshold-Free Cluster Enhancement);\nand how to visualise the results.\nThe underlying data comes from [1]; we contrast long vs. short words.\nTFCE is described in [2].", "import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.stats import ttest_ind\n\nimport mne\nfrom mne.channels import find_ch_connectivity, make_1020_channel_selections\nfrom mne.stats import spatio_temporal_cluster_test\n\nnp.random.seed(0)\n\n# Load the data\npath = mne.datasets.kiloword.data_path() + '/kword_metadata-epo.fif'\nepochs = mne.read_epochs(path)\nname = \"NumberOfLetters\"\n\n# Split up the data by the median length in letters via the attached metadata\nmedian_value = str(epochs.metadata[name].median())\nlong_words = epochs[name + \" > \" + median_value]\nshort_words = epochs[name + \" < \" + median_value]", "If we have a specific point in space and time we wish to test, it can be\nconvenient to convert the data into Pandas Dataframe format. In this case,\nthe :class:mne.Epochs object has a convenient\n:meth:mne.Epochs.to_data_frame method, which returns a dataframe.\nThis dataframe can then be queried for specific time windows and sensors.\nThe extracted data can be submitted to standard statistical tests. 
Here,\nwe conduct t-tests on the difference between long and short words.", "time_windows = ((.2, .25), (.35, .45))\nelecs = [\"Fz\", \"Cz\", \"Pz\"]\n\n# display the EEG data in Pandas format (first 5 rows)\nprint(epochs.to_data_frame()[elecs].head())\n\nreport = \"{elec}, time: {tmin}-{tmax} s; t({df})={t_val:.3f}, p={p:.3f}\"\nprint(\"\\nTargeted statistical test results:\")\nfor (tmin, tmax) in time_windows:\n long_df = long_words.copy().crop(tmin, tmax).to_data_frame()\n short_df = short_words.copy().crop(tmin, tmax).to_data_frame()\n for elec in elecs:\n # extract data\n A = long_df[elec].groupby(\"condition\").mean()\n B = short_df[elec].groupby(\"condition\").mean()\n\n # conduct t test\n t, p = ttest_ind(A, B)\n\n # display results\n format_dict = dict(elec=elec, tmin=tmin, tmax=tmax,\n df=len(epochs.events) - 2, t_val=t, p=p)\n print(report.format(**format_dict))", "Absent specific hypotheses, we can also conduct an exploratory\nmass-univariate analysis at all sensors and time points. This requires\ncorrecting for multiple tests.\nMNE offers various methods for this; amongst them, cluster-based permutation\nmethods allow deriving power from the spatio-temporal correlation structure\nof the data. Here, we use TFCE.", "# Calculate statistical thresholds\ncon = find_ch_connectivity(epochs.info, \"eeg\")\n\n# Extract data: transpose because the cluster test requires channels to be last\n# In this case, inference is done over items. 
In the same manner, we could\n# also conduct the test over, e.g., subjects.\nX = [long_words.get_data().transpose(0, 2, 1),\n short_words.get_data().transpose(0, 2, 1)]\ntfce = dict(start=.2, step=.2)\n\nt_obs, clusters, cluster_pv, h0 = spatio_temporal_cluster_test(\n X, tfce, n_permutations=100) # a more standard number would be 1000+\nsignificant_points = cluster_pv.reshape(t_obs.shape).T < .05\nprint(str(significant_points.sum()) + \" points selected by TFCE ...\")", "The results of these mass univariate analyses can be visualised by plotting\n:class:mne.Evoked objects as images (via :class:mne.Evoked.plot_image)\nand masking points for significance.\nHere, we group channels by Regions of Interest to facilitate localising\neffects on the head.", "# We need an evoked object to plot the image to be masked\nevoked = mne.combine_evoked([long_words.average(), -short_words.average()],\n weights='equal') # calculate difference wave\ntime_unit = dict(time_unit=\"s\")\nevoked.plot_joint(title=\"Long vs. short words\", ts_args=time_unit,\n topomap_args=time_unit) # show difference wave\n\n# Create ROIs by checking channel labels\nselections = make_1020_channel_selections(evoked.info, midline=\"12z\")\n\n# Visualize the results\nfig, axes = plt.subplots(nrows=3, figsize=(8, 8))\naxes = {sel: ax for sel, ax in zip(selections, axes.ravel())}\nevoked.plot_image(axes=axes, group_by=selections, colorbar=False, show=False,\n mask=significant_points, show_names=\"all\", titles=None,\n **time_unit)\nplt.colorbar(axes[\"Left\"].images[-1], ax=list(axes.values()), shrink=.3,\n label=\"uV\")\n\nplt.show()", "References\n.. [1] Dufau, S., Grainger, J., Midgley, KJ., Holcomb, PJ. A thousand\n words are worth a picture: Snapshots of printed-word processing in an\n event-related potential megastudy. Psychological Science, 2015\n.. 
[2] Smith and Nichols 2009, \"Threshold-free cluster enhancement:\n addressing problems of smoothing, threshold dependence, and\n localisation in cluster inference\", NeuroImage 44 (2009) 83-98." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kit-cel/lecture-examples
mloc/ch4_Deep_Learning/pytorch/Deep_NN_Detection_BPSK.ipynb
gpl-2.0
[ "BPSK Demodulation in Nonlinear Channels with Deep Neural Networks\nThis code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>\nThis code illustrates:\n* demodulation of BPSK symbols in highly nonlinear channels using an artificial neural network, implemented via PyTorch", "import torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom ipywidgets import interactive\nimport ipywidgets as widgets\n%matplotlib inline \n\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\nprint(\"We are using the following device for learning:\",device)", "Specify the parameters of the transmission as the fiber length $L$ (in km), the fiber nonlinearity coefficienty $\\gamma$ (given in 1/W/km) and the total noise power $P_n$ (given in dBM. The noise is due to amplified spontaneous emission in amplifiers along the link). We assume a model of a dispersion-less fiber affected by nonlinearity. The model, which is described for instance in [1] is given by an iterative application of the equation\n$$\nx_{k+1} = x_k\\exp\\left(\\jmath\\frac{L}{K}\\gamma|x_k|^2\\right) + n_{k+1},\\qquad 0 \\leq k < K\n$$\nwhere $x_0$ is the channel input (the modulated, complex symbols) and $x_K$ is the channel output. $K$ denotes the number of steps taken to simulate the channel Usually $K=50$ gives a good approximation.\n[1] S. Li, C. Häger, N. Garcia, and H. Wymeersch, \"Achievable Information Rates for Nonlinear Fiber Communication via End-to-end Autoencoder Learning,\" Proc. ECOC, Rome, Sep. 
2018", "# Length of transmission (in km)\nL = 5000\n\n# fiber nonlinearity coefficient\ngamma = 1.27\n\nPn = -21.3 # noise power (in dBm)\n\nKstep = 50 # number of steps used in the channel model\n\ndef simulate_channel(x, Pin): \n # modulate bpsk\n input_power_linear = 10**((Pin-30)/10)\n norm_factor = np.sqrt(input_power_linear);\n bpsk = (1 - 2*x) * norm_factor\n\n # noise variance per step \n sigma = np.sqrt((10**((Pn-30)/10)) / Kstep / 2) \n\n temp = np.array(bpsk, copy=True)\n for i in range(Kstep):\n power = np.absolute(temp)**2\n rotcoff = (L / Kstep) * gamma * power\n temp = temp * np.exp(1j*rotcoff) + sigma*(np.random.randn(len(x)) + 1j*np.random.randn(len(x)))\n return temp", "We consider BPSK transmission over this channel.\nShow constellation as a function of the fiber input power. When the input power is small, the effect of the nonlinearity is small (as $\\jmath\\frac{L}{K}\\gamma|x_k|^2 \\approx 0$) and the transmission is dominated by the additive noise. If the input power becomes larger, the effect of the noise (the noise power is constant) becomes less pronounced, but the constellation rotates due to the larger input power and hence effect of the nonlinearity.", "length = 5000\n\ndef plot_constellation(Pin):\n t = np.random.randint(2,size=length)\n r = simulate_channel(t, Pin)\n\n plt.figure(figsize=(6,6))\n font = {'size' : 14}\n plt.rc('font', **font)\n plt.rc('text', usetex=matplotlib.checkdep_usetex(True))\n plt.scatter(np.real(r), np.imag(r), c=t, cmap='coolwarm')\n plt.xlabel(r'$\\Re\\{r\\}$',fontsize=14)\n plt.ylabel(r'$\\Im\\{r\\}$',fontsize=14)\n plt.axis('equal')\n plt.title('Received constellation (L = %d km, $P_{in} = %1.2f$\\,dBm)' % (L, Pin)) \n #plt.savefig('bpsk_received_zd_%1.2f.pdf' % Pin,bbox_inches='tight')\n \ninteractive_update = interactive(plot_constellation, Pin = widgets.FloatSlider(min=-10.0,max=10.0,step=0.1,value=1, continuous_update=False, description='Input Power Pin (dBm)', style={'description_width': 'initial'}, 
layout=widgets.Layout(width='50%')))\n\n\noutput = interactive_update.children[-1]\noutput.layout.height = '500px'\ninteractive_update", "Helper function to plot the constellation together with the decision region. Note that a bit is decided as \"1\" if $\\sigma(\\boldsymbol{\\theta}^\\mathrm{T}\\boldsymbol{r}) > \\frac12$, i.e., if $\\boldsymbol{\\theta}^\\mathrm{T}\\boldsymbol{r}$ > 0. The decision line is therefore given by $\\theta_1\\Re{r} + \\theta_2\\Im{r} = 0$, i.e., $\\Im{r} = -\\frac{\\theta_1}{\\theta_2}\\Re{r}$\nGenerate training, validation and testing data sets", "# helper function to compute the bit error rate\ndef BER(predictions, labels):\n decision = predictions >= 0.5\n temp = decision != (labels != 0)\n return np.mean(temp)\n\n# set input power\nPin = 3\n\n# validation set. Training examples are generated on the fly\nN_valid = 100000\n\n\nhidden_neurons_1 = 8\nhidden_neurons_2 = 14\n\n\ny_valid = np.random.randint(2,size=N_valid)\nr = simulate_channel(y_valid, Pin)\n\n# find extension of data (for normalization and plotting)\next_x = max(abs(np.real(r)))\next_y = max(abs(np.imag(r)))\next_max = max(ext_x,ext_y)*1.2\n\n# scale data to be between 0 and 1\nX_valid = torch.from_numpy(np.column_stack((np.real(r), np.imag(r))) / ext_max).float().to(device)\n\n\n# meshgrid for plotting\nmgx,mgy = np.meshgrid(np.linspace(-ext_max,ext_max,200), np.linspace(-ext_max,ext_max,200))\nmeshgrid = torch.from_numpy(np.column_stack((np.reshape(mgx,(-1,1)),np.reshape(mgy,(-1,1)))) / ext_max).float().to(device)\n\nclass Receiver_Network(nn.Module):\n def __init__(self, hidden1_neurons, hidden2_neurons):\n super(Receiver_Network, self).__init__()\n # Linear function, 2 input neurons (real and imaginary part) \n self.fc1 = nn.Linear(2, hidden1_neurons) \n\n # Non-linearity\n self.activation_function = nn.ELU()\n \n # Linear function (hidden layer)\n self.fc2 = nn.Linear(hidden1_neurons, hidden2_neurons) \n \n # Output function \n self.fc3 = nn.Linear(hidden2_neurons, 
1)\n \n\n def forward(self, x):\n # Linear function, first layer\n out = self.fc1(x)\n\n # Non-linearity, first layer\n out = self.activation_function(out)\n \n # Linear function, second layer\n out = self.fc2(out)\n \n # Non-linearity, second layer\n out = self.activation_function(out)\n \n # Linear function, third layer\n out = self.fc3(out)\n \n return out\n\nmodel = Receiver_Network(hidden_neurons_1, hidden_neurons_2)\nmodel.to(device)\n\nsigmoid = nn.Sigmoid()\n\n\n# channel parameters\nnorm_factor = np.sqrt(10**((Pin-30)/10));\nsigma = np.sqrt((10**((Pn-30)/10)) / Kstep / 2)\n\n# Binary Cross Entropy loss\nloss_fn = nn.BCEWithLogitsLoss()\n\n# Adam Optimizer\noptimizer = optim.Adam(model.parameters()) \n\n\n# Training parameters\nnum_epochs = 160\nbatches_per_epoch = 300\n\n# Vary batch size during training\nbatch_size_per_epoch = np.linspace(100,10000,num=num_epochs)\n\n\nvalidation_BERs = np.zeros(num_epochs)\ndecision_region_evolution = []\n\nfor epoch in range(num_epochs):\n batch_labels = torch.empty(int(batch_size_per_epoch[epoch]), device=device)\n noise = torch.empty((int(batch_size_per_epoch[epoch]),2), device=device, requires_grad=False) \n\n for step in range(batches_per_epoch):\n # sample new mini-batch directory on the GPU (if available) \n batch_labels.random_(2)\n # channel simulation directly on the GPU\n bpsk = ((1 - 2*batch_labels) * norm_factor).unsqueeze(-1) * torch.tensor([1.0,0.0],device=device)\n\n for i in range(Kstep):\n power = torch.norm(bpsk, dim=1) ** 2\n rotcoff = (L / Kstep) * gamma * power\n noise.normal_(mean=0, std=sigma) # sample noise\n \n # phase rotation due to nonlinearity\n temp1 = bpsk[:,0] * torch.cos(rotcoff) - bpsk[:,1] * torch.sin(rotcoff) \n temp2 = bpsk[:,0] * torch.sin(rotcoff) + bpsk[:,1] * torch.cos(rotcoff) \n bpsk = torch.stack([temp1, temp2], dim=1) + noise\n\n bpsk = bpsk / ext_max\n outputs = model(bpsk)\n\n # compute loss\n loss = loss_fn(outputs.squeeze(), batch_labels)\n \n # compute gradients\n 
loss.backward()\n \n optimizer.step()\n # reset gradients\n optimizer.zero_grad()\n \n # compute validation BER\n out_valid = sigmoid(model(X_valid))\n validation_BERs[epoch] = BER(out_valid.detach().cpu().numpy().squeeze(), y_valid)\n \n print('Validation BER after epoch %d: %f (loss %1.8f)' % (epoch, validation_BERs[epoch], loss.detach().cpu().numpy())) \n \n # store decision region for generating the animation\n mesh_prediction = sigmoid(model(meshgrid)) \n decision_region_evolution.append(0.195*mesh_prediction.detach().cpu().numpy() + 0.4)\n\n\nplt.figure(figsize=(8,8))\nplt.contourf(mgx,mgy,decision_region_evolution[-1].reshape(mgy.shape).T,cmap='coolwarm',vmin=0.3,vmax=0.695)\nplt.scatter(X_valid[:,0].cpu()*ext_max, X_valid[:,1].cpu() * ext_max, c=y_valid, cmap='coolwarm')\nprint(Pin)\nplt.axis('scaled')\nplt.xlabel(r'$\\Re\\{r\\}$',fontsize=16)\nplt.ylabel(r'$\\Im\\{r\\}$',fontsize=16)\n#plt.title(title,fontsize=16)\n#plt.savefig('after_optimization.pdf',bbox_inches='tight')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
simpleblob/ml_algorithms_stepbystep
algo_example_NN_regularization.ipynb
mit
[ "import pandas as pd\nimport numpy as np\nimport time\nimport time\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport math\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (8, 6)\nplt.rcParams['font.size'] = 14\n\n%%html\n<style>\ntable {float:left}\n</style>\n\nfrom sklearn.datasets import load_digits\ndigits = load_digits(n_class=10)\nprint type(digits)", "(Continuing from here)\n\nRegularization\nSource: http://www.deeplearningbook.org/contents/regularization.html\nSource2: http://neuralnetworksanddeeplearning.com/chap3.html\nSource3: http://www.machinelearning.org/proceedings/icml2004/papers/354.pdf\nBefore moving on to more and more complex NN models, I think it's a good idea we tackle a fundamental problem of all models first -- overfitting.\nModel tends to overfit, the more data we throw at it, the more features and points it can use to mimic a pattern, regardless if that pattern is really signal or noise.\nExample of model trying too hard (blue line)\n<img src=\"https://upload.wikimedia.org/wikipedia/commons/6/68/Overfitted_Data.png\" style=\"width:400px;\" align=\"left\" />\n<div style=\"clear:both;\"></div>\n\nThere are many kind of promising approaches to solve this problem, the technical term for this is \"regularization\" -- meaning something like \"making a model more regular/general and not too specific\".\nSome regularization techniques\nThese are the ones we will go over today\n1. L1,L2 regularization - adding some term to the loss function to penalize overfitting\n2. Early stopping - stopping the model learning before it overfits too much\n3. Dropout - removing some NN nodes to make for a simpler model\nWe will be using this Loss function as a base to build on.\n$$ L_{base} = \\dfrac{1}{2m}\\sum_{j=1}^m \\bigl\\|\\;f(w,x_j) - y_j\\;\\bigr\\|^2 $$\n\nL2 Regularization\na.k.a. ridge regression, Tikhonov regularization\nThe most common type of regularization. 
we add a penalty to the loss function using the square of the weights. In effect we penalize weights with \"peak\" values, preferring all weights to have magnitudes closer to each other.\n$$ L = L_{base} + \\dfrac{\\lambda}{2m}\\sum w^2 $$\nwhere $\\lambda$ is a scaling variable called the \"regularization parameter\".\nDue to the loss function change, our gradient update will change as well, with an additional term:\n$$ w(t+1) = w(t) - \\eta \\cdot \\Delta w - \\bigl( \\dfrac{\\eta \\cdot \\lambda}{m} w(t) \\bigr)$$\nwe can simplify into\n$$ w(t+1) = \\bigl( 1 - \\dfrac{\\eta \\cdot \\lambda}{m} \\bigr) w(t) - \\eta \\cdot \\Delta w $$\nDo note that the gradient update for the bias component is still the same, unaffected by L2.\nL1 Regularization\nInstead of squaring the weights, we use the absolute value.\n$$ L = L_{base} + \\dfrac{\\lambda}{2m}\\sum \\left|w \\right| $$\n$$ w(t+1) = w(t) - \\dfrac{\\eta \\cdot \\lambda}{2m} \\,\\mathrm{sgn}\\bigl( w(t) \\bigr) - \\eta \\cdot \\Delta w $$\nIn practice, L2 almost always beats L1 in regularization performance. So we are going to code just L2 here.\n\nEarly stopping\nWe actually implemented this in the code already as a stopping criterion. For the below code, we will replace it with this check: if no improvement in 10 epochs, stop.\nDropout\nThis method is similar in concept to ensemble models (e.g. random forest) with NN.\nHere are the steps:\n1. Pick some nodes at random\n2. remove them from our NN \n3. train the models with a mini-batch of our data\n4. update the weights with our aggregate gradients\n5. repeat step 1\nSo effectively, we are training NNs with different sets of nodes/structures, and then aggregating the results.\n\nModel validation\nPreviously, we didn't separate the data into training and test data. 
But for this topic, we need that in order to see the overfitting problem and improvement.\nWe will split the data randomly into 60/40 train/test.", "#set size of input, features, hidden, target\ninstance_size = digits.images.shape[0]\nfeature_size = digits.images.shape[1]*digits.images.shape[2]\ntarget_size = 10\nhidden_size = 15\n\n#make a flat 10 output with all zeros\nY = np.zeros((instance_size,10))\nfor j in range(0,instance_size):\n Y[j][digits.target[j]] = 1\n\n#make a row of 64 input features instead of 8x8\nX = digits.images[0:instance_size].reshape(instance_size,feature_size)\nX = (X-8)/8 #normalized \n\n#split train and test dataset\ntrain_split = 0.6\ntrain_size = int(train_split*instance_size)\ntest_size = instance_size - train_size\nindex = np.random.permutation(instance_size)\ntrain_ix, test_ix = index[:train_size], index[train_size:]\nY_train , Y_test = Y[train_ix,:], Y[test_ix,:]\nX_train , X_test = X[train_ix,:], X[test_ix,:]\n\nXb = np.insert(X_train,0,1,axis=1) #add bias input, always activated\nXb_test = np.insert(X_test,0,1,axis=1) #add bias input, always activated", "NN v1 - Base program\nFirst, we will run the base-version of NN (no regularization)", "def sigmoid(w,X):\n a = 1.0/(1.0 + np.exp(-w.dot(X.transpose())))\n return a.transpose()\n\ndef loss_func(Y,y_pred,size):\n return (0.5/size)*np.sum((Y-y_pred)**2) #element-wise operation then aggregate\n\ndef initialize(target_size,feature_size,hidden_size):\n # for weights --> index = (output node , input node)\n w_hid = (np.random.rand(hidden_size,feature_size+1)-0.5) #randomized, and don't forget the bias!\n w_out = (np.random.rand(target_size,hidden_size+1)-0.5) #randomized, and don't forget the bias!\n\n #for f --> index = (data row , node) --- no need to initialize, just for documentation\n #f_hid = np.random.rand(train_size,hidden_size)\n #f_out = np.random.rand(train_size,target_size)\n\n #for deltas --> index = (data row , node) --- no need to initialize, just for documentation\n 
#delta_hid = np.random.rand(train_size,hidden_size)\n #delta_out = np.random.rand(train_size,target_size)\n \n return w_hid,w_out\n\ndef calc_forward(w_hid,w_out,Xb):\n f_hid = sigmoid(w_hid,Xb)\n f_hid_b = np.insert(f_hid,0,1,axis=1) #bias activation for next layer\n f_out = sigmoid(w_out,f_hid_b)\n return f_hid, f_hid_b, f_out\n\n\n#initialize stuff\nw_hid,w_out = initialize(target_size,feature_size,hidden_size)\nlearning_rate = 0.7/train_size\nlearning_rate_bias = 0.7/train_size\nloss, loss_test = [],[]\n\n#run configuration\nmax_epoch = 5000\nmin_loss_criterion = 0\n\n#doing 1st forward pass to calculate loss\nf_hid, f_hid_b, f_out = calc_forward(w_hid,w_out,Xb)\nloss.append(loss_func(Y_train,f_out,train_size))\nf_hid_test, f_hid_b_test, f_out_test = calc_forward(w_hid,w_out,Xb_test)\nloss_test.append(loss_func(Y_test,f_out_test,test_size))\n\nstart_time = time.clock()\n\nprint 'start_loss = {}'.format(loss_test[0])\n\nfor i in range(0,max_epoch):\n \n #update the weights of output layer\n delta_out = (f_out - Y_train)*(f_out)*(1-f_out) #element-wise operation \n wgrad_out = np.einsum('ki,kj->ij', delta_out, f_hid) #dot operation already sums it up \n w_out_bef = w_out.copy()\n w_out[:,1:] = w_out[:,1:] -learning_rate*(wgrad_out)\n w_out[:,0] = w_out[:,0] -learning_rate_bias*np.sum(delta_out,axis=0)*1.0\n\n #update the weights of hidden layer\n delta_hid = delta_out.dot(w_out_bef[:,1:])*(f_hid)*(1-f_hid) #dot then element-wise operation \n wgrad_hid = np.einsum('ki,kj->ij',delta_hid,Xb[:,1:])\n w_hid[:,1:] = w_hid[:,1:] -learning_rate*wgrad_hid\n w_hid[:,0] = w_hid[:,0] -learning_rate_bias*np.sum(delta_hid,axis=0)*1.0\n \n #re-calculate loss\n f_hid, f_hid_b, f_out = calc_forward(w_hid,w_out,Xb)\n loss.append(loss_func(Y_train,f_out,train_size))\n f_hid_test, f_hid_b_test, f_out_test = calc_forward(w_hid,w_out,Xb_test)\n loss_test.append(loss_func(Y_test,f_out_test,test_size))\n\n #stopping criterion\n if (i>20) and ((loss_test[-11] - loss_test[-1]) < 
min_loss_criterion): #10th previous loss\n print 'stop at {}'.format(i)\n break\n\nprint 'end_loss = {}'.format(loss_test[-1])\nprint 'run time = {:.2f}s'.format(time.clock()-start_time)\n\ndf_run = pd.DataFrame(data=np.array([loss,loss_test]).T,columns=['train','test'])\ndf_run.plot() \naxes = plt.gca()\naxes.set_ylim([0,0.6])\n", "The program finishes after the maximum 5,000 epochs (no early stop); the test set has a slightly higher loss, but it follows a stable, gradual downward curve.", "from sklearn.metrics import confusion_matrix\ndef show_confusion_matrix(cm_mat):\n accuracy = np.trace(cm_mat)*100.0/test_size\n print 'Test set Accuracy = {:.2f}%'.format(accuracy)\n df_temp = pd.DataFrame(cm_mat.flatten()[np.newaxis].T,columns = ['values'])\n plt.figure(figsize = (6,4),dpi=600)\n sns.heatmap(cm_mat.T, cbar=True ,annot=True, fmt=',.0f')\n plt.title('Confusion Matrix')\n plt.xlabel('Truth')\n plt.ylabel('Predicted')\n\n#get the prediction to compare with target\ny_pred = np.argmax(f_out_test,axis=1)\ncm_mat = confusion_matrix(digits.target[test_ix],y_pred)\nshow_confusion_matrix(cm_mat)", "The confusion matrix with the test dataset shows that the NN actually fits pretty well. 
Accuracy 95%+, similar to the training set.\n\nNN v2 - with L2 regularization\nNow let's try what we have learned: L2, early stop (already there), and dropout.\n+ for L2, we can just change the loss func\n+ for dropout, we can randomly zero out some of the f_hid outputs.", "def loss_func_L2(Y,y_pred,size,w_hid,w_out,scale_param):\n return loss_func(Y,y_pred,size) + 0.5*scale_param/size*(np.sum(w_hid**2)+np.sum(w_out**2)) \n\n#initialize stuff\nw_hid,w_out = initialize(target_size,feature_size,hidden_size)\nlearning_rate = 0.7/train_size\nlearning_rate_bias = 0.7/train_size\nL2_scale_param = 0.2\nloss, loss_test = [],[]\n\n#run configuration\nmax_epoch = 5000\nmin_loss_criterion = 0\n\n#doing 1st forward pass to calculate loss\nf_hid, f_hid_b, f_out = calc_forward(w_hid,w_out,Xb)\nloss.append(loss_func_L2(Y_train,f_out,train_size,w_hid,w_out,L2_scale_param))\nf_hid_test, f_hid_b_test, f_out_test = calc_forward(w_hid,w_out,Xb_test)\nloss_test.append(loss_func_L2(Y_test,f_out_test,test_size,w_hid,w_out,L2_scale_param))\n\nstart_time = time.clock()\n\nprint 'start_loss = {}'.format(loss_test[0])\n\nfor i in range(0,max_epoch):\n \n #update the weights of output layer\n delta_out = (f_out - Y_train)*(f_out)*(1-f_out) #element-wise operation \n wgrad_out = np.einsum('ki,kj->ij', delta_out, f_hid) #dot operation already sums it up \n w_out_bef = w_out.copy()\n w_out[:,1:] = (1-learning_rate*L2_scale_param/train_size)*w_out[:,1:] -learning_rate*(wgrad_out)\n w_out[:,0] = w_out[:,0] -learning_rate_bias*np.sum(delta_out,axis=0)*1.0\n\n #update the weights of hidden layer\n delta_hid = delta_out.dot(w_out_bef[:,1:])*(f_hid)*(1-f_hid) #dot then element-wise operation \n wgrad_hid = np.einsum('ki,kj->ij',delta_hid,Xb[:,1:])\n w_hid[:,1:] = (1-learning_rate*L2_scale_param/train_size)*w_hid[:,1:] -learning_rate*wgrad_hid\n w_hid[:,0] = w_hid[:,0] -learning_rate_bias*np.sum(delta_hid,axis=0)*1.0\n \n #re-calculate loss\n f_hid, f_hid_b, f_out = calc_forward(w_hid,w_out,Xb)\n 
loss.append(loss_func_L2(Y_train,f_out,train_size,w_hid,w_out,L2_scale_param))\n f_hid_test, f_hid_b_test, f_out_test = calc_forward(w_hid,w_out,Xb_test)\n loss_test.append(loss_func_L2(Y_test,f_out_test,test_size,w_hid,w_out,L2_scale_param))\n\n #stopping criterion\n if (i>20) and ((loss_test[-11] - loss_test[-1]) < min_loss_criterion): #10th previous loss\n print 'stop at {}'.format(i)\n break\n\nprint 'end_loss = {}'.format(loss_test[-1])\nprint 'run time = {:.2f}s'.format(time.clock()-start_time)\n\ndf_run = pd.DataFrame(data=np.array([loss,loss_test]).T,columns=['train','test'])\ndf_run.plot() \naxes = plt.gca()\naxes.set_ylim([0,0.4])", "It stops before 3,000 epochs -- mainly due to the added L2 terms.", "#get the prediction to compare with target\ny_pred = np.argmax(f_out_test,axis=1)\ncm_mat = confusion_matrix(digits.target[test_ix],y_pred)\nshow_confusion_matrix(cm_mat)", "The accuracy does drop a bit; however, the NN should be more robust on new data (this test set seems to fit well even for v1, indicating there might be no need for L2).\n\nNN v3 - L2 and dropout\nNow let's do it together with dropout.\nWe are implementing the version called \"inverted dropout\", which scales at training time.", "def calc_forward_withdropout(w_hid,w_out,Xb,dropout_p):\n #drop hidden layer only\n f_hid = sigmoid(w_hid,Xb)\n dropout_mask_hid = (np.random.rand(*f_hid.shape) < dropout_p) / dropout_p\n f_hid *= dropout_mask_hid\n f_hid_b = np.insert(f_hid,0,1,axis=1) #bias activation for next layer\n f_out = sigmoid(w_out,f_hid_b)\n return f_hid, f_hid_b, f_out\n\n#initialize stuff\nw_hid,w_out = initialize(target_size,feature_size,hidden_size)\nlearning_rate = 0.7/train_size\nlearning_rate_bias = 0.7/train_size\nL2_scale_param = 0.2\nloss, loss_test = [],[]\n\n#add dropout parameter\ndropout_p = 0.8 #chance of neuron not getting dropped.\n\n#run configuration\nmax_epoch = 5000\nmin_loss_criterion = 0\n\n#doing 1st forward pass to calculate loss\nf_hid, f_hid_b, f_out = 
calc_forward(w_hid,w_out,Xb)\nloss.append(loss_func_L2(Y_train,f_out,train_size,w_hid,w_out,L2_scale_param))\nf_hid_test, f_hid_b_test, f_out_test = calc_forward(w_hid,w_out,Xb_test)\nloss_test.append(loss_func_L2(Y_test,f_out_test,test_size,w_hid,w_out,L2_scale_param))\n\nstart_time = time.clock()\n\nprint 'start_loss = {}'.format(loss_test[0])\n\nfor i in range(0,max_epoch):\n \n #update the weights of output layer\n delta_out = (f_out - Y_train)*(f_out)*(1-f_out) #element-wise operation \n wgrad_out = np.einsum('ki,kj->ij', delta_out, f_hid) #dot operation already sums it up \n w_out_bef = w_out.copy()\n w_out[:,1:] = (1-learning_rate*L2_scale_param/train_size)*w_out[:,1:] -learning_rate*(wgrad_out)\n w_out[:,0] = w_out[:,0] -learning_rate_bias*np.sum(delta_out,axis=0)*1.0\n\n #update the weights of hidden layer\n delta_hid = delta_out.dot(w_out_bef[:,1:])*(f_hid)*(1-f_hid) #dot then element-wise operation \n wgrad_hid = np.einsum('ki,kj->ij',delta_hid,Xb[:,1:])\n w_hid[:,1:] = (1-learning_rate*L2_scale_param/train_size)*w_hid[:,1:] -learning_rate*wgrad_hid\n w_hid[:,0] = w_hid[:,0] -learning_rate_bias*np.sum(delta_hid,axis=0)*1.0\n \n #re-calculate loss\n f_hid, f_hid_b, f_out = calc_forward_withdropout(w_hid,w_out,Xb,dropout_p)\n loss.append(loss_func_L2(Y_train,f_out,train_size,w_hid,w_out,L2_scale_param))\n f_hid_test, f_hid_b_test, f_out_test = calc_forward(w_hid,w_out,Xb_test)\n loss_test.append(loss_func_L2(Y_test,f_out_test,test_size,w_hid,w_out,L2_scale_param))\n\n #stopping criterion\n if (i>20) and ((loss_test[-11] - loss_test[-1]) < min_loss_criterion): #10th previous loss\n print 'stop at {}'.format(i)\n break\n\nprint 'end_loss = {}'.format(loss_test[-1])\nprint 'run time = {:.2f}s'.format(time.clock()-start_time)\n\ndf_run = pd.DataFrame(data=np.array([loss,loss_test]).T,columns=['train','test'])\ndf_run.plot() \naxes = plt.gca()\naxes.set_ylim([0,0.6])", "There is a lot of \"jiggling\" in the training-set loss, due to the random 
dropout. However, the loss curve of the test dataset is still a smooth downtrend.", "#get the prediction to compare with target\ny_pred = np.argmax(f_out_test,axis=1)\ncm_mat = confusion_matrix(digits.target[test_ix],y_pred)\nshow_confusion_matrix(cm_mat)", "This is an interesting result. My interpretation is that our model is not \"complex\" enough due to the neuron dropout; instead we now have an underfitting rather than an overfitting problem.\n\"one\" and \"eight\" are really confusing for our NN.\n\nNN v4 - our tweaked model\nLet's try tuning the parameters and optimizing them for minimum loss cost on the test dataset.", "#change number of hidden neurons\nhidden_size = 30\n\n#initialize stuff\nw_hid,w_out = initialize(target_size,feature_size,hidden_size)\nlearning_rate = 0.7/train_size\nlearning_rate_bias = 0.7/train_size\nL2_scale_param = 0.05\nloss, loss_test = [],[]\n\n#add dropout parameter\ndropout_p = 0.80 #chance of neuron not getting dropped.\n\n#run configuration\nmax_epoch = 10000\nmin_loss_criterion = -10**-2\n\n#doing 1st forward pass to calculate loss\nf_hid, f_hid_b, f_out = calc_forward(w_hid,w_out,Xb)\nloss.append(loss_func_L2(Y_train,f_out,train_size,w_hid,w_out,L2_scale_param))\nf_hid_test, f_hid_b_test, f_out_test = calc_forward(w_hid,w_out,Xb_test)\nloss_test.append(loss_func_L2(Y_test,f_out_test,test_size,w_hid,w_out,L2_scale_param))\n\nstart_time = time.clock()\n\nprint 'start_loss = {}'.format(loss_test[0])\n\nfor i in range(0,max_epoch):\n \n #update the weights of output layer\n delta_out = (f_out - Y_train)*(f_out)*(1-f_out) #element-wise operation \n wgrad_out = np.einsum('ki,kj->ij', delta_out, f_hid) #dot operation already sums it up \n w_out_bef = w_out.copy()\n w_out[:,1:] = (1-learning_rate*L2_scale_param/train_size)*w_out[:,1:] -learning_rate*(wgrad_out)\n w_out[:,0] = w_out[:,0] -learning_rate_bias*np.sum(delta_out,axis=0)*1.0\n\n #update the weights of hidden layer\n delta_hid = delta_out.dot(w_out_bef[:,1:])*(f_hid)*(1-f_hid) #dot then 
element-wise operation \n wgrad_hid = np.einsum('ki,kj->ij',delta_hid,Xb[:,1:])\n w_hid[:,1:] = (1-learning_rate*L2_scale_param/train_size)*w_hid[:,1:] -learning_rate*wgrad_hid\n w_hid[:,0] = w_hid[:,0] -learning_rate_bias*np.sum(delta_hid,axis=0)*1.0\n \n #re-calculate loss\n f_hid, f_hid_b, f_out = calc_forward_withdropout(w_hid,w_out,Xb,dropout_p)\n loss.append(loss_func_L2(Y_train,f_out,train_size,w_hid,w_out,L2_scale_param))\n f_hid_test, f_hid_b_test, f_out_test = calc_forward(w_hid,w_out,Xb_test)\n loss_test.append(loss_func_L2(Y_test,f_out_test,test_size,w_hid,w_out,L2_scale_param))\n\n #stopping criterion\n if (i>20) and ((loss_test[-11] - loss_test[-1]) < min_loss_criterion): #10th previous loss\n print 'stop at {}'.format(i)\n break\n\nprint 'end_loss = {}'.format(loss_test[-1])\n\ndf_run = pd.DataFrame(data=np.array([loss,loss_test]).T,columns=['train','test'])\ndf_run.plot() \naxes = plt.gca()\naxes.set_ylim([0,0.6])\n\n#get the prediction to compare with target\ny_pred = np.argmax(f_out_test,axis=1)\ncm_mat = confusion_matrix(digits.target[test_ix],y_pred)\nshow_confusion_matrix(cm_mat)", "Conclusion\nIt seems we are coming up against the limit of our simple 2-layer model. We doubled the epochs to 10,000 and it still reaches at most around 96.6% accuracy (a 3.4% error rate) on our small test dataset.\nHere are the historical best results from the official MNIST page, Neural Nets performance.\n| Classifier | Pre-processing | Test error rate (%) | Reference |\n|:------------------------------------------------------------------------: |:-------------------------------: |:---------------: |:--------------------------------------------------------------------: |\n| 2-layer NN, 300 hidden units, mean square error | none | 4.7 | LeCun et al. 1998 |\n| 2-layer NN, 300 HU, MSE, [distortions] | none | 3.6 | LeCun et al. 1998 |\n| 2-layer NN, 300 HU | deskewing | 1.6 | LeCun et al. 1998 |\n| 3-layer NN, 300+100 hidden units | none | 3.05 | LeCun et al. 
1998 |\n| 3-layer NN, 500+150 HU [distortions] | none | 2.45 | LeCun et al. 1998 |\n| 3-layer NN, 500+300 HU, softmax, cross entropy, weight decay | none | 1.53 | Hinton, unpublished, 2005 |\n| 2-layer NN, 800 HU, Cross-Entropy Loss | none | 1.6 | Simard et al., ICDAR 2003 |\n| 2-layer NN, 800 HU, cross-entropy [elastic distortions] | none | 0.7 | Simard et al., ICDAR 2003 |\n| 6-layer NN 784-2500-2000-1500-1000-500-10 (on GPU) [elastic distortions] | none | 0.35 | Ciresan et al. Neural Computation 10, 2010 and arXiv 1003.0358, 2010 |\n| committee of 25 NN 784-800-10 [elastic distortions] | width normalization, deslanting | 0.39 | Meier et al. ICDAR 2011 |\n| deep convex net, unsup pre-training [no distortions] | none | 0.83 | Deng et al. Interspeech 2010 |\n| Convolutional net LeNet-1 | subsampling to 16x16 pixels | 1.7 | LeCun et al. 1998 |\n| Convolutional net LeNet-4 | none | 1.1 | LeCun et al. 1998 |\n| Convolutional net LeNet-5, [no distortions] | none | 0.95 | LeCun et al. 1998 |\n| Convolutional net LeNet-5, [distortions] | none | 0.8 | LeCun et al. 1998 |\n| Convolutional net Boosted LeNet-4, [distortions] | none | 0.7 | LeCun et al. 1998 |\n| Trainable feature extractor + SVMs [no distortions] | none | 0.83 | Lauer et al., Pattern Recognition 40-6, 2007 |\n| Trainable feature extractor + SVMs [elastic distortions] | none | 0.56 | Lauer et al., Pattern Recognition 40-6, 2007 |\n| Trainable feature extractor + SVMs [affine distortions] | none | 0.54 | Lauer et al., Pattern Recognition 40-6, 2007 |\n| unsupervised sparse features + SVM, [no distortions] | none | 0.59 | Labusch et al., IEEE TNN 2008 |\n| Convolutional net, cross-entropy [elastic distortions] | none | 0.4 | Simard et al., ICDAR 2003 |\n| large conv. net, random features [no distortions] | none | 0.89 | Ranzato et al., CVPR 2007 |\n| large conv. net, unsup features [no distortions] | none | 0.62 | Ranzato et al., CVPR 2007 |\n| large conv. 
net, unsup pretraining [elastic distortions] | none | 0.39 | Ranzato et al., NIPS 2006 |\n| large conv. net, unsup pretraining [no distortions] | none | 0.53 | Jarrett et al., ICCV 2009 |\n| large/deep conv. net, 1-20-40-60-80-100-120-120-10 [elastic distortions] | none | 0.35 | Ciresan et al. IJCAI 2011 |\n| committee of 7 conv. net, 1-20-P-40-P-150-10 [elastic distortions] | width normalization | 0.27 +-0.02 | Ciresan et al. ICDAR 2011 |\n| committee of 35 conv. net, 1-20-P-40-P-150-10 [elastic distortions] | width normalization | 0.23 | Ciresan et al. CVPR 2012 |" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
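The "inverted dropout" used in the notebook above (`calc_forward_withdropout`) can be sketched in isolation. This is an illustrative NumPy sketch, not the notebook's own code: the function name `inverted_dropout` and the seeded generator are assumptions, but the mask-and-rescale logic mirrors the notebook's.

```python
import numpy as np

def inverted_dropout(activations, keep_prob, rng):
    # Keep each activation with probability keep_prob, zero it otherwise,
    # and rescale survivors by 1/keep_prob so the expected activation is
    # unchanged -- the test-time forward pass then needs no extra scaling.
    mask = (rng.random(activations.shape) < keep_prob) / keep_prob
    return activations * mask

rng = np.random.default_rng(0)
acts = np.ones((1000, 50))   # stand-in for the hidden activations f_hid
dropped = inverted_dropout(acts, 0.8, rng)
```

Because of the 1/keep_prob rescaling, the mean of `dropped` stays close to the mean of `acts`, which is why the notebook can evaluate the test set with the plain `calc_forward` while training with the dropout version.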
google/applied-machine-learning-intensive
content/02_data/01_introduction_to_pandas/colab.ipynb
apache-2.0
[ "Copyright 2020 Google LLC.", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Introduction to Pandas\nPandas is an open-source library for data analysis and manipulation. It is a go-to toolkit for data scientists and is used extensively in this course.\nPandas integrates seamlessly with other Python libraries such as NumPy and Matplotlib for numeric processing and visualizations.\nWhen using Pandas, we will primarily interact with DataFrames and Series, which we will introduce in this lab.\nImporting Pandas\nIn order to use Pandas, you must import it. This is as simple as:\npython\nimport pandas\nHowever, you'll rarely see Pandas imported this way. By convention programmers rename Pandas to pd. This isn't a requirement, but it is a pattern that you'll see repeated often.\nTo import Pandas in the conventional manner run the code block below.", "import pandas as pd\n\npd.__version__", "After importing Pandas as pd we can use pandas by calling methods provided by pd. In the code block above we printed the Pandas version.\nPandas went 1.0.0 on January 29, 2020. The interface should stay relatively stable until a 2.0.0 release is declared sometime in the future. If you ever have a problem where a Pandas function isn't acting the way you think it should, be sure to check out which version you are using and find the documentation for that specific version.\nPandas Series\nA Series represents a sequential list of data. 
It is a foundational building block of the powerful DataFrame that we'll cover later in this lab.\nCreating a Series\nWe create a new Series object as we would any Python object:\npython\ns = pd.Series()\nThis creates a new, empty Series object, which isn't very interesting. You can create a series object with data by passing it a list or tuple:", "temperatures = [55, 63, 72, 65, 63, 75, 67, 59, 82, 54]\n\nseries = pd.Series(temperatures)\n\nprint(type(series))\nprint(series)", "Here we created a new pandas.core.series.Series object with ten values presumably representing some temperature measurement.\nAnalyzing a Series\nYou can ask the series to compute information about itself. The describe() method provides statistics about the series.", "series.describe()", "You can also find other information about a Series such as if its values are all unique:", "series.is_unique", "Or if it is monotonically increasing or decreasing:", "print(series.is_monotonic)", "Exercise 1: Standard Deviation\nCreate a series using the list of values provided below. Then, using a function in the Series class, find the standard deviation of the values in that series and store it in the variable std_dev.\nStudent Solution", "import pandas as pd\n\nweights = (120, 143, 98, 280, 175, 205, 210, 115, 122, 175, 201)\n\nseries = None # Create a series and assign it here.\n\nstd_dev = None # Find the standard deviation of the series and assign it here.\n\nprint(std_dev)", "Accessing Values\nLet's take another look at the first series that we created in this lab:", "temperatures = [55, 63, 72, 65, 63, 75, 67, 59, 82, 54]\n\nseries = pd.Series(temperatures)\n\nprint(type(series))\nprint(series)", "We can see the values printed down the right-side column. But what are those numbers along the left?\nThey are indices.\nYou are probably thinking that Series objects feel a whole lot like lists, tuples, and NumPy arrays. 
If so, you are correct.\nThey are very similar to these other sequential data structures, and individual items in a series can be accessed by index as expected.", "series[4]", "You can also loop over the values in a Series.", "for temp in series:\n print(temp)", "Modifying Values\nSeries are mutable, so you can modify individual values.", "temperatures = [55, 63, 72, 65, 63, 75, 67, 59, 82, 54]\nseries = pd.Series(temperatures)\n\nprint(series[1])\n\nseries[1] = 65\n\nprint(series[1])", "You can also modify all of the elements in a series using standard Python expressions. For instance, if we wanted to add 1 to every item in a series, we can just do:", "series + 1", "Note that this doesn't actually change the Series though. To do that we need to assign the computation back to our original series.\nOperations other than addition can also be applied. You can add, subtract, multiply, divide, and more with a simple Python expression.", "series = series + 1", "You can remove values from the series by index using pop:", "temperatures = [55, 63, 72, 65, 63, 75, 67, 59, 82, 54]\nseries = pd.Series(temperatures)\n\nprint(series)\n\nseries.pop(4)\n\nprint(series)", "Notice that when we print the series out a second time, the index with value 4 is missing. After we pop the value out, the index is no longer valid to access!", "try:\n print(series[4])\nexcept:\n print('Unable to print the value at index 4')", "In order to get the indices back into a smooth sequential order, we can call the reset_index function. We pass the argument drop=True to tell Pandas not to save the old index as a new column. We pass the argument inplace=True to tell Pandas to modify the series directly instead of making a copy.", "series.reset_index(drop=True, inplace=True)\nseries", "This is very different from what we would expect from a normal Python list! 
While it is possible to use pop on a list, the indices will automatically reset.", "temperatures = [55, 63, 72, 65, 63, 75, 67, 59, 82, 54]\n\nprint(temperatures)\n\ntemperatures.pop(4)\n\nprint(temperatures[4])", "You can also add values to a Series by appending another Series to it. We pass the argument ignore_index=True to tell Pandas to append the values with new indices, rather than copying over the old indices of the appended values. In this case, that means the new values (66 and 74) get the indices 10 and 11, rather than 0 and 1:", "temperatures = [55, 63, 72, 65, 63, 75, 67, 59, 82, 54]\nseries = pd.Series(temperatures)\n\nprint(series)\n\nnew_series = pd.Series([66, 74])\nseries = series.append(new_series, ignore_index=True)\n\nprint(series)", "Exercise 2: Sorting a Series\nFind the correct method in the Series documentation to sort the values in series in ascending order. Be sure the indices are also sorted and that the new sorted series is stored in the series variable.\nStudent Solution", "temperatures = [55, 63, 72, 65, 63, 75, 67, 59, 82, 54]\nseries = pd.Series(temperatures)\n\n# Your code goes here.\n\nprint(series.sort_values())", "Pandas DataFrame\nNow that we have a basic understanding of Series, let's dive into the DataFrame. If you picture Series as a list of data, you can think of DataFrame as a table of data.\nA DataFrame consists of one or more Series presented in a tabular format. Each Series in the DataFrame is a column.\nCreating a DataFrame\nWe can create an empty DataFrame using the DataFrame class in Pandas:\npython\ndf = pd.DataFrame()\nBut an empty DataFrame isn't particularly exciting. 
Instead, let's create a DataFrame using a few series.\nIn the code block below you'll see that we have three series:\n\nCities\nPopulations of those cities\nNumber of airports in those cities", "city_names = pd.Series([\n 'Atlanta', \n 'Austin', \n 'Kansas City',\n 'New York City', \n 'Portland', \n 'San Francisco', \n 'Seattle',\n])\n\npopulation = pd.Series([\n 498044,\n 964254,\n 491918,\n 8398748,\n 653115,\n 883305, \n 744955,\n])\n\nnum_airports = pd.Series([\n 2,\n 2,\n 8,\n 3,\n 1,\n 3,\n 2,\n])\n\nprint(city_names, population, num_airports)", "We can now combine these series into a DataFrame, using a dictionary with keys as the column names and values as the series:", "df = pd.DataFrame({\n 'City Name': city_names,\n 'Population': population, \n 'Airports': num_airports,\n})\n\nprint(df)", "The data is now displayed in a tabular format. We can see that there are three columns: City Name, Population, and Airports. There are seven rows, each row representing the data for a single city.\nIn the block above we used the print function to display the DataFrame, which printed out the data in a plain text form. Colab and other notebook environments can \"pretty print\" DataFrames if you make it the last part of a code block and don't wrap the variable in a print statement. Run the code block below to see this in action.", "df = pd.DataFrame({\n 'City Name': city_names,\n 'Population': population, \n 'Airports': num_airports,\n})\n\ndf", "That's much easier on the eyes! The rows are colored in an alternating background color scheme, which makes long rows of data easier to view.\nAnalyzing a DataFrame\nSimilar to a Series, you can ask the DataFrame to compute information about itself. The describe() method provides statistics about the DataFrame.", "df.describe()", "These are the same statistics that we got when we called describe on a Series above. 
As you work with Pandas, you'll find that many of the methods that operate on Series also work with DataFrame objects.\nDid you notice something missing in the output from describe though?\nWe have three columns in our DataFrame, but only two columns have statistics printed for them. This is because describe only works with numeric Series by default, and the 'City Name' column is a string.\nTo show all columns add an include='all' argument to describe:", "df.describe(include='all')", "We now get a few more metrics specific to string columns: unique, top, and freq. We can now also see the 'City Name' column.\nIf we want to look at the data we could print the entire DataFrame, but that doesn't scale well for really large DataFrames. The head method is a way to just look at the first few rows of a DataFrame.", "df.head()", "Conversely, the tail method returns the last few rows of a data frame.", "df.tail()", "You can also choose the number of rows you want to print as part of head and tail.", "df.head(12)", "These are useful ways of taking a look at actual data, but they can have some inherent bias in them. If the data is sorted by any column values, head or tail might show a skewed view of the data.\nOne way to combat this is to always look at both the head and tail of your data. Another way is to randomly sample your data and look at the sample. This will reduce the chance that you are seeing a lopsided view of your data.\nWe can also visualize the data in a DataFrame. The hist command will make a histogram of each of the numerical columns. 
As you will see, some of these histograms are more informative than others.", "_ = df.hist()", "What Information Might We Gain From These Histograms?\nIn the airports histogram, we can see that there is one outlier (Kansas City), and all other cities have roughly two airports.\nIn the population histogram, we can see that there is also one outlier (New York City), which has an order of magnitude more population, such that all other populations are very close to zero in comparison. We also see here how the axis can get very messy.\nExercise 3: Sampling Data\nFind a method in the DataFrame documentation that returns a random sample of your DataFrame. Call that method and make it return five rows of data.\nStudent Solution", "city_names = pd.Series(['Atlanta', 'Austin', 'Kansas City', 'New York City', \n 'Portland', 'San Francisco', 'Seattle'])\npopulation = pd.Series([498044, 964254, 491918, 8398748, 653115, 883305, 744955])\nnum_airports = pd.Series([2, 2, 8, 3, 1, 3, 2])\n\ndf = pd.DataFrame({\n 'City Name': city_names,\n 'Population': population, \n 'Airports': num_airports,\n})\n\n# Your Code Goes Here", "Accessing Values\nWe saw that individual values in a Series can be accessed using indexing similar to that seen in standard Python lists and tuples. Accessing values in DataFrame objects is a little more involved.\nAccessing Columns\nTo access an entire column of data you can index the DataFrame by column name. 
For instance, to return the entire City Name column as a Series you can run the code below:", "df['City Name']", "But what if you want a DataFrame instead of a Series?\nIn this case, you index the DataFrame using a list, where the list contains the name of the column that you want returned as a DataFrame:", "df[['City Name']]", "Similarly, you can return more than one column in the resultant DataFrame:", "df[['City Name', 'Population']]", "Sometimes you might also see columns of data referenced using the dot notation:", "df.Population", "This is a neat trick, but it is problematic for a couple of reasons:\n\nYou can only get a Series back.\nIt is impossible to reference columns with spaces in the names with this notation (ex. 'City Name').\nIt is confusing if a column has the same name as an inbuilt method of a DataFrame, such as size.\n\nWe mention this notation because you'll likely see it. However, we don't advise using it.\nAccessing Rows\nIn order to access rows of data, you can't use standard indexing. It would seem natural to index using a numeric row value, but as you can see in the example below, this yields a KeyError.", "try:\n df[1]\nexcept KeyError:\n print('Got KeyError')", "This is because the default indexing is to look for column names, and numbers are valid column names. If you had a column named 1 in a DataFrame with at least two rows, Pandas wouldn't know if you wanted row 1 or column 1.\nIn order to index by row, you must use the iloc feature of the DataFrame object.", "df.iloc[1]", "The code above returns the second row of data in the DataFrame as a Series.\nYou can also return multiple rows using slices:", "df.iloc[1:3]", "As an aside, if you do use a range, then iloc is optional since columns can't be referenced in a range, and the default selector can disambiguate what you are doing. 
This can be a little confusing, though, so try to avoid it.", "df[1:3]", "If you want sparse rows that don't fall into an easily defined range, you can pass iloc a list of rows that you would like returned:", "df.iloc[[1, 3]]", "Exercise 4: Single Row as a DataFrame\nGiven the methods of accessing rows in a DataFrame that we have learned so far, how would you access the third row in the df DataFrame defined below as a DataFrame itself (as opposed to as a Series)?\nStudent Solution", "city_names = pd.Series(['Atlanta', 'Austin', 'Kansas City', 'New York City', \n 'Portland', 'San Francisco', 'Seattle'])\npopulation = pd.Series([498044, 964254, 491918, 8398748, 653115, 883305, 744955])\nnum_airports = pd.Series([2, 2, 8, 3, 1, 3, 2])\n\ndf = pd.DataFrame({\n \n 'City Name': city_names,\n 'Population': population, \n 'Airports': num_airports,\n})\n\n# Your Code Goes Here", "Accessing Row/Column Intersections\nWe've learned how to access columns by direct indexing on the DataFrame. We've learned how to access rows by using iloc. 
You can combine these two access methods using the loc functionality of the DataFrame object.\nSimply call loc and pass it two arguments:\n\nThe row(s) you want to access\nThe column(s) you want to access\n\nIn the example below we access the 'City Name' in the third row of the DataFrame:", "city_names = pd.Series(['Atlanta', 'Austin', 'Kansas City', 'New York City', \n 'Portland', 'San Francisco', 'Seattle'])\npopulation = pd.Series([498044, 964254, 491918, 8398748, 653115, 883305, 744955])\nnum_airports = pd.Series([2, 2, 8, 3, 1, 3, 2])\n\ndf = pd.DataFrame({\n 'City Name': city_names,\n 'Population': population, \n 'Airports': num_airports,\n})\n\ndf.loc[2, 'City Name']", "In the example below we access the 'City Name' and 'Airports' columns in the third and fourth rows of the DataFrame:", "city_names = pd.Series(['Atlanta', 'Austin', 'Kansas City', 'New York City', \n 'Portland', 'San Francisco', 'Seattle'])\npopulation = pd.Series([498044, 964254, 491918, 8398748, 653115, 883305, 744955])\nnum_airports = pd.Series([2, 2, 8, 3, 1, 3, 2])\n\ndf = pd.DataFrame({\n 'City Name': city_names,\n 'Population': population, \n 'Airports': num_airports,\n})\n\ndf.loc[[2,3], ['City Name', 'Airports']]", "We will learn more about loc in the next section. Specifically, we will come to understand how using loc enables us to access a DataFrame directly in order to modify it.\nModifying Values\nThere are many ways to modify values in a DataFrame. We'll look at a few of the more straightforward ways in this section.\nModifying Individual Values\nThe easiest way to modify a single value in a DataFrame is to directly index it on the left-hand side of an expression.\nLet's say the Seattle area got a new commercial airport called Paine Field. 
If we want to increment the number of airports for Seattle, we could access the Seattle airport count directly and modify it:", "city_names = pd.Series(['Atlanta', 'Austin', 'Kansas City', 'New York City', \n 'Portland', 'San Francisco', 'Seattle'])\npopulation = pd.Series([498044, 964254, 491918, 8398748, 653115, 883305, 744955])\nnum_airports = pd.Series([2, 2, 8, 3, 1, 3, 2])\n\ndf = pd.DataFrame({\n 'City Name': city_names,\n 'Population': population, \n 'Airports': num_airports,\n})\n\ndf.loc[6, 'Airports'] = 3\ndf", "Modifying an Entire Column\nModifying a single value is a great skill to have, especially when working with small numbers of outliers. However, you'll often want to work with larger swaths of data.\nWhen would you want to do this?\nConsider the 'Population' column that we have been working with in this lab. It is integer-valued, however in some cases it might be better to work with the \"thousands\" value. For this we can do column-level modifications.\nIn the example below we simply divide the population by 1,000.", "city_names = pd.Series(['Atlanta', 'Austin', 'Kansas City', 'New York City', \n 'Portland', 'San Francisco', 'Seattle'])\npopulation = pd.Series([498044, 964254, 491918, 8398748, 653115, 883305, 744955])\nnum_airports = pd.Series([2, 2, 8, 3, 1, 3, 2])\n\ndf = pd.DataFrame({\n 'City Name': city_names,\n 'Population': population, \n 'Airports': num_airports,\n})\n\ndf['Population'] /= 1000\ndf", "Instead of overwriting the existing column, you may instead want to create a new column. 
This can be done by assigning to a new column name:", "city_names = pd.Series(['Atlanta', 'Austin', 'Kansas City', 'New York City', \n 'Portland', 'San Francisco', 'Seattle'])\npopulation = pd.Series([498044, 964254, 491918, 8398748, 653115, 883305, 744955])\nnum_airports = pd.Series([2, 2, 8, 3, 1, 3, 2])\n\ndf = pd.DataFrame({\n 'City Name': city_names,\n 'Population': population, \n 'Airports': num_airports,\n})\n\ndf['Population_M'] = df['Population'] / 1000\ndf", "Fetching Data\nSo far we have created the data that we have worked with from scratch. In reality, we'll load our data from a file system, the internet, a database, or one of many other sources.\nThroughout this course, we'll load data in many ways. Let's start by loading the data from the internet directly.\nFor this, we'll use the Pandas method read_csv. This method can read comma-separated data from a URL. See an example below:", "url = \"https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv\"\ncalifornia_housing_dataframe = pd.read_csv(url)\ncalifornia_housing_dataframe", "We now have a DataFrame full of data about housing prices in California. This is a classic dataset that we'll look at more closely in future labs. For now, we'll load it in and try to get an understanding of the data.\nExercise 5: Exploring Data\nIn this exercise we will write code to explore the California housing dataset mentioned earlier in this lab. 
As seen previously, we can load the data using the following code:", "url = \"https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv\"\ncalifornia_housing_df = pd.read_csv(url)\ncalifornia_housing_df", "Question 1: Histograms\nThis question will have two parts: one coding and one data analysis.\nQuestion 1.1: Display Histograms\nWrite the code to display histograms for all numeric columns in the california_housing_df object.\nStudent Solution", "# Your Code Goes Here", "Question 1.2: Histogram Analysis\nTwo of the histograms have two strong peaks rather than one. Which columns are these? What do you think this tells us about the data?\nStudent Solution\nWhat are the names of the two columns with two strong peaks each?\n1. Write the first column name here\n1. Write the second column name here\nWhat insights do you gather from the columns with dual peaks?:\n* Write your answer here\n\nQuestion 2: Ordering\nDoes there seem to be any obvious ordering to the data? If so, what is the ordering? Show the code that you used to determine your answer.\nStudent Solution\nIs there any ordering?\n* (Yes/No)\nIf there was ordering, what columns were sorted and in what order (ascending/descending)?:\n* Write your answer here\nWhat code did you use to determine the answer?", "# Your code goes here", "Exercise 6: Creating a New Column\nCreate a new column in california_housing_df called persons_per_bedroom that is the ratio of population to total_bedrooms.\nStudent Solution", "url = \"https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv\"\ncalifornia_housing_df = pd.read_csv(url)\n\n# Your Code Goes Here", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Microno95/DESolver
docs/examples/pyaudi/Example 2 - PyAudi - Differential Intelligence - Vectorised Duals.ipynb
mit
[ "Differential Intelligence - Vectorised Duals\n(original by Dario Izzo - extended by Ekin Ozturk)\nIn this notebook we show the use of desolver and vectorised gduals for the numerical integration of multiple initial conditions following the notebook example here differential intelligence.\nImporting Stuff", "%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nimport os\nimport numpy as np\n\nos.environ['DES_BACKEND'] = 'numpy'\nimport desolver as de\nimport desolver.backend as D\n\nD.set_float_fmt('gdual_vdouble')", "Controller representation and “simulator”\nTake as an example the task of learning a robotic controller. In neuro evolution (or Evolutionary Robotics), the controller is typically represented by a neural network, but for the purpose of explaining this new learning concept we will use a polynomial representation for the controller. Later, changing the controller into an NN with weights as parameters will not change the essence of what is done here.", "# Definition of the controller in terms of some weights parameters\ndef u(state, weights):\n x,v = state\n a,b,c,e,f,g = weights\n return a + b*x + c*v + e*x*v + f*x**2 + g*v**2\n\n# Definition of the equation of motion (our physics simulator propagating the system to its next state)\ndef eom(state, weights):\n x,v = state\n dx = v\n dv = u(state, weights)\n return (dx, dv)", "Numerical Integration - Runge-Kutta 45 Cash-Karp Method\nIn Evolutionary Robotics, Euler propagators are commonly used, but we would like to use a higher order integration scheme that is adaptive in order to minimise computation, and increase the accuracy and precision of the results.\nWe are using gdual_vdouble in order to integrate multiple initial states simultaneously without any significant loss of computation time due to the very efficient vectorisation of gdual_vdouble computations.", "num_different_integrations = 16\nweights = D.array([D.gdual_vdouble([0.2*(np.random.uniform()-0.5)]*num_different_integrations, _, 4) 
for _ in \"abcefg\"])\nx = [D.gdual_vdouble([2.*(np.random.uniform()-0.5) for i in range(num_different_integrations)])]\nv = [D.gdual_vdouble([2.*(np.random.uniform()-0.5) for i in range(num_different_integrations)])]\ny0 = D.array(x + v, dtype=D.gdual_vdouble)\n\ndef rhs(t, state, weights, **kwargs):\n return D.array(eom(state, weights))", "We integrate the system using the Runge-Kutta-Cash-Karp scheme as the numerical integration system with a dense output computed using a piecewise C1 Hermite interpolating spline. \nThis particular interpolator is used as it satisfies not only the state boundary conditions, but also the gradients and is well suited for approximating the solution continuously up to first order in time.\nNote that the entire integration is done using gduals and thus the solution and the interpolating spline stored in the OdeSystem instance, pyaudi_integration, contains all the information about how the state reached by our robot changes when we change the control weights.", "# We restrict the integration time due to the fact that the controller \n# is quadratic in the state and causes the state to grow very rapidly.\n\npyaudi_integration = de.OdeSystem(rhs, y0=y0, dense_output=True, t=(0, 3.), dt=0.1, rtol=1e-12, atol=1e-12, constants=dict(weights=weights))\n\npyaudi_integration.set_method(\"RK45\")\npyaudi_integration.integrate(eta=True)\n\nx,v = pyaudi_integration.sol(D.linspace(0, 1, 20)).T", "We numerically integrate 16 initial states and see the paths they follow based on the controller we have defined.", "for _x, _v in zip(D.to_float(x).T, D.to_float(v).T):\n plt.plot(_x, _v, 'k')\nplt.plot(x[0].constant_cf, v[0].constant_cf, 'ro')\nplt.show()\n\nxf, vf = x[-1], v[-1]\n\nprint(\"initial xf: {}\".format(xf.constant_cf))\nprint(\"initial vf: {}\".format(vf.constant_cf))", "Studying the effects of the weights on the behavior\nWe have represented all the robot behavior (x, v) as a polynomial function of the weights. 
So we now know what happens to the behaviour if we change the weights!! Lets see … we only consider the final state, but the same can be done for all states before.\nFurthermore, we can compute this for all the different initial states that we integrated thus finding out how the final state of the robot changes for multiple initial conditions.", "dweights = dict({'da': -0.002, 'db': 0.003, 'dc': -0.02, 'de': 0.03, 'df': 0.02, 'dg': -0.01})\n#Lets predict the new final position of our 'robot' if we change his controller as defined above\nprint(\"new xf: {}\".format(xf.evaluate(dweights)))\nprint(\"new vf: {}\".format(vf.evaluate(dweights)))", "Check that we learned the correct map\nWe now simulate again our behavior using the new weights to see where we end up to check if the prediction made after our differential learning is correct.", "new_weights = D.array([it + dweights['d' + it.symbol_set[0]] for it in weights])\n\npyaudi_integration2 = de.OdeSystem(rhs, y0=y0, dense_output=True, t=(pyaudi_integration.t[0], pyaudi_integration.t[-1]), dt=0.1, rtol=1e-12, atol=1e-12, constants=dict(weights=new_weights))\n\npyaudi_integration2.set_method(\"RK45\")\npyaudi_integration2.integrate(eta=True)\n\nplt.figure(figsize=(16,16))\nx2, v2 = pyaudi_integration2.sol(D.linspace(0, 1, 20)).T\nfor idx, (_x, _v) in enumerate(zip(D.to_float(x).T, D.to_float(v).T)):\n if idx == 0:\n plt.plot(_x,_v,'C0',label='original')\n else:\n plt.plot(_x,_v,'C0')\nfor idx, (_x, _v) in enumerate(zip(D.to_float(x2).T, D.to_float(v2).T)):\n if idx == 0:\n plt.plot(_x,_v,'C1',label='simulation')\n else:\n plt.plot(_x,_v,'C1')\nfor idx, (_x, _v) in enumerate(zip(D.array([it.evaluate(dweights) for it in x]).T,D.array([it.evaluate(dweights) for it in v]).T)):\n if idx == 0:\n plt.plot(_x,_v,'C2+', markersize=8.,label='differential learning')\n else:\n plt.plot(_x,_v,'C2+', markersize=8.)\n# plt.plot(x[0].constant_cf, v[0].constant_cf, 'ro')\nplt.legend(loc=2)\nplt.show()", "Since we have integrated 
multiple trajectories, printing all the final states and comparing it to the evaluated polynomial is visually difficult to parse. Instead, we look at the maximum and mean absolute differences between the numerical integration and the map evaluated with the new weights.", "print(\"Maximum Absolute Difference xf:\\t{}\".format(D.max(D.abs(D.to_float(x2[-1]) - D.to_float(x[-1].evaluate(dweights))))))\nprint(\"Mean Absolute Difference xf: \\t{}\".format(D.mean(D.abs(D.to_float(x2[-1]) - D.to_float(x[-1].evaluate(dweights))))))\nprint()\nprint(\"Maximum Absolute Difference vf:\\t{}\".format(D.max(D.abs(D.to_float(v2[-1]) - D.to_float(v[-1].evaluate(dweights))))))\nprint(\"Mean Absolute Difference vf: \\t{}\".format(D.mean(D.abs(D.to_float(v2[-1]) - D.to_float(v[-1].evaluate(dweights))))))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
griffinfoster/fundamentals_of_interferometry
6_Deconvolution/6_5_source_finding.ipynb
gpl-2.0
[ "Outline\nGlossary\n6. Deconvolution in Imaging \nPrevious: 6.3 Residuals and Image Quality \nNext: 6.x Further Reading and References", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom IPython.display import HTML \nHTML('../style/course.css') #apply general CSS\n\nimport matplotlib\nfrom scipy import optimize\nimport astropy.io.fits\n\nmatplotlib.rcParams.update({'font.size': 18})\nmatplotlib.rcParams.update({'figure.figsize': [12,8]} )", "6.5 Source Finding\nIn radio astronomy, source finding is the process through which the attributes of radio sources -- such as flux density and morphology -- are measured from data. In this section we will only cover source finding in the image plane.\nSource finding techniques usually involve four steps: i) characterizing the noise (or background estimation), ii) thresholding the data based on knowledge of the noise, iii) finding regions in the thresholded image with \"similar\" neighbouring pixels (this is the same as blob detection in image processing), and iv) parameterizing these 'blobs' through a function (usually a 2D Gaussian). The source attributes are then estimated from the parameterization of the blobs.\n6.5.1 Noise Characterization\nAs mentioned before, the radio data we process with source finders is noisy. To characterize this noise we need to make a few assumptions about its nature, namely we assume that the noise results from some stochastic process and that it can be described by a normal distribution\n$$ G(x \, | \, \mu,\sigma^2) = \frac{1}{\sigma \sqrt{2\pi}}\text{exp}\left( \frac{-(x-\mu)^2}{2\sigma^2}\right) $$\nwhere $\mu$ is the mean (or expected value) of the variable $x$, and $\sigma^2$ is the variance of the distribution; $\sigma$ is the standard deviation. Hence, the noise can be parameterized through the mean and the standard deviation. Let us illustrate this with an example. 
Below is a noise image from a MeerKAT simulation, along with a histogram of the pixels (in log space).", "noise_image = \"../data/fits/noise_image.fits\"\nwith astropy.io.fits.open(noise_image) as hdu:\n data = hdu[0].data[0,0,...]\n\nfig, (image, hist) = plt.subplots(1, 2, figsize=(18,6))\nhistogram, bins = np.histogram(data.flatten(), bins=401)\n\ndmin = data.min()\ndmax = data.max()\nx = np.linspace(dmin, dmax, 401)\n\nim = image.imshow(data)\n\nmean = data.mean()\nsigma = data.std()\npeak = histogram.max()\n\ngauss = lambda x, amp, mean, sigma: amp*np.exp( -(x-mean)**2/(2*sigma**2))\n\nfitdata = gauss(x, peak, mean, sigma)\n\nplt.plot(x, fitdata)\nplt.plot(x, histogram, \"o\")\nplt.yscale('log')\nplt.ylim(1)", "Now, in reality the noise has to be measured in the presence of astrophysical emission. Furthermore, radio images are also contaminated by various instrumental effects which can manifest as spurious emission in the image domain. All these factors make it difficult to characterize the noise in a synthesized image. Since the noise generally dominates the images, the mean and standard deviation of the entire image are still fairly good approximations of the noise. 
Let us now insert a few sources (image and flux distribution shown below) in the noise image from earlier and then try to estimate the noise.", "noise_image = \"../data/fits/star_model_image.fits\"\nwith astropy.io.fits.open(noise_image) as hdu:\n data = hdu[0].data[0,0,...]\n\nfig, (image, hist) = plt.subplots(1, 2, figsize=(18,6))\nhistogram, bins = np.histogram(data.flatten(), bins=101)\n\n\ndmin = data.min()\ndmax = data.max()\nx = np.linspace(dmin, dmax, 101)\n\nim = image.imshow(data)\n\nmean = data.mean()\nsigma_std = data.std()\n\npeak = histogram.max()\n\ngauss = lambda x, amp, mean, sigma: amp*np.exp( -(x-mean)**2/(2*sigma**2))\n\nfitdata_std = gauss(x, peak, mean, sigma_std)\n\nplt.plot(x, fitdata_std, label=\"STD DEV\")\n\nplt.plot(x, histogram, \"o\", label=\"Data\")\nplt.legend(loc=1)\n\nplt.yscale('log')\nplt.ylim(1)", "The pixel statistics of the image are no longer Gaussian, as is apparent from the long tail of the flux distribution. Constructing a Gaussian model from the mean and standard deviation results in a poor fit (blue line in the figure on the right). A better method to estimate the variance is to measure the dispersion of the data points about the mean (or median); this is the mean/median absolute deviation (MAD) technique. We will refer to the median absolute deviation as the MAD Median, and the mean absolute deviation as the MAD Mean. A synthesis-imaging-specific method to estimate the variance of the noise is to only consider the negative pixels. This works under the assumption that all the astrophysical emission (at least in Stokes I) has a positive flux density. 
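As a standalone sanity check of these robust estimators (a hypothetical NumPy sketch, not part of the notebook; the 1.4826 factor converts a median absolute deviation into a Gaussian-equivalent standard deviation, and the negative-pixel variant below uses the rms of the negative pixels, a slight variation on the notebook's version):

```python
import numpy as np

rng = np.random.RandomState(0)
true_sigma = 1.0
data = rng.normal(0.0, true_sigma, 100000)
data[:1000] += rng.uniform(20.0, 50.0, 1000)  # contaminate ~1% of pixels with bright positive "sources"

sigma_std = data.std()                                         # inflated by the sources
sigma_mad = 1.4826 * np.median(np.abs(data - np.median(data))) # robust MAD-based estimate
sigma_neg = np.sqrt(np.mean(data[data < 0]**2))                # rms of negative pixels only

print(sigma_std, sigma_mad, sigma_neg)
```

The plain standard deviation is pulled well above the true noise level by a handful of bright pixels, while the MAD-based and negative-pixel estimates stay close to it.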
The Figure below shows noise estimates from the methods mentioned above.", "mean = data.mean()\nsigma_std = data.std()\nsigma_neg = data[data<0].std() * 2\nsigma_mad_median = np.median( abs(data - np.median(data) ))\n\nmad_mean = lambda a: np.mean( abs(a - np.mean(a) ))\nsigma_mad_mean = mad_mean(data)\n\npeak = histogram.max()\n\ngauss = lambda x, amp, mean, sigma: amp*np.exp( -(x-mean)**2/(2*sigma**2))\n\nfitdata_std = gauss(x, peak, mean, sigma_std)\nfitdata_mad_median = gauss(x, peak, mean, sigma_mad_median)\nfitdata_mad_mean = gauss(x, peak, mean, sigma_mad_mean)\nfitdata_neg = gauss(x, peak, mean, sigma_neg)\n\nplt.plot(x, fitdata_std, label=\"STD DEV\")\nplt.plot(x, fitdata_mad_median, label=\"MAD Median\")\nplt.plot(x, fitdata_mad_mean, label=\"MAD Mean\")\nplt.plot(x, fitdata_neg, label=\"Negative STD DEV\")\nplt.plot(x, histogram, \"o\", label=\"Data\")\nplt.legend(loc=1)\n\nplt.yscale('log')\nplt.ylim(1)", "The MAD and negative-value standard deviation methods produce a better solution to the noise distribution in the presence of sources.\n6.5.2 Blob Detection and Characterization\nOnce the noise has been estimated, the next step is to find and characterize sources in the image. Generically in image processing this is known as blob detection. In a simple case during synthesis imaging we define a blob as a group of contiguous pixels whose spatial intensity profile can be modelled by a 2D Gaussian function. Of course, more advanced functions could be used. Generally, we would like to group together nearby pixels, such as spatially 'close' sky model components from deconvolution, into a single complex source. Our interferometric array has finite spatial resolution, so we can further constrain our blobs not to be significantly smaller than the image resolution. We define two further constraints of a blob, the peak and boundary thresholds. 
The peak threshold, defined as\n$$ \n \\sigma_\\text{peak} = n * \\sigma,\n$$\nis the minimum intensity the maximum pixel in a blob must have relative to the image noise. That is, all blobs with peak pixel lower than $\\sigma_\\text{peak}$ will be excluded from being considered sources. And the boundary threshold\n$$\n \\sigma_\\text{boundary} = m * \\sigma,\n$$\ndefines the boundary of a blob, $m$ and $n$ are natural numbers with $m$ < $n$. \n6.5.2.1 A simple source finder\nWe are now in a position to write a simple source finder. To do so we implement the following steps: \n\nEstimate the image noise and set peak and boundary threshold values.\nBlank out all pixel values below the boundary value.\nFind Peaks in image.\nFor each peak, fit a 2D Gaussian and subtract the Gaussian fit from the image.\nRepeat until the image has no pixels above the detection threshold.", "def gauss2D(x, y, amp, mean_x, mean_y, sigma_x, sigma_y):\n \"\"\" Generate a 2D Gaussian image\"\"\"\n gx = -(x - mean_x)**2/(2*sigma_x**2)\n gy = -(y - mean_y)**2/(2*sigma_y**2)\n \n return amp * np.exp( gx + gy)\n\ndef err(p, xx, yy, data):\n \"\"\"2D Gaussian error function\"\"\"\n return gauss2D(xx.flatten(), yy.flatten(), *p) - data.flatten()\n\ndef fit_gaussian(data, psf_pix):\n \"\"\"Fit a gaussian to a 2D data set\"\"\"\n \n width = data.shape[0]\n mean_x, mean_y = width/2, width/2\n amp = data.max()\n sigma_x, sigma_y = psf_pix, psf_pix\n params0 = amp, mean_x, mean_y, sigma_x,sigma_y\n \n npix_x, npix_y = data.shape\n x = np.linspace(0, npix_x, npix_x)\n y = np.linspace(0, npix_y, npix_y)\n xx, yy = np.meshgrid(x, y)\n \n \n params, pcov, infoDict, errmsg, sucess = optimize.leastsq(err, \n params0, args=(xx.flatten(), yy.flatten(),\n data.flatten()), full_output=1)\n \n \n perr = abs(np.diagonal(pcov))**0.5\n model = gauss2D(xx, yy, *params)\n \n return params, perr, model\n\ndef source_finder(data, peak, boundary, width, psf_pix):\n \"\"\"A simple source finding tool\"\"\"\n \n # first we 
make an estimate of the noise. Lets use the MAD mean\n sigma_noise = mad_mean(data)\n\n # Use noise estimate to set peak and boundary thresholds\n peak_sigma = sigma_noise*peak\n boundary_sigma = sigma_noise*boundary\n \n # Pad the image to avoid hitting the edge of the image\n pad = width*2\n residual = np.pad(data, pad_width=((pad, pad), (pad, pad)), mode=\"constant\")\n model = np.zeros(residual.shape)\n \n # Create slice to remove the padding later on\n imslice = [slice(pad, -pad), slice(pad,-pad)]\n \n catalog = [] \n \n # We will need to convert the fitted sigma values to a width\n FWHM = 2*np.sqrt(2*np.log(2))\n \n while True:\n \n # Check if the brightest pixel is at least as bright as the sigma_peak\n # Otherwise stop.\n max_pix = residual.max()\n if max_pix<peak_sigma:\n break\n \n xpix, ypix = np.where(residual==max_pix)\n xpix = xpix[0] # Get first element\n ypix = ypix[0] # Get first element\n \n # Make slice that selects box of size width centred around bright brightest pixel\n subim_slice = [ slice(xpix-width/2, xpix+width/2),\n slice(ypix-width/2, ypix+width/2) ]\n \n # apply slice to get subimage\n subimage = residual[subim_slice]\n \n \n # blank out pixels below the boundary threshold\n mask = subimage > boundary_sigma\n \n # Fit gaussian to submimage\n params, perr, _model = fit_gaussian(subimage*mask, psf_pix)\n \n amp, mean_x, mean_y, sigma_x,sigma_y = params\n amp_err, mean_x_err, mean_y_err, sigma_x_err, sigma_y_err = perr\n \n # Remember to reposition the source in original image\n pos_x = xpix + (width/2 - mean_x) - pad\n pos_y = ypix + (width/2 - mean_y) - pad\n \n # Convert sigma values to FWHM lengths\n size_x = FWHM*sigma_x\n size_y = FWHM*sigma_y\n \n # Add modelled source to model image\n model[subim_slice] = _model\n \n # create new source\n source = (\n amp,\n pos_x,\n pos_y,\n size_x,\n size_y\n )\n \n # add source to catalogue\n catalog.append(source)\n \n # update residual image\n residual[subim_slice] -= _model \n \n return 
catalog, model[imslice], residual[imslice], sigma_noise\n", "Using this source finder we can produce a sky model which contains all 17 sources in our test image from earlier in the section.", "test_image = \"../data/fits/star_model_image.fits\"\nwith astropy.io.fits.open(test_image) as hdu:\n data = hdu[0].data[0,0,...]\n \ncatalog, model, residual, sigma_noise = source_finder(data, 5, 2, 50, 10)\n\nprint \"Peak_Flux Pix_x Pix_y Size_x Size_y\"\nfor source in catalog:\n print \" %.4f %.1f %.1f %.2f %.2f\"%source\n\nfig, (img, mod, res) = plt.subplots(1, 3, figsize=(24,12))\nvmin, vmax = sigma_noise, data.max()\n\nim = img.imshow(data, vmin=vmin, vmax=vmax)\nimg.set_title(\"Data\")\n\nmod.imshow(model, vmin=vmin, vmax=vmax)\nmod.set_title(\"Model\")\n\nres.imshow(residual, vmin=vmin, vmax=vmax)\nres.set_title(\"Residual\")\n\ncbar_ax = fig.add_axes([0.92, 0.25, 0.02, 0.5])\nfig.colorbar(im, cax=cbar_ax, format=\"%.2g\")", "The flux and position of each source vary from the true sky model due to the image noise. The source finding algorithm above is a heuristic example. It has two major flaws: i) it is not capable of handling a situation where two or more sources are close enough to each other that they fall within the same sub-image from which the source parameters are estimated, and ii) the noise in radio images is often non-uniform and 'local' noise estimates are required in order to set thresholds. More advanced source finders are designed to work on specific source types such as extended objects and line spectra.\n\nNext: 6.x Further Reading and References\n<div class=warn><b>Future Additions:</b></div>\n\n\ndescribe MAD and negative standard deviation methods\nfigure titles and labels\ndiscussion on source finders commonly in use\nexample: change the background noise or threshold values\nexample: kat-7 standard image after deconvolution\nexample: complex extended source\nexample: location-dependent noise variations" ]
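One common way to address non-uniform image noise (the second flaw noted above) is to estimate sigma in boxes and build a per-pixel noise map, so thresholds can be set locally. The sketch below is a hypothetical illustration; the helper name and the box size are inventions, not part of the notebook:

```python
import numpy as np

def local_sigma(image, box=64):
    """Estimate the noise in non-overlapping boxes using the MAD,
    returning a per-pixel noise map (each box filled with its own estimate)."""
    ny, nx = image.shape
    sigma_map = np.empty((ny, nx), dtype=float)
    for y0 in range(0, ny, box):
        for x0 in range(0, nx, box):
            sub = image[y0:y0+box, x0:x0+box]
            mad = np.median(np.abs(sub - np.median(sub)))
            sigma_map[y0:y0+box, x0:x0+box] = 1.4826 * mad  # Gaussian-equivalent sigma
    return sigma_map

# synthetic image whose noise doubles in the right half
rng = np.random.RandomState(1)
img = rng.normal(0.0, 1.0, (256, 256))
img[:, 128:] *= 2.0

smap = local_sigma(img, box=64)
```

A pixel would then be compared against `n * smap` rather than a single global `n * sigma`.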
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ahwillia/RecNetLearn
tutorials/FORCE_Learning_recurrent_feedforward.ipynb
mit
[ "Embedding a Feedforward Cascade in a Recurrent Network\nAlex Williams 10/24/2015\nIf you are viewing a static version of this notebook (e.g. on nbviewer), you can launch an interactive session by clicking below:\n\nThere has been renewed interest in feedforward networks in both theoretical (Ganguli et al., 2008; Goldman, 2009; Murphy & Miller, 2009) and experimental (Long et al. 2010; Harvey et al. 2012) neuroscience lately. On a structural level, most neural circuits under study are highly recurrent. However, recurrent networks can still encode simple feedforward dynamics as we'll show in this notebook (also see Ganguli & Latham, 2009, for an intuitive overview).\n<img src=\"./feedforward.png\" width=450>", "from __future__ import division\nfrom scipy.integrate import odeint,ode\nfrom numpy import zeros,ones,eye,tanh,dot,outer,sqrt,linspace,pi,exp,tile,arange,reshape\nfrom numpy.random import uniform,normal,choice\nimport pylab as plt\nimport numpy as np\n%matplotlib inline", "Methods\nConsider a recurrent network initialized with random connectivity. We split the network into two groups &mdash; neurons in the first group participate in the feedforward cascade, and neurons in the second group do not. We use recursive least-squares to train the presynaptic weights for each of the neurons in the cascade. The presynaptic weights for the second group of neurons are left untrained.\nThe second group of neurons provides chaotic behavior that helps stabilize and time the feedforward cascade. This is necessary for the target feedforward cascade used in this example. The intrinsic dynamics of the system are too fast to match the slow timescale of the target pattern we use.\n<img src=\"./rec-ff-net.png\" width=600>\nWe previously applied FORCE learning to the output/readout weights of a recurrent network (see notebook here). In this case we will train a subset of the recurrent connections in the network (blue lines in the schematic above). 
This is described in the supplemental materials of Sussillo & Abbott (2009). We start with random initial synaptic weights for all recurrent connections and random weights for the input stimulus to the network.<sup><a href=\"#f1b\" id=\"f1t\">[1]</a></sup> The dynamics are given by:\n$$\mathbf{\dot{x}} = -\mathbf{x} + J \tanh(\mathbf{x}) + \mathbf{u}(t)$$\nwhere $\mathbf{x}$ is a vector holding the activation of all neurons, the firing rates are $\tanh(\mathbf{x})$, the matrix $J$ holds the synaptic weights of the recurrent connections, and $\mathbf{u}(t)$ is the input/stimulus, which is applied in periodic step pulses.\nEach neuron participating in the feedforward cascade/sequence has a target function for its firing rate. We use a Gaussian for this example:\n$$f_i(t) = 2 \exp \left [ \frac{-(t-\mu_i)^2}{18} \right ] - 1$$\nwhere $\mu_i$ is the time of peak firing for neuron $i$. Here, $t$ is the time since the last stimulus pulse was delivered &mdash; to reiterate, we repeatedly apply the stimulus as a step pulse during training.\nWe apply recursive least-squares to train the pre-synaptic weights for each neuron participating in the cascade. Denote the $i$<sup>th</sup> row of $J$ as $\mathbf{j}_i$ (these are the presynaptic inputs to neuron $i$). 
For each neuron, we store a running estimate of the inverse correlation matrix, $P_i$, and use this to tune our update of the presynaptic weights:\n$$\\mathbf{q} = P_i \\tanh [\\mathbf{x}]$$\n$$c = \\frac{1}{1+ \\mathbf{q}^T \\tanh(\\mathbf{x})}$$\n$$\\mathbf{j}_i \\rightarrow \\mathbf{j}_i + c(f_i(t)- \\tanh (x_i) ) \\mathbf{q}$$\n$$P_{i} \\rightarrow P_{i} - c \\mathbf{q} \\mathbf{q}^T$$\nWe initialize each $P_i$ to the identity matrix at the beginning of training.\nTraining the Network", "## Network parameters and initial conditions\nN1 = 20 # neurons in chain\nN2 = 20 # neurons not in chain\nN = N1+N2\ntI = 10\nJ = normal(0,sqrt(1/N),(N,N))\nx0 = uniform(-1,1,N)\ntmax = 2*N1+2*tI\ndt = 0.5\nu = uniform(-1,1,N)\ng = 1.5\n\n## Target firing rate for neuron i and time t0\ntarget = lambda t0,i: 2.0*exp(-(((t0%tmax)-(2*i+tI+3))**2)/(2.0*9)) - 1.0\n\ndef f1(t0,x):\n ## input to network at beginning of trial\n if (t0%tmax) < tI: return -x + g*dot(J,tanh_x) + u\n ## no input after tI units of time\n else: return -x + g*dot(J,tanh_x)\n\nP = []\nfor i in range(N1):\n # Running estimate of the inverse correlation matrix\n P.append(eye(N))\n\nlr = 1.0 # learning rate\n\n# simulation data: state, output, time, weight updates\nx,z,t,wu = [x0],[],[0],[zeros(N1).tolist()]\n\n# Set up ode solver\nsolver = ode(f1)\nsolver.set_initial_value(x0)\n\n# Integrate ode, update weights, repeat\nwhile t[-1] < 25*tmax:\n tanh_x = tanh(x[-1]) # cache firing rates\n wu.append([])\n \n # train rates at the beginning of the simulation\n if t[-1]<22*tmax:\n for i in range(N1):\n error = target(t[-1],i) - tanh_x[i]\n q = dot(P[i],tanh_x)\n c = lr / (1 + dot(q,tanh_x))\n P[i] = P[i] - c*outer(q,q)\n J[i,:] += c*error*q\n wu[-1].append(np.sum(np.abs(c*error*q)))\n else:\n # Store zero for the weight update\n for i in range(N1): wu[-1].append(0)\n \n solver.integrate(solver.t+dt)\n x.append(solver.y)\n t.append(solver.t)\n\nx = np.array(x)\nr = tanh(x) # firing rates\nt = np.array(t)\nwu = 
np.array(wu)\nwu = reshape(wu,(len(t),N1))\n\npos = 2*arange(N)\noffset = tile(pos[::-1],(len(t),1))\ntarg = np.array([target(t,i) for i in range(N1)]).T\n\nplt.figure(figsize=(12,11))\nplt.subplot(3,1,1)\nplt.plot(t,targ + offset[:,:N1],'-r')\nplt.plot(t,r[:,:N1] + offset[:,:N1],'-k')\nplt.yticks([]),plt.xticks([]),plt.xlim([t[0],t[-1]])\nplt.title('Trained subset of network (target pattern in red)')\nplt.subplot(3,1,2)\nplt.plot(t,r[:,N1:] + offset[:,N1:],'-k')\nplt.yticks([]),plt.xticks([]),plt.xlim([t[0],t[-1]])\nplt.title('Untrained subset of network')\nplt.subplot(3,1,3)\nplt.plot(t,wu + offset[:,:N1],'-k')\nplt.yticks([]),plt.xlim([t[0],t[-1]]),plt.xlabel('time (a.u.)')\nplt.title('Change in presynaptic weights for each trained neuron')\nplt.show()", "Test the behavior\nWe want the network to only produce a feedforward cascade only in response to a stimulus input. Note that this doesn't always work &mdash; it is difficult for the network to perform this task. Nonetheless, the training works pretty well most of the time.<sup><a href=\"#f2b\" id=\"f2t\">[2]</a></sup>", "tstim = [80,125,170,190]\n\ndef f2(t0,x):\n ## input to network at beginning of trial\n for ts in tstim:\n if t0 > ts and t0 < ts+tI: return -x + g*dot(J,tanh(x)) + u\n ## no input after tI units of time\n return -x + g*dot(J,tanh(x))\n\n# Set up ode solver\nsolver = ode(f2)\nsolver.set_initial_value(x[-1,:])\n\nx_test,t_test = [x[-1,:]],[0]\nwhile t_test[-1] < 250:\n solver.integrate(solver.t + dt)\n x_test.append(solver.y)\n t_test.append(solver.t)\n\nx_test = np.array(x_test)\nr_test = tanh(x_test) # firing rates\nt_test = np.array(t_test)\n\npos = 2*arange(N)\noffset = tile(pos[::-1],(len(t_test),1))\n\nplt.figure(figsize=(10,5))\nplt.plot(t_test,r_test[:,:N1] + offset[:,:N1],'-k')\nplt.plot(tstim,ones(len(tstim))*80,'or',ms=8)\nplt.ylim([37,82]), plt.yticks([]), plt.xlabel('time (a.u.)')\nplt.title('After Training. 
Stimulus applied at red points.\\n',fontweight='bold')\nplt.show()", "Note that when we apply two inputs in quick succession (the last two inputs) the feedforward cascade restarts.\nConnectivity matrix", "plt.matshow(J)\nplt.title(\"Connectivity Matrix, Post-Training\")", "Notes:\n<a href=\"#f1t\" id=\"f1b\">[1]</a> Note that the schematic network only shows a subset of the connections between neurons &mdash; showing all connections does not make for a nice illustration. In reality, the neurons are all-to-all connected and the stimulus projects with different weights to all neurons. All connections can be either inhibitory or excitatory and can flip sign during training.\n<a href=\"#f2t\" id=\"f2b\">[2]</a> As always, email me if you have tricks, corrections, or other improvements to add to the code.\nLicense:\n<a rel=\"license\" href=\"http://creativecommons.org/licenses/by-sa/4.0/\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by-sa/4.0/88x31.png\" /></a><br>This work is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by-sa/4.0/\">Creative Commons Attribution-ShareAlike 4.0 International License</a>." ]
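The recursive least-squares update at the heart of the training loop above can be isolated and checked on a static problem. The following standalone sketch (not from the notebook) uses the same q, c, P recursion to recover a fixed linear readout of random "firing rates":

```python
import numpy as np

rng = np.random.RandomState(0)
N, T = 50, 400
r = np.tanh(rng.randn(T, N))   # fixed random "firing rates", one sample per row
w_true = rng.randn(N)
f = r.dot(w_true)              # target readout we want to recover

w = np.zeros(N)
P = np.eye(N)                  # running estimate of the inverse correlation matrix
for rt, ft in zip(r, f):
    q = P.dot(rt)
    c = 1.0 / (1.0 + q.dot(rt))
    w += c * (ft - w.dot(rt)) * q   # same error-scaled update as in the network training
    P -= c * np.outer(q, q)

mse = np.mean((r.dot(w) - f) ** 2)
print(mse)
```

After one pass over the samples, the residual error is a small fraction of the target variance, which is the convergence behavior FORCE learning relies on.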
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
yl565/statsmodels
examples/notebooks/statespace_structural_harvey_jaeger.ipynb
bsd-3-clause
[ "Detrending, Stylized Facts and the Business Cycle\nIn an influential article, Harvey and Jaeger (1993) described the use of unobserved components models (also known as \"structural time series models\") to derive stylized facts of the business cycle.\nTheir paper begins:\n\"Establishing the 'stylized facts' associated with a set of time series is widely considered a crucial step\nin macroeconomic research ... For such facts to be useful they should (1) be consistent with the stochastic\nproperties of the data and (2) present meaningful information.\"\n\nIn particular, they make the argument that these goals are often better met using the unobserved components approach rather than the popular Hodrick-Prescott filter or Box-Jenkins ARIMA modeling techniques.\nStatsmodels has the ability to perform all three types of analysis, and below we follow the steps of their paper, using a slightly updated dataset.", "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\n\nfrom IPython.display import display, Latex", "Unobserved Components\nThe unobserved components model available in Statsmodels can be written as:\n$$\ny_t = \underbrace{\mu_{t}}_{\text{trend}} + \underbrace{\gamma_{t}}_{\text{seasonal}} + \underbrace{c_{t}}_{\text{cycle}} + \sum_{j=1}^k \underbrace{\beta_j x_{jt}}_{\text{explanatory}} + \underbrace{\varepsilon_t}_{\text{irregular}}\n$$\nsee Durbin and Koopman 2012, Chapter 3 for notation and additional details. Notice that different specifications for the different individual components can support a wide range of models. 
The specific models considered in the paper and below are specializations of this general equation.\nTrend\nThe trend component is a dynamic extension of a regression model that includes an intercept and linear time-trend.\n$$\n\begin{align}\n\underbrace{\mu_{t+1}}_{\text{level}} & = \mu_t + \nu_t + \eta_{t+1} \qquad & \eta_{t+1} \sim N(0, \sigma_\eta^2) \\\n\underbrace{\nu_{t+1}}_{\text{trend}} & = \nu_t + \zeta_{t+1} & \zeta_{t+1} \sim N(0, \sigma_\zeta^2) \\\n\end{align}\n$$\nwhere the level is a generalization of the intercept term that can dynamically vary across time, and the trend is a generalization of the time-trend such that the slope can dynamically vary across time.\nFor both elements (level and trend), we can consider models in which:\n\nThe element is included vs excluded (if the trend is included, there must also be a level included).\nThe element is deterministic vs stochastic (i.e. whether or not the variance on the error term is confined to be zero or not)\n\nThe only additional parameters to be estimated via MLE are the variances of any included stochastic components.\nThis leads to the following specifications:\n| | Level | Trend | Stochastic Level | Stochastic Trend |\n|----------------------------------------------------------------------|-------|-------|------------------|------------------|\n| Constant | ✓ | | | |\n| Local Level <br /> (random walk) | ✓ | | ✓ | |\n| Deterministic trend | ✓ | ✓ | | |\n| Local level with deterministic trend <br /> (random walk with drift) | ✓ | ✓ | ✓ | |\n| Local linear trend | ✓ | ✓ | ✓ | ✓ |\n| Smooth trend <br /> (integrated random walk) | ✓ | ✓ | | ✓ |\nSeasonal\nThe seasonal component is written as:\n<span>$$\n\gamma_t = - \sum_{j=1}^{s-1} \gamma_{t+1-j} + \omega_t \qquad \omega_t \sim N(0, \sigma_\omega^2)\n$$</span>\nThe periodicity (number of seasons) is s, and the defining character is that (without the error term), the seasonal components sum to zero across one complete 
cycle. The inclusion of an error term allows the seasonal effects to vary over time.\nThe variants of this model are:\n\nThe periodicity s\nWhether or not to make the seasonal effects stochastic.\n\nIf the seasonal effect is stochastic, then there is one additional parameter to estimate via MLE (the variance of the error term).\nCycle\nThe cyclical component is intended to capture cyclical effects at time frames much longer than captured by the seasonal component. For example, in economics the cyclical term is often intended to capture the business cycle, and is then expected to have a period between \"1.5 and 12 years\" (see Durbin and Koopman).\nThe cycle is written as:\n<span>$$\n\\begin{align}\nc_{t+1} & = c_t \\cos \\lambda_c + c_t^* \\sin \\lambda_c + \\tilde \\omega_t \\qquad & \\tilde \\omega_t \\sim N(0, \\sigma_{\\tilde \\omega}^2) \\\\\nc_{t+1}^* & = -c_t \\sin \\lambda_c + c_t^* \\cos \\lambda_c + \\tilde \\omega_t^* & \\tilde \\omega_t^* \\sim N(0, \\sigma_{\\tilde \\omega}^2)\n\\end{align}\n$$</span>\nThe parameter $\\lambda_c$ (the frequency of the cycle) is an additional parameter to be estimated by MLE. If the cycle is stochastic, then there is one additional parameter to estimate (the variance of the error term - note that both of the error terms here share the same variance, but are assumed to have independent draws).\nIrregular\nThe irregular component is assumed to be a white noise error term. 
Its variance is a parameter to be estimated by MLE; i.e.\n$$\n\\varepsilon_t \\sim N(0, \\sigma_\\varepsilon^2)\n$$\nIn some cases, we may want to generalize the irregular component to allow for autoregressive effects:\n$$\n\\varepsilon_t = \\rho(L) \\varepsilon_{t-1} + \\epsilon_t, \\qquad \\epsilon_t \\sim N(0, \\sigma_\\epsilon^2)\n$$\nIn this case, the autoregressive parameters would also be estimated via MLE.\nRegression effects\nWe may want to allow for explanatory variables by including additional terms\n<span>$$\n\\sum_{j=1}^k \\beta_j x_{jt}\n$$</span>\nor for intervention effects by including\n<span>$$\n\\begin{align}\n\\delta w_t \\qquad \\text{where} \\qquad w_t & = 0, \\qquad t < \\tau, \\\\\n& = 1, \\qquad t \\ge \\tau\n\\end{align}\n$$</span>\nThese additional parameters could be estimated via MLE or by including them as components of the state space formulation.\nData\nFollowing Harvey and Jaeger, we will consider the following time series:\n\nUS real GNP, \"output\", (GNPC96)\nUS GNP implicit price deflator, \"prices\", (GNPDEF)\nUS monetary base, \"money\", (AMBSL)\n\nThe time frame in the original paper varied across series, but was broadly 1954-1989. Below we use data from the period 1948-2008 for all series. Although the unobserved components approach allows isolating a seasonal component within the model, the series considered in the paper, and here, are already seasonally adjusted.\nAll data series considered here are taken from Federal Reserve Economic Data (FRED). 
Conveniently, the Python library Pandas has the ability to download data from FRED directly.", "# Datasets\ntry:\n from pandas_datareader.data import DataReader\nexcept ImportError:\n from pandas.io.data import DataReader\n\n# Get the raw data\nstart = '1948-01'\nend = '2008-01'\nus_gnp = DataReader('GNPC96', 'fred', start=start, end=end)\nus_gnp_deflator = DataReader('GNPDEF', 'fred', start=start, end=end)\nus_monetary_base = DataReader('AMBSL', 'fred', start=start, end=end).resample('QS').mean()\nrecessions = DataReader('USRECQ', 'fred', start=start, end=end).resample('QS').last().values[:,0]\n\n# Construct the dataframe\ndta = pd.concat(map(np.log, (us_gnp, us_gnp_deflator, us_monetary_base)), axis=1)\ndta.columns = ['US GNP','US Prices','US monetary base']\ndates = dta.index._mpl_repr()", "To get a sense of these three variables over the timeframe, we can plot them:", "# Plot the data\nax = dta.plot(figsize=(13,3))\nylim = ax.get_ylim()\nax.xaxis.grid()\nax.fill_between(dates, ylim[0]+1e-5, ylim[1]-1e-5, recessions, facecolor='k', alpha=0.1);", "Model\nSince the data is already seasonally adjusted and there are no obvious explanatory variables, the generic model considered is:\n$$\ny_t = \\underbrace{\\mu_{t}}_{\\text{trend}} + \\underbrace{c_{t}}_{\\text{cycle}} + \\underbrace{\\varepsilon_t}_{\\text{irregular}}\n$$\nThe irregular will be assumed to be white noise, and the cycle will be stochastic and damped. The final modeling choice is the specification to use for the trend component. Harvey and Jaeger consider two models:\n\nLocal linear trend (the \"unrestricted\" model)\nSmooth trend (the \"restricted\" model, since we are forcing $\\sigma_\\eta = 0$)\n\nBelow, we construct kwargs dictionaries for each of these model types. Notice that there are two ways to specify the models. One way is to specify components directly, as in the table above. 
The other way is to use string names which map to various specifications.", "# Model specifications\n\n# Unrestricted model, using string specification\nunrestricted_model = {\n 'level': 'local linear trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n}\n\n# Unrestricted model, setting components directly\n# This is an equivalent, but less convenient, way to specify a\n# local linear trend model with a stochastic damped cycle:\n# unrestricted_model = {\n# 'irregular': True, 'level': True, 'stochastic_level': True, 'trend': True, 'stochastic_trend': True,\n# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n# }\n\n# The restricted model forces a smooth trend\nrestricted_model = {\n 'level': 'smooth trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n}\n\n# Restricted model, setting components directly\n# This is an equivalent, but less convenient, way to specify a\n# smooth trend model with a stochastic damped cycle. Notice\n# that the difference from the local linear trend model is that\n# `stochastic_level=False` here.\n# restricted_model = {\n# 'irregular': True, 'level': True, 'stochastic_level': False, 'trend': True, 'stochastic_trend': True,\n# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n# }", "We now fit the following models:\n\nOutput, unrestricted model\nPrices, unrestricted model\nPrices, restricted model\nMoney, unrestricted model\nMoney, restricted model", "# Output\noutput_mod = sm.tsa.UnobservedComponents(dta['US GNP'], **unrestricted_model)\noutput_res = output_mod.fit(method='powell', disp=False)\n\n# Prices\nprices_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **unrestricted_model)\nprices_res = prices_mod.fit(method='powell', disp=False)\n\nprices_restricted_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **restricted_model)\nprices_restricted_res = prices_restricted_mod.fit(method='powell', disp=False)\n\n# Money\nmoney_mod = sm.tsa.UnobservedComponents(dta['US 
monetary base'], **unrestricted_model)\nmoney_res = money_mod.fit(method='powell', disp=False)\n\nmoney_restricted_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **restricted_model)\nmoney_restricted_res = money_restricted_mod.fit(method='powell', disp=False)", "Once we have fit these models, there are a variety of ways to display the information. Looking at the model of US GNP, we can summarize the fit of the model using the summary method on the fit object.", "print(output_res.summary())", "For unobserved components models, and in particular when exploring stylized facts in line with point (2) from the introduction, it is often more instructive to plot the estimated unobserved components (e.g. the level, trend, and cycle) themselves to see if they provide a meaningful description of the data.\nThe plot_components method of the fit object can be used to show plots and confidence intervals of each of the estimated states, as well as a plot of the observed data versus the one-step-ahead predictions of the model to assess fit.", "fig = output_res.plot_components(legend_loc='lower right', figsize=(15, 9));", "Finally, Harvey and Jaeger summarize the models in another way to highlight the relative importances of the trend and cyclical components; below we replicate their Table I. 
The values we find are broadly consistent with, but different in the particulars from, the values from their table.", "# Create Table I\ntable_i = np.zeros((5,6))\n\nstart = dta.index[0]\nend = dta.index[-1]\ntime_range = '%d:%d-%d:%d' % (start.year, start.quarter, end.year, end.quarter)\nmodels = [\n ('US GNP', time_range, 'None'),\n ('US Prices', time_range, 'None'),\n ('US Prices', time_range, r'$\\sigma_\\eta^2 = 0$'),\n ('US monetary base', time_range, 'None'),\n ('US monetary base', time_range, r'$\\sigma_\\eta^2 = 0$'),\n]\nindex = pd.MultiIndex.from_tuples(models, names=['Series', 'Time range', 'Restrictions'])\nparameter_symbols = [\n r'$\\sigma_\\zeta^2$', r'$\\sigma_\\eta^2$', r'$\\sigma_\\kappa^2$', r'$\\rho$',\n r'$2 \\pi / \\lambda_c$', r'$\\sigma_\\varepsilon^2$',\n]\n\ni = 0\nfor res in (output_res, prices_res, prices_restricted_res, money_res, money_restricted_res):\n if res.model.stochastic_level:\n (sigma_irregular, sigma_level, sigma_trend,\n sigma_cycle, frequency_cycle, damping_cycle) = res.params\n else:\n (sigma_irregular, sigma_level,\n sigma_cycle, frequency_cycle, damping_cycle) = res.params\n sigma_trend = np.nan\n period_cycle = 2 * np.pi / frequency_cycle\n \n table_i[i, :] = [\n sigma_level*1e7, sigma_trend*1e7,\n sigma_cycle*1e7, damping_cycle, period_cycle,\n sigma_irregular*1e7\n ]\n i += 1\n \npd.set_option('float_format', lambda x: '%.4g' % np.round(x, 2) if not np.isnan(x) else '-')\ntable_i = pd.DataFrame(table_i, index=index, columns=parameter_symbols)\ntable_i" ]
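The introduction notes that statsmodels can also perform Hodrick-Prescott filtering, the main alternative Harvey and Jaeger argue against. As a minimal sketch of what that detrending does (assuming the standard penalized least-squares formulation, with the conventional `lamb=1600` for quarterly data), the filter can be written in a few lines of NumPy; statsmodels exposes the same computation as `sm.tsa.filters.hpfilter`.

```python
import numpy as np

def hp_filter(y, lamb=1600.0):
    """Hodrick-Prescott filter: choose a trend tau minimizing
    sum((y - tau)**2) + lamb * sum(squared second differences of tau),
    which reduces to solving the linear system (I + lamb * D'D) tau = y."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # D is the (n-2) x n second-difference operator:
    # (D @ tau)[t] = tau[t] - 2*tau[t+1] + tau[t+2]
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(n) + lamb * (D.T @ D), y)
    return y - trend, trend  # (cycle, trend), as in sm.tsa.filters.hpfilter
```

A useful sanity check: a purely linear series has zero second differences, so the filter returns it unchanged as the trend and assigns a zero cycle.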
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ledeprogram/algorithms
class7/donow/Gruen_Gianna_7_donow.ipynb
gpl-3.0
[ "Apply logistic regression to categorize whether a county had a high mortality rate due to contamination\n1. Import the necessary packages to read in the data, plot, and create a logistic regression model", "import pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression", "2. Read in the hanford.csv file in the data/ folder", "df = pd.read_csv('hanford.csv')", "<img src=\"../../images/hanford_variables.png\"></img>\n3. Calculate the basic descriptive statistics on the data", "df.describe()\n\niqr = df.quantile(q=0.75) - df.quantile(q=0.25)\niqr\n\nual = df.quantile(q=0.75) + (iqr * 1.5)\nual\n\nlal = df.quantile(q=0.25) - (iqr * 1.5)\nlal\n\ndf.plot(kind='scatter', x='Exposure', y='Mortality')", "4. Find a reasonable threshold to say exposure is high and recode the data", "for value in df['Exposure']:\n if value < ual['Exposure']:\n print(value)\n\n# Find new reasonable threshold!\n# Choosing 6\n\ndf['high_exposure'] = df['Exposure'].apply(lambda x:1 if x>6 else 0)\n\ndf\n\n# dataset = df[['Mortality']].join([pd.get_dummies(df['Exposure'],prefix=\"Exposure\"),df.high_exposure])\n\n# dataset", "5. Create a logistic regression model", "from sklearn.linear_model import LogisticRegression\n\nlm = LogisticRegression()\n\nx = np.asarray(df[['Mortality']])\ny = np.asarray(df['high_exposure'])\n\nlm = lm.fit(x,y)\n\nlm.score(x,y)\n\nlm.coef_\n\nlm.intercept_\n\nplt.plot(x,lm.coef_[0]*x+lm.intercept_[0])", "6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50", "df['high_mortality'] = df['Mortality'].apply(lambda x:1 if x>150 else 0)\n\nlm2 = LogisticRegression()\n\nx2 = np.asarray(df[['Exposure']])\ny2 = np.asarray(df['high_mortality'])\n\nlm2 = lm2.fit(x2,y2)\n\nlm2.predict([[50]]) # predict expects a 2-D array of samples\n\n# According to the prediction the mortality rate is high at an exposure level of 50." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
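The prediction step above reduces to evaluating the logistic (sigmoid) function at the fitted linear combination of the input. As a sketch of what `LogisticRegression.predict` computes — note the intercept and coefficient used below are made-up placeholders, not the fitted values; in the notebook they would come from `lm2.intercept_[0]` and `lm2.coef_[0][0]`:

```python
import math

def predict_high(exposure, intercept, coef, threshold=0.5):
    """Logistic regression decision rule: the probability of the 'high' class is
    sigmoid(intercept + coef * exposure); predict 1 when it exceeds the threshold."""
    p = 1.0 / (1.0 + math.exp(-(intercept + coef * exposure)))
    return p, int(p >= threshold)

# Hypothetical fitted values, for illustration only
p, label = predict_high(50, intercept=-3.0, coef=0.1)  # p = sigmoid(2) ≈ 0.88, label = 1
```

With these placeholder parameters an exposure of 50 lands well above the 0.5 threshold, mirroring the notebook's conclusion that mortality is predicted high at that exposure level.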
vicente-gonzalez-ruiz/YAPT
scientific_computation/numpy.ipynb
cc0-1.0
[ "SciPy.org's NumPy\n<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Intro\" data-toc-modified-id=\"Intro-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Intro</a></span></li><li><span><a href=\"#Looking-for-information\" data-toc-modified-id=\"Looking-for-information-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Looking for information</a></span></li><li><span><a href=\"#Data-types\" data-toc-modified-id=\"Data-types-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span><a href=\"https://docs.scipy.org/doc/numpy/user/basics.types.html\" target=\"_blank\">Data types</a></a></span></li><li><span><a href=\"#Array-creation,-allocation-and-initialization\" data-toc-modified-id=\"Array-creation,-allocation-and-initialization-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span><a href=\"https://docs.scipy.org/doc/numpy/user/quickstart.html#array-creation\" target=\"_blank\">Array creation, allocation and initialization</a></a></span><ul class=\"toc-item\"><li><span><a href=\"#Allocation-(without-initializing-the-data-array)\" data-toc-modified-id=\"Allocation-(without-initializing-the-data-array)-4.1\"><span class=\"toc-item-num\">4.1&nbsp;&nbsp;</span>Allocation (without initializing the data array)</a></span></li><li><span><a href=\"#Allocating-and-initalizing\" data-toc-modified-id=\"Allocating-and-initalizing-4.2\"><span class=\"toc-item-num\">4.2&nbsp;&nbsp;</span>Allocating and initalizing</a></span><ul class=\"toc-item\"><li><span><a href=\"#Using-a-list\" data-toc-modified-id=\"Using-a-list-4.2.1\"><span class=\"toc-item-num\">4.2.1&nbsp;&nbsp;</span>Using a list</a></span></li><li><span><a href=\"#Using-&quot;initializers&quot;\" data-toc-modified-id=\"Using-&quot;initializers&quot;-4.2.2\"><span class=\"toc-item-num\">4.2.2&nbsp;&nbsp;</span>Using \"initializers\"</a></span></li></ul></li><li><span><a href=\"#Defining-types\" data-toc-modified-id=\"Defining-types-4.3\"><span 
class=\"toc-item-num\">4.3&nbsp;&nbsp;</span>Defining <a href=\"https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html\" target=\"_blank\">types</a></a></span></li><li><span><a href=\"#Reshaping\" data-toc-modified-id=\"Reshaping-4.4\"><span class=\"toc-item-num\">4.4&nbsp;&nbsp;</span><a href=\"https://docs.scipy.org/doc/numpy/user/quickstart.html#shape-manipulation\" target=\"_blank\">Reshaping</a></a></span></li><li><span><a href=\"#C-vs-Fortran-order\" data-toc-modified-id=\"C-vs-Fortran-order-4.5\"><span class=\"toc-item-num\">4.5&nbsp;&nbsp;</span>C vs Fortran order</a></span></li><li><span><a href=\"#Creation-(without-allocation-memory)\" data-toc-modified-id=\"Creation-(without-allocation-memory)-4.6\"><span class=\"toc-item-num\">4.6&nbsp;&nbsp;</span>Creation (without allocation memory)</a></span></li></ul></li><li><span><a href=\"#Views-and-copies\" data-toc-modified-id=\"Views-and-copies-5\"><span class=\"toc-item-num\">5&nbsp;&nbsp;</span><a href=\"https://docs.scipy.org/doc/numpy/user/quickstart.html#copies-and-views\" target=\"_blank\">Views and copies</a></a></span><ul class=\"toc-item\"><li><span><a href=\"#Pointer-copy\" data-toc-modified-id=\"Pointer-copy-5.1\"><span class=\"toc-item-num\">5.1&nbsp;&nbsp;</span>Pointer copy</a></span></li><li><span><a href=\"#Shallow-copy-(view)\" data-toc-modified-id=\"Shallow-copy-(view)-5.2\"><span class=\"toc-item-num\">5.2&nbsp;&nbsp;</span>Shallow copy (view)</a></span></li><li><span><a href=\"#Deep-(object-+--data)-copy\" data-toc-modified-id=\"Deep-(object-+--data)-copy-5.3\"><span class=\"toc-item-num\">5.3&nbsp;&nbsp;</span>Deep (object + data) copy</a></span></li></ul></li><li><span><a href=\"#More-about-array-indexing-and-iterating\" data-toc-modified-id=\"More-about-array-indexing-and-iterating-6\"><span class=\"toc-item-num\">6&nbsp;&nbsp;</span>More about array <a href=\"https://docs.scipy.org/doc/numpy/user/basics.indexing.html\" target=\"_blank\">indexing</a> and <a 
href=\"https://docs.scipy.org/doc/numpy/user/quickstart.html#indexing-slicing-and-iterating\" target=\"_blank\">iterating</a></a></span><ul class=\"toc-item\"><li><span><a href=\"#More-about-simple-indexing\" data-toc-modified-id=\"More-about-simple-indexing-6.1\"><span class=\"toc-item-num\">6.1&nbsp;&nbsp;</span>More about simple indexing</a></span></li><li><span><a href=\"#Array-indexing-(also-known-as-fancy-indexing)\" data-toc-modified-id=\"Array-indexing-(also-known-as-fancy-indexing)-6.2\"><span class=\"toc-item-num\">6.2&nbsp;&nbsp;</span><a href=\"https://docs.scipy.org/doc/numpy/user/quickstart.html#indexing-with-arrays-of-indices\" target=\"_blank\">Array indexing</a> (also known as <a href=\"https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing\" target=\"_blank\">fancy indexing</a>)</a></span></li><li><span><a href=\"#Boolean-array-indexing\" data-toc-modified-id=\"Boolean-array-indexing-6.3\"><span class=\"toc-item-num\">6.3&nbsp;&nbsp;</span>Boolean array indexing</a></span></li><li><span><a href=\"#Iterating\" data-toc-modified-id=\"Iterating-6.4\"><span class=\"toc-item-num\">6.4&nbsp;&nbsp;</span>Iterating</a></span></li></ul></li><li><span><a href=\"#Extending-arrays\" data-toc-modified-id=\"Extending-arrays-7\"><span class=\"toc-item-num\">7&nbsp;&nbsp;</span>Extending arrays</a></span></li><li><span><a href=\"#Permuting-(swapping)-dimensions-(axes)\" data-toc-modified-id=\"Permuting-(swapping)-dimensions-(axes)-8\"><span class=\"toc-item-num\">8&nbsp;&nbsp;</span>Permuting (swapping) dimensions (axes)</a></span></li><li><span><a href=\"#Increasing-and-decreasing-dimensions\" data-toc-modified-id=\"Increasing-and-decreasing-dimensions-9\"><span class=\"toc-item-num\">9&nbsp;&nbsp;</span>Increasing and decreasing dimensions</a></span></li><li><span><a href=\"#Slicing\" data-toc-modified-id=\"Slicing-10\"><span class=\"toc-item-num\">10&nbsp;&nbsp;</span>Slicing</a></span></li><li><span><a href=\"#Mathematics\" 
data-toc-modified-id=\"Mathematics-11\"><span class=\"toc-item-num\">11&nbsp;&nbsp;</span>Mathematics</a></span><ul class=\"toc-item\"><li><span><a href=\"#Some-school-math\" data-toc-modified-id=\"Some-school-math-11.1\"><span class=\"toc-item-num\">11.1&nbsp;&nbsp;</span>Some <em>school</em> math</a></span></li><li><span><a href=\"#Some-measurements\" data-toc-modified-id=\"Some-measurements-11.2\"><span class=\"toc-item-num\">11.2&nbsp;&nbsp;</span>Some measurements</a></span></li><li><span><a href=\"#Some-vector-math\" data-toc-modified-id=\"Some-vector-math-11.3\"><span class=\"toc-item-num\">11.3&nbsp;&nbsp;</span>Some vector math</a></span></li><li><span><a href=\"#Some-matrix-math\" data-toc-modified-id=\"Some-matrix-math-11.4\"><span class=\"toc-item-num\">11.4&nbsp;&nbsp;</span>Some matrix math</a></span></li></ul></li><li><span><a href=\"#Broadcasting\" data-toc-modified-id=\"Broadcasting-12\"><span class=\"toc-item-num\">12&nbsp;&nbsp;</span><a href=\"https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html\" target=\"_blank\">Broadcasting</a></a></span></li><li><span><a href=\"#How-fast-is-Numpy's-array-math?\" data-toc-modified-id=\"How-fast-is-Numpy's-array-math?-13\"><span class=\"toc-item-num\">13&nbsp;&nbsp;</span>How fast is Numpy's array math?</a></span></li><li><span><a href=\"#Arrays-of-objects\" data-toc-modified-id=\"Arrays-of-objects-14\"><span class=\"toc-item-num\">14&nbsp;&nbsp;</span>Arrays of objects</a></span></li><li><span><a href=\"#Structured-arrays\" data-toc-modified-id=\"Structured-arrays-15\"><span class=\"toc-item-num\">15&nbsp;&nbsp;</span>Structured arrays</a></span></li><li><span><a href=\"#Disk-I/O\" data-toc-modified-id=\"Disk-I/O-16\"><span class=\"toc-item-num\">16&nbsp;&nbsp;</span>Disk I/O</a></span><ul class=\"toc-item\"><li><span><a href=\"#Endianness\" data-toc-modified-id=\"Endianness-16.1\"><span class=\"toc-item-num\">16.1&nbsp;&nbsp;</span><a 
href=\"https://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.byteorder.html\" target=\"_blank\">Endianness</a></a></span></li></ul></li><li><span><a href=\"#Timming-NumPy?\" data-toc-modified-id=\"Timming-NumPy?-17\"><span class=\"toc-item-num\">17&nbsp;&nbsp;</span>Timming NumPy?</a></span></li></ul></div>\n\nIntro\nNumPy extends lists in Python when we need to work with arrays of numbers, providing a higher performance (throught vectorization) and functionality (basically, Linear Algebra).", "try:\n import numpy as np\nexcept:\n !pip3 install numpy --user\n import numpy as np", "Looking for information", "np.lookfor(\"invert\")", "Using IPython, remember that it's possible to use the tabulator to extend some command or to use a wildcard to get information about the numpy's stuff:", "np.*?", "Data types\nNumPy can works (create, operate and I/O) with arrays containig:\n1. Signed and unsigned integers of 8, 16, 32 and 64 bits.\n2. Floating point numbers of 32 and 64 bits.\n3. Complex (floating point) numbers of 64 and 128 bits.\n4. Strings.\nArray creation, allocation and initialization\nA word about efficiency\nNumPy has been designed to be efficient when it use blocks of memory (possiblely contiguous) of constant size. 
In other words, when the structures are static.\nAllocation (without initializing the data array)", "A = np.empty(12)\nprint(A) # Uninitialized values: whatever garbage was in that memory\n\nprint(np.empty_like(A))", "Allocating and initializing\nUsing a list", "l = [1, 2, 3]\ntype(l)\n\nA = np.array(l)\nA\n\nA = np.array([[1,1.0],(1+1j,.3)])\nprint(A)", "Using \"initializers\"", "np.zeros(10)\n\nnp.zeros((5,5))\n\nnp.ones(10)\n\nnp.full((5,5), 2)\n\nnp.arange(10)\n\nnp.linspace(1., 4., 6)\n\nnp.random.rand(10)\n\n# The random number generator is not reset in each call\nprint(np.random.random((5,5)))\nprint(np.random.random((5,5)))\n\nprint(np.eye(5)) # Identity matrix\n\nA = np.array([i for i in range(5)]) # 1-D list comprehension\nprint(A, A[1], A.shape)\n\nA = np.array([[j+i*4 for j in range(4)] for i in range(5)]) # 2-D list comprehension\nprint(A, A.shape)", "Defining types", "A = np.array([1, 2], dtype=np.uint8)\nA\n\ntype(A[0])", "Reshaping", "A = np.array([[1,2,3],[4,5,6]])\nprint(A)\nprint(A.shape)\n\nnp.reshape(A, (3, 2))", "C vs Fortran order", "A = np.empty((5,4), dtype=np.int16, order='C') # C order is the default\nfor row in range(A.shape[0]):\n for col in range(A.shape[1]):\n A[row, col] = col+row*A.shape[1]\nprint(A)\n\nprint(np.isfortran(A))\n\nB = np.reshape(A, (4,5), order='F')\nprint(B)\n\nprint(np.isfortran(B))\n\nprint(A.flags)\nprint(B.flags)", "Dynamic creation\nBe careful, dynamic arrays in NumPy are slower than static ones.\n\nCreating an empty (unallocated) array for working with 8-bit unsigned integers:", "A = np.array([], dtype=np.uint8)\nprint(A, A.dtype)\nA\n\nA = np.concatenate((A, np.array([1, 2], dtype=np.uint8)))\nA", "(see NumPy's data types)\n\nNumPy arrays are objects of the \"numpy.ndarray\" class:", "print(type(A))", "All arrays have at least one dimension and a length in each dimension.", "print(f\"number of dimensions={A.ndim}, shape={A.shape}\")", "Views and copies\nPointer copy\n\nSimple assignments do not make a copy of array objects 
or of their data:", "# https://stackoverflow.com/questions/56090021/list-comprehension-python-prime-numbers\nPrimes_less_than_100 = np.array([x for x in range(2,100) if not any([x % y == 0 for y in range(2, int(x/2)+1)])])\nPrimes_less_than_100\n\nA = Primes_less_than_100 # This is a copy of pointers\nA\n\nA[0]=1\nA\n\nPrimes_less_than_100\n\nid(A)\n\nid(Primes_less_than_100)\n\nA is Primes_less_than_100", "Shallow copy (view)", "print(Primes_less_than_100.shape)\n\nA = Primes_less_than_100.reshape(5,5)\nprint(A.shape)\n\nA is Primes_less_than_100\n\nA.base is Primes_less_than_100\n\nA.flags.owndata\n\nprint(Primes_less_than_100, A)\n\nA[0,0] = 2\nprint(Primes_less_than_100, A)", "Deep (object + data) copy", "Primes_less_than_100 = np.array([x for x in range(2,100) if not any([x % y == 0 for y in range(2, int(x/2)+1)])])\nA = np.copy(Primes_less_than_100)\n\nA is Primes_less_than_100\n\nPrimes_less_than_100[0]\n\nA[0] = 1\n\nprint(Primes_less_than_100[0], A[0])\n\ntimeit A=Primes_less_than_100 # This is much faster, depending on the size of the array\n\n%timeit A = np.copy(Primes_less_than_100)", "More about array indexing and iterating\nNumPy arrays are indexed by nonnegative integers.\nMore about simple indexing", "A = np.arange(20).reshape(5, 4)\nprint(A)\nprint(A[1, 2]) # [row, column]\n\nprint(A[1]) # Get the second row\n\nA[1][2] # [row][column]. Access first to the row, and then, to the column", "Be careful:", "timeit A[1][2]\n\ntimeit A[1,2] # Access directly to the element", "Array indexing (also known as fancy indexing)", "A = np.arange(20).reshape(5, 4)\nprint(A)\ndim_0_coordinates = [0, 1, 2]\ndim_1_coordinates = [3, 2, 1]\nprint(A[dim_0_coordinates, dim_1_coordinates])", "Notice that the indices can be determined by lists:", "print(A[[i for i in range(3)], [i for i in range(3,0,-1)]])", "... 
or by NumPy arrays:", "print(np.arange(3))\nprint(np.arange(3,0,-1))\nprint(A[np.arange(3), np.arange(3,0,-1)])\n\nA = np.arange(100).reshape(10,10)\nprint(A)\nlst_of_rows = [1, 2, 4]\nlst_of_columns = [1, 2, 5]\nsub_matrix = A[lst_of_rows][:, lst_of_columns]\nprint(sub_matrix)", "Be careful, advanced indexing always returns a copy of the data (contrast with basic slicing that returns a view).", "B = A[[0, 1, 2], [0, 1, 2]]\nprint(B)\n\nprint(A)\n\nB[...] = -1\n\nprint(B)\n\nprint(A)", "### Boolean array indexing\n\nFinding the elements bigger than ...", "A = np.arange(20)\nprint(A, A.shape)\nbool_idx = (A>12)\nprint(bool_idx, bool_idx.shape)", "Printing the elements bigger than ...", "print(A[bool_idx])", "Getting the elements of an array smaller than ...:", "A = (100*(0.5-np.random.rand(25))).astype(np.int16).reshape(5,5)\nprint(A)\nprint(A[A<0]) # Notice that len(A[A<0]) <= len(A)", "Changing the elements smaller than ...:", "A[A<0] = 0\nprint(A)", "Iterating", "for row in A:\n print(row)\n\nfor element in A.flat:\n print(element, end=' ')", "Extending arrays\n\nAppending elements:", "A = np.arange(3)\nprint(A)\nA = np.append(A, 4)\nprint(A)\n\nA = np.append(-1, A)\nprint(A)\n\nB = np.concatenate((A, A), axis=None)\nprint(B)\n\nB = np.concatenate((A, A), axis=0)\nprint(B)\n\nA = np.arange(30).reshape(5,6)\nprint(A)\n\nB = np.ones(5, dtype=np.int16).reshape(5,1)\nprint(B)\n\nC = np.hstack((A, B))\nprint(C)\n\nB = np.ones(7, dtype=np.int16)\nprint(B)\n\nC = np.vstack((C, B))\nprint(C)", "Permuting (swapping) dimensions (axes)\n\nPermuting dimensions only makes sense when the number of dimensions is > 1:", "A = np.arange(20).reshape(5,4)\nprint(A, A.shape)\n\nprint(np.transpose(A), np.transpose(A).shape)\n\nprint(A.T, A.T.shape)", "Transposing permutes all the dimensions:", "A = np.arange(60).reshape(3,5,4)\nprint(A, A.shape)\n\nprint(A.T, A.T.shape)", "Increasing and decreasing dimensions\n\nShape and dimensions:", "A = np.arange(5)\nprint(A, A.shape, A.ndim)", 
"Increasing the dimensions on the right:", "B = A[:, None]\nprint(B, B.shape, B.ndim)", "Increasing the dimensions on the left:", "B = A[None, :]\nprint(B, B.shape, B.ndim)", "For convenience, NumPy provides the np.newaxis object instead of None (although both are equivalent):", "B = A[np.newaxis, :]\nprint(B, B.shape, B.ndim)", "Slicing", "A = np.arange(10)\nprint(A)\n\nprint(A[1:3]) # [start:end] (end not included)\n\nprint(A[1:5:2]) # [start:end:step]\n\nprint(A[:3]) # By default, the first one\n\nprint(A[3:])\n\nprint(A[::3])\n\nprint(A, A[:], A[::])\n\nprint(A[::-1])\n\nA = np.arange(50).reshape(5,10)\nprint(A)\nprint(A[:]) # <- works, although it is preferable ...\nprint(A[:,:]) # <- ... this\n\nprint(A[0:2,:])\n\nprint(A[:,1::2])\n\nprint(A[A.shape[0]-2:,A.shape[1]-2:]) # bottom-right 2x2 array:\n\nprint(A[2]) # Row extraction\nprint(A[2:3,:]) # Sub-matrix extraction", "Slices are views of the same data:", "A = np.arange(10)\nprint(A)\nB = A[1:3] # B is simply a new view of A\nB[:] = 1000\nprint(B)\nprint(A)\n\nA = np.arange(10)\nprint(A)\nB = A[::-1]\nB[1] = 1000\nprint(B)\nprint(A)", "Copying slices:", "A = np.arange(10)\nprint(A)\nB = A[::-1].copy()\nB[1] = 1000\nprint(B)\nprint(A)", "Ellipsis:", "A = np.arange(27).reshape(3,3,3)\nprint(A)\n\nprint(A[1,:,:])\n\nprint(A[1,...])", "Mathematics\nSome school math", "A = np.random.randint(low=-1, high=+2, size=10)\nprint(A)\nprint(-A)\n\nB = np.random.randint(low=-1, high=+2, size=10)\nprint(A)\nprint(B)\nprint(\"-\"*31)\nprint(A+B)\n\nprint(A*B)\nprint(A/B)\nprint(A//2)\nprint(A>>1)\nprint(A*2)\nprint(A**2)\nprint(1/A)\nprint(np.absolute(A))", "Some measurements", "A = np.arange(10)\nprint(A)\n\nprint(\"Sum =\", np.sum(A))\nprint(\"Max =\", np.max(A))\nprint(\"Min =\", np.min(A))\nprint(\"L0 norm =\", np.linalg.norm(A, ord=0)) # number of nonzero entries\nprint(\"L1 norm =\", np.linalg.norm(A, ord=1)) # np.sum(A)\nprint(\"L2 norm =\", np.linalg.norm(A)) # math.sqrt(sum(A_i**2 for A_i in A))\nprint(\"L4 norm =\", 
np.linalg.norm(A, ord=4))", "Some vector math", "from IPython.display import display, Math\nA = np.arange(10)\nprint(A)\nB = A[::-1]\ndisplay(Math(r\"A \\cdot B={}\".format(np.dot(A, B))))\nprint(\"A dot product B =\", np.dot(A, B))\nprint(\"sum(A_i*B_i for A_i, B_i in zip(A, B))=\", sum(A_i*B_i for A_i, B_i in zip(A, B)))\nprint(\"sum(A[:]*B[:]) =\", sum(A[:]*B[:]))\n\nA = np.array([1, 2, 3])\nB = np.array([-1, -2, -3])\ndisplay(Math(r\"A \\times B={}\".format(np.cross(A, B))))\n# https://stackoverflow.com/questions/1984799/cross-product-of-two-vectors-in-python\nprint(f\"[A[1]*B[2] - A[2]*B[1], A[2]*B[0] - A[0]*B[2], A[0]*B[1] - A[1]*B[0]] = [{A[1]*B[2] - A[2]*B[1]} {A[2]*B[0] - A[0]*B[2]} {A[0]*B[1] - A[1]*B[0]}]\")\n\nA = np.arange(4).reshape(2,2)\nprint(A)\nB = A[::-1]\nprint(B)\nprint(\"A dot product B = \", np.dot(A, B)) # dot[i,j] = sum(A[i,:] * B[:,j])\n# https://stackoverflow.com/questions/11033573/difference-between-numpy-dot-and-inner\nfor i in range(A.shape[0]):\n for j in range(A.shape[1]):\n print(np.sum(A[i,:]*B[:,j]), end=\" \")\n print(\"\")\nprint(\"A inner product B =\", np.inner(A, B)) # inner[i,j] = (A[i,:] * B[j,:])\nfor i in range(A.shape[0]):\n for j in range(A.shape[1]):\n print(np.sum(A[i,:]*B[j,:]), end=\" \")\n print(\"\")", "Some matrix math", "A = np.array([[(i+j)%2 for j in range(10)] for i in range(10)])\nprint(A, A.shape)\n\nB = np.array([[1] for i in range(10)])\nprint(B, B.shape)\n\nC = A @ B # Product matrix-matrix\nprint(C)\n\nprint(C.T, C.T.shape, C.shape) # Transpose\n\nprint(\"Determinant =\", np.linalg.det(A))\n\nR = np.random.rand(5,5)\niR = np.linalg.inv(R)\nprint(\"Inverse =\", np.linalg.inv(iR)) # Matrix inverse\n\nprint(np.round(R @ iR))\n\nprint(R @ iR)\n\nprint(np.round(iR @ R))\n\nprint(iR @ R)\n\nR = np.random.rand(5,4)\niR = np.linalg.pinv(R) # Pseudo-inverse\nprint(iR)\n\nprint(np.round(R @ iR))\n\nprint(R @ iR)\n\nprint(np.round(iR @ R))\n\nprint(iR @ R)", "Broadcasting\nIn vectorized operations, NumPy 
\"extends\" scalars and arrays with one of its dimensions equal to the size of the other(s) array(s).", "A = np.array([1, 2, 3])\nb = 1\nprint(A+b) # Scalars are broadcasted\n\nB = [1]\nprint(A+B) # When possible, arrays are broadcasted in all possible dimensions\n\nA = np.ones((5,3), dtype=np.int16)\nprint(A)\nB = np.arange(3)\nprint(B)\nprint(A+B) # Broadcasting in the axis 0\n\nprint(A)\nB = np.arange(5).reshape((5, 1))\nprint(B)\nprint(A+B) # Broadcasting in the axis 1", "Two dimensions are compatible when they are equal, or\none of them is 1. Otherwise a ValueError: frames are not aligned is thrown.", "B = np.arange(4)[:, None]\nprint(B)\n\nprint(A.shape)\n\nprint(B.shape)\n\ntry:\n A + B\nexcept ValueError as e:\n print(\"ValueError exception: \", end='')\n if hasattr(e, 'message'):\n print(e.message)\n else:\n print(e)", "How fast is Numpy's array math?", "A = np.array([[(i*10+j) for j in range(10)] for i in range(10)])\nprint(A, A.shape)", "Add B[] to all the rows of A[][] using scalar arithmetic:", "C = np.empty_like(A)\ndef add():\n for i in range(A.shape[1]):\n for j in range(A.shape[0]):\n C[i, j] = A[i, j] + B[j]\n%timeit add()\nprint(C)", "Add B[] to all the rows of B[][] using vectorial computation:", "C = np.empty_like(A)\ndef add():\n for i in range(A.shape[1]):\n C[i, :] = A[i, :] + B\n%timeit add()\nprint(C)", "Add B[] to all the rows of A[][] using fully vectorial computation:", "%timeit C = A + B # <- broadcasting is faster\nprint(C)", "Arrays of objects\n\nFor example, an array of strings:", "A = np.array(['hello', 'world!'])\nprint(A)\nprint(A.shape)\nprint(np.char.upper(A))", "Simulating a dictionary:", "A = np.array([(\"Spain\", 100), (\"France\", 200), (\"Italy\", 300)])\nprint(A) # Notice that all the elements are srings\nprint(A.shape)\nprint(A[:,0])\nprint(A[A[:,0] == \"France\"])\nprint(A[A[:,0] == \"France\"][:,1])\nprint(\"The value associated to the key France is\", A[A[:,0] == \"France\"][:,1][0])", "A dictionary is faster:", 
"%timeit A[A[:,0] == \"France\"][:,1][0]\ndictionary = {\"Spain\":100, \"France\":200, \"Italy\":300}\nprint(dictionary[\"France\"])\n%timeit dictionary[\"France\"]", "However, this difference can be smaller depending on the type of search:", "others = [value for key, value in dictionary.items() if key != \"France\"]\nprint(others)\n%timeit [value for key, value in dictionary.items() if key != \"France\"]\n\nprint(A[:,0] != \"France\")\nprint(A[A[:,0] != \"France\"])\nprint(A[A[:,0] != \"France\"][:,1])\nprint(A[A[:,0] != \"France\"][:,1].astype(np.int16))\n%timeit A[A[:,0] != \"France\"][:,1].astype(np.int16)", "Structured arrays\n\nCreate a 1D array of (two) records, where each record has the structure (int, float, char[10]).", "X = np.array([(1, 2., \"Hello\"), (3, 4., \"World\")],\n dtype=[(\"first\", \"i4\"),(\"second\", \"f4\"), (\"third\", \"S10\")])\n# See https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html\nprint(X)", "Get the first element of every record:", "print(X[\"first\"])", "Get the first record:", "print(X[0])", "Get the second element of every record:", "print(X[\"second\"])", "Get the third element of every record:", "print(X[\"third\"])", "Disk I/O\n\nOutput data to an ASCII file:", "Data = np.array([[1., 200.], [2., 150.], [3., 250.]])\nnp.savetxt(\"data.txt\", Data)\n!cat data.txt", "Input data from an ASCII file:", "np.genfromtxt('data.txt')", "Output data to a binary file (using the native endianness):", "ofile = open(\"data.float64\", mode=\"wb\")\nData.tofile(ofile)\nofile.close()", "Input data from a binary file (using the native endianness):", "print(np.fromfile(\"data.float64\", dtype=np.float64))", "NumPy and C use the same endianness:", "!cat create_float64.c\n!gcc create_float64.c -o create_float64\n!./create_float64\n\nprint(np.fromfile(\"data.float64\", dtype=np.float64))", "Specifying the endianness:", "print(np.fromfile(\"data.float64\", dtype=\">d\"))\n# (> = big-endian, d = double, see 
https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html)", "Making things easier:", "ofile = open(\"data.npy\", mode=\"wb\")\nA = (100*np.random.rand(2,3)).astype(np.uint16)\nprint(A)\n\nnp.save(ofile, A)\nofile.close()\n\n!ls data.npy\n\nprint(np.load(\"data.npy\"))", "Endianness", "print(A.dtype.byteorder)", "Timing NumPy\n\nLet's define a list and compute the sum of its elements, timing it:", "l = list(range(0,100000)); print(type(l), l[:10])\n%timeit sum(l)", "And now, let's create a NumPy array and time the sum of its elements:", "A = np.arange(0, 100000); print(type(A), A[:10])\n%timeit np.sum(A)", "And what about a pure C implementation of an equivalent computation:", "!cat sum_array.c\n!gcc -O3 sum_array.c -o sum_array\n%timeit !./sum_array", "Another example:", "# Example extracted from https://github.com/pyHPC/pyhpc-tutorial\nlst = range(1000000)\n\nfor i in lst[:10]:\n print(i, end=' ')\nprint()\n\n%timeit [i + 1 for i in lst] # A Python list comprehension (iteration happens in C but with PyObjects)\nx = [i + 1 for i in lst]\n\nprint(x[:10])\n\narr = np.arange(1000000) # A NumPy array of integers\n%timeit arr + 1 # Use operator overloading for nice syntax; now iteration is in C with ints\ny = arr + 1\n\nprint(y[:10])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
RyRose/College-Projects
lab4/Lab 4.ipynb
mit
[ "Lab 4\nRyan Rose\nScientific Computing\n9/21/2016", "## Imports!\n\n%matplotlib inline\nimport os\nimport re\nimport string\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nfrom matplotlib.mlab import PCA\nfrom scipy.cluster.vq import kmeans, vq", "Loading Fifty Books\nFirst, we load all fifty books from their text files.", "os.chdir(\"/home/ryan/School/scientific_computing/labs/lab4/books\")\nfilenames = os.listdir()\n\nbooks = []\nfor name in filenames:\n with open(name) as f:\n books.append(f.read())", "Cleaning up the Data\nNext, we create a mapping of titles to their book's text along with removing the Project Gutenberg header and footer.", "def get_title(text):\n pattern = \"\\*\\*\\*\\s*START OF (THIS|THE) PROJECT GUTENBERG EBOOK ([A-Z,;' ]*)\\*\\*\\*\"\n m = re.search(pattern, text)\n if m:\n return m.group(2).strip()\n return None\n\ndef remove_gutenberg_info(text):\n pattern = \"\\*\\*\\*\\s*START OF (THIS|THE) PROJECT GUTENBERG EBOOK ([A-Z,;' ]*)\\*\\*\\*\"\n start = re.search(pattern, text).end()\n pattern = \"\\*\\*\\*\\s*END OF (THIS|THE) PROJECT GUTENBERG EBOOK ([A-Z,;' ]*)\\*\\*\\*\"\n end = re.search(pattern, text).start()\n return text[start:end]\n\ncut_off_books = { get_title(book):remove_gutenberg_info(book) for book in books}\npd.DataFrame(cut_off_books, index=[\"Book's Text\"]).T.head()", "Next, we iterate through all of the words and strip all characters that are not upper- or lower-case letters. If the resulting word is empty, we throw it out. 
Else, we add the word in all lowercase stripped of all non-ASCII letters to our list of words for that book.\nThis is useful to determine word frequencies.", "def strip_word(word, alphabet):\n ret = \"\"\n for c in word:\n if c in alphabet:\n ret += c.lower()\n if len(ret) == 0:\n return None\n else:\n return ret\n\ndef get_words(book):\n alphabet = set(string.ascii_letters)\n b = book.split()\n words = []\n for word in b:\n w = strip_word(word, alphabet)\n if w:\n words.append(w)\n return words\n\ncut_books = {name:get_words(book) for name, book in cut_off_books.items()}", "Determining Frequencies\nNow, we determine the frequencies for each word and putting them in a dictionary for each book.", "def get_word_freq(words):\n word_counts = {}\n for word in words:\n if word in word_counts:\n word_counts[word] += 1\n else:\n word_counts[word] = 1\n return word_counts\n\nbook_freqs = {}\nfor name, words in cut_books.items():\n book_freqs[name] = get_word_freq(words)", "Top 20 Words\nNow, let's determine the top 20 words across the whole corpus", "total_word_count = {}\nfor dicts in book_freqs.values():\n for word, count in dicts.items():\n if word in total_word_count:\n total_word_count[word] += count\n else:\n total_word_count[word] = count\n\na, b = zip(*total_word_count.items())\ntuples = list(zip(b, a))\ntuples.sort()\ntuples.reverse()\ntuples[:20]\n\n_, top_20_words = zip(*tuples[:20])\ntop_20_words", "Creating the 20-dimensional vectors\nUsing the top 20 words above, let's determine the book vectors.", "def filter_frequencies(frequencies, words):\n d = {}\n for word, freq in frequencies.items():\n if word in words:\n d[word] = freq\n return d\n\nlabels = {}\nfor name, freqs in book_freqs.items():\n labels[name] = filter_frequencies(freqs, top_20_words)\n\ndf = pd.DataFrame(labels).fillna(0)\ndf = (df / df.sum()).T\ndf.head()", "Creating the Elbow Graph\nLet's try each k and see what makes the sharpest elbow.", "kvals = []\ndists = []\nfor k in range(2, 11):\n 
centroids, distortion = kmeans(df, k)\n kvals.append(k)\n dists.append(distortion)\n\nplt.plot(kvals, dists)\nplt.show()", "We can see that the best k is 3 or 6.\nClustering\nLet's cluster based on k = 3 and plot the clusters.", "centroids, _ = kmeans(df, 3)\nidx, _ = vq(df, centroids)\nclusters = {}\nfor i, cluster in enumerate(idx):\n if cluster in clusters:\n clusters[cluster].append(df.iloc[i].name)\n else:\n clusters[cluster] = [df.iloc[i].name]\nclusters", "Do the clusters make sense?\nYes. For instance, we can see that The Republic and The Iliad of Homer are in the same cluster. \nPerforming PCA\nNow, let's perform PCA and determine the most important elements and plot the clusters.", "m = PCA(df)\n\nfig, ax = plt.subplots()\n\nfor i in range(len(idx)):\n plt.plot(m.Y[idx==i, 0], m.Y[idx==i, 1], \"o\", alpha=.75)\n\nfor index, (x, y) in enumerate(zip(m.Y[:, 0], m.Y[:, 1])):\n plt.text(x, y, df.index[index])\n \nfig.set_size_inches(36,40)\nplt.show()\n\nm.sigma.sort_values()[-2:]", "We can see the data clusters well and the most important words are i and the based on them having the standard deviation. This is based on the concept of PCA.fracs aligning to the variance based on this documentation: https://www.clear.rice.edu/comp130/12spring/pca/pca_docs.shtml. And, since PCA.sigma is the square root of the variance, the highest standard deviation should correspond to the highest value for the PCA.fracs. 
Then, i and the are the most important words \nNew Book\nSo, we continue as before by loading Anne of Green Gables, parsing it, creating an array, and normalizing the book vector.", "with open(\"../pg45.txt\") as f:\n anne = f.read()\nget_title(anne)\n\nanne_cut = remove_gutenberg_info(anne)\nanne_words = get_words(anne_cut)\nanne_freq = {get_title(anne):filter_frequencies(get_word_freq(anne_words), top_20_words)}\nanne_frame = pd.DataFrame(anne_freq).fillna(0)\nanne_frame = (anne_frame / anne_frame.sum()).T\nanne_frame", "Now, let's do k-means based on the previously determined k.", "df_with_anne = df.append(anne_frame).sort_index()\n\ncentroids, _ = kmeans(df_with_anne, 3)\nidx2, _ = vq(df_with_anne, centroids)\nclusters = {}\nfor i, cluster in enumerate(idx2):\n if cluster in clusters:\n clusters[cluster].append(df_with_anne.iloc[i].name)\n else:\n clusters[cluster] = [df_with_anne.iloc[i].name]\nclusters\n\ncoords = m.project(np.array(anne_frame).flatten())\n\nfig, _ = plt.subplots()\n\nplt.plot(coords[0], coords[1], \"s\", markeredgewidth=5)\n\nfor i in range(len(idx)):\n plt.plot(m.Y[idx==i, 0], m.Y[idx==i, 1], \"o\", alpha=.75)\n\nfor index, (x, y) in enumerate(zip(m.Y[:, 0], m.Y[:, 1])):\n plt.text(x, y, df.index[index])\n \nfig.set_size_inches(36,40)\n\nplt.show()", "We can see that the new book is the black square above. 
In addition, it makes sense it fits into that cluster especially when we compare it to Jane Eyre.\nStop Words", "stop_words_text = open(\"../common-english-words.txt\").read()\nstop_words = stop_words_text.split(\",\")\nstop_words[:5]\n\nword_counts_without_stop = [t for t in tuples if t[1] not in stop_words]\nword_counts_without_stop[:20]\n\n_, top_20_without_stop = zip(*word_counts_without_stop[:20])\ntop_20_without_stop\n\nno_stop_labels = {}\nfor name, freqs in book_freqs.items():\n no_stop_labels[name] = filter_frequencies(freqs, top_20_without_stop)\n\ndf_without_stop = pd.DataFrame(no_stop_labels).fillna(0)\ndf_without_stop = (df_without_stop / df_without_stop.sum()).T\ndf_without_stop.head()\n\nkvals = []\ndists = []\nfor k in range(2, 11):\n centroids, distortion = kmeans(df_without_stop, k)\n kvals.append(k)\n dists.append(distortion)\n\nplt.plot(kvals, dists)\nplt.show()", "We can see that our k could be 3 or 7. Let's choose 7.", "centroids, _ = kmeans(df_without_stop, 7)\nidx3, _ = vq(df, centroids)\nclusters = {}\nfor i, cluster in enumerate(idx3):\n if cluster in clusters:\n clusters[cluster].append(df_without_stop.iloc[i].name)\n else:\n clusters[cluster] = [df_without_stop.iloc[i].name]\nclusters\n\nm2 = PCA(df_without_stop)\n\nfig, _ = plt.subplots()\n\nfor i in range(len(idx3)):\n plt.plot(m2.Y[idx3==i, 0], m2.Y[idx3==i, 1], \"o\", alpha=.75)\n \nfor index, (x, y) in enumerate(zip(m2.Y[:, 0], m2.Y[:, 1])):\n plt.text(x, y, df_without_stop.index[index])\n \nfig.set_size_inches(36,40)\nplt.show()\n\nm2.sigma.sort_values()[-2:]", "We can see that man and mr are the most important words in this set. This seems to signify male-dominated stories and characters. This makes sense given that historically stories typically focus on men.\nConclusion\nWe can see that books' words cluster based on year and genre. In addition, we found an interesting finding without stop words that this specific data set is male-dominated." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jingfengli/jingfengli.github.io
Plots.ipynb
apache-2.0
[ "use the zipcode", "%matplotlib inline\nimport pandas as pd\nimport mpld3\nmpld3.enable_notebook()\n%matplotlib inline\nimport matplotlib\nimport seaborn as sns\nmatplotlib.rcParams['savefig.dpi'] = 2 * matplotlib.rcParams['savefig.dpi']\n\nimport pandas as pd\n\nMaps = pd.read_csv('./zip_code_database.csv')\n\nimport dill\nwith open('../LendingClubPrediction/CleanedUpData.pkl','rb') as in_strm:\n df = dill.load(in_strm)\n\n\ndf['int_rate'] = [float(df.int_rate.iloc[i][:-1]) for i in xrange(len(df.int_rate))]\n\ncounty = pd.read_csv('../zcta_county_rel_10.txt')\n\nvalide_zip= pd.unique(df.zip_code)\n\nzip_number =[int(i.replace('xx','')) for i in valide_zip]\n\na=(Maps.zip/100).floordiv(1)\n\nzip_indata = pd.DataFrame({'zip_3':zip_number})\n\nMaps['zip_3'] = a\n\nafter_merge=Maps.merge(zip_indata,how='inner',on='zip_3')\n\nafter_merge.to_csv('trial_1.csv')\n\npd.unique(after_merge.zip).shape\n\ncounty_zip = after_merge.merge(county,how='inner',left_on='zip',right_on='ZCTA5')\n\ncounty_zip.iloc[:500].to_csv('county_1.csv')\n\ncounty_zip.columns\n\nfrom bokeh.sampledata import us_states, us_counties, unemployment\nfrom bokeh.plotting import figure, show, output_file, output_notebook\n\n\nus_states = us_states.data.copy()\nus_counties = us_counties.data.copy()\nunemployment = unemployment.data\n\ndel us_states[\"HI\"]\ndel us_states[\"AK\"]\n\nstate_xs = [us_states[code][\"lons\"] for code in us_states]\nstate_ys = [us_states[code][\"lats\"] for code in us_states]\n\ncounty_xs=[us_counties[code][\"lons\"] for code in us_counties if us_counties[code][\"state\"] not in [\"ak\", \"hi\", \"pr\", \"gu\", \"vi\", \"mp\", \"as\"]]\ncounty_ys=[us_counties[code][\"lats\"] for code in us_counties if us_counties[code][\"state\"] not in [\"ak\", \"hi\", \"pr\", \"gu\", \"vi\", \"mp\", \"as\"]]\n\ncolors = [\"#F1EEF6\", \"#D4B9DA\", \"#C994C7\", \"#DF65B0\", \"#DD1C77\", \"#980043\"]\n\ncounty_colors = []\nfor county_id in us_counties:\n if us_counties[county_id][\"state\"] in 
[\"ak\", \"hi\", \"pr\", \"gu\", \"vi\", \"mp\", \"as\"]:\n continue\n try:\n rate = unemployment[county_id]\n idx = min(int(rate/2), 5)\n county_colors.append(colors[idx])\n except KeyError:\n county_colors.append(\"black\")\n\n\n\nlen(county_xs[40])", "LET US JUST DO STATES\nPLANS:\n1. include a page of exploring different features\n 1.1 Need to get the map one working. \n 1.1.1 give options about further filtering based on FICO credit score, and LC (MAYBE, depends on whether I have time or not.)\n 1.2 Need to bar graphs about FICO score, home owner, other features too. So the KEY is make it PRETTY\n\n2. the prediction app\n 2.1 Need to explain why use the ensemble model, Simple answer is that it works better. LOL. Boosting decision tree seems to work better, though logistic regression seems to be enough. \n 2.2 Another issue is that if I am going to deploy to Heroku, probably I can only use logistic regression. \n 2.3 Engineer some other features?? Like the difference between low and high score. Maybe some nlp stuff, not sure this will work. Or the employment rate of the states? Libral vs Conservative? This sorts of thing. I should just factor it in use a mean model.\n\n3. maybe a profolio, and a trainning program\n 3.1 This is an afterthought. 
\n 3.2 involves with getting the API working\n\nsome API stuff", "import simplejson\nfrom requests_oauthlib import OAuth1\nimport requests\nwith open(\"./lendingclub_secrets.json.nogit\") as fh:\n secrets = simplejson.loads(fh.read())\n\n# create an auth object\nauth = OAuth1(\n secrets[\"api_key\"],\n# secrets[\"api_secret\"],\n# secrets[\"access_token\"],\n# secrets[\"access_token_secret\"]\n)\n\n\n\n\n# Query Parameters:\n# showAll — A non-required Boolean parameter that defines the contents of the result.\n# showAll= False\n\nparams={'showAll' : True}\nr = requests.get(\"https://api.lendingclub.com/api/investor/v1/loans/listing\", headers={'Authorization':'5Rtb7dWC4Wps2C3VRAn2hBQVERg='},params=params)\n# r = requests.get(\"https://api.lendingclub.com/api/64233077/v1/loans/listing\", params=params)\n\n# \nprint r\n\nloanlist=r.json()\nprint len(loanlist['loans'])\n\nimport dill\nwith open('loanlist_3.pkl','wb') as out_strm:\n dill.dump(loanlist,out_strm)\n\nloanlist['loans'][0]\n\nlen(loanlist['loans'])\n\nids = []\nfor i in range(348):\n ids.append(loanlist['loans'][i]['memberId'])\n\n# output_file(\"choropleth.html\", title=\"choropleth.py example\")\n# output_notebook()\n# TOOLS = (\"hover,save\")\n# p = figure(title=\"US Unemployment 2009\", toolbar_location=\"left\",tools =TOOLS,\n# plot_width=1100, plot_height=700)\n\n# p.patches(county_xs, county_ys, fill_color=county_colors, fill_alpha=0.7,\n# line_color=\"white\", line_width=0.5)\n# p.patches(state_xs, state_ys, fill_alpha=0.0,\n# line_color=\"#884444\", line_width=2)\n\n# show(p)\n\nimport dill\nwith open('loanlist_2.pkl','rb') as in_strm:\n loanlist_2 = dill.load(in_strm)\nwith open('loanlist.pkl','rb') as in_strm:\n loanlist_1 = dill.load(in_strm)\n\nids_1 = [loanlist_1['loans'][i]['id'] for i in range(348)]\nids_2 = [loanlist_2['loans'][i]['id'] for i in range(len(loanlist_2['loans']))]\n\n\nmissing =[]\nfor i in ids_1:\n if i not in ids_2:\n 
missing.append(i)\n\nlen(missing)\n\nloanlist_1['loans'][1]\n\ndf.columns", "Build the data used for the figure", "df['paidoff'] = (df.stat== 1)\n\nbystate = pd.DataFrame()\nbystate['mean_rate'] = df.groupby([u'addr_state']).mean().paidoff\nbystate['default_rate'] = 1- df.groupby([u'addr_state']).mean().paidoff\n\nbystate['count'] = df.groupby([u'addr_state']).count().paidoff\nbystate['fico_range_high'] = df.groupby([u'addr_state']).mean().fico_range_high\nbystate['fico_range_high'] = df.groupby([u'addr_state']).mean().fico_range_high\n\n\nbystate.to_csv('bystate.csv')\n\ndf[df.addr_state=='IA']\n\nfrom bokeh.plotting import figure, show, output_file\n", "Loan by grade", "from bokeh.models import HoverTool, ColumnDataSource\nfrom collections import OrderedDict\n\n\n# bygrade = pd.DataFrame()\n# bygrade['mean_rate'] = df.groupby([u'sub_grade']).mean().paidoff\n# bygrade['default_rate'] = 1-df.groupby([u'sub_grade']).mean().paidoff\n# bygrade['int_rate'] = df.groupby([u'sub_grade']).mean().int_rate\n\n# bygrade['counts'] = df.groupby([u'sub_grade']).count().paidoff\n\nbygrade = pd.DataFrame()\nbygrade['mean_rate'] = df.groupby([u'grade']).mean().paidoff\nbygrade['default_rate'] = 1-df.groupby([u'grade']).mean().paidoff\nbygrade['int_rate'] = df.groupby([u'grade']).mean().int_rate\n\nbygrade['counts'] = df.groupby([u'grade']).count().paidoff\n\nxgrades = [i for i in bygrade.index]\npay_grades = bygrade.mean_rate.values*100\ndft_grades = bygrade.default_rate.values*100\n\n# xx = df.groupby([u'sub_grade']).mean()\n\n# TOOLS = \"hover,save\"\n\n# p = figure(background_fill=\"#EFE8E2\", \n# x_range=xgrades,\n# x_axis_label='LC grade', y_axis_label=('Pay off rate (%)'),\n# y_range = [0, 100],\n# title=\"Loan Outcome by LendingClub Grade\",\n# tools = TOOLS,\n# plot_width=800, \n# plot_height=400)\n\n# source1 = ColumnDataSource(\n# data=dict(pay_grades=pay_grades, dft_grades=dft_grades,int_rate=bygrade['int_rate'].values)\n# )\n# source2 = ColumnDataSource(\n# 
data=dict(pay_grades=pay_grades, dft_grades=dft_grades,int_rate=bygrade['int_rate'].values)\n# )\n\n# p.rect(xgrades, pay_grades/2, 0.6, pay_grades,\n# fill_color=\"#08c994\", source = source1)\n# p.rect(xgrades, dft_grades/2 + pay_grades, 0.6,dft_grades,\n# fill_color=\"#ff5a00\", source = source2)\n\n\n# hover = p.select(dict(type=HoverTool))\n# hover.tooltips = OrderedDict([\n# ('Grade', \"$x\"),\n# ('Payoff rate (%)', '@pay_grades'),\n# ('Default rate (%)', '@dft_grades'),\n# ('Interest (%)','@int_rate'),\n# ])\n\n# show(p)", "Loan by credit score", "byfico = pd.DataFrame()\nbyfico['mean_rate'] = df.groupby([u'fico_range_high']).mean().paidoff.iloc[2:]\nbyfico['default_rate'] = 1- df.groupby([u'fico_range_high']).mean().paidoff[2:]\nbyfico['count'] = df.groupby([u'fico_range_high']).count().paidoff[2:]\n\n# TOOLS = \"hover,save\"\n# xfico = [(str(int(byfico.index[i]-4)) + ' - ' +str(int(byfico.index[i]))) for i in xrange(len(byfico.index))]\n\n# p = figure(background_fill=\"#EFE8E2\", \n# x_axis_label='Pay off rate (%)', y_axis_label='FICO score',\n# x_range = [0, 100] , \n# y_range = xfico[::-1],\n# title=\"Loan Outcome by FICO score\",\n# tools = TOOLS,\n# plot_width=800, \n# plot_height=600)\n\n# source1 = ColumnDataSource(\n# data=dict(fico_score=xfico, dft_rate=byfico['default_rate'].iloc[::-1]*100,payoff_rate=byfico['mean_rate'].iloc[::-1]*100)\n# )\n# source2 = ColumnDataSource(\n# data=dict(fico_score=xfico, dft_rate=byfico['default_rate'].iloc[::-1]*100,payoff_rate=byfico['mean_rate'].iloc[::-1]*100)\n# )\n\n# p.rect(byfico['mean_rate'].iloc[::-1]*100/2, \n# xfico[::-1],byfico['mean_rate'].iloc[::-1]*100 ,0.8, \n# fill_color=\"#08c994\", source = source1)\n\n# p.rect(byfico['default_rate'].iloc[::-1]*100/2 + byfico['mean_rate'].iloc[::-1]*100, \n# xfico[::-1], byfico['default_rate'].iloc[::-1]*100, 0.8,\n# fill_color=\"#ff5a00\", source = source2)\n\n# hover = p.select(dict(type=HoverTool))\n# hover.tooltips = OrderedDict([\n# ('FICO score', 
\"$y\"),\n# ('Payoff rate (%)', '@payoff_rate'),\n# ('Default rate (%)', '@dft_rate'),\n\n# ])\n\n# show(p)\n\ndf.columns\n\ndf['dti_bin'] = np.floor(df.dti/5)*5\n\n(pd.unique(df.emp_length))\n\n# N = 100\n# x = np.random.random(size=N) * 100\n# y = np.random.random(size=N) * 100\n# radii = np.random.random(size=N) * 1.5\n# colors = [\"#%02x%02x%02x\" % (r, g, 150) for r, g in zip(np.floor(50+2*x), np.floor(30+2*y))]\n# colors", "Payoff rate by home ownership and emp_length", "# data = df.groupby(['home_ownership','emp_length']).mean().paidoff\n\n# from collections import OrderedDict\n\n# import numpy as np\n\n# from bokeh.plotting import ColumnDataSource, figure, show, output_file\n# from bokeh.models import HoverTool\n\n# # Read in the data with pandas. Convert the year column to string\n# home_ownership = ['RENT', 'OWN', 'MORTGAGE']\n# emp_lenght = [ 'n/a', '< 1 year', '1 year', '3 years', '2 years', '4 years', '5 years',\n# '6 years', '7 years', '8 years', '9 years','10+ years']\n\n# # colors = [\n# # \"#08C994\", \"#26BB81\", \"#45AD6F\", \"#649F5C\", \"#83914A\",\n# # \"#A28337\", \"#C17525\", \"#E06712\", \"#FF5A00\"\n# # ]\n# colors = [\n# \"#75968f\", \"#a5bab7\", \"#c9d9d3\", \"#e2e2e2\", \"#dfccce\",\n# \"#ddb7b1\", \"#cc7878\", \"#933b41\", \"#550b1d\"\n# ]\n# a=sorted(data.values)[::7]\n\n# home = []\n# emp = []\n# color = []\n# rate = []\n# for y in emp_lenght:\n# for m in home_ownership:\n# home.append(m)\n# emp.append(y)\n# rate_by_home_emp = data[m][y]\n# rate.append(rate_by_home_emp*100)\n# for i in xrange(1,9):\n# if rate_by_home_emp > a[i-1] and rate_by_home_emp<= a[i]:\n# ci = 9-i\n# # ci = int((rate_by_home_emp - min(data))/(max(data)-min(data))*8.9)\n \n# color.append(colors[ci])\n\n# output_notebook()\n\n# TOOLS = \"hover,save\"\n\n# p = figure(\n# x_axis_label='Employment length (%)', y_axis_label='House Ownership',\n# y_range=home_ownership, x_range=emp_lenght ,\n# x_axis_location=\"above\", plot_width=800, plot_height=400,\n# 
toolbar_location=\"left\", tools=TOOLS)\n\n# source = ColumnDataSource(\n# data=dict(home=home, emp=emp,color=color, rate=rate)\n# )\n# p.rect(\"emp\", \"home\", 1, 1, source=source, color=\"color\",line_color=None)\n\n# p.grid.grid_line_color = None\n# p.axis.axis_line_color = None\n# p.axis.major_tick_line_color = None\n# p.axis.major_label_text_font_size = \"12pt\"\n# p.axis.major_label_standoff = 0\n# p.xaxis.major_label_orientation = np.pi/3\n\n# hover = p.select(dict(type=HoverTool))\n# hover.tooltips = OrderedDict([\n# ('Pay off rate (%)', '@rate'),\n# ])\n\n# show(p) # show the plot\n\nmax(np.random.random(size=1000) * 100)\n\n\n354/4/8", "pofolio generator" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rrbb014/data_science
fastcampus_dss/2016_05_17/0517_02__SymPy를 사용한 함수 미분.ipynb
mit
[ "Differentiating Functions with SymPy\nWhy data analysis needs differentiation\nIt may not seem related at first, but differentiation is in fact needed in data analysis. One of the goals of data analysis is to estimate the parameters or state variables of a probabilistic model. Such estimation is fundamentally an optimization task of finding the minimum or maximum point of a function, and it requires derivatives obtained by differentiation or partial differentiation. Knowledge of function differentiation is therefore essential for understanding the internals of data analysis and machine learning.\nFortunately, the level of calculus a data analyst needs is not very high. Usually it suffices to understand partial derivatives of linear polynomials and exponential functions, and in most cases optimization libraries, or libraries such as theano and tensorflow, compute the derivative or the gradient value for you, so you rarely have to derive a function by hand. \nFunctions and variables\nThe concepts of a variable and a function are familiar to anyone who has learned programming. A variable is a symbol that stands for an actual value, and a function is an expression built on such variables: once the variables take numerical values, the value of the function is determined by the expression.\nVariables are usually written with lowercase letters such as $x$, $y$, $z$, and a function is written with its input variables in parentheses, as in $f(x)$ or $g(x,y)$. The result of a function may also be assigned to another variable and used again.\n$$ y = f(x) $$\n$$ z = g(y) = g(f(x)) $$\nPython functions implement this mathematical concept directly.", "def f(x):\n return 2*x \n\nx = 10\ny = f(x)\nprint(x, y)", "An inverse function reverses the input and output of a function and is written as follows.\n$$ y = f(x), \;\;\; \rightarrow \;\;\; x = f^{-1}(y) $$\nPrediction problems and functions\nA prediction problem can be seen as the problem of finding a function $f$ that takes an independent variable, or feature, $x$ as input and produces a value as close as possible to the desired dependent variable, or target, $y$.\n$$ y \approx \hat{y} = f(x) $$\nFunctions commonly used in data analysis\nThe kinds of functions used most often in data analysis are polynomial functions, exponential functions, and log functions.\nPolynomial functions\nA polynomial function is a linear combination of power terms: a constant term $c_0$, a linear term $c_1x$, a quadratic term $c_2x^2$, $\cdots$, and so on. The following is the typical form of a uni-variate polynomial function.\n$$ f(x) = c_0 + c_1 x + c_2 x^2 + \cdots + c_n x^n $$\nExponential and log functions\nThe exponential function with Euler's number $e$ as its base is written as follows; think of it as the number $e$ raised to the power $x$.\n$$ y = e^x $$\nor\n$$ y = \exp x $$\nThe inverse of the exponential function is the natural log function.\n$$ y = \log x $$\nIf the base is not $e$, the function is rewritten as follows.\n$$ y = a^x = e^{\log a \cdot x} $$\nGraphs and slopes of functions\nA graph is often used to grasp the shape of a function intuitively. In Python, a graph can be drawn with a matplotlib line plot.\nSince matplotlib needs concrete positions to draw a plot, we create a vector that divides the $x$ region of the graph into small intervals, compute the function values on this vector, and draw the graph. 
If the intervals are too wide the graph becomes inaccurate, and if they are too narrow we end up drawing needlessly fine detail, which increases computation time and wastes memory and other resources.", "x = np.linspace(-0.9, 2.9, 100)\ny = x**3 - 3*x**2 + x\nplt.plot(x, y);", "As drawn above, the graph of a function often takes the form of a smooth curve. For this curve we can draw a tangent line that shares only one point with it, and the angle this tangent makes with the horizontal is called the slope.", "x = np.linspace(-0.9, 2.9, 100)\ny = x**3-3*x**2+x\nplt.plot(x, y)\nplt.plot(0, 0, 'ro'); plt.plot(x, x, 'r:');\nplt.plot(1, -1, 'go'); plt.plot(x, (3*1**2-6*1+1)*(x-1)-1, 'g:');", "Differentiation\nDifferentiation is a kind of transformation that derives a new function from a given function. The new function produced by differentiation represents the slope of the original function. A function produced by differentiation is called the derivative of the original function. Strictly, differentiation is defined using the more involved concepts of limits and convergence, but for optimization it is enough to know that it simply means the slope.\nA derivative is written by appending a prime superscript to the function symbol, or by prefixing the function symbol with $\dfrac{d}{dx}$, $\dfrac{\partial}{\partial x}$, and so on. It is also written like a fraction, where the variable being differentiated with respect to goes in the denominator, and the symbol of the function itself, or the variable obtained as its result, goes in the numerator.\nFor example, differentiating the function $y = f(x)$ gives the following.\n$$ f'(x) = \dfrac{d}{dx}(f) = \dfrac{df}{dx} = \dfrac{d}{dx}(y) = \dfrac{dy}{dx} $$\nDifferentiation formulas\nIn practice, differentiation is the process of deriving the derivative from the original function by combining a handful of formulas, described next. For complicated functions a formula book running to several pages may be needed, but here we introduce only the most essential formulas. 
To learn more differentiation rules, see the following websites.\n\nhttps://en.wikipedia.org/wiki/Derivative#Rules_of_computation\nhttps://en.wikipedia.org/wiki/Differentiation_rules\n\nBasic differentiation formulas (memorize these)\n\nConstant\n\n$$ \dfrac{d}{dx}(c) = 0 $$\n$$ \dfrac{d}{dx}(cf) = c \cdot \dfrac{df}{dx} $$\n\nPower\n\n$$ \dfrac{d}{dx}(x^n) = n x^{n-1} $$\n\nLog\n\n$$ \dfrac{d}{dx}(\log x) = \dfrac{1}{x} $$\n\nExponential\n\n$$ \dfrac{d}{dx}(e^x) = e^x $$\n\nLinear combination\n\n$$ \dfrac{d}{dx}\left(c_1 f_1 + c_2 f_2 \right) = c_1 \dfrac{df_1}{dx} + c_2 \dfrac{df_2}{dx}$$\nUsing these basic formulas to differentiate the function\n$$ y = 1 + 2x + 3x^2 + 4\exp(x) + 5\log(x) $$ \nthe answer is as follows.\n$$ \dfrac{dy}{dx} = 2 + 6x + 4\exp(x) + \dfrac{5}{x} $$\nProduct rule\nWhen a function has the form of a product of two functions, the derivative of the original function is obtained from the derivatives of the individual functions as follows.\n$$ \dfrac{d}{dx}\left( f \cdot g \right) = \dfrac{df}{dx} \cdot g + f \cdot \dfrac{dg}{dx} $$\nUsing the product rule to differentiate the function\n$$ f = x \cdot \exp(x) $$\nwe obtain the following derivative.\n$$ \dfrac{df}{dx} = \exp(x) + x \exp(x) $$\nChain rule\nThe chain rule applies when the function to be differentiated is a nested form of two functions. If\n$$ f(x) = h(g(x)) $$\nthen the derivative is obtained as follows.\n$$ \dfrac{df}{dx} = \dfrac{df}{dg} \cdot \dfrac{dg}{dx} $$\nFor example, the probability density function of the normal distribution can be viewed as basically having the following form.\n$$ f = \exp \dfrac{(x-\mu)^2}{\sigma^2} $$\nThe derivative of this function can be obtained as follows.\n$$ f = \exp(z) \;,\;\;\;\; z = \dfrac{y^2}{\sigma^2} \;,\;\;\;\; y = x-\mu $$\n$$ \dfrac{df}{dx} = \dfrac{df}{dz} \cdot \dfrac{dz}{dy} \cdot \dfrac{dy}{dx} $$\n$$ \dfrac{df}{dz} = \exp(z) = \exp \dfrac{(x-\mu)^2}{\sigma^2} $$\n$$ \dfrac{dz}{dy} = \dfrac{2y}{\sigma^2} = \dfrac{2(x-\mu)}{\sigma^2} $$\n$$ \dfrac{dy}{dx} = 1 $$\n$$ \dfrac{df}{dx} = \dfrac{2(x-\mu)}{\sigma^2} \exp \dfrac{(x-\mu)^2}{\sigma^2}$$\nDerivative of a log function\nApplying the chain rule to the log function gives the following rule.\n$$ \dfrac{d}{dx} \log f(x) = \dfrac{f'(x)}{f(x)} $$\nPartial differentiation\nEven when a function is a multivariate function with two or more independent variables, the derivative, i.e. the slope, can only be taken with respect to one variable at a time. This is called partial differentiation. 
Thus partial differentiation can yield several derivatives for a single function.\nThe following is a simple example of partial differentiation.\n$$ f(x,y) = x^2 + xy + y^2 $$\n$$ f_x(x,y) = \dfrac{\partial f}{\partial x} = 2x + y $$\n$$ f_y(x,y) = \dfrac{\partial f}{\partial y} = x + 2y $$\nSymPy\nSymPy is a Python package that supports symbolic operations. A symbolic operation is a computation of the same form as the differentiation/integration a person does with pencil and paper. That is, differentiating $x^2$ produces the result in the form $2x$.\nPython packages widely used for deep learning, such as theano and tensorflow, also provide this kind of symbolic computation to calculate the gradient functions needed when training neural networks. \nTo do this, we must use SymPy's symbols command to declare that the symbol $x$ is a mathematical symbol rather than an ordinary number or vector variable.", "import sympy\nsympy.init_printing(use_latex='mathjax') # needed to render math as LaTeX in Jupyter notebooks\n\nx = sympy.symbols('x')\nx\n\ntype(x)", "Once a symbolic variable is defined, a function is defined from it as follows. Note that mathematical functions here must be the SymPy versions.", "f = x * sympy.exp(x)\nf", "Once the function is defined, it can be differentiated with the diff command. The simplify command can also tidy up the expression, e.g. by factoring.", "sympy.diff(f)\n\nsympy.simplify(sympy.diff(f)) # factor the result", "When taking a partial derivative, you must specify which variable to differentiate with respect to.", "x, y = sympy.symbols('x y')\nf = x**2 + x*y + y**2\nf\n\nsympy.diff(f, x)\n\nsympy.diff(f, y)", "When multiple symbols are used, partial differentiation is also required.", "x, mu, sigma = sympy.symbols('x mu sigma')\nf = sympy.exp((x-mu)**2)/sigma**2\nf\n\nsympy.diff(f, x)\n\nsympy.simplify(sympy.diff(f, x))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
vberthiaume/vblandr
udacity/udacity/1_notmnist.ipynb
apache-2.0
[ "First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.", "# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nfrom __future__ import print_function\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport sys\nimport tarfile\nfrom IPython.display import display, Image\nfrom scipy import ndimage\nfrom sklearn.linear_model import LogisticRegression\nfrom six.moves.urllib.request import urlretrieve\nfrom six.moves import cPickle as pickle", "Deep Learning\nAssignment 1\nThe objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.\nThis notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.", "url = 'http://yaroslavvb.com/upload/notMNIST/'\n\ndef maybe_download(filename, expected_bytes, force=False):\n \"\"\"Download a file if not present, and make sure it's the right size.\"\"\"\n if force or not os.path.exists(filename):\n filename, _ = urlretrieve(url + filename, filename)\n statinfo = os.stat(filename)\n if statinfo.st_size == expected_bytes:\n print('Found and verified', filename)\n else:\n raise Exception(\n 'Failed to verify ' + filename + '. 
Can you get to it with a browser?')\n return filename\n\nstrRawCompressedTrainSetFilename = maybe_download('notMNIST_large.tar.gz', 247336696)\nstrRawCompressedTestSetFilename = maybe_download('notMNIST_small.tar.gz', 8458043)", "Extract the dataset from the compressed .tar.gz file.\nThis should give you a set of directories, labelled A through J.", "s_iNum_classes = 10\nnp.random.seed(133)\n\ndef maybe_extract(filename, force=False):\n root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz\n if os.path.isdir(root) and not force:\n # You may override by setting force=True.\n print('%s already present - Skipping extraction of %s.' % (root, filename))\n else:\n print('Extracting data for %s. This may take a while. Please wait.' % root)\n tar = tarfile.open(filename)\n sys.stdout.flush()\n tar.extractall()\n tar.close()\n data_folders = [\n os.path.join(root, d) for d in sorted(os.listdir(root))\n if os.path.isdir(os.path.join(root, d))]\n if len(data_folders) != s_iNum_classes:\n raise Exception(\n 'Expected %d folders, one per class. Found %d instead.' % (\n s_iNum_classes, len(data_folders)))\n print(data_folders)\n return data_folders\n\nprint(\"s_strListExtractedTrainFolderNames: \")\ns_strListExtractedTrainFolderNames = maybe_extract(strRawCompressedTrainSetFilename)\nprint(\"\\ns_strListExtractedTestFolderNames: \")\ns_strListExtractedTestFolderNames = maybe_extract(strRawCompressedTestSetFilename)", "Problem 1\nLet's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.", "######################################## SKIP THIS CELL ############################################\nfrom IPython.display import Image\nImage(filename='./notMNIST_large/A/Z2xlZXN0ZWFrLnR0Zg==.png')", "Now let's load the data in a more manageable format. 
Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.\nWe'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. \nA few images might not be readable, we'll just skip them.", "s_iImage_size = 28 # Pixel width and height.\ns_fPixel_depth = 255.0 # Number of levels per pixel.\n\ndef load_letter(folder, min_num_images):\n \"\"\"Load the data for a single letter label, insuring you have at least min_num_images.\"\"\"\n image_files = os.listdir(folder)\n #An ndarray is a (often fixed) multidimensional container of items of the same type and size\n #so here, we're building a 3d array with indexes (image index, x,y), and type float32\n dataset = np.ndarray(shape=(len(image_files), s_iImage_size, s_iImage_size), dtype=np.float32)\n image_index = 0\n #for each image in the current folder (A, B, etc)\n print(folder)\n for image in os.listdir(folder):\n #get the full image path\n image_file = os.path.join(folder, image)\n try:\n #read image as a bunch of floats, and normalize those floats by using pixel_depth \n image_data = (ndimage.imread(image_file).astype(float) - s_fPixel_depth / 2) / s_fPixel_depth\n #ensure image shape is standard\n if image_data.shape != (s_iImage_size, s_iImage_size):\n raise Exception('Unexpected image shape: %s' % str(image_data.shape))\n #and put it in the dataset\n dataset[image_index, :, :] = image_data\n image_index += 1\n except IOError as e:\n print('Could not read:', image_file, ':', e, '- it\\'s ok, skipping.')\n \n num_images = image_index\n dataset = dataset[0:num_images, :, :]\n if num_images < min_num_images:\n raise Exception('Many fewer images than expected: %d < %d' %\n (num_images, 
min_num_images))\n \n  print('Full dataset tensor:', dataset.shape)\n  print('Mean:', np.mean(dataset))\n  print('Standard deviation:', np.std(dataset))\n  return dataset\n \ndef maybe_pickle(p_strDataFolderNames, p_iMin_num_images_per_class, p_bForce=False):\n  dataset_names = []\n  #data_folders are either the train or test set. folders within those are A, B, etc\n  for strCurFolderName in p_strDataFolderNames:\n    #we will serialize those subfolders (A, B, etc), that's what pickling is\n    strCurSetFilename = strCurFolderName + '.pickle'\n    #add the name of the current pickled subfolder to the list\n    dataset_names.append(strCurSetFilename)\n    #if the pickled folder already exists, skip\n    if os.path.exists(strCurSetFilename) and not p_bForce:\n      # You may override by setting force=True.\n      print('%s already present - Skipping pickling.' % strCurSetFilename)\n    else:\n      #call the load_letter function def above \n      print('Pickling %s.' % strCurSetFilename)\n      dataset = load_letter(strCurFolderName, p_iMin_num_images_per_class)\n      try:\n        #and try to pickle it\n        with open(strCurSetFilename, 'wb') as f:\n          pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)\n      except Exception as e:\n        print('Unable to save data to', strCurSetFilename, ':', e)\n  \n  return dataset_names\n\ns_strListPickledTrainFilenames = maybe_pickle(s_strListExtractedTrainFolderNames, 45000)\ns_strListPickledTestFilenames = maybe_pickle(s_strListExtractedTestFolderNames, 1800)\n\nprint(\"\\ns_strListPickledTrainFilenames: \", s_strListPickledTrainFilenames)\nprint(\"\\ns_strListPickledTestFilenames: \", s_strListPickledTestFilenames)", "Problem 2\nLet's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. 
Hint: you can use matplotlib.pyplot.", "######################################## SKIP THIS CELL ############################################\n#un-serialize first sub-folder of the train set\nrandom_class_id = np.random.randint(0,s_iNum_classes)\nunpickled_rnd_train_set = pickle.load(open(s_strListPickledTrainFilenames[random_class_id]))\n#get xy array representing random image\nrandom_img_id = np.random.randint(0,unpickled_rnd_train_set.shape[0])\nfirst_img = unpickled_rnd_train_set[random_img_id,:,:]\n# checking image shape, it is 28x28 pixels\n# print(\"image %d from class %d with shape %d\" %(random_img_id, random_class_id, first_img.shape))\nprint(\"image \", random_img_id, \" from class \", random_class_id, \" with shape \", first_img.shape)\n# denormalization, but commented since doesn't change anything for imshow. The way i understand\n# this, is that in these images, the each one of the 28x28 pixels is only encoding grayscale, not\n# rgb. And the imshow doc says that it can handle grayscale arrays that are normalized\n# (http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.imshow). \n# s_fPixel_depth = 255.0 # Number of levels per pixel.\n# first_img = first_img*s_fPixel_depth + s_fPixel_depth/2\n# print(first_img[0,:])\nplt.imshow(first_img)\nplt.show()", "Problem 3\nAnother check: we expect the data to be balanced across classes. Verify that.", "######################################## SKIP THIS CELL ############################################\n#cycle through all train and test sets and count how many examples we have? 
Also need to check\n#their mean and variance?\nall_counts = np.zeros(s_iNum_classes)\nall_means = np.zeros(s_iNum_classes)\nall_variances = np.zeros(s_iNum_classes)\n\n#for cur_class_id, cur_class in enumerate(unpickled_all_train_sets):\nfor cur_class_id in range(s_iNum_classes):\n  #we unpickle here a 3d array with shape: image_ids, xs, ys\n  unpickled_cur_train_set = pickle.load(open(s_strListPickledTrainFilenames[cur_class_id]))\n  print (\"class \", cur_class_id)\n  for cur_image_id in range(len(unpickled_cur_train_set)):\n#    print (\"image\", cur_image_id)\n    all_counts[cur_class_id] += 1\n#    cur_image = unpickled_cur_train_set()\n    all_means[cur_class_id] += np.mean(unpickled_cur_train_set[cur_image_id])\n    all_variances[cur_class_id] += np.var(unpickled_cur_train_set[cur_image_id])\n    \nprint (\"all_counts: \", all_counts)\n\n#divide each per-class sum by that class' image count to get the per-class averages\nall_means = np.divide(all_means, all_counts)\nprint (\"per-class means: \", all_means)\n\nall_variances = np.divide(all_variances, all_counts)\nprint (\"per-class variances: \", all_variances)", "Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune s_iTrainSize as needed. 
The labels will be stored into a separate array of integers 0 through 9.\nAlso create a validation dataset for hyperparameter tuning.", "#from p_iNb_rows and p_iImg_size: \n# return dataset: an empty 3d array that is [p_iNb_rows, p_iImg_size, p_iImg_size]\n# return labels: an empty vector that is [p_iNb_rows]\ndef make_arrays(p_iNb_rows, p_iImg_size):\n if p_iNb_rows:\n dataset = np.ndarray((p_iNb_rows, p_iImg_size, p_iImg_size), dtype=np.float32)\n labels = np.ndarray(p_iNb_rows, dtype=np.int32)\n else:\n dataset, labels = None, None\n return dataset, labels\n\n#p_strListPickle_files is an array containing the filenames of the pickled data\ndef merge_datasets(p_strListPickledFilenames, p_iTrainSize, p_iValidSize=0):\n iNum_classes = len(p_strListPickledFilenames)\n #make empty arrays for validation and training sets and labels\n valid_dataset, valid_labels = make_arrays(p_iValidSize, s_iImage_size)\n train_dataset, train_labels = make_arrays(p_iTrainSize, s_iImage_size)\n \n #number of items per class. // is an int division in python3, not sure in python2\n iNbrOfValidItemsPerClass = p_iValidSize // iNum_classes\n iNbrOfTrainItemPerClass = p_iTrainSize // iNum_classes\n \n #figure out useful indexes for the loop\n iStartValidId, iStartTrainId = 0, 0\n iEndValidId, iEndTrainId = iNbrOfValidItemsPerClass, iNbrOfTrainItemPerClass\n iEndListId = iNbrOfValidItemsPerClass+iNbrOfTrainItemPerClass\n \n #for each file in p_strListPickledFilenames\n for iPickleFileId, strPickleFilename in enumerate(p_strListPickledFilenames): \n try:\n #open the file\n with open(strPickleFilename, 'rb') as f:\n print (strPickleFilename)\n #unpicke 3d array for current file\n threeDCurLetterSet = pickle.load(f)\n # let's shuffle the items to have random validation and training set. 
\n # np.random.shuffle suffles only first dimension\n np.random.shuffle(threeDCurLetterSet)\n \n #if we asked for a validation set\n if valid_dataset is not None:\n #the first iNbrOfValidItemsPerClass items in letter_set are used for the validation set\n threeDValidItems = threeDCurLetterSet[:iNbrOfValidItemsPerClass, :, :]\n valid_dataset[iStartValidId:iEndValidId, :, :] = threeDValidItems\n #label all images with the current file id \n valid_labels[iStartValidId:iEndValidId] = iPickleFileId\n #update ids for the train set\n iStartValidId += iNbrOfValidItemsPerClass\n iEndValidId += iNbrOfValidItemsPerClass\n \n #the rest of the items are used for the training set\n threeDTrainItems = threeDCurLetterSet[iNbrOfValidItemsPerClass:iEndListId, :, :]\n train_dataset[iStartTrainId:iEndTrainId, :, :] = threeDTrainItems\n train_labels[iStartTrainId:iEndTrainId] = iPickleFileId\n iStartTrainId += iNbrOfTrainItemPerClass\n iEndTrainId += iNbrOfTrainItemPerClass\n except Exception as e:\n print('Unable to process data from', strPickleFilename, ':', e)\n raise \n return valid_dataset, valid_labels, train_dataset, train_labels\n\n#original values \n# s_iTrainSize = 200000\n# s_iValid_size = 10000\n# s_iTestSize = 10000\ns_iTrainSize = 200000\ns_iValid_size = 10000\ns_iTestSize = 10000\n\n#call merge_datasets on data_sets and labels\ns_threeDValidDataset, s_vValidLabels, s_threeDTrainDataset, s_vTrainLabels = merge_datasets(s_strListPickledTrainFilenames, s_iTrainSize, s_iValid_size)\n_, _, s_threeDTestDataset, s_vTestLabels = merge_datasets(s_strListPickledTestFilenames, s_iTestSize)\n\n#print shapes for data sets and their respective labels. data sets are 3d arrays with [image_id,x,y] and labels\n#are [image_ids]\nprint('Training:', s_threeDTrainDataset.shape, s_vTrainLabels.shape)\nprint('Validation:', s_threeDValidDataset.shape, s_vValidLabels.shape)\nprint('Testing:', s_threeDTestDataset.shape, s_vTestLabels.shape)", "Next, we'll randomize the data. 
It's important to have the labels well shuffled for the training and test distributions to match.", "def randomize(p_3dDataset, p_vLabels):\n #with int x as parameter, np.random.permutation returns a random permutation of np.arange(x)\n vPermutation = np.random.permutation(p_vLabels.shape[0])\n threeDShuffledDataset = p_3dDataset[vPermutation,:,:]\n threeDShuffledLabels = p_vLabels [vPermutation]\n return threeDShuffledDataset, threeDShuffledLabels\n\ns_threeDTrainDataset, s_vTrainLabels = randomize(s_threeDTrainDataset, s_vTrainLabels)\ns_threeDTestDataset, s_vTestLabels = randomize(s_threeDTestDataset, s_vTestLabels)\ns_threeDValidDataset, s_vValidLabels = randomize(s_threeDValidDataset, s_vValidLabels)\n\nprint(s_threeDTrainDataset.shape)\nprint(s_threeDTestDataset.shape)\nprint(s_threeDValidDataset.shape)", "Problem 4\nConvince yourself that the data is still good after shuffling!", "######################################## SKIP THIS CELL ############################################\n#cycle through train, validation, and test sets to count how many items we have for each label, and calculate\n#their mean and variance\ns_vAllShuffledMeans = np.zeros(3)\ns_vAllShuffledVars = np.zeros(3)\n\nfor iCurTrainingImageId in range(s_threeDTrainDataset.shape[0]):\n s_vAllShuffledMeans[0] += np.mean(s_threeDTrainDataset[iCurTrainingImageId]) / s_threeDTrainDataset.shape[0]\n s_vAllShuffledVars[0] += np.var(s_threeDTrainDataset[iCurTrainingImageId]) / s_threeDTrainDataset.shape[0]\n\nprint (\"TRAIN mean: \", s_vAllShuffledMeans[0], \"\\t variance:\", s_vAllShuffledVars[0])\n\nfor iCurTestImageId in range(s_threeDTestDataset.shape[0]):\n s_vAllShuffledMeans[1] += np.mean(s_threeDTestDataset[iCurTestImageId]) / s_threeDTestDataset.shape[0]\n s_vAllShuffledVars[1] += np.var(s_threeDTestDataset[iCurTestImageId]) / s_threeDTestDataset.shape[0]\n\nprint (\"TEST mean: \", s_vAllShuffledMeans[1], \"\\t variance:\", s_vAllShuffledVars[1])\n \nfor iCurValidImageId in 
range(s_threeDValidDataset.shape[0]):\n s_vAllShuffledMeans[2] += np.mean(s_threeDValidDataset[iCurValidImageId]) / s_threeDValidDataset.shape[0]\n s_vAllShuffledVars[2] += np.var(s_threeDValidDataset[iCurValidImageId]) / s_threeDValidDataset.shape[0]\n\nprint (\"VALID mean: \", s_vAllShuffledMeans[2], \"\\t variance:\", s_vAllShuffledVars[2])", "Finally, let's save the data for later reuse:", "pickle_file = 'notMNIST.pickle'\n\ntry:\n f = open(pickle_file, 'wb')\n save = {\n 'train_dataset': s_threeDTrainDataset,\n 'train_labels': s_vTrainLabels,\n 'valid_dataset': s_threeDValidDataset,\n 'valid_labels': s_vValidLabels,\n 'test_dataset': s_threeDTestDataset,\n 'test_labels': s_vTestLabels,\n }\n pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)\n f.close()\nexcept Exception as e:\n print('Unable to save data to', pickle_file, ':', e)\n raise\n\nstatinfo = os.stat(pickle_file)\nprint('Compressed pickle size:', statinfo.st_size)", "Problem 5\nBy construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.\nMeasure how much overlap there is between training, validation and test samples.\nOptional questions:\n- What about near duplicates between datasets? 
(images that are almost identical)\n- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.", "######################################## SKIP THIS CELL ############################################\n# all_doubles = np.zeros(2)\n\n# for iCurTrainImageId in range(s_threeDTrainDataset.shape[0]):\n# if iCurTrainImageId % 10 == 0:\n# print (iCurTrainImageId)\n# for iCurTestImageId in range(s_threeDTestDataset.shape[0]):\n# if np.array_equal(s_threeDTrainDataset[iCurTrainImageId], s_threeDTestDataset[iCurTestImageId]):\n# all_doubles[0] += 1\n \n# for iCurValidImageId in range(s_threeDValidDataset.shape[0]):\n# if np.array_equal(s_threeDTrainDataset[iCurTrainImageId], s_threeDValidDataset[iCurValidImageId]):\n# all_doubles[1] += 1\n\n# print(all_doubles[0])\n# print(all_doubles[1])\n\n#eythian solution, with my edits\nall_doubles = np.zeros(2)\ns_threeDTrainDataset.flags.writeable=False #this is probably optional\ns_threeDTestDataset.flags.writeable=False\ndup_dict={} #using {} declares a dictionary. this dictionnary will store pairs of keys (image hash) and values (train_data image id)\nfor idx,img in enumerate(s_threeDTrainDataset):\n h = hash(img.data) #hash returns a hash value for its argument. 
equal numerical arguments produce the same hash value\n #'h in dup_dict' tests whether the dictionnary contains the h key, I assume this is very fast\n if h in dup_dict: # and (s_threeDTrainDataset[dup_dict[h]].data == img.data): #the second part of this is probably redundant...\n #print ('Duplicate image: %d matches %d' % (idx, dup_dict[h]))\n all_doubles[0] += 1\n dup_dict[h] = idx\nfor idx,img in enumerate(s_threeDTestDataset):\n h = hash(img.data)\n if h in dup_dict: # and (s_threeDTrainDataset[dup_dict[h]].data == img.data): #vb commented this last part, it doesn't do anything\n #print ('Test image %d is in the training set' % idx)\n all_doubles[1] += 1\nprint(all_doubles[0])\nprint(all_doubles[1])", "Problem 6\nLet's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.\nTrain a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.\nOptional question: train an off-the-shelf model on all the data!", "### taking inspiration from http://scikit-learn.org/stable/auto_examples/calibration/plot_compare_calibration.html#example-calibration-plot-compare-calibration-py\nfrom sklearn import datasets\nfrom sklearn.calibration import calibration_curve\n\ntrain_samples = 100 # number of samples used for training\ntest_samples = 50 #number of samples for test\n\n#training patterns. 
x is input pattern, y is target pattern or label\nX_train = s_threeDTrainDataset[:train_samples]\n#fit function below expects to have a vector as the second dimension, not an array\nX_train = X_train.reshape([X_train.shape[0],X_train.shape[1]*X_train.shape[2]])\ny_train = s_vTrainLabels[:train_samples]\n\n#test patterns\nX_test = s_threeDTestDataset[:test_samples]\nX_test = X_test.reshape([X_test.shape[0],X_test.shape[1]*X_test.shape[2]])\ny_test = s_vTestLabels[:test_samples]\n\n# Create classifier\nlr = LogisticRegression()\n\n#create plots\nplt.figure(figsize=(10, 10))\nax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)\nax2 = plt.subplot2grid((3, 1), (2, 0))\nax1.plot([0, 1], [0, 1], \"k:\", label=\"Perfectly calibrated\") \n\n#try to fit the training data\nlr.fit(X_train, y_train)\n\n#assess how confident (how probable it is correct) the model is at predicting test classifications\nprob_pos = lr.predict_proba(X_test)[:, 1]\n \n#fraction_of_positives, mean_predicted_value = calibration_curve(y_test, prob_pos, n_bins=10)\n\n#ax1.plot(mean_predicted_value, fraction_of_positives, \"s-\", label=\"%s\" % (name, ))\nax2.hist(prob_pos, range=(0, 1), bins=10, label='Logistic', histtype=\"step\", lw=2)\n\n# ax1.set_ylabel(\"Fraction of positives\")\n# ax1.set_ylim([-0.05, 1.05])\n# ax1.legend(loc=\"lower right\")\n# ax1.set_title('Calibration plots (reliability curve)')\n\nax2.set_xlabel(\"Mean predicted value\")\nax2.set_ylabel(\"Count\")\nax2.legend(loc=\"upper center\", ncol=2)\n\nplt.tight_layout()\nplt.show()\n\n########################### SKIP; ORIGINAL LOGISTIC CODE #################################\nprint(__doc__)\n\n# Author: Jan Hendrik Metzen <jhm@informatik.uni-bremen.de>\n# License: BSD Style.\n\nimport numpy as np\nnp.random.seed(0)\n\nimport matplotlib.pyplot as plt\n\nfrom sklearn import datasets\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.ensemble import RandomForestClassifier\nfrom 
sklearn.svm import LinearSVC\nfrom sklearn.calibration import calibration_curve\n\n#\nX, y = datasets.make_classification(n_samples=100000, n_features=20, n_informative=2, n_redundant=2)\n\ntrain_samples = 100 # Samples used for training the models\n\nX_train = X[:train_samples]\nX_test = X[train_samples:]\ny_train = y[:train_samples]\ny_test = y[train_samples:]\n\n# Create classifiers\nlr = LogisticRegression()\n# gnb = GaussianNB()\n# svc = LinearSVC(C=1.0)\n# rfc = RandomForestClassifier(n_estimators=100)\n\n\n###############################################################################\n# Plot calibration plots\n\nplt.figure(figsize=(10, 10))\nax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)\nax2 = plt.subplot2grid((3, 1), (2, 0))\n\nax1.plot([0, 1], [0, 1], \"k:\", label=\"Perfectly calibrated\")\nfor clf, name in [(lr, 'Logistic')]:\n# (gnb, 'Naive Bayes'),\n# (svc, 'Support Vector Classification'),\n# (rfc, 'Random Forest')]:\n clf.fit(X_train, y_train)\n if hasattr(clf, \"predict_proba\"):\n prob_pos = clf.predict_proba(X_test)[:, 1]\n else: # use decision function\n prob_pos = clf.decision_function(X_test)\n prob_pos = \\\n (prob_pos - prob_pos.min()) / (prob_pos.max() - prob_pos.min())\n fraction_of_positives, mean_predicted_value = \\\n calibration_curve(y_test, prob_pos, n_bins=10)\n\n ax1.plot(mean_predicted_value, fraction_of_positives, \"s-\",\n label=\"%s\" % (name, ))\n\n ax2.hist(prob_pos, range=(0, 1), bins=10, label=name,\n histtype=\"step\", lw=2)\n\nax1.set_ylabel(\"Fraction of positives\")\nax1.set_ylim([-0.05, 1.05])\nax1.legend(loc=\"lower right\")\nax1.set_title('Calibration plots (reliability curve)')\n\nax2.set_xlabel(\"Mean predicted value\")\nax2.set_ylabel(\"Count\")\nax2.legend(loc=\"upper center\", ncol=2)\n\nplt.tight_layout()\nplt.show()\n\n########################### SKIP; ORIGINAL LOGISTIC CODE FOR 10 CLASSES #################################\nprint(__doc__)\n\n# Author: Jan Hendrik Metzen <jhm@informatik.uni-bremen.de>\n# 
License: BSD Style.\n\nimport numpy as np\nnp.random.seed(0)\n\nimport matplotlib.pyplot as plt\n\nfrom sklearn import datasets\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.svm import LinearSVC\nfrom sklearn.calibration import calibration_curve\n\nX, y = datasets.make_classification(n_samples=100000, n_features=20, n_informative=2, n_redundant=2)\n\ntrain_samples = 100 # Samples used for training the models\n\nX_train = X[:train_samples]\nX_test = X[train_samples:]\ny_train = y[:train_samples]\ny_test = y[train_samples:]\n\n# Create classifiers\nlr = LogisticRegression()\n# gnb = GaussianNB()\n# svc = LinearSVC(C=1.0)\n# rfc = RandomForestClassifier(n_estimators=100)\n\n\n###############################################################################\n# Plot calibration plots\n\nplt.figure(figsize=(10, 10))\nax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)\nax2 = plt.subplot2grid((3, 1), (2, 0))\n\nax1.plot([0, 1], [0, 1], \"k:\", label=\"Perfectly calibrated\")\nfor clf, name in [(lr, 'Logistic')]:\n# (gnb, 'Naive Bayes'),\n# (svc, 'Support Vector Classification'),\n# (rfc, 'Random Forest')]:\n clf.fit(X_train, y_train)\n if hasattr(clf, \"predict_proba\"):\n prob_pos = clf.predict_proba(X_test)[:, 1]\n else: # use decision function\n prob_pos = clf.decision_function(X_test)\n prob_pos = (prob_pos - prob_pos.min()) / (prob_pos.max() - prob_pos.min())\n fraction_of_positives, mean_predicted_value = calibration_curve(y_test, prob_pos, n_bins=10)\n\n ax1.plot(mean_predicted_value, fraction_of_positives, \"s-\", label=\"%s\" % (name, ))\n\n ax2.hist(prob_pos, range=(0, 1), bins=10, label=name, histtype=\"step\", lw=2)\n\nax1.set_ylabel(\"Fraction of positives\")\nax1.set_ylim([-0.05, 1.05])\nax1.legend(loc=\"lower right\")\nax1.set_title('Calibration plots (reliability curve)')\n\nax2.set_xlabel(\"Mean predicted 
value\")\nax2.set_ylabel(\"Count\")\nax2.legend(loc=\"upper center\", ncol=2)\n\nplt.tight_layout()\nplt.show()\n\n################# SOMEONE ELSE'S CODE ##############################\nimport time\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.neighbors import KNeighborsClassifier\n\ndef forum(algo, ntrain, ntest):\n# X_train = s_threeDTrainDataset[:train_samples]\n# X_train = X_train.reshape([X_train.shape[0],X_train.shape[1]*X_train.shape[2]])\n# y_train = s_vTrainLabels[:train_samples]\n\n wh = s_threeDTrainDataset.shape[1] * s_threeDTrainDataset.shape[2]\n X = s_threeDTrainDataset[:ntrain].reshape(ntrain, wh)\n Xtest = s_threeDTestDataset[:ntest].reshape(ntest, wh)\n Y = s_vTrainLabels[:ntrain]\n Ytest = s_vTestLabels[:ntest]\n\n t0 = time.time()\n algo.fit(X, Y)\n score = algo.score(Xtest, Ytest) * 100\n elapsed = time.time() - t0\n print('{} score: {:.1f}% under {:.2f}s'.format(type(algo), score, elapsed))\n\nforum(KNeighborsClassifier(), ntrain=50000, ntest=1000)\nforum(LogisticRegression(C=10.0, penalty='l1', multi_class='ovr', tol=0.01), ntrain=50000, ntest=1000)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nkarast/notebooks
.ipynb_checkpoints/pandas_data_munging-checkpoint.ipynb
mit
[ "Data Munging with Pandas\nTable of Contents\n 1. Let's load the dataset\n 2. Let's plot the dataset\n 2.1 scatter_matrix\n 2.2 Bar plot\n 2.3 Histogram\n 2.4 Box plot\n 2.5 Area plot\n 2.6 Scatter plot\n 2.7 Hexbin plot\n 2.8 Pie plot\n 2.9 Missing data on plots\n 2.10 Density/KDE plots\n 2.11 Andrews Curves\n 2.12 Parallel Coordinates\n 2.13 Lag plot\n 2.14 Autocorrelation plot\n 2.15 Bootstrap plot\n 2.16 RadVis plot\n 3. Data holders\n 3.1 Series\n 3.1.1 Series from ndarray\n 3.1.2 Series from dict\n 3.1.3 Series from scalar\n 3.2 DataFrame\n 3.2.1 DataFrame from dict of Series o dicts\n 3.2.2 DataFrame from dict of ndarrays\n 3.2.3 DataFrame from structured/record arrays\n 3.2.4 DataFrame from lists of dicts \n 3.2.5 DataFrame from dict of tuples\n 3.2.6 DataFrame from Series\n 4. Dealing with missing data\n 4.1 Automatic Formating of date strings\n 4.2 Handle of NaNs\n 4.2.1 Find NaN values\n 4.2.1.1 NaN and None\n 4.2.2 Fill NaN with fixed value\n 4.2.3 Fill NaN with mean or median\n 4.2.4 Drop NaN rows\n 4.2.5 Calculations with NaN\n 4.3 Reading Bad Input files\n 4.4 \nLet's focus on the Pandas dataframe Munging stuff\nThe modules that we'll need:", "%matplotlib inline\n\nimport numpy as np\n\nfrom sklearn import tree\n\nimport pandas as pd", "1. 
Let's load the dataset\nWe'll load one of the example datasets from sklearn!", "from sklearn import datasets\n\niris = datasets.load_iris()", "This is a sklearn dataset including:\n- .DESCR : general description\n- .data : all features\n- .feature_names : names of features\n- .target : target values expressed as values / numbered classes\n- .target_names : names of the target classes \n- .shape : can be applied both on .data and .target and gives the (row, column) tuple", "iris.DESCR;\niris.data;\niris.feature_names\n\niris.target;\niris.target_names\n\niris.data.shape\n\ntype(iris.data)", "Let's use pandas to convert the ndarray into a DataFrame, keeping the column names!", "dataframe = pd.DataFrame(iris.data, columns=iris.feature_names)\n\ndataframe.head()\n\ntarget_df = pd.DataFrame(iris.target, columns=['Species'])\n\ntarget_df.tail(2)", "2. Let's plot the dataset!\n2.1 Scatter_matrix\nUsing pandas' scatter_matrix function.\nThis is similar to R's pairs and makes a quick scatter plot map for the quantitative\nfeatures of the dataframe.\nFirst we'll make a list named colors as a book-keeper.\nThen to visualise the different target categories (i.e. 
the Species) we'll use the target numerical categorisation (0,1,2) to colour each row of the dataframe differently based on the target variable", "colors = list()\n\npalette = { 0: \"red\", 1: \"green\", 2:\"blue\"}\n\nfor c in np.nditer(iris.target):\n    colors.append(palette[int(c)]) # c is 0, 1 or 2 and we append red, green or blue \n\nscatterplot = pd.scatter_matrix(dataframe, \n                                alpha = 0.3,\n                                figsize = (10,10),\n                                diagonal = 'hist', #'kde'\n                                color = colors, \n                                marker = 'o',\n                                grid = True)", "There are various plot methods inside pandas.\nNow to make the scatter plot matrix\n2.2 A bar plot", "dataframe.plot(kind='bar', stacked=True, figsize=(10,8)) #barh", "2.3 Histogram (for one column, for all, or split by diff )", "dataframe['sepal length (cm)'].plot(kind='hist', color='red', alpha=0.3, figsize=(10,8))\n\ndataframe.plot(kind='hist', alpha=0.3, orientation='horizontal', \n               cumulative=True, bins=15, figsize=(10,8))\n\ndataframe.diff().hist(color='k', alpha=0.5, bins=50, figsize=(10,8))\n                            # diff() gets the difference of all\n                            # rows with row-0 ", "2.4 Box plots\nRecall that a box plot is a convenient way of depicting groups of numerical data and their quartiles. \nThe first quartile is the 25th percentile (splits off the lowest 25% of data from the highest 75%), the second quartile (median) is the 50th percentile (cuts the dataset in half) and the third quartile is the upper quartile or the 75th percentile (splits off the highest 25% from the lowest 75%). The IQR (interquartile range) is the difference between the upper and lower quartiles (IQR = Q3 - Q1).\nThe line is the median (Q2 or 50%). \nThe box represents the IQR; the part from the median to the top of the box is the Q2 to Q3 region, while the part below the median is the Q1 to Q2 region. 
\n```\n _     -> maximum\n |\n |\n---    -> third quartile\n| |\n|_|    -> median (Q2)\n| |\n| |\n---    -> first quartile\n |\n |\n _____ -> minimum \n```\n Outliers --> beyond 3 x IQR\n Suspected outliers --> beyond 1.5 x IQR", "dataframe.plot(kind='box', vert=False, by='X', figsize=(10,8))", "2.5 Area plots!", "dataframe.plot(kind='area', alpha=0.6, figsize=(10,8))", "2.6 Scatter Plots\n\nDefining the x and y axis\nplotting two sets with the ax=ax\npalette height", "dataframe.plot(kind='scatter', x='sepal length (cm)', y='sepal width (cm)', figsize=(10,8))\n\nax = dataframe.plot(kind='scatter', x='sepal length (cm)', y='sepal width (cm)', \n                    color='DarkBlue', label='Group 1');\ndataframe.plot(kind='scatter', x='petal length (cm)', y='petal width (cm)', \n               color='DarkGreen', label='Group 2' , figsize=(10,8),\n               ax=ax); # ax = ax to overlay\n                       # if not set 2 plots made\n\ndataframe.plot(kind='scatter', x='petal length (cm)', y='petal width (cm)', \n               c='sepal width (cm)', s=50, figsize=(10,8))", "2.7 Hexbin\nHexbin plots are an alternative to scatter plots if the data are too dense to plot individually", "dataframe.plot(kind='hexbin', x='petal length (cm)', y='petal width (cm)', \n               gridsize=25, figsize=(10,8))\n               ", "2.8 Pie plots\nPies are good for series", "series = pd.Series(3 * np.random.rand(4), index=['a', 'b', 'c', 'd'], name='series')\nseries.plot(kind='pie', figsize=(10,10))", "2.9 Missing data\nDepending on the plot type, pandas handles NAs as follows by default:\nPlot type | Handling\n---|---\nLine | Leave gaps at NA\nLine stacked | Fill 0's\nBar | Fill 0's\nScatter | Drop NA\nHistogram | Drop NA\nBox | Drop NA\nArea | Fill 0's\nKDE (density) | Fill 0's\nHexbin | Drop NA\nPie | Fill 0\nIf this is not the wanted behaviour, use fillna() or dropna() before plotting.\n\n2.10 Density/KDE Plot", "dataframe.ix[:,0:2].plot(kind='kde', figsize=(10,8))\n# this is to take all rows for the first two columns of the dataframe (or df.iloc[:,0:2])", "2.11 Andrews Curves\nThey are used to plot multivariate data as a 
large number of curves that are created using the attributes of samples as coefficients for Fourier series. By coloring these curves per class it is possible to visualize data clustering.\nIt is a good way to visualize structure in high-dimensional data. Each data point $x={x_1 , x_2, ..., x_d}$ defines a finite Fourier series:\n$f_{x}(t) = \\frac{x_1}{\\sqrt{2}} + x_2 sin(t) + x_3 cos(t) + x_4 sin(2t) + x_5 cos(2t)+...$\nThis function is plotted for $-\\pi < t < \\pi$. Therefore, each data point may be viewed as a line between $-\\pi$ and $\\pi$. The formula can be thought of as the projection of the data point onto the vector \n$(\\frac{1}{\\sqrt{2}}, sin(t), cos(t), sin(2t), cos(2t),...)$\nEach df row is a line per category. On the horizontal axis is the $-\\pi < t < \\pi$ and on the vertical the $f_x(t)$.\nCurves belonging to samples of the same class will usually be closer together and form larger structures!", "from pandas.tools.plotting import andrews_curves\n\nnew_df = dataframe;\nnew_df['Species']=target_df;\nnew_df.head(5)\n\nandrews_curves(new_df, \"Species\")", "2.12 Parallel Coordinates\nParallel coordinates is a way of visualising high-dimensional geometry and analysing multivariate data.\nTo show a set of points in an n-dimensional space, the horizontal axis is split by n parallel (typically vertical) lines. A point in the n-dimensional space is represented as a polyline with vertices on the parallel axes; the position of the vertex on the i-th axis corresponds to the i-th coordinate of the point.\nThis type of visualisation is closely related to time series visualisation, with the difference that it is applied on data that have no time dependence.\nThree points must be taken into account when the plot is used for statistical inference:\n1. The order of the axes is critical for finding features (many reorderings are done in practice).\n2. 
The rotation of the axes is a translation in the parallel coordinates; if the lines intersected outside the parallel axes, it can be translated between them by rotations (i.e. 180$^{\\circ}$).\n3. The scaling is necessary since the plot is based on interpolation of consecutive pairs of variables. Thus the variables must be on a common scale and orthogonal to each parallel axis. \nPoints that tend to cluster will appear closer together.", "from pandas.tools.plotting import parallel_coordinates\nparallel_coordinates(new_df, \"Species\")", "2.13 Lag plot\nLag plots are used to identify randomness in data. Random data should not exhibit any structure in the lag plot.\nNon-random structure implies that data are not random.\nA lag is a fixed time displacement. For example, given a dataset $Y_1,~Y_2,~...,~Y_n$, $Y_2$ and $Y_7$ have lag $5$ since $7 - 2 = 5$. Lag plots can be generated for any arbitrary lag, although the most commonly used lag is $1$.", "from pandas.tools.plotting import lag_plot\nlag_plot(new_df)", "2.14 Autocorrelation Plot\nThis is to check randomness in series. It computes the autocorrelations for data values at varying time lags. \nIf series are random, the autocorrelation should be near zero for any and all time-lag separation.\nIf time series are non-random then one or more of the autocorrelations will be significantly non-zero.\nThe horizontal lines correspond to the 95% and 99% confidence bands (dashed = 99%)\ndefinitions:\n\n\nVertical axis : autocorrelation coefficient: $R_{h}=C_{h}/ C_{0}$, \nwhere $C_{h}$ is the autocovariance :\n$$C_{h} = \\frac{1}{N} \\sum_{t=1}^{N-h}(Y_{t}-\\bar{Y})(Y_{t+h}-\\bar{Y})$$\nand $$C_{0} = \\frac{\\sum_{t=1}^{N}(Y_{t}-\\bar{Y})^{2}}{N}$$\n $-1 \\leq R_{h}\\leq 1$ \nN.B. 
Sometimes the autocovariance function is given as \n$$C_{h} = \\frac{1}{N-h} \\sum_{t=1}^{N-h}(Y_{t}-\\bar{Y})(Y_{t+h}-\\bar{Y})$$\nwhich has less bias (the 1/(N-h) factor) \n\n\nHorizontal Axis: time lag $h$, $h=(1,2,...)$\n\n\nThe plot contains several reference lines:\n\nthe middle line is at zero\nthe other four lines are the 95% and 99% confidence bands. Note that there are two distinct formulas for generating confidence bands.\nif the autocorrelation plot is being used to test randomness (no time dependence in data) the following formula is recommended $$CL = \\pm\\frac{z_{1-\\alpha/2}}{\\sqrt{N}}$$, where $N$ is the sample size, $z$ the cumulative distribution function of the standard normal distribution and $\\alpha$ the significance level \nif the autocorrelation plot is made using the ARIMA model fitting then the confidence bands are generated by : $$CL= \\pm z_{1-\\alpha/2} \\cdot \\sqrt{ \\frac{1}{N}(1+2\\sum_{i=1}^{k}y_{i}^2) } $$ where $k$ is the lag, $N$ the sample size, $z$ the cumulative distribution function of the standard normal distribution and $\\alpha$ the significance level.", "from pandas.tools.plotting import autocorrelation_plot\ndata = pd.Series(0.7 * np.random.rand(1000) +\n 0.3 * np.sin(np.linspace(-9 * np.pi, 9 * np.pi, num=1000)))\n\nautocorrelation_plot(data)", "2.15 Bootstrap plot\nBootstrap plots are made to assess the uncertainty of a statistic metric (mean, median, midrange etc.)\nTo make a bootstrap uncertainty estimate for one metric given a dataset, a subset of the sample, of size less than or equal to the size of the dataset, is generated from the data and the statistic is calculated. This subset is generated with replacement, and thus each data point can be resampled multiple times or not at all. The process is repeated, usually 500 to 1000 times. The computed values for the statistic form an estimate of the sampling distribution of the statistics.\nFor example, in a sample of 50 values you want to bootstrap the median. 
You generate a subset of the sample with 50 elements and calculate the median. Repeat this 500 times, so that you have at least 500 values for the median. To calculate the 90% confidence interval for the median, the sample of medians is sorted into ascending order and the value of the 25th median is the lower confidence limit, while the value of the 475th median is the upper confidence limit (the 5th and 95th percentiles of the bootstrap distribution).\nThe plots generated are the series and the histograms for mean, median and mid-range. For uniformly distributed values, mid-range has the smallest variance.", "from pandas.tools.plotting import bootstrap_plot\ndata = pd.Series(np.random.rand(1000));\n \nbootstrap_plot(data, size=50, samples=500, color='grey') ", "2.16 RadViz Plot\nRadViz is a way to visualise multivariate data; to visualise n-dimensional points into a two dimensional space. In this case, the mapping is not linear. The technique is based on a simple spring tension minimization algorithm.\nImagine $n$ points, $S_1,S_2,...S_n$ arranged to be equally spaced around the circumference of the unit circle. Now suppose a set of $n$ springs, each fixed at one end to one of these points and at the other end to a puck. Finally, assume the stiffness constant (as in Hooke's law) of the $j$th spring is $x_{ij}$, the $j$th variable of data point $i$. \nIf the puck is released and allowed to reach equilibrium position, the coordinates of this position, $(u_i, v_i)^{T}$, are the projection in the two dimensional space of the point $(x_{i1}, x_{i2}, ... x_{in})^{T}$ in the $n$-dimensional space. If the $(u_i, v_i)^{T}$ is computed for $i=1,2...n$ and the points plotted, a visualisation of the $n$-dimensional dataset in the two dimensions is achieved.\nTo understand more about the projection of the $n$-dimensional space into the two dimensional one, consider the forces acting on the puck. When the puck is in equilibrium, there are no resultant forces acting on it (their sum is 0). 
Denoting the position vectors of $S_1$ to $S_n$ by $\\mathbf{S_1}$ to $\\mathbf{S_n}$ and putting $\\mathbf{u_{i}}=(u_i, v_i)^{T}$ we have:\n$$\\sum_{j=1}^{n}(\\mathbf{S}_{j}-\\mathbf{u}_{i})x_{ij} = 0 $$\nwhich, when solved for $\\mathbf{u}_{i}$, gives:\n$$\\mathbf{u}_{i} = \\sum_{j=1}^{n}w_{ij}\\mathbf{S}_j$$\nwhere \n$$ w_{ij} = \\left( \\sum_{j=1}^{n} x_{ij} \\right)^{-1} x_{ij}$$\nThis means that for each $i$ (i.e. dataframe row), $\\mathbf{u_i}$ is the weighted mean of the $\\mathbf{S_j}$'s whose weights are the $n$ variables for case $i$ normalised to unity. \nN.B. This normalisation makes the mapping $\\mathcal{R}^{n}\\rightarrow \\mathcal{R}^{2}$ non-linear.", "from pandas.tools.plotting import radviz\n\nradviz(new_df, 'Species')", "----\n3. Data Holders\nNow let's see some 'data wrangling' commands with the pandas framework. These are to re-shape, select and extract information from a given dataset.\nLet's dive a bit into the different data structures\n3.1. Series\nA Series is in fact a 1D array with labels that holds a specific data type (double, int, string etc). The labels (axis labels) are referred to as the index. \nTo create a Series you need data and an index definition. \n\n\nData can be:\n\ndictionary\nndarray\nscalar value (e.g. 5)\n\n\n\nIndex can be:\n\nany list of strings \n\n\n\nIf no index is included in the definition, the indices will be numerical from 0 to len(ndarray)-1.\n3.1.1. Series from ndarray", "s = pd.Series(np.random.randn(5), index = ['a','b','c','d','e']) \n # 5 is the # of observations\n # only one axis in tuple -> only 1D series\n\ns\n\ns.plot(kind='kde')", "this does not work for a 2D ndarray", "#s = pd.Series(np.random.randn(5,5)) # this returns an exception : Data must be 1-dimensional", "3.1.2. 
Series from dictionary\nWhen generating from a dictionary, the keys are taken as the index.\nIf an extra index argument is given, values are matched to the index by key, while index entries without a matching key are filled with NaN's.", "d = {'a':125., 'b':500, 'c':400, 'e':240}; d\n\ns2 = pd.Series(d); s2\n\ns2.plot(kind='bar', alpha=0.8, color='pink', ylim=(0, 1000))\n\ns3 = pd.Series(d, index=['a','b','c','d','e','f']); s3\n\ns3.plot(kind='bar', alpha=0.8, color='cyan', ylim=(0, 1000))", "So indeed the series are filled with NaN's and the plots with 0's!\n\n3.1.3. Series from scalar\nNow introducing series with scalar input, the values will all be identical.", "s4 = pd.Series(5, index=['a','b','c']); s4", "Series can behave as ndarrays or as dictionaries including all the operations!\nFor example", "s2[3]\n\ns2[:4]\n\ns2.median(); s2.mean(); s2.std(); s2.quantile(0.50)\n\ns2[s2>s2.median()]\n\ns2['a']\n\n'a' in s2", "Also vectorised operations can be done. The difference with ndarray is that the result is aligned automatically based on the labels.", "s+s2; s2*2; np.exp(s)\n\ns5 = pd.Series(np.random.randn(5), name='something'); s5.name", "3.2 DataFrames\nOn the other side, we have DataFrames. A dataframe is like an Excel spreadsheet or a SQL table. DataFrames accept many different kinds of inputs\n\nDictionary of 1D ndarrays, lists, dicts or Series\n2D numpy.ndarray\nstructured or record ndarray\nseries\nanother DataFrame\n\nApart from data, the input arguments can be columns and index.\n3.2.1 DataFrame from dict of Series or dicts\nThe result index will be the union of the indexes of the various Series. 
Nested dicts will be converted to Series first.", "d = {'one' : pd.Series([1.,2.,3.], index = ['a','b','c']),\n 'two' : pd.Series([1., 2., 3., 4.], index=['a','b','c','d']) }; d\n\ndf = pd.DataFrame(d); df\n\npd.DataFrame(d, index = ['d','b','a']) # rearanging stuff\n\npd.DataFrame(d, index = ['d','b','a'], columns=['two' , 'three']) # there is no 'three' col", "The index and columns can be accessed directly:", "df.index\n\ndf.columns", "3.2.2 DataFrame from dict of ndarrays/lists", "d = {'one' : [1.,2.,3.,4.],\n 'two' : [4.,3.,2.,1.]}; d \n\npd.DataFrame(d)\n\npd.DataFrame(d, index=['a','b','c','d'])", "3.2.3 DataFrame from structured/record arrays\nThese arrays are made using named indices.\nFor example", "x = np.array([(1,2,'Hello'), (2,3,'World') ],\n dtype=[('foo','i4'),('bar','f4'), ('baz', 'S10')]) # structured array\n\nx[0]\n\nx['foo']\n\nx.dtype.names\n\nx[['foo','bar']] # access multiple fields simultaneously", "", "y = np.rec.array([(1,2.,'Hello'),(2,3.,\"World\")], \n dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]) ## record array\n\ny[0]\n\ny['foo']\n\ny.dtype.names\n\ny[['foo','bar']] # access multiple fields simultaneously\n\ny[1].baz # access one element", "So now making a DF from such arrays:", "data = np.zeros((2,), dtype=[('A', 'i4'),('B', 'f4'),('C', 'a10')])\n\ndata[:] = [(1,2.,'Hello'), (2,3.,\"World\")]\n\npd.DataFrame(data)\n\npd.DataFrame(data, index=['first', 'second'])\n\npd.DataFrame(data, columns=['C', 'A', 'B'])", "3.2.4 DataFrame from list of dicts\nNow make a list of dictionaries", "data2 = [{'a': 1, 'b': 2}, {'a': 5, 'b': 10, 'c': 20}]\n\npd.DataFrame(data2)", "3.2.5 DataFrame from dict of tuples\nNow make a dict containing tuples:", "pd.DataFrame({('a', 'b'): {('A', 'B'): 1, ('A', 'C'): 2},\n ('a', 'a'): {('A', 'C'): 3, ('A', 'B'): 4},\n ('a', 'c'): {('A', 'B'): 5, ('A', 'C'): 6},\n ('b', 'a'): {('A', 'C'): 7, ('A', 'B'): 8},\n ('b', 'b'): {('A', 'D'): 9, ('A', 'B'): 10}})", "3.2.6 DataFrame from Series\nThis is including 
various series for the creation of a DF\n\n\n4. Dealing with dates and problematic data\nNow let's make one dataframe and see what we can do when we have dates and NaN values in it.", "# Create the csv file\nout = open(\"file.csv\", 'w');\ns = '''Date,Temperature_city_1,Temperature_city_2,Temperature_city_3,Which_destination\\n\n20140910,80,32,40,1\\n\n20140911,100,50,36,2\\n\n20140912,102,55,46,1\\n\n20140912,60,20,35,3\\n\n20140914,60,,32,3\\n\n20140914,,57,42,2''';\nout.write(s);\nout.close();\n\n# read it in pandas\ndf_csv = pd.read_csv(\"file.csv\", sep=','); df_csv", "4.1 Automatic Formatting of \"date strings\"\nThe read_csv() function has an option that specifies which column contains dates. Then these dates can be parsed and formatted accordingly. \nJust include the argument parse_dates=[list of columns with dates]", "# read it in pandas and process dates\ndf_csv2 = pd.read_csv(\"file.csv\", sep=',', parse_dates=[0]); df_csv2", "4.2 Handle NaNs\nThere are various ways of dealing with NaN values within pandas. But first one must find them!\n4.2.1 Find NaN Values\nThere are various ways of finding NaN values. 
Mainly these are revolving around the isnull() and notnull() functions.", "df_nulls = pd.read_csv(\"file.csv\", sep=',', parse_dates=[0]); df_nulls\n\npd.isnull(df_nulls['Temperature_city_1']) # looking for nulls in one column\n\ndf_nulls['Temperature_city_1'].isnull() # same thing different format\n\ndf_nulls.notnull() # look for actual values (NON NaN's) in the whole df", "4.2.1.1 NaN and None\nIt is very important to remember that NaN is not None :", "print np.nan;\nnp.nan == None", "Also NaN are not comparable, this means that", "np.nan == np.nan", "Pandas/Numpy use this to separate None and NaN, since", "None == None", "4.2.2 Fill NaN with fixed value\nOne may choose to replace NaN with a certain fixed value (fillna(val))", "# read it in pandas and process dates\ndf_csv3 = pd.read_csv(\"file.csv\", sep=',', parse_dates=[0]);\ndf_csv3.fillna(-1)", "keep in mind that this does not replace the values in the original DataFrame, but it must be stored in a copy. \nSo let's replace and see the values:", "df_csv4 = df_csv3.fillna(-1); df_csv4", "the df_csv4 has indeed replaced NaN with -1, but what happened to df_csv3?", "df_csv3", "df_csv3 is the original DF and has not replaced its NaN values.\n\n4.2.3 Fill NaN with mean or median\nOne can replace the NaN values with the column mean or median so as to minimise the guessing error. This can easily be done by", "print type(df_csv3.mean(axis=0));\ndf_csv3.mean(axis=0)\n\ndf_csv5 = df_csv3.fillna(df_csv3.mean(axis=0)); df_csv5", "N.B. The argument axis=0 computes the means down the rows, so one mean is obtained per column. With axis=1 the calculation spans the columns and one result per row is obtained.\n\n4.2.4 Drop NaN rows\nOne might choose to drop the NaN values; this can be done by", "df_csv6 = df_csv3.dropna(); df_csv6", "4.2.5 Interpolate NaN\nSeries and DataFrames have the interpolate method, which performs linear interpolation (by default) for the missing datapoints", "df_nulls\n\ndf_nulls.count()\n\ndf_nulls.interpolate().count() # first interpolate the NaN\n\ndf_nulls.interpolate()", "to see how this interpolation is done we can plot it", "df_nulls.plot(title='no linear interpolation')\ndf_nulls.interpolate().plot(title='with linear interpolation')", "4.2.6 Calculations with NaNs\nMissing values do propagate naturally in operations with Pandas Objects", "## Let's make some dfs with NaNs\na1 = pd.Series(np.random.randn(3), index = ['e','f','h']);\na2 = pd.Series(np.random.randn(5), index = ['a','c','e','f','h']);\na = {'one': a1, 'two': a2};\ndfa = pd.DataFrame(a); \nprint dfa\n\nb1 = pd.Series(np.random.randn(2), index = ['e','f']);\nb2 = pd.Series(np.random.randn(5), index = ['a','c','e','f','h']);\nb3 = pd.Series(np.random.randn(5), index = ['a','c','e','f','h']);\nb = {'one':b1, 'two':b2, 'three':b3};\ndfb = pd.DataFrame(b);\nprint dfb\n\ndfa+dfb ## NaN are propagated properly\n\n(dfa+dfb).mean(axis=0) # mean of each column", "Functions like cumsum() (cumulative sum over requested axis) and cumprod() (cumulative product over requested axis) ignore the NaNs. Similarly the groupby()", "dfa.cumsum()\n\ndfa.groupby('one').mean()", "4.3 Reading Bad Input Files\nThere is also the case - especially in the real world - that the dataset which is loaded is bad or erroneous. When loading such a dataset with read_csv() the program will exit with an error. A workaround is to ignore bad lines by the error_bad_lines option. 
\nAssume that we have the following bad dataset:", "f = open('bad_dataset.csv', 'w')\nf.write('Val1,Val2,Val3\\n')\nf.write('1,1,1\\n')\nf.write('2,2,2,2\\n') # 4 columns, 3 expected!\nf.write('3,3,3\\n')\nf.close()", "Now let's load it as always", "bdf = pd.read_csv('bad_dataset.csv')", "Notice the CParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4\nNow trying with the option:", "bdf = pd.read_csv('bad_dataset.csv', error_bad_lines=False) #i.e. \"I don't want a parser check\"\n\nbdf", "So now line (2,2,2,2) is dropped. N.B.: Notice the change of the indices
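The error_bad_lines behaviour can be imitated with the standard csv module, which makes the mechanics explicit. This is an illustrative sketch, not how pandas implements it:

```python
# Sketch of "skip bad lines": keep only the rows whose field count
# matches the header, as error_bad_lines=False effectively does.
import csv
import io

raw = "Val1,Val2,Val3\n1,1,1\n2,2,2,2\n3,3,3\n"

reader = csv.reader(io.StringIO(raw))
header = next(reader)
good, bad = [], []
for row in reader:
    # a row with the wrong number of fields goes to the reject pile
    (good if len(row) == len(header) else bad).append(row)

print(good)  # the 4-field row is dropped, as pandas' warning describes
print(bad)
```

Keeping the rejected rows around (rather than silently dropping them, as pandas does) makes it easier to inspect what was wrong with the input.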
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Agent007/deepchem
examples/notebooks/protein_ligand_complex_notebook.ipynb
mit
[ "Basic Protein-Ligand Affinity Models\nTutorial: Use machine learning to model protein-ligand affinity.\nWritten by Evan Feinberg and Bharath Ramsundar\nCopyright 2016, Stanford University\nThis DeepChem tutorial demonstrates how to use machine learning for modeling protein-ligand binding affinity.\nOverview:\nIn this tutorial, you will trace an arc from loading a raw dataset to fitting a cutting edge ML technique for predicting binding affinities. This will be accomplished by writing simple commands to access the deepchem Python API, encompassing the following broad steps:\n\nLoading a chemical dataset, consisting of a series of protein-ligand complexes.\nFeaturizing each protein-ligand complex with various featurization schemes. \nFitting a series of models with these featurized protein-ligand complexes.\nVisualizing the results.\n\nFirst, let's point to a \"dataset\" file. This can come in the format of a CSV file or Pandas DataFrame. Regardless\nof file format, it must be columnar data, where each row is a molecular system, and each column represents\na different piece of information about that system. For instance, in this example, every row reflects a \nprotein-ligand complex, and the following columns are present: a unique complex identifier; the SMILES string\nof the ligand; the binding affinity (Ki) of the ligand to the protein in the complex; a Python list of all lines\nin a PDB file for the protein alone; and a Python list of all lines in a ligand file for the ligand alone.\nThis should become clearer with the example. 
(Make sure to set DISPLAY = True)", "%load_ext autoreload\n%autoreload 2\n%pdb off\n# set DISPLAY = True when running tutorial\nDISPLAY = False\n# set PARALLELIZE to true if you want to use ipyparallel\nPARALLELIZE = False\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport deepchem as dc\nfrom deepchem.utils import download_url\n\nimport os\n\ndownload_url(\"https://s3-us-west-1.amazonaws.com/deepchem.io/datasets/pdbbind_core_df.csv.gz\")\ndata_dir = os.path.join(dc.utils.get_data_dir())\ndataset_file= os.path.join(dc.utils.get_data_dir(), \"pdbbind_core_df.csv.gz\")\nraw_dataset = dc.utils.save.load_from_disk(dataset_file)", "Let's see what dataset looks like:", "print(\"Type of dataset is: %s\" % str(type(raw_dataset)))\nprint(raw_dataset[:5])\nprint(\"Shape of dataset is: %s\" % str(raw_dataset.shape))", "One of the missions of deepchem is to form a synapse between the chemical and the algorithmic worlds: to be able to leverage the powerful and diverse array of tools available in Python to analyze molecules. 
This ethos applies to visual as much as quantitative examination:", "import nglview\nimport tempfile\nimport os\nimport mdtraj as md\nimport numpy as np\nimport deepchem.utils.visualization\n#from deepchem.utils.visualization import combine_mdtraj, visualize_complex, convert_lines_to_mdtraj\n\ndef combine_mdtraj(protein, ligand):\n chain = protein.topology.add_chain()\n residue = protein.topology.add_residue(\"LIG\", chain, resSeq=1)\n for atom in ligand.topology.atoms:\n protein.topology.add_atom(atom.name, atom.element, residue)\n protein.xyz = np.hstack([protein.xyz, ligand.xyz])\n protein.topology.create_standard_bonds()\n return protein\n\ndef visualize_complex(complex_mdtraj):\n ligand_atoms = [a.index for a in complex_mdtraj.topology.atoms if \"LIG\" in str(a.residue)]\n binding_pocket_atoms = md.compute_neighbors(complex_mdtraj, 0.5, ligand_atoms)[0]\n binding_pocket_residues = list(set([complex_mdtraj.topology.atom(a).residue.resSeq for a in binding_pocket_atoms]))\n binding_pocket_residues = [str(r) for r in binding_pocket_residues]\n binding_pocket_residues = \" or \".join(binding_pocket_residues)\n\n traj = nglview.MDTrajTrajectory( complex_mdtraj ) # load file from RCSB PDB\n ngltraj = nglview.NGLWidget( traj )\n ngltraj.representations = [\n { \"type\": \"cartoon\", \"params\": {\n \"sele\": \"protein\", \"color\": \"residueindex\"\n } },\n { \"type\": \"licorice\", \"params\": {\n \"sele\": \"(not hydrogen) and (%s)\" % binding_pocket_residues\n } },\n { \"type\": \"ball+stick\", \"params\": {\n \"sele\": \"LIG\"\n } }\n ]\n return ngltraj\n\ndef visualize_ligand(ligand_mdtraj):\n traj = nglview.MDTrajTrajectory( ligand_mdtraj ) # load file from RCSB PDB\n ngltraj = nglview.NGLWidget( traj )\n ngltraj.representations = [\n { \"type\": \"ball+stick\", \"params\": {\"sele\": \"all\" } } ]\n return ngltraj\n\ndef convert_lines_to_mdtraj(molecule_lines):\n molecule_lines = molecule_lines.strip('[').strip(']').replace(\"'\",\"\").replace(\"\\\\n\", 
\"\").split(\", \")\n tempdir = tempfile.mkdtemp()\n molecule_file = os.path.join(tempdir, \"molecule.pdb\")\n with open(molecule_file, \"w\") as f:\n for line in molecule_lines:\n f.write(\"%s\\n\" % line)\n molecule_mdtraj = md.load(molecule_file)\n return molecule_mdtraj\n\nfirst_protein, first_ligand = raw_dataset.iloc[0][\"protein_pdb\"], raw_dataset.iloc[0][\"ligand_pdb\"]\nprotein_mdtraj = convert_lines_to_mdtraj(first_protein)\nligand_mdtraj = convert_lines_to_mdtraj(first_ligand)\ncomplex_mdtraj = combine_mdtraj(protein_mdtraj, ligand_mdtraj)\n\nngltraj = visualize_complex(complex_mdtraj)\nngltraj", "Now that we're oriented, let's use ML to do some chemistry. \nSo, step (2) will entail featurizing the dataset.\nThe available featurizations that come standard with deepchem are ECFP4 fingerprints, RDKit descriptors, NNScore-style descriptors, and hybrid binding pocket descriptors. Details can be found on deepchem.io.", "grid_featurizer = dc.feat.RdkitGridFeaturizer(\n voxel_width=16.0, feature_types=\"voxel_combined\", \n voxel_feature_types=[\"ecfp\", \"splif\", \"hbond\", \"pi_stack\", \"cation_pi\", \"salt_bridge\"], \n ecfp_power=5, splif_power=5, parallel=True, flatten=True)\ncompound_featurizer = dc.feat.CircularFingerprint(size=128)", "Note how we separate our featurizers into those that featurize individual chemical compounds, compound_featurizers, and those that featurize molecular complexes, complex_featurizers.\nNow, let's perform the actual featurization. Calling loader.featurize() will return an instance of class Dataset. Internally, loader.featurize() (a) computes the specified features on the data, (b) transforms the inputs into X and y NumPy arrays suitable for ML algorithms, and (c) constructs a Dataset() instance that has useful methods, such as an iterator, over the featurized data. 
This is a little complicated, so we will use MoleculeNet to featurize the PDBBind core set for us.", "PDBBIND_tasks, (train_dataset, valid_dataset, test_dataset), transformers = dc.molnet.load_pdbbind_grid()", "Now, we conduct a train-test split. If you'd like, you can choose splittype=\"scaffold\" instead to perform a train-test split based on Bemis-Murcko scaffolds.\nWe generate separate instances of the Dataset() object to hermetically seal the train dataset from the test dataset. This style lends itself easily to validation-set type hyperparameter searches, which we will illustrate in a separate section of this tutorial. \nThe performance of many ML algorithms hinges greatly on careful data preprocessing. Deepchem comes standard with a few options for such preprocessing.\nNow, we're ready to do some learning! \nTo fit a deepchem model, first we instantiate one of the provided (or user-written) model classes. In this case, we have created a convenience class to wrap around any ML model available in scikit-learn that can in turn be used to interoperate with deepchem. 
To instantiate an SklearnModel, you will need (a) task_types, (b) model_params, another dict as illustrated below, and (c) a model_instance defining the type of model you would like to fit, in this case a RandomForestRegressor.", "from sklearn.ensemble import RandomForestRegressor\n\nsklearn_model = RandomForestRegressor(n_estimators=100)\nmodel = dc.models.SklearnModel(sklearn_model)\nmodel.fit(train_dataset)\n\nfrom deepchem.utils.evaluate import Evaluator\nimport pandas as pd\n\nmetric = dc.metrics.Metric(dc.metrics.r2_score)\n\nevaluator = Evaluator(model, train_dataset, transformers)\ntrain_r2score = evaluator.compute_model_performance([metric])\nprint(\"RF Train set R^2 %f\" % (train_r2score[\"r2_score\"]))\n\nevaluator = Evaluator(model, valid_dataset, transformers)\nvalid_r2score = evaluator.compute_model_performance([metric])\nprint(\"RF Valid set R^2 %f\" % (valid_r2score[\"r2_score\"]))", "In this simple example, in a few intuitive lines of code, we traced the machine learning arc from featurizing a raw dataset to fitting and evaluating a model. \nHere, we featurized only the ligand. The signal we observed in R^2 reflects the ability of circular fingerprints and random forests to learn general features that make ligands \"drug-like.\"", "predictions = model.predict(test_dataset)\nprint(predictions)\n\n# TODO(rbharath): This cell visualizes the ligand with highest predicted activity. Commenting it out for now. Fix this later\n#from deepchem.utils.visualization import visualize_ligand\n\n#top_ligand = predictions.iloc[0]['ids']\n#ligand1 = convert_lines_to_mdtraj(dataset.loc[dataset['complex_id']==top_ligand]['ligand_pdb'].values[0])\n#if DISPLAY:\n# ngltraj = visualize_ligand(ligand1)\n# ngltraj\n\n# TODO(rbharath): This cell visualizes the ligand with lowest predicted activity. Commenting it out for now. 
Fix this later\n#worst_ligand = predictions.iloc[predictions.shape[0]-2]['ids']\n#ligand1 = convert_lines_to_mdtraj(dataset.loc[dataset['complex_id']==worst_ligand]['ligand_pdb'].values[0])\n#if DISPLAY:\n# ngltraj = visualize_ligand(ligand1)\n# ngltraj", "The protein-ligand complex view.\nThe preceding simple example, in a few intuitive lines of code, traces the machine learning arc from featurizing a raw dataset to fitting and evaluating a model. \nIn this next section, we illustrate deepchem's modularity, and thereby the ease with which one can explore different featurization schemes, different models, and combinations thereof, to achieve the best performance on a given dataset. We will demonstrate this by examining protein-ligand interactions. \nIn the previous section, we featurized only the ligand. The signal we observed in R^2 reflects the ability of circular fingerprints and random forests to learn general features that make ligands \"drug-like.\" In this section, we demonstrate how to use hyperparameter searching to find a higher-scoring model.", "def rf_model_builder(model_params, model_dir):\n sklearn_model = RandomForestRegressor(**model_params)\n return dc.models.SklearnModel(sklearn_model, model_dir)\n\nparams_dict = {\n \"n_estimators\": [10, 50, 100],\n \"max_features\": [\"auto\", \"sqrt\", \"log2\", None],\n}\n\nmetric = dc.metrics.Metric(dc.metrics.r2_score)\noptimizer = dc.hyper.HyperparamOpt(rf_model_builder)\nbest_rf, best_rf_hyperparams, all_rf_results = optimizer.hyperparam_search(\n params_dict, train_dataset, valid_dataset, transformers,\n metric=metric)\n\n%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nrf_predicted_test = best_rf.predict(test_dataset)\nrf_true_test = test_dataset.y\nplt.scatter(rf_predicted_test, rf_true_test)\nplt.xlabel('Predicted pIC50s')\nplt.ylabel('True pIC50s')\nplt.title(r'RF predicted pIC50 vs. 
True pIC50')\nplt.xlim([2, 11])\nplt.ylim([2, 11])\nplt.plot([2, 11], [2, 11], color='k')\nplt.show()" ]
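To illustrate what a hyperparameter search like the one above does under the hood, here is a toy pure-Python sketch: enumerate every parameter combination, score each candidate on a validation set, and keep the best. The `evaluate` stub below is a stand-in invented for illustration, not deepchem's API:

```python
# Toy grid search in the spirit of dc.hyper.HyperparamOpt: the model
# builder and scoring function here are illustrative stubs, not deepchem.
from itertools import product

def evaluate(params, valid_data):
    # stand-in "validation score"; a real search would fit a model
    # with these params and score it on the validation set
    n_estimators, max_features = params
    return n_estimators - abs(len(valid_data) - (max_features or 0))

params_dict = {
    "n_estimators": [10, 50, 100],
    "max_features": [2, 4, None],
}

valid_data = [0.1, 0.2, 0.3, 0.4]
best_score, best_params = None, None
for combo in product(params_dict["n_estimators"], params_dict["max_features"]):
    score = evaluate(combo, valid_data)
    if best_score is None or score > best_score:
        best_score, best_params = score, combo

print(best_params, best_score)
```

The important design point mirrored here is that model selection uses the validation set only; the test set is held back for the final scatter plot, exactly as in the tutorial.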
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
JasonJWilliamsNY/biocoding_2015
lessons/biocoding_2015_pythonlab_04.ipynb
cc0-1.0
[ "Review of lists, loops, and more...\nHere is another broken piece of code. Using what you learned from yesterday's lessons \n\nfix what is broken\nmake comments to explain what is going on line-by-line (talk to the duck)\nif you have time make some improvements!", "# build a random dna sequence...\n\nfrom numpy import random\nfinal_sequence_length = eighty\ninitial_sequence_length = 81\ndna_sequence = ''\nmy_nucleotides = [a,t,g,c]\nmy_nucleotide_probs = [0.25,0.25,0.25,0.3]\nwhile initial_sequence_length < final_sequence_length:\nnucleotide = random.choice(my_nucleotides,p=my_nucleotide_p)\ndna_sequence = dna_sequence + nucleotide\ninitial_sequence_length = initial_sequence_length + 1\nprint '>random_sequence (length:%d)\\n%s' % (len(dna_sequence), dna_sequence)", "Moving on to dictionaries\nWe are just about done with the data structures we will use in this Python course. There are other Python data structures, and many more concepts, but for now we will consider dictionaries:\nCheck the type of my_dictionary in the cell below", "my_dictionary = {}", "As with lists, a dictionary can be initialized empty, this time with the braces {}. Dictionaries have some properties in common with lists and strings, but there are some key differences. Dictionaries are:\n\niterable\nunordered\nindexed (by keys)\n\nTry printing the following dictionary based on some of the data recorded in a chart we used earlier\n|Group|Number of Mice|Average Mass(g)|Group Id|\n|-----|--------------|---------------|--------|\n|alpha|3|17.0|CGJ28371|\n|beta|5|16.4|SJW99399|\n|gamma|6|17.8|PWS29382|", "my_mouse_exp = {'alpha_id':'CGJ28371',\n 'alpha_avr_mass':17.0,\n 'alpha_no_mice':'3'}\n\nprint my_mouse_exp", "Based on the chart above, add the values for Group Id, Average Mass(g), and Number of Mice for the beta and gamma groups using parallel variable names (e.g. 
group_id...):", "my_mouse_exp = {'alpha_id':'CGJ28371',\n 'alpha_avr_mass':17.0,\n 'alpha_no_mice':'3',}\n\n", "You can also explicitly add individual entries to your dictionary:", "my_mouse_exp['alpha_experimenter'] = 'CGJ'\nprint my_mouse_exp", "You can also use variables and other string slicing methods we used earlier:", "beta_group_id = 'SJW99399'\n\nmy_mouse_exp['beta_experimenter'] = beta_group_id[0:3]\nprint my_mouse_exp", "One important property of a dictionary is that you can call entries explicitly (rather than referencing indices like 0, 1, or 2). First, here is some terminology for a dictionary object:\ndictionary = { key:value }\nA dictionary consists of some key (this is the name you choose for your entry) and some value (this is the entry itself). Generally keys are strings, but could be almost anything except a list. A value can be just about anything.\nYou can call a specific value from a dictionary by giving its key:", "my_mouse_exp['alpha_id']", "You can also see a list of the keys a dictionary has:", "my_mouse_exp.keys()", "You can also check the values:", "my_mouse_exp.values()", "Translating RNA > Protein\nRNA codons are translated into amino acids according to a standard genetic code (see chart below); amino acids are represented here by their one-letter abbreviations.", "amino_acids = {\n 'AUA':'I', 'AUC':'I', 'AUU':'I', 'AUG':'M',\n 'ACA':'T', 'ACC':'T', 'ACG':'T', 'ACU':'T',\n 'AAC':'N', 'AAU':'N', 'AAA':'K', 'AAG':'K',\n 'AGC':'S', 'AGU':'S', 'AGA':'R', 'AGG':'R',\n 'CUA':'L', 'CUC':'L', 'CUG':'L', 'CUU':'L',\n 'CCA':'P', 'CCC':'P', 'CCG':'P', 'CCU':'P',\n 'CAC':'H', 'CAU':'H', 'CAA':'Q', 'CAG':'Q',\n 'CGA':'R', 'CGC':'R', 'CGG':'R', 'CGU':'R',\n 'GUA':'V', 'GUC':'V', 'GUG':'V', 'GUU':'V',\n 'GCA':'A', 'GCC':'A', 'GCG':'A', 'GCU':'A',\n 'GAC':'D', 'GAU':'D', 'GAA':'E', 'GAG':'E',\n 'GGA':'G', 'GGC':'G', 'GGG':'G', 'GGU':'G',\n 'UCA':'S', 'UCC':'S', 'UCG':'S', 'UCU':'S',\n 'UUC':'F', 'UUU':'F', 'UUA':'L', 'UUG':'L',\n 'UAC':'Y', 
'UAU':'Y', 'UAA':'_', 'UAG':'_',\n 'UGC':'C', 'UGU':'C', 'UGA':'_', 'UGG':'W'\n}", "Using what you have learned so far\nWrite the appropriate code to translate an RNA string to a protein sequence:", "rna = 'AUGCAUGCGAAUGCAGCGGCUAGCAGACUGACUGUUAUGCUGGGAUCGUGCCGCUAG'\n\n\n#This may or may not be helpful, but remember\n#you can iterate over an arbitrary range of elements/numbers\n#using the range() function\n", "Does your code work on the following RNA sequence?", "rna = 'AUGCAAGACAGGGAUCUAUUUACGAUCAGGCAUCGAUCGAUCGAUGCUAGCUAGCGGGAUCGCACGAUACUAGCCCGAUGCUAGCUUUUAUGCUCGUAGCUGCCCGUACGUUAUUUAGCCUGCUGUGCGAAUGCAGCGGCUAGCAGACUGACUGUUAUGCUGGGAUCGUGCCGCUAG'", "Bonus: Can you translate this sequence in all 3 reading frames?", "rna = 'AUGCAAGACAGGGAUCUAUUUACGAUCAGGCAUCGAUCGAUCGAUGCUAGCUAGCGGGAUCGCACGAUACUAGCCCGAUGCUAGCUUUUAUGCUCGUAGCUGCCCGUACGUUAUUUAGCCUGCUGUGCGAAUGCAGCGGCUAGCAGACUGACUGUUAUGCUGGGAUCGUGCCGCUAG'", "Translating DNA to RNA to protein\nNow that we have worked with each of the major biological sequences, you should be able to translate a DNA sequence into an RNA sequence, and then into a protein:", "dna = 'ACGTCGTTTACGTACGGGAGTCGTACGATCCTCCCGTAGCTCGGGATCGTTTTATCGTAGCGGGAT'", "FUNctions\nNow that we have learned how to do several things together, let's wrap them into a function. We have already been using several functions, so we can write our own:", "def print_double():\n print \"Hello world\"\n print \"Hello world\"\n \nprint_double()", "Two things are happening in this function, let's examine some of its elements:\ndef function_name( ):<br>### (indent) instruction_block\n\ndef - this special word indicates you are defining a new Python function\nfunction_name(): - this is the arbitrary name for your function, followed by parentheses\ninstruction_block - following an indent, this is a block of instructions. Everything included in this function must be indented to this level.\n\nThere is also one other element\nfunction_name( ):\nThis line is the function call. 
As long as a function is defined above this call, the function will be run.\nfix this code block, and call the function twice:", "print_tripple()\n\ndef print_tripple():\n print \"Hello world\"\n print \"Hello world\"\n print \"Hello world\"", "Local vs global\nOne other important element of functions is that variables defined inside the function are not defined outside of the function:", "def prints_dna_len():\n dna = 'gatgcattatcgtgagc'\n \n\nprints_dna_len()\n \nprint dna\nprint len(dna)", "Variables defined inside the function are local to that function. Conversely, variables defined outside of the function are global and are defined everywhere in the block of code. This concept is referred to as namespace.", "more_dna = 'aaatcgatttttttt'\n\ndef prints_dna_twice():\n print more_dna\n print more_dna\n \nprints_dna_twice()", "Returning values\nFunctions can also themselves return a value for use in other parts of your code. In this case the return keyword explicitly returns the value rna_1.", "def dna_to_rna():\n dna_1 = 'agcttttacgtcgatcctgcta'\n rna_1 = dna_1.replace('t','u')\n return rna_1\n\n\nprint dna_to_rna()\nprint type(dna_to_rna())", "Challenge: Write some functions to do the following:\nWrite a function that calculates the GC content of a DNA string\nWrite a function that generates a random string of DNA of a random length\nParameters\nFunctions can also accept one or more parameters; we could expand our definition like this:\ndef function_name(parameter1, parameter2, parameterN):<br>### (indent) instruction_block\nThe parameter can have any name: the name of the parameter becomes the name of a local variable for use within the function:", "def prints_rna_sequence(rna):\n if 't' not in rna:\n print rna\n else:\n print 'this is not rna!'\n \nprints_rna_sequence('agaucgagcuacgua')\nprints_rna_sequence('atcgcgcatcgatct')\n", "In the statement above, we tell the function that it should be called with one parameter,\nand that parameter should be 
assigned the value rna within the function.", "#The .find method returns the string index (e.g. string[x]) if the search string is found\n# my_string = 'abc'\n# my_string.find('a') would have the value 0 (e.g. string[0])\n# If there is no match to the search, the .find() method returns -1\n\n\ndef print_dna_and_rna(dna,rna):\n if dna.find('t')!= -1:\n print 'here is your dna %s' % dna\n elif dna.find('u')!= -1:\n print 'This is RNA!: %s' % dna\n if rna.find('t')!= -1:\n print 'This is DNA!: %s' % rna\n elif rna.find('u')!= -1:\n print 'here is your rna %s' % rna\n\n\nprint_dna_and_rna('agatccgtcg','uagcugacug')\nprint_dna_and_rna('uagcugacug','agatccgtcg')", "Function parameters can also be made optional. To make a parameter optional, you must give it a default value. That value could be the keyword None, an empty value like '', or any other default value:", "def print_dna_and_rna(dna, rna='', number_of_times_to_print=1):\n if dna.find('t')!= -1:\n print 'here is your dna %s \\n' % dna * number_of_times_to_print \n elif dna.find('u')!= -1:\n print 'This is RNA!: %s \\n' % dna * number_of_times_to_print \n if rna.find('t')!= -1:\n print 'This is DNA!: %s \\n' % rna * number_of_times_to_print \n elif rna.find('u')!= -1:\n print 'here is your rna %s \\n'% rna * number_of_times_to_print \n\nprint_dna_and_rna('agatccgtcg',)\nprint_dna_and_rna('uagcugacug','agatccgtcg',2)\nprint_dna_and_rna('agatccgtcg', number_of_times_to_print=6)", "Challenge: Write a function that generates a random string of DNA of a random length: use optional parameters to set the length of the strings and the probabilities of the nucleotides" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
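The RNA-to-protein exercise in the notebook above can be sketched as follows. This is only one possible solution: the codon table here is a truncated stand-in for the notebook's full `amino_acids` dictionary, and `translate_rna` is a helper name chosen for illustration.

```python
# A possible solution sketch for the RNA -> protein exercise.
# Truncated codon table for illustration; the notebook defines the full one.
amino_acids = {'AUG': 'M', 'UUU': 'F', 'GCU': 'A', 'UAA': '_'}


def translate_rna(rna, codon_table):
    """Translate an RNA string codon-by-codon, stopping at a stop ('_') codon."""
    protein = ''
    for i in range(0, len(rna) - 2, 3):   # step through the string 3 bases at a time
        aa = codon_table[rna[i:i + 3]]
        if aa == '_':                     # stop codon: end translation
            break
        protein += aa
    return protein


print(translate_rna('AUGUUUGCUUAA', amino_acids))  # -> MFA
```

For the three-reading-frames bonus, the same function can simply be called on `rna`, `rna[1:]`, and `rna[2:]`.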
keras-team/keras-io
examples/vision/ipynb/mixup.ipynb
apache-2.0
[ "MixUp augmentation for image classification\nAuthor: Sayak Paul<br>\nDate created: 2021/03/06<br>\nLast modified: 2021/03/06<br>\nDescription: Data augmentation using the mixup technique for image classification.\nIntroduction\nmixup is a domain-agnostic data augmentation technique proposed in mixup: Beyond Empirical Risk Minimization\nby Zhang et al. It's implemented with the following formulas:\n\n(Note that the lambda values are values within the [0, 1] range and are sampled from the\nBeta distribution.)\nThe technique is quite systematically named - we are literally mixing up the features and\ntheir corresponding labels. Implementation-wise it's simple. Neural networks are prone\nto memorizing corrupt labels. mixup relaxes this by\ncombining different features with one another (the same happens for the labels too) so that\na network does not get overconfident about the relationship between the features and\ntheir labels.\nmixup is specifically useful when we are not sure about selecting a set of augmentation\ntransforms for a given dataset, medical imaging datasets, for example. mixup can be\nextended to a variety of data modalities such as computer vision, natural language\nprocessing, speech, and so on.\nThis example requires TensorFlow 2.4 or higher.\nSetup", "import numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\nfrom tensorflow.keras import layers", "Prepare the dataset\nIn this example, we will be using the FashionMNIST dataset. 
But this same recipe can\nbe used for other classification datasets as well.", "(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()\n\nx_train = x_train.astype(\"float32\") / 255.0\nx_train = np.reshape(x_train, (-1, 28, 28, 1))\ny_train = tf.one_hot(y_train, 10)\n\nx_test = x_test.astype(\"float32\") / 255.0\nx_test = np.reshape(x_test, (-1, 28, 28, 1))\ny_test = tf.one_hot(y_test, 10)", "Define hyperparameters", "AUTO = tf.data.AUTOTUNE\nBATCH_SIZE = 64\nEPOCHS = 10", "Convert the data into TensorFlow Dataset objects", "# Put aside a few samples to create our validation set\nval_samples = 2000\nx_val, y_val = x_train[:val_samples], y_train[:val_samples]\nnew_x_train, new_y_train = x_train[val_samples:], y_train[val_samples:]\n\ntrain_ds_one = (\n tf.data.Dataset.from_tensor_slices((new_x_train, new_y_train))\n .shuffle(BATCH_SIZE * 100)\n .batch(BATCH_SIZE)\n)\ntrain_ds_two = (\n tf.data.Dataset.from_tensor_slices((new_x_train, new_y_train))\n .shuffle(BATCH_SIZE * 100)\n .batch(BATCH_SIZE)\n)\n# Because we will be mixing up the images and their corresponding labels, we will be\n# combining two shuffled datasets from the same training data.\ntrain_ds = tf.data.Dataset.zip((train_ds_one, train_ds_two))\n\nval_ds = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(BATCH_SIZE)\n\ntest_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE)", "Define the mixup technique function\nTo perform the mixup routine, we create new virtual datasets using the training data from\nthe same dataset, and apply a lambda value within the [0, 1] range sampled from a Beta distribution\n— such that, for example, new_x = lambda * x1 + (1 - lambda) * x2 (where\nx1 and x2 are images) and the same equation is applied to the labels as well.", "\ndef sample_beta_distribution(size, concentration_0=0.2, concentration_1=0.2):\n gamma_1_sample = tf.random.gamma(shape=[size], alpha=concentration_1)\n gamma_2_sample = 
tf.random.gamma(shape=[size], alpha=concentration_0)\n return gamma_1_sample / (gamma_1_sample + gamma_2_sample)\n\n\ndef mix_up(ds_one, ds_two, alpha=0.2):\n # Unpack two datasets\n images_one, labels_one = ds_one\n images_two, labels_two = ds_two\n batch_size = tf.shape(images_one)[0]\n\n # Sample lambda and reshape it to do the mixup\n l = sample_beta_distribution(batch_size, alpha, alpha)\n x_l = tf.reshape(l, (batch_size, 1, 1, 1))\n y_l = tf.reshape(l, (batch_size, 1))\n\n # Perform mixup on both images and labels by combining a pair of images/labels\n # (one from each dataset) into one image/label\n images = images_one * x_l + images_two * (1 - x_l)\n labels = labels_one * y_l + labels_two * (1 - y_l)\n return (images, labels)\n", "Note that here, we are combining two images to create a single one. Theoretically,\nwe can combine as many as we want, but that comes at an increased computation cost. In\ncertain cases, it may not help improve performance either.\nVisualize the new augmented dataset", "# First create the new dataset using our `mix_up` utility\ntrain_ds_mu = train_ds.map(\n lambda ds_one, ds_two: mix_up(ds_one, ds_two, alpha=0.2), num_parallel_calls=AUTO\n)\n\n# Let's preview 9 samples from the dataset\nsample_images, sample_labels = next(iter(train_ds_mu))\nplt.figure(figsize=(10, 10))\nfor i, (image, label) in enumerate(zip(sample_images[:9], sample_labels[:9])):\n ax = plt.subplot(3, 3, i + 1)\n plt.imshow(image.numpy().squeeze())\n print(label.numpy().tolist())\n plt.axis(\"off\")", "Model building", "\ndef get_training_model():\n model = tf.keras.Sequential(\n [\n layers.Conv2D(16, (5, 5), activation=\"relu\", input_shape=(28, 28, 1)),\n layers.MaxPooling2D(pool_size=(2, 2)),\n layers.Conv2D(32, (5, 5), activation=\"relu\"),\n layers.MaxPooling2D(pool_size=(2, 2)),\n layers.Dropout(0.2),\n layers.GlobalAvgPool2D(),\n layers.Dense(128, activation=\"relu\"),\n layers.Dense(10, activation=\"softmax\"),\n ]\n )\n return model\n", "For the sake 
of reproducibility, we serialize the initial random weights of our shallow\nnetwork.", "initial_model = get_training_model()\ninitial_model.save_weights(\"initial_weights.h5\")", "1. Train the model with the mixed up dataset", "model = get_training_model()\nmodel.load_weights(\"initial_weights.h5\")\nmodel.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])\nmodel.fit(train_ds_mu, validation_data=val_ds, epochs=EPOCHS)\n_, test_acc = model.evaluate(test_ds)\nprint(\"Test accuracy: {:.2f}%\".format(test_acc * 100))", "2. Train the model without the mixed up dataset", "model = get_training_model()\nmodel.load_weights(\"initial_weights.h5\")\nmodel.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])\n# Notice that we are NOT using the mixed up dataset here\nmodel.fit(train_ds_one, validation_data=val_ds, epochs=EPOCHS)\n_, test_acc = model.evaluate(test_ds)\nprint(\"Test accuracy: {:.2f}%\".format(test_acc * 100))", "Readers are encouraged to try out mixup on different datasets from different domains and\nexperiment with the lambda parameter. 
You are strongly advised to check out the\noriginal paper as well - the authors present several ablation studies on mixup\nshowing how it can improve generalization, as well as show their results of combining\nmore than two images to create a single one.\nNotes\n\nWith mixup, you can create synthetic examples — especially when you lack a large\ndataset - without incurring high computational costs.\nLabel smoothing and mixup usually do not work well together because label smoothing\nalready modifies the hard labels by some factor.\nmixup does not work well when you are using Supervised Contrastive\nLearning (SCL) since SCL expects the true labels\nduring its pre-training phase.\nA few other benefits of mixup include (as described in the paper) robustness to\nadversarial examples and stabilized GAN (Generative Adversarial Networks) training.\nThere are a number of data augmentation techniques that extend mixup such as\nCutMix and AugMix." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
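The mixup routine in the notebook above is framework-independent; here is a NumPy-only sketch of the same idea (my own illustrative rewrite, not code from the Keras example — the helper names `sample_beta` and `mix_up` are chosen for illustration). It mirrors the TF version: a Beta(alpha, alpha) lambda is built from two Gamma draws and broadcast over the image and label dimensions.

```python
import numpy as np


def sample_beta(size, alpha=0.2, rng=None):
    # Beta(alpha, alpha) draw built from two Gamma draws,
    # mirroring sample_beta_distribution in the example above
    if rng is None:
        rng = np.random.default_rng(0)
    g1 = rng.gamma(shape=alpha, size=size)
    g2 = rng.gamma(shape=alpha, size=size)
    return g1 / (g1 + g2)


def mix_up(x1, y1, x2, y2, alpha=0.2, rng=None):
    # lambda is reshaped to broadcast over image dims for x, label dims for y
    lam = sample_beta(len(x1), alpha, rng)
    x_l = lam.reshape(-1, 1, 1, 1)
    y_l = lam.reshape(-1, 1)
    images = x1 * x_l + x2 * (1 - x_l)
    labels = y1 * y_l + y2 * (1 - y_l)
    return images, labels


# Two toy batches of 4 single-channel 28x28 "images" with one-hot labels
rng = np.random.default_rng(42)
x1, x2 = rng.random((4, 28, 28, 1)), rng.random((4, 28, 28, 1))
y1, y2 = np.eye(10)[[0, 1, 2, 3]], np.eye(10)[[4, 5, 6, 7]]
images, labels = mix_up(x1, y1, x2, y2)
```

Because each output is a convex combination of its two inputs, the mixed labels still sum to 1 and the mixed pixel values stay inside the input range.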
alexwhb/algorithm-practice
jupyter-notebooks/miscellaneous/Pandas intro.ipynb
mit
[ "import numpy as np\nimport pandas as pd\n\n\nx = pd.Series([1, 2, 3, 4, 5])\nx\n\nx+100\n\n(x ** 2) * 100\n\nx > 2\n\nlarger_than_2 = x > 2\n\nlarger_than_2.any()\n\nlarger_than_2.all()\n\nlarger_than_2.argmax()\n\ndef f(x): \n if x % 2 == 0:\n return x * 2\n else:\n return x * 3\n\nx.apply(f)\n\n%%timeit\n\nds = pd.Series(range(1000))\n\nfor counter in range(len(ds)):\n ds[counter] = f(ds[counter])", "Wow, apply is much faster than the explicit loop:", "%%timeit\n\nds = pd.Series(range(1000))\nds = ds.apply(f)", "These objects are reference objects, so you have to do y = x.copy() to avoid overwriting x when changing a value in y.", "x.describe()\n\ny = pd.Series(np.random.random(100) * 1000)\ny.describe()\n\nimport matplotlib.pyplot as plt\n\nnew = np.reshape(np.array(y.tolist()), (10,10))\nplt.imshow(new)\nplt.colorbar()\nplt.show()\n\ndata = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\ndf = pd.DataFrame(data, columns=[\"x\"])\n\ndf\n\ndf['x squared'] = df['x'] ** 2\n\ndf\n\ndf['is even'] = df['x'] % 2 == 0\n\ndf['odd even'] = df['is even'].map({False:\"odd\", True:\"even\"})\n\ndf\n\ncf = df.drop(\"is even\", 1)\n\ncf\n\ndf\n\ndf[['x', 'is even']]\n\ndf[df['is even'] == False]\n\ndf[(df['is even'] == False) | (df['x squared'] > 25)]\n\ndf[(df['is even'] == False) & (df['x squared'] > 25)]\n\ndf.describe()" ]
[ "code", "markdown", "code", "markdown", "code" ]
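The boolean-mask filters at the end of the pandas notebook above work the same way in plain NumPy, which is what pandas builds on. A small sketch (the array `x` here stands in for the notebook's `df['x']` column; no pandas required):

```python
import numpy as np

x = np.arange(1, 11)        # stands in for df['x']
x_squared = x ** 2
is_even = x % 2 == 0

# As in pandas, combined conditions need elementwise & / | (not `and` / `or`)
# and parentheses around each comparison, because of operator precedence.
odd_and_large = x[(~is_even) & (x_squared > 25)]
odd_or_large = x[(~is_even) | (x_squared > 25)]

print(odd_and_large)  # -> [7 9]
```

Dropping the parentheses around the comparisons raises an error, since `&` binds tighter than `>` — the same gotcha the pandas expressions in the notebook avoid.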
metpy/MetPy
v0.5/_downloads/Inverse_Distance_Verification.ipynb
bsd-3-clause
[ "%matplotlib inline", "Inverse Distance Verification: Cressman and Barnes\nCompare inverse distance interpolation methods\nTwo popular interpolation schemes that use inverse distance weighting of observations are the\nBarnes and Cressman analyses. The Cressman analysis is relatively straightforward and uses\nthe ratio between distance of an observation from a grid cell and the maximum allowable\ndistance to calculate the relative importance of an observation for calculating an\ninterpolation value. Barnes uses the inverse exponential ratio of each distance between\nan observation and a grid cell and the average spacing of the observations over the domain.\nAlgorithmically:\n\nA KDTree data structure is built using the locations of each observation.\nAll observations within a maximum allowable distance of a particular grid cell are found in\n O(log n) time.\nUsing the weighting rules for Cressman or Barnes analyses, the observations are given a\n proportional value, primarily based on their distance from the grid cell.\nThe sum of these proportional values is calculated and this value is used as the\n interpolated value.\nSteps 2 through 4 are repeated for each grid cell.", "import matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.spatial import cKDTree\nfrom scipy.spatial.distance import cdist\n\nfrom metpy.gridding.gridding_functions import calc_kappa\nfrom metpy.gridding.interpolation import barnes_point, cressman_point\nfrom metpy.gridding.triangles import dist_2\n\nplt.rcParams['figure.figsize'] = (15, 10)\n\n\ndef draw_circle(x, y, r, m, label):\n nx = x + r * np.cos(np.deg2rad(list(range(360))))\n ny = y + r * np.sin(np.deg2rad(list(range(360))))\n plt.plot(nx, ny, m, label=label)", "Generate random x and y coordinates, and observation values proportional to x * y.\nSet up two test grid locations at (30, 30) and (60, 60).", "np.random.seed(100)\n\npts = np.random.randint(0, 100, (10, 2))\nxp = pts[:, 0]\nyp = pts[:, 1]\nzp = xp * xp / 
1000\n\nsim_gridx = [30, 60]\nsim_gridy = [30, 60]", "Set up a cKDTree object and query all of the observations within \"radius\" of each grid point.\nThe variable indices represents the index of each matched coordinate within the\ncKDTree's data list.", "grid_points = np.array(list(zip(sim_gridx, sim_gridy)))\n\nradius = 40\nobs_tree = cKDTree(list(zip(xp, yp)))\nindices = obs_tree.query_ball_point(grid_points, r=radius)", "For grid 0, we will use Cressman to interpolate its value.", "x1, y1 = obs_tree.data[indices[0]].T\ncress_dist = dist_2(sim_gridx[0], sim_gridy[0], x1, y1)\ncress_obs = zp[indices[0]]\n\ncress_val = cressman_point(cress_dist, cress_obs, radius)", "For grid 1, we will use barnes to interpolate its value.\nWe need to calculate kappa--the average distance between observations over the domain.", "x2, y2 = obs_tree.data[indices[1]].T\nbarnes_dist = dist_2(sim_gridx[1], sim_gridy[1], x2, y2)\nbarnes_obs = zp[indices[1]]\n\nave_spacing = np.mean((cdist(list(zip(xp, yp)), list(zip(xp, yp)))))\nkappa = calc_kappa(ave_spacing)\n\nbarnes_val = barnes_point(barnes_dist, barnes_obs, kappa)", "Plot all of the affiliated information and interpolation values.", "for i, zval in enumerate(zp):\n plt.plot(pts[i, 0], pts[i, 1], '.')\n plt.annotate(str(zval) + ' F', xy=(pts[i, 0] + 2, pts[i, 1]))\n\nplt.plot(sim_gridx, sim_gridy, '+', markersize=10)\n\nplt.plot(x1, y1, 'ko', fillstyle='none', markersize=10, label='grid 0 matches')\nplt.plot(x2, y2, 'ks', fillstyle='none', markersize=10, label='grid 1 matches')\n\ndraw_circle(sim_gridx[0], sim_gridy[0], m='k-', r=radius, label='grid 0 radius')\ndraw_circle(sim_gridx[1], sim_gridy[1], m='b-', r=radius, label='grid 1 radius')\n\nplt.annotate('grid 0: cressman {:.3f}'.format(cress_val), xy=(sim_gridx[0] + 2, sim_gridy[0]))\nplt.annotate('grid 1: barnes {:.3f}'.format(barnes_val), xy=(sim_gridx[1] + 2, sim_gridy[1]))\n\nplt.axes().set_aspect('equal', 'datalim')\nplt.legend()", "For each point, we will do a manual check 
of the interpolation values by doing a step by\nstep and visual breakdown.\nPlot the grid point, observations within radius of the grid point, their locations, and\ntheir distances from the grid point.", "plt.annotate('grid 0: ({}, {})'.format(sim_gridx[0], sim_gridy[0]),\n xy=(sim_gridx[0] + 2, sim_gridy[0]))\nplt.plot(sim_gridx[0], sim_gridy[0], '+', markersize=10)\n\nmx, my = obs_tree.data[indices[0]].T\nmz = zp[indices[0]]\n\nfor x, y, z in zip(mx, my, mz):\n d = np.sqrt((sim_gridx[0] - x)**2 + (y - sim_gridy[0])**2)\n plt.plot([sim_gridx[0], x], [sim_gridy[0], y], '--')\n\n xave = np.mean([sim_gridx[0], x])\n yave = np.mean([sim_gridy[0], y])\n\n plt.annotate('distance: {}'.format(d), xy=(xave, yave))\n plt.annotate('({}, {}) : {} F'.format(x, y, z), xy=(x, y))\n\nplt.xlim(0, 80)\nplt.ylim(0, 80)\nplt.axes().set_aspect('equal', 'datalim')", "Step through the cressman calculations.", "dists = np.array([22.803508502, 7.21110255093, 31.304951685, 33.5410196625])\nvalues = np.array([0.064, 1.156, 3.364, 0.225])\n\ncres_weights = (radius * radius - dists * dists) / (radius * radius + dists * dists)\ntotal_weights = np.sum(cres_weights)\nproportion = cres_weights / total_weights\nvalue = values * proportion\n\nval = cressman_point(cress_dist, cress_obs, radius)\n\nprint('Manual cressman value for grid 1:\\t', np.sum(value))\nprint('Metpy cressman value for grid 1:\\t', val)", "Now repeat for grid 1, except use barnes interpolation.", "plt.annotate('grid 1: ({}, {})'.format(sim_gridx[1], sim_gridy[1]),\n xy=(sim_gridx[1] + 2, sim_gridy[1]))\nplt.plot(sim_gridx[1], sim_gridy[1], '+', markersize=10)\n\nmx, my = obs_tree.data[indices[1]].T\nmz = zp[indices[1]]\n\nfor x, y, z in zip(mx, my, mz):\n d = np.sqrt((sim_gridx[1] - x)**2 + (y - sim_gridy[1])**2)\n plt.plot([sim_gridx[1], x], [sim_gridy[1], y], '--')\n\n xave = np.mean([sim_gridx[1], x])\n yave = np.mean([sim_gridy[1], y])\n\n plt.annotate('distance: {}'.format(d), xy=(xave, yave))\n plt.annotate('({}, {}) : {} 
F'.format(x, y, z), xy=(x, y))\n\nplt.xlim(40, 80)\nplt.ylim(40, 100)\nplt.axes().set_aspect('equal', 'datalim')", "Step through barnes calculations.", "dists = np.array([9.21954445729, 22.4722050542, 27.892651362, 38.8329756779])\nvalues = np.array([2.809, 6.241, 4.489, 2.704])\n\nweights = np.exp(-dists**2 / kappa)\ntotal_weights = np.sum(weights)\nvalue = np.sum(values * (weights / total_weights))\n\nprint('Manual barnes value:\\t', value)\nprint('Metpy barnes value:\\t', barnes_point(barnes_dist, barnes_obs, kappa))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
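The Cressman step-through in the MetPy example above can be checked with a few lines of standalone NumPy. `cressman_weights` is my own helper name, but the weight formula (R² − d²)/(R² + d²) and the distances/values are taken from the notebook's manual check:

```python
import numpy as np


def cressman_weights(dists, radius):
    # Cressman weight: (R^2 - d^2) / (R^2 + d^2) for observations within R
    d2 = np.asarray(dists) ** 2
    r2 = radius ** 2
    return (r2 - d2) / (r2 + d2)


# Distances and observation values from the notebook's manual check
dists = np.array([22.803508502, 7.21110255093, 31.304951685, 33.5410196625])
values = np.array([0.064, 1.156, 3.364, 0.225])
radius = 40

w = cressman_weights(dists, radius)
interp = np.sum(values * w) / np.sum(w)   # weighted average, about 1.055
print(interp)
```

The closest observation (distance ≈ 7.2) dominates the weighted average, which is exactly the behavior the example describes: weight falls off smoothly from 1 at the grid point to 0 at the cutoff radius.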
r-karasik/lanl-auth-cybersecurity
exploration.ipynb
mit
[ "%matplotlib inline\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Make the graphs a bit prettier, and bigger\npd.set_option('display.mpl_style', 'default')\nplt.rcParams['figure.figsize'] = (15, 5)\nplt.rcParams['font.family'] = 'sans-serif'\n\n# This is necessary to show lots of columns in pandas 0.12. \n# Not necessary in pandas 0.13.\npd.set_option('display.width', 5000) \npd.set_option('display.max_columns', 60)\n\ndf=pd.read_csv('sample2v.csv', header=None)\n\ndf\n\ndf[8].value_counts()\n\n114./10400", "Conclusion\nIn this dataset, Fails take roughly 1%. \nIf it is a representative sample of the real data, then running machine learning on the whole set will just not make sense. Any classifier that just predict \"Success\" for every line will attain 99% accuracy. \nThis means that I need to collect data for \"Fail\" cases and randomly sample data for \"Success\" in roughly equal amounts and then look at machine learning (classifier) for such sets to see the true accuracy of the algorithm.\nNext Steps\nI want to examine this dataset to see if there are any obvious correlations and to understand the data I have in my columns.\nauthentication type", "df[5].unique()", "logon type", "df[6].unique()", "authentication orientation", "df[7].unique()\n\ndf.groupby([5,8]).count()\n\ndf.groupby([6,8]).count()\n\ndf.groupby([7,8]).count()\n\ndf.groupby([6,7]).count()", "This is a simple way to see if there are any labels in columns 5-7 that predict the outcome. (answer: not really as the count for most events that can be interpreted this way is too low). 
Also I am trying to see if there are any interesting correlations between labels.", "print len(df[3].unique()), len(df[4].unique())", "Potentially too many variables to be used in analysis", "df[\"source_user\"], df[\"source_domain\"] = zip(*df[1].str.split('@').tolist())\n\ndf[\"source_user\"]=df[\"source_user\"].str.rstrip('$')\n\ndf[\"destination_user\"], df[\"destination_domain\"] = zip(*df[2].str.split('@').tolist())\ndf[\"destination_user\"]=df[\"destination_user\"].str.rstrip('$')\n\ndf['same_user']=(df['destination_user']==df['source_user'])\ndf['same_domain']=(df['destination_domain']==df['source_domain'])\n\ndf['same_user'].value_counts()\n\ndf['same_domain'].value_counts()\n\ndf['source_domain'].unique()\n\ndf['destination_domain'].unique()\n\ndf['source_user'].unique()\n\ndf['destination_user'].unique()", "Potentially too many variables. I now want to explore what users I have in addition to C-numbers and U-numbers. (C=computer and U=user?)", "good=df[~df.source_user.str.startswith(\"U\")] \ngood=good.source_user[~good.source_user.str.startswith('C')]\ngood.unique()\n\ngood=df[~np.logical_or(df.destination_user.str.startswith(\"U\"), df.destination_user.str.startswith(\"C\"))] \n#good=good.destination_user[~good.destination_user.str.contains('C')]\ngood.destination_user.unique()", "Idea: one can expand this column into 6 categories: C-users, U-users, 'ANONYMOUS LOGON', 'LOCAL_SERVICE', 'SYSTEM', 'NETWORK SERVICE'", "dd=df['destination_domain'].str.startswith('C')\nprint min(df['destination_domain'][dd].str.slice(1).astype(int)), max(df['destination_domain'][dd].str.slice(1).astype(int))\ndd=df[~df.destination_domain.str.startswith('C')]\nprint dd.destination_domain.unique()\n\nsd=df['source_domain'].str.startswith('C')\nprint min(df['source_domain'][sd].str.slice(1).astype(int)), max(df['source_domain'][sd].str.slice(1).astype(int))\nsd=df[~df.source_domain.str.startswith('C')]\nprint sd.source_domain.unique()", "Conclusion\nThis dataset contains 
columns of categorical data (aside from time). To work with this data, each label should be converted to its own column with values 1 (True) if the label applies and 0 (False) otherwise. Some columns (5-7) contain ~10 labels, whereas other columns contain tens of thousands of labels. I will ignore the 2nd class of labels on the first pass. Instead I will consider when these labels coincide. This way I will prevent my set of features from exploding. Also, the 2nd class of labels most likely comes from some ordering of computers and users in the lab. Considering whether one wants to authenticate to the same computer or to a different computer should matter more for authentication success than the specific computer label.", "df['source_user_comp_same']=(df[3]==df['source_user'])\ndf['destination_user_comp_same']=(df['destination_user']==df[4])\ndf['same_comp']=(df[3]==df[4])\ndf['source_domain_comp_same']=(df[3]==df['source_domain'])\ndf['destination_domain_comp_same']=(df['destination_domain']==df[4])\n\ndf['source_user_comp_same'].value_counts()\n\ndf['destination_user_comp_same'].value_counts()\n\ndf['same_comp'].value_counts()\n\ndf['source_domain_comp_same'].value_counts()\n\ndf['destination_domain_comp_same'].value_counts()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
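The conversion described in the notebook's conclusion — one 0/1 indicator column per categorical label — is usually done with `pd.get_dummies` in pandas. A minimal pure-Python sketch of the same idea (`one_hot_columns` and the toy `auth_type` rows are names invented here for illustration):

```python
def one_hot_columns(rows, column):
    """Expand one categorical column into 0/1 indicator columns."""
    labels = sorted({row[column] for row in rows})
    out = []
    for row in rows:
        expanded = dict(row)
        for label in labels:
            # one new column per observed label, 1 where the label applies
            expanded['{}={}'.format(column, label)] = 1 if row[column] == label else 0
        del expanded[column]           # drop the original categorical column
        out.append(expanded)
    return out


events = [{'auth_type': 'Kerberos'}, {'auth_type': 'NTLM'}, {'auth_type': 'Kerberos'}]
print(one_hot_columns(events, 'auth_type'))
```

For the low-cardinality columns (5-7) this stays small; for the ten-thousand-label computer/user columns it would explode the feature set, which is exactly why the notebook replaces them with coincidence features like `same_comp` instead.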
machinelearningnanodegree/stanford-cs231
solutions/vijendra/assignment3/ImageGeneration.ipynb
mit
[ "Image Generation\nIn this notebook we will continue our exploration of image gradients using the deep model that was pretrained on TinyImageNet. We will explore various ways of using these image gradients to generate images. We will implement class visualizations, feature inversion, and DeepDream.", "# As usual, a bit of setup\n\nimport time, os, json\nimport numpy as np\nfrom scipy.misc import imread, imresize\nimport matplotlib.pyplot as plt\n\nfrom cs231n.classifiers.pretrained_cnn import PretrainedCNN\nfrom cs231n.data_utils import load_tiny_imagenet\nfrom cs231n.image_utils import blur_image, deprocess_image, preprocess_image\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2", "TinyImageNet and pretrained model\nAs in the previous notebook, load the TinyImageNet dataset and the pretrained model.", "data = load_tiny_imagenet('cs231n/datasets/tiny-imagenet-100-A', subtract_mean=True)\nmodel = PretrainedCNN(h5_file='cs231n/datasets/pretrained_model.h5')", "# Class visualization\nBy starting with a random noise image and performing gradient ascent on a target class, we can generate an image that the network will recognize as the target class. This idea was first presented in [1]; [2] extended this idea by suggesting several regularization techniques that can improve the quality of the generated image.\nConcretely, let $I$ be an image and let $y$ be a target class. Let $s_y(I)$ be the score that a convolutional network assigns to the image $I$ for class $y$; note that these are raw unnormalized scores, not class probabilities. 
We wish to generate an image $I^*$ that achieves a high score for the class $y$ by solving the problem\n$$\nI^* = \\arg\\max_I s_y(I) - R(I)\n$$\nwhere $R$ is a (possibly implicit) regularizer. We can solve this optimization problem using gradient descent, computing gradients with respect to the generated image. We will use (explicit) L2 regularization of the form\n$$\nR(I) = \\lambda \\|I\\|_2^2\n$$\nand implicit regularization as suggested by [2] by periodically blurring the generated image. We can solve this problem using gradient ascent on the generated image.\nIn the cell below, complete the implementation of the create_class_visualization function.\n[1] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. \"Deep Inside Convolutional Networks: Visualising\nImage Classification Models and Saliency Maps\", ICLR Workshop 2014.\n[2] Yosinski et al, \"Understanding Neural Networks Through Deep Visualization\", ICML 2015 Deep Learning Workshop", "def create_class_visualization(target_y, model, **kwargs):\n \"\"\"\n Perform optimization over the image to generate class visualizations.\n \n Inputs:\n - target_y: Integer in the range [0, 100) giving the target class\n - model: A PretrainedCNN that will be used for generation\n \n Keyword arguments:\n - learning_rate: Floating point number giving the learning rate\n - blur_every: An integer; how often to blur the image as a regularizer\n - l2_reg: Floating point number giving L2 regularization strength on the image;\n this is lambda in the equation above.\n - max_jitter: How much random jitter to add to the image as regularization\n - num_iterations: How many iterations to run for\n - show_every: How often to show the image\n \"\"\"\n \n learning_rate = kwargs.pop('learning_rate', 10000)\n blur_every = kwargs.pop('blur_every', 1)\n l2_reg = kwargs.pop('l2_reg', 1e-6)\n max_jitter = kwargs.pop('max_jitter', 4)\n num_iterations = kwargs.pop('num_iterations', 100)\n show_every = kwargs.pop('show_every', 25)\n \n X = 
np.random.randn(1, 3, 64, 64)\n for t in xrange(num_iterations):\n # As a regularizer, add random jitter to the image\n ox, oy = np.random.randint(-max_jitter, max_jitter+1, 2)\n X = np.roll(np.roll(X, ox, -1), oy, -2)\n\n dX = None\n ############################################################################\n # TODO: Compute the image gradient dX of the image with respect to the #\n # target_y class score. This should be similar to the fooling images. Also #\n # add L2 regularization to dX and update the image X using the image #\n # gradient and the learning rate. #\n ############################################################################\n pass\n ############################################################################\n # END OF YOUR CODE #\n ############################################################################\n \n # Undo the jitter\n X = np.roll(np.roll(X, -ox, -1), -oy, -2)\n \n # As a regularizer, clip the image\n X = np.clip(X, -data['mean_image'], 255.0 - data['mean_image'])\n \n # As a regularizer, periodically blur the image\n if t % blur_every == 0:\n X = blur_image(X)\n \n # Periodically show the image\n if t % show_every == 0:\n plt.imshow(deprocess_image(X, data['mean_image']))\n plt.gcf().set_size_inches(3, 3)\n plt.axis('off')\n plt.show()\n return X", "You can use the code above to generate some cool images! An example is shown below. Try to generate a cool-looking image. If you want you can try to implement the other regularization schemes from Yosinski et al, but it isn't required.", "target_y = 43 # Tarantula\nprint data['class_names'][target_y]\nX = create_class_visualization(target_y, model, show_every=25)", "Feature Inversion\nIn an attempt to understand the types of features that convolutional networks learn to recognize, a recent paper [1] attempts to reconstruct an image from its feature representation. 
We can easily implement this idea using image gradients from the pretrained network.\nConcretely, given an image $I$, let $\\phi_\\ell(I)$ be the activations at layer $\\ell$ of the convolutional network $\\phi$. We wish to find an image $I^*$ with a similar feature representation as $I$ at layer $\\ell$ of the network $\\phi$ by solving the optimization problem\n$$\nI^* = \\arg\\min_{I'} \\|\\phi_\\ell(I) - \\phi_\\ell(I')\\|_2^2 + R(I')\n$$\nwhere $\\|\\cdot\\|_2^2$ is the squared Euclidean norm. As above, $R$ is a (possibly implicit) regularizer. We can solve this optimization problem using gradient descent, computing gradients with respect to the generated image. We will use (explicit) L2 regularization of the form\n$$\nR(I') = \\lambda \\|I'\\|_2^2\n$$\ntogether with implicit regularization by periodically blurring the image, as recommended by [2].\nImplement this method in the function below.\n[1] Aravindh Mahendran, Andrea Vedaldi, \"Understanding Deep Image Representations by Inverting them\", CVPR 2015\n[2] Yosinski et al, \"Understanding Neural Networks Through Deep Visualization\", ICML 2015 Deep Learning Workshop", "def invert_features(target_feats, layer, model, **kwargs):\n \"\"\"\n Perform feature inversion in the style of Mahendran and Vedaldi 2015, using\n L2 regularization and periodic blurring.\n \n Inputs:\n - target_feats: Image features of the target image, of shape (1, C, H, W);\n we will try to generate an image that matches these features\n - layer: The index of the layer from which the features were extracted\n - model: A PretrainedCNN that was used to extract features\n \n Keyword arguments:\n - learning_rate: The learning rate to use for gradient descent\n - num_iterations: The number of iterations to use for gradient descent\n - l2_reg: The strength of L2 regularization to use; this is lambda in the\n equation above.\n - blur_every: How often to blur the image as implicit regularization; set\n to 0 to disable blurring.\n - show_every: 
How often to show the generated image; set to 0 to disable\n showing intermediate results.\n \n Returns:\n - X: Generated image of shape (1, 3, 64, 64) that matches the target features.\n \"\"\"\n learning_rate = kwargs.pop('learning_rate', 10000)\n num_iterations = kwargs.pop('num_iterations', 500)\n l2_reg = kwargs.pop('l2_reg', 1e-7)\n blur_every = kwargs.pop('blur_every', 1)\n show_every = kwargs.pop('show_every', 50)\n \n X = np.random.randn(1, 3, 64, 64)\n for t in xrange(num_iterations):\n ############################################################################\n # TODO: Compute the image gradient dX of the reconstruction loss with #\n # respect to the image. You should include L2 regularization penalizing #\n # large pixel values in the generated image using the l2_reg parameter; #\n # then update the generated image using the learning_rate from above. #\n ############################################################################\n pass\n ############################################################################\n # END OF YOUR CODE #\n ############################################################################\n \n # As a regularizer, clip the image\n X = np.clip(X, -data['mean_image'], 255.0 - data['mean_image'])\n \n # As a regularizer, periodically blur the image\n if (blur_every > 0) and t % blur_every == 0:\n X = blur_image(X)\n\n if (show_every > 0) and (t % show_every == 0 or t + 1 == num_iterations):\n plt.imshow(deprocess_image(X, data['mean_image']))\n plt.gcf().set_size_inches(3, 3)\n plt.axis('off')\n plt.title('t = %d' % t)\n plt.show()", "Shallow feature reconstruction\nAfter implementing the feature inversion above, run the following cell to try and reconstruct features from the fourth convolutional layer of the pretrained model. 
You should be able to reconstruct the features using the provided optimization parameters.", "filename = 'kitten.jpg'\nlayer = 3 # layers start from 0 so these are features after 4 convolutions\nimg = imresize(imread(filename), (64, 64))\n\nplt.imshow(img)\nplt.gcf().set_size_inches(3, 3)\nplt.title('Original image')\nplt.axis('off')\nplt.show()\n\n# Preprocess the image before passing it to the network:\n# subtract the mean, add a dimension, etc\nimg_pre = preprocess_image(img, data['mean_image'])\n\n# Extract features from the image\nfeats, _ = model.forward(img_pre, end=layer)\n\n# Invert the features\nkwargs = {\n 'num_iterations': 400,\n 'learning_rate': 5000,\n 'l2_reg': 1e-8,\n 'show_every': 100,\n 'blur_every': 10,\n}\nX = invert_features(feats, layer, model, **kwargs)", "Deep feature reconstruction\nReconstructing images using features from deeper layers of the network tends to give interesting results. In the cell below, try to reconstruct the best image you can by inverting the features after 7 layers of convolutions. You will need to play with the hyperparameters to try and get a good result.\nHINT: If you read the paper by Mahendran and Vedaldi, you'll see that reconstructions from deep features tend not to look much like the original image, so you shouldn't expect the results to look like the reconstruction above. 
You should be able to get an image that shows some discernable structure within 1000 iterations.", "filename = 'kitten.jpg'\nlayer = 6 # layers start from 0 so these are features after 7 convolutions\nimg = imresize(imread(filename), (64, 64))\n\nplt.imshow(img)\nplt.gcf().set_size_inches(3, 3)\nplt.title('Original image')\nplt.axis('off')\nplt.show()\n\n# Preprocess the image before passing it to the network:\n# subtract the mean, add a dimension, etc\nimg_pre = preprocess_image(img, data['mean_image'])\n\n# Extract features from the image\nfeats, _ = model.forward(img_pre, end=layer)\n\n# Invert the features\n# You will need to play with these parameters.\nkwargs = {\n 'num_iterations': 1000,\n 'learning_rate': 0,\n 'l2_reg': 0,\n 'show_every': 100,\n 'blur_every': 0,\n}\nX = invert_features(feats, layer, model, **kwargs)", "DeepDream\nIn the summer of 2015, Google released a blog post describing a new method of generating images from neural networks, and they later released code to generate these images.\nThe idea is very simple. We pick some layer from the network, pass the starting image through the network to extract features at the chosen layer, set the gradient at that layer equal to the activations themselves, and then backpropagate to the image. This has the effect of modifying the image to amplify the activations at the chosen layer of the network.\nFor DeepDream we usually extract features from one of the convolutional layers, allowing us to generate images of any resolution.\nWe can implement this idea using our pretrained network. 
The results probably won't look as good as Google's since their network is much bigger, but we should still be able to generate some interesting images.", "def deepdream(X, layer, model, **kwargs):\n \"\"\"\n Generate a DeepDream image.\n \n Inputs:\n - X: Starting image, of shape (1, 3, H, W)\n - layer: Index of layer at which to dream\n - model: A PretrainedCNN object\n \n Keyword arguments:\n - learning_rate: How much to update the image at each iteration\n - max_jitter: Maximum number of pixels for jitter regularization\n - num_iterations: How many iterations to run for\n - show_every: How often to show the generated image\n \"\"\"\n \n X = X.copy()\n \n learning_rate = kwargs.pop('learning_rate', 5.0)\n max_jitter = kwargs.pop('max_jitter', 16)\n num_iterations = kwargs.pop('num_iterations', 100)\n show_every = kwargs.pop('show_every', 25)\n \n for t in xrange(num_iterations):\n # As a regularizer, add random jitter to the image\n ox, oy = np.random.randint(-max_jitter, max_jitter+1, 2)\n X = np.roll(np.roll(X, ox, -1), oy, -2)\n\n dX = None\n ############################################################################\n # TODO: Compute the image gradient dX using the DeepDream method. You'll #\n # need to use the forward and backward methods of the model object to #\n # extract activations and set gradients for the chosen layer. After #\n # computing the image gradient dX, you should use the learning rate to #\n # update the image X. 
#\n ############################################################################\n pass\n ############################################################################\n # END OF YOUR CODE #\n ############################################################################\n \n # Undo the jitter\n X = np.roll(np.roll(X, -ox, -1), -oy, -2)\n \n # As a regularizer, clip the image\n mean_pixel = data['mean_image'].mean(axis=(1, 2), keepdims=True)\n X = np.clip(X, -mean_pixel, 255.0 - mean_pixel)\n \n # Periodically show the image\n if t == 0 or (t + 1) % show_every == 0:\n img = deprocess_image(X, data['mean_image'], mean='pixel')\n plt.imshow(img)\n plt.title('t = %d' % (t + 1))\n plt.gcf().set_size_inches(8, 8)\n plt.axis('off')\n plt.show()\n return X", "Generate some images!\nTry and generate a cool-looking DeepDream image using the pretrained network. You can try using different layers, or starting from different images. You can reduce the image size if it runs too slowly on your machine, or increase the image size if you are feeling ambitious.", "def read_image(filename, max_size):\n \"\"\"\n Read an image from disk and resize it so its larger side is max_size\n \"\"\"\n img = imread(filename)\n H, W, _ = img.shape\n if H >= W:\n img = imresize(img, (max_size, int(W * float(max_size) / H)))\n elif H < W:\n img = imresize(img, (int(H * float(max_size) / W), max_size))\n return img\n\nfilename = 'kitten.jpg'\nmax_size = 256\nimg = read_image(filename, max_size)\nplt.imshow(img)\nplt.axis('off')\n\n# Preprocess the image by converting to float, transposing,\n# and performing mean subtraction.\nimg_pre = preprocess_image(img, data['mean_image'], mean='pixel')\n\nout = deepdream(img_pre, 7, model, learning_rate=2000)" ]
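One step in the DeepDream record above deserves a worked check: backpropagating the activations themselves is the same as gradient ascent on the objective $\frac{1}{2}\|\phi_\ell(I)\|_2^2$. The toy NumPy verification below demonstrates this for a linear "layer"; the linear layer and all names here are illustrative assumptions, not the notebook's PretrainedCNN API.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 3))   # toy linear "layer": a = W @ x
x = rng.standard_normal(3)
a = W @ x                         # activations at the chosen layer

# DeepDream trick: feed the activations back as the upstream gradient.
dx_dream = W.T @ a

# Gradient of the objective 0.5 * ||a||^2 with respect to x:
# d/dx 0.5 * ||W x||^2 = W.T @ (W @ x)
dx_objective = W.T @ (W @ x)
```

For the linear case the two gradients coincide exactly, which is why "set the gradient equal to the activations" amplifies whatever the layer already responds to.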
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
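The gradient-ascent update that the class-visualization TODO above asks for can be sketched in isolation. This is a minimal NumPy version under the common convention that the L2 regularizer penalizes large pixel values, so its gradient is subtracted; the function name and the precomputed score gradient `dscore_dX` are assumptions for illustration, not part of the notebook's API.

```python
import numpy as np

def ascent_step(X, dscore_dX, learning_rate, l2_reg):
    """One gradient-ascent step on the image X.

    dscore_dX is the gradient of the target class score w.r.t. X;
    the explicit regularizer l2_reg * ||X||^2 is penalized, so its
    gradient 2 * l2_reg * X is subtracted from the ascent direction.
    """
    dX = dscore_dX - 2 * l2_reg * X
    return X + learning_rate * dX

# With a zero score gradient, the step only shrinks the image toward zero.
X = np.ones((1, 3, 4, 4))
X_new = ascent_step(X, np.zeros_like(X), learning_rate=1.0, l2_reg=0.1)
```

In the full implementation, `dscore_dX` would come from the model's backward pass after setting the gradient of the `target_y` score to 1.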
karlstroetmann/Artificial-Intelligence
Python/2 Constraint Solver/Constraint-Propagation-Solver.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open('../style.css') as f:\n css = f.read()\nHTML(css)", "If we want to have reproducible results, the environment variable PYTHONHASHSEED has to be set to a fixed value, for example to 0.\nBelow we check that this environment variable is set so that results are reproducible.\nIn order to set this variable we have to use the following sequence of commands in the anaconda shell.\nconda activate ai\nconda env config vars set PYTHONHASHSEED=0\nconda activate ai\nIt is necessary to reactivate the environment ai for the setting to take effect.", "import os\nos.getenv('PYTHONHASHSEED')\n\nassert hash(\"test\") == 4418353137104490830, \"Your Python hash seed is not correct, results might differ\"", "A Backtracking Solver with Constraint Propagation\nUtility Functions", "import ast", "The function collect_variables(expr) takes a string expr that can be interpreted as a Python expression as input and collects all variables occurring in expr. It takes care to exclude the names of built-in functions from the result.", "def collect_variables(expression): \n tree = ast.parse(expression)\n Variables = { node.id for node in ast.walk(tree) \n if isinstance(node, ast.Name) \n if node.id not in dir(__builtins__)\n }\n return frozenset(Variables)", "The function arb(S) takes a set S as input and returns an arbitrary element from \nthis set.", "def arb(S):\n for x in S:\n return x", "Backtracking is simulated by raising the Backtrack exception. We define this new class of exceptions so that we can distinguish Backtrack exceptions from ordinary exceptions. This is done by creating a new, empty class that is derived from the class Exception.", "class Backtrack(Exception):\n pass", "Given a list of sets L, the function union(L) returns the set of all elements occurring in some set $S$ that is itself a member of the list L, i.e. we have\n$$ \\texttt{union}(L) = \\{ x \\mid \\exists S \\in L : x \\in S \\}. 
$$", "def union(L):\n return { x for S in L\n for x in S\n }\n\nunion([ {1, 2}, {'a', 'b'}, {1, 'a'} ])", "The Constraint Propagation Solver\nThe procedure solve(P, lcv=False) takes two arguments:\n* P is a constraint satisfaction problem, i.e. P is a triple of the form \n $$ \\mathtt{P} = \\langle \\mathtt{Variables}, \\mathtt{Values}, \\mathtt{Constraints} \\rangle $$\n where \n - Variables is a set of strings which serve as variables,\n - Values is a set of values that can be assigned to the variables in the set Variables.\n - Constraints is a set of formulas from first order logic.\n Each of these formulas is called a constraint of P.\n The formulas are represented as strings.\n* lcv is a Boolean flag. If this flag is set to True, the least constraining value heuristic is used when choosing values. \n Otherwise, the values are chosen arbitrarily. \nInitially, the function solve checks that the set Constraints does not contain any variables that are not \nelements of the set Variables. Furthermore, it checks that all variables in Variables do indeed occur \nin one of the constraints. These two checks are useful to capture spelling mistakes.\nThen, the function solve converts the CSP P into an augmented CSP where every constraint $f$ is annotated with the variables occurring in $f$. \nThe most important data structure maintained by solve is the dictionary ValuesPerVar. For every variable $x$ occurring in a constraint of P, the expression $\\texttt{ValuesPerVar}[x]$ is the set of values that can be used to instantiate the variable $x$. 
Initially, \n$\\texttt{ValuesPerVar}[x]$ is set to Values, but as the search for a solution proceeds, the sets $\\texttt{ValuesPerVar}[x]$ are reduced by removing any values that cannot be part of a solution.\nThis way, the consequences of binding one variable to a value are propagated to the other variables.\nNext, the function solve divides the constraints into two groups:\n- The unary constraints are those that only contain a single variable.\nThe unary constraints can be solved immediately: \n If $f$ is a unary constraint containing only the variable $x$, the set $\\texttt{ValuesPerVar}[x]$ \n is reduced to the set of those values $v$ such that $\\texttt{eval}(f, \\{x\\mapsto v\\})$ is true.\n- The remaining constraints contain at least two different variables.\nAfter the unary constraints have been taken care of, backtrack_search is called to solve the remaining constraint satisfaction problem. The function backtrack_search uses both backtracking and constraint propagation to solve the remaining constraints.\nFurthermore, the most constrained variable heuristic and, if lcv is set to True, the least constraining value heuristic are used.", "def solve(P, lcv=False):\n Variables, Values, Constraints = P \n VarsInConstrs = union([ collect_variables(f) for f in Constraints ])\n MisspelledVars = (VarsInConstrs - Variables) | (Variables - VarsInConstrs)\n if len(MisspelledVars) > 0:\n print('Did you misspell any of the following Variables?')\n for v in MisspelledVars:\n print(v)\n Annotated = { (f, collect_variables(f)) for f in Constraints }\n ValuesPerVar = { v: Values for v in Variables }\n UnaryConstrs = { (f, V) for f, V in Annotated if len(V) == 1 }\n OtherConstrs = { (f, V) for f, V in Annotated if len(V) >= 2 }\n try:\n for f, V in UnaryConstrs:\n var = arb(V)\n ValuesPerVar[var] = solve_unary(f, var, ValuesPerVar[var])\n return backtrack_search({}, ValuesPerVar, OtherConstrs, lcv)\n except Backtrack:\n return None", "The function solve_unary takes three 
arguments:\n* f is a unary constraint, i.e. a constraint that contains only one variable,\n* x is the variable occurring in f, and \n* Values is the set of values that can be assigned to the variable x. \nThe function returns the subset of those values v from the set Values that can be substituted for x such that $\\texttt{eval}(f, \\{ x \\mapsto v \\})$ evaluates as True. If the unary constraint f is unsolvable, then the given CSP is unsolvable and an exception is raised.", "def solve_unary(f, x, Values):\n Legal = { value for value in Values if eval(f, { x: value }) }\n if len(Legal) == 0:\n raise Backtrack()\n return Legal", "The function backtrack_search takes four arguments:\n- Assignment is a partial variable assignment that is represented as a\n dictionary. Initially, this assignment will be the empty dictionary. \n Every recursive call of backtrack_search adds the assignment of one \n variable to the given assignment. \n- ValuesPerVar is a dictionary. For every variable x, ValuesPerVar[x] is the set of values \n that still might be assigned to x.\n- Constraints is a set of pairs of the form (F, V) where F is a constraint and V is the \n set of variables occurring in F.\n- lcv is a Boolean flag. If this flag is set to True, the least constraining value heuristic is used.\nThe function tries to solve the given CSP via backtracking. Instead of picking the variables arbitrarily, it uses \nthe most constrained variable heuristic and therefore instantiates first those variables that have the fewest\nremaining values. 
This way, a dead end in the search is discovered sooner.", "def backtrack_search(Assignment, ValuesPerVar, Constraints, lcv):\n if len(Assignment) == len(ValuesPerVar):\n return Assignment\n x = most_constrained_variable(Assignment, ValuesPerVar)\n if lcv and len(ValuesPerVar[x]) > 1:\n ValueList = least_constraining(x, ValuesPerVar, Assignment, Constraints)\n else:\n ValueList = ValuesPerVar[x]\n for v in ValueList: \n try:\n NewValues = propagate(x, v, Assignment, Constraints, ValuesPerVar)\n NewAssign = Assignment.copy()\n NewAssign[x] = v\n return backtrack_search(NewAssign, NewValues, Constraints, lcv)\n except Backtrack:\n continue\n raise Backtrack()", "The function most_constrained_variable takes two parameters:\n- Assignment is a partial variable assignment that assigns values to variables. It is represented as a dictionary.\n- ValuesPerVar is a dictionary that maps variables to the set of values that may be assigned to these variables,\n i.e. for every variable x, ValuesPerVar[x] is the set of values that can be assigned to the variable x\n without violating a constraint.\nThe function returns an unassigned variable x such that the number of values in ValuesPerVar[x] is minimal among all other unassigned variables.", "def most_constrained_variable(Assignment, ValuesPerVar):\n Unassigned = { (x, len(U)) for x, U in ValuesPerVar.items()\n if x not in Assignment\n }\n minSize = min(lenU for _, lenU in Unassigned)\n return arb({ x for x, lenU in Unassigned if lenU == minSize })", "We import math because this gives us access to the infinite value $\\infty$, which is available as math.inf.", "import math", "The function least_constraining takes four arguments:\n* x is a variable. \n* ValuesPerVar is a dictionary. 
For every variable var, ValuesPerVar[var] is the set of values that can be assigned to var.\n* Assignment is a partial variable assignment.\n* Constraints is a set of annotated constraints.\nThis function returns a list of values that can be substituted for the variable x.\nThis list is sorted so that the least constraining values are at the beginning of this list.", "def least_constraining(x, ValuesPerVar, Assignment, Constraints):\n NumbersValues = []\n for value in ValuesPerVar[x]:\n ReducedValues = ValuesPerVar.copy()\n num_removed = shrinkage(x, value, Assignment, ReducedValues, Constraints)\n if num_removed != math.inf:\n NumbersValues.append( (num_removed, value) )\n NumbersValues.sort(key=lambda p: p[0])\n return [val for _, val in NumbersValues]", "The function shrinkage takes 5 arguments:\n- x is a variable that has not yet been assigned a value.\n- value is a value that is to be assigned to the variable x.\n- Assignment is a partial variable assignment that does not assign a value to x.\n- ValuesPerVar is a dictionary that has variables as keys. For every variable z, ValuesPerVar[z] is the set of values that \n can still be assigned to the variable z.\n- Constraints is a set of pairs of the form (f, V) where f is a constraint and V is the set of variables occurring in f.\nThis function returns the shrinkage number, which is the number of values that need to be removed from the set \nValuesPerVar[y] for those variables y that are different from x if we assign value to the variable x. 
\nIf the assignment { x: value } results in any of the sets ValuesPerVar[y]\nbecoming empty, then the function returns math.inf in order to signal that the assignment { x: value } leads to an unsolvable problem.", "def shrinkage(x, value, Assignment, ValuesPerVar, Constraints):\n count = 0 # number of values removed from ValuesPerVar\n BoundVars = set(Assignment.keys())\n for f, Vars in Constraints:\n if x in Vars:\n UnboundVars = Vars - BoundVars - { x }\n if len(UnboundVars) == 1:\n y = arb(UnboundVars)\n Legal = set()\n for w in ValuesPerVar[y]:\n NewAssign = Assignment.copy()\n NewAssign[x] = value\n NewAssign[y] = w\n if eval(f, NewAssign):\n Legal.add(w)\n else:\n count += 1\n if len(Legal) == 0:\n return math.inf\n ValuesPerVar[y] = Legal # restrict the domain of y, not of x\n return count ", "The function propagate takes five arguments:\n- x is a variable,\n- v is a value that is supposed to be assigned to x.\n- Assignment is a partial assignment that contains assignments for variables that are different from x.\n- Constraints is a set of annotated constraints.\n- ValuesPerVar is a dictionary assigning sets of values to all variables. For every unassigned variable z, ValuesPerVar[z] is the set of values that still might be assigned to z.\nThe purpose of the function propagate is to compute how the sets ValuesPerVar[z] can be shrunk when the value v is assigned to the variable x. The dictionary ValuesPerVar with appropriately reduced sets ValuesPerVar[z] is returned. In particular, the consequences of assigning the value v to the variable x are propagated:\nIf there is a constraint f such that x occurs in f and there is just one variable y left that occurs in \nf and that is not yet bound in Assignment, then the values that can still be assigned to y are computed\nand the dictionary ValuesDict is updated accordingly. 
If there are no values left that can be assigned to \ny without violating the constraint f, the function backtracks.", "def propagate(x, v, Assignment, Constraints, ValuesPerVar):\n ValuesDict = ValuesPerVar.copy()\n ValuesDict[x] = { v }\n BoundVars = set(Assignment.keys())\n for f, Vars in Constraints:\n if x in Vars:\n UnboundVars = Vars - BoundVars - { x }\n if len(UnboundVars) == 1:\n y = arb(UnboundVars)\n Legal = set()\n for w in ValuesDict[y]:\n NewAssign = Assignment.copy()\n NewAssign[x] = v\n NewAssign[y] = w\n if eval(f, NewAssign):\n Legal.add(w)\n if not Legal:\n raise Backtrack()\n ValuesDict[y] = Legal\n return ValuesDict", "Solving the Eight-Queens-Puzzle", "%%capture\n%run N-Queens-Problem-CSP.ipynb\n\nP = create_csp(8)", "Constraint Propagation with the least constraining value heuristic takes about 27 milliseconds on my Windows desktop to solve the eight queens puzzle.", "%%time\nSolution = solve(P, lcv=True)\nprint(f'Solution = {Solution}')\n\nshow_solution(Solution)", "Constraint Propagation without the least constraining value heuristic takes only 13 milliseconds on my desktop to solve the eight queens puzzle.", "%%time\nSolution = solve(P)\nprint(f'Solution = {Solution}')\n\nP = create_csp(32)", "Constraint propagation can solve the 32 queens problem in less than $2.2$ seconds, if the least constraining value heuristic is used.", "%%time\nSolution = solve(P, True)\nprint(f'Solution = {Solution}')", "Constraint propagation can solve the 32 queens problem in less than $264$ milliseconds, if the least constraining value heuristic is not used.\nThe $n$-queens problem is a relatively easy CSP and hence the least constraining value is not useful.", "%%time\nSolution = solve(P)\nprint(f'Solution = {Solution}')", "Solving the Zebra Puzzle", "%run Zebra.ipynb\n\nzebra = zebra_csp()", "Constraint propagation with the least constraining value heuristic takes about 12 milliseconds to solve the Zebra Puzzle.", "%%time\nSolution = solve(zebra, 
True)\n\nshow_solution(Solution)", "If the least constraining value heuristic is not used, it takes about 12 milliseconds to solve the Zebra Puzzle.", "%%time\nSolution = solve(zebra)", "Solving a Sudoku Puzzle", "%run Sudoku-Frame.ipynb\n\ncsp = sudoku_csp(Sudoku)", "Constraint propagation with the least constraining value heuristic takes about 112 milliseconds to solve \nthe given sudoku.", "%%time\nSolution = solve(csp, True)\n\nshow_solution(Solution)", "Constraint propagation without the least constraining value heuristic takes 133 milliseconds to solve \nthe given sudoku. Hence, in this case the least constraining value heuristic is useful.", "%%time\nSolution = solve(csp, False)", "Let's check whether the solution is unique.", "csp = find_alternative(csp, Solution)\ncsp\n\n%%time\nSolution = solve(csp)\nif Solution:\n print('There is another solution.')\nelse:\n print('The solution is unique!')", "Solving the Crypto-Arithmetic Puzzle", "%run Crypto-Arithmetic.ipynb\n\ncsp = crypto_csp()\ncsp", "Constraint propagation takes about 118 milliseconds to solve the crypto-arithmetic puzzle if the \nleast constraining value heuristic is used.", "%%time\nSolution = solve(csp, True)\n\nshow_solution(Solution)", "Constraint propagation takes about 1.1 seconds if the least constraining value heuristic is not used.", "%%time\nSolution = solve(csp)", "Let us try the hard version of the puzzle.", "csp = crypto_csp_hard()\n\n%%time\nSolution = solve(csp, True)\n\n%%time\nSolution = solve(csp)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
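The core of the propagation step in the solver above, restricting the domain of the single unbound neighbour y of a freshly assigned variable x, can be illustrated in isolation. The standalone sketch below (the function name is chosen here, not taken from the notebook) filters a domain against one binary constraint given as a Python expression string, using eval the same way the solver does.

```python
def filter_domain(constraint, x, v, y, domain_y):
    """Keep only those values w for y such that the constraint is
    satisfied once x is bound to v -- the single-unbound-variable
    case that both shrinkage() and propagate() handle."""
    return { w for w in domain_y if eval(constraint, { x: v, y: w }) }

# Binding x = 2 under the constraint 'x < y' prunes y's domain:
remaining = filter_domain('x < y', 'x', 2, 'y', {1, 2, 3, 4})
```

If `remaining` came back empty, the solver would raise `Backtrack` (or report a shrinkage of `math.inf`), since the tentative assignment cannot be extended to a solution.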
mdda/pycon.sg-2015_deep-learning
ipynb/3-ChooseGPU.ipynb
mit
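The GPU-chooser record that follows parses a pipe- and colon-delimited text table with `re.split`. Here is a minimal sketch of that parsing idiom, using a made-up two-row table; the column names and values are invented for illustration.

```python
import re

raw = """
name        | mem | single
Example One | 2048| 1044
Example Two | 4096| 2308
"""

# Split each non-empty line on '|' or ':' (with surrounding whitespace),
# then zip the header row with every data row into dictionaries,
# converting every field except the name to a float.
rows = [re.split(r'\s*[|:]\s*', line.strip())
        for line in raw.split('\n') if line.strip()]
header = rows[0]
cards = [{h: (cell if h == 'name' else float(cell))
          for h, cell in zip(header, row)}
         for row in rows[1:]]
```

The notebook's version additionally keeps the Amazon item ID column as a string and builds a `pricing` dictionary keyed by that ID.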
[ "GPU Chooser\nThis workbook can help rank GPUs according a mixture of features (with the weights determined by the user) and graph it against price.\nData\nFirstly, pull in the parameters from Wikipedia\n(NVidia cards, \nAMD cards) \nfor the cards under consideration (more can easily be added, though, to keep the list reasonable, don't add cards with >1000 single precision GFLOPs), each with one example of the product on Amazon (more Amazon examples can be added below) :", "raw=\"\"\"\nname | sh:tx:rop | mem | bw|bus|ocl|single|double|watts| amz:brand:comment\nGeForce GT740 DDR3 4G| 384:32:16 | 4096| 28|128|1.2| 763 | 0 | 65 | B00KJGYOBQ\n\nGeForce GTX 750 1Gb | 512:32:16 | 1024| 80|128|1.2| 1044 | 32.6 | 55 | B00IDG3NDY\nGeForce GTX 750 2Gb | 512:32:16 | 2048| 80|128|1.2| 1044 | 32.6 | 55 | B00J3ZNB04\nGeForce GTX 750Ti 2Gb| 640:40:16 | 2048| 80|128|1.2| 1306 | 40.8 | 60 | B00IDG3IDO\nGeForce GTX 750Ti 4Gb| 640:40:16 | 4096| 80|128|1.2| 1306 | 40.8 | 60 | B00T4RJ8FI\nGeForce GTX 760 2Gb |1152:96:32 | 2048|192|256|1.2| 2257 | 94 | 170 | B00DT5R3EO\nGeForce GTX 760 4Gb |1152:96:32 | 4096|192|256|1.2| 2257 | 94 | 170 | B00E9O28DU\n\nGeForce GTX 960 2Gb |1024:64:32 | 2048|112|128|1.2| 2308 | 72.1 | 120 | B00SC6HAS4\nGeForce GTX 960 4Gb |1024:64:32 | 4096|112|128|1.2| 2308 | 72.1 | 120 | B00UOYQ5LA\nGeForce GTX 970 |1664:104:56| 3584|196|224|1.2| 3494 | 109 | 145 | B00NVODXR4\nGeForce GTX 980 |2048:128:64| 4096|224|256|1.2| 4612 | 144 | 165 | B00NT9UT3M\nGeForce GTX 980 Ti |2816:176:96| 6144|336|384|1.2| 5632 | 176 | 250 | B00YNEIAWY\nGeForce GTX Titan X |3072:192:96|12288|336|384|1.2| 6144 | 192 | 250 | B00UXTN5P0\n\nHD 5570 1Gb | 400:20:8 | 1024| 29|128|1.2| 520 | 0 | 39 | B004JU260O\nR9 280 |1792:112:32| 3072|240|384|1.2| 2964 | 741 | 250 | B00IZXOW80\n\nR9 290 |2560:160:64| 4096|320|512|2.0| 4848 | 606 | 275 | B00V4JVY1A\nR9 290X |2816:176:64| 4096|320|512|2.0| 5632 | 704 | 290 | B00FLMKQY2\n\nR9 380 2Gb |1792:112:32| 2048|182|256|2.1| 3476 | 217 | 190 | 
B00ZGL8EBK\nR9 380 4Gb |1792:112:32| 4096|182|256|2.1| 3476 | 217 | 190 | B00ZGF3TUC\nR9 390 8Gb |2560:160:64| 8192|384|512|2.1| 5120 | 640 | 275 | B00ZGL8CYY\nR9 390X |2816:176:64| 8192|384|512|2.1| 5914 | 739 | 275 | B00ZGL8CFI\n\"\"\"\n\nimport re\narr = [ re.split(r'\\s*[|:]\\s*',l) for l in raw.split('\\n') if len(l)>0]\nheadings = arr[0]\ncards=[ { h:(e if h in 'name.amz' else float(e)) for h,e in zip(headings,a) } for a in arr[1:] ]\npricing={ a['amz']:{k:v for k,v in a.items() if k in 'name:brand:comment:amz'} for a in cards}\n#for c in cards:print(\"%s|%s\" % (c['name'], c['amz']))", "Now the GPU card data is in a nice array of dictionary entries, with numeric entries for all but 'name' and the Amazon item ID, indexed in the same order as 'raw'.\nEquivalent cards for Additional Price data\nHere, one can put additional Amazon product codes that refer to the same \ncard from a Compute perspective (different manufacturer and/or different ports may make the \ncards different from a gaming user's perspective, of course).\nTODO : Add in more prices, to get a broader sample", "raw=\"\"\"\nname |amz:brand:comment\nGeForce GTX 750 1Gb |\nGeForce GTX 750 2Gb |\nGeForce GTX 750Ti 2Gb|\nGeForce GTX 750Ti 4Gb|\nGeForce GTX 760 2Gb |\nGeForce GTX 760 4Gb |\nGeForce GTX 960 2Gb |\nGeForce GTX 960 4Gb |\nGeForce GTX 970 |B00OQUMGM0:GigabyteMiniITX\nGeForce GTX 970 |B00NH5ZNWA:PNY\nGeForce GTX 980 |\nGeForce GTX 980 Ti |B00YDAYOK0:EVGA\nGeForce GTX Titan X |\nR9 290 |\nR9 290X |\nR9 380 2Gb |\nR9 380 4Gb |\nR9 390 8Gb |B00ZQ9JKSS:Visiontech\nR9 390 8Gb |B00ZQ3QVU4:Asus\nR9 390 8Gb |B00ZGF3UAQ:Gigabyte\nR9 390 8Gb |B00ZGL8CYY:Sapphire\nR9 390 8Gb |B00ZGF0UAE:MSI\nR9 390X |B00ZGF3TNO:Gigabyte\nR9 390X |B00ZGL8CFI:Sapphire\nR9 390X |B00ZGF158A:MSI\n\"\"\"\n\narr = [ re.split(r'\\s*[|:]\\s*',l) for l in raw.split('\\n') if len(l)>0]\nheadings = arr[0]\nequivs =[ { h:e for h,e in zip(headings,a) } for a in arr[1:] ]\npricing.update({ a['amz']:a for a in equivs if a['amz'] 
})\n#pricing", "Add known prices from Amazon\nIf you want to regenerate these, execute the block below. To 'cache' them back into this script, \nsimply copy the generated list back into the following cell", "cache={'B00IDG3IDO': 139.99, 'B00OQUMGM0': 299.99, 'B00T4RJ8FI': 349.99, 'B00ZGL8CYY': 359.42, 'B00NT9UT3M': 507.82, 'B00ZGF0UAE': 369.99, 'B00YNEIAWY': 698.85, 'B00ZGF158A': 429.99, 'B00IZXOW80': 249.99, 'B00FLMKQY2': 339.99, 'B00ZGF3UAQ': 329.99, 'B00ZGL8EBK': 216.53, 'B00IDG3NDY': 114.12, 'B00V4JVY1A': 333.26, 'B00ZGL8CFI': 458.63, 'B00UOYQ5LA': 239.99, 'B00YDAYOK0': 679.99, 'B00UXTN5P0': 1029.99, 'B00ZQ3QVU4': 349.99, 'B00ZQ9JKSS': 368.63, 'B00DT5R3EO': 199.99, 'B00NVODXR4': 337.99, 'B00KJGYOBQ': 99.99, 'B004JU260O': 180.99, 'B00J3ZNB04': 149.37, 'B00ZGF3TUC': 229.99, 'B00SC6HAS4': 199.99, 'B00E9O28DU': 274.99}\nfor k,v in cache.items():\n if k in pricing and pricing[k].get('px',None) is None:\n pricing[k]['px'] = v\n#pricing", "Grab prices from Amazon\nRather than use their API (which creates the issue of putting the keys into GitHub), just grab the pages. 
NB: The page caches the prices found into the data structure to avoid doing this too often!\nThe price downloading/parsing requires that you have requests and BeautifulSoup installed : pip install requests BeautifulSoup4", "import requests\nfrom bs4 import BeautifulSoup\n\nBASE_URL = \"http://www.amazon.com/exec/obidos/ASIN/\"\n\nfor k,v in pricing.items():\n if v.get('px', None) is None:\n name = v.get('name', 'UNKNOWN')\n print(\"Fetching price for %s from Amazon.com\" % (name))\n r = requests.get(BASE_URL + k)\n soup = BeautifulSoup(r.text, 'html.parser')\n price = None\n try:\n ele = soup.find(id=\"priceblock_ourprice\")\n price = float(ele.text.replace('$','').replace(',',''))\n except AttributeError:\n print(\"Didn't find the 'price' element for %s (%s)\" % (name, k))\n v['px']=price\nprint(\"Finished downloading prices : Run the 'cache' script below to save the data\")", "Code required to 'cache' prices found\nExecute the following, and copy its output to the cache= line above so that the page \ncan remember the prices found on Amazon most recently.", "print({ k:v['px'] for k,v in pricing.items() if v.get('px',None) is not None})", "Aggregate Prices (to determine range, and minimum per card)", "for c in cards:\n pxs = [ v['px'] for k,v in pricing.items() if v['name']== c['name'] and v.get('px', None) is not None ]\n c['px']=min(pxs)\n c['px_max']=max(pxs)", "Show Known Prices", "for c in cards:\n if c.get('px', None) is not None:\n print(\"%s| $%7.2f\" % ((c['name']+' '*30)[:24], c['px']))", "Score Cards based on given weights\nThe concept here is that one can focus on a 'basecard' (for instance, one you already have, or one you've looked at closely), and then assign multiplicative weights to each of a GPU card's qualities, and come up with a 'relative performance' according to that weighting scheme.", "basecard = 'GeForce GTX 760 2Gb' # Name should match a card with full data above\nbasedata = [ c for c in cards if c['name']==basecard ][0] \n\nmultipliers = 
dict(single=2., mem=1.) # FLOPs are twice as important as memory, all else ignored\ncards_filtered = [c for c in cards if c['ocl']<3. and c['px']<500. ]\n\ndef evaluate_card(base, d, mult):\n comp=0.\n for (k,v) in mult.items():\n if d.get(k,None) is not None and base.get(k,None) is not None:\n comp += v*d[k]/base[k]\n return comp\nx=[ c.get('px',None) for c in cards_filtered ]\ny=[ evaluate_card(basedata, c, multipliers) for c in cards_filtered ]\nl=[ c['name'] for c in cards_filtered ]\nfor name,score,px in sorted(zip(l,y,x), key=lambda p: -p[1]):\n print(\"%s| $%7.2f | %5.2f\" % ((name+' '*30)[:24], px, score))", "Visualize the Results\nFinally, the card scores can be visualised, against their absolute dollar \ncost (the 'efficient frontier' being the envelope around the points from the upper left corner).", "%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.figure(figsize=(15, 8))\nplt.plot(x,y, 'ro')\nfor i, xy in enumerate(zip(x, y)): \n plt.annotate('%s' % (l[i]), xy=xy, xytext=(5,.05), textcoords='offset points')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
oditorium/blog
Modules/DataImport.ipynb
agpl-3.0
[ "Data Import - Testing\nClass definitions\nmodule DataImport\nWe want to import data directly from the ECB data warehouse, so for example rather than going to the series we want to download the csv data. In fact, the ECB provides three different download format (two csv's, one generic and one for Excel) and one XML download.\nThere is is also an sdmx query facility that allows more granula control over what data will be downloaded.\nThe URI's are as follows (most also allow https):\n\nhuman readable series\n~~~\nhttp://sdw.ecb.europa.eu/quickview.do?SERIES_KEY=-key-\n~~~\ncsv file (generic and Excel format)\n~~~\nhttp://sdw.ecb.europa.eu/quickviewexport.do?SERIES_KEY=-key-&type=csv\nhttp://sdw.ecb.europa.eu/quickviewexport.do?SERIES_KEY=-key-&type=xls\n~~~\nsdmx file\n~~~\nhttp://sdw.ecb.europa.eu/quickviewexport.do?SERIES_KEY=-key-&type=sdmx\n~~~\nsdmx query and endpoint\n~~~\nhttp://sdw.ecb.europa.eu/quickviewexport.do?SERIES_KEY=-key-&type=sdmxQuery\nhttp://sdw-ws.ecb.europa.eu/\n~~~", "#!wget https://www.dropbox.com/s//DataImport.py -O DataImport.py\nimport DataImport as di\n#help('DataImport')", "module PDataFrame\nthat's a little side project that creates persistent data frames", "#!wget https://www.dropbox.com/s//PDataFrame.py -O PDataFrame.py\nimport PDataFrame as pdf\n#help('PDataFrame')", "Testing\nBookmarks\nthat's a little side project, which is to create a file containing bookmarks for interesting series in the ECB database; it uses the PDataFrame class defined above.\nNote that the following lines can generally be commented out: the whole idea here is that the bookmarks are kept in persistant storage (here the file ECB_DataSeries.csv) so one only has to execute bm.set() once to add a new bookmark (provided the csv file is being moved around with this note book)", "#pdf.PDataFrame.create('DataImport.csv', ('key', 'description'))\n#bm = pdf.PDataFrame('DataImport.csv')\n#bm.set('deposit', ('ILM.W.U2.C.L022.U2.EUR', 'current usage of the deposit 
facility'))\n#bm.set('lending', ('ILM.M.U2.C.A05B.U2.EUR', 'current aggregate usage of major lending facilities'))\n#bm.set('lending_marg', ('ILM.W.U2.C.A055.U2.EUR', 'current usage of the marginal lending facility'))", "just to check what bookmarks we have defined...", "bm = pdf.PDataFrame('DataImport.csv')\nbm._df", "...and how to get the values back", "bm.get('deposit', 'key')", "Data\nwe fetch three data series, the ECB deposit facility, the ECB lending facility, and the ECB marginal lending facility using the fetch method that takes as parameter the series key (see below and explanation for the skip_end parameter)", "ei = di.ECBDataImport()\ndeposit = ei.fetch(bm.get('deposit', 'key'), skip_end=10)\nlending = ei.fetch(bm.get('lending', 'key'))\nlending_marg = ei.fetch(bm.get('lending_marg', 'key'))", "the dataset returned contains a number of additional info items, for example a description", "deposit.keys()\n\ndeposit['descr']", "The time information is in a funny format (e.g. \"2008w21\"). So we then reformat the datatables into something that can be plotted, i.e. a float. For this we have the static method data_table that takes the data and a reformatting function for the time. Normally it returns a 2-tuple, the first component being the time-tuple, the second component being the value-tuple.\nIf desired, additionally an interpolation function can be returned as the third component. This is necessary if we want to do operations on series that are not based on the same time values. We see this in the last line below: le[2] is the interpolation function for the lending, and it is applied to dp[0] which are the time values for the deposit function. 
Now the two series are on the same basis and can hence be subtracted (note that in fetch() we needed the skip_end parameter, because the available deposit data series goes further than the available lending series, which makes the interpolation fail).", "unit = 1000000\ndp = ei.data_table(deposit, ei.time_reformat1, unit)\nle = ei.data_table(lending, ei.time_reformat2, unit, True)\nlm = ei.data_table(lending_marg, ei.time_reformat1, unit)\ndiff = le[2](dp[0]) - dp[1]", "The functions for converting time are implemented as static methods on the object. For the time being there are two of them", "ei.time_reformat1(\"2010w2\")\n\nei.time_reformat2(\"2010mar\")", "We can now plot the data series. Note that this would not have been trivial to do in Excel, because one of the data series is monthly and the other one is weekly", "plot(le[0], le[1])\nplot(lm[0], lm[1])\nplot(dp[0], dp[1])\n\nplot(dp[0], diff)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/rba
Data Pre-Processing & Feature Selection.ipynb
apache-2.0
[ "Data Pre-processing & Feature Selection", "###########################################################################\n#\n# Copyright 2021 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# This solution, including any related sample code or data, is made available \n# on an “as is,” “as available,” and “with all faults” basis, solely for \n# illustrative purposes, and without warranty or representation of any kind. \n# This solution is experimental, unsupported and provided solely for your \n# convenience. Your use of it is subject to your agreements with Google, as \n# applicable, and may constitute a beta feature as defined under those \n# agreements. To the extent that you make any data available to Google in \n# connection with your use of the solution, you represent and warrant that you \n# have all necessary and appropriate rights, consents and permissions to permit \n# Google to use and process that data. 
By using any portion of this solution, \n# you acknowledge, assume and accept all risks, known and unknown, associated \n# with its usage, including with respect to your deployment of any portion of \n# this solution in your systems, or usage in connection with your business, \n# if at all.\n###########################################################################", "0) Dependencies", "################################################################################\n######################### CHANGE BQ PROJECT NAME BELOW #########################\n################################################################################\n\nproject_name = '' #add proj name\n\n# Google credentials authentication libraries\nfrom google.colab import auth\nauth.authenticate_user()\n\n# data processing libraries\nimport numpy as np\nfrom numpy.core.numeric import NaN\nimport datetime\nimport pandas as pd\nimport pandas_gbq\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.model_selection import train_test_split\n!pip install boruta #boruta for feature selection\nfrom boruta import BorutaPy\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.tree import DecisionTreeClassifier\n\n# modeling and metrics\nfrom scipy.optimize import least_squares\nfrom statsmodels.stats.outliers_influence import variance_inflation_factor\nfrom statsmodels.tools.tools import add_constant\nimport statsmodels.api as sm\n\n\nimport itertools\nfrom scipy.stats.stats import pearsonr\n\n# Visualization\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = [10, 5] #change size of plot\nimport seaborn as sns\n\n# BigQuery Magics\n'''\nBigQuery magics are used to run BigQuery SQL queries in a python environment.\nThese queries can also be run in the BigQuery UI\n'''\n\nfrom google.cloud import bigquery\nfrom google.cloud.bigquery import magics\n\nmagics.context.project = project_name #update your 
project name \n\nclient = bigquery.Client(project=magics.context.project)", "1) Import dataset", "################################################################################\n######################### CHANGE BQ PROJECT NAME BELOW #########################\n################################################################################\n\n%%bigquery df\nSELECT *\nFROM `.RBA_demo.SAMPLE_DATA`; #update with project name\n\ndf.head()", "Immediately remove variables that won't be used in the model. Here, includes columns like geo which is consistent across the dataset, and aggregated media such as total clicks across DSPs.", "df.drop(columns = ['geo','x1','x2','x8','x18','x19','x20','x21','x22','x23','x24','x25'], inplace = True)\n\nlen(df.columns)\n\ndf.describe()", "2) Data Cleaning\n2.1) Check for missing data and impute\nCheck the amount of of missing values (% of total column) in the data and sort by \nhighest to lowest.", "missing_values = 100*df.isnull().sum()/len(df)\nmissing_values.sort_values(ascending = False)", "If there are any NAs in the data that should be zeros, replace those data\npoints with zero.", "df.fillna(0, inplace = True)", "3) Define Y (KPI column) and create initial feature set", "#Input column names for date (ex: \"day\") and Y (ex: \"new_accounts\" or \"sales\") \ndate_col = \"date\" \nkpi_col = \"y1\" \n\n#Set the date as index\ndf = df.sort_values(date_col)\ndf = df.set_index(date_col)\n\ntarget_variable = df[kpi_col] #y variable\n\n# Create a dataframe for features (all variables except date and kpi) x variables\nfeatureset_df = df[df.columns[df.columns != date_col]]\nfeatureset_df = df[df.columns[df.columns != kpi_col]]\n\nfeatureset_df.head()", "4) Visualize Series\nOptional:\nVisualizing each series is useful to better understand the underlying distribution of the data. This allows for examination of outliers. 
\nUnderstanding the distribution of the underlying data can also inform prior parameterization in bayesian modeling approaches later on.", "for i in range(2,len(featureset_df.columns)):\n plt.figure()\n sns.kdeplot(featureset_df[featureset_df.columns[i]], label = featureset_df.columns[i], shade = True)", "5) Feature Creation\n5.1) Check for Seasonality and add Flag\nView the target variable as a time series plot and identify periods where data peaks.\nIn this specific example, conversions spike on Mondays, so we will create\na specific flag (0/1 variable) for that day and use it as a control feature in the feature set.", "plt.plot_date(df.index, target_variable, linestyle = '-')\n\nfeatureset_df['Is_Monday'] = (df.index.get_level_values(0).weekday == 0).astype(int)\nfeatureset_df['Is_Q2Q3'] = (df.index.get_level_values(0).month == 4).astype(int) | (df.index.get_level_values(0).month == 5).astype(int) | (df.index.get_level_values(0).month == 6).astype(int) | (df.index.get_level_values(0).month == 7).astype(int) | (df.index.get_level_values(0).month == 8).astype(int) | (df.index.get_level_values(0).month == 9).astype(int)", "5.2) Lag, Carryover, Diminishing Returns\nWe'll need to transform the raw data by applying lag, carryover, and diminishing returns to have it most accuratley predict the target variable. 
\nFirst, split the df into two different dataframes:\n\n\nFeatures that don't need to be transformed\n\nExamples are: \ndate\ntarget variable\ncontrol variables (seasonality, promotions, etc.)\n\n\n\n\n\nFeatures that do need to be transformed\n\nPaid media tactics \nAny other feature where there is some sort of delayed response with the target variable\n\n\n\nStarting points for lags:\n- If you are using daily data, the lag should at default be 14.\n- If you are using weekly data, the lag should at default be 5.\nOthers can and should be tested to determine the best lag length for your specific data.", "# Variables that do not need to be transformed\n\nuntransformed_df = pd.concat([target_variable, featureset_df[['Is_Monday','Is_Q2Q3']]], axis = 1) #Target variable + controls\n\n# Variables that do need to be transformed\n\n#exclude dummies/controls that do not need to be transformed\n#transformed_df = featureset_df[['feature1', 'feature2',...]]\n\n'''\nNote: In this example case, almost all of the features in the featureset_df are media features.\nAs more dummy variables or other control variables are added, the user will need to \nspecify which columns should be transformed\n'''\ntransformed_df = featureset_df.loc[:,~featureset_df.columns.isin(['Is_Monday','Is_Q2Q3'])]\n\n#This function creates the different combinations of Lag, Decay, and Curve\ndef transformation(dataframe,x):\n lag = []\n for i in range(0, 7, 1):\n data = dataframe[x].shift(i).to_frame()\n data.columns = [col_name+'Lag'+str(i)for col_name in data.columns]\n # store DataFrame in list\n lag.append(data)\n # see pd.concat documentation for more info\n lag = pd.concat(lag,axis=1)\n lag=lag.fillna(0)\n Alpha = []\n for i in np.linspace(0.6,1.0,num=5):\n data = pow(lag,i)\n data.columns = [col_name+'Alpha'+str(i)for col_name in data.columns]\n # store DataFrame in list\n Alpha.append(data)\n # see pd.concat documentation for more info\n Alpha = pd.concat(Alpha,axis=1) \n Decay=[]\n #j = 0\n for 
percent in np.linspace(0.6,1.0,5):\n data = Alpha.copy()\n data.columns = [col_name+'Decay'+str(percent)for col_name in data.columns]\n for i in range(0,len(Alpha)):\n for j in range(0,len(Alpha.columns)):\n #data = data + data.shift(1)*(1-i)\n if(i == 0):\n data.iloc[i, j] = data.iloc[i, j]*percent\n else:\n data.iloc[i, j] = data.iloc[i - 1, j] *(1-percent) + data.iloc[i,j] * percent\n Decay.append(data)\n j = j + 1\n Decay = pd.concat(Decay,axis=1)\n \n return Decay", "Make sure data is correctly sorted by date before running feature selection algo.\nThis is important because the algorithm takes from a previous row of the data as it evaluates the current row. Unsorted data can cause errors in resulting feature selection\ninfo.", "transformed_df = transformed_df.sort_values('date')", "WARNING: This section of the code is estimated to take 10-30 minutes to complete\ndepending on weekly vs daily data and the number of features in the model.", "#Loop through each column and apply the transformation\n\ncolumns = transformed_df.columns\nsales = target_variable\nall_data = [] \nfor col in columns:\n newdf = transformation(transformed_df, col) \n corr_df = pd.concat([sales, newdf], axis=1)\n corr = corr_df.corr().sort_values(kpi_col,ascending=False) \n new_vals= corr.iloc[1:4 , 0:1].index.tolist() \n data = newdf[new_vals]\n all_data.append(data)\ntransformed_df = pd.concat(all_data,axis=1)", "6) Feature Selection\nFor feature selection we employ the Boruta algorithm.(More information here)\nThis algorithm will tell you the rank of each feature and whether or not to keep a varaible in the model (i.e. Keep = True/False). 
The goal of RBA is to optimize across all paid digital media tactics, therefore select the top ranking feature for each group of features (whether or not the algorithm tells you to keep the feature).", "# Specifiying the target and x variables\ny = target_variable\nx = transformed_df #update with transformed features\n\n# define random forest classifier\nforest = RandomForestRegressor(n_jobs=-1, max_depth=5)\nforest.fit(x, y)\n\n# define Boruta feature selection method\nfeat_selector = BorutaPy(forest, n_estimators='auto', verbose=2, random_state=1)\n\n# find all relevant features\nfeat_selector.fit(np.array(x), np.array(y))\n\n# check selected features\nfeat_selector.support_\n\n# check ranking of features\nfeat_selector.ranking_\n\n# call transform() on X to filter it down to selected features\nX_filtered = feat_selector.transform(np.array(x))\n\n#Select the top ranking variable for each group of variables. \nfeature_ranks = list(zip(x.columns, \n feat_selector.ranking_, \n feat_selector.support_))\n\n# iterate through and print out the results\nfor feat in feature_ranks:\n print('{:<25}, Rank: {}, Keep: {}'.format(feat[0], feat[1], feat[2]))", "Reduce the overall dataset to just selected features using the ranking\nfrom the Boruta output, and save to a dataframe", "selected_featureset_df = 
transformed_df[['x3Lag0Alpha0.6Decay1.0',\n'x4Lag0Alpha1.0Decay0.9',\n'x5Lag0Alpha1.0Decay1.0',\n'x6Lag0Alpha0.6Decay1.0',\n'x7Lag0Alpha0.6Decay1.0',\n'x9Lag0Alpha0.8Decay0.9',\n'x10Lag0Alpha1.0Decay1.0',\n'x11Lag0Alpha0.8Decay1.0',\n'x12Lag0Alpha1.0Decay1.0',\n'x13Lag0Alpha0.8Decay1.0',\n'x14Lag0Alpha1.0Decay1.0',\n'x15Lag0Alpha0.7Decay1.0',\n'x16Lag0Alpha1.0Decay0.9',\n'x17Lag1Alpha0.8Decay0.7',\n'x26Lag0Alpha1.0Decay0.9',\n'x27Lag5Alpha1.0Decay1.0',\n'x28Lag3Alpha0.7Decay0.6',\n'x29Lag0Alpha0.6Decay1.0',\n'x30Lag4Alpha1.0Decay1.0',\n'x31Lag5Alpha1.0Decay1.0',\n'x32Lag0Alpha0.6Decay0.9',\n'x33Lag0Alpha1.0Decay1.0',\n'x34Lag0Alpha1.0Decay1.0',\n'x35Lag5Alpha0.6Decay1.0',\n'x36Lag0Alpha0.7Decay0.8',\n'x37Lag5Alpha0.6Decay1.0',\n'x38Lag3Alpha0.6Decay0.7',\n'x39Lag0Alpha0.9Decay1.0',\n'x40Lag4Alpha0.6Decay0.7',\n'x41Lag0Alpha1.0Decay1.0',\n'x42Lag0Alpha0.7Decay1.0',\n'x43Lag6Alpha0.6Decay0.9',\n'x44Lag0Alpha0.6Decay0.8',\n'x45Lag0Alpha0.8Decay1.0',\n'x46Lag0Alpha0.6Decay0.9',\n]]\n\n# add back in the untransformed control variables to the featureset\nselected_featureset_df = pd.concat([selected_featureset_df,untransformed_df[untransformed_df.columns[untransformed_df.columns != kpi_col]]],axis = 1)", "7) Feature Scaling\n7.1) Feature Scaling\nThe default method of standardization utilizes Standard Scaler, which takes in\ninput data and transforms so that the output has mean 0 and standard deviation of 1\nacross all features.\nAlternative methods of feature scaling include square-root transformation,\nde-meaning, natural log transformations, Min-Max Scalers, or normalization", "scaler = StandardScaler()\nstandardized_transform = scaler.fit_transform(selected_featureset_df)\nselected_featureset_df = pd.DataFrame(standardized_transform, columns = selected_featureset_df.columns)", "Option to review visuals of the data. 
After the data is standardized, the distributions may take on a more normal shape.", "'''\nfor i in range(0,len(X_transform_stand.columns)):\n plt.figure()\n sns.kdeplot(X_transform_stand[X_transform_stand.columns[i]], label = X_transform_stand.columns[i], shade = True)\n'''\n\nselected_featureset_df.head()", "8) Handle Multicollinearity (reduce feature set)\n\nPrint a correlation heatmap to visualize correlations across feature set\nRun variance inflation factor analysis and output results to flag multicollinearity above specified threshold", "correl = selected_featureset_df.corr()\n\n# Getting the Upper Triangle of the correlation matrix\nmatrix = np.triu(correl)\n\n# using the upper triangle matrix as mask \nsns.heatmap(correl, mask=matrix)", "Run VIF analysis and flag values greater than 10.\nIndustry best practice flags values above 10 as an extreme violation of regression model assumptions. (Reference)", "vif = add_constant(selected_featureset_df)\n\n# loop to calculate the VIF for each X \nvif = pd.Series([variance_inflation_factor(vif.values, i) \n for i in range(vif.shape[1])], \n index=vif.columns) \n\n# processing to output VIF results as a dataframe \nvif_df=vif.to_frame().reset_index()\n\nvif_df.columns = ['feature', 'vif']\nvif_df=vif_df.replace([np.inf], np.nan) # replace inf calculations as missing and zero fill \nvif_df=vif_df.fillna(0).sort_values(by=\"vif\", ascending=False)\n\nvif_df.reset_index(inplace = True)\nvif_df", "Drop the highest VIF features and print the high collinearity columns in a list", "high_collinearity_columns = vif_df.feature[vif_df['vif'] >= 10].to_list()\n\nhigh_collinearity_columns", "Drop 1 variable at a time (start with the highest VIF) and re-run the VIF cell to re-check multicollinearity. 
This will allow the user to preserve as many features in the model as possible.", "cols_to_drop = []\nwhile vif_df.vif[1] >= 10:\n if vif_df.vif[1] >= 10:\n cols_to_drop.append(vif_df.feature[1])\n selected_featureset_df.drop(columns = vif_df.feature[1],inplace = True) \n vif = add_constant(selected_featureset_df)\n # loop to calculate the VIF for each X \n vif = pd.Series([variance_inflation_factor(vif.values, i) \n for i in range(vif.shape[1])], index=vif.columns) \n # processing to output VIF results as a dataframe \n vif_df=vif.to_frame().reset_index()\n vif_df.columns = ['feature', 'vif']\n vif_df=vif_df.replace([np.inf], np.nan) # replace inf calculations as missing and zero fill \n vif_df=vif_df.fillna(0).sort_values(by=\"vif\", ascending=False)\n vif_df.reset_index(inplace = True)\n\ncols_to_drop\n\nselected_featureset_df.columns\n\nlen(selected_featureset_df.columns)\n\n# Replace the decimal points with underscores so that data can be exported to BQ\nselected_featureset_df.columns = selected_featureset_df.columns.str.replace(\".\",\"_\")", "9) Export Final Dataset\n9.1) Trim the final dataset according to lag", "final_df = selected_featureset_df\nfinal_df['y1'] = target_variable.reset_index()[kpi_col]", "Trim the start of your dataset to correspond with the max lag\n(if max lag is 4 weeks, trim the first 4 weeks off of the data)", "max_lag = 5\nfinal_df = final_df[max_lag:]\nfinal_df.reset_index(inplace = True)\nfinal_df.drop(columns = 'index',inplace = True)\n\nfinal_df.head()\n\n################################################################################\n######################### CHANGE BQ PROJECT NAME BELOW #########################\n################################################################################\n\ndestination_project_id = \"\" #@param\ndestination_dataset = \"RBA_demo\" #@param\ndestination_table = \"cleaned_data\" #@param\ndataset_table = destination_dataset+\".\"+destination_table\n\nfinal_df.to_gbq(dataset_table, \n 
destination_project_id,\n chunksize=None, \n if_exists='replace'\n )" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
althonos/pronto
docs/source/examples/ms.ipynb
mit
[ "Exploring MzML files with the MS Ontology\n\nIn this example, we will learn how to use pronto to extract a hierarchy from the MS Ontology, a controlled vocabulary developed by the Proteomics Standards Initiative to hold metadata about Mass Spectrometry instrumentation and other Protein Identification and Quantitation software. This example is taken from a real situation that kickstarted the development of pronto to extract metadata from MzML files, a file format for Mass Spectrometry data based on XML.\nLoading ms.obo\nThe MS ontology is available online on the OBO Foundry, so unless we are using a local version we can simply use the version found online to load the OBO file. We may get some encoding warnings since ms.obo imports some legacy ontologies, but we should be OK for the most part since we are only querying terms directly.", "import pronto\nms = pronto.Ontology.from_obo_library(\"ms.obo\")", "Displaying a class hierarchy with Vega\nThe MS ontology contains a catalog of several instruments, grouped by instrument manufacturers, but not all instruments are at the same depth level. We can easily use the Term.subclasses method to find all instruments defined in the controlled vocabulary. 
Let's then build a tree from all the subclasses of MS:1000031:", "instruments = ms['MS:1000031'].subclasses().to_set()\ndata = []\n\nfor term in instruments:\n value = {\"id\": int(term.id[3:]), \"name\": term.id, \"desc\": term.name} \n parents = term.superclasses(with_self=False, distance=1).to_set() & instruments\n if parents:\n value['parent'] = int(parents.pop().id[3:])\n data.append(value)", "Now that we have our tree structure, we can render it simply with Vega to get a better idea of the classes we are inspecting:", "import json\nimport urllib.request\n\n# Let's use the Vega radial tree example as a basis of the visualization\nview = json.load(urllib.request.urlopen(\"https://vega.github.io/vega/examples/radial-tree-layout.vg.json\"))\n\n# First replace the default data with our own\nview['data'][0].pop('url')\nview['data'][0]['values'] = data\nview['marks'][1]['encode']['enter']['tooltip'] = {\"signal\": \"datum.desc\"}\nview['signals'][4]['value'] = 'cluster'\n\n# Render the clustered tree\ndisplay({\"application/vnd.vega.v5+json\": view}, raw=True)", "Extracting the instruments from an MzML file\nMzML files store the metadata corresponding to one or several MS scans using the MS controlled vocabulary, but the location and type of metadata can vary and needs to be extracted from a term subclassing hierarchy. 
Let's download an example file from the MetaboLights library and parse it with xml.etree:", "import urllib.request\nimport xml.etree.ElementTree as etree\n\nURL = \"http://ftp.ebi.ac.uk/pub/databases/metabolights/studies/public/MTBLS341/pos_Exp2-K3_2-E,5_01_7458.d.mzML\"\nmzml = etree.parse(urllib.request.urlopen(URL))", "Now we want to extract the instruments that were used in the MS scan, which are stored as mzml:cvParam elements: we build a set of all the instruments in the MS ontology, and we iterate over the mzml:cvParam elements to find the ones that refer to instruments:", "instruments = ms[\"MS:1000031\"].subclasses().to_set().ids\nstudy_instruments = []\n\npath = \"mzml:instrumentConfigurationList/mzml:instrumentConfiguration/mzml:cvParam\"\nfor element in mzml.iterfind(path, {'mzml': 'http://psi.hupo.org/ms/mzml'}):\n if element.attrib['accession'] in instruments:\n study_instruments.append(ms[element.attrib['accession']])\n \nprint(study_instruments)", "Finally we can extract the manufacturer of the instruments we found by checking which one of its superclasses is a direct child of the MS:1000031 term. 
We use the distance argument of subclasses to get the direct subclasses of instrument model, which are the manufacturers, and we use set operations to select manufacturers from the superclasses of each instrument we found.", "manufacturers = ms['MS:1000031'].subclasses(distance=1, with_self=False).to_set()\nstudy_manufacturers = []\n\nfor instrument in study_instruments:\n study_manufacturers.extend(manufacturers & instrument.superclasses().to_set())\n\nprint(study_manufacturers)", "Validating the controlled vocabulary terms in an MzML file\nAll mzml:cvParam XML elements are required to have the 3 following attributes:\n\naccession, which is the identifier of the term in one of the ontologies imported in the file\ncvRef, which is the identifier of the ontology imported in the file\nname, which is the textual definition of the term\n\nname in particular is redundant with respect to the actual ontology file, but can help render the XML elements. However, some MzML files can have a mismatch between the name and accession attributes. In order to check these mismatches we can use pronto to retrieve the name of all of these controlled vocabulary terms.", "mismatches = [\n element\n for element in mzml.iter()\n if element.tag == \"{http://psi.hupo.org/ms/mzml}cvParam\"\n if element.get('accession') in ms\n if ms[element.get('accession')].name != element.get('name')\n]\n\nfor m in mismatches:\n print(f\"{m.get('accession')}: {m.get('name')!r} (should be {ms[m.get('accession')].name!r})\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
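The notebook above relies on ElementTree's namespace handling to find mzml:cvParam elements; a minimal, self-contained sketch of the same mismatch check is below, where the ONTOLOGY dict is a hypothetical stand-in for the pronto ontology object and the XML string mimics the mzML layout:

```python
import xml.etree.ElementTree as etree

# Hypothetical stand-in for the MS ontology: accession -> canonical name.
ONTOLOGY = {"MS:1000073": "electrospray ionization"}

# A tiny document mimicking the mzML cvParam layout, namespace included.
XML = """<mzML xmlns="http://psi.hupo.org/ms/mzml">
  <cvParam cvRef="MS" accession="MS:1000073" name="electro-spray ionisation"/>
</mzML>"""

root = etree.fromstring(XML)

# ElementTree exposes namespaced tags as '{uri}localname', so iterate with
# the fully qualified tag, exactly as the notebook does.
mismatches = [
    el for el in root.iter("{http://psi.hupo.org/ms/mzml}cvParam")
    if el.get("accession") in ONTOLOGY
    and ONTOLOGY[el.get("accession")] != el.get("name")
]
```

The same pattern scales to a full mzML file: only the document source and the ontology lookup change.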
DataPilot/notebook-miner
summary_of_work/10. Bottom Up Experimentation.ipynb
apache-2.0
[ "Bottom Up Experimentation\nThis notebook documents the bottom-up strategy experimentation to determine notebook similarity. It is based on the notion that it is easier to aggregate than to break down a 'black box.'\nThe biggest challenge is working with the AST structure. Because it is a tree, we need to merge leaves with their parents, working our way up.\nGOAL\nThere are two main goals:\n1. Come up with a similarity function for entire notebooks\n2. Maximize the coverage of 'black boxes' while minimizing the number of 'black boxes'", "# Necessary imports \nimport os\nimport time\nfrom nbminer.notebook_miner import NotebookMiner\nfrom nbminer.cells.cells import Cell\nfrom nbminer.features.ast_features import ASTFeatures\nfrom nbminer.stats.summary import Summary\nfrom nbminer.stats.multiple_summary import MultipleSummary\nfrom nbminer.features.featurize.ast_graph.ast_graph import *\n\npeople = os.listdir('../testbed/Final')\nnotebooks = []\nfor person in people:\n person = os.path.join('../testbed/Final', person)\n if os.path.isdir(person):\n direc = os.listdir(person)\n notebooks.extend([os.path.join(person, filename) for filename in direc if filename.endswith('.ipynb')])\nnotebook_objs = [NotebookMiner(file) for file in notebooks]\na = ASTFeatures(notebook_objs)\n\nfor i, nb in enumerate(a.nb_features):\n a.nb_features[i] = nb.get_new_notebook()\n\ngraphs = []\nfor nb in a.nb_features:\n for cell in nb.get_all_cells():\n graphs.append(cell.get_feature('graph'))\nagr = ASTGraphReducer(graphs)\nnum_nodes = []\nfor g in agr.graphs:\n num_nodes.append(g.graph_nodes())\nprint ('Total number of graphs:',agr.number_graphs())\nprint ('Total number of graphs with one node:',agr.number_single())\nprint ('Total number of nodes:',agr.count_nodes())\nprint (agr.count_nodes())\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.hist(num_nodes, bins=30)\n\ncur_count = 0\nnew_count = 1\nprint (agr.count_nodes())\nwhile cur_count != new_count:\n cur_count = new_count\n 
new_count = (agr.count_nodes())\n agr.build_relations()\nprint (new_count)\n\n\nnum_nodes = []\nfor g in agr.graphs:\n num_nodes.append(g.graph_nodes())\nprint ('Total number of graphs:',agr.number_graphs())\nprint ('Total number of graphs with one node:',agr.number_single())\nprint ('Total number of nodes:',agr.count_nodes())\nprint (agr.count_nodes())\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.hist(num_nodes, bins=30)\n\n# Similarity between nb 0 and all other notebooks:\nprint (sorted([similarity[1][1] for similarity in a.notebook_jaccard_similarity(0)]))\n\n# Maximum similarity\nall_sims = []\nmax_sim = 0\nmax_val = None\nfor i in range(len(a.nb_features)):\n for similarity in a.notebook_jaccard_similarity(i):\n if similarity[1][1] > max_sim:\n max_sim = similarity[1][1]\n max_val = (i, similarity[0])\nmax_sim, max_val\n\na.nb_features[2].notebook.filename\n\na.nb_features[3].notebook.filename", "Black Boxes\nNow we're interested in what happened with this bottom up approach. What does the final thing look like? 
We can print out each graph and get a sense of what's happened, then we can look at some actual code, what it looks like in graph format, and what the black boxes it holds actually mean", "for cell in a.nb_features[25].get_all_cells():\n print (cell.get_feature('graph').get_nodes())\n\nfor cell in a.nb_features[39].get_all_cells():\n print (cell.get_feature('graph').get_nodes())\n\ncells = []\nfor nb in a.nb_features:\n cells.extend([cell for cell in nb.get_all_cells()])\ngroups = []\ncur_code = ''\ncur_group = []\nfor cell in cells:\n if cell.get_feature('original_code') == cur_code:\n cur_group.append(cell)\n else:\n if len(cur_group) > 0:\n groups.append(cur_group)\n cur_group = []\n cur_code = cell.get_feature('original_code')\n\ngroup = 6\nprint ('*'*50)\nprint ('Black Boxes')\nfor cell in groups[group]:\n print (cell.get_feature('graph').get_nodes())\nprint ('*'*50)\nprint ('Code')\nprint (groups[group][0].get_feature('original_code'))\nprint ('*'*50)\nprint ('Black Box meaning')\nfor cell in groups[group]:\n n = (cell.get_feature('graph').get_nodes())\n if len(n) == 1 and n[0][:5] == 'black':\n print (agr.get_trace(n[0]))\n\nprint (agr.get_trace('black_box1288'))\n\nfor key in agr.names.keys():\n if 'Call' in key:\n print (key)", "Can we go further\nNow that we have a bunch of (hopefully) single element top level nodes, we can combine like pairs.", "graph_sets = []\nfor nb in a.nb_features:\n graph_set = []\n for cell in nb.get_all_cells():\n graph_set.append(cell.get_feature('graph'))\n graph_sets.append(graph_set)\n\nagc = ASTGraphCombiner(graph_sets)\n\nprint ('before',agc.count_graphs())\nagc.reduce_graphs()\nprint ('after',agc.count_graphs())\nprint ('total_distinct',agc.count_distinct_nodes())\n\n\nfor graph in agc.graph_sets[0]:\n print (graph.get_nodes())", "Coverage\nWhat is the final 'coverage' of our method? The best way to represent the coverage is to look at the top level graphs. 
We had a total of 19,882 graphs, and we covered all of these graphs with a total of 3,047 unique node types." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
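The notebook_jaccard_similarity scores used in the notebook above come from nbminer; the underlying measure can be sketched independently (the function and the sample node-type sets below are illustrative, not part of nbminer's API):

```python
def jaccard(a, b):
    """Jaccard similarity |A & B| / |A | B|, taken as 0.0 for two empty sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Illustrative AST node-type sets for two notebooks.
nb0 = {"Assign", "Call", "Import", "For"}
nb1 = {"Assign", "Call", "FunctionDef"}
sim = jaccard(nb0, nb1)  # 2 shared types out of 5 distinct types
```

Comparing every notebook pair this way yields the similarity matrix that the maximum-similarity search above iterates over.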
tensorflow/quantum
docs/tutorials/barren_plateaus.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Barren plateaus\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/quantum/tutorials/barren_plateaus\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/barren_plateaus.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/quantum/blob/master/docs/tutorials/barren_plateaus.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/barren_plateaus.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nIn this example you will explore the result of <a href=\"https://www.nature.com/articles/s41467-018-07090-4\" class=\"external\">McClean, 2019</a> that says not just any quantum neural network structure will do well when it comes to learning. 
In particular you will see that a certain large family of random quantum circuits do not serve as good quantum neural networks, because they have gradients that vanish almost everywhere. In this example you won't be training any models for a specific learning problem, but instead focusing on the simpler problem of understanding the behaviors of gradients.\nSetup", "!pip install tensorflow==2.7.0", "Install TensorFlow Quantum:", "!pip install tensorflow-quantum\n\n# Update package resources to account for version changes.\nimport importlib, pkg_resources\nimportlib.reload(pkg_resources)", "Now import TensorFlow and the module dependencies:", "import tensorflow as tf\nimport tensorflow_quantum as tfq\n\nimport cirq\nimport sympy\nimport numpy as np\n\n# visualization tools\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom cirq.contrib.svg import SVGCircuit\n\nnp.random.seed(1234)", "1. Summary\nRandom quantum circuits with many blocks that look like this ($R_{P}(\\theta)$ is a random Pauli rotation):<br/>\n<img src=\"./images/barren_2.png\" width=700>\nWhere if $f(x)$ is defined as the expectation value w.r.t. $Z_{a}Z_{b}$ for any qubits $a$ and $b$, then there is a problem that $f'(x)$ has a mean very close to 0 and does not vary much. You will see this below:\n2. Generating random circuits\nThe construction from the paper is straightforward to follow. 
The following implements a simple function that generates a random quantum circuit—sometimes referred to as a quantum neural network (QNN)—with the given depth on a set of qubits:", "def generate_random_qnn(qubits, symbol, depth):\n \"\"\"Generate random QNN's with the same structure from McClean et al.\"\"\"\n circuit = cirq.Circuit()\n for qubit in qubits:\n circuit += cirq.ry(np.pi / 4.0)(qubit)\n\n for d in range(depth):\n # Add a series of single qubit rotations.\n for i, qubit in enumerate(qubits):\n random_n = np.random.uniform()\n random_rot = np.random.uniform(\n ) * 2.0 * np.pi if i != 0 or d != 0 else symbol\n if random_n > 2. / 3.:\n # Add a Z.\n circuit += cirq.rz(random_rot)(qubit)\n elif random_n > 1. / 3.:\n # Add a Y.\n circuit += cirq.ry(random_rot)(qubit)\n else:\n # Add a X.\n circuit += cirq.rx(random_rot)(qubit)\n\n # Add CZ ladder.\n for src, dest in zip(qubits, qubits[1:]):\n circuit += cirq.CZ(src, dest)\n\n return circuit\n\n\ngenerate_random_qnn(cirq.GridQubit.rect(1, 3), sympy.Symbol('theta'), 2)", "The authors investigate the gradient of a single parameter $\\theta_{1,1}$. Let's follow along by placing a sympy.Symbol in the circuit where $\\theta_{1,1}$ would be. Since the authors do not analyze the statistics for any other symbols in the circuit, let's replace them with random values now instead of later.\n3. Running the circuits\nGenerate a few of these circuits along with an observable to test the claim that the gradients don't vary much. First, generate a batch of random circuits. Choose a random ZZ observable and batch calculate the gradients and variance using TensorFlow Quantum.\n3.1 Batch variance computation\nLet's write a helper function that computes the variance of the gradient of a given observable over a batch of circuits:", "def process_batch(circuits, symbol, op):\n \"\"\"Compute the variance of a batch of expectations w.r.t. op on each circuit that \n contains `symbol`. 
Note that this method sets up a new compute graph every time it is\n called so it isn't as performant as possible.\"\"\"\n\n # Setup a simple layer to batch compute the expectation gradients.\n expectation = tfq.layers.Expectation()\n\n # Prep the inputs as tensors\n circuit_tensor = tfq.convert_to_tensor(circuits)\n values_tensor = tf.convert_to_tensor(\n np.random.uniform(0, 2 * np.pi, (n_circuits, 1)).astype(np.float32))\n\n # Use TensorFlow GradientTape to track gradients.\n with tf.GradientTape() as g:\n g.watch(values_tensor)\n forward = expectation(circuit_tensor,\n operators=op,\n symbol_names=[symbol],\n symbol_values=values_tensor)\n\n # Return variance of gradients across all circuits.\n grads = g.gradient(forward, values_tensor)\n grad_var = tf.math.reduce_std(grads, axis=0)\n return grad_var.numpy()[0]", "3.1 Set up and run\nChoose the number of random circuits to generate along with their depth and the amount of qubits they should act on. Then plot the results.", "n_qubits = [2 * i for i in range(2, 7)\n ] # Ranges studied in paper are between 2 and 24.\ndepth = 50 # Ranges studied in paper are between 50 and 500.\nn_circuits = 200\ntheta_var = []\n\nfor n in n_qubits:\n # Generate the random circuits and observable for the given n.\n qubits = cirq.GridQubit.rect(1, n)\n symbol = sympy.Symbol('theta')\n circuits = [\n generate_random_qnn(qubits, symbol, depth) for _ in range(n_circuits)\n ]\n op = cirq.Z(qubits[0]) * cirq.Z(qubits[1])\n theta_var.append(process_batch(circuits, symbol, op))\n\nplt.semilogy(n_qubits, theta_var)\nplt.title('Gradient Variance in QNNs')\nplt.xlabel('n_qubits')\nplt.xticks(n_qubits)\nplt.ylabel('$\\\\partial \\\\theta$ variance')\nplt.show()", "This plot shows that for quantum machine learning problems, you can't simply guess a random QNN ansatz and hope for the best. Some structure must be present in the model circuit in order for gradients to vary to the point where learning can happen.\n4. 
Heuristics\nAn interesting heuristic by <a href=\"https://arxiv.org/pdf/1903.05076.pdf\" class=\"external\">Grant, 2019</a> allows one to start very close to random, but not quite. Using the same circuits as McClean et al., the authors propose a different initialization technique for the classical control parameters to avoid barren plateaus. The initialization technique starts some layers with totally random control parameters—but, in the layers immediately following, choose parameters such that the initial transformation made by the first few layers is undone. The authors call this an identity block.\nThe advantage of this heuristic is that by changing just a single parameter, all other blocks outside of the current block will remain the identity—and the gradient signal comes through much stronger than before. This allows the user to pick and choose which variables and blocks to modify to get a strong gradient signal. This heuristic does not prevent the user from falling into a barren plateau during the training phase (and restricts a fully simultaneous update); it just guarantees that you can start outside of a plateau.\n4.1 New QNN construction\nNow construct a function to generate identity block QNNs. This implementation is slightly different from the one in the paper. For now, look at the behavior of the gradient of a single parameter, consistent with McClean et al., so some simplifications can be made.\nTo generate an identity block and train the model, generally you need $U1(\\theta_{1a}) U1(\\theta_{1b})^{\\dagger}$ and not $U1(\\theta_1) U1(\\theta_1)^{\\dagger}$. Initially $\\theta_{1a}$ and $\\theta_{1b}$ are the same angles but they are learned independently. Otherwise, you will always get the identity even after training. The choice for the number of identity blocks is empirical. The deeper the block, the smaller the variance in the middle of the block. 
But at the start and end of the block, the variance of the parameter gradients should be large.", "def generate_identity_qnn(qubits, symbol, block_depth, total_depth):\n \"\"\"Generate random QNN's with the same structure from Grant et al.\"\"\"\n circuit = cirq.Circuit()\n\n # Generate initial block with symbol.\n prep_and_U = generate_random_qnn(qubits, symbol, block_depth)\n circuit += prep_and_U\n\n # Generate dagger of initial block without symbol.\n U_dagger = (prep_and_U[1:])**-1\n circuit += cirq.resolve_parameters(\n U_dagger, param_resolver={symbol: np.random.uniform() * 2 * np.pi})\n\n for d in range(total_depth - 1):\n # Get a random QNN.\n prep_and_U_circuit = generate_random_qnn(\n qubits,\n np.random.uniform() * 2 * np.pi, block_depth)\n\n # Remove the state-prep component\n U_circuit = prep_and_U_circuit[1:]\n\n # Add U\n circuit += U_circuit\n\n # Add U^dagger\n circuit += U_circuit**-1\n\n return circuit\n\n\ngenerate_identity_qnn(cirq.GridQubit.rect(1, 3), sympy.Symbol('theta'), 2, 2)", "4.2 Comparison\nHere you can see that the heuristic does help to keep the variance of the gradient from vanishing as quickly:", "block_depth = 10\ntotal_depth = 5\n\nheuristic_theta_var = []\n\nfor n in n_qubits:\n # Generate the identity block circuits and observable for the given n.\n qubits = cirq.GridQubit.rect(1, n)\n symbol = sympy.Symbol('theta')\n circuits = [\n generate_identity_qnn(qubits, symbol, block_depth, total_depth)\n for _ in range(n_circuits)\n ]\n op = cirq.Z(qubits[0]) * cirq.Z(qubits[1])\n heuristic_theta_var.append(process_batch(circuits, symbol, op))\n\nplt.semilogy(n_qubits, theta_var)\nplt.semilogy(n_qubits, heuristic_theta_var)\nplt.title('Heuristic vs. Random')\nplt.xlabel('n_qubits')\nplt.xticks(n_qubits)\nplt.ylabel('$\\\\partial \\\\theta$ variance')\nplt.show()", "This is a great improvement in getting stronger gradient signals from (near) random QNNs." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
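The vanishing gradient variance plotted in the notebook above can be mimicked with a purely classical toy model, assuming each additional qubit contributes an independent random cosine factor to the gradient; this sketch only illustrates the exponential scaling with system size, it does not reproduce the quantum computation:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_samples(n_factors, n_samples=2000):
    # Analytic derivative of f(theta) = cos(theta + phi) * prod_i cos(psi_i)
    # evaluated at theta = 0, sampled over random phases phi and psi.
    phi = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    psi = rng.uniform(0.0, 2.0 * np.pi, (n_samples, n_factors))
    return -np.sin(phi) * np.cos(psi).prod(axis=1)

# Each extra factor multiplies the gradient variance by E[cos^2] = 1/2,
# so the variance decays exponentially, mirroring the semilog plots above.
variances = [grad_samples(n).var() for n in (2, 6, 10)]
```

Under these assumptions the expected variance is (1/2) * (1/2)**n_factors, which is why the notebook's plots use a logarithmic y-axis.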
mne-tools/mne-tools.github.io
0.15/_downloads/plot_lcmv_beamformer_volume.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute LCMV inverse solution on evoked data in volume source space\nCompute LCMV inverse solution on an auditory evoked dataset in a volume source\nspace. It stores the solution in a nifti file for visualisation, e.g. with\nFreeview.", "# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.beamformer import make_lcmv, apply_lcmv\n\nfrom nilearn.plotting import plot_stat_map\nfrom nilearn.image import index_img\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'\nfname_fwd = data_path + '/MEG/sample/sample_audvis-meg-vol-7-fwd.fif'", "Get epochs", "event_id, tmin, tmax = 1, -0.2, 0.5\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.info['bads'] = ['MEG 2443', 'EEG 053'] # 2 bads channels\nevents = mne.read_events(event_fname)\n\n# Set up pick list: EEG + MEG - bad channels (modify to your needs)\nleft_temporal_channels = mne.read_selection('Left-temporal')\npicks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,\n exclude='bads', selection=left_temporal_channels)\n\n# Pick the channels of interest\nraw.pick_channels([raw.ch_names[pick] for pick in picks])\n# Re-normalize our empty-room projectors, so they are fine after subselection\nraw.info.normalize_proj()\n\n# Read epochs\nproj = False # already applied\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax,\n baseline=(None, 0), preload=True, proj=proj,\n reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))\nevoked = epochs.average()\n\nforward = mne.read_forward_solution(fname_fwd)\n\n# Read regularized noise covariance and compute regularized data covariance\nnoise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0, 
method='shrunk')\ndata_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15,\n method='shrunk')\n\n# Compute weights of free orientation (vector) beamformer with weight\n# normalization (neural activity index, NAI). Providing a noise covariance\n# matrix enables whitening of the data and forward solution. Source orientation\n# is optimized by setting pick_ori to 'max-power'.\n# weight_norm can also be set to 'unit-noise-gain'. Source orientation can also\n# be 'normal' (but only when using a surface-based source space) or None,\n# which computes a vector beamfomer. Note, however, that not all combinations\n# of orientation selection and weight normalization are implemented yet.\nfilters = make_lcmv(evoked.info, forward, data_cov, reg=0.05,\n noise_cov=noise_cov, pick_ori='max-power',\n weight_norm='nai')\n\n# Apply this spatial filter to the evoked data. The output of these two steps\n# is equivalent to calling lcmv() and enables the application of the spatial\n# filter to separate data sets, e.g. when using a common spatial filter to\n# compare conditions.\nstc = apply_lcmv(evoked, filters, max_ori_out='signed')\n\n# take absolute values for plotting\nstc.data[:, :] = np.abs(stc.data)\n\n# Save result in stc files\nstc.save('lcmv-vol')\n\nstc.crop(0.0, 0.2)\n\n# Save result in a 4D nifti file\nimg = mne.save_stc_as_volume('lcmv_inverse.nii.gz', stc,\n forward['src'], mri_resolution=False)\n\nt1_fname = data_path + '/subjects/sample/mri/T1.mgz'\n\n# Plotting with nilearn ######################################################\nplot_stat_map(index_img(img, 61), t1_fname, threshold=1.35,\n title='LCMV (t=%.1f s.)' % stc.times[61])\n\n# plot source time courses with the maximum peak amplitudes\nplt.figure()\nplt.plot(stc.times, stc.data[np.argsort(np.max(stc.data, axis=1))[-40:]].T)\nplt.xlabel('Time (ms)')\nplt.ylabel('LCMV value')\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code" ]
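The method='shrunk' covariance used in the MNE example above is computed via scikit-learn; the core idea of shrinkage regularization can be sketched in plain NumPy, with a fixed illustrative shrinkage intensity rather than the data-driven value MNE estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((50, 5))       # 50 samples, 5 "channels"

emp = np.cov(X, rowvar=False)          # empirical covariance (5 x 5)
mu = np.trace(emp) / emp.shape[0]      # mean variance sets the identity target
alpha = 0.2                            # shrinkage intensity (illustrative, fixed)

# Blend the empirical estimate with a scaled identity: this preserves the
# trace while pulling extreme eigenvalues toward the mean, which stabilizes
# the matrix inversions inside the beamformer.
shrunk = (1 - alpha) * emp + alpha * mu * np.eye(emp.shape[0])
```

Shrinkage matters here because LCMV filters invert the data covariance, and raw sample covariances estimated from few epochs can be ill-conditioned.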
nwhidden/ND101-Deep-Learning
first-neural-network/Your_first_neural_network.ipynb
mit
[ "Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!", "data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()", "Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.", "rides[:24*10].plot(x='dteday', y='cnt')", "Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. 
This is simple to do with Pandas thanks to get_dummies().", "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()", "Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.", "quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std", "Splitting the data into training, testing, and validation sets\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.", "# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]", "We'll split the data into two sets, one for training and one for validating as the network is being trained. 
Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).", "# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]", "Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n<img src=\"assets/neural_network.png\" width=300px>\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. 
Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.", "class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, \n (self.input_nodes, self.hidden_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n self.lr = learning_rate\n \n #### TODO: Set self.activation_function to your implemented sigmoid function ####\n #\n # Note: in Python, you can define a function with a lambda expression,\n # as shown below.\n # using np.clip() to prevent overflow\n self.activation_function = lambda x : 1. / (1. + np.exp(-np.clip(x, -500, 500))) # Replace 0 with your sigmoid calculation.\n \n ### If the lambda code above is not something you're familiar with,\n # You can uncomment out the following three lines and put your \n # implementation there instead.\n #\n #def sigmoid(x):\n # return 0 # Replace 0 with your sigmoid calculation here\n #self.activation_function = sigmoid\n \n \n def train(self, features, targets):\n ''' Train the network on batch of features and targets. 
\n \n Arguments\n ---------\n \n features: 2D array, each row is one data record, each column is a feature\n targets: 1D array of target values\n \n '''\n n_records = features.shape[0]\n delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)\n delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)\n # X: 1x3 | y: 1x1\n for X, y in zip(features, targets):\n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer - Replace these values with your calculations.\n #print(\"X: \"); print(X)\n #print(\"self.weights_input_to_hidden: \"); print(self.weights_input_to_hidden)\n \n hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer\n #print(\"hidden_inputs: \"); print(hidden_inputs)\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n #print(\"hidden_outputs: \"); print(hidden_outputs)\n # TODO: Output layer - Replace these values with your calculations.\n #print(\"self.weights_hidden_to_output: \"); print(self.weights_hidden_to_output)\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n #print(\"final_inputs: \"); print(final_inputs)\n final_outputs = final_inputs # signals from final output layer\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # TODO: Output error - Replace this value with your calculations.\n \n #print(\"y: \"); print(y)\n error = y - final_outputs # Output layer error is the difference between desired target and actual output.\n #print(\"error: \"); print(error)\n \n output_error_term = error\n #print(\"output_error_term: error * final_outputs * (1 - final_outputs)\"); print(output_error_term)\n # TODO: Calculate the hidden layer's contribution to the error\n hidden_error = output_error_term * self.weights_hidden_to_output # (1,) * (2,1) = (2,1)\n #print(\"hidden_error: output_error_term * self.weights_hidden_to_output\"); print(hidden_error)\n \n # TODO: 
Backpropagated error terms - Replace these values with your calculations.\n hidden_error_term = hidden_error.T * hidden_outputs * (1 - hidden_outputs) # (1,2) * (2,) * (2,) = (1,2)\n #print(\"hidden_error_term: hidden_error.T * hidden_outputs * (1 - hidden_outputs)\"); print(hidden_error_term)\n #print(\"hidden_error shape:\")\n #print(hidden_error.shape)\n #print(\"hidden_error_term shape:\")\n #print(hidden_error_term.shape)\n #print(\"delta_weights_i_h shape:\")\n #print(delta_weights_i_h.shape)\n #print(\"output_error_term shape:\")\n #print(output_error_term.shape)\n #print(\"hidden_outputs shape:\")\n #print(hidden_outputs.shape)\n #print(\"delta_weights_h_o shape:\")\n #print(delta_weights_h_o.shape)\n # Weight step (input to hidden)\n #print(\"delta_weights_i_h: \"); print(delta_weights_i_h)\n delta_weights_i_h += X[:,None] * hidden_error_term # (3,1) * (1,2) = (3,2)\n #print(\"delta_weights_i_h: X[:,None] * hidden_error_term\"); print(delta_weights_i_h)\n\n # Weight step (hidden to output)\n #print(\"delta_weights_h_o: \"); print(delta_weights_h_o)\n delta_weights_h_o += output_error_term * hidden_outputs[:,None]\n #print(\"delta_weights_h_o: output_error_term * hidden_outputs[:,None]\"); print(delta_weights_h_o)\n\n # TODO: Update the weights - Replace these values with your calculations.\n #print(\"lr: \"); print(self.lr)\n #print(\"n_records: \"); print(n_records)\n #print(\"weights_hidden_to_output: \"); print(self.weights_hidden_to_output)\n #print(\"weights_input_to_hidden: \"); print(self.weights_input_to_hidden)\n \n self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step\n self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step\n #print(\"weights_hidden_to_output: \"); print(self.weights_hidden_to_output)\n #print(\"weights_input_to_hidden: \"); print(self.weights_input_to_hidden)\n \n\n def 
run(self, features):\n ''' Run a forward pass through the network with input features \n \n Arguments\n ---------\n features: 1D array of feature values\n '''\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer - replace these values with the appropriate calculations.\n #print(\"Features: \"); print(features)\n #print(\"weights_input_to_hidden: \"); print(self.weights_input_to_hidden)\n hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer\n #print(\"hidden_inputs: \"); print(hidden_inputs)\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n #print(\"hidden_outputs: \"); print(hidden_outputs)\n \n # TODO: Output layer - Replace these values with the appropriate calculations.\n #print(\"weights_hidden_to_output: \"); print(self.weights_hidden_to_output)\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n #print(\"final_inputs: \"); print(final_inputs)\n final_outputs = final_inputs # signals from final output layer \n #print(\"final_outputs: \"); print(final_outputs)\n \n return final_outputs\n\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)", "Unit tests\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. 
These tests must all be successful to pass the project.", "import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)", "Training the network\nHere you'll set the hyperparameters for the network. 
The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. 
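The mini-batch sampling behind SGD described above can be sketched with plain NumPy. The array names and sizes below are illustrative stand-ins, not the notebook's variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a training set: 100 records, 3 features each.
features = rng.normal(size=(100, 3))
targets = rng.normal(size=100)

# One SGD "training pass": grab a random batch instead of the whole data set.
batch_size = 16
batch_idx = rng.choice(len(features), size=batch_size, replace=False)
X_batch, y_batch = features[batch_idx], targets[batch_idx]
```

Each pass touches only `batch_size` records, which is why SGD uses many more passes than full-batch gradient descent while keeping each pass cheap. Note that sampling with `np.random.choice` (as the training cell below does) draws with replacement by default; either variant works for SGD.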
You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.", "import sys\n\n### Set the hyperparameters here ###\n\n\niterations = 5000\nlearning_rate_list = [0.1, 0.2, 0.4, 1, 2, 4, 10, 20, 40]\nhidden_nodes_list = [4, 6, 7, 8, 10, 12, 16, 20]\noutput_nodes = 1\n\n### Final, tuned hyperparameters, after grid search ###\n#iterations = 2000\n#learning_rate = 1\n#hidden_nodes = 12\n#output_nodes = 1\n\nN_i = train_features.shape[1] # 56\n#print(N_i)\nbest_loss = 10000 # set to high number; will use this to pick the best network and # of iterations based on loss plot\nfor learning_rate in learning_rate_list:\n for hidden_nodes in hidden_nodes_list:\n\n network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n #print(\"N_i = {}\".format(N_i))\n losses = {'train':[], 'validation':[]}\n for ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']\n\n network.train(X, y)\n\n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n #sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n # + \"% ... Training loss: \" + str(train_loss)[:5] \\\n # + \" ... 
Validation loss: \" + str(val_loss)[:5])\n #sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n if val_loss < best_loss:\n best_loss = val_loss\n best_run = [ii,learning_rate,hidden_nodes,train_loss,val_loss]\n best_run_losses = losses\n best_network = network\n #print(\"iterations: {:5d} | lrn_rate: {:02f} | hdn_nodes: {:3d} | training loss: {:02f} | val_loss: {:02f}\".format(\n # ii, learning_rate, hidden_nodes, train_loss, val_loss))\n # end if\n # end for\n print(\"done loop with hyperparams hidden_nodes = {}, learning_rate = {}. Current best run: \".format(hidden_nodes,learning_rate))\n print(\"iterations: {:5d} | lrn_rate: {:02f} | hdn_nodes: {:3d} | training loss: {:02f} | val_loss: {:02f}\".format(\n best_run[0], best_run[1], best_run[2], best_run[3], best_run[4]))\n\n # end for\n# end for\n\n\nplt.plot(best_run_losses['train'], label='Training loss')\nplt.plot(best_run_losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim((0,1))", "Check out your predictions\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.", "fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = best_network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.loc[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)", "OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? 
Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below\nThe model predicts the data very well except for Dec 22-30. In this timeframe the data is abnormal, presumably due to the effect of the Christmas holiday on the bike rentals. We have 2 years' worth of data; we probably need many years' worth of data to be able to predict seasonal/holiday trends.\n4700 iterations is a bit overkill; I could have stopped training at around 2000 iterations with the final set of hyperparameters. Since I keep only the best network, there was no point going back and retraining with only 2000 iterations. My final selected hyperparameters would be:\niterations = 2000 | learning_rate = 1 | hidden_nodes = 12 | output_nodes = 1" ]
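The "keep only the best network" bookkeeping used in the training loop above reduces to tracking the running minimum of the validation loss; a minimal sketch with made-up loss values:

```python
# Hypothetical per-iteration validation losses: falling, then rising (overfitting).
val_losses = [0.9, 0.6, 0.4, 0.35, 0.33, 0.36, 0.41, 0.5]

best_loss = float('inf')
best_iter = None
for i, loss in enumerate(val_losses):
    if loss < best_loss:
        best_loss = loss
        best_iter = i  # the notebook also snapshots the network itself here

# Iterations past best_iter only increase the validation loss,
# so training could have stopped there (early stopping).
```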
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nwhidden/ND101-Deep-Learning
language-translation/dlnd_language_translation.ipynb
mit
[ "Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. 
This will help the neural network predict when the sentence should end.\nYou can get the &lt;EOS&gt; word id by doing:\npython\ntarget_vocab_to_int['&lt;EOS&gt;']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.", "def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n # TODO: Implement Function\n \n source_ids = [[source_vocab_to_int[word] for word in line.split()] for line in source_text.split('\\n')]\n target_ids = [[target_vocab_to_int[word] for word in (line + ' <EOS>').split()] for line in target_text.split('\\n')]\n\n #print('source text: \\n', source_words[:50], '\\n\\n')\n #print('target text: \\n', target_words[:50], '\\n\\n')\n \n return source_ids, target_ids\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. 
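As a sanity check, the id conversion implemented above can be exercised on a toy corpus; the two vocabulary dictionaries below are invented for illustration:

```python
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    # Same logic as the notebook's implementation: map each line to word ids,
    # appending the <EOS> id to every target line.
    source_ids = [[source_vocab_to_int[w] for w in line.split()]
                  for line in source_text.split('\n')]
    target_ids = [[target_vocab_to_int[w] for w in (line + ' <EOS>').split()]
                  for line in target_text.split('\n')]
    return source_ids, target_ids

src_vocab = {'new': 0, 'jersey': 1}
tgt_vocab = {'new': 0, 'jersey': 1, '<EOS>': 2}
src, tgt = text_to_ids('new jersey', 'new jersey', src_vocab, tgt_vocab)
# src is [[0, 1]]; tgt is [[0, 1, 2]]: the <EOS> id is appended to the target line.
```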
The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()", "Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoder_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\nTarget sequence length placeholder named \"target_sequence_length\" with rank 1\nMax target sequence length tensor named \"max_target_len\" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.\nSource sequence length placeholder named \"source_sequence_length\" with rank 1\n\nReturn the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)", "def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.\n :return: Tuple (input, targets, learning rate, keep probability, target sequence length,\n max target sequence length, source sequence length)\n \"\"\"\n # TODO: Implement Function\n inputs = tf.placeholder(tf.int32, shape = (None, None), name = 'input')\n targets = tf.placeholder(tf.int32, shape = (None, None), name = 'targets')\n learning_rate = tf.placeholder(tf.float32, name = 'learning_rate')\n keep_prob = tf.placeholder(tf.float32, name = 'keep_prob')\n tgt_seq_length = tf.placeholder(tf.int32, (None,), name = 'target_sequence_length')\n src_seq_length = tf.placeholder(tf.int32, (None,), name = 'source_sequence_length')\n \n max_tgt_seq_length = tf.reduce_max(tgt_seq_length, name = 'max_target_sequence_length')\n\n return inputs, targets, learning_rate, keep_prob, tgt_seq_length, max_tgt_seq_length, src_seq_length\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)", "Process Decoder Input\nImplement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.", "def process_decoder_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for encoding\n :param target_data: Target Placeholder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n # TODO: Implement Function\n ending = tf.strided_slice(target_data, [0,0], [batch_size, -1], [1, 1])\n dec_input = tf.concat([tf.fill([batch_size, 1], 
target_vocab_to_int['<GO>']), ending], 1)\n return dec_input\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_encoding_input(process_decoder_input)", "Encoding\nImplement encoding_layer() to create an Encoder RNN layer:\n * Embed the encoder input using tf.contrib.layers.embed_sequence\n * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper\n * Pass cell and embedded input to tf.nn.dynamic_rnn()", "from imp import reload\nreload(tests)\n\ndef encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, \n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :param source_sequence_length: a list of the lengths of each sequence in the batch\n :param source_vocab_size: vocabulary size of source data\n :param encoding_embedding_size: embedding size of source data\n :return: tuple (RNN output, RNN state)\n \"\"\"\n # TODO: Implement Function\n enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, \n source_vocab_size, \n encoding_embedding_size)\n \n # RNN cell\n def make_cell(rnn_size):\n enc_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer = tf.random_uniform_initializer(-0.1, 0.1, seed = 2))\n return enc_cell\n \n enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n enc_cell = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob = keep_prob)\n enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length = source_sequence_length, dtype = tf.float32)\n return enc_output, enc_state\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)", "Decoding - Training\nCreate a training decoding layer:\n* Create a tf.contrib.seq2seq.TrainingHelper \n* 
Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode", "\ndef decoding_layer_train(encoder_state, dec_cell, dec_embed_input, \n target_sequence_length, max_summary_length, \n output_layer, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_summary_length: The length of the longest sequence in the batch\n :param output_layer: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing training logits and sample_id\n \"\"\"\n # TODO: Implement Function\n \n training_helper = tf.contrib.seq2seq.TrainingHelper(inputs = dec_embed_input, \n sequence_length = target_sequence_length,\n time_major = False)\n training_decoder = tf.contrib.seq2seq.BasicDecoder(cell = dec_cell, \n helper = training_helper, \n initial_state = encoder_state,\n output_layer = output_layer)\n dec_outputs = tf.contrib.seq2seq.dynamic_decode(decoder = training_decoder, \n impute_finished = True,\n maximum_iterations = max_summary_length)[0]\n #train_logits = output_layer(dec_outputs)\n \n return dec_outputs\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)", "Decoding - Inference\nCreate inference decoder:\n* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper\n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode", "def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,\n end_of_sequence_id, max_target_sequence_length,\n vocab_size, output_layer, batch_size, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN 
Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS Id\n :param max_target_sequence_length: Maximum length of target sequences\n :param vocab_size: Size of decoder/target vocabulary\n :param output_layer: Function to apply the output layer\n :param batch_size: Batch size\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing inference logits and sample_id\n \"\"\"\n # TODO: Implement Function\n # tile the start tokens for inference helper\n start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype = tf.int32), [batch_size], name = 'start_tokens')\n \n inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(embedding = dec_embeddings,\n start_tokens = start_tokens,\n end_token = end_of_sequence_id)\n \n inference_decoder = tf.contrib.seq2seq.BasicDecoder(cell = dec_cell,\n helper = inference_helper, \n initial_state = encoder_state,\n output_layer = output_layer)\n decoder_outputs = tf.contrib.seq2seq.dynamic_decode(decoder = inference_decoder,\n impute_finished = True,\n maximum_iterations = max_target_sequence_length)[0]\n return decoder_outputs\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)", "Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nEmbed the target sequences\nConstruct the decoder LSTM cell (just like you constructed the encoder cell above)\nCreate an output layer to map the outputs of the decoder to the elements of our vocabulary\nUse your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, 
vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.", "def decoding_layer(dec_input, encoder_state,\n target_sequence_length, max_target_sequence_length,\n rnn_size,\n num_layers, target_vocab_to_int, target_vocab_size,\n batch_size, keep_prob, decoding_embedding_size):\n \"\"\"\n Create decoding layer\n :param dec_input: Decoder input\n :param encoder_state: Encoder state\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_target_sequence_length: Maximum length of target sequences\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param target_vocab_size: Size of target vocabulary\n :param batch_size: The size of the batch\n :param keep_prob: Dropout keep probability\n :param decoding_embedding_size: Decoding embedding size\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n # TODO: Implement Function\n \n # Embed the target sequences\n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n \n # Construct decoder LSTM cell\n def make_cell(rnn_size):\n cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer = tf.random_uniform_initializer(-0.1, 0.1, seed = 2))\n return cell\n dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n # Create output layer\n output_layer = Dense(target_vocab_size, kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev = 0.1))\n \n # Use decoding_layer_train to get training logits\n with tf.variable_scope(\"decode\"):\n train_logits = decoding_layer_train(encoder_state = encoder_state,\n dec_cell = dec_cell,\n dec_embed_input = dec_embed_input, \n target_sequence_length = 
target_sequence_length,\n max_summary_length = max_target_sequence_length,\n output_layer = output_layer,\n keep_prob = keep_prob)\n # end with\n \n # Use decoding_layer_infer to get logits at inference time\n with tf.variable_scope(\"decode\", reuse = True):\n inference_logits = decoding_layer_infer(encoder_state = encoder_state, \n dec_cell = dec_cell, \n dec_embeddings = dec_embeddings, \n start_of_sequence_id = target_vocab_to_int['<GO>'], \n end_of_sequence_id = target_vocab_to_int['<EOS>'], \n max_target_sequence_length = max_target_sequence_length, \n vocab_size = target_vocab_size, \n output_layer = output_layer, \n batch_size = batch_size, \n keep_prob = keep_prob)\n # end with\n \n return train_logits, inference_logits\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)", "Build the Neural Network\nApply the functions you implemented above to:\n\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).\nProcess target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.\nDecode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.", "def seq2seq_model(input_data, target_data, keep_prob, batch_size,\n source_sequence_length, target_sequence_length,\n max_target_sentence_length,\n source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size,\n rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param source_sequence_length: 
Sequence Lengths of source sequences in the batch\n :param target_sequence_length: Sequence Lengths of target sequences in the batch\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Encoder embedding size\n :param dec_embedding_size: Decoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n # TODO: Implement Function\n \n # Encode input using encoding_layer\n _, enc_state = encoding_layer( rnn_inputs = input_data, \n rnn_size = rnn_size, \n num_layers = num_layers, \n keep_prob = keep_prob, \n source_sequence_length = source_sequence_length, \n source_vocab_size = source_vocab_size, \n encoding_embedding_size = enc_embedding_size)\n \n # Process target data using process_decoder_input\n dec_input = process_decoder_input(target_data = target_data, \n target_vocab_to_int = target_vocab_to_int, \n batch_size = batch_size)\n # decode the encoded input using decoding_layer\n dec_output_train, dec_output_infer = decoding_layer( dec_input = dec_input, \n encoder_state = enc_state,\n target_sequence_length = target_sequence_length, \n max_target_sequence_length = max_target_sentence_length,\n rnn_size = rnn_size,\n num_layers = num_layers, \n target_vocab_to_int = target_vocab_to_int, \n target_vocab_size = target_vocab_size,\n batch_size = batch_size, \n keep_prob = keep_prob, \n decoding_embedding_size = dec_embedding_size)\n \n return dec_output_train, dec_output_infer\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to 
the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability\nSet display_step to the number of steps between each debug output statement", "# Number of Epochs\nepochs = 20\n# Batch Size\nbatch_size = 128\n# RNN Size\nrnn_size = 64\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 256\ndecoding_embedding_size = 256\n# Learning Rate\nlearning_rate = 0.01\n# Dropout Keep Probability\nkeep_probability = 0.4\ndisplay_step = 64", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_target_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()\n\n #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n\n train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),\n targets,\n keep_prob,\n batch_size,\n source_sequence_length,\n target_sequence_length,\n max_target_sequence_length,\n len(source_vocab_to_int),\n len(target_vocab_to_int),\n encoding_embedding_size,\n decoding_embedding_size,\n rnn_size,\n num_layers,\n target_vocab_to_int)\n\n\n training_logits = tf.identity(train_logits.rnn_output, name='logits')\n inference_logits = tf.identity(inference_logits.sample_id, name='predictions')\n\n masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')\n\n with 
tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n training_logits,\n targets,\n masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)\n", "Batch and pad the source and target sequences", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef pad_sentence_batch(sentence_batch, pad_int):\n \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same length\"\"\"\n max_sentence = max([len(sentence) for sentence in sentence_batch])\n return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n\n\ndef get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):\n \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n for batch_i in range(0, len(sources)//batch_size):\n start_i = batch_i * batch_size\n\n # Slice the right amount for the batch\n sources_batch = sources[start_i:start_i + batch_size]\n targets_batch = targets[start_i:start_i + batch_size]\n\n # Pad\n pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))\n\n # Need the lengths for the _lengths parameters\n pad_targets_lengths = []\n for target in pad_targets_batch:\n pad_targets_lengths.append(len(target))\n\n pad_source_lengths = []\n for source in pad_sources_batch:\n pad_source_lengths.append(len(source))\n\n yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths\n", "Train\nTrain the neural network on the preprocessed data. 
If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1])],\n 'constant')\n\n return np.mean(np.equal(target, logits))\n\n# Split data into training and validation sets\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\nvalid_source = source_int_text[:batch_size]\nvalid_target = target_int_text[:batch_size]\n(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,\n valid_target,\n batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])) \nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(\n get_batches(train_source, train_target, batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])):\n\n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n target_sequence_length: targets_lengths,\n source_sequence_length: sources_lengths,\n keep_prob: keep_probability})\n\n\n if batch_i % display_step == 0 and batch_i > 0:\n\n\n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch,\n source_sequence_length: sources_lengths,\n target_sequence_length: targets_lengths,\n keep_prob: 1.0})\n\n\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_sources_batch,\n source_sequence_length: valid_sources_lengths,\n target_sequence_length: 
valid_targets_lengths,\n keep_prob: 1.0})\n\n train_acc = get_accuracy(target_batch, batch_train_logits)\n\n valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)\n\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')", "Save Parameters\nSave the batch_size and save_path parameters for inference.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()", "Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. 
Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary, to the &lt;UNK&gt; word id.", "def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n # TODO: Implement Function\n # convert to lowercase\n sentence = sentence.lower()\n \n # convert words to ids, using vocab_to_int\n sequence = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.split()]\n return sequence\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)", "Translate\nThis will translate translate_sentence from English to French.", "translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('predictions:0')\n target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,\n target_sequence_length: [len(translate_sentence)*2]*batch_size,\n source_sequence_length: [len(translate_sentence)]*batch_size,\n keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for 
i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in translate_logits]))\nprint(' French Words: {}'.format(\" \".join([target_int_to_vocab[i] for i in translate_logits])))\n", "Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words, out of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has a larger vocabulary and is richer in the topics discussed. However, this will take you days to train, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as an HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
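The translation notebook above relies on two small text-processing helpers. As a quick, self-contained sanity check of their behavior, here is a plain-Python sketch (the toy vocabulary below is invented for illustration, not taken from the dataset):

```python
# Illustrative sketch of the notebook's helpers; the toy vocab is made up.

def pad_sentence_batch(sentence_batch, pad_int):
    """Pad sentences with pad_int so every sentence in the batch has equal length."""
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

def sentence_to_seq(sentence, vocab_to_int):
    """Lowercase, split on whitespace, and map words to ids (unknown -> <UNK>)."""
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split()]

vocab = {'<PAD>': 0, '<UNK>': 1, 'he': 2, 'saw': 3, 'a': 4, 'truck': 5}
seq = sentence_to_seq('He saw a YELLOW truck', vocab)   # 'yellow' maps to <UNK>
batch = pad_sentence_batch([[2, 3], [2, 3, 4, 5]], vocab['<PAD>'])
```

With this toy vocabulary, `seq` comes out as `[2, 3, 4, 1, 5]` and the shorter sentence is right-padded with the `<PAD>` id so both batch rows share the same length.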
mne-tools/mne-tools.github.io
0.22/_downloads/3674b896fc4e4a279156fa5c0f61aea8/plot_10_preprocessing_overview.ipynb
bsd-3-clause
[ "%matplotlib inline", "Overview of artifact detection\nThis tutorial covers the basics of artifact detection, and introduces the\nartifact detection tools available in MNE-Python.\n :depth: 2\nWe begin as always by importing the necessary Python modules and loading some\nexample data &lt;sample-dataset&gt;:", "import os\nimport numpy as np\nimport mne\n\nsample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file)\nraw.crop(0, 60).load_data() # just use a fraction of data for speed here", "What are artifacts?\nArtifacts are parts of the recorded signal that arise from sources other than\nthe source of interest (i.e., neuronal activity in the brain). As such,\nartifacts are a form of interference or noise relative to the signal of\ninterest. There are many possible causes of such interference, for example:\n\n\nEnvironmental artifacts\n\nPersistent oscillations centered around the AC power line frequency_\n (typically 50 or 60 Hz)\nBrief signal jumps due to building vibration (such as a door slamming)\nElectromagnetic field noise from nearby elevators, cell phones, the\n geomagnetic field, etc.\n\n\n\nInstrumentation artifacts\n\nElectromagnetic interference from stimulus presentation (such as EEG\n sensors picking up the field generated by unshielded headphones)\nContinuous oscillations at specific frequencies used by head position\n indicator (HPI) coils\nRandom high-amplitude fluctuations (or alternatively, constant zero\n signal) in a single channel due to sensor malfunction (e.g., in surface\n electrodes, poor scalp contact)\n\n\n\nBiological artifacts\n\nPeriodic QRS_-like signal patterns (especially in magnetometer\n channels) due to electrical activity of the heart\nShort step-like deflections (especially in frontal EEG channels) due to\n eye movements\nLarge transient deflections (especially in frontal EEG channels) 
due to\n blinking\nBrief bursts of high frequency fluctuations across several channels due\n to the muscular activity during swallowing\n\n\n\nThere are also some cases where signals from within the brain can be\nconsidered artifactual. For example, if a researcher is primarily interested\nin the sensory response to a stimulus, but the experimental paradigm involves\na behavioral response (such as button press), the neural activity associated\nwith the planning and executing the button press could be considered an\nartifact relative to signal of interest (i.e., the evoked sensory response).\n<div class=\"alert alert-info\"><h4>Note</h4><p>Artifacts of the same genesis may appear different in recordings made by\n different EEG or MEG systems, due to differences in sensor design (e.g.,\n passive vs. active EEG electrodes; axial vs. planar gradiometers, etc).</p></div>\n\nWhat to do about artifacts\nThere are 3 basic options when faced with artifacts in your recordings:\n\nIgnore the artifact and carry on with analysis\nExclude the corrupted portion of the data and analyze the remaining data\nRepair the artifact by suppressing artifactual part of the recording\n while (hopefully) leaving the signal of interest intact\n\nThere are many different approaches to repairing artifacts, and MNE-Python\nincludes a variety of tools for artifact repair, including digital filtering,\nindependent components analysis (ICA), Maxwell filtering / signal-space\nseparation (SSS), and signal-space projection (SSP). Separate tutorials\ndemonstrate each of these techniques for artifact repair. Many of the\nartifact repair techniques work on both continuous (raw) data and on data\nthat has already been epoched (though not necessarily equally well); some can\nbe applied to memory-mapped_ data while others require the data to be\ncopied into RAM. 
Of course, before you can choose any of these strategies you\nmust first detect the artifacts, which is the topic of the next section.\nArtifact detection\nMNE-Python includes a few tools for automated detection of certain artifacts\n(such as heartbeats and blinks), but of course you can always visually\ninspect your data to identify and annotate artifacts as well.\nWe saw in the introductory tutorial &lt;tut-overview&gt; that the example\ndata includes :term:SSP projectors &lt;projector&gt;, so before we look at\nartifacts let's set aside the projectors in a separate variable and then\nremove them from the :class:~mne.io.Raw object using the\n:meth:~mne.io.Raw.del_proj method, so that we can inspect our data in its\noriginal, raw state:", "ssp_projectors = raw.info['projs']\nraw.del_proj()", "Low-frequency drifts\nLow-frequency drifts are most readily detected by visual inspection using the\nbasic :meth:~mne.io.Raw.plot method, though it is helpful to plot a\nrelatively long time span and to disable channel-wise DC shift correction.\nHere we plot 60 seconds and show all the magnetometer channels:", "mag_channels = mne.pick_types(raw.info, meg='mag')\nraw.plot(duration=60, order=mag_channels, n_channels=len(mag_channels),\n remove_dc=False)", "Low-frequency drifts are readily removed by high-pass filtering at a fairly\nlow cutoff frequency (the wavelength of the drifts seen above is probably\naround 20 seconds, so in this case a cutoff of 0.1 Hz would probably suppress\nmost of the drift).\nPower line noise\nPower line artifacts are easiest to see on plots of the spectrum, so we'll\nuse :meth:~mne.io.Raw.plot_psd to illustrate.", "fig = raw.plot_psd(tmax=np.inf, fmax=250, average=True)\n# add some arrows at 60 Hz and its harmonics:\nfor ax in fig.axes[1:]:\n freqs = ax.lines[-1].get_xdata()\n psds = ax.lines[-1].get_ydata()\n for freq in (60, 120, 180, 240):\n idx = np.searchsorted(freqs, freq)\n ax.arrow(x=freqs[idx], y=psds[idx] + 18, dx=0, dy=-12, 
color='red',\n width=0.1, head_width=3, length_includes_head=True)", "Here we see narrow frequency peaks at 60, 120, 180, and 240 Hz — the power\nline frequency of the USA (where the sample data was recorded) and its 2nd,\n3rd, and 4th harmonics. Other peaks (around 25 to 30 Hz, and the second\nharmonic of those) are probably related to the heartbeat, which is more\neasily seen in the time domain using a dedicated heartbeat detection function\nas described in the next section.\nHeartbeat artifacts (ECG)\nMNE-Python includes a dedicated function\n:func:~mne.preprocessing.find_ecg_events in the :mod:mne.preprocessing\nsubmodule, for detecting heartbeat artifacts from either dedicated ECG\nchannels or from magnetometers (if no ECG channel is present). Additionally,\nthe function :func:~mne.preprocessing.create_ecg_epochs will call\n:func:~mne.preprocessing.find_ecg_events under the hood, and use the\nresulting events array to extract epochs centered around the detected\nheartbeat artifacts. Here we create those epochs, then show an image plot of\nthe detected ECG artifacts along with the average ERF across artifacts. 
We'll\nshow all three channel types, even though EEG channels are less strongly\naffected by heartbeat artifacts:", "ecg_epochs = mne.preprocessing.create_ecg_epochs(raw)\necg_epochs.plot_image(combine='mean')", "The horizontal streaks in the magnetometer image plot reflect the fact that\nthe heartbeat artifacts are superimposed on low-frequency drifts like the one\nwe saw in an earlier section; to avoid this you could pass\nbaseline=(-0.5, -0.2) in the call to\n:func:~mne.preprocessing.create_ecg_epochs.\nYou can also get a quick look at the\nECG-related field pattern across sensors by averaging the ECG epochs together\nvia the :meth:~mne.Epochs.average method, and then using the\n:meth:mne.Evoked.plot_topomap method:", "avg_ecg_epochs = ecg_epochs.average().apply_baseline((-0.5, -0.2))", "Here again we can visualize the spatial pattern of the associated field at\nvarious times relative to the peak of the ECG response:", "avg_ecg_epochs.plot_topomap(times=np.linspace(-0.05, 0.05, 11))", "Or, we can get an ERP/F plot with :meth:~mne.Evoked.plot or a combined\nscalp field maps and ERP/F plot with :meth:~mne.Evoked.plot_joint. Here\nwe've specified the times for scalp field maps manually, but if not provided\nthey will be chosen automatically based on peaks in the signal:", "avg_ecg_epochs.plot_joint(times=[-0.25, -0.025, 0, 0.025, 0.25])", "Ocular artifacts (EOG)\nSimilar to the ECG detection and epoching methods described above, MNE-Python\nalso includes functions for detecting and extracting ocular artifacts:\n:func:~mne.preprocessing.find_eog_events and\n:func:~mne.preprocessing.create_eog_epochs. Once again we'll use the\nhigher-level convenience function that automatically finds the artifacts and\nextracts them into an :class:~mne.Epochs object in one step. Unlike the\nheartbeat artifacts seen above, ocular artifacts are usually most prominent\nin the EEG channels, but we'll still show all three channel types. 
We'll use\nthe baseline parameter this time too; note that there are many fewer\nblinks than heartbeats, which makes the image plots appear somewhat blocky:", "eog_epochs = mne.preprocessing.create_eog_epochs(raw, baseline=(-0.5, -0.2))\neog_epochs.plot_image(combine='mean')\neog_epochs.average().plot_joint()", "Summary\nFamiliarizing yourself with typical artifact patterns and magnitudes is a\ncrucial first step in assessing the efficacy of later attempts to repair\nthose artifacts. A good rule of thumb is that the artifact amplitudes should\nbe orders of magnitude larger than your signal of interest — and there should\nbe several occurrences of such events — in order to find signal\ndecompositions that effectively estimate and repair the artifacts.\nSeveral other tutorials in this section illustrate the various tools for\nartifact repair, and discuss the pros and cons of each technique, for\nexample:\n\ntut-artifact-ssp\ntut-artifact-ica\ntut-artifact-sss\n\nThere are also tutorials on general-purpose preprocessing steps such as\nfiltering and resampling &lt;tut-filter-resample&gt; and excluding\nbad channels &lt;tut-bad-channels&gt; or spans of data\n&lt;tut-reject-data-spans&gt;.\n.. LINKS\nhttps://en.wikipedia.org/wiki/Mains_electricity" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
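The power-line section of the tutorial above identifies 60 Hz peaks visually with raw.plot_psd. The same idea can be sketched without MNE, using only NumPy's FFT on a synthetic signal (the sampling rate, duration, and noise level below are made-up illustration values, not the sample dataset's):

```python
import numpy as np

# Synthetic 2-second recording at an assumed 1000 Hz sampling rate,
# contaminated by a 60 Hz "power line" sinusoid plus white noise.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 60.0 * t) + 0.1 * rng.standard_normal(t.size)

# A plain periodogram: the strongest spectral bin lands on the line frequency.
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2
peak_freq = freqs[np.argmax(power)]
```

Here `peak_freq` comes out as 60.0 Hz; in real data you would also look for the harmonics (120, 180, 240 Hz), exactly as the arrows added to the plot_psd figure mark them.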
qinwf-nuan/keras-js
notebooks/layers/pooling/GlobalMaxPooling3D.ipynb
mit
[ "import numpy as np\nfrom keras.models import Model\nfrom keras.layers import Input\nfrom keras.layers.pooling import GlobalMaxPooling3D\nfrom keras import backend as K\nimport json\nfrom collections import OrderedDict\n\ndef format_decimal(arr, places=6):\n return [round(x * 10**places) / 10**places for x in arr]\n\nDATA = OrderedDict()", "GlobalMaxPooling3D\n[pooling.GlobalMaxPooling3D.0] input 6x6x3x4, data_format='channels_last'", "data_in_shape = (6, 6, 3, 4)\nL = GlobalMaxPooling3D(data_format='channels_last')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(270)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['pooling.GlobalMaxPooling3D.0'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}", "[pooling.GlobalMaxPooling3D.1] input 3x6x6x3, data_format='channels_first'", "data_in_shape = (3, 6, 6, 3)\nL = GlobalMaxPooling3D(data_format='channels_first')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(271)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', 
data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['pooling.GlobalMaxPooling3D.1'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}", "[pooling.GlobalMaxPooling3D.2] input 5x3x2x1, data_format='channels_last'", "data_in_shape = (5, 3, 2, 1)\nL = GlobalMaxPooling3D(data_format='channels_last')\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = L(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nnp.random.seed(272)\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['pooling.GlobalMaxPooling3D.2'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}", "export for Keras.js tests", "print(json.dumps(DATA))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
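The expected outputs serialized above can be cross-checked against a plain NumPy reference for global max pooling, which is simply the maximum over the three spatial axes (this is an illustrative reimplementation, not Keras.js or Keras code):

```python
import numpy as np

def global_max_pool_3d(x, data_format='channels_last'):
    """Max over the three spatial axes, keeping only the channel axis."""
    spatial_axes = (0, 1, 2) if data_format == 'channels_last' else (1, 2, 3)
    return x.max(axis=spatial_axes)

np.random.seed(270)                           # same seed as test case 0 above
x = 2 * np.random.random((6, 6, 3, 4)) - 1    # channels_last input
out = global_max_pool_3d(x)                   # one value per channel: shape (4,)

y = 2 * np.random.random((3, 6, 6, 3)) - 1    # channels_first input
out_cf = global_max_pool_3d(y, data_format='channels_first')  # shape (3,)
```

Whatever the spatial extent, the result keeps only the channel axis, which is why the notebook's expected-output arrays have as many entries as the input has channels.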
dariox2/CADL
session-5/session-5-part-2[4-5].ipynb
apache-2.0
[ "Session 5: Generative Networks\nAssignment: Generative Adversarial Networks, Variational Autoencoders, and Recurrent Neural Networks\n<p class=\"lead\">\n<a href=\"https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info\">Creative Applications of Deep Learning with Google's Tensorflow</a><br />\n<a href=\"http://pkmital.com\">Parag K. Mital</a><br />\n<a href=\"https://www.kadenze.com\">Kadenze, Inc.</a>\n</p>\n\nContinued from session-5-part-1.ipynb...\nTable of Contents\n<!-- MarkdownTOC autolink=\"true\" autoanchor=\"true\" bracket=\"round\" -->\n\nOverview\nLearning Goals\nPart 1 - Generative Adversarial Networks (GAN) / Deep Convolutional GAN (DCGAN)\nIntroduction\nBuilding the Encoder\nBuilding the Discriminator for the Training Samples\nBuilding the Decoder\nBuilding the Generator\nBuilding the Discriminator for the Generated Samples\nGAN Loss Functions\nBuilding the Optimizers w/ Regularization\nLoading a Dataset\nTraining\nEquilibrium\nPart 2 - Variational Auto-Encoding Generative Adversarial Network (VAEGAN)\nBatch Normalization\nBuilding the Encoder\nBuilding the Variational Layer\nBuilding the Decoder\nBuilding VAE/GAN Loss Functions\nCreating the Optimizers\nLoading the Dataset\nTraining\nPart 3 - Latent-Space Arithmetic\nLoading the Pre-Trained Model\nExploring the Celeb Net Attributes\nFind the Latent Encoding for an Attribute\nLatent Feature Arithmetic\nExtensions\nPart 4 - Character-Level Language Model\nPart 5 - Pretrained Char-RNN of Donald Trump\nGetting the Trump Data\nBasic Text Analysis\nLoading the Pre-trained Trump Model\nInference: Keeping Track of the State\nProbabilistic Sampling\nInference: Temperature\nInference: Priming\n\n\nAssignment Submission\n\n<!-- /MarkdownTOC -->\n\nFirst check the Python version\nimport sys\nif sys.version_info < (3,4):\n print('You are running an older version of Python!\\n\\n',\n 'You should consider updating to Python 3.4.0 or',\n 'higher as the libraries built for 
this course',\n 'have only been tested in Python 3.4 and higher.\\n')\n print('Try installing the Python 3.5 version of anaconda'\n 'and then restart jupyter notebook:\\n',\n 'https://www.continuum.io/downloads\\n\\n')\nNow get necessary libraries\ntry:\n import os\n import numpy as np\n import matplotlib.pyplot as plt\n from skimage.transform import resize\n from skimage import data\n from scipy.misc import imresize\n from scipy.ndimage.filters import gaussian_filter\n import IPython.display as ipyd\n import tensorflow as tf\n from libs import utils, gif, datasets, dataset_utils, nb_utils\nexcept ImportError as e:\n print(\"Make sure you have started notebook in the same directory\",\n \"as the provided zip file which includes the 'libs' folder\",\n \"and the file 'utils.py' inside of it. You will NOT be able\",\n \"to complete this assignment unless you restart jupyter\",\n \"notebook inside the directory created by extracting\",\n \"the zip file or cloning the github repo.\")\n print(e)\nWe'll tell matplotlib to inline any drawn figures like so:\n%matplotlib inline\nplt.style.use('ggplot')", "# Bit of formatting because I don't like the default inline code style:\nfrom IPython.core.display import HTML\nHTML(\"\"\"<style> .rendered_html code { \n padding: 2px 4px;\n color: #c7254e;\n background-color: #f9f2f4;\n border-radius: 4px;\n} </style>\"\"\")", "<style> .rendered_html code { \n padding: 2px 4px;\n color: #c7254e;\n background-color: #f9f2f4;\n border-radius: 4px;\n} </style>\n\n<a name=\"part-4---character-level-language-model\"></a>\nPart 4 - Character-Level Language Model\nWe'll now continue onto the second half of the homework and explore recurrent neural networks. We saw one potential application of a recurrent neural network which learns letter by letter the content of a text file. We were then able to synthesize from the model to produce new phrases. Let's try to build one. 
Replace the code below with something that loads your own text file or one from the internet. Be creative with this!\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "import tensorflow as tf\nfrom six.moves import urllib\nscript = 'http://www.awesomefilm.com/script/biglebowski.txt'\ntxts = []\nf, _ = urllib.request.urlretrieve(script, script.split('/')[-1])\nwith open(f, 'r') as fp:\n txt = fp.read()", "Let's take a look at the first part of this:", "txt[:100]", "We'll just clean up the text a little. This isn't necessary, but can help the training along a little. In the example text I provided, there is a lot of white space (those \\t's are tabs). I'll remove them. There are also repetitions of \\n, new lines, which are not necessary. The code below will remove the tabs, ending whitespace, and any repeating newlines. Replace this with any preprocessing that makes sense for your dataset. Try to boil it down to just the possible letters for what you want to learn/synthesize while retaining any meaningful patterns:", "txt = \"\\n\".join([txt_i.strip()\n for txt_i in txt.replace('\\t', '').split('\\n')\n if len(txt_i)])", "Now we can see how much text we have:", "len(txt)", "In general, we'll want as much text as possible. But I'm including this just as a minimal example so you can explore your own. Try making a text file and seeing the size of it. You'll want about 1 MB at least.\nLet's now take a look at the different characters we have in our file:", "vocab = list(set(txt))\nvocab.sort()\nlen(vocab)\nprint(vocab)", "And then create a mapping which can take us from the letter to an integer look up table of that letter (and vice-versa). To do this, we'll use an OrderedDict from the collections library. 
In Python 3.6, this is the default behavior of dict, but in earlier versions of Python, we'll need to be explicit by using OrderedDict.", "from collections import OrderedDict\n\nencoder = OrderedDict(zip(vocab, range(len(vocab))))\ndecoder = OrderedDict(zip(range(len(vocab)), vocab))\n\nencoder", "We'll store a few variables that will determine the size of our network. First, batch_size determines how many sequences at a time we'll train on. The seqence_length parameter defines the maximum length to unroll our recurrent network for. This is effectively the depth of our network during training to help guide gradients along. Within each layer, we'll have n_cell LSTM units, and n_layers layers worth of LSTM units. Finally, we'll store the total number of possible characters in our data, which will determine the size of our one hot encoding (like we had for MNIST in Session 3).", "# Number of sequences in a mini batch\nbatch_size = 100\n\n# Number of characters in a sequence\nsequence_length = 50\n\n# Number of cells in our LSTM layer\nn_cells = 128\n\n# Number of LSTM layers\nn_layers = 3\n\n# Total number of characters in the one-hot encoding\nn_chars = len(vocab)", "Let's now create the input and output to our network. We'll use placeholders and feed these in later. The size of these need to be [batch_size, sequence_length]. We'll then see how to build the network in between.\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "X = tf.placeholder(tf.int32, shape=..., name='X')\n\n# We'll have a placeholder for our true outputs\nY = tf.placeholder(tf.int32, shape=..., name='Y')", "The first thing we need to do is convert each of our sequence_length vectors in our batch to n_cells LSTM cells. We use a lookup table to find the value in X and use this as the input to n_cells LSTM cells. Our lookup table has n_chars possible elements and connects each character to n_cells cells. 
We create our lookup table using tf.get_variable and then the function tf.nn.embedding_lookup to connect our X placeholder to n_cells number of neurons.", "# we first create a variable to take us from our one-hot representation to our LSTM cells\nembedding = tf.get_variable(\"embedding\", [n_chars, n_cells])\n\n# And then use tensorflow's embedding lookup to look up the ids in X\nXs = tf.nn.embedding_lookup(embedding, X)\n\n# The resulting lookups are concatenated into a dense tensor\nprint(Xs.get_shape().as_list())", "Now recall from the lecture that recurrent neural networks share their weights across timesteps. So we don't want to have one large matrix with every timestep, but instead separate them. We'll use tf.split to split our [batch_size, sequence_length, n_cells] array in Xs into a list of sequence_length elements each composed of [batch_size, n_cells] arrays. This gives us sequence_length number of arrays of [batch_size, 1, n_cells]. We then use tf.squeeze to remove the 1st index corresponding to the singleton sequence_length index, resulting in simply [batch_size, n_cells].", "with tf.name_scope('reslice'):\n Xs = [tf.squeeze(seq, [1])\n for seq in tf.split(1, sequence_length, Xs)]", "With each of our timesteps split up, we can now connect them to a set of LSTM recurrent cells. We tell the tf.nn.rnn_cell.BasicLSTMCell method how many cells we want, i.e. how many neurons there are, and we also specify that our state will be stored as a tuple. This state defines the internal state of the cells as well as the connection from the previous timestep. We can also pass a value for the forget_bias. Be sure to experiment with this parameter as it can significantly affect performance (e.g. Gers, Felix A, Schmidhuber, Jurgen, and Cummins, Fred. Learning to forget: Continual prediction with lstm. 
Neural computation, 12(10):2451–2471, 2000).", "cells = tf.nn.rnn_cell.BasicLSTMCell(num_units=n_cells, state_is_tuple=True, forget_bias=1.0)", "Let's take a look at the cell's state size:", "cells.state_size", "c defines the internal memory and h the output. We'll have as part of our cells, both an initial_state and a final_state. These will become important during inference and we'll see how these work more then. For now, we'll set the initial_state to all zeros using the convenience function provided inside our cells object, zero_state:", "initial_state = cells.zero_state(tf.shape(X)[0], tf.float32)", "Looking at what this does, we can see that it creates a tf.Tensor of zeros for our c and h states for each of our n_cells and stores this as a tuple inside the LSTMStateTuple object:", "initial_state", "So far, we have created a single layer of LSTM cells composed of n_cells number of cells. If we want another layer, we can use the tf.nn.rnn_cell.MultiRNNCell method, giving it our current cells, and a bit of pythonery to multiply our cells by the number of layers we want. We'll then update our initial_state variable to include the additional cells:", "cells = tf.nn.rnn_cell.MultiRNNCell(\n [cells] * n_layers, state_is_tuple=True)\ninitial_state = cells.zero_state(tf.shape(X)[0], tf.float32)", "Now if we take a look at our initial_state, we should see one LSTMStateTuple for each of our layers:", "initial_state", "So far, we haven't connected our recurrent cells to anything. Let's do this now using the tf.nn.rnn method. We also pass it our initial_state variables. It gives us the outputs of the rnn, as well as their states after having been computed. Contrast that with the initial_state, which set the LSTM cells to zeros. After having computed something, the cells will all have a different value somehow reflecting the temporal dynamics and expectations of the next input. 
These will be stored in the state tensors for each of our LSTM layers inside a LSTMStateTuple just like the initial_state variable.\n```python\nhelp(tf.nn.rnn)\nHelp on function rnn in module tensorflow.python.ops.rnn:\nrnn(cell, inputs, initial_state=None, dtype=None, sequence_length=None, scope=None)\n Creates a recurrent neural network specified by RNNCell cell.\nThe simplest form of RNN network generated is:\n\n state = cell.zero_state(...)\n outputs = []\n for input_ in inputs:\n output, state = cell(input_, state)\n outputs.append(output)\n return (outputs, state)\n\nHowever, a few other options are available:\n\nAn initial state can be provided.\nIf the sequence_length vector is provided, dynamic calculation is performed.\nThis method of calculation does not compute the RNN steps past the maximum\nsequence length of the minibatch (thus saving computational time),\nand properly propagates the state at an example's sequence length\nto the final state output.\n\nThe dynamic calculation performed is, at time t for batch row b,\n (output, state)(b, t) =\n (t &gt;= sequence_length(b))\n ? 
(zeros(cell.output_size), states(b, sequence_length(b) - 1))\n : cell(input(b, t), state(b, t - 1))\n\nArgs:\n cell: An instance of RNNCell.\n inputs: A length T list of inputs, each a `Tensor` of shape\n `[batch_size, input_size]`, or a nested tuple of such elements.\n initial_state: (optional) An initial state for the RNN.\n If `cell.state_size` is an integer, this must be\n a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`.\n If `cell.state_size` is a tuple, this should be a tuple of\n tensors having shapes `[batch_size, s] for s in cell.state_size`.\n dtype: (optional) The data type for the initial state and expected output.\n Required if initial_state is not provided or RNN state has a heterogeneous\n dtype.\n sequence_length: Specifies the length of each sequence in inputs.\n An int32 or int64 vector (tensor) size `[batch_size]`, values in `[0, T)`.\n scope: VariableScope for the created subgraph; defaults to \"RNN\".\n\nReturns:\n A pair (outputs, state) where:\n - outputs is a length T list of outputs (one for each input), or a nested\n tuple of such elements.\n - state is the final state\n\nRaises:\n TypeError: If `cell` is not an instance of RNNCell.\n ValueError: If `inputs` is `None` or an empty list, or if the input depth\n (column size) cannot be inferred from inputs via shape inference.\n\n```\nUse the help on the function tf.nn.rnn to create the outputs and state variables as below. We've already created each of the variables you need to use:\n<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>", "outputs, state = tf.nn.rnn(cell=..., inputs=..., initial_state=...)", "Let's take a look at the state now:", "state", "Our outputs are returned as a list for each of our timesteps:", "outputs", "We'll now stack all our outputs for every timestep. Since the weights are shared, we can treat every observation at each timestep and in each batch with the same weight matrices going forward.
Each timestep for each batch is its own observation. So we'll stack these in a 2d matrix so that we can create our softmax layer:", "outputs_flat = tf.reshape(tf.concat(1, outputs), [-1, n_cells])", "Our outputs are now concatenated so that we have [batch_size * timesteps, n_cells]", "outputs_flat", "We now create a softmax layer just like we did in Session 3 and in Session 3's homework. We multiply our final LSTM layer's n_cells outputs by a weight matrix to give us n_chars outputs. We then scale this output using a tf.nn.softmax layer so that the outputs become probabilities, by exponentiating each value and dividing by the sum. We store the softmax probabilities in probs as well as keep track of the maximum index in Y_pred:", "with tf.variable_scope('prediction'):\n W = tf.get_variable(\n \"W\",\n shape=[n_cells, n_chars],\n initializer=tf.random_normal_initializer(stddev=0.1))\n b = tf.get_variable(\n \"b\",\n shape=[n_chars],\n initializer=tf.random_normal_initializer(stddev=0.1))\n\n # Find the output prediction of every single character in our minibatch\n # we denote the pre-activation prediction, logits.\n logits = tf.matmul(outputs_flat, W) + b\n\n # We get the probabilistic version by calculating the softmax of this\n probs = tf.nn.softmax(logits)\n\n # And then we can find the index of maximum probability\n Y_pred = tf.argmax(probs, 1)", "To train the network, we'll measure the loss between our predicted outputs and true outputs. We could use the probs variable, but we can also make use of tf.nn.softmax_cross_entropy_with_logits which will compute the softmax for us. We therefore need to pass in the variable just before the softmax layer, denoted as logits (unscaled values). This takes our variable logits, the unscaled predicted outputs, as well as our true outputs, Y. Before we give it Y, we'll need to flatten our true outputs in the same way, to [batch_size x timesteps].
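To make the shape bookkeeping concrete, here is the same flatten-then-softmax arithmetic in plain NumPy with toy shapes. This is an illustration of the reshaping and normalization only, not the TensorFlow graph code above; all sizes here are made-up small numbers:

```python
import numpy as np

batch_size, timesteps, n_cells, n_chars = 2, 3, 4, 5
rng = np.random.RandomState(0)

# One [batch_size, n_cells] array per timestep, as the RNN returns them
outputs = [rng.randn(batch_size, n_cells) for _ in range(timesteps)]

# Concatenate timesteps side by side, then flatten to one row per observation
outputs_flat = np.concatenate(outputs, axis=1).reshape(-1, n_cells)
print(outputs_flat.shape)  # (batch_size * timesteps, n_cells)

# Softmax layer: project each row to n_chars scores, then normalize each row
W = rng.randn(n_cells, n_chars) * 0.1
b = np.zeros(n_chars)
logits = outputs_flat @ W + b
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
Y_pred = probs.argmax(axis=1)
print(probs.sum(axis=1))  # every row is a probability distribution
```

Each of the batch_size * timesteps rows now independently predicts a distribution over the n_chars possible next characters.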
Luckily, tensorflow provides a convenience for doing this, the tf.nn.sparse_softmax_cross_entropy_with_logits function:\n```python\nhelp(tf.nn.sparse_softmax_cross_entropy_with_logits)\nHelp on function sparse_softmax_cross_entropy_with_logits in module tensorflow.python.ops.nn_ops:\nsparse_softmax_cross_entropy_with_logits(logits, labels, name=None)\n Computes sparse softmax cross entropy between logits and labels.\nMeasures the probability error in discrete classification tasks in which the\nclasses are mutually exclusive (each entry is in exactly one class). For\nexample, each CIFAR-10 image is labeled with one and only one label: an image\ncan be a dog or a truck, but not both.\n\n**NOTE:** For this operation, the probability of a given label is considered\nexclusive. That is, soft classes are not allowed, and the `labels` vector\nmust provide a single specific index for the true class for each row of\n`logits` (each minibatch entry). For soft softmax classification with\na probability distribution for each entry, see\n`softmax_cross_entropy_with_logits`.\n\n**WARNING:** This op expects unscaled logits, since it performs a softmax\non `logits` internally for efficiency. Do not call this op with the\noutput of `softmax`, as it will produce incorrect results.\n\nA common use case is to have logits of shape `[batch_size, num_classes]` and\nlabels of shape `[batch_size]`. But higher dimensions are supported.\n\nArgs:\n logits: Unscaled log probabilities of rank `r` and shape\n `[d_0, d_1, ..., d_{r-2}, num_classes]` and dtype `float32` or `float64`.\n labels: `Tensor` of shape `[d_0, d_1, ..., d_{r-2}]` and dtype `int32` or\n `int64`. 
Each entry in `labels` must be an index in `[0, num_classes)`.\n Other values will result in a loss of 0, but incorrect gradient\n computations.\n name: A name for the operation (optional).\n\nReturns:\n A `Tensor` of the same shape as `labels` and of the same type as `logits`\n with the softmax cross entropy loss.\n\nRaises:\n ValueError: If logits are scalars (need to have rank &gt;= 1) or if the rank\n of the labels is not equal to the rank of the logits minus one.\n\n```", "with tf.variable_scope('loss'):\n # Compute mean cross entropy loss for each output.\n Y_true_flat = tf.reshape(tf.concat(1, Y), [-1])\n # logits are [batch_size x timesteps, n_chars] and\n # Y_true_flat are [batch_size x timesteps]\n loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, Y_true_flat)\n # Compute the mean over our `batch_size` x `timesteps` number of observations\n mean_loss = tf.reduce_mean(loss)", "Finally, we can create an optimizer in much the same way as we've done with every other network, except that we will also \"clip\" the gradients of every trainable parameter. This is a hacky way to ensure that the gradients do not grow too large (the literature calls this the \"exploding gradient problem\"). However, note that the LSTM is built to help ensure this does not happen by allowing the gradient to be \"gated\". To learn more about this, please consider reading the following material:\nhttp://www.felixgers.de/papers/phd.pdf\nhttps://colah.github.io/posts/2015-08-Understanding-LSTMs/", "with tf.name_scope('optimizer'):\n optimizer = tf.train.AdamOptimizer(learning_rate=0.001)\n gradients = []\n clip = tf.constant(5.0, name=\"clip\")\n for grad, var in optimizer.compute_gradients(mean_loss):\n gradients.append((tf.clip_by_value(grad, -clip, clip), var))\n updates = optimizer.apply_gradients(gradients)", "Let's take a look at the graph:", "nb_utils.show_graph(tf.get_default_graph().as_graph_def())", "Below is the rest of the code we'll need to train the network.
I do not recommend running this inside Jupyter Notebook for the entire length of the training because the network can take 1-2 days at least to train, and your browser may very likely complain. Instead, you should write a Python script containing the necessary bits of code and run it using the Terminal. We didn't go over how to do this, so I'll leave it for you as an exercise. The next part of this notebook will have you load a pre-trained network.", "with tf.Session() as sess:\n init = tf.initialize_all_variables()\n sess.run(init)\n\n cursor = 0\n it_i = 0\n while it_i < 500:\n Xs, Ys = [], []\n for batch_i in range(batch_size):\n if (cursor + sequence_length) >= len(txt) - sequence_length - 1:\n cursor = 0\n Xs.append([encoder[ch]\n for ch in txt[cursor:cursor + sequence_length]])\n Ys.append([encoder[ch]\n for ch in txt[cursor + 1: cursor + sequence_length + 1]])\n\n cursor = (cursor + sequence_length)\n Xs = np.array(Xs).astype(np.int32)\n Ys = np.array(Ys).astype(np.int32)\n\n loss_val, _ = sess.run([mean_loss, updates],\n feed_dict={X: Xs, Y: Ys})\n if it_i % 100 == 0:\n print(it_i, loss_val)\n\n if it_i % 500 == 0:\n p = sess.run(probs, feed_dict={X: np.array(Xs[-1])[np.newaxis]})\n ps = [np.random.choice(range(n_chars), p=p_i.ravel())\n for p_i in p]\n p = [np.argmax(p_i) for p_i in p]\n if isinstance(txt[0], str):\n print('original:', \"\".join(\n [decoder[ch] for ch in Xs[-1]]))\n print('synth(samp):', \"\".join(\n [decoder[ch] for ch in ps]))\n print('synth(amax):', \"\".join(\n [decoder[ch] for ch in p]))\n else:\n print([decoder[ch] for ch in ps])\n\n it_i += 1", "<a name=\"part-5---pretrained-char-rnn-of-donald-trump\"></a>\nPart 5 - Pretrained Char-RNN of Donald Trump\nRather than stick around to let a model train, let's now explore one I've trained for you on Donald Trump. If you've trained your own model on your own text corpus then great!
You should be able to use that in place of the one I've provided and still continue with the rest of the notebook. \nFor the Donald Trump corpus, there are a lot of video transcripts that you can find online. I've searched for a few of these, put them in a giant text file, made everything lowercase, and removed any extraneous letters/symbols to help reduce the vocabulary (not that it's not very large to begin with, ha).\nI used the code exactly as above to train on the text I gathered and left it to train for about 2 days. The only modification is that I also used \"dropout\" which you can see in the libs/charrnn.py file. Let's explore it now and we'll see how we can play with \"sampling\" the model to generate new phrases, and how to \"prime\" the model (a psychological term referring to when someone is exposed to something shortly before another event).\nFirst, let's clean up any existing graph:", "tf.reset_default_graph()", "<a name=\"getting-the-trump-data\"></a>\nGetting the Trump Data\nNow let's load the text. This is included in the repo or can be downloaded from:", "with open('trump.txt', 'r') as fp:\n txt = fp.read()", "Let's take a look at what's going on in here:", "txt[:100]", "<a name=\"basic-text-analysis\"></a>\nBasic Text Analysis\nWe can do some basic data analysis to get a sense of what kind of vocabulary we're working with. It's really important to look at your data in as many ways as possible. This helps ensure there isn't anything unexpected going on. 
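As a quick aside before counting by hand, the standard library's collections.Counter does the same split-and-count in one step; here is a sketch on a toy string (the cells below build the identical counts manually on the full corpus, which is worth seeing once):

```python
from collections import Counter

sample = "the man said the word the most"

# Counter builds {word: occurrences} directly from the split tokens
counts = Counter(sample.split(' '))
print(counts.most_common(1))  # the single most frequent word and its count
```

Counter.most_common(n) also replaces the manual sorted(..., key=counts.get, reverse=True) step used below.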
Let's find every unique word he uses:", "words = set(txt.split(' '))\n\nwords", "Now let's count their occurrences:", "counts = {word_i: 0 for word_i in words}\nfor word_i in txt.split(' '):\n counts[word_i] += 1\ncounts", "We can sort this like so:", "[(word_i, counts[word_i]) for word_i in sorted(counts, key=counts.get, reverse=True)]", "As we should expect, \"the\" is the most common word, as it is in the English language: https://en.wikipedia.org/wiki/Most_common_words_in_English\n<a name=\"loading-the-pre-trained-trump-model\"></a>\nLoading the Pre-trained Trump Model\nLet's load the pretrained model. Rather than provide a tfmodel export, I've provided the checkpoint so you can also experiment with training it more if you wish. We'll rebuild the graph using the charrnn module in the libs directory:", "from libs import charrnn", "Let's get the checkpoint and build the model then restore the variables from the checkpoint. The only parameters of consequence are n_layers and n_cells which define the total size and layout of the model. The rest are flexible. 
We'll set the batch_size and sequence_length to 1, meaning we can feed in a single character at a time only, and get back 1 character denoting the very next character's prediction.", "ckpt_name = 'trump.ckpt'\ng = tf.Graph()\nn_layers = 3\nn_cells = 512\nwith tf.Session(graph=g) as sess:\n model = charrnn.build_model(txt=txt,\n batch_size=1,\n sequence_length=1,\n n_layers=n_layers,\n n_cells=n_cells,\n gradient_clip=10.0)\n saver = tf.train.Saver()\n if os.path.exists(ckpt_name):\n saver.restore(sess, ckpt_name)\n print(\"Model restored.\")", "Let's now take a look at the model:", "nb_utils.show_graph(g.as_graph_def())\n\nn_iterations = 100", "<a name=\"inference-keeping-track-of-the-state\"></a>\nInference: Keeping Track of the State\nNow recall from Part 4 when we created our LSTM network, we had an initial_state variable which would set the LSTM's c and h state vectors, as well as the final output state which was the output of the c and h state vectors after having passed through the network. When we input to the network some letter, say 'n', we can set the initial_state to zeros, but then after having input the letter n, we'll have as output a new state vector for c and h. On the next letter, we'll then want to set the initial_state to this new state, and set the input to the previous letter's output. 
That is how we ensure the network keeps track of time and knows what has happened in the past, and let it continually generate.", "curr_states = None\ng = tf.Graph()\nwith tf.Session(graph=g) as sess:\n model = charrnn.build_model(txt=txt,\n batch_size=1,\n sequence_length=1,\n n_layers=n_layers,\n n_cells=n_cells,\n gradient_clip=10.0)\n saver = tf.train.Saver()\n if os.path.exists(ckpt_name):\n saver.restore(sess, ckpt_name)\n print(\"Model restored.\")\n \n # Get every tf.Tensor for the initial state\n init_states = []\n for s_i in model['initial_state']:\n init_states.append(s_i.c)\n init_states.append(s_i.h)\n \n # Similarly, for every state after inference\n final_states = []\n for s_i in model['final_state']:\n final_states.append(s_i.c)\n final_states.append(s_i.h)\n\n # Let's start with the letter 't' and see what comes out:\n synth = [[encoder[' ']]]\n for i in range(n_iterations):\n\n # We'll create a feed_dict parameter which includes what to\n # input to the network, model['X'], as well as setting\n # dropout to 1.0, meaning no dropout.\n feed_dict = {model['X']: [synth[-1]],\n model['keep_prob']: 1.0}\n \n # Now we'll check if we currently have a state as a result\n # of a previous inference, and if so, add to our feed_dict\n # parameter the mapping of the init_state to the previous\n # output state stored in \"curr_states\".\n if curr_states:\n feed_dict.update(\n {init_state_i: curr_state_i\n for (init_state_i, curr_state_i) in\n zip(init_states, curr_states)})\n \n # Now we can infer and see what letter we get\n p = sess.run(model['probs'], feed_dict=feed_dict)[0]\n \n # And make sure we also keep track of the new state\n curr_states = sess.run(final_states, feed_dict=feed_dict)\n \n # Find the most likely character\n p = np.argmax(p)\n \n # Append to string\n synth.append([p])\n \n # Print out the decoded letter\n print(model['decoder'][p], end='')\n sys.stdout.flush()", "<a name=\"probabilistic-sampling\"></a>\nProbabilistic Sampling\nRun the 
above cell a couple times. What you should find is that it is deterministic. We always pick the most likely character. But we can do something else which will make things less deterministic and a bit more interesting: we can sample from our probabilistic measure from our softmax layer. This means if we have the letter 'a' as 0.4, and the letter 'o' as 0.2, we'll have a 40% chance of picking the letter 'a', and 20% chance of picking the letter 'o', rather than simply always picking the letter 'a' since it is the most probable.", "curr_states = None\ng = tf.Graph()\nwith tf.Session(graph=g) as sess:\n model = charrnn.build_model(txt=txt,\n batch_size=1,\n sequence_length=1,\n n_layers=n_layers,\n n_cells=n_cells,\n gradient_clip=10.0)\n saver = tf.train.Saver()\n if os.path.exists(ckpt_name):\n saver.restore(sess, ckpt_name)\n print(\"Model restored.\")\n \n # Get every tf.Tensor for the initial state\n init_states = []\n for s_i in model['initial_state']:\n init_states.append(s_i.c)\n init_states.append(s_i.h)\n \n # Similarly, for every state after inference\n final_states = []\n for s_i in model['final_state']:\n final_states.append(s_i.c)\n final_states.append(s_i.h)\n\n # Let's start with the letter 't' and see what comes out:\n synth = [[encoder[' ']]]\n for i in range(n_iterations):\n\n # We'll create a feed_dict parameter which includes what to\n # input to the network, model['X'], as well as setting\n # dropout to 1.0, meaning no dropout.\n feed_dict = {model['X']: [synth[-1]],\n model['keep_prob']: 1.0}\n \n # Now we'll check if we currently have a state as a result\n # of a previous inference, and if so, add to our feed_dict\n # parameter the mapping of the init_state to the previous\n # output state stored in \"curr_states\".\n if curr_states:\n feed_dict.update(\n {init_state_i: curr_state_i\n for (init_state_i, curr_state_i) in\n zip(init_states, curr_states)})\n \n # Now we can infer and see what letter we get\n p = sess.run(model['probs'], 
feed_dict=feed_dict)[0]\n \n # And make sure we also keep track of the new state\n curr_states = sess.run(final_states, feed_dict=feed_dict)\n \n # Now instead of finding the most likely character,\n # we'll sample with the probabilities of each letter\n p = p.astype(np.float64)\n p = np.random.multinomial(1, p.ravel() / p.sum())\n p = np.argmax(p)\n \n # Append to string\n synth.append([p])\n \n # Print out the decoded letter\n print(model['decoder'][p], end='')\n sys.stdout.flush()", "<a name=\"inference-temperature\"></a>\nInference: Temperature\nWhen performing probabilistic sampling, we can also use a parameter known as temperature which comes from simulated annealing. The basic idea is that as the temperature is high and very hot, we have a lot more free energy to use to jump around more, and as we cool down, we have less energy and then become more deterministic. We can use temperature by scaling our log probabilities like so:", "temperature = 0.5\ncurr_states = None\ng = tf.Graph()\nwith tf.Session(graph=g) as sess:\n model = charrnn.build_model(txt=txt,\n batch_size=1,\n sequence_length=1,\n n_layers=n_layers,\n n_cells=n_cells,\n gradient_clip=10.0)\n saver = tf.train.Saver()\n if os.path.exists(ckpt_name):\n saver.restore(sess, ckpt_name)\n print(\"Model restored.\")\n \n # Get every tf.Tensor for the initial state\n init_states = []\n for s_i in model['initial_state']:\n init_states.append(s_i.c)\n init_states.append(s_i.h)\n \n # Similarly, for every state after inference\n final_states = []\n for s_i in model['final_state']:\n final_states.append(s_i.c)\n final_states.append(s_i.h)\n\n # Let's start with the letter 't' and see what comes out:\n synth = [[encoder[' ']]]\n for i in range(n_iterations):\n\n # We'll create a feed_dict parameter which includes what to\n # input to the network, model['X'], as well as setting\n # dropout to 1.0, meaning no dropout.\n feed_dict = {model['X']: [synth[-1]],\n model['keep_prob']: 1.0}\n \n # Now we'll check if 
we currently have a state as a result\n # of a previous inference, and if so, add to our feed_dict\n # parameter the mapping of the init_state to the previous\n # output state stored in \"curr_states\".\n if curr_states:\n feed_dict.update(\n {init_state_i: curr_state_i\n for (init_state_i, curr_state_i) in\n zip(init_states, curr_states)})\n \n # Now we can infer and see what letter we get\n p = sess.run(model['probs'], feed_dict=feed_dict)[0]\n \n # And make sure we also keep track of the new state\n curr_states = sess.run(final_states, feed_dict=feed_dict)\n \n # Now instead of finding the most likely character,\n # we'll sample with the probabilities of each letter\n p = p.astype(np.float64)\n p = np.log(p) / temperature\n p = np.exp(p) / np.sum(np.exp(p))\n p = np.random.multinomial(1, p.ravel() / p.sum())\n p = np.argmax(p)\n \n # Append to string\n synth.append([p])\n \n # Print out the decoded letter\n print(model['decoder'][p], end='')\n sys.stdout.flush()", "<a name=\"inference-priming\"></a>\nInference: Priming\nLet's now work on \"priming\" the model with some text, and see what kind of state it is in and leave it to synthesize from there. 
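Before moving on, note that the temperature trick from the previous section is just a rescaling of log-probabilities before sampling. A plain-NumPy sketch with made-up probabilities (illustrative only, not the model's actual distribution) shows the effect at both extremes:

```python
import numpy as np

def apply_temperature(p, temperature):
    """Rescale a probability vector: temperature < 1 sharpens it
    toward the argmax; temperature > 1 flattens it toward uniform."""
    p = np.asarray(p, dtype=np.float64)
    logp = np.log(p) / temperature
    p = np.exp(logp)
    return p / p.sum()  # renormalize so it is still a distribution

p = np.array([0.5, 0.3, 0.2])       # toy next-character distribution
cool = apply_temperature(p, 0.1)    # nearly deterministic
warm = apply_temperature(p, 10.0)   # nearly uniform
print(cool.round(3), warm.round(3))
```

At very low temperature, sampling behaves like argmax; at very high temperature, every character becomes roughly equally likely.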
We'll do more or less what we did before, but feed in our own text instead of the last letter of the synthesis from the model.", "prime = \"obama\"\ntemperature = 1.0\ncurr_states = None\nn_iterations = 500\ng = tf.Graph()\nwith tf.Session(graph=g) as sess:\n model = charrnn.build_model(txt=txt,\n batch_size=1,\n sequence_length=1,\n n_layers=n_layers,\n n_cells=n_cells,\n gradient_clip=10.0)\n saver = tf.train.Saver()\n if os.path.exists(ckpt_name):\n saver.restore(sess, ckpt_name)\n print(\"Model restored.\")\n \n # Get every tf.Tensor for the initial state\n init_states = []\n for s_i in model['initial_state']:\n init_states.append(s_i.c)\n init_states.append(s_i.h)\n \n # Similarly, for every state after inference\n final_states = []\n for s_i in model['final_state']:\n final_states.append(s_i.c)\n final_states.append(s_i.h)\n\n # Now we'll keep track of the state as we feed it one\n # letter at a time.\n curr_states = None\n for ch in prime:\n feed_dict = {model['X']: [[model['encoder'][ch]]],\n model['keep_prob']: 1.0}\n if curr_states:\n feed_dict.update(\n {init_state_i: curr_state_i\n for (init_state_i, curr_state_i) in\n zip(init_states, curr_states)})\n \n # Now we can infer and see what letter we get\n p = sess.run(model['probs'], feed_dict=feed_dict)[0]\n p = p.astype(np.float64)\n p = np.log(p) / temperature\n p = np.exp(p) / np.sum(np.exp(p))\n p = np.random.multinomial(1, p.ravel() / p.sum())\n p = np.argmax(p)\n \n # And make sure we also keep track of the new state\n curr_states = sess.run(final_states, feed_dict=feed_dict)\n \n # Now we're ready to do what we were doing before but with the\n # last predicted output stored in `p`, and the current state of\n # the model.\n synth = [[p]]\n print(prime + model['decoder'][p], end='')\n for i in range(n_iterations):\n\n # Input to the network\n feed_dict = {model['X']: [synth[-1]],\n model['keep_prob']: 1.0}\n \n # Also feed our current state\n feed_dict.update(\n {init_state_i: curr_state_i\n for 
(init_state_i, curr_state_i) in\n zip(init_states, curr_states)})\n \n # Inference\n p = sess.run(model['probs'], feed_dict=feed_dict)[0]\n \n # Keep track of the new state\n curr_states = sess.run(final_states, feed_dict=feed_dict)\n \n # Sample\n p = p.astype(np.float64)\n p = np.log(p) / temperature\n p = np.exp(p) / np.sum(np.exp(p))\n p = np.random.multinomial(1, p.ravel() / p.sum())\n p = np.argmax(p)\n \n # Append to string\n synth.append([p])\n \n # Print out the decoded letter\n print(model['decoder'][p], end='')\n sys.stdout.flush()", "<a name=\"assignment-submission\"></a>\nAssignment Submission\nAfter you've completed both notebooks, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as:\nsession-5/ \n session-5-part-1.ipynb \n session-5-part-2.ipynb \n vaegan.gif\n\nYou'll then submit this zip file for your third assignment on Kadenze for \"Assignment 5: Generative Adversarial Networks and Recurrent Neural Networks\"! If you have any questions, remember to reach out on the forums and connect with your peers or with me.\nTo get assessed, you'll need to be a premium student! This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the #CADL community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info\nAlso, if you share any of the GIFs on Facebook/Twitter/Instagram/etc..., be sure to use the #CADL hashtag so that other students can find your work!", "utils.build_submission('session-5.zip',\n ('vaegan.gif',\n 'session-5-part-1.ipynb',\n 'session-5-part-2.ipynb'))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Cyb3rWard0g/HELK
docker/helk-jupyter/notebooks/tutorials/02-intro-to-numpy-arrays.ipynb
gpl-3.0
[ "Introduction to Python NumPy Arrays\n\nGoals:\n\nLearn the basics of Python Numpy Arrays\n\nReferences:\n* http://www.numpy.org/\n* https://docs.scipy.org/doc/numpy/user/quickstart.html\n* https://www.datacamp.com/community/tutorials/python-numpy-tutorial\n* https://blog.thedataincubator.com/2018/02/numpy-and-pandas/\n* https://medium.com/@ericvanrees/pandas-series-objects-and-numpy-arrays-15dfe05919d7\n* https://www.machinelearningplus.com/python/numpy-tutorial-part1-array-python-examples/\n* https://towardsdatascience.com/a-hitchhiker-guide-to-python-numpy-arrays-9358de570121\n* McKinney, Wes. Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython. O'Reilly Media. Kindle Edition\nWhat is NumPy?\n\nNumPy is short for \"Numerical Python\" and it is a fundamental python package for scientific computing.\nIt uses a high-performance data structure known as the n-dimensional array or ndarray, a multi-dimensional array object, for efficient computation of arrays and matrices.\n\nWhat is an Array?\n\nPython arrays are data structures that store data similar to a list, except the type of objects stored in them is constrained.\nElements of an array are all of the same type and indexed by a tuple of positive integers.\nThe python module array allows you to specify the type of array at object creation time by using a type code, which is a single character. 
You can read more about each type code here: https://docs.python.org/3/library/array.html?highlight=array#module-array", "import array\n\narray_one = array.array('i',[1,2,3,4])\ntype(array_one)\n\ntype(array_one[0])", "What is a NumPy N-Dimensional Array (ndarray)?\n\nIt is an efficient multidimensional array providing fast array-oriented arithmetic operations.\nAn ndarray as any other array, it is a container for homogeneous data (Elements of the same type)\nIn NumPy, data in an ndarray is simply referred to as an array.\nAs with other container objects in Python, the contents of an ndarray can be accessed and modified by indexing or slicing operations.\nFor numerical data, NumPy arrays are more efficient for storing and manipulating data than the other built-in Python data structures.", "import numpy as np\nnp.__version__\n\nlist_one = [1,2,3,4,5]\n\nnumpy_array = np.array(list_one)\ntype(numpy_array)\n\nnumpy_array", "Advantages of NumPy Arrays\nVectorized Operations\n\nThe key difference between an array and a list is, arrays are designed to handle vectorized operations while a python list is not.\nNumPy operations perform complex computations on entire arrays without the need for Python for loops.\nIn other words, if you apply a function to an array, it is performed on every item in the array, rather than on the whole array object.\nIn a python list, you will have to perform a loop over the elements of the list.", "list_two = [1,2,3,4,5]\n# The following will throw an error:\nlist_two + 2", "Performing a loop to add 2 to every integer in the list", "for index, item in enumerate(list_two):\n list_two[index] = item + 2\nlist_two", "With a NumPy array, you can do the same simply by doing the following:", "numpy_array\n\nnumpy_array + 2", "Any arithmetic operations between equal-size arrays applies the operation element-wise:", "numpy_array_one = np.array([1,2])\nnumpy_array_two = np.array([4,6])\n\nnumpy_array_one + numpy_array_two\n\nnumpy_array_one > 
numpy_array_two", "Memory.\n\nNumPy internally stores data in a contiguous block of memory, independent of other built-in Python objects.\nNumPy arrays takes significantly less amount of memory as compared to python lists.", "import numpy as np\nimport sys\n\npython_list = [1,2,3,4,5,6]\npython_list_size = sys.getsizeof(1) * len(python_list)\npython_list_size\n\npython_numpy_array = np.array([1,2,3,4,5,6])\npython_numpy_array_size = python_numpy_array.itemsize * python_numpy_array.size\npython_numpy_array_size", "Basic Indexing and Slicing\nOne Dimensional Array\n\nWhen it comes down to slicing and indexing, one-dimensional arrays are the same as python lists", "numpy_array\n\nnumpy_array[1]\n\nnumpy_array[1:4]", "You can slice the array and pass it to a variable. Remember that variables just reference objects.\nAny change that you make to the array slice, it will be technnically done on the original array object. Once again, variables just reference objects.", "numpy_array_slice = numpy_array[1:4]\nnumpy_array_slice\n\nnumpy_array_slice[1] = 10\nnumpy_array_slice\n\nnumpy_array", "Two-Dimensional Array\n\nIn a two-dimensional array, elements of the array are one-dimensional arrays", "numpy_two_dimensional_array = np.array([[1,2,3],[4,5,6],[7,8,9]])\n\nnumpy_two_dimensional_array\n\nnumpy_two_dimensional_array[1]", "Instead of looping to the one-dimensional arrays to access specific elements, you can just pass a second index value", "numpy_two_dimensional_array[1][2]\n\nnumpy_two_dimensional_array[1,2]", "Slicing two-dimensional arrays is a little different than one-dimensional ones.", "numpy_two_dimensional_array\n\nnumpy_two_dimensional_array[:1]\n\nnumpy_two_dimensional_array[:2]\n\nnumpy_two_dimensional_array[:3]\n\nnumpy_two_dimensional_array[:2,1:]\n\nnumpy_two_dimensional_array[:2,:1]\n\nnumpy_two_dimensional_array[2][1:]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/official/migration/UJ8 Vertex SDK AutoML Text Sentiment Analysis.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex AI: Vertex AI Migration: AutoML Text Sentiment Analysis\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ8%20Vertex%20SDK%20AutoML%20Text%20Sentiment%20Analysis.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ8%20Vertex%20SDK%20AutoML%20Text%20Sentiment%20Analysis.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n<br/><br/><br/>\nDataset\nThe dataset used for this tutorial is the Crowdflower Claritin-Twitter dataset from data.world Datasets. 
The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements. You need the following:\n\nThe Cloud Storage SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:\n\n\nInstall and initialize the SDK.\n\n\nInstall Python 3.\n\n\nInstall virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.\n\n\nTo install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.\n\n\nTo launch Jupyter, run jupyter notebook on the command-line in a terminal shell.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nInstallation\nInstall the latest version of Vertex SDK for Python.", "import os\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG", "Install the latest GA version of google-cloud-storage library as well.", "! pip3 install -U google-cloud-storage $USER_FLAG\n\nif os.getenv(\"IS_TESTING\"):\n ! 
pip3 install --upgrade tensorflow $USER_FLAG", "Restart the kernel\nOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nGPU runtime\nThis tutorial does not require a GPU runtime.\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.\n\n\nIf you are running this notebook locally, you will need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.", "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex AI. 
Not all regions provide support for all Vertex AI services.\nLearn more about Vertex AI regions", "REGION = \"us-central1\" # @param {type: \"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your Google Cloud account\nIf you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. 
This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nimport os\nimport sys\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.", "BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! 
gsutil ls -al $BUCKET_NAME", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants", "import google.cloud.aiplatform as aip", "Initialize Vertex SDK for Python\nInitialize the Vertex SDK for Python for your project and corresponding bucket.", "aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)", "Location of Cloud Storage training data.\nNow set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.", "IMPORT_FILE = \"gs://cloud-samples-data/language/claritin.csv\"\nSENTIMENT_MAX = 4", "Quick peek at your data\nThis tutorial uses a version of the Crowdflower Claritin-Twitter dataset that is stored in a public Cloud Storage bucket, using a CSV index file.\nStart by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.", "if \"IMPORT_FILES\" in globals():\n FILE = IMPORT_FILES[0]\nelse:\n FILE = IMPORT_FILE\n\ncount = ! gsutil cat $FILE | wc -l\nprint(\"Number of Examples\", int(count[0]))\n\nprint(\"First 10 rows\")\n! 
gsutil cat $FILE | head", "Create a dataset\ndatasets.create-dataset-api\nCreate the Dataset\nNext, create the Dataset resource using the create method for the TextDataset class, which takes the following parameters:\n\ndisplay_name: The human readable name for the Dataset resource.\ngcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.\nimport_schema_uri: The data labeling schema for the data items.\n\nThis operation may take several minutes.", "dataset = aip.TextDataset.create(\n display_name=\"Crowdflower Claritin-Twitter\" + \"_\" + TIMESTAMP,\n gcs_source=[IMPORT_FILE],\n import_schema_uri=aip.schema.dataset.ioformat.text.sentiment,\n)\n\nprint(dataset.resource_name)", "Example Output:\nINFO:google.cloud.aiplatform.datasets.dataset:Creating TextDataset\nINFO:google.cloud.aiplatform.datasets.dataset:Create TextDataset backing LRO: projects/759209241365/locations/us-central1/datasets/3704325042721521664/operations/3193181053544038400\nINFO:google.cloud.aiplatform.datasets.dataset:TextDataset created. Resource name: projects/759209241365/locations/us-central1/datasets/3704325042721521664\nINFO:google.cloud.aiplatform.datasets.dataset:To use this TextDataset in another session:\nINFO:google.cloud.aiplatform.datasets.dataset:ds = aiplatform.TextDataset('projects/759209241365/locations/us-central1/datasets/3704325042721521664')\nINFO:google.cloud.aiplatform.datasets.dataset:Importing TextDataset data: projects/759209241365/locations/us-central1/datasets/3704325042721521664\nINFO:google.cloud.aiplatform.datasets.dataset:Import TextDataset data backing LRO: projects/759209241365/locations/us-central1/datasets/3704325042721521664/operations/5152246891450204160\nINFO:google.cloud.aiplatform.datasets.dataset:TextDataset data imported. 
Resource name: projects/759209241365/locations/us-central1/datasets/3704325042721521664\nprojects/759209241365/locations/us-central1/datasets/3704325042721521664\n\nTrain a model\ntraining.automl-api\nCreate and run training pipeline\nTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.\nCreate training pipeline\nAn AutoML training pipeline is created with the AutoMLTextTrainingJob class, with the following parameters:\n\ndisplay_name: The human readable name for the TrainingJob resource.\nprediction_type: The type of task to train the model for.\nclassification: A text classification model.\nsentiment: A text sentiment analysis model.\nextraction: A text entity extraction model.\nmulti_label: If a classification task, whether single (False) or multi-labeled (True).\nsentiment_max: If a sentiment analysis task, the maximum sentiment value.\n\nThe instantiated object is the DAG (directed acyclic graph) for the training pipeline.", "dag = aip.AutoMLTextTrainingJob(\n    display_name=\"claritin_\" + TIMESTAMP,\n    prediction_type=\"sentiment\",\n    sentiment_max=SENTIMENT_MAX,\n)\n\nprint(dag)", "Example output:\n&lt;google.cloud.aiplatform.training_jobs.AutoMLTextTrainingJob object at 0x7fc3b6c90f10&gt;\n\nRun the training pipeline\nNext, you run the DAG to start the training job by invoking the method run, with the following parameters:\n\ndataset: The Dataset resource to train the model.\nmodel_display_name: The human readable name for the trained model.\ntraining_fraction_split: The percentage of the dataset to use for training.\ntest_fraction_split: The percentage of the dataset to use for test (holdout data).\nvalidation_fraction_split: The percentage of the dataset to use for validation.\n\nWhen it completes, the run method returns the Model resource.\nThe execution of the training pipeline will take up to 20 minutes.", "model = dag.run(\n    dataset=dataset,\n    model_display_name=\"claritin_\" + TIMESTAMP,\n    
training_fraction_split=0.8,\n validation_fraction_split=0.1,\n test_fraction_split=0.1,\n)", "Example output:\nINFO:google.cloud.aiplatform.training_jobs:View Training:\nhttps://console.cloud.google.com/ai/platform/locations/us-central1/training/8859754745456230400?project=759209241365\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400 current state:\nPipelineState.PIPELINE_STATE_RUNNING\n...\nINFO:google.cloud.aiplatform.training_jobs:AutoMLTextTrainingJob run completed. Resource name: projects/759209241365/locations/us-central1/trainingPipelines/8859754745456230400\nINFO:google.cloud.aiplatform.training_jobs:Model available at projects/759209241365/locations/us-central1/models/6389525951797002240\n\nEvaluate the model\nprojects.locations.models.evaluations.list\nReview model evaluation scores\nAfter your model has finished training, you can review the evaluation scores for it.\nFirst, you need to get a reference to the new model. 
As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.", "# Get model resource ID\nmodels = aip.Model.list(filter=\"display_name=claritin_\" + TIMESTAMP)\n\n# Get a reference to the Model Service client\nclient_options = {\"api_endpoint\": f\"{REGION}-aiplatform.googleapis.com\"}\nmodel_service_client = aip.gapic.ModelServiceClient(client_options=client_options)\n\nmodel_evaluations = model_service_client.list_model_evaluations(\n parent=models[0].resource_name\n)\nmodel_evaluation = list(model_evaluations)[0]\nprint(model_evaluation)", "Example output:\nname: \"projects/759209241365/locations/us-central1/models/623915674158235648/evaluations/4280507618583117824\"\nmetrics_schema_uri: \"gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml\"\nmetrics {\n struct_value {\n fields {\n key: \"auPrc\"\n value {\n number_value: 0.9891107\n }\n }\n fields {\n key: \"confidenceMetrics\"\n value {\n list_value {\n values {\n struct_value {\n fields {\n key: \"precision\"\n value {\n number_value: 0.2\n }\n }\n fields {\n key: \"recall\"\n value {\n number_value: 1.0\n }\n }\n }\n }\n\nMake batch predictions\npredictions.batch-prediction\nGet test item(s)\nNow do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as a test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.", "test_items = ! 
gsutil cat $IMPORT_FILE | head -n2\n\nif len(test_items[0]) == 4:\n _, test_item_1, test_label_1, _ = str(test_items[0]).split(\",\")\n _, test_item_2, test_label_2, _ = str(test_items[1]).split(\",\")\nelse:\n test_item_1, test_label_1, _ = str(test_items[0]).split(\",\")\n test_item_2, test_label_2, _ = str(test_items[1]).split(\",\")\n\n\nprint(test_item_1, test_label_1)\nprint(test_item_2, test_label_2)", "Make the batch input file\nNow make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can only be in JSONL format. For JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:\n\ncontent: The Cloud Storage path to the file with the text item.\nmime_type: The content type. In our example, it is a text file.\n\nFor example:\n {'content': '[your-bucket]/file1.txt', 'mime_type': 'text'}", "import json\n\nimport tensorflow as tf\n\ngcs_test_item_1 = BUCKET_NAME + \"/test1.txt\"\nwith tf.io.gfile.GFile(gcs_test_item_1, \"w\") as f:\n f.write(test_item_1 + \"\\n\")\ngcs_test_item_2 = BUCKET_NAME + \"/test2.txt\"\nwith tf.io.gfile.GFile(gcs_test_item_2, \"w\") as f:\n f.write(test_item_2 + \"\\n\")\n\ngcs_input_uri = BUCKET_NAME + \"/test.jsonl\"\nwith tf.io.gfile.GFile(gcs_input_uri, \"w\") as f:\n data = {\"content\": gcs_test_item_1, \"mime_type\": \"text/plain\"}\n f.write(json.dumps(data) + \"\\n\")\n data = {\"content\": gcs_test_item_2, \"mime_type\": \"text/plain\"}\n f.write(json.dumps(data) + \"\\n\")\n\nprint(gcs_input_uri)\n! 
gsutil cat $gcs_input_uri", "Make the batch prediction request\nNow that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:\n\njob_display_name: The human readable name for the batch prediction job.\ngcs_source: A list of one or more batch request input files.\ngcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.\nsync: If set to True, the call will block while waiting for the asynchronous batch job to complete.", "batch_predict_job = model.batch_predict(\n    job_display_name=\"claritin_\" + TIMESTAMP,\n    gcs_source=gcs_input_uri,\n    gcs_destination_prefix=BUCKET_NAME,\n    sync=False,\n)\n\nprint(batch_predict_job)", "Example output:\nINFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob\n&lt;google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f806a6112d0&gt; is waiting for upstream dependencies to complete.\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296\nINFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:\nINFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296')\nINFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:\nhttps://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/5110965452507447296?project=759209241365\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296 current state:\nJobState.JOB_STATE_RUNNING\n\nWait for completion of batch prediction job\nNext, wait for the batch job to complete. 
Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.", "batch_predict_job.wait()", "Example Output:\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328\nINFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:\nINFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328')\nINFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:\nhttps://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/181835033978339328?project=759209241365\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 
current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_SUCCEEDED\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328\n\nGet the predictions\nNext, get the results from the completed batch prediction job.\nThe results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:\n\ncontent: The prediction request.\nprediction: The prediction response.\nsentiment: The sentiment.", "import json\n\nimport tensorflow as tf\n\nbp_iter_outputs = batch_predict_job.iter_outputs()\n\nprediction_results = list()\nfor blob in bp_iter_outputs:\n if blob.name.split(\"/\")[-1].startswith(\"prediction\"):\n prediction_results.append(blob.name)\n\ntags = list()\nfor prediction_result in prediction_results:\n gfile_name = f\"gs://{bp_iter_outputs.bucket.name}/{prediction_result}\"\n with tf.io.gfile.GFile(name=gfile_name, mode=\"r\") as gfile:\n for line in gfile.readlines():\n line = json.loads(line)\n print(line)\n break", "Example Output:\n{'instance': {'content': 'gs://andy-1234-221921aip-20210811220920/test2.txt', 'mimeType': 'text/plain'}, 'prediction': {'sentiment': 3}}\n\nMake online predictions\npredictions.deploy-model-api\nDeploy the model\nNext, deploy your model for online prediction. 
To deploy the model, you invoke the deploy method.", "endpoint = model.deploy()", "Example output:\nINFO:google.cloud.aiplatform.models:Creating Endpoint\nINFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352\nINFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472\nINFO:google.cloud.aiplatform.models:To use this Endpoint in another session:\nINFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/759209241365/locations/us-central1/endpoints/4867177336350441472')\nINFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/759209241365/locations/us-central1/endpoints/4867177336350441472\nINFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/1691336130932244480\nINFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472\n\npredictions.online-prediction-automl\nGet test item\nYou will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.", "test_item = ! 
gsutil cat $IMPORT_FILE | head -n1\nif len(test_item[0]) == 3:\n    _, test_item, test_label, max = str(test_item[0]).split(\",\")\nelse:\n    test_item, test_label, max = str(test_item[0]).split(\",\")\n\nprint(test_item, test_label)", "Make the prediction\nNow that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.\nRequest\nThe format of each instance is:\n    { 'content': text_string }\n\nSince the predict() method can take multiple items (instances), send your single test item as a list of one test item.\nResponse\nThe response from the predict() call is a Python dictionary with the following entries:\n\nids: The internally assigned unique identifiers for each prediction request.\nsentiment: The sentiment value.\ndeployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.", "instances_list = [{\"content\": test_item}]\n\nprediction = endpoint.predict(instances_list)\nprint(prediction)", "Example output:\nPrediction(predictions=[{'sentiment': 2.0}], deployed_model_id='311601595311718400', explanations=None)\n\nUndeploy the model\nWhen you are done doing predictions, you undeploy the model from the Endpoint resource. 
This deprovisions all compute resources and ends billing for the deployed model.", "endpoint.undeploy_all()", "Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nAutoML Training Job\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket", "delete_all = True\n\nif delete_all:\n    # Delete the dataset using the Vertex dataset object\n    try:\n        if \"dataset\" in globals():\n            dataset.delete()\n    except Exception as e:\n        print(e)\n\n    # Delete the model using the Vertex model object\n    try:\n        if \"model\" in globals():\n            model.delete()\n    except Exception as e:\n        print(e)\n\n    # Delete the endpoint using the Vertex endpoint object\n    try:\n        if \"endpoint\" in globals():\n            endpoint.delete()\n    except Exception as e:\n        print(e)\n\n    # Delete the AutoML or Pipeline training job\n    try:\n        if \"dag\" in globals():\n            dag.delete()\n    except Exception as e:\n        print(e)\n\n    # Delete the custom training job\n    try:\n        if \"job\" in globals():\n            job.delete()\n    except Exception as e:\n        print(e)\n\n    # Delete the batch prediction job using the Vertex batch prediction object\n    try:\n        if \"batch_predict_job\" in globals():\n            batch_predict_job.delete()\n    except Exception as e:\n        print(e)\n\n    # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object\n    try:\n        if \"hpt_job\" in globals():\n            hpt_job.delete()\n    except Exception as e:\n        print(e)\n\n    if \"BUCKET_NAME\" in globals():\n        ! gsutil rm -r $BUCKET_NAME" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
heroxbd/SHTOOLS
examples/notebooks/tutorial_4.ipynb
bsd-3-clause
[ "Spherical Harmonic Normalizations and Parseval's theorem", "%matplotlib inline\nfrom __future__ import print_function # only necessary if using Python 2.x\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom pyshtools.shclasses import SHCoeffs, SHGrid, SHWindow", "The energy and variance of a single spherical harmonic\nWe demonstrate in this paragraph two properties of the 4-pi normalized spherical harmonics. We initialize the coefficient class with a single non-zero coefficient. To make things simple, we fix the coefficient value, and therefore also its energy, to 1.\nWe then compute the normalization N, which is by default fixed to 4pi in shtools. The variance of the spherical harmonic is the integral of its squared amplitude (norm) divided by the surface area of the sphere (4pi), and in the 4pi normalization it should equal 1.\n$$N = \int_\Omega Y^2_{lm}(\mathbf{r})~d\Omega$$\n$$Var(Y_{lm}(\mathbf{r})) = \frac{N}{4 \pi} = 1$$\nThe integrals are easy to compute using the DH2 grid, which has equal spacing in latitude and longitude.", "# Initialize the model with a single harmonic set to 1.\nlmax = 99\ncoeffs = SHCoeffs.from_zeros(lmax)\ncoeffs.set_coeffs(values=[1], ls=[5], ms=[2])\npower = coeffs.get_powerperdegree()\nprint('total coefficient power is ', power.sum())\n\ngrid = coeffs.expand('DH2')\ngrid.plot_rawdata()\n\n# Generate an empty spatial grid\nlats = grid.get_lats()\nlons = grid.get_lons()\nlatgrid, longrid = np.meshgrid(lats, lons, indexing='ij')\n\n# Next, compute the weights used to integrate the function.\n# This will only be approximate.\nweights = np.cos(np.radians(latgrid))\ndlat = np.radians(lats[0] - lats[1])\ndlon = np.radians(lons[1] - lons[0])\nsurface = weights.sum() * dlat * dlon\nprint('correct surface (4 pi) =', 4 * np.pi)\nprint('computed surface =', surface)\n\n# Finally, compute the model variance\nsh_energy = np.sum(grid.data**2 * weights) * dlat * dlon\n\nprint('computed spherical harmonics energy =', 
sh_energy)\nprint('variance = spherical harmonics energy / surface =', sh_energy / surface)", "Parseval's theorem in the case of a random model\nWe have seen in the previous example that a single 4pi normalized spherical harmonic has unit variance. Because spherical harmonics are orthogonal functions on the sphere, the total variance of a model is the sum of the variances of its 4pi-normalized spherical harmonic coefficients.\nIf the coefficients of all spherical harmonics are independent, the distribution will become Gaussian, as predicted by the central limit theorem, or it will be perfectly Gaussian if the individual coefficients were Gaussian in the first place.\nWe illustrate this behaviour in the following short code snippet.", "nl = 200\na = 30\nls = np.arange(nl, dtype=float)\npower = 1. / (1. + (ls / a) ** 2) ** 1\ncoeffs = SHCoeffs.from_random(power)\npower_random = coeffs.get_powerperdegree()\ntotal_power = power_random.sum()\nprint('total coefficient power =', total_power)\n\ngrid = coeffs.expand('DH2')\ngrid.plot_rawdata()\n\n# Generate a spatial grid.\nlats = grid.get_lats()\nlons = grid.get_lons()\nlatgrid, longrid = np.meshgrid(lats, lons, indexing='ij')\n\n# First, compute the spherical surface element to weight\n# the different grid points when constructing a histogram.\nweights = np.cos(np.radians(latgrid))\ndlat = np.radians(lats[0] - lats[1])\ndlon = np.radians(lons[1] - lons[0])\nsurface = weights.sum() * dlat * dlon\n\n# Compute a histogram of the gridded data.\nbins = np.linspace(-50, 50, 30)\ncenter = 0.5 * (bins[:-1] + bins[1:])\ndbin = center[1] - center[0]\nhist, bins = np.histogram(grid.data, bins=bins, weights=weights, density=True)\n\n# Compute the expected distribution.\nnormal_distribution = np.exp( - center ** 2 / (2 * total_power))\nnormal_distribution /= dbin * normal_distribution.sum()\n\n# Plot both distributions.\nfig, ax = plt.subplots(1, 1)\nax.plot(center, hist, '-x', c='blue', label='computed 
distribution')\nax.plot(center, normal_distribution, c='red', label='predicted distribution')\nax.legend(loc=3);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/test-institute-1/cmip6/models/sandbox-1/atmoschem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: TEST-INSTITUTE-1\nSource ID: SANDBOX-1\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:43\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'test-institute-1', 'sandbox-1', 'atmoschem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n5. Key Properties --&gt; Tuning Applied\n6. Grid\n7. Grid --&gt; Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --&gt; Surface Emissions\n11. Emissions Concentrations --&gt; Atmospheric Emissions\n12. Emissions Concentrations --&gt; Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --&gt; Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric chemistry model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmospheric chemistry model code.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Chemistry Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "1.8. Coupling With Chemical Reactivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of atmospheric chemistry code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemical species advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. 
Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Split Operator Chemistry Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemistry (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Split Operator Alternate Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\n?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.6. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.7. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n4.1. Turbulence\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.2. Convection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.3. Precipitation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.4. Emissions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.5. Deposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.6. Gas Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. 
This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.9. Photo Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.10. Aerosols\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. 
Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the atmospheric chemistry grid", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the atmospheric chemistry grid match the atmosphere grid?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview of transport implementation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Use Atmospheric Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric chemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Transport Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric chemistry emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Emissions Concentrations --&gt; Surface Emissions\n10.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. 
Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Emissions Concentrations --&gt; Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. 
Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. 
Emissions Concentrations --&gt; Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Gas Phase Chemistry\nAtmospheric chemistry gas phase chemistry\n13.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview gas phase atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. 
Number Of Bimolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.4. Number Of Termolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.7. Number Of Advected Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.8. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.9. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.10. Wet Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.11. Wet Oxidation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet oxidation included? 
Oxidation describes the loss of electrons or an increase in oxidation state by a molecule.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry stratospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview stratospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n", "14.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n", "14.4. 
Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.5. Sedimentation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sedimentation included in the stratospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the stratospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview tropospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n", "15.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.5. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the tropospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric photo chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Number Of Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17. Photo Chemistry --&gt; Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nPhotolysis scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n", "17.2. 
Environmental Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bakerjd99/jacks
notebooks/Extracting SQL code from SSIS dtsx packages with Python lxml.ipynb
unlicense
[ "Extracting SQL code from SSIS dtsx packages with Python lxml\n\n\nCode for the blog post Extracting SQL code from SSIS dtsx packages with Python lxml\n\n\nFrom Analyze the Data not the Drivel", "# imports\nimport os\nfrom lxml import etree\n\n# set sql output directory\nsql_out = r\"C:\\temp\\dtsxsql\"\nif not os.path.isdir(sql_out):\n os.makedirs(sql_out)\n\n# set dtsx package file\nssis_dtsx = r'C:\\temp\\dtsx\\ParseXML.dtsx'\nif not os.path.isfile(ssis_dtsx):\n print(\"no package file\")\n\n# read and parse ssis package\ntree = etree.parse(ssis_dtsx)\nroot = tree.getroot()\nroot.tag \n\n# collect unique lxml transformed element tags\nele_tags = set()\nfor ele in root.xpath(\".//*\"):\n ele_tags.add(ele.tag)\nprint(ele_tags)\nprint(len(ele_tags))", "Code reformatted to better display on blog", "pfx = '{www.microsoft.com/'\nexe_tag = pfx + 'SqlServer/Dts}Executable'\nobj_tag = pfx + 'SqlServer/Dts}ObjectName'\ndat_tag = pfx + 'SqlServer/Dts}ObjectData'\ntsk_tag = pfx + 'sqlserver/dts/tasks/sqltask}SqlTaskData'\nsrc_tag = pfx + \\\n 'sqlserver/dts/tasks/sqltask}SqlStatementSource'\nprint(exe_tag)\nprint(obj_tag)\nprint(tsk_tag)\nprint(src_tag)\n\n# extract sql source statements and write to *.sql files \ntotal_bytes = 0\npackage_name = root.attrib[obj_tag].replace(\" \",\"\")\nfor cnt, ele in enumerate(root.xpath(\".//*\")):\n if ele.tag == exe_tag:\n attr = ele.attrib\n for child0 in ele:\n if child0.tag == dat_tag:\n for child1 in child0:\n sql_comment = attr[obj_tag].strip()\n if child1.tag == tsk_tag:\n dtsx_sql = child1.attrib[src_tag]\n dtsx_sql = \"-- \" + \\\n sql_comment + \"\\n\" + dtsx_sql\n sql_file = sql_out + \"\\\\\" \\\n + package_name + str(cnt) + \".sql\"\n total_bytes += len(dtsx_sql)\n print((len(dtsx_sql), \n sql_comment, sql_file))\n with open(sql_file, \"w\") as file:\n file.write(dtsx_sql)\nprint(('total bytes',total_bytes))", "Original unformatted code", "# scan package tree and extract sql source code\ntotal_bytes = 0\npackage_name = 
root.attrib['{www.microsoft.com/SqlServer/Dts}ObjectName'].replace(\" \",\"\")\nfor cnt, ele in enumerate(root.xpath(\".//*\")):\n if ele.tag == \"{www.microsoft.com/SqlServer/Dts}Executable\":\n attr = ele.attrib\n for child0 in ele:\n if child0.tag == \"{www.microsoft.com/SqlServer/Dts}ObjectData\":\n for child1 in child0:\n sql_comment = attr[\"{www.microsoft.com/SqlServer/Dts}ObjectName\"].strip()\n if child1.tag == \"{www.microsoft.com/sqlserver/dts/tasks/sqltask}SqlTaskData\":\n dtsx_sql = child1.attrib[\"{www.microsoft.com/sqlserver/dts/tasks/sqltask}SqlStatementSource\"]\n dtsx_sql = \"-- \" + sql_comment + \"\\n\" + dtsx_sql\n sql_file = sql_out + \"\\\\\" + package_name + str(cnt) + \".sql\"\n total_bytes += len(dtsx_sql)\n print((len(dtsx_sql), sql_comment, sql_file))\n with open(sql_file, \"w\") as file:\n file.write(dtsx_sql)\nprint(('total sql bytes',total_bytes))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
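The notebook above matches namespace-qualified tags by pasting the full `{uri}localname` strings into comparisons. As a minimal sketch of the same extraction idea, here is a variant using only the standard-library `xml.etree.ElementTree` (instead of lxml) with a namespace dict; the element and attribute names follow the package above, but the sample XML fragment and the `extract_sql` helper are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Namespace URIs as they appear in the dtsx package above.
NS = {
    "DTS": "www.microsoft.com/SqlServer/Dts",
    "SQLTask": "www.microsoft.com/sqlserver/dts/tasks/sqltask",
}

# A tiny, invented stand-in for a real .dtsx package (real ones are much larger).
DTSX = (
    '<DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts" '
    'xmlns:SQLTask="www.microsoft.com/sqlserver/dts/tasks/sqltask" '
    'DTS:ObjectName="Demo Package">'
    '<DTS:Executable DTS:ObjectName="Load Stage">'
    '<DTS:ObjectData>'
    '<SQLTask:SqlTaskData SQLTask:SqlStatementSource="SELECT 1 AS one"/>'
    '</DTS:ObjectData>'
    '</DTS:Executable>'
    '</DTS:Executable>'
)

def extract_sql(xml_text):
    """Return (task name, sql) pairs for every SqlTaskData element."""
    root = ET.fromstring(xml_text)
    pairs = []
    # iter() walks every Executable, including nested ones
    for exe in root.iter("{%s}Executable" % NS["DTS"]):
        name = exe.get("{%s}ObjectName" % NS["DTS"], "").strip()
        # only direct ObjectData/SqlTaskData children belong to this task
        for task in exe.findall("./DTS:ObjectData/SQLTask:SqlTaskData", NS):
            sql = task.get("{%s}SqlStatementSource" % NS["SQLTask"])
            pairs.append((name, sql))
    return pairs

print(extract_sql(DTSX))
```

Keeping the URIs in one dict avoids repeating the long `{...}`-prefixed strings that make the original loop hard to read.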
jpn--/larch
book/user-guide/linear-funcs.ipynb
gpl-3.0
[ "import larch\nlarch.__version__", "(linear-funcs)=\nLinear Functions\nIn many discrete choice models, the \nprobability of selecting any particular alternative is represented as\nsome function based on the utility of the various alternatives. \nIn Larch, the utility is created based on one or more linear-in-parameters\nfunctions, which combine raw or pre-computed data values with \nnamed model parameters. To facilitate writing these functions,\nLarch provides two special classes: parameter references (P) and\ndata references (X).", "from larch import P, X", "Parameter and data references can be defined using either a function-like notation,\nor an attribute-like notation.", "P('NamedParameter')\n\n# TEST\nnamed_param = P('NamedParameter')\nassert isinstance(named_param, larch.model.linear.ParameterRef_C)\nassert named_param == 'NamedParameter'\n\nX.NamedDataValue\n\n# TEST\nnamed_data = X.NamedDataValue\nassert isinstance(named_data, larch.model.linear.DataRef_C)\nassert named_data == 'NamedDataValue'", "In either case, if the named value contains any spaces or non-alphanumeric characters,\nit must be given in function-like notation only, as Python will not accept\nthose characters in the attribute-like form.", "P('Named Parameter')\n\n# TEST\nnamed_param = P('Named Parameter')\nassert isinstance(named_param, larch.model.linear.ParameterRef_C)\nassert named_param == 'Named Parameter'", "Data references can name an exact data element that appears in the data used for \nmodel estimation or application, or can include simple transformations of that data, so long\nas these transformations can be done without regard to any estimated parameter.\nFor example, we can use the log of income:", "X(\"log(INCOME)\")", "To write a linear-in-parameters utility function, we simply multiply together\na parameter reference and a data reference, and then optionally add that\nto one or more similar terms.", "P.InVehTime * X.AUTO_TIME + P.Cost * X.AUTO_COST", "It is permissible to 
omit the data reference on a term \n(in which case it is implicitly set to 1.0).", "P.ASC + P.InVehTime * X.AUTO_TIME + P.Cost * X.AUTO_COST", "On the other hand, Larch does not currently permit you to omit the parameter \nreference from a term.", "P.InVehTime * X.AUTO_TIME + P.Cost * X.AUTO_COST + X.GEN_COST", "Although you cannot include a term with an implicit parameter set to 1,\nyou can achieve the same model structure by including that parameter explicitly.\nLater in the model setup process you can explicitly lock any parameter to\nhave a specific fixed value.", "P.InVehTime * X.AUTO_TIME + P.Cost * X.AUTO_COST + X.GEN_COST * P.One\n\n# TEST\nf = _\nassert len(f) == 3", "In addition to writing out a linear function as a single command, you can also compose\nsuch functions over several Python commands, using both in-place and regular addition.", "f = P.ASC + P.InVehTime * X.AUTO_TIME\nf += P.Cost * X.AUTO_COST\nf\n\nf + P.Cost * X.AUTO_TOLL", "Functional simplification is not automatic. Thus, while you can subtract a term from\na linear function, it does not cancel out existing terms from the function.", "f = P.ASC + P.InVehTime * X.AUTO_TIME\nf - P.InVehTime * X.AUTO_TIME", "Instead, to actually remove terms from a linear function, use the remove_param or remove_data methods.", "f = P.ASC + P.InVehTime * X.AUTO_TIME + P.Cost * X.AUTO_TOLL\nf.remove_param(P.InVehTime)\n\n# TEST\nassert len(f) == 2\nassert f[0].param == 'ASC'\nassert f[1].param == 'Cost'\n\nf = P.ASC + P.InVehTime * X.AUTO_TIME + P.Cost * X.AUTO_TOLL\nf.remove_data('AUTO_TOLL')\n\n# TEST\nassert len(f) == 2\nassert f[0].param == 'ASC'\nassert f[1].param == 'InVehTime'" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
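The `P`/`X` references in the notebook above are larch's implementation of the linear-in-parameters pattern. As a rough illustration of how such symbolic references can accumulate into a term list and later evaluate as a dot product of parameters and data, here is a toy, larch-independent sketch — the `Term` and `LinearFunction` classes are hypothetical and do not mirror larch's actual internals:

```python
# Toy sketch of the linear-in-parameters pattern (NOT larch's real classes).
class Term:
    """One parameter*data term; data defaults to the implicit constant "1"."""
    def __init__(self, param, data="1"):
        self.param, self.data = param, data
    def __add__(self, other):
        return LinearFunction([self]) + other

class LinearFunction:
    """An ordered list of terms that evaluates as a dot product."""
    def __init__(self, terms):
        self.terms = list(terms)
    def __add__(self, other):
        extra = other.terms if isinstance(other, LinearFunction) else [other]
        return LinearFunction(self.terms + extra)
    def remove_param(self, param):
        # drop every term attached to the named parameter
        self.terms = [t for t in self.terms if t.param != param]
    def evaluate(self, params, data):
        # missing data names (like the implicit constant "1") count as 1.0
        return sum(params[t.param] * data.get(t.data, 1.0) for t in self.terms)

f = Term("InVehTime", "AUTO_TIME") + Term("Cost", "AUTO_COST")
u = f.evaluate({"InVehTime": -0.02, "Cost": -0.001},
               {"AUTO_TIME": 30.0, "AUTO_COST": 150.0})
print(u)
```

The point of the sketch is only the shape of the API: addition builds up terms, and evaluation is a parameter-data dot product.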
UDST/activitysim
activitysim/examples/example_estimation/notebooks/17_tour_mode_choice.ipynb
bsd-3-clause
[ "Estimating Tour Mode Choice\nThis notebook illustrates how to re-estimate tour and subtour mode choice for ActivitySim. This process \nincludes running ActivitySim in estimation mode to read household travel survey files and write out\nthe estimation data bundles used in this notebook. To review how to do so, please visit the other\nnotebooks in this directory.\nLoad libraries", "import os\nimport larch # !conda install larch -c conda-forge # for estimation\nimport pandas as pd", "We'll work in our test directory, where ActivitySim has saved the estimation data bundles.", "os.chdir('test')", "Load data and prep model for estimation", "modelname = \"tour_mode_choice\"\n\nfrom activitysim.estimation.larch import component_model\nmodel, data = component_model(modelname, return_data=True)", "The tour mode choice model is already a ModelGroup segmented on different purposes,\nso we can add the subtour mode choice as just another member model of the group.", "model2, data2 = component_model(\"atwork_subtour_mode_choice\", return_data=True)\n\nmodel.extend(model2)", "Review data loaded from the EDB\nThe next step is to read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.\nCoefficients", "data.coefficients", "Utility specification", "data.spec", "Chooser data", "data.chooser_data", "Estimate\nWith the model set up for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has built-in estimation methods including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. 
BHHH is the default and typically runs faster, but does not follow constraints on parameters.", "model.load_data()\nmodel.doctor(repair_ch_av=\"-\")\n\nmodel.maximize_loglike(method=\"SLSQP\", options={\"maxiter\": 1000})\n\nmodel.calculate_parameter_covariance()", "Estimated coefficients", "model.parameter_summary()", "Output Estimation Results", "from activitysim.estimation.larch import update_coefficients\nresult_dir = data.edb_directory/\"estimated\"\nupdate_coefficients(\n model, data, result_dir,\n output_file=f\"{modelname}_coefficients_revised.csv\",\n);", "Write the model estimation report, including coefficient t-statistic and log likelihood", "model.to_xlsx(\n result_dir/f\"{modelname}_model_estimation.xlsx\", \n data_statistics=False,\n)", "Next Steps\nThe final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode.", "pd.read_csv(result_dir/f\"{modelname}_coefficients_revised.csv\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Olsthoorn/TransientGroundwaterFlow
Syllabus_in_notebooks/Sec6_3_well_near_a_river_extended_assignment.ipynb
gpl-3.0
[ "Section 6.3\nA well in an (un)confined aquifer near a river\nIHE, Delft, Dec. 2017\n@T.N.Olsthoorn\nThe word \"(un)confined\" implies no leakage, so that we will need the well solution by C.V.Theis (1935).\nFor the context, assume we have a well near a river that is in good contact with the aquifer. The aquifer extends to infinity with a straight river through the point with coordinates (x=0, y=0) along the y-axis.\nTo simulate the well in relation to the river we need a mirror well at the opposite side, with opposite flow.\nThe Theis well drawdown is\n$$ s = \\frac {Q_0} {4 \\pi kD} \\mbox{W}(u) $$\nwhere the so-called Theis well function\n$$ W(u) = \\mbox{exp1}(u)$$\nis the exponential integral, which can be found in the module scipy.special.exp1, and\n$$ u = \\frac {r^2 S} { 4 kD t} $$\nThe flow through a ring with radius $r$ caused by such a well equals\n$$ Q_r = Q_0 e^{-u} $$\nAssume the river is straight along the y axis, and that the well is at point ($-a,\\, 0$) and has a constant extraction $Q$ starting at $t=0$.\nTo simulate a river that causes a constant head in the groundwater, we just place a mirror well with opposite flow at the other side of the river shore, at location ($+a,\\, 0$).\nWe will first compute the drawdown at an arbitrary point, not necessarily at the river shore. 
Next we will compute the inflow from the river.", "import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.special import exp1 # you could do: from scipy.special import exp1 as W\nimport pdb", "Convenience function for setting up graphs", "def newfig(title='?', xlabel='?', ylabel='?', xlim=None, ylim=None,\n           xscale='linear', yscale='linear', size_inches=(14, 8)):\n    '''Set up a new axis for plotting'''\n    fig, ax = plt.subplots()\n    fig.set_size_inches(size_inches)\n    ax.set_title(title)\n    ax.set_xlabel(xlabel)\n    ax.set_ylabel(ylabel)\n    ax.set_xscale(xscale)\n    ax.set_yscale(yscale)\n    if xlim is not None: ax.set_xlim(xlim)\n    if ylim is not None: ax.set_ylim(ylim)\n    ax.grid(True)\n    return ax\n\n# Aquifer properties\nkD = 600 # m2/d\nS = 0.2 # [-]\n\n# The well, mirror well and observation point\na = 125. # distance from the well to the river bank\nx0, y0 = 50., 100. # m coordinates of the observation point\nx1, y1 = -a, 0 # m coordinates of the extraction well\nx2, y2 = +a, 0 # m coordinates of the injection well (mirror well)\nQ = 1200 # m3/d, extraction of the well\nr0 = 0.25 # well radius\n\nt = 1.0 # d\n\n# Distance between each well and the observation point\nr1 = np.sqrt((x1 - x0)**2 + (y1 - y0)**2)\nr2 = np.sqrt((x2 - x0)**2 + (y2 - y0)**2)\n\n# The argument of the well function for each well\nu1 = r1 ** 2 * S / (4 * kD * t)\nu2 = r2 ** 2 * S / (4 * kD * t)\n\n# drawdown, superimposing the well and its mirror well\ns = Q /(4 * np.pi * kD) * (exp1(u1) - exp1(u2)) # minus because mirror well has opposite Q\n\nprint(f\"The drawdown s at x0={x0:.1f}, y0={y0:.1f} at t={t:.2f} d with Q={Q:.0f} m3/d equals {s:.2f} m\")\n ", "Many observation points\nInstead of 1 observation point, we can take many observation points, for instance, along a line through both wells.\nTo prevent the distance $r$ from becoming zero at the well center, set the minimum distance equal to the well's radius $r_0$, which will normally be around 0.25 m.", "# Drawdown at arbitrary points x, y\nx = 
np.linspace(-2 * a, 2 * a, 201) # many x-coordinates\ny = np.zeros_like(x) # same number of y-coordinates, all values zero (line along the x-axis)\nrw = 0.25 # The well radius\n\n# Compute the distance from each well to all observation points at once:\nr1 = np.sqrt((x1 - x)**2 + (y1 - y)**2)\nr2 = np.sqrt((x2 - x)**2 + (y2 - y)**2)\n\n# Use logical indexing to set or exclude points or select specific ones. In this case\n# we will put the distance of points closer than r_0 from the well (in fact inside the well) equal to\n# r_0.\nr1[r1<rw] = rw\nr2[r2<rw] = rw\n\n# Compute the argument of the well function for all points at once:\nu1 = r1**2 * S / (4 * kD * t)\nu2 = r2**2 * S / (4 * kD * t)\n\n# Compute the head due to the well and its mirror for all points at once\ns = Q / (4 * np.pi *kD) * (exp1(u1) - exp1(u2))\n\nax = newfig('Drawdown along a line through the well and its mirror', 'x [m], y=0', 's [m]', ylim=(2, -2))\n# By using (2, -2) in ylim, we invert the vertical axis at the same time.\n\nax.plot(x, s)\n\nplt.show()", "Compute the flow across a ring with radius r\nThe total flow across a ring with radius $r$ around a transient well is\n$$ Q_r = Q_0 e^{-u} $$\nAnd the specific discharge\n$$ q_r = \\frac {Q_0} {2 \\pi r} e^{-u} $$\nand points in the direction of the well.\nThe specific discharge at randomly chosen points\nHere we have a single well at location x0, y0 and a set of randomly chosen points with coordinates in the vectors x and y.\nThe specific discharge is computed at all points at once. Then we compute the x and y components of these vectors. Finally we show them in a simple way, in which the point is indicated by a small circle and the discharge vector by a line with length proportional to its strength.", "# Well properties\nQ0, x0, y0 = 1200., -a, 0. 
# the extraction and location of the well\n\n# Choose a set of randomly placed observation points\na = 50 # m, a length used to scale the randomly chosen points\nn = 200\nx = a * (np.random.randn(n) - 0.5) # choose n random values\ny = a * (np.random.randn(n) - 0.5) # same\n\n# Distance from the well to each of the random points\nr = np.sqrt((x-x0)**2 + (y-y0)**2)\n\n# u\nu = r**2 * S /(4 * kD * t)\n\n# the specific discharge\nq = Q0 / (2 * np.pi * r) * np.exp(-u) # m2/d, see syllabus\nalpha = np.arctan2(y - y0, x- x0) # angle between vector and horizontal\nqx = q * np.cos(alpha) # x component of specific discharge\nqy = q * np.sin(alpha) # y component of specific discharge\n\nscale = 2.0 # scale factor for plotting the specific discharge vectors (arbitrary, just for visualization)\n\nax = newfig('discharge vector plot', 'x [m]', 'y [m]')\n\n# plot the location of the well as black ('k') circle ('o') of size 8\nax.plot(x0, y0, 'ko', markersize=8)\n\n# in a loop, plot one vector after another\nfor xi, yi, qxi, qyi in zip(x, y, qx, qy):\n    ax.plot(xi, yi, 'o', markersize=4) # plot marker at obs. point\n    ax.plot([xi, xi - scale * qxi],\n            [yi, yi - scale * qyi]) # plot vector\n\n
# the extraction and location of the well\n\n# Choose a set of randomly placed observation points\nn = 500\nx = 2 * a * (np.random.randn(n) - 0.5) # choose n random values\ny = 2 * a * (np.random.randn(n) - 0.5) # same\n\n# Distance from the well to each of the random points\nr1 = np.sqrt((x-x1)**2 + (y-y1)**2)\nr2 = np.sqrt((x-x2)**2 + (y-y2)**2)\n\n# u\nu1 = r1**2 * S /(4 * kD * t)\nu2 = r2**2 * S /(4 * kD * t)\n\n# the specific discharge\nq1 = Q0 / (2 * np.pi * r1) * np.exp(-u1)\nq2 = -Q0 / (2 * np.pi * r2) * np.exp(-u2) # sum because flows add\n\nalpha1 = np.arctan2(y - y1, x- x1) # angle between vector and horizontal\nalpha2 = np.arctan2(y - y2, x- x2) # angle between vector and horizontal\n\nqx = q1 * np.cos(alpha1) + q2 * np.cos(alpha2) # x component of specific discharge\nqy = q1 * np.sin(alpha1) + q2 * np.sin(alpha2) # y component of specific discharge\n\nscale = 20.0 # scale factor for plotting the specific discharge vectors (arbitrary, just for visualization)\n\nax = newfig('discharge vector plot', 'x [m]', 'y [m]')\n\n# plot the location of the well as black ('k') circle ('o') of size 8\nax.plot(x1, y1, 'ko', markersize=8, label='well')\nax.plot(x2, y2, 'ko', markersize=8, label='mirror well')\n\n# in a loop, plot one vector after another\nfor xi, yi, qxi, qyi in zip(x, y, qx, qy):\n    ax.plot(xi, yi, 'o', markersize=4) # plot marker at obs. point\n    ax.plot([xi, xi - scale * qxi],\n            [yi, yi - scale * qyi]) # plot vector\n\n", "Inflow from the river\nFirst we'll show the flux flowing from the river into the aquifer at a point in time.\nIn the section thereafter, we'll integrate the flux along the bank of the river to get the total inflow.\nThe inflowing flux at the river bank is obtained from the x-component of the discharge vectors of both wells at points on the river bank. We just compute the vector for each well separately and add the x and y components of the well and its mirror in river-bank points. 
Because the river bank lies along the y-axis, we only need the x-component of the discharge vectors.", "Q = 1200 # m3/d\n\na = 125 # m, distance of well to river shore\nx1, y1 = -a, 0 # location of well\nx2, y2 = +a, 0 # location of mirror well\n\n# Choose points along the river bank\ny = np.linspace(-1000, 1000, 81)\nx = np.zeros_like(y)\n\nr1 = np.sqrt((x - x1) ** 2 + (y - y1) ** 2)\nr2 = np.sqrt((x - x2) ** 2 + (y - y2) ** 2)\n\nu1 = r1 ** 2 * S / (4 * kD * t)\nu2 = r2 ** 2 * S / (4 * kD * t)\n\nq1 = +Q / (2 * np.pi * r1) * np.exp(-u1)\nq2 = -Q / (2 * np.pi * r2) * np.exp(-u2)\n\nalpha1 = np.arctan2(y, x - x1)\nalpha2 = np.arctan2(y, x - x2)\n\nqx = q1 * np.cos(alpha1) + q2 * np.cos(alpha2)\nqy = q1 * np.sin(alpha1) + q2 * np.sin(alpha2) # Will be zero for all bank points.\n\nqin = qx # we only need the x-component.\n\nax = newfig('Inflow along the river', 'y [m]', 'inflow m2/d')\nplt.plot(y, qin)\n\n# show that qy is zero\nprint(f'Max qy = {np.max(qy):.4g}, min qy = {np.min(qy):.4g} m2/d')", "Compute the total inflow at a given time\nWe have to integrate the inflow along the y axis.\nTo do this, we take the pieces $\\Delta y$ between the chosen points on the $y$ axis and multiply them with the average value of $q_{in}$ at each piece on the $y$ axis.", "dy = y[1:] - y[:-1] # or np.diff(y)\nqm = 0.5 * (qin[:-1] + qin[1:]) # average of adjacent qin values\nQin = np.sum(qm * dy) # total inflow through the river bank\nprint('Total inflow with Q={:.0f} at t={:.1f} d equals {:.1f} m3/d'\\\n     .format(Q, t, Qin))", "The computed inflow will approach the true inflow better, the longer the part of the y-axis is chosen.\nShow the total inflow for many times but a constant well extraction\nTo do this, a loop is necessary to compute the inflow one time at a time and collect the results.", "# The data\n\nQ = 1200 # m3/d\n\nb = 125 # m, distance of well to river shore\nx1, y1 = -b, 0 # location of well\nx2, y2 = +b, 0 # location of mirror well\n\n# The times\ntimes = np.linspace(0, 200, 
2001)\ntimes[0] = 0.0001 # to prevent computing for t=0, use a very small number\n\n# all parameters that are constant in time \ny = np.linspace(-1000, 1000, 81)\nx = np.zeros_like(y)\n\ndy = y[1:] - y[:-1]\n\nr1 = np.sqrt((x - x1) ** 2 + (y - y1) ** 2)\nr2 = np.sqrt((x - x2) ** 2 + (y - y2) ** 2)\n\nalpha1 = np.arctan2(y, x - x1)\nalpha2 = np.arctan2(y, x - x2)\n\nax = newfig('Total inflow from the river', 'time [d]', 'inflow m3/d', ylim=(0, Q))\n\n# Compute for each time in turn\nQin = [] # empty list in which the values will be assembled\nfor t in times: # t is the next value from times in the loop\n    u1 = r1 ** 2 * S / (4 * kD * t)\n    u2 = r2 ** 2 * S / (4 * kD * t)\n\n    q1 = +Q / (2 * np.pi * r1) * np.exp(-u1)\n    q2 = -Q / (2 * np.pi * r2) * np.exp(-u2)\n\n    qx = q1 * np.cos(alpha1) + q2 * np.cos(alpha2)\n    qy = q1 * np.sin(alpha1) + q2 * np.sin(alpha2)\n\n    qin = qx\n    \n    qm = 0.5 * (qin[:-1] + qin[1:])\n    Qin.append(np.sum(qm * dy)) # append next value\n\nax.plot(times, Qin) # plot the curve\n\n# embellishment of the plot\n", "Everything combined, many times, with varying well extraction\nEvery winter the extraction is Q0 and during the summer it is -Q0.\nWe simulate 10 years using superposition in time. The winter injection\nwill start each year in September and the summer extraction each year in April. 
Let's define the start times in days since the start of the first year, assuming for simplicity that a month always has 30 days.", "Q0 = 2400 # m3/d\nkD = 500 # m2/d\nS = 0.2 # [-]\n\n# all variables that are constant in time:\n\nb = 1500 # m, distance of well to river shore\nx1, y1 = -b, 0 # location of well\nx2, y2 = +b, 0 # location of mirror well\n\n# all parameters that are constant in time \ny = np.linspace(-1000, 1000, 81)\nx = np.zeros_like(y)\n\ndy = y[1:] - y[:-1]\n\nr1 = np.sqrt((x - x1) ** 2 + (y - y1) ** 2)\nr2 = np.sqrt((x - x2) ** 2 + (y - y2) ** 2)\n\nalpha1 = np.arctan2(y, x - x1)\nalpha2 = np.arctan2(y, x - x2)\n\n", "The first time, the injection is Q.\nThen at each time when extraction starts, the flow change is -2Q.\nAnd then each time when we switch back to injection the flow change is +2Q.\nThe flows to be used for superposition in time are\nMonth 9, 15, 21, 27, 33, 39, 45, ...\nFlow Q, -2Q, 2Q, -2Q, 2Q, -2Q, 2Q, ...\nNow define the times at which the total inflow is to be computed.", "# Set the times when pumping switches\n# start, end, step\nmonth = np.arange(9, 120, 6) # month numbers at which flow switches\n                             #counting from Jan 1 in first year\nTsw = 30 * month # t at which flow switches in days\n\n# set the pumping flow at the switch points\nQsw = Q0 * 2 * (-1) ** np.arange(len(Tsw))\nQsw[0] = Q0\n\n# Show them\nprint('{:>6} {:>6}'.format('Tsw[d]', 'Qsw'))\nfor tsw, Q in zip(Tsw, Qsw):\n    print('{:6.0f} {:6.0f}'.format(tsw, Q))\n", "Choose times at which we want to see the total inflow", "# The small value prevents computing flows at the time the well starts\n# t is now just a negligible time later.\n#                start  end            step\ntimes = np.arange(0, 30 * 12 * 10, 10) + 0.00001 # 10 years in days in 10 day steps", "Actual computation for all times", "# embellishment of the plot\nax = newfig(f'Total inflow from the river, well is {b:.0f} m from river', 'time [d]',\n            '<--outflow, extraction | inflow, injection--> m3/d')\n\nplt.ylim(-Q0, Q0) # maximum value is Q, the total 
extraction\n\nQriv = [] # empty list in which the values will be assembled\nQwell= []\nfor t in times: # t is the next value from times in the loop\n Qsum = 0.\n Qtot = 0.\n for tsw, Q in zip(Tsw, Qsw):\n if t > tsw:\n u1 = r1 ** 2 * S / (4 * kD * (t - tsw))\n u2 = r2 ** 2 * S / (4 * kD * (t - tsw))\n\n q1 = +Q / (2 * np.pi * r1) * np.exp(-u1)\n q2 = -Q / (2 * np.pi * r2) * np.exp(-u2)\n\n qx = q1 * np.cos(alpha1) + q2 * np.cos(alpha2)\n qy = q1 * np.sin(alpha1) + q2 * np.sin(alpha2)\n\n qin = qx\n\n qm = 0.5 * (qin[:-1] + qin[1:])\n Qsum = Qsum + np.sum(qm * dy)\n Qtot = Qtot + Q\n Qriv.append(-Qsum) # append next value\n Qwell.append(Qtot)\n\n# plot the curve\nax.plot(times, Qwell, 'k', label='Qwell')\nax.plot(times, Qriv, 'r', label='Qriv')\n\nax.legend()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
andalexo/bgv
BSRTExample.ipynb
mit
[ "# general settings\n%matplotlib notebook\n%load_ext autoreload\n%autoreload 2\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pytimber\nfrom pytimber import BSRT\n\ndb = pytimber.LoggingDB()\n\nt1=pytimber.parsedate(\"2017-11-02 16:26:00.000\")\nt2=pytimber.parsedate(\"2017-11-02 16:34:00.000\")", "Getting the BSRT data from timber\nGenerate the BSRT instance, which automatically calculates the emittances using the data stored in timber.", "bsrt = BSRT.fromdb(t1,t2,beam='B1')", "Dictionary with emittances and timestamps for each slot", "bsrt.emit", "What slots do we have?", "print sorted(bsrt.emit.keys())", "Plotting the data\nWe can plot the emittance", "plt.figure()\nbsrt.plot(plane='v',t1=t1,t2=t2,slots=None,avg=None,fit=False)", "... and we can also perform a moving average over the data", "plt.figure()\nbsrt.plot(plane='h',t1=t1,t2=t2,slots=None,avg=10,fit=False)", "... or plot only specific slots", "plt.figure()\nbsrt.plot(plane='h',t1=t1,t2=t2,slots=[50,62],avg=10,fit=False)", "Fitting the emittance\nFit the emittance between tstart and tend", "tstart=pytimber.parsedate(\"2017-11-02 16:26:00.000\")\ntend=pytimber.parsedate(\"2017-11-02 16:34:00.000\")", "The raw data can also be fitted with an exponential: \n$\\epsilon(t) = \\epsilon_0\\cdot e^{((t-t_{\\rm start})/\\tau)}$", "plt.figure()\nbsrt.plot(plane='h',t1=tstart,t2=tend,slots=[50,62],avg=10,fit=True)", "The fit data from tstart to tend is then stored in bsrt.emitfit.", "bsrt.emitfit", "For each (tstart,tend) the fitting data is stored in this dictionary. This means you can e.g. fit your data from [t0,t1], [t1,t2], etc., plot the complete data and then plot the individual fits on top.", "t0fit=pytimber.parsedate(\"2016-08-24 03:34:00.000\")\nt1fit=pytimber.parsedate(\"2016-08-24 03:46:00.000\")\nt2fit=pytimber.parsedate(\"2016-08-24 03:50:00.000\")\nt3fit=pytimber.parsedate(\"2016-08-24 03:57:00.000\")\nt4fit=pytimber.parsedate(\"2016-08-24 04:08:00.000\")", "Here we perform the fit. 
The functions bsrt.plot(..., fit=True) and bsrt.plot_fit() automatically generate this data if no entry for the desired timestamp is found in bsrt.emitfit. Just to show it, we use the underlying function bsrt.fit here.", "for ts,te in [[t0fit,t1fit],[t1fit,t2fit],[t2fit,t3fit],[t3fit,t4fit]]:\n    bsrt.fit(ts,te,force=True)", "And now we can do the plot", "plt.figure()\nfor slot,c in zip([1062,1074],['b','r']):\n    # plot the averaged data\n    bsrt.plot(plane='v',t1=t0fit,t2=t4fit,slots=slot,avg=None,fit=False,color=c)\n    # now add the fit data on top, in the same color\n    for ts,te in [[t0fit,t1fit],[t1fit,t2fit],[t2fit,t3fit],[t3fit,t4fit]]:\n        bsrt.plot_fit(plane='v',t1=ts,t2=te,slots = slot, color=c)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
adityaka/misc_scripts
python-scripts/data_analytics_learn/.ipynb_checkpoints/L1_Starter_Code-checkpoint.ipynb
bsd-3-clause
[ "Before we get started, a couple of reminders to keep in mind when using iPython notebooks:\n\nRemember that you can see from the left side of a code cell when it was last run if there is a number within the brackets.\nWhen you start a new notebook session, make sure you run all of the cells up to the point where you last left off. Even if the output is still visible from when you ran the cells in your previous session, the kernel starts in a fresh state so you'll need to reload the data, etc. on a new session.\nThe previous point is useful to keep in mind if your answers do not match what is expected in the lesson's quizzes. Try reloading the data and run all of the processing steps one by one in order to make sure that you are working with the same variables and data that are at each quiz stage.\n\nLoad Data from CSVs", "import unicodecsv\n\n## Longer version of code (replaced with shorter, equivalent version below)\n\n# enrollments = []\n# f = open('enrollments.csv', 'rb')\n# reader = unicodecsv.DictReader(f)\n# for row in reader:\n# enrollments.append(row)\n# f.close()\n\nwith open('enrollments.csv', 'rb') as f:\n reader = unicodecsv.DictReader(f)\n enrollments = list(reader)\n\n#####################################\n# 1 #\n#####################################\n\n## Read in the data from daily_engagement.csv and project_submissions.csv \n## and store the results in the below variables.\n## Then look at the first row of each table.\n\ndaily_engagement = ''\nproject_submissions = ", "Fixing Data Types", "from datetime import datetime as dt\n\n# Takes a date as a string, and returns a Python datetime object. 
\n# If there is no date given, returns None\ndef parse_date(date):\n if date == '':\n return None\n else:\n return dt.strptime(date, '%Y-%m-%d')\n \n# Takes a string which is either an empty string or represents an integer,\n# and returns an int or None.\ndef parse_maybe_int(i):\n if i == '':\n return None\n else:\n return int(i)\n\n# Clean up the data types in the enrollments table\nfor enrollment in enrollments:\n enrollment['cancel_date'] = parse_date(enrollment['cancel_date'])\n enrollment['days_to_cancel'] = parse_maybe_int(enrollment['days_to_cancel'])\n enrollment['is_canceled'] = enrollment['is_canceled'] == 'True'\n enrollment['is_udacity'] = enrollment['is_udacity'] == 'True'\n enrollment['join_date'] = parse_date(enrollment['join_date'])\n \nenrollments[0]\n\n# Clean up the data types in the engagement table\nfor engagement_record in daily_engagement:\n engagement_record['lessons_completed'] = int(float(engagement_record['lessons_completed']))\n engagement_record['num_courses_visited'] = int(float(engagement_record['num_courses_visited']))\n engagement_record['projects_completed'] = int(float(engagement_record['projects_completed']))\n engagement_record['total_minutes_visited'] = float(engagement_record['total_minutes_visited'])\n engagement_record['utc_date'] = parse_date(engagement_record['utc_date'])\n \ndaily_engagement[0]\n\n# Clean up the data types in the submissions table\nfor submission in project_submissions:\n submission['completion_date'] = parse_date(submission['completion_date'])\n submission['creation_date'] = parse_date(submission['creation_date'])\n\nproject_submissions[0]", "Note when running the above cells that we are actively changing the contents of our data variables. 
If you try to run these cells multiple times in the same session, an error will occur.\nInvestigating the Data", "#####################################\n# 2 #\n#####################################\n\n## Find the total number of rows and the number of unique students (account keys)\n## in each table.", "Problems in the Data", "#####################################\n# 3 #\n#####################################\n\n## Rename the \"acct\" column in the daily_engagement table to \"account_key\".", "Missing Engagement Records", "#####################################\n# 4 #\n#####################################\n\n## Find any one student enrollments where the student is missing from the daily engagement table.\n## Output that enrollment.", "Checking for More Problem Records", "#####################################\n# 5 #\n#####################################\n\n## Find the number of surprising data points (enrollments missing from\n## the engagement table) that remain, if any.", "Tracking Down the Remaining Problems", "# Create a set of the account keys for all Udacity test accounts\nudacity_test_accounts = set()\nfor enrollment in enrollments:\n if enrollment['is_udacity']:\n udacity_test_accounts.add(enrollment['account_key'])\nlen(udacity_test_accounts)\n\n# Given some data with an account_key field, removes any records corresponding to Udacity test accounts\ndef remove_udacity_accounts(data):\n non_udacity_data = []\n for data_point in data:\n if data_point['account_key'] not in udacity_test_accounts:\n non_udacity_data.append(data_point)\n return non_udacity_data\n\n# Remove Udacity test accounts from all three tables\nnon_udacity_enrollments = remove_udacity_accounts(enrollments)\nnon_udacity_engagement = remove_udacity_accounts(daily_engagement)\nnon_udacity_submissions = remove_udacity_accounts(project_submissions)\n\nprint len(non_udacity_enrollments)\nprint len(non_udacity_engagement)\nprint len(non_udacity_submissions)", "Refining the Question", 
"#####################################\n# 6 #\n#####################################\n\n## Create a dictionary named paid_students containing all students who either\n## haven't canceled yet or who remained enrolled for more than 7 days. The keys\n## should be account keys, and the values should be the date the student enrolled.\n\npaid_students =", "Getting Data from First Week", "# Takes a student's join date and the date of a specific engagement record,\n# and returns True if that engagement record happened within one week\n# of the student joining.\ndef within_one_week(join_date, engagement_date):\n time_delta = engagement_date - join_date\n return time_delta.days < 7\n\n#####################################\n# 7 #\n#####################################\n\n## Create a list of rows from the engagement table including only rows where\n## the student is one of the paid students you just found, and the date is within\n## one week of the student's join date.\n\npaid_engagement_in_first_week = ", "Exploring Student Engagement", "from collections import defaultdict\n\n# Create a dictionary of engagement grouped by student.\n# The keys are account keys, and the values are lists of engagement records.\nengagement_by_account = defaultdict(list)\nfor engagement_record in paid_engagement_in_first_week:\n account_key = engagement_record['account_key']\n engagement_by_account[account_key].append(engagement_record)\n\n# Create a dictionary with the total minutes each student spent in the classroom during the first week.\n# The keys are account keys, and the values are numbers (total minutes)\ntotal_minutes_by_account = {}\nfor account_key, engagement_for_student in engagement_by_account.items():\n total_minutes = 0\n for engagement_record in engagement_for_student:\n total_minutes += engagement_record['total_minutes_visited']\n total_minutes_by_account[account_key] = total_minutes\n\nimport numpy as np\n\n# Summarize the data about minutes spent in the 
classroom\ntotal_minutes = total_minutes_by_account.values()\nprint 'Mean:', np.mean(total_minutes)\nprint 'Standard deviation:', np.std(total_minutes)\nprint 'Minimum:', np.min(total_minutes)\nprint 'Maximum:', np.max(total_minutes)", "Debugging Data Analysis Code", "#####################################\n# 8 #\n#####################################\n\n## Go through a similar process as before to see if there is a problem.\n## Locate at least one surprising piece of data, output it, and take a look at it.", "Lessons Completed in First Week", "#####################################\n# 9 #\n#####################################\n\n## Adapt the code above to find the mean, standard deviation, minimum, and maximum for\n## the number of lessons completed by each student during the first week. Try creating\n## one or more functions to re-use the code above.", "Number of Visits in First Week", "######################################\n# 10 #\n######################################\n\n## Find the mean, standard deviation, minimum, and maximum for the number of\n## days each student visits the classroom during the first week.", "Splitting out Passing Students", "######################################\n# 11 #\n######################################\n\n## Create two lists of engagement data for paid students in the first week.\n## The first list should contain data for students who eventually pass the\n## subway project, and the second list should contain data for students\n## who do not.\n\nsubway_project_lesson_keys = ['746169184', '3176718735']\n\npassing_engagement =\nnon_passing_engagement =", "Comparing the Two Student Groups", "######################################\n# 12 #\n######################################\n\n## Compute some metrics you're interested in and see how they differ for\n## students who pass the subway project vs. students who don't. 
A good\n## starting point would be the metrics we looked at earlier (minutes spent\n## in the classroom, lessons completed, and days visited).", "Making Histograms", "######################################\n# 13 #\n######################################\n\n## Make histograms of the three metrics we looked at earlier for both\n## students who passed the subway project and students who didn't. You\n## might also want to make histograms of any other metrics you examined.", "Improving Plots and Sharing Findings", "######################################\n# 14 #\n######################################\n\n## Make a more polished version of at least one of your visualizations\n## from earlier. Try importing the seaborn library to make the visualization\n## look better, adding axis labels and a title, and changing one or more\n## arguments to the hist() function." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Upward-Spiral-Science/uhhh
code/Graph Analysis/Delaunay.ipynb
apache-2.0
[ "Delaunay\nHere, we'll perform various analyses by constructing graphs and measuring properties of those graphs to learn more about the data", "import csv\nfrom scipy.stats import kurtosis\nfrom scipy.stats import skew\nfrom scipy.spatial import Delaunay\nimport numpy as np\nimport math\nimport skimage\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom skimage import future\nimport networkx as nx\nfrom ragGen import *\n%matplotlib inline\nsns.set_color_codes(\"pastel\")\nfrom scipy.signal import argrelextrema\n\n# Read in the data\ndata = open('../../data/data.csv', 'r').readlines()\nfieldnames = ['x', 'y', 'z', 'unmasked', 'synapses']\nreader = csv.reader(data)\nreader.next()\n\nrows = [[int(col) for col in row] for row in reader]\n\n# These will come in handy later\nsorted_x = sorted(list(set([r[0] for r in rows])))\nsorted_y = sorted(list(set([r[1] for r in rows])))\nsorted_z = sorted(list(set([r[2] for r in rows])))", "We'll start with just looking at analysis in Euclidean space, then thinking about weighting by synaptic density later. Since we hypothesize that our data will show that tissue varies as we move down the y-axis (z-axis in brain) through cortical layers, an interesting thing to do would be to compare properties of the graphs on each layer (i.e., how graph connectivity varies as we move through layers).\nLet's start by triangulating our data. We'll use Delaunay on each y layer first. Putting our data in the right format for doing graph analysis...", "a = np.array(rows)\nb = np.delete(a, np.s_[3::],1)\n\n# Separate layers - have to do some wonky stuff to get this to work\nb = sorted(b, key=lambda e: e[1])\nb = np.array([v.tolist() for v in b])\nb = np.split(b, np.where(np.diff(b[:,1]))[0]+1)", "Now that our data is in the right format, we'll create 52 Delaunay graphs. Then we'll perform analyses on these graphs. 
A simple but useful metric would be to analyze edge length distributions in each layer.", "graphs = []\ncentroid_list = []\n\nfor layer in b:\n    centroids = np.array(layer)\n    \n    # get rid of the y value - not relevant anymore\n    centroids = np.delete(centroids, 1, 1)\n    centroid_list.append(centroids)\n    \n    graph = Delaunay(centroids)\n    graphs.append(graph)", "We're going to need a method to get edge lengths from 2D centroid pairs", "def get_d_edge_length(edge):\n    (x1, y1), (x2, y2) = edge\n    return math.sqrt((x2-x1)**2 + (y2-y1)**2)\n\nedge_length_list = [[]]\ntri_area_list = [[]]\n\n# iterate over each layer's centroids together with its Delaunay graph\nfor centroids, del_graph in zip(centroid_list, graphs):\n    \n    tri_areas = []\n    edge_lengths = []\n    triangles = []\n\n    for t in centroids[del_graph.simplices]:\n        triangles.append(t)\n        pa, pb, pc = [tuple(map(int,list(v))) for v in t]\n        edge_lengths.append(get_d_edge_length((pa,pb)))\n        edge_lengths.append(get_d_edge_length((pa,pc)))\n        edge_lengths.append(get_d_edge_length((pb,pc)))\n        try:\n            tri_areas.append(float(Triangle(pa,pb,pc).area))\n        except:\n            continue\n    edge_length_list.append(edge_lengths)\n    tri_area_list.append(tri_areas)", "Realizing after all this that location alone is useless. We know the voxels are evenly spaced, which means our edge length data will be all the same. See that the \"centroids\" are no different:", "np.subtract(centroid_list[0], centroid_list[1])", "There is no distance between the two. Therefore it is perhaps more useful to consider a graph that considers node weights. Voronoi is dual to Delaunay, so that's not much of an option. We want something that considers both spatial location and density similarity. 
\nDrawing Graphs\nFirst we look at the default networkx graph plotting:", "real_volume = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))\nfor r in rows:\n    real_volume[sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1]\n\nnx_graphs = []\nfor layer in b:\n    G = nx.Graph(graph)\n    nx_graphs.append(G)\n\nfor graph in graphs:\n    plt.figure()\n    nx.draw(graph, node_size=100)", "This is using the spring layout, so we're losing positional information. We can improve the plot by adding position information.\nSelf Loops", "num_self_loops = []\nfor rag in y_rags:\n    num_self_loops.append(rag.number_of_selfloops())\n\nnum_self_loops", "Interesting. There are no self loops. Why would this be? Let's come back to this. In the meantime, I want to give some thought to what it means to have a self loop, whether it should be theoretically possible given our data, and whether our graphs are formed properly.\nThe answer to this question is very simple. In a RAG, there are no self-loops by definition. Self loops are edges that form a connection between a node and itself.\n<img src=\"../../docs/figures/selfloop.png\" width=\"100\">\nTo see whether the graphs are formed properly, let's look at an adjacency list:", "# y_rags[0].adjacency_list()", "Compare that to the test data:", "# Test Data\ntest = np.array([[1,2],[3,4]])\ntest_rag = skimage.future.graph.RAG(test)\ntest_rag.adjacency_list()", "X-Layers", "real_volume_x = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))\nfor r in rows:\n    real_volume_x[ sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1]\n\nx_rags = []\ncount = 0;\nfor layer in real_volume_x:\n    count = count + 1\n    x_rags.append(skimage.future.graph.RAG(layer))\n\nnum_edges_x = []\nfor rag in x_rags:\n    num_edges_x.append(rag.number_of_edges())\n\nsns.barplot(x=range(len(num_edges_x)), y=num_edges_x)\nsns.plt.show()", "We can see here the number of edges is low in the area that does not have many synapses. 
It, as expected, mirrors the distribution of synapses. It appears to be approximately uniform at the top, with buffers of very few synapses on the sides. Remember from here:", "plt.imshow(np.amax(real_volume, axis=2), interpolation='nearest')\nplt.show()\n\n# edge_length_list[3]\n# tri_area_list[3]\n# triangles\n\n# Note for future\n# del_features['d_edge_length_mean'] = np.mean(edge_lengths)\n# del_features['d_edge_length_std'] = np.std(edge_lengths)\n# del_features['d_edge_length_skew'] = scipy.stats.skew(edge_lengths)\n# del_features['d_edge_length_kurtosis'] = scipy.stats.kurtosis(edge_lengths)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ampl/amplpy
notebooks/pattern_enumeration.ipynb
bsd-3-clause
[ "AMPLPY: Pattern Enumeration\n\n\nDocumentation: http://amplpy.readthedocs.io\nGitHub Repository: https://github.com/ampl/amplpy\nPyPI Repository: https://pypi.python.org/pypi/amplpy\nJupyter Notebooks: https://github.com/ampl/amplpy/tree/master/notebooks\nSetup", "!pip install -q amplpy ampltools matplotlib numpy", "Google Colab & Kaggle integration", "MODULES=['ampl', 'gurobi']\nfrom ampltools import cloud_platform_name, ampl_notebook\nfrom amplpy import AMPL, register_magics\nif cloud_platform_name() is None:\n    ampl = AMPL() # Use local installation of AMPL\nelse:\n    ampl = ampl_notebook(modules=MODULES) # Install AMPL and use it\nregister_magics(ampl_object=ampl) # Evaluate %%ampl_eval cells with ampl.eval()", "Basic pattern-cutting model", "%%ampl_eval\nparam nPatterns integer > 0;\nset PATTERNS = 1..nPatterns; # patterns\nset WIDTHS; # finished widths\nparam order {WIDTHS} >= 0; # rolls of width j ordered\nparam overrun; # permitted overrun on any width\nparam rolls {WIDTHS,PATTERNS} >= 0 default 0; # rolls of width i in pattern j\n\nvar Cut {PATTERNS} integer >= 0; # raw rolls to cut in each pattern\n\nminimize TotalRawRolls: sum {p in PATTERNS} Cut[p];\n\nsubject to FinishedRollLimits {w in WIDTHS}:\n   order[w] <= sum {p in PATTERNS} rolls[w,p] * Cut[p] <= order[w] + overrun;", "Enumeration routine", "from math import floor\n\ndef patternEnum(roll_width, widths, prefix=[]):\n    max_rep = int(floor(roll_width/widths[0]))\n    if len(widths) == 1:\n        patmat = [prefix+[max_rep]]\n    else:\n        patmat = []\n        for n in reversed(range(max_rep+1)):\n            patmat += patternEnum(roll_width-n*widths[0], widths[1:], prefix+[n])\n    return patmat", "Plotting routine", "def cuttingPlot(roll_width, widths, solution):\n    import numpy as np\n    import matplotlib.pyplot as plt\n    ind = np.arange(len(solution))\n    acc = [0]*len(solution)\n    for p, (patt, rep) in enumerate(solution):\n        for i in range(len(widths)):\n            for j in range(patt[i]):\n                vec = [0]*len(solution)\n                vec[p] = widths[i]\n                
plt.bar(ind, vec, width=0.35, bottom=acc)\n                acc[p] += widths[i]\n    plt.title('Solution')\n    plt.xticks(ind, tuple(\"x {:}\".format(rep) for patt, rep in solution))\n    plt.yticks(np.arange(0, roll_width, 10))\n    plt.show()", "Set & generate data", "roll_width = 64.5\noverrun = 6\norders = {\n    6.77: 10,\n    7.56: 40,\n    17.46: 33,\n    18.76: 10\n}\nwidths = list(sorted(orders.keys(), reverse=True))\npatmat = patternEnum(roll_width, widths)", "Send data to AMPL (Java/C++ style)", "# Send scalar values\nampl.getParameter('overrun').set(overrun)\nampl.getParameter('nPatterns').set(len(patmat))\n# Send order vector\nampl.getSet('WIDTHS').setValues(widths)\nampl.getParameter('order').setValues(orders)\n# Send pattern matrix\nampl.getParameter('rolls').setValues({\n    (widths[i], 1+p): patmat[p][i]\n    for i in range(len(widths))\n    for p in range(len(patmat))\n})", "Send data to AMPL (alternative style)", "# Send scalar values\nampl.param['overrun'] = overrun\nampl.param['nPatterns'] = len(patmat)\n# Send order vector\nampl.set['WIDTHS'] = widths\nampl.param['order'] = orders\n# Send pattern matrix\nampl.param['rolls'] = {\n    (widths[i], 1+p): patmat[p][i]\n    for i in range(len(widths))\n    for p in range(len(patmat))\n}", "Solve and report", "# Solve\nampl.option['solver'] = 'gurobi'\nampl.solve()\n# Retrieve solution\ncutting_plan = ampl.var['Cut'].getValues()\ncutvec = list(cutting_plan.getColumn('Cut.val'))\n\n# Display solution\nsolution = [\n    (patmat[p], cutvec[p])\n    for p in range(len(patmat))\n    if cutvec[p] > 0\n]\ncuttingPlot(roll_width, widths, solution)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
python-control/python-control
examples/pvtol-lqr-nested.ipynb
bsd-3-clause
[ "Vertical takeoff and landing aircraft\nThis notebook demonstrates the use of the python-control package for analysis and design of a controller for a vectored thrust aircraft model that is used as a running example through the text Feedback Systems by Astrom and Murray. This example makes use of MATLAB compatible commands. \nAdditional information on this system is available at\nhttp://www.cds.caltech.edu/~murray/wiki/index.php/Python-control/Example:_Vertical_takeoff_and_landing_aircraft\nSystem Description\nThis example uses a simplified model for a (planar) vertical takeoff and landing aircraft (PVTOL), as shown below:\n\n\nThe position and orientation of the center of mass of the aircraft is denoted by $(x,y,\\theta)$, $m$ is the mass of the vehicle, $J$ the moment of inertia, $g$ the gravitational constant and $c$ the damping coefficient. The forces generated by the main downward thruster and the maneuvering thrusters are modeled as a pair of forces $F_1$ and $F_2$ acting at a distance $r$ below the aircraft (determined by the geometry of the thrusters).\nLetting $z=(x,y,\\theta, \\dot x, \\dot y, \\dot\\theta$), the equations can be written in state space form as:\n$$\n\\frac{dz}{dt} = \\begin{bmatrix}\n z_4 \\\n z_5 \\\n z_6 \\\n -\\frac{c}{m} z_4 \\\n -g- \\frac{c}{m} z_5 \\\n 0\n \\end{bmatrix}\n +\n \\begin{bmatrix}\n 0 \\\n 0 \\\n 0 \\\n \\frac{1}{m} \\cos \\theta F_1 + \\frac{1}{m} \\sin \\theta F_2 \\\n \\frac{1}{m} \\sin \\theta F_1 + \\frac{1}{m} \\cos \\theta F_2 \\\n \\frac{r}{J} F_1\n \\end{bmatrix}\n$$\nLQR state feedback controller\nThis section demonstrates the design of an LQR state feedback controller for the vectored thrust aircraft example. This example is pulled from Chapter 6 (Linear Systems, Example 6.4) and Chapter 7 (State Feedback, Example 7.9) of Astrom and Murray. 
The Python code listed here is contained in the file pvtol-lqr.py.\nTo execute this example, we first import the libraries for SciPy, MATLAB plotting and the python-control package:", "from numpy import * # Grab all of the NumPy functions\nfrom matplotlib.pyplot import * # Grab MATLAB plotting functions\nfrom control.matlab import * # MATLAB-like functions\n%matplotlib inline", "The parameters for the system are given by", "m = 4 # mass of aircraft\nJ = 0.0475 # inertia around pitch axis\nr = 0.25 # distance to center of force\ng = 9.8 # gravitational constant\nc = 0.05 # damping factor (estimated)", "Choosing equilibrium inputs to be $u_e = (0, mg)$, the dynamics of the system $\\frac{dz}{dt}$, and their linearization $A$ about equilibrium point $z_e = (0, 0, 0, 0, 0, 0)$ are given by\n$$\n\\frac{dz}{dt} = \\begin{bmatrix}\n z_4 \\\n z_5 \\\n z_6 \\\n -g \\sin z_3 -\\frac{c}{m} z_4 \\\n g(\\cos z_3 - 1)- \\frac{c}{m} z_5 \\\n 0\n \\end{bmatrix}\n\\qquad\nA = \\begin{bmatrix}\n 0 & 0 & 0 &1&0&0\\\n 0&0&0&0&1&0 \\\n 0&0&0&0&0&1 \\\n 0&0&-g&-c/m&0&0 \\\n 0&0&0&0&-c/m&0 \\\n 0&0&0&0&0&0\n \\end{bmatrix}\n$$", "# State space dynamics\nxe = [0, 0, 0, 0, 0, 0] # equilibrium point of interest\nue = [0, m*g] # (note these are lists, not matrices)\n\n# Dynamics matrix (use matrix type so that * works for multiplication)\n# Note that we write A and B here in full generality in case we want\n# to test different xe and ue.\nA = matrix(\n [[ 0, 0, 0, 1, 0, 0],\n [ 0, 0, 0, 0, 1, 0],\n [ 0, 0, 0, 0, 0, 1],\n [ 0, 0, (-ue[0]*sin(xe[2]) - ue[1]*cos(xe[2]))/m, -c/m, 0, 0],\n [ 0, 0, (ue[0]*cos(xe[2]) - ue[1]*sin(xe[2]))/m, 0, -c/m, 0],\n [ 0, 0, 0, 0, 0, 0 ]])\n\n# Input matrix\nB = matrix(\n [[0, 0], [0, 0], [0, 0],\n [cos(xe[2])/m, -sin(xe[2])/m],\n [sin(xe[2])/m, cos(xe[2])/m],\n [r/J, 0]])\n\n# Output matrix \nC = matrix([[1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0]])\nD = matrix([[0, 0], [0, 0]])", "To compute a linear quadratic regulator for the system, we write the cost function 
as\n$$ J = \\int_0^\\infty (\\xi^T Q_\\xi \\xi + v^T Q_v v) dt,$$\nwhere $\\xi = z - z_e$ and $v = u - u_e$ represent the local coordinates around the desired equilibrium point $(z_e, u_e)$. We begin with diagonal matrices for the state and input costs:", "Qx1 = diag([1, 1, 1, 1, 1, 1])\nQu1a = diag([1, 1])\n(K, X, E) = lqr(A, B, Qx1, Qu1a); K1a = matrix(K)", "This gives a control law of the form $v = -K \\xi$, which can then be used to derive the control law in terms of the original variables:\n$$u = v + u_e = - K(z - z_d) + u_d.$$\nwhere $u_e = (0, mg)$ and $z_d = (x_d, y_d, 0, 0, 0, 0)$\nThe way we setup the dynamics above, $A$ is already hardcoding $u_d$, so we don't need to include it as an external input. So we just need to cascade the $-K(z-z_d)$ controller with the PVTOL aircraft's dynamics to control it. For didactic purposes, we will cheat in two small ways:\n\nFirst, we will only interface our controller with the linearized dynamics. Using the nonlinear dynamics would require the NonlinearIOSystem functionalities, which we leave to another notebook to introduce.\nSecond, as written, our controller requires full state feedback ($K$ multiplies full state vectors $z$), which we do not have access to because our system, as written above, only returns $x$ and $y$ (because of $C$ matrix). Hence, we would need a state observer, such as a Kalman Filter, to track the state variables. 
Instead, we assume that we have access to the full state.\n\nThe following code implements the closed loop system:", "# Our input to the system will only be (x_d, y_d), so we need to\n# multiply it by this matrix to turn it into z_d.\nXd = matrix([[1,0,0,0,0,0],\n [0,1,0,0,0,0]]).T\n\n# Closed loop dynamics\nH = ss(A-B*K,B*K*Xd,C,D)\n\n# Step response for the first input\nx,t = step(H,input=0,output=0,T=linspace(0,10,100))\n# Step response for the second input\ny,t = step(H,input=1,output=1,T=linspace(0,10,100))\n\nplot(t,x,'-',t,y,'--')\nplot([0, 10], [1, 1], 'k-')\nylabel('Position')\nxlabel('Time (s)')\ntitle('Step Response for Inputs')\nlegend(('Yx', 'Yy'), loc='lower right')\nshow()", "The plot above shows the $x$ and $y$ positions of the aircraft when it is commanded to move 1 m in each direction. The following shows the $x$ motion for increasing control weights $\\rho = 1, 40^2, 200^2$, matching the code below. A higher weight of the input term in the cost function causes a more sluggish response. It is created using the code:", "# Look at different input weightings\nQu1a = diag([1, 1])\nK1a, X, E = lqr(A, B, Qx1, Qu1a)\nH1ax = H = ss(A-B*K1a,B*K1a*Xd,C,D)\n\nQu1b = (40**2)*diag([1, 1])\nK1b, X, E = lqr(A, B, Qx1, Qu1b)\nH1bx = H = ss(A-B*K1b,B*K1b*Xd,C,D)\n\nQu1c = (200**2)*diag([1, 1])\nK1c, X, E = lqr(A, B, Qx1, Qu1c)\nH1cx = ss(A-B*K1c,B*K1c*Xd,C,D)\n\n[Y1, T1] = step(H1ax, T=linspace(0,10,100), input=0,output=0)\n[Y2, T2] = step(H1bx, T=linspace(0,10,100), input=0,output=0)\n[Y3, T3] = step(H1cx, T=linspace(0,10,100), input=0,output=0)\n\nplot(T1, Y1.T, 'b-', T2, Y2.T, 'r-', T3, Y3.T, 'g-')\nplot([0 ,10], [1, 1], 'k-')\ntitle('Step Response for Inputs')\nylabel('Position')\nxlabel('Time (s)')\nlegend(('Y1','Y2','Y3'),loc='lower right')\naxis([0, 10, -0.1, 1.4])\nshow()", "Lateral control using inner/outer loop design\nThis section demonstrates the design of a loop-shaping controller for the vectored thrust aircraft example.
This example is pulled from Chapter 11 (Frequency Domain Design) of Astrom and Murray. \nTo design a controller for the lateral dynamics of the vectored thrust aircraft, we make use of an \"inner/outer\" loop design methodology. We begin by representing the dynamics using the block diagram\n<img src=https://murray.cds.caltech.edu/images/murray.cds/3/3f/Pvtol-lateraltf.png>\nThe controller is constructed by splitting the process dynamics and controller into two components: an inner loop consisting of the roll dynamics $P_i$ and control $C_i$ and an outer loop consisting of the lateral position dynamics $P_o$ and controller $C_o$.\n<img src=https://murray.cds.caltech.edu/images/murray.cds/f/f1/Pvtol-nested-1.png>\nThe closed inner loop dynamics $H_i$ control the roll angle of the aircraft using the vectored thrust while the outer loop controller $C_o$ commands the roll angle to regulate the lateral position.\nThe following code imports the libraries that are required and defines the dynamics:", "from matplotlib.pyplot import * # Grab MATLAB plotting functions\nfrom control.matlab import * # MATLAB-like functions\n\n# System parameters\nm = 4 # mass of aircraft\nJ = 0.0475 # inertia around pitch axis\nr = 0.25 # distance to center of force\ng = 9.8 # gravitational constant\nc = 0.05 # damping factor (estimated)\n\n# Transfer functions for dynamics\nPi = tf([r], [J, 0, 0]) # inner loop (roll)\nPo = tf([1], [m, c, 0]) # outer loop (position)", "For the inner loop, we use a lead compensator:", "k = 200\na = 2\nb = 50\nCi = k*tf([1, a], [1, b]) # lead compensator\nLi = Pi*Ci", "The closed loop dynamics of the inner loop, $H_i$, are given by", "Hi = parallel(feedback(Ci, Pi), -m*g*feedback(Ci*Pi, 1))", "Finally, we design the lateral compensator using another lead compensator:", "# Now design the lateral control system\na = 0.02\nb = 5\nK = 2\nCo = -K*tf([1, 0.3], [1, 10]) # another lead compensator\nLo = -m*g*Po*Co", "The performance of the system can be characterized using the
sensitivity function and the complementary sensitivity function:", "L = Co*Hi*Po\nS = feedback(1, L)\nT = feedback(L, 1)\n\n# Note: control.matlab's step() returns (yout, T), MATLAB-style\ny, t = step(T, T=linspace(0,10,100))\nplot(t, y)\ntitle(\"Step Response\")\ngrid()\nxlabel(\"time (s)\")\nylabel(\"y(t)\")\nshow()", "The frequency response and Nyquist plot for the loop transfer function are computed using the commands", "bode(L)\nshow()\n\nnyquist(L, (0.0001, 1000))\nshow()\n\ngangof4(Hi*Po, Co)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
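The LQR design in the PVTOL notebook above can be cross-checked without python-control. The sketch below is an assumption of ours, not part of the original notebook: it re-enters the same linearized matrices at the default equilibrium, solves the continuous-time algebraic Riccati equation with SciPy, and recovers the gain as $K = R^{-1} B^\top P$, which is what `lqr(A, B, Qx1, Qu1a)` computes internally.

```python
# Minimal SciPy-only cross-check (a sketch, not the notebook's own code):
# reproduce lqr(A, B, Qx1, Qu1a) by solving the continuous-time
# algebraic Riccati equation for the linearized PVTOL model.
import numpy as np
from scipy.linalg import solve_continuous_are

m, J, r, g, c = 4.0, 0.0475, 0.25, 9.8, 0.05   # same parameters as above

A = np.array([[0, 0,  0,    1,    0, 0],
              [0, 0,  0,    0,    1, 0],
              [0, 0,  0,    0,    0, 1],
              [0, 0, -g, -c/m,    0, 0],
              [0, 0,  0,    0, -c/m, 0],
              [0, 0,  0,    0,    0, 0]])
B = np.array([[0, 0], [0, 0], [0, 0],
              [1/m, 0], [0, 1/m], [r/J, 0]])

Q = np.eye(6)   # state cost, Qx1 = diag([1, 1, 1, 1, 1, 1])
R = np.eye(2)   # input cost, Qu1a = diag([1, 1])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # LQR gain: K = R^{-1} B^T P

# Sanity check: the closed loop A - B K must be stable
eigs = np.linalg.eigvals(A - B @ K)
print(K.shape)                        # (2, 6)
print(bool(np.all(eigs.real < 0)))    # True
```

Under these assumptions, `K` should agree (up to numerical tolerance) with the gain returned by `control.matlab.lqr` in the notebook.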
kazzz24/deep-learning
autoencoder/Simple_Autoencoder_Solution.ipynb
mit
[ "A Simple Autoencoder\nWe'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.\n\nIn this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.", "%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)", "Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.", "img = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')", "We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.\n\n\nExercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation.
The loss should be calculated with the cross-entropy loss; there is a convenient TensorFlow function for this, tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.", "# Size of the encoding layer (the hidden layer)\nencoding_dim = 32\n\nimage_size = mnist.train.images.shape[1]\n\ninputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')\n\n# Output of hidden layer\nencoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)\n\n# Output layer logits\nlogits = tf.layers.dense(encoded, image_size, activation=None)\n# Sigmoid output from the logits\ndecoded = tf.nn.sigmoid(logits, name='output')\n\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(0.001).minimize(cost)", "Training", "# Create the session\nsess = tf.Session()", "Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards. \nCalling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()).
Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).", "epochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n feed = {inputs_: batch[0], targets_: batch[0]}\n batch_cost, _ = sess.run([cost, opt], feed_dict=feed)\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))", "Checking out the results\nBelow I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.", "fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)\n\nsess.close()", "Up Next\nWe're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers.\nIn practice, autoencoders aren't actually better at compression compared to typical methods like JPEGs and MP3s. But, they are being used for noise reduction, which you'll also build." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
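The autoencoder notebook above leans on tf.nn.sigmoid_cross_entropy_with_logits. As a sketch of what that call computes, the loss can be reproduced in plain NumPy using the numerically stable formulation stated in the TensorFlow documentation, max(x, 0) - x*z + log(1 + exp(-|x|)); the helper names below are ours, not TensorFlow's, and the batch is random data standing in for flattened images.

```python
# NumPy sketch of sigmoid cross-entropy computed directly on logits,
# using the stable form: loss = max(x, 0) - x*z + log(1 + exp(-|x|)),
# where x are logits and z are targets in [0, 1].
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_xent_with_logits(logits, targets):
    return (np.maximum(logits, 0) - logits * targets
            + np.log1p(np.exp(-np.abs(logits))))

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 784))     # stand-in for a batch of flat images
targets = rng.uniform(size=(4, 784))

stable = sigmoid_xent_with_logits(logits, targets)

# Naive equivalent: apply the sigmoid first, then binary cross-entropy
p = sigmoid(logits)
naive = -(targets * np.log(p) + (1 - targets) * np.log(1 - p))

print(bool(np.allclose(stable, naive)))   # True for moderate logits
```

The two agree algebraically, but the logits form avoids overflow in exp() for large |x|, which is why the notebook feeds raw logits to the loss and applies the sigmoid separately to get the reconstructions.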
ComputationalModeling/spring-2017-danielak
past-semesters/fall_2016/day-by-day/day08-modeling-viral-load-day1/Day_8_Pre_Class_Notebook.ipynb
agpl-3.0
[ "Day 8 - pre-class assignment\nGoals for today's pre-class assignment\n\nUse complex if statements and loops to make decisions in a computer program\n\nAssignment instructions\nWatch the videos below, read through Sections 4.1, 4.4, and 4.5 of the Python Tutorial, and complete the programming problems assigned below.\nThis assignment is due by 11:59 p.m. the day before class, and should be uploaded into the \"Pre-class assignments\" dropbox folder for Day 8. Submission instructions can be found at the end of the notebook.", "# Imports the functionality that we need to display YouTube videos in a Jupyter Notebook. \n# You need to run this cell before you run ANY of the YouTube videos.\n\nfrom IPython.display import YouTubeVideo \n\n# WATCH THE VIDEO IN FULL-SCREEN MODE\n\nYouTubeVideo(\"8_wSb927nH0\",width=640,height=360) # Complex 'if' statements", "Question 1: In the cell below, use numpy's 'arange' method to create an array filled with all of the integers between 1 and 10 (inclusive). Loop through the array, and use if/elif/else to:\n\nPrint out if the number is even or odd.\nPrint out if the number is divisible by 3.\nPrint out if the number is divisible by 5.\nIf the number is not divisible by either 3 or 5, print out \"wow, that's disappointing.\" \n\nNote 1: You may need more than one if/elif/else statement to do this!\nNote 2: If you have a numpy array named my_numpy_array, you don't necessarily have to use the numpy nditer method. You can loop using the standard python syntax as well. In other words:\nfor val in my_numpy_array:\n print(val)\nwill work just fine.", "# put your code here.\n\n\n\n\n# WATCH THE VIDEO IN FULL-SCREEN MODE\n\nYouTubeVideo(\"MzZCeHB0CbE\",width=640,height=360) # Complex loops", "Question 2: In the space below, loop through the given array, breaking when you get to the first negative number. Print out the value you're examining after you check for negative numbers. 
Create a variable and set it to zero before the loop, and add each number in the list to it after the check for negative numbers. What is that variable equal to after the loop?", "# put your code here.\n\nmy_list = [1,3,17,23,9,-4,2,2,11,4,-7]\n\n\n", "Question 3: In the space below, loop through the array given above, skipping every even number with the continue statement. Print out the value you're examining after you check for even numbers. Create a variable and set it to zero before the loop, and add each number in the list to it after the check for even numbers. What is that variable equal to after the loop?", "# put your code here\n\n", "Question 4: Copy and paste your code from question #2 above and change it in two ways:\n\nModify the numbers in the array so the if/break statement is never called.\nThere is an else clause after the end of the loop (not the end of the if statement!) that prints out \"yay, success!\" if the loop completes successfully, but not if it breaks.\n\nVerify that if you use the original array, the print statement in the else clause doesn't work!", "# put your code here!\n\n", "Assignment wrapup\nPlease fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!", "from IPython.display import HTML\nHTML(\n\"\"\"\n<iframe \n\tsrc=\"https://goo.gl/forms/l7LqskZxIADofpZy2?embedded=true\" \n\twidth=\"80%\" \n\theight=\"1200px\" \n\tframeborder=\"0\" \n\tmarginheight=\"0\" \n\tmarginwidth=\"0\">\n\tLoading...\n</iframe>\n\"\"\"\n)", "Congratulations, you're done!\nSubmit this assignment by uploading it to the course Desire2Learn web page. Go to the \"Pre-class assignments\" folder, find the dropbox link for Day 8, and upload it there.\nSee you in class!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
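For Question 2 in the pre-class notebook above, the break-and-accumulate pattern can be sketched as follows (one possible solution of ours, not the official answer key):

```python
# One possible solution sketch for Question 2: stop at the first negative
# number, printing each value examined and accumulating a running total.
my_list = [1, 3, 17, 23, 9, -4, 2, 2, 11, 4, -7]

total = 0
for val in my_list:
    if val < 0:
        break              # leave the loop at the first negative number
    print(val)             # value examined after the negativity check
    total += val

print(total)               # 1 + 3 + 17 + 23 + 9 = 53
```

The same skeleton answers Question 3 by swapping the `break` for a `continue` on even values, and Question 4 by appending an `else:` clause to the `for` loop.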
ML4DS/ML4all
R7.Regression_Overview/regression_overview.ipynb
mit
[ "Construction of Regression Models using Data\nAuthor: Jerónimo Arenas García (jarenas@tsc.uc3m.es)\n Jesús Cid Sueiro (jcid@tsc.uc3m.es)\n\nNotebook version: 2.1 (Sep 27, 2019)\n\nChanges: v.1.0 - First version. Extracted from regression_intro_knn v.1.0.\n v.1.1 - Compatibility with python 2 and python 3\n v.2.0 - New notebook generated. Fuses code from Notebooks R1, R2, and R3\n v.2.1 - Updated index notation", "# Import some libraries that will be necessary for working with data and displaying plots\n\n# To visualize plots in the notebook\n%matplotlib inline \n\nimport numpy as np\nimport scipy.io # To read matlab files\nimport pandas as pd # To read data tables from csv files\n\n# For plots and graphical results\nimport matplotlib \nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D \nimport pylab\n\n# For the student tests (only for python 2)\nimport sys\nif sys.version_info.major==2:\n from test_helper import Test\n\n# That's default image size for this interactive session\npylab.rcParams['figure.figsize'] = 9, 6 ", "1. 
The regression problem\nThe goal of regression methods is to predict the value of some target variable $S$ from the observation of one or more input variables $X_0, X_1, \\ldots, X_{K-1}$ (that we will collect in a single vector $\\bf X$).\nRegression problems arise in situations where the value of the target variable is not easily accessible, but we can measure other dependent variables, from which we can try to predict $S$.\n<img src=\"figs/block_diagram.png\" width=600>\nThe only information available to estimate the relation between the inputs and the target is a dataset $\\mathcal D$ containing several observations of all variables.\n$$\\mathcal{D} = \\{{\\bf x}_k, s_k\\}_{k=0}^{K-1}$$\nThe dataset $\\mathcal{D}$ must be used to find a function $f$ that, for any observation vector ${\\bf x}$, computes an output $\\hat{s} = f({\\bf x})$ that is a good prediction of the true value of the target, $s$.\n<img src=\"figs/predictor.png\" width=300>\nNote that for the generation of the regression model, we exploit the statistical dependence between random variable $S$ and random vector ${\\bf X}$. In this respect, we can assume that the available dataset $\\mathcal{D}$ consists of i.i.d. points from the joint distribution $p_{S,{\\bf X}}(s,{\\bf x})$. If we had access to the true distribution, a statistical approach would be more accurate; however, in many situations such knowledge is not available, but using training data to do the design is feasible (e.g., relying on historic data, or by manual labelling of a set of patterns).\n2. Examples of regression problems.\nThe <a href=http://scikit-learn.org/>scikit-learn</a> package contains several <a href=http://scikit-learn.org/stable/datasets/> datasets</a> related to regression problems. \n\n\n<a href=http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html#sklearn.datasets.load_boston > Boston dataset</a>: the target variable contains housing values in different suburbs of Boston. 
The goal is to predict these values based on several social, economic and demographic variables taken from these suburbs (you can get more details in the <a href = https://archive.ics.uci.edu/ml/datasets/Housing > UCI repository </a>).\n\n\n<a href=http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html#sklearn.datasets.load_diabetes /> Diabetes dataset</a>.\n\n\nWe can load these datasets as follows:", "from sklearn import datasets\n\n# Load the dataset. Select it by uncommenting the appropriate line\nD_all = datasets.load_boston()\n#D_all = datasets.load_diabetes()\n\n# Extract data and data parameters.\nX = D_all.data # Input data matrix (one row per observation)\nS = D_all.target # Target variables\nn_samples = X.shape[0] # Number of observations\nn_vars = X.shape[1] # Number of input variables", "This dataset contains", "print(n_samples)", "observations of the target variable and", "print(n_vars)", "input variables.\n3. Scatter plots\n3.1. 2D scatter plots\nWhen the instances of the dataset are multidimensional, they cannot be visualized directly, but we can get a first rough idea about the regression task if we plot the target variable versus one of the input variables. These representations are known as <i>scatter plots</i>.\nPython methods plot and scatter from the matplotlib package can be used for these graphical representations.", "# Select a dataset\nnrows = 4\nncols = 1 + (X.shape[1]-1)//nrows # integer division: subplot() needs an int\n\n# Some adjustment for the subplot.\npylab.subplots_adjust(hspace=0.2)\n\n# Plot all variables\nfor idx in range(X.shape[1]):\n ax = plt.subplot(nrows,ncols,idx+1)\n ax.scatter(X[:,idx], S) # <-- This is the key command\n ax.get_xaxis().set_ticks([])\n ax.get_yaxis().set_ticks([])\n plt.ylabel('Target')\n ", "4. Evaluating a regression task\nIn order to evaluate the performance of a given predictor, we need to quantify the quality of predictions.
This is usually done by means of a loss function $l(s,\\hat{s})$. Two common losses are\n\nSquare error: $l(s, \\hat{s}) = (s - \\hat{s})^2$\nAbsolute error: $l(s, \\hat{s}) = |s - \\hat{s}|$\n\nNote that both the square and absolute errors are functions of the estimation error $e = s-{\\hat s}$. However, this is not necessarily the case. As an example, imagine a situation in which we would like to introduce a penalty which increases with the magnitude of the estimated variable. For such case, the following cost would better fit our needs: $l(s,{\\hat s}) = s^2 \\left(s-{\\hat s}\\right)^2$.", "# In this section we will plot together the square and absolute errors\ngrid = np.linspace(-3,3,num=100)\nplt.plot(grid, grid**2, 'b-', label='Square error')\nplt.plot(grid, np.absolute(grid), 'r--', label='Absolute error')\nplt.xlabel('Error')\nplt.ylabel('Cost')\nplt.legend(loc='best')\nplt.show()", "In general, we do not care much about an isolated application of the regression model, but instead, we are looking for a generally good behavior, for which we need to average the loss function over a set of samples. In this notebook, we will use the average of the square loss, to which we will refer as the mean-square error (MSE).\n$$\\text{MSE} = \\frac{1}{K}\\sum_{k=0}^{K-1} \\left(s^{(k)}- {\\hat s}^{(k)}\\right)^2$$\nThe following code fragment defines a function to compute the MSE based on the availability of two vectors, one of them containing the predictions of the model, and the other the true target values.", "# We start by defining a function that calculates the average square error\ndef square_error(s, s_est):\n # Squeeze is used to make sure that s and s_est have the appropriate dimensions.\n y = np.mean(np.power((np.squeeze(s) - np.squeeze(s_est)), 2))\n return y", "4.1. 
Training and test data\nThe major goal of the regression problem is that the predictor should make good predictions for arbitrary new inputs, not taken from the dataset used by the regression algorithm. \nThus, in order to evaluate the prediction accuracy of some regression algorithm, we need some data, not used during the predictor design, to test the performance of the predictor under new data. To do so, the original dataset is usually divided into (at least) two disjoint sets:\n\nTraining set, $\cal{D}_{\text{train}}$: Used by the regression algorithm to determine predictor $f$.\nTest set, $\cal{D}_{\text{test}}$: Used to evaluate the performance of the regression algorithm.\n\nA good regression algorithm uses $\cal{D}_{\text{train}}$ to obtain a predictor with small average loss based on $\cal{D}_{\text{test}}$,\n$$\n{\bar R}_{\text{test}} = \frac{1}{K_{\text{test}}} \n\sum_{ ({\bf x},s) \in \mathcal{D}_{\text{test}}} l(s, f({\bf x}))\n$$\nwhere $K_{\text{test}}$ is the size of the test set.\nAs a designer, you only have access to training data. However, for illustration purposes, you may be given a test dataset for many examples in this course. Note that in such a case, using the test data to adjust the regression model is completely forbidden. You should work as if such test data set were not available at all, and resort to it just to assess the performance of the model after the design is complete.\nTo model the availability of a train/test partition, we next split the Boston dataset into training and test partitions, using 60% and 40% of the data, respectively.", "from sklearn.model_selection import train_test_split\n\nX_train, X_test, s_train, s_test = train_test_split(X, S, test_size=0.4, random_state=0)", "4.2.
A first example: A baseline regression model\nA first very simple method to build the regression model is to use the average of all the target values in the training set as the output of the model, discarding the value of the observation input vector.\nThis approach can be considered as a baseline, given that any other method making an effective use of the observation variables, statistically related to $s$, should improve the performance of this method.\nThe following code fragment uses the train data to compute the baseline regression model, and it shows the MSE calculated over the test partitions.", "S_baseline = np.mean(s_train)\n\nprint('The baseline estimator is:', S_baseline)\n\n#Compute MSE for the train data\n#MSE_train = square_error(s_train, S_baseline)\n\n#Compute MSE for the test data. IMPORTANT: Note that we still use\n#S_baseline as the prediction.\nMSE_test = square_error(s_test, S_baseline)\n\n#print('The MSE for the training data is:', MSE_train)\nprint('The MSE for the test data is:', MSE_test)", "5. Parametric and non-parametric regression models\nGenerally speaking, we can distinguish two approaches when designing a regression model:\n\nParametric approach: In this case, the estimation function is given <i>a priori</i> a parametric form, and the goal of the design is to find the most appropriate values of the parameters according to a certain goal\n\nFor instance, we could assume a linear expression\n $${\\hat s} = f({\\bf x}) = {\\bf w}^\\top {\\bf x}$$\n and adjust the parameter vector in order to minimize the average of the quadratic error over the training data. This is known as least-squares regression, and we will study it in Section 8 of this notebook.\n\nNon-parametric approach: In this case, the analytical shape of the regression model is not assumed <i>a priori</i>.\n\n6. 
Non-parametric method: Regression with the $k$-nn method\nThe principles of the $k$-nn method are the following:\n\nFor each point where a prediction is to be made, find the $k$ closest neighbors to that point (in the training set).\nObtain the estimate by averaging the labels corresponding to the selected neighbors.\n\nThe number of neighbors is a hyperparameter that plays an important role in the performance of the method. You can test its influence by changing $k$ in the following piece of code.", "from sklearn import neighbors\n\nn_neighbors = 1\n\nknn = neighbors.KNeighborsRegressor(n_neighbors)\nknn.fit(X_train, s_train)\n\ns_hat_train = knn.predict(X_train)\ns_hat_test = knn.predict(X_test)\n\nprint('The MSE for the training data is:', square_error(s_train, s_hat_train))\nprint('The MSE for the test data is:', square_error(s_test, s_hat_test))\n\nmax_k = 25\nn_neighbors_list = np.arange(max_k)+1\n\nMSE_train = []\nMSE_test = []\n\nfor n_neighbors in n_neighbors_list:\n knn = neighbors.KNeighborsRegressor(n_neighbors)\n knn.fit(X_train, s_train)\n\n s_hat_train = knn.predict(X_train)\n s_hat_test = knn.predict(X_test)\n\n MSE_train.append(square_error(s_train, s_hat_train))\n MSE_test.append(square_error(s_test, s_hat_test))\n \nplt.plot(n_neighbors_list, MSE_train,'bo', label='Training square error')\nplt.plot(n_neighbors_list, MSE_test,'ro', label='Test square error')\nplt.xlabel('$k$')\nplt.axis('tight')\n\nplt.legend(loc='best')\nplt.show()", "Although the above figures illustrate the evolution of the training and test MSE for different selections of the number of neighbors, it is important to note that this figure, and in particular the red points, cannot be used to select the value of this parameter. Remember that it is only legal to use the test data to assess the final performance of the method, which also implies that any parameters inherent to the method should be adjusted using the train data only.\n7.
Hyperparameter selection via cross-validation\nAn inconvenience of the $k$-nn method is that the selection of $k$ influences the final error of the algorithm. In the previous experiments, we kept the value of $k$ that minimized the square error on the training set. However, we also noticed that the location of the minimum is not necessarily the same from the perspective of the test data. Ideally, we would like the designed regression model to work as well as possible on future unlabeled patterns that are not available during the training phase. This property is known as <i>generalization</i>. Fitting the training data is only pursued in the hope that we are also indirectly obtaining a model that generalizes well. In order to achieve this goal, there are some strategies that try to guarantee a correct generalization of the model. One such approach is known as <b>cross-validation</b>.\nSince using the test labels during the training phase is not allowed (they should be kept aside to simulate the future application of the regression model on unseen patterns), we need to figure out some way to improve our estimation of the hyperparameter that requires only training data. Cross-validation allows us to do so via the following steps:\n\nSplit the training data into several (generally non-overlapping) subsets. If we use $M$ subsets, the method is referred to as $M$-fold cross-validation. If we consider each pattern a different subset, the method is usually referred to as leave-one-out (LOO) cross-validation.\nCarry out the training of the system $M$ times. For each run, use a different partition as a <i>validation</i> set, and use the remaining partitions as the training set.
Evaluate the performance for different choices of the hyperparameter (i.e., for different values of $k$ for the $k$-NN method).\nAverage the validation error over all partitions, and pick the hyperparameter that provided the minimum validation error.\nRerun the algorithm using all the training data, keeping the value of the parameter that came out of the cross-validation process.\n\n<img src=\"https://chrisjmccormick.files.wordpress.com/2013/07/10_fold_cv.png\">\nExercise: Use the KFold function from the sklearn library to validate the parameter $k$. Use a 10-fold validation strategy. What is the best number of neighbors according to this strategy? What is the corresponding MSE averaged over the test data?", "from sklearn.model_selection import KFold\n\nmax_k = 25\nn_neighbors_list = np.arange(max_k)+1\n\nMSE_val = np.zeros((max_k,))\n\nnfolds = 10\nkf = KFold(n_splits=nfolds)\nfor train, val in kf.split(X_train):\n for idx,n_neighbors in enumerate(n_neighbors_list):\n knn = neighbors.KNeighborsRegressor(n_neighbors)\n knn.fit(X_train[train,:], s_train[train])\n\n s_hat_val = knn.predict(X_train[val,:])\n\n MSE_val[idx] += square_error(s_train[val], s_hat_val)\n \nMSE_val = [el/nfolds for el in MSE_val]\n\nselected_k = np.argmin(MSE_val) + 1\n\nplt.plot(n_neighbors_list, MSE_train,'bo', label='Training square error')\nplt.plot(n_neighbors_list, MSE_val,'ro', label='Validation square error')\nplt.plot(selected_k, MSE_test[selected_k-1],'gs', label='Test square error')\nplt.xlabel('$k$')\nplt.axis('tight')\n\nplt.legend(loc='best')\nplt.show()\n\nprint('Cross-validation selected the following value for the number of neighbors:', selected_k)\nprint('Test MSE:', MSE_test[selected_k-1])\n\n", "8. A parametric regression method: Least squares regression\n8.1.
Problem definition\n\n\nThe goal is to learn a (possibly non-linear) regression model from a set of $K$ labeled points, $\{{\bf x}_k,s_k\}_{k=0}^{K-1}$.\n\n\nWe assume a parametric function of the form:\n\n\n$${\hat s}({\bf x}) = f({\bf x}) = w_0 z_0({\bf x}) + w_1 z_1({\bf x}) + \dots + w_{m-1} z_{m-1}({\bf x})$$\nwhere $z_i({\bf x})$ are particular transformations of the input vector variables.\nSome examples are:\n\n\nIf ${\bf z} = {\bf x}$, the model is just a linear combination of the input variables.\n\n\nIf ${\bf z} = \left[\begin{array}{c}1\\{\bf x}\end{array}\right]$, we have again a linear combination with the inclusion of a constant term.\n\n\nFor unidimensional input $x$, ${\bf z} = [1, x, x^2, \dots,x^{M}]^\top$ would implement a polynomial of degree $M$.\n\n\nNote that the variables of ${\bf z}$ could also be computed combining different variables of ${\bf x}$. E.g., if ${\bf x} = [x_1,x_2]^\top$, a degree-two polynomial would be implemented with \n $${\bf z} = \left[\begin{array}{c}1\\x_1\\x_2\\x_1^2\\x_2^2\\x_1 x_2\end{array}\right]$$ \n\n\nThe above expression does not assume a polynomial model. For instance, we could consider ${\bf z} = [\log(x_1),\log(x_2)]$.\n\n\nLeast squares (LS) regression finds the coefficients of the model with the aim of minimizing the square of the residuals. If we define ${\bf w} = [w_0,w_1,\dots,w_{m-1}]^\top$, the LS solution would be defined as\n\begin{equation}{\bf w}_{LS} = \arg \min_{\bf w} \sum_{k=0}^{K-1} e_k^2 = \arg \min_{\bf w} \sum_{k=0}^{K-1} \left[s_k - {\hat s}_k \right]^2 \end{equation}\n8.2.
Vector Notation\nIn order to solve the LS problem it is convenient to define the following vectors and matrices:\n\nWe can group together all available target values to form the following vector\n\n$${\bf s} = \left[s_0, s_1, \dots, s_{K-1} \right]^\top$$\n\nThe estimation of the model for a single input vector ${\bf z}_k$ (which would be computed from ${\bf x}_k$), can be expressed as the following inner product\n\n$${\hat s}_k = {\bf z}_k^\top {\bf w}$$\n\nIf we now group all input vectors into a matrix ${\bf Z}$, so that each row of ${\bf Z}$ contains the transpose of the corresponding ${\bf z}_k$, we can express\n\n$$\n\hat{{\bf s}} \n = \left[{\hat s}_0, {\hat s}_1, \dots, {\hat s}_{K-1} \right]^\top \n = {\bf Z} {\bf w}, \;\;\;\; \text{with} \;\; \n{\bf Z} = \left[\begin{array}{c} {\bf z}_0^\top \\ \n {\bf z}_1^\top \\ \n \vdots \\ \n {\bf z}_{K-1}^\top \n \end{array}\right]$$\n8.3. Least-squares solution\n\nUsing the previous notation, the cost minimized by the LS model can be expressed as\n\n$$C({\bf w}) = \sum_{k=0}^{K-1} \left[s_k - {\hat s}_k \right]^2 = \|{\bf s} - {\hat{\bf s}}\|^2 = \|{\bf s} - {\bf Z}{\bf w}\|^2$$\n\nSince the above expression depends quadratically on ${\bf w}$ and is non-negative, we know that there is only one point where the derivative of $C({\bf w})$ becomes zero, and that point is necessarily a minimum of the cost\n\n$$\nabla_{\bf w} \|{\bf s} - {\bf Z}{\bf w}\|^2\Bigg|_{{\bf w} = {\bf w}_{LS}} = {\bf 0}$$\n<b>Exercise:</b>\nSolve the previous problem to show that\n$${\bf w}_{LS} = \left( {\bf Z}^\top{\bf Z} \right)^{-1} {\bf Z}^\top{\bf s}$$\nThe next fragment of code fits polynomials of increasing degree to randomly generated training data.", "n_points = 20\nn_grid = 200\nfrec = 3\nstd_n = 0.2\nmax_degree = 20\n\ncolors = 'brgcmyk'\n\n#Location of the training points\nX_tr = (3 * np.random.random((n_points,1)) - 0.5)\n\n#Labels are obtained from a
sinusoidal function, and contaminated by noise\nS_tr = np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)\n\n#Equally spaced points in the X-axis\nX_grid = np.linspace(np.min(X_tr),np.max(X_tr),n_grid)\n\n#We start by building the Z matrix\nZ = []\nfor el in X_tr.tolist():\n Z.append([el[0]**k for k in range(max_degree+1)])\nZ = np.matrix(Z)\n\nZ_grid = []\nfor el in X_grid.tolist():\n Z_grid.append([el**k for k in range(max_degree+1)])\nZ_grid = np.matrix(Z_grid)\n\nplt.plot(X_tr,S_tr,'b.')\n\nfor k in [1, 2, n_points]: # range(max_degree+1):\n Z_iter = Z[:,:k+1]\n\n # Least squares solution\n #w_LS = (np.linalg.inv(Z_iter.T.dot(Z_iter))).dot(Z_iter.T).dot(S_tr)\n \n # Least squares solution, with less numerical error\n w_LS, resid, rank, s = np.linalg.lstsq(Z_iter, S_tr)\n #Estimates at all grid points\n fout = Z_grid[:,:k+1].dot(w_LS)\n fout = np.array(fout).flatten()\n plt.plot(X_grid,fout,colors[k%len(colors)]+'-',label='Degree '+str(k))\n\nplt.legend(loc='best')\nplt.ylim(1.2*np.min(S_tr), 1.2*np.max(S_tr))\nplt.show()", "It may seem that increasing the degree of the polynomial is always beneficial, as we can implement a more expressive function. A polynomial of degree $M$ includes all polynomials of lower degree as particular cases. However, if we increase the number of parameters without control, the polynomial eventually becomes expressive enough to fit any given set of training points to arbitrary precision, which does not necessarily mean that we are obtaining a model that can be extrapolated to new data.\nThe conclusion is that, when adjusting a parametric model using least squares, we need to validate the model, for which we can use the cross-validation techniques introduced in Section 7. In this context, validating the model implies:\n - Validating the kind of model that will be used, e.g., linear, polynomial, logarithmic, etc ... 
\n - Validating any additional parameters that the model may have, e.g., if selecting a polynomial model, the degree of the polynomial.\nThe code below shows the performance of different models. However, no validation process is considered, so the reported test MSEs cannot be used as criteria to select the best model.", "# Linear model with no bias\nw_LS, resid, rank, s = np.linalg.lstsq(X_train, s_train)\ns_hat_test = X_test.dot(w_LS)\nprint('Test MSE for linear model without bias:', square_error(s_test, s_hat_test))\n\n# Linear model with bias\nZ_train = np.hstack((np.ones((X_train.shape[0],1)), X_train))\nZ_test = np.hstack((np.ones((X_test.shape[0],1)), X_test))\n\nw_LS, resid, rank, s = np.linalg.lstsq(Z_train, s_train)\ns_hat_test = Z_test.dot(w_LS)\nprint('Test MSE for linear model with bias:', square_error(s_test, s_hat_test))\n\n# Polynomial model degree 2\nZ_train = np.hstack((np.ones((X_train.shape[0],1)), X_train, X_train**2))\nZ_test = np.hstack((np.ones((X_test.shape[0],1)), X_test, X_test**2))\n\nw_LS, resid, rank, s = np.linalg.lstsq(Z_train, s_train)\ns_hat_test = Z_test.dot(w_LS)\nprint('Test MSE for polynomial model (order 2):', square_error(s_test, s_hat_test))\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
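To close the least-squares exercise above, the normal-equation solution can be checked numerically against NumPy's solver. This is a self-contained sketch on synthetic data (the data and variable names are illustrative, not the notebook's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a degree-2 polynomial: s = 1 + 2x - 0.5x^2 + noise
x = rng.uniform(-1.0, 1.0, size=50)
s = 1 + 2 * x - 0.5 * x**2 + 0.01 * rng.standard_normal(50)

# Z matrix of transformed inputs [1, x, x^2], one row per sample
Z = np.vander(x, 3, increasing=True)

# Closed-form LS solution from the exercise: w = (Z^T Z)^{-1} Z^T s
w_closed = np.linalg.inv(Z.T @ Z) @ Z.T @ s

# Numerically preferable equivalent used in the notebook's code cells
w_lstsq, *_ = np.linalg.lstsq(Z, s, rcond=None)

print(np.allclose(w_closed, w_lstsq))  # True
```

With little noise both estimates land close to the true coefficients (1, 2, -0.5); `lstsq` avoids forming `Z.T @ Z` explicitly, which matters when `Z` is ill-conditioned, e.g., for high-degree polynomials.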
probml/pyprobml
notebooks/book2/28/causal_impact_tfp.ipynb
mit
[ "<a href=\"https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/causal_impact_tfp.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCausal Impact as implemented on top of the TFP SSM library\nhttps://github.com/WillianFuks/tfcausalimpact/blob/master/notebooks/getting_started.ipynb\nSetup", "!pip install -qq tfcausalimpact\n\n%matplotlib inline\n\nimport sys\nimport os\n\n\n# sys.path.append(os.path.abspath('../'))\n\n\nimport numpy as np\nimport pandas as pd\n\ntry:\n import tensorflow as tf\nexcept ModuleNotFoundError:\n %pip install -qq tensorflow\n import tensorflow as tf\ntry:\n import tensorflow_probability as tfp\nexcept ModuleNotFoundError:\n %pip install -qq tensorflow-probability\n import tensorflow_probability as tfp\nimport matplotlib.pyplot as plt\n\ntry:\n from causalimpact import CausalImpact\nexcept ModuleNotFoundError:\n %pip install -qq causalimpact\n from causalimpact import CausalImpact\n\n\ntfd = tfp.distributions\nplt.rcParams[\"figure.figsize\"] = [15, 10]", "Data", "# This is modified from an example presented in Google's R code.\n# https://google.github.io/CausalImpact/CausalImpact.html#creating-an-example-dataset\n# The true generative process is y = 2 * x0 + 3 * x1 + 0 * x2 + random_walk\n# where x2 is an irrelevant input variable.\n# We simulate the impact of an intervention at I=70 by adding an offset of 5 (decreasing to 0) to the outcome y.\n\nT = 100\nI = 70\n\nobserved_stddev, observed_initial = (\n tf.convert_to_tensor(value=0.1, dtype=tf.float32),\n tf.convert_to_tensor(value=0.0, dtype=tf.float32),\n)\nlevel_scale_prior = tfd.LogNormal(loc=tf.math.log(0.0001 * observed_stddev), scale=1, name=\"level_scale_prior\")\n# level_scale_prior = tfd.LogNormal(loc=tf.math.log(0.05 * observed_stddev), scale=1, name='level_scale_prior')\n\ninitial_state_prior = tfd.MultivariateNormalDiag(\n loc=observed_initial[..., tf.newaxis],\n 
scale_diag=(tf.abs(observed_initial) + observed_stddev)[..., tf.newaxis],\n name=\"initial_level_prior\",\n)\nll_ssm = tfp.sts.LocalLevelStateSpaceModel(\n T, initial_state_prior=initial_state_prior, level_scale=level_scale_prior.sample()\n)\nll_ssm_sample = np.squeeze(ll_ssm.sample().numpy())\n\n\nnp.random.seed(0)\nx0 = 1 * np.random.rand(T)\nx1 = 2 * np.random.rand(T)\nx2 = 3 * np.random.rand(T)\n\ny = 2 * x0 + 3 * x1 + ll_ssm_sample\n\n# intervention\noffset = 5\n# y[I:] += offset\nduration = 10\ny[I : (I + duration)] += np.linspace(offset, 0, num=duration) # decreasing effect\n\ndata = pd.DataFrame({\"x0\": x0, \"x1\": x1, \"x2\": x2, \"y\": y}, columns=[\"y\", \"x0\", \"x1\", \"x2\"])\n\ndata.plot()\nplt.axvline(I - 1, linestyle=\"--\", color=\"k\")\nplt.legend()\nplt.savefig(\"causal-impact-data.pdf\")", "Fit default model", "pre_period = [0, I - 1]\npost_period = [I, T - 1]\nci = CausalImpact(data, pre_period, post_period)\n\nprint(ci.model_samples.keys())\n# We sample 100 times from the variational posterior of each parameter\n# https://github.com/WillianFuks/tfcausalimpact/blob/master/causalimpact/model.py#L378\n\nfor name, values in ci.model_samples.items():\n print(f\"{name}: {values.numpy().mean(axis=0)}\")\n\nci.plot(show=False)\nfig = plt.gcf()\nfig.savefig(\"causal-impact-inferences.pdf\")\n\nprint(ci.summary())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
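The core idea of the notebook above — fit a model on the pre-intervention period, predict the counterfactual afterwards, and read the effect off the residuals — can be sketched without TFP. This is a deliberately simplified OLS stand-in, not the tfcausalimpact API:

```python
import numpy as np

rng = np.random.default_rng(1)
T, I = 100, 70  # series length and intervention point, as in the example

# Covariates unaffected by the intervention; outcome with a known lift of 5
x0, x1 = rng.random(T), rng.random(T)
y = 2 * x0 + 3 * x1 + 0.1 * rng.standard_normal(T)
true_lift = 5.0
y[I:] += true_lift

# Fit the counterfactual model on the pre-period only
X = np.column_stack([np.ones(T), x0, x1])
w, *_ = np.linalg.lstsq(X[:I], y[:I], rcond=None)

# Counterfactual prediction for the post-period and the average effect
y_hat = X[I:] @ w
effect = float((y[I:] - y_hat).mean())
print(f"estimated effect: {effect:.2f}")  # close to 5
```

CausalImpact adds to this the structural time-series component (the local level) and full posterior uncertainty over the counterfactual, which is what the credible intervals in `ci.plot()` come from.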
kubeflow/kfserving-lts
docs/samples/logger/knative-eventing/logger_demo.ipynb
apache-2.0
[ "KFServing Knative Logger Demo\nWe create a message dumper Knative service to print out CloudEvents it receives:", "!pygmentize message-dumper.yaml\n\n!kubectl apply -f message-dumper.yaml", "Create a channel broker.", "!pygmentize broker.yaml\n\n!kubectl create -f broker.yaml", "Create a Knative trigger to pass events to the message logger.", "!pygmentize trigger.yaml\n\n!kubectl apply -f trigger.yaml", "Create an sklearn model with associated logger to push events to the message logger URL.", "!pygmentize sklearn-logging.yaml\n\n!kubectl apply -f sklearn-logging.yaml\n\nCLUSTER_IPS=!(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')\nCLUSTER_IP=CLUSTER_IPS[0]\nprint(CLUSTER_IP)\n\nSERVICE_HOSTNAMES=!(kubectl get inferenceservice sklearn-iris -o jsonpath='{.status.url}' | cut -d \"/\" -f 3)\nSERVICE_HOSTNAME=SERVICE_HOSTNAMES[0]\nprint(SERVICE_HOSTNAME)\n\nimport requests\ndef predict(X, name, svc_hostname, cluster_ip):\n formData = {\n 'instances': X\n }\n headers = {}\n headers[\"Host\"] = svc_hostname\n res = requests.post('http://'+cluster_ip+'/v1/models/'+name+':predict', json=formData, headers=headers)\n if res.status_code == 200:\n return res.json()\n else:\n print(\"Failed with \",res.status_code)\n return []\n\npredict([[6.8, 2.8, 4.8, 1.4]],\"sklearn-iris\",SERVICE_HOSTNAME,CLUSTER_IP)\n\n!kubectl logs $(kubectl get pod -l serving.knative.dev/configuration=message-dumper -o jsonpath='{.items[0].metadata.name}') user-container\n\n!kubectl delete -f sklearn-logging.yaml\n\n!kubectl delete -f trigger.yaml\n\n!kubectl delete -f message-dumper.yaml" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
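The predict() helper above relies on routing by the Host header. The same request can be built and inspected offline with only the standard library — the IP and hostname below are placeholders, not values from a real cluster:

```python
import json
import urllib.request

# Placeholder values standing in for CLUSTER_IP and SERVICE_HOSTNAME
cluster_ip = "1.2.3.4"
svc_hostname = "sklearn-iris.default.example.com"
name = "sklearn-iris"

# Build (but do not send) the same POST the notebook's predict() issues
body = json.dumps({"instances": [[6.8, 2.8, 4.8, 1.4]]}).encode()
req = urllib.request.Request(
    f"http://{cluster_ip}/v1/models/{name}:predict",
    data=body,
    headers={"Host": svc_hostname, "Content-Type": "application/json"},
    method="POST",
)

print(req.full_url)            # http://1.2.3.4/v1/models/sklearn-iris:predict
print(req.get_header("Host"))  # sklearn-iris.default.example.com
```

Setting the Host header while connecting to the ingress IP is what lets the Istio gateway route the request to the right InferenceService, even though DNS for the service hostname does not resolve from outside the cluster.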
mjuenema/ipython-notebooks
iscpy.ipynb
bsd-2-clause
[ "ISCpy\nISCpy is a robust ISC config file parser. It has virtually unlimited\npossibilities for depth and quantity of ISC config files. ISC config files include BIND and DHCP config files, among a few others.\nThe example below shows how to parse a canonical BIND configuration as generated by running named-checkconf -p.", "import iscpy\n\nwith open('named.conf') as fp:\n s = fp.read()\n\nconfig = iscpy.ParseISCString(s)\n\ntype(config)", "The config dictionary is keyed by the different sections of the BIND configuration.", "config.keys()[0]\n\nset([key.split()[0] for key in config.keys()]) # 'view' missing in this example\n\nacls = {key: value for key,value in config.items() if key.startswith('acl')}\nzones = {key: value for key,value in config.items() if key.startswith('zone')}\n# etc.\nacls.keys()", "The sections are dictionaries again. Note that lists are converted into dictionaries with values set to True.", "config['zone \"86.168.192.in-addr.arpa\"']\n\nconfig['zone \"16.172.in-addr.arpa\"']" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
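Since iscpy (and a named.conf to feed it) may not be at hand, the key-grouping idiom shown above can be tried on a hand-built stand-in for the dictionary that ParseISCString returns:

```python
# A small stand-in for the nested dict produced by iscpy.ParseISCString
config = {
    'acl "internal"': {"192.168.0.0/24": True},
    'zone "example.com"': {"type": "master", "file": '"example.com.db"'},
    'zone "16.172.in-addr.arpa"': {"type": "master"},
    "options": {"directory": '"/var/cache/bind"'},
}

# Group sections by the first word of their key, as done in the notebook
acls = {k: v for k, v in config.items() if k.startswith("acl")}
zones = {k: v for k, v in config.items() if k.startswith("zone")}
section_kinds = {k.split()[0] for k in config}

print(sorted(section_kinds))  # ['acl', 'options', 'zone']
```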
poldrack/fmri-analysis-vm
analysis/Bayesian/BayesianGLM.ipynb
mit
[ "This notebook demonstrates the basics of Bayesian estimation of the general linear model. This presentation is based on material from http://twiecki.github.io/blog/2013/08/12/bayesian-glms-1/ . First let's generate some data for a simple design.", "import os,sys\nimport numpy\n%matplotlib inline\nimport matplotlib.pyplot as plt\nsys.path.insert(0,'../')\nfrom utils.mkdesign import create_design_singlecondition\nfrom nipy.modalities.fmri.hemodynamic_models import spm_hrf,compute_regressor\nfrom statsmodels.tsa.arima_process import arma_generate_sample\nimport scipy.stats\nimport pymc3\n\ntslength=300\nd,design=create_design_singlecondition(blockiness=1.0,deslength=tslength,\n blocklength=20,offset=20)\nregressor,_=compute_regressor(design,'spm',numpy.arange(0,tslength))\n\n\nar1_noise=arma_generate_sample([1,0.3],[1,0.],len(regressor))\n\nX=numpy.hstack((regressor,numpy.ones((len(regressor),1))))\nbeta=numpy.array([4,100])\nnoise_sd=10\ndata = X.dot(beta) + ar1_noise*noise_sd", "First estimate the model using ordinary least squares", "beta_hat=numpy.linalg.inv(X.T.dot(X)).dot(X.T).dot(data)\nresid=data - X.dot(beta_hat)\ndf=(X.shape[0] - X.shape[1])\nmse=resid.dot(resid)\nsigma2hat=(mse)/float(df)\n\nxvar=X[:,0].dot(X[:,0])\nc=numpy.array([1,0]) # contrast for PPI\nt=c.dot(beta_hat)/numpy.sqrt(c.dot(numpy.linalg.inv(X.T.dot(X)).dot(c))*sigma2hat)\nprint ('betas [slope,intercept]:',beta_hat)\nprint ('t [for slope vs. 
zero]=',t, 'p=',1.0 - scipy.stats.t.cdf(t,X.shape[0] - X.shape[1]))\n\n", "Compute the frequentist 95% confidence intervals", "confs = [[beta_hat[0] - scipy.stats.t.ppf(0.975,df) * numpy.sqrt(sigma2hat/xvar), \n beta_hat[0] + scipy.stats.t.ppf(0.975,df) * numpy.sqrt(sigma2hat/xvar)],\n [beta_hat[1] - scipy.stats.t.ppf(0.975,df) * numpy.sqrt(sigma2hat/X.shape[0]), \n beta_hat[1] + scipy.stats.t.ppf(0.975,df) * numpy.sqrt(sigma2hat/X.shape[0])]]\n\nprint ('slope:',confs[0])\nprint ('intercept:',confs[1])", "Now let's estimate the same model using Bayesian estimation. First we use the analytic framework described in the previous notebook.", "prior_sd=10\nv=numpy.identity(2)*(prior_sd**2)\n\nbeta_hat_bayes=numpy.linalg.inv(X.T.dot(X) + (sigma2hat/(prior_sd**2))*numpy.identity(2)).dot(X.T.dot(data))\nprint ('betas [slope,intercept]:',beta_hat_bayes)", "Now let's estimate it using Markov Chain Monte Carlo (MCMC) using the No U-turn Sampler (NUTS) (http://www.stat.columbia.edu/~gelman/research/unpublished/nuts.pdf) as implemented in PyMC3.", "with pymc3.Model() as model: # model specifications in PyMC3 are wrapped in a with-statement\n # Define priors\n sigma = pymc3.HalfCauchy('sigma', beta=10, testval=1.)\n intercept = pymc3.Normal('Intercept', 0, sd=prior_sd)\n x_coeff = pymc3.Normal('x', 0, sd=prior_sd)\n \n # Define likelihood\n likelihood = pymc3.Normal('y', mu=intercept + x_coeff * X[:,0], \n sd=sigma, observed=data)\n \n # Inference!\n start = pymc3.find_MAP() # Find starting value by optimization\n step = pymc3.NUTS(scaling=start) # Instantiate MCMC sampling algorithm\n trace = pymc3.sample(4000, step, start=start, progressbar=False) # draw 2000 posterior samples using NUTS sampling", "The starting point is the maximum a posteriori (MAP) estimate, which is the same as the one we just computed above.", "print(start)", "Now let's look at the results from the MCMC analysis for the slope parameter. 
Note that we discard the first 100 steps of the MCMC trace in order to \"burn in\" the chain (http://stats.stackexchange.com/questions/88819/mcmc-methods-burning-samples).", "plt.figure(figsize=(7, 7))\npymc3.traceplot(trace[100::5],'x')\nplt.tight_layout();\npymc3.autocorrplot(trace[100::5])", "Let's look at a summary of the estimates. How does the 95% highest probability density (HPD) region from the Bayesian analysis compare to the frequentist 95% confidence intervals?", "pymc3.summary(trace[100:])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
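The analytic estimate beta_hat_bayes above is exactly ridge regression with penalty sigma^2/prior_sd^2, so the prior width controls shrinkage. A self-contained sketch on a toy design (synthetic data, not the fMRI regressors used in the notebook):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy design: one regressor plus an intercept column
n = 200
X = np.column_stack([rng.standard_normal(n), np.ones(n)])
beta_true = np.array([4.0, 100.0])
sigma2 = 1.0
y = X @ beta_true + np.sqrt(sigma2) * rng.standard_normal(n)

def posterior_mean(prior_sd):
    """Posterior mean under a N(0, prior_sd^2 I) prior on the coefficients."""
    ridge = (sigma2 / prior_sd**2) * np.eye(X.shape[1])
    return np.linalg.inv(X.T @ X + ridge) @ X.T @ y

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# A wide prior recovers OLS; a narrow prior shrinks the estimates toward zero
print(np.allclose(posterior_mean(1e6), beta_ols))              # True
print(bool((np.abs(posterior_mean(0.1)) < np.abs(beta_ols)).all()))  # True
```

This is the same trade-off the MCMC fit explores, except that NUTS also integrates over the noise variance instead of plugging in an estimate.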
LimeeZ/phys292-2015-work
assignments/assignment07/AlgorithmsEx01.ipynb
mit
[ "Algorithms Exercise 1\nImports", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np", "Word counting\nWrite a function tokenize that takes a string of English text and returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic:\n\nSplit the string into lines using splitlines.\nSplit each line into a list of words and merge the lists for each line.\nUse Python's built-in filter function to remove all punctuation.\nIf stop_words is a list, remove all occurrences of the words in the list.\nIf stop_words is a space delimited string of words, split them and remove them.\nRemove any remaining empty words.\nMake all words lowercase.", "def tokenize(s, stop_words=None, punctuation='`~!@#$%^&*()_-+={[}]|\\:;\"<,>.?/}\t'):\n \"\"\"Split a string into a list of words, removing punctuation and stop words.\"\"\"\n # Split into lines, then into words, and merge the per-line lists\n words = []\n for line in s.splitlines():\n words.extend(line.split())\n # Use filter to drop punctuation characters from each word\n words = [''.join(filter(lambda c: c not in punctuation, w)) for w in words]\n # Accept stop words as a list or as a space-delimited string\n if isinstance(stop_words, str):\n stop_words = stop_words.split()\n if stop_words is not None:\n words = [w for w in words if w.lower() not in stop_words]\n # Drop empty words and lowercase the rest\n return [w.lower() for w in words if w]\n\ntokenize(\"This, is the way; that things will end\")\n\nassert tokenize(\"This, is the way; that things will end\", stop_words=['the', 'is']) == \\\n ['this', 'way', 'that', 'things', 'will', 'end']\nwasteland = \"\"\"\nAPRIL is the cruellest month, breeding\nLilacs out of the dead land, mixing\nMemory and desire, stirring\nDull roots with spring rain.\n\"\"\"\n\nassert tokenize(wasteland, stop_words='is the of and') == \\\n ['april','cruellest','month','breeding','lilacs','out','dead','land',\n 'mixing','memory','desire','stirring','dull','roots','with','spring',\n 'rain']", "Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.", "def count_words(data):\n \"\"\"Return a word count dictionary from the list of words in data.\"\"\"\n counts = {}\n for word in data:\n counts[word] = counts.get(word, 0) + 1\n return counts\n\ncount_words(tokenize('this and the this from and a a a'))\n\nassert count_words(tokenize('this and the this from and a a a')) == \\\n {'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}", "Write a function sort_word_counts that returns a list of sorted word counts:\n\nEach element of the list should be a (word, count) tuple.\nThe list should be sorted by the word counts, with 
the highest counts coming first.\nTo perform this sort, look at using the sorted function with a custom key and reverse\n argument.", "def sort_word_counts(wc):\n \"\"\"Return a list of 2-tuples of (word, count), sorted by count descending.\"\"\"\n # YOUR CODE HERE\n raise NotImplementedError()\n\nassert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \\\n [('a', 4), ('this', 3), ('and', 2), ('the', 1)]", "Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt:\n\nRead the file into a string.\nTokenize with stop words of 'the of and a to in is it that as'.\nPerform a word count, then sort and save the result in a variable named swc.", "# YOUR CODE HERE\nraise NotImplementedError()\n\nassert swc[0]==('i',43)\nassert len(swc)==848", "Create a \"Cleveland Style\" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...", "# YOUR CODE HERE\nraise NotImplementedError()\n\nassert True # use this for grading the dotplot" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
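The sort_word_counts exercise above is left as NotImplementedError; one possible solution (a sketch, not the course's answer key) uses sorted with a key function and reverse=True:

```python
def sort_word_counts(wc):
    """Return a list of (word, count) tuples sorted by count, highest first."""
    return sorted(wc.items(), key=lambda pair: pair[1], reverse=True)

print(sort_word_counts({'a': 4, 'this': 3, 'and': 2, 'the': 1}))
# [('a', 4), ('this', 3), ('and', 2), ('the', 1)]
```

Because `sorted` is stable, words with equal counts keep their dictionary insertion order, which matches the behavior the notebook's asserts expect.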
dolittle007/dolittle007.github.io
notebooks/lda-advi-aevb.ipynb
gpl-3.0
[ "Automatic autoencoding variational Bayes for latent dirichlet allocation with PyMC3\nFor probabilistic models with latent variables, autoencoding variational Bayes (AEVB; Kingma and Welling, 2014) is an algorithm which allows us to perform inference efficiently for large datasets with an encoder. In AEVB, the encoder is used to infer variational parameters of approximate posterior on latent variables from given samples. By using tunable and flexible encoders such as multilayer perceptrons (MLPs), AEVB approximates complex variational posterior based on mean-field approximation, which does not utilize analytic representations of the true posterior. Combining AEVB with ADVI (Kucukelbir et al., 2015), we can perform posterior inference on almost arbitrary probabilistic models involving continuous latent variables. \nI have implemented AEVB for ADVI with mini-batch on PyMC3. To demonstrate flexibility of this approach, we will apply this to latent dirichlet allocation (LDA; Blei et al., 2003) for modeling documents. In the LDA model, each document is assumed to be generated from a multinomial distribution, whose parameters are treated as latent variables. By using AEVB with an MLP as an encoder, we will fit the LDA model to the 20-newsgroups dataset. \nIn this example, extracted topics by AEVB seem to be qualitatively comparable to those with a standard LDA implementation, i.e., online VB implemented on scikit-learn. Unfortunately, the predictive accuracy of unseen words is less than the standard implementation of LDA, it might be due to the mean-field approximation. However, the combination of AEVB and ADVI allows us to quickly apply more complex probabilistic models than LDA to big data with the help of mini-batches. 
I hope this notebook will attract readers, especially practitioners working on a variety of machine learning tasks, to probabilistic programming and PyMC3.", "%matplotlib inline\nimport sys, os\n\nimport theano\ntheano.config.floatX = 'float64'\n\nfrom collections import OrderedDict\nfrom copy import deepcopy\nimport numpy as np\nfrom time import time\nfrom sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer\nfrom sklearn.datasets import fetch_20newsgroups\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom theano import shared\nimport theano.tensor as tt\nfrom theano.sandbox.rng_mrg import MRG_RandomStreams\n\nimport pymc3 as pm\nfrom pymc3 import Dirichlet\nfrom pymc3.distributions.transforms import t_stick_breaking\nfrom pymc3.variational.advi import advi, sample_vp", "Dataset\nHere, we will use the 20-newsgroups dataset. This dataset can be obtained by using functions of scikit-learn. The below code is partially adopted from an example of scikit-learn (http://scikit-learn.org/stable/auto_examples/applications/topics_extraction_with_nmf_lda.html). We set the number of words in the vocabulary to 1000.", "# The number of words in the vocaburary\nn_words = 1000\n\nprint(\"Loading dataset...\")\nt0 = time()\ndataset = fetch_20newsgroups(shuffle=True, random_state=1,\n remove=('headers', 'footers', 'quotes'))\ndata_samples = dataset.data\nprint(\"done in %0.3fs.\" % (time() - t0))\n\n# Use tf (raw term count) features for LDA.\nprint(\"Extracting tf features for LDA...\")\ntf_vectorizer = CountVectorizer(max_df=0.95, min_df=2, max_features=n_words,\n stop_words='english')\n\nt0 = time()\ntf = tf_vectorizer.fit_transform(data_samples)\nfeature_names = tf_vectorizer.get_feature_names()\nprint(\"done in %0.3fs.\" % (time() - t0))", "Each document is represented by 1000-dimensional term-frequency vector. Let's check the data.", "plt.plot(tf[:10, :].toarray().T);", "We split the whole documents into training and test sets. 
The number of tokens in the training set is 480K. Sparsity of the term-frequency document matrix is 0.025%, which implies almost all components in the term-frequency matrix is zero.", "n_samples_tr = 10000\nn_samples_te = tf.shape[0] - n_samples_tr\ndocs_tr = tf[:n_samples_tr, :]\ndocs_te = tf[n_samples_tr:, :]\nprint('Number of docs for training = {}'.format(docs_tr.shape[0]))\nprint('Number of docs for test = {}'.format(docs_te.shape[0]))\n\nn_tokens = np.sum(docs_tr[docs_tr.nonzero()])\nprint('Number of tokens in training set = {}'.format(n_tokens))\nprint('Sparsity = {}'.format(\n len(docs_tr.nonzero()[0]) / float(docs_tr.shape[0] * docs_tr.shape[1])))", "Log-likelihood of documents for LDA\nFor a document $d$ consisting of tokens $w$, the log-likelihood of the LDA model with $K$ topics is given as\n\\begin{eqnarray}\n \\log p\\left(d|\\theta_{d},\\beta\\right) & = & \\sum_{w\\in d}\\log\\left[\\sum_{k=1}^{K}\\exp\\left(\\log\\theta_{d,k} + \\log \\beta_{k,w}\\right)\\right]+const, \n\\end{eqnarray}\nwhere $\\theta_{d}$ is the topic distribution for document $d$ and $\\beta$ is the word distribution for the $K$ topics. We define a function that returns a tensor of the log-likelihood of documents given $\\theta_{d}$ and $\\beta$.", "def logp_lda_doc(beta, theta):\n \"\"\"Returns the log-likelihood function for given documents. \n \n K : number of topics in the model\n V : number of words (size of vocabulary)\n D : number of documents (in a mini-batch)\n \n Parameters\n ----------\n beta : tensor (K x V)\n Word distributions. \n theta : tensor (D x K)\n Topic distributions for documents. 
\n \"\"\"\n def ll_docs_f(docs):\n dixs, vixs = docs.nonzero()\n vfreqs = docs[dixs, vixs]\n ll_docs = vfreqs * pm.math.logsumexp(\n tt.log(theta[dixs]) + tt.log(beta.T[vixs]), axis=1).ravel()\n \n # Per-word log-likelihood times num of tokens in the whole dataset\n return tt.sum(ll_docs) / tt.sum(vfreqs) * n_tokens \n \n return ll_docs_f", "In the inner function, the log-likelihood is scaled for mini-batches by the number of tokens in the dataset. \nLDA model\nWith the log-likelihood function, we can construct the probabilistic model for LDA. doc_t works as a placeholder to which documents in a mini-batch are set. \nFor ADVI, each of random variables $\\theta$ and $\\beta$, drawn from Dirichlet distributions, is transformed into unconstrained real coordinate space. To do this, by default, PyMC3 uses a centered stick-breaking transformation. Since these random variables are on a simplex, the dimension of the unconstrained coordinate space is the original dimension minus 1. For example, the dimension of $\\theta_{d}$ is the number of topics (n_topics) in the LDA model, thus the transformed space has dimension (n_topics - 1). It shuold be noted that, in this example, we use t_stick_breaking, which is a numerically stable version of stick_breaking used by default. This is required to work ADVI for the LDA model. \nThe variational posterior on these transformed parameters is represented by a spherical Gaussian distributions (meanfield approximation). Thus, the number of variational parameters of $\\theta_{d}$, the latent variable for each document, is 2 * (n_topics - 1) for means and standard deviations. \nIn the last line of the below cell, DensityDist class is used to define the log-likelihood function of the model. The second argument is a Python function which takes observations (a document matrix in this example) and returns the log-likelihood value. 
This function is given as a return value of logp_lda_doc(beta, theta), which has been defined above.", "n_topics = 10\nminibatch_size = 128\n\n# Tensor for documents\ndoc_t = shared(np.zeros((minibatch_size, n_words)), name='doc_t')\n\nwith pm.Model() as model:\n theta = Dirichlet('theta', a=(1.0 / n_topics) * np.ones((minibatch_size, n_topics)), \n shape=(minibatch_size, n_topics), transform=t_stick_breaking(1e-9))\n beta = Dirichlet('beta', a=(1.0 / n_topics) * np.ones((n_topics, n_words)), \n shape=(n_topics, n_words), transform=t_stick_breaking(1e-9))\n doc = pm.DensityDist('doc', logp_lda_doc(beta, theta), observed=doc_t)", "Mini-batch\nTo perform ADVI with stochastic variational inference for large datasets, the training samples are split into mini-batches. PyMC3's ADVI function accepts a Python generator which sends a list of mini-batches to the algorithm. Here is an example to make a generator. \nTODO: replace the code using the new interface", "def create_minibatch(data):\n rng = np.random.RandomState(0)\n \n while True:\n # Return random data samples of a size 'minibatch_size' at each iteration\n ixs = rng.randint(data.shape[0], size=minibatch_size)\n yield [data[ixs]]\n \nminibatches = create_minibatch(docs_tr.toarray())", "The ADVI function replaces the values of Theano tensors with samples given by generators. We need to specify those tensors as a list. The order of the list should be the same as that of the mini-batches sent from the generator. Note that doc_t has been used in the model creation as the observation of the random variable named doc.", "# The value of doc_t will be replaced with mini-batches\nminibatch_tensors = [doc_t]", "To tell the algorithm that the random variable doc is observed, we need to pass it as an OrderedDict. The key of the OrderedDict is an observed random variable and the value is a scalar representing the scaling factor. 
Since the likelihood of the documents in mini-batches has already been scaled in the likelihood function, we set the scaling factor to 1.", "# observed_RVs = OrderedDict([(doc, n_samples_tr / minibatch_size)])\nobserved_RVs = OrderedDict([(doc, 1)])", "Encoder\nGiven a document, the encoder calculates variational parameters of the (transformed) latent variables, more specifically, parameters of Gaussian distributions in the unconstrained real coordinate space. The encode() method is required to output variational means and stds as a tuple, as shown in the following code. As explained above, the number of variational parameters is 2 * (n_topics - 1). Specifically, the shape of zs_mean (or zs_std) in the method is (minibatch_size, n_topics - 1). It should be noted that zs_std is defined as the log-transformed standard deviation, which is automatically exponentiated (thus bounded to be positive) in advi_minibatch(), the estimation function. \nTo enhance generalization ability to unseen words, a Bernoulli corruption process is applied to the input documents. Unfortunately, I have never seen any significant improvement with this.", "class LDAEncoder:\n \"\"\"Encode (term-frequency) document vectors to variational means and (log-transformed) stds. 
\n \"\"\"\n def __init__(self, n_words, n_hidden, n_topics, p_corruption=0, random_seed=1):\n rng = np.random.RandomState(random_seed)\n self.n_words = n_words\n self.n_hidden = n_hidden\n self.n_topics = n_topics\n self.w0 = shared(0.01 * rng.randn(n_words, n_hidden).ravel(), name='w0')\n self.b0 = shared(0.01 * rng.randn(n_hidden), name='b0')\n self.w1 = shared(0.01 * rng.randn(n_hidden, 2 * (n_topics - 1)).ravel(), name='w1')\n self.b1 = shared(0.01 * rng.randn(2 * (n_topics - 1)), name='b1')\n self.rng = MRG_RandomStreams(seed=random_seed)\n self.p_corruption = p_corruption\n \n def encode(self, xs):\n if 0 < self.p_corruption:\n dixs, vixs = xs.nonzero()\n mask = tt.set_subtensor(\n tt.zeros_like(xs)[dixs, vixs], \n self.rng.binomial(size=dixs.shape, n=1, p=1-self.p_corruption)\n )\n xs_ = xs * mask\n else:\n xs_ = xs\n\n w0 = self.w0.reshape((self.n_words, self.n_hidden))\n w1 = self.w1.reshape((self.n_hidden, 2 * (self.n_topics - 1)))\n hs = tt.tanh(xs_.dot(w0) + self.b0)\n zs = hs.dot(w1) + self.b1\n zs_mean = zs[:, :(self.n_topics - 1)]\n zs_std = zs[:, (self.n_topics - 1):]\n return zs_mean, zs_std\n \n def get_params(self):\n return [self.w0, self.b0, self.w1, self.b1]", "To feed the output of the encoder to the variational parameters of $\\theta$, we set an OrderedDict of tuples as below.", "encoder = LDAEncoder(n_words=n_words, n_hidden=100, n_topics=n_topics, p_corruption=0.0)\nlocal_RVs = OrderedDict([(theta, (encoder.encode(doc_t), n_samples_tr / minibatch_size))])", "theta is the random variable defined in the model creation and is a key of an entry of the OrderedDict. The value (encoder.encode(doc_t), n_samples_tr / minibatch_size) is a tuple of a theano expression and a scalar. The theano expression encoder.encode(doc_t) is the output of the encoder given inputs (documents). The scalar n_samples_tr / minibatch_size specifies the scaling factor for mini-batches. \nADVI optimizes the parameters of the encoder. 
They are passed to the function for ADVI.", "encoder_params = encoder.get_params()", "AEVB with ADVI\nadvi_minibatch() can be used to run AEVB with ADVI on the LDA model.", "def run_advi():\n with model:\n v_params = pm.variational.advi_minibatch(\n n=3000, minibatch_tensors=minibatch_tensors, minibatches=minibatches, \n local_RVs=local_RVs, observed_RVs=observed_RVs, encoder_params=encoder_params, \n learning_rate=2e-2, epsilon=0.1, n_mcsamples=1 \n )\n \n return v_params\n\n%time v_params = run_advi()\nplt.plot(v_params.elbo_vals)", "We can see ELBO increases as optimization proceeds. The trace of ELBO looks jaggy because at each iteration documents in the mini-batch are replaced. \nExtraction of characteristic words of topics based on posterior samples\nBy using estimated variational parameters, we can draw samples from the variational posterior. To do this, we use function sample_vp(). Here we use this function to obtain posterior mean of the word-topic distribution $\\beta$ and show top-10 words frequently appeared in the 10 topics.", "def print_top_words(beta, feature_names, n_top_words=10):\n for i in range(len(beta)):\n print((\"Topic #%d: \" % i) + \" \".join([feature_names[j]\n for j in beta[i].argsort()[:-n_top_words - 1:-1]]))\n\ndoc_t.set_value(docs_te.toarray()[:minibatch_size, :])\n\nwith model:\n samples = sample_vp(v_params, draws=100, local_RVs=local_RVs)\n beta_pymc3 = samples['beta'].mean(axis=0)\n\nprint_top_words(beta_pymc3, feature_names)", "We compare these topics to those obtained by a standard LDA implementation on scikit-learn, which is based on an online stochastic variational inference (Hoffman et al., 2013). 
We can see that the estimated words in the topics are qualitatively similar.", "from sklearn.decomposition import LatentDirichletAllocation\n\nlda = LatentDirichletAllocation(n_topics=n_topics, max_iter=5,\n                                learning_method='online', learning_offset=50.,\n                                random_state=0)\n%time lda.fit(docs_tr)\nbeta_sklearn = lda.components_ / lda.components_.sum(axis=1)[:, np.newaxis]\n\nprint_top_words(beta_sklearn, feature_names)", "Predictive distribution\nIn some papers (e.g., Hoffman et al. 2013), the predictive distribution of held-out words was proposed as a quantitative measure of model fit. The log-likelihood function for tokens of the held-out word can be calculated with posterior means of $\theta$ and $\beta$. The validity of this is explained in (Hoffman et al. 2013).", "def calc_pp(ws, thetas, beta, wix):\n    \"\"\"\n    Parameters\n    ----------\n    ws: ndarray (N,)\n        Number of times the held-out word appeared in N documents. \n    thetas: ndarray, shape=(N, K)\n        Topic distributions for N documents. \n    beta: ndarray, shape=(K, V)\n        Word distributions for K topics. \n    wix: int\n        Index of the held-out word\n        \n    Return\n    ------\n    Log probability of held-out words.\n    \"\"\"\n    return ws * np.log(thetas.dot(beta[:, wix]))\n\ndef eval_lda(transform, beta, docs_te, wixs):\n    \"\"\"Evaluate LDA model by log predictive probability. \n    \n    Parameters\n    ----------\n    transform: Python function\n        Transform document vectors to posterior mean of topic proportions. \n    wixs: iterable of int\n        Word indices to be held-out. 
\n \"\"\"\n lpss = []\n docs_ = deepcopy(docs_te)\n thetass = []\n wss = []\n total_words = 0\n for wix in wixs:\n ws = docs_te[:, wix].ravel()\n if 0 < ws.sum():\n # Hold-out\n docs_[:, wix] = 0\n \n # Topic distributions\n thetas = transform(docs_)\n \n # Predictive log probability\n lpss.append(calc_pp(ws, thetas, beta, wix))\n \n docs_[:, wix] = ws\n thetass.append(thetas)\n wss.append(ws)\n total_words += ws.sum()\n else:\n thetass.append(None)\n wss.append(None)\n \n # Log-probability\n lp = np.sum(np.hstack(lpss)) / total_words\n \n return {\n 'lp': lp, \n 'thetass': thetass, \n 'beta': beta, \n 'wss': wss\n }", "To apply the above function for the LDA model, we redefine the probabilistic model because the number of documents to be tested changes. Since variational parameters have already been obtained, we can reuse them for sampling from the approximate posterior distribution.", "n_docs_te = docs_te.shape[0]\ndoc_t = shared(docs_te.toarray(), name='doc_t')\n\nwith pm.Model() as model:\n theta = Dirichlet('theta', a=(1.0 / n_topics) * np.ones((n_docs_te, n_topics)), \n shape=(n_docs_te, n_topics), transform=t_stick_breaking(1e-9))\n beta = Dirichlet('beta', a=(1.0 / n_topics) * np.ones((n_topics, n_words)), \n shape=(n_topics, n_words), transform=t_stick_breaking(1e-9))\n doc = pm.DensityDist('doc', logp_lda_doc(beta, theta), observed=doc_t)\n\n# Encoder has already been trained\nencoder.p_corruption = 0\nlocal_RVs = OrderedDict([(theta, (encoder.encode(doc_t), 1))])", "transform() function is defined with sample_vp() function. 
This function is passed as an argument to the function that calculates log predictive probabilities.", "def transform_pymc3(docs):\n    with model:\n        doc_t.set_value(docs)\n        samples = sample_vp(v_params, draws=100, local_RVs=local_RVs)\n        \n    return samples['theta'].mean(axis=0)", "The mean of the log predictive probability is about -7.00.", "%time result_pymc3 = eval_lda(transform_pymc3, beta_pymc3, docs_te.toarray(), np.arange(100))\nprint('Predictive log prob (pm3) = {}'.format(result_pymc3['lp']))", "We compare the result with the scikit-learn LDA implementation. Its log predictive probability (-6.04) is significantly higher than that of AEVB-ADVI, though it shows similar words in the estimated topics. This may be because the mean-field approximation to the distribution on the simplex (topic and/or word distributions) is less accurate. See https://gist.github.com/taku-y/f724392bc0ad633deac45ffa135414d3.", "def transform_sklearn(docs):\n    thetas = lda.transform(docs)\n    return thetas / thetas.sum(axis=1)[:, np.newaxis]\n\n%time result_sklearn = eval_lda(transform_sklearn, beta_sklearn, docs_te.toarray(), np.arange(100))\nprint('Predictive log prob (sklearn) = {}'.format(result_sklearn['lp']))", "Summary\nWe have seen that PyMC3 allows us to estimate random variables of LDA, a probabilistic model with latent variables, based on automatic variational inference. Variational parameters of the local latent variables in the probabilistic model are encoded from observations. The parameters of the encoding model, an MLP in this example, are optimized together with the variational parameters of the global latent variables. Once the probabilistic and the encoding models are defined, parameter optimization is done just by invoking a function (advi_minibatch()) without the need to derive complex update equations. \nUnfortunately, the estimation result was not as accurate as LDA in sklearn, which is based on conjugate priors and thus does not rely on the mean-field approximation. 
To improve the estimation accuracy, some researchers have proposed post-processing methods that move Monte Carlo samples so as to improve the variational lower bound (e.g., Rezende and Mohamed, 2015; Salimans et al., 2015). By implementing such methods in PyMC3, we may achieve more accurate estimation while keeping inference as automated as shown in this notebook. \nReferences\n\nKingma, D. P., & Welling, M. (2014). Auto-Encoding Variational Bayes. stat, 1050, 1.\nKucukelbir, A., Ranganath, R., Gelman, A., & Blei, D. (2015). Automatic variational inference in Stan. In Advances in neural information processing systems (pp. 568-576).\nBlei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan), 993-1022.\nHoffman, M. D., Blei, D. M., Wang, C., & Paisley, J. W. (2013). Stochastic variational inference. Journal of Machine Learning Research, 14(1), 1303-1347.\nRezende, D. J., & Mohamed, S. (2015). Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770.\nSalimans, T., Kingma, D. P., & Welling, M. (2015). Markov chain Monte Carlo and variational inference: Bridging the gap. In International Conference on Machine Learning (pp. 1218-1226)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
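The held-out log-probability computation described in the record above reduces to a small numpy calculation. This is a minimal sketch of the calc_pp step; the toy theta, beta, and count values are made up for illustration:

```python
import numpy as np

# Per-document weighted log-probability of one held-out word, as in calc_pp:
# p(word wix | doc d) = sum_k thetas[d, k] * beta[k, wix], weighted by counts ws.
def log_predictive(ws, thetas, beta, wix):
    return ws * np.log(thetas.dot(beta[:, wix]))

thetas = np.array([[0.5, 0.5], [0.9, 0.1]])  # (N=2 docs, K=2 topics)
beta = np.array([[0.2, 0.8], [0.6, 0.4]])    # (K=2 topics, V=2 words)
ws = np.array([3, 1])                        # counts of held-out word 0 per doc
lp = log_predictive(ws, thetas, beta, 0)
```

Averaging such per-word terms over all held-out words, divided by the total word count, gives the notebook's `lp` score.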
enchantner/python-zero
lesson_8/Slides.ipynb
mit
[ "%%html\n<style>\n.text_cell_render * {\n    font-family: OfficinaSansCTT;\n}\n.reveal code {\n    font-family: OfficinaSansCTT;\n}\n.text_cell_render h3 {\n    font-family: OfficinaSansCTT;\n}\n.reveal section img {\n    max-height: 500px;\n    margin-left: auto;\n    margin-right: auto;\n}\n</style>", "Questions\n\nName several kinds of databases and describe what each one is about.\nWhat is a transaction in database terms?\nWhat does the following query do?\nSQL\nSELECT * FROM employees, managers WHERE employees.salary &gt; 100000 AND employees.manager_id = managers.id\nWhat is a migration, in your own words?\nRoughly speaking, what are the advantages of NoSQL databases over relational ones?\nWhat is Elasticsearch for?\n\nFortran and processors\n\n\nSSE, SSE2, SSE3 ...\nVector operations\nSerious scientists have been writing libraries that implement the core linear algebra operations since the 1980s\nBLAS, LAPACK, CUDA\n\nMatrix data\n\n\nMatlab (Octave), R, Julia, Wolfram, ..., Numpy\n\nNumpy\n\nhttp://www.numpy.org/\npip install numpy, or just use Anaconda\nhttp://www.labri.fr/perso/nrougier/teaching/numpy.100/\nhttps://cs231n.github.io/python-numpy-tutorial/\n\nNumpy examples", "import numpy as np\n\nnp.zeros(10) # a vector of 10 zeros\nnp.arange(9).reshape(3,3) # a 3x3 matrix with the numbers 0 to 8\nm = np.eye(3) # the identity matrix\nm.shape # the dimensions of the matrix\na = np.random.random((3,3,3)) # a 3x3x3 three-dimensional array of random values\n\n# A numpy array is multidimensional and lets you take and modify a slice along each dimension\na[2, 1:3, ::2]", "Question: how do you make a “checkerboard” of zeros and ones?\nExercises\n\nSuppose we have two two-dimensional matrices, 5x2 and 3x2 (you can fill them with random numbers). Compute the product of these matrices. Do we need to transpose the second one?\nCompute the Pearson correlation coefficient between the first two columns of the first matrix. Which numpy function will help with this?\n\nPandas\n\n\nhttp://pandas.pydata.org/\npip install pandas\ncore concepts: Dataframe and Series\n\nPerks\n\nAn interface similar to R (for old-school analysts)\nSeamless integration with Numpy and Matplotlib\nEasy addition of new columns (“features”)\nConvenient mechanisms for filling gaps in the data", "import pandas as pd\n# reading a CSV (useful parameters: names, chunksize, dtype)\ndf = pd.read_csv(\"USlocalopendataportals.csv\")\ndf.columns # the column names\ndf.head(15) # view the top 15 rows\ndf[\"column\"].apply(lambda c: c / 100.0) # apply a function to a column\ndf[\"column\"].str.<string method> # working with string columns\ndf[\"column\"].astype(np.float32) # cast a column to the desired type\n# how would you do this with apply?\n\ndf.groupby(\"column\").mean() # an aggregation operation\n\n# selecting several columns\ndf[[\"Column 1\", \"Column 2\"]]\n# creating a new column\ndf[\"Column\"] = a # an array of a suitable size\n# renaming a column\ndf.rename(\n    columns={\"Oldname\": \"Newname\"},\n    inplace=True\n)\n# combining several selection conditions (note the parentheses)\ndf[(df.col1 > 10) & (df.col2 < 100)] # you can also use | for OR", "Exercise\n\nRead the file USlocalopendataportals.csv into Pandas.\nHow many distinct values are there in the Location column?\nHow many portals belong to the government?\n\n\nSelect all records whose Location starts with “New York” or “Washington D.C.”.\nWhich site appears twice in these data?\nHow can you show this with code?\n\n\n“Fix” the Population column and compute the mean population for each location.\n\nVisualization\n\n\nMatplotlib (http://matplotlib.org/)\n\nSeaborn (https://www.stanford.edu/~mwaskom/software/seaborn/ )\nggplot (http://ggplot.yhathq.com/ )\n\n\n\nD3.js (https://d3js.org/ )\n\nBokeh (http://bokeh.pydata.org/en/latest/ )\nPlotly (https://plot.ly/ )\n\n\nPygal (http://pygal.org/ )\nGrafana (https://grafana.com/ )\nKibana (https://www.elastic.co/products/kibana )", "# IPython integration\n%matplotlib inline\n# the main plotting object\nfrom matplotlib import pyplot as plt\n\n# a bar chart straight from Pandas\ndf[\"Column\"].plot(kind=\"bar\")", "Matplotlib exercises\n\nUsing the dataframe built in the last step of the previous exercise, draw a chart with horizontal bars\nTake a random sample (say, 10 rows) of the locations and populations from the previous exercises. Use the pyplot object to draw a pie chart of them.\n\nHint - http://bit.ly/1eLirnL\nScikit-Learn\n\nhttp://scikit-learn.org/stable/index.html\nAn extensive library of Machine Learning algorithms\nhttp://scikit-learn.org/stable/modules/clustering.html#k-means\n\nLinear regression", "# http://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html\n \nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn import datasets, linear_model\nfrom sklearn.metrics import mean_squared_error, r2_score\n\n# Load the diabetes dataset\ndiabetes = datasets.load_diabetes()\n\n# Use only one feature\ndiabetes_X = diabetes.data[:, np.newaxis, 2]\n\n# Split the data into training/testing sets\ndiabetes_X_train = diabetes_X[:-20]\ndiabetes_X_test = diabetes_X[-20:]\n\n# Split the targets into training/testing sets\ndiabetes_y_train = diabetes.target[:-20]\ndiabetes_y_test = diabetes.target[-20:]\n\n# Create linear regression object\nregr = linear_model.LinearRegression()\n\n# Train the model using the training sets\nregr.fit(diabetes_X_train, diabetes_y_train)\n\n# Make predictions using the testing set\ndiabetes_y_pred = regr.predict(diabetes_X_test)\n\n# The coefficients\nprint('Coefficients: \\n', regr.coef_)\n# The mean squared error\nprint(\"Mean squared error: %.2f\"\n      % mean_squared_error(diabetes_y_test, diabetes_y_pred))\n# Explained variance score: 1 is perfect prediction\nprint('Variance score: %.2f' % r2_score(diabetes_y_test, diabetes_y_pred))\n\n# Plot outputs\nplt.scatter(diabetes_X_test, diabetes_y_test, color='black')\nplt.plot(diabetes_X_test, diabetes_y_pred, color='blue', linewidth=3)\n\nplt.xticks(())\nplt.yticks(())\n\nplt.show()", "A rich ecosystem\n\nDask (https://dask.pydata.org/en/latest/ ) - an out-of-core version of Pandas/Numpy\nScipy (https://www.scipy.org/scipylib/index.html) - general-purpose scientific computing\nPyMC3 (https://github.com/pymc-devs/pymc3 ) - Markov Chain Monte Carlo\nNLTK (http://www.nltk.org/ ) + pymorphy2 (https://pymorphy2.readthedocs.io/) + gensim (https://radimrehurek.com/gensim/) + BigARTM (https://www.machinelearning.ru/wiki/index.php?title=BigARTM) - text analysis\nTensorflow (https://www.tensorflow.org/) + MXNet (https://mxnet.incubator.apache.org/) + Keras (https://keras.io/) + Theano (https://deeplearning.net/software/theano/) + PyTorch (https://pytorch.org) - neural networks and deep learning\n\nWhat else to read" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
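The slides above ask how to build a “checkerboard” of zeros and ones. One possible numpy answer, assuming an 8x8 board, uses strided slice assignment:

```python
import numpy as np

# An 8x8 checkerboard: start from zeros, then set alternating cells to 1.
board = np.zeros((8, 8), dtype=int)
board[1::2, ::2] = 1  # odd rows, even columns
board[::2, 1::2] = 1  # even rows, odd columns
```

The same `start::step` slicing generalizes to any board size.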
ricardog/raster-project
Abundance.ipynb
apache-2.0
[ "Projecting terrestrial biodiversity using PREDICTS and LUH2\nThis notebook shows how to use rasterset to project a PREDICTS model using the LUH2 land-use data.\nYou can set three parameters below:\n\nscenario: can be either historical (850CE - 2015CE) or one of the LUH2 scenarios available (all in lowercase, e.g. ssp1_rcp2.6_image).\nyear: year for which to generate the projection. For the historical scenario the year must be between 850-2015. For the SSP scenarios the year must be between 2015-2100.\nwhat: the name of the variable to evaluate. Many abundance models evaluate a variable called LogAbund. If you want to project abundance then what should be LogAbund. But you can use any of the intermediate variables as well. For example, setting what to hpd will generate a projection of human population density.\n\nImports (non-local)", "import click\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport numpy.ma as ma\nimport rasterio\nfrom rasterio.plot import show, show_hist", "Local imports", "from projections.rasterset import RasterSet, Raster\nfrom projections.simpleexpr import SimpleExpr\nimport projections.r2py.modelr as modelr\nimport projections.predicts as predicts\nimport projections.utils as utils\n", "Parameters", "scenario = 'historical'\nyear = 2000\nwhat = 'LogAbund'", "Models\nThis notebook uses Sam's LUH2 abundance models. Thus we need to load a forested and a non-forested model, project using both, and then combine the projections.", "modf = modelr.load('ab-fst-1.rds')\nintercept_f = modf.intercept\npredicts.predictify(modf)\n\nmodn = modelr.load('ab-nfst-1.rds')\nintercept_n = modn.intercept\npredicts.predictify(modn)", "Rastersets\nUse the PREDICTS python module to generate the appropriate rastersets. Each rasterset is like a DataFrame or hash (dict in python). The columns are variables and hold a function that describes how to compute the data.\nGenerating a rasterset is a two-step process. 
First generate a hash (dict in python) and then pass the dict to the constructor.\nEach model will be evaluated only where the forested mask is set (or not set). Load the mask from the LUH2 static data set.\nNote that we need to explicitly assign the R model we loaded in the previous cell to the corresponding variable of the rasterset.", "fstnf = rasterio.open(utils.luh2_static('fstnf'))\nrastersf = predicts.rasterset('luh2', scenario, year, 'f')\nrsf = RasterSet(rastersf, mask=fstnf, maskval=0.0)\nrastersn = predicts.rasterset('luh2', scenario, year, 'n')\nrsn = RasterSet(rastersn, mask=fstnf, maskval=1.0)\nvname = modf.output\nassert modf.output == modn.output\nrsf[vname] = modf\nrsn[vname] = modn\n", "Eval\nNow evaluate each model in turn and then combine the data. Because we are guaranteed that the data is non-overlapping (no cell should have valid data in both projections) we can simply add them together (with masked values filled in as 0). The overall mask is the logical AND of the two invalid masks.", "datan, meta = rsn.eval(what, quiet=True)\ndataf, _ = rsf.eval(what, quiet=True)\ndata_vals = dataf.filled(0) + datan.filled(0)\ndata = data_vals.view(ma.MaskedArray)\ndata.mask = np.logical_and(dataf.mask, datan.mask)\n", "Rendering\nUse matplotlib (via rasterio.plot) to render the generated data. This will display the data in-line in the notebook.", "show(data, cmap='viridis')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
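The Eval step in the record above merges two non-overlapping masked projections by filling masked cells with 0 and AND-ing the invalid masks. A toy numpy.ma illustration of the same combination logic (the three-element arrays are made up):

```python
import numpy as np
import numpy.ma as ma

# Two "projections" that are valid on disjoint cells; cell 2 is valid in neither.
dataf = ma.array([1.0, 0.0, 0.0], mask=[False, True, True])
datan = ma.array([0.0, 2.0, 0.0], mask=[True, False, True])

# Same combination as in the notebook's Eval cell: fill masked cells with 0,
# add, then keep as invalid only cells masked in BOTH inputs.
data = (dataf.filled(0) + datan.filled(0)).view(ma.MaskedArray)
data.mask = np.logical_and(dataf.mask, datan.mask)
```

Filling with 0 is safe here precisely because the valid cells never overlap.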
fatbeau/tribe
.ipynb_checkpoints/Introduction to Networkx-checkpoint.ipynb
mit
[ "Analyses with NetworkX\nSocial networks have become a fixture of modern life thanks to social networking sites like Facebook and Twitter. Social networks themselves are not new, however. The study of such networks dates back to the early twentieth century, particularly in the fields of sociology and anthropology. It is their prevalence in mainstream applications that has moved these types of studies to the purview of data science. \nThe basis for the analyses in this notebook comes from Graph Theory - the mathematical study of the application and properties of graphs, originally motivated by the study of games of chance. Generally speaking, this involves the study of network encoding, and measuring properties of the graph. Graph theory can be traced back to Euler's work on the Konigsberg Bridges problem (1735). However, in recent decades, the rise of the social network has influenced the discipline, particularly with Computer Science graph data structures and databases. \nA Graph, then, can be defined as: G = (V, E) consisting of a finite set of nodes denoted by V or V(G) and a collection E or E(G) of unordered pairs {u, v} where u, v ∈ V. Less formally, this is a symbolic representation of a network and its relationships - a set of linked nodes.\nGraphs can be either directed or undirected. Directed graphs simply have ordered relationships; undirected graphs can be seen as bidirectional directed graphs. A directed graph in a social network tends to have directional semantic relationships, e.g. \"friends\" - Abe might be friends with Jane, but Jane might not reciprocate. Undirected social networks have more general semantic relationships, e.g. \"knows\". Any directed graph can easily be converted to the more general undirected graph. In this case, the adjacency matrix becomes symmetric.\nA few final terms will help us in our discussion. The cardinality of vertices is called the order of the Graph, whereas the cardinality of the edges is called the size. 
In the above graph, the order is 7 and the size is 10. Two nodes are adjacent if they share an edge; they are also called neighbors, and the neighborhood of a vertex is the set of all vertices that the vertex is connected to. The number of nodes in a vertex's neighborhood is that vertex's degree. \nRequired Python Libraries\nThe required external libraries for the tasks in this notebook are as follows:\n\nnetworkx\nmatplotlib\npython-louvain\n\nNetworkX is a well-maintained Python library for the creation, manipulation, and study of the structure of complex networks. Its tools allow for the quick creation of graphs, and the library also contains many common graph algorithms. In particular, NetworkX complements Python's scientific computing suite of SciPy/NumPy, Matplotlib, and Graphviz and can handle graphs in memory of 10M's of nodes and 100M's of links. NetworkX should be part of every data scientist's toolkit. \nNetworkX and Python are the perfect combination to do social network analysis. NetworkX is designed to handle data at scale, data that is relevant to modern-scale social networks. The core algorithms that are included are implemented on extremely fast legacy code. Graphs are hugely flexible (nodes can be any hashable type), and there is an extensive set of native IO formats. 
Finally, with Python- you'll be able to access or use a myriad of data sources from databases to the Internet.", "%matplotlib inline\n\nimport os\nimport random\nimport community\n\nimport numpy as np\nimport networkx as nx\nimport matplotlib.pyplot as plt\n\nfrom tribe.utils import *\nfrom tribe.stats import *\nfrom operator import itemgetter\n\n## Some Helper constants\nFIXTURES = os.path.join(os.getcwd(), \"fixtures\")\nGRAPHML = os.path.join(FIXTURES, \"emails.graphml\")", "The basics of creating a NetworkX Graph:", "H = nx.Graph(name=\"Hello World Graph\")\n# Also nx.DiGraph, nx.MultiGraph, etc\n\n# Add nodes manually, label can be anything hashable\nH.add_node(1, name=\"Ben\", email=\"benjamin@bengfort.com\")\nH.add_node(2, name=\"Tony\", email=\"ojedatony1616@gmail.com\")\n\n# Can also add an iterable of nodes: H.add_nodes_from\n\nH.add_edge(1,2, label=\"friends\", weight=0.832)\n\n# Can also add an iterable of edges: H.add_edges_from\n\nprint nx.info(H)\n# Clearing a graph is easy\nH.remove_node(1)\nH.clear()", "For testing and diagnostics it's useful to generate a random Graph. NetworkX comes with several graph models including:\n\nComplete Graph G=nx.complete_graph(100)\nStar Graph G=nx.star_graph(100)\nErdős-Rényi graph, binomial graph G=nx.erdos_renyi_graph(100, 0.20)\nWatts-Strogatz small-world graph G=nx.watts_strogatz_graph(100, 0.20)\nHolme and Kim power law G=nx.powerlaw_cluster_graph(100, 0.20)\n\nBut there are so many more, see Graph generators for more information on all the types of graph generators NetworkX provides. 
These, however are the best ones for doing research on social networks.", "H = nx.erdos_renyi_graph(100, 0.20)", "Accessing Nodes and Edges:", "print H.nodes()[1:10]\nprint H.edges()[1:5]\nprint H.neighbors(3)\n\n# For fast, memory safe iteration, use the `_iter` methods\n\nedges, nodes = 0,0\nfor e in H.edges_iter(): edges += 1\nfor n in H.nodes_iter(): nodes += 1\n \nprint \"%i edges, %i nodes\" % (edges, nodes)\n\n# Accessing the properties of a graph\n\nprint H.graph['name']\nH.graph['created'] = strfnow()\nprint H.graph\n\n# Accessing the properties of nodes and edges\n\nH.node[1]['color'] = 'red'\nH.node[43]['color'] = 'blue'\n\nprint H.node[43]\nprint H.nodes(data=True)[:3]\n\n# The weight property is special and should be numeric\nH.edge[0][40]['weight'] = 0.432\nH.edge[0][39]['weight'] = 0.123\n\nprint H.edge[40][0]\n\n# Accessing the highest degree node\ncenter, degree = sorted(H.degree().items(), key=itemgetter(1), reverse=True)[0]\n\n# A special type of subgraph\nego = nx.ego_graph(H, center)\n\npos = nx.spring_layout(H)\nnx.draw(H, pos, node_color='#0080C9', edge_color='#cccccc', node_size=50)\nnx.draw_networkx_nodes(H, pos, nodelist=[center], node_size=100, node_color=\"r\")\nplt.show()\n\n# Other subgraphs can be extracted with nx.subgraph\n\n# Finding the shortest path\nH = nx.star_graph(100)\nprint nx.shortest_path(H, random.choice(H.nodes()), random.choice(H.nodes()))\n\npos = nx.spring_layout(H)\nnx.draw(H, pos)\nplt.show()\n\n# Preparing for Data Science Analysis\nprint nx.to_numpy_matrix(H)\n# print nx.to_scipy_sparse_matrix(G)", "Serialization of Graphs\nMost Graphs won't be constructed in memory, but rather saved to disk. 
Serialize and deserialize Graphs as follows:", "G = nx.read_graphml(GRAPHML) # opposite of nx.write_graphml\n\nprint nx.info(G)", "NetworkX has a ton of Graph serialization methods, and for each serialization format, format, most follow the naming pattern below:\n\nRead Graph from disk: read_format\nWrite Graph to disk: write_format\nParse a Graph string: parse_format\nGenerate a random Graph in format: generate_format\n\nThe list of formats is pretty impressive:\n\nAdjacency List\nMultiline Adjacency List\nEdge List\nGEXF\nGML\nPickle\nGraphML\nJSON\nLEDA\nYAML\nSparseGraph6\nPajek\nGIS Shapefile\n\nThe JSON and GraphML formats are the most noteworthy (for use in D3 and Gephi/Neo4j)\nInitial Analysis of Email Network\nWe can do some initial analyses on our network using built-in NetworkX methods.", "# Generate a list of connected components\n# See also nx.strongly_connected_components\nfor component in nx.connected_components(G):\n    print len(component)\n\n\nlen([c for c in nx.connected_components(G)])\n\n# Get a list of the degree frequencies\ndist = FreqDist(nx.degree(G).values())\ndist.plot()\n\n# Plot the power-law degree sequence\ndegree_sequence=sorted(nx.degree(G).values(),reverse=True) # degree sequence\n\nplt.loglog(degree_sequence,'b-',marker='.')\nplt.title(\"Degree rank plot\")\nplt.ylabel(\"degree\")\nplt.xlabel(\"rank\")\n\n# Graph Properties\nprint \"Order: %i\" % G.number_of_nodes()\nprint \"Size: %i\" % G.number_of_edges()\n\nprint \"Clustering: %0.5f\" % nx.average_clustering(G)\n\nprint \"Transitivity: %0.5f\" % nx.transitivity(G)\n\nhairball = nx.subgraph(G, [x for x in nx.connected_components(G)][0])\nprint \"Average shortest path: %0.4f\" % nx.average_shortest_path_length(hairball)\n\n# Node Properties\nnode = 'benjamin@bengfort.com' # Change to an email in your graph\nprint \"Degree of node: %i\" % nx.degree(G, node)\nprint \"Local clustering: %0.4f\" % nx.clustering(G, node)", "Computing Key Players\nIn the previous graph, we began exploring ego networks and 
strong ties between individuals in our social network. We started to see that actors with strong ties to other actors created clusters that centered around themselves. This leads to the obvious question: who are the key figures in the graph, and what kind of pull do they have? We'll look at a couple of measures of \"centrality\" to try to discover this: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality.\nDegree Centrality\nThe most common and perhaps simplest technique for finding the key actors of a graph is to measure the degree of each vertex. Degree is a signal that determines how connected a node is, which could be a metaphor for influence or popularity. At the very least, the most connected nodes are the ones that spread information the fastest, or have the greatest effect on their community. Measures of degree tend to suffer from dilution, and benefit from statistical techniques to normalize data sets.", "def nbest_centrality(graph, metric, n=10, attribute=\"centrality\", **kwargs):\n    centrality = metric(graph, **kwargs)\n    nx.set_node_attributes(graph, attribute, centrality)\n    degrees = sorted(centrality.items(), key=itemgetter(1), reverse=True)\n    \n    for idx, item in enumerate(degrees[0:n]):\n        item = (idx+1,) + item\n        print \"%i. %s: %0.4f\" % item\n        \n    return degrees\n\ndegrees = nbest_centrality(G, nx.degree_centrality, n=15)", "Betweenness Centrality\nA path is a sequence of nodes between a start node and an end node where no node appears twice on the path, and is measured by the number of edges included (also called hops). The most interesting path to compute for two given nodes is the shortest path, e.g. the minimum number of edges required to reach another node; this is also called the node distance. 
Note that paths can be of length 0, the distance from a node to itself.", "# centrality = nx.betweenness_centrality(G)\n# normalized = nx.betweenness_centrality(G, normalized=True)\n# weighted = nx.betweenness_centrality(G, weight=\"weight\")\n\ndegrees = nbest_centrality(G, nx.betweenness_centrality, n=15)", "Closeness Centrality\nAnother centrality measure, closeness, takes a statistical look at the outgoing paths for a particular node, v. That is, what is the average number of hops it takes to reach any other node in the network from v? This is simply computed as the reciprocal of the mean distance to all other nodes in the graph, which can be normalized to n-1 / size(G)-1 if all nodes in the graph are connected. The reciprocal ensures that nodes that are closer (e.g. fewer hops) score \"better\", e.g. closer to one, as in other centrality scores.", "# centrality = nx.closeness_centrality(graph)\n# normalized = nx.closeness_centrality(graph, normalized=True)\n# weighted = nx.closeness_centrality(graph, distance=\"weight\")\n\ndegrees = nbest_centrality(G, nx.closeness_centrality, n=15)", "Eigenvector Centrality\nThe eigenvector centrality of a node, v, is proportional to the sum of the centrality scores of its neighbors. E.g. the more important people you are connected to, the more important you are. This centrality measure is very interesting, because an actor with a small number of hugely influential contacts may outrank ones with many more mediocre contacts. For our social network, hopefully it will allow us to get underneath the celebrity structure of heroic teams and see who actually is holding the social graph together.", "# centrality = nx.eigenvector_centrality(graph)\n# centrality = nx.eigenvector_centrality_numpy(graph)\n\ndegrees = nbest_centrality(G, nx.eigenvector_centrality_numpy, n=15)", "Clustering and Cohesion\nIn this next section, we're going to characterize our social network as a whole, rather than from the perspective of individual actors. 
This task is usually secondary to getting a feel for the most important nodes; but it is a chicken and an egg problem- determining the techniques to analyze and split the whole graph can be informed by key player analyses, and vice versa. \nThe density of a network is the ratio of the number of edges in the network to the total number of possible edges in the network. The possible number of edges for a graph of n vertices is n(n-1)/2 for an undirected graph (remove the division for a directed graph). Perfectly connected networks (every node shares an edge with every other node) have a density of 1, and are often called cliques.", "print nx.density(G)", "Graphs can also be analyzed in terms of distance (the shortest path between two nodes). The longest distance in a graph is called the diameter of the social graph, and represents the longest information flow along the graph. Typically less dense (sparse) social networks will have a larger diameter than more dense networks. Additionally, the average distance is an interesting metric as it can give you information about how close nodes are to each other.", "for subgraph in nx.connected_component_subgraphs(G):\n print nx.diameter(subgraph)\n print nx.average_shortest_path_length(subgraph)", "Let's actually get into some clustering. The python-louvain library uses NetworkX to perform community detection with the louvain method. Here is a simple example of cluster partitioning on a small, built-in social network.", "partition = community.best_partition(G)\nprint \"%i partitions\" % len(set(partition.values()))\nnx.set_node_attributes(G, 'partition', partition)\n\npos = nx.spring_layout(G)\nplt.figure(figsize=(12,12))\nplt.axis('off')\n\nnx.draw_networkx_nodes(G, pos, node_size=200, cmap=plt.cm.RdYlBu, node_color=partition.values())\nnx.draw_networkx_edges(G,pos, alpha=0.5)", "Visualizing Graphs\nNetworkX wraps matplotlib or graphviz to draw simple graphs using the same charting library we saw in the previous chapter. 
This is effective for smaller size graphs, but with larger graphs memory can quickly be consumed. To draw a graph, simply use the networkx.draw function, and then use pyplot.show to display it.", "nx.draw(nx.erdos_renyi_graph(20, 0.20))\nplt.show()\n", "There is, however, a rich drawing library underneath that lets you customize how the Graph looks and is laid out with many different layout algorithms. Let's take a look at an example using one of the built-in Social Graphs: The Davis Women's Social Club.", "# Generate the Graph\nG=nx.davis_southern_women_graph()\n# Create a Spring Layout\npos=nx.spring_layout(G)\n\n# Find the center Node\ndmin=1\nncenter=0\nfor n in pos:\n x,y=pos[n]\n d=(x-0.5)**2+(y-0.5)**2\n if d<dmin:\n ncenter=n\n dmin=d\n\n# color by path length from node near center\np=nx.single_source_shortest_path_length(G,ncenter)\n\n# Draw the graph\nplt.figure(figsize=(8,8))\nnx.draw_networkx_edges(G,pos,nodelist=[ncenter],alpha=0.4)\nnx.draw_networkx_nodes(G,pos,nodelist=p.keys(),\n node_size=90,\n node_color=p.values(),\n cmap=plt.cm.Reds_r)" ]
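The distance and diameter measures discussed in this notebook boil down to breadth-first search. A pure-Python sketch on a toy adjacency dict (no networkx required; the four-node path graph is made up):

```python
from collections import deque

# BFS distances from one node; the diameter is the largest distance found
# over all start nodes (fine for small graphs like this toy path 0-1-2-3).
def distances(adj, start):
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
diameter = max(max(distances(adj, s).values()) for s in adj)
```

This is essentially what `nx.diameter` and `nx.average_shortest_path_length` compute on unweighted graphs, which is why both require a connected component as input.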
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
chongyangma/python-machine-learning-book
code/ch05/ch05.ipynb
mit
[ "Copyright (c) 2015-2017 Sebastian Raschka\nhttps://github.com/rasbt/python-machine-learning-book\nMIT License\nPython Machine Learning - Code Examples\nChapter 5 - Compressing Data via Dimensionality Reduction\nNote that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).", "%load_ext watermark\n%watermark -a 'Sebastian Raschka' -u -d -p numpy,scipy,matplotlib,sklearn", "The use of watermark is optional. You can install this IPython extension via \"pip install watermark\". For more information, please see: https://github.com/rasbt/watermark.\n<br>\n<br>\nOverview\n\nUnsupervised dimensionality reduction via principal component analysis\nTotal and explained variance\nFeature transformation\nPrincipal component analysis in scikit-learn\nSupervised data compression via linear discriminant analysis\nComputing the scatter matrices\nSelecting linear discriminants for the new feature subspace\nProjecting samples onto the new feature space\nLDA via scikit-learn\nUsing kernel principal component analysis for nonlinear mappings\nKernel functions and the kernel trick\nImplementing a kernel principal component analysis in Python\nExample 1 – separating half-moon shapes\nExample 2 – separating concentric circles\n\n\nProjecting new data points\nKernel principal component analysis in scikit-learn\nSummary\n\n<br>\n<br>", "from IPython.display import Image\n%matplotlib inline\n\n# Added version check for recent scikit-learn 0.18 checks\nfrom distutils.version import LooseVersion as Version\nfrom sklearn import __version__ as sklearn_version", "Unsupervised dimensionality reduction via principal component analysis", "Image(filename='./images/05_01.png', width=400) \n\nimport pandas as pd\n\ndf_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'\n 'machine-learning-databases/wine/wine.data',\n header=None)\n\ndf_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 
'Ash',\n 'Alcalinity of ash', 'Magnesium', 'Total phenols',\n 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',\n 'Color intensity', 'Hue',\n 'OD280/OD315 of diluted wines', 'Proline']\n\ndf_wine.head()", "<hr>\n\nNote:\nIf the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at ./../datasets/wine/wine.data.\nOr you could fetch it via", "df_wine = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/wine/wine.data', header=None)\n\ndf_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash', \n'Alcalinity of ash', 'Magnesium', 'Total phenols', \n'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', \n'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']\ndf_wine.head()", "<hr>\n\nSplitting the data into 70% training and 30% test subsets.", "if Version(sklearn_version) < '0.18':\n from sklearn.cross_validation import train_test_split\nelse:\n from sklearn.model_selection import train_test_split\n\nX, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values\n\nX_train, X_test, y_train, y_test = \\\n train_test_split(X, y, test_size=0.3, random_state=0)", "Standardizing the data.", "from sklearn.preprocessing import StandardScaler\n\nsc = StandardScaler()\nX_train_std = sc.fit_transform(X_train)\nX_test_std = sc.transform(X_test)", "Note\nAccidentally, I wrote X_test_std = sc.fit_transform(X_test) instead of X_test_std = sc.transform(X_test). In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to the training set. 
However, as you remember from Chapter 3, the correct way is to re-use parameters from the training set if we are doing any kind of transformation -- the test set should basically stand for \"new, unseen\" data.\nMy initial typo reflects a common mistake: some people do not re-use these parameters from the model training/building and standardize the new data \"from scratch.\" Here's a simple example to explain why this is a problem.\nLet's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature \"length\"):\n\ntrain_1: 10 cm -> class_2\ntrain_2: 20 cm -> class_2\ntrain_3: 30 cm -> class_1\n\nmean: 20, std.: 8.2\nAfter standardization, the transformed feature values are\n\ntrain_std_1: -1.21 -> class_2\ntrain_std_2: 0 -> class_2\ntrain_std_3: 1.21 -> class_1\n\nNext, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:\n\nnew_4: 5 cm -> class ?\nnew_5: 6 cm -> class ?\nnew_6: 7 cm -> class ?\n\nIf we look at the unstandardized \"length\" values in our training dataset, it is intuitive to say that all of these samples likely belong to class_2. However, if we standardize these by re-computing the standard deviation and mean from the new data, we would get values similar to those in the training set, and our classifier would (probably incorrectly) assign class_2 only to samples 4 and 5, and class_1 to sample 6.\n\nnew_std_4: -1.21 -> class 2\nnew_std_5: 0 -> class 2\nnew_std_6: 1.21 -> class 1\n\nHowever, if we use the parameters from our \"training set standardization,\" we'd get the values:\n\nnew_std_4: -1.84 -> class 2\nnew_std_5: -1.71 -> class 2\nnew_std_6: -1.59 -> class 2\n\nThe values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously. 
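To make the arithmetic concrete, here is a tiny plain-Python sketch of the fit-on-train / transform-on-test rule (the same behavior as `sc.fit` followed by `sc.transform`). It uses the population standard deviation (≈8.16, which the text rounds to 8.2), so the printed values are slightly more precise than the rounded ones in the prose:

```python
# Training lengths (cm) and new, unseen lengths, as in the example above.
train = [10.0, 20.0, 30.0]
new = [5.0, 6.0, 7.0]

# "Fit": estimate mean and (population) std on the training data only.
mean = sum(train) / len(train)                                    # 20.0
std = (sum((x - mean) ** 2 for x in train) / len(train)) ** 0.5   # ~8.16

# Correct: "transform" the new data with the training parameters.
z_correct = [(x - mean) / std for x in new]

# Wrong: re-fitting mean/std on the new data maps it back onto a standard
# scale and hides that these lengths are far below anything in training.
mean_new = sum(new) / len(new)
std_new = (sum((x - mean_new) ** 2 for x in new) / len(new)) ** 0.5
z_wrong = [(x - mean_new) / std_new for x in new]

print([round(z, 2) for z in z_correct])  # [-1.84, -1.71, -1.59]: all fall below the 0.6 cutoff
print([round(z, 2) for z in z_wrong])    # [-1.22, 0.0, 1.22]: sample 6 would flip classes
```

This mirrors why `StandardScaler.transform` (not `fit_transform`) is the right call on test data.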
Thus, it only makes sense that the standardized features of the \"new samples\" are much lower than every standardized feature in the training set.\n\nEigendecomposition of the covariance matrix.", "import numpy as np\ncov_mat = np.cov(X_train_std.T)\neigen_vals, eigen_vecs = np.linalg.eig(cov_mat)\n\nprint('\\nEigenvalues \\n%s' % eigen_vals)", "Note: \nAbove, I used the numpy.linalg.eig function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors.\n <pre>>>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)</pre>\n This is not really a \"mistake,\" but probably suboptimal. It would be better to use numpy.linalg.eigh in such cases, which has been designed for Hermitian matrices. The latter always returns real eigenvalues, whereas the numerically less stable np.linalg.eig, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.)\n<br>\n<br>\nTotal and explained variance", "tot = sum(eigen_vals)\nvar_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]\ncum_var_exp = np.cumsum(var_exp)\n\nimport matplotlib.pyplot as plt\n\n\nplt.bar(range(1, 14), var_exp, alpha=0.5, align='center',\n label='individual explained variance')\nplt.step(range(1, 14), cum_var_exp, where='mid',\n label='cumulative explained variance')\nplt.ylabel('Explained variance ratio')\nplt.xlabel('Principal components')\nplt.legend(loc='best')\nplt.tight_layout()\n# plt.savefig('./figures/pca1.png', dpi=300)\nplt.show()", "<br>\n<br>\nFeature transformation", "# Make a list of (eigenvalue, eigenvector) tuples\neigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])\n for i in range(len(eigen_vals))]\n\n# Sort the (eigenvalue, eigenvector) tuples from high to low\neigen_pairs.sort(key=lambda k: k[0], reverse=True)\n\n# Note: I added the `key=lambda k: k[0]` in the sort call above\n# just like I used it further below in the LDA section.\n# This is to avoid problems if there are ties in the eigenvalue\n# 
arrays (i.e., the sorting algorithm will only regard the\n# first element of the tuples, now).\n\nw = np.hstack((eigen_pairs[0][1][:, np.newaxis],\n eigen_pairs[1][1][:, np.newaxis]))\nprint('Matrix W:\\n', w)", "Note\nDepending on which version of NumPy and LAPACK you are using, you may obtain the matrix W with its signs flipped. E.g., the matrix shown in the book was printed as:\n[[ 0.14669811 0.50417079]\n[-0.24224554 0.24216889]\n[-0.02993442 0.28698484]\n[-0.25519002 -0.06468718]\n[ 0.12079772 0.22995385]\n[ 0.38934455 0.09363991]\n[ 0.42326486 0.01088622]\n[-0.30634956 0.01870216]\n[ 0.30572219 0.03040352]\n[-0.09869191 0.54527081]\nPlease note that this is not an issue: If $v$ is an eigenvector of a matrix $\\Sigma$, we have\n$$\\Sigma v = \\lambda v,$$\nwhere $\\lambda$ is our eigenvalue,\nthen $-v$ is also an eigenvector that has the same eigenvalue, since\n$$\\Sigma(-v) = -\\Sigma v = -\\lambda v = \\lambda(-v).$$", "X_train_pca = X_train_std.dot(w)\ncolors = ['r', 'b', 'g']\nmarkers = ['s', 'x', 'o']\n\nfor l, c, m in zip(np.unique(y_train), colors, markers):\n plt.scatter(X_train_pca[y_train == l, 0], \n X_train_pca[y_train == l, 1], \n c=c, label=l, marker=m)\n\nplt.xlabel('PC 1')\nplt.ylabel('PC 2')\nplt.legend(loc='lower left')\nplt.tight_layout()\n# plt.savefig('./figures/pca2.png', dpi=300)\nplt.show()\n\nX_train_std[0].dot(w)", "<br>\n<br>\nPrincipal component analysis in scikit-learn", "from sklearn.decomposition import PCA\n\npca = PCA()\nX_train_pca = pca.fit_transform(X_train_std)\npca.explained_variance_ratio_\n\nplt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')\nplt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')\nplt.ylabel('Explained variance ratio')\nplt.xlabel('Principal components')\nplt.show()\n\npca = PCA(n_components=2)\nX_train_pca = pca.fit_transform(X_train_std)\nX_test_pca = pca.transform(X_test_std)\n\nplt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])\nplt.xlabel('PC 
1')\nplt.ylabel('PC 2')\nplt.show()\n\nfrom matplotlib.colors import ListedColormap\n\ndef plot_decision_regions(X, y, classifier, resolution=0.02):\n\n # setup marker generator and color map\n markers = ('s', 'x', 'o', '^', 'v')\n colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')\n cmap = ListedColormap(colors[:len(np.unique(y))])\n\n # plot the decision surface\n x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1\n x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1\n xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),\n np.arange(x2_min, x2_max, resolution))\n Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)\n Z = Z.reshape(xx1.shape)\n plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)\n plt.xlim(xx1.min(), xx1.max())\n plt.ylim(xx2.min(), xx2.max())\n\n # plot class samples\n for idx, cl in enumerate(np.unique(y)):\n plt.scatter(x=X[y == cl, 0], \n y=X[y == cl, 1],\n alpha=0.6, \n c=cmap(idx),\n edgecolor='black',\n marker=markers[idx], \n label=cl)", "Training logistic regression classifier using the first 2 principal components.", "from sklearn.linear_model import LogisticRegression\n\nlr = LogisticRegression()\nlr = lr.fit(X_train_pca, y_train)\n\nplot_decision_regions(X_train_pca, y_train, classifier=lr)\nplt.xlabel('PC 1')\nplt.ylabel('PC 2')\nplt.legend(loc='lower left')\nplt.tight_layout()\n# plt.savefig('./figures/pca3.png', dpi=300)\nplt.show()\n\nplot_decision_regions(X_test_pca, y_test, classifier=lr)\nplt.xlabel('PC 1')\nplt.ylabel('PC 2')\nplt.legend(loc='lower left')\nplt.tight_layout()\n# plt.savefig('./figures/pca4.png', dpi=300)\nplt.show()\n\npca = PCA(n_components=None)\nX_train_pca = pca.fit_transform(X_train_std)\npca.explained_variance_ratio_", "<br>\n<br>\nSupervised data compression via linear discriminant analysis", "Image(filename='./images/05_06.png', width=400) ", "<br>\n<br>\nComputing the scatter matrices\nCalculate the mean vectors for each class:", "np.set_printoptions(precision=4)\n\nmean_vecs = 
[]\nfor label in range(1, 4):\n mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))\n print('MV %s: %s\\n' % (label, mean_vecs[label - 1]))", "Compute the within-class scatter matrix:", "d = 13 # number of features\nS_W = np.zeros((d, d))\nfor label, mv in zip(range(1, 4), mean_vecs):\n class_scatter = np.zeros((d, d)) # scatter matrix for each class\n for row in X_train_std[y_train == label]:\n row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors\n class_scatter += (row - mv).dot((row - mv).T)\n S_W += class_scatter # sum class scatter matrices\n\nprint('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))", "Better: covariance matrix since classes are not equally distributed:", "print('Class label distribution: %s' \n % np.bincount(y_train)[1:])\n\nd = 13 # number of features\nS_W = np.zeros((d, d))\nfor label, mv in zip(range(1, 4), mean_vecs):\n class_scatter = np.cov(X_train_std[y_train == label].T)\n S_W += class_scatter\nprint('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],\n S_W.shape[1]))", "Compute the between-class scatter matrix:", "mean_overall = np.mean(X_train_std, axis=0)\nd = 13 # number of features\nS_B = np.zeros((d, d))\nfor i, mean_vec in enumerate(mean_vecs):\n n = X_train[y_train == i + 1, :].shape[0]\n mean_vec = mean_vec.reshape(d, 1) # make column vector\n mean_overall = mean_overall.reshape(d, 1) # make column vector\n S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)\n\nprint('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))", "<br>\n<br>\nSelecting linear discriminants for the new feature subspace\nSolve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:", "eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))", "Note:\nAbove, I used the numpy.linalg.eig function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors.\n <pre>>>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)</pre>\n 
This is not really a \"mistake,\" but probably suboptimal. It would be better to use numpy.linalg.eigh in such cases, which has been designed for Hermitian matrices. The latter always returns real eigenvalues, whereas the numerically less stable np.linalg.eig, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.)\nSort eigenvectors in decreasing order of the eigenvalues:", "# Make a list of (eigenvalue, eigenvector) tuples\neigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])\n for i in range(len(eigen_vals))]\n\n# Sort the (eigenvalue, eigenvector) tuples from high to low\neigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)\n\n# Visually confirm that the list is correctly sorted by decreasing eigenvalues\n\nprint('Eigenvalues in decreasing order:\\n')\nfor eigen_val in eigen_pairs:\n print(eigen_val[0])\n\ntot = sum(eigen_vals.real)\ndiscr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]\ncum_discr = np.cumsum(discr)\n\nplt.bar(range(1, 14), discr, alpha=0.5, align='center',\n label='individual \"discriminability\"')\nplt.step(range(1, 14), cum_discr, where='mid',\n label='cumulative \"discriminability\"')\nplt.ylabel('\"discriminability\" ratio')\nplt.xlabel('Linear Discriminants')\nplt.ylim([-0.1, 1.1])\nplt.legend(loc='best')\nplt.tight_layout()\n# plt.savefig('./figures/lda1.png', dpi=300)\nplt.show()\n\nw = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,\n eigen_pairs[1][1][:, np.newaxis].real))\nprint('Matrix W:\\n', w)", "<br>\n<br>\nProjecting samples onto the new feature space", "X_train_lda = X_train_std.dot(w)\ncolors = ['r', 'b', 'g']\nmarkers = ['s', 'x', 'o']\n\nfor l, c, m in zip(np.unique(y_train), colors, markers):\n plt.scatter(X_train_lda[y_train == l, 0] * (-1),\n X_train_lda[y_train == l, 1] * (-1),\n c=c, label=l, marker=m)\n\nplt.xlabel('LD 1')\nplt.ylabel('LD 2')\nplt.legend(loc='lower right')\nplt.tight_layout()\n# 
plt.savefig('./figures/lda2.png', dpi=300)\nplt.show()", "<br>\n<br>\nLDA via scikit-learn", "if Version(sklearn_version) < '0.18':\n from sklearn.lda import LDA\nelse:\n from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA\n\nlda = LDA(n_components=2)\nX_train_lda = lda.fit_transform(X_train_std, y_train)\n\nfrom sklearn.linear_model import LogisticRegression\nlr = LogisticRegression()\nlr = lr.fit(X_train_lda, y_train)\n\nplot_decision_regions(X_train_lda, y_train, classifier=lr)\nplt.xlabel('LD 1')\nplt.ylabel('LD 2')\nplt.legend(loc='lower left')\nplt.tight_layout()\n# plt.savefig('./images/lda3.png', dpi=300)\nplt.show()\n\nX_test_lda = lda.transform(X_test_std)\n\nplot_decision_regions(X_test_lda, y_test, classifier=lr)\nplt.xlabel('LD 1')\nplt.ylabel('LD 2')\nplt.legend(loc='lower left')\nplt.tight_layout()\n# plt.savefig('./images/lda4.png', dpi=300)\nplt.show()", "<br>\n<br>\nUsing kernel principal component analysis for nonlinear mappings", "Image(filename='./images/05_11.png', width=500) ", "<br>\n<br>\nImplementing a kernel principal component analysis in Python", "from scipy.spatial.distance import pdist, squareform\nfrom scipy import exp\nfrom scipy.linalg import eigh\nimport numpy as np\n\ndef rbf_kernel_pca(X, gamma, n_components):\n \"\"\"\n RBF kernel PCA implementation.\n\n Parameters\n ------------\n X: {NumPy ndarray}, shape = [n_samples, n_features]\n \n gamma: float\n Tuning parameter of the RBF kernel\n \n n_components: int\n Number of principal components to return\n\n Returns\n ------------\n X_pc: {NumPy ndarray}, shape = [n_samples, k_features]\n Projected dataset \n\n \"\"\"\n # Calculate pairwise squared Euclidean distances\n # in the MxN dimensional dataset.\n sq_dists = pdist(X, 'sqeuclidean')\n\n # Convert pairwise distances into a square matrix.\n mat_sq_dists = squareform(sq_dists)\n\n # Compute the symmetric kernel matrix.\n K = exp(-gamma * mat_sq_dists)\n\n # Center the kernel matrix.\n N = K.shape[0]\n 
one_n = np.ones((N, N)) / N\n K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)\n\n # Obtaining eigenpairs from the centered kernel matrix\n # numpy.eigh returns them in sorted order\n eigvals, eigvecs = eigh(K)\n\n # Collect the top k eigenvectors (projected samples)\n X_pc = np.column_stack((eigvecs[:, -i]\n for i in range(1, n_components + 1)))\n\n return X_pc", "<br>\nExample 1: Separating half-moon shapes", "import matplotlib.pyplot as plt\nfrom sklearn.datasets import make_moons\n\nX, y = make_moons(n_samples=100, random_state=123)\n\nplt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)\nplt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)\n\nplt.tight_layout()\n# plt.savefig('./figures/half_moon_1.png', dpi=300)\nplt.show()\n\nfrom sklearn.decomposition import PCA\nfrom sklearn.preprocessing import StandardScaler\n\nscikit_pca = PCA(n_components=2)\nX_spca = scikit_pca.fit_transform(X)\n\nfig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))\n\nax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],\n color='red', marker='^', alpha=0.5)\nax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],\n color='blue', marker='o', alpha=0.5)\n\nax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,\n color='red', marker='^', alpha=0.5)\nax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,\n color='blue', marker='o', alpha=0.5)\n\nax[0].set_xlabel('PC1')\nax[0].set_ylabel('PC2')\nax[1].set_ylim([-1, 1])\nax[1].set_yticks([])\nax[1].set_xlabel('PC1')\n\nplt.tight_layout()\n# plt.savefig('./figures/half_moon_2.png', dpi=300)\nplt.show()\n\nfrom matplotlib.ticker import FormatStrFormatter\n\nX_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)\n\nfig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))\nax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1], \n color='red', marker='^', alpha=0.5)\nax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],\n color='blue', marker='o', alpha=0.5)\n\nax[1].scatter(X_kpca[y==0, 0], 
np.zeros((50,1))+0.02, \n color='red', marker='^', alpha=0.5)\nax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,\n color='blue', marker='o', alpha=0.5)\n\nax[0].set_xlabel('PC1')\nax[0].set_ylabel('PC2')\nax[1].set_ylim([-1, 1])\nax[1].set_yticks([])\nax[1].set_xlabel('PC1')\nax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))\nax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))\n\nplt.tight_layout()\n# plt.savefig('./figures/half_moon_3.png', dpi=300)\nplt.show()", "<br>\nExample 2: Separating concentric circles", "from sklearn.datasets import make_circles\n\nX, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)\n\nplt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)\nplt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)\n\nplt.tight_layout()\n# plt.savefig('./figures/circles_1.png', dpi=300)\nplt.show()\n\nscikit_pca = PCA(n_components=2)\nX_spca = scikit_pca.fit_transform(X)\n\nfig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))\n\nax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],\n color='red', marker='^', alpha=0.5)\nax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],\n color='blue', marker='o', alpha=0.5)\n\nax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,\n color='red', marker='^', alpha=0.5)\nax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,\n color='blue', marker='o', alpha=0.5)\n\nax[0].set_xlabel('PC1')\nax[0].set_ylabel('PC2')\nax[1].set_ylim([-1, 1])\nax[1].set_yticks([])\nax[1].set_xlabel('PC1')\n\nplt.tight_layout()\n# plt.savefig('./figures/circles_2.png', dpi=300)\nplt.show()\n\nX_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)\n\nfig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))\nax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],\n color='red', marker='^', alpha=0.5)\nax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],\n color='blue', marker='o', alpha=0.5)\n\nax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 
0.02,\n color='red', marker='^', alpha=0.5)\nax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,\n color='blue', marker='o', alpha=0.5)\n\nax[0].set_xlabel('PC1')\nax[0].set_ylabel('PC2')\nax[1].set_ylim([-1, 1])\nax[1].set_yticks([])\nax[1].set_xlabel('PC1')\n\nplt.tight_layout()\n# plt.savefig('./figures/circles_3.png', dpi=300)\nplt.show()", "<br>\n<br>\nProjecting new data points", "from scipy.spatial.distance import pdist, squareform\nfrom scipy import exp\nfrom scipy.linalg import eigh\nimport numpy as np\n\ndef rbf_kernel_pca(X, gamma, n_components):\n \"\"\"\n RBF kernel PCA implementation.\n\n Parameters\n ------------\n X: {NumPy ndarray}, shape = [n_samples, n_features]\n \n gamma: float\n Tuning parameter of the RBF kernel\n \n n_components: int\n Number of principal components to return\n\n Returns\n ------------\n X_pc: {NumPy ndarray}, shape = [n_samples, k_features]\n Projected dataset \n \n lambdas: list\n Eigenvalues\n\n \"\"\"\n # Calculate pairwise squared Euclidean distances\n # in the MxN dimensional dataset.\n sq_dists = pdist(X, 'sqeuclidean')\n\n # Convert pairwise distances into a square matrix.\n mat_sq_dists = squareform(sq_dists)\n\n # Compute the symmetric kernel matrix.\n K = exp(-gamma * mat_sq_dists)\n\n # Center the kernel matrix.\n N = K.shape[0]\n one_n = np.ones((N, N)) / N\n K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)\n\n # Obtaining eigenpairs from the centered kernel matrix\n # numpy.eigh returns them in sorted order\n eigvals, eigvecs = eigh(K)\n\n # Collect the top k eigenvectors (projected samples)\n alphas = np.column_stack((eigvecs[:, -i]\n for i in range(1, n_components + 1)))\n\n # Collect the corresponding eigenvalues\n lambdas = [eigvals[-i] for i in range(1, n_components + 1)]\n\n return alphas, lambdas\n\nX, y = make_moons(n_samples=100, random_state=123)\nalphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)\n\nx_new = X[-1]\nx_new\n\nx_proj = alphas[-1] # original 
projection\nx_proj\n\ndef project_x(x_new, X, gamma, alphas, lambdas):\n pair_dist = np.array([np.sum((x_new - row)**2) for row in X])\n k = np.exp(-gamma * pair_dist)\n return k.dot(alphas / lambdas)\n\n# projection of the \"new\" datapoint\nx_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)\nx_reproj \n\nplt.scatter(alphas[y == 0, 0], np.zeros((50)),\n color='red', marker='^', alpha=0.5)\nplt.scatter(alphas[y == 1, 0], np.zeros((50)),\n color='blue', marker='o', alpha=0.5)\nplt.scatter(x_proj, 0, color='black',\n label='original projection of point X[25]', marker='^', s=100)\nplt.scatter(x_reproj, 0, color='green',\n label='remapped point X[25]', marker='x', s=500)\nplt.legend(scatterpoints=1)\n\nplt.tight_layout()\n# plt.savefig('./figures/reproject.png', dpi=300)\nplt.show()\n\nX, y = make_moons(n_samples=100, random_state=123)\nalphas, lambdas = rbf_kernel_pca(X[:-1, :], gamma=15, n_components=1)\n\ndef project_x(x_new, X, gamma, alphas, lambdas):\n pair_dist = np.array([np.sum((x_new - row)**2) for row in X])\n k = np.exp(-gamma * pair_dist)\n return k.dot(alphas / lambdas)\n\n# projection of the \"new\" datapoint\nx_new = X[-1]\n\nx_reproj = project_x(x_new, X[:-1], gamma=15, alphas=alphas, lambdas=lambdas)\n\n\nplt.scatter(alphas[y[:-1] == 0, 0], np.zeros((50)),\n color='red', marker='^', alpha=0.5)\nplt.scatter(alphas[y[:-1] == 1, 0], np.zeros((49)),\n color='blue', marker='o', alpha=0.5)\nplt.scatter(x_reproj, 0, color='green',\n label='new point [ 100.0, 100.0]', marker='x', s=500)\nplt.legend(scatterpoints=1)\n\n\nplt.scatter(alphas[y[:-1] == 0, 0], np.zeros((50)),\n color='red', marker='^', alpha=0.5)\nplt.scatter(alphas[y[:-1] == 1, 0], np.zeros((49)),\n color='blue', marker='o', alpha=0.5)\nplt.scatter(x_proj, 0, color='black',\n label='some point [1.8713, 0.0093]', marker='^', s=100)\nplt.scatter(x_reproj, 0, color='green',\n label='new point [ 100.0, 100.0]', marker='x', 
s=500)\nplt.legend(scatterpoints=1)\n\nplt.tight_layout()\n# plt.savefig('./figures/reproject.png', dpi=300)\nplt.show()", "<br>\n<br>\nKernel principal component analysis in scikit-learn", "from sklearn.decomposition import KernelPCA\n\nX, y = make_moons(n_samples=100, random_state=123)\nscikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)\nX_skernpca = scikit_kpca.fit_transform(X)\n\nplt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],\n color='red', marker='^', alpha=0.5)\nplt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],\n color='blue', marker='o', alpha=0.5)\n\nplt.xlabel('PC1')\nplt.ylabel('PC2')\nplt.tight_layout()\n# plt.savefig('./figures/scikit_kpca.png', dpi=300)\nplt.show()", "<br>\n<br>\nSummary\n..." ]
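As a compact, self-contained recap of the chapter's manual PCA recipe (standardize → covariance matrix → eigendecomposition → sort → project), here is a numpy-only sketch; the synthetic data and shapes are illustrative, not the Wine dataset:

```python
import numpy as np

rng = np.random.RandomState(0)
# Toy data standing in for the standardized Wine features:
# 200 samples, 4 correlated features.
X = rng.randn(200, 4) @ rng.randn(4, 4)
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Covariance matrix and its eigendecomposition; eigh is the right tool
# for symmetric (Hermitian) matrices and always returns real eigenvalues.
cov_mat = np.cov(X_std.T)
eigen_vals, eigen_vecs = np.linalg.eigh(cov_mat)
order = np.argsort(eigen_vals)[::-1]              # sort descending
eigen_vals, eigen_vecs = eigen_vals[order], eigen_vecs[:, order]

# Explained-variance ratios sum to one, as in the bar plots above.
var_exp = eigen_vals / eigen_vals.sum()
assert np.isclose(var_exp.sum(), 1.0)

# Project onto the top-2 eigenvectors; the sample covariance of the
# projected data is diagonal, with the top-2 eigenvalues on the diagonal.
W = eigen_vecs[:, :2]
X_pca = X_std @ W
assert np.allclose(X_pca.T @ X_pca / (len(X_std) - 1),
                   np.diag(eigen_vals[:2]), atol=1e-8)
```

Note that `np.linalg.eigh` returns eigenvalues in ascending order, which is why the explicit descending sort here plays the role of the `eigen_pairs` sorting in the notebook.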
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
IntelPNI/brainiak
examples/reprsimil/bayesian_rsa_example.ipynb
apache-2.0
[ "This demo shows how to use the Bayesian Representational Similarity Analysis method in brainiak with a simulated dataset.\nThe brainiak.reprsimil.brsa module has two estimators named BRSA and GBRSA. Both of them can be used to estimate representational similarity from a single participant, but with some differences in the assumptions of the models and fitting procedure. The basic usages are similar. We now generally recommend using GBRSA over BRSA in most cases. This document shows how to use BRSA for the most part. At the end of the document, the usage of GBRSA is shown as well. You are encouraged to go through the example and try both estimators for your data.\nThe group_brsa_example.ipynb in the same directory demonstrates how to use GBRSA to estimate shared representational structure from multiple participants.\nPlease note that the model assumes that the covariance matrix U which all $\beta_i$ follow describes a multivariate Gaussian distribution that is zero-meaned. This assumption does not imply that there must be both positive and negative responses across voxels.\nHowever, it means that (Group) Bayesian RSA treats the task-evoked activity against the baseline BOLD level as signal, while in other RSA tools the deviation of task-evoked activity in each voxel from the average task-evoked activity level across voxels may be considered as the signal of interest.\nDue to this assumption in (G)BRSA, a relatively high degree of similarity may be expected when the activity patterns of two task conditions share a strong sensory-driven component. 
When two task conditions elicit exactly the same activity pattern but only differ in their global magnitudes, under the assumption in (G)BRSA, their similarity is 1; under the assumption that only the deviation of a pattern from the average pattern is the signal of interest (which is currently not supported by (G)BRSA), their similarity would be -1 because the deviations of the two patterns from their average pattern are exactly opposite.\nLoad some packages which we will use in this demo.\nIf you see an error related to loading any package, you can install that package. For example, if you use Anaconda, you can use \"conda install matplotlib\" to install matplotlib.\nNotice that due to the current implementation, you need to import either prior_GP_var_inv_gamma or prior_GP_var_half_cauchy from the brsa module, in order to use the smooth prior imposed onto SNR in BRSA (see below). They are forms of priors imposed on the variance of the Gaussian Process prior on log(SNR). (If you think these sentences are confusing, just import them like below and forget about this).", "%matplotlib inline\nimport scipy.stats\nimport scipy.spatial.distance as spdist\nimport numpy as np\nfrom brainiak.reprsimil.brsa import BRSA, prior_GP_var_inv_gamma, prior_GP_var_half_cauchy\nfrom brainiak.reprsimil.brsa import GBRSA\nimport brainiak.utils.utils as utils\nimport matplotlib.pyplot as plt\nimport logging\nnp.random.seed(10)", "You might want to keep a log of the output.", "logging.basicConfig(\n level=logging.DEBUG,\n filename='brsa_example.log',\n format='%(relativeCreated)6d %(threadName)s %(message)s')", "We want to simulate some data in which each voxel responds to different task conditions differently, but following a common covariance structure.\nLoad an example design matrix.\nThe user should prepare their design matrix with their favorite software, such as using 3dDeconvolve of AFNI, or using SPM or FSL.\nThe design matrix reflects your belief of how the fMRI signal should respond to a task (if a voxel does 
respond).\nThe common assumption is that a neural event that you are interested in will elicit a slow hemodynamic response in some voxels. The response peaks around 4-6 seconds after the event onset and dies down more than 12 seconds after the event. Therefore, typically you convolve a time series A, composed of delta (stem) functions reflecting the time of each neural event belonging to the same category (e.g. all trials in which a participant sees a face), with a hemodynamic response function B, to form the hypothetical response of any voxel to this type of neural event.\nFor each type of event, such a convolved time course can be generated. These time courses, put together, are called the design matrix, reflecting what we believe a temporal signal would look like, if it exists in any voxel.\nOur goal is to figure out how the (spatial) response patterns of a population of voxels (in a Region of Interest, ROI) are similar or dissimilar to different types of tasks (e.g., watching faces vs. houses, watching different categories of animals, different conditions of a cognitive task). So we need the design matrix in order to estimate the similarity matrix we are interested in.\nWe can use the utility called ReadDesign in brainiak.utils to read a design matrix generated from AFNI. For a design matrix saved as a Matlab data file by SPM or other toolboxes, you can use scipy.io.loadmat('YOURFILENAME') and extract the design matrix from the dictionary returned. Basically, the Bayesian RSA in this toolkit just needs a numpy array of size {time points} * {conditions}.\nYou can also generate a design matrix using the function gen_design which is in brainiak.utils. 
It takes in (names of) event timing files in AFNI or FSL format (denoting onset, duration, and weight for each event belonging to the same condition) and outputs the design matrix as a numpy array.\nIn typical fMRI analysis, some nuisance regressors such as head motion, baseline time series and slow drift are also entered into the regression. In using our method, you should not include such regressors in the design matrix, because the spatial spread of such nuisance regressors might be quite different from the spatial spread of task-related signal. Including such nuisance regressors in the design matrix might influence the pseudo-SNR map, which in turn influences the estimation of the shared covariance matrix. \nWe concatenate the design matrix 3 times (n_run below), mimicking 3 runs of identical timing", "design = utils.ReadDesign(fname=\"example_design.1D\")\n\nn_run = 3\ndesign.n_TR = design.n_TR * n_run\ndesign.design_task = np.tile(design.design_task[:,:-1],\n [n_run, 1])\n# The last \"condition\" in the design matrix\n# codes for trials in which subjects made an error.\n# We ignore it here.\n\n\nfig = plt.figure(num=None, figsize=(12, 3),\n dpi=150, facecolor='w', edgecolor='k')\nplt.plot(design.design_task)\nplt.ylim([-0.2, 0.4])\nplt.title('hypothetical fMRI response time courses '\n 'of all conditions\\n'\n '(design matrix)')\nplt.xlabel('time')\nplt.show()\n\nn_C = np.size(design.design_task, axis=1)\n# The total number of conditions.\nROI_edge = 15\n# We simulate an \"ROI\" of a rectangular shape\nn_V = ROI_edge**2 * 2\n# The total number of simulated voxels\nn_T = design.n_TR\n# The total number of time points,\n# after concatenating all fMRI runs\n", "simulate data: noise + signal\nFirst, we start with noise, which is a Gaussian Process in space and AR(1) in time", "noise_bot = 0.5\nnoise_top = 5.0\nnoise_level = np.random.rand(n_V) * \\\n (noise_top - noise_bot) + noise_bot\n# The standard deviation of the noise is in the range of [noise_bot, noise_top]\n# In fact, we simulate
autocorrelated noise with AR(1) model. So the noise_level reflects\n# the independent additive noise at each time point (the \"fresh\" noise)\n\n# AR(1) coefficient\nrho1_top = 0.8\nrho1_bot = -0.2\nrho1 = np.random.rand(n_V) \\\n * (rho1_top - rho1_bot) + rho1_bot\n\nnoise_smooth_width = 10.0\ncoords = np.mgrid[0:ROI_edge, 0:ROI_edge*2, 0:1]\ncoords_flat = np.reshape(coords,[3, n_V]).T\ndist2 = spdist.squareform(spdist.pdist(coords_flat, 'sqeuclidean'))\n\n# generating noise\nK_noise = noise_level[:, np.newaxis] \\\n * (np.exp(-dist2 / noise_smooth_width**2 / 2.0) \\\n + np.eye(n_V) * 0.1) * noise_level\n# We make spatially correlated noise by generating\n# noise at each time point from a Gaussian Process\n# defined over the coordinates.\nplt.pcolor(K_noise)\nplt.colorbar()\nplt.xlim([0, n_V])\nplt.ylim([0, n_V])\nplt.title('Spatial covariance matrix of noise')\nplt.show()\nL_noise = np.linalg.cholesky(K_noise)\nnoise = np.zeros([n_T, n_V])\nnoise[0, :] = np.dot(L_noise, np.random.randn(n_V))\\\n / np.sqrt(1 - rho1**2)\nfor i_t in range(1, n_T):\n noise[i_t, :] = noise[i_t - 1, :] * rho1 \\\n + np.dot(L_noise,np.random.randn(n_V))\n# For each voxel, the noise follows AR(1) process:\n# fresh noise plus a dampened version of noise at\n# the previous time point.\n# In this simulation, we also introduced spatial smoothness resembling a Gaussian Process.\n# Notice that we simulated in this way only to introduce spatial noise correlation.\n# This does not represent the assumption of the form of spatial noise correlation in the model.\n# Instead, the model is designed to capture structured noise correlation manifested\n# as a few spatial maps each modulated by a time course, which appears as spatial noise correlation. 
\nfig = plt.figure(num=None, figsize=(12, 2), dpi=150,\n facecolor='w', edgecolor='k')\nplt.plot(noise[:, 0])\nplt.title('noise in an example voxel')\nplt.show()", "Then, we simulate signals, assuming the magnitude of response to each condition follows a common covariance matrix.\nOur model allows imposing a Gaussian Process prior on the log(SNR) of each voxel.\nWhat this means is that SNR tends to be smooth and local, but betas (response amplitudes of each voxel to each condition) are not necessarily correlated in space. Intuitively, this is based on the assumption that voxels coding for related aspects of a task tend to be clustered (instead of isolated).\nOur Gaussian Process is defined on both the coordinates of a voxel and its mean intensity.\nThis means that voxels that are close together AND have similar intensity should have similar SNR levels. Therefore, voxels of white matter but adjacent to gray matter do not necessarily have high SNR levels.\nIf you have an ROI saved as a binary Nifti file, say, with name 'ROI.nii',\nthen you can use the nibabel package to load the ROI and the following example code to retrieve the coordinates of voxels.\nNote: the following code won't work if you just installed Brainiak and try this demo, because ROI.nii does not exist. It just serves as an example for you to retrieve coordinates of voxels in an ROI.
You can use the ROI_coords for the argument coords in BRSA.fit()", "# import nibabel\n# ROI = nibabel.load('ROI.nii')\n# I,J,K = ROI.shape \n# all_coords = np.zeros((I, J, K, 3)) \n# all_coords[...,0] = np.arange(I)[:, np.newaxis, np.newaxis] \n# all_coords[...,1] = np.arange(J)[np.newaxis, :, np.newaxis] \n# all_coords[...,2] = np.arange(K)[np.newaxis, np.newaxis, :] \n# ROI_coords = nibabel.affines.apply_affine(\n# ROI.affine, all_coords[ROI.get_data().astype(bool)])\n", "Let's keep in mind of the pattern of the ideal covariance / correlation below and see how well BRSA can recover their patterns.", "# ideal covariance matrix\nideal_cov = np.zeros([n_C, n_C])\nideal_cov = np.eye(n_C) * 0.6\nideal_cov[8:12, 8:12] = 0.6\nfor cond in range(8, 12):\n ideal_cov[cond,cond] = 1\n\nfig = plt.figure(num=None, figsize=(4, 4), dpi=100)\nplt.pcolor(ideal_cov)\nplt.colorbar()\nplt.xlim([0, 16])\nplt.ylim([0, 16])\nax = plt.gca()\nax.set_aspect(1)\nplt.title('ideal covariance matrix')\nplt.show()\n\nstd_diag = np.diag(ideal_cov)**0.5\nideal_corr = ideal_cov / std_diag / std_diag[:, None]\nfig = plt.figure(num=None, figsize=(4, 4), dpi=100)\nplt.pcolor(ideal_corr)\nplt.colorbar()\nplt.xlim([0, 16])\nplt.ylim([0, 16])\nax = plt.gca()\nax.set_aspect(1)\nplt.title('ideal correlation matrix')\nplt.show()", "In the following, pseudo-SNR is generated from a Gaussian Process defined on a \"rectangular\" ROI, just for simplicity of code", "L_full = np.linalg.cholesky(ideal_cov) \n\n# generating signal\nsnr_level = 1.0\n# Notice that accurately speaking this is not SNR.\n# The magnitude of signal depends not only on beta but also on x.\n# (noise_level*snr_level)**2 is the factor multiplied\n# with ideal_cov to form the covariance matrix from which\n# the response amplitudes (beta) of a voxel are drawn from.\n\ntau = 1.0\n# magnitude of Gaussian Process from which the log(SNR) is drawn\nsmooth_width = 3.0\n# spatial length scale of the Gaussian Process, unit: voxel\ninten_kernel = 4.0\n# 
intensity length scale of the Gaussian Process\n# Slightly counter-intuitively, if this parameter is very large,\n# say, much larger than the range of intensities of the voxels,\n# then the smoothness has much small dependency on the intensity.\n\n\ninten = np.random.rand(n_V) * 20.0\n# For simplicity, we just assume that the intensity\n# of all voxels are uniform distributed between 0 and 20\n# parameters of Gaussian process to generate pseuso SNR\n# For curious user, you can also try the following commond\n# to see what an example snr map might look like if the intensity\n# grows linearly in one spatial direction\n\n# inten = coords_flat[:,0] * 2\n\n\ninten_tile = np.tile(inten, [n_V, 1])\ninten_diff2 = (inten_tile - inten_tile.T)**2\n\nK = np.exp(-dist2 / smooth_width**2 / 2.0 \n - inten_diff2 / inten_kernel**2 / 2.0) * tau**2 \\\n + np.eye(n_V) * tau**2 * 0.001\n# A tiny amount is added to the diagonal of\n# the GP covariance matrix to make sure it can be inverted\nL = np.linalg.cholesky(K)\nsnr = np.abs(np.dot(L, np.random.randn(n_V))) * snr_level\nsqrt_v = noise_level * snr\nbetas_simulated = np.dot(L_full, np.random.randn(n_C, n_V)) * sqrt_v\nsignal = np.dot(design.design_task, betas_simulated)\n\n\nY = signal + noise + inten\n# The data to be fed to the program.\n\n\nfig = plt.figure(num=None, figsize=(4, 4), dpi=100)\nplt.pcolor(np.reshape(snr, [ROI_edge, ROI_edge*2]))\nplt.colorbar()\nax = plt.gca()\nax.set_aspect(1)\nplt.title('pseudo-SNR in a rectangular \"ROI\"')\nplt.show()\n\nidx = np.argmin(np.abs(snr - np.median(snr)))\n# choose a voxel of medium level SNR.\nfig = plt.figure(num=None, figsize=(12, 4), dpi=150,\n facecolor='w', edgecolor='k')\nnoise_plot, = plt.plot(noise[:,idx],'g')\nsignal_plot, = plt.plot(signal[:,idx],'b')\nplt.legend([noise_plot, signal_plot], ['noise', 'signal'])\nplt.title('simulated data in an example voxel'\n ' with pseudo-SNR of {}'.format(snr[idx]))\nplt.xlabel('time')\nplt.show()\n\nfig = plt.figure(num=None, 
figsize=(12, 4), dpi=150,\n facecolor='w', edgecolor='k')\ndata_plot, = plt.plot(Y[:,idx],'r')\nplt.legend([data_plot], ['observed data of the voxel'])\nplt.xlabel('time')\nplt.show()\n\nidx = np.argmin(np.abs(snr - np.max(snr)))\n# display the voxel of the highest level SNR.\nfig = plt.figure(num=None, figsize=(12, 4), dpi=150,\n facecolor='w', edgecolor='k')\nnoise_plot, = plt.plot(noise[:,idx],'g')\nsignal_plot, = plt.plot(signal[:,idx],'b')\nplt.legend([noise_plot, signal_plot], ['noise', 'signal'])\nplt.title('simulated data in the voxel with the highest'\n ' pseudo-SNR of {}'.format(snr[idx]))\nplt.xlabel('time')\nplt.show()\n\nfig = plt.figure(num=None, figsize=(12, 4), dpi=150,\n facecolor='w', edgecolor='k')\ndata_plot, = plt.plot(Y[:,idx],'r')\nplt.legend([data_plot], ['observed data of the voxel'])\nplt.xlabel('time')\nplt.show()\n", "The reason that the pseudo-SNRs in the example voxels are not too small, while the signal looks much smaller is because we happen to have low amplitudes in our design matrix. The true SNR depends on both the amplitudes in design matrix and the pseudo-SNR. Therefore, be aware that pseudo-SNR does not directly reflects how much signal the data have, but rather a map indicating the relative strength of signal in differerent voxels.\nWhen you have multiple runs, the noise won't be correlated between runs. Therefore, you should tell BRSA when is the onset of each scan.\nNote that the data (variable Y above) you feed to BRSA is the concatenation of data from all runs along the time dimension, as a 2-D matrix of time x space", "scan_onsets = np.int32(np.linspace(0, design.n_TR,num=n_run + 1)[: -1])\nprint('scan onsets: {}'.format(scan_onsets))", "Fit Bayesian RSA to our simulated data\nThe nuisance regressors in typical fMRI analysis (such as head motion signal) are replaced by principal components estimated from residuals after subtracting task-related response. 
n_nureg tells the model how many principal components to keep from the residual as nuisance regressors, in order to account for spatial correlation in noise. \nIf you prefer not using this approach based on principal components of residuals, you can set auto_nuisance=False, and optionally provide your own nuisance regressors as nuisance argument to BRSA.fit(). In practice, we find that the result is much better with auto_nuisance=True.", "\nbrsa = BRSA(GP_space=True, GP_inten=True)\n# Initiate an instance, telling it\n# that we want to impose Gaussian Process prior\n# over both space and intensity.\n\nbrsa.fit(X=Y, design=design.design_task,\n coords=coords_flat, inten=inten, scan_onsets=scan_onsets)\n\n# The data to fit should be given to the argument X.\n# Design matrix goes to design. And so on.\n", "We can have a look at the estimated similarity in matrix brsa.C_.\nWe can also compare the ideal covariance above with the one recovered, brsa.U_", "fig = plt.figure(num=None, figsize=(4, 4), dpi=100)\nplt.pcolor(brsa.C_, vmin=-0.1, vmax=1)\nplt.xlim([0, n_C])\nplt.ylim([0, n_C])\nplt.colorbar()\nax = plt.gca()\nax.set_aspect(1)\nplt.title('Estimated correlation structure\\n shared between voxels\\n'\n 'This constitutes the output of Bayesian RSA\\n')\nplt.show()\n\nfig = plt.figure(num=None, figsize=(4, 4), dpi=100)\nplt.pcolor(brsa.U_)\nplt.xlim([0, 16])\nplt.ylim([0, 16])\nplt.colorbar()\nax = plt.gca()\nax.set_aspect(1)\nplt.title('Estimated covariance structure\\n shared between voxels\\n')\nplt.show()", "In contrast, we can have a look of the similarity matrix based on Pearson correlation between point estimates of betas of different conditions.\nThis is what vanila RSA might give", "regressor = np.insert(design.design_task,\n 0, 1, axis=1)\nbetas_point = np.linalg.lstsq(regressor, Y)[0]\npoint_corr = np.corrcoef(betas_point[1:, :])\npoint_cov = np.cov(betas_point[1:, :])\nfig = plt.figure(num=None, figsize=(4, 4), dpi=100)\nplt.pcolor(point_corr, vmin=-0.1, 
vmax=1)\nplt.xlim([0, 16])\nplt.ylim([0, 16])\nplt.colorbar()\nax = plt.gca()\nax.set_aspect(1)\nplt.title('Correlation structure estimated\\n'\n 'based on point estimates of betas\\n')\nplt.show()\n\nfig = plt.figure(num=None, figsize=(4, 4), dpi=100)\nplt.pcolor(point_cov)\nplt.xlim([0, 16])\nplt.ylim([0, 16])\nplt.colorbar()\nax = plt.gca()\nax.set_aspect(1)\nplt.title('Covariance structure of\\n'\n 'point estimates of betas\\n')\nplt.show()", "We can make a comparison between the estimated SNR map and the true SNR map (normalized)", "fig = plt.figure(num=None, figsize=(5, 5), dpi=100)\nplt.pcolor(np.reshape(brsa.nSNR_, [ROI_edge, ROI_edge*2]))\nplt.colorbar()\nax = plt.gca()\nax.set_aspect(1)\nax.set_title('estimated pseudo-SNR')\nplt.show()\n\nfig = plt.figure(num=None, figsize=(5, 5), dpi=100)\nplt.pcolor(np.reshape(snr / np.exp(np.mean(np.log(snr))),\n [ROI_edge, ROI_edge*2]))\nplt.colorbar()\nax = plt.gca()\nax.set_aspect(1)\nax.set_title('true normalized pseudo-SNR')\nplt.show()\n\nRMS_BRSA = np.mean((brsa.C_ - ideal_corr)**2)**0.5\nRMS_RSA = np.mean((point_corr - ideal_corr)**2)**0.5\nprint('RMS error of Bayesian RSA: {}'.format(RMS_BRSA))\nprint('RMS error of standard RSA: {}'.format(RMS_RSA))\nprint('Recovered spatial smoothness length scale: '\n '{}, vs. true value: {}'.format(brsa.lGPspace_, smooth_width))\nprint('Recovered intensity smoothness length scale: '\n '{}, vs. true value: {}'.format(brsa.lGPinten_, inten_kernel))\nprint('Recovered standard deviation of GP prior: '\n '{}, vs. 
true value: {}'.format(brsa.bGP_, tau))", "Empirically, the smoothness tends to be overestimated when the signal is weak.\nWe can also look at how other parameters are recovered.", "\nplt.scatter(rho1, brsa.rho_)\nplt.xlabel('true AR(1) coefficients')\nplt.ylabel('recovered AR(1) coefficients')\nax = plt.gca()\nax.set_aspect(1)\nplt.show()\n\nplt.scatter(np.log(snr) - np.mean(np.log(snr)),\n np.log(brsa.nSNR_))\nplt.xlabel('true normalized log SNR')\nplt.ylabel('recovered log pseudo-SNR')\nax = plt.gca()\nax.set_aspect(1)\nplt.show()\n\n", "Even though the variation is reduced in the estimated pseudo-SNR (due to overestimation of the smoothness of the GP prior under low-SNR situations), the betas recovered by the model have higher correlation with the true betas than those from simple regression, shown below. Obviously there is shrinkage of the estimated betas, as a result of the bias-variance tradeoff. But we think such shrinkage does preserve the patterns of betas, and therefore the result is suitable to be further used for decoding purposes.", "plt.scatter(betas_simulated, brsa.beta_)\nplt.xlabel('true betas (response amplitudes)')\nplt.ylabel('recovered betas by Bayesian RSA')\nax = plt.gca()\nax.set_aspect(1)\nplt.show()\n\n\nplt.scatter(betas_simulated, betas_point[1:, :])\nplt.xlabel('true betas (response amplitudes)')\nplt.ylabel('recovered betas by simple regression')\nax = plt.gca()\nax.set_aspect(1)\nplt.show()", "Below is the singular value decomposition of the noise, and the comparison between the first two principal components of the noise and the patterns of the first two nuisance regressors returned by the model.\nThe principal components may not look exactly the same. The first principal components both capture the baseline image intensities (although they may sometimes appear counter-phase).\nOne can imagine that the choice of the number of principal components used as nuisance regressors can influence the result. If you just choose 1 or 2, perhaps only the global drift would be captured.
But including too many nuisance regressors would slow the fitting speed and might have risk of overfitting. The users might consider starting in the range of 5-20. We do not have automatic cross-validation built in. But you can use the score() function to do cross-validation and select the appropriate number. The idea here is similar to that in GLMdenoise (http://kendrickkay.net/GLMdenoise/)", "u, s, v = np.linalg.svd(noise + inten)\nplt.plot(s)\nplt.xlabel('principal component')\nplt.ylabel('singular value of unnormalized noise')\nplt.show()\n\nplt.pcolor(np.reshape(v[0,:], [ROI_edge, ROI_edge*2]))\nax = plt.gca()\nax.set_aspect(1)\nplt.title('Weights of the first principal component in unnormalized noise')\nplt.colorbar()\nplt.show()\n\n\nplt.pcolor(np.reshape(brsa.beta0_[0,:], [ROI_edge, ROI_edge*2]))\nax = plt.gca()\nax.set_aspect(1)\nplt.title('Weights of the DC component in noise')\nplt.colorbar()\nplt.show()\n\nplt.pcolor(np.reshape(inten, [ROI_edge, ROI_edge*2]))\nax = plt.gca()\nax.set_aspect(1)\nplt.title('The baseline intensity of the ROI')\nplt.colorbar()\nplt.show()\n\nplt.pcolor(np.reshape(v[1,:], [ROI_edge, ROI_edge*2]))\nax = plt.gca()\nax.set_aspect(1)\nplt.title('Weights of the second principal component in unnormalized noise')\nplt.colorbar()\nplt.show()\n\nplt.pcolor(np.reshape(brsa.beta0_[1,:], [ROI_edge, ROI_edge*2]))\nax = plt.gca()\nax.set_aspect(1)\nplt.title('Weights of the first recovered noise pattern\\n not related to DC component in noise')\nplt.colorbar()\nplt.show()\n\n", "\"Decoding\" from new data\nNow we generate a new data set, assuming signal is the same but noise is regenerated. 
We want to use the transform() function of brsa to estimate the \"design matrix\" in this new dataset.", "noise_new = np.zeros([n_T, n_V])\nnoise_new[0, :] = np.dot(L_noise, np.random.randn(n_V))\\\n / np.sqrt(1 - rho1**2)\nfor i_t in range(1, n_T):\n noise_new[i_t, :] = noise_new[i_t - 1, :] * rho1 \\\n + np.dot(L_noise,np.random.randn(n_V))\nY_new = signal + noise_new + inten\n\n\nts, ts0 = brsa.transform(Y_new,scan_onsets=scan_onsets)\n# ts, ts0 = brsa.transform(Y_new,scan_onsets=scan_onsets)\nrecovered_plot, = plt.plot(ts[:200, 8], 'b')\ndesign_plot, = plt.plot(design.design_task[:200, 8], 'g')\nplt.legend([design_plot, recovered_plot],\n ['design matrix for one condition', 'recovered time course for the condition'])\nplt.show()\n# We did not plot the whole time series for the purpose of seeing closely how much the two\n# time series overlap\n\nc = np.corrcoef(design.design_task.T, ts.T)\n\n\n# plt.pcolor(c[0:n_C, n_C:],vmin=-0.5,vmax=1)\nplt.pcolor(c[0:16, 16:],vmin=-0.5,vmax=1)\nax = plt.gca()\nax.set_aspect(1)\nplt.title('correlation between true design matrix \\nand the recovered task-related activity')\nplt.colorbar()\nplt.xlabel('recovered task-related activity')\nplt.ylabel('true design matrix')\nplt.show()\n\n\n# plt.pcolor(c[n_C:, n_C:],vmin=-0.5,vmax=1)\nplt.pcolor(c[16:, 16:],vmin=-0.5,vmax=1)\nax = plt.gca()\nax.set_aspect(1)\nplt.title('correlation within the recovered task-related activity')\nplt.colorbar()\nplt.show()\n\n", "Model selection by cross-validataion:\nYou can compare different models by cross-validating the parameters of one model learnt from some training data\non some testing data. BRSA provides a score() function, which provides you a pair of cross-validated log likelihood\nfor testing data. The first value is the cross-validated log likelihood of the model you have specified. 
The second\nvalue is for a null model which assumes everything else is the same except that there is no task-related activity.\nNotice that comparing the score of your model of interest against its corresponding null model is not the only way to compare models. You might also want to compare against a model using the same design matrix, but a different rank (especially rank 1, which means all task conditions have the same response pattern, only differing in magnitude).\nIn general, in the context of BRSA, a model means the timing of each event and the way these events are grouped, together with other trivial parameters such as the rank of the covariance matrix and the number of nuisance regressors. All these parameters can influence model performance.\nIn the future, we will provide an interface to test the performance of a model with a predefined similarity matrix or covariance matrix.", "[score, score_null] = brsa.score(X=Y_new, design=design.design_task, scan_onsets=scan_onsets)\nprint(\"Score of full model based on the correct design matrix, assuming {} nuisance\"\n \" components in the noise: {}\".format(brsa.n_nureg_, score))\nprint(\"Score of a null model with the same assumption except that there is no task-related response: {}\".format(\n score_null))\nplt.bar([0,1],[score, score_null], width=0.5)\nplt.ylim(np.min([score, score_null])-100, np.max([score, score_null])+100)\nplt.xticks([0,1],['Model','Null model'])\nplt.ylabel('cross-validated log likelihood')\nplt.title('cross validation on new data')\nplt.show()\n\n[score_noise, score_noise_null] = brsa.score(X=noise_new+inten, design=design.design_task, scan_onsets=scan_onsets)\nprint(\"Score of full model for noise, based on the correct design matrix, assuming {} nuisance\"\n \" components in the noise: {}\".format(brsa.n_nureg_, score_noise))\nprint(\"Score of a null model for noise: {}\".format(\n score_noise_null))\nplt.bar([0,1],[score_noise, score_noise_null], width=0.5)\nplt.ylim(np.min([score_noise,
score_noise_null])-100, np.max([score_noise, score_noise_null])+100)\nplt.xticks([0,1],['Model','Null model'])\nplt.ylabel('cross-validated log likelihood')\nplt.title('cross validation on noise')\nplt.show()", "As can be seen above, the model with the correct design matrix explains new data with signals generated from the true model better than the null model, but explains pure noise worse than the null model.\nWe can also try the version which marginalizes SNR and rho for each voxel.\nThis version is intended for analyzing data of a group of participants and estimating their shared similarity matrix. But it also allows analyzing a single participant.", "\ngbrsa = GBRSA(nureg_method='PCA', auto_nuisance=True, logS_range=1,\n anneal_speed=20, n_iter=50)\n# Initiate an instance of the marginalized\n# version, which integrates over the SNR and\n# noise parameters of each voxel.\n\ngbrsa.fit(X=Y, design=design.design_task,scan_onsets=scan_onsets)\n\n# The data to fit should be given to the argument X.\n# Design matrix goes to design.
And so on.\n\n\nplt.pcolor(np.reshape(gbrsa.nSNR_, (ROI_edge, ROI_edge*2)))\nplt.colorbar()\nax = plt.gca()\nax.set_aspect(1)\nplt.title('SNR map estimated by marginalized BRSA')\nplt.show()\n\nplt.pcolor(np.reshape(snr, (ROI_edge, ROI_edge*2)))\nax = plt.gca()\nax.set_aspect(1)\nplt.colorbar()\nplt.title('true SNR map')\nplt.show()\n\nplt.scatter(snr, gbrsa.nSNR_)\nax = plt.gca()\nax.set_aspect(1)\nplt.xlabel('simulated pseudo-SNR')\nplt.ylabel('estimated pseudo-SNR')\nplt.show()\n\nplt.scatter(np.log(snr), np.log(gbrsa.nSNR_))\nax = plt.gca()\nax.set_aspect(1)\nplt.xlabel('simulated log(pseudo-SNR)')\nplt.ylabel('estimated log(pseudo-SNR)')\nplt.show()\n\nplt.pcolor(gbrsa.U_)\nplt.colorbar()\nplt.title('covariance matrix estimated by marginalized BRSA')\nplt.show()\nplt.pcolor(ideal_cov)\nplt.colorbar()\nplt.title('true covariance matrix')\nplt.show()\n\nplt.scatter(betas_simulated, gbrsa.beta_)\nax = plt.gca()\nax.set_aspect(1)\nplt.xlabel('simulated betas')\nplt.ylabel('betas estimated by marginalized BRSA')\nplt.show()\n\nplt.scatter(rho1, gbrsa.rho_)\nax = plt.gca()\nax.set_aspect(1)\nplt.xlabel('simulated AR(1) coefficients')\nplt.ylabel('AR(1) coefficients estimated by marginalized BRSA')\nplt.show()", "We can also do \"decoding\" and cross-validating using the marginalized version in GBRSA", "# \"Decoding\"\nts, ts0 = gbrsa.transform([Y_new],scan_onsets=[scan_onsets])\n\nrecovered_plot, = plt.plot(ts[0][:200, 8], 'b')\ndesign_plot, = plt.plot(design.design_task[:200, 8], 'g')\nplt.legend([design_plot, recovered_plot],\n ['design matrix for one condition', 'recovered time course for the condition'])\nplt.show()\n# We did not plot the whole time series for the purpose of seeing closely how much the two\n# time series overlap\n\nc = np.corrcoef(design.design_task.T, ts[0].T)\n\n\nplt.pcolor(c[0:n_C, n_C:],vmin=-0.5,vmax=1)\nax = plt.gca()\nax.set_aspect(1)\nplt.title('correlation between true design matrix \\nand the recovered task-related 
activity')\nplt.colorbar()\nplt.xlabel('recovered task-related activity')\nplt.ylabel('true design matrix')\nplt.show()\n\n\nplt.pcolor(c[n_C:, n_C:],vmin=-0.5,vmax=1)\nax = plt.gca()\nax.set_aspect(1)\nplt.title('correlation within the recovered task-related activity')\nplt.colorbar()\nplt.show()\n\n# cross-validataion\n[score, score_null] = gbrsa.score(X=[Y_new], design=[design.design_task], scan_onsets=[scan_onsets])\nprint(\"Score of full model based on the correct design matrix, assuming {} nuisance\"\n \" components in the noise: {}\".format(gbrsa.n_nureg_, score))\nprint(\"Score of a null model with the same assumption except that there is no task-related response: {}\".format(\n score_null))\nplt.bar([0,1],[score[0], score_null[0]], width=0.5)\nplt.ylim(np.min([score[0], score_null[0]])-100, np.max([score[0], score_null[0]])+100)\nplt.xticks([0,1],['Model','Null model'])\nplt.ylabel('cross-validated log likelihood')\nplt.title('cross validation on new data')\nplt.show()\n\n[score_noise, score_noise_null] = gbrsa.score(X=[noise_new+inten], design=[design.design_task],\n scan_onsets=[scan_onsets])\nprint(\"Score of full model for noise, based on the correct design matrix, assuming {} nuisance\"\n \" components in the noise: {}\".format(gbrsa.n_nureg_, score_noise))\nprint(\"Score of a null model for noise: {}\".format(\n score_noise_null))\nplt.bar([0,1],[score_noise[0], score_noise_null[0]], width=0.5)\nplt.ylim(np.min([score_noise[0], score_noise_null[0]])-100, np.max([score_noise[0], score_noise_null[0]])+100)\nplt.xticks([0,1],['Model','Null model'])\nplt.ylabel('cross-validated log likelihood')\nplt.title('cross validation on noise')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jeanpat/DeepFISH
notebooks/.ipynb_checkpoints/Loading_13434_LowRes_Dataset-checkpoint.ipynb
gpl-3.0
[ "import h5py\nimport numpy as np\nimport skimage as sk\n#print sk.__version__\nfrom skimage import io\nfrom matplotlib import pyplot as plt\n\nfrom skimage import filters\nfrom skimage import feature\nfrom skimage import io\nfrom scipy import ndimage as nd\nfrom scipy import misc\n\nfrom subprocess import check_output\nprint(check_output([\"ls\", \"../dataset\"]).decode(\"utf8\"))", "Loading the dataset stored in hdf5 format", "h5f = h5py.File('../dataset/LowRes_13434_overlapping_pairs.h5','r')\npairs = h5f['dataset_1'][:]\nh5f.close()\n\nprint(pairs.shape)\nprint(pairs[0,:,:,0].dtype)\nprint(pairs[0,:,:,0].max())", "Looking at some examples", "grey = pairs[0,:,:,0]\nmask = pairs[0,:,:,1]\n%matplotlib inline\nplt.figure(figsize = (10,8))\nplt.subplot(121)\nplt.imshow(grey)\nplt.title('max='+str(grey.max()))\nplt.subplot(122)\nplt.imshow(mask, interpolation = 'nearest')", "Problem in the groundtruth label at low resolution:\nThe labels are a little noisy on the edge of the groundtruth label.", "plt.figure(figsize = (10,12))\nplt.subplot(141)\nplt.imshow(mask, interpolation = 'nearest')\nplt.subplot(142)\nplt.title('label 1')\nplt.imshow(mask == 1, interpolation = 'nearest')\nplt.subplot(143)\nplt.title('label 2')\nplt.imshow(mask == 2, interpolation = 'nearest')\nplt.subplot(144)\nplt.title('label 3')\nplt.imshow(mask == 3, interpolation = 'nearest')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.17/_downloads/82dd66e6bdf7150b8691eaa46b63bcf9/plot_read_events.ipynb
bsd-3-clause
[ "%matplotlib inline", "Reading an event file\nRead events from a file. For a more detailed guide on how to read events\nusing MNE-Python, see tut_epoching_and_averaging.", "# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n# Chris Holdgraf <choldgraf@berkeley.edu>\n#\n# License: BSD (3-clause)\n\nimport matplotlib.pyplot as plt\nimport mne\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nfname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'", "Reading events\nBelow we'll read in an events file. We suggest that this file end in\n-eve.fif. Note that we can read in the entire events file, or only\nevents corresponding to particular event types with the include and\nexclude parameters.", "events_1 = mne.read_events(fname, include=1)\nevents_1_2 = mne.read_events(fname, include=[1, 2])\nevents_not_4_32 = mne.read_events(fname, exclude=[4, 32])", "Events objects are essentially numpy arrays with three columns:\nevent_sample | previous_event_id | event_id", "print(events_1[:5], '\\n\\n---\\n\\n', events_1_2[:5], '\\n\\n')\n\nfor ind, before, after in events_1[:5]:\n print(\"At sample %d stim channel went from %d to %d\"\n % (ind, before, after))", "Plotting events\nWe can also plot events in order to visualize how events occur over the\ncourse of our recording session. Below we'll plot our three event types\nto see which ones were included.", "fig, axs = plt.subplots(1, 3, figsize=(15, 5))\n\nmne.viz.plot_events(events_1, axes=axs[0], show=False)\naxs[0].set(title=\"restricted to event 1\")\n\nmne.viz.plot_events(events_1_2, axes=axs[1], show=False)\naxs[1].set(title=\"restricted to event 1 or 2\")\n\nmne.viz.plot_events(events_not_4_32, axes=axs[2], show=False)\naxs[2].set(title=\"keep all but 4 and 32\")\nplt.setp([ax.get_xticklabels() for ax in axs], rotation=45)\nplt.tight_layout()\nplt.show()", "Writing events\nFinally, we can write events to disk. 
Remember to use the naming convention\n-eve.fif for your file.", "mne.write_events('example-eve.fif', events_1)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/inm/cmip6/models/sandbox-1/ocean.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: INM\nSource ID: SANDBOX-1\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:05\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'inm', 'sandbox-1', 'ocean')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Seawater Properties\n3. Key Properties --&gt; Bathymetry\n4. Key Properties --&gt; Nonoceanic Waters\n5. Key Properties --&gt; Software Properties\n6. Key Properties --&gt; Resolution\n7. Key Properties --&gt; Tuning Applied\n8. Key Properties --&gt; Conservation\n9. Grid\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Discretisation --&gt; Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --&gt; Tracers\n14. Timestepping Framework --&gt; Baroclinic Dynamics\n15. Timestepping Framework --&gt; Barotropic\n16. Timestepping Framework --&gt; Vertical Physics\n17. Advection\n18. Advection --&gt; Momentum\n19. Advection --&gt; Lateral Tracers\n20. Advection --&gt; Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --&gt; Momentum --&gt; Operator\n23. 
Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\n24. Lateral Physics --&gt; Tracers\n25. Lateral Physics --&gt; Tracers --&gt; Operator\n26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\n27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\n30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n32. Vertical Physics --&gt; Interior Mixing --&gt; Details\n33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n35. Uplow Boundaries --&gt; Free Surface\n36. Uplow Boundaries --&gt; Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\n39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\n40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\n41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the ocean.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the ocean component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.2. 
Eos Functional Temp\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTemperature used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n", "2.3. Eos Functional Salt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSalinity used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n", "2.4. Eos Functional Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n", "2.5. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.6. 
Ocean Specific Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.7. Ocean Reference Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date of bathymetry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. Ocean Smoothing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Source\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe source of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Nonoceanic Waters\nNon-oceanic waters treatment in ocean\n4.1. Isolated Seas\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how isolated seas are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. River Mouth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how river mouth mixing or estuary-specific treatment is performed", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. 
Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.5. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.6. Is Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.7. Thickness Level 1\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThickness of first surface ocean level (in meters)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. 
Document the relative weight given to climate performance metrics versus process-oriented metrics, and on the possible conflicts with parameterization-level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state (e.g. THC, AABW, regional means etc.) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the ocean component\n8.1. 
Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBrief description of conservation methodology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Consistency Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Was Flux Correction Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes conservation involve flux correction ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of grid in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical coordinates in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Partial Steps\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUsing partial steps with Z or Z* vertical coordinate in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11. Grid --&gt; Discretisation --&gt; Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. 
Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Staggering\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal grid staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. 
Diurnal Cycle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiurnal cycle type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Timestepping Framework --&gt; Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time stepping scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14. Timestepping Framework --&gt; Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBaroclinic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Timestepping Framework --&gt; Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime splitting method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBarotropic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Timestepping Framework --&gt; Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of vertical time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of advection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Advection --&gt; Momentum\nProperties of lateral momentum advection scheme in ocean\n18.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of lateral momentum advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n", "18.2. Scheme Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean momentum advection scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. ALE\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nUsing ALE for vertical advection ? 
(if vertical coordinates are sigma)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19. Advection --&gt; Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19.3. Effective Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.5. 
Passive Tracers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPassive tracers advected", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.6. Passive Tracers Advection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs advection of passive tracers different from that of active tracers ? If so, describe.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Advection --&gt; Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21. Lateral Physics\nOcean lateral physics\n21.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lateral physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transient eddy representation in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n", "22. Lateral Physics --&gt; Momentum --&gt; Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics momentum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics momentum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.3. 
Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics momentum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momentum scheme in the ocean\n23.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics momentum eddy viscosity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.4. Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.5. Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24. Lateral Physics --&gt; Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24.2. Submesoscale Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "25. Lateral Physics --&gt; Tracers --&gt; Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusivity Coeff\nProperties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean\n26.1. 
Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics tracers eddy diffusivity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.5. 
Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Constant Val\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.3. Flux Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV flux (advective or skew)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. 
Added Diffusivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vertical physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean *\n30.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of tracers coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean *\n31.1. 
Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of momentum coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32. 
Vertical Physics --&gt; Interior Mixing --&gt; Details\n*Properties of interior mixing in the ocean *\n32.1. Convection Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical convection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.2. Tide Induced Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.3. Double Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there double diffusion", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.4. Shear Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there interior shear mixing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33. 
Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n*Properties of interior mixing on tracers in the ocean *\n33.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "33.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of tracers coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. 
Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n*Properties of interior mixing on momentum in the ocean *\n34.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "34.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "34.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of momentum coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35. Uplow Boundaries --&gt; Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of free surface in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFree surface scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35.3. Embedded Seaice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the sea-ice embedded in the ocean model (instead of levitating) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36. Uplow Boundaries --&gt; Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Type Of Bbl\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.3. Lateral Mixing Coef\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "36.4. Sill Overflow\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any specific treatment of sill overflows", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37. Boundary Forcing\nOcean boundary forcing\n37.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of boundary forcing in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.2. 
Surface Pressure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.3. Momentum Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.4. Tracers Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.5. Wave Effects\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.6. River Runoff Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.7. Geothermal Heating\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum bottom friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum lateral friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. 
Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of sunlight penetration scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40.2. Ocean Colour\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "40.3. Extinction Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe and list extinction depths for sunlight penetration scheme (if applicable).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. 
From Sea Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.3. Forced Mode Restoring\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", 
"code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/tfx
docs/tutorials/tfx/penguin_simple.ipynb
apache-2.0
[ "Copyright 2021 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Simple TFX Pipeline Tutorial using Penguin dataset\nA Short tutorial to run a simple TFX pipeline.\nNote: We recommend running this tutorial in a Colab notebook, with no setup required! Just click \"Run in Google Colab\".\n<div class=\"devsite-table-wrapper\"><table class=\"tfo-notebook-buttons\" align=\"left\">\n<td><a target=\"_blank\" href=\"https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple\">\n<img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\"/>View on TensorFlow.org</a></td>\n<td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/penguin_simple.ipynb\">\n<img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Run in Google Colab</a></td>\n<td><a target=\"_blank\" href=\"https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/penguin_simple.ipynb\">\n<img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">View source on GitHub</a></td>\n<td><a href=\"https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/penguin_simple.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a></td>\n</table></div>\n\nIn this notebook-based tutorial, we will create and run a TFX pipeline\nfor a simple classification model.\nThe pipeline will consist of three essential TFX components: 
ExampleGen,\nTrainer and Pusher. The pipeline includes the most minimal ML workflow like\nimporting data, training a model and exporting the trained model.\nPlease see\nUnderstanding TFX Pipelines\nto learn more about various concepts in TFX.\nSet Up\nWe first need to install the TFX Python package and download\nthe dataset which we will use for our model.\nUpgrade Pip\nTo avoid upgrading Pip in a system when running locally,\ncheck to make sure that we are running in Colab.\nLocal systems can of course be upgraded separately.", "try:\n import colab\n !pip install --upgrade pip\nexcept:\n pass", "Install TFX", "!pip install -U tfx", "Did you restart the runtime?\nIf you are using Google Colab, the first time that you run\nthe cell above, you must restart the runtime by clicking\nabove \"RESTART RUNTIME\" button or using \"Runtime > Restart\nruntime ...\" menu. This is because of the way that Colab\nloads packages.\nCheck the TensorFlow and TFX versions.", "import tensorflow as tf\nprint('TensorFlow version: {}'.format(tf.__version__))\nfrom tfx import v1 as tfx\nprint('TFX version: {}'.format(tfx.__version__))", "Set up variables\nThere are some variables used to define a pipeline. You can customize these\nvariables as you want. By default all output from the pipeline will be\ngenerated under the current directory.", "import os\n\nPIPELINE_NAME = \"penguin-simple\"\n\n# Output directory to store artifacts generated from the pipeline.\nPIPELINE_ROOT = os.path.join('pipelines', PIPELINE_NAME)\n# Path to a SQLite DB file to use as an MLMD storage.\nMETADATA_PATH = os.path.join('metadata', PIPELINE_NAME, 'metadata.db')\n# Output directory where created models from the pipeline will be exported.\nSERVING_MODEL_DIR = os.path.join('serving_model', PIPELINE_NAME)\n\nfrom absl import logging\nlogging.set_verbosity(logging.INFO) # Set default logging level.", "Prepare example data\nWe will download the example dataset for use in our TFX pipeline. 
The dataset we\nare using is\nPalmer Penguins dataset\nwhich is also used in other\nTFX examples.\nThere are four numeric features in this dataset:\n\nculmen_length_mm\nculmen_depth_mm\nflipper_length_mm\nbody_mass_g\n\nAll features were already normalized to have range [0,1]. We will build a\nclassification model which predicts the species of penguins.\nBecause TFX ExampleGen reads inputs from a directory, we need to create a\ndirectory and copy the dataset to it.", "import urllib.request\nimport tempfile\n\nDATA_ROOT = tempfile.mkdtemp(prefix='tfx-data') # Create a temporary directory.\n_data_url = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'\n_data_filepath = os.path.join(DATA_ROOT, \"data.csv\")\nurllib.request.urlretrieve(_data_url, _data_filepath)", "Take a quick look at the CSV file.", "!head {_data_filepath}", "You should be able to see five values. species is one of 0, 1 or 2, and all\nother features should have values between 0 and 1.\nCreate a pipeline\nTFX pipelines are defined using Python APIs. We will define a pipeline which\nconsists of the following three components.\n- CsvExampleGen: Reads in data files and converts them to TFX internal format\nfor further processing. There are multiple\nExampleGens for various\nformats. In this tutorial, we will use CsvExampleGen which takes CSV file input.\n- Trainer: Trains an ML model.\nThe Trainer component requires\nmodel definition code from users. You can use TensorFlow APIs to specify how to\ntrain a model and save it in SavedModel format.\n- Pusher: Copies the trained model outside of the TFX pipeline.\nThe Pusher component can be thought\nof as a deployment process of the trained ML model.\nBefore actually defining the pipeline, we need to write the model code for the\nTrainer component first.\nWrite model training code\nWe will create a simple DNN model for classification using TensorFlow Keras\nAPI. 
This model training code will be saved to a separate file.\nIn this tutorial we will use\nGeneric Trainer\nof TFX which support Keras-based models. You need to write a Python file\ncontaining run_fn function, which is the entrypoint for the Trainer\ncomponent.", "_trainer_module_file = 'penguin_trainer.py'\n\n%%writefile {_trainer_module_file}\n\nfrom typing import List\nfrom absl import logging\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow_transform.tf_metadata import schema_utils\n\nfrom tfx import v1 as tfx\nfrom tfx_bsl.public import tfxio\nfrom tensorflow_metadata.proto.v0 import schema_pb2\n\n_FEATURE_KEYS = [\n 'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'\n]\n_LABEL_KEY = 'species'\n\n_TRAIN_BATCH_SIZE = 20\n_EVAL_BATCH_SIZE = 10\n\n# Since we're not generating or creating a schema, we will instead create\n# a feature spec. Since there are a fairly small number of features this is\n# manageable for this dataset.\n_FEATURE_SPEC = {\n **{\n feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)\n for feature in _FEATURE_KEYS\n },\n _LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)\n}\n\n\ndef _input_fn(file_pattern: List[str],\n data_accessor: tfx.components.DataAccessor,\n schema: schema_pb2.Schema,\n batch_size: int = 200) -> tf.data.Dataset:\n \"\"\"Generates features and label for training.\n\n Args:\n file_pattern: List of paths or patterns of input tfrecord files.\n data_accessor: DataAccessor for converting input to RecordBatch.\n schema: schema of the input data.\n batch_size: representing the number of consecutive elements of returned\n dataset to combine in a single batch\n\n Returns:\n A dataset that contains (features, indices) tuple where features is a\n dictionary of Tensors, and indices is a single Tensor of label indices.\n \"\"\"\n return data_accessor.tf_dataset_factory(\n file_pattern,\n tfxio.TensorFlowDatasetOptions(\n batch_size=batch_size, label_key=_LABEL_KEY),\n 
schema=schema).repeat()\n\n\ndef _build_keras_model() -> tf.keras.Model:\n \"\"\"Creates a DNN Keras model for classifying penguin data.\n\n Returns:\n A Keras Model.\n \"\"\"\n # The model below is built with Functional API, please refer to\n # https://www.tensorflow.org/guide/keras/overview for all API options.\n inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]\n d = keras.layers.concatenate(inputs)\n for _ in range(2):\n d = keras.layers.Dense(8, activation='relu')(d)\n outputs = keras.layers.Dense(3)(d)\n\n model = keras.Model(inputs=inputs, outputs=outputs)\n model.compile(\n optimizer=keras.optimizers.Adam(1e-2),\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=[keras.metrics.SparseCategoricalAccuracy()])\n\n model.summary(print_fn=logging.info)\n return model\n\n\n# TFX Trainer will call this function.\ndef run_fn(fn_args: tfx.components.FnArgs):\n \"\"\"Train the model based on given args.\n\n Args:\n fn_args: Holds args used to train the model as name/value pairs.\n \"\"\"\n\n # This schema is usually either an output of SchemaGen or a manually-curated\n # version provided by pipeline author. A schema can also derived from TFT\n # graph if a Transform component is used. 
In the case when either is missing,\n # `schema_from_feature_spec` could be used to generate schema from very simple\n # feature_spec, but the schema returned would be very primitive.\n schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)\n\n train_dataset = _input_fn(\n fn_args.train_files,\n fn_args.data_accessor,\n schema,\n batch_size=_TRAIN_BATCH_SIZE)\n eval_dataset = _input_fn(\n fn_args.eval_files,\n fn_args.data_accessor,\n schema,\n batch_size=_EVAL_BATCH_SIZE)\n\n model = _build_keras_model()\n model.fit(\n train_dataset,\n steps_per_epoch=fn_args.train_steps,\n validation_data=eval_dataset,\n validation_steps=fn_args.eval_steps)\n\n # The result of the training should be saved in `fn_args.serving_model_dir`\n # directory.\n model.save(fn_args.serving_model_dir, save_format='tf')", "Now you have completed all preparation steps to build a TFX pipeline.\nWrite a pipeline definition\nWe define a function to create a TFX pipeline. A Pipeline object\nrepresents a TFX pipeline which can be run using one of the pipeline\norchestration systems that TFX supports.", "def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,\n module_file: str, serving_model_dir: str,\n metadata_path: str) -> tfx.dsl.Pipeline:\n \"\"\"Creates a three component penguin pipeline with TFX.\"\"\"\n # Brings data into the pipeline.\n example_gen = tfx.components.CsvExampleGen(input_base=data_root)\n\n # Uses user-provided Python function that trains a model.\n trainer = tfx.components.Trainer(\n module_file=module_file,\n examples=example_gen.outputs['examples'],\n train_args=tfx.proto.TrainArgs(num_steps=100),\n eval_args=tfx.proto.EvalArgs(num_steps=5))\n\n # Pushes the model to a filesystem destination.\n pusher = tfx.components.Pusher(\n model=trainer.outputs['model'],\n push_destination=tfx.proto.PushDestination(\n filesystem=tfx.proto.PushDestination.Filesystem(\n base_directory=serving_model_dir)))\n\n # Following three components will be included in 
the pipeline.\n components = [\n example_gen,\n trainer,\n pusher,\n ]\n\n return tfx.dsl.Pipeline(\n pipeline_name=pipeline_name,\n pipeline_root=pipeline_root,\n metadata_connection_config=tfx.orchestration.metadata\n .sqlite_metadata_connection_config(metadata_path),\n components=components)", "Run the pipeline\nTFX supports multiple orchestrators to run pipelines.\nIn this tutorial we will use LocalDagRunner which is included in the TFX\nPython package and runs pipelines in a local environment.\nWe often call TFX pipelines \"DAGs\", which stands for directed acyclic graph.\nLocalDagRunner provides fast iterations for development and debugging.\nTFX also supports other orchestrators including Kubeflow Pipelines and Apache\nAirflow which are suitable for production use cases.\nSee\nTFX on Cloud AI Platform Pipelines\nor\nTFX Airflow Tutorial\nto learn more about other orchestration systems.\nNow we create a LocalDagRunner and pass a Pipeline object created from the\nfunction we already defined.\nThe pipeline runs directly and you can see logs for the progress of the pipeline including ML model training.", "tfx.orchestration.LocalDagRunner().run(\n _create_pipeline(\n pipeline_name=PIPELINE_NAME,\n pipeline_root=PIPELINE_ROOT,\n data_root=DATA_ROOT,\n module_file=_trainer_module_file,\n serving_model_dir=SERVING_MODEL_DIR,\n metadata_path=METADATA_PATH))", "You should see \"INFO:absl:Component Pusher is finished.\" at the end of the\nlogs if the pipeline finished successfully, because the Pusher component is the\nlast component of the pipeline.\nThe Pusher component pushes the trained model to the SERVING_MODEL_DIR which\nis the serving_model/penguin-simple directory if you did not change the\nvariables in the previous steps. 
You can see the result from the file browser\nin the left-side panel in Colab, or using the following command:", "# List files in created model directory.\n!find {SERVING_MODEL_DIR}", "Next steps\nYou can find more resources on https://www.tensorflow.org/tfx/tutorials.\nPlease see\nUnderstanding TFX Pipelines\nto learn more about various concepts in TFX." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jackovt/Presentation-Design-Patterns
examples/python-example/observe.ipynb
mit
[ "Observer/Observable in Python\nHere is a quick example of the Observer/Observable pattern in python. We use a simple model object to hold data that is to be rendered by a plot object. As the data in the model is changed, we want to notify the plot without the model knowing any details (i.e. not having a direct dependency) about the plot/graph.\nStart with the Observable\nStart with a base class, Observable. Anything that inherits from this class can be \"observed\". Essentially, this base class will hold a list of observers and manage registering and unregistering observers. Leveraging Giant Flying Saucer example.", "class Observable:\n \"\"\" Extend this class to be observable. \"\"\"\n\n def __init__(self):\n self.observers = []\n\n def register(self, observer):\n if not observer in self.observers:\n self.observers.append(observer)\n \n def unregister(self, observer):\n if observer in self.observers:\n self.observers.remove(observer)\n \n def unregister_all(self):\n if self.observers:\n del self.observers[:]\n \n def update_observers(self, *args):\n \"\"\" Walk through the list of observers and call their update method. \"\"\"\n for observer in self.observers:\n # Any observer must have this update method, see observer interface below.\n observer.update(*args)\n", "An observable 'Model'\nWe are creating a very simple class to hold x and y data. We added a convenience method to provide more control over when we actually update our observers.", "class Model(Observable):\n \"\"\" A class to hold a 2D matrix of data. \"\"\"\n\n def __init__(self,name):\n self.name = name\n self.x = []\n self.y = []\n super(Model,self).__init__()\n\n def _notify(self):\n \"\"\" Ensure the dimensions are of equal length before notifying observers. 
\"\"\"\n if len(self.x) == len(self.y):\n self.update_observers(self.__dict__)\n \n def set_x(self,x_vals):\n self.x = x_vals\n self._notify()\n\n def set_y(self,y_vals):\n self.y = y_vals\n self._notify()\n ", "The Observer\nThe base class for Observer is very simple. In Java, it would be an interface. We are using Python's Abstract Base Class with the abstract method (pretty much equivalent to a Java Interface). To observe an observable, all you need to do is implement the update method!", "from abc import ABCMeta, abstractmethod\n \nclass Observer(object):\n __metaclass__ = ABCMeta\n \n @abstractmethod\n def update(self,*args):\n \"\"\" Can take an arbitrary list of arguments. \"\"\"\n pass\n", "For this example, we just have one observer (can certainly have more than one) that is a plot object.", "import matplotlib.pyplot as pyplot\n\nclass Plot(Observer):\n \n # Just rotating through colors to help differentiate plots\n colors = ['black','blue','green','red']\n \n def __init__(self):\n self.color = -1\n \n def _color(self):\n \"\"\" Just work through set of colors \"\"\"\n color_indx = len(self.colors)-1\n self.color += 1 if self.color < color_indx else -color_indx\n return self.colors[self.color]\n \n \n def update(self,*args):\n pyplot.plot(args[0]['x'],args[0]['y'],self._color())\n pyplot.show()", "So, now we'll use our classes by creating a model instance, registering an observer, and setting the data.", "# An Observable instance\nm = Model('2D Graph')\n# An Observer instance\np = Plot()\n# Hook them together\nm.register(p)\n\n# Now set the data on observable object, just creating a list from 0 to 13 \nm.set_x(range(14))\n\n# Because we ensure the x dimension and y dimension match, the observer isn't notified until y is set.\nm.set_y(range(14))\n", "Okay, so, now let's assign y to the first 13 numbers in the fibonacci sequence. Notice, as soon as y is set, the plot is updated. 
There is no code in this fragment to explicitly re-draw the plot!", "y = [0,1]\nfor i in range(12):\n y.append(y[i]+y[i+1])\nprint(y)\n \nm.set_y(y)\n\n# Change y data to be a quadratic \nm.set_y([x**2 for x in range(14)])\n\n# And for fun, let's go exponential. And our pattern is really paying off now, graphs are redrawn as soon\n# as y is set because the observers are notified.\nm.set_y([2**x for x in range(14)])", "So, we have appropriate coupling and cohesion here. The data doesn't have to know any details about plotting. Models and users of models don't need to depend on graphing details. And the plots need only observe anything that has 2D data to provide a simple graph. They don't need to directly depend on how a model is built or the data calculated, they simply respond to updates to update their view of that data." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Kidel/In-Codice-Ratio-OCR-with-CNN
Notebooks/05_ICR-Test.ipynb
apache-2.0
[ "In Codice Ratio Convolutional Neural Network - Test (no distortions)\nIn notebooks from 5 to 7 we are going to apply everything we experimented so far on the new dataset to see what best fits it.\nImports", "import os.path\nfrom IPython.display import Image\n\nfrom util import Util\nu = Util()\n\nimport numpy as np\n# Explicit random seed for reproducibility\nnp.random.seed(1337) \n\nfrom keras.callbacks import ModelCheckpoint\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Activation, Flatten\nfrom keras.layers import Convolution2D, MaxPooling2D\nfrom keras.utils import np_utils\nfrom keras import backend as K\n\nimport dataset_generator as dataset\n\nchar = 'a'\nchar_type = 'centrato'", "Definitions", "batch_size = 512\nnb_classes = 2\nnb_epoch = 500\n# checkpoint path\ncheckpoints_filepath = \"checkpoints/05_ICR_A_no-dist_weights.best.hdf5\"\n\n# input image dimensions\nimg_rows, img_cols = 34, 56\n# number of convolutional filters to use\nnb_filters1 = 30\nnb_filters2 = 50\n# size of pooling area for max pooling\npool_size1 = (2, 2)\npool_size2 = (3, 3)\n# convolution kernel size\nkernel_size1 = (4, 4)\nkernel_size2 = (5, 5)\n# dense layer size\ndense_layer_size1 = 250\n# dropout rate\ndropout = 0.15\n# activation\nactivation = 'relu'", "Data load", "(X_train, y_train, X_test, y_test) = dataset.generate_half_labeled_half_height(char, char_type)\n\nu.plot_images(X_train[0:9], y_train[0:9], img_shape=(56,34))\n\nif K.image_dim_ordering() == 'th':\n X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)\n X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)\n input_shape = (1, img_rows, img_cols)\nelse:\n X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)\n X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)\n input_shape = (img_rows, img_cols, 1)\n\nX_train = X_train.astype('float32')\nX_test = X_test.astype('float32')\nX_train /= 255\nX_test /= 255\nprint('X_train shape:', 
X_train.shape)\nprint(X_train.shape[0], 'train samples')\nprint(X_test.shape[0], 'test samples')\n\n# convert class vectors to binary class matrices\nY_train = np_utils.to_categorical(y_train, nb_classes)\nY_test = np_utils.to_categorical(y_test, nb_classes)", "Model definition", "model = Sequential()\n\ndef initialize_network(model, dropout1=dropout, dropout2=dropout):\n model.add(Convolution2D(nb_filters1, kernel_size1[0], kernel_size1[1],\n border_mode='valid',\n input_shape=input_shape, name='covolution_1_' + str(nb_filters1) + '_filters'))\n model.add(Activation(activation, name='activation_1_' + activation))\n model.add(MaxPooling2D(pool_size=pool_size1, name='max_pooling_1_' + str(pool_size1) + '_pool_size'))\n model.add(Convolution2D(nb_filters2, kernel_size2[0], kernel_size2[1]))\n model.add(Activation(activation, name='activation_2_' + activation))\n model.add(MaxPooling2D(pool_size=pool_size2, name='max_pooling_1_' + str(pool_size2) + '_pool_size'))\n model.add(Dropout(dropout))\n\n model.add(Flatten())\n model.add(Dense(dense_layer_size1, name='fully_connected_1_' + str(dense_layer_size1) + '_neurons'))\n model.add(Activation(activation, name='activation_3_' + activation))\n model.add(Dropout(dropout))\n model.add(Dense(nb_classes, name='output_' + str(nb_classes) + '_neurons'))\n model.add(Activation('softmax', name='softmax'))\n\n model.compile(loss='categorical_crossentropy',\n optimizer='adadelta',\n metrics=['accuracy', 'precision', 'recall'])\n \n # loading weights from checkpoints \n if os.path.exists(checkpoints_filepath):\n model.load_weights(checkpoints_filepath)\n \ninitialize_network(model)", "Training and evaluation\nUsing non verbose output for training, since we already get some informations from the callback.", "# checkpoint\ncheckpoint = ModelCheckpoint(checkpoints_filepath, monitor='val_precision', verbose=1, save_best_only=True, mode='max')\ncallbacks_list = [checkpoint]\n\n# training\nprint('training model')\nhistory = 
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,\n verbose=0, validation_data=(X_test, Y_test), callbacks=callbacks_list)\n\n# ensures that the weights with the best evaluation are loaded\nif os.path.exists(checkpoints_filepath):\n model.load_weights(checkpoints_filepath)\n\n# evaluation\nprint('evaluating model')\nscore = model.evaluate(X_test, Y_test, verbose=1)\nprint('Test score:', score[0])\nprint('Test accuracy:', score[1]*100, '%')\nprint('Test error:', (1-score[2])*100, '%')\n\nu.plot_history(history)\nu.plot_history(history, metric='loss', loc='upper left')", "Inspecting the result", "# The predict_classes function outputs the highest probability class\n# according to the trained classifier for each input example.\npredicted_classes = model.predict_classes(X_test)\n\n# Check which items we got right / wrong\ncorrect_indices = np.nonzero(predicted_classes == y_test)[0]\nincorrect_indices = np.nonzero(predicted_classes != y_test)[0]", "Examples of correct predictions", "u.plot_images(X_test[correct_indices[:9]], y_test[correct_indices[:9]], \n predicted_classes[correct_indices[:9]], img_shape=(56,34))", "Examples of incorrect predictions", "u.plot_images(X_test[incorrect_indices[:9]], y_test[incorrect_indices[:9]], \n predicted_classes[incorrect_indices[:9]], img_shape=(56,34))", "Confusion matrix", "u.plot_confusion_matrix(y_test, nb_classes, predicted_classes)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
davek44/Basset
src/explore_dataset.ipynb
mit
[ "%pylab inline\n\nimport random, time\nimport numpy as np\nfrom sklearn import preprocessing\nfrom sklearn.decomposition import PCA, FastICA\nfrom sklearn.manifold import TSNE, MDS, Isomap", "Let's load in the data to a panda DataFrame. A sample of the data is often better here so the routines run faster.", "import pandas as pd\n\nactivity_file = 'encode_roadmap_act.txt'\n# activity_file = 'encode_roadmap_acc_50k.txt'\n\nactivity_df = pd.read_table(activity_file, index_col=0)", "First, let's ask some sequence-centric questions. If we compute the proportion of active targets for each sequence, what does the distribution of this stat look like?", "import matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(font_scale=1.3)\n\nseq_activity = activity_df.mean(axis=1)\n\nconstitutive_pct = sum(seq_activity > 0.5) / float(seq_activity.shape[0])\nprint '%.4f constitutively active sequences' % constitutive_pct\n\nsns.distplot(seq_activity, kde=False)\n\ncell_activity = df.mean(axis=0)\n\nca_out = open('cell_activity.txt', 'w')\nfor ci in range(len(cell_activity)):\n cols = (str(ci), df.columns[ci], str(cell_activity[ci]))\n print >> ca_out, '\\t'.join(cols)\nca_out.close()\n\nprint cell_activity.min(), cell_activity.max()\nprint cell_activity.median()\n\nsns.distplot(cell_activity, kde=False)\n\n# construct matrix\nX = np.array(df).T\n\nprint X.shape\n\n# dimensionality reduction\nmodel = Isomap(n_components=2, n_neighbors=10)\nX_dr = model.fit_transform(X)\n\n# plot PCA\nplt.figure(figsize=(16,12), dpi=100)\nplt.scatter(X_dr[:,0], X_dr[:,1], c='black', s=3)\n#plt.ylim(-10,15)\n#plt.xlim(-14,15)\n\nfor label, x, y in zip(df.columns, X_dr[:,0], X_dr[:,1]):\n plt.annotate(label, xy=(x,y), size=10)\n \nplt.tight_layout()\nplt.savefig('pca.pdf')\n\n# Isomap dimensionality reduction\nmodel = Isomap(n_components=2, n_neighbors=5)\nX_dr = model.fit_transform(X)\n\n# plot\nplt.figure(figsize=(16,12), dpi=100)\nplt.scatter(X_dr[:,0], X_dr[:,1], c='black', 
s=3)\n#plt.ylim(-10,15)\n#plt.xlim(-14,15)\n\nfor label, x, y in zip(df.columns, X_dr[:,0], X_dr[:,1]):\n plt.annotate(label, xy=(x,y), size=10)\n \nplt.tight_layout()\nplt.savefig('isomap.pdf')\n\nt0 = time.time()\n\nseq_samples = random.sample(xrange(X.shape[1]), 1000)\n\nsns.set(font_scale=0.6)\nplt.figure()\nsns.clustermap(df.iloc[seq_samples].T, metric='jaccard', cmap='Reds', linewidths=0, xticklabels=False, figsize=(13,18))\nplt.savefig('clustermap.pdf')\n\nprint 'Takes %ds' % (time.time() - t0)" ]
[ "code", "markdown", "code", "markdown", "code" ]
sdpython/ensae_teaching_cs
_doc/notebooks/td2a_ml/td2a_correction_session_4A.ipynb
mit
[ "2A.ml - Machine Learning et Marketting - correction\nClassification binaire, correction.", "%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nfrom jyquickhelper import add_notebook_menu\nadd_notebook_menu()", "Données\nTout d'abord, on récupère la base de données : Bank Marketing Data Set.", "url = \"https://archive.ics.uci.edu/ml/machine-learning-databases/00222/\"\nfile = \"bank.zip\"\nimport pyensae.datasource\ndata = pyensae.datasource.download_data(file, website=url)\n\nimport pandas\ndf = pandas.read_csv(\"bank.csv\",sep=\";\")\ndf.tail()", "Exercice 1 : prédire y en fonction des attributs\nLes données ne sont pas toutes au format numérique, il faut convertir les variables catégorielles. Pour cela, on utilise la fonction DictVectorizer.", "import numpy\nimport numpy as np\nnumerique = [ c for c,d in zip(df.columns,df.dtypes) if d == numpy.int64 ]\ncategories = [ c for c in df.columns if c not in numerique and c not in [\"y\"] ]\ntarget = \"y\"\nprint(numerique)\nprint(categories)\nprint(target)\nnum = df[ numerique ]\ncat = df[ categories ]\ntar = df[ target ]", "On traite les variables catégorielles :", "from sklearn.feature_extraction import DictVectorizer\nprep = DictVectorizer()\ncat_as_dicts = [dict(r.iteritems()) for _, r in cat.iterrows()]\ntemp = prep.fit_transform(cat_as_dicts)\ncat_exp = temp.toarray()\nprep.feature_names_", "On construit les deux matrices $(X,Y)$ = (features, classe).\nRemarque : certains modèles d'apprentissage n'acceptent pas les corrélations. Lorsqu'on crée des variables catégorielles à choix unique, les sommes des colonnes associées à une catégories fait nécessairement un. Avec deux variables catégorielles, on introduit nécessairement des corrélations. 
On pense à enlever les dernières catégories : 'contact=unknown', 'default=yes', 'education=unknown', 'housing=yes', 'job=unknown', 'loan=yes', 'marital=single', 'month=sep', 'poutcome=unknown'.", "cat_exp_df = pandas.DataFrame( cat_exp, columns = prep.feature_names_ )\nreject = ['contact=unknown', 'default=yes', 'education=unknown', 'housing=yes','job=unknown', \n 'loan=yes', 'marital=single', 'month=sep', 'poutcome=unknown']\nkeep = [ c for c in cat_exp_df.columns if c not in reject ]\ncat_exp_df_nocor = cat_exp_df [ keep ]\nX = pandas.concat ( [ num, cat_exp_df_nocor ], axis= 1)\nY = tar.apply( lambda r : (1.0 if r == \"yes\" else 0.0))\nX.shape, Y.shape", "Quelques corrélations sont très grandes malgré tout :", "import numpy\nnumpy.corrcoef(X)", "On divise en base d'apprentissage et de test :", "from sklearn.model_selection import train_test_split\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33)", "Puis on cale un modèle d'apprentissage :", "from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier\ntype_classifier = GradientBoostingClassifier\nclf = type_classifier()\nclf = clf.fit(X_train, Y_train.ravel()) ", "La méthode ravel évite de prendre en compte l'index de Y_train. La méthode train_test_split conserve dans l'index les positions initiales des élèments. Mais l'index fait que Y_train[0] ne désigne pas le premier élément de Y_train mais le premier élément du tableau initial. Y_train.ravel()[0] désigne bien le premier élément du tableau. 
On calcule ensuite la matrice de confusion (Confusion matrix) :", "from sklearn.metrics import confusion_matrix\nfor x,y in [ (X_train, Y_train), (X_test, Y_test) ]:\n yp = clf.predict(x)\n cm = confusion_matrix(y.ravel(), yp.ravel())\n print(cm)\n \nimport matplotlib.pyplot as plt\nplt.matshow(cm)\nplt.title('Confusion matrix')\nplt.colorbar()\nplt.ylabel('True label')\nplt.xlabel('Predicted label') ", "Si le model choisi est un GradientBoostingClassifier, on peut regarder l'importance des variables dans la construction du résultat. Le graphe suivant est inspiré de la page Gradient Boosting regression même si ce n'est pas une régression qui a été utilisée ici.", "import numpy as np\nfeature_name = X.columns\n\nlimit = 20\nfeature_importance = clf.feature_importances_[:20]\nfeature_importance = 100.0 * (feature_importance / feature_importance.max())\nsorted_idx = np.argsort(feature_importance)\npos = np.arange(sorted_idx.shape[0]) + .5\nplt.subplot(1, 2, 2)\nplt.barh(pos, feature_importance[sorted_idx], align='center')\nplt.yticks(pos, feature_name[sorted_idx])\nplt.xlabel('Relative Importance')\nplt.title('Variable Importance')", "Il faut tout de même rester prudent quant à l'interprétation du graphe précédent. La documentation au sujet de limportance des features précise plus ou moins comment sont calculés ces chiffres. Toutefois, lorsque des variables sont très corrélées, elles sont plus ou moins interchangeables. 
Everything then depends on how the learning algorithm picks one variable rather than another, always in the same order or in a random order.\nVariables\nWe reuse the code from session 3, Principal Component Analysis, to look at the variables.", "from sklearn.decomposition import PCA\npca = PCA(n_components=4)\nx_transpose = X.T\npca.fit(x_transpose)\n\nplt.bar(numpy.arange(len(pca.explained_variance_ratio_))+0.5, pca.explained_variance_ratio_)\nplt.title(\"Explained variance\")\n\nimport warnings\nwarnings.filterwarnings('ignore')\nX_reduced = pca.transform(x_transpose)\nplt.figure(figsize=(18,6))\nplt.scatter(X_reduced[:, 0], X_reduced[:, 1])\nfor label, x, y in zip(x_transpose.index, X_reduced[:, 0], X_reduced[:, 1]):\n plt.annotate(\n label,\n xy = (x, y), xytext = (-10, 10),\n textcoords = 'offset points', ha = 'right', va = 'bottom',\n bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0.5),\n arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'))", "The most dissimilar variables are those that contribute the most. However, looking at this plot, it appears that the data should be normalized before interpreting the PCA:", "from sklearn.preprocessing import normalize\nxnorm = normalize(x_transpose)\npca = PCA(n_components=10)\npca.fit(xnorm)\nplt.bar(numpy.arange(len(pca.explained_variance_ratio_))+0.5, pca.explained_variance_ratio_)\nplt.title(\"Explained variance\")\n\nX_reduced = pca.transform(xnorm)\nplt.figure(figsize=(18,6))\nplt.scatter(X_reduced[:, 0], X_reduced[:, 1])\nfor label, x, y in zip(x_transpose.index, X_reduced[:, 0], X_reduced[:, 1]):\n plt.annotate(\n label,\n xy = (x, y), xytext = (-10, 10),\n textcoords = 'offset points', ha = 'right', va = 'bottom',\n bbox = dict(boxstyle = 'round,pad=0.5', fc = 'yellow', alpha = 0.5),\n arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3,rad=0'))", "Much better.
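The effect of scale on PCA can be checked on synthetic data: when one feature has a much larger variance than the others, the first component captures almost nothing but that feature, which is why normalizing before interpreting the PCA matters. A small sketch on made-up data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(200, 3)
X[:, 0] *= 10.0  # one feature dominates the total variance

pca = PCA(n_components=3).fit(X)
ratios = pca.explained_variance_ratio_
# the first component absorbs almost all the variance,
# regardless of any structure in the other two features
```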
As a general rule, it is preferable to normalize the data before training a model. It is not always necessary (for decision trees, for instance), but numerically, features with very different orders of magnitude always introduce more approximation error.", "from sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import Normalizer\nfrom sklearn.ensemble import GradientBoostingClassifier\n\nclf = Pipeline([\n ('normalize', Normalizer()),\n ('classification', GradientBoostingClassifier())\n ])\nclf = clf.fit(X_train, Y_train.ravel()) \n\nfrom sklearn.metrics import confusion_matrix\nx,y = X_test, Y_test\nyp = clf.predict(x)\ncm2 = confusion_matrix(y, yp)\nprint(\"not normalized\\n\",cm)\nprint(\"normalized\\n\",cm2)", "In this case the results are more or less equivalent once the variables are normalized. This should be checked on the ROC curve.\nExercise 2: plot the ROC curve\nWe use the Receiver Operating Characteristic (ROC) example, which must be modified because the correct answer in our case is predicting the right class.
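Note that scikit-learn's Normalizer rescales each *sample* (row) to unit norm, while column-wise standardization is done by StandardScaler; the two are easy to confuse. A quick sketch:

```python
import numpy as np
from sklearn.preprocessing import Normalizer, StandardScaler

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Normalizer: every row ends up with unit L2 norm
rows = Normalizer().fit_transform(X)

# StandardScaler: every column ends up with zero mean, unit variance
cols = StandardScaler().fit_transform(X)
```

Either one can be dropped into the Pipeline above in place of Normalizer, depending on whether row or column scaling is wanted.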
This means there are two cases in which the model predicts the right result: we pick the class with the highest probability.", "from sklearn.metrics import roc_curve, auc\nprobas = clf.predict_proba(X_test)\nprobas[:5]", "We build the vector of correct answers:", "rep = [ ]\nyt = Y_test.ravel()\nfor i in range(probas.shape[0]):\n p0,p1 = probas[i,:]\n exp = yt[i]\n if p0 > p1 :\n if exp == 0 :\n # correct answer\n rep.append ( (1, p0) )\n else :\n # wrong answer\n rep.append( (0,p0) )\n else :\n if exp == 0 :\n # wrong answer\n rep.append ( (0, p1) )\n else :\n # correct answer\n rep.append( (1,p1) )\nmat_rep = numpy.array(rep)\nmat_rep[:5]\n\n\"correct answer rate\",sum(mat_rep[:,0]/len(mat_rep)) # see the confusion matrix\n\nfpr, tpr, thresholds = roc_curve(mat_rep[:,0], mat_rep[:, 1])\nroc_auc = auc(fpr, tpr)\nplt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)\nplt.plot([0, 1], [0, 1], 'k--')\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.0])\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('ROC')\nplt.legend(loc=\"lower right\")", "This score is not bad for a first attempt. We did not account for the fact that class 1 is under-represented (see Quelques astuces pour faire du machine learning). A priori, this should not be an issue for the GradientBoostingClassifier: it belongs to a family of models that, during training, put more weight on the examples on which they make errors.
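With scikit-learn, the manual bookkeeping above can often be avoided: roc_curve accepts the true labels and the predicted probability of the positive class directly. A minimal sketch on made-up labels and scores:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Made-up binary labels and positive-class scores
y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])

fpr, tpr, thresholds = roc_curve(y_true, scores)
roc_auc = auc(fpr, tpr)
# roc_auc is 0.75 here: one of the four positive/negative pairs
# is ranked in the wrong order
```

On the real data this would be `roc_curve(Y_test.ravel(), probas[:, 1])`, measuring how well the score ranks class 1 above class 0 rather than how often the chosen class is right.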
The best-known boosting algorithm is AdaBoost.\nWe now draw two random samples and add them to the previous plot:", "import random\nY1 = numpy.array([ random.randint(0,1) == 0 for i in range(0,mat_rep.shape[0]) ])\nY2 = numpy.array([ random.randint(0,1) == 0 for i in range(0,mat_rep.shape[0]) ])\n\nfpr1, tpr1, thresholds1 = roc_curve(mat_rep[Y1,0], mat_rep[Y1, 1])\nroc_auc1 = auc(fpr1, tpr1)\nfpr2, tpr2, thresholds2 = roc_curve(mat_rep[Y2,0], mat_rep[Y2, 1])\nroc_auc2 = auc(fpr2, tpr2)\nprint(fpr1.shape,tpr1.shape,fpr2.shape,tpr2.shape)\n\nimport matplotlib.pyplot as plt\nfig = plt.figure()\nax = fig.add_subplot(1,1,1)\n\nax.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)\nax.plot([0, 1], [0, 1], 'k--')\nax.set_xlim([0.0, 1.0])\nax.set_ylim([0.0, 1.0])\nax.set_xlabel('False Positive Rate')\nax.set_ylabel('True Positive Rate')\nax.set_title('ROC')\nax.plot(fpr1, tpr1, label='sample 1, area=%0.2f' % roc_auc1)\nax.plot(fpr2, tpr2, label='sample 2, area=%0.2f' % roc_auc2)\nax.legend(loc=\"lower right\")" ]
[ "Probabilistic Programming in Python using PyMC\nAuthors: John Salvatier, Thomas V. Wiecki, Christopher Fonnesbeck\nIntroduction\nProbabilistic programming (PP) allows flexible specification of Bayesian statistical models in code. PyMC3 is a new, open-source PP framework with an intuitive and readable, yet powerful, syntax that is close to the natural syntax statisticians use to describe models. It features next-generation Markov chain Monte Carlo (MCMC) sampling algorithms such as the No-U-Turn Sampler (NUTS; Hoffman, 2014), a self-tuning variant of Hamiltonian Monte Carlo (HMC; Duane, 1987). This class of samplers works well on high dimensional and complex posterior distributions and allows many complex models to be fit without specialized knowledge about fitting algorithms. HMC and NUTS take advantage of gradient information from the likelihood to achieve much faster convergence than traditional sampling methods, especially for larger models. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo, which means you usually don't need to have specialized knowledge about how the algorithms work. PyMC3, Stan (Stan Development Team, 2014), and the LaplacesDemon package for R are currently the only PP packages to offer HMC.\nProbabilistic programming in Python confers a number of advantages including multi-platform compatibility, an expressive yet clean and readable syntax, easy integration with other scientific libraries, and extensibility via C, C++, Fortran or Cython. These features make it relatively straightforward to write and use custom statistical distributions, samplers and transformation functions, as required by Bayesian analysis.\nWhile most of PyMC3's user-facing features are written in pure Python, it leverages Theano (Bergstra et al., 2010) to transparently transcode models to C and compile them to machine code, thereby boosting performance.
Theano is a library that allows expressions to be defined using generalized vector data structures called tensors, which are tightly integrated with the popular NumPy ndarray data structure, and similarly allow for broadcasting and advanced indexing, just as NumPy arrays do. Theano also automatically optimizes the likelihood's computational graph for speed and provides simple GPU integration.\nHere, we present a primer on the use of PyMC3 for solving general Bayesian statistical inference and prediction problems. We will first see the basics of how to use PyMC3, motivated by a simple example: installation, data creation, model definition, model fitting and posterior analysis. Then we will cover two case studies and use them to show how to define and fit more sophisticated models. Finally we will show how to extend PyMC3 and discuss other useful features: the Generalized Linear Models subpackage, custom distributions, custom transformations and alternative storage backends.\nInstallation\nRunning PyMC3 requires a working Python interpreter, either version 2.7 (or more recent) or 3.4 (or more recent); we recommend that new users install version 3.4. A complete Python installation for Mac OSX, Linux and Windows can most easily be obtained by downloading and installing the free Anaconda Python Distribution by ContinuumIO. \nPyMC3 can be installed using pip (https://pip.pypa.io/en/latest/installing.html):\npip install git+https://github.com/pymc-devs/pymc3\nPyMC3 depends on several third-party Python packages which will be automatically installed when installing via pip. The four required dependencies are: Theano, NumPy, SciPy, and Matplotlib. \nTo take full advantage of PyMC3, the optional dependencies Pandas and Patsy should also be installed. 
These are not automatically installed, but can be installed by:\npip install patsy pandas\nThe source code for PyMC3 is hosted on GitHub at https://github.com/pymc-devs/pymc3 and is distributed under the liberal Apache License 2.0. On the GitHub site, users may also report bugs and other issues, as well as contribute code to the project, which we actively encourage.\nA Motivating Example: Linear Regression\nTo introduce model definition, fitting and posterior analysis, we first consider a simple Bayesian linear regression model with normal priors for the parameters. We are interested in predicting outcomes $Y$ as normally-distributed observations with an expected value $\\mu$ that is a linear function of two predictor variables, $X_1$ and $X_2$.\n$$\\begin{aligned} \nY &\\sim \\mathcal{N}(\\mu, \\sigma^2) \\\n\\mu &= \\alpha + \\beta_1 X_1 + \\beta_2 X_2\n\\end{aligned}$$\nwhere $\\alpha$ is the intercept, and $\\beta_i$ is the coefficient for covariate $X_i$, while $\\sigma$ represents the observation error. Since we are constructing a Bayesian model, the unknown variables in the model must be assigned a prior distribution. We choose zero-mean normal priors with variance of 100 for both regression coefficients, which corresponds to weak information regarding the true parameter values. We choose a half-normal distribution (normal distribution bounded at zero) as the prior for $\\sigma$.\n$$\\begin{aligned} \n\\alpha &\\sim \\mathcal{N}(0, 100) \\\n\\beta_i &\\sim \\mathcal{N}(0, 100) \\\n\\sigma &\\sim \\lvert\\mathcal{N}(0, 1){\\rvert}\n\\end{aligned}$$\nGenerating data\nWe can simulate some artificial data from this model using only NumPy's random module, and then use PyMC3 to try to recover the corresponding parameters. 
We are intentionally generating the data to closely correspond to the PyMC3 model structure.", "import numpy as np\nimport matplotlib.pyplot as plt\n\n# Initialize random number generator\nnp.random.seed(123)\n\n# True parameter values\nalpha, sigma = 1, 1\nbeta = [1, 2.5]\n\n# Size of dataset\nsize = 100\n\n# Predictor variable\nX1 = np.linspace(0, 1, size)\nX2 = np.linspace(0,.2, size)\n\n# Simulate outcome variable\nY = alpha + beta[0]*X1 + beta[1]*X2 + np.random.randn(size)*sigma", "Here is what the simulated data look like. We use the pylab module from the plotting library matplotlib.", "%pylab inline \n\nfig, axes = subplots(1, 2, sharex=True, figsize=(10,4))\naxes[0].scatter(X1, Y)\naxes[1].scatter(X2, Y)\naxes[0].set_ylabel('Y'); axes[0].set_xlabel('X1'); axes[1].set_xlabel('X2');", "Model Specification\nSpecifying this model in PyMC3 is straightforward because the syntax is as close as possible to the statistical notation. For the most part, each line of Python code corresponds to a line in the model notation above.
\nFirst, we import the components we will need from PyMC.", "from pymc3 import Model, Normal, HalfNormal", "Now we build our model, which we will present in full first, then explain each part line-by-line.", "basic_model = Model()\n\nwith basic_model:\n \n # Priors for unknown model parameters\n alpha = Normal('alpha', mu=0, sd=10)\n beta = Normal('beta', mu=0, sd=10, shape=2)\n sigma = HalfNormal('sigma', sd=1)\n \n # Expected value of outcome\n mu = alpha + beta[0]*X1 + beta[1]*X2\n \n # Likelihood (sampling distribution) of observations\n Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y)", "The first line,\npython\nbasic_model = Model()\ncreates a new Model object which is a container for the model random variables.\nFollowing instantiation of the model, the subsequent specification of the model components is performed inside a with statement:\npython\nwith basic_model:\nThis creates a context manager, with our basic_model as the context, that includes all statements until the indented block ends. This means all PyMC3 objects introduced in the indented code block below the with statement are added to the model behind the scenes. Absent this context manager idiom, we would be forced to manually associate each of the variables with basic_model right after we create them. If you try to create a new random variable without a with model: statement, it will raise an error since there is no obvious model for the variable to be added to.\nThe first three statements in the context manager:\npython\nalpha = Normal('alpha', mu=0, sd=10)\nbeta = Normal('beta', mu=0, sd=10, shape=2)\nsigma = HalfNormal('sigma', sd=1)\ncreate stochastic random variables: Normal prior distributions with a mean of 0 and a standard deviation of 10 for the regression coefficients, and a half-normal distribution for the standard deviation of the observations, $\\sigma$.
These are stochastic because their values are partly determined by their parents in the dependency graph of random variables, which for priors are simple constants, and partly random (or stochastic). \nWe call the Normal constructor to create a random variable to use as a normal prior. The first argument is always the name of the random variable, which should almost always match the name of the Python variable being assigned to, since it is sometimes used to retrieve the variable from the model for summarizing output. The remaining required arguments for a stochastic object are the parameters, in this case mu, the mean, and sd, the standard deviation, to which we assign hyperparameter values for the model. In general, a distribution's parameters are values that determine the location, shape or scale of the random variable, depending on the parameterization of the distribution. Most commonly used distributions, such as Beta, Exponential, Categorical, Gamma, Binomial and many others, are available in PyMC3.\nThe beta variable has an additional shape argument to denote it as a vector-valued parameter of size 2. The shape argument is available for all distributions and specifies the length or shape of the random variable, but is optional for scalar variables, since it defaults to a value of one. It can be an integer, to specify an array, or a tuple, to specify a multidimensional array (e.g. shape=(5,7) makes a random variable that takes on 5 by 7 matrix values). \nDetailed notes about distributions, sampling methods and other PyMC3 functions are available via the help function.", "help(Normal) #try help(Model), help(Uniform) or help(basic_model)", "Having defined the priors, the next statement creates the expected value mu of the outcomes, specifying the linear relationship:\npython\nmu = alpha + beta[0]*X1 + beta[1]*X2\nThis creates a deterministic random variable, which implies that its value is completely determined by its parents' values.
That is, there is no uncertainty beyond that which is inherent in the parents' values. Here, mu is just the sum of the intercept alpha and the two products of the coefficients in beta and the predictor variables, whatever their values may be. \nPyMC3 random variables and data can be arbitrarily added, subtracted, divided, multiplied together and indexed-into to create new random variables. This allows for great model expressivity. Many common mathematical functions like sum, sin, exp and linear algebra functions like dot (for inner product) and inv (for inverse) are also provided. \nThe final line of the model defines Y_obs, the sampling distribution of the outcomes in the dataset.\npython\nY_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y)\nThis is a special case of a stochastic variable that we call an observed stochastic, and represents the data likelihood of the model. It is identical to a standard stochastic, except that its observed argument, which passes the data to the variable, indicates that the values for this variable were observed, and should not be changed by any fitting algorithm applied to the model. The data can be passed in the form of either a numpy.ndarray or pandas.DataFrame object.\nNotice that, unlike for the priors of the model, the parameters for the normal distribution of Y_obs are not fixed values, but rather are the deterministic object mu and the stochastic sigma. This creates parent-child relationships between the likelihood and these two variables.\nModel fitting\nHaving completely specified our model, the next step is to obtain posterior estimates for the unknown variables in the model. Ideally, we could calculate the posterior estimates analytically, but for most non-trivial models, this is not feasible.
We will consider two approaches, whose appropriateness depends on the structure of the model and the goals of the analysis: finding the maximum a posteriori (MAP) point using optimization methods, and computing summaries based on samples drawn from the posterior distribution using Markov Chain Monte Carlo (MCMC) sampling methods.\nMaximum a posteriori methods\nThe maximum a posteriori (MAP) estimate for a model is the mode of the posterior distribution and is generally found using numerical optimization methods. This is often fast and easy to do, but only gives a point estimate for the parameters and can be biased if the mode isn't representative of the distribution. PyMC3 provides this functionality with the find_MAP function.\nBelow we find the MAP for our original model. The MAP is returned as a parameter point, which is always represented by a Python dictionary of variable names to NumPy arrays of parameter values.", "from pymc3 import find_MAP\n\nmap_estimate = find_MAP(model=basic_model)\n \nprint(map_estimate)", "By default, find_MAP uses the Broyden–Fletcher–Goldfarb–Shanno (BFGS) optimization algorithm to find the maximum of the log-posterior but also allows selection of other optimization algorithms from the scipy.optimize module. For example, below we use Powell's method to find the MAP.", "from scipy import optimize\n\nmap_estimate = find_MAP(model=basic_model, fmin=optimize.fmin_powell)\n \nprint(map_estimate)", "It is important to note that the MAP estimate is not always reasonable, especially if the mode is at an extreme. This can be a subtle issue; with high dimensional posteriors, one can have areas of extremely high density but low total probability because the volume is very small. This will often occur in hierarchical models with the variance parameter for the random effect.
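The idea of finding the MAP by numerical optimization can be illustrated outside PyMC3 with a conjugate toy example: for a Beta(2, 2) prior and 7 successes in 10 Bernoulli trials, the posterior is Beta(9, 5), whose mode is (9-1)/(9+5-2) = 2/3. Minimizing the negative log-posterior with SciPy recovers it (a sketch of the principle, not PyMC3's find_MAP):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_posterior(p):
    # Beta(2, 2) prior times Binomial(10, p) likelihood with 7 successes:
    # posterior density proportional to p**8 * (1 - p)**4
    return -(8 * np.log(p) + 4 * np.log(1 - p))

res = minimize_scalar(neg_log_posterior, bounds=(1e-6, 1 - 1e-6),
                      method='bounded')
# res.x is approximately 2/3, the posterior mode
```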
If the individual group means are all the same, the posterior will have near infinite density if the scale parameter for the group means is almost zero, even though the probability of such a small scale parameter will be small since the group means must be extremely close together. \nMost techniques for finding the MAP estimate also only find a local optimum (which is often good enough), but can fail badly for multimodal posteriors if the different modes are meaningfully different.\nSampling methods\nThough finding the MAP is a fast and easy way of obtaining estimates of the unknown model parameters, it is limited because there is no associated estimate of uncertainty produced with the MAP estimates. Instead, a simulation-based approach such as Markov chain Monte Carlo (MCMC) can be used to obtain a Markov chain of values that, given the satisfaction of certain conditions, are indistinguishable from samples from the posterior distribution. \nTo conduct MCMC sampling to generate posterior samples in PyMC3, we specify a step method object that corresponds to a particular MCMC algorithm, such as Metropolis, Slice sampling, or the No-U-Turn Sampler (NUTS). PyMC3's step_methods submodule contains the following samplers: NUTS, Metropolis, Slice, HamiltonianMC, and BinaryMetropolis. These step methods can be assigned manually, or assigned automatically by PyMC3. Auto-assignment is based on the attributes of each variable in the model. In general:\n\nBinary variables will be assigned to BinaryMetropolis\nDiscrete variables will be assigned to Metropolis\nContinuous variables will be assigned to NUTS\n\nAuto-assignment can be overridden for any subset of variables by specifying them manually prior to sampling.\nGradient-based sampling methods\nPyMC3 has the standard sampling algorithms like adaptive Metropolis-Hastings and adaptive slice sampling, but PyMC3's most capable step method is the No-U-Turn Sampler.
NUTS is especially useful on models that have many continuous parameters, a situation where other MCMC algorithms work very slowly. It takes advantage of information about where regions of higher probability are, based on the gradient of the log posterior-density. This helps it achieve dramatically faster convergence on large problems than traditional sampling methods achieve. PyMC3 relies on Theano to analytically compute model gradients via automatic differentiation of the posterior density. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo. For random variables that are undifferentiable (namely, discrete variables) NUTS cannot be used, but it may still be used on the differentiable variables in a model that contains undifferentiable variables. \nNUTS requires a scaling matrix parameter, which is analogous to the variance parameter for the jump proposal distribution in Metropolis-Hastings, although NUTS uses it somewhat differently. The matrix gives the rough shape of the distribution so that NUTS does not make jumps that are too large in some directions and too small in other directions. It is important to set this scaling parameter to a reasonable value to facilitate efficient sampling. This is especially true for models that have many unobserved stochastic random variables or models with highly non-normal posterior distributions. Poor scaling parameters will slow down NUTS significantly, sometimes almost stopping it completely. A reasonable starting point for sampling can also be important for efficient sampling, but not as often.\nFortunately NUTS can often make good guesses for the scaling parameters.
If you pass a point in parameter space (as a dictionary of variable names to parameter values, the same format as returned by find_MAP) to NUTS, it will look at the local curvature of the log posterior-density (the diagonal of the Hessian matrix) at that point to make a guess for a good scaling vector, which often results in a good value. The MAP estimate is often a good point to use to initiate sampling. It is also possible to supply your own vector or scaling matrix to NUTS, though this is a more advanced use. If you wish to modify a Hessian at a specific point to use as your scaling matrix or vector, you can use find_hessian or find_hessian_diag.\nFor our basic linear regression example in basic_model, we will use NUTS to sample 2000 draws from the posterior using the MAP as the starting point and scaling point. This must also be performed inside the context of the model.", "from pymc3 import NUTS, sample\n\nwith basic_model:\n \n # obtain starting values via MAP\n start = find_MAP(fmin=optimize.fmin_powell)\n \n # draw 2000 posterior samples\n trace = sample(2000, start=start) ", "The sample function runs the step method(s) assigned (or passed) to it for the given number of iterations and returns a Trace object containing the samples collected, in the order they were collected. The trace object can be queried in a similar way to a dict containing a map from variable names to numpy.arrays. The first dimension of the array is the sampling index and the later dimensions match the shape of the variable. 
We can see the last 5 values for the alpha variable as follows:", "trace['alpha'][-5:]", "If we wanted to use the slice sampling algorithm for sigma instead of NUTS (which was assigned automatically), we could have specified this as the step argument for sample.", "from pymc3 import Slice\n\nwith basic_model:\n \n # obtain starting values via MAP\n start = find_MAP(fmin=optimize.fmin_powell)\n \n # instantiate sampler\n step = Slice(vars=[sigma]) \n \n # draw 5000 posterior samples\n trace = sample(5000, step=step, start=start) \n", "Posterior analysis\nPyMC3 provides plotting and summarization functions for inspecting the sampling output. A simple posterior plot can be created using traceplot.", "from pymc3 import traceplot\n\ntraceplot(trace[4000:]);", "The left column consists of a smoothed histogram (using kernel density estimation) of the marginal posteriors of each stochastic random variable while the right column contains the samples of the Markov chain plotted in sequential order. The beta variable, being vector-valued, produces two histograms and two sample traces, corresponding to both predictor coefficients.\nIn addition, the summary function provides a text-based output of common posterior statistics:", "from pymc3 import summary\n\nsummary(trace[4000:])", "Case study 1: Stochastic volatility\nWe present a case study of stochastic volatility, time varying stock market volatility, to illustrate PyMC3's use in addressing a more realistic problem. The distribution of market returns is highly non-normal, which makes sampling the volatilities significantly more difficult. This example has 400+ parameters so using common sampling algorithms like Metropolis-Hastings would get bogged down, generating highly autocorrelated samples. Instead, we use NUTS, which is dramatically more efficient.\nThe Model\nAsset prices have time-varying volatility (variance of day over day returns). In some periods, returns are highly variable, while in others they are very stable.
Stochastic volatility models address this with a latent volatility variable, which changes over time. The following model is similar to the one described in the NUTS paper (Hoffman 2014, p. 21).\n$$\\begin{aligned} \n \\sigma &\\sim exp(50) \\\n \\nu &\\sim exp(.1) \\\n s_i &\\sim \\mathcal{N}(s_{i-1}, \\sigma^{-2}) \\\n log(y_i) &\\sim t(\\nu, 0, exp(-2 s_i))\n\\end{aligned}$$\nHere, $y$ is the daily return series which is modeled with a Student-t distribution with an unknown degrees of freedom parameter, and a scale parameter determined by a latent process $s$. The individual $s_i$ are the individual daily log volatilities in the latent log volatility process. \nThe Data\nOur data consist of daily returns of the S&P 500 during the 2008 financial crisis.", "import pandas as pd\nreturns = pd.read_csv('data/SP500.csv', index_col=0, parse_dates=True)\nprint(len(returns))\n\nreturns.plot(figsize=(10, 6))\nplt.ylabel('daily returns in %');", "Model Specification\nAs with the linear regression example, specifying the model in PyMC3 mirrors its statistical specification. This model employs several new distributions: the Exponential distribution for the $ \\nu $ and $\\sigma$ priors, the student-t (T) distribution for the distribution of returns, and the GaussianRandomWalk for the prior for the latent volatilities. \nIn PyMC3, variables with purely positive priors like Exponential are transformed with a log transform. This makes sampling more robust. Behind the scenes, a variable in the unconstrained space (named \"variableName_log\") is added to the model for sampling. In this model this happens behind the scenes for both the degrees of freedom, nu, and the scale parameter for the volatility process, sigma, since they both have exponential priors. Variables with priors that constrain them on two sides, like Beta or Uniform, are also transformed to be unconstrained but with a log odds transform.
\nUnlike model specification in PyMC2, we do not typically provide starting points for variables at the model specification stage, but we can provide an initial value for any distribution (called a \"test value\") using the testval argument. This overrides the default test value for the distribution (usually the mean, median or mode of the distribution), and is most often useful if some values are illegal and we want to ensure we select a legal one. The test values for the distributions are also used as a starting point for sampling and optimization by default, though this is easily overridden. \nThe vector of latent volatilities s is given a prior distribution by GaussianRandomWalk. As its name suggests GaussianRandomWalk is a vector valued distribution where the values of the vector form a random normal walk of length n, as specified by the shape argument. The scale of the innovations of the random walk, sigma, is specified in terms of the precision of the normally distributed innovations and can be a scalar or vector.", "from pymc3 import Exponential, T, exp, Deterministic\nfrom pymc3.distributions.timeseries import GaussianRandomWalk\n\nwith Model() as sp500_model:\n \n nu = Exponential('nu', 1./10, testval=5.)\n \n sigma = Exponential('sigma', 1./.02, testval=.1)\n \n s = GaussianRandomWalk('s', sigma**-2, shape=len(returns))\n \n volatility_process = Deterministic('volatility_process', exp(-2*s))\n \n r = T('r', nu, lam=1/volatility_process, observed=returns['S&P500'])", "Notice that we transform the log volatility process s into the volatility process by exp(-2*s).
Here, exp is a Theano function, rather than the corresponding function in NumPy; Theano provides a large subset of the mathematical functions that NumPy does.\nAlso note that we have declared the Model name sp500_model in the first occurrence of the context manager, rather than splitting it into two lines, as we did for the first example.\nFitting\nBefore we draw samples from the posterior, it is prudent to find a decent starting value by finding a point of relatively high probability. For this model, the full maximum a posteriori (MAP) point over all variables is degenerate and has infinite density. But, if we fix log_sigma and nu it is no longer degenerate, so we find the MAP with respect only to the volatility process s keeping log_sigma and nu constant at their default values (remember that we set testval=.1 for sigma). We use the Limited-memory BFGS (L-BFGS) optimizer, which is provided by the scipy.optimize package, as it is more efficient for high dimensional functions and we have 400 stochastic random variables (mostly from s).\nTo do the sampling, we do a short initial run to put us in a volume of high probability, then start again at the new starting point. trace[-1] gives us the last point in the sampling trace. NUTS will recalculate the scaling parameters based on the new point, and in this case it leads to faster sampling due to better scaling.", "import scipy\nwith sp500_model:\n start = find_MAP(vars=[s], fmin=scipy.optimize.fmin_l_bfgs_b)\n \n step = NUTS(scaling=start)\n trace = sample(100, step, progressbar=False)\n\n # Start next run at the last sampled position.\n step = NUTS(scaling=trace[-1], gamma=.25)\n trace = sample(2000, step, start=trace[-1], progressbar=False, njobs=4)", "We can check our samples by looking at the traceplot for nu and sigma.", "traceplot(trace, [nu, sigma]);", "Finally we plot the distribution of volatility paths by plotting many of our sampled volatility paths on the same graph.
Each is rendered partially transparent (via the alpha argument in Matplotlib's plot function) so the regions where many paths overlap are shaded more darkly.", "fig, ax = plt.subplots(figsize=(15, 8))\nreturns.plot(ax=ax)\nax.plot(returns.index, 1/np.exp(trace['s',::30].T), 'r', alpha=.03);\nax.set(title='volatility_process', xlabel='time', ylabel='volatility');\nax.legend(['S&P500', 'stochastic volatility process'])", "As you can see, the model correctly infers the increase in volatility during the 2008 financial crash. Moreover, note that this model is quite complex because of its high dimensionality and the dependency structure in the random walk distribution. NUTS as implemented in PyMC3, however, correctly infers the posterior distribution with ease.\nCase study 2: Coal mining disasters\nConsider the following time series of recorded coal mining disasters in the UK from 1851 to 1962 (Jarrett, 1979). The number of disasters is thought to have been affected by changes in safety regulations during this period. Unfortunately, we also have a pair of years with missing data, identified as missing by a NumPy MaskedArray using -999 as the marker value. \nNext we will build a model for this series and attempt to estimate when the change occurred. 
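As a minimal, self-contained sketch of the sentinel convention used for that data (any otherwise unused value works in place of -999):

```python
import numpy as np

# Entries equal to -999 are flagged as missing; masked statistics skip them.
tiny = np.ma.masked_values([4, 5, -999, 2], value=-999)
unmasked_count = tiny.count()     # number of observed entries
unmasked_total = int(tiny.sum())  # sum over observed entries only
```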
At the same time, we will see how to handle missing data, use multiple samplers and sample from discrete random variables.", "disaster_data = np.ma.masked_values([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,\n 3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,\n 2, 2, 3, 4, 2, 1, 3, -999, 2, 1, 1, 1, 1, 3, 0, 0,\n 1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,\n 0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,\n 3, 3, 1, -999, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,\n 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1], value=-999)\nyear = np.arange(1851, 1962)\n\nplot(year, disaster_data, 'o', markersize=8);\nylabel(\"Disaster count\")\nxlabel(\"Year\")", "Occurrences of disasters in the time series are thought to follow a Poisson process with a large rate parameter in the early part of the time series, and a smaller rate in the later part. We are interested in locating the change point in the series, which perhaps is related to changes in mining safety regulations.\nIn our model,\n$$\n\\begin{aligned}\n D_t &\\sim \\text{Pois}(r_t), \\quad r_t = \\begin{cases}\n e, & \\text{if } t \\lt s \\\\\n l, & \\text{if } t \\ge s\n \\end{cases} \\\\\n s &\\sim \\text{Unif}(t_l, t_h) \\\\\n e &\\sim \\text{exp}(1) \\\\\n l &\\sim \\text{exp}(1)\n\\end{aligned}\n$$\nwhere the parameters are defined as follows: \n * $D_t$: The number of disasters in year $t$\n * $r_t$: The rate parameter of the Poisson distribution of disasters in year $t$.\n * $s$: The year in which the rate parameter changes (the switchpoint).\n * $e$: The rate parameter before the switchpoint $s$.\n * $l$: The rate parameter after the switchpoint $s$.\n * $t_l$, $t_h$: The lower and upper boundaries of year $t$.\nThis model is built much like our previous models. 
The major differences are the introduction of discrete variables with the Poisson and discrete-uniform priors and the novel form of the deterministic random variable rate.", "from pymc3 import DiscreteUniform, Poisson, switch\n\nwith Model() as disaster_model:\n\n switchpoint = DiscreteUniform('switchpoint', lower=year.min(), upper=year.max(), testval=1900)\n\n # Priors for the pre- and post-switch rates of disasters\n early_rate = Exponential('early_rate', 1)\n late_rate = Exponential('late_rate', 1)\n\n # Allocate the appropriate Poisson rate to years before and after the current switchpoint\n rate = switch(switchpoint >= year, early_rate, late_rate)\n\n disasters = Poisson('disasters', rate, observed=disaster_data)", "The logic for the rate random variable,\n```python\nrate = switch(switchpoint >= year, early_rate, late_rate)\n```\nis implemented using switch, a Theano function that works like an if statement. It uses the first argument to switch between the next two arguments.\nMissing values are handled transparently by passing a MaskedArray or a pandas.DataFrame with NaN values to the observed argument when creating an observed stochastic random variable. Behind the scenes, another random variable, disasters.missing_values, is created to model the missing values. All we need to do to handle the missing values is ensure we sample this random variable as well.\nUnfortunately, because they are discrete variables and thus have no meaningful gradient, we cannot use NUTS for sampling switchpoint or the missing disaster observations. Instead, we will sample using a Metropolis step method, which implements adaptive Metropolis-Hastings, because it is designed to handle discrete values.\nWe sample with both samplers at once by passing them to the sample function in a list. 
Each new sample is generated by first applying step1 then step2.", "from pymc3 import Metropolis \n\nwith disaster_model:\n step1 = NUTS([early_rate, late_rate])\n \n # Use Metropolis for switchpoint and missing values, since it accommodates discrete variables\n step2 = Metropolis([switchpoint, disasters.missing_values[0]])\n\n trace = sample(10000, step=[step1, step2])", "In the trace plot below we can see that there's about a 10-year span that's plausible for a significant change in safety, but a 5-year span that contains most of the probability mass. The distribution is jagged because of the jumpy relationship between the year switchpoint and the likelihood, and not due to sampling error.", "traceplot(trace);", "Arbitrary deterministics\nDue to its reliance on Theano, PyMC3 provides many mathematical functions and operators for transforming random variables into new random variables. However, the library of functions in Theano is not exhaustive; therefore, Theano and PyMC3 provide functionality for creating arbitrary Theano functions in pure Python, and for including these functions in PyMC models. This is supported with the as_op function decorator.\nTheano needs to know the types of the inputs and outputs of a function, which are specified for as_op by itypes for inputs and otypes for outputs. The Theano documentation includes an overview of the available types.", "import theano.tensor as T \nfrom theano.compile.ops import as_op\n\n@as_op(itypes=[T.lscalar], otypes=[T.lscalar])\ndef crazy_modulo3(value):\n if value > 0: \n return value % 3\n else:\n return (-value + 1) % 3\n \nwith Model() as model_deterministic:\n a = Poisson('a', 1)\n b = crazy_modulo3(a)", "An important drawback of this approach is that it is not possible for Theano to inspect these functions in order to compute the gradient required for the Hamiltonian-based samplers. Therefore, it is not possible to use the HMC or NUTS samplers for a model that uses such an operator. 
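The branching wrapped by as_op above can be exercised in plain Python first, before committing to Theano's type declarations; this sketch mirrors crazy_modulo3 and is not a PyMC API:

```python
def plain_crazy_modulo3(value):
    # Same branching as the as_op-wrapped version above, minus Theano.
    if value > 0:
        return value % 3
    else:
        return (-value + 1) % 3

# For non-positive inputs the sign is flipped and shifted before the modulo.
results = [plain_crazy_modulo3(v) for v in (4, 0, -2)]
```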
However, it is possible to add a gradient if we inherit from theano.Op instead of using as_op. The PyMC example set includes a more elaborate example of the usage of as_op.\nArbitrary distributions\nSimilarly, the library of statistical distributions in PyMC3 is not exhaustive, but PyMC allows for the creation of user-defined functions for an arbitrary probability distribution. For simple statistical distributions, the DensityDist function takes as an argument any function that calculates a log-probability $log(p(x))$. This function may employ other random variables in its calculation. Here is an example inspired by a blog post by Jake Vanderplas on which priors to use for a linear regression (Vanderplas, 2014). \n```python\nimport theano.tensor as T\nfrom pymc3 import DensityDist, Uniform\nwith Model() as model:\n alpha = Uniform('intercept', -100, 100)\n# Create custom densities\nbeta = DensityDist('beta', lambda value: -1.5 * T.log(1 + value**2), testval=0)\neps = DensityDist('eps', lambda value: -T.log(T.abs_(value)), testval=1)\n\n# Create likelihood\nlike = Normal('y_est', mu=alpha + beta * X, sd=eps, observed=Y)\n\n```\nFor more complex distributions, one can create a subclass of Continuous or Discrete and provide the custom logp function, as required. This is how the built-in distributions in PyMC are specified. As an example, fields like psychology and astrophysics have complex likelihood functions for a particular process that may require numerical approximation. In these cases, it is impossible to write the function in terms of predefined theano operators and we must use a custom theano operator using as_op or inheriting from theano.Op. 
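The heavy-tailed log-density passed to DensityDist for beta above can be sanity-checked numerically with NumPy (a sketch of the formula only, outside of any PyMC model):

```python
import numpy as np

def beta_logp(value):
    # log p(value) = -1.5 * log(1 + value**2), the heavy-tailed prior above.
    return -1.5 * np.log(1 + value**2)

# The log-density peaks at 0 and decays symmetrically in both directions.
at_zero, at_one = beta_logp(0.0), beta_logp(1.0)
```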
\nImplementing the beta variable above as a Continuous subclass is shown below, along with a sub-function using the as_op decorator, though this is not strictly necessary.", "from pymc3.distributions import Continuous\n\nclass Beta(Continuous):\n def __init__(self, mu, *args, **kwargs):\n super(Beta, self).__init__(*args, **kwargs)\n self.mu = mu\n self.mode = mu\n\n def logp(self, value):\n mu = self.mu\n return beta_logp(value - mu)\n \n@as_op(itypes=[T.dscalar], otypes=[T.dscalar])\ndef beta_logp(value):\n return -1.5 * np.log(1 + (value)**2)\n\n\nwith Model() as model:\n beta = Beta('slope', mu=0, testval=0)", "Generalized Linear Models\nGeneralized Linear Models (GLMs) are a class of flexible models that are widely used to estimate regression relationships between a single outcome variable and one or multiple predictors. Because these models are so common, PyMC3 offers a glm submodule that allows flexible creation of various GLMs with an intuitive R-like syntax that is implemented via the patsy module.\nThe glm submodule requires data to be included as a pandas DataFrame. Hence, for our linear regression example:", "# Convert X and Y to a pandas DataFrame\nimport pandas \ndf = pandas.DataFrame({'x1': X1, 'x2': X2, 'y': Y})", "The model can then be very concisely specified in one line of code.", "from pymc3.glm import glm\n\nwith Model() as model_glm:\n glm('y ~ x1 + x2', df)", "The error distribution, if not specified via the family argument, is assumed to be normal. In the case of logistic regression, this can be modified by passing in a Binomial family object.", "from pymc3.glm.families import Binomial\n\ndf_logistic = pandas.DataFrame({'x1': X1, 'x2': X2, 'y': Y > 0})\n\nwith Model() as model_glm_logistic:\n glm('y ~ x1 + x2', df_logistic, family=Binomial())", "Backends\nPyMC3 has support for different ways to store samples during and after sampling, called backends, including in-memory (default), text file, and SQLite. 
These can be found in pymc.backends:\nBy default, an in-memory ndarray is used but if the samples would get too large to be held in memory we could use the sqlite backend:", "from pymc3.backends import SQLite\n\nwith model_glm_logistic:\n backend = SQLite('trace.sqlite')\n trace = sample(5000, trace=backend)\n\nsummary(trace, vars=['x1', 'x2'])", "The stored trace can then later be loaded using the load command:", "from pymc3.backends.sqlite import load\n\nwith basic_model:\n trace_loaded = load('trace.sqlite')", "More information about backends can be found in the docstring of pymc.backends.\nReferences\nPatil, A., D. Huard and C.J. Fonnesbeck. (2010) PyMC: Bayesian Stochastic Modelling in Python. Journal of Statistical Software, 35(4), pp. 1-81\nBastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I., Bergeron, A., Bouchard, N., Warde-Farley, D., and Bengio, Y. (2012) “Theano: new features and speed improvements”. NIPS 2012 deep learning workshop.\nBergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio, Y. (2010) “Theano: A CPU and GPU Math Expression Compiler”. Proceedings of the Python for Scientific Computing Conference (SciPy) 2010. June 30 - July 3, Austin, TX\nLunn, D.J., Thomas, A., Best, N., and Spiegelhalter, D. (2000) WinBUGS -- a Bayesian modelling framework: concepts, structure, and extensibility. Statistics and Computing, 10:325--337.\nNeal, R.M. Slice sampling. Annals of Statistics. (2003). doi:10.2307/3448413.\nvan Rossum, G. The Python Library Reference Release 2.6.5., (2010). URL http://docs.python.org/library/.\nDuane, S., Kennedy, A. D., Pendleton, B. J., and Roweth, D. (1987) “Hybrid Monte Carlo”, Physics Letters, vol. 195, pp. 216-222.\nStan Development Team. (2014). Stan: A C++ Library for Probability and Sampling, Version 2.5.0. http://mc-stan.org. \nGamerman, D. Markov Chain Monte Carlo: statistical simulation for Bayesian inference. 
Chapman and Hall, 1997.\nHoffman, M. D., & Gelman, A. (2014). The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo. The Journal of Machine Learning Research, 30.\nVanderplas, Jake. \"Frequentism and Bayesianism IV: How to be a Bayesian in Python.\" Pythonic Perambulations. N.p., 14 Jun 2014. Web. 27 May. 2015. https://jakevdp.github.io/blog/2014/06/14/frequentism-and-bayesianism-4-bayesian-in-python/.\nR.G. Jarrett. A note on the intervals between coal mining disasters. Biometrika, 66:191–193, 1979." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ampl/amplpy
notebooks/hashcode/practice_problem.ipynb
bsd-3-clause
[ "Google Hashcode 2022\n\nGoogle Hashcode is a team programming competition to solve a complex engineering problem.\nIn this notebook we are showing how Mathematical Optimization methods as Mixed Integer Programming (MIP) are useful to solve this kind of problems, as they are really easy to implement and give optimal solutions (not only trade-off ones), as opposed to greedy approaches or heuristics. We are solving the pizza warm-up exercise.\nWe are using AMPL as the modeling language to formulate the problem from two different approaches (not all the formulations are the same in terms of complexity), coming up with enhancements or alternative approaches is an important part of the solving process.\nAs an instructive example of how to face this kind of problems, we are using the AMPL API for Python (AMPLPY), so we can read the input of the problem, translate easily to a data file for AMPL, and retrieve the solution to get the score. Because of using MIP approach, the score will be the highest possible for the problem.\nProblem statement\nThe statement of this year is related to a pizzeria, the goal is to maximize the number of customers coming, and we want to pick the ingredients for the only pizza that is going to be sold:\n\nEach customer has a list of ingredients he loves, and a list of those he does not like.\nA customer will come to the pizzeria if the pizza has all the ingredients he likes, and does not have any disgusting ingredient for him.\n\nTask: choose the exact ingredients the pizza should have so it maximizes the number of customers given their lists of preferences. The score is the number of customers coming to eat the pizza.\n(The statement can be found here)\nFirst formulation\nThe first MIP formulation will be straightforward. 
We have to define the variables we are going to use, and then the objective function and constraints will be easy to figure out.\nVariables\nWe have to decide which ingredients to pick, so\n* $x_i$ = 1 if the ingredient i is in the pizza, 0 otherwise.\n* $y_j$ = 1 if the customer will come to the pizzeria, 0 otherwise.\nWhere $i = 1, .., I$ and $j = 1, .., c$ (c = total of customers and I = total of ingredients).\nObjective function\nThe goal is to maximize the number of customers, so this is clear:\n$$maximize \\ \\sum \\limits_{j = 1}^c y_j$$\nFinally, we need to tie the variables to have the meaning we need by using constraints.\nConstraints\nIf customer $j$ comes, his favourite ingredients should be picked (mathematically, $y_j=1$ implies all the $x_i = 1$). So, for each $j = 1, .., c$:\n$$|Likes_j| \\cdot y_j \\leq \\sum \\limits_{i \\in Likes_j} x_i$$\nWhere $Likes_j$ is the set of ingredients customer $j$ likes, and $|Likes_j|$ the number of elements of the set.\nIf any of the disliked ingredients is in the pizza, customer $j$ can't come (any $x_i = 1$ with $i \\in Dislikes_j$ implies $y_j = 0$). For each customer $j = 1, .., c$:\n$$\\sum \\limits_{i \\in Dislikes_j} x_i \\leq \\frac{1}{2}+(|Dislikes_j|+\\frac{1}{2})\\cdot(1-y_j)$$\nSo when customer $j$ comes, the right side is equal to\n$$\\frac{1}{2}+(|Dislikes_j|+\\frac{1}{2})\\cdot(1-1) = \\frac{1}{2} + 0 = \\frac{1}{2}$$\nThis forces the left side to be zero, because the $x_i$ variables are binary. If the customer $j$ does not come, the inequality is satisfied trivially.\nWe will need the input data files from the problem; they are available in the amplpy Github repository:", "import os\nif not os.path.isdir('input_data'):\n os.system('git clone https://github.com/ampl/amplpy.git')\n os.chdir('amplpy/notebooks/hashcode')\n\nif not os.path.isdir('ampl_input'):\n os.mkdir('ampl_input')", "Let's use AMPL to formulate the previous problem. 
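Before moving to AMPL, the big-M logic of the dislike constraint can be checked numerically (a sketch; eps = 1/2 matches the formulation above):

```python
def dislike_rhs(n_dislikes, y, eps=0.5):
    # Right-hand side of the dislike constraint for a customer with
    # |Dislikes_j| = n_dislikes and indicator y (1 = customer comes).
    return 1 - eps + (n_dislikes + eps) * (1 - y)

# If the customer comes (y=1), the bound drops to 1/2, forcing zero
# disliked ingredients; otherwise it is slack enough to allow all of them.
tight, slack = dislike_rhs(3, 1), dislike_rhs(3, 0)
```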
The following section sets up AMPL to run also in the cloud (not only locally) with Google Colab.\nAMPLPY Setup in the cloud\nHere is some documentation and examples of the API: Documentation, GitHub Repository, PyPI Repository, other Jupyter Notebooks. The following cell is enough to install it. We are using the ampl (modeling language) and gurobi (solver) modules.", "!pip install -q amplpy ampltools\n\nMODULES=['ampl', 'gurobi']\nfrom ampltools import cloud_platform_name, ampl_notebook\nfrom amplpy import AMPL, register_magics\nif cloud_platform_name() is None:\n ampl = AMPL() # Use local installation of AMPL\nelse:\n ampl = ampl_notebook(modules=MODULES) # Install AMPL and use it\nregister_magics(ampl_object=ampl) # Evaluate %%ampl_eval cells with ampl.eval()", "Solving the problem with AMPL\nFirst, we need to write the model file (.mod) containing the mathematical formulation. After that, we will write a data file (.dat) to solve the different instances of the Hashcode problem.", "%%writefile pizza.mod\n\n# PARAMETERS AND SETS\nparam total_customers;\n\n# Set of ingredients\nset INGR;\n# Customers' lists of preferences\nset Likes{1..total_customers};\nset Dislikes{1..total_customers};\n\n# VARIABLES\n\n# Take or not to take the ingredient\nvar x{i in INGR}, binary;\n# customer comes OR NOT\nvar y{j in 1..total_customers}, binary;\n\n# OBJECTIVE FUNCTION\nmaximize Total_Customers: sum{j in 1..total_customers} y[j];\n\ns.t.\nCustomer_Likes{j in 1..total_customers}:\n\tcard(Likes[j])*y[j] <= sum{i in Likes[j]} x[i];\n\nparam eps := 0.5;\n\nCustomer_Dislikes{j in 1..total_customers}:\n\tsum{i in Dislikes[j]} x[i] <= 1-eps+(card(Dislikes[j])+eps)*(1-y[j]);", "Translate input with Python\nThe input files are in the folder input_data/, but they do not have the AMPL data format. 
Fortunately, we can easily parse the original input files to generate AMPL data files.", "import sys\n\n# dict to map chars to hashcode original filenames\nfilename = {\n 'a':'input_data/a_an_example.in.txt',\n 'b':'input_data/b_basic.in.txt',\n 'c':'input_data/c_coarse.in.txt',\n 'd':'input_data/d_difficult.in.txt',\n 'e':'input_data/e_elaborate.in.txt'\n}\n\ndef read(testcase):\n original_stdout = sys.stdout\n with open(filename[testcase]) as input_file, open('ampl_input/pizza_'+testcase+'.dat', 'w+') as output_data_file:\n sys.stdout = output_data_file # Change the standard output to the file we created.\n # total_customers\n total_customers = int(input_file.readline())\n print('param total_customers :=',total_customers,';')\n \n # loop over customers\n ingr=set()\n for c in range(1, total_customers+1):\n likes = input_file.readline().split()\n likes.pop(0)\n print('set Likes['+str(c)+'] := ',end='')\n print(*likes, end = ' ')\n print(';')\n dislikes = input_file.readline().split()\n dislikes.pop(0)\n print('set Dislikes['+str(c)+'] := ',end='')\n print(*dislikes, end = ' ')\n print(';')\n ingr = ingr.union(set(likes))\n ingr = ingr.union(set(dislikes))\n print('set INGR :=')\n print(*sorted(ingr), end = '\\n')\n print(';')\n sys.stdout = original_stdout\n\n# Let's try with problem 'c' from hashcode\nread('c')", "The file written can be displayed with ampl:", "%%ampl_eval\nshell 'cat ampl_input/pizza_c.dat';", "Now, solve the problem using AMPL and Gurobi (MIP solver).", "os.listdir('ampl_input')\n\n%%ampl_eval\nmodel pizza.mod;\ndata ampl_input/pizza_c.dat;\noption solver gurobi;\nsolve;\ndisplay x, y;", "So the ingredients we should pick are:\n* byyii, dlust, luncl, tfeej, vxglq, xdozp and xveqd.\n* Customers coming are: 4, 5, 7, 8, 10. 
Total score: 5.\nWe can write an output file in the hashcode format:", "%%ampl_eval\nprintf \"%d \", sum{i in INGR} x[i] > output_file.out;\nfor{i in INGR}{\n if x[i] = 1 then printf \"%s \", i >> output_file.out;\n}\nshell 'cat output_file.out';", "You can try this with the other practice instances!\nThe big ones can take several hours to get the optimal solution, as MIP problems are usually hard because of the integrality constraints of the variables. That's why it is often necessary to reformulate the problem, or try to improve an existing formulation by adding or combining constraints / variables. In the following section, we present an alternative point of view to attack the Hashcode practice problem, hoping the solver finds a solution earlier this way.\nAlternative formulation\nWe could exploit the relations between customers and see what we can deduce from them. Actually, the goal is to get the biggest set of customers that are mutually compatible (so that all of their favourite ingredients can go on the pizza and none of their disliked ones do). The ingredients we are picking may be deduced from the preferences of the particular customers we want to have.\nWith this idea, let's propose a graph approach where each customer is represented by a node, and two nodes are connected by an edge if and only if the two customers are compatible. This is translated to the problem as:\n\nCustomer i's loved ingredients are not in the disliked ingredients list of j (and vice versa).\n\nWith sets, this is:\n$$Liked_i \\cap Disliked_j = Liked_j \\cap Disliked_i = \\emptyset $$\nSo the problem is reduced to finding the maximum clique in the graph (a clique is a subset of nodes and edges such that every pair of nodes is connected by an edge), which is an NP-Complete problem. The clique is maximal with respect to the number of nodes.\nNew variables\nTo solve the clique problem we may use the binary variables:\n* $x_i$ = 1 if the node belongs to the maximal clique, 0 otherwise. 
For each $i = 1, .., c$.\nObjective function\nIt is the same as in the previous approach, as a node $i$ is in the maximal clique if and only if the customer $i$ is coming to the pizzeria in the corresponding optimal solution to the original problem. A bigger clique would induce a better solution, and a better solution would imply that its customers form a bigger clique, since all of them are compatible.\n$$maximize \\ \\sum \\limits_{i = 1}^c x_i$$\nNew constraints\nThe constraints are quite simple now. Two nodes that are not connected can't be in the same clique. For each pair of unconnected nodes $i$ and $j$:\n$$x_i + x_j \\leq 1$$\nFormulation with AMPL\nWe are writing a new model file (very similar to the previous one). In order to reuse the data files and not get any errors, we will keep the INGR set although it is not going to be used anymore.\nThe most interesting feature in the model could be the condition that checks whether two customers are incompatible in order to generate a constraint. 
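Before writing that check in AMPL, the pairwise incompatibility test can be prototyped with Python sets (a sketch; the ingredient names are illustrative):

```python
def compatible(likes_i, dislikes_i, likes_j, dislikes_j):
    # Two customers are compatible iff neither one's liked ingredients
    # intersect the other's disliked ingredients.
    return not (likes_i & dislikes_j) and not (likes_j & dislikes_i)

ok = compatible({'cheese'}, {'pineapple'}, {'basil'}, {'olives'})
clash = compatible({'cheese'}, set(), set(), {'cheese'})
```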
The condition is:\n$$Liked_i \\cap Disliked_j \\neq \\emptyset \\ \\text{ or } \\ Liked_j \\cap Disliked_i \\neq \\emptyset$$\nA set is not empty if its cardinality is greater than or equal to one, so in AMPL we could write:\ncard(Likes[i] inter Dislikes[j]) >= 1 or card(Likes[j] inter Dislikes[i]) >= 1", "%%writefile pizza_alternative.mod\n\n# PARAMETERS AND SETS\nparam total_customers;\n\n# Set of ingredients\nset INGR;\n# Customers' lists of preferences\nset Likes{1..total_customers};\nset Dislikes{1..total_customers};\n\n# VARIABLES\n\n# customer comes OR NOT <=> node in the clique or not\nvar x{i in 1..total_customers}, binary;\n\n# OBJECTIVE FUNCTION\nmaximize Total_Customers: sum{i in 1..total_customers} x[i];\n\ns.t.\n# Using the set operations to check if two nodes are not connected\nCompatible{i in 1..total_customers-1, j in i+1..total_customers : card(Likes[i] inter Dislikes[j]) >= 1 or card(Likes[j] inter Dislikes[i]) >= 1}:\n\tx[i]+x[j] <= 1;\n", "We can still use the same data files.", "%%ampl_eval\nreset;\nmodel pizza_alternative.mod;\ndata ampl_input/pizza_c.dat;\noption solver gurobi;\nsolve;\ndisplay x;\n\n%%ampl_eval\nset picked_ingr default {};\nfor{i in 1..total_customers}{\n if x[i] = 1 then let picked_ingr := picked_ingr union Likes[i];\n}\n\nprintf \"%d \", card(picked_ingr) > output_file.out;\nfor{i in picked_ingr}{\n printf \"%s \", i >> output_file.out;\n}\nshell 'cat output_file.out';", "Conclusion\nFirst, let's compare the size of the two models.\n\nFirst approach size: $c+I$ variables + $2c$ constraints.\nSecond approach size: $c$ variables + $c(c-1)/2$ constraints (potentially).\n\nAlso, in the second approach each constraint has only two non-zero coefficients, which helps produce sparser coefficient matrices.\nThe choice of one model or another will depend on the concrete instance of the problem, so the sparsity of the matrix and the real number of constraints can change (actually, the constraints 
of the two models are compatible). AMPL will take care of building the coefficient matrix efficiently, so there is no extra effort to compute the constraints or the sums within them once the model is prepared and sent to the solver, and we can focus on thinking algorithmically. Also, a lot of constraints and variables would be removed by presolve. To know more about the AMPL modeling language you can take a look at the manual.\nSome of the advantages of this approach are:\n* It is really easy to implement solutions.\n* There is no need to debug algorithms, only the correctness of the model.\n* Models are very flexible, so new constraints could be added while the rest of the model remains the same.\nDisadvantages:\n* It is hard to estimate how long it is going to take, even in simple models like the ones presented.\n* Sometimes it is hard to formulate the problem, as some of the constraints or the objective function may not fit the usual mathematical language. The problem could be non-linear, so convergence would be more difficult and even optimal solutions would not be guaranteed.\n* For simple problems, more efficient algorithmic techniques could also give the best solution (Dynamic Programming, optimal greedy approaches...).\nEnhancements:\n* Study the problem to come up with presolve heuristics in order to get smaller models.\n* Add termination criteria (solver options) so the solver can stop prematurely when finding a good enough solution (there is a little gap between the best found solution and the known bounds), or even a time limit. If you are lucky, the solution could be the optimal one even though optimality has not been proved yet.\n* If the solver could not find the optimal solution in time, but we used a termination criterion, we could retrieve a good solution and run some kind of algorithm over it so we can improve it and get closer to the optimum (GRASP or Genetic Algorithms, for instance). 
Actually, when solving a real engineering problem, it is desirable to combine exact methods such as MIP, heuristics (greedy approaches) or metaheuristics (GRASP, Simulated Annealing, ...), among others, to reach better solutions.\n--\nAuthor: Marcos Dominguez Velad. Software engineer at AMPL.\n&#109;&#97;&#114;&#99;&#111;&#115;&#64;&#97;&#109;&#112;&#108;&#46;&#99;&#111;&#109;" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/asl-ml-immersion
notebooks/kubeflow_pipelines/cicd/solutions/kfp_cicd.ipynb
apache-2.0
[ "CI/CD for a KFP pipeline\nLearning Objectives:\n1. Learn how to create a custom Cloud Build builder to pilote CAIP Pipelines\n1. Learn how to write a Cloud Build config file to build and push all the artifacts for a KFP\n1. Learn how to setup a Cloud Build Github trigger to rebuild the KFP\nIn this lab you will walk through authoring of a Cloud Build CI/CD workflow that automatically builds and deploys a KFP pipeline. You will also integrate your workflow with GitHub by setting up a trigger that starts the workflow when a new tag is applied to the GitHub repo hosting the pipeline's code.\nConfiguring environment settings\nUpdate the ENDPOINT constant with the settings reflecting your lab environment. \nThe endpoint to the AI Platform Pipelines instance can be found on the AI Platform Pipelines page in the Google Cloud Console.\n\nOpen the SETTINGS for your instance\nUse the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SKD section of the SETTINGS window.", "ENDPOINT = \"<YOUR_ENDPOINT>\"\nPROJECT_ID = !(gcloud config get-value core/project)\nPROJECT_ID = PROJECT_ID[0]", "Creating the KFP CLI builder\nReview the Dockerfile describing the KFP CLI builder", "!cat kfp-cli/Dockerfile", "Build the image and push it to your project's Container Registry.", "IMAGE_NAME = \"kfp-cli\"\nTAG = \"latest\"\nIMAGE_URI = f\"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}\"\n\n!gcloud builds submit --timeout 15m --tag {IMAGE_URI} kfp-cli", "Understanding the Cloud Build workflow.\nReview the cloudbuild.yaml file to understand how the CI/CD workflow is implemented and how environment specific settings are abstracted using Cloud Build variables.\nThe CI/CD workflow automates the steps you walked through manually during lab-02-kfp-pipeline:\n1. Builds the trainer image\n1. Builds the base image for custom components\n1. Compiles the pipeline\n1. Uploads the pipeline to the KFP environment\n1. 
Pushes the trainer and base images to your project's Container Registry\nAlthough the KFP backend supports pipeline versioning, this feature has not yet been enabled through the KFP CLI. As a temporary workaround, in the Cloud Build configuration the value of the TAG_NAME variable is appended to the name of the pipeline. \nThe Cloud Build workflow configuration uses both standard and custom Cloud Build builders. The custom builder encapsulates the KFP CLI. \nManually triggering CI/CD runs\nYou can manually trigger Cloud Build runs using the gcloud builds submit command.", "SUBSTITUTIONS = \"\"\"\n_ENDPOINT={},\\\n_TRAINER_IMAGE_NAME=trainer_image,\\\n_BASE_IMAGE_NAME=base_image,\\\nTAG_NAME=test,\\\n_PIPELINE_FOLDER=.,\\\n_PIPELINE_DSL=covertype_training_pipeline.py,\\\n_PIPELINE_PACKAGE=covertype_training_pipeline.yaml,\\\n_PIPELINE_NAME=covertype_continuous_training,\\\n_RUNTIME_VERSION=1.15,\\\n_PYTHON_VERSION=3.7,\\\n_USE_KFP_SA=True,\\\n_COMPONENT_URL_SEARCH_PREFIX=https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/\n\"\"\".format(\n ENDPOINT\n).strip()\n\n!gcloud builds submit . --config cloudbuild.yaml --substitutions {SUBSTITUTIONS}", "Setting up GitHub integration\nIn this exercise you integrate your CI/CD workflow with GitHub, using the Cloud Build GitHub App. \nYou will set up a trigger that starts the CI/CD workflow when a new tag is applied to the GitHub repo managing the pipeline source code. You will use a fork of this repo as your source GitHub repository.\nCreate a fork of this repo\nFollow the GitHub documentation to fork this repo.\nCreate a Cloud Build trigger\nConnect the fork you created in the previous step to your Google Cloud project and create a trigger following the steps in the Creating GitHub app trigger article. 
Use the following values on the Edit trigger form:\n|Field|Value|\n|-----|-----|\n|Name|[YOUR TRIGGER NAME]|\n|Description|[YOUR TRIGGER DESCRIPTION]|\n|Event| Tag|\n|Source| [YOUR FORK]|\n|Tag (regex)|.*|\n|Build Configuration|Cloud Build configuration file (yaml or json)|\n|Cloud Build configuration file location| ./notebooks/kubeflow_pipelines/cicd/solutions/cloudbuild.yaml|\nUse the following values for the substitution variables:\n|Variable|Value|\n|--------|-----|\n|_BASE_IMAGE_NAME|base_image|\n|_COMPONENT_URL_SEARCH_PREFIX|https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/|\n|_ENDPOINT|[Your inverting proxy host]|\n|_PIPELINE_DSL|covertype_training_pipeline.py|\n|_PIPELINE_FOLDER|notebooks/kubeflow_pipelines/cicd/solutions|\n|_PIPELINE_NAME|covertype_training_deployment|\n|_PIPELINE_PACKAGE|covertype_training_pipeline.yaml|\n|_PYTHON_VERSION|3.7|\n|_RUNTIME_VERSION|1.15|\n|_TRAINER_IMAGE_NAME|trainer_image|\n|_USE_KFP_SA|False|\nTrigger the build\nTo start an automated build create a new release of the repo in GitHub. Alternatively, you can start the build by applying a tag using git. \ngit tag [TAG NAME]\ngit push origin --tags\n<font size=-1>Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \\\"AS IS\\\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.</font>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
moonbury/pythonanywhere
learn_scipy/7702OS_Chap_01_rev20150118.ipynb
gpl-3.0
[ "<center><font color=red>Learning SciPy for Numerical and Scientific Computing</font></center>\n\nContent under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 Sergio Rojas (srojas@usb.ve) and Erik A Christensen (erikcny@aol.com).\n\n<b><font color='red'> NOTE: This IPython notebook should be read alonside the corresponding chapter in the book, where each piece of code is fully explained. </font></b> <br>\n<center> CHAPTER 1: Introduction to SciPy </center>\nCHAPTER SUMMARY\nIn this chapter we'll learn the benefits of using the combination of Python, NumPy, SciPy, and Matplotlib as a programming environment for any scientific endeavor that requires mathematics; in particular, anything related to numerical computations. We'll explore the environment, learn how to download and install the required libraries, use them for some quick computations, and figure out a few good ways to search for help.\nWhat is SciPy?\nA few links with documentation that can help to enhance the discussion presented on this section of the book are as follows:\n<ul>\n\nThe Scipy main site:\n<li>\n [http://www.scipy.org/](http://www.scipy.org/)\n</li><br>\n\nScipy : high-level scientific computing\n<li>\n [https://scipy-lectures.github.io/intro/scipy.html](https://scipy-lectures.github.io/intro/scipy.html)\n</li><br>\n\nArchives of the SciPy mailing discussion list \n<li>\n [http://mail.scipy.org/pipermail/scipy-user/](http://mail.scipy.org/pipermail/scipy-user/)\n</li>\n</ul>\n\nHow to install SciPy\n<ul>\nInstalling the SciPy Stack and Scientific Python distributions:\n<li>\n [http://www.scipy.org/install.html](http://www.scipy.org/install.html)\n</li><br>\n\nBuilding From Source on Linux:\n<li>\n [http://www.scipy.org/scipylib/building/linux.html](http://www.scipy.org/scipylib/building/linux.html)\n</li><br>\n\nUnofficial Windows Binaries for Python Extension Packages:\n<li>\n 
[http://www.lfd.uci.edu/~gohlke/pythonlibs/](http://www.lfd.uci.edu/~gohlke/pythonlibs/)\n</li><br>\n</ul>\n\nSciPy organization\n<ul>\nAn exhaustive list of SciPy modules is available at:\n\n<li>\n http://docs.scipy.org/doc/scipy/reference/py-modindex.html\n</li><br>\n</ul>", "import numpy\nimport scipy\n\nscores=numpy.array([114, 100, 104, 89, 102, 91, 114, 114, 103, 105, 108, 130, 120, 132, 111, 128, 118, 119, 86, \n 72, 111, 103, 74, 112, 107, 103, 98, 96, 112, 112, 93])", "<b> <font color=red>What follows generates the <i> stem plot</i> mentioned in the text</font></b>", "nbins = 6\nsortedscores = numpy.sort(scores)\nintervals = numpy.linspace(sortedscores[0],sortedscores[-1],nbins+1);\nleftEdges = intervals[0:-1]; \nrightEdges = intervals[1:]; \nmiddleEdges = ( leftEdges + rightEdges ) / 2.0; \n\nj=0\ntemp=sortedscores[j]\ncount = numpy.zeros([nbins,1])\ni=0\nistop = len(leftEdges) - 1 \nwhile j<len(sortedscores):\n while ( ((leftEdges[i] <= temp) & (temp < rightEdges[i])) ):\n count[i] = count[i]+1\n j=j+1\n temp = sortedscores[j]\n if i < istop:\n i=i+1\n else:\n j=j+1\nif temp == rightEdges[i]:\n count[i] = count[i]+1\n\n%matplotlib inline \nimport matplotlib.pylab as plt\nplt.stem(middleEdges, count)\nplt.show()", "<b> <font color=red>NOTE</font>: to shorten the output of the next command, click the leftmost mouse button on the left of the output to activate a scrolling window. Do the same to any other help output which follows after this one </b>", "dir(scores)\n\nxmean = scipy.mean(scores)\nsigma = scipy.std(scores)\nn = scipy.size(scores)\nprint (\"({0:0.14f}, {1:0.15f}, {2:0.14f})\".format(xmean, xmean - 2.576*sigma/scipy.sqrt(n), \n xmean + 2.576*sigma/scipy.sqrt(n) ))\n\n\nfrom scipy import stats\nresult=scipy.stats.bayes_mvs(scores)", "<b> <font color=red>NOTE</font>: to shorten the output of the next command, click the leftmost mouse button on the left of the output to activate a scrolling window. 
Do the same to any other help output which follows after this one </b>", "help(scipy.stats.bayes_mvs)\n\nprint(result[0])\n", "How to find documentation\n<ul>\nGeneral SciPy documentation is available at:\n<li>\n  [http://docs.scipy.org/doc/](http://docs.scipy.org/doc/)\n</li><br>\n\nGeneral SciPy documentation index is available at:\n<li>\n  [http://docs.scipy.org/doc/scipy/reference/index.html](http://docs.scipy.org/doc/scipy/reference/index.html)\n</li><br>\n</ul>", "help(scipy.stats.bayes_mvs)\n\nnumpy.info('random')\n\nhelp(numpy.random)\n\nhelp(scipy.stats)\n\nhelp(scipy.stats.kurtosis)", "Scientific visualization\n<ul>\nGeneral Matplotlib references:<br>\n<li>\n  [Matplotlib main web site: http://matplotlib.org/](http://matplotlib.org/)\n</li><br>\n<li>\n  [Introduction to Matplotlib at: http://scipy-lectures.github.io/matplotlib/matplotlib.html](http://scipy-lectures.github.io/matplotlib/matplotlib.html)\n</li><br>\n<li>\n  [Matplotlib example gallery at: http://matplotlib.org/gallery.html ](http://matplotlib.org/gallery.html)\n</li><br>\n</ul>\n<ul>\nOther Python-based visualization tools:<br>\n<li>\n  [https://wiki.python.org/moin/NumericAndScientific/Plotting](https://wiki.python.org/moin/NumericAndScientific/Plotting)\n</li><br>\n<li>\n  [Plotly: https://plot.ly/](https://plot.ly/)\n</li><br>\n<li>\n  [Mayavi: http://code.enthought.com/projects/mayavi/ ](http://code.enthought.com/projects/mayavi/)\n</li><br>\n</ul>", "%matplotlib inline \nimport numpy\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = (12.0, 8.0)\n\nx=numpy.linspace(0,2*numpy.pi,32)\n\nfig = plt.figure()\n\nplt.plot(x, numpy.sin(x))\nfig.savefig('sine.png')\nplt.show()\n", "On the IPython Notebook\n<ul>\nGeneral IPython Notebook references:<br>\n<li>\n  [IPython Notebook main web site: http://ipython.org/ipython-doc/stable/notebook/index.html](http://ipython.org/ipython-doc/stable/notebook/index.html)\n</li><br>\n<li>\n  [IPython example gallery: 
https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks](https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks)\n</li><br>\n<li>\n [Interactive notebooks: Sharing the code http://www.nature.com/news/interactive-notebooks-sharing-the-code-1.16261](http://www.nature.com/news/interactive-notebooks-sharing-the-code-1.16261)\n</li><br>\n<li>\n [WAKARI: IPython Notebook online https://wakari.io/](https://wakari.io/)\n</li><br>\n</ul>\n<br>\n<center> This is the end of the working codes shown and thoroughly discussed in Chapter 1 of the book <font color=red>Learning SciPy for Numerical and Scientific Computing</font> </center>\n\nContent under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 Sergio Rojas (srojas@usb.ve) and Erik A Christensen (erikcny@aol.com)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
miykael/nipype_tutorial
notebooks/advanced_aws.ipynb
bsd-3-clause
[ "Using Nipype with Amazon Web Services (AWS)\nSeveral groups have been successfully using Nipype on AWS. This procedure\ninvolves setting a temporary cluster using StarCluster and potentially\ntransferring files to/from S3. The latter is supported by Nipype through\nDataSink and S3DataGrabber.\nUsing DataSink with S3\nThe DataSink class now supports sending output data directly to an AWS S3\nbucket. It does this through the introduction of several input attributes to the\nDataSink interface and by parsing the base_directory attribute. This class\nuses the boto3 and\nbotocore Python packages to\ninteract with AWS. To configure the DataSink to write data to S3, the user must\nset the base_directory property to an S3-style filepath.\nFor example:", "from nipype.interfaces.io import DataSink\nds = DataSink()\nds.inputs.base_directory = 's3://mybucket/path/to/output/dir'", "With the \"s3://\" prefix in the path, the DataSink knows that the output\ndirectory to send files is on S3 in the bucket \"mybucket\". \"path/to/output/dir\"\nis the relative directory path within the bucket \"mybucket\" where output data\nwill be uploaded to (Note: if the relative path specified contains folders that\ndon’t exist in the bucket, the DataSink will create them). The DataSink treats\nthe S3 base directory exactly as it would a local directory, maintaining support\nfor containers, substitutions, subfolders, \".\" notation, etc. to route output\ndata appropriately.\nThere are four new attributes introduced with S3-compatibility: creds_path,\nencrypt_bucket_keys, local_copy, and bucket.", "ds.inputs.creds_path = '/home/neuro/aws_creds/credentials.csv'\nds.inputs.encrypt_bucket_keys = True\nds.local_copy = '/home/neuro/workflow_outputs/local_backup'", "creds_path is a file path where the user's AWS credentials file (typically\na csv) is stored. 
This credentials file should contain the AWS access key id and\nsecret access key and should be formatted as one of the following (these formats\nare how Amazon provides the credentials file by default when first downloaded).\nRoot-account user:\nAWSAccessKeyID=ABCDEFGHIJKLMNOP\nAWSSecretKey=zyx123wvu456/ABC890+gHiJk\n\nIAM-user:\nUser Name,Access Key Id,Secret Access Key\n\"username\",ABCDEFGHIJKLMNOP,zyx123wvu456/ABC890+gHiJk\n\nThe creds_path is necessary when writing files to a bucket that has\nrestricted access (almost no buckets are publicly writable). If creds_path\nis not specified, the DataSink will check the AWS_ACCESS_KEY_ID and\nAWS_SECRET_ACCESS_KEY environment variables and use those values for bucket\naccess.\nencrypt_bucket_keys is a boolean flag that indicates whether to encrypt the\noutput data on S3, using server-side AES-256 encryption. This is useful if the\ndata being output is sensitive and one desires an extra layer of security on the\ndata. By default, this is turned off.\nlocal_copy is a string of the filepath where local copies of the output data\nare stored in addition to those sent to S3. This is useful if one wants to keep\na backup version of the data stored on their local computer. By default, this is\nturned off.\nbucket is a boto3 Bucket object that the user can use to overwrite the\nbucket specified in their base_directory. This can be useful if one has to\nmanually create a bucket instance on their own using special credentials (or\nusing a mock server like fakes3). This is\ntypically used for developers unit-testing the DataSink class. Most users do not\nneed to use this attribute for actual workflows. 
This is an optional argument.\nFinally, the user needs only to specify the input attributes for any incoming\ndata to the node, and the outputs will be written to their S3 bucket.\npython\nworkflow.connect(inputnode, 'subject_id', ds, 'container')\nworkflow.connect(realigner, 'realigned_files', ds, 'motion')\nSo, for example, outputs for sub001’s realigned_file1.nii.gz will be in:\ns3://mybucket/path/to/output/dir/sub001/motion/realigned_file1.nii.gz\n\nUsing S3DataGrabber\nComing soon..." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
enbanuel/phys202-2015-work
assignments/assignment11/OptimizationEx01.ipynb
mit
[ "Optimization Exercise 1\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.optimize as opt", "Hat potential\nThe following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the \"hat potential\":\n$$ V(x) = -a x^2 + b x^4 $$\nWrite a function hat(x,a,b) that returns the value of this function:", "# YOUR CODE HERE\ndef hat(x, a, b):\n V = -a*(x**2) + b*(x**4)\n return V\n\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(1.0, 10.0, 1.0)==-9.0", "Plot this function over the range $x\\in\\left[-3,3\\right]$ with $b=1.0$ and $a=5.0$:", "a = 5.0\nb = 1.0\n\n# YOUR CODE HERE\nx = np.linspace(-3, 3, 25)\n\nplt.figure(figsize=(7, 4))\nplt.plot(x, hat(x, a, b))\n\nplt.ylabel('$V(X)$')\nplt.xlabel('x')\nplt.title('Hat')\nplt.grid()\n\nassert True # leave this to grade the plot", "Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.\n\nUse scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.\nPrint the x values of the minima.\nPlot the function as a blue line.\nOn the same axes, show the minima as red circles.\nCustomize your visualization to make it beatiful and effective.", "# YOUR CODE HERE\nx0 = [-1.5, 1.5]\nres1 = opt.minimize(hat, x0[0], args=(a, b))\nres2 = opt.minimize(hat, x0[1], args=(a, b))\n\nx1 = res1.x\nx2 = res2.x\n\ny1 = np.array(hat(x1, a, b))\ny2 = np.array(hat(x2, a, b))\n\nx = np.linspace(-3, 3, 50)\n\nplt.figure(figsize=(9, 5))\nplt.plot(x, hat(x, a, b))\n\nplt.ylabel('$V(X)$')\nplt.xlabel('x')\nplt.title('Hat')\n\nplt.scatter(x1, y1, color='red')\nplt.scatter(x2, y2, color='red')\nplt.legend('Vm')\n\nplt.grid()\n\nassert True # leave this for grading the plot", "To check your numerical results, find the locations of the minima analytically. Show and describe the steps in your derivation using LaTeX equations. 
Evaluate the location of the minima using the above parameters.\nYOUR ANSWER HERE: The original hat potential function:\n$$ V(x) = -a x^2 + b x^4 $$\nwith $a = 5$ and $b=1$ gives us:\n$$ V(x) = -5 x^2 + x^4 $$\nTaking the derivative and setting it equal to 0 will give us all points where the slope is zero, thus giving us the local maxima and minima:\n$$ V^\\prime (x)= \\frac{d V}{d x} = 0 = -10 x + 4 x^3$$\nSolving for $x$ gives us values of:\n$$ x = -\\sqrt{\\frac{5}{2}}, 0, \\sqrt{\\frac{5}{2}}$$\nBy taking the second derivative of $V(x)$ and plugging in the values of $x$, we can determine which are minima or maxima:\n$$ V^{\\prime \\prime} (x)=\\frac{d^2 V}{d x^2} = -10 + 12x^2$$\nFor the given values of $x$ above, we get:\n$$ V^{\\prime \\prime} (0) = -10 $$\n$$ V^{\\prime \\prime} \\left(\\sqrt{\\frac{5}{2}}\\right) = 20 $$\n$$ V^{\\prime \\prime} \\left(-\\sqrt{\\frac{5}{2}}\\right) = 20 $$\nThese imply concave down, up, and up, respectively.\nTherefore, the minima are $x_1 = \\sqrt{\\frac{5}{2}} \\approx 1.581$ and $x_2 = -\\sqrt{\\frac{5}{2}} \\approx -1.581$, which matches what opt.minimize gave us." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
muatik/my-coding-challenges
python/challenge-reverse-words.ipynb
mit
[ "Constraints\n\nCan I assume the string is ASCII?\nYes\nNote: Unicode strings could require special handling depending on your language\n\n\n\nAlgorithm\nSince Python strings are immutable, we'll use a list of words instead to exercise in-place string manipulation as you would get with a C string.\ncreate a list as a string builder for reversed sentence\n* split string into words\n* for each word\n* create a list as a string builder for reversed word\n* for each character in current word:\n* * insert character into the begining of reversed word string\n* * inserts reversed word string into the reversed sentence list\nComplexity:\n\nTime: O(len(string))\nSpace: O(len(string))", "def reverse_words(string):\n if not string:\n return string\n newString = []\n for word in string.split():\n newWord = []\n for char in word:\n newWord.insert(0, char)\n newString.insert(0, \"\".join(newWord))\n return \" \".join(newString)", "Pythonic way\nYou can achive the same goal in a concise way by using python's list modidication methods as follows:", "def reverse_words2(string):\n string = list(string)\n string.reverse()\n return \"\".join(string)\n\ndef reverse_words3(string):\n return string[::-1]\n", "Unittests", "from nose.tools import assert_equal\n\n\ndef testWith(func):\n assert_equal(reverse_words(\"this is an Example.\"), \".elpmaxE na si siht\")\n assert_equal(reverse_words(\"hello friend\"), \"dneirf olleh\")\n assert_equal(reverse_words(\"COOL\"), \"LOOC\")\n assert_equal(reverse_words(None), None)\n print('Success: reverse_words')\n \ntestWith(reverse_words)\ntestWith(reverse_words2)\ntestWith(reverse_words3)", "Benchmark", "import timeit\nprint \"reverse_words\", timeit.timeit(\n \"reverse_words('this is an Example.')\",\n \"from __main__ import reverse_words\", number=99000)\nprint \"reverse_words2\", timeit.timeit(\n \"reverse_words2('this is an Example.')\",\n \"from __main__ import reverse_words2\", number=99000)\nprint \"reverse_words3\", timeit.timeit(\n 
\"reverse_words3('this is an Example.')\",\n \"from __main__ import reverse_words3\", number=99000)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
4dsolutions/Python5
PySummary2.ipynb
mit
[ "SAISOFT PYT-PR: Session 10\nThe Ecosystem includes Many Languages\nThe Python ecosystem involves Python in using other languages and rule sets such as:\n\nSQL (Structured Query Language)\nRegexes (Regular Expressions)\nMarkdown (for Jupyter Notebooks)\nmagic (% and %% inside Notebooks and I-Python)\nAPIs (every library and framework comes with one)\nSphinx (documentation generator, uses reStructured Text)\nJavaScript (used with web frameworks)\nTemplate Languages (example: Jinja)\nHTML / CSS (naturally)\nWidget Toolkits (more APIs for GUIs)\nIDEs (such as Spyder, Pycharm, Sublime Text, vi, emacs)\n\nand so much else.\nLets take a look at another Standard Library module and talk about what it does. \nHash functions are not designed to preserve content. They don't encrypt. They're about associating some unique finger print with any hashable object.", "import hashlib\nm = hashlib.sha256()\nm.update(b\"Nobody inspects\")\nm.update(b\" the spammish repetition\")\nm.digest()", "Expected:\n<pre>\nb'\\x03\\x1e\\xdd}Ae\\x15\\x93\\xc5\\xfe\\\\\\x00o\\xa5u+7\\xfd\\xdf\\xf7\\xbcN\\x84:\\xa6\\xaf\\x0c\\x95\\x0fK\\x94\\x06'\n</pre>", "result = hashlib.sha256(b\"Nobody inspects the spammish repetition\").hexdigest()\nresult\n\nprint(\"Digest size\", m.digest_size)\nprint(\"Block size \", m.block_size)", "In class, we looked at a passwords database that doesn't save actual passwords, only hashes thereof. Even system administrators with the keys to the database, have no means to force a hash to run backwards to regain the phrase which was behind it. A hash is a one way street.\nLAB:\nCheck the Python docs and run the above example with sha224 instead. Do you get past this assertion (unit test)?", "# Uncomment me to check your result\n# assert result == 'a4337bc45a8fc544c03f52dc550cd6e1e87021bc896588bd79e901e2'" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/official/pipelines/metrics_viz_run_compare_kfp.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex AI Pipelines: Metrics visualization and run comparison using the KFP SDK\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/metrics_viz_run_compare_kfp.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/metrics_viz_run_compare_kfp.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n <td>\n <a href=\"https://console.cloud.google.com/vertex-ai/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/metrics_viz_run_compare_kfp.ipynb\">\n Open in Vertex AI Workbench\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis notebook shows how to use the Kubeflow Pipelines (KFP) SDK to build Vertex AI Pipelines that generate model metrics and metrics visualizations, and comparing pipeline runs.\nDataset\nThe dataset used for this tutorial is the Wine dataset from Scikit-learn builtin datasets.\nThe dataset predicts the origin of a wine.\nDataset\nThe dataset used for this tutorial is the Iris dataset from 
Scikit-learn builtin datasets.\nThe dataset predicts the type of Iris flower species from a class of three species: setosa, virginica, or versicolor.\nObjective\nThe steps performed include:\n\nGenerate ROC curve and confusion matrix visualizations for classification results\nWrite metrics\nCompare metrics across pipeline runs\n\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or Google Cloud Notebook, your environment already meets all the requirements to run this notebook. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements. You need the following:\n\nThe Cloud Storage SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:\n\n\nInstall and initialize the SDK.\n\n\nInstall Python 3.\n\n\nInstall virtualenv and create a virtual environment that uses Python 3.\n\n\nActivate that environment and run pip3 install Jupyter in a terminal shell to install Jupyter.\n\n\nRun jupyter notebook on the command line in a terminal shell to launch Jupyter.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nInstallation\nInstall the latest version of Vertex AI SDK for Python.", "import os\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG", "Install the latest GA version of google-cloud-storage library as well.", "! 
pip3 install -U google-cloud-storage $USER_FLAG", "Install the latest GA version of the KFP SDK library as well.", "! pip3 install $USER_FLAG kfp --upgrade\n\nif os.getenv(\"IS_TESTING\"):\n ! pip3 install --upgrade matplotlib $USER_FLAG", "Restart the kernel\nOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Check the versions of the packages you installed. The KFP SDK version should be >=1.6.", "! python3 -c \"import kfp; print('KFP SDK version: {}'.format(kfp.__version__))\"", "Before you begin\nGPU runtime\nThis tutorial does not require a GPU runtime.\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex AI APIs, Compute Engine APIs, and Cloud Storage.\n\n\nThe Google Cloud SDK is already installed in Google Cloud Notebook.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.", "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! 
gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\nLearn more about Vertex AI regions", "REGION = \"us-central1\" # @param {type: \"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your Google Cloud account\nIf you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. 
This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nimport os\nimport sys\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.", "BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! 
gsutil ls -al $BUCKET_NAME", "Service Account\nIf you don't know your service account, try to get your service account using gcloud command by executing the second cell below.", "SERVICE_ACCOUNT = \"[your-service-account]\" # @param {type:\"string\"}\n\nif (\n SERVICE_ACCOUNT == \"\"\n or SERVICE_ACCOUNT is None\n or SERVICE_ACCOUNT == \"[your-service-account]\"\n):\n # Get your GCP project id from gcloud\n shell_output = !gcloud auth list 2>/dev/null\n SERVICE_ACCOUNT = shell_output[2].strip()\n print(\"Service Account:\", SERVICE_ACCOUNT)", "Set service account access for Vertex AI Pipelines\nRun the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.", "! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_NAME\n\n! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_NAME", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants", "import google.cloud.aiplatform as aip", "Vertex AI Pipelines constants\nSetup up the following constants for Vertex AI Pipelines:", "PIPELINE_ROOT = \"{}/pipeline_root/iris\".format(BUCKET_NAME)", "Additional imports.", "from kfp.v2 import dsl\nfrom kfp.v2.dsl import ClassificationMetrics, Metrics, Output, component", "Initialize Vertex AI SDK for Python\nInitialize the Vertex AI SDK for Python for your project and corresponding bucket.", "aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)", "Define pipeline components using scikit-learn\nIn this section, you define some Python function-based components that use scikit-learn to train some classifiers and produce evaluations that can be visualized.\nNote the use of the @component() decorator in the definitions below. 
You can optionally set a list of packages for the component to install; the base image to use (the default is a Python 3.7 image); and the name of a component YAML file to generate, so that the component definition can be shared and reused.\nDefine wine_classification component\nThe first component shows how to visualize an ROC curve.\nNote that the function definition includes an output called wmetrics, of type Output[ClassificationMetrics]. You can visualize the metrics in the Pipelines user interface in the Cloud Console.\nTo do this, this example uses the artifact's log_roc_curve() method. This method takes as input arrays with the false positive rates, true positive rates, and thresholds, as generated by the sklearn.metrics.roc_curve function.\nWhen you evaluate the cell below, a task factory function called wine_classification is created, that is used to construct the pipeline definition. In addition, a component YAML file is created, which can be shared and loaded via file or URL to create the same task factory function.", "@component(\n packages_to_install=[\"sklearn\"],\n base_image=\"python:3.9\",\n output_component_file=\"wine_classification_component.yaml\",\n)\ndef wine_classification(wmetrics: Output[ClassificationMetrics]):\n from sklearn.datasets import load_wine\n from sklearn.ensemble import RandomForestClassifier\n from sklearn.metrics import roc_curve\n from sklearn.model_selection import cross_val_predict, train_test_split\n\n X, y = load_wine(return_X_y=True)\n # Binary classification problem for label 1.\n y = y == 1\n\n X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\n rfc = RandomForestClassifier(n_estimators=10, random_state=42)\n rfc.fit(X_train, y_train)\n y_scores = cross_val_predict(rfc, X_train, y_train, cv=3, method=\"predict_proba\")\n fpr, tpr, thresholds = roc_curve(\n y_true=y_train, y_score=y_scores[:, 1], pos_label=True\n )\n wmetrics.log_roc_curve(fpr, tpr, thresholds)", "Define iris_sgdclassifier 
component\nThe second component shows how to visualize a confusion matrix, in this case for a model trained using SGDClassifier.\nAs with the previous component, you create an output artifact called metricsc, of type Output[ClassificationMetrics]. Then, use the artifact's log_confusion_matrix method to visualize the confusion matrix results, as generated by the sklearn.metrics.confusion_matrix function.", "@component(packages_to_install=[\"scikit-learn\"], base_image=\"python:3.9\")\ndef iris_sgdclassifier(\n test_samples_fraction: float,\n metricsc: Output[ClassificationMetrics],\n):\n from sklearn import datasets, model_selection\n from sklearn.linear_model import SGDClassifier\n from sklearn.metrics import confusion_matrix\n\n iris_dataset = datasets.load_iris()\n train_x, test_x, train_y, test_y = model_selection.train_test_split(\n iris_dataset[\"data\"],\n iris_dataset[\"target\"],\n test_size=test_samples_fraction,\n )\n\n classifier = SGDClassifier()\n classifier.fit(train_x, train_y)\n predictions = model_selection.cross_val_predict(classifier, train_x, train_y, cv=3)\n metricsc.log_confusion_matrix(\n [\"Setosa\", \"Versicolour\", \"Virginica\"],\n confusion_matrix(\n train_y, predictions\n ).tolist(), # .tolist() to convert np array to list.\n )", "Define iris_logregression component\nThe third component also uses the \"iris\" dataset, but trains a LogisticRegression model. 
It logs model accuracy in the metrics output artifact.", "@component(\n packages_to_install=[\"scikit-learn\"],\n base_image=\"python:3.9\",\n)\ndef iris_logregression(\n input_seed: int,\n split_count: int,\n metrics: Output[Metrics],\n):\n from sklearn import datasets, model_selection\n from sklearn.linear_model import LogisticRegression\n\n # Load the iris dataset\n iris = datasets.load_iris()\n # Create feature matrix\n X = iris.data\n # Create target vector\n y = iris.target\n # Test set fraction\n test_size = 0.20\n\n # cross-validation settings\n kfold = model_selection.KFold(\n n_splits=split_count, random_state=input_seed, shuffle=True\n )\n # Model instance\n model = LogisticRegression()\n scoring = \"accuracy\"\n results = model_selection.cross_val_score(model, X, y, cv=kfold, scoring=scoring)\n print(f\"results: {results}\")\n\n # split data\n X_train, X_test, y_train, y_test = model_selection.train_test_split(\n X, y, test_size=test_size, random_state=input_seed\n )\n # fit model\n model.fit(X_train, y_train)\n\n # accuracy on test set\n result = model.score(X_test, y_test)\n print(f\"result: {result}\")\n metrics.log_metric(\"accuracy\", (result * 100.0))", "Define the pipeline\nNext, define a simple pipeline that uses the components that were created in the previous section.", "PIPELINE_NAME = \"metrics-pipeline-v2\"\n\n\n@dsl.pipeline(\n # Default pipeline root. 
You can override it when submitting the pipeline.\n pipeline_root=PIPELINE_ROOT,\n # A name for the pipeline.\n name=\"metrics-pipeline-v2\",\n)\ndef pipeline(seed: int, splits: int):\n wine_classification_op = wine_classification() # noqa: F841\n iris_logregression_op = iris_logregression( # noqa: F841\n input_seed=seed, split_count=splits\n )\n iris_sgdclassifier_op = iris_sgdclassifier(test_samples_fraction=0.3) # noqa: F841", "Compile the pipeline\nNext, compile the pipeline.", "from kfp.v2 import compiler # noqa: F811\n\ncompiler.Compiler().compile(\n pipeline_func=pipeline,\n package_path=\"tabular_classification_pipeline.json\",\n)", "Run the pipeline\nNext, run the pipeline.", "DISPLAY_NAME = \"iris_\" + TIMESTAMP\n\njob = aip.PipelineJob(\n display_name=DISPLAY_NAME,\n template_path=\"tabular_classification_pipeline.json\",\n job_id=f\"tabularclassification-v2{TIMESTAMP}-1\",\n pipeline_root=PIPELINE_ROOT,\n parameter_values={\"seed\": 7, \"splits\": 10},\n)\n\njob.run()", "Click on the generated link to see your run in the Cloud Console.\nIn the UI, many of the pipeline DAG nodes will expand or collapse when you click on them.\nComparing pipeline runs in the UI\nNext, generate another pipeline run that uses a different seed and split for the iris_logregression step.\nSubmit the new pipeline run:", "job = aip.PipelineJob(\n display_name=\"iris_\" + TIMESTAMP,\n template_path=\"tabular_classification_pipeline.json\",\n job_id=f\"tabularclassification-pipeline-v2{TIMESTAMP}-2\",\n pipeline_root=PIPELINE_ROOT,\n parameter_values={\"seed\": 5, \"splits\": 7},\n)\n\njob.run()", 
"When both pipeline runs have finished, compare their results by navigating to the pipeline runs list in the Cloud Console, selecting both of them, and clicking COMPARE at the top of the Console panel.\nCompare the parameters and metrics of the pipeline runs from their tracked metadata\nNext, you use the Vertex AI SDK for Python to compare the parameters and metrics of the pipeline runs. Wait until the pipeline runs have finished to run the next cell.", "pipeline_df = aip.get_pipeline_df(pipeline=PIPELINE_NAME)\nprint(pipeline_df.head(2))", "Plot parallel coordinates of parameters and metrics\nWith the metrics and parameters in a dataframe, you can perform further analysis to extract useful information. The following example compares data from each run using a parallel coordinate plot.", "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nplt.rcParams[\"figure.figsize\"] = [15, 5]\n\npipeline_df[\"param.input:seed\"] = pipeline_df[\"param.input:seed\"].astype(np.float16)\npipeline_df[\"param.input:splits\"] = pipeline_df[\"param.input:splits\"].astype(np.float16)\n\nax = pd.plotting.parallel_coordinates(\n pipeline_df.reset_index(level=0),\n \"run_name\",\n cols=[\"param.input:seed\", \"param.input:splits\", \"metric.accuracy\"],\n)\nax.set_yscale(\"symlog\")\nax.legend(bbox_to_anchor=(1.0, 0.5))", "Plot the ROC curve and calculate the AUC\nIn addition to basic metrics, you can extract complex metrics and perform further analysis using the get_pipeline_df method.", "try:\n df = pd.DataFrame(pipeline_df[\"metric.confidenceMetrics\"][0])\n auc = np.trapz(df[\"recall\"], df[\"falsePositiveRate\"])\n plt.plot(df[\"falsePositiveRate\"], df[\"recall\"], label=\"auc=\" + str(auc))\n plt.legend(loc=4)\n plt.show()\nexcept Exception as e:\n print(e)", "Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you 
created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nCloud Storage Bucket", "delete_dataset = True\ndelete_pipeline = True\ndelete_model = True\ndelete_endpoint = True\ndelete_bucket = True\n\ntry:\n if delete_model and \"DISPLAY_NAME\" in globals():\n models = aip.Model.list(\n filter=f\"display_name={DISPLAY_NAME}\", order_by=\"create_time\"\n )\n model = models[0]\n aip.Model.delete(model)\n print(\"Deleted model:\", model)\nexcept Exception as e:\n print(e)\n\ntry:\n if delete_endpoint and \"DISPLAY_NAME\" in globals():\n endpoints = aip.Endpoint.list(\n filter=f\"display_name={DISPLAY_NAME}_endpoint\", order_by=\"create_time\"\n )\n endpoint = endpoints[0]\n endpoint.undeploy_all()\n aip.Endpoint.delete(endpoint.resource_name)\n print(\"Deleted endpoint:\", endpoint)\nexcept Exception as e:\n print(e)\n\n# This tutorial uses a tabular dataset, so only TabularDataset cleanup is needed.\ntry:\n if delete_dataset and \"DISPLAY_NAME\" in globals():\n datasets = aip.TabularDataset.list(\n filter=f\"display_name={DISPLAY_NAME}\", order_by=\"create_time\"\n )\n dataset = datasets[0]\n aip.TabularDataset.delete(dataset.resource_name)\n print(\"Deleted dataset:\", dataset)\nexcept Exception as e:\n print(e)\n\ntry:\n if delete_pipeline and \"DISPLAY_NAME\" in globals():\n pipelines = aip.PipelineJob.list(\n filter=f\"display_name={DISPLAY_NAME}\", order_by=\"create_time\"\n )\n pipeline = pipelines[0]\n aip.PipelineJob.delete(pipeline.resource_name)\n print(\"Deleted pipeline:\", pipeline)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil rm -r $BUCKET_NAME" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]