| repo_name (string, length 6–77) | path (string, length 8–215) | license (15 classes) | cells (list) | types (list) |
|---|---|---|---|---|
mne-tools/mne-tools.github.io
|
0.17/_downloads/0d18b25bb9787426fc4d02b53e22d813/plot_object_source_estimate.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"The :class:SourceEstimate <mne.SourceEstimate> data structure\nSource estimates, commonly referred to as STC (Source Time Courses),\nare obtained from source localization methods.\nSource localization methods solve the so-called 'inverse problem'.\nMNE provides different methods for solving it:\ndSPM, sLORETA, LCMV, MxNE, etc.\nSource localization consists of projecting the EEG/MEG sensor data into\na 3-dimensional 'source space' positioned in the individual subject's brain\nanatomy. Hence the data is transformed such that the recorded time series at\neach sensor location maps to time series at each spatial location of the\n'source space', where our source estimates are defined.\nAn STC object contains the amplitudes of the sources over time.\nIt only stores the amplitudes of activations but\nnot the locations of the sources. To get access to the locations\nyou need to have the :class:source space <mne.SourceSpaces>\n(often abbreviated src) used to compute the\n:class:forward operator <mne.Forward> (often abbreviated fwd).\nSee tut_forward for more details on forward modeling, and\nsphx_glr_auto_tutorials_plot_mne_dspm_source_localization.py\nfor an example of source localization with dSPM, sLORETA or eLORETA.\nSource estimates come in different forms:\n- :class:`mne.SourceEstimate`: For cortically constrained source spaces.\n\n- :class:`mne.VolSourceEstimate`: For volumetric source spaces.\n\n- :class:`mne.VectorSourceEstimate`: For cortically constrained source\n spaces with vector-valued source activations (strength and orientation).\n\n- :class:`mne.MixedSourceEstimate`: For source spaces formed of a\n combination of cortically constrained and volumetric sources.\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>:class:`SourceEstimate <mne.SourceEstimate>` and\n :class:`VectorSourceEstimate <mne.VectorSourceEstimate>` are surface representations\n mostly used together with\n `FreeSurfer <sphx_glr_auto_tutorials_plot_background_freesurfer.py>`\n surface 
representations.</p></div>\n\nLet's get ourselves an idea of what a :class:mne.SourceEstimate really\nis. We first set up the environment and load some data:",
"import os\n\nfrom mne import read_source_estimate\nfrom mne.datasets import sample\n\nprint(__doc__)\n\n# Paths to example data\nsample_dir_raw = sample.data_path()\nsample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')\nsubjects_dir = os.path.join(sample_dir_raw, 'subjects')\n\nfname_stc = os.path.join(sample_dir, 'sample_audvis-meg')",
"Load and inspect example data\nThis data set contains source estimation data from an audiovisual task. It\nhas been mapped onto the inflated cortical surface representation obtained\nfrom\nFreeSurfer <sphx_glr_auto_tutorials_plot_background_freesurfer.py>\nusing the dSPM method. It highlights a noticeable peak in the auditory\ncortices.\nLet's see what it looks like.",
"stc = read_source_estimate(fname_stc, subject='sample')\n\n# Define plotting parameters\nsurfer_kwargs = dict(\n hemi='lh', subjects_dir=subjects_dir,\n clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',\n initial_time=0.09, time_unit='s', size=(800, 800),\n smoothing_steps=5)\n\n# Plot surface\nbrain = stc.plot(**surfer_kwargs)\n\n# Add title\nbrain.add_text(0.1, 0.9, 'SourceEstimate', 'title', font_size=16)",
"SourceEstimate (stc)\nA source estimate contains the time series of activations\nat spatial locations defined by the source space.\nIn the context of FreeSurfer surfaces - which consist of 3D triangulations\n- we can call each data point on the inflated brain\nrepresentation a vertex. If every vertex represents the spatial location\nof a time series, the time series and spatial locations can be written into a\nmatrix, where each vertex (rows) is assigned a value at multiple time\npoints (columns). This value is the strength of our signal at a given point in\nspace and time. Exactly this matrix is stored in stc.data.\nLet's have a look at its shape:",
"shape = stc.data.shape\n\nprint('The data has %s vertex locations with %s sample points each.' % shape)",
"We see that stc carries 7498 time series of 25 samples each. Those time\nseries belong to 7498 vertices, which in turn represent locations\non the cortical surface. So where do those vertex values come from?\nFreeSurfer separates the two hemispheres and creates a surface\nrepresentation for each of them. Indices to surface locations\nare stored in stc.vertices. This is a list with two arrays of integers\nthat index a particular vertex of the FreeSurfer mesh. A value of 42 would\nhence map to the x, y, z coordinates of the mesh vertex with index 42.\nSee the next section on how to get access to the positions in a\n:class:mne.SourceSpaces object.\nSince both hemispheres are always represented separately, both attributes\nintroduced above can also be obtained by selecting the respective\nhemisphere. This is done by adding the correct prefix (lh or rh).",
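The vertex-to-coordinate mapping described above can be sketched with plain NumPy arrays. The shapes here are illustrative assumptions standing in for the real mesh and STC, not the actual sample-data dimensions:

```python
import numpy as np

# Toy stand-ins for a FreeSurfer mesh and an STC (hypothetical shapes)
rr = np.random.rand(100, 3)       # mesh coordinates: one (x, y, z) row per mesh vertex
vertices = np.array([2, 42, 57])  # like stc.vertices[0]: indices into the mesh
data = np.random.rand(3, 25)      # like stc.lh_data: one time series per listed vertex

# A vertex value of 42 maps to the x, y, z coordinates of mesh vertex 42
coords = rr[vertices]
print(coords.shape)  # one coordinate triple per listed vertex
```

Indexing `rr` with the integer array picks out exactly the rows whose time series are stored in the data matrix, which is the mapping the text describes.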
"shape_lh = stc.lh_data.shape\n\nprint('The left hemisphere has %s vertex locations with %s sample points each.'\n % shape_lh)",
"Since we did not change the time representation, only the selected subset of\nvertices and hence only the row size of the matrix changed. We can check if\nthe rows of stc.lh_data and stc.rh_data sum up to the value we had\nbefore.",
"is_equal = stc.lh_data.shape[0] + stc.rh_data.shape[0] == stc.data.shape[0]\n\nprint('The number of vertices in stc.lh_data and stc.rh_data do ' +\n ('not ' if not is_equal else '') +\n 'sum up to the number of rows in stc.data')",
"Indeed, as the mindful reader already suspected, the same can be said\nabout vertices. stc.lh_vertno thereby maps to the left and\nstc.rh_vertno to the right inflated surface representation of\nFreeSurfer.\nRelationship to SourceSpaces (src)\nAs mentioned above, :class:src <mne.SourceSpaces> carries the mapping from\nstc to the surface. The surface is built up from a\ntriangulated mesh <https://en.wikipedia.org/wiki/Surface_triangulation>_\nfor each hemisphere. Each face of the mesh is a triangle consisting of 3 vertices.\nSince src is a list of two source spaces (left and right hemisphere), we can\naccess the respective data by selecting the source space first. Faces\nbuilding up the left hemisphere can be accessed via src[0]['tris'], where\nthe index $0$ stands for the left and $1$ for the right\nhemisphere.\nThe values in src[0]['tris'] refer to row indices in src[0]['rr'].\nHere we find the actual coordinates of the surface mesh. Hence every index\nvalue for vertices will select a coordinate from here. Furthermore,\nsrc[0]['vertno'] stores the same data as stc.lh_vertno,\nexcept when working with sparse solvers such as\n:func:mne.inverse_sparse.mixed_norm, as then only a fraction of\nvertices actually have non-zero activations.\nIn other words, stc.lh_vertno equals src[0]['vertno'], whereas\nstc.rh_vertno equals src[1]['vertno']. Thus the Nth time series in\nstc.lh_data corresponds to the Nth value in stc.lh_vertno and\nsrc[0]['vertno'] respectively, which in turn maps the time series to a\nspecific location on the surface, represented by the Cartesian\ncoordinates src[0]['rr'][stc.lh_vertno[N]].\nLet's obtain the peak amplitude of the data as a vertex and time point index:",
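The indexing chain described above can be sketched with NumPy arrays standing in for `src[0]` and the stc attributes. The shapes are hypothetical, chosen only to show how `tris`, `vertno`, `rr`, and `lh_data` relate:

```python
import numpy as np

# Hypothetical stand-ins for src[0] (left hemisphere) and stc attributes
rr = np.random.rand(10, 3)               # src[0]['rr']: mesh coordinates
tris = np.array([[0, 1, 2], [2, 3, 4]])  # src[0]['tris']: each face = 3 row indices into rr
vertno = np.array([1, 4, 7])             # src[0]['vertno'] == stc.lh_vertno
lh_data = np.random.rand(3, 25)          # stc.lh_data: one time series per entry in vertno

# The Nth time series in lh_data corresponds to mesh vertex vertno[N],
# whose Cartesian coordinates are rr[vertno[N]]
N = 2
series = lh_data[N]       # 25-sample time course
location = rr[vertno[N]]  # (x, y, z) position on the surface
```

The key point is that `vertno` is the bridge: row N of the data matrix and entry N of `vertno` describe the same source, and `vertno[N]` selects its coordinates from `rr`.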
"peak_vertex, peak_time = stc.get_peak(hemi='lh', vert_as_index=True,\n time_as_index=True)",
"The first value indicates which vertex and the second which time\npoint index from within stc.lh_vertno or stc.lh_data is used. We can\nuse this information to get the index of the surface vertex\ncorresponding to the peak, and its value.",
"peak_vertex_surf = stc.lh_vertno[peak_vertex]\n\npeak_value = stc.lh_data[peak_vertex, peak_time]",
"Let's visualize this as well, using the same surfer_kwargs as in the\nbeginning.",
"brain = stc.plot(**surfer_kwargs)\n\n# We add the new peak coordinate (as vertex index) as an annotation dot\nbrain.add_foci(peak_vertex_surf, coords_as_verts=True, hemi='lh', color='blue')\n\n# We add a title as well, stating the amplitude at this time and location\nbrain.add_text(0.1, 0.9, 'Peak coordinate', 'title', font_size=14)",
"Summary\n:class:stc <mne.SourceEstimate> is a class of MNE-Python, representing the\ntransformed time series obtained from source estimation. For both hemispheres\nthe data is stored separately in stc.lh_data and stc.rh_data in the form\nof an $m \\times n$ matrix, where $m$ is the number of spatial\nlocations belonging to that hemisphere and $n$ the number of time\npoints.\nstc.lh_vertno and stc.rh_vertno correspond to src[0]['vertno']\nand src[1]['vertno']. Those are the indices of locations on the surface\nrepresentation.\nThe surface's mesh coordinates are stored in src[0]['rr'] and\nsrc[1]['rr'] for left and right hemisphere. 3D coordinates can be\naccessed by the above logic::\n\n\n\nlh_coordinates = src[0]['rr'][stc.lh_vertno]\nlh_data = stc.lh_data\n\n\n\nor::\n\n\n\nrh_coordinates = src[1]['rr'][src[1]['vertno']]\nrh_data = stc.rh_data"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/ai-platform-samples
|
notebooks/samples/explanations/ai-explanations-image.ipynb
|
apache-2.0
|
[
"# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"AI Explanations: Deploying an image model\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/ml-on-gcp/blob/master/tutorials/explanations/ai-explanations-image.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/ml-on-gcp/tree/main/tutorials/explanations/ai-explanations-image.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n\nOverview\nThis tutorial shows how to train a Keras classification model on image data and deploy it to the AI Platform Explanations service to get feature attributions on your deployed model.\nIf you've already got a trained model and want to deploy it to AI Explanations, skip to the Export the model as a TF 1 SavedModel section.\nDataset\nThe dataset used for this tutorial is the flowers dataset from TensorFlow Datasets.\nObjective\nThe goal of this tutorial is to train a model on a simple image dataset (flower classification) to understand how you can use AI Explanations with image models. For image models, AI Explanations returns an image with the pixels highlighted that signaled your model's prediction the most.\nThis tutorial focuses more on deploying the model to AI Platform with Explanations than on the design of the model itself. \nCosts\nThis tutorial uses billable components of Google Cloud Platform (GCP):\n\nAI Platform for:\nPrediction\nExplanation: AI Explanations come at no extra charge on top of prediction prices. 
However, explanation requests take longer to process than normal predictions, so heavy usage of Explanations along with auto-scaling may result in more nodes being started and thus more charges\nCloud Storage for:\nStoring model files for deploying to Cloud AI Platform\n\nLearn about AI Platform\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nBefore you begin\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime --> Change runtime type\nThis tutorial assumes you are running the notebook either in Colab or Cloud AI Platform Notebooks.\nSet up your GCP project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a GCP project.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the AI Platform Training & Prediction and Compute Engine APIs.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.",
"PROJECT_ID = \"[your-project-id]\"",
"Authenticate your GCP account\nIf you are using AI Platform Notebooks, your environment is already\nauthenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions\nwhen prompted to authenticate your account via OAuth.",
"import sys, os\nimport warnings\nimport googleapiclient\n\nwarnings.filterwarnings('ignore')\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' \n# If you are running this notebook in Colab, follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nif 'google.colab' in sys.modules:\n from google.colab import auth as google_auth\n google_auth.authenticate_user()\n !gcloud config set project $PROJECT_ID\n try:\n %tensorflow_version 1.x\n except Exception:\n pass\n import tensorflow as tf",
"Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you submit a training job using the Cloud SDK, you upload a Python package\ncontaining your training code to a Cloud Storage bucket. AI Platform runs\nthe code from this package. In this tutorial, AI Platform also saves the\ntrained model that results from your job in the same bucket. You can then\ncreate an AI Platform model version based on this output in order to serve\nonline predictions.\nSet the name of your Cloud Storage bucket below. It must be unique across all\nCloud Storage buckets. \nYou may also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Make sure to choose a region where Cloud\nAI Platform services are\navailable. You may\nnot use a Multi-Regional Storage bucket for training with AI Platform.",
"BUCKET_NAME = PROJECT_ID + \"_flowers_model\"\nREGION = \"us-central1\"",
"Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.",
"! gsutil mb -l $REGION gs://$BUCKET_NAME",
"Import libraries\nImport the libraries we'll be using in this tutorial. This tutorial has been tested with TensorFlow versions 1.14 and 1.15.",
"import math, json, random\nimport numpy as np\nimport PIL\nimport tensorflow as tf\n\nfrom matplotlib import pyplot as plt\nfrom base64 import b64encode\n\n\nprint(\"Tensorflow version \" + tf.__version__)\nAUTO = tf.data.experimental.AUTOTUNE",
"Downloading and preprocessing the flowers dataset\nIn this section you'll download the flower images (in this dataset they are TFRecords), use the tf.data API to create a data input pipeline, and split the data into training and validation sets.",
"GCS_PATTERN = 'gs://flowers-public/tfrecords-jpeg-192x192-2/*.tfrec'\nIMAGE_SIZE = [192, 192]\n\nBATCH_SIZE = 32 \n\nVALIDATION_SPLIT = 0.19\nCLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'] # do not change, maps to the labels in the data (folder names)\n\n# Split data files between training and validation\nfilenames = tf.gfile.Glob(GCS_PATTERN)\nrandom.shuffle(filenames)\nsplit = int(len(filenames) * VALIDATION_SPLIT)\ntraining_filenames = filenames[split:]\nvalidation_filenames = filenames[:split]\nprint(\"Pattern matches {} data files. Splitting dataset into {} training files and {} validation files\".format(len(filenames), len(training_filenames), len(validation_filenames)))\nvalidation_steps = int(3670 // len(filenames) * len(validation_filenames)) // BATCH_SIZE\nsteps_per_epoch = int(3670 // len(filenames) * len(training_filenames)) // BATCH_SIZE\nprint(\"With a batch size of {}, there will be {} batches per training epoch and {} batch(es) per validation run.\".format(BATCH_SIZE, steps_per_epoch, validation_steps))",
"The following cell contains some image visualization utility functions. This code isn't essential to training or deploying the model. \nIf you're running this from Colab the cell is hidden. You can look at the code by right clicking on the cell --> \"Form\" --> \"Show form\" if you'd like to see it.",
"#@title display utilities [RUN ME]\n\ndef dataset_to_numpy_util(dataset, N):\n dataset = dataset.batch(N)\n \n if tf.executing_eagerly():\n # In eager mode, iterate over the Dataset directly.\n for images, labels in dataset:\n numpy_images = images.numpy()\n numpy_labels = labels.numpy()\n break\n \n else: # In non-eager mode, get the TF node that\n # yields the next item and run it in a tf.Session.\n get_next_item = dataset.make_one_shot_iterator().get_next()\n with tf.Session() as sess:\n numpy_images, numpy_labels = sess.run(get_next_item)\n\n return numpy_images, numpy_labels\n\ndef title_from_label_and_target(label, correct_label):\n label = np.argmax(label, axis=-1) # one-hot to class number\n correct_label = np.argmax(correct_label, axis=-1) # one-hot to class number\n correct = (label == correct_label)\n return \"{} [{}{}{}]\".format(CLASSES[label], str(correct), ', should be ' if not correct else '',\n CLASSES[correct_label] if not correct else ''), correct\n\ndef display_one_flower(image, title, subplot, red=False):\n plt.subplot(subplot)\n plt.axis('off')\n plt.imshow(image)\n plt.title(title, fontsize=16, color='red' if red else 'black')\n return subplot+1\n \ndef display_9_images_from_dataset(dataset):\n subplot=331\n plt.figure(figsize=(13,13))\n images, labels = dataset_to_numpy_util(dataset, 9)\n for i, image in enumerate(images):\n title = CLASSES[np.argmax(labels[i], axis=-1)]\n subplot = display_one_flower(image, title, subplot)\n if i >= 8:\n break\n \n plt.tight_layout()\n plt.subplots_adjust(wspace=0.1, hspace=0.1)\n plt.show()\n \ndef display_9_images_with_predictions(images, predictions, labels):\n subplot=331\n plt.figure(figsize=(13,13))\n for i, image in enumerate(images):\n title, correct = title_from_label_and_target(predictions[i], labels[i])\n subplot = display_one_flower(image, title, subplot, not correct)\n if i >= 8:\n break\n \n plt.tight_layout()\n plt.subplots_adjust(wspace=0.1, hspace=0.1)\n plt.show()\n \ndef display_training_curves(training, validation, title, subplot):\n if subplot%10==1: # set up the subplots on the first call\n plt.subplots(figsize=(10,10), facecolor='#F0F0F0')\n plt.tight_layout()\n ax = plt.subplot(subplot)\n ax.set_facecolor('#F8F8F8')\n ax.plot(training)\n ax.plot(validation)\n ax.set_title('model '+ title)\n ax.set_ylabel(title)\n ax.set_xlabel('epoch')\n ax.legend(['train', 'valid.'])",
"Read images and labels from TFRecords",
"def read_tfrecord(example):\n features = {\n \"image\": tf.io.FixedLenFeature([], tf.string), # tf.string means bytestring\n \"class\": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar\n \"one_hot_class\": tf.io.VarLenFeature(tf.float32),\n }\n example = tf.parse_single_example(example, features)\n image = tf.image.decode_jpeg(example['image'], channels=3)\n image = tf.cast(image, tf.float32) / 255.0 # convert image to floats in [0, 1] range\n image = tf.reshape(image, [*IMAGE_SIZE, 3]) # explicit size will be needed for TPU\n class_label = tf.cast(example['class'], tf.int32)\n one_hot_class = tf.sparse.to_dense(example['one_hot_class'])\n one_hot_class = tf.reshape(one_hot_class, [5])\n return image, one_hot_class\n\ndef load_dataset(filenames):\n # Read data from TFRecords\n\n dataset = tf.data.Dataset.from_tensor_slices(filenames)\n dataset = dataset.interleave(tf.data.TFRecordDataset, cycle_length=16, num_parallel_calls=AUTO) # faster\n dataset = dataset.map(read_tfrecord, num_parallel_calls=AUTO)\n return dataset",
"In the following cell, we'll use a visualization utility function we defined above to preview some flower images with their associated labels.",
"display_9_images_from_dataset(load_dataset(training_filenames))",
"Create training and validation datasets",
"def get_batched_dataset(filenames):\n dataset = load_dataset(filenames)\n dataset = dataset.cache() # This dataset fits in RAM\n dataset = dataset.repeat()\n dataset = dataset.shuffle(2048)\n dataset = dataset.batch(BATCH_SIZE)\n dataset = dataset.prefetch(AUTO) # prefetch next batch while training (autotune prefetch buffer size)\n # For proper ordering of map/batch/repeat/prefetch, see Dataset performance guide: https://www.tensorflow.org/guide/performance/datasets\n return dataset\n\ndef get_training_dataset():\n return get_batched_dataset(training_filenames)\n\ndef get_validation_dataset():\n return get_batched_dataset(validation_filenames)\n\nsome_flowers, some_labels = dataset_to_numpy_util(load_dataset(validation_filenames), 8*20)",
"Build, train, and evaluate the model\nIn this section we'll define the layers of our model using the Keras Sequential model API. Then we'll run training and evaluation, and finally run some test predictions on the local model.",
"model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(kernel_size=3, filters=16, padding='same', activation='relu', input_shape=[*IMAGE_SIZE, 3]),\n tf.keras.layers.Conv2D(kernel_size=3, filters=30, padding='same', activation='relu'),\n tf.keras.layers.MaxPooling2D(pool_size=2),\n tf.keras.layers.Conv2D(kernel_size=3, filters=60, padding='same', activation='relu'),\n tf.keras.layers.MaxPooling2D(pool_size=2),\n tf.keras.layers.Conv2D(kernel_size=3, filters=90, padding='same', activation='relu'),\n tf.keras.layers.MaxPooling2D(pool_size=2),\n tf.keras.layers.Conv2D(kernel_size=3, filters=110, padding='same', activation='relu'),\n tf.keras.layers.MaxPooling2D(pool_size=2),\n tf.keras.layers.Conv2D(kernel_size=3, filters=130, padding='same', activation='relu'),\n tf.keras.layers.Conv2D(kernel_size=1, filters=40, padding='same', activation='relu'),\n tf.keras.layers.GlobalAveragePooling2D(),\n tf.keras.layers.Dense(5, activation='softmax')\n])\n\nmodel.compile(\n optimizer='adam',\n loss= 'categorical_crossentropy',\n metrics=['accuracy'])\n\nmodel.summary()",
"Train the model\nTrain this on a GPU if you have access (in Colab, from the menu select Runtime --> Change runtime type). On a CPU, it'll take ~30 minutes to run training. On a GPU, it takes ~5 minutes.",
"EPOCHS = 20 # Train for 60 epochs for higher accuracy, 20 should get you ~75%\n\nhistory = model.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch, epochs=EPOCHS,\n validation_data=get_validation_dataset(), validation_steps=validation_steps)",
"Get predictions on local model and visualize them",
"# Randomize the input so that you can execute multiple times to change results\npermutation = np.random.permutation(8*20)\nsome_flowers, some_labels = (some_flowers[permutation], some_labels[permutation])\n\npredictions = model.predict(some_flowers, batch_size=16)\nevaluations = model.evaluate(some_flowers, some_labels, batch_size=16)\n \nprint(np.array(CLASSES)[np.argmax(predictions, axis=-1)].tolist())\nprint('[val_loss, val_acc]', evaluations)\n\ndisplay_9_images_with_predictions(some_flowers, predictions, some_labels)",
"Export the model as a TF 1 SavedModel\nAI Explanations currently supports TensorFlow 1.x. In order to deploy our model in a format compatible with AI Explanations, we'll follow the steps below to convert our Keras model to a TF Estimator, and then use the export_saved_model method to generate the SavedModel and save it in GCS.",
"## Convert our Keras model to an estimator and then export to SavedModel\nkeras_estimator = tf.keras.estimator.model_to_estimator(keras_model=model, model_dir='savedmodel_export')",
"The decode_img_bytes function below handles converting image bytes (the format our served model will expect) into floats: the [192,192,3]-dimensional matrix our model expects. For image explanation models, we recommend this approach rather than sending an image as a float array from the client.",
"def decode_img_bytes(img_bytes, height, width, color_depth):\n features = tf.squeeze(img_bytes, axis=1, name='input_squeeze')\n float_pixels = tf.map_fn(\n lambda img_string: tf.io.decode_image(\n img_string, \n channels=color_depth,\n dtype=tf.float32\n ),\n features,\n dtype=tf.float32,\n name='input_convert'\n )\n\n tf.Tensor.set_shape(float_pixels, (None, height, width, color_depth))\n float_pixels = tf.identity(float_pixels, name='input_pixels')\n\n return float_pixels\n\ndef serving_input_receiver_fn():\n img_bytes = tf.placeholder(shape=(None,1), dtype=tf.string)\n img_float = decode_img_bytes(img_bytes, 192,192, 3)\n return tf.estimator.export.ServingInputReceiver({'conv2d_input': img_float}, {'conv2d_input': img_bytes})\n\nexport_path = keras_estimator.export_saved_model(\n 'gs://' + BUCKET_NAME + '/explanations',\n serving_input_receiver_fn\n).decode('utf-8')\nprint(\"Model exported to: \", export_path)\n\n!saved_model_cli show --dir $export_path --all",
"Generate the metadata for AI Explanations\nIn order to deploy this model to Cloud Explanations, we need to create an explanation_metadata.json file with information about our model inputs, outputs, and baseline. \nFor image models, using [0,1] as your input baseline represents black and white images. In this case we're using np.random to generate the baseline because our training images contain a lot of black and white (i.e. daisy petals).",
"random_baseline = np.random.rand(192,192,3)\n\nexplanation_metadata = {\n \"inputs\": {\n \"data\": {\n \"input_tensor_name\": \"input_pixels:0\",\n \"modality\": \"image\",\n \"input_baselines\": [random_baseline.tolist()]\n }\n },\n \"outputs\": {\n \"probability\": {\n \"output_tensor_name\": \"dense/Softmax:0\"\n }\n },\n \"framework\": \"tensorflow\"\n }\n\n# Write the json to a local file\nwith open('explanation_metadata.json', 'w') as output_file:\n json.dump(explanation_metadata, output_file)\n\n# Copy this file into the GCS location with our SavedModel assets\n!gsutil cp explanation_metadata.json $export_path",
"Deploy model to AI Explanations\nIn this step we'll use the gcloud CLI to deploy our model to AI Explanations.\nCreate the model",
"MODEL = 'flowers'\n\n# Create the model if it doesn't exist yet (you only need to run this once)\n!gcloud ai-platform models create $MODEL --enable-logging --region=$REGION",
"Create explainable model versions\nFor image models, we offer two choices of explanation method: \n* Integrated Gradients (IG)\n* XRAI \nYou can find more info on each method in the documentation. Below, we'll show you how to deploy a version with each method so that you can compare results. If you already know which explanation method you'd like to use, you can deploy one version and skip the code blocks for the other method.\nCreating a version will take ~5-10 minutes. Note that your first deploy may take longer.\nDeploy an explainable model with Integrated Gradients",
"# Each time you create a version the name should be unique\nIG_VERSION = 'v_ig'\n\n# Create the version with gcloud\n!gcloud beta ai-platform versions create $IG_VERSION --region=$REGION \\\n--model $MODEL \\\n--origin $export_path \\\n--runtime-version 1.15 \\\n--framework TENSORFLOW \\\n--python-version 3.7 \\\n--machine-type n1-standard-4 \\\n--explanation-method integrated-gradients \\\n--num-integral-steps 25\n\n# Make sure the IG model deployed correctly. State should be `READY` in the following log\n!gcloud ai-platform versions describe $IG_VERSION --model $MODEL --region=$REGION",
"Deploy an explainable model with XRAI",
"# Each time you create a version the name should be unique\nXRAI_VERSION = 'v_xrai'\n\n# Create the XRAI version with gcloud\n!gcloud beta ai-platform versions create $XRAI_VERSION --region=$REGION \\\n--model $MODEL \\\n--origin $export_path \\\n--runtime-version 1.15 \\\n--framework TENSORFLOW \\\n--python-version 3.7 \\\n--machine-type n1-standard-4 \\\n--explanation-method xrai \\\n--num-integral-steps 25\n\n# Make sure the XRAI model deployed correctly. State should be `READY` in the following log\n!gcloud ai-platform versions describe $XRAI_VERSION --model $MODEL --region=$REGION",
"Get predictions and explanations on deployed model\nHere we'll prepare some test images to send to our model. Then we'll use the AI Platform Prediction API to get the model's predicted class along with the explanation for each image.",
"# Download test flowers from public bucket\n!mkdir flowers\n!gsutil -m cp gs://flowers_model/test_flowers/* ./flowers\n\n# Resize the images to what our model is expecting (192,192)\ntest_filenames = []\n\nfor i in os.listdir('flowers'):\n img_path = 'flowers/' + i\n with PIL.Image.open(img_path) as ex_img:\n resize_img = ex_img.resize([192,192])\n resize_img.save(img_path)\n test_filenames.append(img_path)\n\n# Prepare our prediction JSON to send to our Cloud model\ninstances = []\n\nfor i in test_filenames:\n with open(i, 'rb') as example_img:\n b64str = b64encode(example_img.read()).decode('utf-8')\n instances.append({'conv2d_input': [{'b64': b64str}]})",
"The predict_json method below calls our deployed model with the specified image data, model name, and version.",
"# This is adapted from a sample in the docs\n# Find it here: https://cloud.google.com/ai-platform/prediction/docs/online-predict#python\n\ndef predict_json(project, model, instances, version=None):\n \"\"\"Send json data to a deployed model for prediction.\n\n Args:\n project (str): project where the AI Platform Model is deployed.\n model (str): model name.\n instances ([Mapping[str: Any]]): Keys should be the names of Tensors\n your deployed model expects as inputs. Values should be datatypes\n convertible to Tensors, or (potentially nested) lists of datatypes\n convertible to tensors.\n version: str, version of the model to target.\n Returns:\n Mapping[str: any]: dictionary of prediction results defined by the\n model.\n \"\"\"\n\n service = googleapiclient.discovery.build('ml', 'v1')\n name = 'projects/{}/models/{}'.format(project, model)\n\n if version is not None:\n name += '/versions/{}'.format(version)\n\n response = service.projects().explain(\n name=name,\n body={'instances': instances}\n ).execute()\n\n if 'error' in response:\n raise RuntimeError(response['error'])\n\n return response",
"Make an AI Explanations request with gcloud\nFirst we'll look at the explanations results for IG, then we'll compare with XRAI. \nIf you only deployed one model above, run only the cell for that explanation method.",
"# IG EXPLANATIONS\nig_response = predict_json(PROJECT_ID, MODEL, instances, IG_VERSION)\n\n# XRAI EXPLANATIONS\nxrai_response = predict_json(PROJECT_ID, MODEL, instances, XRAI_VERSION)",
"See our model's predicted classes without explanations\nFirst, let's preview the images and see what our model predicted for them. Why did the model predict these classes? We'll see explanations in the next section.",
"from io import BytesIO\nimport matplotlib.image as mpimg\nimport base64\n\n# Note: change the `ig_response` variable below if you didn't deploy an IG model\nfor i, val in enumerate(ig_response['explanations']):\n class_name = CLASSES[val['attributions_by_label'][0]['label_index']]\n confidence_score = str(round(val['attributions_by_label'][0]['example_score'] * 100, 3)) + '%'\n print('Predicted class: ' + class_name + '\\n' + 'Confidence score: ' + confidence_score)\n \n img = instances[i]['conv2d_input'][0]['b64']\n im = BytesIO(base64.b64decode(img))\n img_arr = mpimg.imread(im, format='JPG')\n plt.imshow(img_arr, interpolation='nearest')\n plt.show()",
"Visualize the images with AI Explanations\nNow let's look at the explanations. \nThe images returned show the explanations for only the top class predicted by the model. This means that if one of our model's predictions is incorrect, the pixels you see highlighted are for the incorrect class. For example, if the model predicted rose when it should have predicted tulip, you'll see explanations for why the model thought this image was a rose.\nFirst, we'll visualize the attributions for our Integrated Gradients version. Currently, the highlighted pixels returned from AI Explanations show the top 60% of pixels that contributed to the model's prediction. The pixels we'll see after running the cell below show us the pixels that signaled the model's prediction most.",
"import io\n\nfor idx, flower in enumerate(ig_response['explanations']):\n predicted_flower = CLASSES[flower['attributions_by_label'][0]['label_index']]\n confidence = flower['attributions_by_label'][0]['example_score']\n print('Predicted flower: ', predicted_flower)\n b64str = flower['attributions_by_label'][0]['attributions']['data']['b64_jpeg']\n i = base64.b64decode(b64str)\n i = io.BytesIO(i)\n i = mpimg.imread(i, format='JPG')\n\n plt.imshow(i, interpolation='nearest')\n plt.show()",
"Let's compare this with the image explanations we get from our XRAI version.",
"for idx, flower in enumerate(xrai_response['explanations']):\n predicted_flower = CLASSES[flower['attributions_by_label'][0]['label_index']]\n confidence = flower['attributions_by_label'][0]['example_score']\n print('Predicted flower: ', predicted_flower)\n b64str = flower['attributions_by_label'][0]['attributions']['data']['b64_jpeg']\n i = base64.b64decode(b64str)\n i = io.BytesIO(i)\n i = mpimg.imread(i, format='JPG')\n\n plt.imshow(i, interpolation='nearest')\n plt.show()",
"Sanity check our explanations\nTo better make sense of the feature attributions we're getting, we should compare them with our model's baseline. In the case of image models, the baseline_score returned by AI Explanations is the score our model would give an image input with the baseline we specified. The baseline will be different for each class in our model. In other words, every time your model predicts tulip as the top class, you'll see the same baseline score. \nIn this case, we used a baseline image of np.random randomly generated values. If you'd like the baseline for your model to be solid black and white images instead, pass [0,1] as the value to input_baselines in your explanation_metadata.json file above.\nIf the baseline_score is very close to the value of example_score, the highlighted pixels may not be meaningful. \nBelow we'll calculate the difference between baseline_score and example_score for the 3 test images above.\nNote that the score values for classification models are probabilities: the confidence your model has in its predicted class. A score of 0.90 for tulip means your model has classified the image as a tulip with 90% confidence.\nWe're running sanity checks below on our IG model, but if you'd like to inspect your XRAI model just swap out the ig_response and IG_VERSION variables below.",
"for i,val in enumerate(ig_response['explanations']):\n baseline_score = val['attributions_by_label'][0]['baseline_score']\n predicted_score = val['attributions_by_label'][0]['example_score']\n print('Baseline score: ', baseline_score) \n print('Predicted score: ', predicted_score)\n print('Predicted - Baseline: ', predicted_score - baseline_score, '\\n')",
"As another sanity check, we'll also look at the explanations for this model's baseline image: an image array of randomly generated values using np.random. First, we'll convert the same np.random baseline array we generated above to a base64 string and preview it.",
"# Convert our baseline from above to a base64 string\nrand_test_img = PIL.Image.fromarray((random_baseline * 255).astype('uint8'))\nbuffer = BytesIO()\nrand_test_img.save(buffer, format=\"BMP\")\nnew_image_string = base64.b64encode(buffer.getvalue()).decode(\"utf-8\")\n\n# Preview it\nplt.imshow(rand_test_img)\n\n# Save the image to a variable in the format our model is expecting\nsanity_check_img = {'conv2d_input': [{'b64': new_image_string}]}\n\n# Make the prediction request\nsanity_check_resp = predict_json(PROJECT_ID, MODEL, sanity_check_img, IG_VERSION)\n\n# View explanations on the baseline random image\nsanity_check_img = base64.b64decode(sanity_check_resp['explanations'][0]['attributions_by_label'][0]['attributions']['data']['b64_jpeg'])\nsanity_check_img = io.BytesIO(sanity_check_img)\nsanity_check_img = mpimg.imread(sanity_check_img, format='JPG')\n\nplt.imshow(sanity_check_img, interpolation='nearest')\nplt.show()",
"The difference between your model's predicted score and the baseline score for this image should be close to 0. Run the following cell to confirm. If there is a difference between these two values you may need to increase the number of integral steps used when you deploy your model.",
"baseline_score = sanity_check_resp['explanations'][0]['attributions_by_label'][0]['baseline_score']\nexample_score = sanity_check_resp['explanations'][0]['attributions_by_label'][0]['example_score']\n\nprint(abs(baseline_score - example_score))",
"Cleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.\nAlternatively, you can clean up individual resources by running the following\ncommands:",
"# Delete model version resource\n!gcloud ai-platform versions delete $IG_VERSION --quiet --model $MODEL\n!gcloud ai-platform versions delete $XRAI_VERSION --quiet --model $MODEL\n\n# Delete model resource\n!gcloud ai-platform models delete $MODEL --quiet\n\n# Delete Cloud Storage objects that were created\n!gsutil -m rm -r $BUCKET_NAME",
"If your Cloud Storage bucket doesn't contain any other objects and you would like to delete it, run gsutil rm -r gs://$BUCKET_NAME.\nWhat's next?\nTo learn more about AI Explanations, check out the resources here.\n\nAI Explanations documentation\nAI Explanations whitepaper\nIntegrated gradients paper\nXRAI paper"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/ncar/cmip6/models/sandbox-2/land.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: NCAR\nSource ID: SANDBOX-2\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:22\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ncar', 'sandbox-2', 'land')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Conservation Properties\n3. Key Properties --> Timestepping Framework\n4. Key Properties --> Software Properties\n5. Grid\n6. Grid --> Horizontal\n7. Grid --> Vertical\n8. Soil\n9. Soil --> Soil Map\n10. Soil --> Snow Free Albedo\n11. Soil --> Hydrology\n12. Soil --> Hydrology --> Freezing\n13. Soil --> Hydrology --> Drainage\n14. Soil --> Heat Treatment\n15. Snow\n16. Snow --> Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --> Vegetation\n21. Carbon Cycle --> Vegetation --> Photosynthesis\n22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\n23. Carbon Cycle --> Vegetation --> Allocation\n24. Carbon Cycle --> Vegetation --> Phenology\n25. Carbon Cycle --> Vegetation --> Mortality\n26. Carbon Cycle --> Litter\n27. Carbon Cycle --> Soil\n28. Carbon Cycle --> Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --> Oceanic Discharge\n32. Lakes\n33. Lakes --> Method\n34. Lakes --> Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nFluxes exchanged with the atmopshere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Atmospheric Coupling Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Land Cover\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTypes of land cover defined in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.7. Land Cover Change\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Tiling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Water\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Carbon\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Timestepping Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Total Depth\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe total depth of the soil (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of soil in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Heat Water Coupling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the coupling between heat and water in the soil",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Number Of Soil layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the soil scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Soil --> Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of soil map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil structure map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Texture\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil texture map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Organic Matter\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil organic matter map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Albedo\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil albedo map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.6. Water Table\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil water table map, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.7. Continuously Varying Soil Depth\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the soil properties vary continuously with depth?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.8. Soil Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil depth map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Soil --> Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow free albedo prognostic?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"10.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Direct Diffuse\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.4. Number Of Wavelength Bands\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11. Soil --> Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the soil hydrological model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river soil hydrology in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Number Of Ground Water Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers that may contain water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.6. Lateral Connectivity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe the lateral connectivity between tiles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.7. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Soil --> Hydrology --> Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nHow many soil layers may contain ground ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.2. Ice Storage Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of ice storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.3. Permafrost\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Soil --> Hydrology --> Drainage\nTODO\n13.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDifferent types of runoff represented by the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Soil --> Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of how heat treatment properties are defined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil heat scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.5. Heat Storage\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the method of heat storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.6. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe processes included in the treatment of soil heat",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of snow in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Number Of Snow Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Density\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow density",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Water Equivalent\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the snow water equivalent",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.6. Heat Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the heat content of snow",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.7. Temperature\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow temperature",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.8. Liquid Water Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow liquid water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.9. Snow Cover Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.10. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSnow related processes in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.11. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Snow --> Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\n*If prognostic, which variables is the snow albedo a function of?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vegetation in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of vegetation scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Dynamic Vegetation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there dynamic evolution of vegetation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.4. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vegetation tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.5. Vegetation Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nVegetation classification used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.6. Vegetation Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of vegetation types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.7. Biome Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of biome types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.8. Vegetation Time Variation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.9. Vegetation Map\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.10. Interception\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs vegetation interception of rainwater represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.11. Phenology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.12. Phenology Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.13. Leaf Area Index\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.14. Leaf Area Index Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.15. Biomass\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.16. Biomass Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.17. Biogeography\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.18. Biogeography Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.19. Stomatal Resistance\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.20. Stomatal Resistance Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.21. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the vegetation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of energy balance in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the energy balance tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. Number Of Surface Temperatures\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.4. Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of carbon cycle in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of carbon cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Anthropogenic Carbon\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDescribe the treatment of the anthropogenic carbon pool",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.5. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the carbon scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Carbon Cycle --> Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"20.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.3. Forest Stand Dynamics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of forest stand dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Carbon Cycle --> Vegetation --> Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\nTODO\n22.1. Maintenance Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for maintenance respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Growth Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for growth respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Carbon Cycle --> Vegetation --> Allocation\nTODO\n23.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the allocation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.2. Allocation Bins\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify distinct carbon bins used in allocation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Allocation Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how the fractions of allocation are calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Carbon Cycle --> Vegetation --> Phenology\nTODO\n24.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the phenology scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Carbon Cycle --> Vegetation --> Mortality\nTODO\n25.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the mortality scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Carbon Cycle --> Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Carbon Cycle --> Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Carbon Cycle --> Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs permafrost included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.2. Emitted Greenhouse Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the GHGs emitted",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.4. Impact On Soil Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the impact of permafrost on soil properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the nitrogen cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of nitrogen cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"29.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of river routing in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the river routing tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river routing scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Grid Inherited From Land Surface\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the grid inherited from land surface?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.5. Grid Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.6. Number Of Reservoirs\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of reservoirs",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.7. Water Re Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTODO",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.8. Coupled To Atmosphere\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.9. Coupled To Land\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the coupling between land and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.11. Basin Flow Direction Map\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of basin flow direction map is being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.12. Flooding\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the representation of flooding, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.13. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the river routing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. River Routing --> Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify how rivers are discharged to the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Quantities Transported\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nQuantities that are exchanged from the river routing to the ocean model component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lakes in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Coupling With Rivers\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre lakes coupled to the river routing model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of lake scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"32.4. Quantities Exchanged With Rivers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Vertical Grid\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vertical grid of lakes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the lake scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33. Lakes --> Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs lake ice included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.2. Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of lake albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.3. Dynamics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich dynamics of lakes are treated (horizontal, vertical, etc.)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.4. Dynamic Lake Extent\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a dynamic lake extent scheme included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.5. Endorheic Basins\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nBasins not flowing to ocean included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"34. Lakes --> Wetlands\nTODO\n34.1. Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of wetlands, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
AllenDowney/ModSimPy
|
notebooks/rabbits2.ipynb
|
mit
|
[
"Modeling and Simulation in Python\nRabbit example\nCopyright 2017 Allen Downey\nLicense: Creative Commons Attribution 4.0 International",
"%matplotlib inline\n\nfrom modsim import *",
"Rabbit Redux\nThis notebook starts with a version of the rabbit population growth model and walks through some steps for extending it.\nIn the original model, we treat all rabbits as adults; that is, we assume that a rabbit is able to breed in the season after it is born. In this notebook, we extend the model to include both juvenile and adult rabbits.\nAs an example, let's assume that rabbits take 3 seasons to mature. We could model that process explicitly by counting the number of rabbits that are 1, 2, or 3 seasons old. As an alternative, we can model just two stages, juvenile and adult. In the simpler model, the maturation rate is 1/3 of the juveniles per season.\nTo implement this model, make these changes in the System object:\n\n\nBefore you make any changes, run all cells and confirm you understand them.\n\n\nThen, add a second initial population: juvenile_pop0, with value 0.\n\n\nAdd an additional variable, mature_rate, with the value 0.33.",
"system = System(t0 = 0, \n t_end = 10,\n adult_pop0 = 10,\n birth_rate = 0.9,\n death_rate = 0.5)\n\nsystem",
"Now update run_simulation with the following changes:\n\n\nAdd a second TimeSeries, named juveniles, to keep track of the juvenile population, and initialize it with juvenile_pop0.\n\n\nInside the for loop, compute the number of juveniles that mature during each time step.\n\n\nAlso inside the for loop, add a line that stores the number of juveniles in the new TimeSeries. For simplicity, let's assume that only adult rabbits die.\n\n\nDuring each time step, subtract the number of maturations from the juvenile population and add it to the adult population.\n\n\nAfter the for loop, store the juveniles TimeSeries as a variable in System.",
"def run_simulation(system):\n \"\"\"Runs a proportional growth model.\n \n Adds TimeSeries to `system` as `results`.\n \n system: System object with t0, t_end, p0,\n birth_rate and death_rate\n \"\"\"\n adults = TimeSeries()\n adults[system.t0] = system.adult_pop0\n \n for t in linrange(system.t0, system.t_end):\n births = system.birth_rate * adults[t]\n deaths = system.death_rate * adults[t]\n \n adults[t+1] = adults[t] + births - deaths\n \n system.adults = adults",
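For reference, the extension described above can be sketched as follows. This is a hedged illustration, not the book's solution: plain Python dicts stand in for modsim's TimeSeries and System objects so the sketch runs standalone, and the names juvenile_pop0 and mature_rate come from the exercise text.

```python
# Hedged sketch of the two-stage model described in the exercise above.
# Plain dicts stand in for modsim's TimeSeries/System so the sketch is
# self-contained; juvenile_pop0 and mature_rate are the names the
# exercise introduces.

def run_simulation_extended(system):
    """Runs the juvenile/adult growth model.

    system: dict with t0, t_end, adult_pop0, juvenile_pop0,
            birth_rate, death_rate, and mature_rate
    """
    adults = {system['t0']: system['adult_pop0']}
    juveniles = {system['t0']: system['juvenile_pop0']}

    for t in range(system['t0'], system['t_end']):
        maturations = system['mature_rate'] * juveniles[t]
        births = system['birth_rate'] * adults[t]
        deaths = system['death_rate'] * adults[t]  # only adults die

        juveniles[t+1] = juveniles[t] + births - maturations
        adults[t+1] = adults[t] + maturations - deaths

    system['adults'] = adults
    system['juveniles'] = juveniles

system = dict(t0=0, t_end=10, adult_pop0=10, juvenile_pop0=0,
              birth_rate=0.9, death_rate=0.5, mature_rate=0.33)
run_simulation_extended(system)
```

Each season the juvenile series gains the births and loses the maturations, which flow into the adult series; deaths are applied only to adults, as the exercise suggests.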
"Test your changes in run_simulation:",
"run_simulation(system)\nsystem.adults",
"Next, update plot_results to plot both the adult and juvenile TimeSeries.",
"def plot_results(system, title=None):\n \"\"\"Plot the estimates and the model.\n \n system: System object with `results`\n \"\"\"\n newfig()\n plot(system.adults, 'bo-', label='adults')\n decorate(xlabel='Season', \n ylabel='Rabbit population',\n title=title)",
"And test your updated version of plot_results.",
"plot_results(system, title='Proportional growth model')",
"This notebook demonstrates the steps we recommend for starting your project:\n\n\nStart with one of the examples from the book, either by copying a notebook or pasting code into a new notebook. Get the code working before you make any changes.\n\n\nMake one small change, and run the code again.\n\n\nRepeat step 2 until you have a basic implementation of your model.\n\n\nIf you start with working code that you understand and make small changes, you can avoid spending a lot of time debugging.\nOnce you have a basic model working, you can think about what metrics to measure, what parameters to sweep, and how to use the model to predict, explain, or design.\nBonus question\nSuppose you only have room for 30 adult rabbits. Whenever the adult population exceeds 30, you take any excess rabbits to market (as pets for kind children, of course). Modify run_simulation to model this strategy. What effect does it have on the behavior of the system? You might have to run for more than 10 seasons to see what happens."
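One way to sketch the bonus question's market strategy is shown below. This is a hypothetical illustration, not the book's solution: plain dicts replace modsim's TimeSeries, and the `market` bookkeeping is a name invented here to track the rabbits removed each season.

```python
# Hedged sketch of the bonus-question strategy: after each step, any
# adults above the capacity of 30 are "taken to market". Plain dicts
# stand in for modsim's TimeSeries; `market` is a hypothetical name.

def run_simulation_capped(system, capacity=30):
    adults = {system['t0']: system['adult_pop0']}
    market = {}  # excess rabbits removed at each step

    for t in range(system['t0'], system['t_end']):
        births = system['birth_rate'] * adults[t]
        deaths = system['death_rate'] * adults[t]
        pop = adults[t] + births - deaths
        market[t+1] = max(0, pop - capacity)
        adults[t+1] = min(pop, capacity)

    system['adults'] = adults
    system['market'] = market

system = dict(t0=0, t_end=20, adult_pop0=10, birth_rate=0.9, death_rate=0.5)
run_simulation_capped(system)
```

With these parameters the population grows by a factor of 1.4 per season until it hits the cap, after which the adult population holds steady at 30 and a constant surplus goes to market each season.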
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
lwcook/horsetail-matching
|
notebooks/Gradients.ipynb
|
mit
|
[
"In this notebook we look at how to use the gradient of the horsetail matching metric to speed up optimizations (in terms of number of evaluations of the quantity of interest).",
"import numpy\nimport matplotlib.pyplot as plt\n\nfrom horsetailmatching import UniformParameter, IntervalParameter, HorsetailMatching\nfrom horsetailmatching.demoproblems import TP1, TP2",
"First we will look at the purely probabilistic case and a simple test problem. We set up the uncertain parameters and create the horsetail matching object as usual.",
"u1 = UniformParameter(lower_bound=-1, upper_bound=1)\nu2 = UniformParameter(lower_bound=-1, upper_bound=1)\ninput_uncertainties = [u1, u2]",
"Horsetail matching uses the same syntax for specifying a gradient as the scipy.minimize function: through the 'jac' argument. If 'jac' is True, then horsetail matching expects the qoi function to also return the jacobian of the qoi (the gradient with respect to the design variables). Alternatively, 'jac' can be a function that takes two inputs (the values of the design variables and uncertainties), and returns the gradient. The following code demonstrates these alternatives:",
"def fun_qjac(x, u):\n return TP1(x, u, jac=True) # Returns both qoi and its gradient\n\ndef fun_q(x, u): \n return TP1(x, u, jac=False) # Returns just the qoi\n\ndef fun_jac(x, u):\n return TP1(x, u, jac=True)[1] # Returns just the gradient\n\ntheHM = HorsetailMatching(fun_qjac, input_uncertainties, jac=True, method='kernel', kernel_bandwidth=0.001,\n samples_prob=2000, integration_points=numpy.linspace(-10, 100, 5000))\n\ntheHM = HorsetailMatching(fun_q, input_uncertainties, jac=fun_jac, method='empirical', samples_prob=2000)\n\nprint(theHM.evalMetric([1,1]))",
"The gradient can be evaluated using either the 'empirical' or 'kernel' based methods; however, the 'empirical' method can sometimes give discontinuous gradients, so in general the 'kernel' based method is preferred.\nNote that when we are using kernels to evaluate the horsetail plot (with the method 'kernel'), it is important to provide integration points that cover the range of values of q that designs visited in the optimization might reach. Integration points far beyond the range of samples are not evaluated, so this range can be made large without taking a computational penalty.\nAdditionally, here we specified the kernel_bandwidth, which is fixed throughout an optimization. If this is not specified, Scott's rule is used on the samples from the initial design to determine the bandwidth.\nNow we can use this in a gradient based optimizer:",
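For reference, Scott's rule mentioned above can be sketched for a one-dimensional sample as bandwidth = sample standard deviation × n^(-1/5). This is a generic illustration of the rule, not horsetail-matching's internal implementation:

```python
# Generic sketch of Scott's rule for a 1-D kernel density estimate:
# bandwidth = sample standard deviation * n**(-1/5).
# This illustrates the rule referred to above; it is not taken from
# horsetail-matching's source code.
import math

def scott_bandwidth(samples):
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    return math.sqrt(var) * n ** (-1.0 / 5.0)

samples = [0.1, 0.4, 0.35, 0.8, 0.6, 0.2, 0.55, 0.9]
bw = scott_bandwidth(samples)
```

The bandwidth shrinks slowly as the number of samples grows, which is why fixing it from the initial design's samples is a reasonable default for a whole optimization.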
"from scipy.optimize import minimize\n\nsolution = minimize(theHM.evalMetric, x0=[1,1], method='BFGS', jac=True)\nprint(solution)\n\n(x1, y1, t1), (x2, y2, t2), CDFs = theHM.getHorsetail()\n\nfor (x, y) in CDFs:\n plt.plot(x, y, c='grey', lw=0.5)\nplt.plot(x1, y1, 'r')\nplt.plot(t1, y1, 'k--')\nplt.xlim([-1, 5])\nplt.ylim([0, 1])\nplt.xlabel('Quantity of Interest')\nplt.show()",
"Once again the optimizer has found the optimum where the CDF is a step function, but this time in fewer iterations. \nWe can also use gradients for optimization under mixed uncertainties in exactly the same way. The example below performs the optimization of TP2 like in the mixed uncertainties tutorial, but this time using gradients. Note that we turn on the verbosity so we can see what the horsetail matching object is doing at each design point.",
"def fun_qjac(x, u):\n return TP2(x, u, jac=True) # Returns both qoi and its gradient\n\nu1 = UniformParameter()\nu2 = IntervalParameter()\n\ntheHM = HorsetailMatching(fun_qjac, u1, u2, jac=True, method='kernel',\n samples_prob=500, samples_int=50, integration_points=numpy.linspace(-20, 100, 3000),\n verbose=True)\n\nsolution = minimize(theHM.evalMetric, x0=[1, 1], method='BFGS', jac=True)\nprint(solution)",
"To plot the optimum solution...",
"upper, lower, CDFs = theHM.getHorsetail()\n(q1, h1, t1) = upper\n(q2, h2, t2) = lower\n\nfor CDF in CDFs:\n plt.plot(CDF[0], CDF[1], c='grey', lw=0.05)\nplt.plot(q1, h1, 'r')\nplt.plot(q2, h2, 'r')\nplt.plot(t1, h1, 'k--')\nplt.plot(t2, h2, 'k--')\nplt.xlim([0, 15])\nplt.ylim([0, 1])\nplt.xlabel('Quantity of Interest')\nplt.show()",
"We can see that using gradients we found the minimum after visiting about an order of magnitude fewer design points than were required without using gradients in the mixed uncertainties tutorial.\nThis concludes our illustration of using horsetail matching with gradients. In the next tutorial we illustrate how we can change the target to specify preferences about the desired behavior under uncertainty: http://nbviewer.jupyter.org/github/lwcook/horsetail-matching/blob/master/notebooks/Targets.ipynb\nFor other tutorials, please visit http://www-edc.eng.cam.ac.uk/aerotools/horsetailmatching/"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
samuelshaner/openmc
|
docs/source/pythonapi/examples/mgxs-part-i.ipynb
|
mit
|
[
"This IPython Notebook introduces the use of the openmc.mgxs module to calculate multi-group cross sections for an infinite homogeneous medium. In particular, this Notebook introduces the following features:\n\nGeneral equations for scalar-flux averaged multi-group cross sections\nCreation of multi-group cross sections for an infinite homogeneous medium\nUse of tally arithmetic to manipulate multi-group cross sections\n\nIntroduction to Multi-Group Cross Sections (MGXS)\nMany Monte Carlo particle transport codes, including OpenMC, use continuous-energy nuclear cross section data. However, most deterministic neutron transport codes use multi-group cross sections defined over discretized energy bins or energy groups. An example of U-235's continuous-energy fission cross section along with a 16-group cross section computed for a light water reactor spectrum is displayed below.",
"from IPython.display import Image\nImage(filename='images/mgxs.png', width=350)",
"A variety of tools employing different methodologies have been developed over the years to compute multi-group cross sections for certain applications, including NJOY (LANL), MC$^2$-3 (ANL), and Serpent (VTT). The openmc.mgxs Python module is designed to leverage OpenMC's tally system to calculate multi-group cross sections with arbitrary energy discretizations for fine-mesh heterogeneous deterministic neutron transport applications.\nBefore proceeding to illustrate how one may use the openmc.mgxs module, it is worthwhile to define the general equations used to calculate multi-group cross sections. This is only intended as a brief overview of the methodology used by openmc.mgxs - we refer the interested reader to the large body of literature on the subject for a more comprehensive understanding of this complex topic.\nIntroductory Notation\nThe continuous real-valued microscopic cross section may be denoted $\\sigma_{n,x}(\\mathbf{r}, E)$ for position vector $\\mathbf{r}$, energy $E$, nuclide $n$ and interaction type $x$. Similarly, the scalar neutron flux may be denoted by $\\Phi(\\mathbf{r},E)$ for position $\\mathbf{r}$ and energy $E$. Note: Although nuclear cross sections are dependent on the temperature $T$ of the interacting medium, the temperature variable is neglected here for brevity.\nSpatial and Energy Discretization\nThe energy domain for critical systems such as thermal reactors spans more than 10 orders of magnitude of neutron energies from 10$^{-5}$ - 10$^7$ eV. The multi-group approximation divides this energy range into one or more energy groups. In particular, for $G$ total groups, we denote an energy group index $g$ such that $g \\in {1, 2, ..., G}$. The energy group indices are defined such that the smaller the group index, the higher the energy, and vice versa. 
The integration over neutron energies across a discrete energy group is commonly referred to as energy condensation.\nMulti-group cross sections are computed for discretized spatial zones in the geometry of interest. The spatial zones may be defined on a structured and regular fuel assembly or pin cell mesh, an arbitrary unstructured mesh or the constructive solid geometry used by OpenMC. For a geometry with $K$ distinct spatial zones, we designate each spatial zone an index $k$ such that $k \\in {1, 2, ..., K}$. The volume of each spatial zone is denoted by $V_{k}$. The integration over discrete spatial zones is commonly referred to as spatial homogenization.\nGeneral Scalar-Flux Weighted MGXS\nThe multi-group cross sections computed by openmc.mgxs are defined as a scalar flux-weighted average of the microscopic cross sections across each discrete energy group. This formulation is employed in order to preserve the reaction rates within each energy group and spatial zone. In particular, spatial homogenization and energy condensation are used to compute the general multi-group cross section $\\sigma_{n,x,k,g}$ as follows:\n$$\\sigma_{n,x,k,g} = \\frac{\\int_{E_{g}}^{E_{g-1}}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\sigma_{n,x}(\\mathbf{r},E')\\Phi(\\mathbf{r},E')}{\\int_{E_{g}}^{E_{g-1}}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\Phi(\\mathbf{r},E')}$$\nThis scalar flux-weighted average microscopic cross section is computed by openmc.mgxs for most multi-group cross sections, including total, absorption, and fission reaction types. These double integrals are stochastically computed with OpenMC's tally system - in particular, filters on the energy range and spatial zone (material, cell or universe) define the bounds of integration for both numerator and denominator.\nMulti-Group Scattering Matrices\nThe general multi-group cross section $\\sigma_{n,x,k,g}$ is a vector of $G$ values for each energy group $g$. 
The equation presented above only discretizes the energy of the incoming neutron and neglects the outgoing energy of the neutron (if any). Hence, this formulation must be extended to account for the outgoing energy of neutrons in the discretized scattering matrix cross section used by deterministic neutron transport codes. \nWe denote the incoming and outgoing neutron energy groups as $g$ and $g'$ for the microscopic scattering matrix cross section $\\sigma_{n,s}(\\mathbf{r},E)$. As before, spatial homogenization and energy condensation are used to find the multi-group scattering matrix cross section $\\sigma_{n,s,k,g \\to g'}$ as follows:\n$$\\sigma_{n,s,k,g\\rightarrow g'} = \\frac{\\int_{E_{g'}}^{E_{g'-1}}\\mathrm{d}E''\\int_{E_{g}}^{E_{g-1}}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\sigma_{n,s}(\\mathbf{r},E'\\rightarrow E'')\\Phi(\\mathbf{r},E')}{\\int_{E_{g}}^{E_{g-1}}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\Phi(\\mathbf{r},E')}$$\nThis scalar flux-weighted multi-group microscopic scattering matrix is computed using OpenMC tallies with both energy in and energy out filters.\nMulti-Group Fission Spectrum\nThe energy spectrum of neutrons emitted from fission is denoted by $\\chi_{n}(\\mathbf{r},E' \\rightarrow E'')$ for incoming and outgoing energies $E'$ and $E''$, respectively. Unlike the multi-group cross sections $\\sigma_{n,x,k,g}$ considered up to this point, the fission spectrum is a probability distribution and must sum to unity. The outgoing energy is typically much less dependent on the incoming energy for fission than for scattering interactions. As a result, it is common practice to integrate over the incoming neutron energy when computing the multi-group fission spectrum. 
The fission spectrum may be simplified as $\\chi_{n}(\\mathbf{r},E)$ with outgoing energy $E$.\nUnlike the multi-group cross sections defined up to this point, the multi-group fission spectrum is weighted by the fission production rate rather than the scalar flux. This formulation is intended to preserve the total fission production rate in the multi-group deterministic calculation. In order to mathematically define the multi-group fission spectrum, we denote the microscopic fission cross section as $\\sigma_{n,f}(\\mathbf{r},E)$ and the average number of neutrons emitted from fission interactions with nuclide $n$ as $\\nu_{n}(\\mathbf{r},E)$. The multi-group fission spectrum $\\chi_{n,k,g}$ is then the probability of fission neutrons emitted into energy group $g$. \nSimilar to before, spatial homogenization and energy condensation are used to find the multi-group fission spectrum $\\chi_{n,k,g}$ as follows:\n$$\\chi_{n,k,g'} = \\frac{\\int_{E_{g'}}^{E_{g'-1}}\\mathrm{d}E''\\int_{0}^{\\infty}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\chi_{n}(\\mathbf{r},E'\\rightarrow E'')\\nu_{n}(\\mathbf{r},E')\\sigma_{n,f}(\\mathbf{r},E')\\Phi(\\mathbf{r},E')}{\\int_{0}^{\\infty}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\nu_{n}(\\mathbf{r},E')\\sigma_{n,f}(\\mathbf{r},E')\\Phi(\\mathbf{r},E')}$$\nThe fission production-weighted multi-group fission spectrum is computed using OpenMC tallies with both energy in and energy out filters.\nThis concludes our brief overview on the methodology to compute multi-group cross sections. The following sections detail more concretely how users may employ the openmc.mgxs module to power simulation workflows requiring multi-group cross sections for downstream deterministic calculations.\nGenerate Input Files",
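As a quick sanity check on the condensation formula above, here is a toy numeric illustration (made-up numbers, plain Python rather than OpenMC) showing that the flux-weighted coarse-group cross section preserves the reaction rate:

```python
# Toy illustration of flux-weighted energy condensation (made-up
# numbers, not OpenMC output). Collapsing fine groups g into one coarse
# group preserves the reaction rate: sum(sigma_g * phi_g).

sigma_fine = [1.2, 0.8, 2.5, 10.0]  # fine-group cross sections (barns)
phi_fine = [4.0, 3.0, 2.0, 1.0]     # fine-group scalar fluxes

# Coarse-group cross section = reaction rate / flux
reaction_rate = sum(s * p for s, p in zip(sigma_fine, phi_fine))
flux = sum(phi_fine)
sigma_coarse = reaction_rate / flux

# Check that the condensed value reproduces the total reaction rate
assert abs(sigma_coarse * flux - reaction_rate) < 1e-12
```

With these numbers the condensed value works out to 22.2 / 10 = 2.22 barns, a flux-weighted average that sits closer to the cross sections in the high-flux groups.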
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport openmc\nimport openmc.mgxs as mgxs",
"First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.",
"# Instantiate some Nuclides\nh1 = openmc.Nuclide('H1')\no16 = openmc.Nuclide('O16')\nu235 = openmc.Nuclide('U235')\nu238 = openmc.Nuclide('U238')\nzr90 = openmc.Nuclide('Zr90')",
"With the nuclides we defined, we will now create a material for the homogeneous medium.",
"# Instantiate a Material and register the Nuclides\ninf_medium = openmc.Material(name='moderator')\ninf_medium.set_density('g/cc', 5.)\ninf_medium.add_nuclide(h1, 0.028999667)\ninf_medium.add_nuclide(o16, 0.01450188)\ninf_medium.add_nuclide(u235, 0.000114142)\ninf_medium.add_nuclide(u238, 0.006886019)\ninf_medium.add_nuclide(zr90, 0.002116053)",
"With our material, we can now create a Materials object that can be exported to an actual XML file.",
"# Instantiate a Materials collection and export to XML\nmaterials_file = openmc.Materials([inf_medium])\nmaterials_file.export_to_xml()",
"Now let's move on to the geometry. This problem will be a simple square cell with reflective boundary conditions to simulate an infinite homogeneous medium. The first step is to create the outer bounding surfaces of the problem.",
"# Instantiate boundary Planes\nmin_x = openmc.XPlane(boundary_type='reflective', x0=-0.63)\nmax_x = openmc.XPlane(boundary_type='reflective', x0=0.63)\nmin_y = openmc.YPlane(boundary_type='reflective', y0=-0.63)\nmax_y = openmc.YPlane(boundary_type='reflective', y0=0.63)",
"With the surfaces defined, we can now create a cell that is defined by intersections of half-spaces created by the surfaces.",
"# Instantiate a Cell\ncell = openmc.Cell(cell_id=1, name='cell')\n\n# Register bounding Surfaces with the Cell\ncell.region = +min_x & -max_x & +min_y & -max_y\n\n# Fill the Cell with the Material\ncell.fill = inf_medium",
"OpenMC requires that there is a \"root\" universe. Let us create a root universe and add our square cell to it.",
"# Instantiate Universe\nroot_universe = openmc.Universe(universe_id=0, name='root universe')\nroot_universe.add_cell(cell)",
"We now must create a geometry that is assigned a root universe and export it to XML.",
"# Create Geometry and set root Universe\nopenmc_geometry = openmc.Geometry()\nopenmc_geometry.root_universe = root_universe\n\n# Export to \"geometry.xml\"\nopenmc_geometry.export_to_xml()",
"Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.",
"# OpenMC simulation parameters\nbatches = 50\ninactive = 10\nparticles = 2500\n\n# Instantiate a Settings object\nsettings_file = openmc.Settings()\nsettings_file.batches = batches\nsettings_file.inactive = inactive\nsettings_file.particles = particles\nsettings_file.output = {'tallies': True}\n\n# Create an initial uniform spatial source distribution over fissionable zones\nbounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]\nuniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)\nsettings_file.source = openmc.source.Source(space=uniform_dist)\n\n# Export to \"settings.xml\"\nsettings_file.export_to_xml()",
"Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.",
"# Instantiate a 2-group EnergyGroups object\ngroups = mgxs.EnergyGroups()\ngroups.group_edges = np.array([0., 0.625, 20.0e6])",
"We can now use the EnergyGroups object, along with our previously created materials and geometry, to instantiate some MGXS objects from the openmc.mgxs module. In particular, the following are subclasses of the generic and abstract MGXS class:\n\nTotalXS\nTransportXS\nNuTransportXS\nAbsorptionXS\nCaptureXS\nFissionXS\nNuFissionXS\nKappaFissionXS\nScatterXS\nNuScatterXS\nScatterMatrixXS\nNuScatterMatrixXS\nChi\nChiPrompt\nInverseVelocity\nPromptNuFissionXS\n\nThese classes provide us with an interface to generate the tally inputs as well as perform post-processing of OpenMC's tally data to compute the respective multi-group cross sections. In this case, let's create the multi-group total, absorption and scattering cross sections with our 2-group structure.",
"# Instantiate a few different sections\ntotal = mgxs.TotalXS(domain=cell, groups=groups)\nabsorption = mgxs.AbsorptionXS(domain=cell, groups=groups)\nscattering = mgxs.ScatterXS(domain=cell, groups=groups)",
"Each multi-group cross section object stores its tallies in a Python dictionary called tallies. We can inspect the tallies in the dictionary for our Absorption object as follows.",
"absorption.tallies",
"The Absorption object includes tracklength tallies for the 'absorption' and 'flux' scores in the 2-group structure in cell 1. Now that each MGXS object contains the tallies that it needs, we must add these tallies to a Tallies object to generate the \"tallies.xml\" input file for OpenMC.",
"# Instantiate an empty Tallies object\ntallies_file = openmc.Tallies()\n\n# Add total tallies to the tallies file\ntallies_file += total.tallies.values()\n\n# Add absorption tallies to the tallies file\ntallies_file += absorption.tallies.values()\n\n# Add scattering tallies to the tallies file\ntallies_file += scattering.tallies.values()\n\n# Export to \"tallies.xml\"\ntallies_file.export_to_xml()",
"Now we have a complete set of inputs, so we can go ahead and run our simulation.",
"# Run OpenMC\nopenmc.run()",
"Tally Data Processing\nOur simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.",
"# Load the last statepoint file\nsp = openmc.StatePoint('statepoint.50.h5')",
"In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. By default, a Summary object is automatically linked when a StatePoint is loaded. This is necessary for the openmc.mgxs module to properly process the tally data.\nThe statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.",
"# Load the tallies from the statepoint into each MGXS object\ntotal.load_from_statepoint(sp)\nabsorption.load_from_statepoint(sp)\nscattering.load_from_statepoint(sp)",
"Voila! Our multi-group cross sections are now ready to rock 'n roll!\nExtracting and Storing MGXS Data\nLet's first inspect our total cross section by printing it to the screen.",
"total.print_xs()",
"Since the openmc.mgxs module uses tally arithmetic under-the-hood, the cross section is stored as a \"derived\" Tally object. This means that it can be queried and manipulated using all of the same methods supported for the Tally class in the OpenMC Python API. For example, we can construct a Pandas DataFrame of the multi-group cross section data.",
"df = scattering.get_pandas_dataframe()\ndf.head(10)",
"Each multi-group cross section object can be easily exported to a variety of file formats, including CSV, Excel, and LaTeX for storage or data processing.",
"absorption.export_xs_data(filename='absorption-xs', format='excel')",
"The following code snippet shows how to export all three MGXS to the same HDF5 binary data store.",
"total.build_hdf5_store(filename='mgxs', append=True)\nabsorption.build_hdf5_store(filename='mgxs', append=True)\nscattering.build_hdf5_store(filename='mgxs', append=True)",
"Comparing MGXS with Tally Arithmetic\nFinally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each MGXS object includes an xs_tally attribute which is a \"derived\" Tally based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally arithmetic to confirm that the TotalXS is equal to the sum of the AbsorptionXS and ScatterXS objects.",
"# Use tally arithmetic to compute the difference between the total, absorption and scattering\ndifference = total.xs_tally - absorption.xs_tally - scattering.xs_tally\n\n# The difference is a derived tally which can generate Pandas DataFrames for inspection\ndifference.get_pandas_dataframe()",
"Similarly, we can use tally arithmetic to compute the ratio of AbsorptionXS and ScatterXS to the TotalXS.",
"# Use tally arithmetic to compute the absorption-to-total MGXS ratio\nabsorption_to_total = absorption.xs_tally / total.xs_tally\n\n# The absorption-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection\nabsorption_to_total.get_pandas_dataframe()\n\n# Use tally arithmetic to compute the scattering-to-total MGXS ratio\nscattering_to_total = scattering.xs_tally / total.xs_tally\n\n# The scattering-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection\nscattering_to_total.get_pandas_dataframe()",
"Lastly, we sum the derived scatter-to-total and absorption-to-total ratios to confirm that they sum to unity.",
"# Use tally arithmetic to ensure that the absorption- and scattering-to-total MGXS ratios sum to unity\nsum_ratio = absorption_to_total + scattering_to_total\n\n# The scattering-to-total ratio is a derived tally which can generate Pandas DataFrames for inspection\nsum_ratio.get_pandas_dataframe()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
spencer2211/deep-learning
|
autoencoder/Simple_Autoencoder.ipynb
|
mit
|
[
"A Simple Autoencoder\nWe'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.\n\nIn this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.",
"%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)",
"Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.",
"img = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')\n\nprint(mnist.train.images.shape[1])",
"We'll train an autoencoder with these images by flattening them into 784-length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.\n\n\nExercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784-length vectors. The targets are the same as the inputs. There should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss; there is a convenient TensorFlow function for this, tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.",
"# Size of the encoding layer (the hidden layer)\nencoding_dim = 32 # feel free to change this value\n\nimage_size = mnist.train.images.shape[1]\n\n# Input and target placeholders\ninputs_ = tf.placeholder(tf.float32, shape=[None, image_size], name=\"inputs\")\ntargets_ = tf.placeholder(tf.float32, shape=[None, image_size], name=\"targets\")\n\n# Output of hidden layer, single fully connected layer here with ReLU activation\nencoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)\n\n# Output layer logits, fully connected layer with no activation\nlogits = tf.layers.dense(encoded, image_size, activation=None)\n# Sigmoid output from logits\ndecoded = tf.nn.sigmoid(logits, name=\"output\")\n\n# Sigmoid cross-entropy loss\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\n# Mean of the loss\ncost = tf.reduce_mean(loss)\n\n# Adam optimizer\nopt = tf.train.AdamOptimizer(0.001).minimize(cost)",
"Training",
"# Create the session\nsess = tf.Session()",
"Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.\nCalling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here; we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).",
"epochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n feed = {inputs_: batch[0], targets_: batch[0]}\n batch_cost, _ = sess.run([cost, opt], feed_dict=feed)\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))",
"Checking out the results\nBelow I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.",
"fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)\n\nsess.close()",
"Up Next\nWe're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers.\nIn practice, autoencoders aren't actually better at compression compared to typical methods like JPEGs and MP3s. But, they are being used for noise reduction, which you'll also build."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
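tf.nn.sigmoid_cross_entropy_with_logits, used in the autoencoder above, avoids overflow by working on logits directly. A plain-Python sketch of the stable formula (an illustration, not the TensorFlow source):

```python
import math

def sigmoid_xent_with_logits(x, z):
    # Stable rewrite of -z*log(sigmoid(x)) - (1-z)*log(1 - sigmoid(x)):
    #   max(x, 0) - x*z + log(1 + exp(-|x|))
    return max(x, 0.0) - x * z + math.log1p(math.exp(-abs(x)))

def naive_xent(x, z):
    # Direct formula; the sigmoid saturates for large |x|
    p = 1.0 / (1.0 + math.exp(-x))
    return -z * math.log(p) - (1 - z) * math.log(1 - p)

stable = sigmoid_xent_with_logits(2.0, 1.0)
naive = naive_xent(2.0, 1.0)
```

The stable form also evaluates cleanly at x = 1000, where the naive sigmoid rounds to exactly 1 and log(1 - p) blows up.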
franzpl/StableGrid
|
jupyter_notebooks/clock_frequency_accuracy.ipynb
|
mit
|
[
"On the influence of temperature in 16 MHz clock frequency measurement\nThis notebook discusses the accuracy and stability of two types of clock generator (ceramic resonator & quartz) under the influence of temperature for high-precision frequency measurements.",
"import numpy as np\nimport matplotlib.pyplot as plt",
"Ceramic Resonator",
"data_ceramic = np.genfromtxt('temp_and_freq_error_data_ceramic')\n\nfreq_error_ceramic = data_ceramic[:3600, 0]\ntemp_ceramic = data_ceramic[:, 1]\n\nt = np.arange(0, len(freq_error_ceramic))\n\nfig, ax1 = plt.subplots()\nplt.title(\"Clock frequency accuracy at 26 °C (ceramic resonator)\")\nplt.grid()\nax1.plot(t / 60, freq_error_ceramic, color='b')\nax1.set_ylabel(\"Frequency Error / Hz\", color='b')\nax1.set_xlabel(\"t / min\")\nax1.tick_params('y', colors='b')\n\nplt.ylim([7000,9000])\nax2 = ax1.twinx()\nax2.plot(t / 60, (freq_error_ceramic / 16000000) * 10**6, color='r')\nax2.set_ylabel('PPM', color='r')\nax2.tick_params('y', colors='r')\nfig.tight_layout()\n\n\nplt.show()",
"Standard Deviation",
"std_dev_ceramic = np.std(freq_error_ceramic)",
"Average",
"average_ceramic = np.average(freq_error_ceramic)\n\nppm_average_ceramic = (average_ceramic / 16000000) * 10**6\n\nppm_average_ceramic",
"The frequency tolerance of the ceramic resonator used on the Arduino UNO is approx. 500 ppm. Consequently, this means for mains frequency measurements:",
"(50 * 500 * 10**-6) * 1000 # (mains frequency * ppm) * 1000 in mHz\n\nnp.max(freq_error_ceramic)\n\nnp.min(freq_error_ceramic)",
"The error for mains frequency measurements is 25 mHz.\nCeramic resonator behaviour under the influence of increasing temperature",
"data_temp_ceramic = np.genfromtxt('ceramic_behaviour_increasing_temperature')\n\nfreq_error_temp_ceramic = data_temp_ceramic[:, 0]\nincreasing_temp_ceramic = data_temp_ceramic[:, 1]\n\nfig, ax3 = plt.subplots()\nplt.title(\"Clock frequency stability (ceramic resonator)\")\nplt.grid()\nax3.plot(increasing_temp_ceramic, freq_error_temp_ceramic, color='b')\nax3.set_ylabel(\"Frequency Error / Hz\", color='b')\nax3.set_xlabel(\"T / °C\")\nax3.tick_params('y', colors='b')\nax4 = ax3.twinx()\nax4.plot(increasing_temp_ceramic, (freq_error_temp_ceramic / 16000000) * 10**6, color='r')\nax4.set_ylabel('PPM', color='r')\nax4.tick_params('y', colors='r')\nfig.tight_layout()\nplt.show()",
"Quartz",
"data_quartz = np.genfromtxt('temp_and_freq_error_data_quartz')\n\nfreq_error_quartz = data_quartz[:3600, 0]\ntemp_quartz = data_quartz[:, 1]\n\nt = np.arange(0, len(freq_error_quartz))\n\nfig, ax5 = plt.subplots()\nplt.title(\"Clock frequency accuracy at 26 °C (quartz)\")\nplt.grid()\nax5.plot(t / 60, freq_error_quartz, color='b')\nax5.set_ylabel(\"Frequency Error / Hz\", color='b')\nax5.tick_params('y', colors='b')\nax5.set_xlabel(\"t / min\")\nax6 = ax5.twinx()\nax6.plot(t / 60, (freq_error_quartz / 16000000) * 10**6, color='r')\nax6.set_ylabel('PPM', color='r')\nax6.tick_params('y', colors='r')\nfig.tight_layout()\nplt.show()",
"Standard Deviation",
"std_dev_quartz = np.std(freq_error_quartz)",
"Average",
"average_quartz = np.average(freq_error_quartz)\n\nppm_average_quartz = (average_quartz / 16000000) * 10**6\n\nppm_average_quartz\n\naverage_quartz\n\nnp.max(freq_error_quartz)\n\nnp.min(freq_error_quartz)",
"Quartz behaviour under the influence of increasing temperature",
"data_temp_quartz = np.genfromtxt('quartz_behaviour_increasing_temperature.txt')\n\nfreq_error_temp_quartz = data_temp_quartz[:, 0]\nincreasing_temp_quartz = data_temp_quartz[:, 1]\n\nfig, ax7 = plt.subplots()\nplt.title(\"Clock frequency stability (quartz)\")\nplt.grid()\nax7.plot(increasing_temp_quartz, freq_error_temp_quartz, color='b')\nax7.set_ylabel(\"Frequency Error / Hz\", color='b')\nax7.tick_params('y', colors='b')\nax7.set_xlabel(\"T / °C\")\nax8 = ax7.twinx()\nax8.plot(increasing_temp_quartz, (freq_error_temp_quartz / 16000000) * 10**6, color='r')\nax8.set_ylabel('PPM', color='r')\nax8.tick_params('y', colors='r')\nfig.tight_layout()\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
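The ppm arithmetic repeated across the cells above follows one pattern; the helper below sketches it (the 16 MHz nominal frequency and the 500 ppm ceramic tolerance are the notebook's own figures):

```python
NOMINAL_HZ = 16_000_000  # Arduino UNO clock

def hz_to_ppm(freq_error_hz, nominal_hz=NOMINAL_HZ):
    # relative frequency error in parts per million
    return freq_error_hz / nominal_hz * 1e6

def mains_error_mhz(mains_hz, tolerance_ppm):
    # worst-case mains-frequency error in mHz for a given clock tolerance
    return mains_hz * tolerance_ppm * 1e-6 * 1000

print(hz_to_ppm(8000))          # a ceramic-resonator-sized error
print(mains_error_mhz(50, 500)) # the notebook's 25 mHz figure
```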
Danghor/Formal-Languages
|
ANTLR4-Python/Interpreter/Interpreter.ipynb
|
gpl-2.0
|
[
"from IPython.core.display import HTML\nwith open (\"../../style.css\", \"r\") as file:\n css = file.read()\nHTML(css)",
"An Interpreter for a Simple Programming Language\nIn this notebook we develop an interpreter for a small programming language.\nThe grammar for this language is stored in the file Pure.g4.",
"!cat -n Pure.g4",
"The grammar shown above only contains skip actions. The corresponding grammar that is enriched with actions is stored in the file Simple.g4.\nAn example program that conforms to this grammar is stored in the file sum.sl.",
"!cat sum.sl",
"The file Simple.g4 contains a parser for the language described by the grammar Pure.g4. This parser returns\nan abstract syntax tree. This tree is represented as a nested tuple.",
"!cat -n Simple.g4",
"The parser shown above will transform the program sum.sl into the nested tuple stored in the file sum.ast.",
"!cat sum.ast\n\n!antlr4 -Dlanguage=Python3 Simple.g4\n\nfrom SimpleLexer import SimpleLexer\nfrom SimpleParser import SimpleParser\nimport antlr4\n\n%run ../AST-2-Dot.ipynb",
"The function main takes one parameter file. This parameter is a string specifying a program file.\nThe function reads the program contained in this file and executes it.",
"def main(file):\n with open(file, 'r') as handle:\n program_text = handle.read()\n input_stream = antlr4.InputStream(program_text)\n lexer = SimpleLexer(input_stream)\n token_stream = antlr4.CommonTokenStream(lexer)\n parser = SimpleParser(token_stream)\n result = parser.program()\n Statements = result.stmnt_list\n ast = tuple2dot(Statements)\n print(Statements)\n display(ast)\n ast.render('ast', view=True)\n execute_tuple(Statements)",
"The function execute_tuple takes two arguments:\n- Statement_List is a list of statements,\n- Values is a dictionary assigning integer values to variable names.\nThe function executes the statements in Statement_List. If an assignment statement is executed,\nthe dictionary Values is updated.",
"def execute_tuple(Statement_List, Values={}):\n for stmnt in Statement_List:\n execute(stmnt, Values)",
"The function execute takes two arguments:\n- stmnt is a statement,\n- Values is a dictionary assigning integer values to variable names.\nThe function executes the statement stmnt. If an assignment statement is executed,\nthe dictionary Values is updated.",
"# quick demo of the extended unpacking (a, b, *rest) used in execute below\nL = [1,2,3,4,5]\na, b, *R = L\na, b, R\n\ndef execute(stmnt, Values):\n op = stmnt[0]\n if op == 'program':\n pass\n elif op == ':=':\n _, var, value = stmnt\n Values[var] = evaluate(value, Values)\n elif op == 'read':\n _, var = stmnt\n Values[var] = int(input())\n elif op == 'print':\n _, expr = stmnt\n print(evaluate(expr, Values))\n elif op == 'if':\n _, test, *SL = stmnt\n if evaluate(test, Values):\n execute_tuple(SL, Values)\n elif op == 'while':\n _, test, *SL = stmnt\n while evaluate(test, Values):\n execute_tuple(SL, Values)\n else:\n assert False, f'{stmnt} unexpected'",
"The function evaluate takes two arguments:\n- expr is a logical expression or an arithmetic expression,\n- Values is a dictionary assigning integer values to variable names.\nThe function evaluates the given expression and returns this value.",
"def evaluate(expr, Values):\n if isinstance(expr, int):\n return expr\n if isinstance(expr, str):\n return Values[expr] \n op = expr[0]\n if op == '==':\n _, lhs, rhs = expr\n return evaluate(lhs, Values) == evaluate(rhs, Values)\n if op == '<':\n _, lhs, rhs = expr\n return evaluate(lhs, Values) < evaluate(rhs, Values)\n if op == '+':\n _, lhs, rhs = expr\n return evaluate(lhs, Values) + evaluate(rhs, Values)\n if op == '-':\n _, lhs, rhs = expr\n return evaluate(lhs, Values) - evaluate(rhs, Values)\n if op == '*':\n _, lhs, rhs = expr\n return evaluate(lhs, Values) * evaluate(rhs, Values)\n if op == '/':\n _, lhs, rhs = expr\n return evaluate(lhs, Values) / evaluate(rhs, Values)\n assert False, f'{expr} unexpected'\n\n!cat sum.sl\n\nmain('sum.sl')\n\n!cat factorial.sl\n\nmain('factorial.sl')\n\n!rm *.py *.tokens *.interp\n!rm -r __pycache__/\n!rm *.pdf\n\n!ls"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
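The evaluate function in the interpreter above dispatches on the first element of each nested tuple. A standalone sketch of the same idea, independent of the ANTLR-generated parser (the example AST is made up):

```python
def evaluate(expr, env):
    # Integers are literals, strings are variable names,
    # tuples are (operator, lhs, rhs) nodes.
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, lhs, rhs = expr
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a / b,
           '<': lambda a, b: a < b,
           '==': lambda a, b: a == b}
    return ops[op](evaluate(lhs, env), evaluate(rhs, env))

result = evaluate(('+', ('*', 'x', 3), 4), {'x': 2})  # x*3 + 4
```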
napsternxg/DataMiningPython
|
Lecture Notebooks/Getting Started.ipynb
|
gpl-3.0
|
[
"Getting started\n\nPython basics\nLoading data\nPlotting data\n\nPython basics\nFor a data scientist, learning the basic data structures available in Python is very important for efficient use of the tools in the Python ecosystem.\nVariables\nVariables can be defined using any alphanumeric string which starts with a letter or \"_\". It can include \"_\" in it.",
"a = 10\nb = 20\nc = \"Hello\"\n\nprint a, b, c",
"Lists\nA list is a data structure which can hold items of different types. Think of a shopping list. Items in the list can be accessed using a zero-based index. You will use these when you want to add more data and access it based on its position in the list.",
"list_items = [\"milk\", \"cereal\", \"banana\", 22.5, [1,2,3]] ## A list can contain another list and items of different types\nprint list_items\nprint \"3rd item in the list: \", list_items[2] # Zero based index starts from 0 so 3rd item will have index 2",
"Sets\nLike a list, but stores only unique items which are hashable (think basic data types like strings and ints, not lists; explained later). Super useful for checking whether an item is already in the set. Items are not indexed, so they can only be added or removed. You will use these when you want to keep track of unique items, e.g. feature names in the data.",
"set_items = set([1,2,3, 1])\nprint set_items\nprint \"Is 1 in set_items: \", 1 in set_items\nprint \"Is 10 in set_items: \", 10 in set_items",
"Dictionaries\nLike sets, but can also map a value to each unique item. Essentially, it stores key-value pairs, which are useful for fast lookup of items. Think of a telephone directory or shopping catalogue. Keys should be of the same type as items in sets, but values can be anything. You will use these when you want to keep unique items and their related values, e.g. words in the data and the number of times they occur.",
"item_details = {\n \"milk\": {\n \"brand\": \"Amul\",\n \"quantity\": 2.5,\n \"cost\": 10\n },\n \"chocolate\": {\n \"brand\": \"Cadbury\",\n \"quantity\": 1,\n \"cost\": 5\n },\n}\n\nprint item_details\nprint \"What is the brand of milk: \", item_details[\"milk\"][\"brand\"]\nprint \"What is the cost of chocolate: \", item_details[\"chocolate\"][\"cost\"]",
"Functions\nUsing a function is handy in cases when you need to repeat something over and over again. A function can take arguments and return some values.\nE.g. if you want to fetch tweets using different queries then you can define a function which takes the query and gives you as output the tweets for that query. You can then just call the function with different queries rather than rewriting the whole code for fetching them.",
"def get_items_from_file(filename):\n data = []\n with open(filename) as fp:\n for line in fp:\n line = line.strip().split(\" \")\n data.append(line)\n return data\n\nprint \"Data in file data/temp1.txt\"\nprint get_items_from_file(\"../data/temp1.txt\")\n\nprint \"Data in file data/temp2.txt\"\nprint get_items_from_file(\"../data/temp2.txt\")",
"Loading Data",
"from scipy.io import arff\n\ndata, meta = arff.loadarff(\"../data/iris.arff\")\n\ndata.shape, meta\n\ndata[0]",
"Pandas\nPandas is a wonderful library for working with tabular data in python. It can read csv files easily and represents them as dataframes. Think of it like excel but faster and without a GUI.",
"import pandas as pd\n\ndf_iris = pd.DataFrame(data, columns=meta.names())\ndf_iris.head()\n\nprint \"The shape of iris data is: \", df_iris.shape\n\nprint \"Show how many instances are of each class: \"\ndf_iris[\"class\"].value_counts()\n\ndf_iris[\"sepallength\"].hist(bins=10)",
"Filtering data \nFiltering parts of the data in pandas is really easy. \nIf you want to filter data for editing it then you need to make a copy of the filtered data.",
"print \"Show data containing with petalwidth > 2.0\"\ndf_iris[df_iris[\"petalwidth\"] > 2.0]",
"Titanic data\n```\nVARIABLE DESCRIPTIONS:\nsurvival Survival\n (0 = No; 1 = Yes)\npclass Passenger Class\n (1 = 1st; 2 = 2nd; 3 = 3rd)\nname Name\nsex Sex\nage Age\nsibsp Number of Siblings/Spouses Aboard\nparch Number of Parents/Children Aboard\nticket Ticket Number\nfare Passenger Fare\ncabin Cabin\nembarked Port of Embarkation\n (C = Cherbourg; Q = Queenstown; S = Southampton)\nSPECIAL NOTES:\nPclass is a proxy for socio-economic status (SES)\n 1st ~ Upper; 2nd ~ Middle; 3rd ~ Lower\nAge is in Years; Fractional if Age less than One (1)\n If the Age is Estimated, it is in the form xx.5\nWith respect to the family relation variables (i.e. sibsp and parch)\nsome relations were ignored. The following are the definitions used\nfor sibsp and parch.\nSibling: Brother, Sister, Stepbrother, or Stepsister of Passenger Aboard Titanic\nSpouse: Husband or Wife of Passenger Aboard Titanic (Mistresses and Fiances Ignored)\nParent: Mother or Father of Passenger Aboard Titanic\nChild: Son, Daughter, Stepson, or Stepdaughter of Passenger Aboard Titanic\nOther family relatives excluded from this study include cousins,\nnephews/nieces, aunts/uncles, and in-laws. Some children travelled\nonly with a nanny, therefore parch=0 for them. As well, some\ntravelled with very close friends or neighbors in a village, however,\nthe definitions do not support such relations.\n```",
"df = pd.read_csv(\"../data/titanic.csv\")\ndf.shape\n\ndf.head()",
"Plotting data\nGreat for visual inspection.\nMatplotlib and Seaborn\nMatplotlib is a low level python library which gives you complete control over your plots. \nSeaborn is a library made on top of matplotlib and which adds functionality to create certain types of plots easily. Works great with pandas.",
"# We need the line below to show plots directly in the notebook.\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nsns.set_style(\"ticks\")\nsns.set_context(\"paper\")\n\ncolors = {\n \"Iris-setosa\": \"red\",\n \"Iris-versicolor\": \"green\",\n \"Iris-virginica\": \"blue\",\n}\nplt.scatter(df_iris.petallength, df_iris.petalwidth, c=map(lambda x: colors[x], df_iris[\"class\"]))\nplt.xlabel(\"petallength\")\nplt.ylabel(\"petalwidth\")\n\nsns.lmplot(x=\"petallength\", y=\"petalwidth\", hue=\"class\", data=df_iris, fit_reg=False)\n\nsns.pairplot(df_iris, hue=\"class\")\n\nsns.countplot(x=\"sex\", data=df)\n\nsns.countplot(x=\"class\", data=df)\n\nsns.countplot(x=\"embark_town\", data=df)\n\nsns.countplot(x=\"alive\", data=df)\n\nsns.countplot(x=\"alone\", data=df)\n\nsns.lmplot(x=\"age\", y=\"survived\", hue=\"sex\", data=df, fit_reg=True, logistic=True)\n\nsns.barplot(x=\"sex\", y=\"survived\", hue=\"embark_town\", data=df)\n\nsns.barplot(x=\"sex\", y=\"survived\", hue=\"class\", data=df)\n\nsns.barplot(x=\"sex\", y=\"survived\", hue=pd.cut(df.age, bins=[0,18,30,100]), data=df)\n\nsns.barplot(x=\"sex\", y=\"survived\", hue=\"alone\", data=df)\n\nsns.barplot(x=\"sex\", y=\"survived\", hue=pd.cut(df.sibsp, bins=[0,1,2,3,10]), data=df)\n\nsns.barplot(x=\"sex\", y=\"survived\", hue=pd.cut(df.parch, bins=[0,1,2,3,10]), data=df)\n\nsns.barplot(x=\"sex\", y=\"age\", hue=pd.cut(df.parch, bins=[0,1,2,3,10]), data=df)\n\nsns.barplot(x=\"sex\", y=\"age\", hue=pd.cut(df.sibsp, bins=[0,1,2,3,10]), data=df)\n\nsns.barplot(x=\"sex\", y=\"age\", hue=\"embark_town\", data=df)\n\nsns.barplot(x=\"sex\", y=\"age\", hue=\"class\", data=df)",
"Question: Draw the plot of mean petalwidth of the various categories of Iris-classes. It should show the mean petalwidth for each petallengths in buckets [0, 2.5, 4.5, 6.5, 10]\nANSWER BELOW\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.",
"sns.barplot(x=\"class\", y=\"petalwidth\", hue=pd.cut(df_iris.petallength, bins=[0, 2.5, 4.5, 6.5, 10]), data=df_iris)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
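pd.cut, used repeatedly in the seaborn cells above, assigns each value to a right-closed interval. A pure-Python sketch of that bucketing (mirroring pandas' default (left, right] behaviour, not its implementation):

```python
import bisect

def cut(value, bins):
    # Return the (left, right] interval containing value,
    # or None if value falls outside the bins.
    if value <= bins[0] or value > bins[-1]:
        return None
    i = bisect.bisect_left(bins, value)
    return (bins[i - 1], bins[i])

bucket = cut(3.0, [0, 2.5, 4.5, 6.5, 10])  # petallength 3.0
```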
jarvis-fga/Projetos
|
Problema 2/Daniel - Julliana/.ipynb_checkpoints/Amazon-checkpoint.ipynb
|
mit
|
[
"First, we need to read the 3 files, appending their lines to a single list:",
"import codecs\nwith codecs.open(\"imdb_labelled.txt\", \"r\", \"utf-8\") as arquivo:\n vetor = []\n for linha in arquivo:\n vetor.append(linha)\nwith codecs.open(\"amazon_cells_labelled.txt\", \"r\", \"utf-8\") as arquivo:\n for linha in arquivo:\n vetor.append(linha)\nwith codecs.open(\"yelp_labelled.txt\", \"r\", \"utf-8\") as arquivo:\n for linha in arquivo:\n vetor.append(linha)",
"Next, we remove the trailing line break ('\\n') from the end of each line.",
"vetor = [ x[:-1] for x in vetor ]\n\nimport nltk\n\nvetor = ([s.replace('&', '').replace(' - ', '').replace('.', '').replace(',', '').replace('!', '').\n replace('+', '')for s in vetor])",
"Next, we strip the trailing characters (the label), leaving only the comment itself. Then, we convert it to lowercase.",
"TextosQuebrados = [ x[:-4] for x in vetor ]\n\nTextosQuebrados = map(lambda X:X.lower(),TextosQuebrados)\n\n#TextosQuebrados = [x.split(' ') for x in TextosQuebrados]\n\nTextosQuebrados = [nltk.tokenize.word_tokenize(frase) for frase in TextosQuebrados]\n\n#X[0]\n\nimport nltk\nstopwords = nltk.corpus.stopwords.words('english')\n\nstemmer = nltk.stem.RSLPStemmer()\n\ndicionario = set()\n\nfor comentarios in TextosQuebrados:\n validas = [stemmer.stem(palavra) for palavra in comentarios if palavra not in stopwords and len(palavra) > 0]\n dicionario.update(validas)\n\n \ntotalDePalavras = len(dicionario)\ntuplas = zip(dicionario, xrange(totalDePalavras))\ntradutor = {palavra:indice for palavra,indice in tuplas}\n \ndef vetorizar_texto(texto, tradutor, stemmer):\n vetor = [0] * len(tradutor)\n for palavra in texto:\n if len(palavra) > 0:\n raiz = stemmer.stem(palavra)\n if raiz in tradutor:\n posicao = tradutor[raiz]\n vetor[posicao] += 1\n\n return vetor\n\nvetoresDeTexto = [vetorizar_texto(texto, tradutor,stemmer) for texto in TextosQuebrados]\nX = vetoresDeTexto\n\nY = [ x[-1:] for x in vetor ]\nlen(Y)\n\nporcentagem_de_treino = 0.8\n\ntamanho_do_treino = porcentagem_de_treino * len(Y)\ntamanho_de_validacao = len(Y) - tamanho_do_treino\n\ntreino_dados = X[0:int(tamanho_do_treino)]\ntreino_marcacoes = Y[0:int(tamanho_do_treino)]\n\nvalidacao_dados = X[int(tamanho_do_treino):]\nvalidacao_marcacoes = Y[int(tamanho_do_treino):]\n\nfim_de_teste = tamanho_do_treino + tamanho_de_validacao\nteste_dados = X[int(tamanho_do_treino):int(fim_de_teste)]\nteste_marcacoes = Y[int(tamanho_do_treino):int(fim_de_teste)]",
"The poly SVC approach was chosen",
"\"\"\"from sklearn import svm\nfrom sklearn.model_selection import cross_val_score\nk = 10\n\n# Implement poly SVC \npoly_svc = svm.SVC(kernel='linear')\naccuracy_poly_svc = cross_val_score(poly_svc, treino_dados, treino_marcacoes, cv=k, scoring='accuracy')\nprint('poly_svc: ', accuracy_poly_svc.mean())\"\"\"",
"Result - Poly:\nAll 3 datasets: the test was stopped after 10 minutes of running\nIMDb: 0.51750234411626805\nAmazon: 0.51125019534302241\nYelp: 0.56500429754649173\nResult - Linear:\nAll 3 datasets: 0.7745982496802607 (5 minutes)\nIMDb: 0.72168288013752147\nAmazon: 0.78869745272698855\nYelp: 0.77492342553523996",
"def fit_and_predict(nome, modelo, treino_dados, treino_marcacoes, teste_dados, teste_marcacoes):\n\tmodelo.fit(treino_dados, treino_marcacoes)\n\n\tresultado = modelo.predict(teste_dados)\n\tacertos = (resultado == teste_marcacoes)\n\n\ttotal_de_acertos = sum(acertos)\n\ttotal_de_elementos = len(teste_dados)\n\ttaxa_de_acerto = float(total_de_acertos) / float(total_de_elementos)\n\n\tprint(taxa_de_acerto)\n\treturn taxa_de_acerto\n\n\nresultados = {}\n\nfrom sklearn.naive_bayes import MultinomialNB\nmodeloMultinomial = MultinomialNB()\n\nfrom sklearn.ensemble import AdaBoostClassifier\nclf = AdaBoostClassifier(n_estimators=100)\n\nresultadoMultinomial = fit_and_predict(\"MultinomialNB\", AdaBoostClassifier(n_estimators=100), treino_dados, treino_marcacoes, teste_dados, teste_marcacoes)\nresultados[resultadoMultinomial] = modeloMultinomial",
"MultinomialNB:\nAll: 0.811666666667\nIMDB: 0.775\nAmazon: 76.5\nYelp: 0.715\nWith further data refinement:\nMultinomialNB:\nAll: 0.808652246256\nAdaboost:\nAll: 0.527454242928"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
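The vetorizar_texto step above builds a bag-of-words count vector from a fixed vocabulary. A standalone sketch of that step without the NLTK stemmer (the toy reviews are invented):

```python
def build_vocab(texts):
    # Map each distinct word to a stable column index
    words = sorted({word for text in texts for word in text})
    return {word: i for i, word in enumerate(words)}

def vectorize(text, vocab):
    # Count occurrences of each vocabulary word; unknown words are ignored
    vec = [0] * len(vocab)
    for word in text:
        if word in vocab:
            vec[vocab[word]] += 1
    return vec

texts = [["good", "phone", "good"], ["bad", "phone"]]
vocab = build_vocab(texts)
X = [vectorize(t, vocab) for t in texts]
```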
4dsolutions/Python5
|
CompoundFiveOctahedra.ipynb
|
mit
|
[
"Oregon Curriculum Network <br />\nDiscovering Math with Python\nFive Octahedrons in S* Module Volumes\n<a data-flickr-embed=\"true\" href=\"https://www.flickr.com/photos/kirbyurner/46361149221/in/dateposted-public/\" title=\"Five Octahedrons by Casey House\"><img src=\"https://farm5.staticflickr.com/4918/46361149221_ef7104b704.jpg\" width=\"500\" height=\"499\" alt=\"Five Octahedrons by Casey House\"></a><script async src=\"//embedr.flickr.com/assets/client-code.js\" charset=\"utf-8\"></script>\nThis is a cut and paste job, working from notes messaged by David Koski to Casey House and myself. It's about the volume of Five Octahedrons (a five-fold symmetric non-convex shape -- it has \"bumps\").\nMy goal is to verify David's numbers using the extended precision library gmpy2.\nThe precision is expressed in bits, not decimal digits. 100 is already well beyond what a native IEEE 754 floating point number would contain. If using a live version of this Notebook, you'll be able to take control of the context, including how many bits.\nLet's go with 200 bits this time. You'll start to see discrepancies in the least significant digits, which is to be expected.",
"import gmpy2\ngmpy2.get_context().precision=200\n\nroot5 = gmpy2.sqrt(5)\nroot2 = gmpy2.sqrt(2)\nØ = (1 + root5)/2\nØ\n\nS = (Ø **-5)/2 # home base Smod\nS3 = S * Ø**3 # capital s, phi up\nS6 = S3 * Ø**3 # phi up yet again\n# and so on\n\ns3 = S * Ø**-3 # small s, phi down\ns6 = s3 * Ø**-3\ns9 = s6 * Ø**-3 \ns12= s9 * Ø**-3\n\nfor i in 's12, s9, s6, s3, S, S3, S6'.split(\",\"):\n print(\"{:>4} =\".format(i), eval(i))",
"The S may be expressed in terms of phi-scaled versions of itself.",
"S = 4*s3 + s6 # synonyms\nS",
"<a data-flickr-embed=\"true\" href=\"https://www.flickr.com/photos/kirbyurner/6335726352/in/photolist-25JufgG-22RBsb5-CwvwpP-vLby2U-ujipN3-f75zUP-aDSfHf-8ryECF-8ryEix-7mcmne-5zTRjp-5zY9gA-7k4Eid-7jZLe2-7k4Ejf-7k4Em5-7jZLhp\" title=\"S Module\"><img src=\"https://farm7.staticflickr.com/6114/6335726352_902009df40.jpg\" width=\"500\" height=\"441\" alt=\"S Module\"></a><script async src=\"//embedr.flickr.com/assets/client-code.js\" charset=\"utf-8\"></script>\nThe five octahedrons are presumed to have edges 2R, the same edge used for the unit volume tetrahedron, and for the 12-around-1 cuboctahedron. The dissection by David Koski was done in vZome.",
"# Koski writes:\nFiveOcta = 132 * S + 36 * s3\nRD6 = 126*S + 30*s3\n# 1/10 of RTW is 6S+6s3\n# adding them together gets the result\n# Whew\nFiveOcta\n\nRD6\n\nFiveOcta2 = 10116 * s9 + 2388 * s12\nFiveOcta2",
"David: \"So, the beauty of working with these bits is that I can move them around.\"",
"for i in 132*S + 36*s3, 564*s3 + 132*s6, 2388*s6 + 564*s9, 10116*s9 + 2388*s12:\n print(i)",
"<a data-flickr-embed=\"true\" href=\"https://www.flickr.com/photos/kirbyurner/31499584137/in/dateposted-public/\" title=\"icosa_within\"><img src=\"https://farm5.staticflickr.com/4818/31499584137_1babf0215c.jpg\" width=\"500\" height=\"312\" alt=\"icosa_within\"></a><script async src=\"//embedr.flickr.com/assets/client-code.js\" charset=\"utf-8\"></script>\n<div align=\"center\">\nIcosa Within by D. B. Koski \n</div>\n<br />\n<a data-flickr-embed=\"true\" href=\"https://www.flickr.com/photos/kirbyurner/31499584597/in/dateposted-public/\" title=\"rt_inside\"><img src=\"https://farm5.staticflickr.com/4858/31499584597_bda3daa48a.jpg\" width=\"500\" height=\"312\" alt=\"rt_inside\"></a><script async src=\"//embedr.flickr.com/assets/client-code.js\" charset=\"utf-8\"></script>\n<div align=\"center\">\nRT Within by D. B. Koski \n</div>\n\nThe Five Octahedra each of volume 4, edges 2R, contain a shared Icosahedron, the long diagonals of which define a corresponding RT as shown by the Koski vZomes above.\nThis shared Icosahedron has a volume of $2.5 * sfactor^2$ (See All Aboard the S Train link below).\nAn Icosahedron 8 times larger, $20 * sfactor^2$, has the volume 20 cuboctahedron + that of the RT defined above.",
"cubocta = gmpy2.mpfr(2.5) # inscribed in Octa 4\nsfactor = 2 * gmpy2.sqrt(2) * Ø ** -2\n\nIcosa_within = cubocta * sfactor ** 2 # inscribed in Octa 4\nRT_within = 60*S + 60*s3 # RT anchored by Icosa_within (long diags)\n\nprint(Icosa_within * 8)\nprint(cubocta * 8 + RT_within) # 20 + RT Within",
"David went on to get the volume of the Compound Five Cuboctahedra, depicted below:\n<a data-flickr-embed=\"true\" href=\"https://www.flickr.com/photos/kirbyurner/31591317707/in/dateposted-public/\" title=\"Five Cuboctahedrons\"><img src=\"https://farm5.staticflickr.com/4899/31591317707_d6426c753e.jpg\" width=\"500\" height=\"312\" alt=\"Five Cuboctahedrons\"></a><script async src=\"//embedr.flickr.com/assets/client-code.js\" charset=\"utf-8\"></script>\nThe cuboctahedrons have the same edge lengths as the octahedrons above, meaning they have volume 20 versus volume 4. The answer, per Koski, may again be expressed in terms of phi-scaled S modules:",
"480*S + 280*s3",
"Let's take this opportunity to contextualize the above in terms of our larger volumes table. We'll need a few more players, namely the E family.\nThe T_factor, about 0.99948333 and discussed in Synergetics, is the linear scale factor by which the E module is reduced to create a T module. This T_factor to the 3rd power is the volumetric reduction factor; applied to the E module it returns volume 1/24.\nT_factor = $(\\Phi/\\sqrt{2}) (\\sqrt[3]{2/3})$.\nAn RT of 120 T modules has a volume of exactly 5, where T = A = B = 1/24 tetravolumes. \nAn RT of 120 E modules has a radius (center to diamond face centers) of exactly 1, and encases a unit radius sphere. \nThis slightly larger RT shows up as RT5+ in the table below.",
"E = (root2/8) * (Ø ** -3) # home base Emod\nE3 = E * Ø ** 3 # Emod phi up\ne3 = E * Ø ** -3 # Emod phi down\nSuperRT = 120 * E3 # = 20 * Synergetics Constant sqrt(9/8)\n\nF = gmpy2.mpfr(gmpy2.mpq(1, 16)) # space-filling shape, appears in 5RD\n\nT_factor = Ø/root2 * gmpy2.root( gmpy2.mpq(2, 3), 3) # T radius vs. E radius of 1\nT = E * T_factor ** 3\nT_factor\n\nprint(\"Five VEs : {:60.57}\".format(480*S + 280*s3)) # compound shape\nprint(\"SuperRT : {:60.57}\".format(SuperRT)) # formed by P Dodeca + Icosa verts\nprint(\"Cubocta : {:60.57}\".format(20*S6 + 20*S3)) # classic Jitterbug (JB) starts here\nprint(\"Icosa : {:60.57}\".format(100*E3 + 20*E)) # JB icosa\nprint(\"SmallGuy : {:60.57}\".format(360*E + 85*e3)) # skew to JB icosa\nprint(\"P Dodeca : {:60.57}\".format(348*E + 84*e3)) # dual to JB icosa, edges crossing\nprint(\"Five Octas : {:60.57}\".format(132*S + 36*s3)) # compound shape\nprint(\"Rh Dodeca (RD): {:60.57}\".format(6*S6 + 6*S3)) # space filler, shares verts with RT7.5\nprint(\"RT5+ : {:60.57}\".format(120*E)) # 120 E modules\nprint(\"RT5 : {:60.57}\".format(5*S6 + 5*S3)) # 120 T modules\nprint(\"RT5 (check) : {:60.57}\".format(T * 120))\nprint(\"Octa : {:60.57}\".format(4*S6 + 4*S3)) # JB octahedron\nprint(\"Cube : {:60.57}\".format(3*S6 + 3*S3)) \nprint(\"Skew Icosa : {:60.57}\".format(Icosa_within)) # askew to 2.5 VE\nprint(\"Small VE : {:60.57}\".format(cubocta)) # 1/8 volume of JB cubocta\nprint(\"Five Tetras : {:60.57}\".format(35*S + 45*s3)) # per Koski\nprint(\"Tetra : {:60.57}\".format(5*S3 + S)) # unit volume\nprint(\"-\" * 76)\nprint(\"F : {:60.57}\".format(F)) # fourth RITE (Rite = 2 Mites)\nprint(\"E3 : {:60.57}\".format(E3)) \nprint(\"E : {:60.57}\".format(E)) \nprint(\"e3 : {:60.57}\".format(e3)) \nprint(\"T : {:60.57}\".format(E * T_factor**3)) \nprint(\"S6 : {:60.57}\".format(S6))\nprint(\"S3 : {:60.57}\".format(S3))\nprint(\"S : {:60.57}\".format(S))\nprint(\"s3 : {:60.57}\".format(s3))\nprint(\"s6 : {:60.57}\".format(s6))",
"For further reading:\nAll Aboard the S Train"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rishuatgithub/MLPy
|
nlp/UPDATED_NLP_COURSE/02-Parts-of-Speech-Tagging/00-POS-Basics.ipynb
|
apache-2.0
|
[
"<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nPart of Speech Basics\nThe challenge of correctly identifying parts of speech is summed up nicely in the spaCy docs:\n<div class=\"alert alert-info\" style=\"margin: 20px\">Processing raw text intelligently is difficult: most words are rare, and it's common for words that look completely different to mean almost the same thing. The same words in a different order can mean something completely different. Even splitting text into useful word-like units can be difficult in many languages. While it's possible to solve some problems starting from only the raw characters, it's usually better to use linguistic knowledge to add useful information. That's exactly what spaCy is designed to do: you put in raw text, and get back a **Doc** object, that comes with a variety of annotations.</div>\nIn this section we'll take a closer look at coarse POS tags (noun, verb, adjective) and fine-grained tags (plural noun, past-tense verb, superlative adjective).",
"# Perform standard imports\nimport spacy\nnlp = spacy.load('en_core_web_sm')\n\n# Create a simple Doc object\ndoc = nlp(u\"The quick brown fox jumped over the lazy dog's back.\")",
"View token tags\nRecall that you can obtain a particular token by its index position.\n* To view the coarse POS tag use token.pos_\n* To view the fine-grained tag use token.tag_\n* To view the description of either type of tag use spacy.explain(tag)\n<div class=\"alert alert-success\">Note that `token.pos` and `token.tag` return integer hash values; by adding the underscores we get the text equivalent that lives in **doc.vocab**.</div>",
"# Print the full text:\nprint(doc.text)\n\n# Print the fifth word and associated tags:\nprint(doc[4].text, doc[4].pos_, doc[4].tag_, spacy.explain(doc[4].tag_))",
"We can apply this technique to the entire Doc object:",
"for token in doc:\n print(f'{token.text:{10}} {token.pos_:{8}} {token.tag_:{6}} {spacy.explain(token.tag_)}')",
"Coarse-grained Part-of-speech Tags\nEvery token is assigned a POS Tag from the following list:\n<table><tr><th>POS</th><th>DESCRIPTION</th><th>EXAMPLES</th></tr>\n\n<tr><td>ADJ</td><td>adjective</td><td>*big, old, green, incomprehensible, first*</td></tr>\n<tr><td>ADP</td><td>adposition</td><td>*in, to, during*</td></tr>\n<tr><td>ADV</td><td>adverb</td><td>*very, tomorrow, down, where, there*</td></tr>\n<tr><td>AUX</td><td>auxiliary</td><td>*is, has (done), will (do), should (do)*</td></tr>\n<tr><td>CONJ</td><td>conjunction</td><td>*and, or, but*</td></tr>\n<tr><td>CCONJ</td><td>coordinating conjunction</td><td>*and, or, but*</td></tr>\n<tr><td>DET</td><td>determiner</td><td>*a, an, the*</td></tr>\n<tr><td>INTJ</td><td>interjection</td><td>*psst, ouch, bravo, hello*</td></tr>\n<tr><td>NOUN</td><td>noun</td><td>*girl, cat, tree, air, beauty*</td></tr>\n<tr><td>NUM</td><td>numeral</td><td>*1, 2017, one, seventy-seven, IV, MMXIV*</td></tr>\n<tr><td>PART</td><td>particle</td><td>*'s, not,*</td></tr>\n<tr><td>PRON</td><td>pronoun</td><td>*I, you, he, she, myself, themselves, somebody*</td></tr>\n<tr><td>PROPN</td><td>proper noun</td><td>*Mary, John, London, NATO, HBO*</td></tr>\n<tr><td>PUNCT</td><td>punctuation</td><td>*., (, ), ?*</td></tr>\n<tr><td>SCONJ</td><td>subordinating conjunction</td><td>*if, while, that*</td></tr>\n<tr><td>SYM</td><td>symbol</td><td>*$, %, §, ©, +, −, ×, ÷, =, :), 😝*</td></tr>\n<tr><td>VERB</td><td>verb</td><td>*run, runs, running, eat, ate, eating*</td></tr>\n<tr><td>X</td><td>other</td><td>*sfpksdpsxmsa*</td></tr>\n<tr><td>SPACE</td><td>space</td></tr>\n\n___\n## Fine-grained Part-of-speech Tags\nTokens are subsequently given a fine-grained tag as determined by morphology:\n<table>\n<tr><th>POS</th><th>Description</th><th>Fine-grained 
Tag</th><th>Description</th><th>Morphology</th></tr>\n<tr><td>ADJ</td><td>adjective</td><td>AFX</td><td>affix</td><td>Hyph=yes</td></tr>\n<tr><td>ADJ</td><td></td><td>JJ</td><td>adjective</td><td>Degree=pos</td></tr>\n<tr><td>ADJ</td><td></td><td>JJR</td><td>adjective, comparative</td><td>Degree=comp</td></tr>\n<tr><td>ADJ</td><td></td><td>JJS</td><td>adjective, superlative</td><td>Degree=sup</td></tr>\n<tr><td>ADJ</td><td></td><td>PDT</td><td>predeterminer</td><td>AdjType=pdt PronType=prn</td></tr>\n<tr><td>ADJ</td><td></td><td>PRP\\$</td><td>pronoun, possessive</td><td>PronType=prs Poss=yes</td></tr>\n<tr><td>ADJ</td><td></td><td>WDT</td><td>wh-determiner</td><td>PronType=int rel</td></tr>\n<tr><td>ADJ</td><td></td><td>WP\\$</td><td>wh-pronoun, possessive</td><td>Poss=yes PronType=int rel</td></tr>\n<tr><td>ADP</td><td>adposition</td><td>IN</td><td>conjunction, subordinating or preposition</td><td></td></tr>\n<tr><td>ADV</td><td>adverb</td><td>EX</td><td>existential there</td><td>AdvType=ex</td></tr>\n<tr><td>ADV</td><td></td><td>RB</td><td>adverb</td><td>Degree=pos</td></tr>\n<tr><td>ADV</td><td></td><td>RBR</td><td>adverb, comparative</td><td>Degree=comp</td></tr>\n<tr><td>ADV</td><td></td><td>RBS</td><td>adverb, superlative</td><td>Degree=sup</td></tr>\n<tr><td>ADV</td><td></td><td>WRB</td><td>wh-adverb</td><td>PronType=int rel</td></tr>\n<tr><td>CONJ</td><td>conjunction</td><td>CC</td><td>conjunction, coordinating</td><td>ConjType=coor</td></tr>\n<tr><td>DET</td><td>determiner</td><td>DT</td><td>determiner</td><td></td></tr>\n<tr><td>INTJ</td><td>interjection</td><td>UH</td><td>interjection</td><td></td></tr>\n<tr><td>NOUN</td><td>noun</td><td>NN</td><td>noun, singular or mass</td><td>Number=sing</td></tr>\n<tr><td>NOUN</td><td></td><td>NNS</td><td>noun, plural</td><td>Number=plur</td></tr>\n<tr><td>NOUN</td><td></td><td>WP</td><td>wh-pronoun, personal</td><td>PronType=int rel</td></tr>\n<tr><td>NUM</td><td>numeral</td><td>CD</td><td>cardinal 
number</td><td>NumType=card</td></tr>\n<tr><td>PART</td><td>particle</td><td>POS</td><td>possessive ending</td><td>Poss=yes</td></tr>\n<tr><td>PART</td><td></td><td>RP</td><td>adverb, particle</td><td></td></tr>\n<tr><td>PART</td><td></td><td>TO</td><td>infinitival to</td><td>PartType=inf VerbForm=inf</td></tr>\n<tr><td>PRON</td><td>pronoun</td><td>PRP</td><td>pronoun, personal</td><td>PronType=prs</td></tr>\n<tr><td>PROPN</td><td>proper noun</td><td>NNP</td><td>noun, proper singular</td><td>NounType=prop Number=sign</td></tr>\n<tr><td>PROPN</td><td></td><td>NNPS</td><td>noun, proper plural</td><td>NounType=prop Number=plur</td></tr>\n<tr><td>PUNCT</td><td>punctuation</td><td>-LRB-</td><td>left round bracket</td><td>PunctType=brck PunctSide=ini</td></tr>\n<tr><td>PUNCT</td><td></td><td>-RRB-</td><td>right round bracket</td><td>PunctType=brck PunctSide=fin</td></tr>\n<tr><td>PUNCT</td><td></td><td>,</td><td>punctuation mark, comma</td><td>PunctType=comm</td></tr>\n<tr><td>PUNCT</td><td></td><td>:</td><td>punctuation mark, colon or ellipsis</td><td></td></tr>\n<tr><td>PUNCT</td><td></td><td>.</td><td>punctuation mark, sentence closer</td><td>PunctType=peri</td></tr>\n<tr><td>PUNCT</td><td></td><td>''</td><td>closing quotation mark</td><td>PunctType=quot PunctSide=fin</td></tr>\n<tr><td>PUNCT</td><td></td><td>\"\"</td><td>closing quotation mark</td><td>PunctType=quot PunctSide=fin</td></tr>\n<tr><td>PUNCT</td><td></td><td>``</td><td>opening quotation mark</td><td>PunctType=quot PunctSide=ini</td></tr>\n<tr><td>PUNCT</td><td></td><td>HYPH</td><td>punctuation mark, hyphen</td><td>PunctType=dash</td></tr>\n<tr><td>PUNCT</td><td></td><td>LS</td><td>list item marker</td><td>NumType=ord</td></tr>\n<tr><td>PUNCT</td><td></td><td>NFP</td><td>superfluous punctuation</td><td></td></tr>\n<tr><td>SYM</td><td>symbol</td><td>#</td><td>symbol, number sign</td><td>SymType=numbersign</td></tr>\n<tr><td>SYM</td><td></td><td>\\$</td><td>symbol, 
currency</td><td>SymType=currency</td></tr>\n<tr><td>SYM</td><td></td><td>SYM</td><td>symbol</td><td></td></tr>\n<tr><td>VERB</td><td>verb</td><td>BES</td><td>auxiliary \"be\"</td><td></td></tr>\n<tr><td>VERB</td><td></td><td>HVS</td><td>forms of \"have\"</td><td></td></tr>\n<tr><td>VERB</td><td></td><td>MD</td><td>verb, modal auxiliary</td><td>VerbType=mod</td></tr>\n<tr><td>VERB</td><td></td><td>VB</td><td>verb, base form</td><td>VerbForm=inf</td></tr>\n<tr><td>VERB</td><td></td><td>VBD</td><td>verb, past tense</td><td>VerbForm=fin Tense=past</td></tr>\n<tr><td>VERB</td><td></td><td>VBG</td><td>verb, gerund or present participle</td><td>VerbForm=part Tense=pres Aspect=prog</td></tr>\n<tr><td>VERB</td><td></td><td>VBN</td><td>verb, past participle</td><td>VerbForm=part Tense=past Aspect=perf</td></tr>\n<tr><td>VERB</td><td></td><td>VBP</td><td>verb, non-3rd person singular present</td><td>VerbForm=fin Tense=pres</td></tr>\n<tr><td>VERB</td><td></td><td>VBZ</td><td>verb, 3rd person singular present</td><td>VerbForm=fin Tense=pres Number=sing Person=3</td></tr>\n<tr><td>X</td><td>other</td><td>ADD</td><td>email</td><td></td></tr>\n<tr><td>X</td><td></td><td>FW</td><td>foreign word</td><td>Foreign=yes</td></tr>\n<tr><td>X</td><td></td><td>GW</td><td>additional word in multi-word expression</td><td></td></tr>\n<tr><td>X</td><td></td><td>XX</td><td>unknown</td><td></td></tr>\n<tr><td>SPACE</td><td>space</td><td>_SP</td><td>space</td><td></td></tr>\n<tr><td></td><td></td><td>NIL</td><td>missing tag</td><td></td></tr>\n</table>\n\nFor a current list of tags for all languages visit https://spacy.io/api/annotation#pos-tagging\n\n## Working with POS Tags\nIn the English language, the same string of characters can have different meanings, even within the same sentence. For this reason, morphology is important. **spaCy** uses machine learning algorithms to best predict the use of a token in a sentence. Is *\"I read books on NLP\"* present or past tense? 
Is *wind* a verb or a noun?",
"doc = nlp(u'I read books on NLP.')\nr = doc[1]\n\nprint(f'{r.text:{10}} {r.pos_:{8}} {r.tag_:{6}} {spacy.explain(r.tag_)}')\n\ndoc = nlp(u'I read a book on NLP.')\nr = doc[1]\n\nprint(f'{r.text:{10}} {r.pos_:{8}} {r.tag_:{6}} {spacy.explain(r.tag_)}')",
"In the first example, with no other cues to work from, spaCy assumed that read was present tense.<br>In the second example the present tense form would be I am reading a book, so spaCy assigned the past tense.\nCounting POS Tags\nThe Doc.count_by() method accepts a specific token attribute as its argument, and returns a frequency count of the given attribute as a dictionary object. Keys in the dictionary are the integer values of the given attribute ID, and values are the frequency. Counts of zero are not included.",
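Conceptually, `Doc.count_by` described above is just a frequency count over integer attribute IDs. A pure-Python stand-in using `collections.Counter` (the IDs below are invented for illustration, not real spaCy hash values):

```python
from collections import Counter

# Invented integer POS IDs standing in for the attribute values of a Doc
pos_ids = [90, 84, 84, 92, 100, 85, 90, 84, 92, 95, 97, 97]
counts = dict(Counter(pos_ids))  # mirrors the shape of doc.count_by(spacy.attrs.POS)

assert counts[84] == 3
assert 100 in counts and counts[100] == 1  # counts of zero never appear as keys
```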
"doc = nlp(u\"The quick brown fox jumped over the lazy dog's back.\")\n\n# Count the frequencies of different coarse-grained POS tags:\nPOS_counts = doc.count_by(spacy.attrs.POS)\nPOS_counts",
"This isn't very helpful until you decode the attribute ID:",
"doc.vocab[83].text",
"Create a frequency list of POS tags from the entire document\nSince POS_counts returns a dictionary, we can obtain a list of keys with POS_counts.items().<br>By sorting the list we have access to the tag and its count, in order.",
"for k,v in sorted(POS_counts.items()):\n print(f'{k}. {doc.vocab[k].text:{5}}: {v}')\n\n# Count the different fine-grained tags:\nTAG_counts = doc.count_by(spacy.attrs.TAG)\n\nfor k,v in sorted(TAG_counts.items()):\n print(f'{k}. {doc.vocab[k].text:{4}}: {v}')",
"<div class=\"alert alert-success\">**Why did the ID numbers get so big?** In spaCy, certain text values are hardcoded into `Doc.vocab` and take up the first several hundred ID numbers. Strings like 'NOUN' and 'VERB' are used frequently by internal operations. Others, like fine-grained tags, are assigned hash values as needed.</div>\n<div class=\"alert alert-success\">**Why don't SPACE tags appear?** In spaCy, only strings of spaces (two or more) are assigned tokens. Single spaces are not.</div>",
"# Count the different dependencies:\nDEP_counts = doc.count_by(spacy.attrs.DEP)\n\nfor k,v in sorted(DEP_counts.items()):\n print(f'{k}. {doc.vocab[k].text:{4}}: {v}')",
"Here we've shown spacy.attrs.POS, spacy.attrs.TAG and spacy.attrs.DEP.<br>Refer back to the Vocabulary and Matching lecture from the previous section for a table of Other token attributes.\n\nFine-grained POS Tag Examples\nThese are some grammatical examples (shown in bold) of specific fine-grained tags. We've removed punctuation and rarely used tags:\n<table>\n<tr><th>POS</th><th>TAG</th><th>DESCRIPTION</th><th>EXAMPLE</th></tr>\n<tr><td>ADJ</td><td>AFX</td><td>affix</td><td>The Flintstones were a **pre**-historic family.</td></tr>\n<tr><td>ADJ</td><td>JJ</td><td>adjective</td><td>This is a **good** sentence.</td></tr>\n<tr><td>ADJ</td><td>JJR</td><td>adjective, comparative</td><td>This is a **better** sentence.</td></tr>\n<tr><td>ADJ</td><td>JJS</td><td>adjective, superlative</td><td>This is the **best** sentence.</td></tr>\n<tr><td>ADJ</td><td>PDT</td><td>predeterminer</td><td>Waking up is **half** the battle.</td></tr>\n<tr><td>ADJ</td><td>PRP\\$</td><td>pronoun, possessive</td><td>**His** arm hurts.</td></tr>\n<tr><td>ADJ</td><td>WDT</td><td>wh-determiner</td><td>It's blue, **which** is odd.</td></tr>\n<tr><td>ADJ</td><td>WP\\$</td><td>wh-pronoun, possessive</td><td>We don't know **whose** it is.</td></tr>\n<tr><td>ADP</td><td>IN</td><td>conjunction, subordinating or preposition</td><td>It arrived **in** a box.</td></tr>\n<tr><td>ADV</td><td>EX</td><td>existential there</td><td>**There** is cake.</td></tr>\n<tr><td>ADV</td><td>RB</td><td>adverb</td><td>He ran **quickly**.</td></tr>\n<tr><td>ADV</td><td>RBR</td><td>adverb, comparative</td><td>He ran **quicker**.</td></tr>\n<tr><td>ADV</td><td>RBS</td><td>adverb, superlative</td><td>He ran **fastest**.</td></tr>\n<tr><td>ADV</td><td>WRB</td><td>wh-adverb</td><td>**When** was that?</td></tr>\n<tr><td>CONJ</td><td>CC</td><td>conjunction, coordinating</td><td>The balloon popped **and** everyone jumped.</td></tr>\n<tr><td>DET</td><td>DT</td><td>determiner</td><td>**This** is **a** 
sentence.</td></tr>\n<tr><td>INTJ</td><td>UH</td><td>interjection</td><td>**Um**, I don't know.</td></tr>\n<tr><td>NOUN</td><td>NN</td><td>noun, singular or mass</td><td>This is a **sentence**.</td></tr>\n<tr><td>NOUN</td><td>NNS</td><td>noun, plural</td><td>These are **words**.</td></tr>\n<tr><td>NOUN</td><td>WP</td><td>wh-pronoun, personal</td><td>**Who** was that?</td></tr>\n<tr><td>NUM</td><td>CD</td><td>cardinal number</td><td>I want **three** things.</td></tr>\n<tr><td>PART</td><td>POS</td><td>possessive ending</td><td>Fred**'s** name is short.</td></tr>\n<tr><td>PART</td><td>RP</td><td>adverb, particle</td><td>Put it **back**!</td></tr>\n<tr><td>PART</td><td>TO</td><td>infinitival to</td><td>I want **to** go.</td></tr>\n<tr><td>PRON</td><td>PRP</td><td>pronoun, personal</td><td>**I** want **you** to go.</td></tr>\n<tr><td>PROPN</td><td>NNP</td><td>noun, proper singular</td><td>**Kilroy** was here.</td></tr>\n<tr><td>PROPN</td><td>NNPS</td><td>noun, proper plural</td><td>The **Flintstones** were a pre-historic family.</td></tr>\n<tr><td>VERB</td><td>MD</td><td>verb, modal auxiliary</td><td>This **could** work.</td></tr>\n<tr><td>VERB</td><td>VB</td><td>verb, base form</td><td>I want to **go**.</td></tr>\n<tr><td>VERB</td><td>VBD</td><td>verb, past tense</td><td>This **was** a sentence.</td></tr>\n<tr><td>VERB</td><td>VBG</td><td>verb, gerund or present participle</td><td>I am **going**.</td></tr>\n<tr><td>VERB</td><td>VBN</td><td>verb, past participle</td><td>The treasure was **lost**.</td></tr>\n<tr><td>VERB</td><td>VBP</td><td>verb, non-3rd person singular present</td><td>I **want** to go.</td></tr>\n<tr><td>VERB</td><td>VBZ</td><td>verb, 3rd person singular present</td><td>He **wants** to go.</td></tr>\n</table>\n\nUp Next: Visualizing POS"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/professional-services
|
examples/kubeflow-fairing-example/Fairing_XGBoost.ipynb
|
apache-2.0
|
[
"# Copyright 2019 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================",
"Train and deploy Xgboost (Scikit-learn) on Kubeflow from Notebooks\nThis notebook introduces the usage of Kubeflow Fairing to train and deploy a model to Kubeflow on Google Kubernetes Engine (GKE) and Google Cloud AI Platform training. This notebook demonstrates how to:\n\nTrain an XGBoost model in a local notebook,\nUse Kubeflow Fairing to train an XGBoost model remotely on a Kubeflow cluster,\nUse Kubeflow Fairing to train an XGBoost model remotely on AI Platform training,\nUse Kubeflow Fairing to deploy a trained model to Kubeflow, and call the deployed endpoint for predictions.\n\nYou need Python 3.6 to use Kubeflow Fairing.\nSetup\n\n\nPre-conditions\n\nDeployed a Kubeflow cluster through https://deploy.kubeflow.cloud/\nHave the following environment variables ready: \nPROJECT_ID # project hosting the kubeflow cluster or for running AI platform training\nDEPLOYMENT_NAME # kubeflow deployment name, the same as the cluster name after deployment\nGCP_BUCKET # google cloud storage bucket\n\n\n\n\n\nCreate service account\nbash\nexport SA_NAME=[service account name]\ngcloud iam service-accounts create ${SA_NAME}\ngcloud projects add-iam-policy-binding ${PROJECT_ID} \\\n --member serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com \\\n --role 'roles/editor'\ngcloud iam service-accounts keys create ~/key.json \\\n --iam-account ${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com\n\n\nAuthorize for Source Repository\nbash\ngcloud auth configure-docker\n\n\nUpdate local kubeconfig (for submitting jobs to the kubeflow cluster)\nbash\nexport CLUSTER_NAME=${DEPLOYMENT_NAME} # this is the deployment name or the kubernetes cluster name\nexport ZONE=us-central1-c\ngcloud container clusters get-credentials ${CLUSTER_NAME} --zone ${ZONE}\n\n\nSet the environment variable: GOOGLE_APPLICATION_CREDENTIALS\nbash\nexport GOOGLE_APPLICATION_CREDENTIALS=....\npython\nos.environ['GOOGLE_APPLICATION_CREDENTIALS']=...\n\n\nInstall the latest version of fairing\npython\npip install 
git+https://github.com/kubeflow/fairing@master\n\n\nUpload training file\n```bash\n# upload the train.csv to a GCS bucket that can be accessed from both CMLE and the Kubeflow cluster\ngsutil cp ./train.csv ${GCP_Bucket}/train.csv\n```\nPlease note that the above configuration is required for a notebook service running outside the Kubeflow environment. The examples demonstrated in this notebook are fully tested on a notebook service outside the Kubeflow cluster as well.\nThe environment variables (e.g. service account, project) should have been pre-configured while setting up the cluster.\nSet up your notebook for training an XGBoost model\nImport the libraries required to train this model.",
"import argparse\nimport logging\nimport joblib\nimport sys\nimport pandas as pd\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.impute import SimpleImputer\nfrom xgboost import XGBClassifier\n\nlogging.basicConfig(format='%(message)s')\nlogging.getLogger().setLevel(logging.INFO)\n\nimport os\nimport fairing\n\n# Setting up Google Container Registry (GCR) for storing output containers\n# You can use any docker container registry instead of GCR\n# For local notebook, GCP_PROJECT should be set explicitly\nGCP_PROJECT = fairing.cloud.gcp.guess_project_name()\nGCP_Bucket = os.environ['GCP_BUCKET'] # e.g., 'gs://kubeflow-demo-g/'\n\n# This is for local notebook instead of that in kubeflow cluster\n# os.environ['GOOGLE_APPLICATION_CREDENTIALS']=",
"Define the model logic\nDefine a function to split the input file into training and testing datasets.",
"def gcs_copy(src_path, dst_path):\n import subprocess\n print(subprocess.run(['gsutil', 'cp', src_path, dst_path], stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))\n \ndef gcs_download(src_path, file_name):\n import subprocess\n print(subprocess.run(['gsutil', 'cp', src_path, file_name], stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))\n\ndef read_input(source_path, test_size=0.25):\n \"\"\"Read input data and split it into train and test.\"\"\"\n \n file_name = source_path.split('/')[-1]\n gcs_download(source_path, file_name)\n data = pd.read_csv(file_name)\n data.dropna(axis=0, inplace=True)\n\n y = data.Class\n X = data.drop(['Class', 'Amount', 'Time'], axis=1).select_dtypes(exclude=['object'])\n\n train_X, test_X, train_y, test_y = train_test_split(X.values,\n y.values,\n test_size=test_size,\n shuffle=True)\n\n imputer = SimpleImputer()\n train_X = imputer.fit_transform(train_X)\n test_X = imputer.transform(test_X)\n\n return (train_X, train_y), (test_X, test_y)",
"Define functions to train, evaluate, and save the trained model.",
"def train_model(train_X,\n                train_y,\n                test_X,\n                test_y,\n                n_estimators,\n                learning_rate):\n    \"\"\"Train the model using XGBClassifier.\"\"\"\n    model = XGBClassifier(n_estimators=n_estimators, learning_rate=learning_rate)\n\n    model.fit(train_X,\n              train_y,\n              early_stopping_rounds=40,\n              eval_set=[(test_X, test_y)])\n\n    logging.info(\"Best loss on eval: %.2f with %d rounds\",\n                 model.best_score,\n                 model.best_iteration+1)\n    return model\n\ndef eval_model(model, test_X, test_y):\n    \"\"\"Evaluate the model performance.\"\"\"\n    predictions = model.predict_proba(test_X)\n    logging.info(\"auc=%.2f\", roc_auc_score(test_y, predictions[:,1]))\n\ndef save_model(model, model_file):\n    \"\"\"Save XGBoost model for serving.\"\"\"\n    joblib.dump(model, model_file)\n    gcs_copy(model_file, GCP_Bucket + model_file)\n    logging.info(\"Model export success: %s\", model_file)",
"Define a class for your model, with methods for training and prediction.",
"class FraudServe(object):\n \n def __init__(self):\n self.train_input = GCP_Bucket + \"train_fraud.csv\"\n self.n_estimators = 50\n self.learning_rate = 0.1\n self.model_file = \"trained_fraud_model.joblib\"\n self.model = None\n\n def train(self):\n (train_X, train_y), (test_X, test_y) = read_input(self.train_input)\n model = train_model(train_X,\n train_y,\n test_X,\n test_y,\n self.n_estimators,\n self.learning_rate)\n\n eval_model(model, test_X, test_y)\n save_model(model, self.model_file)\n\n def predict(self, X, feature_names):\n \"\"\"Predict using the model for given ndarray.\"\"\"\n if not self.model:\n self.model = joblib.load(self.model_file)\n # Do any preprocessing\n prediction = self.model.predict(data=X)\n # Do any postprocessing\n return [[prediction.item(0), prediction.item(0)]]",
"Train an XGBoost model in a notebook\nCall FraudServe().train() to train your model, and then evaluate and save your trained model.",
"FraudServe().train()",
"Make Use of Fairing\nSpecify an image registry that will hold the image built by Fairing",
"# This demo uses gsutil, so we build a special base image that installs the Google Cloud SDK\nbase_image = 'gcr.io/{}/fairing-predict-example:latest'.format(GCP_PROJECT)\n!docker build --build-arg PY_VERSION=3.6.4 . -t {base_image}\n!docker push {base_image}\n\nDOCKER_REGISTRY = 'gcr.io/{}/fairing-job-xgboost'.format(GCP_PROJECT)\nBASE_IMAGE = base_image",
"Train an XGBoost model remotely on Kubeflow\nImport the TrainJob and GKEBackend classes. Kubeflow Fairing packages the FraudServe class, the training data, and the training job's software prerequisites as a Docker image. Then Kubeflow Fairing deploys and runs the training job on Kubeflow.",
"from fairing import TrainJob\nfrom fairing.backends import GKEBackend\n\ntrain_job = TrainJob(FraudServe, BASE_IMAGE, input_files=[\"requirements.txt\"],\n docker_registry=DOCKER_REGISTRY, backend=GKEBackend())\ntrain_job.submit()",
"Train an XGBoost model remotely on Cloud ML Engine\nImport the TrainJob and GCPManagedBackend classes. Kubeflow Fairing packages the FraudServe class, the training data, and the training job's software prerequisites as a Docker image. Then Kubeflow Fairing deploys and runs the training job on Cloud ML Engine.",
"from fairing import TrainJob\nfrom fairing.backends import GCPManagedBackend\ntrain_job = TrainJob(FraudServe, BASE_IMAGE, input_files=[\"requirements.txt\"],\n docker_registry=DOCKER_REGISTRY, backend=GCPManagedBackend())\ntrain_job.submit()",
"Deploy the trained model to Kubeflow for predictions\nImport the PredictionEndpoint and KubeflowGKEBackend classes. Kubeflow Fairing packages the FraudServe class, the trained model, and the prediction endpoint's software prerequisites as a Docker image. Then Kubeflow Fairing deploys and runs the prediction endpoint on Kubeflow.\nThis part only works for fairing version >=0.5.2",
"from fairing import PredictionEndpoint\nfrom fairing.backends import KubeflowGKEBackend\n# The trained_fraud_model.joblib is exported during the above local training\nendpoint = PredictionEndpoint(FraudServe, BASE_IMAGE, input_files=['trained_fraud_model.joblib', \"requirements.txt\"],\n                              docker_registry=DOCKER_REGISTRY, backend=KubeflowGKEBackend())\nendpoint.create()",
"Deploy to GCP",
"# Deploy model to gcp\n# from fairing.deployers.gcp.gcpserving import GCPServingDeployer\n# deployer = GCPServingDeployer()\n# deployer.deploy(VERSION_DIR, MODEL_NAME, VERSION_NAME)",
"Call the prediction endpoint\nCreate a test dataset, then call the endpoint on Kubeflow for predictions.",
"(train_X, train_y), (test_X, test_y) = read_input(GCP_Bucket + \"train_fraud.csv\")\nendpoint.predict_nparray(test_X)",
"Clean up the prediction endpoint\nDelete the prediction endpoint created by this notebook.",
"endpoint.delete()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
edwardd1/phys202-2015-work
|
assignments/assignment07/AlgorithmsEx01.ipynb
|
mit
|
[
"Algorithms Exercise 1\nImports",
"%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np",
"Word counting\nWrite a function tokenize that takes a string of English text and returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic:\n\nSplit the string into lines using splitlines.\nSplit each line into a list of words and merge the lists for each line.\nUse Python's builtin filter function to remove all punctuation.\nIf stop_words is a list, remove all occurrences of the words in the list.\nIf stop_words is a space-delimited string of words, split them and remove them.\nRemove any remaining empty words.\nMake all words lowercase.",
"def ispuct(char, punctuation='`~!@#$%^&*()_-+={[}]|\\:;\"<,>.?/}\\t'):\n    # True if char is NOT punctuation (so it survives filter)\n    return char not in punctuation\n\ndef tokenize(s, stop_words='', punctuation='`~!@#$%^&*()_+={[}]|\\:;\"<,>.?/}\\t'):\n    m = []\n    s = s.replace(\"-\", \" \")\n    def is_stop(word, stop_words=stop_words):\n        return word not in stop_words\n    def is_space(word):\n        return word != ''\n    for line in s.splitlines():\n        raw = line.lower().split(' ')\n        y = []\n        for w in raw:\n            # drop punctuation characters, then rejoin the survivors into a word\n            y.append(''.join(filter(ispuct, w)))\n        words = list(filter(is_space, y))\n        words = list(filter(is_stop, words))\n        m += words\n    return m\n\ntokenize(\"This, is the way; that things will hi--end\", stop_words='is the')\n\nwasteland = \"\"\"\nAPRIL is the cruellest month, breeding\nLilacs out of the dead land, mixing\nMemory and desire, stirring\nDull roots with spring rain.\n\"\"\"\ntokenize(wasteland, stop_words='is the of and')\n\nassert tokenize(\"This, is the way; that things will end\", stop_words=['the', 'is']) == \\\n    ['this', 'way', 'that', 'things', 'will', 'end']\nassert tokenize(wasteland, stop_words='is the of and') == \\\n    ['april','cruellest','month','breeding','lilacs','out','dead','land',\n     'mixing','memory','desire','stirring','dull','roots','with','spring',\n     'rain']",
"Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.",
"def count_words(data):\n \"\"\"Return a word count dictionary from the list of words in data.\"\"\"\n dictionary = {}\n for n in data:\n dictionary[n]= data.count(n)\n return dictionary\n\nassert count_words(tokenize('this and the this from and a a a')) == \\\n {'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}",
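A design note on the loop above: `data.count(n)` rescans the whole list for every word, making the count quadratic in the number of words. Python's `collections.Counter` produces the same dictionary in a single pass; a sketch (the `count_words_fast` name is ours, not part of the assignment):

```python
from collections import Counter

def count_words_fast(data):
    """One-pass equivalent of count_words above."""
    return dict(Counter(data))

assert count_words_fast('this and the this from and a a a'.split()) == \
    {'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}
```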
"Write a function sort_word_counts that return a list of sorted word counts:\n\nEach element of the list should be a (word, count) tuple.\nThe list should be sorted by the word counts, with the higest counts coming first.\nTo perform this sort, look at using the sorted function with a custom key and reverse\n argument.",
"def sort_word_counts(wc):\n \"\"\"Return a list of 2-tuples of (word, count), sorted by count descending.\"\"\"\n l = [(i,wc[i]) for i in wc]\n return sorted(l, key = lambda x:x[1], reverse = True)\n\nprint(sort_word_counts(count_words(tokenize('this and a the this this and a a a'))))\n\nassert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \\\n [('a', 4), ('this', 3), ('and', 2), ('the', 1)]",
"Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt:\n\nRead the file into a string.\nTokenize with stop words of 'the of and a to in is it that as'.\nPerform a word count, then sort and save the result in a variable named swc.",
"txt = open('mobydick_chapter1.txt', 'r')\nx = txt.read()\nswc = sort_word_counts(count_words(tokenize(s = x, stop_words = ['the', 'of', 'and', 'to', 'in', 'is', 'it', 'that', 'as', 'a'])))\nstring = ''\nx = (tokenize(s = x, stop_words = ['the', 'of', 'and', 'to', 'in', 'is', 'it', 'that', 'as', 'a']))\nfor things in x:\n string = string + things + \" \"\nprint(len(swc))\n#print(swc)\nprint(string)\npunchfactor = 4\n\nassert swc[0]==('i',43)\nassert len(swc)==848 - punchfactor #4 is the punchfactor, ranked out of 4",
"Create a \"Cleveland Style\" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...",
"x = np.array(swc)\nplt.plot(x[0:50,1], range(50),'o')\n\n\nassert True # use this for grading the dotplot"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nick-youngblut/SIPSim
|
ipynb/bac_genome/n3/probability_of_frag_detect.ipynb
|
mit
|
[
"Description:\n\nFor empirical data, most taxa (>0.1% abundance) are detected across the entire gradient.\nChecking whether a similar pattern is seen with the simulated genome data.\n\nSetting variables",
"workDir = '/home/nick/notebook/SIPSim/dev/bac_genome3/validation/'\nR_dir = '/home/nick/notebook/SIPSim/lib/R/'\nfigDir = '/home/nick/notebook/SIPSim/figures/'\n\nnprocs = 3",
"Init",
"import os\nimport numpy as np\nimport dill\nimport pandas as pd\n%load_ext rpy2.ipython\n\n%%R\nlibrary(ggplot2)\nlibrary(plyr)\nlibrary(dplyr)\nlibrary(tidyr)\nlibrary(gridExtra)\n\nif not os.path.isdir(workDir):\n os.makedirs(workDir)",
"Determining the probability of detecting the taxa across the entire gradient",
"# max 13C shift\nmax_13C_shift_in_BD = 0.036\n# min BD (that we care about)\nmin_GC = 13.5\nmin_BD = min_GC/100.0 * 0.098 + 1.66\n# max BD (that we care about)\nmax_GC = 80\nmax_BD = max_GC / 100.0 * 0.098 + 1.66 # 80.0% G+C\nmax_BD = max_BD + max_13C_shift_in_BD\n## BD range of values\nBD_vals = np.arange(min_BD, max_BD, 0.001)",
"skewed normal distribution",
"F = os.path.join(workDir, 'ampFrags_real_kde_dif.pkl')\nwith open(F, 'rb') as inFH:\n kde = dill.load(inFH)\nkde\n\n# probability at each location in gradient\npdf = {}\nfor k,v in kde.items():\n pdf[k] = v.evaluate(BD_vals)\npdf.keys()\n\ndf = pd.DataFrame(pdf)\ndf['BD'] = BD_vals\ndf.head(n=3)\n\n%%R -i df -w 800 -h 350\n\ndf.g = apply(df, 2, as.numeric) %>% as.data.frame %>%\n gather(taxon_name, P, 1:3) %>%\n mutate(BD = as.numeric(BD),\n P = as.numeric(P),\n taxon_name = as.character(taxon_name)) %>%\n filter(P > 1e-15)\n\nx.lab = expression(paste('Buoyant density (g ml' ^ '-1', ')'))\n\np1.skn = ggplot(df.g, aes(BD, P, color=taxon_name)) +\n geom_point() +\n geom_line() +\n labs(x=x.lab, y='Probability density') +\n theme_bw() +\n theme(\n text = element_text(size=16),\n legend.position = 'none'\n )\n\np2.skn = p1.skn + scale_y_log10()\n\ngrid.arrange(p1.skn, p2.skn, ncol=2)",
"small uniform distribution",
"F = os.path.join(workDir, 'ampFrags_sm_kde_dif.pkl')\nwith open(F, 'rb') as inFH:\n kde = dill.load(inFH)\nkde\n\n# probability at each location in gradient\npdf = {}\nfor k,v in kde.items():\n pdf[k] = v.evaluate(BD_vals)\npdf.keys()\n\ndf = pd.DataFrame(pdf)\ndf['BD'] = BD_vals\ndf.head(n=3)\n\n%%R -i df -w 800 -h 350\n\ndf.g = apply(df, 2, as.numeric) %>% as.data.frame %>%\n gather(taxon_name, P, 1:3) %>%\n mutate(BD = as.numeric(BD),\n P = as.numeric(P),\n taxon_name = as.character(taxon_name)) %>%\n filter(P > 1e-9)\n\n\n\np1 = ggplot(df.g, aes(BD, P, color=taxon_name)) +\n geom_point() +\n geom_line() +\n theme_bw() +\n theme(\n text = element_text(size=16),\n legend.position = 'none'\n )\n\np2 = p1 + scale_y_log10()\n\ngrid.arrange(p1, p2, ncol=2)",
"Notes\n\nEven with fragment sizes of 1-2 kb, the taxon would likely not be detected even if the gradient contained 1e9 16S copies of the taxon.\nDoes this make sense based on the theory of diffusion used?\n\nwith DBL 'smearing'\nDetermining the probability of detecting in all fragments\nskewed normal distribution",
"BD_vals = np.arange(min_BD, max_BD, 0.001)\n\nF = os.path.join(workDir, 'ampFrags_real_kde_dif_DBL.pkl')\nwith open(F, 'rb') as inFH:\n kde = dill.load(inFH)\nkde\n\n# probability at each location in gradient\npdf = {}\nfor k,v in kde.items():\n for kk,vv in v.items():\n pdf[kk] = vv.evaluate(BD_vals)\npdf.keys()\n\ndf = pd.DataFrame(pdf)\ndf['BD'] = BD_vals\ndf.head(n=3)\n\n%%R -i df -w 800 -h 350\n\ndf.g = apply(df, 2, as.numeric) %>% as.data.frame %>%\n gather(taxon_name, P, 1:3) %>%\n mutate(BD = as.numeric(BD),\n P = as.numeric(P),\n taxon_name = as.character(taxon_name)) %>%\n filter(P > 1e-15)\n\nx.lab = expression(paste('Buoyant density (g ml' ^ '-1', ')'))\n\np1.skn.dbl = ggplot(df.g, aes(BD, P, color=taxon_name)) +\n geom_point() +\n geom_line() +\n labs(x=x.lab, y='Probability density') +\n theme_bw() +\n theme(\n text = element_text(size=16),\n legend.position = 'none'\n )\n\np2.skn.dbl = p1.skn.dbl + scale_y_log10()\n\ngrid.arrange(p1.skn.dbl, p2.skn.dbl, ncol=2)",
"Notes\n\nEven if 1% of DNA is in the DBL (and then diffuses back into the gradient):\nthe probability of detecting a taxon at all gradient positions is >= 1e-7\nthis is feasible for matching the empirical data!\n\n\n\nCombined plot",
"%%R -w 800 -h 300\n\n# plot formatting\ntitle.size=16\np2.skn.f = p2.skn +\n ggtitle('Gaussian BD') +\n theme(\n plot.title = element_text(size=title.size)\n )\np2.skn.dbl.f = p2.skn.dbl +\n ggtitle('Gaussian BD + DBL') +\n theme(\n plot.title = element_text(size=title.size)\n )\n\n# combined plot\n#p.comb = cowplot::plot_grid(p2.skn.f, p2.skn.dbl.f, labels=c('A)', 'B)'), align='h')\np.comb = cowplot::ggdraw() +\n geom_rect(aes(xmin=0, ymin=0, xmax=1, ymax=1), fill='white') +\n cowplot::draw_plot(p2.skn.f, 0.01, 0.01, 0.49, 0.99) +\n cowplot::draw_plot(p2.skn.dbl.f, 0.5, 0.01, 0.49, 0.99) +\n cowplot::draw_plot_label(c('A)', 'B)'), c(0, 0.5), c(0.99, 0.99))\np.comb\n\n%%R -i workDir\n# writting plot\noutFile = file.path(workDir, 'DBL_example_log10.pdf') \nggsave(outFile, p.comb, width=10, height=3.75)\ncat('File written:', outFile, '\\n')",
"Combined plot (v2)",
"%%R -w 800 -h 300\n\np1.skn.e = p1.skn +\n scale_x_continuous(limits=c(1.675, 1.775))\np2.skn.e = p2.skn +\n scale_x_continuous(limits=c(1.675, 1.775)) +\n scale_y_log10(limits=c(1e-12, 150))\np1.skn.dbl.e = p1.skn.dbl +\n scale_x_continuous(limits=c(1.675, 1.775))\np2.skn.dbl.e = p2.skn.dbl +\n scale_x_continuous(limits=c(1.675, 1.775)) +\n scale_y_log10(limits=c(1e-12, 150))\n\n\np.comb = cowplot::ggdraw() +\n geom_rect(aes(xmin=0, ymin=0, xmax=1, ymax=1), fill='white') +\n cowplot::draw_plot(p2.skn.e, 0.01, 0.01, 0.49, 0.99) +\n cowplot::draw_plot(p2.skn.dbl.e, 0.5, 0.01, 0.49, 0.99) +\n cowplot::draw_plot_label(c('A)', 'B)'), c(0, 0.5), c(0.99, 0.99))\np.comb\n\n%%R -i workDir\n# writting plot\noutFile = file.path(workDir, 'DBL_example_log10.pdf') \nggsave(outFile, p.comb, width=10, height=3.75)\ncat('File written:', outFile, '\\n')",
"small fragment size distribution",
"BD_vals = np.arange(min_BD, max_BD, 0.001)\n\nF = os.path.join(workDir, 'ampFrags_sm_kde_dif_DBL.pkl')\nwith open(F, 'rb') as inFH:\n kde = dill.load(inFH)\nkde\n\n# probability at each location in gradient\npdf = {}\nfor k,v in kde.items():\n pdf[k] = v.evaluate(BD_vals)\npdf.keys()\n\ndf = pd.DataFrame(pdf)\ndf['BD'] = BD_vals\ndf.head(n=3)\n\n%%R -i df -w 800 -h 350\n\ndf.g = apply(df, 2, as.numeric) %>% as.data.frame %>%\n gather(taxon_name, P, 1:3) %>%\n mutate(BD = as.numeric(BD),\n P = as.numeric(P),\n taxon_name = as.character(taxon_name)) %>%\n filter(P > 1e-9)\n\np1 = ggplot(df.g, aes(BD, P, color=taxon_name)) +\n geom_point() +\n geom_line() +\n theme_bw() +\n theme(\n text = element_text(size=16),\n legend.position = 'none'\n )\n\np2 = p1 + scale_y_log10()\n\ngrid.arrange(p1, p2, ncol=2)",
"with DBL 'smearing' (smaller DBL)\nDetermining the probability of detecting in all fragments\nskewed normal distribution",
"BD_vals = np.arange(min_BD, max_BD, 0.001)\n\nF = os.path.join(workDir, 'ampFrags_real_kde_dif_DBL_fa1e-4.pkl')\nwith open(F, 'rb') as inFH:\n kde = dill.load(inFH)\nkde\n\n# probability at each location in gradient\npdf = {}\nfor k,v in kde.items():\n pdf[k] = v.evaluate(BD_vals)\npdf.keys()\n\ndf = pd.DataFrame(pdf)\ndf['BD'] = BD_vals\ndf.head(n=3)\n\n%%R -i df -w 800 -h 350\n\ndf.g = apply(df, 2, as.numeric) %>% as.data.frame %>%\n gather(taxon_name, P, 1:3) %>%\n mutate(BD = as.numeric(BD),\n P = as.numeric(P),\n taxon_name = as.character(taxon_name)) %>%\n filter(P > 1e-9)\n\np1 = ggplot(df.g, aes(BD, P, color=taxon_name)) +\n geom_point() +\n geom_line() +\n theme_bw() +\n theme(\n text = element_text(size=16),\n legend.position = 'none'\n )\n\np2 = p1 + scale_y_log10()\n\ngrid.arrange(p1, p2, ncol=2)",
"DBL with abundance-weighted smearing",
"BD_vals = np.arange(min_BD, max_BD, 0.001)\n\nF = os.path.join(workDir, 'ampFrags_real_kde_dif_DBL-comm.pkl')\nwith open(F, 'rb') as inFH:\n kde = dill.load(inFH)\nkde\n\n# probability at each location in gradient\npdf = {}\nfor libID,v in kde.items():\n for taxon,k in v.items():\n pdf[taxon] = k.evaluate(BD_vals)\npdf.keys()\n\ndf = pd.DataFrame(pdf)\ndf['BD'] = BD_vals\ndf.head(n=3)\n\n%%R -i df -w 800 -h 350\n\ndf.g = apply(df, 2, as.numeric) %>% as.data.frame %>%\n gather(taxon_name, P, 1:3) %>%\n mutate(BD = as.numeric(BD),\n P = as.numeric(P),\n taxon_name = as.character(taxon_name)) %>%\n filter(P > 1e-9)\n\np1 = ggplot(df.g, aes(BD, P, color=taxon_name)) +\n geom_point() +\n geom_line() +\n theme_bw() +\n theme(\n text = element_text(size=16),\n legend.position = 'none'\n )\n\np2 = p1 + scale_y_log10()\n\ngrid.arrange(p1, p2, ncol=2)\n\n%%R\ndf.g %>%\n group_by(taxon_name) %>%\n summarize(max_P = max(P),\n min_P = min(P)) %>% print",
"Plotting pre-frac abundance vs heavy fraction P",
"%%R -i workDir\n\nF = file.path(workDir, 'comm.txt')\ndf.comm = read.delim(F, sep='\\t') %>%\n mutate(rel_abund = rel_abund_perc / 100)\ndf.comm %>% print\n\ndf.g.s = df.g %>%\n filter(BD > 1.75) %>%\n group_by(BD) %>%\n mutate(P_rel_abund = P / sum(P)) %>%\n group_by(taxon_name) %>%\n summarize(mean_P = mean(P))\n\ndf.g.s = inner_join(df.g.s, df.comm, c('taxon_name' = 'taxon_name')) \ndf.g.s %>% print\n\nggplot(df.g.s, aes(rel_abund, mean_P)) +\n geom_point() +\n geom_line()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jsfenfen/parsing-prickly-pdfs
|
examples/la-precinct-bulletin/la-precinct-bulletin.ipynb
|
bsd-2-clause
|
[
"Parsing Los Angeles County's precinct-level results from the 2014 general election.",
"import pandas as pd\nimport pdfplumber\nimport re",
"Load the PDF in PDFPlumber:",
"pdf = pdfplumber.open(\"2014-bulletin-first-10-pages.pdf\")\nprint(len(pdf.pages))",
"Let's look at the first 15 characters on the first page of the PDF:",
"first_page = pdf.pages[0]\n\nchars = pd.DataFrame(first_page.chars)\nchars.head(15)",
"Extract the precinct ID\nThe corresponding characters are about 37–44 pixels from the top, and on the left half of the page.",
"pd.DataFrame(first_page.crop((0, 37, first_page.width / 2, 44 )).chars)\n\ndef get_precinct_id(page):\n cropped = page.crop((0, 37, page.width / 2, 44 ))\n text = \"\".join((c[\"text\"] for c in cropped.chars))\n trimmed = re.sub(r\" +\", \"|\", text)\n return trimmed\n\nfor page in pdf.pages:\n print(get_precinct_id(page))",
"We can do the same for the number of ballots cast",
"def get_ballots_cast(page):\n cropped = page.crop((0, 48, page.width / 3, 60))\n text = \"\".join((c[\"text\"] for c in cropped.chars))\n count = int(text.split(\" \")[0])\n return count\n\nfor page in pdf.pages:\n print(get_ballots_cast(page))",
"... and for the number of registered voters in each precinct",
"def get_registered_voters(page):\n cropped = page.crop((0, 62, page.width / 3, 74))\n text = \"\".join((c[\"text\"] for c in cropped.chars))\n count = int(text.split(\" \")[0])\n return count\n\nfor page in pdf.pages:\n print(get_registered_voters(page))",
"Getting the results for each race is a bit trickier\nThe data representation isn't truly tabular, but it's structured enough to allow us to create tabular data from it. This function divides the first column of the result-listings into columns (explicitly defined, in pixels) and rows (separated by gutters of whitespace).",
"def get_results_rows(page):\n first_col = page.crop((0, 77, 212, page.height))\n table = first_col.extract_table(\n v=(0, 158, 180, 212),\n h=\"gutters\",\n x_tolerance=1)\n return table\n\nget_results_rows(first_page)",
"Let's restructure that slightly, so that each row contains information about the relevant race:",
"def get_results_table(page):\n rows = get_results_rows(page)\n results = []\n race = None\n for row in rows:\n name, affil, votes = row\n if name == \"VOTER NOMINATED\": continue\n if votes == None:\n race = name\n else:\n results.append((race, name, affil, int(votes)))\n results_df = pd.DataFrame(results, columns=[ \"race\", \"name\", \"party\", \"votes\" ])\n return results_df\n\nget_results_table(first_page)",
"From there, we can start to do some calculations:",
"def get_jerry_brown_pct(page):\n table = get_results_table(page)\n brown_votes = table[table[\"name\"] == \"EDMUND G BROWN\"][\"votes\"].iloc[0]\n kashkari_votes = table[table[\"name\"] == \"NEEL KASHKARI\"][\"votes\"].iloc[0]\n brown_prop = float(brown_votes) / (kashkari_votes + brown_votes)\n return (100 * brown_prop).round(1)\n\nfor page in pdf.pages:\n precinct_id = get_precinct_id(page)\n brown = get_jerry_brown_pct(page)\n print(\"{0}: {1}%\".format(precinct_id, brown))",
""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tritemio/multispot_paper
|
usALEX - Corrections - Gamma factor fit.ipynb
|
mit
|
[
"Fit Gamma factor\n\nThis notebook estimates the gamma factor from a set of 5 μs-ALEX smFRET measurements.\n\nWhat this notebook does?\nAccording to Lee 2005 (PDF, SI PDF), we estimate the $\\gamma$-factor \nfrom Proximity Ratio (PR) and S values (with background, leakage and direct excitation correction) \nfor a set of 5 μs-ALEX measurements.\nThe PR and S values are computed by the notebook\n\nusALEX-5samples-PR-leakage-dir-ex-all-ph\n\nwhich is executed by 8-spots paper analysis.\nFrom Lee 2005 (equation 20), the following linear relation holds:\n$$\\frac{1}{S} = \\Omega + \\Sigma \\cdot E_{PR}$$\nOnce $\\Omega$ and $\\Sigma$ are fitted, we can compute the $\\gamma$-factor as (equation 22):\n$$\\gamma = (\\Omega-1)/(\\Omega + \\Sigma-1)$$\n$$\\beta = \\Omega + \\Sigma - 1$$\nThe definition of $\\beta$ based on physical parameters is:\n$$ \\beta = \\frac{I_{A_{ex}}\\sigma_{A_{ex}}^A}{I_{D_{ex}}\\sigma_{D_{ex}}^D}$$\nNote that, calling $S_\\gamma$ the corrected S, the following relation holds:\n$$ S_\\gamma = (1 + \\beta)^{-1}$$\nImport libraries",
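Before running the fit on real data, the algebra above can be sanity-checked numerically. A minimal sketch with hypothetical $\Omega$ and $\Sigma$ values (illustrative only, not fit results from this dataset), verifying that $\gamma$, $\beta$ and the corrected $S$ are mutually consistent:

```python
# Hypothetical fitted intercept (Omega) and slope (Sigma) -- illustrative only
Omega, Sigma = 1.5, 0.5

# Equation 22: gamma-factor and beta from the fitted linear coefficients
gamma = (Omega - 1) / (Omega + Sigma - 1)
beta = Omega + Sigma - 1

# After correction, S is constant across samples: S_gamma = 1 / (1 + beta)
S_gamma = 1 / (1 + beta)
print(gamma, beta, S_gamma)  # 0.5 1.0 0.5
```

With these toy numbers the corrected $S$ is exactly $1/(1+\beta) = 0.5$.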
"from __future__ import division\nimport numpy as np\nimport pandas as pd\nimport lmfit\nfrom scipy.stats import linregress",
"Computation\nThis notebook read data from the file:",
"data_file = 'results/usALEX-5samples-PR-leakage-dir-ex-all-ph.csv'\n\ndata = pd.read_csv(data_file).set_index('sample')\ndata\n\ndata[['E_gauss_w', 'E_kde_w', 'S_gauss']]\n\nE_ref, S_ref = data.E_gauss_w, data.S_gauss\n\nres = linregress(E_ref, 1/S_ref)\nslope, intercept, r_val, p_val, stderr = res",
"For more info see scipy.stats.linregress.",
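linregress fits ordinary least squares; a dependency-free sketch (toy data, not this notebook's E and S values) of how the slope and intercept it returns are defined:

```python
# Toy data lying exactly on y = 1 + 2x
xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 5.0]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n

# OLS slope = cov(x, y) / var(x); intercept from the means
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
print(slope, intercept)  # 2.0 1.0
```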
"Sigma = slope \nSigma\n\nOmega = intercept\nOmega",
"Pearson correlation coefficient:",
"r_val",
"Coefficient of determination $R^2$:",
"r_val**2",
"P-value (to test the null hypothesis that the slope is zero):",
"p_val",
"Gamma computed from the previous fitted values:",
"gamma = (Omega - 1)/(Omega + Sigma - 1)\n'%.6f' % gamma\n\nwith open('results/usALEX - gamma factor - all-ph.csv', 'w') as f:\n f.write('%.6f' % gamma)\n\nbeta = Omega + Sigma - 1\n'%.6f' % beta\n\nwith open('results/usALEX - beta factor - all-ph.csv', 'w') as f:\n f.write('%.6f' % beta)",
"Fit plot",
"import matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n%config InlineBackend.figure_format='retina' # for hi-dpi displays\n\nsns.set_style('whitegrid')\n\nx = np.arange(0, 1, 0.01)\nplt.plot(E_ref, 1/S_ref, 's', label='dsDNA samples')\nplt.plot(x, intercept + slope*x, 'k', label='fit (slope = %.2f)' % slope)\nplt.legend(loc=4)\nplt.ylim(1, 2)\nplt.xlabel('PR')\nplt.ylabel('1/SR');"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
5agado/conversation-analyzer
|
src/test/textStatsTool.ipynb
|
apache-2.0
|
[
"import numpy as np\nimport nltk\nfrom collections import Counter\nimport pandas as pd\nimport seaborn as sns\n\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer\n\nsns.set_context(\"paper\", font_scale=1.2)\n\n%matplotlib notebook\n%load_ext autoreload\n%autoreload 2\n\nimport os\nimport re\nimport sys\nsys.path.append(os.path.join(os.getcwd(), \"..\\..\\src\"))\n\nimport util.io as mio\nfrom util import statsUtil\nimport util.plotting as mplot\nfrom model.conversationDataframe import ConversationDataframe\nfrom stats.iConvStats import IConvStats\nfrom stats.wordsCountStats import WordsCountStats",
"Intro\nThis notebook is used as utility/tool for analysis of text.\nThe goal is to get some insight about the structure, content and quality of the text.\nExamples: analysis of CV, personal articles, job ads. \nLoad Text\nLoad text you want to analyse",
"def load_text(filepaths):\n \"\"\"\n Load text you want to analyse.\n :param filepaths: list of paths to text files to load\n :return: single string representing all retrieved text\n \"\"\"\n text = \"\"\n for path in filepaths:\n with open(path, 'r', encoding='UTF-8') as f:\n text += \"\\n\"+f.read()\n return text\n\ntext = load_text([\"\"])",
"Basic Stats\nLength, count and richness, n-gram distribution and most relevant features.",
"words = statsUtil.getWords(text)\ntypes = set(words)\n\nprint(\"Total length: {:.0f}\".format(len(text)))\nprint(\"Tokens count: {:.0f}\".format(len(words)))\nprint(\"Distinct tokens count: {:.0f}\".format(len(set(words))))\nprint(\"Lexical richness: {0:.5f}\".format(len(types)/len(words)))\n\ndef plot_most_common(most_common_ngrams, n_most, join=False):\n most_common_ngrams, count = zip(*most_common_ngrams.most_common(n_most))\n if join:\n most_common_ngrams = [\" \".join(list(e)) for e in most_common_ngrams]\n ax = sns.pointplot(y=most_common_ngrams, x=count)\n sns.plt.show()\n\n# Most common words\nwords_count = Counter(words)\n\n# Plot most common words\nplot_most_common(words_count, n_most=30)\n\nmost_common_bigrams = Counter(nltk.bigrams(words))\n\nplot_most_common(most_common_bigrams, 20, join=True)\n\nmost_common_trigrams = Counter(nltk.trigrams(words))\n\nplot_most_common(most_common_trigrams, 20, join=True)\n\n# Get most relevant words using TF-IDF\n# For this statistic we need additional pieces of text to compare with our speech transcript\n# we can simply load some corpora from NLTK \n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom nltk.corpus import stopwords\n\ndef get_top_features(text, n):\n # Load corpora for different genres\n c1 = nltk.corpus.gutenberg.raw('carroll-alice.txt')\n c2 = nltk.corpus.inaugural.raw(\"2009-Obama.txt\")\n c3 = nltk.corpus.webtext.raw(\"firefox.txt\")\n # Load english stopwords\n stops = set(stopwords.words(\"english\"))\n\n # Compute TF-IDF matrix and print top results for our speech\n vectorizer = TfidfVectorizer(analyzer='word',stop_words=stops, ngram_range=(2,3))\n tfIdf = vectorizer.fit_transform([text, c1, c2, c3]).toarray()\n indices = np.argsort(tfIdf[0])[::-1]\n features = vectorizer.get_feature_names()\n top_features = [features[i] for i in indices[:n] if tfIdf[0][i]!=0]\n return top_features\n\nget_top_features(text, 20)",
"Prose Stats\n“Over the whole document, make the average sentence length 15-20 words, 25-33 syllables and 75-100 characters.”",
"# prose stats\nsentences = list(filter(lambda x : len(x)>0, map(str.strip, re.split(r'[\\.\\?!\\n]', text))))\nsen_len = [len(sent) for sent in sentences]\nprint(\"Average sentence len {}. Max {}, min {}\".format(np.mean(sen_len), max(sen_len), min(sen_len)))\n\nfor sent in sentences:\n if len(sent)>300:\n print(\"* \" + sent)"
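The guideline above is stated in words and syllables as well as characters, but the cell only measures characters. A small sketch of the word-based average, on a made-up two-sentence string (not the loaded text):

```python
import re

sample = "This is a short sentence. Here is another one, slightly longer than the first!"
# Same sentence-splitting rule as the cell above
sents = [s.strip() for s in re.split(r'[\.\?!\n]', sample) if s.strip()]
word_counts = [len(s.split()) for s in sents]
avg_words = sum(word_counts) / len(word_counts)
print(word_counts, avg_words)  # [5, 9] 7.0
```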
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sainathadapa/fastai-courses
|
deeplearning1/nbs-custom-mine/lesson7_02_practice.ipynb
|
apache-2.0
|
[
"from theano.sandbox import cuda\nimport utils; reload(utils)\nfrom utils import *",
"VGG",
"(val_classes, train_classes,\n val_labels, train_labels,\n val_filenames, train_filenames,\n test_filenames) = get_classes('data/fish/')\n\nprint(val_classes)\nprint(train_classes)\n\nprint(val_labels)\nprint(train_labels)\n\nprint(val_filenames)\nprint(train_filenames)\nprint(test_filenames)\n\n# removing path\nremove_path = lambda y: [x.split('/')[-1] for x in y]\nraw_train_filenames = remove_path(train_filenames)\nraw_val_filenames = remove_path(val_filenames)\nraw_test_filenames = remove_path(test_filenames)\n\ntrain_data = get_data('data/fish/train', (360, 640))\nval_data = get_data('data/fish/valid', (360, 640))\ntest_data = get_data('data/fish/test', (360, 640))\n\nfrom vgg16bn import Vgg16BN\nmodel = Vgg16BN((360, 640)).model\nmodel.pop()\n\nmodel.input_shape\n\nmodel.output_shape\n\nmodel.summary()",
"Precompute convolutional output",
"model.compile(optimizer=Adam(),\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\nconv_train_features = model.predict(train_data)\nconv_test_features = model.predict(test_data)\nconv_val_features = model.predict(val_data)",
"Fully convolutional net (FCN)\nSince we're using a larger input, the output of the final convolutional layer is also larger. So, we probably don't want to put a dense layer in there - that would be a lot of parameters! Instead, let's use a fully convolutional net (FCN); this also has the benefit that they tend to generalize well, and also seems like a good fit for our problem (since the fish are a small part of the image)",
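To make "a lot of parameters" concrete, here is a back-of-the-envelope comparison using a hypothetical final conv output of 512 channels on a 22x40 grid (the shapes are assumptions for illustration, not read from this model):

```python
channels, h, w = 512, 22, 40   # hypothetical conv output shape (C, H, W)
dense_units = 4096

# A dense layer on the flattened conv output: one weight per (input, unit) pair
dense_params = channels * h * w * dense_units

# A single 3x3 conv with 128 filters costs the same regardless of H and W
conv_params = 128 * channels * 3 * 3

print(dense_params, conv_params)  # 1845493760 589824
```

Roughly 1.8 billion weights versus about 590k: this is why the FCN head sticks to convolutions and global average pooling.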
"lrg_model = Sequential([\n BatchNormalization(axis=1, input_shape=model.output_shape[1:]),\n Convolution2D(128, 3, 3, activation='relu', border_mode='same'),\n BatchNormalization(axis=1),\n MaxPooling2D(),\n Convolution2D(128, 3, 3, activation='relu', border_mode='same'),\n BatchNormalization(axis=1),\n MaxPooling2D(),\n Convolution2D(128, 3, 3, activation='relu', border_mode='same'),\n BatchNormalization(axis=1),\n MaxPooling2D((1,2)),\n Convolution2D(8, 3, 3, border_mode='same'),\n Dropout(0.),\n GlobalAveragePooling2D(),\n Activation('softmax')\n])\n\nlrg_model.summary()\n\nlrg_model.compile(optimizer=Adam(lr=0.001),\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\nlrg_model.fit(conv_train_features, train_labels,\n batch_size=64,\n nb_epoch=2,\n validation_data=(conv_val_features, val_labels),\n verbose=2)\n\nlrg_model.optimizer.lr=1e-5\n\nlrg_model.fit(conv_train_features, train_labels,\n batch_size=64,\n nb_epoch=6,\n validation_data=(conv_val_features, val_labels),\n verbose=2)",
"Bounding boxes and Multi-Output",
"import ujson as json\nanno_classes = ['alb', 'bet', 'dol', 'lag', 'other', 'shark', 'yft']\nbb_json = {}\nfor c in anno_classes:\n j = json.load(open('{}annos/{}_labels.json'.format('data/fish/', c), 'r'))\n for l in j:\n if 'annotations' in l.keys() and len(l['annotations'])>0:\n bb_json[l['filename'].split('/')[-1]] = sorted(\n l['annotations'], key=lambda x: x['height']*x['width'])[-1]\n\nbb_json['img_04908.jpg']\n\ntrain_file2idx = {o:i for i,o in enumerate(raw_train_filenames)}\nval_file2idx = {o:i for i,o in enumerate(raw_val_filenames)}\n\ntrain_file2idx\n\n# for any images that have no annotations, we'll create an empty bounding box\nempty_bbox = {'height': 0., 'width': 0., 'x':0., 'y': 0.}\n\nfor x in raw_train_filenames:\n if not x in bb_json.keys(): bb_json[x] = empty_bbox\nfor x in raw_val_filenames:\n if not x in bb_json.keys(): bb_json[x] = empty_bbox\n\n# convert the coordinates to our resized 224x224 images\ndef convert_bb(bb, size):\n bb = [bb[p] for p in ['height', 'width', 'x', 'y']]\n conv_x = (224. / size[0])\n conv_y = (224. / size[1])\n bb[0] = bb[0]*conv_y\n bb[1] = bb[1]*conv_x\n bb[2] = max(bb[2]*conv_x, 0)\n bb[3] = max(bb[3]*conv_y, 0)\n return bb\n\nraw_train_sizes = [PIL.Image.open('data/fish/train/' + x).size for x in train_filenames]\nraw_val_sizes = [PIL.Image.open('data/fish/valid/' + x).size for x in val_filenames]\n\ntrain_bbox = np.stack([convert_bb(bb_json[f], s) for f,s in zip(raw_train_filenames, raw_train_sizes)])\nval_bbox = np.stack([convert_bb(bb_json[f], s) for f,s in zip(raw_val_filenames, raw_val_sizes)])\n\ndef plot_bb(i):\n bb = val_bbox[i]\n plot(val_data[i])\n plt.gca().add_patch(\n plt.Rectangle((bb[2], bb[3]), bb[1], bb[0], color='red', fill=False, lw=3)\n )\n\n%matplotlib inline\nplot_bb(0)\n\n# functional api\ninp = Input(model.output_shape[1:])\nx = MaxPooling2D()(inp)\nx = BatchNormalization(axis=1)(x)\nx = Dropout(0.6/4)(x)\nx = Flatten()(x)\nx = Dense(512, activation='relu')(x)\nx = BatchNormalization()(x)\nx = Dropout(0.6)(x)\nx = Dense(512, activation='relu')(x)\nx = BatchNormalization()(x)\nx = Dropout(0.6/2)(x)\ny_bb = Dense(4, name='bb')(x)\ny_c = Dense(8, activation='softmax', name='class')(x)\n\n# multi-output\nmodel = Model([inp], [y_bb, y_c])\nmodel.compile(optimizer=Adam(lr=0.001),\n loss=['mse', 'categorical_crossentropy'],\n metrics=['accuracy'],\n loss_weights=[0.001, 1.])\n\nmodel.fit(conv_train_features,\n [train_bbox, train_labels],\n batch_size=64,\n nb_epoch=3,\n validation_data=(conv_val_features, [val_bbox, val_labels]),\n verbose=2)\n\nmodel.optimizer.lr = 1e-5\n\nmodel.fit(conv_train_features,\n [train_bbox, train_labels],\n batch_size=64,\n nb_epoch=10,\n validation_data=(conv_val_features, [val_bbox, val_labels]),\n verbose=2)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
OpenWeavers/openanalysis
|
doc/OpenAnalysis/02 - Sorting.ipynb
|
gpl-3.0
|
[
"Sorting Analysis\nConsider a finite collection of orderable elements. Re-arranging that collection so that it is completely ordered is called sorting. There are many techniques to sort a collection. Following are some of the comparison-based sorting algorithms:\n\nBubble Sort\nInsertion Sort\nSelection Sort\nMerge Sort\nQuick Sort\nHeap Sort\n\nBefore looking at the analysis part, we shall examine the language's built-in methods for sorting.\nsorted(collection, reverse=False[, key])\nThis function takes an iterable as argument, and returns it in sorted form based on key. If key is not given, sorting is done according to the default comparison rules. If reverse is True, the reversed (descending) collection is returned after sorting. Let's see some examples and understand the working of sorted().",
"x = list(range(10))\nimport random\nrandom.shuffle(x)\n\nx\n\nsorted(x)\n\nimport math\ny = sorted(x,key = lambda x: math.sin(x)) # Sort elements of x in increasing order of their sines\ny\n\n[math.sin(i) for i in y]",
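The reverse flag mentioned above is not exercised in these examples; a quick illustration (made-up inputs):

```python
desc = sorted([3, 1, 4, 1, 5], reverse=True)
print(desc)  # [5, 4, 3, 1, 1]

# key and reverse combine: longest strings first
by_len_desc = sorted(['kiwi', 'banana', 'fig'], key=len, reverse=True)
print(by_len_desc)  # ['banana', 'kiwi', 'fig']
```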
"Note how the elements of sin(y) are in increasing order.\nStandard import statement",
"from openanalysis.sorting import SortingAlgorithm,SortAnalyzer\nimport numpy as np # for doing vstack()",
"SortingAlgorithm is the base class providing the standards to implement sorting algorithms; SortAnalyzer visualizes and analyses the algorithms.\nSortingAlgorithm class\nAny sorting algorithm which has to be implemented must be derived from this class. Now we shall see the data members and member functions of this class.\nData Members\n\nname - Name of the Sorting Algorithm\ncount - Holds the number of basic operations performed\nhist_array - A 2D numpy array, holding the instances of the array as exchanges are performed\n\nMember Functions\n\n__init__(self, name): - Initializes the algorithm with a name\nsort(self, array, visualization): - The base sorting function. Sets count to 0. array is a 1D numpy array, visualization is a bool indicating whether array has to be vstacked into hist_array\n\nAn example .... Bubble Sort\nNow we shall implement the class BubbleSort",
"class BubbleSort(SortingAlgorithm): # Derived from SortingAlgorithm\n def __init__(self):\n SortingAlgorithm.__init__(self, \"Bubble Sort\") # Initializing with name\n\n def sort(self, array, visualization=False): # MUST have this signature\n SortingAlgorithm.sort(self, array, visualization) # sets self.count to 0\n for i in range(0, array.size): # Not len(array)\n exch = False\n for j in range(0, array.size - i - 1):\n self.count += 1 # Increment self.count after each basic operation\n if array[j] > array[j + 1]:\n array[j], array[j + 1] = array[j + 1], array[j]\n exch = True\n if visualization:\n self.hist_array = np.vstack([self.hist_array, array]) # Save the current state to hist_array\n if not exch:\n break\n if visualization:\n self.hist_array = np.vstack([self.hist_array, array]) # Save the final state to hist_array",
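The same operation-counting pattern carries over to the other quadratic sorts listed earlier. A dependency-free sketch (plain Python, not using the openanalysis base class) of insertion sort that counts comparisons the way self.count does above:

```python
def insertion_sort_count(array):
    """Sort array in place; return the number of comparisons performed."""
    count = 0
    for i in range(1, len(array)):
        j = i
        while j > 0:
            count += 1  # one basic operation per comparison
            if array[j - 1] > array[j]:
                array[j - 1], array[j] = array[j], array[j - 1]
                j -= 1
            else:
                break
    return count

a = [3, 1, 2]
print(insertion_sort_count(a), a)  # 3 [1, 2, 3]
```

An already-sorted input costs only n-1 comparisons, while a reverse-sorted one costs n(n-1)/2 -- the same best/worst-case gap the analysis plots show for BubbleSort.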
"SortAnalyzer class\nThis class provides the visualization and analysis methods. Let's see its methods in detail.\n\n\n__init__(self, sorter): Initializes the analyzer with a sorting algorithm. \n\nsorter is a class derived from SortingAlgorithm\n\n\n\nvisualize(self, num=100, save=False): Visualizes the given algorithm with a randomly shuffled array.\n\nnum - size of the randomly shuffled array\nsave - if True, the animation is saved in output/\n\n\n\nanalyze(self, maxpts=1000):\n\nPlots the running time of the sorting algorithm for 3 cases:\nalready sorted array, reverse sorted array and shuffled array\nAnalysis is done by inputting randomly shuffled integer arrays with sizes starting\n from 100, and varying up to maxpts in steps of 100, and counting the number of\n basic operations\nmaxpts - Upper bound on the array sizes chosen for analysing efficiency",
"bubble_visualizer = SortAnalyzer(BubbleSort)\n\nbubble_visualizer.analyze()",
"As you can see in the above plot, BubbleSort takes $\\mathcal{O}(n)$ time in the best case and $\\mathcal{O}(n^2)$ time in both the average and worst cases.\nYou can call the visualize function as shown below and see the 'mp4' file saved in the output/ folder\npython\n bubble_visualizer.visualize(save=True) \ncompare(algs)\nalgs is a list of classes derived from SortingAlgorithm. It performs tests and plots a bar graph comparing the number of basic operations performed by each algorithm.\nWhy use a class if sorting could be done using a function?\nWe have just seen how BubbleSort is implemented. Not every sorting algorithm is as simple as BubbleSort. QuickSort and MergeSort need several auxiliary methods to work with. If these are scattered throughout the code, they decrease readability. So it is better to pack everything in a class.\nExample File\nYou can see more examples at Github"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mayank-johri/LearnSeleniumUsingPython
|
Section 1 - Core Python/Chapter 05 - Data Types/5.0.1 Answers - Data_Type.ipynb
|
gpl-3.0
|
[
"Answers - Data_Type\n\nQ : What will be the output of the following code snippets?",
"a=[1,2,3,4,5,6,7,8,9]\nprint(a[::2])\n\na=[1,2,3,4,5,6,7,8,9]\na[::2]=10,20,30,40,50,60 # ValueError: 6 values for a 5-slot extended slice\nprint(a)\n\na=[1,2,3,4,5,6,7,8,9]\na[::2]=10,20,30,40,50\nprint(a)\n\na=[1,2,3,4,5]\na[3:1:-1]\n\na=[1,2,3,4,5]\nprint(a[3:0:-1])\n\narr = [[1, 2, 3, 4],\n [4, 5, 6, 7],\n [8, 9, 10, 11],\n [12, 13, 14, 15]]\nfor i in range(0, 4):\n print(arr[i].pop())\n\narr = [1, 2, 3, 4, 5, 6]\nfor i in range(1, 6):\n arr[i - 1] = arr[i]\nfor i in range(0, 6): \n print(arr[i], end = \" \")\n\nnums = set([1,1,2,3,3,3,4])\nprint(len(nums))\n\nnumbers = [1, 2, 3, 4]\nnumbers.append([5,6,7,8])\nprint (len(numbers))\n\nnumbers = [1, 2, 3, 4]\nfor a in [5,6,7,8]:\n numbers.append(a)\nprint (len(numbers))\n\nnumbers = [1, 2, 3, 4]\nfor a in range(5,9):\n numbers.append(a)\nprint (len(numbers))\n\nnames1 = ['Amir', 'Barry', 'Chales', 'Dao']\nnames2 = names1\nnames3 = names1[:]\n\nnames2[0] = 'Alice'\nnames3[1] = 'Bob'\n\nsum = 0\nfor ls in (names1, names2, names3):\n if ls[0] == 'Alice':\n sum += 1\n if ls[1] == 'Bob':\n sum += 10\n\nprint(sum)\n\nnames1 = ['Amir', 'Barry', 'Chales', 'Dao']\nloc = names1.index(\"Edward\")\nprint (loc)\n\nlist1 = [1, 2, 3, 4]\nlist2 = [5, 6, 7, 8]\n\nprint (len(list1 + list2))\n\n\nlist1 = [1, 2, 3, 8, 4]\nlist2 = [5, 6, 7, 8, 2]\n\nprint(len(set(list1 + list2)))",
"Q: Write a Python script to add a key to a dictionary.\ne.g. Sample Dictionary : {0: 10, 1: 20} Expected Result : {0: 10, 1: 20, 2: 30}",
"a = {0: 10, 1: 20}\na[2] = 30\nprint(a)",
"Q: Write a Python script to concatenate following dictionaries to create a new one.\ne.g:\nSample Dictionary : dic1={1:10, 2:20} dic2={3:30, 4:40} dic3={5:50,6:60}\nExpected Result : {1: 10, 2: 20, 3: 30, 4: 40, 5: 50, 6: 60}",
"names1={1:10, 2:20} \nnames2={3:30, 4:40}\nnames3={5:50,6:60}\n# names1.update(names2)\nnew_dict = {}\nfor ls in (names1, names2, names3):\n new_dict.update(ls)\nprint(new_dict)\n\nd1={1:2,3:4}\nd2={5:6,7:9}\nd3={10:8,13:22}\nd4 = dict(d1)\nd4.update(d2)\nd4.update(d3)\nprint(d4)",
"Q: Write a Python script to check if a given key already exists in a dictionary.",
"d = {1: 2, 3: 4, 5: 6, 7: 9, 10: 8, 13: 22}\n\nkey_to_find = 11\nfor key in d:\n if key == key_to_find:\n print(\"key found\")\n break\nelse:\n print(\"key not found\")\n\n# Simpler alternative using the `in` operator\nprint(key_to_find in d)",
"Q: Write a Python program to iterate over dictionaries using for loops.\nans: see the examples above\nQ: Write a Python script to generate and print a dictionary that contains the numbers between 1 and n in the form (x, x*x).\nSample Dictionary ( n = 5) : Expected Output : {1: 1, 2: 4, 3: 9, 4: 16, 5: 25}\nand \nWrite a Python script to print a dictionary where the keys are numbers between 1 and 15 (both included) and the values are the squares of the keys. Sample Dictionary {1: 1, 2: 4, 3: 9, 4: 16, 5: 25, 6: 36, 7: 49, 8: 64, 9: 81, 10: 100, 11: 121, 12: 144, 13: 169, 14: 196, 15: 225}",
"n = 5\na = {b: b*b for b in range(1, n+1)}\nprint(a)",
"Q: Write a Python script to sort (ascending and descending) a dictionary by value.",
"# Ans\ne = {1:39,8:110, 4:34, 3:87, 7:110, 2:87}\nsortE = sorted(e.items(), key=lambda value: value[1])\nprint(sortE)\n\n# Ans\ne = {1:39,8:110, 4:34, 3:87, 7:110, 2:87}\nsortE = sorted(e.items(), key=lambda value: value[1], reverse=True)\nprint(sortE)",
"Q: Write a Python script to merge two Python dictionaries.\n\nTIP: use update\n\nQ: Write a Python program to sum all the items in a dictionary.",
"e = {1:39,8:110, 4:34, 3:87, 7:110, 2:87}\nprint(sum(e.values()))",
"Write a Python program to multiply all the items in a dictionary.\nWrite a Python program to remove a key from a dictionary.\nWrite a Python program to map two lists into a dictionary.\nWrite a Python program to sort a dictionary by key.\nWrite a Python program to get the maximum and minimum value in a dictionary.\nWrite a Python program to get a dictionary from an object's fields.\nWrite a Python program to remove duplicates from Dictionary.\nWrite a Python program to check a dictionary is empty or not.\nWrite a Python script to sort (ascending and descending) a dictionary by value."
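Most of these exercises reduce to one or two calls on the dict API. A hedged sketch of a few of them follows (the variable names and sample data are our own, not from the exercises):

```python
prices = {"a": 3, "b": 7, "c": 5}  # a made-up sample dictionary

# Sum / multiply all the values
total = sum(prices.values())
product = 1
for v in prices.values():
    product *= v

# Remove a key (no KeyError if it is missing)
prices.pop("b", None)

# Map two lists into a dictionary
mapped = dict(zip(["x", "y"], [1, 2]))

# Maximum and minimum value, and the "is it empty?" check
hi, lo = max(prices.values()), min(prices.values())
print(total, product, mapped, hi, lo, len(prices) == 0)
```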
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
DS-100/sp17-materials
|
sp17/hw/hw5/hw5.ipynb
|
gpl-3.0
|
[
"Homework 5",
"import numpy as np\nimport pandas as pd\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom IPython.display import display, Latex, Markdown\n\n!pip install -U okpy\nfrom client.api.notebook import Notebook\nok = Notebook('hw5.ok')",
"This assignment\nIn this assignment we will study a fundamental problem in crawling the web: deciding when to re-index changing pages. \nSearch engines like those deployed at Google and Microsoft periodically visit, or crawl, web pages to check their current status and record any changes to the page. Some pages, such as news websites, are updated frequently while other pages, like this one, are never updated. \nWhy not crawl the entire web all the time?\nThe web is big and even giant search engines don't have the resources to re-read the entire web continuously. Fortunately, different sites change at different rates, so an intuitive strategy would be to visit sites more if they are changed more frequently. But this requires us to know, or at least estimate, how often each web site is changed. This is the main question you'll answer in this assignment.\nAlong the way you will:\n\ndo some EDA to understand the site dataset.\nbuild several models for the site changes.\nuse simulation to get a concrete understanding of your models.\ncompute maximum-likelihood estimates of model parameters.\nuse simulation and visualization to understand the accuracy of your estimates of site change frequencies.\n\nExamining the data\nOur dataset consists of observations for about 1000 pages over a 30-day period. Every hour, each page was downloaded, and it was determined whether the page had been changed since the previous hour. For the first hour, we can't tell whether it has changed, so there are 719 such \"checks\" for each page.\nImportant fact about the data: Not every check succeeded. Sometimes the page failed to load, or it was otherwise impossible to tell whether it had changed since the previous hour. These hours are omitted from the dataset. A field in the dataset indicates the number of successful checks for each page.\nThe dataset is in JSON format in the file crawl.json. 
For each page, we have: the URL, called url; the number of successful visits to the page, called number of checks; and the checks when a change was detected, called positive checks. Examine the crawl.json file. You might find it convenient to load it into Python using json.load. The next two cells are provided for that.",
"# Use this cell to examine the dataset, if you like.\n!head -n 25 crawl.json\n\n# This cell loads the JSON data into Python.\nimport json\nwith open(\"crawl.json\", \"r\") as f:\n crawl_json = json.load(f)",
"Question 1\nFill in the function json_description to determine:\n\nthe number of records, and\nthe set of possible top level keys (field names) in each record.",
"def json_description(crawl_json_records):\n \"\"\"Produces information about a JSON object containing crawl data.\n \n Args:\n crawl_json_records (list): A list of JSON objects such as the\n crawl_json variable above.\n \n Returns:\n 2-tuple: An (int, set) pair. The integer is the number of records\n in the given list. The set is a set (constructed with\n Python's set() function) of strings. It contains all the\n field names that appear at the top level of any record\n in crawl_json_records.\n \"\"\"\n ...\n # Feel free to erase the next line. It just demonstrates how\n # to return a tuple. You'll have to replace the variable names\n # with whatever you define in this function.\n return (num_records, possible_fields)\n\n_ = ok.grade('q01')\n_ = ok.backup()\n\n# Display results\nn, keys = json_description(crawl_json)\nprint(\"Number of records:\", n)\nprint(\"Keys in the records:\", keys)",
"Question 2\nWhat is the granularity of the dataset as represented in crawl.json? Write a one to two sentence description in the cell below:",
"# Use this cell for your explorations.\nq2_answer = r\"\"\"\n\nPut your answer here, replacing this text.\n\n\"\"\"\n\ndisplay(Markdown(q2_answer))",
"Question 3\nIt will be more convenient to work with the data in a Pandas DataFrame with a rectangular format. Fill in the function make_crawls_dataframe in the cell below. Then run the cell below that to create the table crawls.",
"def make_crawls_dataframe(crawl_json_records):\n \"\"\"Creates a Pandas DataFrame from the given list of JSON records.\n \n The DataFrame corresponds to the following relation:\n\n crawls(primary key (url, hour), updated)\n\n Each hour in which a crawl happened for a page (regardless of\n whether it found a change) should be represented. `updated` is\n a boolean value indicating whether the check for that hour found\n a change.\n\n The result is sorted by URL in ascending order and **further**\n sorted by hour in ascending order among the rows for each URL.\n \n Args:\n crawl_json_records (list): A list of JSON objects such as the\n crawl_json variable above.\n \n Returns:\n DataFrame: A table whose schema (and sort order) is described\n above.\n \"\"\"\n ...\n\n# Run this cell before you continue.\ncrawls = make_crawls_dataframe(crawl_json)\n\n_ = ok.grade('q03')\n_ = ok.backup()",
"Question 4\nThere are other reasonable ways to represent the data in a relational database (or in DataFrames). One alternate schema uses 2 relations to represent the data. The first relation is:\ncrawls_that_found_changes(primary key (url, hour))\n\nThe primary key for the second relation is url. Define a schema for that 2nd relation. Your schema should ensure that the two resulting tables could be used to answer any question that could be answered using crawls.",
"# Use this cell for your explorations. Write the schema\n# in text in the string below.\nq4_answer = r\"\"\"\n\nPut your answer here, replacing this text.\n\n\"\"\"\n\ndisplay(Markdown(q4_answer))",
"Question 5\nIn the following we will construct visualizations to understand several key quantities of interest. The following cell constructs some key summary statistics that we will be using in the next few questions.",
"crawl_stats = (\n crawls['updated']\n .groupby(crawls.index.get_level_values('url'))\n .agg({\n 'number of crawls': 'count', \n 'proportion of updates': 'mean', \n 'number of updates': 'sum'\n })\n)",
"Part 1:\nWhat was the distribution of the number of crawls for each page? Did most get crawled all 719 times? (For this and the other parts of this question, create a visualization to answer the question.)",
"...\n\n# Leave this for grading purposes\nq5_p1_plot = plt.gcf()",
"Part 2\nWhat was the distribution of the number of positive checks for each page?",
"...\n\n# Leave this for grading purposes\nq5_p2_plot = plt.gcf()",
"Part 3\nWhat is the relationship between the number of crawls for each page and the number of positive checks? Construct a scatter plot relating these two quantities.",
"...\n\n# Leave this for grading purposes\nq5_p3_plot = plt.gcf()",
"Part 4\nIn 2 or 3 sentences, describe what you discovered in your initial explorations.",
"q5_p4_answer = r\"\"\"\n\nPut your answer here, replacing this text.\n\n\"\"\"\n\ndisplay(Markdown(q5_p4_answer))",
"Making a timeline from one site\nIt will be useful to be able to look at timelines of positive checks or changes for sites. The function display_points, defined below, will help.",
"def display_points(points, xlim, title):\n \"\"\"Displays a timeline with points on it.\n \n Args:\n points (ndarray): A list of floats in the range [xlim[0], xlim[1]],\n each one a point to display in the timeline.\n xlim (list-like): A list/tuple/array with 2 elements giving the\n start and end of the timeline.\n title (str): The title to display on the plot.\n \n Example:\n >>> # plot the points at [1,3,30,50]\n >>> display_points([1,4,30,50], [0, 75], \"Example\")\n \"\"\"\n fig, ax = plt.subplots(1)\n fig.set_size_inches(8, 1)\n ax.scatter(points, np.repeat(0, len(points)), alpha=0.5)\n ax.axhline(0, color=\"grey\", zorder=-1, lw=.5)\n ax.yaxis.set_visible(False)\n ax.xaxis.set_visible(True)\n ax.set_xlim(xlim)\n ax.set_title(\"{} ({:d} total points)\".format(title, len(points)))\n plt.show()",
"Example Usage",
"display_points([1,4,30,50], [0, 75], \"Example\")",
"Question 6\nWe want to understand the behavior of page changes in order to determine how often a page should be visited. To do this, we need more than summary statistics of the 1000 sites. We also need to examine the patterns of positive checks for sites. Let's examine when the positive checks occurred for a handful of sites. The display_points function is helpful here.\nWe selected a variety of sites - some changed frequently, others rarely, and some had only a few successful crawls. Their URLs are available in pages_to_display. Use display_points to examine each page. You should notice that the visualization is not very informative in the last 3 cases.",
"# This cell identifies a few categories of pages and \n# associates a label in the 'Desc' column of crawl_stats\n\ncrawl_stats['Desc'] = \"\" # default blank description\n\n# Normal pages have a moderate number of updates and \n# were successfully crawled most times.\ncrawl_stats.loc[\n (crawl_stats['proportion of updates'] > 0.1)\n & (crawl_stats['proportion of updates'] < 0.9)\n & (crawl_stats['number of crawls'] >= 700), \n 'Desc'] = 'Normal'\n\ncrawl_stats.loc[\n (crawl_stats['proportion of updates'] < .1)\n & (crawl_stats['number of crawls'] >= 700), \n 'Desc'] = 'Rare Update'\n\ncrawl_stats.loc[\n (crawl_stats['proportion of updates'] > .9)\n & (crawl_stats['number of crawls'] >= 700), \n 'Desc'] = 'Frequent Update'\n\ncrawl_stats.loc[\n crawl_stats['number of crawls'] < 50, \n 'Desc'] = 'Few Crawls'\n\n\n# Build a dataframe with a few examples from each type of webpage\nnum_of_each = 3\npages_to_display = pd.concat([\n crawl_stats[crawl_stats['Desc'] == \"Normal\"].head(num_of_each),\n crawl_stats[crawl_stats['Desc'] == 'Rare Update'].head(num_of_each),\n crawl_stats[crawl_stats['Desc'] == 'Frequent Update'].head(num_of_each),\n crawl_stats[crawl_stats['Desc'] == 'Few Crawls'].head(num_of_each),\n])\n\n# Construct the timeline point diagrams for each example\nfor (url, desc) in pages_to_display['Desc'].iteritems():\n crawls_for_site = crawls.loc[url]\n display_points(\n crawls_for_site[crawls_for_site['updated']].index,\n [0, len(crawls_for_site)], desc)",
"Examining the Times of Positive Checks\nConsider what you learned about the positive checks in the previous question. Do these occur at regular intervals? Or do they appear to occur more randomly in time? Write a 2 to 3 sentence explanation of what you see:",
"# Use this cell for your explorations.\nq6_answer = r\"\"\"\n\nPut your answer here, replacing this text.\n\n\"\"\"\n\ndisplay(Markdown(q6_answer))",
"Understanding this distribution of positive checks will help us determine the distribution of changes. However, determining this distribution is difficult to do if we do not crawl the page sufficiently often. (We will soon see that it is also hard to do if the site changed too many times.) For these reasons, we will focus on \"normal\" pages, i.e., pages that were successfully crawled at least 700 times and that had between 50 and 200 updates.\nDo the positive checks occur uniformly at random? Let's compare the observed distribution of the positive checks to the uniform distribution for a few of our pages. \n\nQuestion 7\nTo compare the data distribution to a hypothetical probability distribution, we can compare the histogram of the data to the probability density function. However, a more effective comparison is to compare the quantiles of the observed distribution to those of the probability distribution. \nThe $q^{th}$ quantile of the data, for $0 < q < 1$ is that point $x_q$ such that at least $q$ of the observations are at or below $x_q$ and at least $1-q$ of the observations are at or above $x_q$. For example, the median is the 0.5 quantile and the lower and upper quartiles are the 0.25 and 0.75 quantiles.\nThe $q^{th}$ quantile of a continuous probability distribution, for $0 < q < 1$ is that point $x_q$ such that \n$P(X \\leq x_q) = q$ and $P(X \\geq x_q) = 1-q$. The quantiles of a uniform distribution are easy to compute. For example, for a Uniform$(0, N)$ distribution, we have $x_q = qN$.\nIf the data come from the same probability distribution, then their quantiles should roughly match. If we make a scatter plot of pairs of the empirical and theoretical quantiles, like these 3 points:\n(lower quartile of the data, lower quartile of the pdf),\n(median of data, median of the pdf),\n(upper quartile of the data, upper quartile of the pdf)\n\n...the points should fall roughly on a line.\nSuch a plot is called a quantile-quantile or Q-Q plot. 
For the websites with Desc = 'Normal' websites in pages_to_display, make a Q-Q plot for that site's updates, as follows:\n\nLet $N$ be the number of crawls for this page and $n$ be the number of positive checks.\nOrder the positive check values for a site from smallest to largest. We use these as the quantiles. That is, the $k^{th}$ smallest observation is the $k/(n+1)$-quantile.\nCompute the $n$ corresponding quantiles for the uniform distribution on $(0, N)$. These are simply\n$$k \\times \\frac{N}{n+1},$$\nfor $k = 1, \\ldots, n$. (Why?)\nPlot the $n$ pairs of quantiles in a scatter plot.\n\nPart 1",
"def compute_q_q_pairs(url):\n \"\"\"Computes lists of uniform-distribution-quantiles and data-quantiles for a page.\n \n Args:\n url (str): The URL of a page.\n \n Returns:\n 2-tuple: A pair (AKA a 2-tuple). Both components are lists or arrays of\n length n, where n is the number of positive checks for the page.\n The first component contains quantiles of a uniform distribution,\n and the second contains quantiles of the distribution of positive\n check times for the page. The kth element of a component is the\n (k+1)/(n+1)-th quantile of its distribution or dataset.\n \n The first component of the pair contains quantiles of the uniform\n distribution on [0, N], where N is the number of checks performed\n for the page.\n \"\"\"\n ...\n obs_q = ...\n pdf_q = ...\n \n return (pdf_q, obs_q)\n\n_ = ok.grade('q07')\n_ = ok.backup()",
"The following code constructs the final Q-Q plot for the \"Normal\" sites defined above",
"for url in pages_to_display[pages_to_display['Desc'] == \"Normal\"].index.values:\n print(url)\n (pdf_q, obs_q) = compute_q_q_pairs(url)\n sns.regplot(pdf_q, np.array(obs_q))\n plt.xlabel(\"True uniform quantile\")\n plt.ylabel(\"Actual quantile\")\n plt.show()",
"Part 2\nDescribe your findings.",
"# Use this cell for your explorations.\nq7_p2_answer = r\"\"\"\n\nPut your answer here, replacing this text.\n\n\"\"\"\n\ndisplay(Markdown(q7_p2_answer))",
"Question 8\nEven if the updates were distributed uniformly, the Q-Q plot will not appear as exactly a straight line. To get a sense of what a uniform-quantile plot might look like if the data were truly distributed according to the uniform distribution, simulate $n$ observations from the uniform distribution on the interval (0, 719) and make a uniform-quantile plot for these simulated data. $n$ is defined in the next cell; write your code in the cell below that.",
"url = 'http://a.hatena.ne.jp/Syako/simple'\nn = np.count_nonzero(crawls.loc[url]['updated'])\n\n...\n\n# Leave this for grading purposes\nq8_plot = plt.gcf()",
"How do the empirical quantile plots from the previous question compare to your simulated quantile plot? Optionally, we suggest looking at a few other sites and a few other simulated sets of data to see how well they match.",
"# Use this cell for your explorations.\nq8_answer = r\"\"\"\n\nPut your answer here, replacing this text.\n\n\"\"\"\n\ndisplay(Markdown(q8_answer))",
"Estimating the Update Rate: A Simple Approach\nHow would you estimate the change rate for a page? \nFor example, imagine that in 720 hourly visits to a page, we observed a change in the page on 36 visits. One estimate for the rate of changes is:\n$$\\frac {36\\text{ changes}}{720\\text{ hours}} = \\frac{1}{20} \\text{ changes per hour}$$\nThere is a small problem with our estimate. What if a page changes twice in an hour? We would see only one positive check. We do not observe the true number of changes that happened to a page in 30 days, but the number of hours that had a change. Think about how this could affect our interpretation of how often websites change.\nTo help answer this question, we use a probability model for the change behavior of a site and examine the impact of the incomplete observations on our simple estimate of a site's rate of change. \nA Model for Page Updates\nWhat model should we use?\nIn our earlier Q-Q plot analysis we found that the updates appear to occur uniformly at random. \nHowever, the number of positive checks (and therefore the number of page changes) is not the same from one page to the next. That is, both the number of changes and the hours of the changes appear to be random.\nWhen events can happen independently at any point in a span of time, and they're equally likely to happen at any of the times, a good model for the number of events that happen is the Poisson distribution.\nThe Poisson distribution has a simple probability mass function, \n$$P(k) = \\frac{\\lambda^k}{k!} e^{-\\lambda},$$\nfor $k = 0, 1, 2, \\ldots$. The parameter $\\lambda$ is called the rate, and this is a Poisson$(\\lambda)$ distribution.\nFor example, if $\\lambda$ is the hourly rate of changes to a page, then the chance of $k$ changes in one hour is:\n$$P(k) = \\frac{\\lambda^k}{k!} e^{-\\lambda}.$$\nIf we count the number of changes over $N$ hours, then the number of changes in this time period has \na Poisson$(N \\lambda)$ distribution. 
That is, \n$$P(k \\texttt{ updates in }N \\texttt{ hours}) = \\frac{(N\\lambda)^k}{k!} e^{-(N\\lambda)}.$$\n\nQuestion 9\nSuppose that we observe $n$ changes for a site that was visited for $N$ hours. If the number of changes follows the Poisson distribution, show that $\\frac{n}{N}$ is the MLE for $\\lambda$. In fancy notation, we could call this $\\lambda^{\\text{MLE}}$. (Note that we are currently ignoring the problem of not being able to detect multiple changes in an hour.)\nNote: It's okay to write your answer in plain text. If you know $\\LaTeX$ or would like to learn it, now is a good time to try it out. (If you double-click on this cell or some of the cells above, you can see some $\\LaTeX$ examples.)",
"q9_answer = r\"\"\"\n\nPut your answer here and delete these two sentences. Some\nsteps are already filled in to get you started; the '...'\nindicate where you need to fill in one or more lines.\n\n**Step 1.** The probability of the data given $\\lambda$ is:\n\n$$L(\\lambda) = e^{-(\\lambda N)} \\frac{(\\lambda N)^{n}}{n!}$$\n\n...\n\nTherefore,\n\n$$\\lambda^{\\text{MLE}} = ...$$\n\n\"\"\"\n\ndisplay(Markdown(q9_answer))",
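As an optional sanity check of the Question 9 derivation (our own sketch, not part of the assignment), one can evaluate the Poisson log-likelihood on a grid and confirm that it peaks near $n/N$:

```python
import numpy as np

n, N = 36, 720  # the example counts used earlier in this notebook

def log_likelihood(lam):
    # log of e^{-lam*N} * (lam*N)^n / n!, dropping the constant -log(n!)
    return -lam * N + n * np.log(lam * N)

grid = np.linspace(0.001, 0.2, 2000)
best = grid[np.argmax(log_likelihood(grid))]
print(best)  # the grid maximiser should land very close to n / N = 0.05
```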
"Question 10\n\n\nAdd a 'simple mle' column to the crawl_stats table containing the 'simple mle' estimator we derived earlier for each website.\n\n\nThen, make a plot that displays the distribution of these MLEs for the sites with at least 700 crawls.",
"...\n\n# Leave this at the end so we can grab the plot for grading\nq10_plot = plt.gcf()\n\n_ = ok.grade('q10')\n_ = ok.backup()",
"The Impact of Hourly Observations\nThe histogram that you made for previous problem has a small mode at 1. Why is this? It is not possible for our estimate of the rate of changes to be greater than once an hour because we only observe the page once an hour. So if the rate of changes is large, we are likely to underestimate it. Let's try to assess the impact of this.\nWe will carry out a Monte Carlo (repeated, randomized simulation) study of the process that generates our observations. To perform such a study, we need a model for how the checks came out. For that, we need a model for the page changes themselves.\nWe previously said that the number of changes follows a Poisson distribution with a certain rate for each page. We also saw that the positive checks are distributed roughly uniformly. It seems reasonable to assume that the changes themselves are also distributed uniformly.\nWhen the number of events follows the Poisson distribution and the location of the events (along the time interval) follows the uniform distribution, then we have what is called a Poisson process. This is a random object that's a set of numbers rather than just one number.\nTo perform our Monte Carlo study, for various values of $\\lambda$ we will generate the simulated changes according to the Poisson process distribution and then reduce the changes to the hour of the change. More specifically, we can simulate the data as follows:\n\nGenerate $M$, the number of updates, by drawing from the Poisson(720$\\lambda$) distribution.\nPlace each of the $M$ updates uniformly at random on the interval (0, 719). \n\"Snap\" each update to the next hour. For example, a 3:15 update time becomes 4:00, which is the time when we would have observed it. (What do you do when more than one update occurs within an hour?)\n\n\nQuestion 11\nFollowing the above description of the Poisson process, complete the function sample_poisson_process.\nHint: The function np.random.poisson will be useful.",
"def sample_poisson_process(rate, length):\n \"\"\"Generates n points from Poisson(rate*length) and locates them\n uniformly at random on [0, length].\n \n Args:\n rate (float): The average number of points per unit length.\n length (float): The length of the line segment.\n \n Returns:\n ndarray: An array of points scattered randomly on [0, length].\n The number of points has a Poisson(rate*length)\n distribution.\n \"\"\"\n ...\n\n_ = ok.grade('q11')\n_ = ok.backup()",
"The snap_times function in the next cell will help you simulate the hourly observations from the crawler.",
"def snap_times(update_times, window_length, process_length):\n \"\"\"Given a list of change times, produces a list of the windows that detected a\n change (that is, where at least one change happened inside the window).\n \n This has the effect of 'snapping' each change to the next hour (or whatever the\n window_length is). For periods where more than one change happened, the output\n will still only list the period once.\n \n In other words, it produces a list of the positive checks given a list of true\n change times.\n \n Args:\n update_times (ndarray): A list of times when changes happened for a page.\n All times are between 0 and process_length.\n window_length (float): The width of each window. (For example, if time\n is being measured in hours, and an observation\n happens once per hour, then this should be 1.)\n process_length (float): The last time any change could have happened.\n \n Returns:\n ndarray: A list of numbers, each one the right endpoint of a window where at\n least one change happened.\"\"\"\n window_ends = np.arange(0, process_length, window_length) + window_length\n num_windows = len(window_ends)\n event_windows = np.floor(np.array(update_times) / window_length).astype(int)\n events_by_window = np.bincount(event_windows, minlength=num_windows)\n event_happened = events_by_window > 0\n return window_ends[event_happened]",
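For intuition, the core of the snapping step can be reproduced in two lines with `np.ceil` and `np.unique`. This is a simplified sketch of what `snap_times` does for unit-width windows (it differs from `snap_times` only for changes falling exactly on an hour boundary):

```python
import numpy as np

update_times = np.array([3.25, 3.75, 7.5, 23.01])  # hypothetical change times
# Snap each change to the end of its hour, dropping duplicates:
positive_checks = np.unique(np.ceil(update_times))
print(positive_checks)  # the 3:15 and 3:45 changes collapse into the single 4:00 check
```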
"Question 12\nUse the functions sample_poisson_process and snap_times to examine what happens when we visit hourly. Look at examples where\n* the length of time is 24 hours,\n* the rate is 1/8, 1/4, 1/2, 1, and 2 changes per hour, and\n* the window_length is 1.\nFor each value of rate, simulate one set of changes on the interval (0 hours, 24 hours), and plot the resulting points on a timeline using display_points. Then snap these data to the next hour with snap_times and plot the resulting positive check times.\nThen, compare the actual change times to the censored times for the various rates. What happens as the rate changes? Do you think hourly visits to the site are a problem if the rate is 1/8? How about 2?",
"# Use this cell for your explorations, then write your conclusions\n# in the string below.\nq12_answer = r\"\"\"\n\nPut your answer here, replacing this text.\n\n\"\"\"\n\ndisplay(Markdown(q12_answer))",
"Simulation study of the rate estimate\nNow let us extend our Monte Carlo study to compute the MLE from the observed positive checks. Since we know the true change rate in the simulations, we can compare the MLE to the true change rate and see how accurate it is as an estimator. (We couldn't have done that with our real data, since we don't know the true change rates.)\n\nQuestion 13\nSuppose a website is visited every hour for 30 days, so it has 719 visits. For a particular rate $\\lambda$, \n\nGenerate the observed positive checks with our earlier functions (sample_poisson_process and snap_times).\nCalculate the MLE from these simulated data.\nRepeat 1000 times.\n\nThis average estimate will change as we increase the true rate of changes, so vary $\\lambda$ between 0 and 4 and make a plot showing the estimate as a function of $\\lambda$. For comparison, show the true change rates in your plot as well.",
"def simulate_censored_rate_estimate(true_rate, num_visits):\n \"\"\"Simulates updates to a page and visits by a web crawler, and\n returns the proportion of visits in which an update was observed.\n \n Args:\n true_rate (float): The average number of updates per unit length\n of time. (The units are irrelevant for the\n purposes of this function, but you can imagine\n that they are hours.)\n num_visits (float): The number of visits made to the site. One\n visit is made per unit length of time, so\n this is also equal to the duration of time\n simulated.\n \n Returns:\n float: the MLE for true_rate lambda, based on the number of \n observed positive checks.\n \"\"\"\n # The skeleton here is provided for your convenience; you\n # don't have to follow it.\n draws = ...\n windows = ...\n return ...\n",
"The following helper function can be used to simulate many rate estimates using the function you just completed.",
"def simulate_many_rate_estimates(true_rate, num_visits):\n return np.mean([simulate_censored_rate_estimate(true_rate, num_visits) \n for _ in range(1000)])",
"The following code will simulate rate estimates for various values of $\\lambda$ (this cell may take up to 1 minute to run):",
"num_visits = 719\nrates = list(np.arange(0, 4, .2))\nestimates = [simulate_many_rate_estimates(r, num_visits) for r in rates]",
"Finally, the following code will plot the estimated values for $\\lambda$ against their true values. You will need this plot to answer the next question.",
"plt.plot(rates, estimates, label = 'average estimate');\nplt.plot([rates[0], rates[-1]], [rates[0], rates[-1]], \n linestyle='dashed', color='g', label='true rate')\nplt.xlabel(\"True rate of updates per hour\")\nplt.ylabel(\"Average estimate of the rate of updates per hour\")\nplt.legend();\n\n_ = ok.grade('q13')\n_ = ok.backup()",
"Question 14\nPart 1\nExplain why the estimated rates seem to level off at 1.",
"q14_p1_answer = r\"\"\"\n\nPut your answer here, replacing this text.\n\n\"\"\"\n\ndisplay(Markdown(q14_p1_answer))",
"Part 2\nHow far off is the estimate from the truth for $\\lambda$ less than 0.25?",
"q14_p2_answer = r\"\"\"\n\nPut your answer here, replacing this text.\n\n\"\"\"\n\ndisplay(Markdown(q14_p2_answer))",
"Estimating the update rate: An improved approach\nOur simulation study has shown us that our MLE estimate for the rate is biased. It systematically underestimates the quantity we're trying to estimate, the rate. This bias is small for small $\\lambda$, but can we eliminate it?\nWe can recast the problem slightly by reconsidering our observations. Although the possible times of a page changes may be continuous, we only observe the data hourly and we only observe whether there was at least one update in that hour. Does this remind you of a distribution that you have worked with?\nWhat if we consider the data for a page as 719 0-1 values, where 0 indicates no change and 1 indicates at least 1 change in the hour? Do you see why these 0-1 values are equivalent to our representation of the data as positive changes? For example, if N is 10 and we observe changes at hours 3, 7, and 9 then this information is equivalent to the sequence 0,0,1,0,0,0,1,0,1,0. \nThe distribution of the 0-1 values is Bernoulli(p). Under our assumption that the changes come from a Poisson process, their values are independent.\nThis distribution incorporates the censoring of the data. How can we estimate the rate? We can use the Poisson distribution to compute the chance of a 0 or 1, which depends on the rate. Then we use the Bernoulli distribution to find the MLE of $\\lambda$.\n\nQuestion 15\nWhat is the chance that a Poisson($\\lambda$) random variable is equal to 0? What is the chance that it's greater than or equal to 1?",
"q15_answer = r\"\"\"\n\nPut your answer here, replacing this text.\n\n\n\"\"\"\n\ndisplay(Markdown(q15_answer))",
"To check your answers, fill in the numerical values of each probability, for $\\lambda=0.5$",
"# The probability that a Poisson(0.5) random variable is equal to 0\nProb_pois_half_equals_0 = ...\n# The probability that a Poisson(0.5) random variable is greater than or equal to 1\nProb_pois_half_gte_1 = ...\n\n_ = ok.grade('q15')\n_ = ok.backup()",
"Question 16\nWith this modified model, the MLE of $\\lambda$, given $n$ updates in $N$ visits for a site, is:\n$$\\lambda^* = \\log \\left(\\frac{n}{N - n} + 1 \\right)$$\nShow that that's true.\nHint: If you find an expression like $\\frac{e^{-\\lambda}}{1 - e^{-\\lambda}}$, it often helps to convert such an expression to $\\frac{1}{e^{\\lambda} - 1}$ by multiplying its numerator and denominator by $e^{\\lambda}$.",
"q16_answer = r\"\"\"\n\nPut your answer here, replacing this text.\n\n\n\"\"\"\n\ndisplay(Markdown(q16_answer))",
"What happens when we observe a change every hour? Then, our MLE $\\lambda^*$ takes the value $\\infty$. \nWhy is that true? Intuitively, if there is a change on each visit, we can always increase the likelihood of our data (which is the likelihood that every hour saw at least one change) by increasing $\\lambda$.\nAn MLE of $\\infty$ is clearly problematic. We can use the idea of Laplace smoothing to modify the estimator a bit. We'll use the following \"modified MLE\" estimator, which adds to every site a single imagined visit where \"half\" saw a change and the other \"half\" didn't:\n$$\\lambda^+ = \\log \\left(\\frac{n + .5}{N - n + .5} + 1 \\right)$$\nYou can imagine that this estimate incorporates a prior conservative intuition that a site might not change all the time even if it happened to change every hour in the particular 30-day period we observed.\n\nQuestion 17\n\n\nAdd a 'modified mle' column to the crawl_stats table containing the 'modified mle' estimator $\\lambda^+$ for each website.\n\n\nThen, make a histogram that displays the distribution of the modified MLEs for the sites with at least 700 crawls.\n\n\nHint: You may find your work in question 10 helpful.",
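A quick numerical check of the two formulas above (the function names are ours): the plain MLE $\lambda^* = \log(n/(N-n)+1)$ is algebraically the same as $-\log(1 - n/N)$, and the modified MLE stays finite even when every visit saw a change.

```python
import math

def mle(n, N):
    # lambda* = log(n / (N - n) + 1); equal to -log(1 - n/N)
    return math.log(n / (N - n) + 1)

def modified_mle(n, N):
    # Laplace-smoothed version: lambda+ = log((n + .5) / (N - n + .5) + 1)
    return math.log((n + 0.5) / (N - n + 0.5) + 1)

n, N = 300, 719
# the two algebraic forms of the plain MLE agree
print(abs(mle(n, N) - (-math.log(1 - n / N))))  # ~0

# when n == N the plain MLE diverges, but the modified MLE is finite
print(modified_mle(N, N))  # finite, roughly 7.27
```

This matches the intuition in the text: the half-success "imagined visit" keeps the denominator away from zero.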
"...\n\n_ = ok.grade('q17')\n_ = ok.backup()",
"Notice that this distribution is quite different than the earlier one we found in question 10. Now there is not a pile up of estimates at 1. \nHow accurate are our estimates?\nWe don't know the true update rates of the pages, so we can't know exactly how accurate our estimates are. But this is often the case in data science. Let's try to figure out a reasonable way to estimate the accuracy of our estimates.\nThe strategy is similar to the bootstrap. In pseudocode:\nfor each page $i$ :\n... Assume the true change rate $r_i$ equals our estimate $\\lambda^+_i$.\n... for _ in range(num_simulations):\n... Simulate changes and positive checks.\n... Compute a new maximum likelihood estimate of $r_i$ based on the simulated data.\n... Plot these estimates in a histogram or summarize them in some other way.\nThis is called a parametric bootstrap method, because instead of resampling directly from our data, we use our data to estimate its distribution's parameters and then sample from that distribution.\n\nQuestion 18\nComplete the definitions of simulate_change_rate_estimate and modified_mle in the cell below.",
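The parametric-bootstrap pseudocode above can be illustrated on a deliberately simpler estimator. This generic sketch is ours; it bootstraps the mean of a Poisson sample rather than the page-change rate, so it is not the answer to Question 18 — only the resample-from-the-fitted-distribution pattern is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are our observed data; the estimator is the sample mean.
observed = rng.poisson(lam=2.0, size=200)
lam_hat = observed.mean()

# Parametric bootstrap: resample from Poisson(lam_hat), not from the data.
boot_estimates = [rng.poisson(lam=lam_hat, size=observed.size).mean()
                  for _ in range(1000)]

# The spread of boot_estimates approximates the sampling error of lam_hat.
print(np.std(boot_estimates))
```

Note the key difference from the ordinary bootstrap: we sample from the fitted distribution, not by resampling the observed values.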
"def simulate_change_rate_estimate(change_rate, num_observations, estimator):\n \"\"\"Simulates hourly change observations for a website and produces an\n estimate of the change rate.\n \n Args:\n change_rate (float): The hourly change rate of the website to be used in\n the simulation.\n num_observations (int): The number of observations (equivalently, the\n number of hours) to simulate.\n estimator (func): A function to apply to the simulated observations to\n produce an estimate of the change rate. Has the same\n signature as the function modified_mle below.\n \n Returns:\n float: The estimate produced by calling estimator on the simulated\n list of positive checks.\n \"\"\"\n ...\n return \n\n\ndef modified_mle(positive_checks, num_observations):\n \"\"\"Produces the modified MLE for a dataset.\n \n Args:\n positive_checks (ndarray): A list or array of the hours when a\n change was observed.\n num_observations (int): The number of hours in which we could\n have observed a change.\n \n Returns:\n (float): The modified MLE.\n \"\"\"\n ...\n return \n\n_ = ok.grade('q18')\n_ = ok.backup()",
"Question 19\nComplete the definition of plot_bootstrap_rate_estimates in the cell below. Read the docstring carefully so you know what to do. Then run the provided cell below that to plot the bootstrap rate estimate distributions for a few pages.",
"def plot_bootstrap_rate_estimates(page_url, num_simulations):\n \"\"\"Simulates positive check observations for a website many times. For\n each simulation, the page's change rate is estimated based on the simulated\n positive checks. The estimation method is the modified_mle function\n implemented above. Then this function produces a histogram that shows\n the distribution of these estimated change rates.\n \n When conducting each simulation, the change rate is taken to be the\n estimated update rate for the page. *That* estimate is the modified\n MLE computed from the actual positive check observations for that page.\n \n The modified MLE computed from the actual observations for the page\n is also displayed as a vertical red line at an appropriate horizontal\n position on the histogram.\n \n Here is a diagram that might be helpful:\n \n (This happens 1 time)\n |\n v\n Positive check observations for this page ---modified_mle---> Estimate of change rate for this page\n \n (1000x, once per simulation)\n Estimate of change rate for this page ---simulation---> Simulated positive checks\n \n (1000x, once per simulation)\n Simulated positive checks ---modified_mle---> Bootstrap estimate of change rate for this page\n \n 1000 bootstrap estimates of change rate for this page ---> Histogram\n ^\n (displayed somewhere) |\n Estimate of change rate for this page --------------------/\n \n Args:\n page_url (str): The URL of the page to simulate.\n num_simulations (int): The number of simulations to run.\n \n Returns:\n None: This function doesn't return anything; it just makes a histogram\n appear as described above.\n \"\"\"\n ...\n\n# Run this cell to make plots for several pages using your\n# function. 
Since no automatic tests are provided for this\n# question, we suggest examining the plots to make sure they\n# make sense to you.\nmany_updates_url = crawl_stats[np.logical_and(\n    crawl_stats['number of crawls'] >= 700,\n    np.logical_and(\n        .2 <= crawl_stats['proportion of updates'],\n        crawl_stats['proportion of updates'] <= .8))].index[0]\nplot_bootstrap_rate_estimates(many_updates_url, 1000)\n\nfew_updates_url = crawl_stats[np.logical_and(\n    crawl_stats['number of crawls'] >= 700,\n    np.logical_and(\n        .05 <= crawl_stats['proportion of updates'],\n        crawl_stats['proportion of updates'] <= .15))].index[0]\nplot_bootstrap_rate_estimates(few_updates_url, 1000)",
"Looking at the error distribution is good, but it's also useful to characterize error with a single number. (This facilitates comparisons among different estimators, for example.)\nA common way to summarize error in estimation is the root mean squared error, or RMSE. The RMSE is the square root of the MSE, or mean square error. We calculate the MSE as follows:\n\nthe average (across many simulations, where each simulation results in 1 estimate) of\nthe squared difference between the estimate and its actual value.\n\nWorking with the RMSE, rather than the MSE, is useful when we want to understand the magnitude of error relative to the estimates.\nWe don't know the true change rate for each website, so we can't compute the true MSE. However, if we find the MSE using the data making up the histogram you produced in the previous question, using the original estimate (the red line) as the actual change rate, that is a bootstrap estimate of the RMSE.\nQuestion 20\n\nComplete the definition of estimate_rmse in the cell below. Then, use it to estimate the RMSE for each page's modified MLE. Add these RMSEs as a column named 'rmse' in crawl_stats.\nNote: There are 1000 pages, so this will take some time. We recommend using only 100 simulations per page, which should be sufficient for reasonable estimates. Try your code on a single page before running it on every page.",
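As a concrete (made-up) example of the RMSE calculation described above:

```python
import math

def rmse(estimates, truth):
    # mean of the squared differences, then the square root
    return math.sqrt(sum((e - truth) ** 2 for e in estimates) / len(estimates))

# Two simulated estimates of a quantity whose "true" value is 4:
print(rmse([3, 5], truth=4))  # 1.0
```

In the bootstrap version, `truth` is replaced by the modified MLE computed from the actual observations (the red line in the previous histograms).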
"def estimate_rmse(page_url, num_simulations):\n \"\"\"Simulates update observations for a website many times. For each\n simulation, the page's change rate is estimated based on the simulated\n observations. (The estimation method is the modified MLE.) Then this\n function produces an estimate of the RMSE of estimates of the change\n rate for this page.\n \n When conducting each simulation, the change rate is taken to be the\n estimated change rate for the page. *That* estimate is the modified\n MLE computed from the actual observations for the page.\n \n We compute the modified MLE for each set of simulated observations. That\n constitutes num_simulations estimates of the change rate.\n \n Then we compute the RMSE of those estimates. The \"true\" change rate in\n that calculation is taken to be the modified MLE computed from the\n actual observations for the page.\n \n Args:\n page_url (str): The URL of the page to simulate.\n num_simulations (int): The number of simulations to run.\n \n Returns:\n float: The estimated RMSE of the modified MLE for the given page,\n based on num_simulations simulations.\n \"\"\"\n ...\n \nprint(estimate_rmse(many_updates_url, 1000))\nprint(estimate_rmse(few_updates_url, 1000))\n\n# After completing estimate_rmse above, estimate the RMSEs for all\n# the pages here, and add them to crawl_stats as a new column named\n# 'rmse'.\n...\n\n_ = ok.grade('q20')\n_ = ok.backup()",
"Question 21\nCreate a visualization to display the RMSEs you computed. Then, create another visualization to see the relationship (across the 1000 pages) between RMSE and the modified MLE.",
"...",
"Is the model reasonable?\nAll the foregoing work has assumed a particular probability model for website updates: the number of changes follows a Poisson($\\lambda$) distribution and the locations of the changes, given the number of changes, follows a uniform distribution. Like most models, this is certainly imperfect. However, it still may produce reasonable estimates. Let's check some of the conditions of the model to see if they are reasonable.\nEmbedded in our model is the assumption that the rate of changes for a page is constant over time. If we find a strong pattern in the rates of updates, that is evidence against this assumption. For example, you might guess that changes for certain pages might happen more often at certain times of the day, or on certain days of the week.\nBefore you group by day of week, here are some difficulties presented by the dataset:\n\nThe day of the week is not given.\nThe hour of the day is not given.\nWe don't know when the checks started.\nWhen a site had a failed check, the check was just omitted from the dataset, so we don't know when the failed checks happened, only their total count for each page.\nNot all sites are in the same or nearby time zones.\n\nHere is what we do know:\n\nChecks for all pages started at the same time, at midnight PST (US Pacific Coast time).\nChecks were attempted every hour for 30 days, and then they stopped.\nWe know each page's URL. A URL may tell us something about its geography and therefore its rough time zone. For example, the top-level domains .edu, .com,.gov, and .net include only US sites.\n\n\nQuestion 22\nPropose a plan to answer one of these questions:\n\nHow much did the rate of changes vary by hour of the day?\nHow much did the rate of changes vary by day of the week?\n\nOr, you may choose your own pattern to test.\nYour plan doesn't need to produce results that hold for all pages, only a reasonably-large subset. 
Be sure to make clear exactly what question your analysis would answer.\nNote: You don't need to actually implement your plan!",
"# Feel free to use this cell to experiment. Then write your answer\n# in the string below.\nq22_answer = r\"\"\"\n\nPut your answer here, replacing this text.\n\n\"\"\"\n\ndisplay(Markdown(q22_answer))",
"Submitting your assignment\nCongratulations, you're done with this homework!\nRun the next cell to run all the tests at once.",
"_ = ok.grade_all()",
"Finally, run the next cell to submit the assignment to OkPy so that the staff will know to grade it. You can submit as many times as you want, and you can choose which submission you want us to grade by going to https://okpy.org/cal/data100/sp17/. After you've done that, make sure you've pushed your changes to Github as well!",
"_ = ok.submit()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dennisproppe/fp_python
|
fp_lesson_3_monads.ipynb
|
apache-2.0
|
[
"Monads\nMonads are the most feared concept in FP, so I reserve a complete chapter for understanding them.\nWhat is a monad?\nRight now, my understanding is that a monad is a very flexible concept that allows us to attach context to an otherwise stateless system. Through a monad, the application of an otherwise pure function can be made dependent on context, so that the same function is executed differently in different contexts.\nAn easy example: The maybe monad\nWe will start with an easy example: let's assume we have the task of looking up a street name from a company record. If we did it the normal, non-functional way, we'd have to write functions that look up these records and check that the results are not None:\nThis example is heavily inspired by https://unpythonic.com/01_06_monads/\nThe following is a simple Company class, where the address attribute is a dict containing the detailed address information.",
"class Company():\n def __init__(self, name, address=None):\n self.address = address\n self.name = name\n \n def get_name(self):\n return self.name\n \n def get_address(self):\n return self.address\n ",
"I now instantiate this class with a correctly set street attribute in the address dict. Then everything works well when we want to query the street address from this company:",
"cp1 = Company(name=\"Meier GmbH\", address={\"street\":\"Herforstweg 4\"})\n\ncp1.get_name()\n\ncp1.get_address()\n\ncp1.get_address().get(\"street\")",
"However, when we want to get the street name when the company doesn't have a street attribute, this lookup will fail and throw an error:",
"cp2 = Company(\"Schultze AG\")\n\ncp2.get_name()\n\ncp2.get_address().get(\"street\")",
"What we would normally do to alleviate this issue is write a function that deals with null values:",
"def get_street(company):\n    address = company.get_address()\n    if address:\n        # dict.has_key was removed in Python 3; use the `in` operator instead\n        if \"street\" in address:\n            return address.get(\"street\")\n        return None\n    return None\n\nget_street(cp2)\n\ncp3 = Company(name=\"Wifi GbR\", address={\"zipcode\": 11476})\n\nget_street(cp3)",
"We now see that we are able to complete the request without an error, returning None, if there is no address given or if there is no dict entry for \"street\" in the address.\nBut wouldn't it be nice to have this handled once and for all?\nEnter the \"Maybe\" monad!",
"class Maybe():\n def __init__(self, value):\n self.value = value\n\n def bind(self, fn):\n if self.value is None:\n return self\n return fn(self.value)\n\n def get_value(self):\n return self.value\n ",
"Now we can rewrite get_street as get_street_from_company, using two helper functions:",
"def get_address(company): \n return Maybe(company.get_address())\n\ndef get_street(address):\n return Maybe(address.get('street'))\n\ndef get_street_from_company(company):\n return (Maybe(company)\n .bind(get_address)\n .bind(get_street)\n .get_value())\n\nget_street_from_company(cp1)\n\nget_street_from_company(cp3)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
maurodoglio/taar
|
analysis/TAARExperimentV2Retention.ipynb
|
mpl-2.0
|
[
"import pyspark.sql.functions as F\nimport datetime as dt\nimport pandas as pd\nimport pyspark.sql.types as st\nimport matplotlib.pyplot as plt\nimport seaborn\nimport numpy as np\nimport statsmodels.api as sm\nfrom IPython.display import Markdown\n\nseaborn.set_style(\"whitegrid\")\nsc.setLogLevel(\"INFO\")\nudf = F.udf\n%matplotlib inline\n\nPERIODS = {}\nN_WEEKS = 12\nfor i in range(1, N_WEEKS + 1):\n PERIODS[i] = {\n 'start': i * 7,\n 'end': i * 7 + 6\n }\n\n\ndef date_plus_x_days(date, x):\n \"\"\"\n Returns a string date x days away from <date>\n \n Params:\n date (str): date in %Y%m%s format\n x (int) number of days to add to <date> (can be negative)\n \n >>> date_plus_x_days(\"20180101\", 1)\n \"20180102\"\n \n >>> date_plus_x_days(\"20180510\", -9)\n \"20180501\"\n \"\"\"\n new_date = dt.datetime.strptime(date, '%Y%m%d') + dt.timedelta(days=x)\n return new_date.strftime('%Y%m%d')\n\n\ndef date_diff(d1, d2, fmt='%Y%m%d'):\n \"\"\"\n Returns days elapsed from d2 to d1 as an integer\n\n Params:\n d1 (str)\n d2 (str)\n fmt (str): format of d1 and d2 (must be the same)\n\n >>> date_diff('20170205', '20170201')\n 4\n\n >>> date_diff('20170201', '20170205)\n -4\n \"\"\"\n try:\n return (pd.to_datetime(d1, format=fmt) - \n pd.to_datetime(d2, format=fmt)).days\n except:\n return None\n\n\n@udf(returnType=st.IntegerType())\ndef get_period(anchor, submission_date_s3):\n \"\"\"\n Given an anchor and a submission_date_s3,\n returns what period a ping belongs to. 
This is a spark UDF (see decoration).\n\n    Params:\n        anchor (col): anchor date\n        submission_date_s3 (col): a ping's submission_date to s3\n\n    Global:\n        PERIODS (dict): defined globally based on n-week method\n\n    Returns an integer indicating the retention period\n    \"\"\"\n    if anchor is not None:\n        diff = date_diff(submission_date_s3, anchor)\n        if diff >= 7: # exclude first 7 days\n            for period in sorted(PERIODS):\n                if diff <= PERIODS[period]['end']:\n                    return period\n\ndef get_retention(data):\n    branch_counts = (\n        data\n        .groupby(\"branch\")\n        .agg(F.countDistinct(\"client_id\").alias(\"total_clients\"))\n    )\n\n    weekly_counts = (\n        data\n        .groupby(\"period\", \"branch\")\n        .agg(F.countDistinct(\"client_id\").alias(\"n_week_clients\"))\n    )\n\n    retention_by_branch = (\n        weekly_counts\n        .join(branch_counts, on='branch')\n        .withColumn(\"retention\", F.col(\"n_week_clients\") / F.col(\"total_clients\"))\n    )\n\n    ret_df = retention_by_branch.toPandas()\n    ret_df.fillna(0, inplace=True)\n\n    return ret_df",
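The period bucketing implemented in the UDF above can be checked in plain Python, without Spark (this re-implementation is ours and takes the day difference directly):

```python
# Same construction as in the notebook: period i covers days [7i, 7i + 6].
PERIODS = {i: {'start': i * 7, 'end': i * 7 + 6} for i in range(1, 13)}

def get_period_plain(diff):
    """diff = days elapsed between the anchor (enrollment) date and the ping."""
    if diff >= 7:  # the first 7 days are excluded, as in the UDF above
        for period in sorted(PERIODS):
            if diff <= PERIODS[period]['end']:
                return period
    return None

print(get_period_plain(6))   # None (first week excluded)
print(get_period_plain(7))   # 1
print(get_period_plain(14))  # 2
print(get_period_plain(91))  # None (past week 12)
```

Day 7 through day 13 map to period 1, day 14 through 20 to period 2, and so on up to period 12.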
"Data Prep\nLoad in cleaned experiment data, generated from this notebook. Filter to clients that loaded the discopane (this matters for the control group, since we have already cross-referenced the TAAR logs). Clients who did not load the discopane never saw the control and would only add noise to the experiment.",
"S3_PATH = \"s3://net-mozaws-prod-us-west-2-pipeline-analysis/taarv2/cleaned_data/\"\nclean_data = sqlContext.read.parquet(S3_PATH).filter(\"discopane_loaded = true\")\n\nclean_data.groupby(\"branch\").agg(F.countDistinct(\"client_id\")).show()",
"Grab the min and max submission dates for filtering main_summary.",
"min_date = clean_data.select(F.min('submission_date_s3').alias('min_d')).collect()[0].min_d\nmax_date = clean_data.select(F.max('submission_date_s3').alias('max_d')).collect()[0].max_d\nprint min_date, max_date",
"Load in main_summary, filtered between the experiment's min date and 7 * N_WEEKS days (84 days) beyond its completion to allow for the 12-week retention analysis. We then join main_summary with the experiment data.",
"ms = (\n sqlContext.read.option(\"mergeSchema\", True)\n .parquet(\"s3://telemetry-parquet/main_summary/v4\")\n .filter(\"submission_date_s3 >= '{}'\".format(min_date))\n .filter(\"submission_date_s3 <= '{}'\".format(date_plus_x_days(max_date, 7*N_WEEKS)))\n .filter(\"normalized_channel = 'release'\")\n .filter(\"app_name = 'Firefox'\")\n)\n\n# a client's enrollment date is determined by their first appearance in the experiment\nenrollment_dates = (\n clean_data.groupby(\"client_id\", \"branch\")\n .agg(F.min('submission_date_s3')\n .alias(\"enrollment_date\"))\n)\n\n# join main_summary to exp data\njoined = enrollment_dates.join(ms.select(\"submission_date_s3\", \"client_id\"), on=\"client_id\", how='left')\n\n# verify join contains same number of distinct clients as the experiment data.\n# this also initializes our cache\njc = joined.select(\"client_id\").distinct().count()\ncc = clean_data.select(\"client_id\").distinct().count()\n\njc - cc",
"Calculate Retention Data\nPerform 12-week retention analysis based on this example. [1-12]-Week Retention are additionally included since we can get them at a low cost. We expand out to 12-week retention to better validate the data since the TAAR branches exhibit suspiciously similar retention values.",
"joined = (\n joined.withColumn(\"period\", get_period(\"enrollment_date\", \"submission_date_s3\"))\n .filter(\"enrollment_date <= '{}'\".format(max_date))\n).distinct().cache()\n\njoined.count()\n\nret_df = get_retention(joined)\n\nret_df.to_csv(\"taar_v2_retention.csv\", index=False)",
"Write to s3 since this job is quite expensive and should only be run once.",
"%%bash\naws s3 cp taar_v2_retention.csv s3://net-mozaws-prod-us-west-2-pipeline-analysis/taarv2/",
"Load processed Retention Data\nThis section loads the data generated above without having to re-run the entire notebook.",
"%%bash \naws s3 cp s3://net-mozaws-prod-us-west-2-pipeline-analysis/taarv2/taar_v2_retention.csv .\n\nret_df = pd.read_csv(\"taar_v2_retention.csv\")\nret_df.fillna(0, inplace=True)\n\nplt.rcParams['figure.figsize'] = (12, 6)\nfig, ax = plt.subplots()\nfor group, data in ret_df.groupby(\"branch\"):\n (data.sort_values(\"period\")\n .plot(x='period', \n y='retention', \n ax=ax, \n label=group))\nplt.ylabel(\"Retention\")\nplt.xlabel(\"Week (period)\")\nplt.title(\"1-12 Week Retention by Branch\")\nplt.show()\n\nret_df[ret_df.period >= 6.0].sort_values([\"period\", \"branch\"])",
"Investigate nearly identical retention lines for TAAR Branches\nLet's look at 6-week retention over time by each enrollment date",
"day_over_day_retention = []\nfor i in range(40):\n d = date_plus_x_days(\"20180312\", i)\n joinedx = joined.filter(\"enrollment_date = '{}'\".format(d))\n ret_dfx = get_retention(joinedx)\n week6 = ret_dfx[ret_dfx.period == 6.0]\n for b, data in week6.groupby(\"branch\"):\n x = {\n 'branch': b,\n 'ret': data['retention'].values[0],\n 'date': d\n }\n day_over_day_retention.append(x)\n\nday_over_day_retention_df = pd.DataFrame(day_over_day_retention)\nday_over_day_retention_df.date = (\n pd.to_datetime(day_over_day_retention_df.date, format='%Y%m%d'))\n\nplt.rcParams['figure.figsize'] = (12, 6)\nfig, ax = plt.subplots()\nfor group, data in day_over_day_retention_df.groupby(\"branch\"):\n data['ma'] = data.ret.rolling(window=6).mean()\n (data.sort_values(\"date\")\n .plot(x='date', \n y='ma', \n ax=ax, \n label=group))\nplt.ylabel(\"Retention\")\nplt.xlabel(\"Enrollment Date\")\nplt.title(\"6 Week Retention by Enrollment Date, Branch (6-Day Moving Average)\")\nplt.show()",
"We see increased variability with time, which is most certainly due to the study being front-loaded with participants. Looking at enrollment:",
"(joined.groupby(\"enrollment_date\")\n .agg(F.countDistinct(\"client_id\").alias(\"number of participants\"))\n .sort(\"enrollment_date\")\n .toPandas()\n .plot(x='enrollment_date'))",
"we see that most of our clients enrolled before 2018-03-22. This is why the lines are so smooth for the first ~9 data points in the previous retention chart.\nBreaking retention down into shorter segments shows that there are indeed differences between the TAAR branches; however, they track each other rather closely and are consistently higher than the control. This is evidence that the similar lines in the first plot reflect the true retention values and are not a data-handling issue.\nResults",
"w6r = ret_df[ret_df.period==6]\nw6r\n\n(slinear, nlinear,\n sensemble, nensemble,\n scontrol, ncontrol) = [int(w6r[w6r.branch == b][i].values[0])\n                        for b in ('linear-taar', 'ensemble-taar', 'control')\n                        for i in ('n_week_clients', \"total_clients\")]\n\n\n\ndef get_effect(g1s, g2s, g1n, g2n):\n    \"\"\"\n    Extracts the observed difference and p-value\n    for two proportions\n    \n    g1s: the number of successes for group 1\n    g2s: the number of successes for group 2\n    g1n: total trials for group 1\n    g2n: total trials for group 2\n    \n    returns the effect and p-value as a tuple\n    \"\"\"\n    # use counts to form a proportion and format\n    effect = str(round((g1s*1.0 / g1n) - (g2s*1.0 / g2n), 4) * 100) + '%'\n    \n    # perform a two-sample test of proportions; `value` is the hypothesized\n    # difference between the proportions under the null, i.e. 0\n    pval = sm.stats.proportions_ztest(np.array([g1s, g2s]),\n                                      np.array([g1n, g2n]),\n                                      value=0)[1]\n    return effect, pval\n\nle, lp = get_effect(slinear, scontrol, nlinear, ncontrol)\nprint \"linear effect: {}\\npvalue: {}\\n\".format(le, lp)\n\nee, ep = get_effect(sensemble, scontrol, nensemble, ncontrol)\nprint \"ensemble effect: {}\\npvalue: {}\".format(ee, ep)\n"
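The two-sample test of proportions used via statsmodels above can be written out by hand to see what it computes. This is our sketch of the pooled z statistic for the simple null hypothesis p1 = p2:

```python
import math

def two_proportion_ztest(s1, n1, s2, n2):
    """z statistic for H0: p1 - p2 = 0, using the pooled estimate of p."""
    p1, p2 = s1 / n1, s2 / n2
    p_pool = (s1 + s2) / (n1 + n2)                       # combined proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Made-up counts: 50/100 successes vs 30/100 successes.
print(round(two_proportion_ztest(50, 100, 30, 100), 3))  # 2.887
```

The p-value then comes from the standard normal distribution; statsmodels performs the same computation (plus two-sided tail handling) internally.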
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/tensor2tensor
|
tensor2tensor/notebooks/t2t_problem.ipynb
|
apache-2.0
|
[
"Welcome to the Tensor2Tensor Dataset Colab!\nTensor2Tensor, or T2T for short, is a library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.\nThis colab shows you how to add your own dataset to T2T so that you can train one of the several preexisting models on your newly added dataset!\nFor a tutorial that covers all the broader aspects of T2T using existing datasets and models, please see this IPython notebook.",
"#@title\n# Copyright 2018 Google LLC.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# https://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Welcome to the Tensor2Tensor Dataset Colab!\n\nInstallation & Setup\nDefine the Problem\n\nRun t2t_datagen\n\nViewing the generated data.\n\ntf.python_io.tf_record_iterator\nUsing tf.data.Dataset\n\nTerminology\n\nProblem\nModalities\n\n\n\nInstallation & Setup\nWe'll install T2T and TensorFlow.\nWe also need to setup the directories where T2T will:\n\nGenerate the dataset and write the TFRecords file representing the training and the eval set, vocabulary files etc DATA_DIR\nRun the training, keep the graph and the checkpoint files OUTPUT_DIR and\nUse as a scratch directory to download your dataset from a URL, unzip it, etc. TMP_DIR",
"#@title Run for installation.\n\n! pip install -q -U tensor2tensor\n! pip install -q tensorflow\n\n#@title Run this only once - Sets up TF Eager execution.\n\nimport sys\nif 'google.colab' in sys.modules: # Colab-only TensorFlow version selector\n %tensorflow_version 1.x\nimport tensorflow as tf\n\n# Enable Eager execution - useful for seeing the generated data.\ntf.compat.v1.enable_eager_execution()\n\n#@title Setting a random seed.\n\nfrom tensor2tensor.utils import trainer_lib\n\n# Set a seed so that we have deterministic outputs.\nRANDOM_SEED = 301\ntrainer_lib.set_random_seed(RANDOM_SEED)\n\n#@title Run for setting up directories.\n\nimport os\n\n# Setup and create directories.\nDATA_DIR = os.path.expanduser(\"/tmp/t2t/data\")\nOUTPUT_DIR = os.path.expanduser(\"/tmp/t2t/output\")\nTMP_DIR = os.path.expanduser(\"/tmp/t2t/tmp\")\n\n# Create them.\ntf.io.gfile.makedirs(DATA_DIR)\ntf.io.gfile.makedirs(OUTPUT_DIR)\ntf.io.gfile.makedirs(TMP_DIR)",
"Define the Problem\nTo simplify our setting our input text sampled randomly from [a, z] - each sentence has between [3, 20] words with each word being [1, 8] characters in length.\nExample input: \"olrkpi z cldv xqcxisg cutzllf doteq\" -- this will be generated by sample_sentence()\nOur output will be the input words sorted according to length.\nExample output: \"z cldv doteq olrkpi xqcxisg cutzllf\" -- this will be processed by target_sentence()\nLet's dive right into our first problem -- we'll explain as we go on.\nTake some time to read each line along with its comments -- or skip them and come back later to clarify your understanding.",
"#@title Define `sample_sentence()` and `target_sentence(input_sentence)`\nimport random\nimport string\n\ndef sample_sentence():\n # Our sentence has between 3 and 20 words\n num_words = random.randint(3, 20)\n words = []\n for i in range(num_words):\n # Our words have between 1 and 8 characters.\n num_chars = random.randint(1, 8)\n chars = []\n for j in range(num_chars):\n chars.append(random.choice(string.ascii_lowercase))\n words.append(\"\".join(chars))\n return \" \".join(words)\n\ndef target_sentence(input_sentence):\n words = input_sentence.split(\" \")\n return \" \".join(sorted(words, key=lambda x: len(x)))\n\n# `Problem` is the base class for any dataset that we want to add to T2T -- it\n# unifies the specification of the problem for generating training data,\n# training, evaluation and inference.\n#\n# All its methods (except `generate_data`) have reasonable default\n# implementations.\n#\n# A sub-class must implement `generate_data(data_dir, tmp_dir)` -- this method\n# is called by t2t-trainer or t2t-datagen to actually generate TFRecord dataset\n# files on disk.\nfrom tensor2tensor.data_generators import problem\n\n# Certain categories of problems are very common, like where either the input or\n# output is text, for such problems we define an (abstract) sub-class of\n# `Problem` called `Text2TextProblem` -- this implements `generate_data` in\n# terms of another function `generate_samples`. Sub-classes must override\n# `generate_samples` and `is_generate_per_split`.\nfrom tensor2tensor.data_generators import text_problems\n\n# Every non-abstract problem sub-class (as well as models and hyperparameter\n# sets) must be registered with T2T so that T2T knows about it and can look it\n# up when you specify your problem on the commandline to t2t-trainer or\n# t2t-datagen.\n#\n# One uses:\n# `register_problem` for a new Problem sub-class.\n# `register_model` for a new T2TModel sub-class.\n# `register_hparams` for a new hyperparameter set. 
All hyperparameter sets\n# typically extend `common_hparams.basic_params1` (directly or indirectly).\nfrom tensor2tensor.utils import registry\n\n\n# By default, when you register a problem (or model or hyperparameter set) the\n# name with which it gets registered is the 'snake case' version -- so here\n# the Problem class `SortWordsAccordingToLengthRandom` will be registered with\n# the name `sort_words_according_to_length_random`.\n#\n# One can override this default by actually assigning a name as follows:\n# `@registry.register_problem(\"my_awesome_problem\")`\n#\n# The registered name is specified to the t2t-trainer or t2t-datagen using the\n# commandline flag `--problem`.\n@registry.register_problem\n\n# We inherit from `Text2TextProblem` which takes care of a lot of details\n# regarding reading and writing the data to disk, what vocabulary type one\n# should use, its size etc -- so that we need not worry about them, one can,\n# of course, override those.\nclass SortWordsAccordingToLengthRandom(text_problems.Text2TextProblem):\n \"\"\"Sort words on length in randomly generated text.\"\"\"\n\n # START: Methods we should override.\n\n # The methods that need to be overriden from `Text2TextProblem` are:\n # `is_generate_per_split` and\n # `generate_samples`.\n\n @property\n def is_generate_per_split(self):\n # If we have pre-existing data splits for (train, eval, test) then we set\n # this to True, which will have generate_samples be called for each of the\n # dataset_splits.\n #\n # If we do not have pre-existing data splits, we set this to False, which\n # will have generate_samples be called just once and the Problem will\n # automatically partition the data into dataset_splits.\n return False\n\n def generate_samples(self, data_dir, tmp_dir, dataset_split):\n # Here we are generating the data in-situ using the `sample_sentence`\n # function, otherwise we would have downloaded the data and put it in\n # `tmp_dir` -- and read it from that location.\n del 
tmp_dir\n\n # Unused here, is used in `Text2TextProblem.generate_data`.\n del data_dir\n\n # This would have been useful if `self.is_generate_per_split()` was True.\n # In that case we would have checked if we were generating a training,\n # evaluation or test sample. This is of type `problem.DatasetSplit`.\n del dataset_split\n\n # Just an arbitrary limit to our number of examples, this can be set higher.\n MAX_EXAMPLES = 10\n\n for i in range(MAX_EXAMPLES):\n sentence_input = sample_sentence()\n sentence_target = target_sentence(sentence_input)\n yield {\n \"inputs\" : sentence_input,\n \"targets\" : sentence_target,\n }\n\n # END: Methods we should override.\n\n # START: Overridable methods.\n\n @property\n def vocab_type(self):\n # We can use different types of vocabularies, `VocabType.CHARACTER`,\n # `VocabType.SUBWORD` and `VocabType.TOKEN`.\n #\n # SUBWORD and CHARACTER are fully invertible -- but SUBWORD provides a good\n # tradeoff between CHARACTER and TOKEN.\n return text_problems.VocabType.SUBWORD\n\n @property\n def approx_vocab_size(self):\n # Approximate vocab size to generate. Only for VocabType.SUBWORD.\n return 2**13 # ~8k\n\n @property\n def dataset_splits(self):\n # Since we are responsible for generating the dataset splits, we override\n # `Text2TextProblem.dataset_splits` to specify that we intend to keep\n # 80% data for training and 10% for evaluation and testing each.\n return [{\n \"split\": problem.DatasetSplit.TRAIN,\n \"shards\": 8,\n }, {\n \"split\": problem.DatasetSplit.EVAL,\n \"shards\": 1,\n }, {\n \"split\": problem.DatasetSplit.TEST,\n \"shards\": 1,\n }]\n\n # END: Overridable methods.",
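A quick standalone check of the sort-by-length transform the problem implements (using only the standard library; `sorted` is stable, so equal-length words keep their original relative order):

```python
# Standalone sketch of the transform target_sentence applies above;
# sorted() is stable, so equal-length words keep their relative order.
def sort_words_by_length(sentence):
    words = sentence.split(" ")
    return " ".join(sorted(words, key=len))

print(sort_words_by_length("hello to the wonderful world"))
# -> to the hello world wonderful
```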
"That's it!\nTo use this with t2t-trainer or t2t-datagen, save it to a directory, add an __init__.py that imports it, and then specify that directory with --t2t_usr_dir.\ni.e. as follows:\n```\n$ t2t-datagen \\\n --problem=sort_words_according_to_length_random \\\n --data_dir=/tmp/t2t/data \\\n --tmp_dir=/tmp/t2t/tmp \\\n --t2t_usr_dir=/tmp/t2t/usr\n```\nHowever, we'll generate the data from the colab itself as well -- this is what t2t-datagen essentially does.\nGenerate the data.\nWe will now generate the data by calling Problem.generate_data() and inspect it.",
"sort_len_problem = SortWordsAccordingToLengthRandom()\n\nsort_len_problem.generate_data(DATA_DIR, TMP_DIR)",
"Viewing the generated data.\ntf.data.Dataset is the recommended API for inputting data into a TensorFlow graph and the Problem.dataset() method returns a tf.data.Dataset object.",
"Modes = tf.estimator.ModeKeys\n\n# We can iterate over our examples by making an iterator and calling next on it.\nsort_len_problem_dataset = sort_len_problem.dataset(Modes.EVAL, DATA_DIR)\neager_iterator = sort_len_problem_dataset.make_one_shot_iterator()\nexample = next(eager_iterator)\n\ninput_tensor = example[\"inputs\"]\ntarget_tensor = example[\"targets\"]\n\n# The tensors are actually encoded using the generated vocabulary file -- you\n# can inspect the actual vocab file in DATA_DIR.\nprint(\"Tensor Input: \" + str(input_tensor))\nprint(\"Tensor Target: \" + str(target_tensor))\n\n\n# We use the encoders to decode the tensors to the actual input text.\ninput_encoder = sort_len_problem.get_feature_encoders(\n data_dir=DATA_DIR)[\"inputs\"]\ntarget_encoder = sort_len_problem.get_feature_encoders(\n data_dir=DATA_DIR)[\"targets\"]\n\ninput_decoded = input_encoder.decode(input_tensor.numpy())\ntarget_decoded = target_encoder.decode(target_tensor.numpy())\n\nprint(\"Decoded Input: \" + input_decoded)\nprint(\"Decoded Target: \" + target_decoded)",
"To be continued ...\nStay tuned for additions to this notebook for adding problems with non-text modalities like Images, Audio and Video!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
SN-Isotropy/Isotropy
|
examples/RandomSamplingFromSequences.ipynb
|
mit
|
[
"import numpy as np\nimport pandas as pd",
"Let us suppose that we have a dataframe with 5 rows. Each row gives us the values of a quantity in each of 5 bins. So, the columns are numbers, means of some quantity X and standard deviations of X. We will assume that in each of the bins, X is drawn from a Normal distribution, and so characterizing the mean and the standard deviation in each bin characterizes this distribution.\nQ: How do we create a set of random samples of X so that in each bin the values are drawn from the appropriate Normal distribution as described above, but the total number of values is an input Nobjs?\nWe will generate a toy table for this first:",
"# Code to generate the toy example (let us not worry how this code works)\nnums = np.arange(1000, 6000, 1000) \\\n    + np.round(np.random.RandomState(0).normal(0., 200., size=5,)).astype(int)\ndf = pd.DataFrame(dict(Numbers=nums, meanX=np.power(nums, 0.5)/5., \n                       stdX=np.power(nums, 0.1)))\n\ndf",
"Step 1\nTo answer the question: we will assume that the numbers in each bin relative to the total stay the same when we change the total number. So first:",
"df['frequencies'] = df.Numbers / df.Numbers.sum()\n\ndf",
"Step 2\nNow, given a total number of objects Nobj, we can find approximately how many we expect in each bin = Nobj * frequencies (but that might be a fraction, so we will round it to an integer). Let us try Nobj = 50000",
"numObjectsPerBin = np.round(df.frequencies * 50000).astype(int)\nprint(numObjectsPerBin)",
"We can check (as must obviously be the case) that this matches our numbers when Nobj equals the total number of objects in our toy example:",
"np.round(df.frequencies * df.Numbers.sum()).astype(int)",
"Step 3 Now for each bin we might want to draw numbers from a normal distribution with size = number of objects in that bin.\nFor each bin this is now easy (some syntax for accessing the ith row and two columns by name from a dataframe is necessary, but fairly intuitive):",
"m, s = df.loc[0, ['meanX', 'stdX']]  # Now the mean of the 0th bin is assigned to m, and std to s\nX = np.random.normal(m, s, size=numObjectsPerBin.loc[0])\nprint(X)",
"So, what we need to do is loop through the bins, and keep appending X to some list",
"XVals = []\nfor i in range(len(df)):\n    m, s = df.loc[i, ['meanX', 'stdX']]\n    # We will convert the numpy array to list, but that may not be necessary\n    X = np.random.normal(m, s, size=numObjectsPerBin.loc[i]).tolist()\n    XVals.append(X)\n\n\nXVals",
"XVals is a list of lists. The 0th element of XVals is a list which has all of the Xs sampled \nin the 0th bin, and likewise for the others.\nWe can check that the frequencies match up.\nmap is a useful function to know about (but not essential, you can use for loops or preferably use list comprehensions in its place). But here is what it does:\n    sequence = [a, b, c] \n    map( func, sequence) = list(func(a), func(b), func(c)) \nSo we can use this to find the frequencies and also check that the total adds up to Nobj (approximately)",
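As a concrete illustration (note that in Python 3 `map` returns a lazy iterator, so wrap it in `list` to materialize the results; a list comprehension is an equivalent alternative):

```python
# map returns a lazy iterator in Python 3; wrap it in list() to see
# the results. A list comprehension does the same job.
XVals = [[1.0, 2.0], [3.0], [4.0, 5.0, 6.0]]

lengths = list(map(len, XVals))
print(lengths)  # [2, 1, 3]

total = sum(lengths)
frequencies = [n / total for n in lengths]  # list-comprehension version
print(frequencies)
```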
"np.array(list(map(len, XVals))) / float(sum(map(len, XVals)))\n\ntotalobjs = sum(map(len, XVals))",
"And we can find their means and std deviations",
"list(map(np.mean, XVals))\n\nlist(map(np.std, XVals))",
"Randomness and Reproducibility\nThere was a question about getting different values from each random draw. As guessed, this is expected, but there are times when you want to get the same answer to be able to reproduce older calculations. One of the ways to achieve this is by supplying a seed",
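The same idea can be seen with the standard library's random module (analogous to numpy's RandomState): a seeded generator replays an identical stream of draws.

```python
import random

# A seeded generator replays the identical stream of draws, which is
# the basis of reproducible "random" computations.
rng1 = random.Random(1)
first_five = [rng1.random() for _ in range(5)]

rng2 = random.Random(1)  # same seed => same sequence
first_five_again = [rng2.random() for _ in range(5)]

print(first_five == first_five_again)  # True
```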
"seed = 1\nrng = np.random.RandomState(seed)\nrng.normal(0, 1, size=50)",
"The next 50 will be different",
"rng.normal(0, 1, size=50)\n\n# But if you want to reproduce the first 50, you can do so by using the same seed\nrng = np.random.RandomState(seed)\nrng.normal(0, 1, size=60)",
"What you should be able to see in these 60 is that the first 50 are the first 50 that we had previously. The next 10 are actually the first 10 from the second fifty we tried."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
AllenDowney/CompStats
|
text_analysis.ipynb
|
mit
|
[
"Text analysis with Python\nCopyright 2019 Allen Downey\nMIT License",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt",
"Word Frequencies\nLet's look at frequencies of words, bigrams and trigrams in a text.\nThe following function reads lines from a file or URL and splits them into words:",
"def iterate_words(filename):\n \"\"\"Read lines from a file and split them into words.\"\"\"\n for line in open(filename):\n for word in line.split():\n yield word.strip()",
"Here's an example using a book. wc is a Counter of words, that is, a dictionary that maps from each word to the number of times it appears:",
"import os\n\n# Originally from https://archive.org/stream/TheFaultInOurStarsJohnGreen/The+Fault+In+Our+Stars+-+John+Green_djvu.txt\n\nfilename = 'the_fault_in_our_stars.txt'\nif not os.path.exists(filename):\n !wget https://raw.githubusercontent.com/AllenDowney/CompStats/master/the_fault_in_our_stars.txt\n\nfrom collections import Counter\nwc = Counter(iterate_words(filename))",
"Here are the 20 most common words:",
"wc.most_common(20)",
"Word frequencies in natural languages follow a predictable pattern called Zipf's law (which is an instance of Stigler's law, which is also an instance of Stigler's law).\nWe can see the pattern by lining up the words in descending order of frequency and plotting their counts (6507, 5250, 2707) versus ranks (1st, 2nd, 3rd, ...):",
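The rank/count pairing can be seen in miniature on a tiny Counter (only the standard library is needed): sort the counts in descending order, and the rank of each count is just its position in that ordering.

```python
from collections import Counter

# Sort the counts in descending order; the rank of each count is then
# its (0-based) position in that ordering.
wc_small = Counter("the cat and the dog and the bird".split())

ranks, counts = zip(*enumerate(sorted(wc_small.values(), reverse=True)))
print(ranks)   # (0, 1, 2, 3, 4)
print(counts)  # (3, 2, 1, 1, 1)
```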
"def counter_ranks(wc):\n \"\"\"Returns ranks and counts as lists.\"\"\"\n return zip(*enumerate(sorted(wc.values(), reverse=True)))\n\nranks, counts = counter_ranks(wc)\nplt.plot(ranks, counts)\nplt.xlabel('Rank')\nplt.ylabel('Count')\nplt.title('Word count versus rank, linear scale');",
"Huh. Maybe that's not so clear after all. The problem is that the counts drop off very quickly. If we use the highest count to scale the figure, most of the other counts are indistinguishable from zero.\nAlso, there are more than 10,000 words, but most of them appear only a few times, so we are wasting most of the space in the figure in a regime where nothing is happening.\nThis kind of thing happens a lot. A common way to deal with it is to compute the log of the quantities or to plot them on a log scale:",
"ranks, counts = counter_ranks(wc)\nplt.plot(ranks, counts)\nplt.xlabel('Rank')\nplt.ylabel('Count')\nplt.xscale('log')\nplt.yscale('log')\nplt.title('Word count versus rank, log-log scale');",
"This (approximately) straight line is characteristic of Zipf's law.\nn-grams\nOn to the next topic: bigrams and trigrams.",
"from itertools import tee\n\ndef pairwise(iterator):\n    \"\"\"Iterates through a sequence in overlapping pairs.\n    \n    If the sequence is 1, 2, 3, 4, the result is (1, 2), (2, 3), (3, 4).\n    \"\"\"\n    a, b = tee(iterator)\n    next(b, None)\n    return zip(a, b)",
"bigrams is the histogram of word pairs:",
"bigrams = Counter(pairwise(iterate_words(filename)))",
"And here are the 20 most common:",
"bigrams.most_common(20)",
"Similarly, we can iterate the trigrams:",
"def triplewise(iterator):\n    \"\"\"Iterates through a sequence in overlapping triples.\"\"\"\n    a, b, c = tee(iterator, 3)\n    next(b, None)\n    next(c, None)\n    next(c, None)\n    return zip(a, b, c)",
"And make a Counter:",
"trigrams = Counter(triplewise(iterate_words(filename)))",
"Here are the 20 most common:",
"trigrams.most_common(20)",
"Markov analysis\nAnd now for a little fun. I'll make a dictionary that maps from each word pair to a Counter of the words that can follow.",
"from collections import defaultdict\n\nd = defaultdict(Counter)\nfor a, b, c in trigrams:\n d[a, b][c] += trigrams[a, b, c]",
"Now we can look up a pair and see what might come next:",
"d['I', 'said']",
"Here are the most common words that follow \"into the\":",
"d['into', 'the'].most_common(10)",
"The following function chooses a random word from the suffixes in a Counter:",
"import random\n\ndef choice(counter):\n \"\"\"Chooses a random element.\"\"\"\n return random.choice(list(counter.elements()))\n\nchoice(d['into', 'the'])",
"Given a prefix, we can choose a random suffix:",
"prefix = 'into', 'the'\nsuffix = choice(d[prefix])\nsuffix",
"Then we can shift the words and compute the next prefix:",
"prefix = prefix[1], suffix\nprefix",
"Repeating this process, we can generate random new text that has the same correlation structure between words as the original:",
"for i in range(100):\n suffix = choice(d[prefix])\n print(suffix, end=' ')\n prefix = prefix[1], suffix",
"With a prefix of two words, we typically get text that flirts with sensibility."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mathLab/RBniCS
|
tutorials/05_gaussian/tutorial_gaussian_deim.ipynb
|
lgpl-3.0
|
[
"TUTORIAL 05 - Discrete Empirical Interpolation Method for non-affine elliptic problems\nKeywords: discrete empirical interpolation method\n1. Introduction\nIn this Tutorial, we consider steady heat conduction in a two-dimensional square domain $\\Omega = (-1, 1)^2$.\nThe boundary $\\partial\\Omega$ is kept at a reference temperature (say, zero). The conductivity coefficient is fixed to 1, while the heat source is characterized by the following expression\n$$\ng(\\boldsymbol{x}; \\boldsymbol{\\mu}) = \\exp\\left( -2 (x_0-\\mu_0)^2 - 2 (x_1 - \\mu_1)^2 \\right) \\quad \\forall \\boldsymbol{x} = (x_0, x_1) \\in \\Omega.\n$$\nThe parameter vector $\\boldsymbol{\\mu}$, given by\n$$\n\\boldsymbol{\\mu} = (\\mu_0,\\mu_1)\n$$\naffects the center of the Gaussian source $g(\\boldsymbol{x}; \\boldsymbol{\\mu})$, which could be located at any point of $\\Omega$. Thus, the parameter domain is\n$$\n\\mathbb{P}=[-1,1]^2.\n$$\nIn order to obtain a faster (yet provably accurate) evaluation of the problem, we propose to use a certified reduced basis approximation. In order to preserve the affinity assumption (for the sake of performance), the discrete empirical interpolation method will be used on the forcing term $g(\\boldsymbol{x}; \\boldsymbol{\\mu})$.\n2. Parametrized formulation\nLet $u(\\boldsymbol{\\mu})$ be the temperature in the domain $\\Omega$.\nWe will directly provide a weak formulation for this problem:\n<center>for a given parameter $\\boldsymbol{\\mu}\\in\\mathbb{P}$, find $u(\\boldsymbol{\\mu})\\in\\mathbb{V}$ such that</center>\n$$a\\left(u(\\boldsymbol{\\mu}),v;\\boldsymbol{\\mu}\\right)=f(v;\\boldsymbol{\\mu})\\quad \\forall v\\in\\mathbb{V}$$\nwhere\n\nthe function space $\\mathbb{V}$ is defined as\n$$\n\\mathbb{V} = \\left\\{ v \\in H^1(\\Omega): v|_{\\partial\\Omega} = 0 \\right\\}\n$$\nNote that, unlike the previous tutorial, the domain $\\Omega$ is fixed, so the function space does not depend on the parameter. 
\nthe parametrized bilinear form $a(\\cdot, \\cdot; \\boldsymbol{\\mu}): \\mathbb{V} \\times \\mathbb{V} \\to \\mathbb{R}$ is defined by\n$$a(u,v;\\boldsymbol{\\mu}) = \\int_{\\Omega} \\nabla u \\cdot \\nabla v \\ d\\boldsymbol{x}$$\nthe parametrized linear form $f(\\cdot; \\boldsymbol{\\mu}): \\mathbb{V} \\to \\mathbb{R}$ is defined by\n$$f(v;\\boldsymbol{\\mu}) = \\int_\\Omega g(\\boldsymbol{\\mu}) v \\ d\\boldsymbol{x}.$$",
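As a quick numeric illustration (not part of the tutorial code), the forcing term can be evaluated pointwise; it peaks at 1 when $\boldsymbol{x}$ coincides with $\boldsymbol{\mu}$ and decays away from it, which is exactly why no finite affine expansion in $\boldsymbol{\mu}$ exists and DEIM is needed.

```python
import math

# Pointwise evaluation of the non-affine forcing term
# g(x; mu) = exp(-2 (x0 - mu0)^2 - 2 (x1 - mu1)^2).
def g(x, mu):
    return math.exp(-2.0 * (x[0] - mu[0]) ** 2 - 2.0 * (x[1] - mu[1]) ** 2)

print(g((0.3, -1.0), (0.3, -1.0)))  # 1.0 at the center of the bump
print(g((0.0, 0.0), (1.0, 1.0)))    # decays away from the center
```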
"from dolfin import *\nfrom rbnics import *",
"3. Affine decomposition\nThe parametrized bilinear form $a(\\cdot, \\cdot; \\boldsymbol{\\mu})$ is trivially affine.\nThe discrete empirical interpolation method will be used on the forcing term $g(\\boldsymbol{x}; \\boldsymbol{\\mu})$ to obtain an efficient (approximately affine) expansion of $f(\\cdot; \\boldsymbol{\\mu})$.",
"@DEIM()\nclass Gaussian(EllipticCoerciveProblem):\n\n # Default initialization of members\n def __init__(self, V, **kwargs):\n # Call the standard initialization\n EllipticCoerciveProblem.__init__(self, V, **kwargs)\n # ... and also store FEniCS data structures for assembly\n assert \"subdomains\" in kwargs\n assert \"boundaries\" in kwargs\n self.subdomains, self.boundaries = kwargs[\"subdomains\"], kwargs[\"boundaries\"]\n self.u = TrialFunction(V)\n self.v = TestFunction(V)\n self.dx = Measure(\"dx\")(subdomain_data=subdomains)\n self.f = ParametrizedExpression(\n self, \"exp(- 2 * pow(x[0] - mu[0], 2) - 2 * pow(x[1] - mu[1], 2))\", mu=(0., 0.),\n element=V.ufl_element())\n # note that we cannot use self.mu in the initialization of self.f, because self.mu has not been initialized yet\n\n # Return custom problem name\n def name(self):\n return \"GaussianDEIM\"\n\n # Return the alpha_lower bound.\n def get_stability_factor_lower_bound(self):\n return 1.\n\n # Return theta multiplicative terms of the affine expansion of the problem.\n def compute_theta(self, term):\n if term == \"a\":\n return (1.,)\n elif term == \"f\":\n return (1.,)\n else:\n raise ValueError(\"Invalid term for compute_theta().\")\n\n # Return forms resulting from the discretization of the affine expansion of the problem operators.\n def assemble_operator(self, term):\n v = self.v\n dx = self.dx\n if term == \"a\":\n u = self.u\n a0 = inner(grad(u), grad(v)) * dx\n return (a0,)\n elif term == \"f\":\n f = self.f\n f0 = f * v * dx\n return (f0,)\n elif term == \"dirichlet_bc\":\n bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 1),\n DirichletBC(self.V, Constant(0.0), self.boundaries, 2),\n DirichletBC(self.V, Constant(0.0), self.boundaries, 3)]\n return (bc0,)\n elif term == \"inner_product\":\n u = self.u\n x0 = inner(grad(u), grad(v)) * dx\n return (x0,)\n else:\n raise ValueError(\"Invalid term for assemble_operator().\")",
"4. Main program\n4.1. Read the mesh for this problem\nThe mesh was generated by the data/generate_mesh.ipynb notebook.",
"mesh = Mesh(\"data/gaussian.xml\")\nsubdomains = MeshFunction(\"size_t\", mesh, \"data/gaussian_physical_region.xml\")\nboundaries = MeshFunction(\"size_t\", mesh, \"data/gaussian_facet_region.xml\")",
"4.2. Create Finite Element space (Lagrange P1)",
"V = FunctionSpace(mesh, \"Lagrange\", 1)",
"4.3. Allocate an object of the Gaussian class",
"problem = Gaussian(V, subdomains=subdomains, boundaries=boundaries)\nmu_range = [(-1.0, 1.0), (-1.0, 1.0)]\nproblem.set_mu_range(mu_range)",
"4.4. Prepare reduction with a reduced basis method",
"reduction_method = ReducedBasis(problem)\nreduction_method.set_Nmax(20, DEIM=21)\nreduction_method.set_tolerance(1e-4, DEIM=1e-8)",
"4.5. Perform the offline phase",
"reduction_method.initialize_training_set(50, DEIM=60)\nreduced_problem = reduction_method.offline()",
"4.6.1. Perform an online solve",
"online_mu = (0.3, -1.0)\nreduced_problem.set_mu(online_mu)\nreduced_solution = reduced_problem.solve()\nplot(reduced_solution, reduced_problem=reduced_problem)",
"4.6.2. Perform an online solve with a lower number of DEIM terms",
"reduced_solution_11 = reduced_problem.solve(DEIM=11)\nplot(reduced_solution_11, reduced_problem=reduced_problem)",
"4.6.3. Perform an online solve with an even lower number of DEIM terms",
"reduced_solution_1 = reduced_problem.solve(DEIM=1)\nplot(reduced_solution_1, reduced_problem=reduced_problem)",
"4.7.1. Perform an error analysis",
"reduction_method.initialize_testing_set(50, DEIM=60)\nreduction_method.error_analysis(filename=\"error_analysis\")",
"4.7.2. Perform an error analysis with respect to the exact problem",
"reduction_method.error_analysis(\n with_respect_to=exact_problem, filename=\"error_analysis__with_respect_to_exact\")",
"4.7.3. Perform an error analysis with respect to the exact problem, but employing a smaller number of DEIM terms",
"reduction_method.error_analysis(\n with_respect_to=exact_problem, DEIM=11, filename=\"error_analysis__with_respect_to_exact__DEIM_11\")",
"5. Assignments\n\n[*] Now design a different problem (e.g. based on the previous tutorials) characterized by several non-affine functions. How many times is the discrete empirical interpolation procedure called? Compare the results (errors, speedups) with the original tutorial."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ethen8181/machine-learning
|
deep_learning/seq2seq/1_torch_seq2seq_intro.ipynb
|
mit
|
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Seq2Seq\" data-toc-modified-id=\"Seq2Seq-1\"><span class=\"toc-item-num\">1 </span>Seq2Seq</a></span><ul class=\"toc-item\"><li><span><a href=\"#Seq2Seq-Introduction\" data-toc-modified-id=\"Seq2Seq-Introduction-1.1\"><span class=\"toc-item-num\">1.1 </span>Seq2Seq Introduction</a></span></li><li><span><a href=\"#Data-Preparation\" data-toc-modified-id=\"Data-Preparation-1.2\"><span class=\"toc-item-num\">1.2 </span>Data Preparation</a></span><ul class=\"toc-item\"><li><span><a href=\"#Declaring-Fields\" data-toc-modified-id=\"Declaring-Fields-1.2.1\"><span class=\"toc-item-num\">1.2.1 </span>Declaring Fields</a></span></li><li><span><a href=\"#Constructing-Dataset\" data-toc-modified-id=\"Constructing-Dataset-1.2.2\"><span class=\"toc-item-num\">1.2.2 </span>Constructing Dataset</a></span></li><li><span><a href=\"#Constructing-Iterator\" data-toc-modified-id=\"Constructing-Iterator-1.2.3\"><span class=\"toc-item-num\">1.2.3 </span>Constructing Iterator</a></span></li></ul></li><li><span><a href=\"#Seq2Seq-Implementation\" data-toc-modified-id=\"Seq2Seq-Implementation-1.3\"><span class=\"toc-item-num\">1.3 </span>Seq2Seq Implementation</a></span><ul class=\"toc-item\"><li><span><a href=\"#Encoder-Module\" data-toc-modified-id=\"Encoder-Module-1.3.1\"><span class=\"toc-item-num\">1.3.1 </span>Encoder Module</a></span></li><li><span><a href=\"#Decoder-Module\" data-toc-modified-id=\"Decoder-Module-1.3.2\"><span class=\"toc-item-num\">1.3.2 </span>Decoder Module</a></span></li><li><span><a href=\"#Seq2Seq-Module\" data-toc-modified-id=\"Seq2Seq-Module-1.3.3\"><span class=\"toc-item-num\">1.3.3 </span>Seq2Seq Module</a></span></li><li><span><a href=\"#Training-Seq2Seq\" data-toc-modified-id=\"Training-Seq2Seq-1.3.4\"><span class=\"toc-item-num\">1.3.4 </span>Training Seq2Seq</a></span></li><li><span><a href=\"#Evaluating-Seq2Seq\" 
data-toc-modified-id=\"Evaluating-Seq2Seq-1.3.5\"><span class=\"toc-item-num\">1.3.5 </span>Evaluating Seq2Seq</a></span></li></ul></li><li><span><a href=\"#Summary\" data-toc-modified-id=\"Summary-1.4\"><span class=\"toc-item-num\">1.4 </span>Summary</a></span></li></ul></li><li><span><a href=\"#Reference\" data-toc-modified-id=\"Reference-2\"><span class=\"toc-item-num\">2 </span>Reference</a></span></li></ul></div>",
"# code for loading the format for the notebook\nimport os\n\n# path : store the current path to convert back to it later\npath = os.getcwd()\nos.chdir(os.path.join('..', '..', 'notebook_format'))\n\nfrom formats import load_style\nload_style(css_style='custom2.css', plot_style=False)\n\nos.chdir(path)\n\n# 1. magic for inline plot\n# 2. magic to print version\n# 3. magic so that the notebook will reload external python modules\n# 4. magic to enable retina (high resolution) plots\n# https://gist.github.com/minrk/3301035\n%matplotlib inline\n%load_ext watermark\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format='retina'\n\nimport os\nimport math\nimport time\nimport spacy\nimport torch\nimport random\nimport numpy as np\nimport torch.nn as nn\nimport torch.optim as optim\nfrom typing import List\nfrom torchtext.datasets import Multi30k\nfrom torchtext.data import Field, BucketIterator\n\n%watermark -a 'Ethen' -d -t -v -p numpy,torch,torchtext,spacy",
"Seq2Seq\nSeq2Seq (Sequence to Sequence) is a many to many network where two neural networks, one encoder and one decoder work together to transform one sequence to another. The core highlight of this method is having no restrictions on the length of the source and target sequence. At a high-level, the way it works is:\n\nThe encoder network condenses an input sequence into a vector, this vector is a smaller dimensional representation and is often referred to as the context/thought vector. This thought vector is served as an abstract representation for the entire input sequence.\nThe decoder network takes in that thought vector and unfolds that vector into the output sequence.\n\nThe main use case includes:\n\nchatbots\ntext summarization\nspeech recognition\nimage captioning\nmachine translation\n\nIn this notebook, we'll be implementing the seq2seq model ourselves using Pytorch and use it in the context of German to English translations.\nSeq2Seq Introduction\nThe following sections are heavily \"borrowed\" from the wonderful tutorial on this topic listed below.\n\nJupyter Notebook: Sequence to Sequence Learning with Neural Networks\n\nSome personal preference modifications have been made.\n<img src=\"img/1_seq2seq.png\" width=\"70%\" height=\"70%\">\nThe above image shows an example translation. The input/source sentence, \"guten morgen\", is input into the encoder (green) one word at a time. We also append a start of sequence (<sos>) and end of sequence (<eos>) token to the start and end of sentence, respectively. At each time-step, the input to the encoder is both the current word, $x_t$, as well as the hidden state from the previous time-step, $h_{t-1}$, and the encoder outputs a new hidden state $h_t$. We can think of the hidden state as a vector representation of the sentence so far. 
This can be represented as a function of both $x_t$ and $h_{t-1}$:\n$$h_t = \\text{Encoder}(x_t, h_{t-1})$$\nWe're using the term encoder loosely here; in practice, it can be any type of architecture, the most common ones being RNN-type networks such as an LSTM (Long Short-Term Memory) or a GRU (Gated Recurrent Unit). \nHere, we have $X = \\{x_1, x_2, ..., x_T\\}$, where $x_1 = \\text{<sos>}, x_2 = \\text{guten}$, etc. The initial hidden state, $h_0$, is usually either initialized to zeros or a learned parameter.\nOnce the final word, $x_T$, has been passed into the encoder, we use the final hidden state, $h_T$, as the context vector, i.e. $h_T = z$. This is a vector representation of the entire source sentence.\nNow that we have our context vector, $z$, we can start decoding it to get the target sentence, \"good morning\". Again, we append the start and end of sequence tokens to the target sentence. At each time-step, the input to the decoder (blue) is the current word, $y_t$, as well as the hidden state from the previous time-step, $s_{t-1}$, where the initial decoder hidden state, $s_0$, is the context vector, $s_0 = z = h_T$, i.e. the initial decoder hidden state is the final encoder hidden state. Thus, similar to the encoder, we can represent the decoder as:\n$$s_t = \\text{Decoder}(y_t, s_{t-1})$$\nIn the decoder, we need to go from the hidden state to an actual word, therefore at each time-step we use $s_t$ to predict (by passing it through a Linear layer, shown in purple) what we think is the next word in the sequence, $\\hat{y}_t$. \n$$\\hat{y}_t = f(s_t)$$\nThe words in the decoder are always generated one after another, with one per time-step. We always use <sos> for the first input to the decoder, $y_1$, but for subsequent inputs, $y_{t>1}$, we will sometimes use the actual, ground truth next word in the sequence, $y_t$, and sometimes use the word predicted by our decoder, $\\hat{y}_{t-1}$. 
This is called teacher forcing, which we'll later see in action.\nWhen training/testing our model, we always know how many words are in our target sentence, so we stop generating words once we hit that many. During inference (i.e. real world usage) it is common to keep generating words until the model outputs an <eos> token or after a certain number of words have been generated.\nOnce we have our predicted target sentence, $\\hat{Y} = \\{ \\hat{y}_1, \\hat{y}_2, ..., \\hat{y}_T \\}$, we compare it against our actual target sentence, $Y = \\{ y_1, y_2, ..., y_T \\}$, to calculate our loss. We then use this loss to update all of the parameters in our model.\nData Preparation\nWe'll be coding up the models in PyTorch and using TorchText to help us do all of the pre-processing required. We'll also be using spaCy to assist in the tokenization of the data. We will introduce the functionalities of some of these libraries along the way as well.",
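To make the recurrences above concrete, here is a deliberately tiny scalar sketch: the weights are made-up constants rather than learned parameters, and real models use vector hidden states, but the shape of the computation is the same.

```python
import math

# Scalar stand-in for h_t = Encoder(x_t, h_{t-1}); a real encoder
# uses learned weight matrices and vector-valued states.
def step(x_t, h_prev, w_x=0.5, w_h=0.8):
    return math.tanh(w_x * x_t + w_h * h_prev)

# Encode a "sentence" of token values; the context vector is z = h_T.
h = 0.0  # h_0 initialized to zero
for x_t in [1.0, -0.5, 0.25]:
    h = step(x_t, h)
z = h

# Decode: s_0 = z, then unfold the decoder recurrence for 3 steps.
s, y_t, outputs = z, 1.0, []  # y_1 stands in for the <sos> token
for _ in range(3):
    s = step(y_t, s)
    y_t = s          # stand-in for "predict the next token from s_t"
    outputs.append(y_t)

print(z, outputs)
```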
"SEED = 2222\nrandom.seed(SEED)\ntorch.manual_seed(SEED)",
"The next two code chunks:\n\nDownloads the spacy model for the German and English language.\nCreate the tokenizer functions, which will take in the sentence as the input and return the sentence as a list of tokens. These functions can then be passed to torchtext.",
"# !python -m spacy download de\n# !python -m spacy download en\n\n# the link below contains explanation of how spacy's tokenization works\n# https://spacy.io/usage/spacy-101#annotations-token\nspacy_de = spacy.load('de_core_news_sm')\nspacy_en = spacy.load('en_core_web_sm')\n\n\ndef tokenize_de(text: str) -> List[str]:\n return [tok.text for tok in spacy_de.tokenizer(text)][::-1]\n\ndef tokenize_en(text: str) -> List[str]:\n return [tok.text for tok in spacy_en.tokenizer(text)]\n\ntext = \"I don't like apple.\"\ntokenize_en(text)",
"The tokenizer is language specific, e.g. it knows that in the English language don't should be tokenized into do not (n't).\nAnother thing to note is that the order of the source sentence is reversed during the tokenization process. The rationale behind this comes from the original seq2seq paper, where they identified that this trick improved the result of their model.\n\nNormally, when we concatenate a source sentence with a target sentence, each word in the source sentence is far from its corresponding word in the target sentence. By reversing the source sentence, the first few words in the source sentence now become very close to the first few words in the target sentence, thus the model has less trouble establishing communication between the source and target sentence.\nAlthough the average distance between words in the source and target language remains the same during this process, it was shown that the model learned much better even on later parts of the sentence.\n\nDeclaring Fields\nMoving on, we will begin leveraging torchtext's functionality. The first one is Field, which is where we specify how we wish to preprocess our text data for a certain field.\nHere, we set the tokenize argument to the correct tokenization function for the source and target field, with German being the source field and English being the target field. The field also appends the \"start of sequence\" and \"end of sequence\" tokens via the init_token and eos_token arguments, and converts all words to lowercase. The docstring of the Field object is pretty well-written, please refer to it to see other arguments that it takes in.",
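The reversal itself can be illustrated without spaCy, using a plain whitespace tokenizer with the same `[::-1]` trick as `tokenize_de` above:

```python
# Plain whitespace tokenizer with the same [::-1] reversal trick used
# for the source (German) side above.
def tokenize_reversed(text):
    return text.split()[::-1]

print(tokenize_reversed("guten morgen"))
# -> ['morgen', 'guten']
```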
"source = Field(tokenize=tokenize_de, init_token='<sos>', eos_token='<eos>', lower=True)\ntarget = Field(tokenize=tokenize_en, init_token='<sos>', eos_token='<eos>', lower=True)",
"Constructing Dataset\nWe've defined the logic of processing our raw text data, now we need to tell the fields what data it should work on. This is where Dataset comes in. The dataset we'll be using is the Multi30k dataset. This is a dataset with ~30,000 parallel English, German and French sentences, each with ~12 words per sentence. Torchtext comes with a capability for us to download and load the training, validation and test data.\nexts specifies which languages to use as the source and target (source goes first) and fields specifies which field to use for the source and target.",
"train_data, valid_data, test_data = Multi30k.splits(exts=('.de', '.en'), fields=(source, target))\nprint(f\"Number of training examples: {len(train_data.examples)}\")\nprint(f\"Number of validation examples: {len(valid_data.examples)}\")\nprint(f\"Number of testing examples: {len(test_data.examples)}\")",
"Upon loading the dataset, we can index and iterate over the Dataset like a normal list. Each element in the dataset bundles the attributes of a single record for us. We can index our dataset like a list and then access the .src and .trg attributes to take a look at the tokenized source and target sentence.",
"# equivalent, albeit more verbose: train_data.examples[0].src\ntrain_data[0].src\n\ntrain_data[0].trg",
"The next missing piece is to build the vocabulary for the source and target languages, so that we can convert our tokens into integers that can be fed into downstream models. Constructing the vocabulary and the word to integer mapping is done by calling the build_vocab method of a Field on a dataset. This adds the vocab attribute to the field.\nThe vocabularies of the source and target languages are distinct. Using the min_freq argument, we only include tokens that appear at least 2 times in our vocabulary. Tokens that appear only once are converted into an <unk> (unknown) token (we can customize this in the Field earlier if we like).\nIt is important to note that our vocabulary should only be built from the training set and not the validation/test set. This prevents \"information leakage\" into our model, giving us artificially inflated validation/test scores.",
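Conceptually, build_vocab boils down to counting tokens and keeping the ones above the frequency threshold. Here is a minimal pure-Python sketch of that idea (the helper name and specials list are made up for illustration, not torchtext internals):

```python
from collections import Counter

def build_vocab(tokenized_sentences, min_freq=2,
                specials=('<unk>', '<pad>', '<sos>', '<eos>')):
    """Map each token appearing at least `min_freq` times to an index;
    everything else falls back to the '<unk>' index."""
    counts = Counter(tok for sent in tokenized_sentences for tok in sent)
    itos = list(specials) + sorted(t for t, c in counts.items() if c >= min_freq)
    stoi = {t: i for i, t in enumerate(itos)}
    return stoi, itos

stoi, itos = build_vocab([['a', 'b', 'a'], ['a', 'c']])
# 'b' and 'c' appear only once, so they numericalize to the '<unk>' index
numericalized = [stoi.get(t, stoi['<unk>']) for t in ['a', 'c']]
```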
"source.build_vocab(train_data, min_freq=2)\ntarget.build_vocab(train_data, min_freq=2)\nprint(f\"Unique tokens in source (de) vocabulary: {len(source.vocab)}\")\nprint(f\"Unique tokens in target (en) vocabulary: {len(target.vocab)}\")",
"Constructing Iterator\nThe final step of preparing the data is to create the iterators. Very similar to DataLoader in the standard pytorch package, Iterator in torchtext converts our data into batches, so that they can be fed into the model. These can be iterated on to return a batch of data, which will have a src and trg attribute (PyTorch tensors containing a batch of numericalized source and target sentences). Numericalized is just a fancy way of saying they have been converted from a sequence of tokens to a sequence of corresponding indices, where the mapping between the tokens and indices comes from the learned vocabulary. \nWhen we get a batch of examples using an iterator we need to make sure that all of the source sentences are padded to the same length, and the same with the target sentences. Luckily, torchtext iterators handle this for us! BucketIterator is an extremely useful torchtext feature. It automatically shuffles and buckets the input sequences into batches of similar length, which minimizes the amount of padding that we need to perform.",
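The bucketing idea itself is simple to sketch in plain Python (a simplified illustration; BucketIterator also shuffles and batches more cleverly than a plain length sort):

```python
def bucket_batches(tokenized, batch_size=2):
    """Group length-sorted sentences into batches and pad each batch only
    up to its own max length, keeping the amount of padding minimal."""
    ordered = sorted(tokenized, key=len)
    batches = []
    for i in range(0, len(ordered), batch_size):
        batch = ordered[i:i + batch_size]
        max_len = max(len(s) for s in batch)
        batches.append([s + ['<pad>'] * (max_len - len(s)) for s in batch])
    return batches

batches = bucket_batches([['a'], ['b', 'c', 'd'], ['e', 'f'], ['g']])
```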
"BATCH_SIZE = 128\n\n# pytorch boilerplate that determines whether a GPU is present or not,\n# this determines whether our dataset or model can be moved to a GPU\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n\n# create batches out of the dataset and send them to the appropriate device\ntrain_iterator, valid_iterator, test_iterator = BucketIterator.splits(\n    (train_data, valid_data, test_data), batch_size=BATCH_SIZE, device=device)\n\n# pretend that we're iterating over the iterator and print out the first element\ntest_batch = next(iter(test_iterator))\ntest_batch\n\ntest_batch.src",
"Listing out the first batch, we see that each element of the iterator is a Batch object. Similar to an element of a Dataset, we can access the fields via its attributes. The next important thing to note is that it is of size [sentence length, batch size], and that the longest sentence in the first batch of the source language has a length of 10.\nSeq2Seq Implementation",
"# adjustable parameters\nINPUT_DIM = len(source.vocab)\nOUTPUT_DIM = len(target.vocab)\nENC_EMB_DIM = 256\nDEC_EMB_DIM = 256\nHID_DIM = 512\nN_LAYERS = 2\nENC_DROPOUT = 0.5\nDEC_DROPOUT = 0.5",
"To define our seq2seq model, we first specify the encoder and decoder separately.\nEncoder Module",
"class Encoder(nn.Module):\n    \"\"\"\n    Input :\n        - source batch\n    Layer : \n        source batch -> Embedding -> LSTM\n    Output :\n        - LSTM hidden state\n        - LSTM cell state\n\n    Parameters\n    ----------\n    input_dim : int\n        Input dimension, should equal the source vocab size.\n        \n    emb_dim : int\n        Embedding layer's dimension.\n        \n    hid_dim : int\n        LSTM Hidden/Cell state's dimension.\n        \n    n_layers : int\n        Number of LSTM layers.\n        \n    dropout : float\n        Dropout for the LSTM layer.\n    \"\"\"\n\n    def __init__(self, input_dim: int, emb_dim: int, hid_dim: int, n_layers: int, dropout: float):\n        super().__init__()\n        self.emb_dim = emb_dim\n        self.hid_dim = hid_dim\n        self.input_dim = input_dim\n        self.n_layers = n_layers\n        self.dropout = dropout\n\n        self.embedding = nn.Embedding(input_dim, emb_dim)\n        self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout=dropout)\n\n    def forward(self, src_batch: torch.LongTensor):\n        \"\"\"\n\n        Parameters\n        ----------\n        src_batch : 2d torch.LongTensor\n            Batched tokenized source sentence of shape [sent len, batch size].\n\n        Returns\n        -------\n        hidden, cell : 3d torch.FloatTensor\n            Hidden and cell state of the LSTM layer. Each state's shape\n            [n layers * n directions, batch size, hidden dim]\n        \"\"\"\n        embedded = self.embedding(src_batch)  # [sent len, batch size, emb dim]\n        outputs, (hidden, cell) = self.rnn(embedded)\n        # outputs -> [sent len, batch size, hidden dim * n directions]\n        return hidden, cell\n\nencoder = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT).to(device)\nhidden, cell = encoder(test_batch.src)\nhidden.shape, cell.shape",
"Decoder Module\nThe decoder accepts a batch of input tokens, the previous hidden states and the previous cell states. Note that in the decoder module, we are only decoding one token at a time, so the input tokens will always have a sequence length of 1. This is different from the encoder module, where we encode the entire source sentence all at once.",
"class Decoder(nn.Module):\n    \"\"\"\n    Input :\n        - first token in the target batch\n        - LSTM hidden state from the encoder\n        - LSTM cell state from the encoder\n    Layer :\n        target batch -> Embedding --   \n                                   |\n        encoder hidden state ------|--> LSTM -> Linear\n                                   |\n        encoder cell state  -------   \n        \n    Output :\n        - prediction\n        - LSTM hidden state\n        - LSTM cell state\n\n    Parameters\n    ----------\n    output_dim : int\n        Output dimension, should equal the target vocab size.\n        \n    emb_dim : int\n        Embedding layer's dimension.\n        \n    hid_dim : int\n        LSTM Hidden/Cell state's dimension.\n        \n    n_layers : int\n        Number of LSTM layers.\n        \n    dropout : float\n        Dropout for the LSTM layer.\n    \"\"\"\n\n    def __init__(self, output_dim: int, emb_dim: int, hid_dim: int, n_layers: int, dropout: float):\n        super().__init__()\n        self.emb_dim = emb_dim\n        self.hid_dim = hid_dim\n        self.output_dim = output_dim\n        self.n_layers = n_layers\n        self.dropout = dropout\n\n        self.embedding = nn.Embedding(output_dim, emb_dim)\n        self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout=dropout)\n        self.out = nn.Linear(hid_dim, output_dim)\n\n    def forward(self, trg: torch.LongTensor, hidden: torch.FloatTensor, cell: torch.FloatTensor):\n        \"\"\"\n\n        Parameters\n        ----------\n        trg : 1d torch.LongTensor\n            Batched target tokens of shape [batch size].\n            \n        hidden, cell : 3d torch.FloatTensor\n            Hidden and cell state of the LSTM layer. Each state's shape\n            [n layers * n directions, batch size, hidden dim]\n\n        Returns\n        -------\n        prediction : 2d torch.FloatTensor\n            For each token in the batch, the predicted scores over the target vocabulary.\n            Shape [batch size, output dim]\n\n        hidden, cell : 3d torch.FloatTensor\n            Hidden and cell state of the LSTM layer. 
Each state's shape\n            [n layers * n directions, batch size, hidden dim]\n        \"\"\"\n        # [1, batch size, emb dim], the 1 serves as sent len\n        embedded = self.embedding(trg.unsqueeze(0))\n        outputs, (hidden, cell) = self.rnn(embedded, (hidden, cell))\n        prediction = self.out(outputs.squeeze(0))\n        return prediction, hidden, cell\n\ndecoder = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT).to(device)\n\n# notice that we are not passing the entire .trg\nprediction, hidden, cell = decoder(test_batch.trg[0], hidden, cell)\nprediction.shape, hidden.shape, cell.shape",
"Seq2Seq Module\nFor the final part of the implementation, we'll implement the seq2seq model. This will handle: \n\nreceiving the input/source sentence\nusing the encoder to produce the context vectors \nusing the decoder to produce the predicted output/target sentence\n\nThe Seq2Seq model takes in an Encoder, Decoder, and a device (used to place tensors on the GPU, if it exists).\nFor this implementation, we have to ensure that the number of layers and the hidden (and cell) dimensions are equal in the Encoder and Decoder. This is not always the case, as we do not necessarily need the same number of layers or the same hidden dimension sizes in a sequence-to-sequence model. However, if we do have a different number of layers we will need to make decisions about how this is handled. For example, if our encoder has 2 layers and our decoder only has 1, how is this handled? Do we average the two context vectors output by the encoder? Do we pass both through a linear layer? Do we only use the context vector from the highest layer? etc.\nOur forward method takes the source sentence, target sentence and a teacher-forcing ratio. The teacher forcing ratio is used when training our model. When decoding, at each time-step we will predict what the next token in the target sequence will be from the previous tokens decoded. With probability equal to the teacher forcing ratio (teacher_forcing_ratio) we will use the actual ground-truth next token in the sequence as the input to the decoder during the next time-step. However, with probability 1 - teacher_forcing_ratio, we will use the token that the model predicted as the next input to the model, even if it doesn't match the actual next token in the sequence. 
Note that teacher forcing is only used during training and should be turned off during evaluation.\nThe first thing we do in the forward method is to create an outputs tensor that will store all of our predictions, $\\hat{Y}$.\nWe then feed the input/source sentence, $X$/src, into the encoder and receive our final hidden and cell states.\nThe first input to the decoder is the start of sequence (<sos>) token. As our trg tensor already has the <sos> token appended (all the way back when we defined the init_token in our target field) we get our $y_1$ by slicing into it. We know how long our target sentences should be (max_len), so we loop that many times. During each iteration of the loop, we:\n- pass the input, previous hidden and previous cell states ($y_t, s_{t-1}, c_{t-1}$) into the decoder\n- receive a prediction, next hidden state and next cell state ($\\hat{y}_{t+1}, s_{t}, c_{t}$) from the decoder\n- place our prediction, $\\hat{y}_{t+1}$/output in our tensor of predictions, $\\hat{Y}$/outputs\n- decide if we are going to \"teacher force\" or not\n    - if we do, the next input is the ground-truth next token in the sequence, $y_{t+1}$/trg[t]\n    - if we don't, the next input is the predicted next token in the sequence, $\\hat{y}_{t+1}$/top1, which we get by doing an argmax over the output tensor\nOnce we've made all of our predictions, we return our tensor full of predictions, $\\hat{Y}$/outputs.\nNote: our decoder loop starts at 1, not 0. This means the 0th element of our outputs tensor remains all zeros. 
So our trg and outputs look something like:\n$$\\begin{align}\n\\text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\\n\\text{outputs} = [0, &\\hat{y}_1, \\hat{y}_2, \\hat{y}_3, <eos>]\n\\end{align}$$\nLater on when we calculate the loss, we cut off the first element of each tensor to get:\n$$\\begin{align}\n\\text{trg} = [&y_1, y_2, y_3, <eos>]\\\n\\text{outputs} = [&\\hat{y}_1, \\hat{y}_2, \\hat{y}_3, <eos>]\n\\end{align}$$\nAll of this should make more sense after we look at the code in the next few sections. Feel free to check out the discussion in these two github issues for some more context on this topic: issue-45 and issue-46",
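The decoding loop described above can be sketched in plain Python with a stand-in decoder (a hypothetical illustration; the real loop lives in the Seq2Seq module's forward method and operates on tensors):

```python
import random

def greedy_decode(trg_tokens, decoder_step, teacher_forcing_ratio=0.5, seed=0):
    """Mirror the seq2seq decode loop: slot 0 stays empty, and each step's
    next input is either the ground-truth token (teacher forcing) or the
    model's own prediction."""
    rng = random.Random(seed)
    outputs = [None] * len(trg_tokens)  # index 0 is never filled
    inp = trg_tokens[0]  # the <sos> token
    for t in range(1, len(trg_tokens)):
        pred = decoder_step(inp)
        outputs[t] = pred
        inp = trg_tokens[t] if rng.random() < teacher_forcing_ratio else pred
    return outputs

# stand-in for the Decoder module: just wraps its input
echo = lambda tok: 'pred(' + tok + ')'
```

With teacher_forcing_ratio=1.0 every step is fed the ground truth; with 0.0 the model feeds on its own predictions, exactly the evaluation-time behavior.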
"class Seq2Seq(nn.Module):\n    def __init__(self, encoder: Encoder, decoder: Decoder, device: torch.device):\n        super().__init__()\n        self.encoder = encoder\n        self.decoder = decoder\n        self.device = device\n\n        assert encoder.hid_dim == decoder.hid_dim, \\\n            'Hidden dimensions of encoder and decoder must be equal!'\n        assert encoder.n_layers == decoder.n_layers, \\\n            'Encoder and decoder must have equal number of layers!'\n\n    def forward(self, src_batch: torch.LongTensor, trg_batch: torch.LongTensor,\n                teacher_forcing_ratio: float=0.5):\n\n        max_len, batch_size = trg_batch.shape\n        trg_vocab_size = self.decoder.output_dim\n\n        # tensor to store decoder's output\n        outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device)\n\n        # last hidden & cell state of the encoder is used as the decoder's initial hidden state\n        hidden, cell = self.encoder(src_batch)\n\n        trg = trg_batch[0]\n        for i in range(1, max_len):\n            prediction, hidden, cell = self.decoder(trg, hidden, cell)\n            outputs[i] = prediction\n\n            if random.random() < teacher_forcing_ratio:\n                trg = trg_batch[i]\n            else:\n                trg = prediction.argmax(1)\n\n        return outputs\n\n# note that this implementation assumes that the size of the hidden layer\n# and the number of layers are the same between the encoder and decoder\nencoder = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT)\ndecoder = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT)\nseq2seq = Seq2Seq(encoder, decoder, device).to(device)\nseq2seq\n\noutputs = seq2seq(test_batch.src, test_batch.trg)\noutputs.shape\n\ndef count_parameters(model):\n    return sum(p.numel() for p in model.parameters() if p.requires_grad)\n\nprint(f'The model has {count_parameters(seq2seq):,} trainable parameters')",
"Training Seq2Seq\nWe've done the hard work of defining our seq2seq module. The final touch is to specify the training/evaluation loop.",
"optimizer = optim.Adam(seq2seq.parameters())\n\n# ignore the padding index when calculating the loss\nPAD_IDX = target.vocab.stoi['<pad>']\ncriterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)\n\ndef train(seq2seq, iterator, optimizer, criterion):\n seq2seq.train()\n\n epoch_loss = 0\n for batch in iterator:\n optimizer.zero_grad()\n outputs = seq2seq(batch.src, batch.trg)\n\n # 1. as mentioned in the seq2seq section, we will\n # cut off the first element when performing the evaluation\n # 2. the loss function only works on 2d inputs\n # with 1d targets we need to flatten each of them\n outputs_flatten = outputs[1:].view(-1, outputs.shape[-1])\n trg_flatten = batch.trg[1:].view(-1)\n loss = criterion(outputs_flatten, trg_flatten)\n\n loss.backward()\n optimizer.step()\n\n epoch_loss += loss.item()\n\n return epoch_loss / len(iterator)\n\ndef evaluate(seq2seq, iterator, criterion):\n seq2seq.eval()\n\n epoch_loss = 0\n with torch.no_grad():\n for batch in iterator:\n # turn off teacher forcing\n outputs = seq2seq(batch.src, batch.trg, teacher_forcing_ratio=0) \n\n # trg = [trg sent len, batch size]\n # output = [trg sent len, batch size, output dim]\n outputs_flatten = outputs[1:].view(-1, outputs.shape[-1])\n trg_flatten = batch.trg[1:].view(-1)\n loss = criterion(outputs_flatten, trg_flatten)\n epoch_loss += loss.item()\n\n return epoch_loss / len(iterator)\n\ndef epoch_time(start_time, end_time):\n elapsed_time = end_time - start_time\n elapsed_mins = int(elapsed_time / 60)\n elapsed_secs = int(elapsed_time - (elapsed_mins * 60))\n return elapsed_mins, elapsed_secs\n\nN_EPOCHS = 20\nbest_valid_loss = float('inf')\n\nfor epoch in range(N_EPOCHS): \n start_time = time.time()\n train_loss = train(seq2seq, train_iterator, optimizer, criterion)\n valid_loss = evaluate(seq2seq, valid_iterator, criterion)\n end_time = time.time()\n\n epoch_mins, epoch_secs = epoch_time(start_time, end_time)\n\n if valid_loss < best_valid_loss:\n best_valid_loss = valid_loss\n 
torch.save(seq2seq.state_dict(), 'tut1-model.pt')\n\n # it's easier to see a change in perplexity between epoch as it's an exponential\n # of the loss, hence the scale of the measure is much bigger\n print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')\n print(f'\\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')\n print(f'\\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')",
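As the comment in the training loop notes, perplexity is just the exponential of the average per-token cross-entropy loss, which makes small loss improvements much easier to see. A quick illustration:

```python
import math

def perplexity(avg_cross_entropy):
    """Perplexity = exp(loss): a small change in loss shows up as a much
    larger, easier-to-read change in perplexity."""
    return math.exp(avg_cross_entropy)

perplexity(0.0)  # a perfect model has perplexity 1.0
```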
"Evaluating Seq2Seq",
"seq2seq.load_state_dict(torch.load('tut1-model.pt'))\n\ntest_loss = evaluate(seq2seq, test_iterator, criterion)\nprint(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')",
"Here, we pick an example from our dataset and print out the original source and target sentence. Then we take a look at the \"predicted\" target sentence generated by the model.",
"example_idx = 0\nexample = train_data.examples[example_idx]\nprint('source sentence: ', ' '.join(example.src))\nprint('target sentence: ', ' '.join(example.trg))\n\nsrc_tensor = source.process([example.src]).to(device)\ntrg_tensor = target.process([example.trg]).to(device)\nprint(trg_tensor.shape)\n\nseq2seq.eval()\nwith torch.no_grad():\n outputs = seq2seq(src_tensor, trg_tensor, teacher_forcing_ratio=0)\n\noutputs.shape\n\noutput_idx = outputs[1:].squeeze(1).argmax(1)\n' '.join([target.vocab.itos[idx] for idx in output_idx])",
"Summary\nIn this document:\n\nWe took a stab at implementing a vanilla version of the seq2seq model, and trained it on a German to English translation task.\nWe implemented the trick introduced by the original seq2seq paper, where they reverse the order of the tokens in the source sentence.\n\nThere are a lot of other tricks/ideas mentioned in the original paper that are worth exploring, e.g.\n\nAn LSTM with 4 layers was chosen.\nBeam Search was also used to decode the sentence.\nInstead of relying only on log-loss or perplexity, they used BLEU score as another evaluation metric to assess the quality of their translations.\n\nReference\n\nBlog: A Comprehensive Introduction to Torchtext (Practical Torchtext part 1)\nJupyter Notebook: Using TorchText with Your Own Datasets\nJupyter Notebook: Sequence to Sequence Learning with Neural Networks\nPaper: Sutskever, I., Vinyals, O., and Le, Q. (2014). Sequence to sequence learning with neural networks."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google/makani
|
analysis/force_balance_loop/tutorial/FBL_Tutorial.ipynb
|
apache-2.0
|
[
"%matplotlib inline\n\nimport numpy as np\nimport scipy\nimport math\nimport json\nimport pprint\nimport time\nimport copy\nfrom matplotlib import pyplot as plt\nimport itertools\nimport pandas as pd\nimport cProfile\nimport csv\nimport inspect\n\nimport sys\nsys.path.insert(0, '../../')\nsys.path.insert(0, '../')\n\nfrom mx_sys.power_calcs import power_calcs as makani_FBL\nfrom mx_sys.power_calcs import kite_pose\nfrom mx_sys.power_calcs import kite_loop\nfrom mx_sys.power_calcs import kite_path\n\nimport m600_fbl_config_manager as cm\nimport resource_fbl_manager as rm\n\nreload(makani_FBL)\nreload(cm)\nreload(rm)",
"Setup Kite and Environment\nThe easiest way to create a kite and resource is to use the managers.\nHowever, they are both just dictionaries. Required and optional elements of the dictionary are specified in the docstrings for the various objects that use them. You can always create, edit, and overwrite the various parts of the configs and resource however you'd like manually.\nThere are several options for aero models. There are 2 types:\n1. Body coefficient models\n    - Provide cx, cy, cz as a function of alpha and (optional) beta\n2. Aero coefficient models\n    - Provide cL, cY, cD as a function of alpha and (optional) beta\nEither type must also return moment coefficients cl, cm, and cn.\nSee the docstrings for details on how to name things; error messages will point out missing functions.",
"# using the resource and config managers\nresource = rm.GetResourceByName()\nother_resource = rm.MakeResourceByShearAndHref(0.2, 80., 1.075, 8.)\n\nbase_kite = cm.GetConfigByName()\n#M600 does NOT SOLVE high winds with roll limits in place\n#removing those limits for a clean example\nbase_kite.pop('roll_min')\nbase_kite.pop('roll_max')\n\nprint 'Resource:'\npprint.pprint(resource)\nprint\nprint 'Other Resource:'\npprint.pprint(other_resource)\nprint\nprint 'Base Kite:'\npprint.pprint(base_kite.keys())\nprint\n\n# example of resource functions\nprint inspect.getsource(resource['v_w_at_height'])\nprint inspect.getsource(other_resource['v_w_at_height'])\n\n# defining a config manually\ndef rotors_simple(rho, v_a, force):\n    power_shaft = v_a * -force\n    out = {'power_shaft': power_shaft}\n    return out\n\nother_kite = {\n    'alpha_min': 0.,\n    'alpha_max': 7.,\n    'cD_eff_tether': 0.06,\n    'gs_position': (0.,0.,0.),\n    'eta_shaft_to_pad': lambda p: 0.82,\n    'bridle_moment_from_tether_pitch_roll': (\n        lambda p, r: np.array([0., 0., 0.])),\n    'l_tether': 430.,\n    'v_a_min': 30.,\n    'v_a_max': 70.,\n    'shaft_power_from_drag_power': rotors_simple,\n    'rotor_thrust_center': np.array([0., 0., 0.]),\n    'm_kite': 1200.,\n    'h_min': 80.,\n    'm_tether': 300.,\n    'aero_coeff_from_alpha_beta': lambda a, b, o: {\n        'cL': 0.11 * a, 'cY': 0.01 * b, 'cD': 0.05 + (0.04 * a)**2,\n        'cl': 0.1, 'cm': 0.2, 'cn': 0.},\n    'CG': np.array([0., 0., 0.]),\n    'tension_max': 250000.,\n    'beta_min': -5.,\n    'beta_max': 5.,\n    'c': 1.28,\n    'b': 25.66,\n    's': 32.9,\n    'inertia': np.array([[3000., 0., 0.],\n                         [0., 3000., 0.],\n                         [0., 0., 3000.]]),\n    'power_shaft_max': 900000.}",
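The resource's v_w_at_height function is typically a power-law shear profile built from the shear exponent and reference height passed to MakeResourceByShearAndHref. A sketch under that assumption (the library's actual implementation may differ):

```python
def v_w_at_height(h, v_w_at_h_ref, shear=0.2, h_ref=80.0):
    """Power-law wind shear: wind speed scales as (h / h_ref) ** shear
    relative to the speed at the reference height."""
    return v_w_at_h_ref * (h / h_ref) ** shear

v_w_at_height(80.0, 8.0)  # at the reference height this returns the reference speed
```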
"There are several helper functions to plot things scattered about. As an example, we can inspect the aero database to find where the Loyd limit is.\nThe Loyd limit is defined as:\n$\\zeta_{max} = \\frac{4}{27}\\frac{C_L^3}{C_D^2}$ \n$v_{a,\\text{best power}} \\approx v_{k,\\text{best power}} \\approx \\frac{2}{3}\\frac{L}{D} v_w$ \nDerivations won't be shown here, but results are below:",
"zeta, cL, L_over_D, alpha, beta = makani_FBL.calc_loyd_optimums(base_kite)\n\nplots = cm.PlotKiteAero(base_kite, keys=['zeta'])",
"Create Path Object (optional)\nThis section is optional: you don't need to know how to create a path object, as they are usually created and managed by the higher level object, KiteLoop.\nKitePath creates and holds all the path definitions needed for the FBL model.\nYou can create it by manually making the params, or by using the config manager helper function to make an entire set of args for the M600 by specifying a min height and a loop radius.\nA path mostly just contains the positions, but also has a few summary values.",
"# using config helper function to get args and splatting it into KitePath\npath_args = cm.GetPathArgsByR_LoopAndMinHeight(base_kite, 100., 130.)\nprint 'Path_Args:'\npprint.pprint(path_args)\nprint\n\npath = kite_path.KitePath(config=base_kite, **path_args)\n\n# pull off one position to use later\nposition = path.positions[0]\n\n# print some stuff from the path\nprint 'Example Position:'\npprint.pprint(position)\nprint\nprint 'Example path summary data:'\nprint 'Half cone angle: %0.2f deg' % math.degrees(path.half_cone_angle)\nprint('Min height in loop: %0.1f m' % path.h_min)\nprint 'Virtual Hub Height: %0.2f m' % path.h_hub\n\n# make a path manually\nshape_params = {'type': 'circle',\n 'r_loop': 140.}\nlocation_params = {'azim': 0.,\n 'incl': 0.}\n\nother_path = kite_path.KitePath(shape_params, location_params, base_kite)",
"Creating a KitePose object (optional)\nA KitePose is the base level object to complete a force balance. It is a single point model of a kite. There are many options here, but this section is optional as typical use only involves the higher level objects, which will manage KitePoses automatically.\nThere are 2 solve options:\n1. Lift roll angle is not provided\n    - It is solved for to make the residual zero, if possible\n2. Lift roll angle is provided\n    - Orientation is fully specified, but the force balance is unlikely to close exactly\nFor either solution, you must specify a kite speed, either airspeed (v_a) or inertial speed (v_k), and optionally an acceleration along the flight path, body rates, and body accelerations. Gravity and curvature accelerations are applied as a result of the speed and path definition.\nPoses can always be solved, but user beware. Just because it solves doesn't mean all the constraints are valid or that the forces were able to be balanced. You need to check it yourself.\nBelow we show a solve with a specified lift_roll_angle and one without. As a solution fills the pose state, a new pose is needed each time - there is no way to reset a pose.\npose.state is the holder for all the info about the pose. When in doubt about what pose info is available and what keys to use when accessing data from higher level objects, inspect this dictionary.\nImportant Note:\nFBL will return solutions that aren't valid - i.e. either some constraint is violated or the force balance wasn't possible with the parameters given. \nThe results are just a function of the state provided. If no lift roll angle is provided, we can solve for the lift roll angle to meet the force balance, which usually works unless you lack enough lift.\nThe user has information to determine the validity of the solve. Every object has a \"valid\" attribute that keeps track of this. Violated constraints will show up in KitePose.constraints_violated (also stored in state).",
"# standard solver\n\n# solving with a known aero state\n# using kite with body coefficient aero model and v_a\npose = kite_pose.KitePose(position, resource, base_kite,\n                          v_a=50., alpha=5., beta=0.,\n                          v_w_at_h_ref=7.5, verbose=True)\npose.solve(solve_type='unknown_roll')\nprint 'Known aero state solution power:', pose.state['power']\nprint 'Pose is valid: ', pose.valid\nprint\n\n# solving with a known lift_roll_angle,\n# which fully specifies the orientation\npose = kite_pose.KitePose(position, resource, base_kite,\n                          v_a=50., alpha=5., beta=0., lift_roll_angle=-0.1,\n                          v_w_at_h_ref=7.5, verbose=True)\npose.solve(solve_type='full')\nprint 'Known lift_roll_angle solution power:', pose.state['power']\nprint 'Pose is valid: ', pose.valid\nprint\n\n\nprint 'Example of data stored in pose.state using the known lift_roll_angle solution.'\npprint.pprint(pose.state)",
"Creating KiteLoop objects\nKiteLoop objects are a container for a \"set\" of poses that define an entire loop. The KiteLoop applies accelerations to each pose to make them consistent with the speed strategy applied.\nAny necessary variable that isn't specified is determined by an optimizer, with a default seed. Alternatively, you can explicitly define a variable to optimize over and set your own seed value and parameterization type. See docstring for parameterization types and usage.\nThere are also optimization options, under the keyword arg 'opt_params'. Selection of optimization options is the single most finicky part of the process, and the most likely to cause errors or non-optimal results. There are a lot of options for the optimizer. See docstring for options, but defaults should be pretty good.\nTwo frequently used ones are 'tol' and 'constraint_stiffness'.\n'tol' is the convergence tolerance. Tighter (lower) values will take longer to finish, but results will be smoother and make more power. Typical values are ~0.01 - 0.001.\n'constraint_stiffness' is a weighting factor for constraints. Higher values will make the model more quickly shy away from constraint violations, while lower ones will let the model optimize power first, then try and meet constraints, but can be harder to converge. Typical values are ~1. to 0.01.\nNote that at high wind speeds, you may need to provide a better seed, as this space is highly constrained. The KitePowerCurve handles this automatically by seeding loops with the previous loop's best v_k strategy, which usually works well for finding a solution.",
"# make a loop with some options and solve it\nloop = kite_loop.KiteLoop(\n resource, base_kite, v_w_at_h_ref=9.,\n verbose=True,\n opt_params={'tol':0.01,\n 'constraint_stiffness': 0.01,\n 'maxiter':700},\n vars_to_opt={'v_a': {'param_type': 'spline',\n 'values': [40.]*6,\n 'max_step': 5.}})\nloop.solve()\n\n# look at some summary data about loop\nprint\nprint 'Loop mean power: %0.2f W' % loop.power\nprint 'Loop average v_a: %0.2f m/s' % loop.data_loop['v_a_avg_time']\nprint 'Loop valid: ', loop.valid\nprint\n\n# use Dataframe plotting library\nloop.data_poses.plot(y=['v_k', 'v_a', 'power'],\n subplots=True, figsize=(15, 12), layout=(2,2))\n\n# use Dataframe feature in jupyter to make table of data\nloop.data_poses",
"KiteLoop - Using specific values instead of the optimizer\nThere are several ways to specify values to hold fixed. If all values are specified, the optimizer isn't used at all, and the solution time is very quick (thousandths of a sec). \nSee the example below for formats to specify a particular solution, or the docstring. Any single variable can be dropped out and the optimizer will take over only that variable, using default optimizer options and seed unless something is specified in the \"vars_to_opt\" dictionary, as shown in the example above.\nThis methodology is useful when you just want to locally perturb something to see sensitivities, holding everything else constant. In this example, we sweep out various azimuths, showing the power variation as the loop is slewed to the right.",
"loops = []\nazims = np.linspace(-0.5, 0, 6)\n\nfor azim in azims:\n temp_loop = kite_loop.KiteLoop(\n resource, base_kite, v_w_at_h_ref=7.5, verbose=True,\n path_location_params={'azim': azim,\n 'incl': 0.577},\n pose_states_param={'alpha': {'param_type': 'linear_interp', 'values': [4., 3., 3.5, 4.]}},\n pose_states={'beta': np.array([-3.]*18),\n 'v_k': np.array([ 42.18902103, 44.9445889 , 47.92323029, 50.84411908,\n 53.44207691, 55.55595166, 57.06702792, 57.85671546,\n 57.80642404, 56.79756345, 54.72297007, 51.73829244,\n 48.26199128, 44.72395391, 41.55406766, 39.18221986,\n 38.03770241, 38.34796218])},\n path_shape_params={'type': 'circle',\n 'r_loop': 160.,\n 'num_pos': 18})\n temp_loop.solve()\n print\n loops.append(temp_loop)\n\nplt.figure(figsize=(10,7))\nplt.title('Power vs Normalized Distance Around Loop for Different Azimuths')\nplt.ylabel('Power [W]')\nfor azim, loop in zip(azims, loops):\n plt.plot(loop.data_poses['dist_norm'],\n loop.data_poses.power_shaft,\n label=azim)\nplt.legend()",
"Using the Plotly Plotting Library\nThe KiteLoop contains a few tools that output the 3D force solution, as well as variables colored by value around the loop.\nThe plotting tool can be found at:\nmx_modeling/visualizations/power_calcs_plotting_tool/plotter.html\nOpen it directly with your browser, and point it to the files generated by the KiteLoop.",
"# make files for plotly plotter\nloop.gen_loop_vec_plot_file('test_forces.json')\nloop.gen_loop_positions_plot_file('test_colors_roll_angle.json',\n var_to_color='tether_roll_angle')",
"Creating a KitePowerCurve object\nA KitePowerCurve object creates KiteLoop objects for each wind speed in the range.\nAll the same optimization parameters, options, etc that were available at the loop level are available here as well (opt_params, vars_to_opt, loop_params, path_shape_params, path_location_params), with the same effect.\nHere is an example of not specifying anything and letting the defaults do the job.\nSolutions for previous loops are used to seed the solutions for future loops, usually enabling the KitePower curve to more quickly and easily find solutions.\nThere are three key outputs that trim the data to make a power curve. \n1. KitePowerCurve.powers is average power for each loop\n2. KitePowerCurve.powers_valid has invalid loop powers set to None\n3. KitePowerCurve.powers_final has negative powers, invalid powers, and powers at virtual hub speeds outside of cut in and cut out (if provided in the kite config) set to zero",
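The trimming from powers to powers_final described above can be sketched as follows (a hypothetical standalone helper mirroring that description, not the KitePowerCurve implementation):

```python
def trim_powers(powers, valids, v_hubs, v_cut_in=None, v_cut_out=None):
    """Set invalid, negative, or out-of-wind-range loop powers to zero,
    mirroring the description of KitePowerCurve.powers_final."""
    out = []
    for p, ok, v in zip(powers, valids, v_hubs):
        in_range = ((v_cut_in is None or v >= v_cut_in)
                    and (v_cut_out is None or v <= v_cut_out))
        out.append(p if (ok and p > 0.0 and in_range) else 0.0)
    return out
```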
"pc = makani_FBL.KitePowerCurve(resource, base_kite,\n v_w_at_h_ref_range=(2., 10.), v_w_step=2.)\npc.solve()\nprint 'Validity of each loop: ', pc.valids",
"Multiple Ways to Get Data\nThere's a ton of data in the KitePowerCurve object, and a lot of ways to get it.\nSummary-level data is an attribute of the object, and the loop summaries are aggregated into a DataFrame object called:\nself.data_loops\nThe loops themselves are available in a list at self.loops. You can then pull out a loop and access all the pose data, or use the loop plotting tools.\nBelow are examples of different ways to get data out.\n1. Directly access the data and do whatever with it: math, plotting, etc. Most key data is available as an attribute as well, but the full set is in the data_loops or data_poses DataFrames.\n2. Use the pandas DataFrame directly, including its own plotting library.\n3. Use the plotting helper functions included with some of the objects. These built-in plotting tools do a lot more formatting for you, as shown in the example below.\n\nNote: The surface plots are much nicer and actually functional if you're using a fully updated version of matplotlib with a local kernel",
"# 1: access data directly, some as attribute, some in data\nplt.plot(pc.v_ws_at_h_hub, pc.data_loops['zeta_padmount_avg_time'])\n\n# 2: use dataframe tools\npc.data_loops.plot(y='zeta_padmount_avg_time')\n\n# 3: use built in plotting helper functions\npc.plot_loop_data(ys=['zeta_padmount_avg_time'])\npc.plot_pose_data_as_surf(keys=['power', 'v_a'])\npc.plot_power_curve()",
"Putting it all together\nThis is the minimum set of things needed to calculate a power curve.\nThis example has NOT removed the roll limits, which is why the power curve has a big dip: when invalid solutions are found, the loop inclination is raised until it works, but this is a big performance hit.",
"# we need a kite\nm600 = cm.GetConfigByName()\n\n# we need a resource\nchina_lake = rm.GetResourceByName('CL_nom')\n\n# then we make and solve a power curve\nm600pc = makani_FBL.KitePowerCurve(china_lake, m600)\nm600pc.solve()\n\n# then we do things with it\nm600pc.plot_power_curve()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ahoyosid/ReNA
|
Example_Faces.ipynb
|
bsd-3-clause
|
[
"Using ReNA to find superpixels\nThe aim of this notebook is to illustrate how to use ReNA \nto build superpixels.\nHere we use the Olivetti faces dataset, which can be fetched via sklearn.\nLoading the data",
"import numpy as np\nfrom sklearn.datasets import fetch_olivetti_faces\n\nrandom_state = 32\ndataset = fetch_olivetti_faces(shuffle=True, random_state=random_state)\n\nX, y = dataset['data'], dataset['target']\nn_x, n_y = dataset['images'][0].shape\n\nX_data = X.reshape(-1, n_x, n_y).transpose(1, 2, 0)",
"Get the connectivity (spatial structure)",
"from sklearn.feature_extraction.image import grid_to_graph\nfrom rena import weighted_connectivity_graph\n\nconnectivity_ward = grid_to_graph(n_x, n_y, 1)\n\nmask = np.ones((n_x, n_y))\nconnectivity_rena = weighted_connectivity_graph(X_data, n_features=X.shape[1],\n mask=mask)",
"Clustering",
"import time\nfrom sklearn.cluster import AgglomerativeClustering\nfrom rena import recursive_nearest_agglomeration\n\nn_clusters = 150\n\nward = AgglomerativeClustering(n_clusters=n_clusters,\n connectivity=connectivity_ward, \n linkage='ward')\nti_ward = time.clock()\nward.fit(X.T)\nto_ward = time.clock() - ti_ward\n\nlabels_ward = ward.labels_\n\nti_rena = time.clock()\nlabels_rena = recursive_nearest_agglomeration(X, connectivity_rena,\n n_clusters=n_clusters)\nto_rena = time.clock() - ti_rena\n\nprint('Time Ward: %0.3f, Time ReNA: %0.3f' % (to_ward, to_rena))\n\nfrom rena import reduce_data, approximate_data\n\nX_red_rena = reduce_data(X, labels_rena)\nX_red_ward = reduce_data(X, labels_ward)\n\nX_approx_rena = approximate_data(X_red_rena, labels_rena)\nX_approx_ward = approximate_data(X_red_ward, labels_ward)",
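The reduce/approximate round trip above can be illustrated with plain numpy, assuming that reduce_data averages the features within each cluster label and approximate_data broadcasts those averages back onto the original grid. The `_sketch` functions below are hypothetical stand-ins for illustration, not the rena API.

```python
import numpy as np

def reduce_data_sketch(X, labels):
    # Average the feature columns of X within each cluster label.
    n_clusters = int(labels.max()) + 1
    return np.stack([X[:, labels == k].mean(axis=1)
                     for k in range(n_clusters)], axis=1)

def approximate_data_sketch(X_red, labels):
    # Broadcast each cluster average back onto the original features.
    return X_red[:, labels]

labels = np.array([0, 0, 1, 1])        # 4 features grouped into 2 clusters
X = np.array([[1., 3., 10., 20.]])     # 1 sample
X_red = reduce_data_sketch(X, labels)            # [[ 2. 15.]]
X_approx = approximate_data_sketch(X_red, labels)  # [[ 2.  2. 15. 15.]]
```

The approximation replaces each feature by its cluster mean, which is exactly why the ReNA-approximated faces look like superpixel mosaics of the originals.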
"Results visualization",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfig, axx = plt.subplots(3, 4, **{'figsize': (10, 5)})\nplt.gray()\n\nfor i in range(4):\n axx[0, i].imshow(X[i + 30].reshape(n_x, n_y))\n axx[0, i].set_axis_off()\n axx[0, 0].set_title('Original')\n axx[1, i].imshow(X_approx_ward[i + 30].reshape(n_x, n_y))\n axx[1, i].set_axis_off()\n axx[1, 0].set_title('Ward: approximated')\n axx[2, i].imshow(X_approx_rena[i + 30].reshape(n_x, n_y))\n axx[2, i].set_axis_off()\n axx[2, 0].set_title('ReNA: approximated')\n\n# saving results\nfig.savefig('figures/faces.png', bbox_to_inches='tight')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
diegocavalca/Studies
|
deep-learnining-specialization/4. Convolutional Neural Networks/resources/Convolution model - Step by Step - v1.ipynb
|
cc0-1.0
|
[
"Convolutional Neural Networks: Step by Step\nWelcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. \nNotation:\n- Superscript $[l]$ denotes an object of the $l^{th}$ layer. \n - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.\n\n\nSuperscript $(i)$ denotes an object from the $i^{th}$ example. \n\nExample: $x^{(i)}$ is the $i^{th}$ training example input.\n\n\n\nSubscript $i$ denotes the $i^{th}$ entry of a vector.\n\nExample: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer.\n\n\n\n$n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$. \n\n$n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$. \n\nWe assume that you are already familiar with numpy and/or have completed the previous courses of the specialization. Let's get started!\n1 - Packages\nLet's first import all the packages that you will need during this assignment. \n- numpy is the fundamental package for scientific computing with Python.\n- matplotlib is a library to plot graphs in Python.\n- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.",
"import numpy as np\nimport h5py\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n%load_ext autoreload\n%autoreload 2\n\nnp.random.seed(1)",
"2 - Outline of the Assignment\nYou will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed:\n\nConvolution functions, including:\nZero Padding\nConvolve window \nConvolution forward\nConvolution backward (optional)\n\n\nPooling functions, including:\nPooling forward\nCreate mask \nDistribute value\nPooling backward (optional)\n\n\n\nThis notebook will ask you to implement these functions from scratch in numpy. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:\n<img src=\"images/model.png\" style=\"width:800px;height:300px;\">\nNote that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. \n3 - Convolutional Neural Networks\nAlthough programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below. \n<img src=\"images/conv_nn.png\" style=\"width:350px;height:200px;\">\nIn this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself. \n3.1 - Zero-Padding\nZero-padding adds zeros around the border of an image:\n<img src=\"images/PAD.png\" style=\"width:600px;height:400px;\">\n<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : Zero-Padding<br> Image (3 channels, RGB) with a padding of 2. </center></caption>\nThe main benefits of padding are the following:\n\n\nIt allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. 
This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the \"same\" convolution, in which the height/width is exactly preserved after one layer. \n\n\nIt helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image.\n\n\nExercise: Implement the following function, which pads all the images of a batch of examples X with zeros. Use np.pad. Note: if you want to pad the array \"a\" of shape $(5,5,5,5,5)$ with pad = 1 for the 2nd dimension, pad = 3 for the 4th dimension and pad = 0 for the rest, you would do:\npython\na = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))",
"# GRADED FUNCTION: zero_pad\n\ndef zero_pad(X, pad):\n \"\"\"\n Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image, \n as illustrated in Figure 1.\n \n Argument:\n X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images\n pad -- integer, amount of padding around each image on vertical and horizontal dimensions\n \n Returns:\n X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line)\n X_pad = np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)), 'constant', constant_values=(0, 0))\n ### END CODE HERE ###\n \n return X_pad\n\nnp.random.seed(1)\nx = np.random.randn(4, 3, 3, 2)\nx_pad = zero_pad(x, 2)\nprint (\"x.shape =\", x.shape)\nprint (\"x_pad.shape =\", x_pad.shape)\nprint (\"x[1,1] =\", x[1,1])\nprint (\"x_pad[1,1] =\", x_pad[1,1])\n\nfig, axarr = plt.subplots(1, 2)\naxarr[0].set_title('x')\naxarr[0].imshow(x[0,:,:,0])\naxarr[1].set_title('x_pad')\naxarr[1].imshow(x_pad[0,:,:,0])",
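The padding tuple passed to np.pad above touches only the height and width axes, which is easy to check on the shapes alone:

```python
import numpy as np

x = np.random.randn(4, 3, 3, 2)
# Pad 2 zeros on each side of axes 1 and 2 only (height and width);
# the batch and channel axes get (0, 0) and are left untouched.
x_pad = np.pad(x, ((0, 0), (2, 2), (2, 2), (0, 0)),
               'constant', constant_values=0)
# (m, n_H, n_W, n_C) -> (m, n_H + 2*pad, n_W + 2*pad, n_C)
print(x.shape, '->', x_pad.shape)  # (4, 3, 3, 2) -> (4, 7, 7, 2)
```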
"Expected Output:\n<table>\n <tr>\n <td>\n **x.shape**:\n </td>\n <td>\n (4, 3, 3, 2)\n </td>\n </tr>\n <tr>\n <td>\n **x_pad.shape**:\n </td>\n <td>\n (4, 7, 7, 2)\n </td>\n </tr>\n <tr>\n <td>\n **x[1,1]**:\n </td>\n <td>\n [[ 0.90085595 -0.68372786]\n [-0.12289023 -0.93576943]\n [-0.26788808 0.53035547]]\n </td>\n </tr>\n <tr>\n <td>\n **x_pad[1,1]**:\n </td>\n <td>\n [[ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]]\n </td>\n </tr>\n\n</table>\n\n3.2 - Single step of convolution\nIn this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which: \n\nTakes an input volume \nApplies a filter at every position of the input\nOutputs another volume (usually of different size)\n\n<img src=\"images/Convolution_schematic.gif\" style=\"width:500px;height:300px;\">\n<caption><center> <u> <font color='purple'> Figure 2 </u><font color='purple'> : Convolution operation<br> with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption>\nIn a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. \nLater in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. \nExercise: Implement conv_single_step(). Hint.",
"# GRADED FUNCTION: conv_single_step\n\ndef conv_single_step(a_slice_prev, W, b):\n \"\"\"\n Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation \n of the previous layer.\n \n Arguments:\n a_slice_prev -- slice of input data of shape (f, f, n_C_prev)\n W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)\n b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)\n \n Returns:\n Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data\n \"\"\"\n\n ### START CODE HERE ### (≈ 2 lines of code)\n # Element-wise product between a_slice and W. Add bias.\n s = a_slice_prev * W + b\n # Sum over all entries of the volume s\n Z = np.sum(s)\n ### END CODE HERE ###\n\n return Z\n\nnp.random.seed(1)\na_slice_prev = np.random.randn(4, 4, 3)\nW = np.random.randn(4, 4, 3)\nb = np.random.randn(1, 1, 1)\n\nZ = conv_single_step(a_slice_prev, W, b)\nprint(\"Z =\", Z)",
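A hand-checkable case makes the single step easy to verify. With a 2x2x1 slice of ones, a filter of twos, and zero bias, the sum of element-wise products is 4 * 2 = 8. The sketch below mirrors the graded cell's computation (bias added inside the sum, as written above).

```python
import numpy as np

def conv_single_step(a_slice_prev, W, b):
    # Element-wise product plus bias, summed over the whole volume,
    # matching the graded cell above.
    return np.sum(a_slice_prev * W + b)

a_slice = np.ones((2, 2, 1))
W = np.full((2, 2, 1), 2.0)
b = np.zeros((1, 1, 1))
print(conv_single_step(a_slice, W, b))  # 8.0
```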
"Expected Output:\n<table>\n <tr>\n <td>\n **Z**\n </td>\n <td>\n -23.1602122025\n </td>\n </tr>\n\n</table>\n\n3.3 - Convolutional Neural Networks - Forward pass\nIn the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: \n<center>\n<video width=\"620\" height=\"440\" src=\"images/conv_kiank.mp4\" type=\"video/mp4\" controls>\n</video>\n</center>\nExercise: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding. \nHint: \n1. To select a 2x2 slice at the upper left corner of a matrix \"a_prev\" (shape (5,5,3)), you would do:\npython\na_slice_prev = a_prev[0:2,0:2,:]\nThis will be useful when you define a_slice_prev below, using the start/end indexes you will define.\n2. To define a_slice you will need to first define its corners vert_start, vert_end, horiz_start and horiz_end. This figure may be helpful for you to find how each of the corners can be defined using h, w, f and s in the code below.\n<img src=\"images/vert_horiz_kiank.png\" style=\"width:400px;height:300px;\">\n<caption><center> <u> <font color='purple'> Figure 3 </u><font color='purple'> : Definition of a slice using vertical and horizontal start/end (with a 2x2 filter) <br> This figure shows only a single channel. 
</center></caption>\nReminder:\nThe formulas relating the output shape of the convolution to the input shape is:\n$$ n_H = \\lfloor \\frac{n_{H_{prev}} - f + 2 \\times pad}{stride} \\rfloor +1 $$\n$$ n_W = \\lfloor \\frac{n_{W_{prev}} - f + 2 \\times pad}{stride} \\rfloor +1 $$\n$$ n_C = \\text{number of filters used in the convolution}$$\nFor this exercise, we won't worry about vectorization, and will just implement everything with for-loops.",
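The output-shape formulas above reduce to one line of integer arithmetic. As an illustration (the helper name is my own, not part of the assignment):

```python
def conv_output_dim(n_prev, f, pad, stride):
    # floor((n_prev - f + 2*pad) / stride) + 1, per the formula above
    return (n_prev - f + 2 * pad) // stride + 1

# The test cell below uses 4x4 inputs with f=2, pad=2, stride=1:
print(conv_output_dim(4, f=2, pad=2, stride=1))  # 7 -> a 7x7 output
```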
"# GRADED FUNCTION: conv_forward\n\ndef conv_forward(A_prev, W, b, hparameters):\n \"\"\"\n Implements the forward propagation for a convolution function\n \n Arguments:\n A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)\n W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)\n b -- Biases, numpy array of shape (1, 1, 1, n_C)\n hparameters -- python dictionary containing \"stride\" and \"pad\"\n \n Returns:\n Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)\n cache -- cache of values needed for the conv_backward() function\n \"\"\"\n \n ### START CODE HERE ###\n # Retrieve dimensions from A_prev's shape (≈1 line) \n (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape\n \n # Retrieve dimensions from W's shape (≈1 line)\n (f, f, n_C_prev, n_C) = W.shape\n \n # Retrieve information from \"hparameters\" (≈2 lines)\n stride = hparameters['stride']\n pad = hparameters['pad']\n \n # Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)\n n_H = (n_H_prev + 2 * pad - f) // stride + 1\n n_W = (n_W_prev + 2 * pad - f) // stride + 1\n \n # Initialize the output volume Z with zeros. (≈1 line)\n Z = np.zeros((m, n_H, n_W, n_C))\n \n # Create A_prev_pad by padding A_prev\n A_prev_pad = zero_pad(A_prev, pad)\n \n for i in range(m): # loop over the batch of training examples\n a_prev_pad = A_prev_pad[i, ...] # Select ith training example's padded activation\n for h in range(n_H): # loop over vertical axis of the output volume\n for w in range(n_W): # loop over horizontal axis of the output volume\n for c in range(n_C): # loop over channels (= #filters) of the output volume\n \n # Find the corners of the current \"slice\" (≈4 lines)\n vert_start = h * stride\n vert_end = h * stride + f\n horiz_start = w * stride\n horiz_end = w * stride + f\n \n # Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). 
(≈1 line)\n a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]\n \n # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)\n Z[i, h, w, c] = conv_single_step(a_slice_prev, W[...,c], b[...,c])\n \n ### END CODE HERE ###\n \n # Making sure your output shape is correct\n assert(Z.shape == (m, n_H, n_W, n_C))\n \n # Save information in \"cache\" for the backprop\n cache = (A_prev, W, b, hparameters)\n \n return Z, cache\n\nnp.random.seed(1)\nA_prev = np.random.randn(10,4,4,3)\nW = np.random.randn(2,2,3,8)\nb = np.random.randn(1,1,1,8)\nhparameters = {\"pad\" : 2,\n \"stride\": 1}\n\nZ, cache_conv = conv_forward(A_prev, W, b, hparameters)\nprint(\"Z's mean =\", np.mean(Z))\nprint(\"cache_conv[0][1][2][3] =\", cache_conv[0][1][2][3])",
"Expected Output:\n<table>\n <tr>\n <td>\n **Z's mean**\n </td>\n <td>\n 0.155859324889\n </td>\n </tr>\n <tr>\n <td>\n **cache_conv[0][1][2][3]**\n </td>\n <td>\n [-0.20075807 0.18656139 0.41005165]\n </td>\n </tr>\n\n</table>\n\nFinally, CONV layer should also contain an activation, in which case we would add the following line of code:\n```python\nConvolve the window to get back one output neuron\nZ[i, h, w, c] = ...\nApply activation\nA[i, h, w, c] = activation(Z[i, h, w, c])\n```\nYou don't need to do it here. \n4 - Pooling layer\nThe pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to its position in the input. The two types of pooling layers are: \n\n\nMax-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.\n\n\nAverage-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.\n\n\n<table>\n<td>\n<img src=\"images/max_pool1.png\" style=\"width:500px;height:300px;\">\n<td>\n\n<td>\n<img src=\"images/a_pool.png\" style=\"width:500px;height:300px;\">\n<td>\n</table>\n\nThese pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the fxf window you would compute a max or average over. \n4.1 - Forward Pooling\nNow, you are going to implement MAX-POOL and AVG-POOL, in the same function. \nExercise: Implement the forward pass of the pooling layer. Follow the hints in the comments below.\nReminder:\nAs there's no padding, the formulas binding the output shape of the pooling to the input shape is:\n$$ n_H = \\lfloor \\frac{n_{H_{prev}} - f}{stride} \\rfloor +1 $$\n$$ n_W = \\lfloor \\frac{n_{W_{prev}} - f}{stride} \\rfloor +1 $$\n$$ n_C = n_{C_{prev}}$$",
"# GRADED FUNCTION: pool_forward\n\ndef pool_forward(A_prev, hparameters, mode = \"max\"):\n \"\"\"\n Implements the forward pass of the pooling layer\n \n Arguments:\n A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)\n hparameters -- python dictionary containing \"f\" and \"stride\"\n mode -- the pooling mode you would like to use, defined as a string (\"max\" or \"average\")\n \n Returns:\n A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)\n cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters \n \"\"\"\n \n # Retrieve dimensions from the input shape\n (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape\n \n # Retrieve hyperparameters from \"hparameters\"\n f = hparameters[\"f\"]\n stride = hparameters[\"stride\"]\n \n # Define the dimensions of the output\n n_H = int(1 + (n_H_prev - f) / stride)\n n_W = int(1 + (n_W_prev - f) / stride)\n n_C = n_C_prev\n \n # Initialize output matrix A\n A = np.zeros((m, n_H, n_W, n_C)) \n \n ### START CODE HERE ###\n for i in range(m): # loop over the training examples\n for h in range(n_H): # loop on the vertical axis of the output volume\n for w in range(n_W): # loop on the horizontal axis of the output volume\n for c in range(n_C): # loop over the channels of the output volume\n \n # Find the corners of the current \"slice\" (≈4 lines)\n vert_start = h * stride\n vert_end = h * stride + f\n horiz_start = w * stride\n horiz_end = w * stride + f\n \n # Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)\n a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]\n \n # Compute the pooling operation on the slice. Use an if statement to differentiate the modes. 
Use np.max/np.mean.\n if mode == \"max\":\n A[i, h, w, c] = np.max(a_prev_slice)\n elif mode == \"average\":\n A[i, h, w, c] = np.mean(a_prev_slice)\n \n ### END CODE HERE ###\n \n # Store the input and hparameters in \"cache\" for pool_backward()\n cache = (A_prev, hparameters)\n \n # Making sure your output shape is correct\n assert(A.shape == (m, n_H, n_W, n_C))\n \n return A, cache\n\nnp.random.seed(1)\nA_prev = np.random.randn(2, 4, 4, 3)\nhparameters = {\"stride\" : 1, \"f\": 4}\n\nA, cache = pool_forward(A_prev, hparameters)\nprint(\"mode = max\")\nprint(\"A =\", A)\nprint()\nA, cache = pool_forward(A_prev, hparameters, mode = \"average\")\nprint(\"mode = average\")\nprint(\"A =\", A)",
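The same bookkeeping applies to pooling, just without the pad term. With the 4x4 window and stride 1 used in the test cell, each 4x4 image collapses to a single pooled value per channel (the helper name below is my own illustration):

```python
def pool_output_dim(n_prev, f, stride):
    # floor((n_prev - f) / stride) + 1, per the pooling formulas above
    return (n_prev - f) // stride + 1

print(pool_output_dim(4, f=4, stride=1))  # 1 -> output shape (m, 1, 1, n_C)
print(pool_output_dim(6, f=2, stride=2))  # 3 -> a 3x3 pooled output
```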
"Expected Output:\n<table>\n\n <tr>\n <td>\n A =\n </td>\n <td>\n [[[[ 1.74481176 1.6924546 2.10025514]]] <br/>\n\n\n [[[ 1.19891788 1.51981682 2.18557541]]]]\n\n </td>\n </tr>\n <tr>\n <td>\n A =\n </td>\n <td>\n [[[[-0.09498456 0.11180064 -0.14263511]]] <br/>\n\n\n [[[-0.09525108 0.28325018 0.33035185]]]]\n\n </td>\n </tr>\n\n</table>\n\nCongratulations! You have now implemented the forward passes of all the layers of a convolutional network. \nThe remainder of this notebook is optional, and will not be graded.\n5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)\nIn modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish, however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. \nWhen in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you need to calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we briefly present them below.\n5.1 - Convolutional layer backward pass\nLet's start by implementing the backward pass for a CONV layer. 
\n5.1.1 - Computing dA:\nThis is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:\n$$ dA += \\sum_{h=0}^{n_H} \\sum_{w=0}^{n_W} W_c \\times dZ_{hw} \\tag{1}$$\nWhere $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that each time, we multiply the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices. \nIn code, inside the appropriate for-loops, this formula translates into:\npython\nda_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]\n5.1.2 - Computing dW:\nThis is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:\n$$ dW_c += \\sum_{h=0}^{n_H} \\sum_{w=0}^{n_W} a_{slice} \\times dZ_{hw} \\tag{2}$$\nWhere $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. \nIn code, inside the appropriate for-loops, this formula translates into:\npython\ndW[:,:,:,c] += a_slice * dZ[i, h, w, c]\n5.1.3 - Computing db:\nThis is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:\n$$ db = \\sum_h \\sum_w dZ_{hw} \\tag{3}$$\nAs you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. 
\nIn code, inside the appropriate for-loops, this formula translates into:\npython\ndb[:,:,:,c] += dZ[i, h, w, c]\nExercise: Implement the conv_backward function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.",
"def conv_backward(dZ, cache):\n \"\"\"\n Implement the backward propagation for a convolution function\n \n Arguments:\n dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)\n cache -- cache of values needed for the conv_backward(), output of conv_forward()\n \n Returns:\n dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),\n numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)\n dW -- gradient of the cost with respect to the weights of the conv layer (W)\n numpy array of shape (f, f, n_C_prev, n_C)\n db -- gradient of the cost with respect to the biases of the conv layer (b)\n numpy array of shape (1, 1, 1, n_C)\n \"\"\"\n \n ### START CODE HERE ###\n # Retrieve information from \"cache\"\n (A_prev, W, b, hparameters) = cache\n \n # Retrieve dimensions from A_prev's shape\n (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape\n \n # Retrieve dimensions from W's shape\n (f, f, n_C_prev, n_C) = W.shape\n \n # Retrieve information from \"hparameters\"\n stride = hparameters['stride']\n pad = hparameters['pad']\n \n # Retrieve dimensions from dZ's shape\n (m, n_H, n_W, n_C) = dZ.shape\n \n # Initialize dA_prev, dW, db with the correct shapes\n dA_prev = np.zeros(A_prev.shape) \n dW = np.zeros(W.shape)\n db = np.zeros(b.shape)\n\n # Pad A_prev and dA_prev\n A_prev_pad = zero_pad(A_prev, pad)\n dA_prev_pad = zero_pad(dA_prev, pad)\n\n for i in range(m): # loop over the training examples\n \n # select ith training example from A_prev_pad and dA_prev_pad\n a_prev_pad = A_prev_pad[i, ...]\n da_prev_pad = dA_prev_pad[i, ...]\n \n for h in range(0, n_H): # loop over vertical axis of the output volume\n for w in range(0, n_W): # loop over horizontal axis of the output volume\n for c in range(n_C): # loop over the channels of the output volume\n \n # Find the corners of the current \"slice\"\n vert_start = h * stride\n vert_end = h * stride + f\n horiz_start = w * stride\n 
horiz_end = w * stride + f\n \n # Use the corners to define the slice from a_prev_pad\n a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]\n\n # Update gradients for the window and the filter's parameters using the code formulas given above\n da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[..., c] * dZ[i, h, w, c]\n dW[:,:,:,c] += a_slice * dZ[i, h, w, c]\n db[:,:,:,c] += dZ[i, h, w, c]\n \n # Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])\n dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]\n ### END CODE HERE ###\n \n # Making sure your output shape is correct\n assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))\n \n return dA_prev, dW, db\n\nnp.random.seed(1)\ndA, dW, db = conv_backward(Z, cache_conv)\nprint(\"dA_mean =\", np.mean(dA))\nprint(\"dW_mean =\", np.mean(dW))\nprint(\"db_mean =\", np.mean(db))",
"Expected Output: \n<table>\n <tr>\n <td>\n **dA_mean**\n </td>\n <td>\n 9.60899067587\n </td>\n </tr>\n <tr>\n <td>\n **dW_mean**\n </td>\n <td>\n 10.5817412755\n </td>\n </tr>\n <tr>\n <td>\n **db_mean**\n </td>\n <td>\n 76.3710691956\n </td>\n </tr>\n\n</table>\n\n5.2 Pooling layer - backward pass\nNext, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer. \n5.2.1 Max pooling - backward pass\nBefore jumping into the backpropagation of the pooling layer, you are going to build a helper function called create_mask_from_window() which does the following: \n$$ X = \\begin{bmatrix}\n1 && 3 \\\n4 && 2\n\\end{bmatrix} \\quad \\rightarrow \\quad M =\\begin{bmatrix}\n0 && 0 \\\n1 && 0\n\\end{bmatrix}\\tag{4}$$\nAs you can see, this function creates a \"mask\" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask. \nExercise: Implement create_mask_from_window(). This function will be helpful for pooling backward. \nHints:\n- np.max() may be helpful. It computes the maximum of an array.\n- If you have a matrix X and a scalar x: A = (X == x) will return a matrix A of the same size as X such that:\nA[i,j] = True if X[i,j] = x\nA[i,j] = False if X[i,j] != x\n- Here, you don't need to consider cases where there are several maxima in a matrix.",
"def create_mask_from_window(x):\n \"\"\"\n Creates a mask from an input matrix x, to identify the max entry of x.\n \n Arguments:\n x -- Array of shape (f, f)\n \n Returns:\n mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.\n \"\"\"\n \n ### START CODE HERE ### (≈1 line)\n mask = x == np.max(x)\n ### END CODE HERE ###\n \n return mask\n\nnp.random.seed(1)\nx = np.random.randn(2,3)\nmask = create_mask_from_window(x)\nprint('x = ', x)\nprint(\"mask = \", mask)",
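Running the helper on the 2x2 matrix from equation (4) reproduces the mask shown there:

```python
import numpy as np

def create_mask_from_window(x):
    # True at the position of the max entry, False elsewhere,
    # as in the cell above.
    return x == np.max(x)

X = np.array([[1., 3.],
              [4., 2.]])
mask = create_mask_from_window(X)
print(mask.astype(int))  # [[0 0]
                         #  [1 0]] -- the max (4) sits at row 1, col 0
```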
"Expected Output: \n<table> \n<tr> \n<td>\n\n**x =**\n</td>\n\n<td>\n\n[[ 1.62434536 -0.61175641 -0.52817175] <br>\n [-1.07296862 0.86540763 -2.3015387 ]]\n\n </td>\n</tr>\n\n<tr> \n<td>\n**mask =**\n</td>\n<td>\n[[ True False False] <br>\n [False False False]]\n</td>\n</tr>\n\n\n</table>\n\nWhy do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will \"propagate\" the gradient back to this particular input value that had influenced the cost. \n5.2.2 - Average pooling - backward pass\nIn max pooling, for each input window, all the \"influence\" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this.\nFor example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: \n$$ dZ = 1 \\quad \\rightarrow \\quad dZ =\\begin{bmatrix}\n1/4 && 1/4 \\\n1/4 && 1/4\n\\end{bmatrix}\\tag{5}$$\nThis implies that each position in the $dZ$ matrix contributes equally to output because in the forward pass, we took an average. \nExercise: Implement the function below to equally distribute a value dz through a matrix of dimension shape. Hint",
"def distribute_value(dz, shape):\n \"\"\"\n Distributes the input value in the matrix of dimension shape\n \n Arguments:\n dz -- input scalar\n shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz\n \n Returns:\n a -- Array of size (n_H, n_W) for which we distributed the value of dz\n \"\"\"\n \n ### START CODE HERE ###\n # Retrieve dimensions from shape (≈1 line)\n (n_H, n_W) = shape\n \n # Compute the value to distribute on the matrix (≈1 line)\n average = dz / (n_H * n_W)\n \n # Create a matrix where every entry is the \"average\" value (≈1 line)\n a = np.ones(shape) * average\n ### END CODE HERE ###\n \n return a\n\na = distribute_value(2, (2,2))\nprint('distributed value =', a)",
"Expected Output: \n<table> \n<tr> \n<td>\ndistributed_value =\n</td>\n<td>\n[[ 0.5 0.5]\n<br> \n[ 0.5 0.5]]\n</td>\n</tr>\n</table>\n\n5.2.3 Putting it together: Pooling backward\nYou now have everything you need to compute backward propagation on a pooling layer.\nExercise: Implement the pool_backward function in both modes (\"max\" and \"average\"). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an if/elif statement to see if the mode is equal to 'max' or 'average'. If it is equal to 'average' you should use the distribute_value() function you implemented above to create a matrix of the same shape as a_slice. Otherwise, the mode is equal to 'max', and you will create a mask with create_mask_from_window() and multiply it by the corresponding value of dZ.",
"def pool_backward(dA, cache, mode = \"max\"):\n \"\"\"\n Implements the backward pass of the pooling layer\n \n Arguments:\n dA -- gradient of cost with respect to the output of the pooling layer, same shape as A\n cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters \n mode -- the pooling mode you would like to use, defined as a string (\"max\" or \"average\")\n \n Returns:\n dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev\n \"\"\"\n \n ### START CODE HERE ###\n \n # Retrieve information from cache (≈1 line)\n (A_prev, hparameters) = cache\n \n # Retrieve hyperparameters from \"hparameters\" (≈2 lines)\n stride = hparameters['stride']\n f = hparameters['f']\n \n # Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)\n m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape\n m, n_H, n_W, n_C = dA.shape\n \n # Initialize dA_prev with zeros (≈1 line)\n dA_prev = np.zeros(A_prev.shape)\n \n for i in range(m): # loop over the training examples\n \n # select training example from A_prev (≈1 line)\n a_prev = A_prev[i]\n \n for h in range(n_H): # loop on the vertical axis\n for w in range(n_W): # loop on the horizontal axis\n for c in range(n_C): # loop over the channels (depth)\n \n # Find the corners of the current \"slice\" (≈4 lines)\n vert_start = h * stride\n vert_end = h * stride + f\n horiz_start = w * stride\n horiz_end = w * stride + f\n \n # Compute the backward propagation in both modes.\n if mode == \"max\":\n \n # Use the corners and \"c\" to define the current slice from a_prev (≈1 line)\n a_prev_slice = a_prev[vert_start: vert_end, horiz_start: horiz_end, c]\n # Create the mask from a_prev_slice (≈1 line)\n mask = create_mask_from_window(a_prev_slice)\n # Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)\n dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += mask * dA[i, h, w, c]\n \n elif mode == 
\"average\":\n \n # Get the value a from dA (≈1 line)\n da = dA[i, h, w, c]\n # Define the shape of the filter as fxf (≈1 line)\n shape = (f, f)\n # Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)\n dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da, shape)\n \n ### END CODE ###\n \n # Making sure your output shape is correct\n assert(dA_prev.shape == A_prev.shape)\n \n return dA_prev\n\nnp.random.seed(1)\nA_prev = np.random.randn(5, 5, 3, 2)\nhparameters = {\"stride\" : 1, \"f\": 2}\nA, cache = pool_forward(A_prev, hparameters)\ndA = np.random.randn(5, 4, 2, 2)\n\ndA_prev = pool_backward(dA, cache, mode = \"max\")\nprint(\"mode = max\")\nprint('mean of dA = ', np.mean(dA))\nprint('dA_prev[1,1] = ', dA_prev[1,1]) \nprint()\ndA_prev = pool_backward(dA, cache, mode = \"average\")\nprint(\"mode = average\")\nprint('mean of dA = ', np.mean(dA))\nprint('dA_prev[1,1] = ', dA_prev[1,1]) ",
"Expected Output: \nmode = max:\n<table> \n<tr> \n<td>\n\n**mean of dA =**\n</td>\n\n<td>\n\n0.145713902729\n\n </td>\n</tr>\n\n<tr> \n<td>\n**dA_prev[1,1] =** \n</td>\n<td>\n[[ 0. 0. ] <br>\n [ 5.05844394 -1.68282702] <br>\n [ 0. 0. ]]\n</td>\n</tr>\n</table>\n\nmode = average\n<table> \n<tr> \n<td>\n\n**mean of dA =**\n</td>\n\n<td>\n\n0.145713902729\n\n </td>\n</tr>\n\n<tr> \n<td>\n**dA_prev[1,1] =** \n</td>\n<td>\n[[ 0.08485462 0.2787552 ] <br>\n [ 1.26461098 -0.25749373] <br>\n [ 1.17975636 -0.53624893]]\n</td>\n</tr>\n</table>\n\nCongratulations!\nCongratulations on completing this assignment. You now understand how convolutional neural networks work. You have implemented all the building blocks of a neural network. In the next assignment you will implement a ConvNet using TensorFlow."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ultiyuan/test0
|
lessons/.ipynb_checkpoints/SeparationPrediction-checkpoint.ipynb
|
gpl-2.0
|
[
"Separation prediction on general bodies\nIn this final notebook, we will combine the vortex panel method and the boundary layer solver to predict separation on any 2D shape and make drag predictions.\nBoundaryLayer module\nAs with VortexPanel.py, we've made a python file called BoundaryLayer.py which has the march function inside.\nWhat will we need to interface these two modules? VortexPanel doesn't need anything from BoundaryLayer - it just needs a geometry and angle of attack.",
"import numpy\nfrom matplotlib import pyplot\n%matplotlib inline\nfrom VortexPanel import Panel,solve_gamma_kutta,plot_flow,make_jukowski,make_circle\n\nalpha = numpy.pi/16\nN = 64\nfoil = make_jukowski(N)\nsolve_gamma_kutta(foil,alpha)\nplot_flow(foil,alpha)",
"From the previous notebook we know the function march doesn't need the details of the geometry, but it does need:\n\n$s$: the distance along the boundary layer\n$u_e(x)$: the velocity on the edge of the boundary layer\n$u_e'(x)$: the tangential derivative of $u_e$\n$\\nu$: the kinematic viscosity\n\nThe viscosity is obvious, but we'll need to get the other variables from the potential flow solution.\nQuiz 1\nWhat is the tangential velocity $u_e = \\vec u\\cdot\\hat s$ on the fluid side of panel $p_i$?\n\n$\\left(\\vec U +\\sum_{j=0}^{N-1} \\gamma_j \\vec f_j(x_i,y_i)\\right)\\cdot \\hat s_i$\n$-\\gamma_i$\n$U_\\infty$\n\nHint: Remember that we have set a boundary condition on the body side of the panel.\n\nNext, let's get $s$. Note that a body will form two boundary layers, one on each side. We need to identify the starting point of these two flow regions.\nQuiz 2\nWhere is the starting point of the two boundary layers?\n\nThe first and last panels: foil[0], foil[N-1]\nThe panel where $u_e = 0$\nThe left-most panel, foil[N/2]\n\n\nThis makes it straightforward to split the body into the two boundary layer sections:",
"# split panels into two sections based on the flow velocity\ndef split_panels(panels):\n # positive velocity defines `top` BL\n top = [p for p in panels if p.gamma<=0] \n # negative defines the `bottom`\n bottom = [p for p in panels if p.gamma>=0]\n # reverse array so panel[0] is stagnation\n bottom = bottom[::-1]\n\n return top,bottom\n\nfoil_top,foil_bottom = split_panels(foil)",
"Note that we changed the direction of the bottom array so that it runs from the stagnation point to the trailing edge, in accordance with the flow direction.\nLet's plot them to make sure we got it right:",
"# plot panels with labels\ndef plot_segment(panels):\n pyplot.figure(figsize=(10,2))\n pyplot.axis([-1.2,1.2,-.3,.3])\n for i,p_i in enumerate(panels): \n p_i.plot()\n if i%10 == 0:\n pyplot.scatter(p_i.xc,p_i.yc)\n pyplot.text(p_i.xc,p_i.yc+0.05, \n 'panel ['+'%i'%i+']',fontsize=12)\n\nplot_segment(foil_top)\n\nplot_segment(foil_bottom)",
"Pohlhausen class\nNow we just need to pull out the distance and velocity data from these Panel arrays and pass it to the march function. To keep this clean we define a new class Pohlhausen.",
"# Pohlhausen Boundary Layer class\nclass Pohlhausen:\n def __init__(self,panels,nu):\n self.u_e = [abs(p.gamma) for p in panels] # tangential velocity\n self.s = numpy.empty_like(self.u_e) # initialize distance array\n self.s[0] = panels[0].S\n for i in range(len(self.s)-1): # fill distance array\n self.s[i+1] = self.s[i]+panels[i].S+panels[i+1].S \n # compute velocity gradient du_e/ds on the non-uniform grid s\n self.du_e = numpy.gradient(self.u_e,self.s)\n\n self.nu = nu # kinematic viscosity\n self.xc = [p.xc for p in panels] # x and ...\n self.yc = [p.yc for p in panels] # y locations\n \n def march(self):\n # march down the boundary layer until separation\n from BoundaryLayer import march\n self.delta,self.lam,self.iSep = march(self.s,self.u_e,self.du_e,self.nu)\n\n # interpolate values at the separation point\n def sep_interp(y): return numpy.interp( # interpolate function\n 12,-self.lam[self.iSep:self.iSep+2],y[self.iSep:self.iSep+2])\n self.s_sep = sep_interp(self.s)\n self.u_e_sep = sep_interp(self.u_e)\n self.x_sep = sep_interp(self.xc)\n self.y_sep = sep_interp(self.yc)\n self.delta_sep = sep_interp(self.delta)",
"A few implementation notes:\n - The distance from the center of panel $i+1$ to panel $i$ is $\\Delta s_{i+1} = S_i+S_{i+1}$, therefore $s_{i+1} = s_i+S_i+S_{i+1}$.\n - The numpy.gradient function is used to get $u_e'$. \n - Pohlhausen.march calls march from the last notebook and then interpolates linearly to get values at the separation point.\nCircle boundary layer\nLet's test this with the case we tried before, the flow around a circle. But this time we'll use the external flow from the vortex panel method instead of the analytic solution.\nQuiz 3\nWhy do I keep testing code on cases we've seen before?\n\nI'm terribly forgetful\nNew examples take work\nI want to validate new code by comparing to known answers\n\n\nNumerical fundamental: Validation\nEvery piece of code must be tested against a nontrivial example with a known solution\nFirst let's check that $s$, $u_e$, and $u_e'$ are computed correctly:",
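The distance recurrence and the gradient step above can be checked on a toy set of panel half-widths (hypothetical values, not the actual foil geometry):

```python
import numpy as np

# hypothetical panel half-widths S_i (in the notebook these come from Panel.S)
S = np.array([0.1, 0.1, 0.2, 0.2, 0.1])

# s_{i+1} = s_i + S_i + S_{i+1}: cumulative distance between panel centers
s = np.empty_like(S)
s[0] = S[0]
for i in range(len(s) - 1):
    s[i + 1] = s[i] + S[i] + S[i + 1]

# a velocity sampled at the centers, e.g. u_e = 2 sin(s) as for the circle
u_e = 2. * np.sin(s)

# numpy.gradient with a coordinate array handles the non-uniform spacing
du_e = np.gradient(u_e, s)

print(s)     # [0.1 0.3 0.6 1.  1.3]
print(du_e)  # close to the analytic derivative 2 cos(s)
```

Here `np.gradient` is called with the coordinate array `s` directly (supported since NumPy 1.13), which avoids having to pre-compute the spacings by hand.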
"circle = make_circle(N) # set-up circle\nsolve_gamma_kutta(circle) # solve flow\ntop,bottom = split_panels(circle) # split panels\nnu = 1e-5 # set viscosity\ntop = Pohlhausen(top,nu) # get BL inputs\nu_e = 2.*numpy.sin(top.s) # analytic u_e\ndu_e = 2.*numpy.cos(top.s) # analytic du_e\n\n# compare the boundary layer inputs\npyplot.xlabel(r\"$s$\",fontsize=16)\npyplot.plot(top.s,top.u_e, lw=2, label=r'Panel $u_e$')\npyplot.plot(top.s,u_e, lw=2, label=r'Analytic $u_e$')\npyplot.plot(top.s,top.du_e, lw=2, label=r\"Panel $u_e'$\")\npyplot.plot(top.s,du_e, lw=2, label=r\"Analytic $u_e'$\")\npyplot.legend(loc='lower left')",
"Those look very good. Now let's march and look at $\\delta$ and the separation point.",
"top.march() # solve the boundary layer flow\ni = top.iSep+2 # last point to plot\n\n# plot the boundary layer thickness and separation point\npyplot.ylabel(r'$\\delta$', fontsize=16)\npyplot.xlabel(r'$s$', fontsize=16)\npyplot.plot(top.s[:i],top.delta[:i],lw=2)\npyplot.scatter(top.s_sep,top.delta_sep, s=100, c='r')\npyplot.text(top.s_sep-0.6,top.delta_sep, \n ' separation \\n s='+'%.2f' % top.s_sep,fontsize=12)",
"Same answer as the previous notebook. Good.\nNow that we know the code is working, let's write a function to set up, solve, and plot the separation points for the boundary layer flow.",
"def solve_plot_boundary_layers(panels,alpha=0,nu=1e-5):\n\n # split the panels\n top_panels,bottom_panels = split_panels(panels)\n \n # Set up and solve the top boundary layer\n top = Pohlhausen(top_panels,nu)\n top.march()\n\n # Set up and solve the bottom boundary layer\n bottom = Pohlhausen(bottom_panels,nu)\n bottom.march()\n \n # plot flow with separation points\n plot_flow(panels,alpha)\n pyplot.scatter(top.x_sep, top.y_sep, s=100, c='r')\n pyplot.scatter(bottom.x_sep, bottom.y_sep, s=100, c='g')\n \n return top,bottom\n\ntop,bottom = solve_plot_boundary_layers(circle)",
"The red and green dots mark the separation point for the top and bottom boundary layer, respectively.\nSeparation occurs soon after the flow begins to decelerate. Physically, the boundary layer loses energy to friction as it travels over the front of the body (remember how large $C_F$ was?) and cannot cope with the adverse pressure gradient on the back of the body.\nJukowski foil validation\nNow let's write a function to get the complete flow around a Jukowski foil:",
"def predict_jukowski_separation(t_c,alpha=0,N=128):\n # set dx to get the correct t/c\n foil = make_jukowski(N,dx=t_c-0.019)\n\n # find and print t/c\n x0 = foil[N//2].xc\n c = foil[0].xc-x0\n t = 2.*numpy.max([p.yc for p in foil])\n print(\"t/c = \"+\"%.3f\"%(t/c))\n\n # solve potential flow and boundary layer evolution\n solve_gamma_kutta(foil,alpha)\n top,bottom = solve_plot_boundary_layers(foil,alpha)\n\n # print message\n print(\"Separation at x/c = \"+\"%.3f\"%\n ((top.x_sep-x0)/c)+\" from the leading edge\")\n\npredict_jukowski_separation(0.2,alpha)",
"Quiz 4\nWe know $\\nu$ doesn't impact separation. How can you move the separation points?\n\nChange the foil thickness\nChange the angle of attack\nChange the resolution\n\n\nWe can make sure the behavior above is correct by validating against the analytic solution for simple geometries. Here is a summary figure from Chapter 3 of Hoerner's Fluid-Dynamic Drag\n\n\n\nThere are two Jukowski examples: $t/c=0.15$ which separates at $x/c\\approx0.49$ from the leading edge, and $t/c=0.17$, which separates at $x/c\\approx0.39$.",
"predict_jukowski_separation(t_c=0.15)",
"The $t/c=0.15$ case matches very well with Hoerner's picture.",
"predict_jukowski_separation(t_c=0.17)",
"Quiz 5\nWhat could be the cause of the ~$15\\%$ discrepancy in the $t/c=0.17$ case?\n\nError in Hoerner\nError in Pohlhausen boundary layer ODE\nError in numerical method (VortexPanel, BoundaryLayer, etc)\n\nEllipse validation\nLet's see how we fare in the ellipse cases. From the Hoerner image I estimate:\n$t/c$| 1/2 | 1/4 | 1/8 \n---|---|---|---\n$x/c$| $0.75$ | $0.85$ | $0.92$",
"def predict_ellipse_separation(t_c,N=128,alpha=0):\n ellipse = make_circle(N,t_c)\n print(\"t/c = \"+\"%.3f\"%(t_c))\n\n # solve potential flow and boundary layer evolution\n solve_gamma_kutta(ellipse,alpha)\n top,bottom = solve_plot_boundary_layers(ellipse,alpha)\n\n # print message\n print(\"Separation at x/c = \"+\"%.3f\"%\n ((top.x_sep+1)/2.)+\" from the leading edge\") \n\npredict_ellipse_separation(t_c=0.5)\n\npredict_ellipse_separation(t_c=0.25)\n\npredict_ellipse_separation(t_c=0.125)",
"So I get the feeling Hoerner has a typo... that's the first one I've found.\nPressure force estimates\nNow that we can predict the separation point, we can make non-zero pressure force estimates.\nThe pressure force on the body is\n$$\\vec F_p = \\oint_{\\cal S} p \\hat n ds$$\nwhere $\\cal S$ is the body surface and $\\hat n$ is the normal to the surface. \nQuiz 6\nWhat is the equation for the pressure coefficient $c_p(s)$?\n\n$c_p(s) = 1-4\\sin(s)$\n$c_p(s) = 1-u_e^2(s)/U_\\infty^2$\n$c_p(s) = (p(s)-p_\\infty)/(\\frac 12\\rho U_\\infty^2)$\n\n\nTherefore, the drag coefficient is\n$$C_D = \\frac{-F_x}{\\frac 12 \\rho U^2_\\infty A} = \\frac1w\\oint_{\\cal S} c_p s_y ds$$\nwhere $A$ is the 2D projected area of the body (the width) and $s_y = -n_x$.\nUsing the vortex panel method we can determine the potential flow solution for $c_p$, but what does $c_p$ look like in a real flow with separation? \n\n\n\nI've sketched the results for the flow around a circular cylinder above. The measured pressure at the front of a body in a viscous fluid is fairly well predicted by potential flow. \nHowever, the pressure coefficient completely deviates from the potential flow prediction near the point of separation. Indeed it remains essentially constant in the separated flow region. \nQuiz 7\nWhat would be a simple way to estimate the drag on a body?\n\nIntegrate $c_p$ from the vortex panel method.\nSet $c_p(s) = c_p(s_{sep})$ for $s>s_{sep}$, and then integrate.\n\n\nYour turn\nCompute $C_D$ for the circle and compare to the laminar experimental value of ~$1$.",
"# your code here",
"Ignore the line below - it just loads the style sheet.",
"from IPython.core.display import HTML\ndef css_styling():\n styles = open('../styles/custom.css', 'r').read()\n return HTML(styles)\ncss_styling()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jorisvandenbossche/DS-python-data-analysis
|
notebooks/case2_observations_processing.ipynb
|
bsd-3-clause
|
[
"<p><font size=\"6\"><b> CASE - Observation data - data cleaning and enrichment</b></font></p>\n\n\n© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nplt.style.use('seaborn-whitegrid')",
"Scenario:<br>\nObservation data of species (when and where is a given species observed) is typical in biodiversity studies. Large international initiatives support the collection of this data by volunteers, e.g. iNaturalist. Thanks to initiatives like GBIF, a lot of these data is also openly available.\nYou decide to share data of a field campaign, but the data set still requires some cleaning and standardization. For example, the coordinates, can be named x/y, decimalLatitude/decimalLongitude, lat/long... Luckily, you know of an international open data standard to describe occurrence/observation data, i.e. Darwin Core (DwC). Instead of inventing your own data model, you decide to comply to this international standard. The latter will enhance communication and will also make your data compliant with GBIF.\nIn short, the DwC describes a flat table (cfr. CSV) with an agreed name convention on the header names and conventions on how certain data types need to be represented (as a reference, an in depth description is given here). 
For this tutorial, we will focus on a few of the existing terms to learn some elements about data cleaning:\n* eventDate: ISO 8601 format of dates\n* scientificName: the accepted scientific name of the species\n* decimalLatitude/decimalLongitude: coordinates of the occurrence in WGS84 format\n* sex: either male or female to characterize the sex of the occurrence\n* occurrenceID: an identifier within the data set to identify the individual records\n* datasetName: a static string defining the source of the data\nFurthermore, additional information concerning the taxonomy will be added using an external API service.\nDataset to work on:\nFor this data set, the data is split up in the following main data files:\n* surveys.csv the data with the surveys in the individual plots\n* species.csv the overview list of the species short-names\n* plot_location.xlsx the overview of coordinates of the individual locations\nThe data originates from a study of a Chihuahuan desert ecosystem near Portal, Arizona.\n\n1. Survey-data\nReading in the data of the individual surveys:",
"survey_data = pd.read_csv(\"data/surveys.csv\")\n\nsurvey_data.head()",
"<div class=\"alert alert-success\">\n\n**EXERCISE 1**\n\n- How many individual records (occurrences) does the survey data set contain?\n\n</div>",
"# %load _solutions/case2_observations_processing1.py",
"Adding the data source information as static column\nFor convenience when this data-set will be combined with other datasets, we first add a column of static values, defining the datasetName of this particular data:",
"datasetname = \"Ecological Archives E090-118-D1.\"",
"Adding this static value as a new column datasetName:\n<div class=\"alert alert-success\">\n\n**EXERCISE 2**\n\nAdd a new column, `datasetName`, to the survey data set with `datasetname` as value for all of the records (static value for the entire data set)\n\n<details><summary>Hints</summary>\n\n- When a column does not exist, a new `df[\"a_new_column\"]` can be created by assigning a value to it.\n- No `for`-loop is required, as Pandas will automatically broadcast a single string value to each of the rows in the `DataFrame`.\n\n</details>\n\n</div>",
"# %load _solutions/case2_observations_processing2.py",
"Cleaning the sex_char column into a DwC called sex column\n<div class=\"alert alert-success\">\n\n**EXERCISE 3**\n\n- Get a list of the unique values for the column `sex_char`.\n\n<details><summary>Hints</summary>\n\n- To find the unique values, look for a function called `unique` (remember `SHIFT`+`TAB` combination to explore the available methods/attributes?)\n\n</details>\n\n</div>",
"# %load _solutions/case2_observations_processing3.py",
"So, apparently, more information is provided in this column, whereas according to the metadata information, the sex information should be either M (male) or F (female). We will create a column, named sex and convert the symbols to the corresponding sex, taking into account the following mapping of the values (see metadata for more details):\n* M -> male\n* F -> female\n* R -> male\n* P -> female\n* Z -> nan\nAt the same time, we will save the original information of the sex_char in a separate column, called verbatimSex, as a reference in case we need the original data later.\nIn summary, we have to:\n* rename the sex_char column to verbatimSex\n* create a new column with the name sex\n* map the original values of the sex_char to the values male and female according to the mapping above\nFirst, let's convert the name of the column header sex_char to verbatimSex with the rename function:",
"survey_data = survey_data.rename(columns={'sex_char': 'verbatimSex'})",
"<div class=\"alert alert-success\">\n\n**EXERCISE 4**\n\n- Express the mapping of the values (e.g. `M` -> `male`) into a Python dictionary object with the variable name `sex_dict`. `Z` values correspond to _Not a Number_, which can be defined as `np.nan`.\n- Use the `sex_dict` dictionary to replace the values in the `verbatimSex` column to the new values and save the mapped values in a new column 'sex' of the DataFrame.\n\n<details><summary>Hints</summary>\n\n- A dictionary is a Python standard library data structure, see https://docs.python.org/3/tutorial/datastructures.html#dictionaries - no Pandas magic involved when you need a key/value mapping.\n- When you need to replace values, look for the Pandas method `replace`.\n\n</details>\n\n</div>",
"# %load _solutions/case2_observations_processing4.py\n\n# %load _solutions/case2_observations_processing5.py",
"Checking the current frequency of values of the resulting sex column (this should result in the values male, female and nan):",
"survey_data[\"sex\"].unique()",
"To check what the frequency of occurrences is for male/female of the categories, a bar chart is a possible representation:\n<div class=\"alert alert-success\">\n\n**EXERCISE 5**\n\n- Make a horizontal bar chart comparing the number of male, female and unknown (`NaN`) records in the data set.\n\n<details><summary>Hints</summary>\n\n- Pandas provides a shortcut method `value_counts` which works on Pandas `Series` to count unique values. Explore the documentation of the `value_counts` method to include the `NaN` values as well.\n- Check in the help of the Pandas plot function for the `kind` parameter.\n\n</details>\n\n</div>",
"# %load _solutions/case2_observations_processing6.py",
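As a quick aside on counting missing values, here is a minimal sketch on a toy Series (hypothetical values, not the survey data): `value_counts(dropna=False)` reports the `NaN` group explicitly, whereas a `groupby`-based count drops it by default.

```python
import numpy as np
import pandas as pd

sex = pd.Series(["male", "female", np.nan, "male", np.nan])

# value_counts can count the missing values explicitly
print(sex.value_counts(dropna=False))

# groupby + size drops the NaN group by default
print(sex.groupby(sex).size())
```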
"<div class=\"alert alert-warning\">\n\n<b>NOTE</b>: The usage of `groupby` combined with the `size` of each group would be an option as well. However, the latter does not support counting the `NaN` values. The `value_counts` method does support this with the `dropna=False` argument.\n\n</div>\n\nSolving double entry field by decoupling\nWhen checking the species unique information:",
"survey_data[\"species\"].unique()\n\nsurvey_data.head(10)",
"There apparently exists a double entry: 'DM and SH', which basically defines two records and should be decoupled to two individual records (i.e. rows). Hence, we should be able to create an additional row based on this split. To do so, Pandas provides a dedicated function since version 0.25, called explode. Starting from a small subset example:",
"example = survey_data.loc[7:10, \"species\"]\nexample",
"Using the split method on strings, we can split the string using a given character, in this case the word and:",
"example.str.split(\"and\")",
"The explode method will create a row for each element in the list:",
"example_split = example.str.split(\"and\").explode()\nexample_split",
"Hence, the DM and SH are now enlisted in separate rows. Other rows remain unchanged. The only remaining issue is the spaces around the characters:",
"example_split.iloc[1], example_split.iloc[2]",
"Which we can solve again using the string method strip, removing the spaces before and after the characters:",
"example_split.str.strip()",
"To make this reusable, let's create a dedicated function to combine these steps, called solve_double_field_entry:",
"def solve_double_field_entry(df, keyword=\"and\", column=\"verbatimEventDate\"):\n \"\"\"Split on keyword in column for an enumeration and create extra record\n\n Parameters\n ----------\n df: pd.DataFrame\n DataFrame with a double field entry in one or more values\n keyword: str\n word/character to split the double records on\n column: str\n column name to use for the decoupling of the records\n \"\"\"\n df = df.copy() # copy the input DataFrame to avoid editing the original\n df[column] = df[column].str.split(keyword)\n df = df.explode(column)\n df[column] = df[column].str.strip() # remove white space around the words\n return df",
"The function takes a DataFrame as input, splits the record into separate rows and returns an updated DataFrame. We can use this function to get an update of the DataFrame, with an additional row (observation) added by decoupling the specific field. Let's apply this new function.\n<div class=\"alert alert-success\">\n\n**EXERCISE 6**\n\n- Use the function `solve_double_field_entry` to update the `survey_data` by decoupling the double entries. Save the result as a variable `survey_data_decoupled`.\n\n<details><summary>Hints</summary>\n\n- As we added a 'docstring' to the function, we can check our own documentation to know how to use the function and which inputs we should provide. You can use `SHIFT` + `TAB` to explore the documentation just like any other function.\n\n</details>\n\n</div>",
"# %load _solutions/case2_observations_processing7.py\n\nsurvey_data_decoupled[\"species\"].unique()\n\nsurvey_data_decoupled.head(11)",
"Create new occurrence identifier\nThe record_id is no longer a unique identifier for each observation after the decoupling of this data set. We will make a new data set specific identifier, by adding a column called occurrenceID that takes a new counter as identifier. As a simple and straightforward approach, we will use a new counter for the whole dataset, starting with 1:",
"np.arange(1, len(survey_data_decoupled) + 1, 1)",
"To create a new column with header occurrenceID with the values 1 -> 35550 as field values:",
"survey_data_decoupled[\"occurrenceID\"] = np.arange(1, len(survey_data_decoupled) + 1, 1)",
"To overcome the confusion on having both a record_id and occurrenceID field, we will remove the record_id term:",
"survey_data_decoupled = survey_data_decoupled.drop(columns=\"record_id\")",
"Hence, columns can be drop-ped out of a DataFrame.",
"survey_data_decoupled.head(10)",
"Converting the date values\nIn the survey data set we received, the month, day, and year columns contain the information about the date, i.e. eventDate in DarwinCore terms. We want this data in an ISO format YYYY-MM-DD. A convenient Pandas function is to_datetime, which provides multiple options to interpret dates. One of the options is the automatic interpretation of some 'typical' columns, like year, month and day, when passing a DataFrame.",
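A minimal sketch of this column-based interpretation on hypothetical data (not the survey data itself):

```python
import pandas as pd

# toy year/month/day columns, mimicking the survey data layout
dates = pd.DataFrame({"year": [2000, 2001], "month": [4, 12], "day": [30, 1]})

# passing a DataFrame with these 'typical' column names assembles full dates
parsed = pd.to_datetime(dates)
print(parsed)  # datetime64[ns] Series: 2000-04-30 and 2001-12-01
```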
"# pd.to_datetime(survey_data_decoupled[[\"year\", \"month\", \"day\"]]) # uncomment the line and test this statement",
"This is not working; not all dates can be interpreted... We need some more information about the cause of the errors. By using the option errors='coerce', the problematic values will be labeled as the missing value NaT. We can count the number of dates that cannot be interpreted:",
"sum(pd.to_datetime(survey_data_decoupled[[\"year\", \"month\", \"day\"]], errors='coerce').isna())",
"<div class=\"alert alert-success\">\n\n**EXERCISE 7**\n\n- Make a selection of `survey_data_decoupled` containing those records that can not correctly be interpreted as date values and save the resulting `DataFrame` as a new variable `trouble_makers`\n\n<details><summary>Hints</summary>\n\n- The result of the `.isna()` method is a `Series` of boolean values, which can be used to make a selection (so called boolean indexing or filtering)\n\n</details>\n\n</div>",
"# %load _solutions/case2_observations_processing8.py",
"Checking some characteristics of the trouble_makers:",
"trouble_makers.head()\n\ntrouble_makers[\"day\"].unique()\n\ntrouble_makers[\"month\"].unique()\n\ntrouble_makers[\"year\"].unique()",
"The issue is the presence of day 31 during the months April and September of the year 2000. At this moment, we would have to recheck the original data in order to know how the issue could be solved. Apparently - for this specific case - there was a data-entry problem in 2000: the day values of 31 during these months should actually have been 30. It would be optimal to correct this in the source data set, but for the exercise, we will correct it here.\n<div class=\"alert alert-success\">\n\n**EXERCISE 8**\n\n- In the `DataFrame` `survey_data_decoupled`, assign the value 30 (instead of 31) to the `day` values of all of the trouble makers.\n\n<details><summary>Hints</summary>\n\n- No `for`-loop is required, but use the same boolean mask to assign the new value to the correct rows.\n- Check `pandas_03b_indexing.ipynb` for the usage of `loc` and `iloc` to assign new values.\n- With `loc`, specify both the selection of the rows and of the columns (`df.loc[row_indexer, column_indexer] = ..`).\n\n</details>\n\n</div>",
"# %load _solutions/case2_observations_processing9.py",
"Now, we do the parsing again to create a proper eventDate field, containing the dates:",
"survey_data_decoupled[\"eventDate\"] = \\\n pd.to_datetime(survey_data_decoupled[[\"year\", \"month\", \"day\"]])",
"<div class=\"alert alert-success\">\n\n**EXERCISE 9**\n\n- Check the number of observations for each year. Create a horizontal bar chart with the number of rows/observations for each year.\n\n<details><summary>Hints</summary>\n\n- To get the total number of observations, both the usage of `value_counts` as using `groupby` + `size` will work. `value_counts` is a convenient function when all you need to do is counting rows.\n- When using `value_counts`, the years in the index will no longer be in ascending order. You can chain methods and include a `sort_index()` method to sort these again.\n\n</details>\n\n\n</div>",
"# %load _solutions/case2_observations_processing10.py\n\n# %load _solutions/case2_observations_processing11.py\n\nsurvey_data_decoupled.head()",
"Currently, the dates are stored in a python specific date format:",
"survey_data_decoupled[\"eventDate\"].dtype",
"This is great, because it allows for many functionalities using the .dt accessor:",
"survey_data_decoupled.eventDate.dt #add a dot (.) and press TAB to explore the date options it provides",
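A few of the options the `.dt` accessor provides, sketched on a small made-up datetime Series:

```python
import pandas as pd

dates = pd.Series(pd.to_datetime(["2000-04-07", "2000-04-08", "2001-12-25"]))

years = dates.dt.year            # the year of each timestamp
weekdays = dates.dt.dayofweek    # Monday=0 ... Sunday=6
month_names = dates.dt.month_name()
```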
"<div class=\"alert alert-success\">\n\n**EXERCISE 10**\n\n- Create a horizontal bar chart with the number of records for each year (cfr. supra), but without using the column `year`, using the `eventDate` column directly.\n\n<details><summary>Hints</summary>\n\n- Check the `groupby` + `size` solution of the previous exercise and use this to start with. Replace the `year` inside the `groupby` method...\n\n</details>\n\n</div>",
"# %load _solutions/case2_observations_processing12.py",
"We actually do not need the day, month, year columns anymore, but feel free to use what suits you best.\n<div class=\"alert alert-success\">\n\n**EXERCISE 11**\n\n- Create a bar chart with the number of records for each day of the week (`dayofweek`)\n\n<details><summary>Hints</summary>\n\n- Pandas has an accessor for `dayofweek` as well.\n- You can specify the days of the week yourself to improve the plot, or use the Python standard library `calendar.day_name` (import the calendar module first) to get the names.\n\n</details>\n\n</div>",
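One possible pattern for the weekday counting (a sketch on made-up dates, not the solution file): group by `dt.dayofweek` and then attach readable labels from `calendar.day_name`.

```python
import calendar
import pandas as pd

dates = pd.Series(pd.to_datetime(["2000-04-03", "2000-04-04", "2000-04-10"]))

# Count records per weekday (Monday=0) and replace the numeric index by names
counts = dates.groupby(dates.dt.dayofweek).size()
counts.index = [calendar.day_name[i] for i in counts.index]
# counts.plot(kind="barh")  # the actual plotting step
```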
"# %load _solutions/case2_observations_processing13.py",
"When saving the information to a file (e.g. a CSV file), this data type will automatically be converted to a string representation. However, we could also decide to explicitly provide the string format in which the dates are stored (losing the date type functionalities), in order to have full control over the way these dates are formatted:",
"survey_data_decoupled[\"eventDate\"] = survey_data_decoupled[\"eventDate\"].dt.strftime('%Y-%m-%d')\n\nsurvey_data_decoupled[\"eventDate\"].head()",
"For the remainder, let's remove the day/year/month columns.",
"survey_data_decoupled = survey_data_decoupled.drop(columns=[\"day\", \"month\", \"year\"])",
"2. Add species names to dataset\nThe column species only provides a short identifier in the survey overview. The name information is stored in a separate file species.csv. We want our data set to include this information, so we read in the data and add it to our survey data set:\n<div class=\"alert alert-success\">\n\n**EXERCISE 12**\n\n- Read in the 'species.csv' file and save the resulting `DataFrame` as variable `species_data`.\n\n<details><summary>Hints</summary>\n\n- Check the delimiter (`sep`) parameter of the `read_csv` function.\n\n</details>\n\n</div>",
"# %load _solutions/case2_observations_processing14.py\n\nspecies_data.head()",
"Fix a wrong acronym naming\nWhen reviewing the metadata, you see that in the data-file the acronym NE is used to describe Neotoma albigula, whereas in the metadata description, the acronym NA is used.\n<div class=\"alert alert-success\">\n\n**EXERCISE 13**\n\n- Convert the value of 'NE' to 'NA' by using Boolean indexing/Filtering for the `species_id` column.\n\n<details><summary>Hints</summary>\n\n- To assign a new value, use the `loc` operator.\n- With `loc`, specify both the selecting for the rows and for the columns (`df.loc[row_indexer, column_indexer] = ..`).\n\n</details>\n\n</div>",
"# %load _solutions/case2_observations_processing15.py",
"Merging surveys and species\nNow that we have prepared the two data sets, we can combine the data, again using the pd.merge operation.\nWe want to add the data of the species to the survey data, in order to see the full species names in the combined data table.\n<div class=\"alert alert-success\">\n\n**EXERCISE 14**\n\nCombine the DataFrames `survey_data_decoupled` and `species_data` by adding the corresponding species information (name, class, kingdom,..) to the individual observations. Assign the output to a new variable `survey_data_species`.\n\n<details><summary>Hints</summary>\n\n- This is an example of a database JOIN operation. Pandas provides the `pd.merge` function to join two data sets using a common identifier.\n- Take into account that our key-column is different for `species_data` and `survey_data_decoupled`, respectively `species` and `species_id`. The `pd.merge()` function has `left_on` and `right_on` keywords to specify the name of the column in the left and right `DataFrame` to merge on.\n\n</details>",
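A minimal sketch of such a join with differently named key columns (toy data, not the actual files):

```python
import pandas as pd

surveys = pd.DataFrame({"species": ["NL", "DM"], "weight": [218, 44]})
species = pd.DataFrame({"species_id": ["NL", "DM"],
                        "name": ["Neotoma albigula", "Dipodomys merriami"]})

# The keys live in differently named columns, hence left_on/right_on
merged = pd.merge(surveys, species, left_on="species", right_on="species_id")
```

The default `how="inner"` keeps only rows whose key appears in both tables.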
"# %load _solutions/case2_observations_processing16.py\n\nlen(survey_data_species) # check length after join operation",
"The join is ok, but we are left with some redundant columns and wrong naming:",
"survey_data_species.head()",
"We do not need the species_x and species_id columns anymore, as we will use the scientific names from now on:",
"survey_data_species = survey_data_species.drop([\"species_x\", \"species_id\"], axis=1)",
"The column species_y could just be named species:",
"survey_data_species = survey_data_species.rename(columns={\"species_y\": \"species\"})\n\nsurvey_data_species.head()\n\nlen(survey_data_species)",
"3. Add coordinates from the plot locations\nLoading the coordinate data\nThe individual plots are only identified by a plot identification number. In order to provide sufficient information to external users, additional information about the coordinates should be added. The coordinates of the individual plots are saved in another file: plot_location.xlsx. We will use this information to further enrich our data set and add the Darwin Core Terms decimalLongitude and decimalLatitude.\n<div class=\"alert alert-success\">\n\n**EXERCISE 15**\n\n- Read the excel file 'plot_location.xlsx' and store the data as the variable `plot_data`, with 3 columns: plot, xutm, yutm.\n\n<details><summary>Hints</summary>\n\n- Pandas read methods all have a similar name, `read_...`.\n\n</details>\n\n</div>",
"# %load _solutions/case2_observations_processing17.py\n\nplot_data.head()",
"Transforming to other coordinate reference system\nThese coordinates are in meters, more specifically in the UTM 12 N coordinate system. However, the agreed coordinate representation for Darwin Core is the World Geodetic System 1984 (WGS84).\nAs this is not a GIS course, we will shortcut the discussion about different projection systems, but provide an example of how such a conversion from UTM12N to WGS84 can be performed with the projection toolkit pyproj and by relying on the existing EPSG codes (a registry originally set up by the association of oil & gas producers).\nFirst, we define our two projection systems, using their corresponding EPSG codes:",
"from pyproj import Transformer\n\ntransformer = Transformer.from_crs(\"EPSG:32612\", \"epsg:4326\")",
"The reprojection can be done by the function transform of the projection toolkit, providing the coordinate systems and a set of x, y coordinates. For example, for a single coordinate, this can be applied as follows:",
"transformer.transform(681222.131658, 3.535262e+06)",
"Such a transformation is not supported by Pandas itself (it is available in https://geopandas.org/). In such a situation, we want to apply a custom function to each row of the DataFrame. Instead of writing a for loop to do this for each of the coordinates in the list, we can .apply() this function with Pandas.\n<div class=\"alert alert-success\">\n\n**EXERCISE 16**\n\nApply the pyproj function `transform` to plot_data, using the columns `xutm` and `yutm` and save the resulting output in 2 new columns, called `decimalLongitude` and `decimalLatitude`:\n\n- Create a function `transform_utm_to_wgs` that takes a row of a `DataFrame` and returns a `Series` of two elements with the longitude and latitude.\n- Test this function on the first row of `plot_data`\n- Now `apply` this function on all rows (use the `axis` parameter correctly)\n- Assign the result of the previous step to the `decimalLongitude` and `decimalLatitude` columns\n\n<details><summary>Hints</summary>\n\n- Convert the output of the transformer to a Series before returning (`pd.Series(....)`)\n- A convenient way to select a single row is using the `.loc[0]` operator.\n- `apply` can be used for both rows (`axis=1`) and columns (`axis=0`).\n- To assign two columns at once, you can use a similar syntax as for selecting multiple columns with a list of column names (`df[['col1', 'col2']]`).\n\n</details>\n\n</div>",
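The apply pattern itself can be sketched with a stand-in conversion function (simple arithmetic instead of the real pyproj call, so the shape of the solution is visible without the GIS dependency; the scaling factors are made up):

```python
import pandas as pd

plot_data = pd.DataFrame({"xutm": [681222.1, 681300.0],
                          "yutm": [3535262.0, 3535300.0]})

def fake_transform(row):
    # Stand-in for transformer.transform(row["xutm"], row["yutm"]); returning a
    # pd.Series makes apply(axis=1) produce one column per element
    return pd.Series([row["yutm"] / 1e5, row["xutm"] / 1e4],
                     index=["decimalLatitude", "decimalLongitude"])

plot_data[["decimalLatitude", "decimalLongitude"]] = plot_data.apply(fake_transform, axis=1)
```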
"# %load _solutions/case2_observations_processing18.py\n\n# %load _solutions/case2_observations_processing19.py\n\n# %load _solutions/case2_observations_processing20.py\n\n# %load _solutions/case2_observations_processing21.py\n\nplot_data.head()",
"The above function transform_utm_to_wgs you have created is a very specific function that knows the structure of the DataFrame you will apply it to (it assumes the 'xutm' and 'yutm' column names). We could also make a more generic function that just takes a X and Y coordinate and returns the Series of converted coordinates (transform_utm_to_wgs2(X, Y)).\nAn alternative to apply such a custom function to the plot_data DataFrame is the usage of the lambda construct, which lets you specify a function on one line as an argument:\ntransformer = Transformer.from_crs(\"EPSG:32612\", \"epsg:4326\")\nplot_data.apply(lambda row : transformer.transform(row['xutm'], row['yutm']), axis=1)\n\n<div class=\"alert alert-warning\">\n\n__WARNING__\n\nDo not abuse the usage of the `apply` method, but always look for an existing Pandas function first as these are - in general - faster!\n\n</div>\n\nJoin the coordinate information to the survey data set\nWe can extend our survey data set with this coordinate information. Making the combination of two data sets based on a common identifier is completely similar to the usage of JOIN operations in databases. In Pandas, this functionality is provided by pd.merge.\nIn practice, we have to add the columns decimalLongitude/decimalLatitude to the current data set survey_data_species, by using the plot identification number as key to join.\n<div class=\"alert alert-success\">\n\n**EXERCISE 17**\n\n- Extract only the columns to join to our survey dataset: the `plot` identifiers, `decimalLatitude` and `decimalLongitude` into a new variable named `plot_data_selection`\n\n<details><summary>Hints</summary>\n\n- To select multiple columns, use a `list` of column names, e.g. `df[[\"my_col1\", \"my_col2\"]]`\n\n</details>\n\n</div>",
"# %load _solutions/case2_observations_processing22.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 18**\n\nCombine the DataFrame `plot_data_selection` and the DataFrame `survey_data_species` by adding the corresponding coordinate information to the individual observations using the `pd.merge()` function. Assign the output to a new variable `survey_data_plots`.\n\n<details><summary>Hints</summary>\n\n- This is an example of a database JOIN operation. Pandas provides the `pd.merge` function to join two data sets using a common identifier.\n- The key-column is the `plot`.\n\n</details>",
"# %load _solutions/case2_observations_processing23.py\n\nsurvey_data_plots.head()",
"The plot locations need to be stored with the variable name verbatimLocality, with the plot identifier as an integer value:",
"survey_data_plots = survey_data_plots.rename(columns={'plot': 'verbatimLocality'})",
"Let's now save our clean data to a csv file, so we can further analyze the data in a following notebook:",
"survey_data_plots.to_csv(\"interim_survey_data_species.csv\", index=False)",
"(OPTIONAL SECTION) 4. Using an API service to match the scientific names\nAs the current species names are rather short and could eventually lead to confusion when shared with other users, retrieving additional information about the different species in our dataset would be useful to integrate our work with other research. An option is to match our names with an external service to request additional information about the different species.\nOne of these services is the GBIF API. The service can most easily be illustrated with a small example:<br><br>\nIn a new browser tab, go to the URL http://www.gbif.org/species/2475532, which corresponds to the page of Alcedo atthis (ijsvogel in Dutch). For each of the species in our list, one could do a search on the GBIF website to find the corresponding species page and extract more information manually. However, this would take a lot of time...\nTherefore, GBIF (as many other organizations!) provides a service (or API) to extract the same information in a machine-readable way, in order to automate these searches. As an example, let's search for the information of Alcedo atthis, using the GBIF API: Go to the URL http://api.gbif.org/v1/species/match?name=Alcedo atthis and check the output. What we did is a machine-based search on the GBIF website for information about Alcedo atthis.\nThe same can be done using Python. The main library we need for this kind of automated search is the requests package, which can be used to do requests to any kind of API out there.",
"import requests",
"Example matching with Alcedo atthis\nFor the example of Alcedo atthis:",
"species_name = 'Alcedo atthis'\n\nbase_string = 'http://api.gbif.org/v1/species/match?'\nrequest_parameters = {'verbose': False, 'strict': True, 'name': species_name}\nmessage = requests.get(base_string, params=request_parameters).json()\nmessage",
"This gives us a dictionary containing more information about the taxonomy of Alcedo atthis.\nIn the species data set available, the name to match is provided as a combination of two columns, so we have to combine those two in order to execute the name matching:",
"genus_name = \"Callipepla\"\nspecies_name = \"squamata\"\nname_to_match = '{} {}'.format(genus_name, species_name)\nbase_string = 'http://api.gbif.org/v1/species/match?'\nrequest_parameters = {'strict': True, 'name': name_to_match} # use strict matching(!)\nmessage = requests.get(base_string, params=request_parameters).json()\nmessage",
"To apply this to our species data set, we will have to do this request for each individual genus/species combination. As this is recurring functionality, we will write a small function to do this:\nWriting a custom matching function\n<div class=\"alert alert-success\">\n\n**EXERCISE 19**\n\n- Write a function called `name_match` that takes the `genus`, the `species` and the option to perform a strict matching or not as inputs, performs a matching with the GBIF name matching API and returns the received message as a dictionary.\n\n</div>",
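One possible shape for such a function (a sketch, not the solution file; the parameter building is split out into a helper so it can be checked without network access, and `requests` is imported lazily for the same reason):

```python
API_URL = "http://api.gbif.org/v1/species/match"

def build_match_params(genus, species, strict=True):
    # Combine genus and species into the single name string the GBIF API expects
    return {"strict": strict, "name": "{} {}".format(genus, species)}

def name_match(genus, species, strict=True):
    # Perform the GBIF name-matching request and return the JSON payload as a dict
    import requests  # imported here so the helper above stays testable offline
    response = requests.get(API_URL, params=build_match_params(genus, species, strict))
    return response.json()
```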
"# %load _solutions/case2_observations_processing24.py",
"<div class=\"alert alert-info\">\n\n**NOTE**\n\nFor many of these APIs, dedicated packages for the request handling do exist, e.g. <a href=\"https://github.com/sckott/pygbif\">pygbif</a> provides different functions to do requests to the GBIF API, basically wrapping the request possibilities. For any kind of service, just ask yourself: does the dedicated library provide sufficient additional advantage, or can I easily set up the request myself? (or sometimes: which one has the best documentation...)<br><br>Many services exist for a wide range of applications, e.g. scientific name matching, matching of addresses, downloading of data,...\n\n</div>\n\nTesting our custom matching function:",
"genus_name = \"Callipepla\"\nspecies_name = \"squamata\"\nname_match(genus_name, species_name, strict=True)",
"However, the matching won't provide an answer for every search:",
"genus_name = \"Lizard\"\nspecies_name = \"sp.\"\nname_match(genus_name, species_name, strict=True)",
"Match each of the species names of the survey data set\nHence, in order to add this information to our survey DataFrame, we need to perform the following steps:\n1. extract the unique genus/species combinations in our dataset and combine them in single column\n2. match each of these names to the GBIF API service\n3. process the returned message:\n * if a match is found, add the information of the columns 'class', 'kingdom', 'order', 'phylum', 'scientificName', 'status' and 'usageKey'\n * if no match was found: nan-values\n4. Join the DataFrame of unique genus/species information with the enriched GBIF info to the survey_data_plots data set\n<div class=\"alert alert-success\">\n\n**EXERCISE 20**\n\n- Extract the unique combinations of genus and species in the `survey_data_plots` using the function `drop_duplicates()`. Save the result as the variable `unique_species` and remove the `NaN` values using `.dropna()`.\n\n</div>",
"# %load _solutions/case2_observations_processing25.py\n\nlen(unique_species)",
"<div class=\"alert alert-success\">\n\n**EXERCISE 21**\n\n- Extract the unique combinations of genus and species in the `survey_data_plots` using `groupby`. Save the result as the variable `unique_species`.\n\n<details><summary>Hints</summary>\n\n- As `groupby` needs an aggregation function, this can be `first()` (the first of each group) as well.\n- Do not forget to `reset_index` after the `groupby`.\n\n</details>\n\n</div>",
"# %load _solutions/case2_observations_processing26.py\n\nlen(unique_species)",
"<div class=\"alert alert-success\">\n\n**EXERCISE 22**\n\n- Combine the columns genus and species to a single column with the complete name, save it in a new column named 'name'\n\n</div>",
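Combining two text columns is plain elementwise string concatenation (shown here on a toy frame with the same column names):

```python
import pandas as pd

df = pd.DataFrame({"genus": ["Callipepla", "Neotoma"],
                   "species": ["squamata", "albigula"]})

# Elementwise concatenation with a space separator
df["name"] = df["genus"] + " " + df["species"]
```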
"# %load _solutions/case2_observations_processing27.py\n\nunique_species.head()",
"To perform the matching for each of the combinations, different options exist (remember apply?)\nJust to showcase the possibility of using for loops in such a situation, let's add the matched information with a for loop. First, we will store everything in one dictionary, where the keys of the dictionary are the index values of unique_species (in order to later merge them again) and the values are the entire messages (which are dictionaries themselves). The format will look as follows:\nspecies_annotated = {0: {'canonicalName': 'Squamata', 'class': 'Reptilia', 'classKey': 358, ...},\n 1: {'canonicalName':...},\n 2:...}",
"# this will take a bit as we do a request to gbif for each individual species\nspecies_annotated = {}\nfor key, row in unique_species.iterrows():\n species_annotated[key] = name_match(row[\"genus\"], row[\"species\"], strict=True)\n\n#species_annotated # uncomment to see output",
"We can now transform this to a pandas DataFrame:\n<div class=\"alert alert-success\">\n\n**EXERCISE 23**\n\n- Convert the dictionary `species_annotated` into a pandas DataFrame with the row index given by the keys corresponding to `unique_species` and the column headers given by the output columns of the API response. Save the result as the variable `df_species_annotated`.\n\n<details><summary>Hints</summary>\n\n- The documentation of `pd.DataFrame` says the input can be 'ndarray (structured or homogeneous), Iterable, dict, or DataFrame'.\n- `transpose` can be used to flip rows and columns.\n\n</details>\n\n</div>",
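The dict-of-dicts to DataFrame step, on a tiny made-up response dictionary (field names mimic the GBIF output):

```python
import pandas as pd

species_annotated = {0: {"canonicalName": "Callipepla squamata", "class": "Aves"},
                     1: {"canonicalName": "Neotoma albigula", "class": "Mammalia"}}

# pd.DataFrame treats the outer dict keys as columns, so transpose
# to get one row per species instead
df_annotated = pd.DataFrame(species_annotated).transpose()
```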
"# %load _solutions/case2_observations_processing28.py\n\ndf_species_annotated.head()",
"Select relevant information and add this to the survey data\n<div class=\"alert alert-success\">\n\n**EXERCISE 24**\n\n- Subselect the columns 'class', 'kingdom', 'order', 'phylum', 'scientificName', 'status' and 'usageKey' from the DataFrame `df_species_annotated`. Save it as the variable `df_species_annotated_subset`\n\n</div>",
"# %load _solutions/case2_observations_processing29.py\n\ndf_species_annotated_subset.head()",
"<div class=\"alert alert-success\">\n\n**EXERCISE 25**\n\n- Join the `df_species_annotated_subset` information to the `unique_species` overview of species. Save the result as variable `unique_species_annotated`.\n</div>",
"# %load _solutions/case2_observations_processing30.py\n\nunique_species_annotated.head()",
"<div class=\"alert alert-success\">\n\n**EXERCISE 26**\n\n- Join the `unique_species_annotated` data to the `survey_data_plots` data set, using both the genus and species column as keys. Save the result as the variable `survey_data_completed`.\n\n</div>",
"# %load _solutions/case2_observations_processing31.py\n\nlen(survey_data_completed)\n\nsurvey_data_completed.head()",
"Congratulations! You did a great cleaning job, save your result:",
"survey_data_completed.to_csv(\"survey_data_completed_.csv\", index=False)",
"Acknowledgements\n\nspecies.csv and survey.csv are used from the Data Carpentry workshop. This data is from the paper S. K. Morgan Ernest, Thomas J. Valone, and James H. Brown. 2009. Long-term monitoring and experimental manipulation of a Chihuahuan Desert ecosystem near Portal, Arizona, USA. Ecology 90:1708. http://esapubs.org/archive/ecol/E090/118/\nThe plot_location.xlsx is a dummy location file created purely for this exercise, using the plot locations on Google Maps\nGBIF API"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
QuantCrimAtLeeds/PredictCode
|
examples/Scripts/Reload naive predictions.ipynb
|
artistic-2.0
|
[
"import os, sys\nsys.path.insert(0, os.path.abspath(os.path.join(\"..\", \"..\")))",
"Reload the naive predictions\nShows how to make use of the data produced from the scripted script naive.py.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport open_cp.scripted\nimport open_cp.scripted.analysis as analysis\n\nloaded = open_cp.scripted.Loader(\"naive_preds.pic.xz\")\n\nloaded.timed_points.time_range\n\nfig, axes = plt.subplots(ncols=2, figsize=(16,7))\nanalysis.plot_data_scatter(loaded, axes[0])\nanalysis.plot_data_grid(loaded, axes[1])\n\nnext(iter(loaded))\n\ntimes = [x[1] for x in loaded]\npreds = [x[2] for x in loaded]\n\nfig, axes = plt.subplots(ncols=2, figsize=(16,7))\nfor ax, i in zip(axes, [0, 60]):\n analysis.plot_prediction(loaded, preds[i], ax)\n ax.set_title(times[i])",
"Manually redo the predictions and scoring",
"import datetime\nimport open_cp.naive\nimport numpy as np\nimport pandas as pd\nimport open_cp.evaluation\n\nstart = datetime.datetime(2016, 10, 1)\nour_preds = []\nwhile start < datetime.datetime(2017, 1, 1):\n predictor = open_cp.naive.CountingGridKernel(loaded.grid.xsize, region=loaded.grid.region())\n mask = loaded.timed_points.timestamps < start\n predictor.data = loaded.timed_points[mask]\n pred = predictor.predict()\n\n pred.mask_with(loaded.grid)\n pred = pred.renormalise()\n \n our_preds.append(pred)\n start += datetime.timedelta(days=1)\n\nfor i in range(len(our_preds)):\n np.testing.assert_allclose(our_preds[i].intensity_matrix, preds[i].intensity_matrix)",
"Check the scoring",
"frame = pd.read_csv(\"naive.csv\")\nframe.head()\n\nframe.tail()\n\ncoverages = list(range(1,101))\n\nstart = datetime.datetime(2016, 10, 1)\nrows = []\nfor pred in our_preds:\n    end = start + datetime.timedelta(days=1)\n    mask = (loaded.timed_points.timestamps >= start) & (loaded.timed_points.timestamps < end)\n    rows.append(open_cp.evaluation.hit_rates(pred, loaded.timed_points[mask], coverages))\n    start = end\n\n# .ix is removed in modern pandas and np.float is removed in modern numpy;\n# use positional .iloc and the builtin float instead\nfor i in range(len(rows)):\n    np.testing.assert_allclose(frame.iloc[i, 3:].values.astype(float), list(rows[i].values()))",
"Some plots\nAverage hit rate\nAnd standard error",
"def plot_mean_hitrate(ax, frame, xrange):\n coverages = list(range(1,101))\n\n data= {}\n for pred_type in frame.Predictor.unique():\n data[pred_type] = {}\n f = frame[frame.Predictor == pred_type].describe()\n for cov in coverages:\n r = f[\"{}%\".format(cov)]\n data[pred_type][cov] = r[\"mean\"], (r[\"std\"] / np.sqrt(r[\"count\"]))\n \n for pred_type in data:\n series = data[pred_type]\n x = np.sort(list(xrange))\n y = np.asarray([series[xx][0] for xx in x])\n ax.plot(x, y, label=pred_type)\n dy = np.asarray([series[xx][1] for xx in x])\n ax.fill_between(x, y-dy, y+dy, alpha=0.5)\n ax.legend()\n\nfig, ax = plt.subplots(figsize=(12,8))\nplot_mean_hitrate(ax, frame, range(1,101))\nax.set(xlabel=\"Coverage (%)\", ylabel=\"Hit rate\")\nNone\n\nfig, ax = plt.subplots(figsize=(12,8))\nplot_mean_hitrate(ax, frame, range(1,21))\nax.set(xlabel=\"Coverage (%)\", ylabel=\"Hit rate\")\nNone",
"Fit binomial model instead\nUse a beta prior",
"betas = analysis.hit_counts_to_beta(\"naive_counts.csv\")\n\nfig, ax = plt.subplots(figsize=(12,8))\nanalysis.plot_betas(betas, ax)\n\nfig, ax = plt.subplots(figsize=(12,8))\nanalysis.plot_betas(betas, ax, range(1,21))",
"What does this difference actually mean??\nSuppose we pick 5% coverage. There is a big gap between the curves there.",
"tps = loaded.timed_points.bin_timestamps(datetime.datetime(2016,1,1), datetime.timedelta(days=1))\nimport collections, statistics\nc = collections.Counter(tps.timestamps)\nstatistics.mean(c.values())",
"So we have about 5 crime events a day, on average.",
"import scipy.special\n\ndef BetaBinom(alpha,beta,n,k):\n \"\"\"http://www.channelgrubb.com/blog/2015/2/27/beta-binomial-in-python\"\"\"\n part_1 = scipy.special.comb(n,k)\n part_2 = scipy.special.betaln(k+alpha,n-k+beta)\n part_3 = scipy.special.betaln(alpha,beta)\n \n result = (np.log(part_1) + part_2)- part_3\n \n return np.exp(result)\n\nfig, axes = plt.subplots(ncols=len(betas), figsize=(16,5))\n\nn = 5\nfor ax, key in zip(axes, betas):\n beta = betas[key][5]\n p = [BetaBinom(*beta.args,n,k) for k in range(0,n+1)]\n ax.bar(np.arange(n+1), p)\n ax.set(xlabel=\"Number of crimes captured\", ylabel=\"Probability\")\n ax.set_title(\"{}; {} total events.\".format(key, n))",
"These plots show the probability of capturing $x$ events out of the 5 total events. This sort of puts the difference in perspective-- it's pretty small!"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
graphistry/pygraphistry
|
demos/data/benchmarking/DenseDatasets.ipynb
|
bsd-3-clause
|
[
"Dense Datasets\n\nThis notebook is used for benchmarking and debugging dense datasets\n\nImport the necessary libraries",
"import random\nimport graphistry as g\nimport pandas as pd",
"Check the version of the Graphistry module",
"g.__version__\n\n# To specify Graphistry account & server, use:\n# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')\n# For more options, see https://github.com/graphistry/pygraphistry#configure",
"100 dense columns with 100K edges (restricted set of integer values 1-100)\nValues can be 1-100",
"edges = [{'src': x, 'dst': (x + 1) % 100000} for x in range(0, 100000)]\nfor i, edge in enumerate(edges):\n for fld in range(0, 100):\n edge['fld' + str((fld))] = (fld + i) % 100\nedges = pd.DataFrame(edges)\nedges[:3]\n\ng.edges(edges).bind(source='src', destination='dst').plot()",
"100 dense columns with 100K edges (random floats)\nEach edge has 100 attributes, each a randomly selected float",
"edges = [{'src': x, 'dst': (x + 1) % 100000} for x in range(0, 100000)]\nfor i, edge in enumerate(edges):\n for fld in range(0, 100):\n edge['fld' + str((fld))] = random.random()\nedges = pd.DataFrame(edges)\nedges[:3]\n\ng.edges(edges).bind(source='src', destination='dst').plot()",
"100 dense columns with 100K edges (random strings)\nEach edge has 100 attributes, each a randomly selected string",
"edges = [{'src': x, 'dst': (x + 1) % 100000} for x in range(0, 100000)]\nfor i, edge in enumerate(edges):\n for fld in range(0, 100):\n edge['fld' + str((fld))] = 'String' + str(random.random())\nedges = pd.DataFrame(edges)\nedges[:3]\n\ng.edges(edges).bind(source='src', destination='dst').plot()",
"10 dense columns with 800K edges (restricted set of integers 1-100)",
"edges = [{'src': (x % 300), 'dst': ((x + 1) % 800)} for x in range(0, 800000)]\nfor i, edge in enumerate(edges):\n for fld in range(0, 10):\n edge['fld' + str((fld))] = (fld + i) % 100\nedges = pd.DataFrame(edges)\nedges[:3]\n\ng.edges(edges).bind(source='src', destination='dst').plot()",
"10 dense columns with 800K edges (random float)",
"edges = [{'src': (x % 300), 'dst': ((x + 1) % 800)} for x in range(0, 800000)]\nfor i, edge in enumerate(edges):\n for fld in range(0, 10):\n edge['fld' + str((fld))] = random.random()\nedges = pd.DataFrame(edges)\nedges[:3]\n\ng.edges(edges).bind(source='src', destination='dst').plot()",
"10 dense columns with 800K edges (random strings)",
"edges = [{'src': (x % 300), 'dst': ((x + 1) % 800)} for x in range(0, 800000)]\nfor i, edge in enumerate(edges):\n for fld in range(0, 10):\n edge['fld' + str((fld))] = 'String + ' + str(random.random())\nedges = pd.DataFrame(edges)\nedges[:3]\n\ng.edges(edges).bind(source='src', destination='dst').plot()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sanabasangare/data-visualization
|
fin_big_data.ipynb
|
mit
|
[
"Analyzing Financial Data with Python\nStart by locally installing a comprehensive Python distribution such as Anaconda.\nNecessary Imports\nImport the required modules/packages",
"import numpy as np # for array operations\nimport pandas as pd # for time series management\nfrom pandas_datareader import data as web # for data retrieval\nimport seaborn as sns; sns.set() # for a nicer plotting style\n\n# put all plots in the notebook itself\n%matplotlib inline",
"Retrieving Stock Price Data\nIn this case, I'm retrieving stock price data for American Express Company using its stock symbol AXP from Google Finance.",
"AXP = web.DataReader('AXP', data_source='google')",
"The \"AXP\" object is of type \"DataFrame\".",
"type(AXP)",
"Get meta information",
"AXP.info()",
"List the columns in the dataframe",
"AXP.columns",
"Display the final five rows of the data set.",
"AXP.tail()",
"Easily select single or multiple columns of a DataFrame object.\n.head() shows the first five rows of the selected column",
"AXP['Close'].head()",
".tail() here, shows the last five rows of the 2 selected columns",
"AXP[['Open', 'Close']].tail()",
"Similarly, a single or multiple rows can be selected",
"AXP.loc['2017-06-05'] # single row via index value\n\nAXP.iloc[:2] # two rows via index numbers",
"Data Visualization",
"AXP['Close'].plot(figsize=(20, 10));",
"fully vectorized operation for log return calculation",
"rets = np.log(AXP['Close'] / AXP['Close'].shift(1))",
"The log returns can then be visualized via a histogram.",
"rets.hist(figsize=(20, 10), bins=35);",
"Calculating Moving Averages with a pandas function\nVectorized calculation of a 50-day moving average/trend",
"AXP['MA50'] = pd.Series(AXP['Close']).rolling(window=50,center=False).mean()\n\nAXP[['Close', 'MA50']].plot(figsize=(20, 10));"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
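The log-return and moving-average steps in the notebook above can be sketched without a live data feed (the Google Finance source used by `pandas_datareader` has since been retired). A minimal sketch, using a synthetic price series in place of the downloaded `AXP` data:

```python
import numpy as np
import pandas as pd

# Synthetic daily close prices standing in for the retired Google Finance feed;
# the column name 'Close' and the calculations mirror the notebook above.
idx = pd.date_range("2017-01-02", periods=100, freq="B")
rng = np.random.default_rng(0)
close = pd.Series(100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.01, len(idx)))), index=idx)
AXP = pd.DataFrame({"Close": close})

# Fully vectorized log returns: log(P_t / P_{t-1}); the first entry is NaN
rets = np.log(AXP["Close"] / AXP["Close"].shift(1))

# 50-day simple moving average of the close
AXP["MA50"] = AXP["Close"].rolling(window=50).mean()
```

The first 49 entries of `MA50` are NaN because a full 50-sample window is not yet available; `min_periods` could be passed to `rolling` to change that behaviour.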
ueapy/ueapy.github.io
|
content/notebooks/2016-06-10-arcgis-intro.ipynb
|
mit
|
[
"name = '2016-06-10-arcgis-intro'\ntitle = 'Introduction to ArcGIS and its Python interface'\ntags = 'gis, maps, basics'\nauthor = 'Melanie Froude'\n\nfrom nb_tools import connect_notebook_to_post\nfrom IPython.core.display import HTML\n\nhtml = connect_notebook_to_post(name, title, tags, author)",
"Today Melanie led the meeting with a session on the ArcGIS software and how we can use Python to automate geospatial data processing. The slides are available below.\nWe started with a brief introduction to the types of data and analysis you can do in ArcGIS. Then Melanie demonstrated how to produce a 3D terrain model using the ArcScene toolbox.\nPresentation",
"# embed pdf into an automatically resized window (requires imagemagick)\nw_h_str = !identify -format \"%w %h\" ../pdfs/arcgis-intro.pdf[0]\nHTML('<iframe src=../pdfs/arcgis-intro.pdf width={0[0]} height={0[1]}></iframe>'.format([int(i)*0.8 for i in w_h_str[0].split()]))",
"We all agreed that ArcGIS has a lot to offer to geoscientists. But what makes this software even more appealing is that you can work in a command-line interface using Python (ArcPy module).\nSo we looked at how to run processes using the Python window command-by-command and how you might integrate ArcGIS processes within a longer script. This was exemplified by Melanie's script that she used to analyse vegetation regrowth after a volcanic eruption.\nThe script takes two vegetation photos in GeoTIFF format retrieved by Landsat as input and calculates the Normalised Difference Vegetation Index (NDVI) for each of them. We can then compare the output to see how vegetation has changed over the time period.\n<div class=\"alert alert-warning\" style=\"font-size: 100%\">\n<li>Standard ArcGIS uses Python 2.7 (Python 3 is available in ArcGIS Pro)\n<li>The commands below require ArcGIS installed, and hence are not in executable cells in this notebook.\n</div>\n\nArcPy script example: NDVI of the two geotiff images\nImport modules\nimport arcpy, string, arcpy.sa\nfrom arcpy import env\nCheck out extension and set overwrite outputs\narcpy.CheckOutExtension(\"spatial\")\narcpy.env.overwriteOutput = True\nStop outputs being added to the map\narcpy.env.addOutputsToMap = \"FALSE\"\nSet workspace and declare variables\nenv.workspace = (\"/path/to/demo/demo1\")\nprint(arcpy.env.workspace)\nLoad the data\nrasterb3 = arcpy.Raster(\"p046r28_5t900922_nn3.tif\")\nrasterb4 = arcpy.Raster(\"p046r28_5t900922_nn4.tif\")\nDescribe variables\ndesc = arcpy.Describe(rasterb4)\nprint(desc.dataType)\nprint(desc.meanCellHeight)\nCalculate the NDVI\nNum = arcpy.sa.Float(rasterb4-rasterb3)\nDenom = arcpy.sa.Float(rasterb4 + rasterb3)\nNDVI1990 = arcpy.sa.Divide(Num, Denom)\nSave the result as another .tif image\nNDVI1990.save(\"/path/to/demo/demo1/NDVI1990.tif\")\nDo the same calculation for the images from a later year\n```\nrasterb3a = arcpy.Raster(\"L71046028_02820050721_B30.TIF\")\nrasterb4a = 
arcpy.Raster(\"L71046028_02820050721_B40.TIF\")\nNum = arcpy.sa.Float(rasterb4a-rasterb3a)\nDenom = arcpy.sa.Float(rasterb4a + rasterb3a)\nNDVI2005 = arcpy.sa.Divide(Num, Denom)\n```\nAnd after saving the second result, calculate the NDVI difference\n```\nNDVI2005.save(\"/path/to/demo/demo1/NDVI2005.tif\")\nNDVIdiff = NDVI2005 - NDVI1990\nNDVIdiff.save(\"/path/to/demo/demo1/NDVIdiff.tif\")\n```\nThe result is shown in the slide 5.",
"HTML(html)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
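The NDVI arithmetic in the ArcPy script can be sketched with plain NumPy when ArcGIS is not available; the red and near-infrared band values below are made-up stand-ins for the Landsat GeoTIFF rasters loaded with `arcpy.Raster`:

```python
import numpy as np

# Hypothetical red (Landsat band 3) and near-infrared (band 4) reflectance
# arrays standing in for the GeoTIFF rasters in the script above.
red = np.array([[0.10, 0.20],
                [0.30, 0.25]])
nir = np.array([[0.50, 0.60],
                [0.30, 0.75]])

# NDVI = (NIR - Red) / (NIR + Red); NumPy's float division plays the role
# of the arcpy.sa.Float / arcpy.sa.Divide map-algebra calls.
ndvi = (nir - red) / (nir + red)
```

By construction NDVI falls in [-1, 1]; subtracting two NDVI rasters from different years, as the script does for 1990 and 2005, highlights vegetation change.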
sfegan/calin
|
examples/calib/gain estimation from photostatistics.ipynb
|
gpl-2.0
|
[
"Gain estimation using photo-statistics method\ncalin/examples/calib/gain estimation from photostatistics.ipynb - Stephen Fegan - 2017-03-27\nCopyright 2017, Stephen Fegan sfegan@llr.in2p3.fr\nLaboratoire Leprince-Ringuet, CNRS/IN2P3, Ecole Polytechnique, Institut Polytechnique de Paris\nThis file is part of \"calin\". \"calin\" is free software: you can redistribute it and/or modify it under the\nterms of the GNU General Public License version 2 or later, as published by\nthe Free Software Foundation. \"calin\" is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nIntroduction\nThis notebook demonstrates how to use the diagnostics results written by compute_diagnostics.py to calculate the gain of the channels using the photo-statistics method. The calculation uses equation 30 of MST-CAM-TN-0060, which is appropriate when the intrinsic variance of the flasher is small, i.e. when Poisson fluctuations in the channels dominate. The intrinsic flasher variance is compensated for as described in the memo; however, if the flasher resolution exceeds 3% other methods may give better results.\nGet diagnostics results from SQL database\nOpen the SQL diagnostics results previously written by compute_diagnostics.py and load the results.",
"%pylab inline\nimport calin.ix.scripts.compute_diagnostics\nimport calin.io.sql_transceiver\nimport calin.diagnostics.waveform\nimport calin.diagnostics.functional\nimport calin.plotting\n\nsql = calin.io.sql_transceiver.SQLite3Transceiver(\"/CTA/diagnostics.sqlite\",\n calin.io.sql_transceiver.SQLite3Transceiver.READ_ONLY)\ndiagnostics = calin.ix.scripts.compute_diagnostics.Results()\nsql.retrieve_by_oid(\"diagnostics_results\", 1, diagnostics)\ndel sql\n\ncfg = diagnostics.run_config()\nclo = diagnostics.command_line_options()",
"Illustrate signal & background windows\nDraw the average trace over all channels and illustrate the signal and background windows. This is done as a sanity check.",
"wfs = diagnostics.waveform_stats()\nwf_mean = zeros(cfg.num_samples())\nwf_var = zeros(cfg.num_samples())\nfor ich in range(0,wfs.high_gain_size()):\n wf_mean += calin.diagnostics.waveform.WaveformStatsVisitor.waveform_mean(wfs.high_gain(ich))\n wf_var += calin.diagnostics.waveform.WaveformStatsVisitor.waveform_var(wfs.high_gain(ich))\nwf_mean /= wfs.high_gain_size()\nwf_var /= wfs.high_gain_size()**2\n\nerrorbar(frange(len(wf_mean),closed=False),wf_mean,sqrt(wf_var),fmt='k.-')\na=axis()\ngca().add_patch(Rectangle((clo.sig_window_start(), a[2]), clo.window_size(), a[3]-a[2], facecolor='#ffeeee'))\naxvline(clo.sig_window_start(),color='r')\naxvline(clo.sig_window_start()+clo.window_size(),color='r')\ntext(clo.sig_window_start()+clo.window_size()/2,a[2]*0.975+a[3]*0.025,'Signal',ha='center',va='bottom',color='r')\ngca().add_patch(Rectangle((clo.bkg_window_start(), a[2]), clo.window_size(), a[3]-a[2], facecolor='#eeeeff'))\naxvline(clo.bkg_window_start(),color='b')\naxvline(clo.bkg_window_start()+clo.window_size(),color='b')\ntext(clo.bkg_window_start()+clo.window_size()/2,a[3]*0.975+a[2]*0.025,'Background',ha='center',va='top',color='b')\nxlabel('Sample number [ns]')\nylabel('Average pulse amplitude [DC]')",
"Calculate the gain of the high-gain channel\nExtract the mean and variance of the signal and background regions and decompose them to calculate the common-mode component that can be attributed to the intrinsic variance of the flasher, and the component in each channel that is independent of this.\nCalculate the gain in each channel from this accounting for the excess-noise fraction of the PMT single-electron multiplier, which must be specified.",
"enf = 1.14\n\nsmi=calin.diagnostics.functional.channel_mean(diagnostics.sig_stats().high_gain())\nbmi=calin.diagnostics.functional.channel_mean(diagnostics.bkg_stats().high_gain())\nsvi=calin.diagnostics.functional.channel_var(diagnostics.sig_stats().high_gain())\nbvi=calin.diagnostics.functional.channel_var(diagnostics.bkg_stats().high_gain())\n\nsvi_indep,sv_cm=calin.diagnostics.functional.decompose_channel_independent_and_common_var(\\\n diagnostics.sig_stats().high_gain())\n\ng = (svi - bvi - sv_cm*((smi-bmi)/mean(smi-bmi))**2)/(smi-bmi)/enf**2",
"Print and display the results",
"for i,l in enumerate([g[i:i + 7] for i in range(0, len(g), 7)]):\n print(\"| Module %-2d\"%cfg.configured_module_id(i),'|',' | '.\\\n join(map(lambda x: '%5.3f'%x, l)),'|')\n\ncalin.plotting.plot_camera(g, cfg.camera_layout(), cfg.configured_channel_id_view())\ntitle('Gain in high-gain channels')",
"Calculate the gain of the low-gain channel\nThis is mostly for fun, or more accurately a sanity check. A better approach is probably to estimate the high-to-low gain ratio from the position of the mean signal in each channel and extrapolate from the absolute gain of the high-gain channels. This is effectively done at the very end.",
"lg_smi=calin.diagnostics.functional.channel_mean(diagnostics.sig_stats().low_gain())\nlg_bmi=calin.diagnostics.functional.channel_mean(diagnostics.bkg_stats().low_gain())\nlg_svi=calin.diagnostics.functional.channel_var(diagnostics.sig_stats().low_gain())\nlg_bvi=calin.diagnostics.functional.channel_var(diagnostics.bkg_stats().low_gain())\n\nlg_svi_indep,lg_sv_cm=calin.diagnostics.functional.decompose_channel_independent_and_common_var(\\\n diagnostics.sig_stats().low_gain())\n\nlg_g = (lg_svi - lg_bvi - lg_sv_cm*((lg_smi-lg_bmi)/mean(lg_smi-lg_bmi))**2)/(lg_smi-lg_bmi)/enf**2\n\nfor i,l in enumerate([lg_g[i:i + 7] for i in range(0, len(lg_g), 7)]):\n print(\"| Module %-2d\"%cfg.configured_module_id(i),'|',' | '.\\\n join(map(lambda x: '%5.3f'%x, l)),'|')\n\ncalin.plotting.plot_camera(lg_g, cfg.camera_layout(), cfg.configured_channel_id_view())\ntitle('Gain in low-gain channels')",
"Calculate the high-to-low gain ratio\n\nFirst, by dividing the absolute gains calculated above\nSecondly, by dividing the means of the signal histograms",
"g_ratio = g/lg_g\nfor i,l in enumerate([g_ratio[i:i + 7] for i in range(0, len(g_ratio), 7)]):\n    print(\"| Module %-2d\"%cfg.configured_module_id(i),'|',' | '.\\\n    join(map(lambda x: '%5.3f'%x, l)),'|')\n\nrg_ratio = (smi-bmi)/(lg_smi-lg_bmi)\nfor i,l in enumerate([rg_ratio[i:i + 7] for i in range(0, len(rg_ratio), 7)]):\n    print(\"| Module %-2d\"%cfg.configured_module_id(i),'|',' | '.\\\n    join(map(lambda x: '%5.3f'%x, l)),'|')\n\nplot(g_ratio, rg_ratio,'x')\naxis('square')\nxlabel('Absolute gain ratio')\nylabel('Relative gain ratio')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
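The gain formula applied in the notebook above (equation 30 of MST-CAM-TN-0060, with the common-mode flasher variance subtracted and the excess-noise factor applied) can be sketched in plain NumPy; the per-channel statistics below are invented stand-ins for the values read from the diagnostics database:

```python
import numpy as np

enf = 1.14  # PMT excess-noise factor, squared in the denominator as above

# Hypothetical per-channel statistics; in the notebook these come from
# calin.diagnostics.functional.channel_mean / channel_var.
smi = np.array([520.0, 540.0, 510.0])     # signal-window means [DC]
bmi = np.array([20.0, 21.0, 19.0])        # background-window means [DC]
svi = np.array([3200.0, 3400.0, 3100.0])  # signal-window variances
bvi = np.array([100.0, 110.0, 95.0])      # background-window variances
sv_cm = 400.0                             # common-mode (flasher) variance

# Gain per channel: excess variance over mean signal, with the common-mode
# term scaled by the relative signal amplitude, corrected for the ENF.
sig = smi - bmi
g = (svi - bvi - sv_cm * (sig / sig.mean()) ** 2) / sig / enf ** 2
```

The common-mode subtraction is what makes the estimate robust to a small intrinsic flasher variance; as the memo notes, once the flasher resolution exceeds about 3% this correction is no longer sufficient.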
ES-DOC/esdoc-jupyterhub
|
notebooks/bnu/cmip6/models/bnu-esm-1-1/aerosol.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: BNU\nSource ID: BNU-ESM-1-1\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:41\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'bnu', 'bnu-esm-1-1', 'aerosol')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Key Properties --> Timestep Framework\n4. Key Properties --> Meteorological Forcings\n5. Key Properties --> Resolution\n6. Key Properties --> Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --> Absorption\n12. Optical Radiative Properties --> Mixtures\n13. Optical Radiative Properties --> Impact Of H2o\n14. Optical Radiative Properties --> Radiative Scheme\n15. Optical Radiative Properties --> Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of aerosol model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of aerosol model code",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Scheme Scope\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAtmospheric domains covered by the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBasic approximations made in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables Form\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrognostic variables in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.6. Number Of Tracers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of tracers in the aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"1.7. Family Approach\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre aerosol calculations generalized into families of species?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestep Framework\nTimestepping framework of the aerosol model\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Split Operator Advection Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for aerosol advection (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Split Operator Physical Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for aerosol physics (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Integrated Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep for the aerosol model (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Integrated Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the type of timestep scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4. Key Properties --> Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE Type: STRING Cardinality: 0.1\nThree dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Variables 2D\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTwo dimensional forcing variables, e.g. land-sea mask definition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Frequency\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nFrequency with which meteorological forcings are applied (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Resolution\nResolution in the aerosol model grid\n5.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Canonical Horizontal Resolution\nIs Required: FALSE Type: STRING Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Number Of Horizontal Gridpoints\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5.4. Number Of Vertical Levels\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5.5. Is Adaptive Grid\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Key Properties --> Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Transport\nAerosol transport\n7.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of transport in atmospheric aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for aerosol transport modeling",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n",
"7.3. Mass Conservation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod used to ensure mass conservation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.4. Convention\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTransport by convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of emissions in atmospheric aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prescribed Climatology\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nSpecify the climatology type for aerosol emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n",
"8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.7. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.8. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of aerosol species emitted and specified via an "other method"",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.9. Other Method Characteristics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCharacteristics of the "other method" used for aerosol emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of concentrations in atmospheric aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Prescribed Lower Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the lower boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Prescribed Upper Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the upper boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Prescribed Fields Mmr\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed as mass mixing ratios.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Prescribed Fields Aod Plus Ccn\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of optical and radiative properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Optical Radiative Properties --> Absorption\nAbsortion properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.2. Dust\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Organics\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12. Optical Radiative Properties --> Mixtures\n**\n12.1. External\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there external mixing with respect to chemical composition?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Internal\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.3. Mixing Rule\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition, then indicate the mixing rule",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Optical Radiative Properties --> Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes H2O impact size?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.2. Internal Mixture\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes H2O impact internal mixture?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14. Optical Radiative Properties --> Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of radiative scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Shortwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of shortwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Longwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of longwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15. Optical Radiative Properties --> Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of aerosol-cloud interactions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Twomey\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the Twomey effect included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.3. Twomey Minimum Ccn\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Drizzle\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the scheme affect drizzle?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.5. Cloud Lifetime\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the scheme affect cloud lifetime?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.6. Longwave Bands\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of longwave bands",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmospheric aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the Aerosol model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n",
"16.3. Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther model components coupled to the Aerosol model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.4. Gas Phase Precursors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of gas phase aerosol precursors.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.5. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.6. Bulk Scheme Species\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of species covered by the bulk scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
theideasmith/theideasmith.github.io
|
_notebooks/.ipynb_checkpoints/Asymptotic Convergence of Gradient Descent for Linear Regression Least Squares Optimization-checkpoint.ipynb
|
mit
|
[
"Supplementary Materials\nThis code accompanies the paper Asymptotic Convergence of Gradient Descent for Linear Regression Least Squares Optimization (Lipshitz, 2017)\nInitialization",
"from pylab import *\nfrom numpy import random as random\nrandom.seed(1)\nN=1000.\nw = array([14., 30.]); \nx = zeros((2, int(N))).astype(float32)\nx[0,:] = arange(N).astype(float32)\nx[1,:] = 1\ny = w.dot(x) + random.normal(size=int(N), scale=100.)",
"Defining Regression",
"yh = lambda xs, ws: \\\n ws.dot(xs)\n \ngrad = lambda ys, yhs, xs: \\\n (1./xs.shape[1])*sum((yhs-ys)*xs).astype(float32)\n \ndelta = lambda gs, a: \\\n a*gs\n \ndef regress(y, x, alpha, T=1000, wh=None, **kwargs):\n\n wh = random.normal(2, size=2)\n whs = zeros((T, 2))\n whs[0,:] = wh\n for i in xrange(1,T): \n wh+=delta(grad(y,yh(x,wh), x), alpha)\n whs[i,:] = wh.copy()\n return wh, whs\n\ndef regrSample(y, x, alpha, T=1000, N=10, **kwargs):\n out = map(\n lambda a: \\\n regress(y,x, alpha, T=T), xrange(N)\n )\n trains = array([o[1] for o in out])\n wDist = array([o[0] for o in out])\n \n return wDist, trains\n\ndef statsRegr(*args, **kwargs):\n wDist, trains = regrSample(*args, **kwargs)\n return np.mean(trains, axis=0), np.std(trains, axis=0)",
"Running Regression above and Below the Upper Bound on $\\alpha$\nThe theoretically derived bounds on $\\alpha$ are $$\\alpha \\in \\left( -2\\frac{N}{|\\mathbf{x}|^2}, 0 \\right]$$\nOther $\\alpha$ values diverge",
"def plotDynamicsForAlpha(alpha, axTitle, T=1000, N=10):\n t = np.arange(T)\n mu, sig = statsRegr(y, x, alpha, T=T, N=N)\n plot(mu[:,0], 'r:', label='$w_1$')\n plot(mu[:,1], 'b:', label='$w_2$')\n fill_between(t, \\\n mu[:,0]+sig[:,0], \\\n mu[:,0]-sig[:,0], \\\n facecolor='red', alpha=0.5)\n fill_between(t,\\\n mu[:,1]+sig[:,1], \\\n mu[:,1]-sig[:,1], \\\n facecolor='blue', alpha=0.5)\n xlabel(\"t [Iterations]\", fontdict={'fontsize':fs*.8})\n yl = ylabel(\"$w_{i,t}$\",fontdict={'fontsize':fs*.8})\n yl.set_rotation('horizontal')\n title(axTitle, fontdict={'fontsize':fs})\n tight_layout()\n return mu, sig\n\n\n\nalphaData = [\n (\"$a=2$\", 2),\n (\"$a=0$\",0.),\n (\"$a=-0.5N/x^2$\",-0.5*N/linalg.norm(x[0,:])**2),\n (\"$a=-N/x^2$\", -N/linalg.norm(x[0,:])**2),\n (\"$a=-1.3N/x^2$\", -1.3*N/linalg.norm(x[0,:])**2),\n (\"$a=-1.6N/x^2$\", -1.6*N/linalg.norm(x[0,:])**2),\n (\"$a=-1.99N/x^2$\", -1.99*N/linalg.norm(x[0,:])**2),\n (\"$a=-2N/x^2$\", -2*N/linalg.norm(x[0,:])**2)\n]\n\n%matplotlib inline\nfrom scipy.stats import norm\nimport seaborn as sns\nfs = 15\nfigure(figsize=(10,3*len(alphaData)))\nouts = []\nfor i, d in enumerate(alphaData):\n k, v = d\n# subplot(len(alphaData),1, i+1)\n figure(figsize=(10,3))\n outs.append(plotDynamicsForAlpha(v, k, T=150 ))\n\ntight_layout()\n# suptitle(\"Dynamical Learning Trajectories for Significant Alpha Values\", y=1.08, fontdict={'fontsize':20});\n\n\n\nfor i, axtitle in enumerate(alphaData):\n axtitle, axnum = axtitle\n mu, sig = outs[i]\n figure(figsize=(10,3))\n \n if np.sum(np.isnan(mu)) > 0:\n k=2\n idx0=argwhere(~np.isnan(mu[:,0]))[-1]-1\n idx1=argwhere(~np.isnan(sig[:,0]))[-1]-1\n idx = min(idx0, idx1)\n xmin = max(mu[idx,0]-k*sig[idx,0], mu[idx,0]-k*sig[idx,0])\n xmax = min(mu[idx,0]+k*sig[idx,0], mu[idx,0]+k*sig[idx,0])\n x_axis = np.linspace(xmin,xmax, num=300);\n else: \n xmin = max(mu[-1,0]-3*sig[-1,0], mu[-1,0]-3*sig[-1,0])\n xmax = min(mu[-1,0]+3*sig[-1,0], mu[-1,0]+3*sig[-1,0])\n x_axis = np.linspace(xmin,xmax, 
num=300);\n\n plt.plot(x_axis, norm.pdf(x_axis,mu[-1,0],sig[-1,0]),'r:');\n plt.plot(x_axis, norm.pdf(x_axis,mu[-1,1],sig[-1,1]), 'b:');\n xlim(xmin = xmin, xmax=xmax)\n p, v = yticks()\n plt.yticks(p,map(lambda w: round(w, 2),linspace(0, 1, num=len(p))))\n title(axtitle)\n tight_layout()\n\n\nx.shape\n\nfigure(figsize=(10,10))\nsubplot(2,1,1)\ntitle(\"Closed From Expression\", fontdict={'fontsize':10})\nT = 30\nw0 = random.normal(2, size=2)\nt = np.arange(T)\n\na = -2.1*N/linalg.norm(x[0,:])**2\nbeta2 = (1/N)*a*x[0,:].dot(x[0,:])\nbeta1 = -(1/N)*a*x[0,:].dot(y)\nws = w0[0]*(beta2+1)**t - beta1*(1-(beta2+1)**t)/beta2\n# ws = w0[0]*(-1)**t + ((-1)**t -1)*x[0,:].dot(y)/linalg.norm(x[0,:])**2\nplot(ws)\n\nsubplot(2,1,2)\ntitle(\"Simulation\", fontdict={'fontsize':10})\nwh = w0\nwhs = zeros((T, 2))\nwhs[0,:] = wh\nfor i in xrange(1,T): \n wh+=delta(grad(y,yh(x,wh), x), a)\n whs[i,:] = wh.copy()\nplot(whs[:,0])\nsuptitle((\"Asymptotic Behavior \"\n \"of Closed form and Simulated Learning: $a = -2.1N/x^2$\"), fontdict={\"fontsize\":20})",
"$\\alpha = \\sup A$",
"t = arange(0,10)\nws = (0**t)*(w0[0]+x[0,:].dot(y)/linalg.norm(x[0,:])**2) + x[0,:].dot(y)/linalg.norm(x[0,:])**2\nfigure()\nax = subplot(111)\nax.set_title(\"alpha = sup A\")\nax.plot(ws)\n\nt = arange(0,10)\nws = ((-1)**t)*w0[0] - (x[0,:].dot(y)/linalg.norm(x[0,:])**2) + (-2)**t*x[0,:].dot(y)/linalg.norm(x[0,:])**2\nfigure()\nax = subplot(111)\nax.set_title(\"alpha = sup A\")\nax.plot(ws)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
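The step-size bound derived in the notebook above, α ∈ (−2N/|x|², 0] for the update w ← w + α·grad, can be reproduced on a tiny scalar least-squares problem. The following is a Python 3, stdlib-only sketch; the function name and data are illustrative, not the notebook's code.

```python
# Sketch of the bound discussed above: for the update
#   w <- w + a * grad,   grad = (1/N) * sum((w*x_i - y_i) * x_i),
# the iteration converges only for a in (-2N/|x|^2, 0].
def regress_1d(xs, ys, a, steps=200, w0=0.0):
    n = len(xs)
    w = w0
    for _ in range(steps):
        g = sum((w * x - y) * x for x, y in zip(xs, ys)) / n
        w += a * g
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]            # exact line y = 2x, no noise
norm2 = sum(x * x for x in xs)       # |x|^2 = 30
a_ok = -1.0 * len(xs) / norm2        # inside the bound -> converges to w = 2
a_bad = -2.5 * len(xs) / norm2       # outside the bound -> diverges
print(round(regress_1d(xs, ys, a_ok), 6))    # -> 2.0
print(abs(regress_1d(xs, ys, a_bad)) > 1e6)  # -> True
```

With a = −N/|x|² the fixed point Σxy/|x|² is reached in a single step, which matches the closed-form expression plotted in the notebook.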
mmckerns/tuthpc
|
kocham.ipynb
|
bsd-3-clause
|
[
"Exercise: cracking a password\nThe following requires you first install kocham, which can be found as a zipfile in the tutorial repository. Python 2.7 is required. If the install fails, try installing setuptools first.",
"$ unzip kocham.zip\n$ cd kocham\n$ python setup.py install\n\n\"\"\"\na toy password cracker\n\"\"\"\nimport time\nimport itertools\nfrom multiprocess.dummy import Pool\nimport kocham.imap as imap\nimport kocham.corpus as corpus\nstopwords = corpus.stopwords\nipassword = corpus.ipassword\ncompare = imap.login\n\n# turn on verbosity\ncorpus.VERBOSE = True\n\n# set the password to \"solve\"\nimap.setpass('catdog@123')\n\n# set a delay, to simulate connection time to a server\nimap.setdelay(0.0)\n\n# select a list of possible words and a possible set of characters\nwords = ['cat','dog','horse','apple','foo','bar','python','phobia']\nchars = '1234567890!@#$'\n\n# configure the minimum and maximum password length\nargs = (words, chars, 8, 10)\n# configure the minimum and maximum word length, and the maximum number of words\nkwds = dict(minword=3, maxword=8, size=2)\n\n# build the password generator\npasswd = ipassword(flatten=True, *args, **kwds)\n\nstart = time.time()\n\n# solve\nfor p in passwd:\n x = compare(p)\n if x:\n print x\n break\n\nend = time.time() - start\nprint \"finished in: %s\" % end\n\n# rebuild the password generator\npasswd = ipassword(flatten=True, *args, **kwds)\n\nstart = time.time()\n\n# solve\nfor x in itertools.imap(compare, passwd):\n if x:\n print x\n break\n\nend = time.time() - start\nprint \"finished in: %s\" % end\n\n# rebuild the password generator\npasswd = ipassword(flatten=True, *args, **kwds)\n\nstart = time.time()\n\n# solve\ntp = Pool(50)\nfor x in tp.imap_unordered(compare, passwd):\n if x:\n print x\n break\ntp.close()\ntp.join()\n\nend = time.time() - start\nprint \"finished in: %s\" % end",
"(this takes a long time...)",
"\"\"\"\nsome useful parallel iterated map constructs\n\"\"\"\ndef icompare(pwds):\n import itertools\n res = itertools.imap(compare, pwds)\n for x in res:\n if x:\n return x\n return\n\ndef uicompare(pwds, n=50):\n from multiprocess.dummy import Pool\n tp = Pool(n)\n res = tp.imap_unordered(compare, pwds)\n for x in res:\n if x:\n return x\n return\n\n# rebuild the password generator, this time don't flatten to a single interator\npasswd = ipassword(flatten=False, *args, **kwds)\n\nstart = time.time()\n\n# solve\nfor p in passwd:\n for i in p:\n x = compare(i)\n if x:\n print x\n break\n if x:\n break\n\n\nend = time.time() - start\nprint \"finished in: %s\" % end",
"Can you find a looping pattern that significantly speeds up the time-to-solution?\nHow different are your results when the cracker has a small delay time like: imap.setdelay(0.0001)?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
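As a hint for the exercise above, the serial and pooled looping patterns can be compared on a toy stand-in. This is a Python 3 sketch with hypothetical names: `compare` substitutes for kocham's `imap.login`, and the candidate generator replaces `ipassword`. The thread pool's `imap_unordered` keeps many `compare` calls in flight at once, which is where the speedup comes from once a per-call delay is set.

```python
# Hypothetical stand-ins (not part of kocham): a predicate with a single
# matching candidate, searched serially and with a thread-based pool.
from multiprocessing.dummy import Pool  # thread-backed Pool from the stdlib

def compare(p):
    return p if p == "cat7" else None

def crack_serial(candidates):
    # plain short-circuiting loop
    for p in candidates:
        if compare(p):
            return p

def crack_pool(candidates, n=8):
    tp = Pool(n)
    try:
        # results arrive in completion order; stop at the first hit
        for x in tp.imap_unordered(compare, candidates):
            if x:
                return x
    finally:
        tp.close()
        tp.join()

print(crack_serial("cat%d" % i for i in range(1000)))  # -> cat7
print(crack_pool("cat%d" % i for i in range(1000)))    # -> cat7
```

With a zero delay the threaded version can even be slower than the serial loop because of pool overhead, which is part of what the exercise's delay question probes.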
turbomanage/training-data-analyst
|
courses/machine_learning/deepdive/10_recommend/labs/content_based_by_hand.ipynb
|
apache-2.0
|
[
"Content Based Filtering by hand\nThis lab illustrates how to implement a content based filter using low level Tensorflow operations.\nThe code here follows the technique explained in Module 2 of Recommendation Engines: Content Based Filtering.\nTo run this lab, we need to use TensorFlow version 1.15.0.",
"!pip install tensorflow==1.15.0",
"Make sure to restart your kernel to ensure this change has taken place.",
"import numpy as np\nimport tensorflow as tf\n\ntf.enable_eager_execution()\nprint(tf.__version__)",
"To start, we'll create our list of users, movies and features. While the users and movies represent elements in our database, for a content-based filtering method the features of the movies are likely hand-engineered and rely on domain knowledge to provide the best embedding space. Here we use the categories of Action, Sci-Fi, Comedy, Cartoon, and Drama to describe our movies (and thus our users).\nIn this example, we will assume our database consists of four users and six movies, listed below.",
"users = ['Ryan', 'Danielle', 'Vijay', 'Chris']\nmovies = ['Star Wars', 'The Dark Knight', 'Shrek', 'The Incredibles', 'Bleu', 'Memento']\nfeatures = ['Action', 'Sci-Fi', 'Comedy', 'Cartoon', 'Drama']\n\nnum_users = len(users)\nnum_movies = len(movies)\nnum_feats = len(features)\nnum_recommendations = 2",
"Initialize our users, movie ratings and features\nWe'll need to enter the user's movie ratings and the k-hot encoded movie features matrix. Each row of the users_movies matrix represents a single user's rating (from 1 to 10) for each movie. A zero indicates that the user has not seen/rated that movie. The movies_feats matrix contains the features for each of the given movies. Each row represents one of the six movies, the columns represent the five categories. A one indicates that a movie fits within a given genre/category.",
"# each row represents a user's rating for the different movies\nusers_movies = tf.constant([\n [4, 6, 8, 0, 0, 0],\n [0, 0, 10, 0, 8, 3],\n [0, 6, 0, 0, 3, 7],\n [10, 9, 0, 5, 0, 2]],dtype=tf.float32)\n\n# features of the movies one-hot encoded\n# e.g. columns could represent ['Action', 'Sci-Fi', 'Comedy', 'Cartoon', 'Drama']\nmovies_feats = tf.constant([\n [1, 1, 0, 0, 1],\n [1, 1, 0, 0, 0],\n [0, 0, 1, 1, 0],\n [1, 0, 1, 1, 0],\n [0, 0, 0, 0, 1],\n [1, 0, 0, 0, 1]],dtype=tf.float32)",
"Computing the user feature matrix\nWe will compute the user feature matrix; that is, a matrix containing each user's embedding in the five-dimensional feature space. We can calculate this as the matrix multiplication of the users_movies tensor with the movies_feats tensor. Implement this in the TODO below.",
"users_feats = #TODO \nusers_feats",
"Next we normalize each user feature vector to sum to 1. Normalizing isn't strictly necessary, but it makes rating magnitudes comparable between users.",
"users_feats = users_feats/tf.reduce_sum(users_feats,axis=1,keepdims=True)\nusers_feats",
"Ranking feature relevance for each user\nWe can use the users_feats computed above to represent the relative importance of each movie category for each user.",
"top_users_features = tf.nn.top_k(users_feats, num_feats)[1]\ntop_users_features\n\nfor i in range(num_users):\n feature_names = [features[index] for index in top_users_features[i]]\n print('{}: {}'.format(users[i],feature_names))",
"Determining movie recommendations.\nWe'll now use the users_feats tensor we computed above to determine the movie ratings and recommendations for each user.\nTo compute the projected ratings for each movie, we compute the similarity measure between the user's feature vector and the corresponding movie feature vector. \nWe will use the dot product as our similarity measure. In essence, this is a weighted movie average for each user.\nHint: you can also implement this as a matrix multiplication, but you will have to transpose one of the operands",
"users_ratings = #TODO\nusers_ratings",
"The computation above finds the similarity measure between each user and each movie in our database. To focus only on the ratings for new movies, we apply a mask to the users_ratings matrix. \nIf a user has already rated a movie, we ignore that rating. This way, we only focus on ratings for previously unseen/unrated movies.",
"users_ratings_new = tf.where(tf.equal(users_movies, tf.zeros_like(users_movies)),\n users_ratings,\n tf.zeros_like(tf.cast(users_movies, tf.float32)))\nusers_ratings_new",
"Finally let's grab and print out the top 2 rated movies for each user",
"top_movies = tf.nn.top_k(users_ratings_new, num_recommendations)[1]\ntop_movies\n\nfor i in range(num_users):\n movie_names = [movies[index] for index in top_movies[i]]\n print('{}: {}'.format(users[i],movie_names))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
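The two TODOs in the lab above reduce to ordinary matrix products, as the surrounding text states: users_feats = users_movies · movies_feats, and users_ratings = users_feats · movies_featsᵀ. The following pure-Python sketch (the lab itself uses TensorFlow ops) illustrates that algebra on a two-user slice of the lab's data.

```python
# Naive matrix multiply, just to make the linear algebra explicit.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Two users' ratings and the lab's movie/feature k-hot matrix.
users_movies = [[4, 6, 8, 0, 0, 0],
                [0, 0, 10, 0, 8, 3]]
movies_feats = [[1, 1, 0, 0, 1],
                [1, 1, 0, 0, 0],
                [0, 0, 1, 1, 0],
                [1, 0, 1, 1, 0],
                [0, 0, 0, 0, 1],
                [1, 0, 0, 0, 1]]

# First TODO: each user's (unnormalized) feature embedding.
users_feats = matmul(users_movies, movies_feats)
# Second TODO: projected ratings = users_feats . movies_feats^T.
users_ratings = matmul(users_feats, [list(c) for c in zip(*movies_feats)])
print(users_feats[0])  # -> [10, 10, 8, 8, 4]
```

In the lab both products are single `tf.matmul` calls, with the second operand transposed for the ratings step.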
Eomys/MoSQITo
|
tutorials/tuto_sharpness_din.ipynb
|
apache-2.0
|
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Load-signal\" data-toc-modified-id=\"Load-signal-1\"><span class=\"toc-item-num\">1 </span>Load signal</a></span></li><li><span><a href=\"#Compute-sharpness-of-the-whole-signal\" data-toc-modified-id=\"Compute-sharpness-of-the-whole-signal-2\"><span class=\"toc-item-num\">2 </span>Compute sharpness of the whole signal</a></span></li><li><span><a href=\"#Compute-sharpness-per-signal-segments\" data-toc-modified-id=\"Compute-sharpness-per-signal-segments-3\"><span class=\"toc-item-num\">3 </span>Compute sharpness per signal segments</a></span></li><li><span><a href=\"#Compute-sharpness-from-loudness\" data-toc-modified-id=\"Compute-sharpness-from-loudness-4\"><span class=\"toc-item-num\">4 </span>Compute sharpness from loudness</a></span></li><li><span><a href=\"#Compute-sharpness-from-spectrum\" data-toc-modified-id=\"Compute-sharpness-from-spectrum-5\"><span class=\"toc-item-num\">5 </span>Compute sharpness from spectrum</a></span></li></ul></div>\n\nHow to compute acoustic Sharpness according to DIN method\nThis tutorial explains how to use MOSQITO to compute the acoustic sharpness of a signal according to the DIN 45692 method. For more information on the implementation and validation of the metric, you can refer to the documentation.\nThe following commands are used to import the necessary functions.",
"# Add MOSQITO to the Python path\nimport sys\nsys.path.append('..')\n\n# To get inline plots (specific to Jupyter notebook)\n%matplotlib notebook\n\n# Import numpy\nimport numpy as np\n# Import plot function\nimport matplotlib.pyplot as plt\n# Import mosqito functions\nfrom mosqito.utils import load\n# Import spectrum computation tool\nfrom scipy.fft import fft, fftfreq\nfrom mosqito.sq_metrics import loudness_zwst_perseg\nfrom mosqito.sq_metrics import sharpness_din_st\nfrom mosqito.sq_metrics import sharpness_din_perseg\nfrom mosqito.sq_metrics import sharpness_din_from_loudness\nfrom mosqito.sq_metrics import sharpness_din_freq\n\n# Import MOSQITO color scheme [Optional]\nfrom mosqito import COLORS",
"Load signal\nIn this tutorial, the signal is imported from a .wav file. The tutorial Audio signal basic operations gives more information about the syntax of the import and the other supported file types. You can use any .wav file to perform the tutorial or you can download the pink noise signal from MOSQITO that is used in the following.",
"# Define path to the .wav file\n# To be replaced by your own path\npath = \"../validations/sq_metrics/loudness_zwst/input/ISO_532_1/Test signal 5 (pinknoise 60 dB).wav\"\n# load signal\nsig, fs = load(path, wav_calib=2 * 2 **0.5)\n# plot signal\nt = np.linspace(0, (len(sig) - 1) / fs, len(sig))\nplt.figure(1)\nplt.plot(t, sig, color=COLORS[0])\nplt.xlabel('Time [s]')\nplt.ylabel('Acoustic pressure [Pa]')",
"Compute sharpness of the whole signal\nThe acoustic sharpness is computed by using the following command line. In addition to the signal (as ndarray) and the sampling frequency, the function takes one input argument: \"weighting\", which specifies the weighting function to be used ('din' by default, 'aures', 'bismarck' or 'fastl').",
"sharpness = sharpness_din_st(sig, fs, weighting=\"din\")",
"The function returns the sharpness of the signal:",
"print(\"Sharpness = {:.1f} acum\".format(sharpness) )",
"Compute sharpness per signal segments\nTo compute the sharpness for successive, possibly overlapping, time segments, you can use the sharpness_din_perseg function. It accepts two more input parameters:\n- nperseg: to define the length of each segment\n- noverlap: to define the number of points to overlap between segments",
"sharpness, time_axis = sharpness_din_perseg(sig, fs, nperseg=8192 * 2, noverlap=4096, weighting=\"din\")\nplt.figure(2)\nplt.plot(time_axis, sharpness, color=COLORS[0])\nplt.xlabel(\"Time [s]\")\nplt.ylabel(\"S_din [acum]\")\nplt.ylim((0, 3))",
"Compute sharpness from loudness\nIn case you have already computed the loudness of a signal, you can use the sharpness_din_from_loudness function to compute the sharpness. It takes the loudness and the specific loudness as input. The loudness can be computed per time segment or not.",
"N, N_specific, bark_axis, time_axis = loudness_zwst_perseg(\n sig, fs, nperseg=8192 * 2, noverlap=4096\n)\nsharpness = sharpness_din_from_loudness(N, N_specific, weighting='din')\nplt.figure(3)\nplt.plot(time_axis, sharpness, color=COLORS[0])\nplt.xlabel(\"Time [s]\")\nplt.ylabel(\"S_din [acum]\")\nplt.ylim((0, 3))",
"Compute sharpness from spectrum\nThe commands below show how to compute the stationary sharpness from a frequency spectrum, given either in complex or amplitude values, using the functions from MOSQITO. One should note that only stationary values can be computed from a frequency input. \nThe input spectrum can be either 1D with size (Nfrequency) or 2D with size (Nfrequency x Ntime). The corresponding frequency axis can either be the same for all the spectra, with size (Nfrequency), or different for each spectrum, with size (Nfrequency x Ntime).\nNote that the input spectrum must be given in RMS values!",
"# Compute spectrum\nn = len(sig)\nspec = np.abs(2 / np.sqrt(2) / n * fft(sig)[0:n//2])\nfreqs = fftfreq(n, 1/fs)[0:n//2]\n# Compute sharpness\nS = sharpness_din_freq(spec, freqs)\n\nprint(\"Sharpness_din = {:.1f} acum\".format(S))",
"",
"from datetime import date\nprint(\"Tutorial generation date:\", date.today().strftime(\"%B %d, %Y\"))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
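The spectrum scaling used in the last code cell above, `2 / np.sqrt(2) / n * fft(sig)`, combines two steps: `2/n * |FFT|` recovers the peak amplitude of a sine component, and dividing by √2 converts that peak value to the RMS value the function expects. A stdlib-only check with a naive DFT (illustrative, not MOSQITO code):

```python
# For a sine of peak amplitude A at an exact bin, |2/n * DFT| gives A,
# and A / sqrt(2) is the corresponding RMS spectrum value.
import cmath
import math

def dft(sig):
    n = len(sig)
    return [sum(sig[k] * cmath.exp(-2j * math.pi * j * k / n)
                for k in range(n)) for j in range(n)]

n, f, A = 64, 5, 2.0
sig = [A * math.sin(2 * math.pi * f * k / n) for k in range(n)]
spec = dft(sig)
peak_amp = 2 / n * abs(spec[f])    # recovers the peak amplitude A
rms_amp = peak_amp / math.sqrt(2)  # RMS value, as the tutorial requires
print(round(peak_amp, 6), round(rms_amp, 6))  # -> 2.0 1.414214
```

Feeding peak amplitudes instead of RMS values would bias the computed level by 3 dB, hence the tutorial's warning.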
MichalBusta/clstm
|
misc/lstm-delay.ipynb
|
apache-2.0
|
[
"%pylab inline\nfigsize(10,5)\n\nimport clstm",
"Network creation and initialization is very similar to C++:\n\n- networks are created using the make_net(name) factory function\n- the net.set(key,value) method is used to set up parameters\n- the .setLearningRate(lr,mom) method is used to set learning rate and momentum\n- .initialize() is called to create the network\n\nAs in C++, the combination of make_net and set does not allow arbitrary network architectures to be constructed. For anything complicated, you need to construct the network from its component modules directly.",
"net = clstm.make_net_init(\"lstm1\",\"ninput=1:nhidden=4:noutput=2\")\nprint net\n\nnet.setLearningRate(1e-4,0.9)\nprint clstm.network_info(net)",
"You can navigate the network structure as you would in C++. You can use similar methods to create more complex network architectures than possible with make_net.",
"print net.sub.size()\nprint net.sub[0]\nprint net.sub[0].name",
"This cell generally illustrates how to invoke the CLSTM library from Python:\n\n- net.inputs, net.outputs, net.d_inputs, and net.d_outputs are Sequence types\n- Sequence objects can be converted to rank 3 arrays using the .array() method\n- The values in a Sequence can be set with the .aset(array) method",
"N = 20\nxs = array(randn(N,1,1)<0.2, 'f')\nnet.inputs.aset(xs)\nnet.forward()",
"Here is a training loop that generates a delayed-by-one from a random input sequence and trains the network to learn this task.",
"N = 20\ntest = array(rand(N)<0.3, 'f')\nplot(test, '--', c=\"black\")\nntrain = 30000\nfor i in range(ntrain):\n xs = array(rand(N)<0.3, 'f')\n ys = roll(xs, 1)\n ys[0] = 0\n ys = array([1-ys, ys],'f').T.copy()\n net.inputs.aset(xs.reshape(N,1,1))\n net.forward()\n net.d_outputs.aset(ys.reshape(N,2,1)-net.outputs.array())\n net.backward()\n net.update()\n if i%1000==0:\n net.inputs.aset(test.reshape(N,1,1))\n net.forward()\n plot(net.outputs.array()[:,1,0],c=cm.jet(i*1.0/ntrain))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
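The target construction inside the training loop above (`ys = roll(xs, 1)` with `ys[0] = 0`, then stacked as `[1-ys, ys]`) can be sketched as a plain Python 3 helper; the function name is hypothetical, not part of clstm.

```python
# Build delayed-by-one, one-hot targets for the task trained above:
# at each step the network must emit the previous input, encoded over
# two output classes.
def delay_targets(xs):
    ys = [0.0] + xs[:-1]             # roll by one; first output forced to 0
    return [[1 - y, y] for y in ys]  # one-hot over (class 0, class 1)

xs = [1.0, 0.0, 1.0, 1.0]
print(delay_targets(xs))
# -> [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
```

The loop then feeds the difference between these targets and `net.outputs.array()` into `net.d_outputs` before calling `backward()` and `update()`.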
IBMDecisionOptimization/docplex-examples
|
examples/mp/jupyter/tutorials/Linear_Programming.ipynb
|
apache-2.0
|
[
"Tutorial: Linear Programming, (CPLEX Part 1)\nThis notebook gives an overview of Linear Programming (or LP). After completing this unit, you should be able to \n- describe the characteristics of an LP in terms of the objective, decision variables and constraints, \n- formulate a simple LP model on paper, \n- conceptually explain some standard terms related to LP, such as dual, feasible region, infeasible, unbounded, slack, reduced cost, and degenerate. \nYou should also be able to describe some of the algorithms used to solve LPs, explain what presolve does, and recognize the elements of an LP in a basic DOcplex model.\n\nThis notebook is part of Prescriptive Analytics for Python\nIt requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account\nand you can start using IBM Cloud Pak for Data as a Service right away).\nCPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:\n - <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:\n - <i>Python 3.x</i> runtime: Community edition\n - <i>Python 3.x + DO</i> runtime: full edition\n - <i>Cloud Pack for Data</i>: Community edition is installed by default. Please install the DO addon in Watson Studio Premium for the full edition\n\nTable of contents:\n\nIntroduction to Linear Programming\nExample: a production problem\nCPLEX Modeling for Python\nAlgorithms for solving LPs\nSummary\nReferences\n\nIntroduction to Linear Programming\nIn this topic, you’ll learn what the basic characteristics of a linear program are.\nWhat is Linear Programming?\nLinear programming deals with the maximization (or minimization) of a linear objective function, subject to linear constraints, where all the decision variables are continuous. That is, no discrete variables are allowed. The linear objective and constraints must consist of linear expressions. 
\nWhat is a linear expression?\nA linear expression is a scalar product, for example, the expression:\n$$\n\\sum{a_i x_i}\n$$\nwhere a_i represents constants (that is, data) and x_i represents variables or unknowns.\nSuch an expression can also be written in short form as a vector product:\n$$^{t}A X\n$$\nwhere $A$ is the vector of constants and $X$ is the vector of variables.\nNote: Nonlinear terms that involve variables (such as x and y) are not allowed in linear expressions. \nTerms that are not allowed in linear expressions include \n- multiplication of two or more variables (such as x times y), \n- quadratic and higher order terms (such as x squared or x cubed), \n- exponents, \n- logarithms,\n- absolute values.\nWhat is a linear constraint?\nA linear constraint is expressed by an equality or inequality as follows:\n- $linear_expression = linear_expression$\n- $linear_expression \\le linear_expression$\n- $linear_expression \\ge linear_expression$\nAny linear constraint can be rewritten as one or two expressions of the type linear expression is less than or equal to zero.\nNote that strict inequality operators (that is, $>$ and $<$) are not allowed in linear constraints. \nWhat is a continuous variable?\nA variable (or decision variable) is an unknown of the problem. Continuous variables are variables that can take any value in the set of real numbers (or in an interval). \nRestrictions on their values that create discontinuities, for example a restriction that a variable should take integer values, are not allowed. \nSymbolic representation of an LP\nA typical symbolic representation of a Linear Program is as follows:\n$\nminimize \\sum c_{i} x_{i}\\\n\\\nsubject\\ to:\\\n\\ a_{11}x_{1} + a_{12} x_{2} ... + a_{1n} x_{n} \\ge b_{1}\\\n\\ a_{21}x_{1} + a_{22} x_{2} ... + a_{2n} x_{n} \\ge b_{2}\\\n...\n\\ a_{m1}x_{1} + a_{m2} x_{2} ... 
+ a_{mn} x_{n} \\ge b_{m}\\\nx_{1}, x_{2}...x_{n} \\ge 0\n$\nThis can be written in a concise form using matrices and vectors as:\n$\nmin\\ C^{t}x\\\ns.\\ t.\\ Ax \\ge B\\\nx \\ge 0\n$\nWhere $x$ denotes the vector of variables with size $n$, $A$ denotes the matrix of constraint coefficients, with $m$ rows and $n$ columns and $B$ is a vector of numbers with size $m$.\nCharacteristics of a linear program\n<p>\n<ul>\n<img src = \"https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/1.png?raw=true\" >\n</ul> \n\n# Example: a production problem\n\nIn this topic, you’ll analyze a simple production problem in terms of decision variables, the objective function, and constraints. \nYou’ll learn how to write an LP formulation of this problem, and how to construct a graphical representation of the model. You’ll also learn what feasible, optimal, infeasible, and unbounded mean in the context of LP.\n\n## Problem description: telephone production\n\nA telephone company produces and sells two kinds of telephones, namely desk phones and cellular phones. \n\nEach type of phone is assembled and painted by the company. The objective is to maximize profit, and the company has to produce at least 100 of each type of phone.\n\nThere are limits in terms of the company’s production capacity, and the company has to calculate the optimal number of each type of phone to produce, while not exceeding the capacity of the plant.\n\n## Writing a descriptive model\n\nIt is good practice to start with a descriptive model before attempting to write a mathematical model. In order to come up with a descriptive model, you should consider what the decision variables, objectives, and constraints for the business problem are, and write these down in words.\n\n\nIn order to come up with a descriptive model, consider the following questions:\n- What are the decision variables? \n- What is the objective? \n- What are the constraints? 
\n\n\n\n## Telephone production: a descriptive model\n\nA possible descriptive model of the telephone production problem is as follows:\n- Decision variables:\n - Number of desk phones produced (DeskProduction)\n - Number of cellular phones produced (CellProduction)\n- Objective: Maximize profit\n- Constraints:\n 1. The DeskProduction should be greater than or equal to 100.\n 2. The CellProduction should be greater than or equal to 100.\n 3. The assembly time for DeskProduction plus the assembly time for CellProduction should not exceed 400 hours.\n 4. The painting time for DeskProduction plus the painting time for CellProduction should not exceed 490 hours.\n\n\n## Writing a mathematical model\n\nConvert the descriptive model into a mathematical model:\n- Use the two decision variables DeskProduction and CellProduction\n- Use the data given in the problem description (remember to convert minutes to hours where appropriate)\n- Write the objective as a mathematical expression\n- Write the constraints as mathematical expressions (use “=”, “<=”, or “>=”, and name the constraints to describe their purpose)\n- Define the domain for the decision variables\n\n### Telephone production: a mathematical model\n\nTo express the last two constraints, we model assembly time and painting time as linear combinations of the two productions, resulting in the following mathematical model:\n\n$\nmaximize:\\\\\n\\ \\ 12\\ desk\\_production + 20\\ cell\\_production\\\\\nsubject\\ to: \\\\\n\\ \\ desk\\_production >= 100 \\\\\n\\ \\ cell\\_production >= 100 \\\\\n\\ \\ 0.2\\ desk\\_production + 0.4\\ cell\\_production <= 400 \\\\\n\\ \\ 0.5\\ desk\\_production + 0.4\\ cell\\_production <= 490 \\\\\n$\n\n### Using DOcplex to formulate the mathematical model in Python\n\nUse the [DOcplex](http://ibmdecisionoptimization.github.io/docplex-doc/) Python library to write the mathematical model in Python.\nThis is done in four steps:\n\n- create an instance of docplex.mp.Model to hold all model 
objects\n- create decision variables,\n- create linear constraints,\n- finally, define the objective.\n\nBut first, we have to import the class `Model` from the docplex module.\n\n## Use IBM Decision Optimization CPLEX Modeling for Python\n\nLet's use the DOcplex Python library to write the mathematical model in Python.\n\n### Step 1: Download the library\n\nInstall `CPLEX` (Community Edition) and `docplex` if they are not installed.\n\nIn `IBM Cloud Pak for Data as a Service` notebooks, `CPLEX` and `docplex` are preinstalled.",
"import sys\ntry:\n import cplex\nexcept:\n if hasattr(sys, 'real_prefix'):\n #we are in a virtual env.\n !pip install cplex\n else:\n !pip install --user cplex",
"Installs DOcplex if needed",
"import sys\ntry:\n import docplex.mp\nexcept:\n if hasattr(sys, 'real_prefix'):\n #we are in a virtual env.\n !pip install docplex\n else:\n !pip install --user docplex",
"If either CPLEX or docplex were installed in the steps above, you will need to restart your Jupyter kernel for the changes to be taken into account.\nStep 2: Set up the prescriptive model\nCreate the model\nAll objects of the model belong to one model instance.",
"# first import the Model class from docplex.mp\nfrom docplex.mp.model import Model\n\n# create one model instance, with a name\nm = Model(name='telephone_production')",
"Define the decision variables\n\nThe continuous variable desk represents the production of desk telephones.\nThe continuous variable cell represents the production of cell phones.",
"# by default, all variables in DOcplex have a lower bound of 0 and infinite upper bound\ndesk = m.continuous_var(name='desk')\ncell = m.continuous_var(name='cell')",
"Set up the constraints\n\nDesk and cell phone production must each be greater than or equal to 100\nAssembly time is limited\nPainting time is limited.",
"# write constraints\n# constraint #1: desk production is greater than 100\nm.add_constraint(desk >= 100)\n\n# constraint #2: cell production is greater than 100\nm.add_constraint(cell >= 100)\n\n# constraint #3: assembly time limit\nct_assembly = m.add_constraint( 0.2 * desk + 0.4 * cell <= 400)\n\n# constraint #4: painting time limit\nct_painting = m.add_constraint( 0.5 * desk + 0.4 * cell <= 490)",
"Express the objective\nWe want to maximize the expected profit.",
"m.maximize(12 * desk + 20 * cell)",
"A few remarks about how we formulated the mathematical model in Python using DOcplex:\n- all arithmetic operations (+, *, -) are done using Python operators\n- comparison operators used in writing linear constraints use Python comparison operators too.\nPrint information about the model\nWe can print information about the model to see how many objects of each type it holds:",
"m.print_information()",
"Graphical representation of a Linear Problem\nA simple 2-dimensional LP (with 2 decision variables) can be represented graphically using an x- and y-axis. \nThis is often done to demonstrate optimization concepts. \nTo do this, follow these steps:\n- Assign one variable to the x-axis and the other to the y-axis.\n- Draw each of the constraints as you would draw any line in 2 dimensions.\n- Use the signs of the constraints (=, <= or >=) to determine which side of each line falls within the feasible region (allowable solutions).\n- Draw the objective function as you would draw any line in 2 dimensions, by substituting any value for the objective (for example, 12 * DeskProduction + 20 * CellProduction = 4000) \nFeasible set of solutions\n<p>\n<ul>\n<img src = \"https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/19.png?raw=true\" >\n</ul> \n\nThis graphic shows the feasible region for the telephone problem. \nRecall that the feasible region of an LP is the region delimited by the constraints, and it represents all feasible solutions. In this graphic, the variables DeskProduction and CellProduction are abbreviated to be desk and cell instead. Look at this diagram and search intuitively for the optimal solution, that is, the combination of desk and cell phones that will yield the highest profit. \n\n\n#### The optimal solution\n\n<p>\n<ul>\n<img src = \"https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/20.png?raw=true\" >\n</ul> \n\n To find the optimal solution to the LP, you must find values for the decision variables, within the feasible region, that maximize profit as defined by the objective function. In this problem, the objective function is to maximize \n $$12 * desk + 20 * cell\n $$\n\nTo do this, first draw a line representing the objective by substituting a value for the objective. \n\nNext move the line up (because this is a maximization problem) to find the point where the line last touches the feasible region. 
Note that all the solutions on one objective line, such as AB, yield the same objective value. Other values of the objective will be found along parallel lines (such as line CD). \n\nIn a profit maximizing problem such as this one, these parallel lines are often called isoprofit lines, because all the points along such a line represent the same profit. In a cost minimization problem, they are known as isocost lines. Since all isoprofit lines have the same slope, you can find all other isoprofit lines by pushing the objective value further out, moving in parallel, until the isoprofit lines no longer intersect the feasible region. The last isoprofit line that touches the feasible region defines the largest (therefore maximum) possible value of the objective function. In the case of the telephone production problem, this is found along line EF. \n\nThe optimal solution of a linear program always lies on the boundary of the feasible region, at a vertex or along an edge; at least one optimal solution is always at an extreme point (a vertex).\n\n\n### Solve with the model\n\nIf you're using a Community Edition of CPLEX runtimes, depending on the size of the problem, the solve stage may fail and require a paid subscription or product installation. \n\nIn any case, `Model.solve()` returns a solution object in Python, containing the optimal values of decision variables, if the solve succeeds, or else it returns `None`.",
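Before calling the solver, the extreme-point reasoning above can be checked with a few lines of plain Python (no CPLEX needed): enumerate the vertices of the feasible region, obtained by intersecting pairs of constraint boundaries, and evaluate the profit at each. The vertex list below is worked out by hand from the four constraints and is an illustration, not solver output.

```python
# Vertices of the telephone problem's feasible region, from intersecting
# pairs of constraint boundaries and keeping only the feasible points:
vertices = [
    (100, 100),  # desk = 100 and cell = 100
    (100, 950),  # desk = 100 and assembly binding: 0.2*100 + 0.4*950 = 400
    (300, 850),  # assembly and painting both binding
    (900, 100),  # cell = 100 and painting binding: 0.5*900 + 0.4*100 = 490
]

def profit(desk, cell):
    return 12 * desk + 20 * cell

# The best vertex is (300, 850) with profit 20600
best = max(vertices, key=lambda v: profit(*v))
```

Evaluating the objective only at vertices is justified by the extreme-point property stated above; for this toy problem the hand enumeration agrees with the graphical analysis.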
"s = m.solve()\nm.print_solution()",
"In this case, CPLEX has found an optimal solution at (300, 850). You can check that this point is indeed an extreme point of the feasible region.\nMultiple Optimal Solutions\nIt is possible that an LP has multiple optimal solutions. \nAt least one optimal solution will be at a vertex.\nBy default, the CPLEX® Optimizer reports the first optimal solution found. \nExample of multiple optimal solutions\n<p>\n<ul>\n<img src = \"https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/22.png?raw=true\" >\n</ul> \n\nThis graphic shows an example of an LP with multiple optimal solutions. This can happen when the slope of the objective function is the same as the slope of one of the constraints, in this case line AB. All the points on line AB are optimal solutions, with the same objective value, because they all lie on the boundary of the feasible region along the same isoprofit line.\n\n\n### Binding and nonbinding constraints\n\nA constraint is binding if the constraint becomes an equality when the solution values are substituted.\n\nGraphically, binding constraints are constraints where the optimal solution lies exactly on the line representing that constraint. \n\nIn the telephone production problem, the constraint limiting time on the assembly machine is binding:\n\n$$\n 0.2desk + 0.4 cell <= 400\\\\\n desk = 300\\\\\n cell = 850\\\\\n 0.2(300) + 0.4(850) = 400\n$$\n\nThe same is true for the time limit on the painting machine:\n\n$$\n 0.5desk + 0.4cell <= 490\\\\\n 0.5(300) + 0.4(850) = 490 \n$$\n\nOn the other hand, the requirement that at least 100 of each telephone type be produced is nonbinding because the left and right hand sides are not equal:\n\n$$\n desk >= 100\\\\\n 300 \\neq 100\n$$\n\n\n\n### Infeasibility\n\nA model is infeasible when no solution exists that satisfies all the constraints. 
This may be because:\nThe model formulation is incorrect.\nThe data is incorrect.\nThe model and data are correct, but represent a real-world conflict in the system being modeled.\n\nWhen faced with an infeasible model, it's not always easy to identify the source of the infeasibility. \n\nDOcplex helps you identify potential causes of infeasibilities, and it will also suggest changes to make the model feasible.\n\n#### An example of infeasible problem\n\n<p>\n<ul>\n<img src = \"https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/26.png?raw=true\" >\n</ul> \n\nThis graphic shows an example of an infeasible constraint set for the telephone production problem. Assume in this case that the person entering data had accidentally entered lower bounds on the production of 1100 instead of 100. The arrows show the direction of the feasible region with respect to each constraint. This data entry error moves the lower bounds on production higher than the upper bounds from the assembly and painting constraints, meaning that the feasible region is empty and there are no possible solutions. \n\n#### Infeasible models in DOcplex\n\nCalling `solve()` on an infeasible model returns None. Let's experiment this with DOcplex. First, we take a copy of our model and an extra infeasible constraint which states that desk telephone production must be greater than 1100",
"# create a new model, copy of m\nim = m.copy()\n# get the 'desk' variable of the new model from its name\nidesk = im.get_var_by_name('desk')\n# add a new (infeasible) constraint\nim.add_constraint(idesk >= 1100);\n# solve the new problem; we expect a result of None as the model is now infeasible\nims = im.solve()\nif ims is None:\n print('- model is infeasible')",
"Correcting infeasible models\nTo correct an infeasible model, you must use your knowledge of the real-world situation you are modeling.\nIf you know that the model is realizable, you can usually manually construct an example of a feasible solution and use it to determine where your model or data is incorrect. For example, the telephone production manager may input the previous month's production figures as a solution to the model and discover that they violate the erroneously entered bounds of 1100.\nDOcplex can help perform infeasibility analysis, which can get very complicated in large models. In this analysis, DOcplex may suggest relaxing one or more constraints. \nRelaxing constraints by changing the model\nIn the case of LP models, the term “relaxation” refers to changing the right hand side of the constraint to allow some violation of the original constraint.\nFor example, a relaxation of the assembly time constraint is as follows:\n$$\n0.2 \\ desk + 0.4\\ cell <= 440\n$$\nHere, the right hand side has been relaxed from 400 to 440, meaning that you allow more time for assembly than originally planned. \nRelaxing model by converting hard constraints to soft constraints\n\n\nA soft constraint is a constraint that can be violated in some circumstances. \n\n\nA hard constraint cannot be violated under any circumstances. So far, all constraints we have encountered are hard constraints.\n\n\nConverting hard constraints to soft is one way to resolve infeasibilities.\nThe original hard constraint on assembly time is as follows:\n$$\n0.2 \\ desk + 0.4 \\ cell <= 400\n$$\nYou can turn this into a soft constraint if you know that, for example, an additional 40 hours of overtime are available at an additional cost. 
First add an overtime term to the right-hand side:\n$$\n0.2 \\ desk + 0.4 \\ cell <= 400 + overtime\n$$\nNext, add a hard limit to the amount of overtime available:\n$$\novertime <= 40\n$$\nFinally, add an additional cost to the objective to penalize use of overtime. \nAssume that in this case overtime costs an additional $2/hour, then the new objective becomes:\n$$\nmaximize\\ 12 * desk + 20 * cell - 2 * overtime\n$$\nImplement the soft constraint model using DOcplex\nFirst add an extra variable for overtime, with an upper bound of 40. This suffices to express the hard limit on overtime.",
"overtime = m.continuous_var(name='overtime', ub=40)",
"Modify the assembly time constraint by changing its right-hand side by adding overtime. \nNote: this operation modifies the model by performing a side-effect on the constraint object. DOcplex allows dynamic editing of model elements.",
"ct_assembly.rhs = 400 + overtime",
"Last, modify the objective expression to add the penalization term.\nNote that the overtime term is subtracted in the objective using the Python minus operator.",
"m.maximize(12*desk + 20 * cell - 2 * overtime)",
"And solve again using DOcplex:",
"s2 = m.solve()\nm.print_solution()",
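The solver's result can be cross-checked by hand. As a sketch (plain NumPy, not DOcplex), assume that all 40 overtime hours are used (worthwhile, since an assembly hour is worth far more than its $2 cost) and that both capacity constraints are binding; the optimum then solves a 2x2 linear system:

```python
import numpy as np

# Binding capacity constraints with assembly relaxed by the full 40 h overtime:
#   0.2*desk + 0.4*cell = 440   (assembly, 400 + 40)
#   0.5*desk + 0.4*cell = 490   (painting)
A = np.array([[0.2, 0.4],
              [0.5, 0.4]])
b = np.array([440.0, 490.0])
desk, cell = np.linalg.solve(A, b)

# Profit with the $2/hour overtime penalty
profit = 12 * desk + 20 * cell - 2 * 40
```

This gives desk = 500/3 ≈ 166.7 and cell = 3050/3 ≈ 1016.7, a higher profit than the hard-constraint optimum, which is why buying the overtime pays off.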
"Unbounded Variable vs. Unbounded model\nA variable is unbounded when one or both of its bounds is infinite. \nA model is unbounded when its objective value can be increased or decreased without limit. \nThe fact that a variable is unbounded does not necessarily influence the solvability of the model and should not be confused with a model being unbounded. \nAn unbounded model is almost certainly not correctly formulated. \nWhile infeasibility implies a model where constraints are too limiting, unboundedness implies a model where an important constraint is either missing or not restrictive enough.\nBy default, DOcplex variables are unbounded: their upper bound is infinite (but their lower bound is zero).\nUnbounded feasible region\nThe telephone production problem would become unbounded if, for example, the constraints on the assembly and painting time were neglected. The feasible region would then look as in this diagram where the objective value can increase without limit, up to infinity, because there is no upper boundary to the region. \n<p>\n<ul>\n<img src = \"https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/32.png?raw=true\" >\n</ul> \n\n\n## Algorithms for solving LPs\n\nThe IBM® CPLEX® Optimizers to solve LP problems in CPLEX include:\n- Simplex Optimizer\n- Dual-simplex Optimizer\n- Barrier Optimizer\n\n### The Simplex algorithm\n\nThe Simplex algorithm, developed by George Dantzig in 1947, was the first generalized algorithm for solving LP problems. It is the basis of many optimization algorithms. The simplex method is an iterative method. It starts with an initial feasible solution, and then tests to see if it can improve the result of the objective function. It continues until the objective function cannot be further improved.\n\nThe following diagram illustrates how the simplex algorithm traverses the boundary of the feasible region for the telephone production problem. 
The algorithm starts somewhere along the edge of the shaded feasible region, and advances vertex-by-vertex until arriving at the vertex that also intersects the optimal objective line. Assume it starts at the red dot indicated on the diagram.\n\n<p>\n<ul>\n<img src = \"https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/36.png?raw=true\" >\n</ul> \n\n<p>\n<ul>\n<img src = \"https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/37.png?raw=true\" >\n</ul> \n\n<p>\n<ul>\n<img src = \"https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/38.png?raw=true\" >\n</ul> \n\n<p>\n<ul>\n<img src = \"https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/39.png?raw=true\" >\n</ul> \n\n\n\n### The Revised Simplex algorithm\n\nTo improve the efficiency of the Simplex algorithm, George Dantzig and W. Orchard-Hays revised it in 1953. CPLEX uses the Revised Simplex algorithm, with a number of improvements. The CPLEX Optimizers are particularly efficient and can solve very large problems rapidly. You can tune some CPLEX Optimizer parameters to change the algorithmic behavior according to your needs. \n\n\n### The Dual Simplex algorithm\n\n#### The dual of an LP\n\nThe concept of duality is important in Linear Programming (LP). Every LP problem has an associated LP problem known as its _dual_. The dual of this associated problem is the original LP problem (known as the primal problem). 
If the primal problem is a minimization problem, then the dual problem is a maximization problem and vice versa.\n\n#### A primal-dual pair\n\n<p>\n<ul>\n<img src = \"https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/42.png?raw=true\" >\n</ul> \n\n\n*Primal (P)* \n-------------------- \n\n $max\\ z=\\sum_{i} c_{i}x_{i}$ \n\n*Dual (D)*\n-------------------------------\n $min\\ w= \\sum_{j}b_{j}y_{j}$ \n\n\n- Each constraint in the primal has an associated dual variable, $y_{j}$.\n- Any feasible solution to D is an upper bound to P, and any feasible solution to P is a lower bound to D.\n- In LP, the optimal objective values of D and P are equal, and this common value occurs where these bounds meet.\n- The dual can help solve difficult primal problems by providing a bound that in the best case equals the optimal solution to the primal problem.\n\n#### Dual prices\n\nIn any solution to the dual, the values of the dual variables are known as the dual prices, also called shadow prices.\n\nFor each constraint in the primal problem, its associated dual price indicates how much the dual objective will change with a unit change in the right hand side of the constraint.\n\nThe dual price of a non-binding constraint is zero. That is, changing the right hand side of the constraint will not affect the objective value.\n\nThe dual price of a binding constraint can help you make decisions regarding the constraint.\n\nFor example, the dual price of a binding resource constraint can be used to determine whether more of the resource should be purchased or not.\n\n#### The Dual Simplex algorithm\n\nThe Simplex algorithm works by finding a feasible solution and moving progressively toward optimality. \n\nThe Dual Simplex algorithm implicitly uses the dual to try and find an optimal solution to the primal as early as it can, and regardless of whether the solution is feasible or not. 
\n\nIt then moves from one vertex to another, gradually decreasing the infeasibility while maintaining optimality, until an optimal feasible solution to the primal problem is found. \n\nIn CPLEX, the Dual-Simplex Optimizer is the first choice for most LP problems.\n\n\n### Basic solutions and basic variables\n\nYou learned earlier that the Simplex algorithm travels from vertex to vertex to search for the optimal solution. \nA solution at a vertex is known as a _basic_ solution. Without getting into too much detail, it's worth knowing that part of the Simplex algorithm involves setting a subset of variables to zero at each iteration. \nThese variables are known as non-basic variables. The remaining variables are the _basic_ variables. The concepts of basic solutions and variables are relevant in the definition of reduced costs that follows next. \n\n\n### Reduced Costs\n\nThe reduced cost of a variable gives an indication of the amount the objective will change with a unit increase in the variable value.\n\nConsider the simplest form of an LP:\n\n$\nminimize\\ c^{t}x\\\\\ns.t. \\\\\nAx = b \\\\\nx \\ge 0\n$\n\nIf $y$ represents the dual variables for a given basic solution, then the reduced costs are defined as: \n\n$$\nc - y^{t}A\n$$\n\nSuch a basic solution is optimal if: \n\n$$\n c - y^{t}A \\ge 0\n$$\n\nIf all reduced costs for this LP are non-negative, it follows that the objective value can only increase with a change in the variable value, and therefore the solution (when minimizing) is optimal.\n\n#### Getting reduced cost values with DOcplex\n\nDOcplex lets you access the reduced costs of variables after a successful solve. Let's experiment with the two decision variables of our problem:",
"print('* desk variable has reduced cost: {0}'.format(desk.reduced_cost))\nprint('* cell variable has reduced cost: {0}'.format(cell.reduced_cost))",
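The reduced-cost formula can also be verified numerically for the telephone problem. As a sketch (plain NumPy, not DOcplex), assume from the graphical analysis that the assembly and painting constraints are the binding ones at the optimum; the corresponding dual values then satisfy $A^{t}y = c$, and the reduced costs of the basic variables come out as zero:

```python
import numpy as np

# Coefficients of the two binding constraints (assembly, painting)
A = np.array([[0.2, 0.4],
              [0.5, 0.4]])
c = np.array([12.0, 20.0])  # objective coefficients of desk and cell

# Dual values (shadow prices) of the binding constraints: solve A^T y = c
y = np.linalg.solve(A.T, c)

# Reduced costs of the basic variables: c - y^T A (expected: all zero)
reduced_costs = c - y @ A
```

The dual value of the assembly constraint, about 43.3 $/hour, also explains why paying $2/hour for overtime was profitable in the soft-constraint model.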
"Default optimality criteria for CPLEX optimizer\nBecause CPLEX Optimizer operates on finite precision computers, it uses an optimality tolerance to test the reduced costs. \nThe default optimality tolerance is 1e-6, with the optimality criterion for the simplest form of an LP then being:\n$$\nc - y^{t}A > -10^{-6}\n$$\nYou can adjust this optimality tolerance, for example if the algorithm takes very long to converge and has already achieved a solution sufficiently close to optimality.\nReduced Costs and multiple optimal solutions\nIn the earlier example you saw how one can visualize multiple optimal solutions for an LP with two variables. \nFor larger LPs, the reduced costs can be used to determine whether multiple optimal solutions exist. Multiple optimal solutions exist when one or more non-basic variables with a zero reduced cost exist in an optimal solution (that is, variable values that can change without affecting the objective value). \nIn order to determine whether multiple optimal solutions exist, you can examine the values of the reduced costs with DOcplex. \nSlack values\nFor any solution, the difference between the left and right hand sides of a constraint is known as the slack value for that constraint. \nFor example, if a constraint states that f(x) <= 100, and in the solution f(x) = 80, then the slack value of this constraint is 20.\nIn the earlier example, you learned about binding and non-binding constraints. For example, f(x) <= 100 is binding if f(x) = 100, and non-binding if f(x) = 80.\nThe slack value for a binding constraint is always zero, that is, the constraint is met exactly.\nYou can determine which constraints are binding in a solution by examining the slack values with DOcplex. 
\nThis might help to better interpret the solution and help suggest which constraints may benefit from a change in bounds or a change into a soft constraint.\nAccessing slack values with DOcplex\nAs an example, let's examine the slack values of some constraints in our problem, after we revert the change to soft constraints
"# revert soft constraints\nct_assembly.rhs = 440\ns3 = m.solve()\n\n# now get slack value for assembly constraint: expected value is 40\nprint('* slack value for assembly time constraint is: {0}'.format(ct_assembly.slack_value))\n# get slack value for painting time constraint, expected value is 0.\nprint('* slack value for painting time constraint is: {0}'.format(ct_painting.slack_value))",
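The slack computation itself is just arithmetic; it can be made explicit in a few lines of plain Python. `slack` below is a hypothetical helper for <= constraints, not a DOcplex function, evaluated at the hard-constraint optimum (300, 850) found earlier:

```python
def slack(coeffs, values, rhs):
    """Slack of a <= constraint: right-hand side minus left-hand side."""
    lhs = sum(a * v for a, v in zip(coeffs, values))
    return rhs - lhs

# At the hard-constraint optimum desk=300, cell=850:
assembly = slack([0.2, 0.4], [300, 850], 400)  # binding constraint -> slack 0
painting = slack([0.5, 0.4], [300, 850], 490)  # binding constraint -> slack 0
relaxed = slack([0.2, 0.4], [300, 850], 440)   # relaxed capacity  -> slack 40
```

A zero slack identifies a binding constraint; a positive slack measures how much room the constraint leaves at that solution.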
"Degeneracy\nIt is possible that multiple non-optimal solutions with the same objective value exist. \nAs the simplex algorithm attempts to move in the direction of an improved objective value, it might happen that the algorithm starts cycling between non-optimal solutions with equivalent objective values. This is known as degeneracy. \nModern LP solvers, such as CPLEX Simplex Optimizer, have built-in mechanisms to help escape such cycling by using perturbation techniques involving the variable bounds.\nIf the default algorithm does not break the degenerate cycle, it's a good idea to try some other algorithms, for example the Dual-simplex Optimizer. Problems that are primal degenerate are often not dual degenerate, and vice versa.\nSetting an LP algorithm with DOcplex\nUsers can change the algorithm by editing the lpmethod parameter of the model.\nWe won't go into details here; it suffices to know this parameter accepts an integer from 0 to 6, where 0 denotes automatic choice of the algorithm, 1 is for primal simplex, 2 is for dual simplex, and 4 is for barrier...\nFor example, choosing the barrier algorithm is done by setting this parameter to 4. We access the parameters property of the model and from there, assign the lpmethod parameter",
"m.parameters.lpmethod = 4\nm.solve(log_output=True)",
"Barrier methods\nMost of the CPLEX Optimizers for MP call upon the basic simplex method or some variation of it. \nSome, such as the Barrier Optimizer, use alternative methods.\nIn graphical terms, the Simplex Algorithm starts along the edge of the feasible region and searches for an optimal vertex. \nThe barrier method starts somewhere inside the feasible region – in other words, it avoids the “barrier” that is created by the constraints, and burrows through the feasible region to find the optimal solution.\nIn its search, the method uses what is known as a predictor-corrector algorithm that constantly adjusts its path through the center of the feasible region (the central path). \nThis diagram shows how the barrier method works compared to the simplex method. As you can see, the simplex method traverses the edge of the feasible region, while the barrier method moves through the interior, with a predictor-corrector determining the path. In general, it’s a good idea to experiment with different algorithms in CPLEX when trying to improve performance.\n<p>\n<ul>\n<img src = \"https://ibmdecisionoptimization.github.io/tutorials/jupyter/training/52.png?raw=true\" >\n</ul> \n\n### Presolve\n\nCPLEX Optimizer provides a _presolve_ procedure.\n\nPresolve evaluates the model formulation before solving it, and attempts to reduce the size of the problem that is sent to the solver engine. \n\nA reduction in problem size typically translates to a reduction in total run time. 
\n\nFor example, a real problem presented to CPLEX Optimizer with approximately 160,000 constraints and 596,000 decision variables, was reduced by presolve to a problem with 27,000 constraints and 150,000 decision variables.\n\nThe presolve time was only 1.32 seconds and reduced the solution time from nearly half an hour to under 25 seconds.\n\n#### An example of presolve operations\n\nLet's consider the following Linear problem:\n\n$\n maximize:\\\\\n [1]\\ 2x_{1}+ 3x_{2} - x_{3} - x_{4}\\\\\n subject\\ to:\\\\\n [2]\\ x_{1} + x_{2} + x_{3} - 2x_{4} <= 4\\\\\n [3]\\ -x_{1} - x_{2} + x_{3} - x_{4} <= 1\\\\\n [4]\\ x_{1} + x_{4} <= 3\\\\\n [5]\\ x_{1}, x_{2}, x_{3}, x_{4} >= 0\n$\n- Because $x_{3}$ has a negative coefficient in the objective, the optimization will minimize $x_{3}$.\n- In constraints [2] and [3] $x_{3}$ has positive coefficients, and the constraints are <=. Thus, $x_{3}$ can be reduced to 0, and becomes redundant.\n- In constraint [3], all the coefficients are now negative. Because the left hand side of [3] can never be positive, any assignment of values will satisfy the constraint. The constraint is redundant and can be removed.\n\n# Summary\n\nHaving completed this notebook, you should be able to:\n- Describe the characteristics of an LP in terms of the objective, decision variables and constraints\n- Formulate a simple LP model on paper\n- Conceptually explain the following terms in the context of LP:\n - dual\n - feasible region\n - infeasible\n - unbounded\n - slacks\n - reduced costs\n - degeneracy\n- Describe some of the algorithms used to solve LPs\n- Explain what presolve does\n- Write a simple LP model with DOcplex\n\n\n## References\n* [CPLEX Modeling for Python documentation](http://ibmdecisionoptimization.github.io/docplex-doc/)\n* [IBM Decision Optimization](https://www.ibm.com/analytics/decision-optimization)\n* Need help with DOcplex or to report a bug? 
Please go [here](https://stackoverflow.com/questions/tagged/docplex).\n* Contact us at dofeedback@wwpdl.vnet.ibm.com."
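The interior-point idea described above can be illustrated with a one-variable log-barrier sketch (a toy example of my own, not CPLEX's actual barrier implementation): to maximize x subject to x <= 1, maximize x + mu*log(1 - x) instead. The barrier term keeps iterates strictly inside the feasible region, and the optimum x = 1 - mu follows the central path toward the constrained optimum as mu shrinks.

```python
import math

# Toy log-barrier illustration (not CPLEX's implementation): maximize x
# subject to x <= 1 by maximizing x + mu * log(1 - x) for shrinking mu.
def barrier_argmax(mu):
    # Setting d/dx [x + mu * log(1 - x)] = 1 - mu / (1 - x) to zero
    # gives x = 1 - mu, a point strictly inside the feasible region.
    return 1.0 - mu

for mu in (0.5, 0.1, 0.01, 0.001):
    x = barrier_argmax(mu)
    print(mu, x, x + mu * math.log(1.0 - x))
```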
] |
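The two presolve reductions walked through above can be mimicked in a few lines of plain Python. This is a didactic sketch only; CPLEX's real presolve applies many more reductions than these two.

```python
# A didactic sketch of the two presolve reductions described above, for
# maximize c.x subject to A x <= b, x >= 0.
def presolve(c, A, b):
    n = len(c)
    # 1) In a maximization, a variable with a negative objective coefficient
    #    and no negative constraint coefficients can be fixed to 0: lowering
    #    it improves the objective and never hurts feasibility.
    fixed = {j for j in range(n)
             if c[j] < 0 and all(row[j] >= 0 for row in A)}
    # Drop the fixed (zero-valued) columns from every constraint.
    A = [[row[j] for j in range(n) if j not in fixed] for row in A]
    # 2) A <= constraint whose remaining coefficients are all <= 0 and whose
    #    right-hand side is >= 0 holds for every x >= 0, so it is redundant.
    keep = [i for i, row in enumerate(A)
            if not (all(a <= 0 for a in row) and b[i] >= 0)]
    return fixed, [A[i] for i in keep], [b[i] for i in keep]

# The example LP from the text.
c = [2, 3, -1, -1]
A = [[1, 1, 1, -2],
     [-1, -1, 1, -1],
     [1, 0, 0, 1]]
b = [4, 1, 3]
fixed, A2, b2 = presolve(c, A, b)
print(fixed)    # x3 (index 2) is fixed to 0
print(len(A2))  # constraint [3] is removed, 2 constraints remain
```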
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
LimeeZ/phys292-2015-work
|
assignments/assignment10/ODEsEx01.ipynb
|
mit
|
[
"Ordinary Differential Equations Exercise 1\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\nfrom scipy.integrate import odeint\nfrom IPython.html.widgets import interact, fixed",
"Euler's method\nEuler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation\n$$ \\frac{dy}{dx} = f(y(x), x) $$\nwith the initial condition:\n$$ y(x_0)=y_0 $$\nEuler's method performs updates using the equations:\n$$ y_{n+1} = y_n + h f(y_n,x_n) $$\n$$ h = x_{n+1} - x_n $$\nWrite a function solve_euler that implements the Euler method for a 1d ODE and follows the specification described in the docstring:",
"def solve_euler(derivs, y0, x):\n \"\"\"Solve a 1d ODE using Euler's method.\n \n Parameters\n ----------\n derivs : function\n The derivative of the diff-eq with the signature deriv(y,x) where\n y and x are floats.\n y0 : float\n The initial condition y[0] = y(x[0]).\n x : np.ndarray, list, tuple\n The array of times at which of solve the diff-eq.\n \n Returns\n -------\n y : np.ndarray\n Array of solutions y[i] = y(x[i])\n \"\"\"\n y0 = y(x[0]) #Initial Condition\n h = 0.1\n t = 0:h:5\n \n \n\nassert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])",
"The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation:\n$$ y_{n+1} = y_n + h f\\left(y_n+\\frac{h}{2}f(y_n,x_n),x_n+\\frac{h}{2}\\right) $$\nWrite a function solve_midpoint that implements the midpoint method for a 1d ODE and follows the specification described in the docstring:",
"def solve_midpoint(derivs, y0, x):\n \"\"\"Solve a 1d ODE using the Midpoint method.\n \n Parameters\n ----------\n derivs : function\n The derivative of the diff-eq with the signature deriv(y,x) where y\n and x are floats.\n y0 : float\n The initial condition y[0] = y(x[0]).\n x : np.ndarray, list, tuple\n The array of times at which of solve the diff-eq.\n \n Returns\n -------\n y : np.ndarray\n Array of solutions y[i] = y(x[i])\n \"\"\"\n # YOUR CODE HERE\n raise NotImplementedError()\n\nassert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])",
"You are now going to solve the following differential equation:\n$$\n\\frac{dy}{dx} = x + 2y\n$$\nwhich has the analytical solution:\n$$\ny(x) = 0.25 e^{2x} - 0.5 x - 0.25\n$$\nFirst, write a solve_exact function that compute the exact solution and follows the specification described in the docstring:",
"def solve_exact(x):\n \"\"\"compute the exact solution to dy/dx = x + 2y.\n \n Parameters\n ----------\n x : np.ndarray\n Array of x values to compute the solution at.\n \n Returns\n -------\n y : np.ndarray\n Array of solutions at y[i] = y(x[i]).\n \"\"\"\n # YOUR CODE HERE\n raise NotImplementedError()\n\nassert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))",
"In the following cell you are going to solve the above ODE using four different algorithms:\n\nEuler's method\nMidpoint method\nodeint\nExact\n\nHere are the details:\n\nGenerate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$).\nDefine the derivs function for the above differential equation.\nUsing the solve_euler, solve_midpoint, odeint and solve_exact functions to compute\n the solutions using the 4 approaches.\n\nVisualize the solutions on a sigle figure with two subplots:\n\nPlot the $y(x)$ versus $x$ for each of the 4 approaches.\nPlot $\\left|y(x)-y_{exact}(x)\\right|$ versus $x$ for each of the 3 numerical approaches.\n\nYour visualization should have legends, labeled axes, titles and be customized for beauty and effectiveness.\nWhile your final plot will use $N=10$ points, first try making $N$ larger and smaller to see how that affects the errors of the different approaches.",
"# YOUR CODE HERE\nraise NotImplementedError()\n\nassert True # leave this for grading the plots"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
quantopian/research_public
|
notebooks/tutorials/2_pipeline_lesson4/notebook.ipynb
|
apache-2.0
|
[
"Combining Factors\nFactors can be combined, both with other Factors and with scalar values, via any of the builtin mathematical operators (+, -, *, etc). This makes it easy to write complex expressions that combine multiple Factors. For example, constructing a Factor that computes the average of two other Factors is simply:\n```\n\n\n\nf1 = SomeFactor(...)\nf2 = SomeOtherFactor(...)\naverage = (f1 + f2) / 2.0\n``\nIn this lesson, we will create a pipeline that creates arelative_difference` factor by combining a 10-day average factor and a 30-day average factor. \n\n\n\nAs usual, let's start with our imports:",
"from quantopian.pipeline import Pipeline\nfrom quantopian.research import run_pipeline\nfrom quantopian.pipeline.data.builtin import USEquityPricing\nfrom quantopian.pipeline.factors import SimpleMovingAverage",
"For this example, we need two factors: a 10-day mean close price factor, and a 30-day one:",
"mean_close_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=10)\nmean_close_30 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=30)",
"Then, let's create a percent difference factor by combining our mean_close_30 factor with our mean_close_10 factor.",
"percent_difference = (mean_close_10 - mean_close_30) / mean_close_30",
"In this example, percent_difference is still a Factor even though it's composed as a combination of more primitive factors. We can add percent_difference as a column in our pipeline. Let's define make_pipeline to create a pipeline with percent_difference as a column (and not the mean close factors):",
"def make_pipeline():\n\n mean_close_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=10)\n mean_close_30 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=30)\n\n percent_difference = (mean_close_10 - mean_close_30) / mean_close_30\n\n return Pipeline(\n columns={\n 'percent_difference': percent_difference\n }\n )",
"Let's see what the new output looks like:",
"result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')\nresult",
"In the next lesson, we will learn about filters."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/production_ml/labs/training_example.ipynb
|
apache-2.0
|
[
"Quantization aware training in Keras example\nOverview\nWelcome to an end-to-end example for quantization aware training.\nLearning Objectives\n\nTrain a tf.keras model for MNIST from scratch.\nFine tune the model by applying the quantization aware training API, see the accuracy, and export a quantization aware model.\nUse the model to create an actually quantized model for the TFLite backend.\nSee the persistence of accuracy in TFLite and a 4x smaller model. To see the latency benefits on mobile, try out the TFLite examples in the TFLite app repository.\n\nIntroduction\nQuantization aware training emulates inference-time quantization, creating a model that downstream tools will use to produce actually quantized models. The quantized models use lower-precision (e.g. 8-bit instead of 32-bit float), leading to benefits during deployment.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook\nSetup",
"! pip uninstall -y tensorflow\n! pip install -q tensorflow-model-optimization\n! pip install --upgrade tensorflow==2.6\n\nimport tempfile\nimport os\n\nimport tensorflow as tf\n\nfrom tensorflow import keras",
"This notebook uses TF2.x.\nPlease check your tensorflow version using the cell below.",
"# Show the currently installed version of TensorFlow\nprint(\"TensorFlow version: \",tf.version.VERSION)",
"Train a model for MNIST without quantization aware training",
"# Load MNIST dataset\nmnist = keras.datasets.mnist\n(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\n\n# Normalize the input image so that each pixel value is between 0 to 1.\ntrain_images = train_images / 255.0\ntest_images = test_images / 255.0\n\n# Define the model architecture.\n\n# TODO: Your code goes here\n\n\n# Train the digit classification model\n\n# TODO: Your code goes here\n\n",
"Clone and fine-tune pre-trained model with quantization aware training\nDefine the model\nYou will apply quantization aware training to the whole model and see this in the model summary. All layers are now prefixed by \"quant\".\nNote that the resulting model is quantization aware but not quantized (e.g. the weights are float32 instead of int8). The sections after show how to create a quantized model from the quantization aware one.\nIn the comprehensive guide, you can see how to quantize some layers for model accuracy improvements.",
"import tensorflow_model_optimization as tfmot\n\nquantize_model = tfmot.quantization.keras.quantize_model\n\n# q_aware stands for for quantization aware.\nq_aware_model = quantize_model(model)\n\n# `quantize_model` requires a recompile.\nq_aware_model.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\n\nq_aware_model.summary()",
"Train and evaluate the model against baseline\nTo demonstrate fine tuning after training the model for just an epoch, fine tune with quantization aware training on a subset of the training data.",
"train_images_subset = train_images[0:1000] # out of 60000\ntrain_labels_subset = train_labels[0:1000]\n\nq_aware_model.fit(train_images_subset, train_labels_subset,\n batch_size=500, epochs=1, validation_split=0.1)",
"For this example, there is minimal to no loss in test accuracy after quantization aware training, compared to the baseline.",
"_, baseline_model_accuracy = model.evaluate(\n test_images, test_labels, verbose=0)\n\n_, q_aware_model_accuracy = q_aware_model.evaluate(\n test_images, test_labels, verbose=0)\n\nprint('Baseline test accuracy:', baseline_model_accuracy)\nprint('Quant test accuracy:', q_aware_model_accuracy)",
"Create quantized model for TFLite backend\nAfter this, you have an actually quantized model with int8 weights and uint8 activations.",
"converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)\nconverter.optimizations = [tf.lite.Optimize.DEFAULT]\n\nquantized_tflite_model = converter.convert()",
"See persistence of accuracy from TF to TFLite\nDefine a helper function to evaluate the TF Lite model on the test dataset.",
"import numpy as np\n\ndef evaluate_model(interpreter):\n input_index = interpreter.get_input_details()[0][\"index\"]\n output_index = interpreter.get_output_details()[0][\"index\"]\n\n # Run predictions on every image in the \"test\" dataset.\n prediction_digits = []\n for i, test_image in enumerate(test_images):\n if i % 1000 == 0:\n print('Evaluated on {n} results so far.'.format(n=i))\n # Pre-processing: add batch dimension and convert to float32 to match with\n # the model's input data format.\n\n # TODO: Your code goes here\n\n\n # Run inference.\n interpreter.invoke()\n\n # Post-processing: remove batch dimension and find the digit with highest\n # probability.\n output = interpreter.tensor(output_index)\n digit = np.argmax(output()[0])\n prediction_digits.append(digit)\n\n print('\\n')\n # Compare prediction results with ground truth labels to calculate accuracy.\n prediction_digits = np.array(prediction_digits)\n accuracy = (prediction_digits == test_labels).mean()\n return accuracy",
"You evaluate the quantized model and see that the accuracy from TensorFlow persists to the TFLite backend.",
"interpreter = tf.lite.Interpreter(model_content=quantized_tflite_model)\ninterpreter.allocate_tensors()\n\ntest_accuracy = evaluate_model(interpreter)\n\nprint('Quant TFLite test_accuracy:', test_accuracy)\nprint('Quant TF test accuracy:', q_aware_model_accuracy)",
"See 4x smaller model from quantization\nYou create a float TFLite model and then see that the quantized TFLite model\nis 4x smaller.",
"# Create float TFLite model.\n# TODO: Your code goes here\n\n\n# Measure sizes of models.\n_, float_file = tempfile.mkstemp('.tflite')\n_, quant_file = tempfile.mkstemp('.tflite')\n\nwith open(quant_file, 'wb') as f:\n f.write(quantized_tflite_model)\n\nwith open(float_file, 'wb') as f:\n f.write(float_tflite_model)\n\nprint(\"Float model in Mb:\", os.path.getsize(float_file) / float(2**20))\nprint(\"Quantized model in Mb:\", os.path.getsize(quant_file) / float(2**20))",
"Conclusion\nIn this tutorial, you saw how to create quantization aware models with the TensorFlow Model Optimization Toolkit API and then quantized models for the TFLite backend.\nYou saw a 4x model size compression benefit for a model for MNIST, with minimal accuracy\ndifference. To see the latency benefits on mobile, try out the TFLite examples in the TFLite app repository.\nWe encourage you to try this new capability, which can be particularly important for deployment in resource-constrained environments."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Britefury/deep-learning-tutorial-pydata2016
|
TUTORIAL 05 - Dogs vs cats with standard learning.ipynb
|
mit
|
[
"Dogs vs Cats with Standard Learning\nIn this Notebook we're going to use standard learning to attempt to crack the Dogs vs Cats Kaggle competition.\nWe are going to downsample the images to 64x64; that's pretty small, but should be enough (I hope). Furthermore, large images means longer training time and I'm too impatient for that. ;)\nLets have plots appear inline:",
"%matplotlib inline",
"We're going to need os, numpy, matplotlib, scikit-image, OpenCV and PyTorch, along with some helper utilities for data splitting and batching.",
"import os, time, glob, tqdm\nimport numpy as np\nfrom matplotlib import pyplot as plt\nimport torch, torch.nn as nn, torch.nn.functional as F\nimport torchvision\nimport skimage.transform, skimage.util\nfrom skimage.util import montage\nfrom sklearn.model_selection import StratifiedShuffleSplit\nimport cv2\nfrom batchup import work_pool, data_source\n\nimport utils\nimport imagenet_classes\n\ntorch_device = torch.device('cuda:0')",
"Data loading\nWe are loading images from a folder of files, so we could approach this a number of ways.\nOur dataset consists of 25,000 images so we could load them all into memory then access them from there. It would work, but it wouldn't scale. I'd prefer to demonstrate an approach that is more scalable and useful outside of this notebook, so we are going to load them on the fly.\nLoading images on the fly poses a challenge as we may find that the GPU is waiting doing nothing while the CPU is loading images in order to build the next mini-batch to train with. It would therefore be desirable to load images in background threads so that mini-batches of images are ready to process when the GPU is able to take one. Luckily my batchup library can help here.\nWe must provide the logic for:\n\ngetting a list of paths where we can find the image files\ngiven a list of indices identifying the images that are to make up this mini-batch, for each image in the mini-batch:\nload each one\nscale each one to the fixed size that we need\nstandardise each image (subtract mean, divide by standard deviation)\ngather them in a mini-batch of shape (sample, channel, height, width)\n\nGetting a list of paths where we can find the image files\nJoin the Kaggle competition and download the training and test data sets. Unzip them into a directory of your choosing, and modify the path definitions below to point to the appropriate location.\nWe split the images into training and validation later on, so we call them trainval for now.",
"TRAIN_PATH = r'E:\\datasets\\dogsvscats\\train'\nTEST_PATH = r'E:\\datasets\\dogsvscats\\test1'\n\n# Get the paths of the images\ntrainval_image_paths = glob.glob(os.path.join(TRAIN_PATH, '*.jpg'))\ntests_image_paths = glob.glob(os.path.join(TEST_PATH, '*.jpg'))",
"Okay. We have our image paths. Now we need to create our ground truths. Luckily the filename of each file starts with either cat. or dog. indicating which it is. We will assign dogs a class of 1 and cats a class of 0.",
"# The ground truth classifications are given by the filename having either a 'dog.' or 'cat.' prefix\n# Use:\n# 0: cat\n# 1: dog\ntrainval_y = [(1 if os.path.basename(p).lower().startswith('dog.') else 0) for p in trainval_image_paths]\ntrainval_y = np.array(trainval_y).astype(np.int32)",
"Split into training and validation\nWe use Scikit-Learn StratifiedShuffleSplit for this.",
"# We only want one split, with 10% of the data for validation\nsplitter = StratifiedShuffleSplit(n_splits=1, test_size=0.1, random_state=12345)\n\n# Get the training set and validation set sample indices\ntrain_ndx, val_ndx = next(splitter.split(trainval_y, trainval_y))\n\nprint('{} training, {} validation'.format(len(train_ndx), len(val_ndx)))",
"Define a function for loading a mini-batch of images\nGiven a list of indices into the train_image_paths list we must:\n\nload each one\nscale each one to the fixed size that we need\nstandardise each image (subtract mean, divide by standard deviation)",
"MODEL_MEAN = np.array([0.485, 0.456, 0.406])\nMODEL_STD = np.array([0.229, 0.224, 0.225])\nTARGET_SIZE = 64\n\ndef img_to_net(img):\n \"\"\"\n Convert an image from\n image format; shape (height, width, channel) range [0-1]\n to\n network format; shape (channel, height, width), standardised by mean MODEL_MEAN and std-dev MODEL_STD\n \"\"\"\n # (H, W, C) -> (C, H, W)\n img = (img - MODEL_MEAN) / MODEL_STD\n img = img.transpose(2, 0, 1)\n return img.astype(np.float32)\n\ndef net_to_img(img):\n \"\"\"\n Convert an image from\n network format; shape (sample, channel, height, width), standardised by mean MODEL_MEAN and std-dev MODEL_STD\n to\n image format; shape (height, width, channel) range [0-1]\n \"\"\"\n # (C, H, W) -> (H, W, C)\n img = img.transpose(1, 2, 0)\n img = img * MODEL_STD + MODEL_MEAN\n return img.astype(np.float32)\n\ndef load_image(path):\n \"\"\"\n Load an image from a given path and convert to network format (4D tensor)\n \"\"\"\n # Read\n img = cv2.imread(path)\n # OpenCV loads images in BGR channel order; reverse to RGB\n img = img[:, :, ::-1]\n \n # Compute scaled dimensions, while preserving aspect ratio\n # py0, py1, px0, px1 are the padding required to get the image to `TARGET_SIZE` x `TARGET_SIZE`\n if img.shape[0] >= img.shape[1]:\n height = TARGET_SIZE\n width = int(img.shape[1] * float(TARGET_SIZE) / float(img.shape[0]) + 0.5)\n py0 = py1 = 0\n px0 = (TARGET_SIZE - width) // 2\n px1 = (TARGET_SIZE - width) - px0\n else:\n width = TARGET_SIZE\n height = int(img.shape[0] * float(TARGET_SIZE) / float(img.shape[1]) + 0.5)\n px0 = px1 = 0\n py0 = (TARGET_SIZE - height) // 2\n py1 = (TARGET_SIZE - height) - py0\n # Resize the image using OpenCV resize\n # We use OpenCV as it is fast\n # We also resize *before* converting from uint8 type to float type as uint8 is significantly faster\n img = cv2.resize(img, (width, height))\n \n # Convert to float\n img = skimage.util.img_as_float(img)\n\n # Convert to network format\n img = img_to_net(img)\n \n 
# Apply padding to get it to a fixed size\n img = np.pad(img, [(0, 0), (py0, py1), (px0, px1)], mode='constant')\n \n return img\n\n",
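The resize-and-pad arithmetic inside `load_image` can be checked in isolation, with pure Python and no image libraries. The 480x640 input below is a hypothetical landscape photo.

```python
# A standalone check of the aspect-preserving resize-and-pad arithmetic used
# in load_image above: the longer side is scaled to the target size and the
# shorter side is centred with symmetric padding.
def resize_and_pad_dims(h, w, target=64):
    if h >= w:
        new_h = target
        new_w = int(w * float(target) / float(h) + 0.5)
        py0 = py1 = 0
        px0 = (target - new_w) // 2
        px1 = (target - new_w) - px0
    else:
        new_w = target
        new_h = int(h * float(target) / float(w) + 0.5)
        px0 = px1 = 0
        py0 = (target - new_h) // 2
        py1 = (target - new_h) - py0
    return (new_h, new_w), (py0, py1, px0, px1)

print(resize_and_pad_dims(480, 640))  # ((48, 64), (8, 8, 0, 0))
print(resize_and_pad_dims(100, 100))  # ((64, 64), (0, 0, 0, 0))
```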
"Show an image to check our code so far:",
"plt.imshow(net_to_img(load_image(trainval_image_paths[0])))\nplt.show()",
"Looks okay.\nMake a BatchUp data source\nBatchUp can extract mini-batches from data sources that have an array-like interface.\nWe must first define an image accessor that looks like an array. We do this by implementing __len__ and __getitem__ methods:",
"class ImageAccessor (object):\n def __init__(self, paths):\n \"\"\"\n Constructor\n \n paths - the list of paths of the images that we are to access\n \"\"\"\n self.paths = paths\n \n def __len__(self):\n \"\"\"\n The length of this array\n \"\"\"\n return len(self.paths)\n \n def __getitem__(self, item):\n \"\"\"\n Get images identified by item\n \n item can be:\n - an index as an integer\n - an array of incies\n \"\"\"\n if isinstance(item, int):\n # item is an integer; get a single item\n path = self.paths[item]\n return load_image(path)\n elif isinstance(item, np.ndarray):\n # item is an array of indices\n\n # Get the paths of the images in the mini-batch\n paths = [self.paths[i] for i in item]\n # Load each image\n images = [load_image(path) for path in paths]\n # Stack in axis 0 to make an array of shape `(sample, channel, height, width)`\n return np.stack(images, axis=0)",
"Now we make ArrayDataSource instances for the training and validation sets. These provide methods for getting mini-batches that we will use for training.",
"# image accessor\ntrainval_X = ImageAccessor(trainval_image_paths)\n\ntrain_ds = data_source.ArrayDataSource([trainval_X, trainval_y], indices=train_ndx)\nval_ds = data_source.ArrayDataSource([trainval_X, trainval_y], indices=val_ndx)",
"Process mini-batches in background threads\nWe want to do all the image loading in background threads so that the images are ready for the main thread that must feed the GPU with data to work on.\nBatchUp provides worker pools for this purpose.",
"# A pool with 4 threads\npool = work_pool.WorkerThreadPool(4)",
"Wrap our training and validation data sources so that they generate mini-batches in parallel background threads",
"train_ds = pool.parallel_data_source(train_ds)\nval_ds = pool.parallel_data_source(val_ds)",
"Build the network\nNow we will define a class for the pet classifier network.",
"class PetClassifier (nn.Module):\n def __init__(self):\n super(PetClassifier, self).__init__()\n # First two convolutional layers: 48 filters, 3x3 convolution, 1 pixel padding\n self.conv1_1 = nn.Conv2d(3, 48, kernel_size=3, padding=1)\n self.conv1_2 = nn.Conv2d(48, 48, kernel_size=3, padding=1)\n self.pool1 = nn.MaxPool2d(2)\n\n # Two convolutional layers, 96 filters\n self.conv2_1 = nn.Conv2d(48, 96, kernel_size=3, padding=1)\n self.conv2_2 = nn.Conv2d(96, 96, kernel_size=3, padding=1)\n self.pool2 = nn.MaxPool2d(2)\n\n # Two convolutional layers, 192 filters\n self.conv3_1 = nn.Conv2d(96, 192, kernel_size=3, padding=1)\n self.conv3_2 = nn.Conv2d(192, 192, kernel_size=3, padding=1)\n self.pool3 = nn.MaxPool2d(2)\n\n # Two convolutional layers, 384 filters\n self.conv4_1 = nn.Conv2d(192, 384, kernel_size=3, padding=1)\n self.conv4_2 = nn.Conv2d(384, 384, kernel_size=3, padding=1)\n self.pool4 = nn.MaxPool2d(2)\n\n # Two convolutional layers, 384 filters\n self.conv5_1 = nn.Conv2d(384, 384, kernel_size=3, padding=1)\n self.conv5_2 = nn.Conv2d(384, 384, kernel_size=3, padding=1)\n self.pool5 = nn.MaxPool2d(2)\n \n # Size at this point will be 384 channels, 2x2\n \n self.fc6 = nn.Linear(384 * 2 * 2, 256)\n self.drop = nn.Dropout()\n self.fc7 = nn.Linear(256, 2)\n \n def forward(self, x):\n x = F.relu(self.conv1_1(x))\n x = self.pool1(F.relu(self.conv1_2(x)))\n\n x = F.relu(self.conv2_1(x))\n x = self.pool2(F.relu(self.conv2_2(x)))\n\n x = F.relu(self.conv3_1(x))\n x = self.pool3(F.relu(self.conv3_2(x)))\n\n x = F.relu(self.conv4_1(x))\n x = self.pool4(F.relu(self.conv4_2(x)))\n\n x = F.relu(self.conv5_1(x))\n x = self.pool5(F.relu(self.conv5_2(x)))\n \n x = x.view(x.shape[0], -1)\n \n x = F.relu(self.fc6(x))\n x = self.drop(x)\n x = self.fc7(x)\n \n return x\n \n\n# Build it\npet_net = PetClassifier().to(torch_device)",
"Set up loss and optimizer",
"loss_function = nn.CrossEntropyLoss()\n\noptimizer = torch.optim.Adam(pet_net.parameters(), lr=1e-3)",
"Train the network\nDefine settings for training:",
"NUM_EPOCHS = 50\nBATCH_SIZE = 128",
"The training loop:",
"print('Training...')\n\nfor epoch_i in range(NUM_EPOCHS):\n t1 = time.time()\n \n # TRAIN\n pet_net.train()\n train_loss = 0.0\n n_batches = 0\n # Ask train_ds for batches of size \`BATCH_SIZE\` and shuffled in random order\n for i, (batch_X, batch_y) in enumerate(train_ds.batch_iterator(batch_size=BATCH_SIZE, shuffle=True)):\n t_x = torch.tensor(batch_X, dtype=torch.float, device=torch_device)\n t_y = torch.tensor(batch_y, dtype=torch.long, device=torch_device)\n \n # Clear gradients\n optimizer.zero_grad()\n \n # Predict logits\n pred_logits = pet_net(t_x)\n \n # Compute loss\n loss = loss_function(pred_logits, t_y)\n \n # Back-prop\n loss.backward()\n \n # Optimizer step\n optimizer.step()\n \n # Accumulate training loss\n train_loss += float(loss)\n n_batches += 1\n # Divide by the number of batches to get the mean loss\n train_loss /= float(n_batches)\n \n # VALIDATE\n pet_net.eval()\n val_err = 0.0\n # For each batch:\n with torch.no_grad():\n for batch_X, batch_y in val_ds.batch_iterator(batch_size=BATCH_SIZE, shuffle=False):\n t_x = torch.tensor(batch_X, dtype=torch.float, device=torch_device)\n # Predict logits\n pred_logits = pet_net(t_x).detach().cpu().numpy()\n pred_cls = np.argmax(pred_logits, axis=1)\n val_err += (batch_y != pred_cls).sum()\n # Divide by the number of samples to get the error rate\n val_err /= float(len(val_ndx))\n \n t2 = time.time()\n \n # REPORT\n print('Epoch {} took {:.2f}s: train loss={:.6f}; val err={:.2%}'.format(\n epoch_i, t2 - t1, train_loss, val_err))",
"Apply to some example images from the test set",
"# Number of samples to try\nN_TEST = 15\n\n# Shuffle test sample indcies\nrng = np.random.RandomState(12345)\ntest_ndx = rng.permutation(len(tests_image_paths))\n\n# Select first `N_TEST` samples\ntest_ndx = test_ndx[:N_TEST]\n\nfor test_i in test_ndx:\n # Load the image\n X = load_image(tests_image_paths[test_i])\n \n with torch.no_grad():\n t_x = torch.tensor(X[None, ...], dtype=torch.float, device=torch_device)\n\n # Predict class probabilities\n pred_logits = pet_net(t_x)\n pred_prob = F.softmax(pred_logits, dim=1).detach().cpu().numpy()\n \n # Get predicted class\n pred_y = np.argmax(pred_prob, axis=1)\n\n # Get class name\n pred_cls = 'dog' if pred_y[0] == 1 else 'cat'\n\n # Report\n print('Sample {}: predicted as {}, confidence {:.2%}'.format(test_i, pred_cls, pred_prob[0,pred_y[0]]))\n # Show the image\n plt.figure()\n plt.imshow(net_to_img(X))\n plt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
keras-team/keras-io
|
examples/keras_recipes/ipynb/memory_efficient_embeddings.ipynb
|
apache-2.0
|
[
"Memory-efficient embeddings for recommendation systems\nAuthor: Khalid Salama<br>\nDate created: 2021/02/15<br>\nLast modified: 2021/02/15<br>\nDescription: Using compositional & mixed-dimension embeddings for memory-efficient recommendation models.\nIntroduction\nThis example demonstrates two techniques for building memory-efficient recommendation models\nby reducing the size of the embedding tables, without sacrificing model effectiveness:\n\nQuotient-remainder trick, by Hao-Jun Michael Shi et al.,\nwhich reduces the number of embedding vectors to store, yet produces unique embedding\nvector for each item without explicit definition.\nMixed Dimension embeddings, by Antonio Ginart et al.,\nwhich stores embedding vectors with mixed dimensions, where less popular items have\nreduced dimension embeddings.\n\nWe use the 1M version of the Movielens dataset.\nThe dataset includes around 1 million ratings from 6,000 users on 4,000 movies.\nSetup",
"import os\nimport math\nfrom zipfile import ZipFile\nfrom urllib.request import urlretrieve\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nfrom tensorflow.keras.layers import StringLookup\nimport matplotlib.pyplot as plt",
"Prepare the data\nDownload and process data",
"urlretrieve(\"http://files.grouplens.org/datasets/movielens/ml-1m.zip\", \"movielens.zip\")\nZipFile(\"movielens.zip\", \"r\").extractall()\n\nratings_data = pd.read_csv(\n \"ml-1m/ratings.dat\",\n sep=\"::\",\n names=[\"user_id\", \"movie_id\", \"rating\", \"unix_timestamp\"],\n)\n\nratings_data[\"movie_id\"] = ratings_data[\"movie_id\"].apply(lambda x: f\"movie_{x}\")\nratings_data[\"user_id\"] = ratings_data[\"user_id\"].apply(lambda x: f\"user_{x}\")\nratings_data[\"rating\"] = ratings_data[\"rating\"].apply(lambda x: float(x))\ndel ratings_data[\"unix_timestamp\"]\n\nprint(f\"Number of users: {len(ratings_data.user_id.unique())}\")\nprint(f\"Number of movies: {len(ratings_data.movie_id.unique())}\")\nprint(f\"Number of ratings: {len(ratings_data.index)}\")",
"Create train and eval data splits",
"random_selection = np.random.rand(len(ratings_data.index)) <= 0.85\ntrain_data = ratings_data[random_selection]\neval_data = ratings_data[~random_selection]\n\ntrain_data.to_csv(\"train_data.csv\", index=False, sep=\"|\", header=False)\neval_data.to_csv(\"eval_data.csv\", index=False, sep=\"|\", header=False)\nprint(f\"Train data split: {len(train_data.index)}\")\nprint(f\"Eval data split: {len(eval_data.index)}\")\nprint(\"Train and eval data files are saved.\")",
"Define dataset metadata and hyperparameters",
"csv_header = list(ratings_data.columns)\nuser_vocabulary = list(ratings_data.user_id.unique())\nmovie_vocabulary = list(ratings_data.movie_id.unique())\ntarget_feature_name = \"rating\"\nlearning_rate = 0.001\nbatch_size = 128\nnum_epochs = 3\nbase_embedding_dim = 64",
"Train and evaluate the model",
"\ndef get_dataset_from_csv(csv_file_path, batch_size=128, shuffle=True):\n return tf.data.experimental.make_csv_dataset(\n csv_file_path,\n batch_size=batch_size,\n column_names=csv_header,\n label_name=target_feature_name,\n num_epochs=1,\n header=False,\n field_delim=\"|\",\n shuffle=shuffle,\n )\n\n\ndef run_experiment(model):\n # Compile the model.\n model.compile(\n optimizer=keras.optimizers.Adam(learning_rate),\n loss=tf.keras.losses.MeanSquaredError(),\n metrics=[keras.metrics.MeanAbsoluteError(name=\"mae\")],\n )\n # Read the training data.\n train_dataset = get_dataset_from_csv(\"train_data.csv\", batch_size)\n # Read the test data.\n eval_dataset = get_dataset_from_csv(\"eval_data.csv\", batch_size, shuffle=False)\n # Fit the model with the training data.\n history = model.fit(train_dataset, epochs=num_epochs, validation_data=eval_dataset,)\n return history\n",
"Experiment 1: baseline collaborative filtering model\nImplement embedding encoder",
"\ndef embedding_encoder(vocabulary, embedding_dim, num_oov_indices=0, name=None):\n return keras.Sequential(\n [\n StringLookup(\n vocabulary=vocabulary, mask_token=None, num_oov_indices=num_oov_indices\n ),\n layers.Embedding(\n input_dim=len(vocabulary) + num_oov_indices, output_dim=embedding_dim\n ),\n ],\n name=f\"{name}_embedding\" if name else None,\n )\n",
"Implement the baseline model",
"\ndef create_baseline_model():\n # Receive the user as an input.\n user_input = layers.Input(name=\"user_id\", shape=(), dtype=tf.string)\n # Get user embedding.\n user_embedding = embedding_encoder(\n vocabulary=user_vocabulary, embedding_dim=base_embedding_dim, name=\"user\"\n )(user_input)\n\n # Receive the movie as an input.\n movie_input = layers.Input(name=\"movie_id\", shape=(), dtype=tf.string)\n # Get embedding.\n movie_embedding = embedding_encoder(\n vocabulary=movie_vocabulary, embedding_dim=base_embedding_dim, name=\"movie\"\n )(movie_input)\n\n # Compute dot product similarity between user and movie embeddings.\n logits = layers.Dot(axes=1, name=\"dot_similarity\")(\n [user_embedding, movie_embedding]\n )\n # Convert to rating scale.\n prediction = keras.activations.sigmoid(logits) * 5\n # Create the model.\n model = keras.Model(\n inputs=[user_input, movie_input], outputs=prediction, name=\"baseline_model\"\n )\n return model\n\n\nbaseline_model = create_baseline_model()\nbaseline_model.summary()",
"Notice that the number of trainable parameters is 623,744",
"history = run_experiment(baseline_model)\n\nplt.plot(history.history[\"loss\"])\nplt.plot(history.history[\"val_loss\"])\nplt.title(\"model loss\")\nplt.ylabel(\"loss\")\nplt.xlabel(\"epoch\")\nplt.legend([\"train\", \"eval\"], loc=\"upper left\")\nplt.show()",
"Experiment 2: memory-efficient model\nImplement Quotient-Remainder embedding as a layer\nThe Quotient-Remainder technique works as follows. For a vocabulary of size vocabulary_size and an embedding size\nembedding_dim, instead of creating a vocabulary_size X embedding_dim embedding table,\nwe create two num_buckets X embedding_dim embedding tables, where num_buckets\nis much smaller than vocabulary_size.\nAn embedding for a given item index is generated via the following steps:\n\nCompute the quotient_index as index // num_buckets.\nCompute the remainder_index as index % num_buckets.\nLookup quotient_embedding from the first embedding table using quotient_index.\nLookup remainder_embedding from the second embedding table using remainder_index.\nReturn quotient_embedding * remainder_embedding.\n\nThis technique not only reduces the number of embedding vectors that need to be stored and trained,\nbut also generates a unique embedding vector of size embedding_dim for each item.\nNote that q_embedding and r_embedding can be combined using other operations,\nlike Add and Concatenate.",
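"The index arithmetic above can be sketched with plain NumPy. The sizes below are hypothetical, chosen only for illustration; the trainable Keras layer follows.\n\n```python\nimport numpy as np\n\n# Hypothetical sizes for illustration only.\nvocabulary_size = 1000\nnum_buckets = 50          # much smaller than vocabulary_size\nembedding_dim = 4\n\nrng = np.random.default_rng(0)\nq_table = rng.normal(size=(num_buckets, embedding_dim))  # quotient table\nr_table = rng.normal(size=(num_buckets, embedding_dim))  # remainder table\n\n\ndef qr_embedding(index):\n    quotient_index = index // num_buckets   # e.g. 775 // 50 == 15\n    remainder_index = index % num_buckets   # e.g. 775 % 50 == 25\n    # Element-wise product combines the two lookups into one vector.\n    return q_table[quotient_index] * r_table[remainder_index]\n\n\n# Two 50 x 4 tables (400 parameters) stand in for a 1000 x 4 table (4000).\nvec = qr_embedding(775)\n```\n\nEvery index below num_buckets ** 2 maps to a distinct (quotient, remainder) pair, which is why each item still receives a unique embedding vector.",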
"\nclass QREmbedding(keras.layers.Layer):\n def __init__(self, vocabulary, embedding_dim, num_buckets, name=None):\n super(QREmbedding, self).__init__(name=name)\n self.num_buckets = num_buckets\n\n self.index_lookup = StringLookup(\n vocabulary=vocabulary, mask_token=None, num_oov_indices=0\n )\n self.q_embeddings = layers.Embedding(num_buckets, embedding_dim)\n self.r_embeddings = layers.Embedding(num_buckets, embedding_dim)\n\n def call(self, inputs):\n # Get the item index.\n embedding_index = self.index_lookup(inputs)\n # Get the quotient index.\n quotient_index = tf.math.floordiv(embedding_index, self.num_buckets)\n # Get the remainder index.\n remainder_index = tf.math.floormod(embedding_index, self.num_buckets)\n # Lookup the quotient_embedding using the quotient_index.\n quotient_embedding = self.q_embeddings(quotient_index)\n # Lookup the remainder_embedding using the remainder_index.\n remainder_embedding = self.r_embeddings(remainder_index)\n # Use multiplication as a combiner operation.\n return quotient_embedding * remainder_embedding\n",
"Implement Mixed Dimension embedding as a layer\nIn the mixed dimension embedding technique, we train embedding vectors with full dimensions\nfor the frequently queried items, while training embedding vectors with reduced dimensions\nfor the less frequent items, plus a projection weights matrix that brings the low-dimensional\nembeddings up to the full dimension.\nMore precisely, we define blocks of items of similar frequencies. For each block,\na block_vocab_size X block_embedding_dim embedding table and a block_embedding_dim X full_embedding_dim\nprojection weights matrix are created. Note that, if block_embedding_dim equals full_embedding_dim,\nthe projection weights matrix becomes an identity matrix. Embeddings for a given batch of item\nindices are generated via the following steps:\n\nFor each block, lookup the block_embedding_dim embedding vectors using the indices, and\nproject them to the full_embedding_dim.\nIf an item index does not belong to a given block, an out-of-vocabulary embedding is returned.\nEach block will return a batch_size X full_embedding_dim tensor.\nA mask is applied to the embeddings returned from each block in order to convert the\nout-of-vocabulary embeddings to vectors of zeros. That is, for each item in the batch,\na single non-zero embedding vector is returned from all the block embeddings.\nEmbeddings retrieved from the blocks are combined using sum to produce the final\nbatch_size X full_embedding_dim tensor.",
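"As a toy illustration of the block/mask/projection steps (the block contents and dimensions below are made up, not the tutorial's movie blocks):\n\n```python\nimport numpy as np\n\nfull_dim = 8\n# Hypothetical blocks: frequent items keep the full dimension,\n# rare items get a reduced one plus a projection back to full_dim.\nblocks = [\n    {\"vocab\": [\"a\", \"b\"], \"dim\": 8},\n    {\"vocab\": [\"c\", \"d\", \"e\"], \"dim\": 2},\n]\n\nrng = np.random.default_rng(0)\nfor block in blocks:\n    block[\"table\"] = rng.normal(size=(len(block[\"vocab\"]), block[\"dim\"]))\n    block[\"proj\"] = (\n        np.eye(full_dim)                                # identity when dims match\n        if block[\"dim\"] == full_dim\n        else rng.normal(size=(block[\"dim\"], full_dim))  # learned in practice\n    )\n\n\ndef md_embedding(item):\n    out = np.zeros(full_dim)\n    for block in blocks:\n        # The mask step: only the block that owns the item contributes a\n        # non-zero vector, so the final sum keeps exactly that vector.\n        if item in block[\"vocab\"]:\n            row = block[\"table\"][block[\"vocab\"].index(item)]\n            out += row @ block[\"proj\"]\n    return out\n```\n\nHere the `if item in block[\"vocab\"]` check plays the role of the out-of-vocabulary mask; the Keras layer expresses the same logic with tensor operations over whole batches.",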
"\nclass MDEmbedding(keras.layers.Layer):\n def __init__(\n self, blocks_vocabulary, blocks_embedding_dims, base_embedding_dim, name=None\n ):\n super(MDEmbedding, self).__init__(name=name)\n self.num_blocks = len(blocks_vocabulary)\n\n # Create vocab to block lookup.\n keys = []\n values = []\n for block_idx, block_vocab in enumerate(blocks_vocabulary):\n keys.extend(block_vocab)\n values.extend([block_idx] * len(block_vocab))\n self.vocab_to_block = tf.lookup.StaticHashTable(\n tf.lookup.KeyValueTensorInitializer(keys, values), default_value=-1\n )\n\n self.block_embedding_encoders = []\n self.block_embedding_projectors = []\n\n # Create block embedding encoders and projectors.\n for idx in range(self.num_blocks):\n vocabulary = blocks_vocabulary[idx]\n embedding_dim = blocks_embedding_dims[idx]\n block_embedding_encoder = embedding_encoder(\n vocabulary, embedding_dim, num_oov_indices=1\n )\n self.block_embedding_encoders.append(block_embedding_encoder)\n if embedding_dim == base_embedding_dim:\n self.block_embedding_projectors.append(layers.Lambda(lambda x: x))\n else:\n self.block_embedding_projectors.append(\n layers.Dense(units=base_embedding_dim)\n )\n\n def call(self, inputs):\n # Get block index for each input item.\n block_indices = self.vocab_to_block.lookup(inputs)\n # Initialize output embeddings to zeros.\n embeddings = tf.zeros(shape=(tf.shape(inputs)[0], base_embedding_dim))\n # Generate embeddings from blocks.\n for idx in range(self.num_blocks):\n # Lookup embeddings from the current block.\n block_embeddings = self.block_embedding_encoders[idx](inputs)\n # Project embeddings to base_embedding_dim.\n block_embeddings = self.block_embedding_projectors[idx](block_embeddings)\n # Create a mask to filter out embeddings of items that do not belong to the current block.\n mask = tf.expand_dims(tf.cast(block_indices == idx, tf.dtypes.float32), 1)\n # Set the embeddings for the items not belonging to the current block to zeros.\n block_embeddings = block_embeddings * mask\n # Add the block embeddings to the final embeddings.\n embeddings += block_embeddings\n\n return embeddings\n",
"Implement the memory-efficient model\nIn this experiment, we are going to use the Quotient-Remainder technique to reduce the\nsize of the user embeddings, and the Mixed Dimension technique to reduce the size of the\nmovie embeddings.\nWhile the paper uses an alpha-power rule to determine\nthe embedding dimensions of each block, we simply set the number of blocks and the\nembedding dimensions of each block based on the histogram visualization of movie popularity.",
"movie_frequencies = ratings_data[\"movie_id\"].value_counts()\nmovie_frequencies.hist(bins=10)",
"You can see that we can group the movies into three blocks, and assign them 64, 32, and 16\nembedding dimensions, respectively. Feel free to experiment with different numbers of blocks\nand dimensions.",
"sorted_movie_vocabulary = list(movie_frequencies.keys())\n\nmovie_blocks_vocabulary = [\n sorted_movie_vocabulary[:400], # high popularity movies block\n sorted_movie_vocabulary[400:1700], # normal popularity movies block\n sorted_movie_vocabulary[1700:], # low popularity movies block\n]\n\nmovie_blocks_embedding_dims = [64, 32, 16]\n\nuser_embedding_num_buckets = len(user_vocabulary) // 50\n\n\ndef create_memory_efficient_model():\n # Take the user as an input.\n user_input = layers.Input(name=\"user_id\", shape=(), dtype=tf.string)\n # Get user embedding.\n user_embedding = QREmbedding(\n vocabulary=user_vocabulary,\n embedding_dim=base_embedding_dim,\n num_buckets=user_embedding_num_buckets,\n name=\"user_embedding\",\n )(user_input)\n\n # Take the movie as an input.\n movie_input = layers.Input(name=\"movie_id\", shape=(), dtype=tf.string)\n # Get embedding.\n movie_embedding = MDEmbedding(\n blocks_vocabulary=movie_blocks_vocabulary,\n blocks_embedding_dims=movie_blocks_embedding_dims,\n base_embedding_dim=base_embedding_dim,\n name=\"movie_embedding\",\n )(movie_input)\n\n # Compute dot product similarity between user and movie embeddings.\n logits = layers.Dot(axes=1, name=\"dot_similarity\")(\n [user_embedding, movie_embedding]\n )\n # Convert to rating scale.\n prediction = keras.activations.sigmoid(logits) * 5\n # Create the model.\n model = keras.Model(\n inputs=[user_input, movie_input], outputs=prediction, name=\"memory_efficient_model\"\n )\n return model\n\n\nmemory_efficient_model = create_memory_efficient_model()\nmemory_efficient_model.summary()",
"Notice that the number of trainable parameters is 117,968, which is more than 5x fewer than\nthe number of parameters in the baseline model.",
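"The reported counts can be checked directly from the two model summaries:\n\n```python\nbaseline_params = 623_744    # from the baseline model summary\nefficient_params = 117_968   # from the memory-efficient model summary\n\n# Roughly a 5.3x reduction in trainable parameters.\nratio = baseline_params / efficient_params\nprint(f\"{ratio:.2f}x fewer parameters\")\n```",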
"history = run_experiment(memory_efficient_model)\n\nplt.plot(history.history[\"loss\"])\nplt.plot(history.history[\"val_loss\"])\nplt.title(\"model loss\")\nplt.ylabel(\"loss\")\nplt.xlabel(\"epoch\")\nplt.legend([\"train\", \"eval\"], loc=\"upper left\")\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/cccr-iitm/cmip6/models/sandbox-3/toplevel.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: CCCR-IITM\nSource ID: SANDBOX-3\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:48\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-3', 'toplevel')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Flux Correction\n3. Key Properties --> Genealogy\n4. Key Properties --> Software Properties\n5. Key Properties --> Coupling\n6. Key Properties --> Tuning Applied\n7. Key Properties --> Conservation --> Heat\n8. Key Properties --> Conservation --> Fresh Water\n9. Key Properties --> Conservation --> Salt\n10. Key Properties --> Conservation --> Momentum\n11. Radiative Forcings\n12. Radiative Forcings --> Greenhouse Gases --> CO2\n13. Radiative Forcings --> Greenhouse Gases --> CH4\n14. Radiative Forcings --> Greenhouse Gases --> N2O\n15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\n16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\n17. Radiative Forcings --> Greenhouse Gases --> CFC\n18. Radiative Forcings --> Aerosols --> SO4\n19. Radiative Forcings --> Aerosols --> Black Carbon\n20. Radiative Forcings --> Aerosols --> Organic Carbon\n21. Radiative Forcings --> Aerosols --> Nitrate\n22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\n23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\n24. Radiative Forcings --> Aerosols --> Dust\n25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\n26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\n27. Radiative Forcings --> Aerosols --> Sea Salt\n28. Radiative Forcings --> Other --> Land Use\n29. Radiative Forcings --> Other --> Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop level overview of coupled model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of coupled model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Genealogy\nGenealogy and history of the model\n3.1. Year Released\nIs Required: TRUE Type: STRING Cardinality: 1.1\nYear the model was released",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. CMIP3 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP3 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. CMIP5 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP5 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Previous Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPreviously known as",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.4. Components Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.5. Coupler\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nOverarching coupling framework for model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5. Key Properties --> Coupling\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of coupling in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Atmosphere Double Flux\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nWhere are the air-sea fluxes calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.4. Atmosphere Relative Winds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Key Properties --> Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.5. Energy Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.6. Fresh Water Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Conservation --> Heat\nGlobal heat conservation properties of the model\n7.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.6. Land Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation --> Fresh Water\nGlobal fresh water conservation properties of the model\n8.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh_water is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh_water is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.6. Runoff\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how runoff is distributed and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.7. Iceberg Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.8. Endoreic Basins\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.9. Snow Accumulation\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Key Properties --> Conservation --> Salt\nGlobal salt conservation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Key Properties --> Conservation --> Momentum\nGlobal momentum conservation properties of the model\n10.1. Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how momentum is conserved in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Radiative Forcings --> Greenhouse Gases --> CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Radiative Forcings --> Greenhouse Gases --> CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14. Radiative Forcings --> Greenhouse Gases --> N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\nTropospheric ozone forcing\n15.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Radiative Forcings --> Greenhouse Gases --> CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Equivalence Concentration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDetails of any equivalence concentrations used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Radiative Forcings --> Aerosols --> SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Radiative Forcings --> Aerosols --> Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Radiative Forcings --> Aerosols --> Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Radiative Forcings --> Aerosols --> Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.3. RFaci From Sulfate Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"24. Radiative Forcings --> Aerosols --> Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Radiative Forcings --> Aerosols --> Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Radiative Forcings --> Other --> Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28.2. Crop Change Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nLand use change represented via crop change only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Radiative Forcings --> Other --> Solar\nSolar forcing\n29.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow solar forcing is provided",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Alexoner/skynet
|
notebooks/ConvolutionalNetworks.ipynb
|
mit
|
[
"Convolutional Networks\nSo far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.\nFirst you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.",
"# As usual, a bit of setup\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom skynet.neural_network.classifiers.cnn import *\nfrom skynet.utils.data_utils import get_CIFAR10_data\nfrom skynet.utils.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient\nfrom skynet.neural_network.layers import *\nfrom skynet.neural_network.fast_layers import *\nfrom skynet.solvers.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.items():\n print('%s: ' % k, v.shape)",
"Convolution: Naive forward pass\nThe core of a convolutional network is the convolution operation. In the file neural_network/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive. \nYou don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.\nYou can test your implementation by running the following:",
"x_shape = (2, 3, 4, 4)\nw_shape = (3, 3, 4, 4)\nx = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)\nw = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)\nb = np.linspace(-0.1, 0.2, num=3)\n\nconv_param = {'stride': 2, 'pad': 1}\nout, _ = conv_forward_naive(x, w, b, conv_param)\ncorrect_out = np.array([[[[[-0.08759809, -0.10987781],\n [-0.18387192, -0.2109216 ]],\n [[ 0.21027089, 0.21661097],\n [ 0.22847626, 0.23004637]],\n [[ 0.50813986, 0.54309974],\n [ 0.64082444, 0.67101435]]],\n [[[-0.98053589, -1.03143541],\n [-1.19128892, -1.24695841]],\n [[ 0.69108355, 0.66880383],\n [ 0.59480972, 0.56776003]],\n [[ 2.36270298, 2.36904306],\n [ 2.38090835, 2.38247847]]]]])\n\n# Compare your output to ours; difference should be around 1e-8\nprint('Testing conv_forward_naive')\nprint('difference: ', rel_error(out, correct_out))",
"Aside: Image processing via convolutions\nAs fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.",
"from scipy.misc import imread, imresize\n\nkitten, puppy = imread('../skynet/datasets/kitten.jpg'), imread('../skynet/datasets/puppy.jpg')\n# kitten is wide, and puppy is already square\nd = kitten.shape[1] - kitten.shape[0]\nkitten_cropped = kitten[:, d//2:-d//2, :]\n\nimg_size = 200 # Make this smaller if it runs too slow\nx = np.zeros((2, 3, img_size, img_size))\nx[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))\nx[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))\n\n# Set up a convolutional weights holding 2 filters, each 3x3\nw = np.zeros((2, 3, 3, 3))\n\n# The first filter converts the image to grayscale.\n# Set up the red, green, and blue channels of the filter.\nw[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]\nw[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]\nw[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]\n\n# Second filter detects horizontal edges in the blue channel.\nw[1, 2, :, :] = [[1, 2, 1], \n [0, 0, 0], \n [-1, -2, -1]]\n\n# Vector of biases. 
We don't need any bias for the grayscale\n# filter, but for the edge detection filter we want to add 128\n# to each output so that nothing is negative.\nb = np.array([0, 128])\n\n# Compute the result of convolving each input in x with each filter in w,\n# offsetting by b, and storing the results in out.\nout, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})\n\ndef imshow_noax(img, normalize=True):\n \"\"\" Tiny helper to show images as uint8 and remove axis labels \"\"\"\n if normalize:\n img_max, img_min = np.max(img), np.min(img)\n img = 255.0 * (img - img_min) / (img_max - img_min)\n plt.imshow(img.astype('uint8'))\n plt.gca().axis('off')\n\n# Show the original images and the results of the conv operation\nplt.subplot(2, 3, 1)\nimshow_noax(puppy, normalize=False)\nplt.title('Original image')\nplt.subplot(2, 3, 2)\nimshow_noax(out[0, 0])\nplt.title('Grayscale')\nplt.subplot(2, 3, 3)\nimshow_noax(out[0, 1])\nplt.title('Edges')\nplt.subplot(2, 3, 4)\nimshow_noax(kitten_cropped, normalize=False)\nplt.subplot(2, 3, 5)\nimshow_noax(out[1, 0])\nplt.subplot(2, 3, 6)\nimshow_noax(out[1, 1])\nplt.show()",
"Convolution: Naive backward pass\nImplement the backward pass for the convolution operation in the function conv_backward_naive in the file neural_network/layers.py. Again, you don't need to worry too much about computational efficiency.\nWhen you are done, run the following to check your backward pass with a numeric gradient check.",
"x = np.random.randn(4, 3, 5, 5)\nw = np.random.randn(2, 3, 3, 3)\nb = np.random.randn(2,)\ndout = np.random.randn(4, 2, 5, 5)\nconv_param = {'stride': 1, 'pad': 1}\n\ndx_num = eval_numerical_gradient_array(\n lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)\ndw_num = eval_numerical_gradient_array(\n lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)\ndb_num = eval_numerical_gradient_array(\n lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)\n\nout, cache = conv_forward_naive(x, w, b, conv_param)\ndx, dw, db = conv_backward_naive(dout, cache)\n\n# Your errors should be around 1e-9'\nprint('Testing conv_backward_naive function')\nprint('dx error: ', rel_error(dx, dx_num))\nprint('dw error: ', rel_error(dw, dw_num))\nprint('db error: ', rel_error(db, db_num))",
"Max pooling: Naive forward\nImplement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file neural_network/layers.py. Again, don't worry too much about computational efficiency.\nCheck your implementation by running the following:",
"x_shape = (2, 3, 4, 4)\nx = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)\npool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}\n\nout, _ = max_pool_forward_naive(x, pool_param)\n\ncorrect_out = np.array([[[[-0.26315789, -0.24842105],\n [-0.20421053, -0.18947368]],\n [[-0.14526316, -0.13052632],\n [-0.08631579, -0.07157895]],\n [[-0.02736842, -0.01263158],\n [ 0.03157895, 0.04631579]]],\n [[[ 0.09052632, 0.10526316],\n [ 0.14947368, 0.16421053]],\n [[ 0.20842105, 0.22315789],\n [ 0.26736842, 0.28210526]],\n [[ 0.32631579, 0.34105263],\n [ 0.38526316, 0.4 ]]]])\n\n# Compare your output with ours. Difference should be around 1e-8.\nprint('Testing max_pool_forward_naive function:')\nprint('difference: ', rel_error(out, correct_out))",
"Max pooling: Naive backward\nImplement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.\nCheck your implementation with numeric gradient checking by running the following:",
"x = np.random.randn(3, 2, 8, 8)\ndout = np.random.randn(3, 2, 4, 4)\npool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n\ndx_num = eval_numerical_gradient_array(\n lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)\n\nout, cache = max_pool_forward_naive(x, pool_param)\ndx = max_pool_backward_naive(dout, cache)\n\n# Your error should be around 1e-12\nprint('Testing max_pool_backward_naive function:')\nprint('dx error: ', rel_error(dx, dx_num))",
"Fast layers\nMaking convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.\nThe fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:\nbash\npython setup.py build_ext --inplace\nThe API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass recieves upstream derivatives and the cache object and produces gradients with respect to the data and weights.\nNOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.\nYou can compare the performance of the naive and fast versions of these layers by running the following:",
"from skynet.neural_network.fast_layers import conv_forward_fast, conv_backward_fast\nfrom time import time\n\nx = np.random.randn(100, 3, 31, 31)\nw = np.random.randn(25, 3, 3, 3)\nb = np.random.randn(25,)\ndout = np.random.randn(100, 25, 16, 16)\nconv_param = {'stride': 2, 'pad': 1}\n\nt0 = time()\nout_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)\nt1 = time()\nout_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)\nt2 = time()\n\nprint('Testing conv_forward_fast:')\nprint('Naive: %fs' % (t1 - t0))\nprint('Fast: %fs' % (t2 - t1))\nprint('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('Difference: ', rel_error(out_naive, out_fast))\n\nt0 = time()\ndx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)\nt1 = time()\ndx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)\nt2 = time()\n\nprint('\\nTesting conv_backward_fast:')\nprint('Naive: %fs' % (t1 - t0))\nprint('Fast: %fs' % (t2 - t1))\nprint('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('dx difference: ', rel_error(dx_naive, dx_fast))\nprint('dw difference: ', rel_error(dw_naive, dw_fast))\nprint('db difference: ', rel_error(db_naive, db_fast))\n\nfrom skynet.neural_network.fast_layers import max_pool_forward_fast, max_pool_backward_fast\n\nx = np.random.randn(100, 3, 32, 32)\ndout = np.random.randn(100, 3, 16, 16)\npool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n\nt0 = time()\nout_naive, cache_naive = max_pool_forward_naive(x, pool_param)\nt1 = time()\nout_fast, cache_fast = max_pool_forward_fast(x, pool_param)\nt2 = time()\n\nprint('Testing pool_forward_fast:')\nprint('Naive: %fs' % (t1 - t0))\nprint('fast: %fs' % (t2 - t1))\nprint('speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('difference: ', rel_error(out_naive, out_fast))\n\nt0 = time()\ndx_naive = max_pool_backward_naive(dout, cache_naive)\nt1 = time()\ndx_fast = max_pool_backward_fast(dout, cache_fast)\nt2 = time()\n\nprint('\\nTesting pool_backward_fast:')\nprint('Naive: %fs' % (t1 - 
t0))\nprint('Fast: %fs' % (t2 - t1))\nprint('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('dx difference: ', rel_error(dx_naive, dx_fast))",
"Convolutional \"sandwich\" layers\nPreviously we introduced the concept of \"sandwich\" layers that combine multiple operations into commonly used patterns. In the file neural_network/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.",
"from skynet.neural_network.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward\n\nx = np.random.randn(2, 3, 16, 16)\nw = np.random.randn(3, 3, 3, 3)\nb = np.random.randn(3,)\ndout = np.random.randn(2, 3, 8, 8)\nconv_param = {'stride': 1, 'pad': 1}\npool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n\nout, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)\ndx, dw, db = conv_relu_pool_backward(dout, cache)\n\ndx_num = eval_numerical_gradient_array(\n lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)\ndw_num = eval_numerical_gradient_array(\n lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)\ndb_num = eval_numerical_gradient_array(\n lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)\n\nprint('Testing conv_relu_pool')\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dw error: ', rel_error(dw_num, dw))\nprint('db error: ', rel_error(db_num, db))\n\nfrom skynet.neural_network.layer_utils import conv_relu_forward, conv_relu_backward\n\nx = np.random.randn(2, 3, 8, 8)\nw = np.random.randn(3, 3, 3, 3)\nb = np.random.randn(3,)\ndout = np.random.randn(2, 3, 8, 8)\nconv_param = {'stride': 1, 'pad': 1}\n\nout, cache = conv_relu_forward(x, w, b, conv_param)\ndx, dw, db = conv_relu_backward(dout, cache)\n\ndx_num = eval_numerical_gradient_array(\n lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)\ndw_num = eval_numerical_gradient_array(\n lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)\ndb_num = eval_numerical_gradient_array(\n lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)\n\nprint('Testing conv_relu:')\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dw error: ', rel_error(dw_num, dw))\nprint('db error: ', rel_error(db_num, db))",
"Three-layer ConvNet\nNow that you have implemented all the necessary layers, we can put them together into a simple convolutional network.\nOpen the file neural_network/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:\nSanity check loss\nAfter you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.",
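The expected value of that initial loss can be checked with a one-line calculation (plain numpy, independent of the model code):

```python
import numpy as np

# With random weights and no regularization, a softmax classifier assigns
# roughly uniform probability 1/C to each of the C classes, so the expected
# initial loss is -log(1/C) = log(C).
C = 10
expected_initial_loss = np.log(C)
print(expected_initial_loss)  # ~2.3026, matching the initial loss printed below
```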
"model = ThreeLayerConvNet()\n\nN = 50\nX = np.random.randn(N, 3, 32, 32)\ny = np.random.randint(10, size=N)\n\nloss, grads = model.loss(X, y)\nprint('Initial loss (no regularization): ', loss)\n\nmodel.reg = 0.5\nloss, grads = model.loss(X, y)\nprint('Initial loss (with regularization): ', loss)\n# Initial loss (no regularization): 2.30258557269\n# Initial loss (with regularization): 2.50903657226",
"Gradient check\nAfter the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer.",
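For intuition, here is a minimal centered-difference sketch of what a numeric gradient checker like eval_numerical_gradient computes (a simplified stand-in, not the assignment's actual implementation):

```python
import numpy as np

def numerical_gradient(f, x, h=1e-6):
    """Centered-difference gradient of a scalar function f at array x."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h
        fxph = f(x)                      # f(x + h)
        x[ix] = old - h
        fxmh = f(x)                      # f(x - h)
        x[ix] = old                      # restore original value
        grad[ix] = (fxph - fxmh) / (2 * h)
        it.iternext()
    return grad

# Check against the analytic gradient of f(x) = sum(x**2), which is 2x
x = np.random.randn(3, 4)
g = numerical_gradient(lambda x: np.sum(x ** 2), x)
print(np.max(np.abs(g - 2 * x)))  # should be tiny
```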
"num_inputs = 2\ninput_dim = (3, 16, 16)\nreg = 0.0\nnum_classes = 10\nX = np.random.randn(num_inputs, *input_dim)\ny = np.random.randint(num_classes, size=num_inputs)\n\nmodel = ThreeLayerConvNet(num_filters=3, filter_size=3,\n                          input_dim=input_dim, hidden_dim=7,\n                          dtype=np.float64)\nloss, grads = model.loss(X, y)\nfor param_name in sorted(grads):\n    f = lambda _: model.loss(X, y)[0]\n    param_grad_num = eval_numerical_gradient(\n        f, model.params[param_name], verbose=False, h=1e-6)\n    e = rel_error(param_grad_num, grads[param_name])\n    print('%s max relative error: %e' % (param_name, e))",
"Overfit small data\nA nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.",
"num_train = 100\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nmodel = ThreeLayerConvNet(weight_scale=1e-2)\n\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-4,\n },\n verbose=True, print_every=1)\nsolver.train()",
"Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:",
"plt.subplot(2, 1, 1)\nplt.plot(solver.loss_history, 'o')\nplt.xlabel('iteration')\nplt.ylabel('loss')\n\nplt.subplot(2, 1, 2)\nplt.plot(solver.train_acc_history, '-o')\nplt.plot(solver.val_acc_history, '-o')\nplt.legend(['train', 'val'], loc='upper left')\nplt.xlabel('epoch')\nplt.ylabel('accuracy')\nplt.show()",
"Train the net\nBy training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:\nNote: the suggested learning_rate of 1e-3 didn't work well for me; I had to lower it to 1e-4 before the network would overfit the small dataset.",
"model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)\n\nsolver = Solver(model, data,\n num_epochs=1, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-4,\n },\n verbose=True, print_every=100)\nsolver.train()",
"Visualize Filters\nYou can visualize the first-layer convolutional filters from the trained network by running the following:",
"from skynet.utils.vis_utils import visualize_grid\n\ngrid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))\nplt.imshow(grid.astype('uint8'))\nplt.axis('off')\nplt.gcf().set_size_inches(5, 5)\nplt.show()",
"Spatial Batch Normalization\nWe already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called \"spatial batch normalization.\"\nNormally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.\nIf the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.\nSpatial batch normalization: forward\nIn the file neural_network/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:",
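One common way to implement this (a sketch of the idea only, computing the per-channel statistics described above; the assignment's version should instead reuse your batchnorm_forward and track running averages) is to fold the spatial dimensions into the batch dimension:

```python
import numpy as np

def spatial_bn_forward_sketch(x, gamma, beta, eps=1e-5):
    """Training-time spatial batchnorm: normalize each channel over (N, H, W)."""
    N, C, H, W = x.shape
    # Move the channel axis last and flatten to (N*H*W, C), so each column
    # holds all values of one feature channel across the minibatch and space
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)
    mu = x_flat.mean(axis=0)
    var = x_flat.var(axis=0)
    x_hat = (x_flat - mu) / np.sqrt(var + eps)
    out_flat = gamma * x_hat + beta
    # Restore the original (N, C, H, W) layout
    return out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)

x = 4 * np.random.randn(2, 3, 4, 5) + 10
out = spatial_bn_forward_sketch(x, np.ones(3), np.zeros(3))
print(out.mean(axis=(0, 2, 3)))  # per-channel means, close to 0
print(out.std(axis=(0, 2, 3)))   # per-channel stds, close to 1
```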
"# Check the training-time forward pass by checking means and variances\n# of features both before and after spatial batch normalization\n\nN, C, H, W = 2, 3, 4, 5\nx = 4 * np.random.randn(N, C, H, W) + 10\n\nprint('Before spatial batch normalization:')\nprint(' Shape: ', x.shape)\nprint(' Means: ', x.mean(axis=(0, 2, 3)))\nprint(' Stds: ', x.std(axis=(0, 2, 3)))\n\n# Means should be close to zero and stds close to one\ngamma, beta = np.ones(C), np.zeros(C)\nbn_param = {'mode': 'train'}\nout, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)\nprint('After spatial batch normalization:')\nprint(' Shape: ', out.shape)\nprint(' Means: ', out.mean(axis=(0, 2, 3)))\nprint(' Stds: ', out.std(axis=(0, 2, 3)))\n\n# Means should be close to beta and stds close to gamma\ngamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])\nout, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)\nprint('After spatial batch normalization (nontrivial gamma, beta):')\nprint(' Shape: ', out.shape)\nprint(' Means: ', out.mean(axis=(0, 2, 3)))\nprint(' Stds: ', out.std(axis=(0, 2, 3)))\n\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nN, C, H, W = 10, 4, 11, 12\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(C)\nbeta = np.zeros(C)\nfor t in range(50):\n x = 2.3 * np.random.randn(N, C, H, W) + 13\n spatial_batchnorm_forward(x, gamma, beta, bn_param)\nbn_param['mode'] = 'test'\nx = 2.3 * np.random.randn(N, C, H, W) + 13\na_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After spatial batch normalization (test-time):')\nprint(' means: ', a_norm.mean(axis=(0, 2, 3)))\nprint(' stds: ', a_norm.std(axis=(0, 2, 3)))",
"Spatial batch normalization: backward\nIn the file neural_network/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:",
"N, C, H, W = 2, 3, 4, 5\nx = 5 * np.random.randn(N, C, H, W) + 12\ngamma = np.random.randn(C)\nbeta = np.random.randn(C)\ndout = np.random.randn(N, C, H, W)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]\nfb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma, dout)\ndb_num = eval_numerical_gradient_array(fb, beta, dout)\n\n_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))",
"Experiment!\nExperiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started:\nThings you should try:\n\nFilter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient\nNumber of filters: Above we used 32 filters. Do more or fewer filters do better?\nBatch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?\nNetwork architecture: The network above has two layers of trainable parameters. Can you do better with a deeper network? You can implement alternative architectures in the file cs231n/classifiers/convnet.py. Some good architectures to try include:\n[conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]\n[conv-relu-pool]xN - [affine]xM - [softmax or SVM]\n[conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]\n\n\n\nTips for training\nFor each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:\n\nIf the parameters are working well, you should see improvement within a few hundred iterations\nRemember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.\nOnce you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.\n\nGoing above and beyond\nIf you are feeling adventurous there are many other features you can implement to try and improve your performance. 
You are not required to implement any of these; however they would be good things to try for extra credit.\n\nAlternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.\nAlternative activation functions such as leaky ReLU, parametric ReLU, or MaxOut.\nModel ensembles\nData augmentation\n\nIf you do decide to implement something extra, clearly describe it in the \"Extra Credit Description\" cell below.\nWhat we expect\nAt the very least, you should be able to train a ConvNet that gets at least 65% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.\nYou should use the space below to experiment and train your network. The final cell in this notebook should contain the training, validation, and test set accuracies for your final trained network. In this notebook you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.\nHave fun and happy training!",
"# Train a really good model on CIFAR-10\n# FunConvNet\nprint(\"sanity check\")\nmodel = AntConvNet(conv_params=[\n {\n 'num_filters': 3, 'filter_size': 3,\n 'stride': 1, #'pad': (filter_size - 1) / 2\n }\n ],\n hidden_dims=[50], num_classes=10,\n dropout=0.9, use_batchnorm=True,\n weight_scale=1e-2, reg=0.0,\n dtype=np.float32, seed=None)\n\nN = 50\nX = np.random.randn(N, 3, 32, 32)\ny = np.random.randint(10, size=N)\n\nloss, grads = model.loss(X, y)\nprint('Initial loss (no regularization): ', loss)\nmodel.reg = 0.5\nloss, grads = model.loss(X, y)\nprint('Initial loss (with regularization): ', loss)\n\n# FIXED: gradient check fails here. Actually, this is a data precision problem, use float64\nnum_inputs = 2\ninput_dim = (3, 16, 16)\nreg = 0.0\nnum_classes = 10\nX = np.random.randn(num_inputs, *input_dim)\ny = np.random.randint(num_classes, size=num_inputs)\n\nconv_params=[{'num_filters': 4, 'filter_size': 3,},\n# {'num_filters': 8, 'filter_size': 3,}, \n# {'num_filters': 16, 'filter_size': 3,},\n ]\nmodel = AntConvNet(input_dim=input_dim,\n conv_params=conv_params, hidden_dims=[10, 5],\n use_batchnorm=True)\nloss, grads = model.loss(X, y)\nfor param_name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n param_grad_num = eval_numerical_gradient(\n f, model.params[param_name], verbose=False, h=1e-6)\n e = rel_error(param_grad_num, grads[param_name])\n print('%s max relative error: %e' % (\n param_name, rel_error(param_grad_num, grads[param_name])))\n\nprint (\"overfit small data\")\nmodel_param = dict(conv_params=[\n {\n 'num_filters': 32, 'filter_size': 5,\n 'stride': 1, #'pad': (filter_size - 1) / 2\n },\n {\n 'num_filters': 64, 'filter_size': 3,\n 'stride': 1, #'pad': (filter_size - 1) / 2\n },\n ],\n hidden_dims=[200], num_classes=10,\n dropout=0, use_batchnorm=True,\n weight_scale=1e-2, reg=0.0,\n dtype=np.float32, seed=None)\n\nsolver_param = dict(num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-4,\n },\n 
verbose=True, print_every=10)\n\nmodel_ex = AntConvNet(**model_param)\n\nsolver_ex = Solver(model_ex, small_data, **solver_param)\nsolver_ex.train()\n\nprint('without batch normalization or dropout')\nmodel = FunConvNet(**model_param)\n\nsolver = Solver(model, small_data, **solver_param)\nsolver.train()\n\n# (Epoch 5 / 5) train acc: 0.881000; val_acc: 0.800000; test accuracy: 0.800000\nmodel_hyperparameters = dict(conv_params=[\n {\n 'num_filters': 64, 'filter_size': 3,\n },\n {\n 'num_filters': 128, 'filter_size': 3,\n },\n {\n 'num_filters': 256, 'filter_size': 3,\n }, \n {\n 'num_filters': 512, 'filter_size': 3,\n },\n ],\n hidden_dims=[512, 256], num_classes=10,\n dropout=0.5, use_batchnorm=True,\n weight_scale=2e-2, reg=1e-3,\n dtype=np.float32, seed=None)\n# change num_epochs to at least 5 to get around 80% test accuracy\nsolver_hyperparameters = dict(num_epochs=5, batch_size=128,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n lr_decay=0.5,\n verbose=True, print_every=100)\n\nmodel = AntConvNet(**model_hyperparameters)\nsolver = Solver(model, data, **solver_hyperparameters)\nsolver.train()\n\nplt.subplot(2, 1, 1)\nplt.plot(solver.loss_history)\nplt.title('loss history')\nplt.xlabel('iteration')\nplt.ylabel('loss')\n\nplt.subplot(2, 1, 2)\nplt.plot(solver.train_acc_history, '-o')\nplt.plot(solver.val_acc_history, '-o')\nplt.legend(['train', 'val'], loc='upper left')\nplt.xlabel('epoch')\nplt.ylabel('accuracy')\nplt.show()\n\ny_test_pred = np.argmax(model.loss(data['X_test']), axis=-1)\nacc_test = np.mean(y_test_pred == data['y_test'])\nprint('test accuracy: %f' % acc_test)\n\nimport pickle\nimport copy\n\nbest_solver = None\nbest_val_acc = None\nfile_pickle = 'best_cnn.pkl'\ntry:\n with open(file_pickle, 'rb') as f:\n best_solver = pickle.load(f)\n best_acc_test = np.mean(np.argmax(\n best_solver.model.loss(data['X_test']), axis=-1)== data['y_test'])\n print('best_acc_test so far: %f' % (best_acc_test))\nexcept Exception as e:\n 
print(('Unpickling or testing failed!', e))\nif not best_solver or acc_test >= best_acc_test:\n print('pickling new model and solver to file %s' % (file_pickle,))\n # shallow copy the solver object\n solver_pkl = copy.copy(solver)\n solver_pkl.X_train, solver_pkl.X_val, solver_pkl.y_train, solver_pkl.y_val = \\\n None, None, None, None\n with open(file_pickle, 'wb') as f:\n pickle.dump(solver_pkl, f)",
"Extra Credit Description\nIf you implement any additional features for extra credit, clearly describe them here with pointers to any code in this or other files if applicable."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
moble/MatchedFiltering
|
GW150914/AdjustCoM.ipynb
|
mit
|
[
"Adjust center of mass position and velocity of simulation for GW150914",
"import scri\nimport scri.SpEC\nimport numpy as np\n\ndata_dir = '/Users/boyle/Research/Data/SimulationAnnex/Incoming/BBH_SKS_d13.4_q1.23_sA_0_0_0.320_sB_0_0_-0.580/Lev5/'\n\nw_N2 = scri.SpEC.remove_avg_com_motion(data_dir + 'rhOverM_Asymptotic_GeometricUnits.h5/Extrapolated_N2.dir', file_write_mode='w')\nw_N3 = scri.SpEC.remove_avg_com_motion(data_dir + 'rhOverM_Asymptotic_GeometricUnits.h5/Extrapolated_N3.dir', file_write_mode='a')\nw_N4 = scri.SpEC.remove_avg_com_motion(data_dir + 'rhOverM_Asymptotic_GeometricUnits.h5/Extrapolated_N4.dir', file_write_mode='a')\nw_No = scri.SpEC.remove_avg_com_motion(data_dir + 'rhOverM_Asymptotic_GeometricUnits.h5/OutermostExtraction.dir', file_write_mode='a')",
"Those displacements look pretty large. I wonder how far the system wanders...",
"x_i = np.array([0.0254846374656213, -0.051270560984526176, 3.328532865089032e-06])\nv_i = np.array([-1.4420901467875399e-06, 6.341746857347185e-06, -3.412200633855404e-08])\n\nx_f = x_i + w_No.t[-1]*v_i\nprint(x_f)\nprint(np.linalg.norm(x_f))",
"That's not very far. I guess it's not a very long simulation...",
"w_No.t[-1]",
"Indeed it's pretty short, so the system doesn't get very far. I start to worry when we need to be careful with higher modes, or when the displacements are a few times larger than this.",
"scri.SpEC.metadata.read_metadata_into_object?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
amirziai/learning
|
deep-learning/Tensorflow-Tutorial.ipynb
|
mit
|
[
"TensorFlow Tutorial\nWelcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow: \n\nInitialize variables\nStart your own session\nTrain algorithms \nImplement a Neural Network\n\nProgramming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code. \n1 - Exploring the Tensorflow Library\nTo start, you will import the library:",
"import math\nimport numpy as np\nimport h5py\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nfrom tensorflow.python.framework import ops\nfrom tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict\n\n%matplotlib inline\nnp.random.seed(1)",
"Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. \n$$loss = \\mathcal{L}(\\hat{y}, y) = (\\hat y^{(i)} - y^{(i)})^2 \\tag{1}$$",
"y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.\ny = tf.constant(39, name='y') # Define y. Set to 39\n\nloss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss\n\ninit = tf.global_variables_initializer() # When init is run later (session.run(init)),\n # the loss variable will be initialized and ready to be computed\nwith tf.Session() as session: # Create a session and print the output\n session.run(init) # Initializes the variables\n print(session.run(loss)) # Prints the loss",
"Writing and running programs in TensorFlow has the following steps:\n\nCreate Tensors (variables) that are not yet executed/evaluated. \nWrite operations between those Tensors.\nInitialize your Tensors. \nCreate a Session. \nRun the Session. This will run the operations you'd written above. \n\nTherefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run init=tf.global_variables_initializer(). That initialized the loss variable, and in the last line we were finally able to evaluate the value of loss and print its value.\nNow let us look at an easy example. Run the cell below:",
"a = tf.constant(2)\nb = tf.constant(10)\nc = tf.multiply(a,b)\nprint(c)",
"As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type \"int32\". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.",
"sess = tf.Session()\nprint(sess.run(c))",
"Great! To summarize, remember to initialize your variables, create a session and run the operations inside the session. \nNext, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. \nTo specify values for a placeholder, you can pass in values by using a \"feed dictionary\" (feed_dict variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.",
"# Change the value of x in the feed_dict\n\nx = tf.placeholder(tf.int64, name = 'x')\nprint(sess.run(2 * x, feed_dict = {x: 3}))\nsess.close()",
"When you first defined x you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you feed data to these placeholders when running the session. \nHere's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.\n1.1 - Linear function\nLet's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector. \nExercise: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):\n```python\nX = tf.constant(np.random.randn(3,1), name = \"X\")\n```\nYou might find the following functions helpful: \n- tf.matmul(..., ...) to do a matrix multiplication\n- tf.add(..., ...) to do an addition\n- np.random.randn(...) to initialize randomly",
"# GRADED FUNCTION: linear_function\n\ndef linear_function():\n    \"\"\"\n    Implements a linear function: \n        Initializes W to be a random tensor of shape (4,3)\n        Initializes X to be a random tensor of shape (3,1)\n        Initializes b to be a random tensor of shape (4,1)\n    Returns: \n    result -- runs the session for Y = WX + b \n    \"\"\"\n    \n    np.random.seed(1)\n    \n    ### START CODE HERE ### (4 lines of code)\n    X = tf.constant(np.random.randn(3, 1), name='X')\n    W = tf.constant(np.random.randn(4, 3), name='W')\n    b = tf.constant(np.random.randn(4, 1), name='b')\n    Y = tf.add(tf.matmul(W, X), b)\n    ### END CODE HERE ### \n    \n    # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate\n    \n    ### START CODE HERE ###\n    sess = tf.Session()\n    result = sess.run(Y)\n    ### END CODE HERE ### \n    \n    # close the session \n    sess.close()\n\n    return result\n\nprint( \"result = \" + str(linear_function()))",
"Expected Output : \n<table> \n<tr> \n<td>\n**result**\n</td>\n<td>\n[[-2.15657382]\n [ 2.95891446]\n [-1.08926781]\n [-0.84538042]]\n</td>\n</tr> \n\n</table>\n\n1.2 - Computing the sigmoid\nGreat! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like tf.sigmoid and tf.softmax. For this exercise let's compute the sigmoid function of an input. \nYou will do this exercise using a placeholder variable x. When running the session, you should use the feed dictionary to pass in the input z. In this exercise, you will have to (i) create a placeholder x, (ii) define the operations needed to compute the sigmoid using tf.sigmoid, and then (iii) run the session. \n Exercise : Implement the sigmoid function below. You should use the following: \n\ntf.placeholder(tf.float32, name = \"...\")\ntf.sigmoid(...)\nsess.run(..., feed_dict = {x: z})\n\nNote that there are two typical ways to create and use sessions in tensorflow: \nMethod 1:\n```python\nsess = tf.Session()\n# Run the variables initialization (if needed), run the operations\nresult = sess.run(..., feed_dict = {...})\nsess.close() # Close the session\n```\nMethod 2:\n```python\nwith tf.Session() as sess: \n    # run the variables initialization (if needed), run the operations\n    result = sess.run(..., feed_dict = {...})\n    # This takes care of closing the session for you :)\n```",
"# GRADED FUNCTION: sigmoid\n\ndef sigmoid(z):\n \"\"\"\n Computes the sigmoid of z\n \n Arguments:\n z -- input value, scalar or vector\n \n Returns: \n results -- the sigmoid of z\n \"\"\"\n \n ### START CODE HERE ### ( approx. 4 lines of code)\n # Create a placeholder for x. Name it 'x'.\n x = tf.placeholder(tf.float32, name='x')\n\n # compute sigmoid(x)\n sigmoid = tf.sigmoid(x)\n\n # Create a session, and run it. Please use the method 2 explained above. \n # You should use a feed_dict to pass z's value to x. \n with tf.Session() as sess:\n # Run session and call the output \"result\"\n result = sess.run(sigmoid, feed_dict={x: z})\n \n ### END CODE HERE ###\n \n return result\n\nprint (\"sigmoid(0) = \" + str(sigmoid(0)))\nprint (\"sigmoid(12) = \" + str(sigmoid(12)))",
"Expected Output : \n<table> \n<tr> \n<td>\n**sigmoid(0)**\n</td>\n<td>\n0.5\n</td>\n</tr>\n<tr> \n<td>\n**sigmoid(12)**\n</td>\n<td>\n0.999994\n</td>\n</tr> \n\n</table>\n\n<font color='blue'>\nTo summarize, you now know how to:\n1. Create placeholders\n2. Specify the computation graph corresponding to operations you want to compute\n3. Create the session\n4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. \n1.3 - Computing the Cost\nYou can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m: \n$$ J = - \\frac{1}{m} \\sum_{i = 1}^m \\large ( \\small y^{(i)} \\log a^{ [2] (i)} + (1-y^{(i)})\\log (1-a^{ [2] (i)} )\\large )\\small\\tag{2}$$\nyou can do it in one line of code in tensorflow!\nExercise: Implement the cross entropy loss. The function you will use is: \n\ntf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)\n\nYour code should input z, compute the sigmoid (to get a) and then compute the cross entropy cost $J$. All this can be done using one call to tf.nn.sigmoid_cross_entropy_with_logits, which computes\n$$- \\frac{1}{m} \\sum_{i = 1}^m \\large ( \\small y^{(i)} \\log \\sigma(z^{[2](i)}) + (1-y^{(i)})\\log (1-\\sigma(z^{[2](i)}))\\large )\\small\\tag{2}$$",
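The expected output of the cost cell below can be cross-checked in plain numpy by applying the cross-entropy formula elementwise (a verification sketch, not the graded solution):

```python
import numpy as np

def sigmoid_np(z):
    return 1.0 / (1.0 + np.exp(-z))

# Same inputs as the graded cell: the logits fed to the cost function are
# themselves sigmoid(0.2, 0.4, 0.7, 0.9)
z = sigmoid_np(np.array([0.2, 0.4, 0.7, 0.9]))
y = np.array([0., 0., 1., 1.])

# Elementwise cross-entropy: -(y*log(sigmoid(z)) + (1-y)*log(1-sigmoid(z)))
a = sigmoid_np(z)
cost = -(y * np.log(a) + (1 - y) * np.log(1 - a))
print(cost)  # ~[1.00538719  1.03664088  0.41385433  0.39956614]
```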
"# GRADED FUNCTION: cost\n\ndef cost(logits, labels):\n \"\"\"\n Computes the cost using the sigmoid cross entropy\n \n Arguments:\n logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)\n labels -- vector of labels y (1 or 0) \n \n Note: What we've been calling \"z\" and \"y\" in this class are respectively called \"logits\" and \"labels\" \n in the TensorFlow documentation. So logits will feed into z, and labels into y. \n \n Returns:\n cost -- runs the session of the cost (formula (2))\n \"\"\"\n \n ### START CODE HERE ### \n \n # Create the placeholders for \"logits\" (z) and \"labels\" (y) (approx. 2 lines)\n z = tf.placeholder(tf.float32, name='z')\n y = tf.placeholder(tf.float32, name='y')\n \n # Use the loss function (approx. 1 line)\n cost = tf.nn.sigmoid_cross_entropy_with_logits(logits = z, labels = y)\n \n # Create a session (approx. 1 line). See method 1 above.\n sess = tf.Session()\n \n # Run the session (approx. 1 line).\n cost = sess.run(cost, feed_dict={z: logits, y: labels})\n \n # Close the session (approx. 1 line). See method 1 above.\n sess.close()\n \n ### END CODE HERE ###\n \n return cost\n\nlogits = sigmoid(np.array([0.2,0.4,0.7,0.9]))\ncost = cost(logits, np.array([0,0,1,1]))\nprint (\"cost = \" + str(cost))",
"Expected Output : \n<table> \n <tr> \n <td>\n **cost**\n </td>\n <td>\n [ 1.00538719 1.03664088 0.41385433 0.39956614]\n </td>\n </tr>\n\n</table>\n\n1.4 - Using One Hot encodings\nMany times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:\n<img src=\"images/onehot.png\" style=\"width:600px;height:150px;\">\nThis is called a \"one hot\" encoding, because in the converted representation exactly one element of each column is \"hot\" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code: \n\ntf.one_hot(labels, depth, axis) \n\nExercise: Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use tf.one_hot() to do this.",
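The "few lines of code" that numpy would need can be as short as an eye-matrix lookup (a sketch for comparison; tf.one_hot replaces it):

```python
import numpy as np

labels = np.array([1, 2, 3, 0, 2, 1])
C = 4

# Rows index classes, columns index examples, matching tf.one_hot(..., axis=0):
# entry (i, j) is 1 exactly when example j has label i
one_hot = np.eye(C)[labels].T
print(one_hot)
```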
"# GRADED FUNCTION: one_hot_matrix\n\ndef one_hot_matrix(labels, C):\n \"\"\"\n Creates a matrix where the i-th row corresponds to the ith class number and the jth column\n corresponds to the jth training example. So if example j had a label i. Then entry (i,j) \n will be 1. \n \n Arguments:\n labels -- vector containing the labels \n C -- number of classes, the depth of the one hot dimension\n \n Returns: \n one_hot -- one hot matrix\n \"\"\"\n \n ### START CODE HERE ###\n \n # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)\n C = tf.constant(C)\n \n # Use tf.one_hot, be careful with the axis (approx. 1 line)\n one_hot_matrix = tf.one_hot(labels, C, axis=0)\n \n # Create the session (approx. 1 line)\n sess = tf.Session()\n \n # Run the session (approx. 1 line)\n one_hot = sess.run(one_hot_matrix)\n \n # Close the session (approx. 1 line). See method 1 above.\n sess.close()\n \n ### END CODE HERE ###\n \n return one_hot\n\nlabels = np.array([1,2,3,0,2,1])\none_hot = one_hot_matrix(labels, C = 4)\nprint (\"one_hot = \" + str(one_hot))",
"Expected Output: \n<table> \n <tr> \n <td>\n **one_hot**\n </td>\n <td>\n [[ 0. 0. 0. 1. 0. 0.]\n [ 1. 0. 0. 0. 0. 1.]\n [ 0. 1. 0. 0. 1. 0.]\n [ 0. 0. 1. 0. 0. 0.]]\n </td>\n </tr>\n\n</table>\n\n1.5 - Initialize with zeros and ones\nNow you will learn how to initialize a vector of zeros and ones. The function you will be calling is tf.ones(). To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively. \nExercise: Implement the function below to take in a shape and to return an array (of the shape's dimension of ones). \n\ntf.ones(shape)",
"# GRADED FUNCTION: ones\n\ndef ones(shape):\n \"\"\"\n Creates an array of ones of dimension shape\n \n Arguments:\n shape -- shape of the array you want to create\n \n Returns: \n ones -- array containing only ones\n \"\"\"\n \n ### START CODE HERE ###\n \n # Create \"ones\" tensor using tf.ones(...). (approx. 1 line)\n ones = tf.ones(shape)\n \n # Create the session (approx. 1 line)\n sess = tf.Session()\n \n # Run the session to compute 'ones' (approx. 1 line)\n ones = sess.run(ones)\n \n # Close the session (approx. 1 line). See method 1 above.\n sess.close()\n \n ### END CODE HERE ###\n return ones\n\nprint (\"ones = \" + str(ones([3])))",
"Expected Output:\n<table> \n <tr> \n <td>\n **ones**\n </td>\n <td>\n [ 1. 1. 1.]\n </td>\n </tr>\n\n</table>\n\n2 - Building your first neural network in tensorflow\nIn this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:\n\nCreate the computation graph\nRun the graph\n\nLet's delve into the problem you'd like to solve!\n2.0 - Problem statement: SIGNS Dataset\nOne afternoon, with some friends, we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.\n\nTraining set: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).\nTest set: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).\n\nNote that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.\nHere are examples for each number, and an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels.\n<img src=\"images/hands.png\" style=\"width:800px;height:350px;\"><caption><center> <u><font color='purple'> Figure 1</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center>\nRun the following code to load the dataset.",
"# Loading the dataset\nX_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()",
"Change the index below and run the cell to visualize some examples in the dataset.",
"# Example of a picture\nindex = 0\nplt.imshow(X_train_orig[index])\nprint (\"y = \" + str(np.squeeze(Y_train_orig[:, index])))",
"As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.",
"# Flatten the training and test images\nX_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T\nX_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T\n# Normalize image vectors\nX_train = X_train_flatten/255.\nX_test = X_test_flatten/255.\n# Convert training and test labels to one hot matrices\nY_train = convert_to_one_hot(Y_train_orig, 6)\nY_test = convert_to_one_hot(Y_test_orig, 6)\n\nprint (\"number of training examples = \" + str(X_train.shape[1]))\nprint (\"number of test examples = \" + str(X_test.shape[1]))\nprint (\"X_train shape: \" + str(X_train.shape))\nprint (\"Y_train shape: \" + str(Y_train.shape))\nprint (\"X_test shape: \" + str(X_test.shape))\nprint (\"Y_test shape: \" + str(Y_test.shape))",
"Note that 12288 comes from $64 \\times 64 \\times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.\nYour goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. \nThe model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes. \n2.1 - Create placeholders\nYour first task is to create placeholders for X and Y. This will allow you to later pass your training data in when you run your session. \nExercise: Implement the function below to create the placeholders in tensorflow.",
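The shape arithmetic can be checked on a dummy array; this sketch verifies that each image becomes one column of length $64 \times 64 \times 3 = 12288$:

```python
import numpy as np

# Dummy stand-in for X_train_orig: 5 RGB images of 64 x 64 pixels.
X_orig = np.random.rand(5, 64, 64, 3)

# Flatten each image into a column, exactly as the cell above does.
X_flat = X_orig.reshape(X_orig.shape[0], -1).T
print(X_flat.shape)  # (12288, 5)
```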
"# GRADED FUNCTION: create_placeholders\n\ndef create_placeholders(n_x, n_y):\n \"\"\"\n Creates the placeholders for the tensorflow session.\n \n Arguments:\n n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)\n n_y -- scalar, number of classes (from 0 to 5, so -> 6)\n \n Returns:\n X -- placeholder for the data input, of shape [n_x, None] and dtype \"float\"\n Y -- placeholder for the input labels, of shape [n_y, None] and dtype \"float\"\n \n Tips:\n - You will use None because it lets us be flexible on the number of examples used for the placeholders.\n In fact, the number of examples during train/test is different.\n \"\"\"\n\n ### START CODE HERE ### (approx. 2 lines)\n X = tf.placeholder(tf.float32, [n_x, None])\n Y = tf.placeholder(tf.float32, [n_y, None])\n ### END CODE HERE ###\n \n return X, Y\n\nX, Y = create_placeholders(12288, 6)\nprint (\"X = \" + str(X))\nprint (\"Y = \" + str(Y))",
"Expected Output: \n<table> \n <tr> \n <td>\n **X**\n </td>\n <td>\n Tensor(\"Placeholder_1:0\", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1)\n </td>\n </tr>\n <tr> \n <td>\n **Y**\n </td>\n <td>\n Tensor(\"Placeholder_2:0\", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2)\n </td>\n </tr>\n\n</table>\n\n2.2 - Initializing the parameters\nYour second task is to initialize the parameters in tensorflow.\nExercise: Implement the function below to initialize the parameters in tensorflow. You are going to use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use: \npython\nW1 = tf.get_variable(\"W1\", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))\nb1 = tf.get_variable(\"b1\", [25,1], initializer = tf.zeros_initializer())\nPlease use seed = 1 to make sure your results match ours.",
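For intuition, Xavier (Glorot) initialization scales random weights by the layer's fan-in and fan-out; here is a numpy sketch of the uniform variant (the helper name is ours, and the exact TF initializer may differ in details such as the random distribution used):

```python
import numpy as np

def xavier_uniform(shape, seed=1):
    # Glorot/Xavier uniform: limit = sqrt(6 / (fan_in + fan_out))
    fan_out, fan_in = shape
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    rng = np.random.default_rng(seed)
    return rng.uniform(-limit, limit, size=shape)

W1 = xavier_uniform((25, 12288))
print(W1.shape)  # (25, 12288), with entries bounded by the limit
```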
"# GRADED FUNCTION: initialize_parameters\n\ndef initialize_parameters():\n \"\"\"\n Initializes parameters to build a neural network with tensorflow. The shapes are:\n W1 : [25, 12288]\n b1 : [25, 1]\n W2 : [12, 25]\n b2 : [12, 1]\n W3 : [6, 12]\n b3 : [6, 1]\n \n Returns:\n parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3\n \"\"\"\n \n tf.set_random_seed(1) # so that your \"random\" numbers match ours\n \n ### START CODE HERE ### (approx. 6 lines of code)\n W1 = tf.get_variable(\"W1\", [25, 12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))\n b1 = tf.get_variable(\"b1\", [25, 1], initializer = tf.zeros_initializer())\n W2 = tf.get_variable(\"W2\", [12, 25], initializer = tf.contrib.layers.xavier_initializer(seed = 1))\n b2 = tf.get_variable(\"b2\", [12, 1], initializer = tf.zeros_initializer())\n W3 = tf.get_variable(\"W3\", [6, 12], initializer = tf.contrib.layers.xavier_initializer(seed = 1))\n b3 = tf.get_variable(\"b3\", [6, 1], initializer = tf.zeros_initializer())\n ### END CODE HERE ###\n\n parameters = {\"W1\": W1,\n \"b1\": b1,\n \"W2\": W2,\n \"b2\": b2,\n \"W3\": W3,\n \"b3\": b3}\n \n return parameters\n\ntf.reset_default_graph()\nwith tf.Session() as sess:\n parameters = initialize_parameters()\n print(\"W1 = \" + str(parameters[\"W1\"]))\n print(\"b1 = \" + str(parameters[\"b1\"]))\n print(\"W2 = \" + str(parameters[\"W2\"]))\n print(\"b2 = \" + str(parameters[\"b2\"]))",
"Expected Output: \n<table> \n <tr> \n <td>\n **W1**\n </td>\n <td>\n < tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref >\n </td>\n </tr>\n <tr> \n <td>\n **b1**\n </td>\n <td>\n < tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref >\n </td>\n </tr>\n <tr> \n <td>\n **W2**\n </td>\n <td>\n < tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref >\n </td>\n </tr>\n <tr> \n <td>\n **b2**\n </td>\n <td>\n < tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref >\n </td>\n </tr>\n\n</table>\n\nAs expected, the parameters haven't been evaluated yet.\n2.3 - Forward propagation in tensorflow\nYou will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are: \n\ntf.add(...,...) to do an addition\ntf.matmul(...,...) to do a matrix multiplication\ntf.nn.relu(...) to apply the ReLU activation\n\nQuestion: Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at z3. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need a3!",
"# GRADED FUNCTION: forward_propagation\n\ndef forward_propagation(X, parameters):\n \"\"\"\n Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX\n \n Arguments:\n X -- input dataset placeholder, of shape (input size, number of examples)\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\"\n the shapes are given in initialize_parameters\n\n Returns:\n Z3 -- the output of the last LINEAR unit\n \"\"\"\n \n # Retrieve the parameters from the dictionary \"parameters\" \n W1 = parameters['W1']\n b1 = parameters['b1']\n W2 = parameters['W2']\n b2 = parameters['b2']\n W3 = parameters['W3']\n b3 = parameters['b3']\n \n ### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:\n Z1 = tf.matmul(W1, X) + b1 # Z1 = np.dot(W1, X) + b1\n A1 = tf.nn.relu(Z1) # A1 = relu(Z1)\n Z2 = tf.matmul(W2, A1) + b2 # Z2 = np.dot(W2, A1) + b2\n A2 = tf.nn.relu(Z2) # A2 = relu(Z2)\n Z3 = tf.matmul(W3, A2) + b3 # Z3 = np.dot(W3, A2) + b3\n ### END CODE HERE ###\n \n return Z3\n\ntf.reset_default_graph()\n\nwith tf.Session() as sess:\n X, Y = create_placeholders(12288, 6)\n parameters = initialize_parameters()\n Z3 = forward_propagation(X, parameters)\n print(\"Z3 = \" + str(Z3))",
"Expected Output: \n<table> \n <tr> \n <td>\n **Z3**\n </td>\n <td>\n Tensor(\"Add_2:0\", shape=(6, ?), dtype=float32)\n </td>\n </tr>\n\n</table>\n\nYou may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation.\n2.4 Compute cost\nAs seen before, it is very easy to compute the cost using:\npython\ntf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))\nQuestion: Implement the cost function below. \n- It is important to know that the \"logits\" and \"labels\" inputs of tf.nn.softmax_cross_entropy_with_logits are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.\n- Besides, tf.reduce_mean basically does the summation over the examples.",
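As a cross-check of what the one-liner computes, here is a numpy sketch of mean softmax cross-entropy on one-hot labels (a simplified model of the TF kernel, not its exact implementation):

```python
import numpy as np

def softmax_cross_entropy_mean(Z3, Y):
    # Z3: logits of shape (num_classes, m); Y: one-hot labels, same shape.
    Z = Z3 - Z3.max(axis=0, keepdims=True)            # stabilize exponentials
    log_softmax = Z - np.log(np.exp(Z).sum(axis=0, keepdims=True))
    return -np.sum(Y * log_softmax, axis=0).mean()    # mean over examples

Z3 = np.array([[2.0, 1.0],
               [0.5, 3.0]])
Y = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(softmax_cross_entropy_mean(Z3, Y))  # ~0.1642
```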
"# GRADED FUNCTION: compute_cost \n\ndef compute_cost(Z3, Y):\n \"\"\"\n Computes the cost\n \n Arguments:\n Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)\n Y -- \"true\" labels vector placeholder, same shape as Z3\n \n Returns:\n cost - Tensor of the cost function\n \"\"\"\n \n # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)\n logits = tf.transpose(Z3)\n labels = tf.transpose(Y)\n \n ### START CODE HERE ### (1 line of code)\n cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels))\n ### END CODE HERE ###\n \n return cost\n\ntf.reset_default_graph()\n\nwith tf.Session() as sess:\n X, Y = create_placeholders(12288, 6)\n parameters = initialize_parameters()\n Z3 = forward_propagation(X, parameters)\n cost = compute_cost(Z3, Y)\n print(\"cost = \" + str(cost))",
"Expected Output: \n<table> \n <tr> \n <td>\n **cost**\n </td>\n <td>\n Tensor(\"Mean:0\", shape=(), dtype=float32)\n </td>\n </tr>\n\n</table>\n\n2.5 - Backward propagation & parameter updates\nThis is where you become grateful to programming frameworks. All of the backpropagation and parameter updates are taken care of in one line of code. It is very easy to incorporate this line in the model.\nAfter you compute the cost function, you will create an \"optimizer\" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.\nFor instance, for gradient descent the optimizer would be:\npython\noptimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)\nTo make the optimization you would do:\npython\n_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})\nThis computes the backpropagation by passing through the tensorflow graph in the reverse order, from cost to inputs.\nNote When coding, we often use _ as a \"throwaway\" variable to store values that we won't need to use later. Here, _ takes on the evaluated value of optimizer, which we don't need (and c takes the value of the cost variable). \n2.6 - Building the model\nNow, you will bring it all together! \nExercise: Implement the model. You will be calling the functions you had previously implemented.",
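Conceptually, each `sess.run([optimizer, ...])` call applies one parameter update; here is a numpy sketch of the plain gradient-descent case on a toy one-parameter cost (Adam layers adaptive per-parameter step sizes on top of this idea):

```python
import numpy as np

learning_rate = 0.01
W = np.array([3.0])                  # a toy parameter; cost(W) = W**2

for _ in range(200):                 # 200 "sess.run" steps
    grad = 2 * W                     # gradient of the cost
    W = W - learning_rate * grad     # the update minimize() applies

print(W)                             # close to the minimum at 0
```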
"def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,\n num_epochs = 1500, minibatch_size = 32, print_cost = True):\n \"\"\"\n Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.\n \n Arguments:\n X_train -- training set, of shape (input size = 12288, number of training examples = 1080)\n Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)\n X_test -- test set, of shape (input size = 12288, number of test examples = 120)\n Y_test -- test labels, of shape (output size = 6, number of test examples = 120)\n learning_rate -- learning rate of the optimization\n num_epochs -- number of epochs of the optimization loop\n minibatch_size -- size of a minibatch\n print_cost -- True to print the cost every 100 epochs\n \n Returns:\n parameters -- parameters learnt by the model. They can then be used to predict.\n \"\"\"\n \n ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables\n tf.set_random_seed(1) # to keep consistent results\n seed = 3 # to keep consistent results\n (n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)\n n_y = Y_train.shape[0] # n_y : output size\n costs = [] # To keep track of the cost\n \n # Create Placeholders of shape (n_x, n_y)\n ### START CODE HERE ### (1 line)\n X, Y = create_placeholders(n_x, n_y)\n ### END CODE HERE ###\n\n # Initialize parameters\n ### START CODE HERE ### (1 line)\n parameters = initialize_parameters()\n ### END CODE HERE ###\n \n # Forward propagation: Build the forward propagation in the tensorflow graph\n ### START CODE HERE ### (1 line)\n Z3 = forward_propagation(X, parameters)\n ### END CODE HERE ###\n \n # Cost function: Add cost function to tensorflow graph\n ### START CODE HERE ### (1 line)\n cost = compute_cost(Z3, Y)\n ### END CODE HERE ###\n \n # Backpropagation: Define the tensorflow optimizer. 
Use an AdamOptimizer.\n ### START CODE HERE ### (1 line)\n optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)\n ### END CODE HERE ###\n \n # Initialize all the variables\n init = tf.global_variables_initializer()\n\n # Start the session to compute the tensorflow graph\n with tf.Session() as sess:\n \n # Run the initialization\n sess.run(init)\n \n # Do the training loop\n for epoch in range(num_epochs):\n\n epoch_cost = 0. # Defines a cost related to an epoch\n num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set\n seed = seed + 1\n minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)\n\n for minibatch in minibatches:\n\n # Select a minibatch\n (minibatch_X, minibatch_Y) = minibatch\n \n # IMPORTANT: The line that runs the graph on a minibatch.\n # Run the session to execute the \"optimizer\" and the \"cost\", the feedict should contain a minibatch for (X,Y).\n ### START CODE HERE ### (1 line)\n _ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})\n ### END CODE HERE ###\n \n epoch_cost += minibatch_cost / num_minibatches\n\n # Print the cost every epoch\n if print_cost == True and epoch % 100 == 0:\n print (\"Cost after epoch %i: %f\" % (epoch, epoch_cost))\n if print_cost == True and epoch % 5 == 0:\n costs.append(epoch_cost)\n \n # plot the cost\n plt.plot(np.squeeze(costs))\n plt.ylabel('cost')\n plt.xlabel('iterations (per tens)')\n plt.title(\"Learning rate =\" + str(learning_rate))\n plt.show()\n\n # lets save the parameters in a variable\n parameters = sess.run(parameters)\n print (\"Parameters have been trained!\")\n\n # Calculate the correct predictions\n correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))\n\n # Calculate accuracy on the test set\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))\n\n print (\"Train Accuracy:\", accuracy.eval({X: X_train, Y: Y_train}))\n print (\"Test 
Accuracy:\", accuracy.eval({X: X_test, Y: Y_test}))\n \n return parameters",
"Run the following cell to train your model! On our machine it takes about 5 minutes. Your \"Cost after epoch 100\" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!",
"parameters = model(X_train, Y_train, X_test, Y_test)",
"Expected Output:\n<table> \n <tr> \n <td>\n **Train Accuracy**\n </td>\n <td>\n 0.999074\n </td>\n </tr>\n <tr> \n <td>\n **Test Accuracy**\n </td>\n <td>\n 0.716667\n </td>\n </tr>\n\n</table>\n\nAmazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.\nInsights:\n- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting. \n- Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters.\n2.7 - Test with your own image (optional / ungraded exercise)\nCongratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that:\n 1. Click on \"File\" in the upper bar of this notebook, then click \"Open\" to go on your Coursera Hub.\n 2. Add your image to this Jupyter Notebook's directory, in the \"images\" folder\n 3. Write your image's name in the following code\n 4. Run the code and check if the algorithm is right!",
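The Train/Test Accuracy numbers above come from the argmax comparison in model(); in numpy terms, the same computation looks like this:

```python
import numpy as np

Z3 = np.array([[0.1, 2.0, 0.3],   # logits, one column per example
               [1.5, 0.2, 0.1],
               [0.3, 0.1, 4.0]])
Y = np.array([[0, 0, 0],          # one-hot labels
              [1, 0, 0],
              [0, 1, 1]])

# tf.equal(tf.argmax(Z3), tf.argmax(Y)) followed by reduce_mean(cast(...)).
correct = np.argmax(Z3, axis=0) == np.argmax(Y, axis=0)
accuracy = correct.astype(float).mean()
print(accuracy)  # 2 of 3 predictions match the labels
```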
"import scipy\nfrom PIL import Image\nfrom scipy import ndimage\n\n## START CODE HERE ## (PUT YOUR IMAGE NAME) \nmy_image = \"thumbs_up.jpg\"\n## END CODE HERE ##\n\n# We preprocess your image to fit your algorithm.\nfname = \"images/\" + my_image\nimage = np.array(ndimage.imread(fname, flatten=False))\nmy_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T\nmy_image_prediction = predict(my_image, parameters)\n\nplt.imshow(image)\nprint(\"Your algorithm predicts: y = \" + str(np.squeeze(my_image_prediction)))",
"You indeed deserved a \"thumbs-up\", although, as you can see, the algorithm seems to classify it incorrectly. The reason is that the training set doesn't contain any \"thumbs-up\", so the model doesn't know how to deal with it! We call that a \"mismatched data distribution\" and it is one of the topics of the next course on \"Structuring Machine Learning Projects\".\n<font color='blue'>\nWhat you should remember:\n- Tensorflow is a programming framework used in deep learning\n- The two main object classes in tensorflow are Tensors and Operators. \n- When you code in tensorflow you have to take the following steps:\n - Create a graph containing Tensors (Variables, Placeholders ...) and Operations (tf.matmul, tf.add, ...)\n - Create a session\n - Initialize the session\n - Run the session to execute the graph\n- You can execute the graph multiple times as you've seen in model()\n- The backpropagation and optimization are automatically done when running the session on the \"optimizer\" object."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Yangqing/caffe2
|
caffe2/python/tutorials/Python_Op.ipynb
|
apache-2.0
|
[
"Python Op Tutorial\nIn this tutorial we cover the Python operator that allows writing Caffe2 operators using Python, we also discuss some of the underlying implementation details.\nForward Python Operator, Net.Python() Interface\nCaffe2 provides a high-level interface that helps creating Python ops. Let's consider the following example:",
"from caffe2.python import core, workspace\nimport numpy as np\n\ndef f(inputs, outputs):\n outputs[0].feed(2 * inputs[0].data)\n\nworkspace.ResetWorkspace()\nnet = core.Net(\"tutorial\")\nnet.Python(f)([\"x\"], [\"y\"])\nworkspace.FeedBlob(\"x\", np.array([3.]))\nworkspace.RunNetOnce(net)\nprint(workspace.FetchBlob(\"y\"))",
"As seen in the example, net.Python() function returns a callable that can be used just like any other operator. In this example, we add a new Python operator to the net with input \"x\" and output \"y\". Note that you can save the output of net.Python() and call it multiple times to add multiple Python operators (with possibly different inputs and outputs).\nLet's take a closer look at net.Python() function and a corresponding body of a new Python operator (f). Every time net.Python(f) is called it serializes a given function f and saves it in a global registry under a known key (token, passed to a PythonOp as an argument). After this, net.Python() returns a lambda that accepts positional and keyword arguments (typically inputs, outputs and extra arguments) and attaches a new Python operator to the net that calls function f on a given list of inputs and outputs.\nPython operator's function (f) expects two positional arguments: a list of inputs and a list of outputs. When an operator is executed it transparently converts Caffe2 blobs into the elements of these lists.\nIn case of CPU tensor blobs, these blobs are converted into TensorCPU objects that act as wrappers around Numpy arrays. Let's take a closer look at a relationship between Caffe2 CPU tensor, Python's TensorCPU object and a Numpy array:\n1. Conversion between C++ tensor objects and Numpy objects happens automatically and is handled by PyBind library.\n2. When generating a TensorCPU wrapper, a new Numpy array object is created which shares the same memory storage as a corresponding Caffe2 CPU tensor. This Numpy array is accessible in Python as a .data property of a TensorCPU object.\n3. Note that, although Numpy array and Caffe2 tensor might share the same storage, other tensor data (e.g. shape) of Caffe2 tensor is stored separately from a Numpy array. Furthermore, Numpy may copy and reallocate its array to a different location in memory (e.g. 
when we try to resize an array) during operator's function execution. It's important to keep that in mind when writing a Python operator's code to ensure that Caffe2 and Numpy output tensors are in sync.\n4. TensorCPU's feed method accepts a Numpy tensor, resizes an underlying Caffe2 tensor and copies Numpy's tensor data into a Caffe2 tensor.\n5. Another way to ensure that Caffe2's output tensor is properly set is to call the reshape function on a corresponding TensorCPU output and then copy data in Python to the output's .data tensor, e.g.:",
"def f_reshape(inputs, outputs):\n outputs[0].reshape(inputs[0].shape)\n outputs[0].data[...] = 2 * inputs[0].data\n\nworkspace.ResetWorkspace()\nnet = core.Net(\"tutorial\")\nnet.Python(f_reshape)([\"x\"], [\"z\"])\nworkspace.FeedBlob(\"x\", np.array([3.]))\nworkspace.RunNetOnce(net)\nprint(workspace.FetchBlob(\"z\"))",
"This example works correctly because the \"reshape\" method updates an underlying Caffe2 tensor and a subsequent call to the \".data\" property returns a Numpy array that shares memory with a Caffe2 tensor. The last line in \"f_reshape\" copies data into the shared memory location.\nThere are several additional arguments that net.Python() accepts. When pass_workspace=True is passed, a workspace is passed to an operator's Python function:",
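The reason f_reshape writes with `outputs[0].data[...] = ...` rather than rebinding `outputs[0].data = ...` is the difference between an in-place write through a view and rebinding a Python name; a plain numpy sketch of that distinction:

```python
import numpy as np

storage = np.zeros(3)      # stands in for the Caffe2 tensor's memory
data = storage[:]          # a view sharing that memory, like .data

data = np.array([9., 9., 9.])   # rebinds the name; storage is untouched
print(storage)                  # still [0. 0. 0.]

data = storage[:]
data[...] = np.array([2., 4., 6.])   # in-place write through the view
print(storage)                       # now [2. 4. 6.]
```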
"def f_workspace(inputs, outputs, workspace):\n outputs[0].feed(2 * workspace.blobs[\"x\"].fetch())\n\nworkspace.ResetWorkspace()\nnet = core.Net(\"tutorial\")\nnet.Python(f_workspace, pass_workspace=True)([], [\"y\"])\nworkspace.FeedBlob(\"x\", np.array([3.]))\nworkspace.RunNetOnce(net)\nprint(workspace.FetchBlob(\"y\"))",
"Gradient Python Operator\nAnother important net.Python() argument is \"grad_f\" - a Python function for a corresponding gradient operator:",
"def f(inputs, outputs):\n outputs[0].reshape(inputs[0].shape)\n outputs[0].data[...] = inputs[0].data * 2\n\ndef grad_f(inputs, outputs):\n # Ordering of inputs is [fwd inputs, outputs, grad_outputs]\n grad_output = inputs[2]\n\n grad_input = outputs[0]\n grad_input.reshape(grad_output.shape)\n grad_input.data[...] = grad_output.data * 2\n\nworkspace.ResetWorkspace()\nnet = core.Net(\"tutorial\")\nnet.Python(f, grad_f)([\"x\"], [\"y\"])\nworkspace.FeedBlob(\"x\", np.array([3.]))\nnet.AddGradientOperators([\"y\"])\nworkspace.RunNetOnce(net)\nprint(workspace.FetchBlob(\"x_grad\"))",
"When net.Python() is called with a gradient function specified, it also registers a serialized gradient function that is used by a corresponding gradient Python operator (PythonGradient). This operator executes a gradient function that expects two arguments - input and output lists. The input list argument contains all forward function inputs, followed by all of its outputs, followed by the gradients of forward function outputs. The output list contains the gradients of forward function inputs. Note: net.Python()'s grad_output_indices/grad_input_indices allow specifying indices of gradient output/input blobs that gradient function reads/writes to.\nNote on GPU tensors:\nPythonOp implementation is CPU specific, it uses Numpy arrays that expect CPU memory storage. In order to be able to use a Python operator with GPU tensors, we define a CUDA version of PythonOp using GPUFallbackOp. This operator wraps a CPU-operator and adds GPU-to-CPU (and opposite direction) copy operations. Thus, when using a PythonOp with a CUDA device option, all input CUDA tensors are automatically copied to CPU memory and all CPU output tensors are copied back to GPU."
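The f/grad_f pair above (y = 2x, so dx = 2 · dy) can be sanity-checked numerically without Caffe2; this sketch mirrors the input ordering described above and compares the analytic gradient against a finite difference:

```python
import numpy as np

def f(x):
    return 2 * x                      # forward: y = 2x

def grad_f(x, y, grad_y):
    # Argument order mirrors PythonGradient: [fwd inputs, outputs, grad_outputs]
    return 2 * grad_y                 # dy/dx = 2

x = np.array([3.0])
y = f(x)
analytic = grad_f(x, y, np.ones_like(y))

eps = 1e-6                            # central finite difference
numeric = (f(x + eps) - f(x - eps)) / (2 * eps)
print(analytic, numeric)              # both ~[2.]
```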
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ds-modules/LINGUIS-110
|
VOT/Assignment.ipynb
|
mit
|
[
"Linguistics 110: Closure and Voice-Onset Time\nProfessor Susan Lin\nThis notebook will familiarize you with some of the basic strategies for data analysis that can be useful not only in this course, but possibly for the rest of your time at Cal. We will cover an overview of our computing environment, and then will explore the data on closure and VOT that you submit. \nIf you want a more in-depth introduction to Python, click <a href='http://datahub.berkeley.edu/user-redirect/interact?account=ds-modules&repo=LINGUIS-110&branch=master&path=Intro'>here</a> to explore that notebook. You should be able to get through this entire notebook without that tutorial, it is there if you want to dive deeper into what is going on in the code.\nTable of Contents\n1 - Computing Environment\n2 - Creating our Dataframe\n3 - Exploring the Data\n4 - Relationships between Closures\n5 - Exploring Metadata\n6 - Comparing to Others\n1. Our Computing Environment, Jupyter notebooks <a id='computing environment'></a>\nThis webpage is called a Jupyter notebook. A notebook is a place to write programs and view their results. \nText cells\nIn a notebook, each rectangle containing text or code is called a cell.\nText cells (like this one) can be edited by double-clicking on them. They're written in a simple format called Markdown to add formatting and section headings. You don't need to learn Markdown, but you might want to.\nAfter you edit a text cell, click the \"run cell\" button at the top that looks like ▶| to confirm any changes. (Try not to delete the instructions of the lab.)\nUnderstanding Check 1 This paragraph is in its own text cell. Try editing it so that this sentence is the last sentence in the paragraph, and then click the \"run cell\" ▶| button . This sentence, for example, should be deleted. 
So should this one.\nA programming language is a vocabulary and set of grammatical rules for instructing a computer or computing device to perform specific tasks.\nCode cells\nOther cells contain code in the Python 3 language. Just like natural human languages, Python has rules. It differs from natural language in two important ways:\n1. The rules are simple. You can learn most of them in a few weeks and gain reasonable proficiency with the language in a semester.\n2. The rules are rigid. If you're proficient in a natural language, you can understand a non-proficient speaker, glossing over small mistakes. A computer running Python code is not smart enough to do that.\nThere's a lot of terminology in programming languages, but you don't need to know it all in order to program effectively. From time to time, you'll see a cryptic message, but you can often get by without deciphering it, by utilizing appropriate resources (sometimes it's as simple as a Google search).\nRunning a code cell will execute all of the code it contains. \nTo run the code in a code cell, first click on that cell to activate it. It'll be highlighted with a little green or blue rectangle. Next, either press ▶| or hold down the shift key and press return or enter.\nTry running this cell:",
"print(\"Hello, World!\")",
"The fundamental building block of Python code is an expression. Cells can contain multiple lines with multiple expressions. When you run a cell, the lines of code are executed in the order in which they appear. Every print expression prints a line. Run the next cell and notice the order of the output.",
"print(\"First this line is printed,\")\nprint(\"and then this one.\")",
"Writing Jupyter notebooks\nYou can use Jupyter notebooks for your own projects or documents. When you make your own notebook, you'll need to create your own cells for text and code.\nTo add a cell, click the + button in the menu bar. It'll start out as a text cell. You can change it to a code cell by clicking inside it so it's highlighted, clicking the drop-down box next to the restart (⟳) button in the menu bar, and choosing \"Code\".\nOther important things to know about the notebook\n\nClick File > Save and Checkpoint to save the notebook.\nThis page runs on remote servers, meaning that when you run a cell, the code is sent somewhere else to be interpreted, then sends the results back to you to be displayed. So if you notice that it doesn't seem to be running anymore, try steps in this order:\nClick Kernel > Interrupt, then try running the cell again.\nClick Kernel > Restart, then run through all of the cells.\nClose and reopen DataHub.\n\n\nPlots created in the notebook can be copied and pasted by right-clicking and selecting copy.\nIf you want to run all of the cells at once, click Cell > Run All.\n\nRun the cell below so that we can get started on our module! These are our import statements (and a few other things). Because of the size of the Python community, if there is a function that you want to use, there is a good chance that someone has written one already and been kind enough to share their work in the form of packages. We can start using those packages by writing import and then the package name.",
"# imports -- just run this cell\nimport scipy\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom scipy.stats import mode\nfrom ipywidgets import interact\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import ListedColormap\nfrom matplotlib import colors\nfrom sklearn.linear_model import LinearRegression\nimport warnings\n\nwarnings.filterwarnings('ignore')\nsns.set_style('darkgrid')\n%matplotlib inline",
"2. Creating our Dataframe <a id='dataframe'></a>\nWe will start by familiarizing ourselves with the data.\nTo visualize the data, we need to load the file first. In the first line below, we assign file_name to the name of our dataset, which is a compilation of the results from the homework you completed last week.\nNote that we have data/ in front of the file name, which means that our file fall17.csv is in the data directory (folder).",
"file_name = 'data/fall17.csv'\ndata = pd.read_csv(file_name)\ndata.head()",
"2.1 Adding features from our data\nWe are going to add several columns to our dataframe. A column for each of the following:\n+ The semester of this class (called class)\n+ Average of all closure/vot for each individual (called clo/vot)\n+ Average voiced closure/vot for each individual (called vclo/vvot)\n+ Average voiceless closure/vot for each individual (called vlclo/vlvot)\nFirst we will add the column for the average of all of the closures for each row. To do that, we'll first pull out just the columns that we want to take the average of.",
"subset = data[['pclo', 'tclo', 'kclo', 'bclo', 'dclo', 'gclo']]\nsubset.head()",
"Then we will take the average across those rows.",
"clo_avg = subset.mean(axis=1)\nclo_avg",
"And finally, we will append those values to our dataframe as a column called clo.",
"data['clo'] = clo_avg\ndata.head()",
"We then repeat this process for all of the other columns that we want to create.",
"data['vot'] = data[['pvot', 'tvot', 'kvot', 'bvot', 'dvot', 'gvot']].mean(axis=1)\ndata['vclo'] = data[['bclo', 'dclo', 'gclo']].mean(axis=1)\ndata['vvot'] = data[['bvot', 'dvot', 'gvot']].mean(axis=1)\ndata['vlclo'] = data[['pclo', 'tclo', 'kclo']].mean(axis=1)\ndata['vlvot'] = data[['pvot', 'tvot', 'kvot']].mean(axis=1) \ndata.head()",
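The mean(axis=1) pattern used in the cells above can be sketched on a toy DataFrame (the column names a and b are made up for illustration):

```python
# axis=1 averages across columns, producing one value per row;
# the result can be assigned back as a new column.
import pandas as pd

toy = pd.DataFrame({'a': [10.0, 20.0], 'b': [30.0, 40.0]})
toy['avg'] = toy[['a', 'b']].mean(axis=1)
print(toy['avg'].tolist())  # [20.0, 30.0]
```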
"3. Exploring the Data <a id='exploring data'></a>\n3.1 Descriptive Statistics\nBelow we compute some basic statistics for the column clo.",
"closure_mode = mode(data['clo'])[0][0]\nprint('Mode: ', closure_mode)\n\ndata['clo'].describe()",
"We can calculate all of the above statistics (except mode) for the entire table with one line.",
"data.describe()",
"3.2 Data Visualization\nNow that we have our data in order, let's get a picture of the data with some plots.\nLet's start by visualizing the distribution of vot with a histogram.",
"sns.distplot(data['vot'], kde_kws={\"label\": \"vot\"})",
"Next, we'll compare the distributions of the voiced and voiceless voice-onset times.",
"sns.distplot(data['vvot'], kde_kws={\"label\": \"voiced vot\"})\nsns.distplot(data['vlvot'], kde_kws={\"label\": \"voiceless vot\"})\nplt.xlabel('ms')",
"The distributions of the three voiceless stops are below.",
"sns.distplot(data['pvot'], kde_kws={\"label\": \"pvot\"})\nsns.distplot(data['tvot'], kde_kws={\"label\": \"tvot\"})\nsns.distplot(data['kvot'], kde_kws={\"label\": \"kvot\"})\n\nplt.xlabel('ms')\nplt.ylabel('proportion per ms')",
"The distributions of the three voiced stops are below.",
"sns.distplot(data['bvot'], kde_kws={\"label\": \"bvot\"})\nsns.distplot(data['dvot'], kde_kws={\"label\": \"dvot\"})\nsns.distplot(data['gvot'], kde_kws={\"label\": \"gvot\"})\n\nplt.xlabel('ms')\nplt.ylabel('proportion per ms')",
"Below, we see the native languages represented in the data.",
"sns.countplot(y=\"language\", data=data)",
"Below, we have the distribution of height.",
"sns.distplot(data['height'])\n\nplt.xlabel('height (cm)')",
"4. Relationships between closures <a id='closures'></a>\nNow we will shift away from single-column visualizations and start to compare values between columns, looking specifically at the different closures in our dataframe. Run the cell below, which will automate some of the plotting for us.",
"def plot_with_equality_line(xs, ys, best_fit=False):\n fig, ax = plt.subplots()\n sns.regplot(xs, ys, fit_reg=best_fit, ax=ax)\n\n lims = [np.min([ax.get_xlim(), ax.get_ylim()]), np.max([ax.get_xlim(), ax.get_ylim()])]\n ax.plot(lims, lims, '--', alpha=0.75, zorder=0, c='black')\n ax.set_xlim(lims)\n ax.set_ylim(lims)\n \n print('Points above line: ' + str(sum(xs < ys)))\n print('Points below line: ' + str(sum(xs > ys)))\n print('Points on line: ' + str(sum(xs == ys)))",
"4.1 Using a line where x = y\nWe'll start by making scatter plots. A scatter plot takes the values (from the chosen columns) of individual rows and plots them as dots on our coordinate plane. So in the plot below, each point will represent a person's tclo and pclo. We are going to plot a dashed line that marks where the x-values are equal to the y-values, which helps us see which value is bigger for an individual. If a point is above the line, its y-value is larger than its x-value. If a point is below, its x-value is greater than its y-value.\n4.1.1 Voiceless",
"plot_with_equality_line(data['tclo'], data['pclo'])\n\nplt.xlabel('tclo (ms)')\nplt.ylabel('pclo (ms)')\n\nplot_with_equality_line(data['kclo'], data['pclo'])\n\nplt.xlabel('kclo (ms)')\nplt.ylabel('pclo (ms)')\n\nplot_with_equality_line(data['kclo'], data['tclo'])\n\nplt.xlabel('kclo (ms)')\nplt.ylabel('tclo (ms)')",
"4.1.2 Voiced",
"plot_with_equality_line(data['dclo'], data['bclo'])\n\nplt.xlabel('dclo (ms)')\nplt.ylabel('bclo (ms)')\n\nplot_with_equality_line(data['gclo'], data['bclo'])\n\nplt.xlabel('gclo (ms)')\nplt.ylabel('bclo (ms)')\n\nplot_with_equality_line(data['gclo'], data['dclo'])\n\nplt.ylabel('dclo (ms)')\nplt.xlabel('gclo (ms)')",
"4.2 Using box-and-whisker plots\nThose scatter plots are informative, but sometimes it's difficult to draw conclusions from them, especially in our case where we have so much raw data. To make it easier to compare the ranges of values that our closures take, we can use boxplots.",
"sns.boxplot(data=data[['pclo', 'tclo', 'kclo']], width=.3, palette=\"Set3\")\n\nplt.ylabel('duration (ms)')\nplt.xlabel('Voiceless Closures')",
"With the above plot, it can be difficult to compare values across the box-and-whisker plots because the outliers require us to zoom out. Below, we will zoom in to the boxes.",
"sns.boxplot(data=data[['pclo', 'tclo', 'kclo']], width=.3, palette=\"Set3\")\n\nplt.ylabel('duration (ms)')\nplt.xlabel('Voiceless Closures')\nplt.ylim(0, 212)",
"We then recreate those graphs, but using our voiced closures.",
"sns.boxplot(data=data[['bclo', 'dclo', 'gclo']], width=.3, palette=\"Set2\")\n\nplt.ylabel('duration (ms)')\nplt.xlabel('Voiced Closures')\n\nsns.boxplot(data=data[['bclo', 'dclo', 'gclo']], width=.3, palette=\"Set2\")\n\nplt.ylabel('duration (ms)')\nplt.xlabel('Voiced Closures')\nplt.ylim(0, 212)",
"Do our box-and-whisker plots corroborate the scatter plot data? Are we able to come to the same conclusions that we did before?\n5. Explore relationships to metadata <a id='metadata'></a>\nNow let's explore relationships between closure and different characteristics of the people who produced those measurements, looking at language and height. We'll draw scatter plots to see whether there are linear relationships between them.\n5.1 Language\nBefore we look at the actual relationship, it is important to recognize any potential limitations of our observations. If you look back up to the bar plot of different native languages, you will see that the majority speak English as their native language.\nQuestion: if we try to come up with conclusions about people who speak Tagalog or Farsi as their first language, would those conclusions be reliable, and why?",
"sns.violinplot(x=\"vot\", y=\"language\", data=data)\n\nplt.xlabel('vot (ms)')",
"Compare the distributions. Can you make any meaningful observations?\n5.2 Height\nNow we'll look at how height influences closure, but first we are going to trim out one of the outliers.",
"trimmed = data[data['clo'] < 250]\n\nsns.lmplot('height', 'clo', data=trimmed, fit_reg=True)\n\nplt.xlabel('height (cm)')\nplt.ylabel('clo (ms)')",
"In the scatter plot above, each dot represents the average closure and height of an individual.\nThe fit_reg=True argument in the code above draws the regression line.\nWhat does this graph tell us about the relationship between height and closure? Regression lines describe a general trend of the data, sometimes referred to as the 'line of best fit'.\nLet's see if there's a different kind of relationship between height and the voiced/voiceless closures.",
"sns.regplot('height', 'vclo', data=trimmed, fit_reg=True)\nsns.regplot('height', 'vlclo', data=trimmed, fit_reg=True)\n\nplt.xlabel('height (cm)')\nplt.ylabel('clo (ms)')",
"5.3 Visualizing Multiple Features\nSo far, we've been presenting two kinds of information in one plot (e.g. language vs. closure). Would presenting more than two at once help our analysis? Let's try it.\nBelow, the color of each dot will depend on the language that person speaks.",
"sns.lmplot('height', 'clo',data=trimmed, fit_reg=False, hue=\"language\")\n\nplt.xlabel('height (cm)')\nplt.ylabel('clo (ms)')",
"What conclusions can you make from the graph above, if any? Is it easy to analyze this plot? Why?\nThe lesson here is that sometimes less is more.\n6. Compare our data with data from last semester <a id='to class'></a>\nIt's often useful to compare current data with past data. Below, we'll explore class data collected from Fall 2015.",
"old_file_name = 'data/fall15.csv'\nfa15 = pd.read_csv(old_file_name)\n\nfa15.head()",
"The data from the previous semester does not have all of the same features (columns) that this semester's data has. So in order to make easy comparisons, we will just select out the columns that are in both dataframes.",
"current_subset = data[fa15.columns]\ncurrent_subset.head()",
"Let's look at the difference between the major statistics of the previous data and this semester's.",
"difference = fa15.describe() - current_subset.describe()\ndifference",
"It's hard to tell how large those differences are, so let's look at the relative difference with respect to this semester's data.",
"relative_difference = difference / current_subset.describe()\nrelative_difference",
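The arithmetic behind the table above can be sketched with hypothetical numbers (110 and 100 are made up for illustration): the relative difference expresses the raw difference as a fraction of the current value.

```python
# (old - new) / new: the change as a fraction of the current value.
old_mean, new_mean = 110.0, 100.0
difference = old_mean - new_mean
rel_diff = difference / new_mean
print(rel_diff)  # 0.1, i.e. a 10% difference
```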
"Now, let's add some color to help spot the largest relative changes. Run the next two cells.",
"scale = pd.DataFrame({'scale': np.arange(-3,5,1)*.2}).set_index(relative_difference.index)\n\ndef background_gradient(s, df, m=None, M=None, cmap='RdBu_r', low=0, high=0):\n # code modified from: https://stackoverflow.com/questions/38931566/pandas-style-background-gradient-both-rows-and-colums\n if m is None:\n m = df.min().min()\n if M is None:\n M = df.max().max()\n rng = M - m\n \n norm = colors.Normalize(m - (rng * low), M + (rng * high))\n normed = norm(s.values)\n c = [colors.rgb2hex(x) for x in ListedColormap(sns.color_palette(cmap,8))(normed)]\n return ['background-color: %s' % color for color in c]\n\nrelative_difference.merge(scale, left_index=True, right_index=True).style.apply(background_gradient,\n df=relative_difference, m=-1, M=1)",
"Now that we can see where the largest relative differences between this semester's and the prior semester's data are, let's take a look at them with further visualization. We'll start with vot because the column has quite a few rows with dark colors.",
"sns.distplot(data['vot'], kde_kws={\"label\": \"Fall 2017 vot\"})\nsns.distplot(fa15['vot'], kde_kws={\"label\": \"Fall 2015 vot\"})\n\nplt.xlabel('ms')",
"Why is this? The graph below should offer some insight.",
"sns.distplot(data['vlvot'], kde_kws={\"label\": \"Fall 2017 vlvot\"}) # notice the call to voiceless vot\nsns.distplot(fa15['vot'], kde_kws={\"label\": \"Fall 2015 vot\"})\n\nplt.xlabel('ms')",
"There are some large differences for kvot, so let's take a look at those distributions.",
"sns.distplot(fa15['kvot'], kde_kws={\"label\": \"Fall 2015 kvot\"})\nsns.distplot(data['kvot'], kde_kws={\"label\": \"Fall 2017 kvot\"})\n\nplt.xlabel('kvot (ms)')",
"Those differences mainly come from the presence of outliers: a particularly large value for Fall 2015 and a particularly small value for Fall 2017. Feel free to copy and paste some of the code from above and explore more of the relationships between the older data and this semester's data. Remember that to insert a cell below, you can either press esc + b or you can click Insert > Insert Cell Below on the toolbar."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
barjacks/pythonrecherche
|
03 intro python II/02 Python II Hausaugaben.ipynb
|
mit
|
[
"Python II homework\n1. Write a function named 'double' that doubles the number 5:",
"double(5)",
"2. Write a for loop that goes through a predefined list of numbers and, using the function you just created, prints each result doubled.",
"lst = list(range(1,5))",
"3. Write code that asks the user for their name and then tells them how many characters their name has.\n4. Write a function named km_rechner that converts miles to kilometers for the calls below and displays the result rounded to one decimal place.",
"km_rechner(5)\nkm_rechner(123)\nkm_rechner(53)",
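One possible solution sketch for exercise 4, assuming 1 mile = 1.60934 km (the exercise does not specify an output format, so the print here is illustrative):

```python
# Convert miles to kilometers, rounded to one decimal place.
def km_rechner(miles):
    km = round(miles * 1.60934, 1)
    print(miles, 'miles =', km, 'km')
    return km

km_rechner(5)    # 8.0
km_rechner(123)  # 197.9
km_rechner(53)   # 85.3
```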
"5. We have dictionaries of measurements that come in quite different formats. Write a function named m_converter that takes these formats into account and converts them to meters.",
"# Our formats\nvar_first = { 'measurement': 3.4, 'scale': 'kilometer' }\nvar_second = { 'measurement': 9.1, 'scale': 'mile' }\nvar_third = { 'measurement': 2.0, 'scale': 'meter' }\nvar_fourth = { 'measurement': 9.0, 'scale': 'inches' }\n\nprint(m_converter(var_first))\nprint(m_converter(var_second))\nprint(m_converter(var_third))\nprint(m_converter(var_fourth))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
smousavi05/EQTransformer
|
examples/detection.ipynb
|
mit
|
[
"(II) Detection and Picking\nThis notebook demonstrates the use of EQTransformer for performing earthquake signal detection and seismic phase (P & S) picking on continuous data. Once you have your seismic data - preferably in MiniSeed format and in individual subfolders for each station - you can perform the detection/picking using the following options:\nOption (I) on preprocessed (hdf5) files:\nThis option is recommended for smaller time periods (a few days to a month). This allows you to test the performance and explore the effects of different parameters, while the provided hdf5 file makes it easy to access the waveforms.\nFor this option you first need to convert your MiniSeed files for each station into a single hdf5 file and a csv file containing the list of traces in the hdf5 file.\nYou can convert MiniSeed files to a hdf5 file using the following command:",
"import os\nfrom EQTransformer.utils.hdf5_maker import preprocessor\n\njson_basepath = os.path.join(os.getcwd(),\"json/station_list.json\")\n\npreprocessor(preproc_dir=\"preproc\",\n mseed_dir='downloads_mseeds', \n stations_json=json_basepath, \n overlap=0.3, \n n_processor=2)",
"This will generate one \"station_name.hdf5\" and one \"station_name.csv\" file for each of your stations and put them into a directory named \"mseed_dir+_hdfs\". Then you need to pass the name of the directory containing your hdf5 & CSV files and a model. You can use relatively low threshold values for the detection and picking since EQTransformer is very robust to false positives. Enabling uncertainty estimation, outputting probabilities, or plotting all the detected events will slow down the process.",
"from EQTransformer.core.predictor import predictor\npredictor(input_dir='downloads_mseeds_processed_hdfs', \n input_model='../ModelsAndSampleData/EqT_original_model.h5',\n output_dir='detections1',\n estimate_uncertainty=False, \n output_probabilities=False,\n number_of_sampling=5,\n loss_weights=[0.02, 0.40, 0.58], \n detection_threshold=0.3, \n P_threshold=0.3,\n S_threshold=0.3, \n number_of_plots=10,\n plot_mode='time',\n batch_size=500,\n number_of_cpus=4,\n keepPS=False,\n spLimit=60) ",
"If you are using local MiniSeed files you can generate a station_list.json by supplying an absolute path to a directory containing Miniseed files and a station location dictionary using the stationListFromMseed function like the following:",
"from EQTransformer.utils.hdf5_maker import stationListFromMseed\n\nmseed_directory = '/Users/username/Downloads/EQTransformer/examples/downloads_mseeds'\nstation_locations = {\"CA06\": [35.59962, -117.49268, 796.4], \"CA10\": [35.56736, -117.667427, 835.9]}\nstationListFromMseed(mseed_directory, station_locations)",
"Option (II) directly on downloaded MiniSeed files:\nYou can perform the detection/picking directly on .mseed files. \nThis saves both preprocessing time and the extra space needed for the hdf5 file. However, it can be more memory intensive, so it is recommended when the MiniSeed files are one month long or shorter.\nThis option also does not allow you to estimate uncertainties, write the prediction probabilities, or take advantage of hdf5 files, which make it easy to access the raw event waveforms based on detection results.",
"from EQTransformer.core.mseed_predictor import mseed_predictor\nmseed_predictor(input_dir='downloads_mseeds', \n input_model='../ModelsAndSampleData/EqT_original_model.h5',\n stations_json=json_basepath,\n output_dir='detections2',\n loss_weights=[0.02, 0.40, 0.58], \n detection_threshold=0.7, \n P_threshold=0.3,\n S_threshold=0.3, \n number_of_plots=10,\n plot_mode='time_frequency',\n normalization_mode='std',\n batch_size=500,\n overlap=0.9,\n gpuid=None,\n gpu_limit=None) ",
"Prediction outputs for each station will be written in your output directory (i.e. 'detections').\n'X_report.txt' contains processing info on the input parameters used for detection/picking and final results such as running time and the total number of detected events (these are unique events; duplicates have already been removed).\n'X_prediction_results.csv' contains the detection/picking results. In the figures folder, you can find plots for the number of events that you specified in the command above."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
evanmiltenburg/python-for-text-analysis
|
Chapters-colab/Chapter_08_Comparison_of_lists_and_sets.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Chapters-colab/Chapter_08_Comparison_of_lists_and_sets.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"%%capture\n!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip\n!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip\n!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip\n\n!unzip Data.zip -d ../\n!unzip images.zip -d ./\n!unzip Extra_Material.zip -d ../\n\n!rm Data.zip\n!rm Extra_Material.zip\n!rm images.zip",
"Chapter 8 - Comparison of lists and sets\nYou've been introduced to two containers in this topic: lists and sets. However, a question we often get is when to use a list and when a set. The goal of this chapter is to help you answer that question.\nAt the end of this chapter, you will be able to:\n* decide when to use a list and when to use a set\nIf you have questions about this chapter, please contact us (cltl.python.course@gmail.com).\n1. Properties of sets and lists\nSets: unordered collection of unique elements\nLists: ordered collection of elements\nComparison lists vs sets\n| property | set | list | \n|--------- |---------|---|\n| can contain duplicates | no | yes | \n| ordered | no | yes |\n| finding element(s) | relatively quick | relatively slow | \n| can contain | immutable objects | all objects |\n1.1 Duplication of elements\n\nlist: yes\nset: no\n\nAs shown below, lists allow duplicates (e.g. the integer 1 in the example below), sets do not.",
"list1 = [1, 2, 1, 3, 4, 1]\nset1 = {1, 2, 3, 4}\nset2 = {1, 2, 1, 3, 4, 1}\n\nprint('list1', list1)\nprint('set1', set1)\nprint('set2', set2)\nprint('set1 is the same as set2:', set1 == set2)",
"Tip\nYou can create a set from a list. Attention: duplicates will be removed.",
"a_list = [1,2,3,4, 4]\n\na_set = set(a_list)\n\nprint(a_list)\nprint(a_set)",
"1.2 Order (with respect to how elements are added to it)\n\nlist: yes\nset: no\n\nThe order in which you add elements to a list matters. Please look at the following example:",
"a_list = []\na_list.append(2)\na_list.append(1)\nprint(a_list)",
"However, this information is not kept in sets:",
"a_set = set()\na_set.add(2)\na_set.add(1)\nprint(a_set)",
"Is it possible to understand the order of items in a set? Yes, but we will not cover it here since it is not important for the tasks we cover.\nWhat, then, is the take-home message about order? The answer is: you have it for lists, but not for sets.\nIf you want to learn more about this, look up the data structure called hash table (https://en.wikipedia.org/wiki/Hash_table)\n1.3 Finding element(s)\nIt's usually quicker to check if an element is in a set than to check if it is in a list.\nHence, this will usually be relatively slow:",
"list1 = [1,2,3,4]\nprint(1 in list1)",
"And this will usually be relatively quick:",
"set1 = {1,2,3,4}\nprint(1 in set1)",
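The speed difference can be measured directly: a set membership test is a hash lookup (roughly constant time), while a list membership test scans elements one by one. A quick sketch with timeit:

```python
# Worst case for the list: the element we look for is at the very end.
import timeit

big_list = list(range(100_000))
big_set = set(big_list)

list_time = timeit.timeit(lambda: 99_999 in big_list, number=100)
set_time = timeit.timeit(lambda: 99_999 in big_set, number=100)
print('list:', list_time, 'set:', set_time)  # the set lookup is far faster
```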
"Is it possible to understand the speed of finding elements in sets and lists? Yes, but we will not cover it here since it is not important for the tasks we cover.\nWhat, then, is the take-home message about speed? The answer is: it's probably quicker to use sets.\n1.4 Mutability of the elements they can contain\nSets can only contain immutable objects.\nThis works:",
"a_set = set()\na_set.add(1)\nprint(a_set)",
"This does not:",
"a_set.add([1])",
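The rule illustrated above can be summarized in one snippet: hashable (immutable) objects such as tuples and frozensets can be set elements, while mutable objects such as lists cannot.

```python
s = set()
s.add((1, 2))          # tuple: immutable, works
s.add(frozenset({3}))  # frozenset: an immutable set, works
try:
    s.add([4])         # list: mutable, raises TypeError
except TypeError:
    print('lists are unhashable')
print(len(s))  # 2
```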
"lists can contain any Python object.\nThis works:",
"a_list = []\na_list.append(1)\nprint(a_list)",
"This works as well:",
"a_list = []\na_list.append([1])\nprint(a_list)",
"2. When to choose what?\nLists if you need:\n1. duplicates\n2. the order in which items are added\n3. mutable objects\nAll other scenarios -> sets\nExercises\nExercise 1:\nWhich container can contain duplicates?\nExercise 2:\nWhich container is the faster choice when checking whether it contains an element? \nExercise 3:\nYou want to collect and count all the people taking this class. You can only use their first names. Do you chose a list or a set?\nExercise 4:\nCan you think of a use case for a set and a list (perhaps you think of text analysis)?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Lattecom/HYStudy
|
scripts/[HYStudy 15th] Matplotlib 2.ipynb
|
mit
|
[
"import matplotlib.pylab as plt\nimport seaborn as sns\nimport numpy as np\n\nsns.set(palette=\"hls\", font_scale=1.5)",
"Magic Commands\n\n%matplotlib inline: omit show()\n%matplotlib qt: display plots in an external window",
"# make point with cumulative sum\npoints = np.random.randn(50).cumsum()\npoints",
"Line plot",
"# plt.plot(x, y): x, y = point(x, y) on coordinate\n# put y only(default x = auto)\nplt.plot(points)\nplt.show()\n\n# put x and y points\nplt.plot(range(0, 250, 5), points)\nplt.show()",
"Style setting\n\n\n{color}{marker}{line}\n\n\ncolor: http://matplotlib.org/examples/color/named_colors.html\n\nmarker: http://matplotlib.org/api/markers_api.html?highlight=marker#module-matplotlib.markers\nline: http://matplotlib.org/api/lines_api.html?highlight=line#matplotlib.lines.Line2D.set_linestyle\nstyle(other attributes): http://matplotlib.org/1.5.1/api/lines_api.html#matplotlib.lines.Line2D",
"# set color, marker, line\nplt.plot(points, 'co:')\nplt.show()\n\n# style setting\nplt.plot(points, 'co-', lw=3, ms=5, mfc='b') # lw=linewidth, ms=marker size, mfc=marker face color\nplt.xlim(-10, 60) # set x axis limit\nplt.ylim(-5, 5) # set y axis limit\nplt.show()\n\n# style setting\nplt.plot(points, 'co-', lw=3, ms=5, mfc='b')\nplt.xlim(-10, 60) # set x axis limit\nplt.ylim(-5, 5) # set y axis limit\nplt.xticks([0, 25, 50]) # set x axis ticks\nplt.yticks([-7, -3, 1], [r'$\\theta$', r'2$\\theta$', r'3$\\theta$']) # LaTeX input available\nplt.grid(False) # grid off\nplt.show()\n\n# draw multiple lines\n## plt.plot(x1, y1, xy1_style, x2, y2, xy2_style, x3, y3, xy3_style)\nplt.plot(points, points, 'bo',\n points, 2*points, 'cs-',\n points, 0.5*points, 'r.', lw=0.5, ms=8)\nplt.show()\n\n# draw multiple lines -2\nplt.plot(points, 'co-', lw=3, ms=5, mfc='b')\nplt.plot(points*0.5)\n\nplt.show()",
"Legend, Title\n\nplt.legend(loc=x): x = legend location\nset legend location: https://matplotlib.org/api/legend_api.html\n\n\nplt.xlabel(\"label name\"): set x label as \"label name\"\nplt.ylabel(\"label name\"): set y label as \"label name\"\nplt.title(\"plot title\"): set plot title as \"plot title\"",
"# legend, title\nplt.rc('font', family='nanumgothic') # set font family, use Korean\nplt.plot(points, label='random points') # set plot 1 label\nplt.plot(0.5 * points, label='임의값') # set plot 2 label\nplt.legend()\nplt.xlabel('random x') # set x label\nplt.ylabel('random y') # set y label\nplt.title('random plot') # set the title\nplt.show()",
"Annotation\n\nannotation attributes: https://matplotlib.org/users/annotations_intro.html\nmore details: https://matplotlib.org/users/annotations_guide.html#plotting-guide-annotation",
"plt.plot(points)\nplt.annotate(# text, arrow point(x, y), xy coordinate\n r'(text)', xy=(40, -4), xycoords='data',\n # text location from text coordinate, text coordinate\n xytext=(-50, 50), textcoords='offset points',\n # font, arrow shape\n fontsize=20, arrowprops=dict(arrowstyle=\"->\", linewidth=3, color=\"b\"))\nplt.show()",
"Figure size\n\nplt.figure(figsize=(x, y)): set the figure size as x, y",
"plt.figure(figsize=(20, 3))\nplt.plot(points)\nplt.show()",
"Axes, Subplots\n\nDoc: http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes\nplt.subplot(X, Y, Z): make a grid of subplots with shape (X, Y); Z is the position index within the grid",
"ax1 = plt.subplot(2, 1, 1)\nplt.plot(points)\n\nax2 = plt.subplot(2, 1, 2)\nplt.plot(np.random.randn(50))\n\nplt.show()",
"Bar chart\n\nDoc for vertical bar: http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.bar\nDoc for horizontal bar: http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.barh",
"x = [3, 2, 1]\ny = [1, 2, 3]\nxlabel = ['한개', '두개', '세개']\n\n# plt.bar: vertical / plt.barh: horizontal\nplt.bar(x, y, align='center') # align: center (default), edge\nplt.xticks(x, xlabel)\nplt.show()",
"Histogram\n\nDoc: http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.hist",
"x = np.random.randint(0, 10, 10)\nprint(x)\n\narrays, bins, patches = plt.hist(x, bins=6)\nplt.show()\n\n# value counts for each bin\nprint(arrays)\n\n# the range of each bin\nprint(bins)",
"Pie chart\n\nDoc: http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.pie\nDemo: https://matplotlib.org/1.5.3/examples/pylab_examples/pie_demo2.html",
"plt.pie([30, 50, 10], # size\n labels = ['피자', '햄버거', '감자튀김'], # label\n colors = ['pink', 'salmon', 'tomato'], # colors\n explode = (0.01, 0.01, 0.2), # explode\n autopct = '%.2f%%', # set the ratio label format\n shadow = True, # pie chart shadow\n startangle = 0) # rotate the chart\nplt.axis('equal') # chart shape slope\nplt.title('품목별 매출비중')\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
lizoyu/kaggle-DigitRecognizer
|
jupyter/Traditional methods - opencv+scikit_learn.ipynb
|
gpl-3.0
|
[
"Traditional methods - OpenCV+scikit_learn\nThis notebook is for traditional methods of image classification, using OpenCV to preprocess and extract features and then a machine learning algorithm to classify.\nGenerally, it can be divided into three modules: preprocessing, feature extraction and classification.\nFirst, some preparation work.",
"import cv2\nimport numpy as np\nfrom skimage.feature import hog\nfrom sklearn.decomposition import PCA\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.svm import SVC\nfrom lib.data_utils import get_MNIST_data",
"Then read the MNIST data.",
"data = get_MNIST_data(subtract_mean=False)\n\n# check if we load the data successfully\nprint(data['X_train'].shape)",
"Preprocessing\nFeature extraction\nDifferent methods exist to extract feature. Here we try ORB (Oriented FAST and Rotated BRIEF).",
"# check the min number of keypoints\norb = cv2.ORB_create(edgeThreshold=2, patchSize=2)\nlen_k = 500\nfor key in ['X_train', 'X_test']:\n for img in data[key]:\n k = orb.detect(img.astype(np.uint8).reshape((28,28)))\n if len(k) < len_k:\n len_k = len(k)\nprint('minimum number of keypoints:', len_k)\n\n# compute the ORB descriptors\nfeats = {'X_train': np.zeros((41000,len_k*32)), 'X_test': np.zeros((1000,len_k*32))}\nfor key in feats.keys():\n print('compute for data: ', key)\n for i, img in zip(range(data[key].shape[0]), data[key]):\n k = orb.detect(img.astype(np.uint8).reshape((28,28)))\n _, feat = orb.compute(img.astype(np.uint8).reshape((28,28)), k[:len_k])\n feats[key][i,:] = feat.reshape(-1)\n\n# check the computed features size\nprint(feats['X_train'].shape)\nprint(feats['X_test'].shape)",
"Here we try HOG (Histogram of Oriented Gradients).",
"# compute the HOG for each image\nfeats = {'X_train': [], 'X_test': []}\nfor key in feats.keys():\n print('compute for data: ', key)\n for img in data[key]:\n feat = hog(img.reshape((28,28)),\n pixels_per_cell=(7,7),\n cells_per_block=(4,4),\n block_norm='L2-Hys')\n feats[key].append(feat.reshape(-1))\n\nfeats['X_train'] = np.array(feats['X_train'])\nfeats['X_test'] = np.array(feats['X_test'])\n# check the computed features size\nprint(feats['X_train'].shape)\nprint(feats['X_test'].shape)",
"It's possible to use PCA to reduce the dimensionality of the features, avoiding the curse of dimensionality for common classifiers.",
"# initialize PCA with top 50\npca = PCA(n_components=50)\npca.fit(feats['X_train'])\nfeats_reduce = {'X_train': [], 'X_test': []}\nfor key in feats.keys():\n feats_reduce[key] = pca.transform(feats[key])\n\n# check the computed features size\nprint(feats_reduce['X_train'].shape)\nprint(feats_reduce['X_test'].shape)",
"Classification\nDifferent machine learning methods are used to classify the digits.",
"# decision tree\ndt = DecisionTreeClassifier()\ndt.fit(feats['X_train'],data['y_train'])\nprint(dt.score(feats['X_test'], data['y_test']))\n# test accuracy of 57.2% using ORB\n# test accuracy of 90.2% using HOG (7, 2)\n# test accuracy of 90.3% using HOG (7, 4)\n\n# decision tree for reduced data\ndt = DecisionTreeClassifier()\ndt.fit(feats_reduce['X_train'],data['y_train'])\nprint(dt.score(feats_reduce['X_test'], data['y_test']))\n# test accuracy of 89% using HOG (7, 2)\n\n# k nearest neighbors\nknn = KNeighborsClassifier(n_neighbors=10)\nknn.fit(feats['X_train'],data['y_train'])\nprint(knn.score(feats['X_test'], data['y_test']))\n# test accuracy of 29.9% using ORB\n# test accuracy of 94.2% using HOG (7, 2)\n# test accuracy of 97.3% using HOG (7, 4)\n\n# k nearest neighbors for reduced data\nknn = KNeighborsClassifier(n_neighbors=10)\nknn.fit(feats_reduce['X_train'],data['y_train'])\nprint(knn.score(feats_reduce['X_test'], data['y_test']))\n# test accuracy of 94% using HOG (7, 2)\n\n# random forest\nrf = RandomForestClassifier()\nrf.fit(feats['X_train'],data['y_train'])\nprint(rf.score(feats['X_test'], data['y_test']))\n# test accuracy of 59.6% using ORB\n# test accuracy of 96% using HOG (7, 2)\n# test accuracy of 94.3% using HOG (8, 3)\n# test accuracy of 96% using HOG (7, 4)\n\n# SVM\nsvm = SVC()\nsvm.fit(feats['X_train'],data['y_train'])\nprint(svm.score(feats['X_test'], data['y_test']))\n# test accuracy of 51.1% using ORB\n# test accuracy of 11.5% using HOG (7, 4)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/hammoz-consortium/cmip6/models/mpiesm-1-2-ham/seaice.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: HAMMOZ-CONSORTIUM\nSource ID: MPIESM-1-2-HAM\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:03\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'hammoz-consortium', 'mpiesm-1-2-ham', 'seaice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Model\n2. Key Properties --> Variables\n3. Key Properties --> Seawater Properties\n4. Key Properties --> Resolution\n5. Key Properties --> Tuning Applied\n6. Key Properties --> Key Parameter Values\n7. Key Properties --> Assumptions\n8. Key Properties --> Conservation\n9. Grid --> Discretisation --> Horizontal\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Seaice Categories\n12. Grid --> Snow On Seaice\n13. Dynamics\n14. Thermodynamics --> Energy\n15. Thermodynamics --> Mass\n16. Thermodynamics --> Salt\n17. Thermodynamics --> Salt --> Mass Transport\n18. Thermodynamics --> Salt --> Thermodynamics\n19. Thermodynamics --> Ice Thickness Distribution\n20. Thermodynamics --> Ice Floe Size Distribution\n21. Thermodynamics --> Melt Ponds\n22. Thermodynamics --> Snow Processes\n23. Radiative Processes \n1. Key Properties --> Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of sea ice model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Variables\nList of prognostic variables in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the sea ice component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3. Key Properties --> Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Ocean Freezing Point Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Target\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Simulations\nIs Required: TRUE Type: STRING Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Metrics Used\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any observed metrics used in tuning model/parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.5. Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nWhich variables were changed during the tuning process?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nWhat values were specified for the following parameters if used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Additional Parameters\nIs Required: FALSE Type: STRING Cardinality: 0.N\nIf you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. On Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Missing Processes\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nProvide a general description of conservation methodology.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Properties\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Was Flux Correction Used\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes conservation involve flux correction?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Grid --> Discretisation --> Horizontal\nSea ice discretisation in the horizontal\n9.1. Grid\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nGrid on which sea ice is horizontally discretised?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the type of sea ice grid?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the advection scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.4. Thermodynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component, in seconds?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.5. Dynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component, in seconds?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.6. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional horizontal discretisation details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Number Of Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using multi-layers specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"10.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional vertical grid details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Grid --> Seaice Categories\nWhat method is used to represent sea ice categories?\n11.1. Has Multiple Categories\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11.2. Number Of Categories\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using sea ice categories specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Category Limits\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Other\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution ITD (i.e. there is no explicit ITD) but where there is an assumed distribution and fluxes are computed accordingly.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Grid --> Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow on ice represented in this model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Number Of Snow Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels of snow on ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.3. Snow Fraction\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.4. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional details related to snow on ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Transport In Thickness Space\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Ice Strength Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhich method of sea ice strength formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Redistribution\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Rheology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nRheology: what is the ice deformation formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Thermodynamics --> Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the energy formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Thermal Conductivity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of thermal conductivity is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of heat diffusion?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.4. Basal Heat Flux\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.5. Fixed Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.6. Heat Content Of Precipitation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.7. Precipitation Effects On Salinity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Thermodynamics --> Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Ice Vertical Growth And Melt\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Ice Lateral Melting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice lateral melting?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Ice Surface Sublimation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.5. Frazil Ice\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of frazil ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Thermodynamics --> Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17. Thermodynamics --> Salt --> Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value, specify this value in PSU.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Thermodynamics --> Salt --> Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value, specify this value in PSU.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Thermodynamics --> Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice thickness distribution represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Thermodynamics --> Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice floe-size represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Thermodynamics --> Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre melt ponds included in the sea ice model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21.2. Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat method of melt pond formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.3. Impacts\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat do melt ponds have an impact on?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Thermodynamics --> Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.2. Snow Aging Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Has Snow Ice Formation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.4. Snow Ice Formation Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow ice formation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.5. Redistribution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat is the impact of ridging on snow cover?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.6. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used to handle surface albedo.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Ice Radiation Transmission\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/ncc/cmip6/models/noresm2-lm/seaice.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: NCC\nSource ID: NORESM2-LM\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:24\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ncc', 'noresm2-lm', 'seaice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Model\n2. Key Properties --> Variables\n3. Key Properties --> Seawater Properties\n4. Key Properties --> Resolution\n5. Key Properties --> Tuning Applied\n6. Key Properties --> Key Parameter Values\n7. Key Properties --> Assumptions\n8. Key Properties --> Conservation\n9. Grid --> Discretisation --> Horizontal\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Seaice Categories\n12. Grid --> Snow On Seaice\n13. Dynamics\n14. Thermodynamics --> Energy\n15. Thermodynamics --> Mass\n16. Thermodynamics --> Salt\n17. Thermodynamics --> Salt --> Mass Transport\n18. Thermodynamics --> Salt --> Thermodynamics\n19. Thermodynamics --> Ice Thickness Distribution\n20. Thermodynamics --> Ice Floe Size Distribution\n21. Thermodynamics --> Melt Ponds\n22. Thermodynamics --> Snow Processes\n23. Radiative Processes \n1. Key Properties --> Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of sea ice model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Variables\nList of prognostic variables in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the sea ice component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3. Key Properties --> Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Ocean Freezing Point Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Target\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Simulations\nIs Required: TRUE Type: STRING Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Metrics Used\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any observed metrics used in tuning model/parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.5. Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nWhich variables were changed during the tuning process?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nWhat values were specified for the following parameters if used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Additional Parameters\nIs Required: FALSE Type: STRING Cardinality: 0.N\nIf you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. On Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Missing Processes\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nProvide a general description of conservation methodology.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Properties\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Was Flux Correction Used\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes conservation involve flux correction?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Grid --> Discretisation --> Horizontal\nSea ice discretisation in the horizontal\n9.1. Grid\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOn which grid is sea ice horizontally discretised?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the type of sea ice grid?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the advection scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.4. Thermodynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component, in seconds?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.5. Dynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component, in seconds?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.6. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional horizontal discretisation details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Number Of Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using multi-layers specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"10.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional vertical grid details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Grid --> Seaice Categories\nWhat method is used to represent sea ice categories?\n11.1. Has Multiple Categories\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11.2. Number Of Categories\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using sea ice categories specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Category Limits\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Other\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but fluxes are computed from an assumed distribution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Grid --> Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow on ice represented in this model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Number Of Snow Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels of snow on ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.3. Snow Fraction\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.4. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional details related to snow on ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Transport In Thickness Space\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Ice Strength Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhich method of sea ice strength formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Redistribution\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Rheology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nRheology, what is the ice deformation formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Thermodynamics --> Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the energy formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Thermal Conductivity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of thermal conductivity is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of heat diffusion?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.4. Basal Heat Flux\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.5. Fixed Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.6. Heat Content Of Precipitation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.7. Precipitation Effects On Salinity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Thermodynamics --> Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Ice Vertical Growth And Melt\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Ice Lateral Melting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice lateral melting?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Ice Surface Sublimation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.5. Frazil Ice\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of frazil ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Thermodynamics --> Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17. Thermodynamics --> Salt --> Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value, specify this value in PSU.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Thermodynamics --> Salt --> Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value, specify this value in PSU.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Thermodynamics --> Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice thickness distribution represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Thermodynamics --> Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice floe-size represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Thermodynamics --> Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre melt ponds included in the sea ice model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21.2. Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat method of melt pond formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.3. Impacts\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat do melt ponds have an impact on?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Thermodynamics --> Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.2. Snow Aging Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Has Snow Ice Formation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.4. Snow Ice Formation Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow ice formation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.5. Redistribution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat is the impact of ridging on snow cover?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.6. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used to handle surface albedo.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Ice Radiation Transmission\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs-l10n
|
site/en-snapshot/lattice/tutorials/canned_estimators.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"TF Lattice Canned Estimators\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lattice/tutorials/canned_estimators\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/lattice/blob/master/docs/tutorials/canned_estimators.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/lattice/blob/master/docs/tutorials/canned_estimators.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/lattice/docs/tutorials/canned_estimators.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\n\nWarning: Estimators are not recommended for new code. Estimators run v1.Session-style code, which is more difficult to write correctly and can behave unexpectedly, especially when combined with TF 2 code. Estimators do fall under our compatibility guarantees, but will receive no fixes other than security vulnerabilities. See the migration guide for details.\n\nOverview\nCanned estimators are quick and easy ways to train TFL models for typical use cases. This guide outlines the steps needed to create a TFL canned estimator.\nSetup\nInstalling the TF Lattice package:",
"#@test {\"skip\": true}\n!pip install tensorflow-lattice",
"Importing required packages:",
"import tensorflow as tf\n\nimport copy\nimport logging\nimport numpy as np\nimport pandas as pd\nimport sys\nimport tensorflow_lattice as tfl\nfrom tensorflow import feature_column as fc\nlogging.disable(sys.maxsize)",
"Downloading the UCI Statlog (Heart) dataset:",
"csv_file = tf.keras.utils.get_file(\n 'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')\ndf = pd.read_csv(csv_file)\ntarget = df.pop('target')\ntrain_size = int(len(df) * 0.8)\ntrain_x = df[:train_size]\ntrain_y = target[:train_size]\ntest_x = df[train_size:]\ntest_y = target[train_size:]\ndf.head()",
"Setting the default values used for training in this guide:",
"LEARNING_RATE = 0.01\nBATCH_SIZE = 128\nNUM_EPOCHS = 500\nPREFITTING_NUM_EPOCHS = 10",
"Feature Columns\nAs for any other TF estimator, data needs to be passed to the estimator, typically via an input_fn, and parsed using FeatureColumns.",
"# Feature columns.\n# - age\n# - sex\n# - cp chest pain type (4 values)\n# - trestbps resting blood pressure\n# - chol serum cholestoral in mg/dl\n# - fbs fasting blood sugar > 120 mg/dl\n# - restecg resting electrocardiographic results (values 0,1,2)\n# - thalach maximum heart rate achieved\n# - exang exercise induced angina\n# - oldpeak ST depression induced by exercise relative to rest\n# - slope the slope of the peak exercise ST segment\n# - ca number of major vessels (0-3) colored by flourosopy\n# - thal 3 = normal; 6 = fixed defect; 7 = reversable defect\nfeature_columns = [\n fc.numeric_column('age', default_value=-1),\n fc.categorical_column_with_vocabulary_list('sex', [0, 1]),\n fc.numeric_column('cp'),\n fc.numeric_column('trestbps', default_value=-1),\n fc.numeric_column('chol'),\n fc.categorical_column_with_vocabulary_list('fbs', [0, 1]),\n fc.categorical_column_with_vocabulary_list('restecg', [0, 1, 2]),\n fc.numeric_column('thalach'),\n fc.categorical_column_with_vocabulary_list('exang', [0, 1]),\n fc.numeric_column('oldpeak'),\n fc.categorical_column_with_vocabulary_list('slope', [0, 1, 2]),\n fc.numeric_column('ca'),\n fc.categorical_column_with_vocabulary_list(\n 'thal', ['normal', 'fixed', 'reversible']),\n]",
"TFL canned estimators use the type of the feature column to decide what type of calibration layer to use. We use a tfl.layers.PWLCalibration layer for numeric feature columns and a tfl.layers.CategoricalCalibration layer for categorical feature columns.\nNote that categorical feature columns are not wrapped by an embedding feature column. They are directly fed into the estimator.\nCreating input_fn\nAs for any other estimator, you can use an input_fn to feed data to the model for training and evaluation. TFL estimators can automatically calculate quantiles of the features and use them as input keypoints for the PWL calibration layer. To do so, they require passing a feature_analysis_input_fn, which is similar to the training input_fn but with a single epoch or a subsample of the data.",
"train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=train_x,\n y=train_y,\n shuffle=False,\n batch_size=BATCH_SIZE,\n num_epochs=NUM_EPOCHS,\n num_threads=1)\n\n# feature_analysis_input_fn is used to collect statistics about the input.\nfeature_analysis_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=train_x,\n y=train_y,\n shuffle=False,\n batch_size=BATCH_SIZE,\n # Note that we only need one pass over the data.\n num_epochs=1,\n num_threads=1)\n\ntest_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=test_x,\n y=test_y,\n shuffle=False,\n batch_size=BATCH_SIZE,\n num_epochs=1,\n num_threads=1)\n\n# Serving input fn is used to create saved models.\nserving_input_fn = (\n tf.estimator.export.build_parsing_serving_input_receiver_fn(\n feature_spec=fc.make_parse_example_spec(feature_columns)))",
"Feature Configs\nFeature calibration and per-feature configurations are set using tfl.configs.FeatureConfig. Feature configurations include monotonicity constraints, per-feature regularization (see tfl.configs.RegularizerConfig), and lattice sizes for lattice models.\nIf no configuration is defined for an input feature, the default configuration in tfl.config.FeatureConfig is used.",
"# Feature configs are used to specify how each feature is calibrated and used.\nfeature_configs = [\n tfl.configs.FeatureConfig(\n name='age',\n lattice_size=3,\n # By default, input keypoints of pwl are quantiles of the feature.\n pwl_calibration_num_keypoints=5,\n monotonicity='increasing',\n pwl_calibration_clip_max=100,\n # Per feature regularization.\n regularizer_configs=[\n tfl.configs.RegularizerConfig(name='calib_wrinkle', l2=0.1),\n ],\n ),\n tfl.configs.FeatureConfig(\n name='cp',\n pwl_calibration_num_keypoints=4,\n # Keypoints can be uniformly spaced.\n pwl_calibration_input_keypoints='uniform',\n monotonicity='increasing',\n ),\n tfl.configs.FeatureConfig(\n name='chol',\n # Explicit input keypoint initialization.\n pwl_calibration_input_keypoints=[126.0, 210.0, 247.0, 286.0, 564.0],\n monotonicity='increasing',\n # Calibration can be forced to span the full output range by clamping.\n pwl_calibration_clamp_min=True,\n pwl_calibration_clamp_max=True,\n # Per feature regularization.\n regularizer_configs=[\n tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),\n ],\n ),\n tfl.configs.FeatureConfig(\n name='fbs',\n # Partial monotonicity: output(0) <= output(1)\n monotonicity=[(0, 1)],\n ),\n tfl.configs.FeatureConfig(\n name='trestbps',\n pwl_calibration_num_keypoints=5,\n monotonicity='decreasing',\n ),\n tfl.configs.FeatureConfig(\n name='thalach',\n pwl_calibration_num_keypoints=5,\n monotonicity='decreasing',\n ),\n tfl.configs.FeatureConfig(\n name='restecg',\n # Partial monotonicity: output(0) <= output(1), output(0) <= output(2)\n monotonicity=[(0, 1), (0, 2)],\n ),\n tfl.configs.FeatureConfig(\n name='exang',\n # Partial monotonicity: output(0) <= output(1)\n monotonicity=[(0, 1)],\n ),\n tfl.configs.FeatureConfig(\n name='oldpeak',\n pwl_calibration_num_keypoints=5,\n monotonicity='increasing',\n ),\n tfl.configs.FeatureConfig(\n name='slope',\n # Partial monotonicity: output(0) <= output(1), output(1) <= output(2)\n 
monotonicity=[(0, 1), (1, 2)],\n ),\n tfl.configs.FeatureConfig(\n name='ca',\n pwl_calibration_num_keypoints=4,\n monotonicity='increasing',\n ),\n tfl.configs.FeatureConfig(\n name='thal',\n # Partial monotonicity:\n # output(normal) <= output(fixed)\n # output(normal) <= output(reversible) \n monotonicity=[('normal', 'fixed'), ('normal', 'reversible')],\n ),\n]",
"Calibrated Linear Model\nTo construct a TFL canned estimator, construct a model configuration from tfl.configs. A calibrated linear model is constructed using tfl.configs.CalibratedLinearConfig. It applies piecewise-linear and categorical calibration on the input features, followed by a linear combination and an optional output piecewise-linear calibration. When using output calibration or when output bounds are specified, the linear layer will apply weighted averaging on calibrated inputs.\nThis example creates a calibrated linear model on the first 5 features. We use\ntfl.visualization to plot the model graph with the calibrator plots.",
"# Model config defines the model structure for the estimator.\nmodel_config = tfl.configs.CalibratedLinearConfig(\n feature_configs=feature_configs,\n use_bias=True,\n output_calibration=True,\n regularizer_configs=[\n # Regularizer for the output calibrator.\n tfl.configs.RegularizerConfig(name='output_calib_hessian', l2=1e-4),\n ])\n# A CannedClassifier is constructed from the given model config.\nestimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns[:5],\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42))\nestimator.train(input_fn=train_input_fn)\nresults = estimator.evaluate(input_fn=test_input_fn)\nprint('Calibrated linear test AUC: {}'.format(results['auc']))\nsaved_model_path = estimator.export_saved_model(estimator.model_dir,\n serving_input_fn)\nmodel_graph = tfl.estimators.get_model_graph(saved_model_path)\ntfl.visualization.draw_model_graph(model_graph)",
"Calibrated Lattice Model\nA calibrated lattice model is constructed using tfl.configs.CalibratedLatticeConfig. A calibrated lattice model applies piecewise-linear and categorical calibration on the input features, followed by a lattice model and an optional output piecewise-linear calibration.\nThis example creates a calibrated lattice model on the first 5 features.",
"# This is calibrated lattice model: Inputs are calibrated, then combined\n# non-linearly using a lattice layer.\nmodel_config = tfl.configs.CalibratedLatticeConfig(\n feature_configs=feature_configs,\n regularizer_configs=[\n # Torsion regularizer applied to the lattice to make it more linear.\n tfl.configs.RegularizerConfig(name='torsion', l2=1e-4),\n # Globally defined calibration regularizer is applied to all features.\n tfl.configs.RegularizerConfig(name='calib_hessian', l2=1e-4),\n ])\n# A CannedClassifier is constructed from the given model config.\nestimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns[:5],\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42))\nestimator.train(input_fn=train_input_fn)\nresults = estimator.evaluate(input_fn=test_input_fn)\nprint('Calibrated lattice test AUC: {}'.format(results['auc']))\nsaved_model_path = estimator.export_saved_model(estimator.model_dir,\n serving_input_fn)\nmodel_graph = tfl.estimators.get_model_graph(saved_model_path)\ntfl.visualization.draw_model_graph(model_graph)",
"Calibrated Lattice Ensemble\nWhen the number of features is large, you can use an ensemble model, which creates multiple smaller lattices for subsets of the features and averages their output instead of creating just a single huge lattice. Ensemble lattice models are constructed using tfl.configs.CalibratedLatticeEnsembleConfig. A calibrated lattice ensemble model applies piecewise-linear and categorical calibration on the input features, followed by an ensemble of lattice models and an optional output piecewise-linear calibration.\nRandom Lattice Ensemble\nThe following model config uses a random subset of features for each lattice.",
"# This is random lattice ensemble model with separate calibration:\n# model output is the average output of separately calibrated lattices.\nmodel_config = tfl.configs.CalibratedLatticeEnsembleConfig(\n feature_configs=feature_configs,\n num_lattices=5,\n lattice_rank=3)\n# A CannedClassifier is constructed from the given model config.\nestimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns,\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42))\nestimator.train(input_fn=train_input_fn)\nresults = estimator.evaluate(input_fn=test_input_fn)\nprint('Random ensemble test AUC: {}'.format(results['auc']))\nsaved_model_path = estimator.export_saved_model(estimator.model_dir,\n serving_input_fn)\nmodel_graph = tfl.estimators.get_model_graph(saved_model_path)\ntfl.visualization.draw_model_graph(model_graph, calibrator_dpi=15)",
"RTL Layer Random Lattice Ensemble\nThe following model config uses a tfl.layers.RTL layer that uses a random subset of features for each lattice. Note that tfl.layers.RTL only supports monotonicity constraints, requires the same lattice size for all features, and does not allow per-feature regularization. Using a tfl.layers.RTL layer lets you scale to much larger ensembles than using separate tfl.layers.Lattice instances.",
"# Make sure our feature configs have the same lattice size, no per-feature\n# regularization, and only monotonicity constraints.\nrtl_layer_feature_configs = copy.deepcopy(feature_configs)\nfor feature_config in rtl_layer_feature_configs:\n feature_config.lattice_size = 2\n feature_config.unimodality = 'none'\n feature_config.reflects_trust_in = None\n feature_config.dominates = None\n feature_config.regularizer_configs = None\n# This is RTL layer ensemble model with separate calibration:\n# model output is the average output of separately calibrated lattices.\nmodel_config = tfl.configs.CalibratedLatticeEnsembleConfig(\n lattices='rtl_layer',\n feature_configs=rtl_layer_feature_configs,\n num_lattices=5,\n lattice_rank=3)\n# A CannedClassifier is constructed from the given model config.\nestimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns,\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42))\nestimator.train(input_fn=train_input_fn)\nresults = estimator.evaluate(input_fn=test_input_fn)\nprint('Random ensemble test AUC: {}'.format(results['auc']))\nsaved_model_path = estimator.export_saved_model(estimator.model_dir,\n serving_input_fn)\nmodel_graph = tfl.estimators.get_model_graph(saved_model_path)\ntfl.visualization.draw_model_graph(model_graph, calibrator_dpi=15)",
"Crystals Lattice Ensemble\nTFL also provides a heuristic feature arrangement algorithm, called Crystals. The Crystals algorithm first trains a prefitting model that estimates pairwise feature interactions. It then arranges the final ensemble such that features with more non-linear interactions are in the same lattices.\nFor Crystals models, you will also need to provide a prefitting_input_fn that is used to train the prefitting model, as described above. The prefitting model does not need to be fully trained, so a few epochs should be enough.",
"prefitting_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(\n x=train_x,\n y=train_y,\n shuffle=False,\n batch_size=BATCH_SIZE,\n num_epochs=PREFITTING_NUM_EPOCHS,\n num_threads=1)",
"You can then create a Crystals model by setting lattices='crystals' in the model config.",
"# This is Crystals ensemble model with separate calibration: model output is\n# the average output of separately calibrated lattices.\nmodel_config = tfl.configs.CalibratedLatticeEnsembleConfig(\n feature_configs=feature_configs,\n lattices='crystals',\n num_lattices=5,\n lattice_rank=3)\n# A CannedClassifier is constructed from the given model config.\nestimator = tfl.estimators.CannedClassifier(\n feature_columns=feature_columns,\n model_config=model_config,\n feature_analysis_input_fn=feature_analysis_input_fn,\n # prefitting_input_fn is required to train the prefitting model.\n prefitting_input_fn=prefitting_input_fn,\n optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),\n prefitting_optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),\n config=tf.estimator.RunConfig(tf_random_seed=42))\nestimator.train(input_fn=train_input_fn)\nresults = estimator.evaluate(input_fn=test_input_fn)\nprint('Crystals ensemble test AUC: {}'.format(results['auc']))\nsaved_model_path = estimator.export_saved_model(estimator.model_dir,\n serving_input_fn)\nmodel_graph = tfl.estimators.get_model_graph(saved_model_path)\ntfl.visualization.draw_model_graph(model_graph, calibrator_dpi=15)",
"You can plot feature calibrators with more details using the tfl.visualization module.",
"_ = tfl.visualization.plot_feature_calibrator(model_graph, \"age\")\n_ = tfl.visualization.plot_feature_calibrator(model_graph, \"restecg\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mathcoding/Programmazione2
|
Lab 7 - Implementazione di regressioni.ipynb
|
mit
|
[
"Synthetic dataset generation",
"# Various imports used below\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Support for matrix and vector operations\nfrom numpy import matmul\nfrom numpy import transpose\nfrom numpy.linalg import inv\nfrom numpy.linalg import pinv\n\ndef MakeSyntethicData(n=100, ifplot=False):\n \"\"\" Returns a matrix X of 2 covariates and n rows,\n and the vector Y of n rows with the corresponding labels.\n If ifplot=True it draws a scatter plot in the covariate plane \"\"\"\n \n # First generate the covariate sample for the points labelled \"blue\"\n np.random.seed(13)\n x1_blue = np.random.normal(2, 0.8, n)\n x2_blue = np.random.normal(6, 0.8, n) \n\n # Then generate the covariate sample for the points labelled \"red\"\n # so that they are drawn from two different distributions\n m = 20\n x1_red = np.random.normal(4, 0.5, max(n, n-m))\n x2_red = np.random.normal(3, 0.5, max(n, n-m))\n if n > m:\n x1_red = np.append(x1_red, np.random.normal(10, 0.5, 20))\n x2_red = np.append(x2_red, np.random.normal(0, 0.5, 20))\n\n if ifplot:\n fig, ax = plt.subplots(figsize=(7, 7))\n ax.scatter(x1_blue, x2_blue, alpha=0.5, c='blue')\n ax.scatter(x1_red, x2_red, alpha=0.5, c='red')\n ax.set_xlabel('Covariate x1')\n ax.set_ylabel('Covariate x2')\n ax.legend(('Blue=0', 'Red=1'))\n plt.show()\n\n # Prepare the covariate matrix X and the label vector Y \n X = []\n Y = []\n # Documentation for the zip() function\n # https://docs.python.org/3.6/library/functions.html#zip\n for x,y in zip(x1_blue,x2_blue):\n X.append((x,y))\n Y.append(0) # 0 = blue\n\n for x,y in zip(x1_red,x2_red):\n X.append((x,y))\n Y.append(1) # 1 = red\n \n return X, Y\n\nX,Y = MakeSyntethicData(100, True)",
"Exercise 1: Linear Regression\nUse the slides from the lecture. \nCompare the coefficients $w$ found by your solution with those found by the LinearRegression class of the Scikit Learn library.",
"class RegressioneLineare(object):\n def fit(self, x, y):\n # Build the matrix with vector (1, x) as rows\n X = np.matrix(list(map(lambda row: np.append([1], row), x)))\n # Solve the normal equation (what if X is not invertible?)\n self.w = matmul(matmul(inv(matmul(transpose(X), X)), transpose(X)), y)\n \n def predict(self, x):\n # Build the matrix with vector (1, x) as rows\n X = np.matrix(list(map(lambda row: np.append([1], row), x)))\n # Predict values: y_hat = X w (note: X, not transpose(X))\n return matmul(X, self.w)\n\nfrom sklearn.linear_model import LinearRegression \n\nlr = LinearRegression(normalize=False)\nlr.fit(X, Y)\nprint('Scikit LinearRegression, weights found:', lr.intercept_, lr.coef_)\n\nmy = RegressioneLineare()\nmy.fit(X,Y)\nprint('My linear regression, weights found:', my.w)\n",
"Exercise 2: Logistic Regression\nUse the slides from the lecture.\nCompare the coefficients $w$ found by your solution with those found by the LogisticRegression class of the Scikit Learn library.",
"class RegressioneLogistica(object):\n def fit(self, x, y):\n # TO COMPLETE: NEWTON-RAPHSON METHOD FROM THE SLIDES\n pass\n \n def predict(self, x):\n # TO COMPLETE: USE THE PARAMETERS w\n pass"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
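Exercise 2 in the notebook above (logistic regression via Newton-Raphson) is left unimplemented. Purely as an illustrative sketch — the class and parameter names below are invented, and it follows the standard IRLS derivation rather than the course slides — the Newton-Raphson fit could look like this:

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow in exp for large |z|
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

class LogisticRegressionNewton:
    """Logistic regression fitted by Newton-Raphson (a.k.a. IRLS)."""

    def fit(self, x, y, n_iter=10):
        # Prepend a column of ones so w[0] is the intercept
        X = np.column_stack([np.ones(len(x)), np.asarray(x, dtype=float)])
        y = np.asarray(y, dtype=float)
        self.w = np.zeros(X.shape[1])
        for _ in range(n_iter):
            p = sigmoid(X @ self.w)          # predicted probabilities
            grad = X.T @ (y - p)             # gradient of the log-likelihood
            W = p * (1.0 - p)                # per-sample weights (diagonal of W)
            # Hessian of the log-likelihood, with a tiny ridge for stability
            H = -(X.T * W) @ X - 1e-8 * np.eye(X.shape[1])
            self.w -= np.linalg.solve(H, grad)  # Newton step: w <- w - H^{-1} g
        return self

    def predict(self, x):
        X = np.column_stack([np.ones(len(x)), np.asarray(x, dtype=float)])
        return (sigmoid(X @ self.w) >= 0.5).astype(int)
```

On well-separated two-cluster data like the synthetic set generated earlier, a handful of Newton iterations is typically enough for convergence, which is the main practical advantage over plain gradient ascent.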
jinntrance/MOOC
|
coursera/ml-clustering-and-retrieval/assignments/5_lda_blank.ipynb
|
cc0-1.0
|
[
"Latent Dirichlet Allocation for Text Data\nIn this assignment you will\n\napply standard preprocessing techniques on Wikipedia text data\nuse GraphLab Create to fit a Latent Dirichlet allocation (LDA) model\nexplore and interpret the results, including topic keywords and topic assignments for documents\n\nRecall that a major feature distinguishing the LDA model from our previously explored methods is the notion of mixed membership. Throughout the course so far, our models have assumed that each data point belongs to a single cluster. k-means determines membership simply by shortest distance to the cluster center, and Gaussian mixture models suppose that each data point is drawn from one of their component mixture distributions. In many cases, though, it is more realistic to think of data as genuinely belonging to more than one cluster or category - for example, if we have a model for text data that includes both \"Politics\" and \"World News\" categories, then an article about a recent meeting of the United Nations should have membership in both categories rather than being forced into just one.\nWith this in mind, we will use GraphLab Create tools to fit an LDA model to a corpus of Wikipedia articles and examine the results to analyze the impact of a mixed membership approach. In particular, we want to identify the topics discovered by the model in terms of their most important words, and we want to use the model to predict the topic membership distribution for a given document. \nNote to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.\nText Data Preprocessing\nWe'll start by importing our familiar Wikipedia dataset.\nThe following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read this page.",
"import graphlab as gl\nimport numpy as np\nimport matplotlib.pyplot as plt \n\n%matplotlib inline\n\n'''Check GraphLab Create version'''\nfrom distutils.version import StrictVersion\nassert (StrictVersion(gl.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'\n\n# import wiki data\nwiki = gl.SFrame('people_wiki.gl/')\nwiki",
"In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a bag of words, which ignores word ordering in the document but retains information on how many times each word appears. As we have seen in our previous encounters with text data, words such as 'the', 'a', or 'and' are by far the most frequent, but they appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be. \nTherefore, before we train our LDA model, we will preprocess the Wikipedia data in two steps: first, we will create a bag of words representation for each article, and then we will remove the common words that don't help us to distinguish between documents. For both of these tasks we can use pre-implemented tools from GraphLab Create:",
"wiki_docs = gl.text_analytics.count_words(wiki['text'])\nwiki_docs = wiki_docs.dict_trim_by_keys(gl.text_analytics.stopwords(), exclude=True)",
"Model fitting and interpretation\nIn the video lectures we saw that Gibbs sampling can be used to perform inference in the LDA model. In this assignment we will use a GraphLab Create method to learn the topic model for our Wikipedia data, and our main emphasis will be on interpreting the results. We'll begin by creating the topic model using create() from GraphLab Create's topic_model module.\nNote: This may take several minutes to run.",
"topic_model = gl.topic_model.create(wiki_docs, num_topics=10, num_iterations=200)",
"GraphLab provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that GraphLab Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results.",
"topic_model",
"It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will be useful as a document classification system. This can be a challenging task in itself, particularly when the model that we use is complex. We will begin by outlining a sequence of objectives that will help us understand our model in detail. In particular, we will\n\nget the top words in each topic and use these to identify topic themes\npredict topic distributions for some example documents\ncompare the quality of LDA \"nearest neighbors\" to the NN output from the first assignment\nunderstand the role of model hyperparameters alpha and gamma\n\nLoad a fitted topic model\nThe method used to fit the LDA model is a randomized algorithm, which means that it involves steps that are random; in this case, the randomness comes from Gibbs sampling, as discussed in the LDA video lectures. Because of these random steps, the algorithm will be expected to yield slightly different output for different runs on the same data - note that this is different from previously seen algorithms such as k-means or EM, which will always produce the same results given the same input and initialization.\nIt is important to understand that variation in the results is a fundamental feature of randomized methods. However, in the context of this assignment this variation makes it difficult to evaluate the correctness of your analysis, so we will load and analyze a pre-trained model. \nWe recommend that you spend some time exploring your own fitted topic model and compare our analysis of the pre-trained model to the same analysis applied to the model you trained above.",
"topic_model = gl.load_model('topic_models/lda_assignment_topic_model')",
"Identifying topic themes by top words\nWe'll start by trying to identify the topics learned by our model with some major themes. As a preliminary check on the results of applying this method, it is reasonable to hope that the model has been able to learn topics that correspond to recognizable categories. In order to do this, we must first recall what exactly a 'topic' is in the context of LDA. \nIn the video lectures on LDA we learned that a topic is a probability distribution over words in the vocabulary; that is, each topic assigns a particular probability to every one of the unique words that appears in our data. Different topics will assign different probabilities to the same word: for instance, a topic that ends up describing science and technology articles might place more probability on the word 'university' than a topic that describes sports or politics. Looking at the highest probability words in each topic will thus give us a sense of its major themes. Ideally we would find that each topic is identifiable with some clear theme and that all the topics are relatively distinct.\nWe can use the GraphLab Create function get_topics() to view the top words (along with their associated probabilities) from each topic.\nQuiz Question: Identify the top 3 most probable words for the first topic. \n Quiz Question: What is the sum of the probabilities assigned to the top 50 words in the 3rd topic?\nLet's look at the top 10 words for each topic to see if we can identify any themes:",
"[x['words'] for x in topic_model.get_topics(output_type='topic_words', num_words=10)]\n\nsum(topic_model.get_topics(topic_ids=[2], num_words=50)['score'])",
"We propose the following themes for each topic:\n\ntopic 0: Science and research\ntopic 1: Team sports\ntopic 2: Music, TV, and film\ntopic 3: American college and politics\ntopic 4: General politics\ntopic 5: Art and publishing\ntopic 6: Business\ntopic 7: International athletics\ntopic 8: Great Britain and Australia\ntopic 9: International music\n\nWe'll save these themes for later:",
"themes = ['science and research','team sports','music, TV, and film','American college and politics','general politics', \\\n 'art and publishing','Business','international athletics','Great Britain and Australia','international music']",
"Measuring the importance of top words\nWe can learn more about topics by exploring how they place probability mass (which we can think of as a weight) on each of their top words.\nWe'll do this with two visualizations of the weights for the top words in each topic:\n - the weights of the top 100 words, sorted by the size\n - the total weight of the top 10 words\nHere's a plot for the top 100 words by weight in each topic:",
"for i in range(10):\n plt.plot(range(100), topic_model.get_topics(topic_ids=[i], num_words=100)['score'])\nplt.xlabel('Word rank')\nplt.ylabel('Probability')\nplt.title('Probabilities of Top 100 Words in each Topic')",
"In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summary of our topic model that our vocabulary has 547462 words in total!\nNext we plot the total weight assigned by each topic to its top 10 words:",
"top_probs = [sum(topic_model.get_topics(topic_ids=[i], num_words=10)['score']) for i in range(10)]\n\nind = np.arange(10)\nwidth = 0.5\n\nfig, ax = plt.subplots()\n\nax.bar(ind-(width/2),top_probs,width)\nax.set_xticks(ind)\n\nplt.xlabel('Topic')\nplt.ylabel('Probability')\nplt.title('Total Probability of Top 10 Words in each Topic')\nplt.xlim(-0.5,9.5)\nplt.ylim(0,0.15)\nplt.show()",
"Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a simple 10-word summary.\nFinally, we observe that some 'junk' words appear highly rated in some topics despite our efforts to remove unhelpful words before fitting the model; for example, the word 'born' appears as a top 10 word in three different topics, but it doesn't help us describe these topics at all.\nTopic distributions for some example documents\nAs we noted in the introduction to this assignment, LDA allows for mixed membership, which means that each document can partially belong to several different topics. For each document, topic membership is expressed as a vector of weights that sum to one; the magnitude of each weight indicates the degree to which the document represents that particular topic.\nWe'll explore this in our fitted model by looking at the topic distributions for a few example Wikipedia articles from our data set. We should find that these articles have the highest weights on the topics whose themes are most relevant to the subject of the article - for example, we'd expect an article on a politician to place relatively high weight on topics related to government, while an article about an athlete should place higher weight on topics related to sports or competition.\nTopic distributions for documents can be obtained using GraphLab Create's predict() function. GraphLab Create uses a collapsed Gibbs sampler similar to the one described in the video lectures, where only the word assignments variables are sampled. To get a document-specific topic proportion vector post-facto, predict() draws this vector from the conditional distribution given the sampled word assignments in the document. 
Notice that, since these are draws from a distribution over topics that the model has learned, we will get slightly different predictions each time we call this function on a document - we can see this below, where we predict the topic distribution for the article on Barack Obama:",
"obama = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Barack Obama')[0])]])\npred1 = topic_model.predict(obama, output_type='probability')\npred2 = topic_model.predict(obama, output_type='probability')\nprint(gl.SFrame({'topics':themes, 'predictions (first draw)':pred1[0], 'predictions (second draw)':pred2[0]}))",
"To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document:",
"def average_predictions(model, test_document, num_trials=100):\n avg_preds = np.zeros((model.num_topics))\n for i in range(num_trials):\n avg_preds += model.predict(test_document, output_type='probability')[0]\n avg_preds = avg_preds/num_trials\n result = gl.SFrame({'topics':themes, 'average predictions':avg_preds})\n result = result.sort('average predictions', ascending=False)\n return result\n\nprint average_predictions(topic_model, obama, 100)\n\nbush = gl.SArray([wiki_docs[int(np.where(wiki['name']=='George W. Bush')[0])]])\npred11 = topic_model.predict(bush, output_type='probability')\npred22 = topic_model.predict(bush, output_type='probability')\nprint(gl.SFrame({'topics':themes, 'predictions (first draw)':pred11[0], 'predictions (second draw)':pred22[0]}))\nprint average_predictions(topic_model, bush, 100)\n\nger = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Steven Gerrard')[0])]])\npred111 = topic_model.predict(ger, output_type='probability')\npred222 = topic_model.predict(ger, output_type='probability')\nprint(gl.SFrame({'topics':themes, 'predictions (first draw)':pred111[0], 'predictions (second draw)':pred222[0]}))\nprint average_predictions(topic_model, ger, 100)",
"Quiz Question: What is the topic most closely associated with the article about former US President George W. Bush? Use the average results from 100 topic predictions.\nQuiz Question: What are the top 3 topics corresponding to the article about English football (soccer) player Steven Gerrard? Use the average results from 100 topic predictions.\nComparing LDA to nearest neighbors for document retrieval\nSo far we have found that our topic model has learned some coherent topics, we have explored these topics as probability distributions over a vocabulary, and we have seen how individual documents in our Wikipedia data set are assigned to these topics in a way that corresponds with our expectations. \nIn this section, we will use the predicted topic distribution as a representation of each document, similar to how we have previously represented documents by word count or TF-IDF. This gives us a way of computing distances between documents, so that we can run a nearest neighbors search for a given document based on its membership in the topics that we learned from LDA. We can contrast the results with those obtained by running nearest neighbors under the usual TF-IDF representation, an approach that we explored in a previous assignment. \nWe'll start by creating the LDA topic distribution representation for each document:",
"wiki['lda'] = topic_model.predict(wiki_docs, output_type='probability')",
"Next we add the TF-IDF document representations:",
"wiki['word_count'] = gl.text_analytics.count_words(wiki['text'])\nwiki['tf_idf'] = gl.text_analytics.tf_idf(wiki['word_count'])",
"For each of our two different document representations, we can use GraphLab Create to compute a brute-force nearest neighbors model:",
"model_tf_idf = gl.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],\n method='brute_force', distance='cosine')\nmodel_lda_rep = gl.nearest_neighbors.create(wiki, label='name', features=['lda'],\n method='brute_force', distance='cosine')",
"Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. For this example we'll use Paul Krugman, an American economist:",
"model_tf_idf.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)\n\nmodel_lda_rep.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)\n\nk5000 = model_tf_idf.query(wiki[wiki['name'] == 'Alex Rodriguez'], label='name', k=5000)\nk5000[k5000['reference_label'] == 'Mariano Rivera']\n\nl5000 = model_lda_rep.query(wiki[wiki['name'] == 'Alex Rodriguez'], label='name', k=5000)\nl5000[l5000['reference_label'] == 'Mariano Rivera']",
"Notice that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents. \nWith TF-IDF, documents are distinguished by the frequency of uncommon words. Since similarity is defined based on the specific words used in the document, documents that are \"close\" under TF-IDF tend to be similar in terms of specific details. This is what we see in the example: the top 10 nearest neighbors are all economists from the US, UK, or Canada. \nOur LDA representation, on the other hand, defines similarity between documents in terms of their topic distributions. This means that documents can be \"close\" if they share similar themes, even though they may not share many of the same keywords. For the article on Paul Krugman, we expect the most important topics to be 'American college and politics' and 'science and research'. As a result, we see that the top 10 nearest neighbors are academics from a wide variety of fields, including literature, anthropology, and religious studies.\nQuiz Question: Using the TF-IDF representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use mylist.index(value) to find the index of the first instance of value in mylist.)\nQuiz Question: Using the LDA representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? 
(Hint: Once you have a list of the nearest neighbors, you can use mylist.index(value) to find the index of the first instance of value in mylist.)\nUnderstanding the role of LDA model hyperparameters\nFinally, we'll take a look at the effect of the LDA model hyperparameters alpha and gamma on the characteristics of our fitted model. Recall that alpha is a parameter of the prior distribution over topic weights in each document, while gamma is a parameter of the prior distribution over word weights in each topic. \nIn the video lectures, we saw that alpha and gamma can be thought of as smoothing parameters when we compute how much each document \"likes\" a topic (in the case of alpha) or how much each topic \"likes\" a word (in the case of gamma). In both cases, these parameters serve to reduce the differences across topics or words in terms of these calculated preferences; alpha makes the document preferences \"smoother\" over topics, and gamma makes the topic preferences \"smoother\" over words.\nOur goal in this section will be to understand how changing these parameter values affects the characteristics of the resulting topic model.\nQuiz Question: What was the value of alpha used to fit our original topic model? \nQuiz Question: What was the value of gamma used to fit our original topic model? Remember that GraphLab Create uses \"beta\" instead of \"gamma\" to refer to the hyperparameter that influences topic distributions over words.\nWe'll start by loading some topic models that have been trained using different settings of alpha and gamma. Specifically, we will start by comparing the following two models to our original topic model:\n - tpm_low_alpha, a model trained with alpha = 1 and default gamma\n - tpm_high_alpha, a model trained with alpha = 50 and default gamma",
"tpm_low_alpha = gl.load_model('topic_models/lda_low_alpha')\ntpm_high_alpha = gl.load_model('topic_models/lda_high_alpha')",
"Changing the hyperparameter alpha\nSince alpha is responsible for smoothing document preferences over topics, the impact of changing its value should be visible when we plot the distribution of topic weights for the same document under models fit with different alpha values. In the code below, we plot the (sorted) topic weights for the Wikipedia article on Barack Obama under models fit with high, original, and low settings of alpha.",
"a = np.sort(tpm_low_alpha.predict(obama,output_type='probability')[0])[::-1]\nb = np.sort(topic_model.predict(obama,output_type='probability')[0])[::-1]\nc = np.sort(tpm_high_alpha.predict(obama,output_type='probability')[0])[::-1]\nind = np.arange(len(a))\nwidth = 0.3\n\ndef param_bar_plot(a,b,c,ind,width,ylim,param,xlab,ylab):\n fig = plt.figure()\n ax = fig.add_subplot(111)\n\n b1 = ax.bar(ind, a, width, color='lightskyblue')\n b2 = ax.bar(ind+width, b, width, color='lightcoral')\n b3 = ax.bar(ind+(2*width), c, width, color='gold')\n\n ax.set_xticks(ind+width)\n ax.set_xticklabels(range(10))\n ax.set_ylabel(ylab)\n ax.set_xlabel(xlab)\n ax.set_ylim(0,ylim)\n ax.legend(handles = [b1,b2,b3],labels=['low '+param,'original model','high '+param])\n\n plt.tight_layout()\n \nparam_bar_plot(a,b,c,ind,width,ylim=1.0,param='alpha',\n xlab='Topics (sorted by weight of top 100 words)',ylab='Topic Probability for Obama Article')\n\npk = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Paul Krugman')[0])]])\npk1 = tpm_low_alpha.predict(pk, output_type='probability')\npk2 = tpm_low_alpha.predict(pk, output_type='probability')\nprint(gl.SFrame({'topics':themes, 'predictions (first draw)':pk1[0], 'predictions (second draw)':pk2[0]}))\nprint average_predictions(tpm_low_alpha, pk, 100)",
"Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics.\nQuiz Question: How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the low alpha model? Use the average results from 100 topic predictions.",
"pk = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Paul Krugman')[0])]])\npk1 = tpm_high_alpha.predict(pk, output_type='probability')\npk2 = tpm_high_alpha.predict(pk, output_type='probability')\nprint(gl.SFrame({'topics':themes, 'predictions (first draw)':pk1[0], 'predictions (second draw)':pk2[0]}))\nprint average_predictions(tpm_high_alpha, pk, 100)",
"Quiz Question: How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the high alpha model? Use the average results from 100 topic predictions.\nChanging the hyperparameter gamma\nJust as we were able to see the effect of alpha by plotting topic weights for a document, we expect to be able to visualize the impact of changing gamma by plotting word weights for each topic. In this case, however, there are far too many words in our vocabulary to do this effectively. Instead, we'll plot the total weight of the top 100 words and bottom 1000 words for each topic. Below, we plot the (sorted) total weights of the top 100 words and bottom 1000 from each topic in the high, original, and low gamma models.\nNow we will consider the following two models:\n - tpm_low_gamma, a model trained with gamma = 0.02 and default alpha\n - tpm_high_gamma, a model trained with gamma = 0.5 and default alpha",
"del tpm_low_alpha\ndel tpm_high_alpha\ntpm_low_gamma = gl.load_model('topic_models/lda_low_gamma')\ntpm_high_gamma = gl.load_model('topic_models/lda_high_gamma')\n\na_top = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]\nb_top = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]\nc_top = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]\n\na_bot = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]\nb_bot = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]\nc_bot = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]\n\nind = np.arange(len(a))\nwidth = 0.3\n \nparam_bar_plot(a_top, b_top, c_top, ind, width, ylim=0.6, param='gamma',\n xlab='Topics (sorted by weight of top 100 words)', \n ylab='Total Probability of Top 100 Words')\n\nparam_bar_plot(a_bot, b_bot, c_bot, ind, width, ylim=0.0002, param='gamma',\n xlab='Topics (sorted by weight of bottom 1000 words)',\n ylab='Total Probability of Bottom 1000 Words')",
"From these two plots we can see that the low gamma model results in higher weight placed on the top words and lower weight placed on the bottom words for each topic, while the high gamma model places relatively less weight on the top words and more weight on the bottom words. Thus increasing gamma results in topics that have a smoother distribution of weight across all the words in the vocabulary.\nQuiz Question: For each topic of the low gamma model, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get_topics() function from GraphLab Create with the cdf_cutoff argument).",
"sum([len(tpm_low_gamma.get_topics(topic_ids=[i], num_words=5000, cdf_cutoff = 0.5)['score']) for i in range(10)])/10.0",
"Quiz Question: For each topic of the high gamma model, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get_topics() function from GraphLab Create with the cdf_cutoff argument).",
"sum([len(tpm_high_gamma.get_topics(topic_ids=[i], num_words=5000, cdf_cutoff = 0.5)) for i in range(10)])/10.0\n\ntpm_high_gamma.get_topics(topic_ids=[1],num_words=1000, cdf_cutoff = 0.5)",
"We have now seen how the hyperparameters alpha and gamma influence the characteristics of our LDA topic model, but we haven't said anything about what settings of alpha or gamma are best. We know that these parameters are responsible for controlling the smoothness of the topic distributions for documents and word distributions for topics, but there's no simple conversion between smoothness of these distributions and quality of the topic model. In reality, there is no universally \"best\" choice for these parameters. Instead, finding a good topic model requires that we be able to both explore the output (as we did by looking at the topics and checking some topic predictions for documents) and understand the impact of hyperparameter settings (as we have in this section)."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
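The smoothing role of alpha discussed in the notebook above can be reproduced without GraphLab Create: alpha is the concentration parameter of a symmetric Dirichlet prior over per-document topic weights, so sampling that prior directly with NumPy shows the effect. This is an illustrative sketch, not part of the assignment; the values 1 and 50 mirror the low- and high-alpha models loaded there.

```python
import numpy as np

rng = np.random.default_rng(42)
num_topics, num_draws = 10, 2000

# Topic-weight vectors drawn from symmetric Dirichlet priors
low_alpha = rng.dirichlet([1.0] * num_topics, size=num_draws)    # peaky documents
high_alpha = rng.dirichlet([50.0] * num_topics, size=num_draws)  # smooth documents

# A peaky topic distribution concentrates its mass: the largest single topic
# weight is much bigger on average under the low-alpha prior.
print('mean largest weight, alpha=1 :', low_alpha.max(axis=1).mean())
print('mean largest weight, alpha=50:', high_alpha.max(axis=1).mean())
```

With alpha = 50 the draws hover near the uniform weight 1/10, which is exactly the "evenly distributed" behavior seen in the high-alpha bar plot for the Obama article.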
rishuatgithub/MLPy
|
nlp/1. NLP Classification - Logistic Classification.ipynb
|
apache-2.0
|
[
"Implement Logistic Classification for classifying tweets / text\nGiven a tweet, we will have to decide whether it is positive or negative",
"import numpy as np\nimport pandas as pd\nimport nltk\nfrom nltk.corpus import twitter_samples\n\nnltk.download('twitter_samples')\n\nnltk.download('stopwords')",
"Load and Analyse the dataset",
"# load positive tweets\npositive_tweets = twitter_samples.strings('positive_tweets.json')\npositive_tweets[:3]\n\n# load negative tweets\nnegative_tweets = twitter_samples.strings('negative_tweets.json')\nnegative_tweets[:3]\n\n## total number of pos and neg tweets\n\nprint(f\"Total No. of Positive tweets: {len(positive_tweets)}\")\nprint(f'Total No. of Negative tweets: {len(negative_tweets)}')\n\n## generate train and test datasets with an equal mix of pos and neg tweets:\n## 10000 tweets in total, split into 8000 training and 2000 test examples.\n\ntrain_pos = positive_tweets[:4000]\ntrain_neg = negative_tweets[:4000]\n\ntest_pos = positive_tweets[4000:]\ntest_neg = negative_tweets[4000:]\n\n# combining all of them together\n\ntrain_data = train_pos + train_neg\ntest_data = test_pos + test_neg\n\nprint(f'Total examples in train data: {len(train_data)} and test data: {len(test_data)}')\n\n# creating labels for the datasets\ntrain_label = np.append(np.ones((len(train_pos),1)), np.zeros((len(train_neg),1)), axis=0)\ntest_label = np.append(np.ones((len(test_pos),1)), np.zeros((len(test_neg),1)), axis=0)\n\nprint(f'Shape of Train and Test labels : {train_label.shape} and {test_label.shape}')",
"Process the data to create a word frequency list",
"from nltk.corpus import stopwords\nimport re\n\ndef clean_tweet(tweet):\n '''\n clean the tweet: strip hashtags, links and retweet markers, tokenise, and remove stop words\n '''\n stop_words = stopwords.words('english')\n #print(f'Total stop words in the vocab: {len(stop_words)}')\n \n tweet = re.sub(r'#','',tweet) ## remove the # symbol\n tweet = re.sub(r'https?:\\/\\/.*[\\r\\n]*','',tweet) ## remove any hyperlinks\n tweet = re.sub(r'^RT[\\s]+','',tweet) ## remove any Retweets (RT)\n \n tokenizer = nltk.tokenize.TweetTokenizer(preserve_case=False, strip_handles=True, reduce_len=True)\n tweet_token = tokenizer.tokenize(tweet)\n \n tweet_cleaned = []\n \n for word in tweet_token:\n if word not in stop_words:\n tweet_cleaned.append(word)\n \n return tweet_cleaned\n \n\ndef build_tweet_frequency(tweets, label):\n '''\n Build a vocab of tweet word frequencies across the corpus. \n @input: tweets - list of tweets\n label - array of tweet sentiments\n @output: a dict of (word, label):frequency\n '''\n label_list = np.squeeze(label).tolist()\n \n freq = {}\n \n for t, l in zip(tweets, label_list):\n for word in clean_tweet(t):\n word_pair = (word,l)\n \n if word_pair in freq:\n freq[word_pair] +=1\n else:\n freq[word_pair] =1\n\n return freq\n \n\ntrain_data[0] ## 0, 500\n\nclean_tweet(train_data[0])\n\ntweet_freq_vocab = build_tweet_frequency(train_data, train_label)\n\ntweet_freq_vocab.get(('sad',0))\n\ndef extract_features(tweet, vocab):\n '''\n Given a tweet and a frequency vocab, generate a feature vector.\n @input: \n tweet - tweet we want to extract features from\n vocab - frequency vocab dictionary\n @output:\n tweet_feature - a numpy array with [bias, total_pos_freq, total_neg_freq]\n '''\n cleaned_tweet = clean_tweet(tweet)\n #print(cleaned_tweet)\n tweet_feature = np.zeros((1,3))\n \n tweet_feature[0,0] = 1 # bias term\n \n for words in cleaned_tweet: # iterate over the tweet to get the number of pos and neg tweet freqs\n #print(vocab.get((words,1.0),0), \" --- \", vocab.get((words,0.0),0))\n tweet_feature[0,1] += vocab.get((words,1.0),0)\n tweet_feature[0,2] += vocab.get((words,0.0),0)\n \n return tweet_feature\n\nextract_features(train_data[0],tweet_freq_vocab)\n\nextract_features('Hi How are you? I am doing good', tweet_freq_vocab)",
"Model Training",
"## Generate the vector word frequency for all of the training tweets\n\ntrain_X = np.zeros((len(train_data),3))\nfor i in range(len(train_data)):\n train_X[i,:] = extract_features(train_data[i], tweet_freq_vocab)\n\ntrain_y = train_label\n\ntest_X = np.zeros((len(test_data),3))\nfor i in range(len(test_data)):\n test_X[i,:] = extract_features(test_data[i], tweet_freq_vocab)\n \ntest_y = test_label\n\ntrain_X[0:5]\n\ntrain_y.shape\n\nfrom sklearn.linear_model import LogisticRegression\n\nmodel = LogisticRegression(solver='liblinear')\nmodel.fit(train_X, train_y)\n\npredictions = model.predict(test_X)\n\nfrom sklearn.metrics import accuracy_score\n\naccuracy_score(test_y, predictions)\n\nfrom sklearn.metrics import classification_report\n\nprint(classification_report(test_y,predictions))",
"Making your own predictions",
"my_tweet1 = 'i liked my prediction score. happy with the results'\nmodel.predict(extract_features(my_tweet1,tweet_freq_vocab))\n\nmy_tweet2 = 'i am sad with the result of the football match'\nmodel.predict(extract_features(my_tweet2,tweet_freq_vocab))\n\nmy_tweet3 = 'shame that i couldnt get an entry to the competition'\nmodel.predict(extract_features(my_tweet3,tweet_freq_vocab))\n\nmy_tweet3 = 'this movie should have been great.'\nmodel.predict(extract_features(my_tweet3,tweet_freq_vocab)) ## misclassified example\n\nmy_tweet3 = 'i liked my prediction score. not happy with the results'\nmodel.predict(extract_features(my_tweet3,tweet_freq_vocab))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/cmcc/cmip6/models/cmcc-cm2-hr4/ocean.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: CMCC\nSource ID: CMCC-CM2-HR4\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:50\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-hr4', 'ocean')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Seawater Properties\n3. Key Properties --> Bathymetry\n4. Key Properties --> Nonoceanic Waters\n5. Key Properties --> Software Properties\n6. Key Properties --> Resolution\n7. Key Properties --> Tuning Applied\n8. Key Properties --> Conservation\n9. Grid\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Discretisation --> Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --> Tracers\n14. Timestepping Framework --> Baroclinic Dynamics\n15. Timestepping Framework --> Barotropic\n16. Timestepping Framework --> Vertical Physics\n17. Advection\n18. Advection --> Momentum\n19. Advection --> Lateral Tracers\n20. Advection --> Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --> Momentum --> Operator\n23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\n24. Lateral Physics --> Tracers\n25. Lateral Physics --> Tracers --> Operator\n26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\n27. Lateral Physics --> Tracers --> Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --> Boundary Layer Mixing --> Details\n30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n32. Vertical Physics --> Interior Mixing --> Details\n33. Vertical Physics --> Interior Mixing --> Tracers\n34. Vertical Physics --> Interior Mixing --> Momentum\n35. Uplow Boundaries --> Free Surface\n36. Uplow Boundaries --> Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --> Momentum --> Bottom Friction\n39. Boundary Forcing --> Momentum --> Lateral Friction\n40. Boundary Forcing --> Tracers --> Sunlight Penetration\n41. Boundary Forcing --> Tracers --> Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the ocean.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the ocean component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.2. Eos Functional Temp\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTemperature used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n",
"2.3. Eos Functional Salt\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSalinity used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n",
"2.4. Eos Functional Depth\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n",
"2.5. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.6. Ocean Specific Heat\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.7. Ocean Reference Density\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nReference date of bathymetry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Type\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Ocean Smoothing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Source\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe source of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Nonoceanic Waters\nNon oceanic waters treatement in ocean\n4.1. Isolated Seas\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how isolated seas is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. River Mouth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.5. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.6. Is Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.7. Thickness Level 1\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThickness of first surface ocean level (in meters)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7. Key Properties --> Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the ocean component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBrief description of conservation methodology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Consistency Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Was Flux Correction Used\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes conservation involve flux correction ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of grid in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical coordinates in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Partial Steps\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUsing partial steps with Z or Z vertical coordinate in ocean ?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11. Grid --> Discretisation --> Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Staggering\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal grid staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Diurnal Cycle\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiurnal cycle type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Timestepping Framework --> Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracers time stepping scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTracers time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14. Timestepping Framework --> Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBaroclinic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15. Timestepping Framework --> Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime splitting method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBarotropic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Timestepping Framework --> Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDetails of vertical time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of advection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Advection --> Momentum\nProperties of lateral momemtum advection scheme in ocean\n18.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of lateral momemtum advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n",
"18.2. Scheme Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean momemtum advection scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. ALE\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nUsing ALE for vertical advection ? (if vertical coordinates are sigma)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19. Advection --> Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19.3. Effective Order\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.5. Passive Tracers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPassive tracers advected",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.6. Passive Tracers Advection\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIs advection of passive tracers different than active ? if so, describe.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Advection --> Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21. Lateral Physics\nOcean lateral physics\n21.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lateral physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transient eddy representation in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n",
"22. Lateral Physics --> Momentum --> Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics momentum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics momentum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momentum scheme in the ocean\n23.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics momentum eddy viscosity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.4. Coeff Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momentum scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24. Lateral Physics --> Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24.2. Submesoscale Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (e.g. Fox-Kemper) in the lateral physics tracers scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"25. Lateral Physics --> Tracers --> Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff\nProperties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean\n26.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics tracers eddy diffusivity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Coeff Background\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nDescribe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy diffusivity coeff in lateral physics tracers scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"27. Lateral Physics --> Tracers --> Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Constant Val\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.3. Flux Type\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV flux (advective or skew)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Added Diffusivity\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vertical physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Vertical Physics --> Boundary Layer Mixing --> Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there Langmuir cell mixing in the upper ocean?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30. Vertical Physics --> Boundary Layer Mixing --> Tracers\nProperties of boundary layer (BL) mixing on tracers in the ocean\n30.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of tracers, specify coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of tracers coefficient (scheme and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. Vertical Physics --> Boundary Layer Mixing --> Momentum\nProperties of boundary layer (BL) mixing on momentum in the ocean\n31.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of momentum, specify coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of momentum coefficient (scheme and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32. Vertical Physics --> Interior Mixing --> Details\nProperties of interior mixing in the ocean\n32.1. Convection Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical convection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.2. Tide Induced Mixing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how tide-induced mixing is modelled (barotropic, baroclinic, none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.3. Double Diffusion\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there double diffusion?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.4. Shear Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there interior shear mixing?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33. Vertical Physics --> Interior Mixing --> Tracers\nProperties of interior mixing on tracers in the ocean\n33.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of tracers, specify coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"33.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e. is NOT constant)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of tracers coefficient (scheme and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Vertical Physics --> Interior Mixing --> Momentum\nProperties of interior mixing on momentum in the ocean\n34.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"34.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of momentum, specify coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"34.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e. is NOT constant)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of momentum coefficient (scheme and value in m2/s - may be none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35. Uplow Boundaries --> Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of free surface in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nFree surface scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35.3. Embeded Seaice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the sea-ice embedded in the ocean model (instead of levitating)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36. Uplow Boundaries --> Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Type Of Bbl\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.3. Lateral Mixing Coef\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"36.4. Sill Overflow\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any specific treatment of sill overflows",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37. Boundary Forcing\nOcean boundary forcing\n37.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of boundary forcing in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.2. Surface Pressure\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.3. Momentum Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.4. Tracers Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.5. Wave Effects\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.6. River Runoff Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.7. Geothermal Heating\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Boundary Forcing --> Momentum --> Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum bottom friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"39. Boundary Forcing --> Momentum --> Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum lateral friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40. Boundary Forcing --> Tracers --> Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of sunlight penetration scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40.2. Ocean Colour\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"40.3. Extinction Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe and list extinction depths for sunlight penetration scheme (if applicable).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Boundary Forcing --> Tracers --> Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmopshere\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from the atmosphere in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. From Sea Ice\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.3. Forced Mode Restoring\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs-l10n
|
site/ja/tensorboard/dataframe_api.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"TensorBoard の DataFrames データにアクセスする\n概要\nTensorBoard の主な機能はインタラクティブ GUI ですが、ログれーたの事後分析やカスタム視覚化の作成目的で、TensorBoard に保存されているデータログを プログラムで 読み取るユーザーもいます。\nTensorBoard 2.3 は、tensorboard.data.experimental.ExperimentFromDev() でこのようなユースケースをサポートしており、TensorBoard のスカラーログにプログラムを使ってアクセスすることができます。このページでは、この新しい API の基本的な使用方法を実演します。\n\n注意:\n\nこの API は、名前空間で想像できるように、まだ実験段階にある API です。そのため、将来的に重大な変更が適用される場合があります。\n現在のところ、この機能は TensorBoard.dev にアップロードされる logdir のみをサポートしています。TensorBoard.dev は、TensorBoard の永続化と共有を可能にする無料のホステッドサービスです。ローカルに保存されている TensorBoard logdir のサポートは、今後追加される予定です。簡単に言うと、ローカルのファイルシステムに保存されている TensorBoard logdir を、1 行のコマンド(tensorboard dev upload --logdir <logdir>)で TensorBoard.dev にアップロードすることができます。詳細は、tensorboard.dev をご覧ください。\n\n\nセットアップ\nプログラマティック API を使用するには、tensorboard とともに pandas がインストールされていることを確認してください。\nこのガイドではカスタムプロットの作成に matplotlib と seaborn を使用しますが、任意のツールを使って DataFrame の分析と視覚化を行えます。",
"!pip install tensorboard pandas\n!pip install matplotlib seaborn\n\nfrom packaging import version\n\nimport pandas as pd\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nfrom scipy import stats\nimport tensorboard as tb\n\nmajor_ver, minor_ver, _ = version.parse(tb.__version__).release\nassert major_ver >= 2 and minor_ver >= 3, \\\n \"This notebook requires TensorBoard 2.3 or later.\"\nprint(\"TensorBoard version: \", tb.__version__)",
"pandas.DataFrame として TensorBoard スカラーを読み込む\nTensorBoard logdir が TensorBoard.dev にアップロードされると、logdir は「実験」となります。各実験には一意の ID が割り当てられており、実験の TensorBoard.dev URL で確認することができます。次のデモでは、https://tensorboard.dev/experiment/c1KCv3X3QvGwaXfgX1c4tg にある TensorBoard.dev を使用しています。",
"experiment_id = \"c1KCv3X3QvGwaXfgX1c4tg\"\nexperiment = tb.data.experimental.ExperimentFromDev(experiment_id)\ndf = experiment.get_scalars()\ndf",
"df は、実験のすべてのスカラーログを含む pandas.DataFrame です。\nDataFrame の列は次のとおりです。\n\nrun: run(実行)は、元の logdir のサブディレクトリに対応しています。この実験では、run は特定のオプティマイザタイプ(トレーニングハイパーパラメータ)を使用した MNIST データセットのニューラルネットワーク(CNN)の完全なトレーニングに由来しています。この DataFrame は、このような run が複数含まれており、別のオプティマイザタイプの配下にある反復トレーニングに対応しています。\ntag: これは、同一の行にある value の意味、つまり値が表現するメトリックが何であるかを記述しています。この実験では、epoch_accuracy と epoch_loss という、それぞれ精度と損失のメトリックに対応する 2 つのタグのみがあります。\nstep: これは、run の中で対応する行のシリアル順を反映する番号です。ここでは、step は実際にエポック番号を指します。step 値とは別にタイムスタンプを取得する場合は、get_scalars() を呼び出す際にキーワード引数 include_wall_time=True を使用できます。\nvalue: これは関心のある実際の数値です。上述のとおり、この特定の DataFrame の各 value は、行の tag に応じて損失か精度になります。",
"print(df[\"run\"].unique())\nprint(df[\"tag\"].unique())",
"ピボット(ワイドフォーム)DataFrame を取得する\nこの実験では、各実行の同じステップ時Iに 2 つのタグ(epoch_loss と epoch_accuracy)が存在します。このため、pivot=True キーワード引数を使用することで、「ワイドフォーム」DataFrame を get_scalars() から直接取得することができます。すべてのタグがワイドフォーム DataFrame の列として含まれているため、このケースを含み、場合によっては操作がより便利になります。\nただし、すべての実行のすべてのタグで統一したステップ値を持つ条件が満たされる場合、pivot=True を使用するとエラーになることに注意してください。",
"dfw = experiment.get_scalars(pivot=True) \ndfw",
"ワイドフォーム DataFrame には、1 つの「value」列の代わりに、epoch_accuracy と epoch_loss の 2 つのタグ(メトリック)が列として明示的に含まれています。\nDataFrame を CSV として保存する\npandas.DataFrame has good interoperability with CSV. You can store it as a local CSV file and load it back later. For example:",
"csv_path = '/tmp/tb_experiment_1.csv'\ndfw.to_csv(csv_path, index=False)\ndfw_roundtrip = pd.read_csv(csv_path)\npd.testing.assert_frame_equal(dfw_roundtrip, dfw)",
"カスタム視覚化と統計分析を実行する",
"# Filter the DataFrame to only validation data, which is what the subsequent\n# analyses and visualization will be focused on.\ndfw_validation = dfw[dfw.run.str.endswith(\"/validation\")]\n# Get the optimizer value for each row of the validation DataFrame.\noptimizer_validation = dfw_validation.run.apply(lambda run: run.split(\",\")[0])\n\nplt.figure(figsize=(16, 6))\nplt.subplot(1, 2, 1)\nsns.lineplot(data=dfw_validation, x=\"step\", y=\"epoch_accuracy\",\n hue=optimizer_validation).set_title(\"accuracy\")\nplt.subplot(1, 2, 2)\nsns.lineplot(data=dfw_validation, x=\"step\", y=\"epoch_loss\",\n hue=optimizer_validation).set_title(\"loss\")",
"上記のプロットは、検証精度と検証損失のタイムコースを示し、それぞれの曲線は、あるオプティマイザタイプによる 5 回の実行の平均を示します。seaborn.lineplot() に組み込まれた機能により、それぞれの曲線は、平均に関する ±1 の標準偏差も表示するため、曲線の変動性と 3 つのオプティマイザの差の重要性がわかりやすくなります。この変動性の視覚化は、TensorBoard の GUI ではまだサポートされていません。\n最小検証損失が「adam」、「rmsprop」、および「sgd」オプティマイザ間で大きく異なるという仮説を調べるため、それぞれのオプティマイザにおける最小検証損失の DataFrame を抽出します。\nそして、最小検証損失の差を視覚化する箱ひげ図を作成します。",
"adam_min_val_loss = dfw_validation.loc[optimizer_validation==\"adam\", :].groupby(\n \"run\", as_index=False).agg({\"epoch_loss\": \"min\"})\nrmsprop_min_val_loss = dfw_validation.loc[optimizer_validation==\"rmsprop\", :].groupby(\n \"run\", as_index=False).agg({\"epoch_loss\": \"min\"})\nsgd_min_val_loss = dfw_validation.loc[optimizer_validation==\"sgd\", :].groupby(\n \"run\", as_index=False).agg({\"epoch_loss\": \"min\"})\nmin_val_loss = pd.concat([adam_min_val_loss, rmsprop_min_val_loss, sgd_min_val_loss])\n\nsns.boxplot(data=min_val_loss, y=\"epoch_loss\",\n x=min_val_loss.run.apply(lambda run: run.split(\",\")[0]))\n\n# Perform pairwise comparisons between the minimum validation losses\n# from the three optimizers.\n_, p_adam_vs_rmsprop = stats.ttest_ind(\n adam_min_val_loss[\"epoch_loss\"],\n rmsprop_min_val_loss[\"epoch_loss\"]) \n_, p_adam_vs_sgd = stats.ttest_ind(\n adam_min_val_loss[\"epoch_loss\"],\n sgd_min_val_loss[\"epoch_loss\"]) \n_, p_rmsprop_vs_sgd = stats.ttest_ind(\n rmsprop_min_val_loss[\"epoch_loss\"],\n sgd_min_val_loss[\"epoch_loss\"]) \nprint(\"adam vs. rmsprop: p = %.4f\" % p_adam_vs_rmsprop)\nprint(\"adam vs. sgd: p = %.4f\" % p_adam_vs_sgd)\nprint(\"rmsprop vs. sgd: p = %.4f\" % p_rmsprop_vs_sgd)",
"したがって、分析では、重要度レベル 0.05 で、最小検証損失が、実験に含まれるほかの 2 つのオプティマイザよりも rmsprop オプティマイザの方が大幅に高い(つまり悪化する)という仮説が実証されます。\nまとめると、このチュートリアルでは、 TensorBoard.dev から panda.DataFrame のスカラーデータにアクセスする例を示しました。DataFrame を使用して行える柔軟で強力な分析と視覚化を実演しました。"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
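The wide-form reshaping that `get_scalars(pivot=True)` performs in the TensorBoard notebook above can be sketched with plain pandas. The toy run/tag/step/value frame below is hypothetical; it only mimics the column layout that `ExperimentFromDev.get_scalars()` returns.

```python
import pandas as pd

# Toy long-form scalar log, mimicking the columns returned by
# ExperimentFromDev.get_scalars(): one row per (run, tag, step).
df = pd.DataFrame({
    "run":   ["adam/train"] * 4,
    "tag":   ["epoch_accuracy", "epoch_loss"] * 2,
    "step":  [0, 0, 1, 1],
    "value": [0.60, 1.20, 0.75, 0.80],
})

# Pivot to wide form: one column per tag, keyed by (run, step).
# This only works cleanly when every (run, step) pair carries every tag,
# which is the same uniformity condition that pivot=True requires.
dfw = (df.pivot_table(index=["run", "step"], columns="tag", values="value")
         .reset_index())
print(dfw)
```

The resulting frame has `run`, `step`, `epoch_accuracy`, and `epoch_loss` columns, matching the shape of the notebook's `dfw`.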
nbokulich/short-read-tax-assignment
|
ipynb/novel-taxa/taxonomy-assignment.ipynb
|
bsd-3-clause
|
[
"Data generation: using python to sweep over methods and parameters\nIn this notebook, we illustrate how to use python to generate and run a list of commands. In this example, we generate a list of QIIME 1.9.0 assign_taxonomy.py commands, though this workflow for command generation is generally very useful for performing parameter sweeps (i.e., exploration of sets of parameters for achieving a specific result for comparative purposes). \nEnvironment preparation",
"from os import system\nfrom os.path import join, expandvars \nfrom joblib import Parallel, delayed\nfrom glob import glob\nfrom tax_credit.framework_functions import (recall_novel_taxa_dirs,\n parameter_sweep,\n move_results_to_repository)\n\n\nproject_dir = expandvars(\"$HOME/Desktop/projects/short-read-tax-assignment\")\nanalysis_name= \"novel-taxa-simulations\"\n\nresults_dir = expandvars(\"$HOME/Desktop/projects/novel-taxa-simulations/\")",
"Preparing data set sweep\nFirst, we're going to define the data sets that we'll sweep over. As the simulated novel taxa dataset names depend on how the database generation notebook was executed, we must define the variables used to create these datasets. If you modified any variables in that notebook, set these same variables below. If you did not, then do not modify.\nrecall_novel_taxa_dirs() generates a list of dataset_reference_combinations and a dictionary of reference_dbs mapped to each dataset, which we feed to parameter_sweep below.",
"iterations = 3\ndata_dir = join(project_dir, \"data\", analysis_name)\n# databases is a list of names given as dictionary keys in the second\n# cell of the database generation notebook. Just list the names here.\ndatabases = ['B1-REF', 'F1-REF']\n\n# Generate a list of input directories\n(dataset_reference_combinations, reference_dbs) = recall_novel_taxa_dirs(data_dir, databases, iterations)",
"Preparing the method/parameter combinations and generating commands\nNow we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below.\nAssignment Using QIIME 1 or Command-Line Classifiers\nHere we provide an example of taxonomy assignment using legacy QIIME 1 classifiers executed on the command line. To accomplish this, we must first convert commands to a string, which we then pass to bash for execution. As QIIME 1 is written in python-2, we must also activate a separate environment in which QIIME 1 has been installed. If any environmental variables need to be set (in this example, the RDP_JAR_PATH), we must also source the .bashrc file.",
"method_parameters_combinations = { # probabalistic classifiers\n 'rdp': {'confidence': [0.0, 0.1, 0.2, 0.3, 0.4, 0.5,\n 0.6, 0.7, 0.8, 0.9, 1.0]},\n \n # global alignment classifiers\n 'uclust': {'min_consensus_fraction': [0.51, 0.76, 1.0], \n 'similarity': [0.8, 0.9],\n 'uclust_max_accepts': [1, 3, 5]},\n \n # local alignment classifiers\n 'sortmerna': {'sortmerna_e_value': [1.0],\n 'min_consensus_fraction': [0.51, 0.76, 1.0], \n 'similarity': [0.8, 0.9],\n 'sortmerna_best_N_alignments ': [1, 3, 5],\n 'sortmerna_coverage' : [0.8, 0.9]},\n 'blast' : {'blast_e_value' : [0.0000000001, 0.001, 1, 1000]}\n }",
"Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep().\nFields must adhere to following format:\n {0} = output directory\n {1} = input data\n {2} = output destination\n {3} = reference taxonomy\n {4} = method name\n {5} = other parameters",
"command_template = \"source activate qiime1; source ~/.bashrc; mkdir -p {0} ; assign_taxonomy.py -v -i {1} -o {0} -r {2} -t {3} -m {4} {5} --rdp_max_memory 16000\"\n \ncommands = parameter_sweep(data_dir, results_dir, reference_dbs,\n dataset_reference_combinations,\n method_parameters_combinations, command_template,\n infile='query.fasta', output_name='query_tax_assignments.txt')\n",
"As a sanity check, we can look at the first command that was generated and the number of commands generated.",
"print(len(commands))\ncommands[0]",
"Finally, we run our commands.",
"Parallel(n_jobs=4)(delayed(system)(command) for command in commands)",
"BLAST+",
"method_parameters_combinations = {\n 'blast+' : {'p-evalue': [0.001],\n 'p-maxaccepts': [1, 10],\n 'p-min-id': [0.80, 0.97, 0.99],\n 'p-min-consensus': [0.51, 0.99]}\n }\n\ncommand_template = (\"mkdir -p {0}; \"\n \"qiime feature-classifier blast --i-query {1} --o-classification \"\n \"{0}/rep_seqs_tax_assignments.qza --i-reference-reads {2} --i-reference-taxonomy {3} {5}; \"\n \"qiime tools export {0}/rep_seqs_tax_assignments.qza --output-dir {0}; \"\n \"mv {0}/taxonomy.tsv {0}/query_tax_assignments.txt\")\n \n(dataset_reference_combinations, reference_dbs) = recall_novel_taxa_dirs(\n data_dir, databases, iterations, ref_seqs='ref_seqs.qza', ref_taxa='ref_taxa.qza')\n\ncommands = parameter_sweep(data_dir, results_dir, reference_dbs,\n dataset_reference_combinations,\n method_parameters_combinations, command_template,\n infile='query.qza', output_name='rep_seqs_tax_assignments.qza')\n\nParallel(n_jobs=4)(delayed(system)(command) for command in commands)",
"VSEARCH",
"method_parameters_combinations = {\n 'vsearch' : {'p-maxaccepts': [1, 10],\n 'p-min-id': [0.80, 0.99],\n 'p-min-consensus': [0.51, 0.99]}\n }\n\ncommand_template = (\"mkdir -p {0}; \"\n \"qiime feature-classifier vsearch --i-query {1} --o-classification \"\n \"{0}/rep_seqs_tax_assignments.qza --i-reference-reads {2} --i-reference-taxonomy {3} {5}; \"\n \"qiime tools export {0}/rep_seqs_tax_assignments.qza --output-dir {0}; \"\n \"mv {0}/taxonomy.tsv {0}/query_tax_assignments.txt\")\n \ncommands = parameter_sweep(data_dir, results_dir, reference_dbs,\n dataset_reference_combinations,\n method_parameters_combinations, command_template,\n infile='query.qza', output_name='rep_seqs_tax_assignments.qza')\n\nParallel(n_jobs=4)(delayed(system)(command) for command in commands)",
"Move result files to repository\nAdd results to the short-read-taxa-assignment directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless if substantial changes were made to filepaths in the preceding cells.",
"precomputed_results_dir = join(project_dir, \"data\", \"precomputed-results\", analysis_name)\nmethod_dirs = glob(join(results_dir, '*', '*', '*', '*'))\nmove_results_to_repository(method_dirs, precomputed_results_dir)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
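The `parameter_sweep()` helper used in the notebook above comes from the `tax_credit` package; its core idea, expanding a dict of per-method parameter lists into one command string per combination, can be sketched with `itertools.product`. The flag format and command string below are illustrative stand-ins, not the helper's actual output.

```python
from itertools import product

# A {method: {parameter: [values, ...]}} mapping, as in the notebook above.
method_parameters_combinations = {
    "uclust": {"min_consensus_fraction": [0.51, 1.0],
               "similarity": [0.8, 0.9]},
}

commands = []
for method, params in method_parameters_combinations.items():
    names = sorted(params)
    # Cartesian product over the per-parameter value lists.
    for values in product(*(params[n] for n in names)):
        flags = " ".join("--{} {}".format(n, v) for n, v in zip(names, values))
        commands.append("assign_taxonomy.py -m {} {}".format(method, flags))

print(len(commands))  # 2 consensus fractions x 2 similarities = 4
```

Each generated string can then be dispatched with `os.system` via `joblib.Parallel`, as the notebook does.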
dblyon/PandasIntro
|
Exercises_part_B.ipynb
|
mit
|
[
"import numpy as np\nimport pandas as pd\nimport math\nimport matplotlib.pyplot as plt\nimport matplotlib\n%matplotlib inline\nmatplotlib.style.use('ggplot')",
"<h1 id=\"tocheading\">Table of Contents</h1>\n<div id=\"toc\"></div>",
"%%javascript\n$.getScript('misc/kmahelona_ipython_notebook_toc.js')",
"Getting and Knowing your Data\nTask: load the following file as a data frame",
"fn = r\"data/drinks.csv\" \n\n# Write your answer here",
"Task: See the first 10 entries",
"# Write your answer here",
"Task: Which country has the highest alcohol consumption (total litres of pure alcohol)?",
"# Write your answer here",
"Groupby\nTask: Which continent drinks most beer on average?",
"# Write your answer here",
"Task: List all unique continents.",
"# Write your answer here",
"Task: Which countries have missing values in the continent column?",
"# Write your answer here",
"Task: Set \"the\" missing continent with a name of your choice.",
"# Write your answer here",
"Task: For each continent print \"the\" statistics (summary stats using \"df.describe()\") for wine consumption.",
"# Write your answer here",
"Task: Print the median alcoohol consumption per continent for every column",
"# Write your answer here",
"Task: Print the mean, min and max values for spirit consumption.",
"# Write your answer here",
"Task: GroupBy Continent and create a Boxplot. (Hint: using e.g. figsize=(12, 9), rot=90 might help with legibility.)",
"# Write your answer here",
"Concatenate, Merge & Join\nTask: Import the first dataset cars1 and cars2. Assign each to a to a variable called cars1 and cars2.",
"# Write your answer here",
"Task: It seems our first dataset has some unnamed blank columns, fix cars1.",
"# Write your answer here",
"Task: Join cars1 and cars2 into a single DataFrame called cars",
"# Write your answer here",
"Apply (interspersed)\nTask: Create function that returns the first word of the string in the \"car\" column, the manufacturer name. Use the \"apply\" method to create a new column in the DataFrame.",
"# Write your answer here",
"Consider the following DataFrames for the next exercises",
"df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],\n 'B': ['B0', 'B1', 'B2', 'B3'],\n 'C': ['C0', 'C1', 'C2', 'C3'],\n 'D': ['D0', 'D1', 'D2', 'D3']},\n index=[0, 1, 2, 3])\n\ndf2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],\n 'B': ['B4', 'B5', 'B6', 'B7'],\n 'C': ['C4', 'C5', 'C6', 'C7'],\n 'D': ['D4', 'D5', 'D6', 'D7']},\n index=[4, 5, 6, 7])\n\ndf3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],\n 'B': ['B8', 'B9', 'B10', 'B11'],\n 'C': ['C8', 'C9', 'C10', 'C11'],\n 'D': ['D8', 'D9', 'D10', 'D11']},\n index=[8, 9, 10, 11])",
"Task: Concatenate the three DataFrames along the rows.",
"# Write your answer here",
"Task: How many missing values (NaNs) are produced if you concatenate along the other axis (appending the columns)?",
"# Write your answer here",
"Let's consider another data set to do some more Merge, Join & Concatenate exerciseses",
"raw_data_1 = {\n 'subject_id': ['1', '2', '3', '4', '5'],\n 'first_name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'], \n 'last_name': ['Anderson', 'Ackerman', 'Ali', 'Aoni', 'Atiches']}\n\nraw_data_2 = {\n 'subject_id': ['4', '5', '6', '7', '8', '9', '10'],\n 'first_name': ['Alice', 'Ayoung', 'Bran', 'Bryce', 'Betty', 'Jane', np.nan], \n 'last_name': ['Aoni', 'Atiches', 'Balwner', 'Brice', 'Btisan', np.nan, 'Doe']}\n\nraw_data_3 = {\n 'subject_id': ['1', '2', '3', '4', '5', '7', '8', '9', '10', '11'],\n 'test_id': [51, 15, 15, 61, 16, 14, 15, 1, 61, 16]}\ndata1 = pd.DataFrame(raw_data_1, columns = ['subject_id', 'first_name', 'last_name'])\ndata2 = pd.DataFrame(raw_data_2, columns = ['subject_id', 'first_name', 'last_name'])\ndata3 = pd.DataFrame(raw_data_3, columns = ['subject_id','test_id'])",
"Task: Join the two dataframes, data1 and data2, along rows and assign all_data. Make sure that the row index is unique.",
"# Write your answer here",
"Task: Join the two dataframes, data1 and data2, along columns and assing to all_data_col.",
"# Write your answer here",
"Task: Merge all_data and data3 along the subject_id value.",
"# Write your answer here",
"Task: How many test_ids have missing values in the first or last name column?",
"# Write your answer here",
"Task: Merge only the data that has the same 'subject_id' in both data1 and data2.",
"# Write your answer here",
"Transform\nThe transform method returns an object that is indexed the same (same size) as the one being grouped.\nTask: Given a DataFrame with a column of group IDs, 'groups', and a column of corresponding integer values, 'vals', replace any negative values in 'vals' with the group mean.",
"# Write your answer here\n\n# Write your answer here",
"Task: Use groupby in conjunction with transform across multiple columns: We want to group by one to n columns and apply a function on these groups across two columns.\n 1. Calculate the sum of a and b and assign it to a column named e.\n 2. Group by 'c' and d, and calculate the sum of e",
"df = pd.DataFrame({'a':[1,2,3,4,5,6],\n 'b':[1,2,3,4,5,6],\n 'c':['q', 'q', 'q', 'q', 'w', 'w'], \n 'd':['z','z','z','o','o','o']})\n\n# Write your answer here",
"Task: Normalize (standardize) the data by calculating the z-score. Group the data by year and calculate the z-score per group. z = (value - mean) / standard_deviation\n<div style=\"font-size: 150%;\"> \n$$z=\\frac{x-\\mu}{\\sigma}$$\n</div>",
"index = pd.date_range('10/1/1999', periods=1100)\nser = pd.Series(np.random.normal(0.5, 2, 1100), index=index)\nser = ser.rolling(window=100,min_periods=100).mean().dropna()\n\n# Answer:\nkey = lambda x: x.year\nzscore = lambda x: (x - x.mean()) / x.std()\ntransformed = ser.groupby(key).transform(zscore)",
"Task: We would expect the result to now have mean 0 and standard deviation 1 within each group, which we can easily check. Calculate the mean and standard deviation within each group.",
"# Write your answer here",
"Task: Visually compare the original and transformed data sets.",
"# Write your answer here",
"Pivot\nTask: Let's reshape this small example DataFrame of ICD10 codes. Each person has different code-associations. Only positive associations are listed. Transform (reshape) the DataFrame to a wide format (one column per code) that lists positive and negative (missing) associations as Booleans.",
"df = pd.DataFrame({\"Person\": [\"a\", \"a\", \"a\", \"b\", \"c\", \"c\"], \"Code\": [\"D99\", \"E32\", \"A41\", \"D99\", \"D99\", \"A41\"]}, columns=[\"Person\", \"Code\"])\ndf\n\n# Write your answer here",
"Combine DataFrames\nTask: In the data/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10th file that describes the content of each. Write code that imports each of the data spreadsheets and combines them into a single DataFrame, adding the identifying information from the metadata spreadsheet as columns in the combined DataFrame.",
"# Write your answer here",
"GroupBy Titanic data\nLoad the dataset in titanic.xls. It contains data on all the passengers that travelled on the Titanic.\nTask:\nWomen and children first?\n\nUse the groupby method to calculate the proportion of passengers that survived by sex.\nCalculate the same proportion, but by class and sex.\nCreate age categories: children (under 14 years), adolescents (14-20), adult (21-64), and senior(65+), and calculate survival proportions by age category, class and sex. Additionally, count the number of passengers per group.",
"# Write your answer here",
"Let's plot the number of survivors grouped by sex and passenger class.",
"# Write your answer here",
"Task: Let's also look at the deaths (and not only at the survivors) within the groups and create a stacked Barplot of survivers vs. deaths grouped by sex and passenger-class (as before).\n1. Convert the \"survived\" column to boolean values\n2. Compute the cross tabulation (a.k.a. contingency table) of passenger-class and sex vs. survived. Assign the result to the variable name \"death_counts\" --> checkout pd.crosstab()\n3. Create a stacked barplot of the computed death_counts",
"# Write your answer here",
"Task: Another way of comparing the groups is to look at the survival rate, by adjusting for the number of people in each group.\nCreate a stacked, horizontal Barplot of the adjusted death counts\n1. Sum the death_counts per passenger-class and sex, and convert to data type float (for Python 2.x division purposes).\n2. Compute the adjusted survival rate by dividing the death_counts by the result from 1. \n3. Plot a stacked, horizontal Barplot from the result of 3.",
"# Write your answer here"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
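One possible solution to the groupby/transform z-score task in the exercises above, sketched here with synthetic data rather than the exercise's rolling-mean series:

```python
import numpy as np
import pandas as pd

# Synthetic daily series spanning two calendar years.
rng = np.random.default_rng(0)
index = pd.date_range("2000-01-01", periods=730)
ser = pd.Series(rng.normal(0.5, 2.0, len(index)), index=index)

# Standardize within each year: z = (x - mean) / std.
zscore = lambda x: (x - x.mean()) / x.std()
transformed = ser.groupby(lambda ts: ts.year).transform(zscore)

# Sanity check: each yearly group now has mean ~0 and std ~1.
check = transformed.groupby(lambda ts: ts.year).agg(["mean", "std"])
print(check)
```

Because `transform` returns a result indexed like the original series, the standardized values line up with the raw ones and can be plotted side by side for the visual-comparison task.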
ES-DOC/esdoc-jupyterhub
|
notebooks/cnrm-cerfacs/cmip6/models/cnrm-cm6-1/landice.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: CNRM-CERFACS\nSource ID: CNRM-CM6-1\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:52\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-cm6-1', 'landice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --> Mass Balance\n7. Ice --> Mass Balance --> Basal\n8. Ice --> Mass Balance --> Frontal\n9. Ice --> Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Ice Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify how ice albedo is modelled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Atmospheric Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Oceanic Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the ocean and ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of land ice code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs an adative grid being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Base Resolution\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThe base resolution (in metres), before any adaption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Resolution Limit\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Projection\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of glaciers in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of glaciers, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Dynamic Areal Extent\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes the model include a dynamic glacial extent?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Grounding Line Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.3. Ice Sheet\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice sheets simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.4. Ice Shelf\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice shelves simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Ice --> Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calulated. Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Ice --> Mass Balance --> Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Ocean\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Ice --> Mass Balance --> Frontal\nDescription of calving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Melting\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Ice --> Dynamics\n**\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of ice sheet and ice shelf dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Approximation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nApproximation type used in modelling ice dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Adaptive Timestep\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.4. Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mohsinhaider/pythonbootcampacm
|
Functions and Methods/Lambda Expressions.ipynb
|
mit
|
[
"Lambda Expressions\nLambdas are anonymous functions. They are functions that can be made on the fly, and are used for a variety of purposes. They could be used as a key for a sorting algorithm, or as a quick way to delegate items in some fashion. In this lecture we will cover:\n1. Initializing a Lambda Expression\n2. Rewriting normal functions as lambda expressions\n3. Using lambda expressions in other contexts\n\nLet's go over examples of lambda expressions, from creating them to using them in time-saving manners.\nInitializing a Lambda Expression\nBefore we explain how lambda expressions work, it is important to understand that the following lambda expression is actually of no use to us. This is because lambda expressions need labels to be used, and labels are assigned when the expression is bound to a variable.",
"lambda x,y : x%y\n\nlength_func = lambda x: len(x)\n\nlength_func(\"hello, there!\")",
"As seen, once we assign a lambda expression to a label we can use it. Note that a lambda expression simply returns whatever type its body evaluates to: send in a number and perform a numerical operation and you get back a number, a string operation gives back a string, etc.\nBelow, we want to create more complex lambda expressions. How would we return True or False if a number is even or odd, respectively? How about first computing the result of mod 2?",
"# Even or Odd lambda: first, just compute mod 2\neven_odd = lambda x: x % 2",
"As we can see, the lambda expression returns the number we expected. How do we return a True or False value? Note: we can't use if statements or return statements, which limits the power of the lambda. However, we can still have a lambda expression return True or False. Observe the following syntax.",
"even_odd = lambda x: True if x % 2 == 0 else False\n\neven_odd(9)",
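The conditional expression above works, but since a comparison already evaluates to a bool, an equivalent and shorter sketch is:

```python
# Equivalent to the "True if ... else False" version:
# the comparison itself already yields True or False.
even_odd = lambda x: x % 2 == 0
```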
"Soon, we will learn about creating our own classes, and eventually data structures. When we learn how to make our own data structures, we'll rewrite what are conventionally known as \"magic methods\". These methods are not called explicitly, but are triggered by some internal action that Python sees you carried out. For example, the + operator in Python triggers the magic method:\n__add__()\n\nWe are going to create data structures that can be printed out. When you print out a list you get the following output:\n[obj1, obj2, ...., objn]\n\nHowever, this favorable stringification is something we have to build for our own data structures. So, how would we implement it? We would rewrite what is known as the\n__str__()\n\nfunction. This is the equivalent of toString() if you are familiar with Java. We will generate a bunch of values from a data structure that is arbitrary to us right now (let's just use a tuple), and try to get it looking like a list. We'll use a lambda expression because we can perform the action ad hoc, and it can act as a temporary str() method for a pretend list.",
"# script that \"converts\" a tuple to a list\nsome_tup = (\"[\", 3, 4, \"hello\", \"]\")\nx = lambda tup: \", \".join(str(item) for item in tup)\nprint(x(some_tup))",
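When we do get to writing our own classes, the same join trick becomes the body of `__str__`. A minimal sketch of that idea (the class name `Bag` is hypothetical, not from this lecture):

```python
# Hypothetical container whose __str__ renders its items like a list,
# using the same join-over-stringified-items pattern as the lambda above.
class Bag:
    def __init__(self, *items):
        self.items = items

    def __str__(self):
        # bracket the comma-joined items so it looks like a list
        return "[" + ", ".join(str(item) for item in self.items) + "]"
```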
"Somewhat scrappy, but let's just say it's pretty close to looking like an actual list. Strings are not mutable, so we can't just add a bracket at the beginning or right at the end; there are ways to make this possible, but operations like this are better suited for functions anyway.\nLambdas as Parameters\nWe have a function known as sorted(). This function accepts an iterable, a key, and a reverse flag (discussed below). The iterable alone acts as expected.",
"sorted([4, 2, 8, 5, 2, 9])",
"However, we have much more power with the key. The key accepts a function that is applied to each element before comparison, so we can send in a lambda expression to change what sorted means to us. What if we wanted the even numbers to be at the end?",
"sorted([4, 2, 8, 5, 2, 9], key=lambda x: x%2 == 0)",
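The key function need not operate on numbers; a common pattern is sorting pairs by one field. A small illustrative example (the data is made up):

```python
# Sort (name, price) pairs by the second element using a lambda key.
items = [("pen", 3), ("book", 12), ("mug", 7)]
by_price = sorted(items, key=lambda pair: pair[1])
```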
"What if we wanted the numbers to be reversed, such as in descending order?",
"sorted([1, 5, 2, 5, 2, 9, 4], reverse=True)\n\nlst = [1, 3, 4,5]\n\nlst[::-1]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ky822/Data_Bootcamp
|
Code/SQL/SQL_Intro_DBcopy.ipynb
|
mit
|
[
"SQL Bootcamp\nSarah Beckett-Hile | NYU Stern School of Business | March 2015 \nToday's plan\n\nSQL, the tool of business\nRelational Databases\nWhy can't I do this in Excel? \nSetting up this course\nBasic Clauses\n\nAbout SQL\n\n\nSQL = \"Structured Query Language\" (pronounced \"S-Q-L\" or \"sequel\") \n\n\nDatabase language of choice for most businesses\n\n\nThe software optimized for storing relational databases that you access with SQL varies. Relational Database Management Systems (RDBMS) include MySQL, Microsoft SQL Server, Oracle, and SQLite. We will be working with SQLite.\n\n\nRelational Databases have multiple tables. Visualize it like an Excel file:\n\nDatabase = a single Excel file/workbook\nTable = a single worksheet in the same Excel file\n\n\n\nSQL lets you perform four basic functions: C.R.U.D. = Create, Read, Update, Delete\n\n\n\"Read\" is all you'll need for business analytics\n\n\nAdditional reading: http://www.w3schools.com/sql/sql_intro.asp\n\n\nFind examples of queries for business analysis at the bottom of this lesson page\n\n\nAbout this file\n\nWe'll use SQL in Python, specifically an IPython Notebook\nNo need to know what that means, but be sure you have SQL_support_code.py saved in the same folder as this file. \nDownload if you haven't already: https://www.dropbox.com/s/dacxdvkk11tyr4n/SQL_support_code.py?dl=0\n\n\nAll SQL queries are in red\nIf you get stumped on a challenge, there are cheats at the bottom of a challenge cell. You'll see something like \"#print(cheat1)\". Delete the hash and run the cell (SHIFT-RETURN). Once you've figured it out, replace the hash, and try again.",
"# check to see if support code is there\nimport os\n\nprint('List of files in working directory:')\n[print(file) for file in os.listdir()]\n\nfile = 'SQL_support_code.py'\nif not os.path.isfile(file):\n raise Exception('***** Program halted, file missing *****')",
"TO GET STARTED, CLICK \"CELL\" IN THE MENU BAR ABOVE, THEN SELECT \"RUN ALL\"",
"from SQL_support_code import *",
"Structure and Formatting Query Basics:\n\n\nIndentations and Returns:\n\nMostly arbitrary in SQL\nUsually for readability\n\n\n\nCapitalization:\n\nConvention to put keywords (functions, clauses) in CAPS\nConsistency is best\n\n\n\nOrder of Clauses:\n\nVery strict\nNot all clauses need to be present in a query, but when they are present, then they must be in the correct order\nBelow are the major clauses that we are going to cover. Use this list as reference if you are getting errors with your queries - there's a chance you just have the clauses in the wrong order:\n SELECT\n FROM\n JOIN...ON \n WHERE\n GROUP BY\n UNION\n ORDER BY\n LIMIT\n\n\n\nReading a table's structure:\nPRAGMA TABLE_INFO(table_name)\n\nRunning this will let you see the column heads and data types of any table. \nThe SQL query above only works for SQLite, which is what we're using here. If you're interested in knowing the equivalent versions for other RDBMS options, see the table below.",
"describe_differences",
"These are the names of the tables in our mini SQLite database:\nsales_table\ncar_table\nsalesman_table\ncust_table\n\nStart by looking at the columns and their data types in the sales_table.",
"run('''\n PRAGMA TABLE_INFO(sales_table)\n ''')",
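Outside the notebook, the same PRAGMA can be issued with Python's stdlib sqlite3 module; the toy table below is an assumption for illustration, not the course database:

```python
import sqlite3

# In-memory stand-in table; the notebook's run() helper does the same kind
# of query against the real sales_table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_table (id INTEGER, revenue REAL)")
cols = conn.execute("PRAGMA TABLE_INFO(sales_table)").fetchall()
# each row is (cid, name, type, notnull, dflt_value, pk)
```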
"Rewrite the query to look at the other tables:",
"run('''\n PRAGMA TABLE_INFO(sales_table)\n ''')\n#print(describe_cheat)",
"Different RDBMS have different datatypes available:\n- Oracle: http://docs.oracle.com/cd/B10501_01/appdev.920/a96624/03_types.htm\n- MySQL:\n - Numeric: http://dev.mysql.com/doc/refman/5.0/en/numeric-type-overview.html\n - Date/time: http://dev.mysql.com/doc/refman/5.0/en/date-and-time-type-overview.html\n - String/text: http://dev.mysql.com/doc/refman/5.0/en/string-type-overview.html\n- SQLite: https://www.sqlite.org/datatype3.html\n- Microsoft: http://msdn.microsoft.com/en-us/library/ms187752.aspx\n\n\n\n\n\nSELECT & FROM:\n\nBasically every \"read\" query will contain a SELECT and FROM clause\nIn the SELECT clause, you tell SQL which columns you want to see\nIn the FROM clause, you tell SQL the table where those columns are located \nMore on SELECT: http://www.w3schools.com/sql/sql_select.asp\n\n\nSELECT * (ALL COLUMNS)\nSELECT # specifies which columns you want to see \n * # asterisk returns all columns\nFROM # specifies the table or tables where these columns can be found\n table_name\n\nUse an asterisk to tell SQL to return all columns from the table:",
"run('''\n SELECT\n *\n FROM\n sales_table\n ''')",
"Write a query to select all columns from the car_table:",
"run('''\n SELECT NULL\n ''')\n#print(select_cheat1)",
"SELECT COLUMN:\nSELECT \n column_a, # comma-separate multiple columns\n column_b\nFROM \n table_name\n\nInstead of using an asterisk for \"all columns\", you can specify a particular column or columns:",
"run('''\n SELECT\n model_id, \n revenue\n FROM\n sales_table\n ''')",
"Write a query to select model_id and model from the car_table:",
"run('''\n SELECT NULL\n ''')\n#print(select_cheat2)",
"One more quick note on the basics of SELECT - technically you can SELECT a value without using FROM to specify a table. You could just tell the query exactly what you want to see in the result-set. If it's a number, you can write the exact number. If you are using various characters, put them in quotes.\nSee the query below as an example:",
"run('''\n SELECT \n 4, \n 5, \n 7, \n 'various characters or text'\n ''')",
"SELECT DISTINCT VALUES IN COLUMNS:\nSELECT\n DISTINCT column_a # returns a list of each unique value in column_a\nFROM\n table_name\n\n\nUse DISTINCT to return unique values from a column\nMore on DISTINCT: http://www.w3schools.com/sql/sql_distinct.asp\n\nThe query below pulls each distinct value from the model_id column in the sales_table, so each value is only listed one time:",
"run('''\n SELECT\n DISTINCT model_id\n FROM\n sales_table\n ''')",
"Use DISTINCT to select unique values from the salesman_id column in sales_table. Delete DISTINCT and rerun to see the effect.",
"run('''\n SELECT NULL\n ''')\n#print(select_cheat3)",
"WHERE\nSELECT \n column_a\nFROM\n table_name\nWHERE\n column_a = x # filters the result-set to rows where column_a's value is exactly x\n\nA few more options for the where clause:\nWHERE column_a = 'some_text' # put text in quotations. CAPITALIZATION IS IMPORTANT\n\nWHERE column_a != x # filters the result-set to rows where column_a's value DOES NOT EQUAL x\n\nWHERE column_a < x # filters the result-set to rows where column_a's value is less than x\n\nWHERE columna_a <= x # filters the result-set to rows where column_a's value is less than or equal to x\n\nWHERE column_a IN (x, y) # column_a's value can be EITHER x OR y\n\nWHERE column_a NOT IN (x, y) # column_a's value can be NEITHER x NOR y\n\nWHERE column_a BETWEEN x AND y # BETWEEN lets you specify a range\n\nWHERE column_a = x AND column_b = y # AND lets you add more filters\n\nWHERE column_a = x OR column_b = y # OR will include results that fulfill either criteria\n\nWHERE (column_a = x AND column_b = y) OR (column_c = z) # use parentheses to create complex AND/OR statements\n\n\nWHERE allows you to filter the result-set to only include rows matching specific values/criteria. If the value/criteria is text, remember to put it in single or double quotation marks\nMore on WHERE: http://www.w3schools.com/sql/sql_where.asp\n\nBelow, WHERE filters out any rows that don't match the criteria. The result-set will only contain rows where the payment type is cash AND where the model_id is 46:",
"run('''\n SELECT\n *\n FROM\n sales_table\n WHERE\n payment_type = 'cash'\n AND model_id = 46 \n ''')",
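To experiment without the course database, the same WHERE patterns can be tried against a throwaway in-memory table via stdlib sqlite3 (the rows below are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (model_id INTEGER, revenue REAL, payment_type TEXT)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [(46, 24500, "cash"), (31, 18000, "finance"), (36, 24900, "cash")],
)
# != plus IN: keep non-cash rows whose model_id is 31 or 36
not_cash = conn.execute(
    "SELECT model_id FROM sales WHERE payment_type != 'cash' AND model_id IN (31, 36)"
).fetchall()
# BETWEEN: keep rows with revenue in a range
between = conn.execute(
    "SELECT model_id FROM sales WHERE revenue BETWEEN 24000 AND 25000 ORDER BY model_id"
).fetchall()
```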
"Rewrite the query to return rows where payment_type is NOT cash, and the model_id is either 31 or 36\n- Extra: Try changing 'cash' to 'Cash' to see what happens.",
"run('''\n SELECT NULL\n ''')\n#print(where_cheat1)",
"Using BETWEEN, rewrite the query to return rows where the revenue was between 24,000 and 25,000:",
"run('''\n SELECT NULL\n ''')\n#print(where_cheat2)",
"WHERE column LIKE:\nSELECT \n column_a\nFROM\n table_name\nWHERE\n column_a LIKE '%text or number%' # Filters the result_set to rows where that text or value can be found, with % standing in as a wildcard\n\n\nLIKE lets you avoid issues with capitalization in quotes, and you can use % as a wildcard to stand in for any character\nUseful if you have an idea of what text you're looking for, but you are not sure of the spelling or you want all results that contain those letters\nMore on LIKE: http://www.w3schools.com/sql/sql_like.asp\nMore on wildcards: http://www.w3schools.com/sql/sql_wildcards.asp\n\nNote that you don't have to use the whole word \"cash\" when you use LIKE, and that the capital \"C\" now doesn't cause a problem:",
"run('''\n SELECT\n *\n FROM\n sales_table\n WHERE\n payment_type LIKE 'Cas%' \n ''').head()",
"Be careful with LIKE though - it can't deal with extra characters or misspellings:",
"run('''\n SELECT\n *\n FROM\n sales_table\n WHERE\n payment_type LIKE 'ces%'\n LIMIT 5\n ''')",
"LIKE and % will also return too much if you're not specific enough. This returns both 'cash' and 'finance' because both have a 'c' with some letters before or after:",
"run('''\n SELECT\n *\n FROM\n sales_table\n WHERE\n payment_type LIKE '%c%'\n LIMIT 5\n ''')",
"You can use different wildcards besides % to get more specific. An underscore is a substitute for exactly one letter or character, rather than any number of characters. The query below uses 3 underscores after c to get 'cash':",
"run('''\n SELECT\n *\n FROM\n sales_table\n WHERE\n payment_type LIKE 'c___'\n LIMIT 5\n ''')",
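The % and _ wildcards are easy to verify against a two-row toy table with stdlib sqlite3 (the values here are invented stand-ins):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (payment_type TEXT)")
conn.executemany("INSERT INTO sales VALUES (?)", [("cash",), ("finance",)])
# 'c' plus exactly three characters matches 'cash' but not 'finance'
exact = conn.execute(
    "SELECT payment_type FROM sales WHERE payment_type LIKE 'c___'"
).fetchall()
# a bare %c% matches any value containing a c, so both rows come back
loose = conn.execute(
    "SELECT payment_type FROM sales WHERE payment_type LIKE '%c%'"
).fetchall()
```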
"Say you can't remember the model of the car you're trying to look up. You know it's \"out\"...something. Outcast? Outstanding? Write a query to return the model_id and model from the car_table and use LIKE to help you search:",
"run('''\n SELECT NULL\n ''')\n#print(where_cheat3)",
"ORDER BY\nSELECT \n column_a\nFROM\n table_name\nWHERE # optional \n column_a = x\nORDER BY # sorts the result-set by column_a\n column_a DESC # DESC is optional. It sorts results in descending order (100->1) instead of ascending (1->100)\n\n\nWithout an ORDER BY clause, the default result-set will be ordered by however it appears in the database \nBy default, ORDER BY will sort values in ascending order (A→Z, 1→100). Add DESC to order results in descending order instead (Z→A, 100→1)\nMore on ORDER BY: http://www.w3schools.com/sql/sql_orderby.asp\n\nThe query below orders the result-set by revenue amount, starting with the largest amount listed first:",
"run('''\n SELECT\n *\n FROM\n sales_table\n ORDER BY\n revenue DESC\n LIMIT 5\n ''')",
"Rewrite the query above to look at the sticker_price of cars from the car_table in descending order:",
"run('''\n SELECT NULL\n ''')\n#print(order_cheat)",
"LIMIT\nSELECT\n column_a\nFROM\n table_name\nWHERE\n column_a = x # optional\nORDER BY\n column_a # optional\nLIMIT # Limits the result-set to N rows\n N\n\n\nLIMIT just limits the number of rows in your result set\nMore on LIMIT: http://www.w3schools.com/sql/sql_top.asp \nThe ability to limit results varies by RDBMS. Below you can see the different ways to do this:",
"limit_differences",
"The query below limits the number of rows to 5 results. Change it to 10 to get a quick sense of what we're doing here:",
"run('''\n SELECT\n *\n FROM\n sales_table\n LIMIT 5\n ''')",
"ALIASES\nSELECT\n T.column_a AS alias_a # creates a nickname for column_a, and states that it's from table_name (whose alias is T)\nFROM\n table_name AS T # creates a nickname for table_name\nWHERE\n alias_a = z # refer to an alias in the WHERE clause\nORDER BY\n alias_a # refer to an alias in the ORDER BY clause\n\n\nAliases are optional, but save you time and make column headers cleaner\nAS isn't necessary to create an alias, but it is commonly used \nThe convention is to use \"AS\" in the \"SELECT\" clause, but not in the \"FROM\" clause.\nMore on Aliases: http://www.w3schools.com/sql/sql_alias.asp\n\nChange the aliases for model_id and revenue, or add extra columns to see how they work:",
"run('''\n SELECT\n model_id AS Model_of_car,\n revenue AS Rev_per_car\n FROM \n sales_table\n ''')",
"You can use an alias in the ORDER BY and WHERE clauses now. Write a query to:\n- pull the model_id and revenue for each transaction\n- give model_id the alias \"Model\"\n- give revenue the alias \"Rev\"\n- limit the results to only include rows where the model_id is 36, use the alias in the WHERE clause\n- order the results by revenue in descending order, use the alias in the ORDER BY clause\n- Run the query\nTHEN:\n- Try giving model_id the alias \"ID\" and use it in the WHERE clause, then rerun the query. What do you think is causing the error?",
"run('''\n SELECT NULL\n ''') \n#print(alias_cheat)",
"You can also assign an alias to a table, and use the alias to tell SQL which table the column is coming from. This isn't of much use when you're only using one table, but it will come in handy when you start using multiple tables.\nBelow, the sales_table has the alias \"S\". Read \"S.model_id\" as \"the model_id column from S, which is the sales_table\"\nChange the S to another letter in the FROM clause and run. Why did you hit an error? What can you do to fix it?",
"run('''\n SELECT\n S.model_id,\n S.revenue\n FROM \n sales_table AS S\n LIMIT 5\n ''')",
"JOINS\nSELECT\n *\nFROM\n table_x \n JOIN table_y # use JOIN to add the second table\n ON table_x.column_a = table_y.column_a # use ON to specify which columns correspond on each table\n\n\nJoining tables is the most fundamental and useful part about relational databases\nUse columns on different tables with corresponding values to join the two tables\nThe format \"table_x.column_a\" can be read as \"column_a from table_x\"; it tells SQL the table where it can find that column\nMore on JOINS: http://www.w3schools.com/sql/sql_join.asp\n\nStart by looking at the first few rows of sales_table again:",
"run('''\n SELECT\n *\n FROM\n sales_table\n LIMIT 5\n ''')",
"Now the first few rows of the car_table:",
"run('''\n SELECT\n *\n FROM\n car_table\n LIMIT 5\n ''')",
"These tables are related. There's a column named \"model_id\" in the sales_table and a \"model_id\" in the car_table - but the column names don't need to be the same, what's important is that the values in the sales_table's model_id column correspond to the values in the car_table's model_id column. \nYou can join these tables by using these columns as keys.",
"run('''\n SELECT\n *\n FROM\n sales_table\n JOIN car_table ON sales_table.model_id = car_table.model_id\n LIMIT 10\n ''')",
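The same join can be reproduced end to end with stdlib sqlite3 on two tiny stand-in tables (table and column values invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (model_id INTEGER, revenue REAL)")
conn.execute("CREATE TABLE cars (model_id INTEGER, model TEXT)")
conn.execute("INSERT INTO sales VALUES (36, 24000)")
conn.execute("INSERT INTO cars VALUES (36, 'Camry')")
# inner join on the shared model_id key
rows = conn.execute("""
    SELECT cars.model, sales.revenue
    FROM sales
    JOIN cars ON sales.model_id = cars.model_id
""").fetchall()
```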
"Write a query to join the cust_table to the sales_table, using the customer_id columns in both tables as the key:",
"run('''\n SELECT NULL\n ''') \n#print(join_cheat1)",
"Rewrite the query from above, but instead of selecting all columns, specify just the customer gender and the revenue:",
"run('''\n SELECT NULL\n ''')\n#print(join_cheat2)",
"Rewrite the query from above, but this time select the customer_id, gender, and revenue:\n- You'll probably hit an error at first. Try to use what you've learned about this structure \"table_x.column_a\" to fix the issue. Why do you think you need to use this?",
"run('''\n SELECT NULL\n ''')\n#print(join_cheat3)",
"A column with the name customer_id appears in both the cust_table and the sales_table. SQL doesn't know which one you want to see. You have to tell it from which table you want the customer_id. \nThis can be important when columns in different tables have the same names but totally unrelated values.\nLook at the sales_table again:",
"run('''\n SELECT\n *\n FROM\n sales_table\n LIMIT 5\n ''') ",
"Above, there's a column called \"id\".\nNow look at the salesman_table again:",
"run('''\n SELECT\n *\n FROM\n salesman_table\n LIMIT 5\n ''') ",
"There's a column named \"id\" in the salesman_table too. However, it doesn't look like those IDs correspond to the sales_table IDs. In fact, it's the salesman_id column in the sales_table that corresponds to the id column in the salesman_table. More often than not, your tables will use different names for corresponding columns, and will have columns with identical names that don't correspond at all. \nWrite a query to join the salesman_table with the sales_table (select all columns using an asterisk)",
"run('''\n SELECT NULL\n ''') \n#print(join_cheat4)",
"Practice applying this \"table_x.column_a\" format to all columns in the SELECT clause when you are joining multiple tables, since multiple tables frequently use the same column names even when they don't correspond. \nIt's common to use single-letter aliases for tables to make queries shorter. Take a look at the query below and make sure you understand what's going on with the table aliases. It's the same query that you wrote earlier, but with aliases to help identify the columns",
"run('''\n SELECT\n S.customer_id,\n C.gender,\n S.revenue\n FROM\n sales_table AS S\n JOIN cust_table AS C on S.customer_id = C.customer_id\n ''')",
"Join the sales_table (assign it the alias S) and salesman_table (alias SM) again.\n- Select the id and salesman_id column from the sales_table \n- Also, select the id column from the salesman_table\n- Optional: assign aliases to the columns in the SELECT clause to make the result-set easier to read",
"run('''\n SELECT NULL\n ''') \n#print(join_cheat5)",
"Different Types of Joins\nThere are different types of joins you can do according to your needs. Here's a helpful way to visualize your options: http://www.codeproject.com/Articles/33052/Visual-Representation-of-SQL-Joins\nHowever, not all types of joins are compatible with SQLite and MySQL. The table below breaks down compatibility:",
"join_differences",
"So far, we've just done a simple join, also called an \"inner join\". To illustrate different types of joins, we're going to use a different \"database\" for the following lesson. First, let's take a look at each one:",
"run('''\n SELECT \n *\n FROM\n Dog_Table\n ''') \n\nrun('''\n SELECT \n *\n FROM\n Cat_Table\n ''') ",
"Notice that the Owner_Name columns on each table have some corresponding values (Michael, Gilbert, May, Elizabeth, and Donna are in both tables), but both also have values that don't overlap. \nJOINS or INNER JOINS\nSELECT\n *\nFROM \n table_x X\n JOIN table_y Y ON X.column_a = Y.column_a # Returns rows when values match on both tables.\n\nThis is what we used in the initial example. Simple joins (also called inner joins) will combine tables only where there are corresponding values on both tables.\nWrite a query below to join the Cat_Table and Dog_Table using the same method we've used before:",
"run('''\n SELECT NULL\n ''')\n#print(inner_join_cheat)",
"Notice that the result-set only includes the names that are in both tables. Think of inner joins as being the overlapping parts of a Venn Diagram. So, essentially we're looking at results only where the pet owner has both a cat and a dog. \n\nLEFT JOINS or LEFT OUTER JOINS\nSELECT\n *\nFROM \n table_x X\n LEFT JOIN table_y Y ON X.column_a = Y.column_a # Returns all rows from 1st table, rows that match from 2nd\n\n\nLEFT JOINS will return all rows from the first table, but only rows from the second table if a value matches on the key column. \n\nRewrite your query from above, but instead of \"JOIN\", write \"LEFT JOIN\":",
"run('''\n SELECT NULL\n ''')\n#print(left_join_cheat)",
"This time, you're seeing everything from the Dog_Table, but only results from the Cat_Table IF the owner also has a dog. \n\nOUTER JOINS or FULL OUTER JOINS:\nSELECT\n *\nFROM \n table_x X\n OUTER JOIN table_y Y ON X.column_a = Y.column_a # Returns all rows, regardless of whether values match\n\n\nOuter joins include ALL rows from both tables, even if the values on the key columns don't match up. \nSQLite doesn't support this, so the query below is a workaround to show you the visual effect of an outer join\nThis provides a great workaround for MySQL: http://stackoverflow.com/questions/4796872/full-outer-join-in-mysql\n\nFor now, this query won't totally make sense, just pay attention to the results so you can visualize an outer join:",
"run('''\n SELECT\n C.Owner_Name, \n Cat_Name, \n Dog_Name\n FROM\n Cat_Table C \n LEFT JOIN Dog_Table D ON D.Owner_Name = C.Owner_Name\n\n UNION ALL\n \n SELECT\n D.Owner_Name, \n ' ', \n Dog_Name \n FROM\n Dog_Table D \n WHERE \n Owner_Name NOT IN (SELECT Owner_Name from Cat_Table)\n ''')",
"Essentially, in Venn Diagram terms, an outer join lets you see all contents of both circles. This join will let you see all pet owners, regardless of whether they own only a cat or only a dog\n\nUsing the \"WHERE\" Clause to Join Tables\nSELECT\n *\nFROM\n table_x X\n JOIN table_y Y\nWHERE\n X.column_a = Y.column_a # tells SQL the key for the join\n\n\nSome people prefer to use the WHERE clause to specify the key for a join\nFine if the query is short, but SUPER messy when the query is complex\nWe won't use this moving forward, but it's good to see it in case you run across someone else's code and you need to make sense of it\n\nWhen it's simple, it's not so bad:",
"run('''\n SELECT\n C.model, \n S.revenue\n FROM\n sales_table S, car_table C \n WHERE\n S.model_id = C.model_id\n LIMIT 5\n ''')",
"When the query is longer, this method is messy. Suddenly it's harder to parse out which parts of the \"WHERE\" clause are actual filters, and which parts are just facilitating the join. \nNote that we've covered all of these clauses and expressions by now, try to parse out what's going on:",
"run('''\n SELECT\n C.make,\n C.model, \n S.revenue,\n CUST.gender,\n SM.first_name\n FROM\n sales_table S\n JOIN car_table C\n JOIN salesman_table SM\n JOIN cust_table CUST\n WHERE\n S.customer_id = CUST.customer_id\n AND S.model_id = C.model_id\n AND S.salesman_id = SM.id\n AND (C.model in ('Tundra', 'Camry', 'Corolla') OR C.make = 'Subaru')\n AND S.revenue between 17000 and 22000\n AND CUST.gender = 'female' \n AND SM.first_name NOT IN ('Kathleen', 'Samantha')\n LIMIT 5\n ''')",
"OPERATORS\nADDING / SUBTRACTING / MULTIPLYING / DIVIDING\nSELECT\n column_a + column_b # adds the values in column_a to the values in column_b\nFROM\n table_name\n\nUse the standard formats for add, subtract, multiply, and divide: + - * /\nThe query below subtracts cogs (from the car_table) from revenue (from the sales_table) to show us the gross_profit per transaction",
"run('''\n SELECT\n S.id,\n C.model, \n S.revenue,\n C.cogs,\n S.revenue - C.cogs AS gross_profit\n FROM\n sales_table S \n JOIN car_table C on S.model_id = C.model_id\n LIMIT 5\n ''') ",
"Rewrite the query above to return gross margin instead of gross profit. Rename the alias as well. Limit it to 5 results",
"run('''\n SELECT NULL\n ''') \n#print(operator_cheat)",
"CONCATENATING:\nConcatenating varies by RDBMS:",
"concat_differences",
"Here we'll use SQLite and use the concatenating operator || to combine words/values in different columns:",
"run('''\n SELECT\n last_name, \n first_name,\n last_name || ', ' || first_name AS full_name\n FROM\n salesman_table\n ''') ",
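Since || needs no table at all, it is easy to try directly through sqlite3 (the names here are invented):

```python
import sqlite3

# SQLite's || operator concatenates text values.
conn = sqlite3.connect(":memory:")
full_name = conn.execute("SELECT 'Doe' || ', ' || 'Jane'").fetchone()[0]
```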
"Use || to pull the make and model from the car_table and make it appear in this format: \"Model (Make)\"\n- give it an alias to clean up the column header, otherwise it'll look pretty messy",
"run('''\n SELECT NULL\n ''') \n#print(concat_cheat)",
"FUNCTIONS:\nSELECT\n SUM(column_a), # sums up the values in column_a\n AVG(column_a), # averages the values in column_a\n ROUND(AVG(column_a), 2), # rounds the averaged values in column_a to 2 digits\n COUNT(column_a), # counts the number of rows in column_a \n MAX(column_a), # returns the maximum value in column_a\n MIN(column_a), # returns the minimum value in column_a\n GROUP_CONCAT(column_a) # returns a comma separated list of all values in column_a\nFROM\n table_name\n\n\nFunctions can be applied to columns to help analyze data\nYou can find more than just these basic few in the link below, or just Google what you're looking to do - there's a lot of help available on forums \nMore on functions: http://www.w3schools.com/sql/sql_functions.asp\n\nThe function below will sum up everything in the revenue column. Note that now we only get one row:",
"run('''\n SELECT\n SUM(revenue) AS Total_Revenue\n FROM \n sales_table\n ''')",
"Rewrite the query to return the average cost of goods for a car in the car table. Try rounding it to cents. \n- If you can't remember the name of the column for cost of goods in the car_table, remember you can use \"SELECT * FROM car_table LIMIT 1\" to see the first row of all columns, or you can use \"PRAGMA TABLE_INFO(car_table)\"",
"run('''\n SELECT NULL\n ''')\n#print(avg_cheat)",
"Using COUNT(*) will return the number of rows in any given table. Rewrite the query to return the number of rows in the car_table:\n- After you've run the query, try changing it by adding \"WHERE make = 'Subaru'\" and see what happens",
"run('''\n SELECT NULL\n ''')\n#print(count_cheat)",
"You can apply functions on top of other operators. Below is the sum of gross profits:",
"run('''\n SELECT\n '$ ' || SUM(S.revenue - C.cogs) total_gross_profit\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id \n ''') ",
"Write a query to show the average difference between the sticker_price (in car_table) and the revenue.\n\nIf you want a challenge, try to join cust_table and limit the query to only look at transactions where the customer's age is over 35",
"run('''\n SELECT NULL\n ''') \n#print(avg_cheat2)",
"GROUP_CONCAT\nSELECT\n GROUP_CONCAT(column_a, '[some character separating items]') \nFROM\n table_x\n\nThis function is useful to return comma-separated lists of the values in a column",
"run('''\n SELECT \n GROUP_CONCAT(model, ', ') as Car_Models\n FROM\n car_table\n ''') ",
"Use GROUP_CONCAT to return a comma-separated list of last names from the salesman_table:",
"run('''\n SELECT NULL\n''')\n#print(concat_cheat)",
"GROUP BY:\nSELECT\n column_a, \n SUM(column_b) # sums up the values in column_b\nFROM\n table_name\nGROUP BY # creates one group for each unique value in column_a\n column_a\n\n\nCreates a group for each unique value in the column you specify\nExtremely helpful when you're using functions - it segments out results\nMore on GROUP BY: http://www.w3schools.com/sql/sql_groupby.asp\n\nThe query below creates a group for each unique value in the car_table's model column, then sums up the revenue for each group. Note that you can use an alias in the GROUP BY clause.",
"run('''\n SELECT\n C.model AS Car_Model, \n SUM(revenue) AS Total_Revenue\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id\n GROUP BY\n Car_Model\n ''')",
"Rewrite the query above to return the average gross profit (revenue - cogs) per make (remember that \"make\" is in the car_table)\nExtra things to try:\n- Round average revenue to two decimal points\n- Order the results by gross profit in descending order\n- Rename the make column as \"Car_Maker\" and use the alias in the GROUP BY clause\n- Rename gross profit column as \"Avg_Gross_Profit\" and use the alias in the ORDER BY clause\n- Join the salesman_table and filter results to only look at revenue where first_name is Michael\n - After you've gotten the query to run with all of these adjustments, think about the risks involved with adding something in the WHERE clause that doesn't show up in the SELECT clause. Think about a potential solution to these risks.",
"run('''\n SELECT NULL\n ''')\n#print(group_cheat)",
"Write a query to make a comma-separated list of models for each car maker:",
"run('''\n SELECT NULL\n ''')\n#print(group_cheat1)",
"GROUP BY, when used with joins and functions, can help you quickly see trends in your data. Parse out what's going on here:",
"run('''\n SELECT\n C.model AS Car_Model, \n MIN(S.revenue) || ' - ' || MAX(S.revenue) AS Min_to_Max_Sale,\n MAX(S.revenue) - MIN(S.revenue) AS Range,\n ROUND(AVG(S.revenue), 2) AS Average_Sale\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id \n GROUP BY\n Car_Model\n ORDER BY\n Average_Sale DESC\n ''')",
"You can also use GROUP BY with multiple columns to segment out the results further:",
"run('''\n SELECT\n C.make AS car_caker, \n payment_type,\n ROUND(AVG(revenue)) as avg_revenue\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id \n GROUP BY\n C.Make, \n payment_type\n ''')",
"Rewrite the query to find the total revenue grouped by each salesperson's first_name and by the customer's gender (gender column in cust_table)\n- For an extra challenge, use the concatenating operator to use the salesperson's full name instead\n- Add COUNT(S.id) to the SELECT clause to see the number of transactions in each group",
"run('''\n SELECT NULL\n ''')\n#print(group_cheat2)",
"\"HAVING\" in GROUP BY statements:\nSELECT\n column_a,\n SUM(column_b) AS alias_b\nFROM\n table_name\nGROUP BY\n column_a HAVING alias_b > x # only includes groups in column_a when the sum of column_b is greater than x\n\n\nIf you've applied a function to a column and want to filter to only show results meeting a particular criteria, use HAVING in your GROUP BY clause.\nMore on HAVING: http://www.w3schools.com/sql/sql_having.asp\n\nThe query below will sum up all the revenue for each car maker, but it will only show you results for car maker's whose total revenue is greater than 50,000:",
"run('''\n SELECT\n C.Make as Car_Maker, \n SUM(revenue) as Total_Revenue\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id\n GROUP BY\n Car_Maker HAVING Total_Revenue > 500000\n ''')",
"Rewrite the query above to look at average revenue per model, and using HAVING to filter your result-set to only include models whose average revenue is less than 18,000:",
"run('''\n SELECT NULL\n ''')\n#print(having_cheat)",
"HAVING vs WHERE:\nWHERE filters which rows will be included in the function, whereas HAVING filters what's returned after the function has been applied.\nTake a look at the query below. It might look like the query you just wrote (above) if you'd tried to use WHERE instead of HAVING:\nSELECT \n C.model as Car_Model,\n AVG(S.revenue) as Avg_Revenue\nFROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id \nWHERE \n S.revenue < 18000\nGROUP BY\n Car_Model\n\n\nFind the sales_table and join it to the car_table\nPull the data from the 'model' column in car_table and 'revenue' column in sales_table\nFilter out all rows where revenue is less than 18000\nAverage remaining rows for each Car_Model\n\nEven though AVG( ) appears early in the query, it's not actually being applied until after the WHERE statement has filtered out rows with less than 18,000 in revenue.\nThis is the result:",
"run('''\n SELECT \n C.model as Car_Model,\n AVG(S.revenue) as Avg_Revenue\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id \n WHERE \n S.revenue < 18000\n GROUP BY\n Car_Model\n ''')",
"All model_ids are returned, but the averages are all much lower than they should be. That's because the query first drops all rows that have revenue greater than 18000, and then averages the remaining rows.\nWhen you use HAVING, SQL follows these steps instead (this query should look like the one you wrote in the last challenge):\nSELECT\n C.model as Car_Model,\n AVG(S.revenue) as Avg_Revenue\nFROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id\nGROUP BY\n Car_Model HAVING Avg_Revenue < 18000\n\n\nFind the sales_table and join it to the car_table (same as before)\nPull the data from the 'model' column in car_table and 'revenue' column in sales_table (same as before)\nAverage the rows for each Car_Model\nReturn only the Car_Models whose averages are less than 18,000\n\nAnd as you can see, there's a big difference in these results and the results of the query that used \"WHERE\" instead of HAVING:",
"run('''\n SELECT\n C.model as Car_Model,\n AVG(S.revenue) as Avg_Revenue\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id\n GROUP BY\n Car_Model HAVING Avg_Revenue < 18000\n ''') ",
"HAVING & WHERE in the same query:\n\nSometimes, you will want to use WHERE and HAVING in the same query\nJust be aware of the order of the steps that SQL takes\nRule of thumb: if you're applying a function to a column, you probably don't want that column in there WHERE clause\n\nThis query is only looking at Toyotas whose revenue is less than 18,000, using WHERE to limit the results to Toyotas, and HAVING to limit the results by revenue:",
"run('''\n SELECT\n C.model as Car_Model,\n AVG(S.revenue) as Avg_Revenue\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id\n WHERE\n C.make = 'Toyota'\n GROUP BY\n Car_Model HAVING Avg_Revenue < 18000\n ''') ",
"Write a query with the following criteria:\n- SELECT clause:\n - salesman's last name and average revenue, rounded to the nearest cent\n- FROM clause:\n - sales_table joined with the salesman_table and the cust_table\n- WHERE clause: \n - only female customers\n- GROUP BY clause:\n - only salespeople whose average revenue was greater than 20,000\nSo, in plain English, we want to see salespeople whose average revenue for female customers is greater than 20,000",
"run('''\n SELECT NULL\n ''')\n#print(having_where_cheat)",
"ROLLUP\nSELECT\n column_a,\n SUM(column_b)\nFROM\n table_x\nGROUP BY\n ROLLUP(column_a) # adds up all groups' values in a single final row\n\n\nRollup, used with GROUP BY, provides subtotals and totals for your groups\nUseful for quick analysis\nVaries by RDBMS",
"rollup_differences",
"Because SQLite doesn't support ROLLUP, the query below is just intended to illustrate how ROLLUP would work. Don't worry about understanding the query itself, just get familiar with what's going on in the result-set:",
"run('''\n SELECT\n C.model AS Car_Model, \n SUM(S.revenue) as Sum_Revenue\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id \n GROUP BY C.model\n \n UNION ALL\n \n SELECT \n 'NULL', \n SUM(S.revenue)\n FROM \n sales_table S\n''')",
"Conditional Expressions: IF & CASE WHEN\nSELECT\n CASE WHEN column_a = x THEN some_value\n WHEN column_a = y THEN some_value2\n ELSE some_other_value\n END some_alias # alias optional after END\nFROM\n table_name\n\n\nConditional expressions let you use IF/THEN logic in SQL\nIn SQLite, you have to use CASE WHEN, but in other RDBMS you may prefer to use IF, depending on your needs\nMore on CASE WHEN: http://www.dotnet-tricks.com/Tutorial/sqlserver/1MS1120313-Understanding-Case-Expression-in-SQL-Server-with-Example.html",
"conditional_differences",
"Starting with a simple example, here we'll use CASE WHEN to create a new column on the sales_table:",
"run('''\n SELECT\n revenue, \n CASE WHEN revenue > 20000 THEN 'Revenue is more than 20,000' \n END Conditional_Column\n FROM\n sales_table\n LIMIT 10\n ''')",
"CASE WHEN gives you the value \"Revenue is more MORE 20,000\" when revenue in that same row is greater than 20,000. Otherwise, it has no value.\nNow let's add a level:",
"run('''\n SELECT\n revenue, \n CASE WHEN revenue > 20000 THEN 'Revenue is MORE than 20,000' \n WHEN revenue < 15000 THEN 'Revenue is LESS than 15,000'\n END Conditional_Column\n FROM\n sales_table\n LIMIT 10\n ''')",
"Now to deal with the blank spaces. You can assign an \"ELSE\" value to catch anything that's not included in the prior expressions:",
"run('''\n SELECT\n revenue,\n CASE WHEN revenue > 20000 THEN 'Revenue is MORE than 20,000' \n WHEN revenue < 15000 THEN 'Revenue is LESS than 15,000'\n ELSE 'NEITHER'\n END Conditional_Column\n FROM\n sales_table\n LIMIT 10\n ''') ",
"You can use values from another column as well. Remember this query from the GROUP BY lesson? It's often helpful to look at information broken out by multiple groups, but it's not especially easy to digest:",
"run('''\n SELECT\n C.Make as car_maker, \n payment_type,\n ROUND(AVG(S.revenue)) as avg_revenue\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id \n GROUP BY\n C.Make, \n payment_type\n ''')",
"Look at what's going on in that query without the AVG( ) function and the GROUP BY clause:",
"run('''\n SELECT\n C.Make as Car_Maker, \n payment_type,\n S.revenue\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id \n ''')",
"The result-set above is essentially what SQL is working with right before it separates the rows into groups and averages the revenue within those groups. \nNow, we're going to use some CASE WHEN statements to change this a little:",
"run('''\n SELECT\n C.Make as Car_Maker, \n payment_type,\n CASE WHEN payment_type = 'cash' THEN S.revenue END Cash_Revenue,\n CASE WHEN payment_type = 'finance' THEN S.revenue END Finance_Revenue\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id \n ''')",
"Now let's add back the ROUND() and AVG() functions and the GROUP BY statement:",
"run('''\n SELECT\n C.Make as Car_Maker, \n ROUND(AVG(CASE WHEN payment_type = 'cash' THEN S.revenue END)) AS Avg_Cash_Revenue,\n ROUND(AVG(CASE WHEN payment_type = 'finance' THEN S.revenue END)) AS Avg_Finance_Revenue\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id \n GROUP BY\n C.Make\n ''')",
"CASE WHEN makes this same information a lot easier to read by letting you pivot the result set a little.\nWrite a query using CASE WHEN to look at total revenue per gender, grouped by each car model",
"run('''\n SELECT NULL\n ''')\n#print(case_cheat)",
"CASE WHEN also lets you create new groups. Start by looking at the cust_table grouped by age - remember that COUNT(***) tells you how many rows are in each group (which is the same as telling you the number of customers in each group):",
"run('''\n SELECT\n age,\n COUNT(*) customers\n FROM\n cust_table\n GROUP BY\n age\n ''') ",
"When you want to segment your results, but there are too many different values for GROUP BY to be helpful, use CASE WHEN to make your own groups. GROUP BY the column you created with CASE WHEN to look at your newly created segments.",
"run('''\n SELECT\n CASE WHEN age BETWEEN 18 AND 24 THEN '18-24 years'\n WHEN age BETWEEN 25 AND 34 THEN '25-34 years'\n WHEN age BETWEEN 35 AND 44 THEN '35-45 years'\n WHEN age BETWEEN 45 AND 54 THEN '45-54 years'\n WHEN age BETWEEN 55 AND 64 THEN '55-64 years'\n END Age_Group,\n COUNT(*) as Customers\n FROM\n cust_table\n GROUP BY\n Age_Group\n ''') ",
"Ta-DA! Useful customer segments!\nTry to break up the \"Customers\" column into 2 columns - one for male and one for female. Keep the age segments intact. \n- Note that COUNT(***) cannot be wrapped around a CASE WHEN expression the way that other functions can. Try to think of a different way to get a count.\n- Extra challenge: try to express male and female customers as a percentage of the total for each group, rounded to 2 decimal points",
"run('''\n SELECT NULL\n ''')\n#print(case_cheat2)",
"NESTING\n\nNested queries allow you to put a query within a query\nDepending on your needs, you might put a nested query in the SELECT clause, the FROM clause, or the WHERE clause\n\nConsider the following query. We're using a nested query in the SELECT clause to see the sum of all revenue in the sales_table, and then using it again to what percentage of total revenue can be attributed to each Car_Model.",
"run('''\n SELECT\n C.model AS Car_Model,\n SUM(S.revenue) AS Revenue_Per_Model,\n (SELECT SUM(revenue) FROM sales_table) AS Total_Revenue,\n SUM(S.revenue) / (SELECT SUM(revenue) FROM sales_table) AS Contribution_to_Revenue\n FROM\n sales_table S\n JOIN car_table C ON C.model_id = S.model_id\n GROUP BY\n Car_Model\n ''')",
"Write a query to look at the model name and COGs for each car in car_table, then use a nested query to also look at the average COGs off all car models in a third column \n- Extra Challenge: add a fourth colum using another nested query to return the difference between each car model's COGs and the average COGs",
"run('''\n SELECT NULL\n ''')\n#print(nest_cheat1)",
"UNION & UNION ALL\nSELECT\n column_a\nFROM\n table_x\n\nUNION # or UNION ALL\n\nSELECT\n column_b\nFROM\n table_y\n\n\nUNION allows you to run a 2nd query (or 3rd or 4th), the results will be ordered by default with the results of the first query\nUNION ALL ensures that the results in the result set appear in order that the queries are written\nThe number of columns in each query must be the same in order for UNION & UNION ALL to work\n\nStarting with something simple (and a little nonsensical), UNION basically lets you run two entirely separate queries. Technically, they could have nothing to do with each other:",
"run('''\n SELECT\n model\n FROM\n car_table\n WHERE\n model = 'Tundra'\n \n UNION\n \n SELECT \n first_name\n FROM\n salesman_table\n WHERE first_name = 'Jared'\n ''')",
"Some things to note:\n- Although these queries and their results are unrelated, the column header is dictated by the query that appears first\n- Even though the query for \"Tundra\" is first, \"Tundra\" is second in the results. UNION will sort all results according to the normal default rules, acending order. \n- Replace UNION with UNION ALL and run the query again. What changes? \nUse UNION to join two queries. The first should have two columns: car model and COGs per car. The second query should show you to average COGs for all the car models, rounded to cents. You want the the average COGs to appear in the last row. \n- Remember that united queries need to have the same number of columns.",
"run('''\n SELECT NULL\n ''')\n#print(union_cheat1)",
"Consider the issue we had before, where SQLite didn't support WITH ROUNDUP. We used this query as a workaround. Does it make sense now?",
"run('''\n SELECT\n C.model AS Car_Model, \n SUM(S.revenue) as Sum_Revenue\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id \n GROUP BY C.model\n \n UNION ALL\n \n SELECT \n 'NULL', \n SUM(S.revenue)\n FROM \n sales_table S\n''')",
"Optimization:\nNon-optimized queries can cause a lot of problems because tables frequently have thousands or millions of rows:\nIf you haven't optimized your query, it might:\n\nTake several minutes (or even hours) to return the information you're requesting\nCrash your computer\nMuck up the server's processes, and you'll face the wrath of your company's system administrators once they figure out that you are the reason why the whole system has slowed down and everyone is sending them angry emails (this will probably happen to you no matter what. It's a rite of passage).\n\nFind a few more useful optimization tips here: http://hungred.com/useful-information/ways-optimize-sql-queries/\nSome of these seem strange, because we're going ling you NOT to do a bunch of things that you've learned how to do. Stick to this principal: if you're dealing with a small table, you can break a few of these rules. The larger the table, the fewer rules you can break.\nDO name specific columns in the SELECT CLAUSE:",
"run(''' \n SELECT\n date,\n revenue\n FROM \n sales_table\n ''').head()",
"DON'T use an asterisk unless you absolutely have to:\nThis can put a lot of strain on servers. Only use if you know for certain that your using a small table",
"run(''' \n SELECT\n *\n FROM \n sales_table\n ''').head()",
"DO use LIKE on small tables and in simple queries:\nLIKE is helpful if you know where to find something but you can't quite remember what it's called. Try to use a wildcard sparingly - don't use 2 when 1 will suffice:",
"run('''\n SELECT\n model_id, model\n FROM \n car_table\n WHERE\n model LIKE '%undra'\n ''')",
"DON'T use LIKE on large tables or when using JOINs:",
"run('''\n SELECT \n C.model, \n AVG(revenue)\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id \n WHERE\n C.model LIKE '%undra'\n''')",
"If you want to look at average revenue for car models that are like \"%undra\", run the LIKE query on the small table (car_table) first to figure out exacly what you're looking for, then use that information to search for the data you need from the sales_table\nDO dip your toe in by starting with a small data set\nUse WHERE to only view a few days of data at first. If the query runs quickly, add a few days at a time. If it starts to run slowly, run just a few days at a time and paste results into excel to combine results (or use Python...ask me later!!!).\nThe query below won't work because SQLite doesn't recognize dates, but remember these concepts when working with other RDBMS",
"run('''\n SELECT\n revenue,\n date\n FROM\n sales_table\n WHERE\n date = '1/1/14'\n ''')",
"DO use a UNION to look at result-sets that aren't mutually exclusive\nLet's say you were interested in seeing all Toyotas as well as cars with COGs of more than 13000. Write a query for the first group, then a query for the second group, and unite them with UNION. The result set won't show you repeats - if a row matches both result sets, it will only display once.",
"run('''\n SELECT\n make, model, cogs\n FROM\n car_table\n WHERE \n make = 'Toyota'\n \n UNION\n \n SELECT \n make, model, cogs\n FROM\n car_table\n WHERE\n cogs > 13000\n ''')",
"DON'T use OR when a UNION will generate the same results\nNote that we'll get the same results as above, but this query could run MUCH slower on a large table. It's tempting to use OR because it's faster to write, but unless you're dealing with very small tables, avoid the temptation. In 5 years of doing business analytics with SQL, I never used OR once. It's slow. Use a UNION.",
"run('''\n SELECT\n make, model, cogs\n FROM\n car_table\n WHERE\n make = 'Toyota' OR cogs > 13000\n ''')",
"DON'T use negative filters when a positive filter is possible\nLet's say you want to look at cars made by Toyato and Honda, but you don't care about Subaru. It might be tempting to use a negative filter:",
"run('''\n SELECT\n *\n FROM\n car_table\n WHERE\n make != 'Subaru'\n ''')",
"On a big table, this will run much more slowly than if you use a positive filter. Try this instead - it might require a little extra typing, but it will run much faster:",
"run('''\n SELECT\n *\n FROM\n car_table\n WHERE\n make in ('Toyota', 'Honda')\n ''')",
"Wrapping Up:\nDebugging:\nIf you run into errors when you start writing your own queries, here are some things to make sure your query has:\n- The right names for columns in the SELECT clause\n- Columns that can be found in the tables in the FROM clause\n- Consistent use of aliases throughout (if using aliases)\n- Joined tables on the corresponding column and proper aliases to indicate each table\n- The correct order of clauses:\n SELECT\n FROM\n JOIN...ON \n WHERE\n GROUP BY\n UNION\n ORDER BY\n LIMIT\n- Consistent use of capitalization for variables in quotes\n- Fuctions and opperators for real numbers, not integers \n- The same number of columns/expressions in their SELECT clauses of your queries when using UNION\nGain a deeper understanding:\nhttp://tech.pro/tutorial/1555/10-easy-steps-to-a-complete-understanding-of-sql\nPractice on other databases:\nhttp://sqlzoo.net/wiki/SELECT_.._WHERE\nSample Queries for Business Analysis:\nLet's say you recently opened a car dealership, and you now have one month's worth of sales data. You want to know how your sales team is doing.\nStart by looking at the number of cars each person sold last month. The names of the sales team and the list of transactions are on different tables in your database, but SQL can help you with that:",
"run('''\n SELECT\n first_name || ' ' || last_name as Salesperson,\n COUNT(*) as Cars_Sold\n FROM\n sales_table S\n JOIN salesman_table M ON S.salesman_id = M.id\n GROUP BY \n Salesperson\n ORDER BY \n Cars_Sold DESC\n ''') ",
"Add on the average amount of revenue made per sale:",
"run('''\n SELECT\n first_name || ' ' || last_name as Salesperson,\n COUNT(*) as Cars_Sold, \n ROUND(AVG(revenue)) as Revenue_per_Sale\n FROM\n sales_table S\n JOIN salesman_table M ON S.salesman_id = M.id\n GROUP BY \n Salesperson\n ORDER BY \n Cars_Sold DESC\n ''') ",
"Make it easier to compare the average revenue of Jared's sales to the average revenue of per sale overall by adding a column to see by what percent each salesperson's sales are more or less than average:",
"run('''\n SELECT\n first_name || ' ' || last_name as Salesperson,\n COUNT(*) as Cars_Sold, \n ROUND(AVG(revenue), 2) as Rev_per_Sale,\n ROUND((((AVG(revenue) \n - (SELECT AVG(revenue) from sales_table))\n /(SELECT AVG(revenue) from sales_table))*100), 1) || ' %'\n as RPS_Compared_to_Avg\n FROM\n sales_table S\n JOIN salesman_table M ON S.salesman_id = M.id\n GROUP BY \n Salesperson\n ORDER BY \n Cars_Sold DESC\n ''')",
"So maybe Jared is just selling cheaper cars.\nLet's go further and compare the sale price of each car against the sticker price to see how low Jared was willing to negotiate with customers. Sticker price is in anther table, but again, that's no problem with SQL:",
"run('''\n SELECT\n first_name || ' ' || last_name as Salesperson,\n COUNT(*) as Cars_Sold, \n '$ ' || ROUND(AVG(revenue), 2) as Rev_per_Sale,\n ROUND((((AVG(revenue) \n - (SELECT AVG(revenue) from sales_table where salesman_id != 215))\n /(SELECT AVG(revenue) from sales_table where salesman_id != 215))*100), 1) || ' %'\n AS RPS_Compared_to_Avg,\n ROUND((1-(SUM(revenue) / SUM(sticker_price)))*100, 1) || ' %' as Avg_Customer_Discount\n FROM\n sales_table S\n JOIN salesman_table M ON S.salesman_id = M.id\n JOIN car_table C ON S.model_id = C.model_id\n GROUP BY \n Salesperson\n ORDER BY \n Cars_Sold DESC\n ''') ",
"Looks like Jared is letting customers negotiate prices down much more than his peers. \nBut is this a real problem? How much is each salesperson contributing to our gross profits?",
"run('''\n SELECT\n first_name || ' ' || last_name as Salesperson,\n COUNT(*) as Cars_Sold, \n '$ ' || ROUND(AVG(revenue), 2) as Rev_per_Sale,\n ROUND((((AVG(revenue) \n - (SELECT AVG(revenue) from sales_table where salesman_id != 215))\n /(SELECT AVG(revenue) from sales_table where salesman_id != 215))*100), 1) || ' %'\n AS RPS_Compared_to_Peers,\n ROUND((1-(SUM(revenue) / SUM(sticker_price)))*100, 1) || ' %' as Avg_Customer_Discount, \n ROUND(((SUM(revenue)-sum(C.cogs))\n /(SELECT SUM(revenue)-sum(cogs) FROM sales_table S join car_table C on S.model_id = C.model_id))*100, 1) || ' %' as Gross_Profit_Contribution\n FROM\n sales_table S\n JOIN salesman_table M ON S.salesman_id = M.id\n JOIN car_table C ON S.model_id = C.model_id\n GROUP BY \n Salesperson\n ORDER BY \n Cars_Sold DESC\n ''') ",
"SQL really lets you dig. \nSome other quick examples - we could do a gender breakdown of customers per car model and add a total at the bottom:",
"run('''\n SELECT\n C.model as Car_Model, \n ROUND(SUM(CASE WHEN CUST.gender = 'female' THEN 1 END)/(COUNT(S.id)*1.0), 2) AS '% Female Customers',\n ROUND(SUM(CASE WHEN CUST.gender = 'male' THEN 1 END)/(COUNT(S.id)*1.0), 2) AS '% Male Customers'\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id \n JOIN cust_table CUST on S.customer_id = CUST.customer_id\n GROUP BY \n Car_Model\n \n UNION ALL\n \n SELECT\n 'Total:', \n ROUND(SUM(CASE WHEN CUST.gender = 'female' THEN 1 END)/(COUNT(S.id)*1.0), 2) AS '% Female Customers',\n ROUND(SUM(CASE WHEN CUST.gender = 'male' THEN 1 END)/(COUNT(S.id)*1.0), 2) AS '% Male Customers'\n FROM\n sales_table S\n JOIN cust_table CUST on S.customer_id = CUST.customer_id\n ''')",
"Easily create age groups and see how aggressively each group negotiates (judged by the difference between the actual sale amount and the sticker price):",
"run('''\n SELECT\n CASE WHEN age BETWEEN 18 AND 24 THEN '18-24 years'\n WHEN age BETWEEN 25 AND 34 THEN '25-34 years'\n WHEN age BETWEEN 35 AND 44 THEN '35-44 years'\n WHEN age BETWEEN 45 AND 54 THEN '45-54 years'\n WHEN age BETWEEN 55 AND 64 THEN '55-64 years'\n END Age_Group,\n ROUND((SUM(S.revenue)-SUM(C.sticker_price))/SUM(C.sticker_price), 2) as '% Paid Below Sticker Price'\n FROM\n sales_table S\n JOIN car_table C on S.model_id = C.model_id\n JOIN cust_table CUST on S.customer_id = CUST.customer_id\n GROUP BY\n Age_Group\n ''')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
florianm/biosys-etl
|
Kimberley LCI.ipynb
|
mit
|
[
"Overview\nThis notebook will contain the loading component of the Kimberley data loading procedure.\nIn preparation, the original data will have been \n\nuploaded as-is to DPaW's internal CKAN data catalogue, \ncleaned in OpenRefine (extract & tranform),\nexported as CSV from OpenRefine, and \nuploaded as additional resources to the CKAN dataset.\n\nThis workbook will parse the CSV versions and upload the data to BioSys via its API.\nWorkhorse functions will be located in a separate file helpers.py.\nSetup\nCopy secret_template.py to secret.py and modify to contain your CKAN instance and API key.",
"import ckanapi\nimport csv\nimport json\nimport requests\n\nfrom secret import CKAN, LCI, BIOSYS\nimport helpers as h",
"ck will be a ckanapi instance that carries your CKAN account's write permissions, and is able to read all public datasets.",
"ck = ckanapi.RemoteCKAN(CKAN[\"dpaw-internal\"][\"url\"], apikey=CKAN[\"dpaw-internal\"][\"key\"])",
"A CKAN resource's URL changes if the file resource changes, but the resource ID will be persistent. \nThe config dict LCI lists resource names (from original data worksheet names) against their CKAN resource ID. \nA helper function get_data reads all configured datasets (CSV resources in CKAN).",
"data = h.get_data(ck, LCI)\ndata\n\n[r for r in data[\"sites\"]][0]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
bizreach/common-ml
|
common/notebook/skchainer/iris_save_restore.ipynb
|
apache-2.0
|
[
"From Tensor SkFlow: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/skflow/iris_save_restore.py\nImport",
"from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport shutil\n\nfrom sklearn import datasets, metrics, cross_validation\nfrom tensorflow.contrib import learn\n\nimport chainer.functions as F\nimport chainer.links as L\nfrom chainer import serializers, optimizers, Chain\nfrom commonml.skchainer import ChainerEstimator, SoftmaxCrossEntropyClassifier\n\nimport logging\nlogging.basicConfig(format='%(levelname)s : %(message)s', level=logging.INFO)\nlogging.root.level = 20\n\niris = datasets.load_iris()\nX_train, X_test, y_train, y_test = cross_validation.train_test_split(iris.data, iris.target,\n test_size=0.2, random_state=42)\n\nclass Model(Chain):\n\n def __init__(self, in_size):\n super(Model, self).__init__(l1=L.Linear(in_size, 3))\n\n def __call__(self, x):\n h1 = self.l1(x)\n return h1\n\nclassifier = ChainerEstimator(model=SoftmaxCrossEntropyClassifier(Model(X_train.shape[1])),\n optimizer=optimizers.AdaGrad(lr=0.1),\n batch_size=100,\n device=0,\n stop_trigger=(100, 'epoch'))\nclassifier.fit(X_train, y_train)\nscore = metrics.accuracy_score(y_test, classifier.predict(X_test))\nprint('Accuracy: {0:f}'.format(score))",
"Clean checkpoint folder if exists",
"try:\n shutil.rmtree('/tmp/chainer_examples')\nexcept OSError:\n pass",
"Save model, parameters and learned variables.",
"os.makedirs('/tmp/chainer_examples/')\nserializers.save_hdf5('/tmp/chainer_examples/iris_custom_model', classifier.model.predictor)\nserializers.save_hdf5('/tmp/chainer_examples/iris_custom_optimizer', classifier.optimizer)\nclassifier = None",
"Restore everything",
"model = Model(X_train.shape[1])\nserializers.load_hdf5('/tmp/chainer_examples/iris_custom_model', model)\nnew_classifier = ChainerEstimator(model=SoftmaxCrossEntropyClassifier(model),\n optimizer=optimizers.AdaGrad(lr=0.1),\n batch_size=100,\n device=0,\n stop_trigger=(100, 'epoch'))\nserializers.load_hdf5('/tmp/chainer_examples/iris_custom_optimizer', new_classifier.optimizer)\nscore = metrics.accuracy_score(y_test, new_classifier.predict(X_test))\nprint('Accuracy: {0:f}'.format(score))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
empet/Math
|
Animating-the-Dragon-curve-construction.ipynb
|
bsd-3-clause
|
[
"Animated construction of the Dragon curve\nThe best-known method of drawing a Dragon curve uses turtle graphics. Here we implement a method visually illustrated \n in a video posted by Numberphile: \n https://www.youtube.com/watch?v=NajQEiKFom4.\nWe start with a vertical segment, and the successive rotations are counterclockwise.",
"import numpy as np\nfrom numpy import pi\nimport plotly.graph_objects as go\n\ndef rot_matrix(alpha):\n#Define the matrix of rotation about the origin with an angle of alpha radians:\n return np.array([[np.cos(alpha), -np.sin(alpha)], \n [np.sin(alpha), np.cos(alpha)]])\n\ndef rotate_dragon(x, y, alpha=pi/2):\n #x, y: lists or 1D-arrays containing the (x, y)-coordinates of the turn points on the dragon curve constructed \n # in a single step\n X, Y = rot_matrix(alpha).dot(np.stack((x, y))) # the lists of coordinates of turn points on the rotated curve\n return X, Y\n\n\n#the initial-step dragon curve is represented by a vertical line of length L\nL = 0.12\nX = np.array([0, 0])\nY = np.array([-L, 0])\n\nfig = go.Figure(data=[go.Scatter(x=X,y=Y, \n mode='lines', \n line_color='#0000ee',\n line_width=1.5,\n showlegend=False)\n ])\ntitle = \"Animated construction of the Dragon curve,<br>through successive rotations\" \nfig.update_layout(title_text=title, title_x=0.5,\n font=dict(family='Balto', size=16),\n width=700, height=700,\n xaxis_visible=False, \n yaxis_visible=False,\n \n xaxis_range=[-11, 6],\n yaxis_range=[-11, 3],\n #margin_l=40,\n );",
"Frame 0 displays the initial vertical segment, i.e. the dragon curve defined at step 0 of the iterative \nconstruction process.",
"alpha = pi/10 # The rotation of 90 degrees is performed as 5 successive rotations of 18 degrees = pi/10 radians\nn_rot90 = 13 # we have 13 steps\nframes = []\n\nfor k in range(n_rot90):\n #Record the last point on the dragon, defined in the previous step\n x0, y0 = X[-1], Y[-1]\n x = X-x0 #Translate so that (x0, y0) becomes the center of rotation\n y = Y-y0\n for j in range(5): \n X, Y = rotate_dragon(x, y, alpha=(j+1)*alpha)\n X = np.concatenate((x[:-1], X[::-1]), axis=None) #concatenate to the (k-1)^th step dragon its rotated version\n Y = np.concatenate((y[:-1], Y[::-1]), axis=None)\n X = X+x0\n Y = Y+y0\n frames.append(go.Frame(data=[go.Scatter(x=X,y=Y)],\n traces=[0]))",
"Define a button that triggers the animation:",
"buttonPlay = {'args': [None, \n {'frame': {'duration': 100,\n 'redraw': False}, \n 'transition': {'duration': 0}, \n 'fromcurrent': True,\n 'mode': 'immediate'}],\n 'label': 'Play',\n 'method': 'animate'}\n\nfig.update_layout(updatemenus=[{'buttons': [buttonPlay],\n 'showactive': False,\n 'type': 'buttons',\n 'x': 1,\n 'xanchor': 'left',\n 'y': 1,\n 'yanchor': 'top'\n }])\n\n \n\nfig.frames=frames\n\nimport chart_studio.plotly as py\npy.iplot(fig, filename='rot-dragon1')",
"A gif file derived from this animation is posted on Wikimedia."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ALEXKIRNAS/DataScience
|
CS231n/assignment2/PyTorch.ipynb
|
mit
|
[
"Training a ConvNet PyTorch\nIn this notebook, you'll learn how to use the powerful PyTorch framework to specify a conv net architecture and train it on the CIFAR-10 dataset.",
"import torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch.autograd import Variable\nfrom torch.utils.data import DataLoader\nfrom torch.utils.data import sampler\n\nimport torchvision.datasets as dset\nimport torchvision.transforms as T\n\nimport numpy as np\n\nimport timeit",
"What's this PyTorch business?\nYou've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.\nFor the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, PyTorch (or TensorFlow, if you switch over to that notebook). \nWhy?\n\nOur code will now run on GPUs! Much faster training. When using a framework like PyTorch or TensorFlow you can harness the power of the GPU for your own custom neural network architectures without having to write CUDA code directly (which is beyond the scope of this class).\nWe want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand. \nWe want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :) \nWe want you to be exposed to the sort of deep learning code you might run into in academia or industry. \n\nHow will I learn PyTorch?\nIf you've used Torch before, but are new to PyTorch, this tutorial might be of use: http://pytorch.org/tutorials/beginner/former_torchies_tutorial.html\nOtherwise, this notebook will walk you through much of what you need to do to train models in Torch. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.\nLoad Datasets\nWe load the CIFAR-10 dataset. This might take a couple minutes the first time you do it, but the files should stay cached after that.",
"class ChunkSampler(sampler.Sampler):\n \"\"\"Samples elements sequentially from some offset. \n Arguments:\n num_samples: # of desired datapoints\n start: offset where we should start selecting from\n \"\"\"\n def __init__(self, num_samples, start = 0):\n self.num_samples = num_samples\n self.start = start\n\n def __iter__(self):\n return iter(range(self.start, self.start + self.num_samples))\n\n def __len__(self):\n return self.num_samples\n\nNUM_TRAIN = 49000\nNUM_VAL = 1000\n\ncifar10_train = dset.CIFAR10('./cs231n/datasets', train=True, download=True,\n transform=T.ToTensor())\nloader_train = DataLoader(cifar10_train, batch_size=64, sampler=ChunkSampler(NUM_TRAIN, 0))\n\ncifar10_val = dset.CIFAR10('./cs231n/datasets', train=True, download=True,\n transform=T.ToTensor())\nloader_val = DataLoader(cifar10_val, batch_size=64, sampler=ChunkSampler(NUM_VAL, NUM_TRAIN))\n\ncifar10_test = dset.CIFAR10('./cs231n/datasets', train=False, download=True,\n transform=T.ToTensor())\nloader_test = DataLoader(cifar10_test, batch_size=64)\n",
"For now, we're going to use a CPU-friendly datatype. Later, we'll switch to a datatype that will move all our computations to the GPU and measure the speedup.",
"dtype = torch.FloatTensor # the CPU datatype\n\n# Constant to control how frequently we print train loss\nprint_every = 100\n\n# This is a little utility that we'll use to reset the model\n# if we want to re-initialize all our parameters\ndef reset(m):\n if hasattr(m, 'reset_parameters'):\n m.reset_parameters()",
"Example Model\nSome assorted tidbits\nLet's start by looking at a simple model. First, note that PyTorch operates on Tensors, which are n-dimensional arrays functionally analogous to numpy's ndarrays, with the additional feature that they can be used for computations on GPUs.\nWe'll provide you with a Flatten function, which we explain here. Remember that our image data (and more relevantly, our intermediate feature maps) are initially N x C x H x W, where:\n* N is the number of datapoints\n* C is the number of channels\n* H is the height of the intermediate feature map in pixels\n* W is the width of the intermediate feature map in pixels\nThis is the right way to represent the data when we are doing something like a 2D convolution, which needs spatial understanding of where the intermediate features are relative to each other. When we input data into fully connected affine layers, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. So, we use a \"Flatten\" operation to collapse the C x H x W values per representation into a single long vector. The Flatten function below first reads in the N, C, H, and W values from a given batch of data, and then returns a \"view\" of that data. \"View\" is analogous to numpy's \"reshape\" method: it reshapes x's dimensions to be N x ??, where ?? is allowed to be anything (in this case, it will be C x H x W, but we don't need to specify that explicitly).",
"class Flatten(nn.Module):\n def forward(self, x):\n N, C, H, W = x.size() # read in N, C, H, W\n return x.view(N, -1) # \"flatten\" the C * H * W values into a single vector per image",
"The example model itself\nThe first step to training your own model is defining its architecture.\nHere's an example of a convolutional neural network defined in PyTorch -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up. nn.Sequential is a container which applies each layer\none after the other.\nIn that example, you see 2D convolutional layers (Conv2d), ReLU activations, and fully-connected layers (Linear). You also see the Cross-Entropy loss function, and the Adam optimizer being used. \nMake sure you understand why the parameters of the Linear layer are 5408 and 10.",
"# Here's where we define the architecture of the model... \nsimple_model = nn.Sequential(\n nn.Conv2d(3, 32, kernel_size=7, stride=2),\n nn.ReLU(inplace=True),\n Flatten(), # see above for explanation\n nn.Linear(5408, 10), # affine layer\n )\n\n# Set the type of all data in this model to be FloatTensor \nsimple_model.type(dtype)\n\nloss_fn = nn.CrossEntropyLoss().type(dtype)\noptimizer = optim.Adam(simple_model.parameters(), lr=1e-2) # lr sets the learning rate of the optimizer",
"PyTorch supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful). One note: what we call in the class \"spatial batch norm\" is called \"BatchNorm2D\" in PyTorch.\n\nLayers: http://pytorch.org/docs/nn.html\nActivations: http://pytorch.org/docs/nn.html#non-linear-activations\nLoss functions: http://pytorch.org/docs/nn.html#loss-functions\nOptimizers: http://pytorch.org/docs/optim.html#algorithms\n\nTraining a specific model\nIn this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the PyTorch documentation and configuring your own model. \nUsing the code provided above as guidance, and using the following PyTorch documentation, specify a model with the following architecture:\n\n7x7 Convolutional Layer with 32 filters and stride of 1\nReLU Activation Layer\nSpatial Batch Normalization Layer\n2x2 Max Pooling layer with a stride of 2\nAffine layer with 1024 output units\nReLU Activation Layer\nAffine layer from 1024 input units to 10 outputs\n\nAnd finally, set up a cross-entropy loss function and the RMSprop learning rule.",
"fixed_model_base = nn.Sequential( \n nn.Conv2d(3, 32, kernel_size=7, stride=1), # N x 32 x 32 x 3 -> N x 26 x 26 x 32\n nn.ReLU(inplace=True), \n nn.BatchNorm2d(32),\n nn.MaxPool2d(kernel_size=2, stride=2), # N x 26 x 26 x 32 -> N x 13 x 13 x 32\n Flatten(),\n nn.Linear(5408, 1024),\n nn.ReLU(inplace=True),\n nn.Linear(1024, 10), # affine layer\n )\n\nfixed_model = fixed_model_base.type(dtype)",
"To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):",
"## Now we're going to feed a random batch into the model you defined and make sure the output is the right size\nx = torch.randn(64, 3, 32, 32).type(dtype)\nx_var = Variable(x.type(dtype)) # Construct a PyTorch Variable out of your input data\nans = fixed_model(x_var) # Feed it through the model! \n\n# Check to make sure what comes out of your model\n# is the right dimensionality... this should be True\n# if you've done everything correctly\nnp.array_equal(np.array(ans.size()), np.array([64, 10])) ",
"GPU!\nNow, we're going to switch the dtype of the model and our data to the GPU-friendly tensors, and see what happens... everything is the same, except we are casting our model and input tensors as this new dtype instead of the old one.\nIf this returns false, or otherwise fails in a not-graceful way (i.e., with some error message), you may not have an NVIDIA GPU available on your machine. If you're running locally, we recommend you switch to Google Cloud and follow the instructions to set up a GPU there. If you're already on Google Cloud, something is wrong -- make sure you followed the instructions on how to request and use a GPU on your instance. If you did, post on Piazza or come to Office Hours so we can help you debug.",
"# Verify that CUDA is properly configured and you have a GPU available\n\ntorch.cuda.is_available()\n\nimport copy\ngpu_dtype = torch.cuda.FloatTensor\n\nfixed_model_gpu = copy.deepcopy(fixed_model_base).type(gpu_dtype)\n\nx_gpu = torch.randn(64, 3, 32, 32).type(gpu_dtype)\nx_var_gpu = Variable(x.type(gpu_dtype)) # Construct a PyTorch Variable out of your input data\nans = fixed_model_gpu(x_var_gpu) # Feed it through the model! \n\n# Check to make sure what comes out of your model\n# is the right dimensionality... this should be True\n# if you've done everything correctly\nnp.array_equal(np.array(ans.size()), np.array([64, 10]))",
"Run the following cell to evaluate the performance of the forward pass running on the CPU:",
"%%timeit \nans = fixed_model(x_var)",
"... and now the GPU:",
"%%timeit \ntorch.cuda.synchronize() # Make sure there are no pending GPU computations\nans = fixed_model_gpu(x_var_gpu) # Feed it through the model! \ntorch.cuda.synchronize() # Make sure there are no pending GPU computations",
"You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use the GPU datatype for your model and your tensors: as a reminder that is torch.cuda.FloatTensor (in our notebook here as gpu_dtype)\nTrain the model.\nNow that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the simple_model we provided above).\nMake sure you understand how each PyTorch function used below corresponds to what you implemented in your custom neural network implementation.\nNote that because we are not resetting the weights anywhere below, if you run the cell multiple times, you are effectively training multiple epochs (so your performance should improve).\nFirst, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function:",
"loss_fn = nn.CrossEntropyLoss().type(dtype)\noptimizer = optim.RMSprop(fixed_model_gpu.parameters(), lr=1e-3)\n\n# This sets the model in \"training\" mode. This is relevant for some layers that may have different behavior\n# in training mode vs testing mode, such as Dropout and BatchNorm. \nfixed_model_gpu.train()\n\n# Load one batch at a time.\nfor t, (x, y) in enumerate(loader_train):\n x_var = Variable(x.type(gpu_dtype))\n y_var = Variable(y.type(gpu_dtype).long())\n\n # This is the forward pass: predict the scores for each class, for each x in the batch.\n scores = fixed_model_gpu(x_var)\n \n # Use the correct y values and the predicted y values to compute the loss.\n loss = loss_fn(scores, y_var)\n \n if (t + 1) % print_every == 0:\n print('t = %d, loss = %.4f' % (t + 1, loss.data[0]))\n\n # Zero out all of the gradients for the variables which the optimizer will update.\n optimizer.zero_grad()\n \n # This is the backwards pass: compute the gradient of the loss with respect to each \n # parameter of the model.\n loss.backward()\n \n # Actually update the parameters of the model using the gradients computed by the backwards pass.\n optimizer.step()",
"Now you've seen how the training process works in PyTorch. To save you writing boilerplate code, we're providing the following helper functions to help you train for multiple epochs and check the accuracy of your model:",
"def train(model, loss_fn, optimizer, num_epochs = 1):\n for epoch in range(num_epochs):\n print('Starting epoch %d / %d' % (epoch + 1, num_epochs))\n model.train()\n for t, (x, y) in enumerate(loader_train):\n x_var = Variable(x.type(gpu_dtype))\n y_var = Variable(y.type(gpu_dtype).long())\n\n scores = model(x_var)\n \n loss = loss_fn(scores, y_var)\n if (t + 1) % print_every == 0:\n print('t = %d, loss = %.4f' % (t + 1, loss.data[0]))\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\ndef check_accuracy(model, loader):\n if loader.dataset.train:\n print('Checking accuracy on validation set')\n else:\n print('Checking accuracy on test set') \n num_correct = 0\n num_samples = 0\n model.eval() # Put the model in test mode (the opposite of model.train(), essentially)\n for x, y in loader:\n x_var = Variable(x.type(gpu_dtype), volatile=True)\n\n scores = model(x_var)\n _, preds = scores.data.cpu().max(1)\n num_correct += (preds == y).sum()\n num_samples += preds.size(0)\n acc = float(num_correct) / num_samples\n print('Got %d / %d correct (%.2f)' % (num_correct, num_samples, 100 * acc))",
"Check the accuracy of the model.\nLet's see the train and check_accuracy code in action -- feel free to use these methods when evaluating the models you develop below.\nYou should get a training loss of around 1.2-1.4, and a validation accuracy of around 50-60%. As mentioned above, if you re-run the cells, you'll be training more epochs, so your performance will improve past these numbers.\nBut don't worry about getting these numbers better -- this was just practice before you tackle designing your own model.",
"torch.cuda.random.manual_seed(12345)\nfixed_model_gpu.apply(reset)\ntrain(fixed_model_gpu, loss_fn, optimizer, num_epochs=1)\ncheck_accuracy(fixed_model_gpu, loader_val)",
"Don't forget the validation set!\nAnd note that you can use the check_accuracy function to evaluate on either the test set or the validation set, by passing either loader_test or loader_val as the second argument to check_accuracy. You should not touch the test set until you have finished your architecture and hyperparameter tuning, and only run the test set once at the end to report a final value. \nTrain a great model on CIFAR-10!\nNow it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves >=70% accuracy on the CIFAR-10 validation set. You can use the check_accuracy and train functions from above.\nThings you should try:\n\nFilter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient\nNumber of filters: Above we used 32 filters. Do more or fewer do better?\nPooling vs Strided Convolution: Do you use max pooling or just stride convolutions?\nBatch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?\nNetwork architecture: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:\n[conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]\n[conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]\n[batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]\n\n\nGlobal Average Pooling: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get to a 1x1 image picture (1, 1 , Filter#), which is then reshaped into a (Filter#) vector. 
This is used in Google's Inception Network (See Table 1 for their architecture).\nRegularization: Add l2 weight regularization, or perhaps use Dropout.\n\nTips for training\nFor each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:\n\nIf the parameters are working well, you should see improvement within a few hundred iterations\nRemember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.\nOnce you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.\nYou should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set.\n\nGoing above and beyond\nIf you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.\n\nAlternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.\nAlternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.\nModel ensembles\nData augmentation\nNew Architectures\nResNets where the input from the previous layer is added to the output.\nDenseNets where inputs into previous layers are concatenated together.\nThis blog has an in-depth overview\n\nIf you do decide to implement something extra, clearly describe it in the \"Extra Credit Description\" cell below.\nWhat we expect\nAt the very least, you should be able to train a ConvNet that gets at least 70% accuracy on the validation set. 
This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.\nYou should use the space below to experiment and train your network. \nHave fun and happy training!",
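The coarse-to-fine hyperparameter search described in the tips above can be sketched framework-free. Everything in this snippet (function names, sampling ranges, the toy `evaluate` callback) is illustrative and not part of the assignment code; in practice `evaluate` would train briefly and return validation accuracy:

```python
# Illustrative coarse-stage random search: sample learning rate and
# regularization strength log-uniformly, keep the best-scoring combination.
import math
import random

def sample_log_uniform(low, high):
    """Sample log-uniformly between low and high (appropriate for lr, reg)."""
    return 10 ** random.uniform(math.log10(low), math.log10(high))

def random_search(evaluate, n_trials=10, lr_range=(1e-5, 1e-1), reg_range=(1e-6, 1e-2)):
    best = None
    for _ in range(n_trials):
        lr = sample_log_uniform(*lr_range)
        reg = sample_log_uniform(*reg_range)
        score = evaluate(lr, reg)  # in the coarse stage: train a few iterations only
        if best is None or score > best[0]:
            best = (score, lr, reg)
    return best

# toy objective standing in for a short training run (peaks near lr = 1e-3)
print(random_search(lambda lr, reg: -abs(math.log10(lr) + 3), n_trials=50))
```

For the fine stage you would call `random_search` again with ranges narrowed around the best coarse-stage values and more training iterations per trial.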
"# Train your model here, and make sure the output of this cell is the accuracy of your best model on the \n# train, val, and test sets. Here's some code to get you started. The output of this cell should be the training\n# and validation accuracy on your best model (measured by validation accuracy).\n\nmodel_base = nn.Sequential( \n nn.Conv2d(3, 32, kernel_size=3, \n stride=1, padding=1), # N x 32 x 32 x 3 -> N x 32 x 32 x 32\n \n nn.BatchNorm2d(32),\n nn.ReLU(inplace=True),\n \n nn.Conv2d(32, 64, kernel_size=3, \n stride=2, padding=1), # N x 32 x 32 x 32 -> N x 16 x 16 x 64\n \n nn.BatchNorm2d(64),\n nn.ReLU(inplace=True), \n \n nn.Conv2d(64, 64, kernel_size=3, \n stride=1, padding=1), # N x 16 x 16 x 64 -> N x 16 x 16 x 64\n \n nn.BatchNorm2d(64),\n nn.ReLU(inplace=True),\n \n nn.Conv2d(64, 128, kernel_size=3, \n stride=2, padding=1), # N x 16 x 16 x 64 -> N x 8 x 8 x 128\n \n nn.BatchNorm2d(128),\n nn.ReLU(inplace=True),\n \n nn.AvgPool2d(kernel_size=(8, 8)), # N x 1 x 1 x 128\n Flatten(),\n nn.Linear(128, 10), # affine layer\n )\n\nmodel = copy.deepcopy(model_base).type(gpu_dtype)\nloss_fn = nn.CrossEntropyLoss().type(dtype)\noptimizer = optim.Adam(model.parameters(), lr=3e-3)\n\ntrain(model, loss_fn, optimizer, num_epochs=10)\ncheck_accuracy(model, loader_val)",
"Describe what you did\nIn the cell below you should write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.\nTell us here!\nTest set -- run this only once\nNow that we've gotten a result we're happy with, we test our final model on the test set (which you should store in best_model). This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy.",
"best_model = model\ncheck_accuracy(best_model, loader_test)",
"Going further with PyTorch\nThe next assignment will make heavy use of PyTorch. You might also find it useful for your projects. \nHere's a nice tutorial by Justin Johnson that shows off some of PyTorch's features, like dynamic graphs and custom NN modules: http://pytorch.org/tutorials/beginner/pytorch_with_examples.html\nIf you're interested in reinforcement learning for your final project, this is a good (more advanced) DQN tutorial in PyTorch: http://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
muratcemkose/cy-rest-python
|
advanced/CytoscapeREST_KEGG_f1000.ipynb
|
mit
|
[
"Integrating transcriptome profile with KEGG pathway\nby Kozo Nishida (Riken, Japan)\nThis example demonstrates how to integrate transcriptome data (preprocessed with Bioconductor packages) with KEGG pathways and visualize it in Cytoscape.\n## Software Requirements\n\nCytoscape 3.2.1\nKEGGScape 0.7.x\ncyREST 0.9.x or later\n\nFor data pre-processing\n\nR\nBioconductor - ecoliLeucine\nBioconductor - affy\nBioconductor - genefilter\n\nInput and Output\n\nInput - Bioconductor ecoliLeucine package\nOutput - Cytoscape session file containing KEGG pathway with differentially expressed genes\n\nImporting a KEGG Pathway\nGlycine, serine and threonine metabolism (Escherichia coli K-12 MG1655)",
"import requests\nimport json\n\n# Basic Setup\nPORT_NUMBER = 1234\nBASE_URL = \"http://localhost:\" + str(PORT_NUMBER) + \"/v1/\"\n\n# Header for posting data to the server as JSON\nHEADERS = {'Content-Type': 'application/json'}\n\n# Delete all networks in current session\nrequests.delete(BASE_URL + 'session')\n\npathway_location = \"http://rest.kegg.jp/get/eco00260/kgml\"\nres1 = requests.post(BASE_URL + \"networks?source=url\", data=json.dumps([pathway_location]), headers=HEADERS)\nresult = json.loads(res1.content)\npathway_suid = result[0][\"networkSUID\"][0]\nprint(\"Pathway SUID = \" + str(pathway_suid))",
"Pre-processing transcriptome data and testing differentially expressed genes with Bioconductor\nYou need to run the following code in R\nsource(\"http://bioconductor.org/biocLite.R\")\nbiocLite(c(\"genefilter\", \"ecoliLeucine\"))\nlibrary(\"ecoliLeucine\")\nlibrary(\"genefilter\")\ndata(\"ecoliLeucine\")\neset = rma(ecoliLeucine)\nr = rowttests(eset, eset$strain)\nfiltered = r[r$p.value < 0.05,]\nwrite.csv(filtered, file=\"ttest.csv\") \nLoading ttest.csv as Pandas DataFrame",
"import pandas as pd\n\nttest_df = pd.read_csv('ttest.csv')\nttest_df.head()",
"Getting node table from Cytoscape and merge with ttest.csv",
"deftable = requests.get('http://localhost:1234/v1/networks/' + str(pathway_suid) + '/tables/defaultnode.tsv')\nhandle = open('defaultnode.tsv','w')\nhandle.write(deftable.text) # use the decoded text, not raw bytes\nhandle.close()\n\ndeftable_df = pd.read_table('defaultnode.tsv')\ndeftable_df.head()\n\nimport re\nbnum_re = re.compile('b[0-9]{4}')\n\nkeggids = []\nkeggnode_labels = []\nfor index, probe in ttest_df['Unnamed: 0'].iteritems():\n m = bnum_re.search(probe)\n if m:\n keggids.append(None)\n keggnode_labels.append(None)\n for i, keggid in deftable_df['KEGG_ID'].iteritems():\n if m.group(0) in keggid:\n keggids.pop()\n keggids.append(keggid)\n keggnode_labels.pop()\n keggnode_labels.append(deftable_df['KEGG_NODE_LABEL'][i])\n else:\n keggids.append(None)\n keggnode_labels.append(None)\n\ns1 = pd.Series(keggids, name='KEGG_ID_INPATHWAY')\ns2 = pd.Series(keggnode_labels, name='KEGG_NODE_LABEL_INPATHWAY')\n\nmerged_df = pd.concat([ttest_df, s1, s2], axis=1)\nmerged_df.head()\n\nttestjson = json.loads(merged_df.to_json(orient=\"records\"))\n\nnew_table_data = {\n \"key\": \"KEGG_NODE_LABEL\",\n \"dataKey\": \"KEGG_NODE_LABEL_INPATHWAY\",\n \"data\" : ttestjson\n}\n\nupdate_table_url = BASE_URL + \"networks/\" + str(pathway_suid) + \"/tables/defaultnode\"\n\nprint(update_table_url)\n\nrequests.put(update_table_url, data=json.dumps(new_table_data), headers=HEADERS)",
"You can see the t-test results in Cytoscape default node table!\nDiscussion\nThis workflow integrates data, but visualization part is not fully automated. This is a TODO item..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Housebeer/Natural-Gas-Model
|
backup/Matching Market v1.ipynb
|
mit
|
[
"Matching Market\nThis simple model consists of a buyer, a supplier, and a market. \nThe buyer represents a group of customers whose willingness to pay for a single unit of the good is captured by a vector of prices wtp. You can initiate the buyer with a set_quantity function which randomly assigns the willingness to pay according to your specifications. You may ask for these willingness to pay quantities with a get_bid function. \nThe supplier is similar, but instead the supplier is willing to be paid to sell a unit of technology. The supplier for instance may have non-zero variable costs that make them unwilling to produce the good unless they receive a specified price. Similarly the supplier has a get_ask function which returns a list of desired prices. \nThe willingness to pay or sell is set randomly using uniform random distributions. The resultant list of bids is effectively a demand curve. Likewise the list of asks is effectively a supply curve. A more complex determination of bids and asks is possible, for instance using time of year to vary the quantities being demanded. \nMicroeconomic Foundations\nThe market assumes the presence of an auctioneer which will create a book, which seeks to match the bids and the asks as much as possible. If the auctioneer is neutral, then it is incentive compatible for the buyer and the supplier to truthfully announce their bids and asks. The auctioneer will find a single price which clears as much of the market as possible. Clearing the market means that as many willing swaps happen as possible. You may ask the market object at what price the market clears with the get_clearing_price function. You may also ask the market how many units were exchanged with the get_units_cleared function.\nAgent-Based Objects\nThe following section presents three objects which can be used to make an agent-based model of an efficient, two-sided market.",
"import random as rnd\n\nclass Supplier():\n\n def __init__(self):\n self.wta = []\n \n # the supplier has n quantities that they can sell\n # they may be willing to sell this quantity anywhere from a lower price of l\n # to a higher price of u\n def set_quantity(self,n,l,u):\n for i in range(n):\n p = rnd.uniform(l,u)\n self.wta.append(p)\n \n # return the list of willingness to accept\n def get_ask(self):\n return self.wta\n\nclass Buyer():\n \n def __init__(self):\n self.wtp = []\n \n # the buyer has n quantities that they can buy\n # they may be willing to buy this quantity anywhere from a lower price of l\n # to a higher price of u\n def set_quantity(self,n,l,u):\n for i in range(n):\n p = rnd.uniform(l,u)\n self.wtp.append(p)\n \n # return list of willingness to pay\n def get_bid(self):\n return self.wtp\n\nclass Market():\n\n def __init__(self,b,s):\n # per-instance state, so separate markets do not share counts\n self.count = 0\n self.last_price = None\n # buyer list sorted in descending order\n self.b = sorted(b, reverse=True)\n # seller list sorted in ascending order\n self.s = sorted(s, reverse=False)\n \n # return the price at which the market clears\n # assume equal numbers of sincere buyers and sellers\n def get_clearing_price(self):\n # buyer makes a bid, starting with the buyer which wants it most\n for i in range(min(len(self.b), len(self.s))):\n if (self.b[i] > self.s[i]):\n self.count +=1\n self.last_price = self.b[i]\n \n return self.last_price\n \n def get_units_cleared(self):\n return self.count",
"Example Market\nIn the following code example we use the buyer and supplier objects to create a market. At the market a single price is announced which causes as many units of goods to be swapped as possible. The buyers and sellers stop trading when it is no longer in their own interest to continue.",
"# make a supplier and get the asks\nsupplier = Supplier()\nsupplier.set_quantity(60,10,30)\nask = supplier.get_ask()\n\n# make a buyer and get the bids (n,l,u)\nbuyer = Buyer()\nbuyer.set_quantity(60,10,30)\nbid = buyer.get_bid()\n\n# make a market where the buyers and suppliers can meet\n# the bids and asks are a list of prices\nmarket = Market(bid,ask)\nprice = market.get_clearing_price()\nquantity = market.get_units_cleared()\n\n# output the results of the market \nprint(\"Goods cleared for a price of \",price)\nprint(\"Units sold are \", quantity)",
"Operations Research Formulation\nThe market can also be formulated as a very simple linear program or linear complementarity problem. It is clearer and easier to implement this market clearing mechanism with agents. One merit of the agent-based approach is that we don't need linear or linearizable supply and demand functions. \nThe auctioneer is effectively following a very simple linear program subject to constraints on units sold. The auctioneer is, in the primal model, maximizing the consumer utility received by customers, with respect to the price being paid, subject to a fixed supply curve. On the dual side the auctioneer is minimizing the cost of production for the supplier, with respect to quantity sold, subject to a fixed demand curve. It is the presumed neutrality of the auctioneer which justifies the honest statement of supply and demand. \nAn alternative formulation is a linear complementarity problem. Here the presence of an optimal space of trades ensures that there is a Pareto optimal front of possible trades. The perfect opposition of interests in dividing the consumer and producer surplus means that this is a zero sum game. Furthermore the solution to this zero-sum game maximizes societal welfare and is therefore the Hicks optimal solution.\nNext Steps\nA good elaboration of this model would be to have multiple buyers with differing willingness to pay and quantities to purchase. The market object would need to be expanded to keep track of the \"book\" -- which parties actually completed the transaction. Another possible addition would be to have a weekly varying demand of customers, for instance caused by the use of natural gas as a heating agent. This would require the bids and asks to be time varying, and for the market to be run over successive time periods. A third or fourth addition would be to create transport costs, or enable intermediate goods to be produced. This would need a more elaborate market operator."
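The auctioneer's primal problem described above can be sketched directly in Python (a sketch for illustration, not part of the original model; `clear_market` is a hypothetical helper). Because the constraint structure is so simple, the linear program of maximizing total surplus sum(bid - ask) over trades in [0, 1] is solved exactly by greedily matching the highest bids against the lowest asks until they cross:

```python
def clear_market(bids, asks):
    """Greedy solution of the auctioneer's primal problem: match the
    highest remaining bid with the lowest remaining ask while the bid
    still exceeds the ask, maximizing total surplus sum(bid - ask)."""
    b = sorted(bids, reverse=True)   # demand curve, descending
    s = sorted(asks)                 # supply curve, ascending
    trades = [(bi, si) for bi, si in zip(b, s) if bi > si]
    units = len(trades)
    surplus = sum(bi - si for bi, si in trades)
    return units, surplus

units, surplus = clear_market([10, 8, 6, 4], [3, 5, 7, 9])
print(units, surplus)  # 2 trades, total surplus (10-3) + (8-5) = 10
```

Because the sorted bid list is decreasing and the sorted ask list is increasing, the profitable pairs form a prefix, which is why the greedy match attains the LP optimum here.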
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
zaqwes8811/micro-apps
|
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/03-Gaussians.ipynb
|
mit
|
[
"Table of Contents\nProbabilities, Gaussians, and Bayes' Theorem",
"from __future__ import division, print_function\n%matplotlib inline\n\n#format the book\nimport book_format\nbook_format.set_style()",
"Introduction\nThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is unimodal and continuous. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us \"it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79).\" That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates make navigating impossible.\nWe desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features.\nMean, Variance, and Standard Deviations\nMost of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned.\nRandom Variables\nEach time you roll a die the outcome will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the probability, or odds, of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. \nThis combination of values and associated probabilities is called a random variable. 
Here random does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.\nWhile we are defining terms, the range of values is called the sample space. For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. Space is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.\nAnother example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.\nRandom variables such as coin tosses and die rolls are discrete random variables. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called continuous random variables since they can take on any real value between two limits.\nDo not confuse the measurement of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. \nIn statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. 
Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two.\nProbability Distribution\nThe probability distribution gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:\n|Value|Probability|\n|-----|-----------|\n|1|1/6|\n|2|1/6|\n|3|1/6|\n|4|1/6|\n|5|1/6|\n|6|1/6|\nWe denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:\n$$P(X{=}4) = p(4) = \\frac{1}{6}$$\nThis states that the probability of the die landing on 4 is $\\frac{1}{6}$. $P(X{=}x_k)$ is notation for \"the probability of $X$ being $x_k$\". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observant. Some texts use $Pr$ instead of $P$ to ameliorate this. \nAnother example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as\n$$\\begin{gathered}P(X{=}H) = 0.5\\\\P(X{=}T)=0.5\\end{gathered}$$\nSample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. 
{even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.\nThe probabilities for all values of a discrete random variable are known as the discrete probability distribution and the probabilities for all values of a continuous random variable are known as the continuous probability distribution.\nTo be a probability distribution the probability of each value $x_i$ must satisfy $p(x_i) \\ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formalize this requirement as\n$$\\sum\\limits_u P(X{=}u)= 1$$\nfor discrete distributions, and as \n$$\\int\\limits_u P(X{=}u) \\,du= 1$$\nfor continuous distributions.\nIn the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example:",
"import numpy as np\nimport kf_book.book_plots as book_plots\n\nbelief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2])\nbelief = belief / np.sum(belief)\nwith book_plots.figsize(y=2):\n    book_plots.bar_plot(belief)\nprint('sum = ', np.sum(belief))",
"Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction.\nThe Mean, Median, and Mode of a Random Variable\nGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a measure of central tendency. For example we might want to know the average height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the mean. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters are \n$$X = \\{1.8, 2.0, 1.7, 1.9, 1.6\\}$$\nwe compute the mean as\n$$\\mu = \\frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$\nIt is traditional to use the symbol $\\mu$ (mu) to denote the mean.\nWe can formalize this computation with the equation\n$$ \\mu = \\frac{1}{n}\\sum^n_{i=1} x_i$$\nNumPy provides numpy.mean() for computing the mean.",
"x = [1.8, 2.0, 1.7, 1.9, 1.6]\nnp.mean(x)",
"As a convenience NumPy arrays provide the method mean().",
"x = np.array([1.8, 2.0, 1.7, 1.9, 1.6])\nx.mean()",
"The mode of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a unimodal set, and if two or more numbers occur the most with equal frequency then the set is multimodal. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the Discrete Bayes chapter we talked about our belief in the dog's position as a multimodal distribution because we assigned different probabilities to different positions.\nFinally, the median of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.\nNumPy provides numpy.median() to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true.",
"np.median(x)",
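The text computes the median with NumPy but not the mode; the multimodal and unimodal example sets above can be checked with the standard library (a small sketch; `modes` is a hypothetical helper, not something the book defines):

```python
from collections import Counter

def modes(data):
    # return every value that occurs with the highest frequency
    counts = Counter(data)
    top = max(counts.values())
    return sorted(k for k, v in counts.items() if v == top)

print(modes([1, 2, 2, 2, 3, 4, 4, 4]))  # [2, 4] -> multimodal
print(modes([5, 7, 7, 13]))             # [7]    -> unimodal
```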
"Expected Value of a Random Variable\nThe expected value of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we expect $x$ to have, on average?\nIt would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the mean of the sample space.\nNow suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has a 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute\n$$\\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$\nHere I have introduced the notation $\\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.\nWe can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us\n$$\\mathbb E[X] = \\sum_{i=1}^n p_ix_i$$\nA trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:\n$$\\mathbb E[X] = \\sum_{i=1}^n p_ix_i = \\frac{1}{n}\\sum_{i=1}^n x_i = \\mu_x$$\nIf $x$ is continuous we replace the sum with an integral, like so\n$$\\mathbb E[X] = \\int_{a}^b\\, xf(x) \\,dx$$\nwhere $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.\nWe can write a bit of Python to simulate this. 
Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically.",
"total = 0\nN = 1000000\nfor r in np.random.rand(N):\n    if r <= .80: total += 1\n    elif r < .95: total += 3\n    else: total += 5\n\ntotal / N",
"You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact value requires an infinite sample size.\nExercise\nWhat is the expected value of a die roll?\nSolution\nEach side is equally likely, so each has a probability of 1/6. Hence\n$$\\begin{aligned}\n\\mathbb E[X] &= 1/6\\times1 + 1/6\\times 2 + 1/6\\times 3 + 1/6\\times 4 + 1/6\\times 5 + 1/6\\times6 \\\\\n&= 1/6(1+2+3+4+5+6)\\\\&= 3.5\\end{aligned}$$\nExercise\nGiven the uniform continuous distribution\n$$f(x) = \\frac{1}{b - a}$$\ncompute the expected value for $a=0$ and $b=20$.\nSolution\n$$\\begin{aligned}\n\\mathbb E[X] &= \\int_0^{20}\\, x\\frac{1}{20} \\,dx \\\\\n&= \\bigg[\\frac{x^2}{40}\\bigg]_0^{20} \\\\\n&= 10 - 0 \\\\\n&= 10\n\\end{aligned}$$\nVariance of a Random Variable\nThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights:",
"X = [1.8, 2.0, 1.7, 1.9, 1.6]\nY = [2.2, 1.5, 2.3, 1.7, 1.3]\nZ = [1.8, 1.8, 1.8, 1.8, 1.8]",
"Using NumPy we see that the mean height of each class is the same.",
"print(np.mean(X), np.mean(Y), np.mean(Z))",
"The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.\nThe mean tells us something about the data, but not the whole story. We want to be able to specify how much variation there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. \nStatistics has formalized this concept of measuring variation into the notion of standard deviation and variance. The equation for computing the variance is\n$$\\mathit{VAR}(X) = \\mathbb E[(X - \\mu)^2]$$\nIgnoring the square for a moment, you can see that the variance is the expected value for how much the sample space $X$ varies from the mean $\\mu$: $(X-\\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\\mathbb E[X] = \\sum\\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get\n$$\\mathit{VAR}(X) = \\frac{1}{n}\\sum_{i=1}^n (x_i - \\mu)^2$$\nLet's compute the variance of the three classes to see what values we get and to become familiar with this concept.\nThe mean of $X$ is 1.8 ($\\mu_x = 1.8$) so we compute\n$$ \n\\begin{aligned}\n\\mathit{VAR}(X) &=\\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\\\\n&= \\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\\\n\\mathit{VAR}(X)&= 0.02 \\, m^2\n\\end{aligned}$$\nNumPy provides the function var() to compute the variance:",
"print(\"{:.2f} meters squared\".format(np.var(X)))",
"This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the standard deviation, which is defined as the square root of the variance:\n$$\\sigma = \\sqrt{\\mathit{VAR}(X)}=\\sqrt{\\frac{1}{n}\\sum_{i=1}^n(x_i - \\mu)^2}$$\nIt is typical to use $\\sigma$ for the standard deviation and $\\sigma^2$ for the variance. In most of this book I will be using $\\sigma^2$ instead of $\\mathit{VAR}(X)$ for the variance; they symbolize the same thing.\nFor the first class we compute the standard deviation with\n$$ \n\\begin{aligned}\n\\sigma_x &=\\sqrt{\\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\\\\n&= \\sqrt{\\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\\\n\\sigma_x&= 0.1414\n\\end{aligned}$$\nWe can verify this computation with the NumPy method numpy.std() which computes the standard deviation. 'std' is a common abbreviation for standard deviation.",
"print('std {:.4f}'.format(np.std(X)))\nprint('var {:.4f}'.format(np.std(X)**2))",
"And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.\nWhat does the standard deviation signify? It tells us how much the heights vary amongst themselves. \"How much\" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. \nWe can view this in a plot:",
"from kf_book.gaussian_internal import plot_height_std\nimport matplotlib.pyplot as plt\n\nplot_height_std(X)",
"For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\\pm1\\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.\n\nWe write one standard deviation as $1\\sigma$, which is pronounced \"one standard deviation\", not \"one sigma\". Two standard deviations is $2\\sigma$, and so on.",
"from numpy.random import randn\ndata = 1.8 + randn(100)*.1414\nmean, std = data.mean(), data.std()\n\nplot_height_std(data, lw=2)\nprint('mean = {:.3f}'.format(mean))\nprint('std = {:.3f}'.format(std))",
"By eye roughly 68% of the heights lie within $\\pm1\\sigma$ of the mean 1.8, but we can verify this with code.",
"np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100.",
"We'll discuss this in greater depth soon. For now let's compute the standard deviation for \n$$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$\nThe mean of $Y$ is $\\mu=1.8$ m, so \n$$ \n\\begin{aligned}\n\\sigma_y &=\\sqrt{\\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\\\\n&= \\sqrt{0.152} = 0.39 \\ m\n\\end{aligned}$$\nWe can verify that with NumPy:",
"print('std of Y is {:.2f} m'.format(np.std(Y)))",
"This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.\nFinally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero.\n$$ \n\\begin{aligned}\n\\sigma_z &=\\sqrt{\\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\\\\n&= \\sqrt{\\frac{0+0+0+0+0}{5}} \\\\\n\\sigma_z&= 0.0 \\ m\n\\end{aligned}$$",
"print(np.std(Z))",
"Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. \nI suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males nor the females in the school! \nWe will not consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues.\nWhy the Square of the Differences\nWhy are we taking the square of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$",
"X = [3, -3, 3, -3]\nmean = np.average(X)\nfor i in range(len(X)):\n    plt.plot([i, i], [mean, X[i]], color='k')\nplt.axhline(mean)\nplt.xlim(-1, len(X))\nplt.tick_params(axis='x', labelbottom=False)",
"If we didn't take the square of the differences the signs would cancel everything out:\n$$\\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$\nThis is clearly incorrect, as there is more than 0 variance in the data. \nMaybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same value. If we use the formula with squares we get a standard deviation of 3.5 for $Y$ versus 3 for $X$, which reflects $Y$'s larger variation.\nThis is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have:",
"X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100]\nprint('Variance of X with outlier = {:6.2f}'.format(np.var(X)))\nprint('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1])))",
"Is this \"correct\"? You tell me. Without the outlier of 100 we get $\\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem.\nI will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called Bayesian robustness, or the excellent publications on robust statistics by Peter J. Huber [3]. In this book we will always use variance and standard deviation as defined by Gauss.\nThe point to gather from this is that these summary statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way.\nGaussians\nWe are now ready to learn about Gaussians. Let's remind ourselves of the motivation for this chapter.\n\nWe desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.\n\nLet's look at a graph of a Gaussian distribution to get a sense of what we are talking about.",
"from filterpy.stats import plot_gaussian_pdf\nplot_gaussian_pdf(mean=1.8, variance=0.1414**2, \n xlabel='Student Height', ylabel='pdf');",
"This curve is a probability density function or pdf for short. It shows the relative likelihood for the random variable to take on a value. We can tell from the chart that a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 m. Finally, notice that the curve is centered over the mean of 1.8 m.\n\nI explain how to plot Gaussians, and much more, in the Notebook Computing_and_Plotting_PDFs in the \nSupporting_Notebooks folder. You can read it online here [1].\n\nThis may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.\nThis curve is not unique to heights — a vast number of natural phenomena exhibit this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.\nTo further motivate you, recall the shapes of the probability distributions in the Discrete Bayes chapter:",
"import kf_book.book_plots as book_plots\nbelief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0]\nbook_plots.bar_plot(belief)",
"They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter!\nNomenclature\nA bit of nomenclature before we continue - this chart depicts the probability density of a random variable having any value between $(-\\infty, \\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this:",
"plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)');",
"The y-axis depicts the probability density — the relative number of cars going the speed shown on the corresponding x-axis. I will explain this further in the next section.\nThe Gaussian model is imperfect. Though these charts do not show it, the tails of the distribution extend out to infinity. Tails are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\\infty$ or $\\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. \nYou will hear these distributions called Gaussian distributions or normal distributions. Gaussian and normal both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a Gaussian or normal — these are both typical shortcut names for the Gaussian distribution. \nGaussian Distributions\nLet's explore how Gaussians work. A Gaussian is a continuous probability distribution that is completely described with two parameters, the mean ($\\mu$) and the variance ($\\sigma^2$). 
It is defined as:\n$$ \nf(x, \\mu, \\sigma) = \\frac{1}{\\sigma\\sqrt{2\\pi}} \\exp\\big [{-\\frac{(x-\\mu)^2}{2\\sigma^2} }\\big ]\n$$\n$\\exp[x]$ is notation for $e^x$.\nDon't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. \n\nShorn of the constants, you can see it is a simple exponential:\n\n$$f(x)\\propto e^{-x^2}$$\n\nwhich has the familiar bell curve shape",
"x = np.arange(-3, 3, .01)\nplt.plot(x, np.exp(-x**2));",
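With the constants restored, the equation transcribes directly into a few lines (a sketch for illustration, not filterpy's implementation; `gaussian_pdf` is a hypothetical helper):

```python
import math

def gaussian_pdf(x, mu, var):
    # f(x) = exp(-(x - mu)^2 / (2 var)) / sqrt(2 pi var)
    return math.exp(-(x - mu)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# the peak of N(22, 4) sits at the mean, and the curve is symmetric about it
print(gaussian_pdf(22, 22, 4))                       # ~0.1995
print(gaussian_pdf(21, 22, 4), gaussian_pdf(23, 22, 4))
```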
"Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now.",
"from filterpy.stats import gaussian\n#gaussian??",
"Let's plot a Gaussian with a mean of 22 $(\\mu=22)$ and a variance of 4 $(\\sigma^2=4)$.",
"plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\\circ}C$');",
"What does this curve mean? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, the central limit theorem states that if we make many measurements, the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C. \nRecall that a Gaussian distribution is continuous. Think of an infinitely long straight line - what is the probability that a randomly picked point is exactly at 2? Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being exactly 2°C is 0% because there are an infinite number of values the reading can take.\nWhat is this curve? It is something we call the probability density function. The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. \nHere is another way to understand it. What is the density of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.\n$$M = \\iiint_R p(x,y,z)\\, dV$$\nWe do the same with probability density. If you want to know the probability of the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. 
Since this is a curve of the probability density, the integral of the density is the probability. \nWhat is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. \nThinking back to the rock, what is the weight of a single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.\nIn practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.\nWe can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. \nHow do you compute the probability, or area under the curve? You integrate the equation for the Gaussian \n$$ \\int^{x_1}_{x_0} \\frac{1}{\\sigma\\sqrt{2\\pi}} e^{-\\frac{1}{2}{(x-\\mu)^2}/\\sigma^2 } dx$$\nThis is called the cumulative probability distribution, commonly abbreviated cdf.\nI wrote filterpy.stats.norm_cdf which computes the integral for you. For example, we can compute",
"from filterpy.stats import norm_cdf\nprint('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format(\n norm_cdf((21.5, 22.5), 22,4)*100))\nprint('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format(\n norm_cdf((23.5, 24.5), 22,4)*100))",
"The mean ($\\mu$) is what it sounds like — the average of all possible values. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. \nThe notation for a normal distribution for a random variable $X$ is $X \\sim\\ \\mathcal{N}(\\mu,\\sigma^2)$ where $\\sim$ means distributed according to. This means I can express the temperature reading of our thermometer as\n$$\\text{temp} \\sim \\mathcal{N}(22,4)$$\nThis is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\\mu=22$ and $\\sigma^2=4$ I can compute the distribution of measurements over any range.\nSome sources use $\\mathcal N (\\mu, \\sigma)$ instead of $\\mathcal N (\\mu, \\sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\\mathcal{N}(22,4)$. In this book I always use $\\mathcal N (\\mu, \\sigma^2)$, so $\\sigma=2$, $\\sigma^2=4$ for this example.\nThe Variance and Belief\nSince this is a probability density it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, something happened, and the probability of something happening is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\\infty$ to $\\infty$)",
"print(norm_cdf((-1e8, 1e8), mu=0, var=4))",
"This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of how much the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.\nLet's look at that graphically. We will use the aforementioned filterpy.stats.gaussian which can take either a single value or array of values.",
"from filterpy.stats import gaussian\n\nprint(gaussian(x=3.0, mean=2.0, var=1))\nprint(gaussian(x=[3.0, 2.0], mean=2.0, var=1))",
"By default gaussian normalizes the output, which turns the output back into a probability distribution. Use the argument normed to control this.",
"print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False))",
"If the Gaussian is not normalized it is called a Gaussian function instead of Gaussian distribution.",
"xs = np.arange(15, 30, 0.05)\nplt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\\sigma^2=0.2^2$')\nplt.plot(xs, gaussian(xs, 23, .5**2), label='$\\sigma^2=0.5^2$', ls=':')\nplt.plot(xs, gaussian(xs, 23, 1**2), label='$\\sigma^2=1^2$', ls='--')\nplt.legend();",
"What is this telling us? The Gaussian with $\\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\\pm 0.2$ std. In contrast, the Gaussian with $\\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\\sigma^2=1^2$ considers them nearly as likely as $23$.\nIf we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\\sigma^2=0.2^2$ represents a very accurate thermometer, and the curve for $\\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.\nAn equivalent formulation for a Gaussian is $\\mathcal{N}(\\mu,1/\\tau)$ where $\\mu$ is the mean and $\\tau$ the precision. $1/\\tau = \\sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our belief about a measurement, they express the precision of the measurement, and they express how much variance there is in the measurements. 
These are all different ways of stating the same fact.\nI'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using.\nThe 68-95-99.7 Rule\nIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\\pm1\\sigma$) of the mean, 95% falls within two standard deviations ($\\pm2\\sigma$), and 99.7% within three ($\\pm3\\sigma$). This is often called the 68-95-99.7 rule. If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \\pm (2 * 9.4)$). \nFinally, these are not arbitrary numbers. If the Gaussian for our position is $\\mu=22$ meters, then the standard deviation also has units meters. Thus $\\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.\nThe following graph depicts the relationship between the standard deviation and the normal distribution.",
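The 68-95-99.7 percentages can be checked directly from the cumulative distribution function. Here is a quick sketch using scipy.stats (which we cover in more detail later in this chapter): for a standard normal, the probability of falling within k standard deviations of the mean is cdf(k) - cdf(-k).

```python
from scipy.stats import norm

# probability of falling within k standard deviations of the mean
for k in (1, 2, 3):
    p = norm.cdf(k) - norm.cdf(-k)
    print('within {} std: {:.1f}%'.format(k, p * 100))
```

This prints approximately 68.3%, 95.4%, and 99.7%, matching the rule.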
"from kf_book.gaussian_internal import display_stddev_plot\ndisplay_stddev_plot()",
"Interactive Gaussians\nFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\\mu$ and $\\sigma^2$. Adjusting $\\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\\sigma^2$ will make the bell curve thicker and thinner.",
"import math\nfrom ipywidgets import interact, FloatSlider\n\ndef plt_g(mu,variance):\n plt.figure()\n xs = np.arange(2, 8, 0.01)\n ys = gaussian(xs, mu, variance)\n plt.plot(xs, ys)\n plt.ylim(0, 0.04)\n\ninteract(plt_g, mu=FloatSlider(value=5, min=3, max=7),\n variance=FloatSlider(value = .03, min=.01, max=1.));",
"Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\\mu=5$ and the variance is modified.\n<img src='animations/04_gaussian_animate.gif'>\nComputational Properties of Gaussians\nThe discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. \nA remarkable property of Gaussian distributions is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian. Thus we can say that the result of multiplying two Gaussian distributions is a Gaussian function (recall that function in this context means the property that the values sum to one is not guaranteed).\nBefore we do the math, let's test this visually.",
"x = np.arange(-1, 3, 0.01)\ng1 = gaussian(x, mean=0.8, var=.1)\ng2 = gaussian(x, mean=1.3, var=.2)\nplt.plot(x, g1, x, g2)\n\ng = g1 * g2 # element-wise multiplication\ng = g / sum(g) # normalize\nplt.plot(x, g, ls='-.');",
"Here I created two Gaussians, g1=$\\mathcal N(0.8, 0.1)$ and g2=$\\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result looks like a Gaussian distribution.\nGaussians are nonlinear functions. Typically, if you multiply nonlinear equations you end up with a different type of function. For example, the product of two sine waves looks very different from $\\sin(x)$.",
"x = np.arange(0, 4*np.pi, 0.01)\nplt.plot(np.sin(1.2*x))\nplt.plot(np.sin(1.2*x) * np.sin(2*x));",
"But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians because they are computationally nice. \nThe product of two independent Gaussians is given by:\n$$\\begin{aligned}\\mu &=\\frac{\\sigma_1^2\\mu_2 + \\sigma_2^2\\mu_1}{\\sigma_1^2+\\sigma_2^2}\\\n\\sigma^2 &=\\frac{\\sigma_1^2\\sigma_2^2}{\\sigma_1^2+\\sigma_2^2} \n\\end{aligned}$$\nThe sum of two Gaussians is given by\n$$\\begin{gathered}\\mu = \\mu_1 + \\mu_2 \\\n\\sigma^2 = \\sigma^2_1 + \\sigma^2_2\n\\end{gathered}$$\nAt the end of the chapter I derive these equations. However, understanding the derivation is not very important.\nPutting it all Together\nNow we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians. Here I will explain why we would want to use Gaussians.\nIn the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so:",
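As a sanity check on these equations, here is a minimal sketch that implements them directly. The helper names gaussian_multiply and gaussian_add are my own, not part of FilterPy; they simply evaluate the formulas above, using the same parameters as g1 and g2 from the multiplication plot.

```python
def gaussian_multiply(mu1, var1, mu2, var2):
    # product of two Gaussians (a Gaussian function, up to normalization)
    mu = (var1 * mu2 + var2 * mu1) / (var1 + var2)
    var = (var1 * var2) / (var1 + var2)
    return mu, var

def gaussian_add(mu1, var1, mu2, var2):
    # sum of two independent Gaussian random variables
    return mu1 + mu2, var1 + var2

print(gaussian_multiply(0.8, 0.1, 1.3, 0.2))
print(gaussian_add(0.8, 0.1, 1.3, 0.2))
```

The product's mean, about 0.967, lies between the two input means but closer to g1's because g1 has the smaller variance, and the product's variance, about 0.067, is smaller than either input variance — multiplying two beliefs makes us more certain.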
"def normalize(p):\n return p / sum(p)\n\ndef update(likelihood, prior):\n return normalize(likelihood * prior)\n\nprior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2]))\nlikelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16]))\nposterior = update(likelihood, prior)\nbook_plots.bar_plot(posterior)",
"In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory. \nBut this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart.",
"xs = np.arange(0, 10, .01)\n\ndef mean_var(p):\n x = np.arange(len(p))\n mean = np.sum(p * x,dtype=float)\n var = np.sum((x - mean)**2 * p)\n return mean, var\n\nmean, var = mean_var(posterior)\nbook_plots.bar_plot(posterior)\nplt.plot(xs, gaussian(xs, mean, var, normed=False), c='r');\nprint('mean: %.2f' % mean, 'var: %.2f' % var)",
"This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.\nNext, recall that our filter implements the update function with\npython\ndef update(likelihood, prior):\n return normalize(likelihood * prior)\nIf the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with\n$$\\begin{aligned}\\mu &=\\frac{\\sigma_1^2\\mu_2 + \\sigma_2^2\\mu_1}{\\sigma_1^2+\\sigma_2^2}\\\n\\sigma^2 &=\\frac{\\sigma_1^2\\sigma_2^2}{\\sigma_1^2+\\sigma_2^2} \n\\end{aligned}$$\nwhich is three multiplications and two divisions.\nBayes Theorem\nIn the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered Bayes' Theorem. Bayes theorem tells us how to compute the probability of an event given prior information. \nWe implemented the update() function with this probability calculation:\n$$ \\mathtt{posterior} = \\frac{\\mathtt{likelihood}\\times \\mathtt{prior}}{\\mathtt{normalization}}$$ \nIt turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:\n$$ updated\\,knowledge = \\big\\|likelihood\\,of\\,new\\,knowledge\\times prior\\, knowledge \\big\\|$$\nwhere $\\| \\cdot\\|$ expresses normalizing the term.\nWe came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. 
We will use this equation in every subsequent chapter.\nTo review, the prior is the probability of something happening before we include the probability of the measurement (the likelihood) and the posterior is the probability we compute after incorporating the information from the measurement.\nBayes theorem is\n$$P(A \\mid B) = \\frac{P(B \\mid A)\\, P(A)}{P(B)}$$\n$P(A \\mid B)$ is called a conditional probability. That is, it represents the probability of $A$ happening if $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\\mid$ rain yesterday).\nI've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a probability distribution. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions\n$$p(A \\mid B) = \\frac{p(B \\mid A)\\, p(A)}{p(B)}$$\nIn the equation above $B$ is the evidence, $p(A)$ is the prior, $p(B \\mid A)$ is the likelihood, and $p(A \\mid B)$ is the posterior. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at step $i$, and $z$ for the measurement. Hence, we want to know $P(x_i \\mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. \nSo, let's plug that into the equation and solve it.\n$$p(x_i \\mid z) = \\frac{p(z \\mid x_i) p(x_i)}{p(z)}$$\nThat looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \\mid x_i)$. 
This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the prior - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the update() function:\npython\ndef update(likelihood, prior):\n    posterior = prior * likelihood # p(z|x) * p(x)\n    return normalize(posterior)\nThe last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the evidence. We compute that by summing over all values of $x$, or sum(belief) in the code. That is how we compute the normalization! So, the update() function is doing nothing more than computing Bayes' theorem.\nThe literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as\n$$p(A \\mid B) = \\frac{p(B \\mid A)\\, p(A)}{\\int p(B \\mid A_j) p(A_j) \\,\\, \\mathtt{d}A_j}\\cdot$$\nThis denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent opinion piece for the Royal Statistical Society called it a \"dog's breakfast\" [8]. Filtering textbooks that take a Bayesian approach are filled with integral-laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the Particle Filters chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself \"why are we summing these values\", and \"why am I dividing by this term\". Surprisingly often the answer is readily apparent. 
Surprisingly often the author neglects to mention this interpretation.\nIt's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \\mid Z)$. That is, at step i, what is our probable state given a measurement. That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.\nBut Bayes' Theorem lets us compute this by using the inverse $p(Z\\mid x_i)$, which is often straightforward to compute\n$$p(x_i \\mid Z) \\propto p(Z\\mid x_i)\\, p(x_i)$$\nThat is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a much easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. \nLikewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position x. A hard problem becomes easy. \nTotal Probability Theorem\nWe now know the formal mathematics behind the update() function; what about the predict() function? predict() implements the total probability theorem. Let's recall what predict() computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is\n$$P(X_i^t) = \\sum_j P(X_j^{t-1}) P(x_i | x_j)$$\nThat equation is called the total probability theorem. 
Quoting from Wikipedia [6] \"It expresses the total probability of an outcome which can be realized via several distinct events\". I could have given you that equation and implemented predict(), but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation\npython\nfor i in range(N):\n    for k in range(kN):\n        index = (i + (width-k) - offset) % N\n        result[i] += prob_dist[index] * kernel[k]\nComputing Probabilities with scipy.stats\nIn this chapter I used code from FilterPy to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with \"batteries included\" as the saying goes, and a wide range of statistics functions is available in the module scipy.stats. So let's walk through how to use scipy.stats to compute statistics and probabilities.\nThe scipy.stats module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses scipy.stats.norm to compute a Gaussian, and compare its value to the value returned by the gaussian() function from FilterPy.",
"from scipy.stats import norm\nimport filterpy.stats\nprint(norm(2, 3).pdf(1.5))\nprint(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3))",
"The call norm(2, 3) creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so:",
"n23 = norm(2, 3)\nprint('pdf of 1.5 is %.4f' % n23.pdf(1.5))\nprint('pdf of 2.5 is also %.4f' % n23.pdf(2.5))\nprint('pdf of 2 is %.4f' % n23.pdf(2))",
"The documentation for scipy.stats.norm [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the rvs() function.",
"np.set_printoptions(precision=3, linewidth=50)\nprint(n23.rvs(size=15))",
"We can get the cumulative distribution function (CDF), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$.",
"# probability that a random value is less than the mean 2\nprint(n23.cdf(2))",
"We can get various properties of the distribution:",
"print('variance is', n23.var())\nprint('standard deviation is', n23.std())\nprint('mean is', n23.mean())",
"Limitations of Using Gaussians to Model the World\nEarlier I mentioned the central limit theorem, which states that under certain conditions the sum of independent random variables will be normally distributed, regardless of how the random variables themselves are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. \nHowever, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading. \nThis is a broad topic which I will not treat exhaustively. \nLet's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability density to every value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.\nBut for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test score distributions.",
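We can also quantify how badly the Gaussian fits before plotting it. The following sketch uses scipy.stats.norm (introduced later in this chapter) with the hypothetical mean of 90 and standard deviation of 13 to compute how much probability the model assigns to impossible scores, below 0 or above 100:

```python
from scipy.stats import norm

scores = norm(90, 13)  # mean 90, standard deviation 13
p_impossible = scores.cdf(0) + (1 - scores.cdf(100))
print('probability of an impossible score: {:.1f}%'.format(p_impossible * 100))
```

Roughly a fifth of the probability mass (about 22%) lies above 100, so the model badly misrepresents the top of the score range.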
"xs = np.arange(10, 100, 0.05)\nys = gaussian(xs, 90, 30)\nplt.plot(xs, ys, label='$\\mu=90$, $\\sigma^2=30$')\nplt.legend()\nplt.xlim(0, 120)\nplt.ylim(-0.02, 0.09);",
"The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. \nSensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the Student's $t$-distribution. \nLet's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function numpy.random.randn() to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with:",
"from numpy.random import randn\ndef sense():\n return 10 + randn()*2",
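Before plotting we can sanity check this model by drawing a large number of samples and computing their statistics. This is a quick sketch; the sense() definition is repeated so the block stands alone, and the printed numbers will vary slightly from run to run because the noise is random:

```python
import numpy as np
from numpy.random import randn

def sense():
    # same noisy sensor model as above: constant signal 10, noise std dev 2
    return 10 + randn()*2

samples = np.array([sense() for _ in range(10000)])
print('mean: {:.2f}, std: {:.2f}'.format(samples.mean(), samples.std()))
```

With 10,000 samples the mean should come out very close to 10 and the standard deviation close to 2.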
"Let's plot that signal and see what it looks like.",
"zs = [sense() for i in range(5000)]\nplt.plot(zs, lw=1);",
"That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\\pm$ 2 of 10, and 99.7% will be within $\\pm$ 6 of 10, and that looks like what is happening. \nNow let's look at a distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it.",
"import random\nimport math\n\ndef rand_student_t(df, mu=0, std=1):\n \"\"\"return random number distributed by Student's t \n distribution with `df` degrees of freedom with the \n specified mean and standard deviation.\n \"\"\"\n x = random.gauss(0, std)\n y = 2.0*random.gammavariate(0.5*df, 2.0)\n return x / (math.sqrt(y / df)) + mu\n\ndef sense_t():\n return 10 + rand_student_t(7)*2\n\nzs = [sense_t() for i in range(5000)]\nplt.plot(zs, lw=1);",
"We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). \nIt is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. \nThis is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission-critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements, in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.\nThe code for rand_student_t is included in filterpy.stats. You may use it with\npython\nfrom filterpy.stats import rand_student_t\nWhile I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from a normal distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called skew. The tails can be shortened, fatter, thinner, or otherwise shaped differently from those of a normal distribution. The measure of this is called kurtosis. The scipy.stats module contains the function describe which computes these statistics, among others.",
"import scipy\nscipy.stats.describe(zs)",
"Let's examine two normal populations, one small, one large:",
"print(scipy.stats.describe(np.random.randn(10)))\nprint()\nprint(scipy.stats.describe(np.random.randn(300000)))",
"The small sample has noticeably non-zero skew and kurtosis because the small number of samples is not well distributed around the mean of 0. You can see this also by comparing the computed mean and variance with the theoretical mean of 0 and variance 1. In comparison the large sample's mean and variance are very close to the theoretical values, and both the skew and kurtosis are near zero.\nProduct of Gaussians (Optional)\nIt is not important to read this section. Here I derive the equations for the product of two Gaussians.\nYou can find this result by multiplying the equation for two Gaussians together and combining terms. The algebra gets messy. I will derive it using Bayes theorem. We can state the problem as: let the prior be $N(\\bar\\mu, \\bar\\sigma^2)$, and the measurement be $z$ with likelihood proportional to $N(z, \\sigma_z^2)$. What is the posterior $x$ given the measurement $z$?\nWrite the posterior as $p(x \\mid z)$. Now we can use Bayes Theorem to state\n$$p(x \\mid z) = \\frac{p(z \\mid x)p(x)}{p(z)}$$\n$p(z)$ is a normalizing constant, so we can create a proportionality\n$$p(x \\mid z) \\propto p(z|x)p(x)$$\nNow we substitute in the equations for the Gaussians, which are\n$$p(z \\mid x) = \\frac{1}{\\sqrt{2\\pi\\sigma_z^2}}\\exp \\Big[-\\frac{(z-x)^2}{2\\sigma_z^2}\\Big]$$\n$$p(x) = \\frac{1}{\\sqrt{2\\pi\\bar\\sigma^2}}\\exp \\Big[-\\frac{(x-\\bar\\mu)^2}{2\\bar\\sigma^2}\\Big]$$\nWe can drop the leading terms, as they are constants, giving us\n$$\\begin{aligned}\np(x \\mid z) &\\propto \\exp \\Big[-\\frac{(z-x)^2}{2\\sigma_z^2}\\Big]\\exp \\Big[-\\frac{(x-\\bar\\mu)^2}{2\\bar\\sigma^2}\\Big]\\\n&\\propto \\exp \\Big[-\\frac{(z-x)^2}{2\\sigma_z^2}-\\frac{(x-\\bar\\mu)^2}{2\\bar\\sigma^2}\\Big] \\\n&\\propto \\exp \\Big[-\\frac{1}{2\\sigma_z^2\\bar\\sigma^2}[\\bar\\sigma^2(z-x)^2+\\sigma_z^2(x-\\bar\\mu)^2]\\Big]\n\\end{aligned}$$\nNow we multiply out the squared terms and group in terms of the posterior $x$.\n$$\\begin{aligned}\np(x \\mid z) &\\propto \\exp 
\\Big[-\\frac{1}{2\\sigma_z^2\\bar\\sigma^2}[\\bar\\sigma^2(z^2 -2xz + x^2) + \\sigma_z^2(x^2 - 2x\\bar\\mu+\\bar\\mu^2)]\\Big ] \\\n&\\propto \\exp \\Big[-\\frac{1}{2\\sigma_z^2\\bar\\sigma^2}[x^2(\\bar\\sigma^2+\\sigma_z^2)-2x(\\sigma_z^2\\bar\\mu + \\bar\\sigma^2z) + (\\bar\\sigma^2z^2+\\sigma_z^2\\bar\\mu^2)]\\Big ]\n\\end{aligned}$$\nThe last term in parentheses does not contain the posterior $x$, so it can be treated as a constant and discarded.\n$$p(x \\mid z) \\propto \\exp \\Big[-\\frac{1}{2}\\frac{x^2(\\bar\\sigma^2+\\sigma_z^2)-2x(\\sigma_z^2\\bar\\mu + \\bar\\sigma^2z)}{\\sigma_z^2\\bar\\sigma^2}\\Big ]\n$$\nDivide numerator and denominator by $\\bar\\sigma^2+\\sigma_z^2$ to get\n$$p(x \\mid z) \\propto \\exp \\Big[-\\frac{1}{2}\\frac{x^2-2x(\\frac{\\sigma_z^2\\bar\\mu + \\bar\\sigma^2z}{\\bar\\sigma^2+\\sigma_z^2})}{\\frac{\\sigma_z^2\\bar\\sigma^2}{\\bar\\sigma^2+\\sigma_z^2}}\\Big ]\n$$\nProportionality allows us to create or delete constants at will, so we can factor this into\n$$p(x \\mid z) \\propto \\exp \\Big[-\\frac{1}{2}\\frac{(x-\\frac{\\sigma_z^2\\bar\\mu + \\bar\\sigma^2z}{\\bar\\sigma^2+\\sigma_z^2})^2}{\\frac{\\sigma_z^2\\bar\\sigma^2}{\\bar\\sigma^2+\\sigma_z^2}}\\Big ]\n$$\nA Gaussian is\n$$N(\\mu,\\, \\sigma^2) \\propto \\exp\\Big [-\\frac{1}{2}\\frac{(x - \\mu)^2}{\\sigma^2}\\Big ]$$\nSo we can see that $p(x \\mid z)$ has a mean of\n$$\\mu_\\mathtt{posterior} = \\frac{\\sigma_z^2\\bar\\mu + \\bar\\sigma^2z}{\\bar\\sigma^2+\\sigma_z^2}$$\nand a variance of\n$$\n\\sigma^2_\\mathtt{posterior} = \\frac{\\sigma_z^2\\bar\\sigma^2}{\\bar\\sigma^2+\\sigma_z^2}\n$$\nI've dropped the constants, and so the result is not a normal, but proportional to one. Bayes theorem normalizes with the $p(z)$ divisor, ensuring that the result is normal. 
We normalize in the update step of our filters, ensuring the filter estimate is Gaussian.\n$$\mathcal N_1 = \\| \\mathcal N_2\\cdot \\mathcal N_3\\|$$\nSum of Gaussians (Optional)\nLikewise, this section is not important to read. Here I derive the equations for the sum of two Gaussians.\nThe sum of two Gaussians is given by\n$$\\begin{gathered}\\mu = \\mu_1 + \\mu_2 \\\n\\sigma^2 = \\sigma^2_1 + \\sigma^2_2\n\\end{gathered}$$\nThere are several proofs for this. I will use convolution since we used convolution in the previous chapter for the histograms of probabilities. \nTo find the density function of the sum of two Gaussian random variables we convolve the density functions of each; the convolution is computed with an integral. If the random variables $p$ and $z$ (e.g. prior and measurement) are independent we can compute this with\n$p(x) = \\int\\limits_{-\\infty}^\\infty f_p(x-z)f_z(z)\\, dz$\nThis is the equation for a convolution. Now we just do some math:\n$p(x) = \\int\\limits_{-\\infty}^\\infty f_z(x-x_1)f_p(x_1)\\, dx_1$\n$= \\int\\limits_{-\\infty}^\\infty \n\\frac{1}{\\sqrt{2\\pi}\\sigma_z}\\exp\\left[-\\frac{(x - x_1 - \\mu_z)^2}{2\\sigma^2_z}\\right]\n\\frac{1}{\\sqrt{2\\pi}\\sigma_p}\\exp\\left[-\\frac{(x_1 - \\mu_p)^2}{2\\sigma^2_p}\\right] \\, dx_1$\n$= \\int\\limits_{-\\infty}^\\infty\n\\frac{1}{\\sqrt{2\\pi}\\sqrt{\\sigma_p^2 + \\sigma_z^2}} \\exp\\left[ -\\frac{(x - (\\mu_p + \\mu_z))^2}{2(\\sigma_z^2+\\sigma_p^2)}\\right]\n\\frac{1}{\\sqrt{2\\pi}\\frac{\\sigma_p\\sigma_z}{\\sqrt{\\sigma_p^2 + \\sigma_z^2}}} \\exp\\left[ -\\frac{(x_1 - \\frac{\\sigma_p^2(x-\\mu_z) + \\sigma_z^2\\mu_p}{\\sigma_p^2 + \\sigma_z^2})^2}{2\\left(\\frac{\\sigma_p\\sigma_z}{\\sqrt{\\sigma_z^2+\\sigma_p^2}}\\right)^2}\\right] \\, dx_1$\n$= \\frac{1}{\\sqrt{2\\pi}\\sqrt{\\sigma_p^2 + \\sigma_z^2}} \\exp\\left[ -\\frac{(x - (\\mu_p + \\mu_z))^2}{2(\\sigma_z^2+\\sigma_p^2)}\\right] 
\\int\\limits_{-\\infty}^\\infty\n\\frac{1}{\\sqrt{2\\pi}\\frac{\\sigma_p\\sigma_z}{\\sqrt{\\sigma_p^2 + \\sigma_z^2}}} \\exp\\left[ -\\frac{(x_1 - \\frac{\\sigma_p^2(x-\\mu_z) + \\sigma_z^2\\mu_p}{\\sigma_p^2 + \\sigma_z^2})^2}{2\\left(\\frac{\\sigma_p\\sigma_z}{\\sqrt{\\sigma_z^2+\\sigma_p^2}}\\right)^2}\\right] \\, dx_1$\nThe expression inside the integral is a normal distribution. A normal distribution integrates to one, hence the integral is one. This gives us\n$$p(x) = \\frac{1}{\\sqrt{2\\pi}\\sqrt{\\sigma_p^2 + \\sigma_z^2}} \\exp\\left[ -\\frac{(x - (\\mu_p + \\mu_z))^2}{2(\\sigma_z^2+\\sigma_p^2)}\\right]$$\nThis is in the form of a normal, where\n$$\\begin{gathered}\\mu_x = \\mu_p + \\mu_z \\\n\\sigma_x^2 = \\sigma_z^2+\\sigma_p^2\\, \\square\\end{gathered}$$\nSummary and Key Points\nThis chapter is a poor introduction to statistics in general. I've covered only the concepts needed to use Gaussians in the remainder of the book, no more. What I've covered will not get you very far if you intend to read the Kalman filter literature. If this is a new topic to you I suggest reading a statistics textbook. I've always liked the Schaum series for self study, and Alan Downey's Think Stats [5] is also very good and freely available online. \nYou must understand the following points before we continue:\n\nNormals express a continuous probability distribution\nThey are completely described by two parameters: the mean ($\\mu$) and variance ($\\sigma^2$)\n$\\mu$ is the average of all possible values\nThe variance $\\sigma^2$ represents how much our measurements vary from the mean\nThe standard deviation ($\\sigma$) is the square root of the variance ($\\sigma^2$)\nMany things in nature approximate a normal distribution, but the math is not perfect.\nIn filtering problems computing $p(x\\mid z)$ is nearly impossible, but computing $p(z\\mid x)$ is straightforward. Bayes' theorem lets us compute the former from the latter. 
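The sum-of-Gaussians result above can be checked by brute force: draw many samples of each random variable, add them, and compare the sample moments with $\mu_p+\mu_z$ and $\sigma_p^2+\sigma_z^2$. A sketch with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
mu_p, sigma_p = 3.0, 2.0      # "prior" N(3, 4)
mu_z, sigma_z = -1.0, 1.5     # "measurement" N(-1, 2.25)

n = 1_000_000
# sample the sum of two independent Gaussian random variables
s = rng.normal(mu_p, sigma_p, n) + rng.normal(mu_z, sigma_z, n)

# derived result: mean mu_p + mu_z = 2.0, variance sigma_p**2 + sigma_z**2 = 6.25
assert abs(s.mean() - (mu_p + mu_z)) < 0.02
assert abs(s.var() - (sigma_p**2 + sigma_z**2)) < 0.05
```

Unlike the product, the sum of two Gaussian random variables is always wider than either input: the variances add.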
\n\nThe next several chapters will use Gaussians with Bayes' theorem to help perform filtering. As noted in the last section, sometimes Gaussians do not describe the world very well. Later parts of the book are dedicated to filters which work even when the noise or the system's behavior is very non-Gaussian. \nReferences\n[1] https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb\n[2] http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html\n[3] http://docs.scipy.org/doc/scipy/reference/tutorial/stats.html\n[4] Huber, Peter J. Robust Statistical Procedures, Second Edition. Society for Industrial and Applied Mathematics, 1996.\n[5] Downey, Alan. Think Stats, Second Edition. O'Reilly Media.\nhttps://github.com/AllenDowney/ThinkStats2\nhttp://greenteapress.com/thinkstats/\nUseful Wikipedia Links\nhttps://en.wikipedia.org/wiki/Probability_distribution\nhttps://en.wikipedia.org/wiki/Random_variable\nhttps://en.wikipedia.org/wiki/Sample_space\nhttps://en.wikipedia.org/wiki/Central_tendency\nhttps://en.wikipedia.org/wiki/Expected_value\nhttps://en.wikipedia.org/wiki/Standard_deviation\nhttps://en.wikipedia.org/wiki/Variance\nhttps://en.wikipedia.org/wiki/Probability_density_function\nhttps://en.wikipedia.org/wiki/Central_limit_theorem\nhttps://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule\nhttps://en.wikipedia.org/wiki/Cumulative_distribution_function\nhttps://en.wikipedia.org/wiki/Skewness\nhttps://en.wikipedia.org/wiki/Kurtosis"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Yen-HuaChen/STA663-Final-Project
|
final.ipynb
|
mit
|
[
"Hierarchical Topic Models and the Nested Chinese Restaurant Process\nTun-Chieh Hsu, Xialingzi Jin, Yen-Hua Chen\nI. Background\nRecently, complex probabilistic models have become increasingly prevalent in a variety of domains. However, several challenges must be dealt with due to their open-ended nature: data sets often grow over time, and as they grow, they bring new entities and new structures to the fore. Take the problem of learning a topic hierarchy from data as an example. Given a collection of documents, each of which contains a set of words, the goal is to discover common usage patterns or topics in the documents, and to organize these topics into a hierarchy.\nThis paper proposes a new method that specifies a generative probabilistic model for hierarchical structures and adopts a Bayesian perspective to learn such structures from data. The hierarchies in this case are treated as random variables and are specified procedurally. In addition, the underlying approach to constructing the probabilistic object is the Chinese restaurant process (CRP), a distribution on partitions of integers. In this paper, the authors extend the CRP to a hierarchy of partitions, known as the nested Chinese restaurant process (nCRP), and apply it as a representation of prior and posterior distributions for topic hierarchies. To be more specific, each node in the hierarchy is associated with a topic, where a topic is a distribution across words. A document is generated by choosing a path from the root to a leaf, repeatedly sampling topics along that path, and sampling the words from the selected topics. Thus the organization of topics into a hierarchy aims to capture the breadth of usage of topics across the corpus, reflecting underlying syntactic and semantic notions of generality and specificity.\nII. Algorithm Description\nA. Chinese Restaurant Process\nThe CRP is analogous to seating customers at tables in a Chinese restaurant. 
Imagine there is a Chinese restaurant with an infinite number of circular tables, each with infinite capacity. Customer 1 sits at the first table. The next customer either sits at the same table as customer 1, or at the next table. The $m$th subsequent customer sits at a table drawn from the following distribution:\n\\begin{align}\np(\\text{occupied table}\\hspace{0.5ex}i\\hspace{0.5ex}\\text{ | previous customers}) = \\frac{m_i}{\\gamma+m-1}\\\np(\\text{next unoccupied table | previous customers}) = \\frac{\\gamma}{\\gamma + m -1}\n\\end{align}\nwhere $m_i$ is the number of previous customers at table $i$, and $\\gamma$ is a parameter. After $M$\ncustomers sit down, the seating plan gives a partition of $M$ items. This distribution gives\nthe same partition structure as draws from a Dirichlet process.\nB. Nested Chinese Restaurant Process\nThe nested Chinese restaurant process (nCRP) is an extended version of the CRP. Suppose that there are an infinite number of infinite-table Chinese restaurants in a city. One restaurant is designated the root restaurant, and on each of its infinite tables is a card with the name of another restaurant. On each of the tables in those restaurants are cards that refer to other restaurants, and this structure repeats infinitely. Each restaurant is referred to exactly once. As a result, the whole process can be imagined as an infinitely-branched tree.\nNow consider a tourist who arrives in the city for a culinary vacation. On the first day, he enters the root Chinese restaurant and selects a table using the equation above. On the second day, he enters the restaurant referred to by the previous day's table, again choosing a table from the above equation. This process is repeated for $L$ days, and at the end, the tourist has sat in $L$ restaurants which constitute a path from the root to a restaurant at the $L$th level in the infinite tree. 
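The seating rule above is easy to simulate directly. The helper below is a hypothetical sketch (not part of the paper's code): it seats customers one at a time, with each arrival choosing an occupied table $i$ with weight $m_i$ and a new table with weight $\gamma$.

```python
import random

def crp_seating(n_customers, gamma, seed=0):
    """Simulate CRP seating; returns the table sizes, a partition of n_customers."""
    rng = random.Random(seed)
    tables = []                        # tables[i] = customers seated at table i
    for m in range(n_customers):       # m customers are already seated
        r = rng.uniform(0.0, m + gamma)  # total weight is sum(m_i) + gamma
        acc = 0.0
        chosen = len(tables)           # default: next unoccupied table
        for i, size in enumerate(tables):
            acc += size
            if r < acc:
                chosen = i
                break
        if chosen == len(tables):
            tables.append(1)           # open a new table
        else:
            tables[chosen] += 1
    return tables

tables = crp_seating(100, gamma=2.0, seed=1)
assert sum(tables) == 100              # the seating plan partitions the customers
```

Larger $\gamma$ makes new tables more likely, producing more, smaller clusters; small $\gamma$ concentrates customers at a few tables (the "rich get richer" behavior).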
After $M$ tourists take $L$-day vacations, the collection of paths describes a particular $L$-level subtree of the infinite tree.\nC. Hierarchical Topic Model (hLDA)\nThe hierarchical latent Dirichlet allocation model (hLDA), together with the nested Chinese restaurant process (nCRP), illustrates the pattern of words from the collection of documents. There are 3 procedures in hLDA: (1) Draw a path from the root node to a leaf; (2) Along that path, draw a vector of topic proportions; (3) Draw the words from the topics. In addition, all documents share the topic associated with the root restaurant.\n\nLet $c_1$ be the root restaurant.\nFor each level $\\ell\\in{2,...,L}$:\nDraw a table from restaurant $c_{\\ell-1}$ using the CRP. Set $c_{\\ell}$ to be the restaurant referred to by that table.\n\n\nDraw an L-dimensional topic proportion vector $\\theta$ from Dir($\\alpha$).\nFor each word $n\\in{1,...,N}$:\nDraw $z\\in{1,...,L}$ from Mult($\\theta$).\nDraw $w_n$ from the topic associated with restaurant $c_z$.\n\n\n\n<img src=\"hLDA.png\" style=\"width:400px\">\n\nNotation:\n$T$ : L-level infinite tree - drawn from CRP($\\gamma$)\n$\\theta$ : L-dimensional topic proportion distribution - drawn from Dir($\\alpha$)\n$\\beta$ : probability of words for each topic - drawn from $\\eta$\n$c_{\\ell}$ : L-level paths, given $T$\n$z$ : level (topic) assignment for each word - drawn from Mult($\\theta$)\n$w$ : word distribution for each topic at each level\n$N$ : number of words - $n\\in{1,...,N}$\n$M$ : number of documents - $m\\in{1,...,M}$\n\n\n\nIII. Approximate Inference by Gibbs Sampling\nGibbs sampling draws from the posterior nCRP and the corresponding topics in the hLDA model. The sampler is divided into 2 parts -- $z_{m,n}$ and $c_{m,\\ell}$. In addition, the variables $\\theta$ and $\\beta$ are integrated out.\nA. 
Notation\n\n$w_{m,n}$ : the $n$th word in the $m$th document\n$c_{m,\\ell}$ : the restaurant corresponding to the $\\ell$th topic in document $m$\n$z_{m,n}$ : the assignment of the $n$th word in the $m$th document to one of the $L$ available topics\n\nB. Topic distribution : $z_{m,n}$\n\\begin{align}\np(z_{i}=j\\hspace{0.5ex}|\\hspace{0.5ex}{\\bf z}_{-i},{\\bf w})\\propto\\frac{n_{-i,j}^{(w_{i})}+\\beta}{n_{-i,j}^{(\\cdot)}+W\\beta}\\frac{n_{-i,j}^{(d_{i})}+\\alpha}{n_{-i,\\cdot}^{(d_{i})}+T\\alpha}\n\\end{align}\n\n$z_{i}$ : assignments of words to topics\n$n_{-i,j}^{(w_{i})}$ : number of words assigned to topic $j$ that are the same as $w_i$\n$n_{-i,j}^{(\\cdot)}$ : total number of words assigned to topic $j$\n$n_{-i,j}^{(d_{i})}$ : number of words from document $d_i$ assigned to topic $j$\n$n_{-i,\\cdot}^{(d_{i})}$ : total number of words in document $d_i$\n$W$ : number of words that have been assigned\n\nC. Path : ${\\bf c}_{m}$\n$$p({\\bf c}_{m}\\hspace{0.5ex}|\\hspace{0.5ex}{\\bf w}, {\\bf c}_{-m}, {\\bf z})\\propto p({\\bf w}_{m}\\hspace{0.5ex}|\\hspace{0.5ex}{\\bf c}, {\\bf w}_{-m}, {\\bf z})\\cdot p({\\bf c}_{m}\\hspace{0.5ex}|\\hspace{0.5ex}{\\bf c}_{-m})$$\n\n$p({\\bf c}_{m}\\hspace{0.5ex}|\\hspace{0.5ex}{\\bf w}, {\\bf c}_{-m}, {\\bf z})$ : posterior of the set of probabilities of possible novel paths\n$p({\\bf w}_{m}\\hspace{0.5ex}|\\hspace{0.5ex}{\\bf c}, {\\bf w}_{-m}, {\\bf z})$ : likelihood of the data given a particular choice of ${\\bf c}_{m}$\n$p({\\bf c}_{m}\\hspace{0.5ex}|\\hspace{0.5ex}{\\bf c}_{-m})$ : prior on ${\\bf c}_{m}$ implied by the nCRP\n\n$$p({\\bf w}_{m}\\hspace{0.5ex}|\\hspace{0.5ex}{\\bf c}, {\\bf w}_{-m}, {\\bf z})=\\prod_{\\ell=1}^{L}\\left(\\frac{\\Gamma(n_{c_{m,\\ell},-m}^{(\\cdot)}+W\\eta)}{\\prod_{w}\\Gamma(n_{c_{m,\\ell},-m}^{(w)}+\\eta)}\\frac{\\prod_{w}\\Gamma(n_{c_{m,\\ell},-m}^{(w)}+n_{c_{m,\\ell},m}^{(w)}+\\eta)}{\\Gamma(n_{c_{m,\\ell},-m}^{(\\cdot)}+n_{c_{m,\\ell},m}^{(\\cdot)}+W\\eta)}\\right)$$\n\n$p({\\bf w}_{m}\\hspace{0.5ex}|\\hspace{0.5ex}{\\bf c}, {\\bf w}_{-m}, {\\bf z})$ : joint likelihood of the words in document $m$\n$n_{c_{m,\\ell},-m}^{(w)}$ : number of instances of word $w$ that have been assigned to the topic indexed by $c_{m,\\ell}$, excluding document $m$\n$W$ : total vocabulary size\n\nIV. Implementation\nA. Package import",
"import numpy as np\nfrom scipy.special import gammaln\nimport random\nfrom collections import Counter\nimport string\nimport graphviz\nimport pygraphviz\nimport pydot",
"B. Function construction\nB.1 Chinese Restaurant Process (CRP)",
"def CRP(topic, phi):\n '''\n CRP gives the probability of topic assignment for specific vocabulary\n Return a 1 * j array, where j is the number of topic\n \n Parameter\n ---------\n topic: a list of lists, contains assigned words in each sublist (topic)\n phi: double, parameter for CRP\n \n Return\n ------\n p_crp: the probability of topic assignments for new word\n '''\n p_crp = np.empty(len(topic)+1)\n m = sum([len(x) for x in topic])\n p_crp[0] = phi / (phi + m)\n for i, word in enumerate(topic):\n p_crp[i+1] = len(word) / (phi + m)\n return p_crp",
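To make the behavior of CRP concrete, here is a small self-contained check (the function body is repeated from the cell above so the snippet runs on its own): with two topics holding 3 and 1 words and phi = 1, a new word opens a new topic with probability 1/5 and joins the existing topics with probabilities 3/5 and 1/5.

```python
import numpy as np

def CRP(topic, phi):
    # identical to the CRP function defined above
    p_crp = np.empty(len(topic) + 1)
    m = sum(len(x) for x in topic)
    p_crp[0] = phi / (phi + m)
    for i, words in enumerate(topic):
        p_crp[i + 1] = len(words) / (phi + m)
    return p_crp

p = CRP([['apple', 'pear', 'plum'], ['dog']], phi=1.0)
assert np.allclose(p, [0.2, 0.6, 0.2])   # new topic, topic 1, topic 2
```

Note that the returned vector already sums to one here, because the new-table weight phi and the table counts share the same normalizer phi + m.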
"B.2 Node Sampling",
"def node_sampling(corpus_s, phi):\n '''\n Node sampling samples the number of topics, L\n return a j-layer list of lists, where j is the number of topics\n \n Parameter\n ---------\n corpus_s: a list of lists, contains words in each sublist (document)\n phi: double, parameter for CRP\n \n Return\n ------\n topic: a list of lists, contains assigned words in each sublist (topic)\n '''\n topic = [] \n for corpus in corpus_s:\n for word in corpus:\n cm = CRP(topic, phi)\n theta = np.random.multinomial(1, (cm/sum(cm))).argmax()\n if theta == 0:\n topic.append([word])\n else:\n topic[theta-1].append(word)\n return topic",
"B.3 Gibbs sampling -- $z_{m,n}$",
"def Z(corpus_s, topic, alpha, beta):\n '''\n Z samples from LDA model\n Return two j-layer list of lists, where j is the number of topics\n \n Parameter\n ---------\n corpus_s: a list of lists, contains words in each sublist (document)\n topic: a L-dimensional list of lists, sample from node_sampling\n alpha: double, parameter\n beta: double, parameter\n \n Return\n ------\n z_topic: a j-dimensional list of lists, drawn from L-dimensioanl topic, j<L\n z_doc: a j-dimensioanl list of lists, report from which document the word is assigned to each topic\n '''\n n_vocab = sum([len(x) for x in corpus_s])\n t_zm = np.zeros(n_vocab).astype('int')\n z_topic = [[] for _ in topic]\n z_doc = [[] for _ in topic]\n z_tmp = np.zeros((n_vocab, len(topic)))\n assigned = np.zeros((len(corpus_s), len(topic)))\n n = 0\n for i in range(len(corpus_s)):\n for d in range(len(corpus_s[i])): \n wi = corpus_s[i][d] \n for j in range(len(topic)):\n lik = (z_topic[j].count(wi) + beta) / (assigned[i, j] + n_vocab * beta)\n pri = (len(z_topic[j]) + alpha) / ((len(corpus_s[i]) - 1) + len(topic) * alpha)\n z_tmp[n, j] = lik * pri\n t_zm[n] = np.random.multinomial(1, (z_tmp[n,:]/sum(z_tmp[n,:]))).argmax()\n z_topic[t_zm[n]].append(wi)\n z_doc[t_zm[n]].append(i)\n assigned[i, t_zm[n]] += 1\n n += 1\n z_topic = [x for x in z_topic if x != []]\n z_doc = [x for x in z_doc if x != []]\n return z_topic, z_doc",
"B.4 Gibbs sampling -- ${\\bf c}_{m}$, CRP prior",
"def CRP_prior(corpus_s, doc, phi):\n '''\n CRP_prior implies by nCRP\n Return a m*j array, whre m is the number of documents and j is the number of topics\n \n Parameter\n ---------\n corpus_s: a list of lists, contains words in each sublist (document)\n doc: a j-dimensioanl list of lists, drawn from Z function (z_doc)\n phi: double, parameter for CRP\n \n Return\n ------\n c_p: a m*j array, for each document the probability of the topics\n '''\n c_p = np.empty((len(corpus_s), len(doc)))\n for i, corpus in enumerate(corpus_s):\n p_topic = [[x for x in doc[j] if x != i] for j in range(len(doc))]\n tmp = CRP(p_topic, phi)\n c_p[i,:] = tmp[1:]\n return c_p",
"B.5 Gibbs sampling -- ${\\bf c}_{m}$, likelihood",
"def likelihood(corpus_s, topic, eta):\n '''\n likelihood gives the propability of data given a particular choice of c\n Return a m*j array, whre m is the number of documents and j is the number of topics\n \n Parameter\n ---------\n corpus_s: a list of lists, contains words in each sublist (document)\n topic: a j-dimensional list of lists, drawn from Z function (z_assigned)\n eta: double, parameter\n \n Return\n ------\n w_m: a m*j array\n '''\n w_m = np.empty((len(corpus_s), len(topic)))\n allword_topic = [word for t in topic for word in t]\n n_vocab = sum([len(x) for x in corpus_s])\n for i, corpus in enumerate(corpus_s):\n prob_result = []\n for j in range(len(topic)):\n current_topic = topic[j]\n n_word_topic = len(current_topic)\n prev_dominator = 1\n later_numerator = 1\n prob_word = 1 \n\n overlap = [val for val in set(corpus) if val in current_topic]\n \n prev_numerator = gammaln(len(current_topic) - len(overlap) + n_vocab * eta)\n later_dominator = gammaln(len(current_topic) + n_vocab * eta)\n for word in corpus: \n corpus_list = corpus \n if current_topic.count(word) - corpus_list.count(word) < 0 :\n a = 0\n else:\n a = current_topic.count(word) - corpus_list.count(word)\n \n prev_dominator += gammaln(a + eta)\n later_numerator += gammaln(current_topic.count(word) + eta)\n \n prev = prev_numerator - prev_dominator\n later = later_numerator - later_dominator\n \n like = prev + later \n w_m[i, j] = like\n w_m[i, :] = w_m[i, :] + abs(min(w_m[i, :]) + 0.1)\n w_m = w_m/w_m.sum(axis = 1)[:, np.newaxis]\n return w_m",
"B.6 Gibbs sampling -- ${\\bf c}_{m}$, posterior",
"def post(w_m, c_p):\n '''\n Parameter\n ---------\n w_m: likelihood, drawn from likelihood function\n c_p: prior, drawn from CRP_prior function\n \n Return\n ------\n c_m, a m*j list of lists\n '''\n c_m = (w_m * c_p) / (w_m * c_p).sum(axis = 1)[:, np.newaxis]\n return np.array(c_m)",
"B.7 Gibbs sampling -- $w_{n}$",
"def wn(c_m, corpus_s):\n '''\n wn return the assignment of words for topics, drawn from multinomial distribution\n Return a n*1 array, where n is the total number of word\n \n Parameter\n ---------\n c_m: a m*j list of lists, drawn from post function\n corpus_s: a list of lists, contains words in each sublist (document)\n \n Return\n ------\n wn_ass: a n*1 array, report the topic assignment for each word\n '''\n wn_ass = []\n for i, corpus in enumerate(corpus_s):\n for word in corpus:\n theta = np.random.multinomial(1, c_m[i]).argmax()\n wn_ass.append(theta)\n return np.array(wn_ass)",
"C. Gibbs sampling\nC.1 Find most common value",
"most_common = lambda x: Counter(x).most_common(1)[0][0]",
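The most_common helper simply picks the modal value of a sequence; it is used below to take a majority vote across Gibbs iterations. A quick illustration:

```python
from collections import Counter

# same helper as the cell above
most_common = lambda x: Counter(x).most_common(1)[0][0]

assert most_common([2, 0, 2, 1, 2, 1]) == 2   # 2 appears three times
```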
"C.2 Gibbs sampling",
"def gibbs(corpus_s, topic, alpha, beta, phi, eta, ite):\n '''\n gibbs will return the distribution of words for topics\n Return a j-dimensional list of lists, where j is the number of topics\n \n Parameter\n ---------\n corpus_s: a list of lists, contains words in each sublist (document)\n topic: a j-dimensional list of lists, drawn from Z function (z_assigned)\n alpha: double, parameter for Z function\n beta: double, parameter for Z function\n phi: double, parameter fro CRP_prior function\n eta: double, parameter for w_n function\n ite: int, number of iteration\n \n Return\n ------\n wn_topic: a j-dimensional list of lists, the distribution of words for topics\n '''\n n_vocab = sum([len(x) for x in corpus_s])\n gibbs = np.empty((n_vocab, ite)).astype('int')\n \n for i in range(ite):\n z_topic, z_doc = Z(corpus_s, topic, alpha, beta)\n c_p = CRP_prior(corpus_s, z_doc, phi)\n w_m = likelihood(corpus_s, z_topic, eta)\n c_m = post(w_m, c_p)\n gibbs[:, i] = wn(c_m, corpus_s) \n # drop first 1/10 data\n gibbs = gibbs[:, int(ite/10):]\n theta = [most_common(gibbs[x]) for x in range(n_vocab)]\n \n n_topic = max(theta)+1\n \n wn_topic = [[] for _ in range(n_topic)]\n wn_doc_topic = [[] for _ in range(n_topic)]\n\n doc = 0\n n = 0\n for i, corpus_s in enumerate(corpus_s):\n if doc == i:\n for word in corpus_s:\n wn_doc_topic[theta[n]].append(word)\n n += 1\n for j in range(n_topic):\n if wn_doc_topic[j] != []:\n wn_topic[j].append(wn_doc_topic[j])\n wn_doc_topic = [[] for _ in range(n_topic)] \n doc += 1\n wn_topic = [x for x in wn_topic if x != []]\n return wn_topic",
"V. Topic Model with hLDA\nGibbs sampling in section IV distributes the input vocabulary from the documents in the corpus to the available topics, which are sampled from the $L$-dimensional topics. In section V, an $n$-level tree is presented as a tree plot, in which the root node is more general and the leaves are more specific. In addition, the tree plot returns the words sorted by their frequencies for each node.\nA. hLDA model",
"def hLDA(corpus_s, alpha, beta, phi, eta, ite, level):\n '''\n hLDA generates an n*1 list of lists, where n is the number of level\n \n Parameter\n ---------\n corpus_s: a list of lists, contains words in each sublist (document)\n alpha: double, parameter for Z function\n beta: double, parameter for Z function\n phi: double, parameter fro CRP_prior function\n eta: double, parameter for w_n function\n ite: int, number of iteration\n level: int, number of level\n \n Return\n hLDA_tree: an n*1 list of lists, each sublist represents a level, the sublist in each level represents a topic\n node: an n*1 list of lists, returns how many nodes there are in each level\n '''\n \n topic = node_sampling(corpus_s, phi)\n print(len(topic))\n \n hLDA_tree = [[] for _ in range(level)]\n tmp_tree = []\n node = [[] for _ in range(level+1)]\n node[0].append(1)\n \n for i in range(level):\n if i == 0:\n wn_topic = gibbs(corpus_s, topic, alpha, beta, phi, eta, ite)\n node_topic = [x for word in wn_topic[0] for x in word]\n hLDA_tree[0].append(node_topic)\n tmp_tree.append(wn_topic[1:])\n tmp_tree = tmp_tree[0]\n node[1].append(len(wn_topic[1:]))\n else:\n for j in range(sum(node[i])):\n if tmp_tree == []:\n break\n wn_topic = gibbs(tmp_tree[0], topic, alpha, beta, phi, eta, ite)\n node_topic = [x for word in wn_topic[0] for x in word]\n hLDA_tree[i].append(node_topic)\n tmp_tree.remove(tmp_tree[0])\n if wn_topic[1:] != []:\n tmp_tree.extend(wn_topic[1:])\n node[i+1].append(len(wn_topic[1:]))\n \n return hLDA_tree, node[:level]",
"B. hLDA plot",
"def HLDA_plot(hLDA_object, Len = 8, save = False):\n \n from IPython.display import Image, display\n def viewPydot(pdot):\n plt = Image(pdot.create_png())\n display(plt)\n\n words = hLDA_object[0]\n struc = hLDA_object[1]\n \n graph = pydot.Dot(graph_type='graph')\n end_index = [np.insert(np.cumsum(i),0,0) for i in struc]\n \n for level in range(len(struc)-1):\n leaf_level = level + 1\n leaf_word = words[leaf_level]\n leaf_struc = struc[leaf_level]\n word = words[level]\n end_leaf_index = end_index[leaf_level]\n\n for len_root in range(len(word)):\n root_word = '\\n'.join([x[0] for x in Counter(word[len_root]).most_common(Len)])\n leaf_index = leaf_struc[len_root] \n start = end_leaf_index[len_root]\n end = end_leaf_index[len_root+1]\n lf = leaf_word[start:end] \n for l in lf:\n leaf_w = '\\n'.join([x[0] for x in Counter(list(l)).most_common(Len)])\n edge = pydot.Edge(root_word, leaf_w)\n graph.add_edge(edge)\n if save == True:\n graph.write_png('graph.png')\n viewPydot(graph)",
"VI. Empirical Example\nA. Simulated data\nFor the simulated-data example, each document, $d$, in the corpus is generated by a normal distribution with a different number of words, $w_{d,n}$, where $n\\in{10,...,200}$ and ${\\bf w}_{d}\\sim N(0, 1)$. In this example, by generating 35 documents in the corpus, we expect to see a simulated tree with numbers near the mean, $0$, such as {w0, w1, w-1}, in the root node and numbers far from the mean, such as {w10, w-10, w15}, in the leaves.",
"def sim_corpus(n):\n n_rows = n\n corpus = [[] for _ in range(n_rows)]\n for i in range(n_rows):\n n_cols = np.random.randint(10, 200, 1, dtype = 'int')[0]\n for j in range(n_cols):\n num = np.random.normal(0, 1, n_cols)\n word = 'w%s' % int(round(num[j], 1)*10)\n corpus[i].append(word)\n return corpus\n\ncorpus_0 = sim_corpus(35)\n\ntree_0 = hLDA(corpus_0, 0.1, 0.01, 2, 0.01, 100, 3)\n\nHLDA_plot(tree_0, 5, False)",
"B. Real data\nFor the real-data example, the corpus of documents is generated from Blei's sample data. The documents are split by paragraph; that is, each paragraph represents one document. We take the first 11 documents to form the sample corpus used in the hLDA model. To form the corpus, we read it in as a large list of lists. The sublists in the nested list represent the documents; the elements in each sublist represent the words in a specific document. Note that punctuation is removed from the corpus.",
"def read_corpus(corpus_path):\n punc = ['`', ',', \"'\", '.', '!', '?']\n corpus = []\n with open(corpus_path, 'r') as f:\n for line in f:\n for x in punc:\n line = line.replace(x, '')\n line = line.strip('\\n')\n word = line.split(' ')\n corpus.append(word)\n return(corpus)\n\ncorpus_1 = read_corpus('sample.txt')\n\ntree_1 = hLDA(corpus_1, 0.1, 0.01, 1, 0.01, 100, 3)\n\nHLDA_plot(tree_1, 5, False)",
"VII. Download and Install from Github\nThe hLDA code of the paper Hierarchical Topic Models and the Nested Chinese Restaurant Process is released on github with the package named hLDA (click to clone). One can easily download (click to download) and install it by running python setup.py install. The package provides 4 functions:\n\nhLDA.sim_corpus(n): return a simulated corpus with $n$ documents\ninputs: \nn: int, number of documents in the corpus\n\n\n\n\nhLDA.read_corpus(corpus_path): return the corpus as a list of lists with length $n$, where $n$ is the number of documents.\ninputs: \ncorpus_path: the path of the txt file; note that each paragraph represents a document\n\n\n\n\nhLDA.hLDA(corpus, alpha, beta, phi, eta, iteration, level): return an $n$-level tree, where $n$ is the input level\ninputs:\ncorpus: corpus read from hLDA.read_corpus or simulated from sim_corpus\nalpha: double, parameter for Z function\nbeta: double, parameter for Z function\nphi: double, parameter for CRP_prior function\neta: double, parameter for w_n function\niteration: int, number of iterations for gibbs sampling\nlevel: int, number of levels\n\n\n\n\nhLDA.HLDA_plot(hLDA_result, n_words, save): return a tree plot from the hLDA topic model\ninputs:\nhLDA_result: the hLDA result generated from hLDA.hLDA\nn_words: int, how many words to show in each node (sorted by frequency), default 5\nsave: boolean, save the plot or not, default False\n\n\n\n\n\nNote that the required packages for hLDA are: (1) numpy; (2) scipy; (3) collections; (4) string; (5) pygraphviz; (6) pydot.",
"import hLDA\n\nsim = hLDA.sim_corpus(5)\nprint(sim[0])\n\ncorpus = hLDA.read_corpus('sample.txt')\nprint(corpus[0])\n\ntree = hLDA.hLDA(corpus, 0.1, 0.01, 1, 0.01, 10, 3)\n\nhLDA.HLDA_plot(tree)",
"VIII. Optimization\nTo optimize the hLDA model, we chose Cython to speed up the functions, since the only matrix-calculation function, c_m, was already vectorized. However, after applying Cython, the code does not speed up appreciably. The possible reasons are as follows.\nFirst, if we simply speed up a single function, Cython does it well. Take the first function, node_sampling, for example: the run time decreased from 52.2 ms to 47.2 ms, which means the Cython code is about 10% faster than the Python code. On the other hand, if we try to speed up all the functions used in the Gibbs sampling function, gibbs, the run time is similar or even slower, since it has to import external Cython functions to complete the work.\nSecond, most of the variables used in hLDA are lists. When writing Cython in Python, we fail to initialize the data types for the list variables efficiently.",
"%load_ext Cython\n\n%%cython -a\n\ncimport cython\ncimport numpy as np\n\nimport numpy as np\n\n@cython.cdivision\n@cython.boundscheck(False)\n@cython.wraparound(False)\n\n\ndef CRP_c(list topic, double phi):\n cdef double[:] cm = np.empty(len(topic)+1)\n cdef int m = sum([len(x) for x in topic])\n \n cm[0] = phi / (phi + m)\n \n cdef int i\n cdef list word\n for i, word in enumerate(topic):\n cm[i+1] = len(word) / (phi + m)\n return np.array(cm)\n\ndef node_sampling_c(list corpus_s, double phi):\n cdef list topic = [] \n cdef int theta\n \n cdef list corpus\n cdef str word\n for corpus in corpus_s:\n for word in corpus:\n cm = CRP_c(topic, phi)\n theta = np.random.multinomial(1, (cm/sum(cm))).argmax()\n if theta == 0:\n topic.append([word])\n else:\n topic[theta-1].append(word)\n return topic\n\n%timeit node_sampling_c(corpus_1, 1)\n\n%timeit node_sampling(corpus_1, 1)",
"IX. Code Comparison\nThis section introduces the LDA model as a comparison with the hLDA model. The LDA model requires the user to specify the number of topics and returns the probability of the words in each topic, which are the main differences compared to the hLDA model. The hLDA model applies a nonparametric prior which allows arbitrary factors and readily accommodates growing data collections. That is, the hLDA model samples the number of topics via the nCRP and returns a topic hierarchy tree.\nThe lda_topic function returns a single layer of word distributions for topics, whose number is specified as a parameter of the function. For each topic, the LDA model gives the probability distribution over possible words. The LDA model treats the corpus as one big document instead of considering each document on its own. Furthermore, it cannot illustrate the relationships between topics and words that the hLDA model provides.",
"import matplotlib.pyplot as plt\nfrom nltk.tokenize import RegexpTokenizer\nfrom stop_words import get_stop_words\nfrom nltk.stem.porter import PorterStemmer\nfrom gensim import corpora, models\nimport gensim\n\ndef lda_topic(corpus_s, dic, n_topics, ite):\n lda = gensim.models.ldamodel.LdaModel(corpus = corpus_s,\n id2word = dic,\n num_topics = n_topics,\n update_every = 1,\n chunksize = 1,\n passes = 1,\n iterations = ite)\n return lda.print_topics()\n\ncorpus = read_corpus('sample.txt')\n\ndef lda_corpus(corpus_s):\n texts = []\n tokenizer = RegexpTokenizer(r'\\w+')\n\n for doc in corpus_s:\n for word in doc:\n raw = word.lower()\n tokens = tokenizer.tokenize(raw)\n texts.append(tokens)\n \n dictionary = corpora.Dictionary(texts)\n n_corpus = [dictionary.doc2bow(text) for text in texts]\n corpora.MmCorpus.serialize('sample.mm', n_corpus)\n sample = gensim.corpora.MmCorpus('sample.mm')\n \n return sample, dictionary\n\nsample, dic = lda_corpus(corpus)\n\nlda_topic(sample, dic, 3, 5000)",
"X. Conclusion\nBy introducing the nCRP as a nonparametric prior for a hierarchical extension of LDA, we obtain the hLDA model. First, in the hLDA topic model, words are allocated by Gibbs sampling of two critical variables -- ${\\bf z}$ and ${\\bf c}_{m}$. The former, ${\\bf z}$, describes how words are allocated to each topic, thereby determining the number of topics under each parent node. The latter, ${\\bf c}_{m}$, the posterior combining the likelihood (${\\bf w}_{m}$) with the nCRP prior (${\\bf c}_{m}$), is a set of possible values corresponding to the topics simulated from ${\\bf z}$ for each document $m$. After setting up ${\\bf z}$ and ${\\bf c}_{m}$, hLDA runs Gibbs sampling to draw $w_{n}$, the distribution of words over the topics drawn from ${\\bf z}$ and ${\\bf c}_{m}$. Last, we write the hLDA function and the HLDA_plot function to print the result as a list and plot it as a topic tree.\nReferences\n[1] Griffiths, Thomas L., and Mark Steyvers. \"A probabilistic approach to semantic representation.\" Proceedings of the 24th annual conference of the cognitive science society. 2002.\n[2] Griffiths, D. M. B. T. L., and M. I. J. J. B. Tenenbaum. \"Hierarchical topic models and the nested chinese restaurant process.\" Advances in neural information processing systems 16 (2004): 17.\n[3] Blei, David M., Thomas L. Griffiths, and Michael I. Jordan. \"The nested chinese restaurant process and bayesian nonparametric inference of topic hierarchies.\" Journal of the ACM (JACM) 57.2 (2010): 7."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.14/_downloads/plot_raw_objects.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
".. _tut_raw_objects:\nThe :class:Raw <mne.io.RawFIF> data structure: continuous data",
"from __future__ import print_function\n\nimport mne\nimport os.path as op\nfrom matplotlib import pyplot as plt",
"Continuous data is stored in objects of type :class:Raw <mne.io.RawFIF>.\nThe core data structure is simply a 2D numpy array (channels × samples,\n._data) combined with an :class:Info <mne.io.meas_info.Info> object\n(.info) (see :ref:tut_info_objects).\nThe most common way to load continuous data is from a .fif file. See\n:ref:loading data from other formats <ch_raw> for other formats, and\n:ref:from scratch <tut_creating_data_structures> for creating data\nstructures from scratch.\nLoading continuous data",
"# Load an example dataset, the preload flag loads the data into memory now\ndata_path = op.join(mne.datasets.sample.data_path(), 'MEG',\n 'sample', 'sample_audvis_raw.fif')\nraw = mne.io.RawFIF(data_path, preload=True, verbose=False)\n\n# Give the sample rate\nprint('sample rate:', raw.info['sfreq'], 'Hz')\n# Give the size of the data matrix\nprint('channels x samples:', raw._data.shape)",
"Information about the channels contained in the :class:Raw <mne.io.RawFIF>\nobject is contained in the :class:Info <mne.io.meas_info.Info> attribute.\nThis is essentially a dictionary with a number of relevant fields (see\n:ref:tut_info_objects).\nIndexing data\nThere are two ways to access the data stored within :class:Raw\n<mne.io.RawFIF> objects. One is by accessing the underlying data array, and\nthe other is to index the :class:Raw <mne.io.RawFIF> object directly.\nTo access the data array of :class:Raw <mne.io.Raw> objects, use the\n_data attribute. Note that this is only present if preload==True.",
"print('Shape of data array:', raw._data.shape)\narray_data = raw._data[0, :1000]\n_ = plt.plot(array_data)",
"You can also pass an index directly to the :class:Raw <mne.io.RawFIF>\nobject. This will return an array of times, as well as the data representing\nthose timepoints. This may be used even if the data is not preloaded:",
"# Extract data from the first 5 channels, from 1 s to 3 s.\nsfreq = raw.info['sfreq']\ndata, times = raw[:5, int(sfreq * 1):int(sfreq * 3)]\n_ = plt.plot(times, data.T)\n_ = plt.title('Sample channels')",
"Selecting subsets of channels and samples\nIt is possible to use more intelligent indexing to extract data, using\nchannel names, types or time ranges.",
"# Pull all MEG gradiometer channels:\n# Make sure to use copy==True or it will overwrite the data\nmeg_only = raw.pick_types(meg=True, copy=True)\neeg_only = raw.pick_types(meg=False, eeg=True, copy=True)\n\n# The MEG flag in particular lets you specify a string for more specificity\ngrad_only = raw.pick_types(meg='grad', copy=True)\n\n# Or you can use custom channel names\npick_chans = ['MEG 0112', 'MEG 0111', 'MEG 0122', 'MEG 0123']\nspecific_chans = raw.pick_channels(pick_chans, copy=True)\nprint(meg_only, eeg_only, grad_only, specific_chans, sep='\\n')",
"Notice the different scalings of these types",
"f, (a1, a2) = plt.subplots(2, 1)\neeg, times = eeg_only[0, :int(sfreq * 2)]\nmeg, times = meg_only[0, :int(sfreq * 2)]\na1.plot(times, meg[0])\na2.plot(times, eeg[0])",
"You can restrict the data to a specific time range",
"restricted = raw.crop(5, 7) # in seconds\nprint('New time range from', restricted.times.min(), 's to',\n restricted.times.max(), 's')",
"And drop channels by name",
"restricted = restricted.drop_channels(['MEG 0241', 'EEG 001'])\nprint('Number of channels reduced from', raw.info['nchan'], 'to',\n restricted.info['nchan'])",
"Concatenating :class:Raw <mne.io.RawFIF> objects\n:class:Raw <mne.io.RawFIF> objects can be concatenated in time by using the\n:func:append <mne.io.RawFIF.append> function. For this to work, they must\nhave the same number of channels and their :class:Info\n<mne.io.meas_info.Info> structures should be compatible.",
"# Create multiple :class:`Raw <mne.io.RawFIF>` objects\nraw1 = raw.copy().crop(0, 10)\nraw2 = raw.copy().crop(10, 20)\nraw3 = raw.copy().crop(20, 100)\n\n# Concatenate in time (also works without preloading)\nraw1.append([raw2, raw3])\nprint('Time extends from', raw1.times.min(), 's to', raw1.times.max(), 's')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ehongdata/Network-Analysis-Made-Simple
|
4. Cliques, Triangles and Squares (Instructor).ipynb
|
mit
|
[
"import networkx as nx\nimport matplotlib.pyplot as plt\n\n%matplotlib inline",
"Cliques, Triangles and Squares\nLet's pose a problem: If A knows B and B knows C, is it probable that A knows C as well? A graph involving just these three individuals might look like this:",
"G = nx.Graph()\nG.add_nodes_from(['a', 'b', 'c'])\nG.add_edges_from([('a','b'), ('b', 'c')])\nnx.draw(G, with_labels=True)",
"Let's think of another problem: If A knows B, B knows C, C knows D and D knows A, is it likely that A knows C and B knows D? What would this look like?",
"G.add_node('d')\nG.add_edge('c', 'd')\nG.add_edge('d', 'a')\nnx.draw(G, with_labels=True)",
"The set of relationships involving A, B and C, if closed, forms a triangle in the graph. The set of relationships that also includes D forms a square.\nYou may have observed that social networks (LinkedIn, Facebook, Twitter etc.) have friend recommendation systems. How exactly do they work? Apart from analyzing other variables, closing triangles is one of the core ideas behind the system. If A knows B and B knows C, then A probably knows C as well.\nIf all of the triangles in the two small-scale networks were closed, then the graphs would have represented cliques, in which everybody within that subgraph knows one another.\nIn this section, we will attempt to answer the following questions:\n\nCan we identify cliques?\nCan we identify potential cliques that aren't captured by the network?\nCan we model the probability that two unconnected individuals know one another?\n\nAs usual, let's start by loading the synthetic network.",
"# Load the network.\nG = nx.read_gpickle('Synthetic Social Network.pkl')\nnx.draw(G, with_labels=True)",
"Cliques\nIn a social network, cliques are groups of people in which everybody knows everybody. Triangles are a simple example of cliques. Let's try implementing a simple algorithm that finds out whether a node is present in a triangle or not.\nThe core idea is that if a node is present in a triangle, then its neighbors' neighbors' neighbors should include itself.",
"# Example code that shouldn't be too hard to follow.\ndef in_triangle(G, node):\n    neighbors1 = G.neighbors(node)\n    neighbors2 = []\n    for n in neighbors1:\n        # Collect neighbors of neighbors, excluding the node itself,\n        # since every neighbor trivially leads straight back to it.\n        neighbors2.extend(n2 for n2 in G.neighbors(n) if n2 != node)\n\n    # The node is in a triangle iff one of these second-degree\n    # contacts links back to it.\n    for n in neighbors2:\n        if node in G.neighbors(n):\n            return True\n    return False\n\nin_triangle(G, 3)",
"In reality, NetworkX already has a function that counts the number of triangles that any given node is involved in. This is probably more useful than knowing whether a node is present in a triangle or not, but the above code was simply for practice.",
"nx.triangles(G, 3)",
"Exercise\nCan you write a function that takes in one node and its associated graph as an input, and returns a list or set of itself + all other nodes that it is in a triangle relationship with?\nHint: If the neighbor of my neighbor is also my neighbor, then the three of us are in a triangle relationship.\nHint: Python Sets may be of great use for this problem. https://docs.python.org/2/library/stdtypes.html#set\nVerify your answer by drawing out the subgraph composed of those nodes.",
"# Possible answer\ndef get_triangles(G, node):\n    neighbors = set(G.neighbors(node))\n    triangle_nodes = set()\n    \"\"\"\n    Fill in the rest of the code below.\n    \"\"\"\n    # Iterate over a copy so we can safely remove/re-add elements inside the loop.\n    for n in list(neighbors):\n        neighbors2 = set(G.neighbors(n))\n        neighbors.remove(n)\n        neighbors2.remove(node)\n        triangle_nodes.update(neighbors2.intersection(neighbors))\n        neighbors.add(n)\n    triangle_nodes.add(node)\n    return triangle_nodes\n\n# Verify your answer with the following function call. Should return:\n# {1, 2, 3, 6, 23}\nget_triangles(G, 3)\n\n# Then, draw out those nodes.\nnx.draw(G.subgraph(get_triangles(G, 3)), with_labels=True)\n\nneighbors3 = G.neighbors(3)\nneighbors3.append(3)\nnx.draw(G.subgraph(neighbors3), with_labels=True)",
"Friend Recommendation: Open Triangles\nNow that we have some code that identifies closed triangles, we might want to see if we can do some friend recommendations by looking for open triangles.\nOpen triangles are like those that we described earlier on - A knows B and B knows C, but C's relationship with A isn't captured in the graph. \nExercise\nCan you write a function that identifies, for a given node, the other two nodes that it is involved with in an open triangle, if there is one? \nHint: You may still want to stick with set operations. Suppose we have the A-B-C triangle. If there are neighbors of C that are also neighbors of B, then those neighbors are in a triangle with B and C; consequently, if there are nodes for which C's neighbors do not overlap with B's neighbors, then those nodes are in an open triangle. The final implementation should include some conditions, and probably won't be as simple as described above.",
"# Possible Answer, credit Justin Zabilansky (MIT) for help on this.\ndef get_open_triangles(G, node):\n    \"\"\"\n    There are many ways to represent this. One may choose to represent only the nodes involved \n    in an open triangle; this is not the approach taken here.\n    \n    Rather, we have code that explicitly enumerates every open triangle present.\n    \"\"\"\n    open_triangle_nodes = []\n    neighbors = set(G.neighbors(node))\n    \n    for n in neighbors:\n        neighbors2 = set(G.neighbors(n))\n        neighbors2.remove(node)\n        \n        overlaps = set()\n        for n2 in neighbors2:\n            if n2 in neighbors:\n                overlaps.add(n2)\n        \n        difference = neighbors.difference(overlaps)\n        difference.remove(n)\n        \n        for n2 in difference:\n            if set([node, n, n2]) not in open_triangle_nodes:\n                open_triangle_nodes.append(set([node, n, n2]))\n    return open_triangle_nodes\n# # Uncomment the following code if you want to draw out each of the triplets.\n# nodes = get_open_triangles(G, 2)\n# for i, triplet in enumerate(nodes):\n#     fig = plt.figure(i)\n#     nx.draw(G.subgraph(triplet), with_labels=True)\nprint(get_open_triangles(G, 3))\nlen(get_open_triangles(G, 3))",
"If you remember the previous section on hubs and paths, you will note that node 19 was involved in a lot of open triangles.\nTriangle closure is also the core idea behind social networks' friend recommendation systems; of course, it's definitely more complicated than what we've implemented here."
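As a closing sketch (our own illustration, on a toy graph rather than the synthetic social network used above), the friend-recommendation idea boils down to ranking a node's non-neighbors by how many open triangles they would close, i.e. by how many friends they already share. The node names below are purely illustrative.

```python
import networkx as nx
from collections import Counter

# Toy graph: a square with one diagonal, so 'a' and 'c' are the only
# unconnected pair.
G = nx.Graph()
G.add_edges_from([('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a'), ('b', 'd')])

def recommend_friends(G, node):
    """Rank non-neighbors of `node` by the number of open triangles
    they would close with it (the common-neighbors heuristic)."""
    neighbors = set(G.neighbors(node))
    counts = Counter()
    for n in neighbors:
        for n2 in G.neighbors(n):
            if n2 != node and n2 not in neighbors:
                counts[n2] += 1  # node-n-n2 is an open triangle
    return counts.most_common()

print(recommend_friends(G, 'a'))  # → [('c', 2)]: 'c' shares two friends with 'a'
```

Each recommendation's count is exactly the number of open triangles the new edge would close, which is why closing triangles and common-neighbors scoring are two views of the same heuristic.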
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
eecs445-f16/umich-eecs445-f16
|
handsOn_lecture10_bias-variance_tradeoff/bias_variance_handsOn.ipynb
|
mit
|
[
"EECS 445: Machine Learning\nHands On 10: Bias Variance Tradeoff\nConsider a sequence of IID random variables: \n$$\nX_i =\n\\begin{cases}\n100 & \\text{ with prob. } 0.02 \\\n0 & \\text{ with prob. } 0.97 \\\n-100 & \\text{ with prob. } 0.01 \\\n\\end{cases}\n$$\nThe true mean of $X_i$ is \n$$\n0.02 \\times 100 + 0.97 \\times 0 + 0.01 \\times -100 = 1\n$$\nWe want to estimate the true mean of this distribution. We will consider two different estimators of the true mean.\nLet's say you take three samples $X_1, X_2, X_3$, and you compute the empirical mean $Z=\\frac{X_1 + X_2 + X_3}{3}$ and empirical median $Y$ of these three samples (recall that the median is obtained by sorting $X_1, X_2, X_3$ and then choosing the middle (2nd) entry).\nWhat is the bias-variance tradeoff of $Y$ and $Z$ for estimating the true mean of the above distribution?\n\nThey are both unbiased estimators of the true mean, and have the same variance.\nThe median has higher bias and higher variance.\nThe mean has higher bias and higher variance.\nThey both have no bias, but the mean has lower variance.\nThe mean has no bias but some variance, and the median has non-zero bias but less variance\n\nActivity 1: Bias Variance Tradeoff\nWe will now try to see the inherent tradeoff between bias and variance of estimators through linear regression. Consider the following dataset.",
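To make the quiz concrete, here is a quick Monte Carlo sketch (our own addition, not part of the original hands-on): draw many triples from the distribution above and compare the empirical bias and variance of the mean $Z$ and the median $Y$.

```python
import numpy as np

rng = np.random.default_rng(0)
values = np.array([100.0, 0.0, -100.0])
probs = np.array([0.02, 0.97, 0.01])
true_mean = values @ probs  # = 1.0

# 100,000 experiments, three IID samples each.
samples = rng.choice(values, size=(100_000, 3), p=probs)
Z = samples.mean(axis=1)        # empirical mean of each triple
Y = np.median(samples, axis=1)  # empirical median of each triple

bias_Z, var_Z = Z.mean() - true_mean, Z.var()
bias_Y, var_Y = Y.mean() - true_mean, Y.var()
print('mean:   bias %+.3f, variance %.1f' % (bias_Z, var_Z))
print('median: bias %+.3f, variance %.1f' % (bias_Y, var_Y))
```

The mean comes out essentially unbiased with variance near $\mathrm{Var}(X)/3 = 299/3 \approx 100$, while the median is biased toward $0$ (most triples have median $0$) with far smaller variance: a trade of bias for variance.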
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom numpy.matlib import repmat\nfrom sklearn.preprocessing import PolynomialFeatures\ndegrees = [1,2,3,4,5]\n\n\n#define data\nn = 20\nsub = 1000\nmean = 0\nstd = 0.25\n\n#define test set\nXtest = np.random.random((n,1))*2*np.pi\nytest = np.sin(Xtest) + np.random.normal(mean,std,(n,1))\n\n#pre-allocate variables\npreds = np.zeros((n,sub))\nbias = np.zeros(len(degrees))\nvariance = np.zeros(len(degrees))\nmse = np.zeros(len(degrees))\nvalues = np.expand_dims(np.linspace(0,2*np.pi,100),1)\n",
"Let's try several polynomial fits to the data:",
"for j,degree in enumerate(degrees):\n\n    for i in range(sub):\n\n        #create data - sample from sine wave \n        x = np.random.random((n,1))*2*np.pi\n        y = np.sin(x) + np.random.normal(mean,std,(n,1))\n        \n        poly = PolynomialFeatures(degree=degree)\n\n        #create features corresponding to degree - ex: 1, x, x^2, x^3...\n        #(one possible solution)\n        A = poly.fit_transform(x)\n        \n        #fit model using least squares solution (linear regression)\n        #later include ridge regression/normalization\n        #(one possible solution: pseudo-inverse least squares)\n        coeffs = np.linalg.pinv(A).dot(y)\n        \n        #store predictions for each sampling\n        preds[:,i] = poly.fit_transform(Xtest).dot(coeffs)[:,0]\n        \n        #plot 9 images\n        if i < 9:\n            plt.subplot(3,3,i+1)\n            plt.plot(values,poly.fit_transform(values).dot(coeffs),x,y,'.b')\n\n    plt.axis([0,2*np.pi,-2,2])\n    plt.suptitle('PolyFit = %i' % (degree))\n    plt.show()\n\n    #Calculate mean (squared) bias, variance, and MSE (one possible solution)\n    bias[j] = np.mean((np.mean(preds, axis=1) - np.sin(Xtest)[:,0])**2)\n    variance[j] = np.mean(np.var(preds, axis=1))\n    mse[j] = np.mean((preds - repmat(ytest, 1, sub))**2)\n",
"Let's plot the data with the estimators!",
"plt.subplot(3,1,1)\nplt.plot(degrees,bias)\nplt.title('bias')\nplt.subplot(3,1,2)\nplt.plot(degrees,variance)\nplt.title('variance')\nplt.subplot(3,1,3)\nplt.plot(degrees,mse)\nplt.title('MSE')\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fotis007/python_intermediate
|
Python_2_10.ipynb
|
gpl-3.0
|
[
"Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Natural-Language-Processing-mit-Python\" data-toc-modified-id=\"Natural-Language-Processing-mit-Python-1\"><span class=\"toc-item-num\">1 </span>Natural Language Processing mit Python</a></div><div class=\"lev2 toc-item\"><a href=\"#Text-in-Sätze-zerlegen\" data-toc-modified-id=\"Text-in-Sätze-zerlegen-11\"><span class=\"toc-item-num\">1.1 </span>Text in Sätze zerlegen</a></div><div class=\"lev3 toc-item\"><a href=\"#1.-Textblob:\" data-toc-modified-id=\"1.-Textblob:-111\"><span class=\"toc-item-num\">1.1.1 </span>1. Textblob:</a></div><div class=\"lev3 toc-item\"><a href=\"#2.-Spacy\" data-toc-modified-id=\"2.-Spacy-112\"><span class=\"toc-item-num\">1.1.2 </span>2. Spacy</a></div><div class=\"lev2 toc-item\"><a href=\"#Tokenisieren\" data-toc-modified-id=\"Tokenisieren-12\"><span class=\"toc-item-num\">1.2 </span>Tokenisieren</a></div><div class=\"lev3 toc-item\"><a href=\"#Spacy\" data-toc-modified-id=\"Spacy-121\"><span class=\"toc-item-num\">1.2.1 </span>Spacy</a></div><div class=\"lev2 toc-item\"><a href=\"#Part-of-Speech-Tagging\" data-toc-modified-id=\"Part-of-Speech-Tagging-13\"><span class=\"toc-item-num\">1.3 </span>Part-of-Speech-Tagging</a></div><div class=\"lev2 toc-item\"><a href=\"#Named-Entity-Recognition\" data-toc-modified-id=\"Named-Entity-Recognition-14\"><span class=\"toc-item-num\">1.4 </span>Named Entity Recognition</a></div><div class=\"lev2 toc-item\"><a href=\"#Literatur-/-Links\" data-toc-modified-id=\"Literatur-/-Links-15\"><span class=\"toc-item-num\">1.5 </span>Literatur / Links</a></div>\n\n# Natural Language Processing mit Python\n\nThe next two sessions are about NLP with Python. Since the relevant concepts from linguistics also have to be introduced along the way, we will only touch on a few basic terms.\n\nYou will learn how to carry out these tasks with two libraries, and will then work with further libraries in the homework assignments.\n\n1. Textblob\n2. Spacy",
"text = \"\"\"Als der Abend herbeikam und die Freunde in einer weitumherschauenden Laube saßen, trat eine ansehnliche Figur auf die Schwelle, welche unser Freund sogleich für den Barbier von heute früh erkannte. Auf einen tiefen, stummen Bückling des Mannes erwiderte Lenardo: Ihr kommt, wie immer, sehr gelegen und werdet nicht säumen, uns mit Eurem Talent zu erfreuen. — Ich kann Ihnen wohl, fuhr er zu Wilhelmen gewendet fort, Einiges von der Gesellschaft erzählen, deren Band zu sein ich mich rühmen darf. Niemand tritt in unsern Kreis, als wer gewisse Talente aufzuweisen hat, die zum Nutzen oder Vergnügen einer jeden Gesellschaft dienen würden. Dieser Mann ist ein derber Wundarzt, der in bedenklichen Fällen, wo Entschluß und körperliche Kraft gefordert wird, seinem Meister trefflich an der Seite zu stehen bereit ist. Was er als Bartkünstler leistet, davon können Sie ihm selbst ein Zeugniß geben. Hiedurch ist er uns eben so nöthig als willkommen. Da nun aber diese Beschäftigung gewöhnlich eine große und oft lästige Geschwätzigkeit mit sich führt, so hat er sich zu eigner Bildung eine Bedingung gefallen lassen, wie denn Jeder, der unter uns leben will, sich von einer gewissen Seite bedingen muß, wenn ihm nach anderen Seiten hin die größere Freiheit gewährt ist. Dieser also hat nun auf die Sprache Verzicht gethan, insofern etwas Gewöhnliches oder Zufälliges durch sie ausgedrückt wird; daraus aber hat sich ihm ein anderes Redetalent entwickelt, welches absichtlich, klug und erfreulich wirkt, die Gabe des Erzählens nämlich. Sein Leben ist reich an wunderlichen Erfahrungen, die er sonst zu ungelegener Zeit schwätzend zersplitterte, nun aber durch Schweigen genöthigt im stillen Sinne wiederholt und ordnet. Hiermit verbindet sich denn die Einbildungskraft und verleiht dem Geschehenen Leben und Bewegung. 
Mit besonderer Kunst und Geschicklichkeit weiß er wahrhafte Märchen und märchenhafte Geschichten zu erzählen, wodurch er oft zur schicklichen Stunde uns gar sehr ergötzt, wenn ihm die Zunge durch mich gelös't wird; wie ich denn gegenwärtig thue, und ihm zugleich das Lob ertheile, daß er sich in geraumer Zeit, seitdem ich ihn kenne, noch niemals wiederholt hat. Nun hoff' ich, daß er auch diesmal, unserm theuren Gast zu Lieb' und Ehren, sich besonders hervorthun werde.\nUeber das Gesicht des Rothmantels verbreitete sich eine geistreiche Heiterkeit, und er fing ungesäumt folgendermaßen zu sprechen an:\nHochverehrte Herren! da mir bekannt ist, daß Sie vorläufige Reden und Einleitungen nicht besonders lieben, so will ich ohne weiteres versichern, daß ich diesmal vorzüglich gut zu bestehen hoffe. Von mir sind zwar schon gar manche wahrhafte Geschichten zu hoher und allseitiger Zufriedenheit ausgegangen, heute aber darf ich sagen, daß ich eine zu erzählen habe, welche die bisherigen weit übertrifft, und die, wiewohl sie mir schon vor einigen Jahren begegnet ist, mich noch immer in der Erinnerung unruhig macht, ja sogar eine endliche Entwicklung hoffen läßt. Sie möchte schwerlich ihres Gleichen finden.\n\"\"\"",
"Install Textblob (the German variant used here): pip install textblob-de\nText in Sätze zerlegen\n1. Textblob:",
"from textblob_de import TextBlobDE as TextBlob\nfrom textblob_de import PatternParser\n\ndoc = TextBlob(text)\nprint(\"Number of sentences: \", len(doc.sentences))\nprint(\"Length of sentences in characters: \")\nfor s in doc.sentences:\n print(len(s), end=\" - \")\n",
"Note: with doc.sentences we iterate over the sentences in the text. But a sentence is not a string; rather, it is a special kind of object:",
"type(s)\n\n# The same already holds for our document object doc:\n\ntype(doc)",
"The nice thing is that we can - as above - iterate over this object:\nfor s in doc.sentences\nStrictly speaking, though, we are not iterating over the 'doc' object itself, but over the data of a particular view, which we activate via the 'sentences' attribute.\nWe can also activate other views, e.g. words:\nfor w in doc.words",
"doc.words[:20]\n\nw = doc.words[0]\n\ntype(w)",
"Perhaps we should first explain why it is not entirely trivial to split a text into sentences. At first one might think that a few very simple rules would do the job, but as a glance at the next example shows, it is not that simple:",
"text_2 = \"\"\"Johann Wolfgang Goethe wurde, glaube ich, am 28.8.1749 geboren. Es könnte auch am 20.8. sein. Ich muss zugeben: Genau weiß ich das nicht.\"\"\"\ntext_3 = \"\"\"Die heutige Agenda ist kurz. 1. Die Frage nach dem Anfang. 2. Ende. Viel Spaß!\"\"\"\n\ndoc = TextBlob(text_2)\nlist(doc.sentences)\n\ndoc = TextBlob(text_3)\nlist(doc.sentences)\n\nblob = TextBlob(\"Das ist ein schönes Auto.\", parser=PatternParser(pprint=True, lemmata=True))\nblob.parse()\n\ndoc.sentences[0].words",
"2. Spacy",
"import spacy\nnlp = spacy.load('de')\ndoc = nlp(text_2)\nfor s in doc.sents:\n print(s)\n\ndoc = nlp(text_3)\nfor s in doc.sents:\n print(s)",
"In the following we will continue working only with Spacy. In Spacy's favor: it is quite new, supports a whole range of languages, has a modern Python interface with a well-thought-out API, supports comparatively recent aspects of language technology, e.g. word embeddings, and German is among its well-supported languages. Against Spacy: it is developed by a private company; however, this is mitigated by the fact that spacy itself is available on github under a very permissive MIT license.",
"print(spacy.__version__)",
"Tokenisieren\nSpacy",
"import spacy\ndoc = nlp(text_2)\nfor token in doc: \n    print(token.text, end=\"< | >\")\n\ndoc = nlp(text_3)\na = [print(token.text, end=\"< | >\") for token in doc]\n\ndoc = nlp(text_2)\nprint(\"{:<15}{:<15}{:<15}\".format(\"TOKEN\", \"LEMMA\", \"POS-Tag\"))\nfor token in doc:\n    print(\"{:15}{:15}{:15}\".format(token.text, token.lemma_, token.pos_ ))\n\n\ndoc = nlp(\"Diese Auskünfte muss ich dir nicht geben.\")\n[token.lemma_ for token in doc]\n\n\nfrom spacy_iwnlp import spaCyIWNLP\niwnlp = spaCyIWNLP(lemmatizer_path=r'\\mydata\\Dropbox\\uni\\progrs\\spacy-iwnlp\\IWNLP.Lemmatizer_20170501.json')\nnlp.add_pipe(iwnlp)\n\n\nimport spacy\nfrom spacy_iwnlp import spaCyIWNLP\nnlp = spacy.load('de')\niwnlp = spaCyIWNLP(lemmatizer_path=r'\\mydata\\Dropbox\\uni\\progrs\\spacy-iwnlp\\IWNLP.Lemmatizer_20170501.json')\nnlp.add_pipe(iwnlp)\ndoc = nlp('Wir mögen Fußballspiele mit ausgedehnten Verlängerungen.')\nfor token in doc:\n    print('POS: {}\\tIWNLP:{}'.format(token.pos_, token._.iwnlp_lemmas))\n\nfrom spacy import displacy\ntext_4 = \"Am Anfang war das Wort, das aber bald durch blutige Taten ersetzt wurde.\"\ndoc = nlp(text_4)\ndisplacy.render(doc, style='dep', jupyter=True)",
"Part-of-Speech-Tagging\nNamed Entity Recognition",
"text_5 = \"\"\"Früher hat man über Johann Wolfang von Goethe gesprochen, weil er den 'Faust' geschrieben hat, oder über Mozart, \nweil der die Zauberflöte komponiert hat. Heute dagegen redet man über Samsung, weil das neue Samsung Note4 erschienen ist, \noder über den neuen BMW. Gut, über Steve Jobs hat man noch so geredet, als wäre er ein neuer Mozart der Technologie. \nIn den USA weiß man kaum noch wer Shakespeare ist, und in Berlin benimmt man sich schon so, also könnte man mit \n1 Mio. € einen Goethe kaufen.\"\"\"\n#text_5 = text_5.replace(\"\\n\", \"\") #new lines irritate the parser\ndoc = nlp(text_5)\nfor ent in doc.ents:\n print(ent.text, ent.start_char, ent.end_char, ent.label_)",
"Literatur / Links\n\nTextblob\nSpacy\nImproved German lemmatization in Spacy: spacy-iwnlp"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ericmjl/hiv-resistance-prediction
|
old_notebooks/Prototype Neural Network for predicting HA host tropism.ipynb
|
mit
|
[
"Research Problem\nLast year, I read a paper titled, \"Feature Selection Methods for Identifying Genetic Determinants of Host Species in RNA Viruses\". This year, I read another paper titled, \"Predicting host tropism of influenza A virus proteins using random forest\". The essence of these papers was to predict influenza virus host tropism from sequence features. The feature engineering steps were somewhat distinct: the former used amino acid sequences encoded as binary 1/0s, while the latter used physicochemical characteristics of the amino acid sequences instead. However, the core problem was essentially identical - predict a host classification from influenza protein sequence features. Random forest classifiers were used in both papers; they are a powerful method for identifying non-linear mappings from features to class labels. My question here was to see if I could get comparable performance using a simple neural network.\nData\nI downloaded influenza HA sequences from the Influenza Research Database. Sequences dated from 1980 to 2015. Lab strains were excluded, duplicates allowed (this captures the host tropism of certain sequences). All viral subtypes were included.\nBelow, let's take a deep dive into what it takes to construct an artificial neural network!\nThe imports necessary for running this notebook.",
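Before loading the real data, here is a minimal sketch of the two feature-engineering strategies described above, on toy sequences. The alignment and the property values below are illustrative placeholders, not real HA data or a published physicochemical scale: the point is only the shape contrast between one-hot binary columns per alignment position and one numeric value per residue.

```python
import numpy as np
from sklearn.preprocessing import LabelBinarizer

# Toy alignment: three sequences, four positions (illustrative only).
alignment = np.array([list('GKDA'), list('ARDA'), list('VKEV')])

# 1) Binary encoding: one-hot encode each alignment column, then concatenate.
#    Columns with k > 2 distinct residues expand to k indicator columns.
binary_cols = [LabelBinarizer().fit_transform(alignment[:, pos])
               for pos in range(alignment.shape[1])]
X_binary = np.hstack(binary_cols)

# 2) Physicochemical encoding: map each residue to one numeric property
#    (placeholder numbers standing in for e.g. isoelectric points).
prop = {'G': 0.0, 'A': 0.3, 'V': 0.4, 'K': -1.0, 'R': -1.0,
        'D': 0.8, 'E': 0.6}
X_prop = np.vectorize(prop.get)(alignment)

print(X_binary.shape, X_prop.shape)
```

At full scale the one-hot version blows the alignment positions up into thousands of sparse binary columns, while the property version keeps one dense column per position; either way we end up with the all-numeric matrix that the classifiers below require.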
"! echo $PATH\n! echo $CUDA_ROOT\n\nimport pandas as pd\nimport numpy as np\nfrom Bio import SeqIO\nfrom Bio import AlignIO\nfrom Bio.Align import MultipleSeqAlignment\nfrom collections import Counter\nfrom sklearn.preprocessing import LabelBinarizer\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, GradientBoostingClassifier\nfrom sklearn.metrics import mutual_info_score as mi\n\nfrom lasagne import layers\nfrom lasagne.updates import nesterov_momentum\nfrom nolearn.lasagne import NeuralNet\nimport theano",
"Read in the viral sequences.",
"sequences = SeqIO.to_dict(SeqIO.parse('20150902_nnet_ha.fasta', 'fasta'))\n# sequences",
"The sequences are going to be of variable length. To avoid the problem of doing multiple sequence alignments, filter to just the most common length (i.e. 566 amino acids).",
"lengths = Counter()\nfor accession, seqrecord in sequences.items():\n lengths[len(seqrecord.seq)] += 1\n \nlengths.most_common(1)[0][0]",
"There are sequences that are ambiguously labeled. For example, \"Environment\" and \"Avian\" samples. We would like to give a more detailed prediction as to which hosts it likely came from. Therefore, take out the \"Environment\" and \"Avian\" samples.",
"# For convenience, we will only work with amino acid sequencees of length 566.\nfinal_sequences = dict()\nfor accession, seqrecord in sequences.items():\n host = seqrecord.id.split('|')[1]\n if len(seqrecord.seq) == lengths.most_common(1)[0][0]:\n final_sequences[accession] = seqrecord",
"Create a numpy array to store the alignment.",
"alignment = MultipleSeqAlignment(final_sequences.values())\nalignment_array = np.array([list(rec) for rec in alignment])",
"The first piece of meat in the code begins here. In the cell below, we convert the sequence matrix into numerical features - each amino acid is mapped to its isoelectric point (an earlier one-hot binary encoding is left commented out). This is important - AFAIK, almost all machine learning algorithms require numerical inputs.",
"# Create an empty dataframe.\n# df = pd.DataFrame()\n\n# # Create a dictionary of position + label binarizer objects.\n# pos_lb = dict()\n\n# for pos in range(lengths.most_common(1)[0][0]):\n# # Convert position 0 by binarization.\n# lb = LabelBinarizer()\n# # Fit to the alignment at that position.\n# lb.fit(alignment_array[:,pos])\n# # Add the label binarizer to the dictionary.\n# pos_lb[pos] = lb\n# # Create a dataframe.\n# pos = pd.DataFrame(lb.transform(alignment_array[:,pos]))\n\n# # Append the columns to the dataframe.\n# for col in pos.columns:\n# maxcol = len(df.columns)\n# df[maxcol + 1] = pos[col]\nfrom isoelectric_point import isoelectric_points\ndf = pd.DataFrame(alignment_array).replace(isoelectric_points)\n\n# Add in host data\ndf['host'] = [s.id.split('|')[1] for s in final_sequences.values()]\ndf = df.replace({'X':np.nan, 'J':np.nan, 'B':np.nan, 'Z':np.nan})\ndf.dropna(inplace=True)\ndf.to_csv('isoelectric_point_data.csv')\n\n# Normalize data to between 0 and 1.\nfrom sklearn.preprocessing import StandardScaler\n\ndf_std = pd.DataFrame(StandardScaler().fit_transform(df.ix[:,:-1]))\ndf_std['host'] = df['host']\n\nambiguous_hosts = ['Environment', 'Avian', 'Unknown', 'NA', 'Bird', 'Sea_Mammal', 'Aquatic_Bird']\n\nunknowns = df_std[df_std['host'].isin(ambiguous_hosts)]\n\ntrain_test_df = df_std[df_std['host'].isin(ambiguous_hosts) == False]\ntrain_test_df.dropna(inplace=True)",
"With the cell above, we now have a sequence feature matrix, in which the 566 amino acid positions have been encoded as numerical sequence features (the one-hot version would expand them to 6750 binary columns).\nThe next step is to grab out the host species labels, and encode them as 1s and 0s as well.",
"set([i for i in train_test_df['host'].values])\n\n# Grab out the labels.\noutput_lb = LabelBinarizer()\noutput_lb.fit(train_test_df['host'])\nY = output_lb.fit_transform(train_test_df['host'])\nY = Y.astype(np.float32) # Necessary for passing the data into nolearn.\nY.shape\n\nX = train_test_df.ix[:,:-1].values\nX = X.astype(np.float32) # Necessary for passing the data into nolearn.\nX.shape",
"Next up, we do the train/test split.",
"X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=42)",
"For comparison, let's train a random forest classifier, and see what the concordance is between the predicted labels and the actual labels.",
"rf = RandomForestClassifier()\nrf.fit(X_train, Y_train)\npredictions = rf.predict(X_test)\npredicted_labels = output_lb.inverse_transform(predictions)\n# Compute the mutual information between the predicted labels and the actual labels.\nmi(predicted_labels, output_lb.inverse_transform(Y_test))",
"By the majority-consensus rule, and using mutual information as the metric for scoring, things look not so bad! As mentioned above, the RandomForestClassifier is a pretty powerful method for finding non-linear patterns between features and class labels.\nUncomment the cell below if you want to try the scikit-learn's ExtraTreesClassifier.",
"# et = ExtraTreesClassifier()\n# et.fit(X_train, Y_train)\n# predictions = et.predict(X_test)\n# predicted_labels = output_lb.inverse_transform(predictions)\n# mi(predicted_labels, output_lb.inverse_transform(Y_test))",
"As a demonstration of how this model can be used, let's look at the ambiguously labeled sequences, i.e. those from \"Environment\" and \"Avian\", to see whether we can make a prediction as to what host it likely came frome.",
"# unknown_hosts = unknowns.ix[:,:-1].values\n\n# preds = rf.predict(unknown_hosts)\n# output_lb.inverse_transform(preds)",
"Alrighty - we're now ready to try out a neural network! For this try, we will use lasagne and nolearn, two packages which have made things pretty easy for building neural networks. In this segment, I'm going to not show experiments with multiple architectures, activations and the like. The goal is to illustrate how easy the specification of a neural network is.\nThe network architecture that we'll try is as such:\n\n1 input layer, of shape 6750 (i.e. taking in the columns as data).\n1 hidden layer, with 300 units.\n1 output layer, of shape 140 (i.e. each of the class labels).",
"from lasagne import nonlinearities as nl\nnet1 = NeuralNet(layers=[\n ('input', layers.InputLayer),\n ('hidden1', layers.DenseLayer),\n #('dropout', layers.DropoutLayer),\n #('hidden2', layers.DenseLayer),\n #('dropout2', layers.DropoutLayer),\n ('output', layers.DenseLayer),\n ],\n # Layer parameters:\n input_shape=(None, X.shape[1]),\n hidden1_num_units=300,\n #dropout_p=0.3,\n #hidden2_num_units=500,\n #dropout2_p=0.3,\n output_nonlinearity=nl.softmax,\n output_num_units=Y.shape[1],\n #allow_input_downcast=True,\n\n # Optimization Method:\n update=nesterov_momentum,\n update_learning_rate=0.01,\n update_momentum=0.9,\n \n regression=True,\n max_epochs=100,\n verbose=1\n )",
"Training a simple neural network on my MacBook Air takes quite a bit of time :). But the function call for fitting it is a simple nnet.fit(X, Y).",
"net1.fit(X_train, Y_train)",
"Let's grab out the predictions!",
"preds = net1.predict(X_test)\npreds.shape",
"We're going to see how good the classifier did by examining the class labels. The way to visualize this is to have, say, the class labels on the X-axis, and the probability of prediction on the Y-axis. We can do this sample by sample. Here's a simple example with no frills in the matplotlib interface.",
"import matplotlib.pyplot as plt\n\n%matplotlib inline\n\nplt.bar(np.arange(len(preds[0])), preds[0])",
"Alrighty, let's add some frills - the class labels, the probability of each class label, and the original class label.",
"### NOTE: Change the value of i to anything above!\ni = 111\nplt.figure(figsize=(20,5))\nplt.bar(np.arange(len(output_lb.classes_)), preds[i])\nplt.xticks(np.arange(len(output_lb.classes_)) + 0.5, output_lb.classes_, rotation='vertical')\nplt.title('Original Label: ' + output_lb.inverse_transform(Y_test)[i])\nplt.show()\n# print(output_lb.inverse_transform(Y_test)[i])",
"Let's do a majority-consensus rule applied to the labels, and then compute the mutual information score again.",
"preds_labels = []\nfor i in range(preds.shape[0]):\n maxval = max(preds[i])\n pos = list(preds[i]).index(maxval)\n \n preds_labels.append(output_lb.classes_[pos])\n\nmi(preds_labels, output_lb.inverse_transform(Y_test))",
"With a score of 0.73, that's not bad either! It certainly didn't outperform the RandomForestClassifier, but the default parameters on the RFC were probably pretty good to begin with. Notice how little tweaking on the neural network we had to do as well.\nFor good measure, these were the class labels. Notice how successful influenza has been in replicating across the many different species!",
"output_lb.classes_",
"The biology behind this dataset.\nA bit more about the biology of influenza.\nIf you made it this far, thank you for hanging on! How does this mini project relate to the biology of flu? \nAs the flu evolves and moves between viral hosts, it gradually adapts to that host. This allows it to successfully establish an infection in the host population. \nWe can observe the viral host as we sample viruses from it. Sometimes, we don't catch it in its adapted state, but it's un-adapted state, as if it had freshly joined in from its other population. That is likely why some of the class labels are mis-identified.\nAlso, there are environmentally sampled isolates. They obviously aren't simply replicating in the environment (i.e. bodies of water), but in some host, and were shed into the water. For these guys, the host labels won't necessarily match up, as there'll be a stronger signal with particular hosts - whether it be from ducks, pigs or even humans.\nNext steps?\nThere's a few obvious things that can be done. \n\nLatin hypercube sampling for Random Forest parameters.\nExperimenting with adding more layers, tweaking the layer types etc.\n\nWhat else might be done? Ping me at ericmajinglong@gmail.com with the subject \"neural nets and HA\". :)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
whitead/numerical_stats
|
unit_4/hw_2017/problem_set_2.ipynb
|
gpl-3.0
|
[
"Answer the following questions in Python. Do all calculations in Python. Your answers should have a pattern similar to this:\npython\nif 3 < (5 * 2):\n print('3 is less than 5 times 2')\nelse:\n print('3 is not less than 5 times 2')\nWhich is greater: $10^5$ or $3^9$?",
"if 10**5 > 3**9:\n print('10^5 is greater')\nelse:\n print('3^9 is greater')",
"Demonstrate the $0.25 \\neq 0.35$",
"if 0.25 != 0.35:\n print('0.25 != 0.35')\nelse:\n print('hmmm')",
"Using the // operator, show that 3 is not divisible by 2.",
"if 3 // 2 != 3 / 2:\n print('3 is not divisble by 2')\nelse:\n print('it is divisible by 2')",
"Using a set of if if statements, print whether a variable is odd or even and negative or positive. Use the variable name x and demonstrate your code works using x = -3, but ensure it can handle any integer (e.g., 3, 0, -100). Make sure your print statements use the value of x, not the name of the variable. For example, you should print out -3 is negative, not x is negative.",
"x = -3\n\nif x // 2 == x / 2:\n print('{} is even'.format(x))\nelse:\n print('{} is odd'.format(x))\nif x < 0:\n print('{} is negative'.format(x))\nelif x > 0:\n print('{} is positive'.format(x))\nelse:\n print('{} is 0'.format(x))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sheqi/TVpgGLM
|
test/vae-tf-bootcamp/VAE_mshvartsman.ipynb
|
mit
|
[
"Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#TF-intro\" data-toc-modified-id=\"TF-intro-1\"><span class=\"toc-item-num\">1 </span>TF intro</a></div><div class=\"lev1 toc-item\"><a href=\"#MLP-on-MNIST\" data-toc-modified-id=\"MLP-on-MNIST-2\"><span class=\"toc-item-num\">2 </span>MLP on MNIST</a></div><div class=\"lev1 toc-item\"><a href=\"#VAEs!\" data-toc-modified-id=\"VAEs!-3\"><span class=\"toc-item-num\">3 </span>VAEs!</a></div>\n\n# TF intro\n\nThis guide assumes you can read through basic Python code or use your google skills to catch up on that as needed. We begin by understanding how tensorflow works. The key point to remember is that all the tensorflow computation happens in a graph, and all that you get to do in python is to manipulate and run that graph. This creates a programming paradigm that looks a lot like python but is actually quite different. We begin with a simple logistic regression example.",
"import tensorflow as tf\nimport numpy as np\n\nn_obs = 1000\nn_features = 5\n\nx_ph = tf.placeholder(shape=(n_obs, n_features), name=\"x_ph\", dtype=tf.float32)\nbeta_init = np.random.normal(size=(n_features, 1))\nbeta_hat = tf.Variable(beta_init, dtype=tf.float32, name=\"beta_hat\")\ny_hat = tf.nn.sigmoid(tf.matmul(x_ph, beta_hat), name=\"yhat\")",
"We can visualize thie graph, and then explain what the different parts are: \nThis code is a modification from the DeepDream notebook. There is more visualization / exploration that can be done using tensorboard, which is the tool this uses.",
"# TensorFlow Graph visualizer code\n# https://stackoverflow.com/questions/41388673/visualizing-a-tensorflow-graph-in-jupyter-doesnt-work\nimport numpy as np\nfrom IPython.display import clear_output, Image, display, HTML\n\ndef strip_consts(graph_def, max_const_size=32):\n \"\"\"Strip large constant values from graph_def.\"\"\"\n strip_def = tf.GraphDef()\n for n0 in graph_def.node:\n n = strip_def.node.add() \n n.MergeFrom(n0)\n if n.op == 'Const':\n tensor = n.attr['value'].tensor\n size = len(tensor.tensor_content)\n if size > max_const_size:\n tensor.tensor_content = bytes(\"<stripped %d bytes>\"%size, 'utf-8')\n return strip_def\n\ndef show_graph(graph_def, max_const_size=32):\n \"\"\"Visualize TensorFlow graph.\"\"\"\n if hasattr(graph_def, 'as_graph_def'):\n graph_def = graph_def.as_graph_def()\n strip_def = strip_consts(graph_def, max_const_size=max_const_size)\n code = \"\"\"\n <script src=\"//cdnjs.cloudflare.com/ajax/libs/polymer/0.3.3/platform.js\"></script>\n <script>\n function load() {{\n document.getElementById(\"{id}\").pbtxt = {data};\n }}\n </script>\n <link rel=\"import\" href=\"https://tensorboard.appspot.com/tf-graph-basic.build.html\" onload=load()>\n <div style=\"height:800px\">\n <tf-graph-basic id=\"{id}\"></tf-graph-basic>\n </div>\n \"\"\".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))\n\n iframe = \"\"\"\n <iframe seamless style=\"width:1200px;height:800px;border:0\" srcdoc=\"{}\"></iframe>\n \"\"\".format(code.replace('\"', '"'))\n display(HTML(iframe))\n\nshow_graph(tf.get_default_graph().as_graph_def())",
"What we do in tensorflow is construct graphs like this, and then evaluate nodes. Each graph node is associated with some code. When we evaluate a node like y_hat, tensorflow figures out what nodes it depends on, evaluates all of those nodes, and then evaluates y_hat. In this graph, there are three types of nodes (Tensors), a Variable, a placeholder, and a vanilla Tensor that is none of the above. Tensors have no state: they are computable from the rest of the graph. Variables have state (they're the only thing we can optimize / save / load). Placeholders are not computable and don't have state: we must feed values into them. We can also feed values into other tensors, but TF will explicitly complain if we fail to feed a value into a placeholder. \nOur python objects like beta_hat are references to the TF graph nodes, not the nodes themselves (i.e. copying the python object does not dupliate the graph node). \nTo evaluate a graph we need to associate it with a session, and then either a tensor's eval method or the session's run method. The difference is that we can run multiple tensors together, which might be useful if they share dependencies. \nNow we define the loss, generate synth data, and optimize:",
"from scipy.special import expit as logistic\n\ntrue_beta = np.random.normal(size=(n_features, 1))\nx = np.random.normal(size=(n_obs, n_features))\ny = np.random.binomial(n=1, p=logistic(x @ true_beta))\n\ny_ph = tf.placeholder(shape=(n_obs, 1), name=\"y_ph\", dtype=tf.float32)\n\nlogistic_loss = -tf.reduce_sum(y_ph * tf.log(1e-10 + y_hat)+ (1-y_ph) * tf.log(1e-10 + 1 - y_hat))\n\n# if we needed the gradients for some reason, e.g. to pass to an external optimizier or to plot\n# grads = tf.gradients(logistic_loss, beta_hat)\n\n# create the optimizer\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=0.0005)\n\n# this is the long way of doing it\n# grads_and_vars = optimizer.compute_gradients(logistic_loss, beta_hat)\n# optionally, modify gradients here (e.g. threshold)\n# minimize_op = optimizer.apply_gradients(grads_and_vars)\n\n# the short way\nminimize_op = optimizer.minimize(logistic_loss) # by default, minimize w.r.t. all variables\n\nsess = tf.Session()\nsess.run(tf.global_variables_initializer())\nfor i in range(2500):\n if i % 100 == 0:\n l = logistic_loss.eval(session=sess, feed_dict={x_ph:x, y_ph:y})\n print(\"Iter %i, loss %f\" % (i, l))\n sess.run(minimize_op, feed_dict={x_ph:x, y_ph:y})\n\naccuracy = np.average(tf.equal(tf.round(y_hat), y_ph).eval(session=sess, feed_dict={x_ph:x, y_ph:y}))\nprint(accuracy)\n\n# another optimizer\noptimizer = tf.train.AdamOptimizer(learning_rate=0.01)\nminimize_op = optimizer.minimize(logistic_loss) # by default, minimize w.r.t. all variables\n\nsess.run(tf.global_variables_initializer())\nfor i in range(1000):\n if i % 100 == 0:\n l = logistic_loss.eval(session=sess, feed_dict={x_ph:x, y_ph:y})\n print(\"Iter %i, loss %f\" % (i, l))\n sess.run(minimize_op, feed_dict={x_ph:x, y_ph:y})\n\naccuracy = np.average(tf.equal(tf.round(y_hat), y_ph).eval(session=sess, feed_dict={x_ph:x, y_ph:y}))\nprint(accuracy)",
"One more tutorial point is on how to print things. Since a tensor only has a value when the graph is executed, inspecting things is trickier than usual. The Print op returns the same op as its input, but prints as a side-effect. This means we need to inject the op into the graph. Unfortunately, the print happens on the C++ end, so we will see the logging messages in the jupyter server log (or in a shell).",
"logistic_loss_with_print = tf.Print(input_=logistic_loss, data=[x, logistic_loss])\n\n_ = logistic_loss_with_print.eval(session=sess, feed_dict={x_ph:x, y_ph:y})",
"MLP on MNIST\nNow we build some neural network building blocks we will reuse for VAEs.",
"from tensorflow.examples.tutorials.mnist import input_data\n\nglobal_dtype = tf.float32\nmnist = input_data.read_data_sets('MNIST_data', one_hot=True)\ninput_size = mnist.train.images.shape[1]",
"Define our neural network building blocks.",
"def _dense_mlp_layer(x, input_size, out_size, nonlinearity=tf.nn.softmax, name_prefix=\"\"):\n w_init = tf.truncated_normal(shape=[input_size, out_size], stddev=0.001)\n b_init = tf.ones(shape=[out_size]) * 0.1\n W = tf.Variable(w_init, name=\"%s_W\" % name_prefix)\n b = tf.Variable(b_init, name=\"%s_b\" % name_prefix)\n out = nonlinearity(tf.matmul(x, W) + b)\n return out, [W, b]\n\ndef _mlp(x, n_layers, units_per_layer, input_size, out_size, nonlinearity=tf.tanh):\n train_vars = []\n\n x, v = _dense_mlp_layer(x, input_size, units_per_layer, nonlinearity, name_prefix=\"into_hidden\")\n train_vars.extend(v)\n # exploit the fact that repeatedly calling the same TF function creates multiple ops. \n # no need to hang onto the intermediate layer handles (though we can get them back if we need them)\n for l in range(n_layers-1):\n x, v = _dense_mlp_layer(x, units_per_layer, units_per_layer, nonlinearity, name_prefix=\"hidden\")\n train_vars.extend(v)\n\n x, v = _dense_mlp_layer(x, units_per_layer, out_size, nonlinearity, name_prefix=\"readout\")\n train_vars.extend(v)\n return x, train_vars",
"Now we construct the graph. The graph and scope boilerplate makes our life easier as far as visualization and debugging is concerned. We can visualize/run only this graph and not the graph for logistic regression (above).",
"mlp_graph = tf.Graph()\n\nwith mlp_graph.as_default():\n with tf.name_scope(\"Feedforward_Net\"):\n x = tf.placeholder(shape=[None, input_size], dtype=global_dtype, name='x')\n y = tf.placeholder(shape=[None, 10], dtype=global_dtype, name='y')\n y_hat, mlp_test_vars = _mlp(x, n_layers=2, units_per_layer=30, input_size=784, out_size=10)\n\n with tf.name_scope(\"Opt_and_loss\"):\n cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_hat))\n # learning rates are much smaller for optimizers like Adam and RMSProp\n train_step_mlptest = tf.train.AdamOptimizer(0.01).minimize(cross_entropy)\n\n with tf.name_scope(\"Support_stuff\"):\n init = tf.global_variables_initializer()\n correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_hat,1))\n mlp_acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))",
"We visualize our graph:",
"show_graph(mlp_graph.as_graph_def())",
"Next we create a session, initialize our variables, and train the network:",
"sess = tf.Session(graph=mlp_graph)\nsess.run(init)\ntrain_steps = 2500\n\nacc = np.zeros(train_steps)\n\n# create this op outside of the loop so we don't create it 5000 times\nfor i in range(train_steps):\n batch_xs, batch_ys = mnist.train.next_batch(100)\n acc[i] = mlp_acc.eval(session = sess, feed_dict = {x: batch_xs, y: batch_ys})\n sess.run(train_step_mlptest, feed_dict={x: batch_xs, y: batch_ys})\n\ntest_acc = mlp_acc.eval(session=sess, feed_dict={x: mnist.test.images, y: mnist.test.labels})\nprint(\"Test accuracy: %f\" % test_acc)\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.plot(acc)",
"VAEs!\n$$\n\\DeclareMathOperator{\\Tr}{Tr}\n\\newcommand{\\trp}{{^\\top}} % transpose\n\\newcommand{\\trace}{\\text{Trace}} % trace\n\\newcommand{\\inv}{^{-1}}\n\\newcommand{\\mb}{\\mathbf{b}}\n\\newcommand{\\M}{\\mathbf{M}}\n\\newcommand{\\G}{\\mathbf{G}}\n\\newcommand{\\A}{\\mathbf{A}}\n\\newcommand{\\R}{\\mathbf{R}}\n\\renewcommand{\\S}{\\mathbf{S}}\n\\newcommand{\\B}{\\mathbf{B}}\n\\newcommand{\\Q}{\\mathbf{Q}}\n\\newcommand{\\mH}{\\mathbf{H}}\n\\newcommand{\\U}{\\mathbf{U}}\n\\newcommand{\\mL}{\\mathbf{L}}\n\\newcommand{\\diag}{\\mathrm{diag}}\n\\newcommand{\\etr}{\\mathrm{etr}}\n\\renewcommand{\\H}{\\mathbf{H}}\n\\newcommand{\\vecop}{\\mathrm{vec}}\n\\newcommand{\\I}{\\mathbf{I}}\n\\newcommand{\\X}{\\mathbf{X}{ij}}\n\\newcommand{\\Y}{\\mathbf{Y}{jk}}\n\\newcommand{\\Z}{\\mathbf{Z}_{ik}}\n$$\nIn our generative model, we would like to estimate the density of some complicated probability density $\\log P_{\\theta}(X)$. This is slightly odd notation but seems standard in these papers: it says that $X$ is a random variable but $\\theta$ are parameters. To do this, we will write it as follows:\n$$\n\\log P_{\\theta}(X) = \\log P_{\\theta}(X,Z) - \\log P_{\\theta}(Z\\mid X) \n$$\nThis is just the definition of marginal probability. Next, we add/subtract $\\log Q_{\\phi}(Z\\mid X)$ which sums to 0: \n$$\n\\log P_{\\theta}(X) = \\log P_{\\theta}(X,Z) - \\log P_{\\theta}(Z\\mid X) - \\log Q_{\\phi}(Z\\mid X) + \\log Q_{\\phi}(Z\\mid X)\n$$\nWe take expectation of both sides. Note that this expectation has to be w.r.t. the conditional distribution $Q(Z\\mid X)$ for the rest of this to work properly: \n$$\n\\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(X) = \\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(X,Z) - \\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(Z\\mid X) - \\mathbb{E}{Q(Z\\mid X)}\\log Q{\\phi}(Z\\mid X) + \\mathbb{E}{Q(Z\\mid X)}\\log Q{\\phi}(Z\\mid X)\n$$\nSince the LHS is independent of Z, that expectation just goes away. 
We rearrange the terms and realize we ended up with the evidence lower bound (ELBO) and a KL divergence: \n$$\n\\begin{align}\n\\log P_{\\theta}(X) =& \\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(X,Z) - \\mathbb{E}{Q(Z\\mid X)}\\log Q{\\phi}(Z\\mid X) + \\mathbb{E}{Q(Z\\mid X)}\\log Q{\\phi}(Z\\mid X)- \\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(Z\\mid X) \\\n =& \\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(X,Z) - \\mathbb{E}{Q(Z\\mid X)}\\log Q{\\phi}(Z\\mid X) + \\int_Z Q_{\\phi}(Z\\mid X)\\log Q_{\\phi}(Z\\mid X)- \\int_Z Q_{\\phi}(Z\\mid X)\\log P_{\\theta}(Z\\mid X) \\\n=& \\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(X,Z) +\\mathbb{E}{Q(Z\\mid X)}\\log Q{\\phi}(Z\\mid X) - \\int_Z Q_{\\phi}(Z\\mid X)\\log \\frac{Q_{\\phi}(Z\\mid X)}{P_{\\theta}(Z\\mid X)} \\\n=& \\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(X,Z) + \\mathbb{E}{Q(Z\\mid X)}\\log Q{\\phi}(Z\\mid X) -\\mathcal{D}{KL}(Q{\\theta}(Z\\mid X)|| P_{\\phi}(Z\\mid X)) \n\\end{align}\n$$\nThe ELBO is a lower bound because KL divergence is greater than equal to 0. So if we want to maximize the LHS, we can choose to either minimize the KL divergence, or maximize the ELBO. We would rather do the former, because the KL divergence contains the posterior $P(Z\\mid X)$ and if we knew how to do that, we wouldn't be going through this hassle. The nice thing is, since this holds for any $Q$ and any $Z$ we can define both distributions to be as nice as we like. So we're going to say that the likelihood $P_{\\theta}(X,Z)$, prior $P_{\\theta}(Z)$ and approximate posterior $Q_{\\theta}(X,Z)$ are all gaussian. 
We pick the easiest possible marginal distribution over $Q(Z)$, an identity-covariance gaussian: \n$$\n\\begin{align}\nP_{\\theta}(X\\mid Z) :=& \\mathcal{N}(a(Z,\\theta), b(Z,\\theta)b(Z,\\theta)\\trp) \\\nQ_{\\phi}(Z\\mid X):=&\\mathcal{N}(f(X,\\phi),g(X,\\phi)g(X,\\phi)^{\\intercal})\\\nP_{\\theta}(Z) :=& \\mathcal{N}(0,\\mathbf{I})\n\\end{align}\n$$\nThe distributions $P_{\\theta}(X\\mid Z)$ and $Q_{\\phi}(Z\\mid X)$ are parameterized by mean and covariance-square-root functions $a,b,f,g$ which we leave unspecified for now. In practice for VAEs people use the equivalent of a mean-field assumption, which means that the covariance functions will just return SDs/variances, but I'd like to write the reparameterization trick in general form. We can additionally rewrite the expression to get a second KL divergence: \n$$\n\\begin{align}\n\\log P_{\\theta}(X) =& \\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(X\\mid Z) + \\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(Z) + \\mathbb{E}{Q(Z\\mid X)}\\log Q{\\phi}(Z\\mid X)-\\mathcal{D}{KL}(Q{\\theta}(Z\\mid X)|| P_{\\phi}(Z\\mid X)) \\\n=&\\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(X\\mid Z) + \\mathcal{D}{KL}( Q{\\phi}(Z\\mid X)||P_{\\theta}(Z)) - \\mathcal{D}{KL}(Q{\\theta}(Z\\mid X)|| P_{\\phi}(Z\\mid X))\n\\end{align}\n$$\nConveniently, the KL divergence between two gaussians (the prior and approximate posterior) is analytic. 
What remains is the expectation $\\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(X\\mid Z)$, which we can compute from its empirical, sample-based mean: \n$$\n\\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(X\\mid Z) = \\int Q_{\\phi}(Z\\mid X)\\log P_{\\theta}(X|Z)d Q \\approx \\frac{1}{N} \\sum \\log P_{\\theta}(X|Z) Z_i, \\\\\nZ_i\\sim\\mathcal{N}(f(X,\\phi),g(X,\\phi)g(X,\\phi)\\trp)\n$$\nThe naive gradient estimator here has very high variance according to the VAE paper, though it is used in Blei, Jordan and Paisley 2012 (ICML):\n$$\n\\nabla_{\\phi}\\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(X\\mid Z) \\approx \\frac{1}{N} \\sum \\nabla_{\\phi}\\log P_{\\theta}(X|Z) Z_i\n$$\nWhat the VAE paper does instead is apply the reparameterization trick: \n\\begin{align}\n\\epsilon&\\sim\\mathcal{N}(0, \\I)\\\nZ &= f(X,\\phi) + g(X,\\phi)\\epsilon\\\n\\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(X\\mid Z) &\\approx \\frac{1}{N} \\sum \\log P_{\\theta}(X|Z)(f(X,\\phi) + g(X,\\phi)\\epsilon)\\\n\\nabla_{\\phi}\\mathbb{E}{Q(Z\\mid X)}\\log P{\\theta}(X\\mid Z) &\\approx \\frac{1}{N} \\sum \\nabla_{\\phi}\\log P_{\\theta}(X|Z)(f(X,\\phi) + g(X,\\phi)\\epsilon)\n\\end{align}\nNow let's see if we can implement it using tensorflow. We begin by sanity-checking a basic MLP and then go to VAEs. First, import some things we'll need and download MNIST:",
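Before the TensorFlow implementation, the two key ingredients can be sanity-checked in plain numpy: the analytic Gaussian KL term used in the loss and a Monte Carlo estimate of it built with the reparameterization trick. This is a standalone sketch, not part of the original notebook, using the mean-field (diagonal-covariance) case.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([0.5, -1.0])
log_sigma2 = np.array([-0.2, 0.3])
sigma = np.sqrt(np.exp(log_sigma2))

# Analytic KL( N(mu, diag(sigma^2)) || N(0, I) ), the formula the VAE loss uses.
kl_analytic = -0.5 * np.sum(1 + log_sigma2 - mu**2 - np.exp(log_sigma2))

# Monte Carlo estimate via the reparameterization trick: z = mu + sigma * eps.
eps = rng.standard_normal(size=(200000, 2))
z = mu + sigma * eps
log_q = -0.5 * np.sum(np.log(2 * np.pi) + log_sigma2 + eps**2, axis=1)
log_p = -0.5 * np.sum(np.log(2 * np.pi) + z**2, axis=1)
kl_mc = np.mean(log_q - log_p)

print(kl_analytic, kl_mc)  # the two estimates should agree closely
```

The analytic expression is exactly the (negated) `kld` term computed in the VAE graph further down.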
"from tensorflow.examples.tutorials.mnist import input_data\n\nencoder_depth = 2\ndecoder_depth = 2\nencoder_units = 500\ndecoder_units = 500\nlatent_size = 10\nglobal_dtype = tf.float32\nminibatch_size = 100\ninput_size = mnist.train.images.shape[1]\ntrain_steps = mnist.train.num_examples // minibatch_size\nencoder_nonlinearity = tf.nn.sigmoid\ndecoder_nonlinearity = tf.nn.sigmoid\nn_epochs = 10",
"Construct the graph:",
"vae_graph = tf.Graph()\n\nwith vae_graph.as_default():\n\n with tf.name_scope(\"Encoder_Q\"):\n\n x = tf.placeholder(shape=[None, input_size], dtype=global_dtype, name='x')\n q_network, q_mu_vars = _mlp(x, n_layers=encoder_depth, units_per_layer=encoder_units, input_size=input_size, out_size=encoder_units, nonlinearity=encoder_nonlinearity)\n w_mu = tf.Variable(tf.truncated_normal(shape=[encoder_units, latent_size], stddev=0.1), name=\"w_mu\")\n w_logsig = tf.Variable(tf.truncated_normal(shape=[encoder_units, latent_size], stddev=0.1), name=\"w_logsig\")\n b_mu = tf.Variable(tf.truncated_normal(shape=[latent_size], stddev=0.1), name=\"b_mu\")\n b_logsig = tf.Variable(tf.truncated_normal(shape=[latent_size], stddev=0.1), name=\"b_logsig\")\n q_mu = tf.matmul(q_network, w_mu) + b_mu\n q_logsigma = tf.matmul(q_network, w_logsig) + b_logsig\n epsilon = tf.random_normal([minibatch_size, latent_size])\n z = q_mu + tf.sqrt(tf.exp(q_logsigma)) * epsilon\n\n\n with tf.name_scope(\"Decoder_P\"):\n\n p_mu, p_mu_vars = _mlp(z, n_layers=decoder_depth, units_per_layer=decoder_units, input_size=latent_size, out_size=input_size, nonlinearity=decoder_nonlinearity)\n\n with tf.name_scope(\"Opt_and_loss\"):\n kld = 0.5 * tf.reduce_sum(1 + q_logsigma - tf.square(q_mu) - tf.exp(q_logsigma), 1)\n ll = tf.reduce_sum(x * tf.log(1e-10 + p_mu)+ (1-x) * tf.log(1e-10 + 1 - p_mu), 1)\n elbo = tf.reduce_mean(ll + kld)\n minimize_op = tf.train.AdamOptimizer(0.001).minimize(-elbo)\n\n with tf.name_scope(\"Support_stuff\"):\n init = tf.global_variables_initializer()\n \n\nshow_graph(vae_graph.as_graph_def())",
"Now we run and visualize:",
"sess = tf.Session(graph=vae_graph)\nsess.run(init)\n\nelbo_log = np.zeros(n_epochs * train_steps)\n\nfor i in range(n_epochs):\n for j in range(train_steps):\n batch_xs, batch_ys = mnist.train.next_batch(minibatch_size)\n sess.run(minimize_op, feed_dict={x: batch_xs})\n elbo_log[i*train_steps + j] = elbo.eval(session=sess, feed_dict={x: batch_xs})\n if j % 10 == 0:\n print(\"Epoch %i, step %i: average elbo=%f\" % (i, j, elbo_log[i*train_steps + j]))\n\nplt.plot(elbo_log)\n\nx_sample = mnist.test.next_batch(minibatch_size)[0]\nx_reconstruct = p_mu.eval(session=sess, feed_dict={x:x_sample})[0]\nplt.subplot(1, 2, 1)\nplt.imshow(x_sample[0].reshape(28, 28), vmin=0, vmax=1, cmap=\"gray\")\nplt.title(\"Test input\")\nplt.colorbar()\nplt.subplot(1, 2, 2)\nplt.imshow(x_reconstruct.reshape(28, 28), vmin=0, vmax=1, cmap=\"gray\")\nplt.title(\"Reconstruction\")\nplt.colorbar()\nplt.tight_layout()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
uber/pyro
|
tutorial/source/intro_part_ii.ipynb
|
apache-2.0
|
[
"An Introduction to Inference in Pyro\nMuch of modern machine learning can be cast as approximate inference and expressed succinctly in a language like Pyro. To motivate the rest of this tutorial, let's build a generative model for a simple physical problem so that we can use Pyro's inference machinery to solve it. However, we will first import the required modules for this tutorial:",
"import matplotlib.pyplot as plt\nimport numpy as np\nimport torch\n\nimport pyro\nimport pyro.infer\nimport pyro.optim\nimport pyro.distributions as dist\n\npyro.set_rng_seed(101)",
"A Simple Example\nSuppose we are trying to figure out how much something weighs, but the scale we're using is unreliable and gives slightly different answers every time we weigh the same object. We could try to compensate for this variability by integrating the noisy measurement information with a guess based on some prior knowledge about the object, like its density or material properties. The following model encodes this process:\n$${\\sf weight} \\, | \\, {\\sf guess} \\sim \\cal {\\sf Normal}({\\sf guess}, 1) $$\n$${\\sf measurement} \\, | \\, {\\sf guess}, {\\sf weight} \\sim {\\sf Normal}({\\sf weight}, 0.75)$$\nNote that this is a model not only for our belief over weight, but also for the result of taking a measurement of it. The model corresponds to the following stochastic function:",
"def scale(guess):\n weight = pyro.sample(\"weight\", dist.Normal(guess, 1.0))\n return pyro.sample(\"measurement\", dist.Normal(weight, 0.75))",
"Conditioning\nThe real utility of probabilistic programming is in the ability to condition generative models on observed data and infer the latent factors that might have produced that data. In Pyro, we separate the expression of conditioning from its evaluation via inference, making it possible to write a model once and condition it on many different observations. Pyro supports constraining a model's internal sample statements to be equal to a given set of observations.\nConsider scale once again. Suppose we want to sample from the distribution of weight given input guess = 8.5, but now we have observed that measurement == 9.5. That is, we wish to infer the distribution:\n$$({\\sf weight} \\, | \\, {\\sf guess}, {\\sf measurement} = 9.5) \\sim \\, ? $$\nPyro provides the function pyro.condition to allow us to constrain the values of sample statements. pyro.condition is a higher-order function that takes a model and a dictionary of observations and returns a new model that has the same input and output signatures but always uses the given values at observed sample statements:",
"conditioned_scale = pyro.condition(scale, data={\"measurement\": torch.tensor(9.5)})",
"Because it behaves just like an ordinary Python function, conditioning can be deferred or parametrized with Python's lambda or def:",
"def deferred_conditioned_scale(measurement, guess):\n return pyro.condition(scale, data={\"measurement\": measurement})(guess)",
"In some cases it might be more convenient to pass observations directly to individual pyro.sample statements instead of using pyro.condition. The optional obs keyword argument is reserved by pyro.sample for that purpose:",
"def scale_obs(guess): # equivalent to conditioned_scale above\n weight = pyro.sample(\"weight\", dist.Normal(guess, 1.))\n # here we condition on measurement == 9.5\n return pyro.sample(\"measurement\", dist.Normal(weight, 0.75), obs=torch.tensor(9.5))",
"Finally, in addition to pyro.condition for incorporating observations, Pyro also contains pyro.do, an implementation of Pearl's do-operator used for causal inference with an identical interface to pyro.condition. condition and do can be mixed and composed freely, making Pyro a powerful tool for model-based causal inference.\nFlexible Approximate Inference With Guide Functions\nLet's return to conditioned_scale. Now that we have conditioned on an observation of measurement, we can use Pyro's approximate inference algorithms to estimate the distribution over weight given guess and measurement == data. \nInference algorithms in Pyro, such as pyro.infer.SVI, allow us to use arbitrary stochastic functions, which we will call guide functions or guides, as approximate posterior distributions. Guide functions must satisfy these two criteria to be valid approximations for a particular model: \n1. all unobserved (i.e., not conditioned) sample statements that appear in the model appear in the guide.\n2. the guide has the same input signature as the model (i.e., takes the same arguments)\nGuide functions can serve as programmable, data-dependent proposal distributions for importance sampling, rejection sampling, sequential Monte Carlo, MCMC, and independent Metropolis-Hastings, and as variational distributions or inference networks for stochastic variational inference. Currently, importance sampling, MCMC, and stochastic variational inference are implemented in Pyro, and we plan to add other algorithms in the future.\nAlthough the precise meaning of the guide is different across different inference algorithms, the guide function should generally be chosen so that, in principle, it is flexible enough to closely approximate the distribution over all unobserved sample statements in the model. \nIn the case of scale, it turns out that the true posterior distribution over weight given guess and measurement is actually ${\\sf Normal}(9.14, 0.6)$. 
As the model is quite simple, we are able to determine our posterior distribution of interest analytically (for derivation, see for example Section 3.4 of these notes).",
"def perfect_guide(guess):\n loc = (0.75**2 * guess + 9.5) / (1 + 0.75**2) # 9.14\n scale = np.sqrt(0.75**2 / (1 + 0.75**2)) # 0.6\n return pyro.sample(\"weight\", dist.Normal(loc, scale))",
"Parametrized Stochastic Functions and Variational Inference\nAlthough we could write out the exact posterior distribution for scale, in general it is intractable to specify a guide that is a good approximation to the posterior distribution of an arbitrary conditioned stochastic function. In fact, stochastic functions for which we can determine the true posterior exactly are the exception rather than the rule. For example, even a version of our scale example with a nonlinear function in the middle may be intractable:",
"def intractable_scale(guess):\n weight = pyro.sample(\"weight\", dist.Normal(guess, 1.0))\n return pyro.sample(\"measurement\", dist.Normal(some_nonlinear_function(weight), 0.75))",
"What we can do instead is use the top-level function pyro.param to specify a family of guides indexed by named parameters, and search for the member of that family that is the best approximation according to some loss function. This approach to approximate posterior inference is called variational inference.\npyro.param is a frontend for Pyro's key-value parameter store, which is described in more detail in the documentation. Like pyro.sample, pyro.param is always called with a name as its first argument. The first time pyro.param is called with a particular name, it stores its argument in the parameter store and then returns that value. After that, when it is called with that name, it returns the value from the parameter store regardless of any other arguments. It is similar to simple_param_store.setdefault here, but with some additional tracking and management functionality.\npython\nsimple_param_store = {}\na = simple_param_store.setdefault(\"a\", torch.randn(1))\nFor example, we can parametrize a and b in scale_posterior_guide instead of specifying them by hand:",
"def scale_parametrized_guide(guess):\n a = pyro.param(\"a\", torch.tensor(guess))\n b = pyro.param(\"b\", torch.tensor(1.))\n return pyro.sample(\"weight\", dist.Normal(a, torch.abs(b)))",
"As an aside, note that in scale_parametrized_guide, we had to apply torch.abs to parameter b because the standard deviation of a normal distribution has to be positive; similar restrictions also apply to parameters of many other distributions. The PyTorch distributions library, which Pyro is built on, includes a constraints module for enforcing such restrictions, and applying constraints to Pyro parameters is as easy as passing the relevant constraint object to pyro.param:",
"from torch.distributions import constraints\n\ndef scale_parametrized_guide_constrained(guess):\n a = pyro.param(\"a\", torch.tensor(guess))\n b = pyro.param(\"b\", torch.tensor(1.), constraint=constraints.positive)\n return pyro.sample(\"weight\", dist.Normal(a, b)) # no more torch.abs",
"Pyro is built to enable stochastic variational inference, a powerful and widely applicable class of variational inference algorithms with three key characteristics: \n\nParameters are always real-valued tensors\nWe compute Monte Carlo estimates of a loss function from samples of execution histories of the model and guide\nWe use stochastic gradient descent to search for the optimal parameters. \n\nCombining stochastic gradient descent with PyTorch's GPU-accelerated tensor math and automatic differentiation allows us to scale variational inference to very high-dimensional parameter spaces and massive datasets. \nPyro's SVI functionality is described in detail in the SVI tutorial. Here is a very simple example applying it to scale:",
"guess = 8.5\n\npyro.clear_param_store()\nsvi = pyro.infer.SVI(model=conditioned_scale, \n guide=scale_parametrized_guide,\n optim=pyro.optim.Adam({\"lr\": 0.003}),\n loss=pyro.infer.Trace_ELBO())\n\n\nlosses, a, b = [], [], []\nnum_steps = 2500\nfor t in range(num_steps):\n losses.append(svi.step(guess))\n a.append(pyro.param(\"a\").item())\n b.append(pyro.param(\"b\").item())\n \nplt.plot(losses)\nplt.title(\"ELBO\")\nplt.xlabel(\"step\")\nplt.ylabel(\"loss\");\nprint('a = ',pyro.param(\"a\").item())\nprint('b = ', pyro.param(\"b\").item())\n\nplt.subplot(1,2,1)\nplt.plot([0,num_steps],[9.14,9.14], 'k:')\nplt.plot(a)\nplt.ylabel('a')\n\nplt.subplot(1,2,2)\nplt.ylabel('b')\nplt.plot([0,num_steps],[0.6,0.6], 'k:')\nplt.plot(b)\nplt.tight_layout()",
"Note that SVI obtains parameters very close to the true parameters of the desired conditional distribution. This is to be expected as our guide is from the same family.\nNote that optimization will update the values of the guide parameters in the parameter store, so that once we find good parameter values, we can use samples from the guide as posterior samples for downstream tasks.\nNext Steps\nIn the Variational Autoencoder tutorial, we'll see how models like scale can be augmented with deep neural networks and use stochastic variational inference to build a generative model of images."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ibm-cds-labs/pixiedust
|
notebook/data-load-samples/Load from Object Storage - Python.ipynb
|
apache-2.0
|
[
"Loading data from Object Storage\nYou can load data from cloud storage such as Object Storage.\nPrerequisites\n\nCollect your Object Storage connection information: \nAuthorization URL (auth_url), e.g. https://identity.open.softlayer.com\nProject ID (projectId)\nRegion (region), e.g. dallas\nUser id (userId)\n\nPassword (password)\n \n<div class=\"alert alert-block alert-info\">\nIf your Object Storage instance was provisioned in Bluemix you can find the connectivity information in the _Service Credentials_ tab.\n </div>\n\n\n\nCollect your data set information \n\nContainer name, e.g. my_sample_data\n\nFile name, e.g. my_data_set.csv\n\n\nImport PixieDust and enable the Spark Job monitor",
"import pixiedust\npixiedust.enableJobMonitor()",
"Configure Object Storage connectivity\nCustomize this cell with your Object Storage connection information",
"# @hidden_cell\n# Enter your ...\nOS_AUTH_URL = 'https://identity.open.softlayer.com'\nOS_USERID = '...' \nOS_PASSWORD = '...'\nOS_PROJECTID = '...'\nOS_REGION = '...'\nOS_SOURCE_CONTAINER = '...'\nOS_FILENAME = '....csv'",
"Load CSV data\nLoad csv file from Object Storage into a Spark DataFrame.",
"# no changes are required to this cell\nfrom ingest import Connectors\nfrom pyspark.sql import SQLContext\nsqlContext = SQLContext(sc)\n\nobjectstoreloadOptions = {\n Connectors.BluemixObjectStorage.AUTH_URL : OS_AUTH_URL,\n Connectors.BluemixObjectStorage.USERID : OS_USERID,\n Connectors.BluemixObjectStorage.PASSWORD : OS_PASSWORD,\n Connectors.BluemixObjectStorage.PROJECTID : OS_PROJECTID,\n Connectors.BluemixObjectStorage.REGION : OS_REGION,\n Connectors.BluemixObjectStorage.SOURCE_CONTAINER : OS_SOURCE_CONTAINER,\n Connectors.BluemixObjectStorage.SOURCE_FILE_NAME : OS_FILENAME,\n Connectors.BluemixObjectStorage.SOURCE_INFER_SCHEMA : '1'}\n\nos_data = sqlContext.read.format(\"com.ibm.spark.discover\").options(**objectstoreloadOptions).load()",
"Explore the loaded data using PixieDust",
"display(os_data)",
"<div class=\"alert alert-block alert-info\">\nFor information on how to load data from other sources refer to [these code snippets](https://apsportal.ibm.com/docs/content/analyze-data/python_load.html).\n</div>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
miky-kr5/Presentations
|
EVI - 2018/EVI 04/Modulo3.ipynb
|
cc0-1.0
|
[
"Manipulación y Análisis de Datos con Python\n\n<br>\nPre-procesamiento de datos\n<br>\nManejo de datos faltantes",
"datos1 = pd.DataFrame([24, np.nan, np.nan, 23,np.nan, 12, np.nan, 17, np.nan, 2 ,5], columns = list('A'))\ndatos1",
"<br>\nEn dropna() el argumento subset considera la etiqueta para seleccionar el conjunto a descartar, axis=0 descarta filas (axis=1 columnas) y inplace= True hace que los cambios se ejecuten directamente en DataFrame.",
"datos1.dropna(subset=['A'], axis= 0, inplace= True)\ndatos1",
"<br> \nLa función replace() permite reemplazar valores faltantes en el DataFrame por valores nuevos. En nuestro ejemplo, reemplazaremos con el promedio, que se calcula con la función mean().",
"datos1 = pd.DataFrame([24, np.nan, np.nan, 23,np.nan, 12, np.nan, 17, np.nan, 2 ,5], columns = list('A'))\nmedia = datos1['A'].mean()\nmedia",
"<br>\nAhora usamos la función replace()",
"datos1['A'].replace(np.nan, media, inplace = True)\ndatos1",
"<br>\nTransformando los datos\n<br>\nMezclando y combinando DataFrames",
"import pandas as pd\n\ncompra_1 = pd.Series({'Nombre': 'Adelis',\n 'Artículo comprado': 'Libro',\n 'Costo': 1200})\ncompra_2 = pd.Series({'Nombre': 'Miguel',\n 'Artículo comprado': 'Raspberry pi 3',\n 'Costo': 15000})\ncompra_3 = pd.Series({'Nombre': 'Jaime',\n 'Artículo comprado': 'Balón',\n 'Costo': 5000})\ndf = pd.DataFrame([compra_1, compra_2, compra_3], index=['Tienda 1', 'Tienda 1', 'Tienda 2'])\ndf",
"<br>\nPodemos agregar elementos al DataFrame de la siguiente manera:",
"df['Fecha'] = ['Diciembre 1', 'Febrero 4', 'Mediados de Julio']\ndf\n\ndf['Entregado'] = 'Sí'\ndf\n\ndf['Retroalimentación'] = ['Positiva', None, 'Negativa']\ndf",
"<br>\nPandas reset_index () es un método para restablecer el índice de un DataFrame. El establece como índices una lista de enteros que van desde 0 hasta la longitud de los datos.",
"adf = df.reset_index()\nadf",
"<br>\nPodemos tener un par de tablas de datos que nos interese unir o combinar en un mismo DataFrame.",
"empleados_df = pd.DataFrame([{'Nombre': 'Adriana', 'Función': 'Gerente de ventas'},\n {'Nombre': 'Andrés', 'Función': 'Vendedor 1'},\n {'Nombre': 'Cristóbal', 'Función': 'Gerente de departamento'}])\nempleados_df = empleados_df.set_index('Nombre')\ngrado_df = pd.DataFrame([{'Nombre': 'Andrés', 'Grado': 'Nivel 3'},\n {'Nombre': 'Cristóbal', 'Grado': 'Nivel 1'},\n {'Nombre': 'Adriana', 'Grado': 'Nivel 2'}])\ngrado_df = grado_df.set_index('Nombre')\nprint(empleados_df.head())\nprint()\nprint(grado_df.head())",
"<br>\npd.merge() conecta filas en el DataFrames basado en una o más teclas. Para los conocedores de SQL esta función hace unión de bases de datos por columnas o índices.",
"df_info_empleados=pd.merge(empleados_df, grado_df, how='outer', left_index=True, right_index=True)\ndf_info_empleados",
"<br>\nOtros ejemplos de cómo variar el parámetro how se pueden encontrar en el libro Python for Data Analysis - McKinney.\n<br>\nSupongamos que tenemos ahora un nuevo DataFrame que coincide en número de filas con el anterior. Por ejemplo:",
"fecha_ingreso_df = pd.DataFrame([{'Nombre': 'Adriana', 'Fecha de Ingreso': '20/06/2013'},\n {'Nombre': 'Andrés', 'Fecha de Ingreso': '10/01/2018'},\n {'Nombre': 'Cristóbal', 'Fecha de Ingreso': '20/03/2011'}])\nfecha_ingreso_df = fecha_ingreso_df.set_index('Nombre')\nart_vendidos_df = pd.DataFrame([{'Nombre': 'Adriana', 'Art.Vendidos/Total Art.': 123/10000},\n {'Nombre': 'Andrés', 'Art.Vendidos/Total Art.': 1450/10000},\n {'Nombre': 'Cristóbal', 'Art.Vendidos/Total Art.': 5000/10000}])\nart_vendidos_df = art_vendidos_df.set_index('Nombre')\n\nprint(fecha_ingreso_df.head())\nprint(art_vendidos_df.head())\n",
"<br>\npd.concat() pega o apila objetos a lo largo de un eje.",
"new_data = pd.concat([df_info_empleados, fecha_ingreso_df, art_vendidos_df], axis=1)\nnew_data",
"<br>\nHay mucho más que aprender! Por ejemplo: ¿Qué sucede si axis=0? R: pues posiblemente el resultado sea que Pandas pegue todos los valores y sus índices. Como se muestra a continuación:",
"pd.concat([df_info_empleados, fecha_ingreso_df, art_vendidos_df], axis=0)",
"<br>\nOtra transformación de interés podría ser hacer algún cálculo sobre una columna entera. En nuestro ejemplo, supongamos que deseamos colocar % de artículos vendidos y cambiar la etiqueta de esa columna.",
"new_data\nnew_data['Art.Vendidos/Total Art.']= new_data['Art.Vendidos/Total Art.']*100\nnew_data.rename(columns = {'Art.Vendidos/Total Art.': '% Art. Vendidos'}, inplace = True)\nnew_data",
"<br>\nNormalizando datos\n<br>\nTomemos un DataFrame que representa dimensiones de cajas a ser vendidas en un almacén.",
"dimension1 = pd.DataFrame([168.7, 170.0, 150.3, 168.7, 145.2, 200.0, 175.4, 163.0, 230.0, 129.6, 178.2], columns = list('L'))\ndimension1.rename(columns = {'L': 'Largo'}, inplace = True)\n\n\ndimension2 = pd.DataFrame([68.3, 60.2, 65.0, 68.3, 45.9, 70.0, 75.1, 63.5, 65.2, 68.7, 78], columns = list('A'))\ndimension2.rename(columns = {'A': 'Ancho'}, inplace = True)\n\n\ndimension3 = pd.DataFrame([46.8, 47.0, 45.0, 46.8, 45.3, 40.9, 45.6, 43.8, 46.8, 49.0, 47.2], columns = list('A'))\ndimension3.rename(columns = {'A': 'Alto'}, inplace = True)\n\n\ndimensiones = pd.concat([dimension1, dimension2, dimension3], axis=1)\ndimensiones",
"<br>\nMétodo de \"Escala de característica simple\": se divide cada valor por el\nvalor máximo para esa característica, $x_{nuevo} = \\frac{x_{viejo}}{x_{máximo}}$",
"dimensiones['Largo'] = dimensiones['Largo']/dimensiones['Largo'].max()\ndimensiones['Ancho'] = dimensiones['Ancho']/dimensiones['Ancho'].max()\ndimensiones['Alto'] = dimensiones['Alto']/dimensiones['Alto'].max()\ndimensiones\n",
"<br>\nMétodo Mínimo - Máximo: toma cada valor, $x_{viejo}$ le resta el mínimo\nvalor de esa característica y luego se divide por el rango de esa característica, es decir, $x_{nuevo} = \\frac{x_{viejo} - x_{mínimo}}{x_{máximo} - x_{mínimo}}$",
"dimensiones['Largo'] = (dimensiones['Largo']-dimensiones['Largo'].min())/(dimensiones['Largo'].max() - dimensiones['Largo'].min())\ndimensiones['Ancho'] = (dimensiones['Ancho']-dimensiones['Ancho'].min())/(dimensiones['Ancho'].max() - dimensiones['Ancho'].min())\ndimensiones['Alto'] = (dimensiones['Alto']-dimensiones['Alto'].min())/(dimensiones['Alto'].max() - dimensiones['Alto'].min())\ndimensiones\n",
"<br>\nMétodo Puntaje estándar:",
"dimensiones['Largo'] = (dimensiones['Largo']-dimensiones['Largo'].mean())/(dimensiones['Largo'].std())\ndimensiones['Ancho'] = (dimensiones['Ancho']-dimensiones['Ancho'].mean())/(dimensiones['Ancho'].std())\ndimensiones['Alto'] = (dimensiones['Alto']-dimensiones['Alto'].mean())/(dimensiones['Alto'].std())\ndimensiones",
"<br>\nEstadística descriptiva\n<br>\nTabla de resumen estadístico",
"import numpy as np\ndf = pd.read_csv('Automobile_data.csv')\ndf.head()\n\ndf.describe()",
"<br>\nGráficos de cajas (o Boxplots)\n<br>\nVamos a generar datos aleatoriamente y hacer un gráfico de caja.",
"np.random.seed(1500) #generación aleatoria números\ndfb = pd.DataFrame(np.random.randn(10,5)) #DataFrame de dimensiones 10x5\ndfb.boxplot(return_type='axes') #Grafico de caja de cada categoría.\ndfb.head()",
"<br>\nTomemos los datos del archivo Automobile_data.csv para crear un gráfico de caja de 3 variables que definen las dimensiones de los automóviles.",
"x = df['length'] #Variable Largo\ny = df['width'] #Variable Ancho\nz =df['height'] #Variable Alto\ndfbp = pd.DataFrame([x,y,z]).T #Creando un DataFrame con las dimensiones de los autosmóviles\ndfbp.boxplot(fontsize=13, return_type='axes') #Gráfico de caja de las 3 variables \n#Tarea!!!!! Normalice estos datos y haga el nuevo gráfico de caja",
"<br>\nGráficos de barras (o histogramas)\n<br>\nVamos a generar datos aleatoriamente y hacer un gráfico de barras.",
"np.random.seed(14000) #Generación de números aleatorios\npdhist = pd.Series(np.random.randn(1000)) #Serie de números aleatorios\npdhist.hist(normed=True) # Muestra las barras\npdhist.plot(fontsize=13, kind='kde') #Gráfico de barras (kde = Kernel Density Estimation plot. Haga la prueba con 'hist')",
"<br>\nUtilicemos los datos de Automobile_data.csv para hacer un gráfico de barras o histograma de la variable price (precio).",
"import matplotlib.pyplot as plt\np = df['price'] #Seleccionamos la variable price\npdf = pd.Series(p) #Convertimos la selección en una serie de Pandas\npdf.hist(normed=True) # Muestra las barras\npdf.plot(fontsize=11, kind = 'hist') #Gráfico de barras\nplt.xlabel('Precio',fontsize=13)\nplt.ylabel('Frecuencia', fontsize=13)",
"<br>\nEste gráfico de barras nos indica que hay un número alto de automóviles con precio menor a 10000, entre otras cosas .... ¿Qué cosas? ;)\n<br>\nGráfico de dispersión\n<br>\nEste gráfico de dispersión muestra la relación entre las variables tamaño del motor y precio.",
"import matplotlib.pyplot as plt\nx= df['engine-size'] #Variable predictora\ny= df['price'] #Variable objetivo o que deseamos predecir\nplt.scatter(x, y) #Gráfico de dispersión en Matplotlib \nplt.title('Gráfico de dispersión de Tamaño del motor Vs. Precio', fontsize=13)#Nombre del gráfico\nplt.xlabel('Tamaño del motor', fontsize=13)#Etiquetal del eje-x\nplt.ylabel('Precio', fontsize=13)#Etiqueta del eje-y",
"<br>\nCorrelación entre variables\nTomememos las dos variables del ejemplo anterior...",
"import matplotlib.pyplot as plt\nfrom scipy import stats\nx=df['engine-size'] #Variable predictora\ny= df['price'] #Variable objetivo o que deseamos predecir\nslope, intercept, r_value, p_value, std_err = stats.linregress(x,y)\nline = slope*x+intercept\nplt.plot(x,y,'o', x, line)\nax = plt.gca()\nfig = plt.gcf()\nplt.xlabel('Tamaño del motor', fontsize=9)#Etiquetal del eje-x\nplt.ylabel('Precio', fontsize=9)#Etiqueta del eje-y\nplt.title('Gráfico de dispersión de Tamaño del motor Vs. Precio', fontsize=13)#Nombre del gráfico\n\n",
"El gráfico de dispersión anterior revela que hay una relación lineal positiva entre el tamaño del motor y el precio del auto. Es decir, a medida que aumenta el tamaño del motor aumenta el precio.\nEste gráfico de dispersión revela que hay una relación lineal negativa entre las millas que recorre el auto por combustible que usa y el precio del mismo. Es decir, mientras más millas por galón el auto es más económico.",
"import matplotlib.pyplot as plt\nfrom scipy import stats\nx=df['highway-mpg'] #Variable predictora\ny= df['price'] #Variable objetivo o que deseamos predecir\nslope, intercept, r_value, p_value, std_err = stats.linregress(x,y)\nline = slope*x+intercept\nplt.plot(x,y,'o', x, line)\nax = plt.gca()\nfig = plt.gcf()\nplt.xlabel('Millas por galón en autopista', fontsize=9)#Etiquetal del eje-x\nplt.ylabel('Precio', fontsize=9)#Etiqueta del eje-y\nplt.title('Gráfico de dispersión de Millas por galón en autopista Vs. Precio', fontsize=13)#Nombre del gráfico\n",
"Ahora calculemos el coeficiente de correlación y el p-valor entre las variables 'Caballos de Fuerza' y 'Precio' usando 'stats.pearson()'",
"from scipy import stats\nstats.pearsonr(df['horsepower'], df['price'])",
"Existe una fuerte correlación positiva entre las variables ya que el coeficiente de correlación es cercano a 1 y el p-valor es mucho menor que 0.001"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
statkclee/ThinkStats2
|
code/chap10soln-kor.ipynb
|
gpl-3.0
|
[
"통계적 사고 (2판) 연습문제 (thinkstats2.com, think-stat.xwmooc.org)<br>\nAllen Downey / 이광춘(xwMOOC)\n연습문제 10.1\nBRFSS에서 나온 데이터를 사용해서, log(체중) 대비 신장에 대한 선형 최소자승적합을 계산하라. 변수중 하나가 로그 변환된 이와 같은 모형에 대해서 추정된 모수를 나타내는 가장 좋은 방식은 어떻게 될까요? 만약 누군가의 체중을 추측하려고 한다면, 신장을 아는 것이 얼마나 도움이 될까?\nNSFG와 마찬가지로, BRFSS는 일부 집단을 과다표집(oversampling)하고 각 응답자에 대해서 표집 가중치 정보를 제공한다. BRFSS 데이터에서, 해당 가중치에 대한 변수명은 totalwt다. 가중치를 갖는, 갖지 않는 재표집을 사용해서, BRFSS에 나온 평균 응답자 신장, 평균에 대한 표준오차, 90% 신뢰구간을 추정하시오. 보정 가중치가 추정값에 얼마나 영향을 주는가?\nBRFSS 데이터를 불러들여서, 신장과 log 체중을 추출한다.",
"import brfss\nimport numpy as np\n\n%matplotlib inline\n\ndf = brfss.ReadBrfss(nrows=None)\ndf = df.dropna(subset=['htm3', 'wtkg2'])\nheights, weights = df.htm3, df.wtkg2\nweights = np.log10(weights)",
"절편과 기울기를 추정한다.",
"import thinkstats2\ninter, slope = thinkstats2.LeastSquares(heights, weights)\ninter, slope",
"데이터에 대한 산점도와 적합선을 보여준다.",
"import thinkplot\nthinkplot.Scatter(heights, weights, alpha=0.01)\nfxs, fys = thinkstats2.FitLine(heights, inter, slope)\nthinkplot.Plot(fxs, fys)\nthinkplot.Config(xlabel='height (cm)', ylabel='log10 weight (kg)', legend=False)",
"동일한 도식화를 하지만, 역변환을 적용해서 선형(log 아님) 척도로 체중을 나타낸다.",
"thinkplot.Scatter(heights, 10**weights, alpha=0.01)\nfxs, fys = thinkstats2.FitLine(heights, inter, slope)\nthinkplot.Plot(fxs, 10**fys)\nthinkplot.Config(xlabel='height (cm)', ylabel='weight (kg)', legend=False)",
"잔차 백분위수를 도식화한다.\n선들이 범위 대부분에 걸쳐 평평하다. 관계가 선형임을 나타낸다.\n선들이 거의 평행하다. 잔차 분산이 범위에 걸쳐 같음을 나타낸다.",
"res = thinkstats2.Residuals(heights, weights, inter, slope)\ndf['residual'] = res\n\nbins = np.arange(130, 210, 5)\nindices = np.digitize(df.htm3, bins)\ngroups = df.groupby(indices)\n\nmeans = [group.htm3.mean() for i, group in groups][1:-1]\ncdfs = [thinkstats2.Cdf(group.residual) for i, group in groups][1:-1]\n\nthinkplot.PrePlot(3)\nfor percent in [75, 50, 25]:\n ys = [cdf.Percentile(percent) for cdf in cdfs]\n label = '%dth' % percent\n thinkplot.Plot(means, ys, label=label)\n \nthinkplot.Config(xlabel='height (cm)', ylabel='residual weight (kg)', legend=False)",
"상관을 계산한다.",
"rho = thinkstats2.Corr(heights, weights)\nrho",
"결정계수를 계산한다.",
"r2 = thinkstats2.CoefDetermination(weights, res)\nr2",
"$R^2 = \\rho^2$ 임을 확증한다.",
"rho**2 - r2",
"Std(ys)를 계산하는데, 신장을 사용하지 않은 예측 RMSE가 된다.",
"std_ys = thinkstats2.Std(weights)\nstd_ys",
"Std(res)를 계산하는데, 신장을 사용하는 예측 RMSE가 된다.",
"std_res = thinkstats2.Std(res)\nstd_res",
"신장 정보가 RMSE를 얼마나 줄이는가? 약 15%",
"1 - std_res / std_ys",
"재표본추출을 사용해서 절편과 기울기에 대한 표집분포를 계산하시오.",
"t = []\nfor _ in range(100):\n sample = thinkstats2.ResampleRows(df)\n estimates = thinkstats2.LeastSquares(sample.htm3, np.log10(sample.wtkg2))\n t.append(estimates)\n\ninters, slopes = zip(*t)",
"기울기에 대한 표집분포를 도식화하시오.",
"cdf = thinkstats2.Cdf(slopes)\nthinkplot.Cdf(cdf)\nthinkplot.Show(legend=False)",
"기울기에 대한 p-값을 계산하시오.",
"pvalue = cdf[0]\npvalue",
"기울기 90% 신뢰구간을 계산하시오.",
"ci = cdf.Percentile(5), cdf.Percentile(95)\nci",
"표집분포의 평균을 계산하시오.",
"mean = thinkstats2.Mean(slopes)\nmean",
"표집분포에 대한 표준편차를 계산하시오. 이것이 표준오차다.",
"stderr = thinkstats2.Std(slopes)\nstderr",
"표집가중치를 사용해서 재표본추출하시오.",
"def ResampleRowsWeighted(df, column='finalwt'):\n \"\"\"Resamples a DataFrame using probabilities proportional to given column.\n\n df: DataFrame\n column: string column name to use as weights\n\n returns: DataFrame\n \"\"\"\n weights = df[column]\n cdf = thinkstats2.Cdf(dict(weights))\n indices = cdf.Sample(len(weights))\n sample = df.loc[indices]\n return sample",
"표집분포를 요약하시오.",
"def Summarize(estimates):\n mean = thinkstats2.Mean(estimates)\n stderr = thinkstats2.Std(estimates)\n cdf = thinkstats2.Cdf(estimates)\n ci = cdf.Percentile(5), cdf.Percentile(95)\n print('mean', mean)\n print('stderr', stderr)\n print('ci', ci)",
"가중치 없이 행을 재표본추출하고 결과를 요약하시오.",
"estimates_unweighted = [thinkstats2.ResampleRows(df).htm3.mean() for _ in range(100)]\nSummarize(estimates_unweighted)",
"가중치를 갖고 행을 재표본추출하시오. 만약 표집 가중치를 고려하면, 추정된 평균 신장이 거의 2cm 더 크고, 차이는 표집오차보다 훨씬 크다.",
"estimates_weighted = [ResampleRowsWeighted(df).htm3.mean() for _ in range(100)]\nSummarize(estimates_weighted)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jplattel/notebooks
|
phone-usage.ipynb
|
mit
|
[
"Playing with phone usage\nStarting on the first day of this year, I started tracking how often I was using my phone, I've been tinkering with iPython Notebooks and Pineapple pro to explore this data. Here's my first try for analyzing the data with these new tools!",
"# Import libraries\n\nimport pandas as pd\n%matplotlib inline ",
"Data\nThe data is collected and exported in JSON format, with a quick python and dirty python script 'convert.py' I converted into CSV format. We start by reading the data:",
"# Read local CSV\n\ndf = pd.read_csv('usage.csv')\n\n# Describe the dataset\ndf.describe()",
"Interesting...\nThis already learns me something useful, I've been using my phone for 70 minutes on average each day... That's a lot of time spend on a mobile device... On average I would pick it up for 41 times a day, meaning the average duration of my phone use is about 1,7 minutes per session.\nSo hows the distribution?\nLet's find out!",
"df.hist()",
"Pickups\nNow for some more interesting things, let's look at the pickups:",
"# Read local CSV file\ndf = pd.read_csv('pickups.csv')\n\n# Describe the dataset\ndf.describe()",
"Mmh, it looks like there are some pickups where the length in seconds is rather great, let's remove them. Also note, I picked up my phone over 11530 times. Woah, that's a lot of wear!",
"\n# Filter out values over 10 minutes (600 seconds)\ndf = df[df['seconds'] < 600]\n\n# Show histogram of usage ()\ndf.hist(bins=100)",
"Most of my phone usage is very short, with the exception of 2 minutes exactly (a rare peak in the histogram around 120 seconds... Any ideas what caused it? D'oh! It's the time my phone turns off if I don't use it*\nHow about dates and times?\nWell.. let's have a look, shall we?",
"# Create a additional column to save hour \ndf['hour'] = pd.to_datetime(df['date']).map( lambda x: x.hour )\n\n# Plot histogram of hours\ndf.hist(['hour'], bins=24)",
"As it turns out, phone usage is the highest durign lunch break. There's also a dent in the usage around 19:00 hours, meaning I don't use my phone that often during and after dinner. How about weekdays, could they differ? On we go again:",
"# Create a additional column to save the weekday\ndf['weekday'] = pd.to_datetime(df['date']).map( lambda x: x.isoweekday() )\n\n# Then plot\ndf.hist(['weekday'], bins=7)",
"Sunday is my best day! The peak on wednesday is also very interesting! Mmh..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pombredanne/gensim
|
docs/notebooks/ldaseqmodel.ipynb
|
lgpl-2.1
|
[
"Dynamic Topic Models\nImagine you have a gigantic corpus which spans over a couple of years. You want to find semantically similar documents; one from the very beginning of your time-line, and one in the very end. How would you?\nThis is where Dynamic Topic Models comes in. By having a time-based element to topics, context is preserved while ley-words may change.\nDynamic Topic Models are used to model the evolution of topics in a corpus, over time. The Dynamic Topic Model is part of a class of probabilistic topic models, like the LDA. \nWhile most traditional topic mining algorithms do not expect time-tagged data or take into account any prior ordering, Dynamic Topic Models (DTM) leverages the knowledge of different documents belonging to a different time-slice in an attempt to map how the words in a topic change over time.\nDavid Blei does a good job explaining the theory behind this in this Google talk. If you prefer to directly read the paper on DTM by Blei and Lafferty, that should get you upto speed too.\nMotivation\nBut - why even undertake this, especially when Gensim itself have a wrapper?\nThe main motivation was the lack of documentation in the original code - and the fact that doing an only python version makes it easier to use gensim building blocks. For example, for setting up the Sufficient Statistics to initialize the DTM, you can just pass a pre-trained gensim LDA model!\nThere is some clarity on how they built their code now - Variational Inference using Kalman Filters. I've tried to make things as clear as possible in the code, but it still needs some polishing. \nAny help through PRs would be greatly appreciated!\nI have been regularly blogging about my progress with implementing this, which you can find here.\nUse Case\nIf you would have seen the video or read the paper, it's use case would be pretty clear - and the example of modelling it on Science research papers gives us some pretty interesting results. 
It was used to not only catch how various themes of research such as Physics or Neuroscience evolved over the decades but also in identifying similar documents in a way not many other modelling algorithms can. While words may change over time, the fact that DTM can identify topics over time can help us find semantically similar documents over a long time-period.\nThis blog post is also useful in breaking down the ideas and theory behind DTM.\nUsing LdaSeqModel for DTM\nGensim already has a wrapper for original C++ DTM code, but the LdaSeqModel class is an effort to have a pure python implementation of the same.\nUsing it is very similar to using any other gensim topic-modelling algorithm, with all you need to start is an iterable gensim corpus, id2word and a list with the number of documents in each of your time-slices.",
"# setting up our imports\n\nfrom gensim.models import ldaseqmodel\nfrom gensim.corpora import Dictionary, bleicorpus\nimport numpy\nfrom gensim.matutils import hellinger",
"We will be loading the corpus and dictionary from disk. Here our corpus in the Blei corpus format, but it can be any iterable corpus.\nThe data set here consists of news reports over 3 months downloaded from here and cleaned. \nTODO: better, more interesting data-set.\nWhat is a time-slice?\nA very important input for DTM to work is the time_slice input. It should be a list which contains the number of documents in each time-slice. In our case, the first month had 438 articles, the second 430 and the last month had 456 articles. This means we'd need an input which looks like this: time_slice = [438, 430, 456]. \nOnce you have your corpus, id2word and time_slice ready, we're good to go!",
"# loading our corpus and dictionary\ndictionary = Dictionary.load('Corpus/news_dictionary')\ncorpus = bleicorpus.BleiCorpus('Corpus/news_corpus')\n# it's very important that your corpus is saved in order of your time-slices!\n\ntime_slice = [438, 430, 456]",
"For DTM to work it first needs the Sufficient Statistics from a trained LDA model on the same dataset. \nBy default LdaSeqModel trains it's own model and passes those values on, but can also accept a pre-trained gensim LDA model, or a numpy matrix which contains the Suff Stats.\nWe will be training our model in default mode, so LDA will be first performed on the dataset. The passes parameter is to instruct LdaModel on the number of passes.",
"ldaseq = ldaseqmodel.LdaSeqModel(corpus=corpus, id2word=dictionary, time_slice=time_slice, num_topics=5, passes=20)",
"Now that our model is trained, let's see what our results look like.\nResults\nMuch like LDA, the points of interest would be in what the topics are and how the documents are made up of these topics.\nIn DTM we have the added interest of seeing how these topics evolve over time.\nLet's go through some of the functions to print Topics and analyse documents.",
"# to print all topics, use `print_topics`. \n# the input parameter to `print_topics` is only a time-slice option. By passing `0` we are seeing the topics in the 1st time-slice.\nldaseq.print_topics(time=0)\n\n# to fix a topic and see it evolve, use `print_topic_times`\n\nldaseq.print_topic_times(topic=1) # evolution of 1st topic",
"If you look at the lower frequencies; the word broadband is creeping itself up into prominence in topic number 1. \nWe've had our fun looking at topics, now let us see how to analyse documents.\nDoc-Topics\nthe function doc_topics checks the topic proportions on documents already trained on. It accepts the document number in the corpus as an input.\nLet's pick up document number 558 arbitrarily and have a look.",
"# to check Document - Topic proportions, use `doc-topics`\nwords = [dictionary[word_id] for word_id, count in ldaseq.corpus.corpus[558]]\nprint (words)",
"It's pretty clear that it's a news article about football. What topics will it likely be comprised of?",
"doc_1 = ldaseq.doc_topics(558) # check the 244th document in the corpuses topic distribution\nprint (doc_1)",
"It's largely made of topics 3 and 5 - and if we go back and inspect our topics, it's quite a good match.\nIf we wish to analyse a document not in our training set, we can use simply pass the doc to the model similar to the __getitem__ funciton for LdaModel.\nLet's let our document be a hypothetical news article about the effects of Ryan Giggs buying mobiles affecting the British economy.",
"doc_2 = ['economy', 'bank', 'mobile', 'phone', 'markets', 'buy', 'football', 'united', 'giggs']\ndoc_2 = dictionary.doc2bow(doc_2)\ndoc_2 = ldaseq[doc_2]\nprint (doc_2)",
"Pretty neat! Topics 2 and 3 are about technology, the market and football, so this works well for us.\nDistances between documents\nOne of the more handy uses of DTMs topic modelling is that we can compare documents across different time-frames and see how similar they are topic-wise. When words may not necessarily overlap over these time-periods, this is very useful.\nThe current dataset doesn't provide us the diversity for this to be an effective example; but we will nevertheless illustrate how to do the same.",
"hellinger(doc_1, doc_2)",
"The topic distributions are quite similar, so we get a high value.\nFor more information on how to use the gensim distance metrics, check out this notebook.\nPerformance\nThe code currently runs between 5 to 7 times slower than the original C++ DTM code. The bottleneck is in the scipy optimize.fmin_cg method for updating obs. Speeding this up would fix things up!\nSince it uses iterable gensim corpuses, the memory stamp is also cleaner. The corpus size doesn't matter.\nThe advantages of the python port are that unlike the C++ code we needn't treat it like a black-box; PRs to help make the code better are welcomed, as well as help to make the documentation clearer and improve performance. It is also in pure python and doesn't need any dependancy outside of what gensim already needs. The added functionality of being able to analyse new documents is also a plus!\nDTM wrapper comparison\nLet's now compare these results with the DTM wrapper.",
"from gensim.models.wrappers.dtmmodel import DtmModel\n\n\ndtm_path = \"/Users/bhargavvader/Downloads/dtm_release/dtm/main\"\ndtm_model = DtmModel(dtm_path, corpus, time_slice, num_topics=5, id2word=dictionary, initialize_lda=True)\ndtm_model.save('dtm_news')\nldaseq.save('ldaseq_news')\n\n# if we've saved before simply load the model\ndtm_model = DtmModel.load('dtm_news')\n\n# setting up the DTM wrapper for \n\nfrom gensim import matutils\nnum_topics = 5\ntopic_term = dtm_model.lambda_[:,:,0] # the lambda matrix contains \n\ndef validate(topic_term):\n topic_term = numpy.exp(topic_term)\n topic_term = topic_term / topic_term.sum()\n topic_term = topic_term * num_topics\n return topic_term\n\ndef get_topics(topic_terms, topic_number):\n topic_terms = topic_terms[topic_number]\n bestn = matutils.argsort(topic_terms, 20, reverse=True)\n beststr = [dictionary[id_] for id_ in bestn]\n return beststr\n\ntopic_term = validate(topic_term)\n# next is doc_topic_dist\ndoc_topic = dtm_model.gamma_\n# next is the vocabulary, which we already have\n\nvocab = []\nfor i in range(0, len(dictionary)):\n vocab.append(dictionary[i])\n\n# we now need term-frequency and doc_lengths\n\ndef term_frequency(corpus, dictionary):\n term_frequency = [0] * len(dictionary)\n doc_lengths = []\n for doc in corpus:\n doc_lengths.append(len(doc))\n for pair in doc:\n term_frequency[pair[0]] += pair[1]\n return term_frequency, doc_lengths\n\ntopics_wrapper = []\nfor i in range(0, num_topics):\n topics_wrapper.append(get_topics(topic_term, i))\n \n \nterm_frequency, doc_lengths = term_frequency(corpus, dictionary)\n\nimport pyLDAvis\nvis_wrapper = pyLDAvis.prepare(topic_term_dists=topic_term, doc_topic_dists=doc_topic, doc_lengths=doc_lengths, vocab=vocab, term_frequency=term_frequency)\npyLDAvis.display(vis_wrapper)\n\n# now let us visualize the DTM python port.\n\n# getting a list of just words for each topics\ndtm_tp = ldaseq.print_topics()\ndtm_topics = []\nfor topic in dtm_tp:\n topics = []\n 
for prob, word in topic:\n topics.append(word)\n dtm_topics.append(topics)\n \n# getting dtm python doc-topic proportions\ndoc_topic = numpy.copy(ldaseq.gammas)\ndoc_topic /= doc_topic.sum(axis=1)[:, numpy.newaxis]\n\n# getting dtm topic_word proportions for first time_slice\ndef get_topic_term(ldaseq, topic, time=0):\n topic = numpy.transpose(ldaseq.topic_chains[topic].e_log_prob)\n topic = topic[time]\n topic = numpy.exp(topic)\n topic = topic / topic.sum()\n return topic\n\n# get_topic_term(ldaseq, 0).shape\ntopic_term =numpy.array(numpy.split(numpy.concatenate((get_topic_term(ldaseq, 0), get_topic_term(ldaseq, 1), get_topic_term(ldaseq, 2), get_topic_term(ldaseq, 3), get_topic_term(ldaseq, 4))), 5))\nvis_dtm = pyLDAvis.prepare(topic_term_dists=topic_term, doc_topic_dists=doc_topic, doc_lengths=doc_lengths, vocab=vocab, term_frequency=term_frequency)\npyLDAvis.display(vis_dtm)\n\nfrom gensim.models.coherencemodel import CoherenceModel\nimport pickle\n\n\ncm_wrapper = CoherenceModel(topics=topics_wrapper, corpus=corpus, dictionary=dictionary, coherence='u_mass')\ncm_DTM = CoherenceModel(topics=dtm_topics, corpus=corpus, dictionary=dictionary, coherence='u_mass')\n\nprint (cm_wrapper.get_coherence())\nprint (cm_DTM.get_coherence())\n\n# to use 'c_v' we need texts, which we have saved to disk.\ntexts = pickle.load(open('Corpus/texts', 'rb'))\ncm_wrapper = CoherenceModel(topics=topics_wrapper, texts=texts, dictionary=dictionary, coherence='c_v')\ncm_DTM = CoherenceModel(topics=dtm_topics, texts=texts, dictionary=dictionary, coherence='c_v')\n\nprint (cm_wrapper.get_coherence())\nprint (cm_DTM.get_coherence())",
"So while u_mass coherence prefers the wrapper topics, c_v seems to favor our python port better. :)\nConclusion\nSo while there is already a python wrapper of DTM, a pure python implementation will be useful to better understand what goes on undcer the hood and better the code. When it comes to performance, the C++ is undoubtably faster, but we can continue to work on ours to make it as fast.\nAs for evaluating the results, our topics are on par if not better than the wrapper!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
javierfdr/credit-scoring-analysis
|
src/credit_notebook.ipynb
|
mit
|
[
"Fitting Linear and Non-Linear Models to solve the German credit risk scoring classification problem\nLet's import the support libraries developed manually for this project and load the original dataset",
"%matplotlib inline\nfrom classifiers import *\nfrom dim_red import *",
"Loading German Credit scoring dataset transformed to use comma-separated values and printing numpy array dimensions",
"\n[X,y] = load_dataset('new-german-data.numeric',delim=',')\nprint X.shape\nprint y.shape",
"In order to take a first glance of the distribution of data, the first two principal components are calculated from the original data using PCA and plotted in 2D. It can be observed how there are certain spaces where data points of an specific class appear together, however there is no clear separation devisable through PCA analysis.",
"pca = PCA(n_components=2)\npca.fit(X,y)\nprint pca.explained_variance_\nplotPCA(X,y)",
"In order to understand the relevance of the features of the dataset wrapper methods will be use to see the influence of the features on the final output. First the dataset will be splitted two obtain a training and test set, and then a simple SVM Linear classifier will be built in order to have a first glance on the influence of each attribute.",
"from sklearn import cross_validation\nX_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.3, random_state=0)\nclf = svm.SVC(kernel='linear', C=10, probability=True)\n\ndef feature_analysis_prec(X,y,clf):\n scores = []\n names = range(0,X.shape[1])\n for i in range(X.shape[1]):\n score = cross_val_score(clf, X[:, i:i+1], y, scoring=\"precision\",\n cv=ShuffleSplit(len(X), 3, .3))\n scores.append((round(np.mean(score), 3), names[i]))\n sorted_scores = sorted(scores, reverse=True)\n print sorted_scores\n return sorted_scores[0:4]\n \nss = feature_analysis_prec(X,y,clf)\n",
"It can be seen that most of the features has a level of relevance regarding the precision of the classifier; which works as an indication of the false positive rate, critical for credit scoring. To take an additional glance to the ability of the features to represent the outcome let's take the first 4 higher relevant features and calculate and plot PCA on them.",
"X_r = X[:,[ss[0][1],ss[1][1],ss[2][1],ss[3][1]]]\npca = PCA(n_components=2)\npca.fit(X_r,y)\nprint pca.explained_variance_\nplotPCA(X_r,y)",
"As expected, no much better representational ability is obtained from the principal components on the first 4 most relevant features since each of the features is adding a certain degree of value on predicting the final outcome.\nGiven the complexity of the feature space more sophisticated models must be built in order to represent more accurately the nature of the data. In the following sections three different models are built using a Linear SVM, RBF Kernel SVM and Random Decision Forests. The latest two are expected to give better prediction rates (in terms of f1 measure, a ratio of precision and recall) at the expense of higher risk of overfitting. In order to avoid this a grid search is performed on each method to obtain the better model in terms of test set fitting accuracy, and cross validation is performed on each parameter combination to obtain a more representative mean accuracy in each case.\nLinear SVM\nLet's split the dataset into 70% training set and 30% test set. Then less obtain the best model through cross validated grid search",
"from sklearn.metrics import classification_report\n\nX_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.3, random_state=0)\n\nparam_grid = [\n {'C': [0.01,0.1, 1, 10, 100,1000], 'kernel': ['linear']}\n ]\n\nclf = svm.SVC(kernel='linear', C=10)\nclf = GridSearchCV(clf, param_grid, cv=5, scoring='f1')\nclf.fit(X_train, y_train)\n\nprint(\"Best parameters combination\")\nprint(clf.best_params_)\nprint(\"F1 scores on each combination:\")\nfor params, mean_score, scores in clf.grid_scores_:\n print(\"%0.3f (+/-%0.03f) for %r\"\n % (mean_score, scores.std() * 2, params))\nprint(\"Detailed classification report:\")\ny_true, y_pred = y_test, clf.predict(X_test)\nprint(classification_report(y_true, y_pred))",
"Plotting the confusion matrix on the results we obtain the following:",
"from sklearn.metrics import confusion_matrix\nfrom __future__ import division\n\ncm = confusion_matrix(y_test,y_pred)\nplot_confusion_matrix(cm,['Approve','Deny'])\n\n# brute confusion matrix values\nprint cm\nTP = cm[0,0]\nFN = cm[0,1]\nFP = cm[1,0]\nTN = cm[1,1]\n\n# percentual confusion matrix values\ntotal = cm.sum()\nprint cm / total\nprint(\"Good Accepted + Bad Rejected: %0.2f \"% ((cm[0,0]+cm[1,1])/total))\nprint(\"Bad Accepted + Good Rejected: %0.2f \"% ((cm[0,1]+cm[1,0])/total))\n\n# storing linear SVM results\nlsvm_cm = cm\n\n# Associated cost - from cost matrix\ncost = (TN*1) + (FP*5)\nprint \"Associated cost: %0.2f\" % cost",
"RBF Kernel SVM\nGiven the complexity of the feature space, evidenced in the initial plots, it is expected that a more sophisticated transformation of the feature space, such as the nonlinear transformation attempted by RBF Kernel SVM, would allow a better fit. We will reuse the same training/test splitting performed before in order to guarantee consistency. Again the best model will be obtained a cross validated grid search.",
"from sklearn.metrics import classification_report\nfrom sklearn.lda import LDA\n\n\nparam_grid = [\n {'C': [0.01,0.1, 1, 10, 100,1000], 'kernel': ['rbf'], 'gamma' : [0.01,0.1,1,10,100,1000]}\n ]\n\nclf = svm.SVC(kernel='rbf', C=10, gamma = 1)\nclf = GridSearchCV(clf, param_grid, cv=5, scoring='f1')\n\nclf.fit(X_train, y_train)\n\nprint(\"Best parameters combination\")\nprint(clf.best_params_)\nprint(\"F1 scores on each combination:\")\nfor params, mean_score, scores in clf.grid_scores_:\n print(\"%0.3f (+/-%0.03f) for %r\"\n % (mean_score, scores.std() * 2, params))\nprint(\"Detailed classification report:\")\ny_true, y_pred = y_test, clf.predict(X_test)\nprint(classification_report(y_true, y_pred))",
"Plotting the confusion matrix on the results we obtain the following:",
"from sklearn.metrics import confusion_matrix\nfrom __future__ import division\n\ncm = confusion_matrix(y_test,y_pred)\nplot_confusion_matrix(cm,['Approve','Deny'])\n\n# brute confusion matrix values\nprint cm\nTP = cm[0,0]\nFN = cm[0,1]\nFP = cm[1,0]\nTN = cm[1,1]\n\n# percentual confusion matrix values\ntotal = cm.sum()\nprint cm / total\nprint(\"Good Accepted + Bad Rejected: %0.2f \"% ((cm[0,0]+cm[1,1])/total))\nprint(\"Bad Accepted + Good Rejected: %0.2f \"% ((cm[0,1]+cm[1,0])/total))\n\n# storing RBF Kernel SVM results\nrbfsvm_cm = cm\n\n# Associated cost - from cost matrix\ncost = (TN*1) + (FP*5)\nprint \"Associated cost: %0.2f\" % cost",
"Random Decision Forests\nThis bagging algorithm produces and ensemble of decision trees, capable to fit very complex feature spaces. RDF's have proven to be consistently accurate in wider ranges of problems. Since a sufficiently long trees within the ensembles are capable to fit any particular space, a high risk of overfitting arises, so its necessary to perform a thorough search in the parameters space. The same procedure as before is followed.",
"from sklearn.metrics import classification_report\nfrom scipy.stats import randint as sp_randint\n\nparam_grid = [{\"max_depth\": [3, 5,10,15,20,30,40,70],\n \"max_features\": [1, 3, 10,15,20]\n }\n ]\n\nclf = RandomForestClassifier(max_features = 'auto', max_depth=10)\nclf = GridSearchCV(clf, param_grid, cv=5, scoring='f1')\nclf.fit(X_train, y_train)\n\nprint(\"Best parameters combination\")\nprint(clf.best_params_)\nprint(\"F1 scores on each combination:\")\nfor params, mean_score, scores in clf.grid_scores_:\n print(\"%0.3f (+/-%0.03f) for %r\"\n % (mean_score, scores.std() * 2, params))\nprint(\"Detailed classification report:\")\ny_true, y_pred = y_test, clf.predict(X_test)\nprint(classification_report(y_true, y_pred))\n\n# Please wait, this could take a while\n\nfrom sklearn.metrics import confusion_matrix\nfrom __future__ import division\n\ncm = confusion_matrix(y_test,y_pred)\nplot_confusion_matrix(cm,['Approve','Deny'])\n\n# brute confusion matrix values\nprint cm\nTP = cm[0,0]\nFN = cm[0,1]\nFP = cm[1,0]\nTN = cm[1,1]\n\n# percentual confusion matrix values\ntotal = cm.sum()\nprint cm / total\nprint(\"Good Accepted + Bad Rejected: %0.2f \"% ((cm[0,0]+cm[1,1])/total))\nprint(\"Bad Accepted + Good Rejected: %0.2f \"% ((cm[0,1]+cm[1,0])/total))\n\n# storing RDF results\nrdf_cm = cm\n\n# Associated cost - from cost matrix\ncost = (TN*1) + (FP*5)\nprint \"Associated cost: %0.2f\" % cost",
"Results Summary\nThe following graph presents the results comparison between the best models of the three presented approaches. The comparison is performed directly on the ability of discerning of a credit should be assigned or rejected, and the feasibility of a wrongly conceded credit which is a critical issue for this problem.",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nN = 4\nsvm_linear = (lsvm_cm[0,0],lsvm_cm[0,1],lsvm_cm[1,0],lsvm_cm[1,1])\nrbfsvm = (rbfsvm_cm[0,0],rbfsvm_cm[0,1],rbfsvm_cm[1,0],rbfsvm_cm[1,1])\nrdf = (rdf_cm[0,0],rdf_cm[0,1],rdf_cm[1,0],rdf_cm[1,1])\n\nind = np.arange(N) # the x locations for the groups\nwidth = 0.25 # the width of the bars: can also be len(x) sequence\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nrects1 = ax.bar(ind, svm_linear, width, color='r')\nrects2 = ax.bar(ind+width, rbfsvm, width, color='g')\nrects3 = ax.bar(ind+(width*2), rdf, width, color='b')\n\n# add some\nax.set_ylabel('Number of examples')\nax.set_title('Number of examples per classification type')\nax.set_xticks(ind+width)\nax.set_xticklabels( ('True Positive', 'False Positive', 'False Negative', 'True Negative') )\n\nax.legend( (rects1[0], rects2[0],rects3[0]), ('LinearSVM', 'RBF-SVM','RDF') )\n\nplt.show()\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nN = 4\ntotal = lsvm_cm.sum()\nlsvm_cm = lsvm_cm/total\nrbfsvm_cm = rbfsvm_cm/total\nrdf_cm = rdf_cm/total\n\nsvm_linear = (lsvm_cm[0,0],lsvm_cm[0,1],lsvm_cm[1,0],lsvm_cm[1,1])\nrbfsvm = (rbfsvm_cm[0,0],rbfsvm_cm[0,1],rbfsvm_cm[1,0],rbfsvm_cm[1,1])\nrdf = (rdf_cm[0,0],rdf_cm[0,1],rdf_cm[1,0],rdf_cm[1,1])\n\nind = np.arange(N) # the x locations for the groups\nwidth = 0.25 # the width of the bars: can also be len(x) sequence\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nrects1 = ax.bar(ind, svm_linear, width, color='r')\nrects2 = ax.bar(ind+width, rbfsvm, width, color='g')\nrects3 = ax.bar(ind+(width*2), rdf, width, color='b')\n\n# add some\nax.set_ylabel('Percentage from the total')\nax.set_title('Percentage from the total per classification type')\nax.set_xticks(ind+width)\nax.set_xticklabels( ('True Positive', 'False Positive', 'False Negative', 'True Negative') )\n\nax.legend( (rects1[0], rects2[0],rects3[0]), ('LinearSVM', 'RBF-SVM','RDF') )\n\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
keras-team/keras-io
|
examples/graph/ipynb/gnn_citations.ipynb
|
apache-2.0
|
[
"Node Classification with Graph Neural Networks\nAuthor: Khalid Salama<br>\nDate created: 2021/05/30<br>\nLast modified: 2021/05/30<br>\nDescription: Implementing a graph neural network model for predicting the topic of a paper given its citations.\nIntroduction\nMany datasets in various machine learning (ML) applications have structural relationships\nbetween their entities, which can be represented as graphs. Such application includes\nsocial and communication networks analysis, traffic prediction, and fraud detection.\nGraph representation Learning\naims to build and train models for graph datasets to be used for a variety of ML tasks.\nThis example demonstrate a simple implementation of a Graph Neural Network\n(GNN) model. The model is used for a node prediction task on the Cora dataset\nto predict the subject of a paper given its words and citations network.\nNote that, we implement a Graph Convolution Layer from scratch to provide better\nunderstanding of how they work. However, there is a number of specialized TensorFlow-based\nlibraries that provide rich GNN APIs, such as Spectral,\nStellarGraph, and\nGraphNets.\nSetup",
"import os\nimport pandas as pd\nimport numpy as np\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers",
"Prepare the Dataset\nThe Cora dataset consists of 2,708 scientific papers classified into one of seven classes.\nThe citation network consists of 5,429 links. Each paper has a binary word vector of size\n1,433, indicating the presence of a corresponding word.\nDownload the dataset\nThe dataset has two tap-separated files: cora.cites and cora.content.\n\nThe cora.cites includes the citation records with two columns:\ncited_paper_id (target) and citing_paper_id (source).\nThe cora.content includes the paper content records with 1,435 columns:\npaper_id, subject, and 1,433 binary features.\n\nLet's download the dataset.",
"zip_file = keras.utils.get_file(\n fname=\"cora.tgz\",\n origin=\"https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz\",\n extract=True,\n)\ndata_dir = os.path.join(os.path.dirname(zip_file), \"cora\")",
"Process and visualize the dataset\nThen we load the citations data into a Pandas DataFrame.",
"citations = pd.read_csv(\n os.path.join(data_dir, \"cora.cites\"),\n sep=\"\\t\",\n header=None,\n names=[\"target\", \"source\"],\n)\nprint(\"Citations shape:\", citations.shape)",
"Now we display a sample of the citations DataFrame.\nThe target column includes the paper ids cited by the paper ids in the source column.",
"citations.sample(frac=1).head()",
"Now let's load the papers data into a Pandas DataFrame.",
"column_names = [\"paper_id\"] + [f\"term_{idx}\" for idx in range(1433)] + [\"subject\"]\npapers = pd.read_csv(\n os.path.join(data_dir, \"cora.content\"), sep=\"\\t\", header=None, names=column_names,\n)\nprint(\"Papers shape:\", papers.shape)",
"Now we display a sample of the papers DataFrame. The DataFrame includes the paper_id\nand the subject columns, as well as 1,433 binary column representing whether a term exists\nin the paper or not.",
"print(papers.sample(5).T)",
"Let's display the count of the papers in each subject.",
"print(papers.subject.value_counts())",
"We convert the paper ids and the subjects into zero-based indices.",
"class_values = sorted(papers[\"subject\"].unique())\nclass_idx = {name: id for id, name in enumerate(class_values)}\npaper_idx = {name: idx for idx, name in enumerate(sorted(papers[\"paper_id\"].unique()))}\n\npapers[\"paper_id\"] = papers[\"paper_id\"].apply(lambda name: paper_idx[name])\ncitations[\"source\"] = citations[\"source\"].apply(lambda name: paper_idx[name])\ncitations[\"target\"] = citations[\"target\"].apply(lambda name: paper_idx[name])\npapers[\"subject\"] = papers[\"subject\"].apply(lambda value: class_idx[value])",
"Now let's visualize the citation graph. Each node in the graph represents a paper,\nand the color of the node corresponds to its subject. Note that we only show a sample of\nthe papers in the dataset.",
"plt.figure(figsize=(10, 10))\ncolors = papers[\"subject\"].tolist()\ncora_graph = nx.from_pandas_edgelist(citations.sample(n=1500))\nsubjects = list(papers[papers[\"paper_id\"].isin(list(cora_graph.nodes))][\"subject\"])\nnx.draw_spring(cora_graph, node_size=15, node_color=subjects)\n",
"Split the dataset into stratified train and test sets",
"train_data, test_data = [], []\n\nfor _, group_data in papers.groupby(\"subject\"):\n # Select around 50% of the dataset for training.\n random_selection = np.random.rand(len(group_data.index)) <= 0.5\n train_data.append(group_data[random_selection])\n test_data.append(group_data[~random_selection])\n\ntrain_data = pd.concat(train_data).sample(frac=1)\ntest_data = pd.concat(test_data).sample(frac=1)\n\nprint(\"Train data shape:\", train_data.shape)\nprint(\"Test data shape:\", test_data.shape)",
"Implement Train and Evaluate Experiment",
"hidden_units = [32, 32]\nlearning_rate = 0.01\ndropout_rate = 0.5\nnum_epochs = 300\nbatch_size = 256",
"This function compiles and trains an input model using the given training data.",
"\ndef run_experiment(model, x_train, y_train):\n # Compile the model.\n model.compile(\n optimizer=keras.optimizers.Adam(learning_rate),\n loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=[keras.metrics.SparseCategoricalAccuracy(name=\"acc\")],\n )\n # Create an early stopping callback.\n early_stopping = keras.callbacks.EarlyStopping(\n monitor=\"val_acc\", patience=50, restore_best_weights=True\n )\n # Fit the model.\n history = model.fit(\n x=x_train,\n y=y_train,\n epochs=num_epochs,\n batch_size=batch_size,\n validation_split=0.15,\n callbacks=[early_stopping],\n )\n\n return history\n",
"This function displays the loss and accuracy curves of the model during training.",
"\ndef display_learning_curves(history):\n fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))\n\n ax1.plot(history.history[\"loss\"])\n ax1.plot(history.history[\"val_loss\"])\n ax1.legend([\"train\", \"test\"], loc=\"upper right\")\n ax1.set_xlabel(\"Epochs\")\n ax1.set_ylabel(\"Loss\")\n\n ax2.plot(history.history[\"acc\"])\n ax2.plot(history.history[\"val_acc\"])\n ax2.legend([\"train\", \"test\"], loc=\"upper right\")\n ax2.set_xlabel(\"Epochs\")\n ax2.set_ylabel(\"Accuracy\")\n plt.show()\n",
"Implement Feedforward Network (FFN) Module\nWe will use this module in the baseline and the GNN models.",
"\ndef create_ffn(hidden_units, dropout_rate, name=None):\n fnn_layers = []\n\n for units in hidden_units:\n fnn_layers.append(layers.BatchNormalization())\n fnn_layers.append(layers.Dropout(dropout_rate))\n fnn_layers.append(layers.Dense(units, activation=tf.nn.gelu))\n\n return keras.Sequential(fnn_layers, name=name)\n",
"Build a Baseline Neural Network Model\nPrepare the data for the baseline model",
"feature_names = set(papers.columns) - {\"paper_id\", \"subject\"}\nnum_features = len(feature_names)\nnum_classes = len(class_idx)\n\n# Create train and test features as a numpy array.\nx_train = train_data[feature_names].to_numpy()\nx_test = test_data[feature_names].to_numpy()\n# Create train and test targets as a numpy array.\ny_train = train_data[\"subject\"]\ny_test = test_data[\"subject\"]",
"Implement a baseline classifier\nWe add five FFN blocks with skip connections, so that we generate a baseline model with\nroughly the same number of parameters as the GNN models to be built later.",
"\ndef create_baseline_model(hidden_units, num_classes, dropout_rate=0.2):\n inputs = layers.Input(shape=(num_features,), name=\"input_features\")\n x = create_ffn(hidden_units, dropout_rate, name=f\"ffn_block1\")(inputs)\n for block_idx in range(4):\n # Create an FFN block.\n x1 = create_ffn(hidden_units, dropout_rate, name=f\"ffn_block{block_idx + 2}\")(x)\n # Add skip connection.\n x = layers.Add(name=f\"skip_connection{block_idx + 2}\")([x, x1])\n # Compute logits.\n logits = layers.Dense(num_classes, name=\"logits\")(x)\n # Create the model.\n return keras.Model(inputs=inputs, outputs=logits, name=\"baseline\")\n\n\nbaseline_model = create_baseline_model(hidden_units, num_classes, dropout_rate)\nbaseline_model.summary()",
"Train the baseline classifier",
"history = run_experiment(baseline_model, x_train, y_train)",
"Let's plot the learning curves.",
"display_learning_curves(history)",
"Now we evaluate the baseline model on the test data split.",
"_, test_accuracy = baseline_model.evaluate(x=x_test, y=y_test, verbose=0)\nprint(f\"Test accuracy: {round(test_accuracy * 100, 2)}%\")",
"Examine the baseline model predictions\nLet's create new data instances by randomly generating binary word vectors with respect to\nthe word presence probabilities.",
"\ndef generate_random_instances(num_instances):\n token_probability = x_train.mean(axis=0)\n instances = []\n for _ in range(num_instances):\n probabilities = np.random.uniform(size=len(token_probability))\n instance = (probabilities <= token_probability).astype(int)\n instances.append(instance)\n\n return np.array(instances)\n\n\ndef display_class_probabilities(probabilities):\n for instance_idx, probs in enumerate(probabilities):\n print(f\"Instance {instance_idx + 1}:\")\n for class_idx, prob in enumerate(probs):\n print(f\"- {class_values[class_idx]}: {round(prob * 100, 2)}%\")\n",
"Now we show the baseline model predictions given these randomly generated instances.",
"new_instances = generate_random_instances(num_classes)\nlogits = baseline_model.predict(new_instances)\nprobabilities = keras.activations.softmax(tf.convert_to_tensor(logits)).numpy()\ndisplay_class_probabilities(probabilities)",
"Build a Graph Neural Network Model\nPrepare the data for the graph model\nPreparing and loading the graphs data into the model for training is the most challenging\npart in GNN models, which is addressed in different ways by the specialised libraries.\nIn this example, we show a simple approach for preparing and using graph data that is suitable\nif your dataset consists of a single graph that fits entirely in memory.\nThe graph data is represented by the graph_info tuple, which consists of the following\nthree elements:\n\nnode_features: This is a [num_nodes, num_features] NumPy array that includes the\nnode features. In this dataset, the nodes are the papers, and the node_features are the\nword-presence binary vectors of each paper.\nedges: This is [num_edges, num_edges] NumPy array representing a sparse\nadjacency matrix\nof the links between the nodes. In this example, the links are the citations between the papers.\nedge_weights (optional): This is a [num_edges] NumPy array that includes the edge weights, which quantify\nthe relationships between nodes in the graph. In this example, there are no weights for the paper citations.",
"# Create an edges array (sparse adjacency matrix) of shape [2, num_edges].\nedges = citations[[\"source\", \"target\"]].to_numpy().T\n# Create an edge weights array of ones.\nedge_weights = tf.ones(shape=edges.shape[1])\n# Create a node features array of shape [num_nodes, num_features].\nnode_features = tf.cast(\n papers.sort_values(\"paper_id\")[feature_names].to_numpy(), dtype=tf.dtypes.float32\n)\n# Create graph info tuple with node_features, edges, and edge_weights.\ngraph_info = (node_features, edges, edge_weights)\n\nprint(\"Edges shape:\", edges.shape)\nprint(\"Nodes shape:\", node_features.shape)",
"Implement a graph convolution layer\nWe implement a graph convolution module as a Keras Layer.\nOur GraphConvLayer performs the following steps:\n\nPrepare: The input node representations are processed using a FFN to produce a message. You can simplify\nthe processing by only applying linear transformation to the representations.\nAggregate: The messages of the neighbours of each node are aggregated with\nrespect to the edge_weights using a permutation invariant pooling operation, such as sum, mean, and max,\nto prepare a single aggregated message for each node. See, for example, tf.math.unsorted_segment_sum\nAPIs used to aggregate neighbour messages.\nUpdate: The node_repesentations and aggregated_messages—both of shape [num_nodes, representation_dim]—\nare combined and processed to produce the new state of the node representations (node embeddings).\nIf combination_type is gru, the node_repesentations and aggregated_messages are stacked to create a sequence,\nthen processed by a GRU layer. Otherwise, the node_repesentations and aggregated_messages are added\nor concatenated, then processed using a FFN.\n\nThe technique implemented use ideas from Graph Convolutional Networks,\nGraphSage, Graph Isomorphism Network,\nSimple Graph Networks, and\nGated Graph Sequence Neural Networks.\nTwo other key techniques that are not covered are Graph Attention Networks\nand Message Passing Neural Networks.",
"\nclass GraphConvLayer(layers.Layer):\n def __init__(\n self,\n hidden_units,\n dropout_rate=0.2,\n aggregation_type=\"mean\",\n combination_type=\"concat\",\n normalize=False,\n *args,\n **kwargs,\n ):\n super(GraphConvLayer, self).__init__(*args, **kwargs)\n\n self.aggregation_type = aggregation_type\n self.combination_type = combination_type\n self.normalize = normalize\n\n self.ffn_prepare = create_ffn(hidden_units, dropout_rate)\n if self.combination_type == \"gated\":\n self.update_fn = layers.GRU(\n units=hidden_units,\n activation=\"tanh\",\n recurrent_activation=\"sigmoid\",\n dropout=dropout_rate,\n return_state=True,\n recurrent_dropout=dropout_rate,\n )\n else:\n self.update_fn = create_ffn(hidden_units, dropout_rate)\n\n def prepare(self, node_repesentations, weights=None):\n # node_repesentations shape is [num_edges, embedding_dim].\n messages = self.ffn_prepare(node_repesentations)\n if weights is not None:\n messages = messages * tf.expand_dims(weights, -1)\n return messages\n\n def aggregate(self, node_indices, neighbour_messages):\n # node_indices shape is [num_edges].\n # neighbour_messages shape: [num_edges, representation_dim].\n num_nodes = tf.math.reduce_max(node_indices) + 1\n if self.aggregation_type == \"sum\":\n aggregated_message = tf.math.unsorted_segment_sum(\n neighbour_messages, node_indices, num_segments=num_nodes\n )\n elif self.aggregation_type == \"mean\":\n aggregated_message = tf.math.unsorted_segment_mean(\n neighbour_messages, node_indices, num_segments=num_nodes\n )\n elif self.aggregation_type == \"max\":\n aggregated_message = tf.math.unsorted_segment_max(\n neighbour_messages, node_indices, num_segments=num_nodes\n )\n else:\n raise ValueError(f\"Invalid aggregation type: {self.aggregation_type}.\")\n\n return aggregated_message\n\n def update(self, node_repesentations, aggregated_messages):\n # node_repesentations shape is [num_nodes, representation_dim].\n # aggregated_messages shape is [num_nodes, 
representation_dim].\n if self.combination_type == \"gru\":\n # Create a sequence of two elements for the GRU layer.\n h = tf.stack([node_repesentations, aggregated_messages], axis=1)\n elif self.combination_type == \"concat\":\n # Concatenate the node_repesentations and aggregated_messages.\n h = tf.concat([node_repesentations, aggregated_messages], axis=1)\n elif self.combination_type == \"add\":\n # Add node_repesentations and aggregated_messages.\n h = node_repesentations + aggregated_messages\n else:\n raise ValueError(f\"Invalid combination type: {self.combination_type}.\")\n\n # Apply the processing function.\n node_embeddings = self.update_fn(h)\n if self.combination_type == \"gru\":\n node_embeddings = tf.unstack(node_embeddings, axis=1)[-1]\n\n if self.normalize:\n node_embeddings = tf.nn.l2_normalize(node_embeddings, axis=-1)\n return node_embeddings\n\n def call(self, inputs):\n \"\"\"Process the inputs to produce the node_embeddings.\n\n inputs: a tuple of three elements: node_repesentations, edges, edge_weights.\n Returns: node_embeddings of shape [num_nodes, representation_dim].\n \"\"\"\n\n node_repesentations, edges, edge_weights = inputs\n # Get node_indices (source) and neighbour_indices (target) from edges.\n node_indices, neighbour_indices = edges[0], edges[1]\n # neighbour_repesentations shape is [num_edges, representation_dim].\n neighbour_repesentations = tf.gather(node_repesentations, neighbour_indices)\n\n # Prepare the messages of the neighbours.\n neighbour_messages = self.prepare(neighbour_repesentations, edge_weights)\n # Aggregate the neighbour messages.\n aggregated_messages = self.aggregate(node_indices, neighbour_messages)\n # Update the node embedding with the neighbour messages.\n return self.update(node_repesentations, aggregated_messages)\n",
"Implement a graph neural network node classifier\nThe GNN classification model follows the Design Space for Graph Neural Networks approach,\nas follows:\n\nApply preprocessing using an FFN to the node features to generate initial node representations.\nApply one or more graph convolutional layers, with skip connections, to the node representations\nto produce node embeddings.\nApply post-processing using an FFN to the node embeddings to generate the final node embeddings.\nFeed the node embeddings into a Softmax layer to predict the node class.\n\nEach graph convolutional layer added captures information from a further level of neighbours.\nHowever, adding many graph convolutional layers can cause oversmoothing, where the model\nproduces similar embeddings for all the nodes.\nNote that the graph_info is passed to the constructor of the Keras model, and used as a property\nof the Keras model object, rather than as input data for training or prediction.\nThe model will accept a batch of node_indices, which are used to look up the\nnode features and neighbours from the graph_info.",
"\nclass GNNNodeClassifier(tf.keras.Model):\n def __init__(\n self,\n graph_info,\n num_classes,\n hidden_units,\n aggregation_type=\"sum\",\n combination_type=\"concat\",\n dropout_rate=0.2,\n normalize=True,\n *args,\n **kwargs,\n ):\n super(GNNNodeClassifier, self).__init__(*args, **kwargs)\n\n # Unpack graph_info to three elements: node_features, edges, and edge_weight.\n node_features, edges, edge_weights = graph_info\n self.node_features = node_features\n self.edges = edges\n self.edge_weights = edge_weights\n # Set edge_weights to ones if not provided.\n if self.edge_weights is None:\n self.edge_weights = tf.ones(shape=edges.shape[1])\n # Scale edge_weights to sum to 1.\n self.edge_weights = self.edge_weights / tf.math.reduce_sum(self.edge_weights)\n\n # Create a process layer.\n self.preprocess = create_ffn(hidden_units, dropout_rate, name=\"preprocess\")\n # Create the first GraphConv layer.\n self.conv1 = GraphConvLayer(\n hidden_units,\n dropout_rate,\n aggregation_type,\n combination_type,\n normalize,\n name=\"graph_conv1\",\n )\n # Create the second GraphConv layer.\n self.conv2 = GraphConvLayer(\n hidden_units,\n dropout_rate,\n aggregation_type,\n combination_type,\n normalize,\n name=\"graph_conv2\",\n )\n # Create a postprocess layer.\n self.postprocess = create_ffn(hidden_units, dropout_rate, name=\"postprocess\")\n # Create a compute logits layer.\n self.compute_logits = layers.Dense(units=num_classes, name=\"logits\")\n\n def call(self, input_node_indices):\n # Preprocess the node_features to produce node representations.\n x = self.preprocess(self.node_features)\n # Apply the first graph conv layer.\n x1 = self.conv1((x, self.edges, self.edge_weights))\n # Skip connection.\n x = x1 + x\n # Apply the second graph conv layer.\n x2 = self.conv2((x, self.edges, self.edge_weights))\n # Skip connection.\n x = x2 + x\n # Postprocess node embedding.\n x = self.postprocess(x)\n # Fetch node embeddings for the input node_indices.\n node_embeddings = 
tf.gather(x, input_node_indices)\n # Compute logits\n return self.compute_logits(node_embeddings)\n",
"Let's test instantiating and calling the GNN model.\nNotice that if you provide N node indices, the output will be a tensor of shape [N, num_classes],\nregardless of the size of the graph.",
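As a quick sanity check on that shape claim, here is a hedged NumPy analogue of the final `tf.gather` in the model (array values are made up for illustration):

```python
import numpy as np

# Pretend the graph has 7 nodes and 4 classes: one logits row per node.
logits_all = np.arange(28, dtype=float).reshape(7, 4)

# Selecting N node indices yields an [N, num_classes] result,
# independent of the total number of nodes in the graph.
node_indices = [1, 5, 2]
selected = logits_all[node_indices]  # analogous to tf.gather(x, input_node_indices)
print(selected.shape)  # (3, 4)
```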
"gnn_model = GNNNodeClassifier(\n graph_info=graph_info,\n num_classes=num_classes,\n hidden_units=hidden_units,\n dropout_rate=dropout_rate,\n name=\"gnn_model\",\n)\n\nprint(\"GNN output shape:\", gnn_model([1, 10, 100]))\n\ngnn_model.summary()",
"Train the GNN model\nNote that we use the standard supervised cross-entropy loss to train the model.\nHowever, we can add another self-supervised loss term for the generated node embeddings\nthat makes sure that neighbouring nodes in the graph have similar representations, while faraway\nnodes have dissimilar representations.",
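The tutorial does not implement that extra term, but one possible form is a simple neighbour-similarity penalty. This sketch is our own assumption of what such a loss could look like (NumPy, illustrative only):

```python
import numpy as np

def neighbour_similarity_loss(embeddings, edges):
    # Penalize squared distance between embeddings of nodes joined by an edge;
    # a full contrastive loss would also push non-neighbour pairs apart.
    src, dst = edges[0], edges[1]
    diffs = embeddings[src] - embeddings[dst]
    return float(np.mean(np.sum(diffs ** 2, axis=1)))

embeddings = np.array([[0.0, 0.0], [0.0, 0.0], [3.0, 4.0]])
edges = np.array([[0, 1], [1, 2]])  # edges 0->1 and 1->2, shape [2, num_edges]
print(neighbour_similarity_loss(embeddings, edges))  # (0 + 25) / 2 = 12.5
```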
"x_train = train_data.paper_id.to_numpy()\nhistory = run_experiment(gnn_model, x_train, y_train)",
"Let's plot the learning curves.",
"display_learning_curves(history)",
"Now we evaluate the GNN model on the test data split.\nThe results may vary depending on the training sample; however, the GNN model consistently outperforms\nthe baseline model in terms of test accuracy.",
"x_test = test_data.paper_id.to_numpy()\n_, test_accuracy = gnn_model.evaluate(x=x_test, y=y_test, verbose=0)\nprint(f\"Test accuracy: {round(test_accuracy * 100, 2)}%\")",
"Examine the GNN model predictions\nLet's add the new instances as nodes to the node_features, and generate links\n(citations) to existing nodes.",
"# First we add the N new_instances as nodes to the graph\n# by appending the new_instance to node_features.\nnum_nodes = node_features.shape[0]\nnew_node_features = np.concatenate([node_features, new_instances])\n# Second we add the M edges (citations) from each new node to a set\n# of existing nodes in a particular subject\nnew_node_indices = [i + num_nodes for i in range(num_classes)]\nnew_citations = []\nfor subject_idx, group in papers.groupby(\"subject\"):\n subject_papers = list(group.paper_id)\n # Select random x papers specific subject.\n selected_paper_indices1 = np.random.choice(subject_papers, 5)\n # Select random y papers from any subject (where y < x).\n selected_paper_indices2 = np.random.choice(list(papers.paper_id), 2)\n # Merge the selected paper indices.\n selected_paper_indices = np.concatenate(\n [selected_paper_indices1, selected_paper_indices2], axis=0\n )\n # Create edges between a citing paper idx and the selected cited papers.\n citing_paper_indx = new_node_indices[subject_idx]\n for cited_paper_idx in selected_paper_indices:\n new_citations.append([citing_paper_indx, cited_paper_idx])\n\nnew_citations = np.array(new_citations).T\nnew_edges = np.concatenate([edges, new_citations], axis=1)",
"Now let's update the node_features and the edges in the GNN model.",
"print(\"Original node_features shape:\", gnn_model.node_features.shape)\nprint(\"Original edges shape:\", gnn_model.edges.shape)\ngnn_model.node_features = new_node_features\ngnn_model.edges = new_edges\ngnn_model.edge_weights = tf.ones(shape=new_edges.shape[1])\nprint(\"New node_features shape:\", gnn_model.node_features.shape)\nprint(\"New edges shape:\", gnn_model.edges.shape)\n\nlogits = gnn_model.predict(tf.convert_to_tensor(new_node_indices))\nprobabilities = keras.activations.softmax(tf.convert_to_tensor(logits)).numpy()\ndisplay_class_probabilities(probabilities)",
"Notice that the probabilities of the expected subjects\n(to which several citations are added) are higher compared to the baseline model."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
hschh86/usersong-extractor
|
documents/Investigationing II.ipynb
|
mit
|
[
"The big reset\nSo I went ahead and cleared the memory.",
"import sys\nsys.path.append('..')\n\nimport collections\n\nimport mido\n\nfrom commons import dgxdump\nfrom commons.dumpdata import messages, songdata, regdata, regvalues\n\nold_syx_messages = mido.read_syx_file('../data/syxout5.syx')\nclear_syx_messages = mido.read_syx_file('../data/clear_bulk.txt')\n\no_dump = dgxdump.DgxDump(old_syx_messages)\nc_dump = dgxdump.DgxDump(clear_syx_messages)\n\n# songs slices\nsongslices = collections.OrderedDict([\n('songs', slice(0x00, 0x01)),\n('mystery', slice(0x01, 0x15D)),\n('tracks', slice(0x15D, 0x167)),\n('durations', slice(0x167, 0x17B)),\n('trackdurations', slice(0x17B, 0x1F3)),\n('presetstyle', slice(0x1F3, 0x22F)),\n('beginningblocks', slice(0x22F, 0x24D)),\n('nextblocks', slice(0x24D, 0x2CF)),\n('startmarker', slice(0x2CF, 0x2D5)),\n('blockdata', slice(0x2D5, 0x106D5)),\n('endmarker', slice(0x106D5, None)),\n])\nEXPECTED_SIZE = 0x106DB\n\nPRESETSTYLE = b'PresetStyle\\0'*5\nMARKER = b'PK0001'\n\ndef hex_string(data):\n return \" \".join(\"{:02X}\".format(b) for b in data)\n\ndef bin_string(data):\n return \" \".join(\"{:08b}\".format(b) for b in data)\n\ndef line_hex(data, head=None, tail=0):\n if head is None:\n head = len(data)\n tailstart = len(data) - tail\n if tailstart <= head:\n return (hex_string(data))\n else:\n return (\"{} .. {}\".format(hex_string(data[:head]), hex_string(data[tailstart:])))\n \ndef song_section(dump, section):\n return dump.song_data.data[songslices[section]]\n\nfor sec in songslices:\n print(sec)\n print(line_hex(song_section(o_dump, sec), 32, 4))\n print(line_hex(song_section(c_dump, sec), 32, 4))\n\nsong_section(o_dump, 'mystery') == song_section(c_dump, 'mystery')",
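The `line_hex` helper defined above elides the middle of long byte dumps. A standalone usage example (helpers reproduced from the cell above):

```python
def hex_string(data):
    return " ".join("{:02X}".format(b) for b in data)

def line_hex(data, head=None, tail=0):
    # Show the first `head` bytes and the last `tail` bytes, eliding the middle.
    if head is None:
        head = len(data)
    tailstart = len(data) - tail
    if tailstart <= head:
        return hex_string(data)
    return "{} .. {}".format(hex_string(data[:head]), hex_string(data[tailstart:]))

print(line_hex(bytes(range(8)), head=3, tail=2))  # 00 01 02 .. 06 07
print(line_hex(bytes(range(4))))                  # short data: shown in full
```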
"The mystery section remains the same.",
"all(b==0 for b in song_section(c_dump, 'nextblocks'))\n\nall(b==0 for b in song_section(c_dump, 'blockdata'))",
"All the blocks are empty.",
"bytes(song_section(c_dump, 'presetstyle'))",
"The 'PresetStyle' settings are empty, too.",
"print(line_hex(o_dump.reg_data.data, 32, 4))\nprint(line_hex(c_dump.reg_data.data, 32, 4))\n\nfor bank in range(1, 8+1):\n for button in range(1, 2+1):\n print(bank, button)\n print(line_hex(o_dump.reg_data.settings.get_setting(bank, button).data))\n print(line_hex(c_dump.reg_data.settings.get_setting(bank, button).data))",
"Each of the registration settings is completely blank.\nInteresting things to note: the first byte is 0 instead of 1, which probably indicates that the setting is unused.\nThe bytes that were FF in each recorded setting are 00 here.\nInvestigating FUNCTION backup\nAccording to the manual (page 49), the following settings can be saved to backup, i.e. persistent memory for startup, by holding the FUNCTION button:\n\nUser songs (These are saved when recorded anyway)\nStyle files (the ones loaded using SmartMedia)\nTouch response (ON/OFF)\nRegistration memory\nThese function settings:\nTuning\nSplit point\nTouch sensitivity\nStyle volume\nSong volume\nMetronome volume\nGrade\nDemo cancel\nLanguage\nMedia Select\nPanel Sustain.\n\nThese backup settings are also cleared with the rest of the memory.\nThe default values for these settings are as follows:\n| setting | default |\n|-------------------|--------------|\n| Touch response | ON |\n| Tuning | 000 |\n| Split point | 54 (F#2) |\n| Touch sensitivity | 2 (Medium) |\n| Style volume | 100 |\n| Song volume | 100 |\n| Metronome volume | 100 |\n| Grade | ON |\n| Demo cancel | OFF |\n| Language | English |\n| Media Select | Flash Memory |\n| Panel sustain | OFF |\nAs an experiment, I changed the values of the function settings:\n| setting | new value |\n|-------------------|--------------|\n| Touch response | ON |\n| Tuning | 057 |\n| Split point | 112 (E7) |\n| Touch sensitivity | 3 (Hard) |\n| Style volume | 045 |\n| Song volume | 079 |\n| Metronome volume | 121 |\n| Grade | OFF |\n| Demo cancel | ON |\n| Language | Japanese |\n| Media Select | Smart Media |\n| Panel sustain | ON |\nand without making a backup:\n - took a bulk dump. 
(cb1.txt),\n - then made the backup, took another bulk dump, (cb2.txt),\n - restarted with the new settings, took another (cb3.txt),\n - reset everything to default without backup (cb4.txt),\n - made a backup again and took another dump (cb5.txt),\n - then restarted again (cb6.txt).\nAll of these files were identical to each other, which suggests that these backup settings are not stored in any part of the dump we can retrieve.\nHowever, there is one interesting thing about these files: they differ from the dump I got immediately after resetting the memory (clear_bulk.txt).",
"for x in range(2, 7):\n !diff -qs ../data/backup_experiment/cb1.txt ../data/backup_experiment/cb{x}.txt\n!diff -qs ../data/backup_experiment/cb1.txt ../data/clear_bulk.txt\n\nc2_syx_messages = mido.read_syx_file('../data/backup_experiment/cb1.txt')\nc2_dump = dgxdump.DgxDump(c2_syx_messages)\n\n\nc_dump.song_data.data == c2_dump.song_data.data\n\nc_dump.reg_data.data == c2_dump.reg_data.data\n\nfor sec in songslices:\n c_sec = song_section(c_dump, sec)\n c2_sec = song_section(c2_dump, sec)\n if c_sec != c2_sec:\n print(sec)\n print(line_hex(c_sec, 32, 4))\n print(line_hex(c2_sec, 32, 4))\n\nfor n, (a, b) in enumerate(zip(c_dump.song_data.data, c2_dump.song_data.data)):\n if a != b:\n print(\"{0:02X}: {1:02X} {2:02X} ({1:03d} {2:03d})\".format(n, a, b))",
"The only difference seems to be two bytes in the mystery section, at offsets 0x07 and 0x08.\nPerhaps this has to do with some kind of internal wear levelling or something.\nRegistration extension\nNow that the memory has been cleared, we can hopefully figure out more about the registration settings.\nRecording Bank 3, Button 2 as the following settings:\n| setting | value |\n|------------------|-------|\n| Style | 092 |\n| Accompaniment | ON |\n| Split point | 053 |\n| Main A/B | A |\n| Style vol | 050 |\n| Main voice | 060 |\n| Main Octave | -1 |\n| Main Volume | 054 |\n| Main Pan | 092 |\n| Main Reverb | 078 |\n| Main Chorus | 103 |\n| Split | ON |\n| Split voice | 003 |\n| Split Octave | 0 |\n| Split Volume | 108 |\n| Split Pan | 064 |\n| Split Reverb | 032 |\n| Split Chorus | 127 |\n| Dual | OFF |\n| Dual voice | 201 |\n| Dual Octave | +2 |\n| Dual Volume | 095 |\n| Dual Pan | 048 |\n| Dual Reverb | 017 |\n| Dual Chorus | 082 |\n| Pitch bend range | 05 |\n| Reverb type | --(Room) |\n| Chorus type | --(Celeste) |\n| Harmony | OFF |\n| Harmony type | 06(Trill1/4) |\n| Harmony volume | 085/---* |\n| Transpose | +03 |\n| Tempo | 080 |\n| Panel Sustain | ON |\n*This was set using a different Harmony type setting.",
"r1_dump = dgxdump.DgxDump(mido.read_syx_file('../data/post_clear/1reg.syx'))\n\nc2_dump.song_data.data == r1_dump.song_data.data\n\nc2_dump.reg_data.data == r1_dump.reg_data.data\n\nfor bank in range(1, 8+1):\n for button in range(1, 2+1):\n if not all(x == 0 for x in r1_dump.reg_data.settings.get_setting(bank, button).data):\n print(bank, button)\n\nline_hex(r1_dump.reg_data.settings.get_setting(3, 2).data)\n\nfor bb in [(3, 2), (1, 1)]:\n sets = r1_dump.reg_data.settings.get_setting(*bb)\n print(line_hex(sets.data))\n sets.print_settings()\n sets.print_unusual()",
"I believe the only real way to get unrecorded settings is to reset the memory, which clears all the values to zero.\nThis means that the first byte, which has a value of 01 for all recorded settings, can indeed be used as a flag... along with the FF byte at offset 24, and any other setting that cannot be set to a value of zero, such as the Pitch Bend range, Reverb type, Chorus type, and Panel Sustain.\nPersonally, I think it makes more sense for the first byte to act as the recorded flag, so I think I'll use that.",
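That recorded-flag idea can be expressed as a tiny predicate. Note the 0x01 convention is the assumption argued for above, inferred from the dumps, not anything documented by the manufacturer:

```python
def is_recorded(setting_data):
    # Assumption from the analysis above: a recorded registration setting
    # begins with byte 0x01, while a memory-cleared slot is all zeros.
    return len(setting_data) > 0 and setting_data[0] == 0x01

cleared = bytes(32)                    # all-zero slot after the memory reset
recorded = bytes([0x01]) + bytes(31)   # first byte set; rest arbitrary here
print(is_recorded(cleared), is_recorded(recorded))  # False True
```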
"r2_dump = dgxdump.DgxDump(mido.read_syx_file('../data/post_clear/2reg.txt'))\nsets = r2_dump.reg_data.settings.get_setting(2,2)\nsets.print_settings()\nsets.print_unusual()",
"The voice number 000 is used for the default voice for whichever song or style is selected. If saved to a registration setting, the number 000 is not actually recorded; rather, the actual voice settings are saved.\nSong Stuff\nAccording to the manual, page 45, the following data is recorded in melody tracks:\n - Notes on/off and velocity\n - Voice number\n - Reverb and chorus type (at beginning only, i.e. no changes)\n - Harmony notes\n - Pedal sustain and Function sustain\n - Tempo and time signature (at beginning only, and only when the style track is not recorded)\n - I believe this is what gets recorded onto the actual time track when Track A has not been selected for recording,\n which suggests that this gets overwritten by Track A. We could test this by recording then deleting A, which\n should then remove the old time information entirely\n - Pitch bend and pitch bend range\n - Dual voice on/off\n - Main/Dual voice volume/octave/pan/reverb/chorus levels\nAnd on the style track (A):\n - Chord changes and timing\n - Style pattern changes (Intro/Main A/B etc)\n - Style number (at beginning only)\n - Reverb and chorus type (at beginning only)\n - Tempo\n - Time signature (at beginning only)\n - Style volume (at beginning only)\nNote that the split voice and notes are not recorded at all (p.46). I suspect this may be because with five tracks each with main and dual, plus accompaniment, plus the actual keyboard voices, there aren't enough MIDI channels to accommodate them."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |