| repo_name (string, 6–77 chars) | path (string, 8–215 chars) | license (15 classes) | cells (list) | types (list) |
|---|---|---|---|---|
machinelearningnanodegree/stanford-cs231
|
solutions/levin/assignment2/BatchNormalization.ipynb
|
mit
|
[
"Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\nThe authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n[3] Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.",
"# As usual, a bit of setup\nimport sys\nimport os\nsys.path.insert(0, os.path.abspath('..'))\nimport time\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n'../../assignment1/cs231n/datasets/cifar-10-batches-py'\ndata = get_CIFAR10_data()\nfor k, v in data.iteritems():\n print '%s: ' % k, v.shape",
"Batch normalization: Forward\nIn the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.",
"# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization\n\n# Simulate the forward pass for a two-layer network\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint 'Before batch normalization:'\nprint ' means: ', a.mean(axis=0)\nprint ' stds: ', a.std(axis=0)\n\n# Means should be close to zero and stds close to one\nprint 'After batch normalization (gamma=1, beta=0)'\na_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})\nprint ' mean: ', a_norm.mean(axis=0)\nprint ' std: ', a_norm.std(axis=0)\n\n# Now means should be close to beta and stds close to gamma\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint 'After batch normalization (nontrivial gamma, beta)'\nprint ' means: ', a_norm.mean(axis=0)\nprint ' stds: ', a_norm.std(axis=0)\n\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\nfor t in xrange(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint 'After batch normalization (test-time):'\nprint ' means: ', a_norm.mean(axis=0)\nprint ' stds: ', a_norm.std(axis=0)",
"Batch Normalization: backward\nNow implement the backward pass for batch normalization in the function batchnorm_backward.\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\nOnce you have finished, run the following to numerically check your backward pass.",
"# Gradient check batchnorm backward pass\n\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, beta, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma, dout)\ndb_num = eval_numerical_gradient_array(fb, beta, dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dgamma error: ', rel_error(da_num, dgamma)\nprint 'dbeta error: ', rel_error(db_num, dbeta)",
"Batch Normalization: alternative backward\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.\nSurprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\nNOTE: You can still complete the rest of the assignment if you don't figure this part out, so don't worry too much if you can't get it.",
"N, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint 'dx difference: ', rel_error(dx1, dx2)\nprint 'dgamma difference: ', rel_error(dgamma1, dgamma2)\nprint 'dbeta difference: ', rel_error(dbeta1, dbeta2)\nprint 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2))",
"Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs2312n/classifiers/fc_net.py. Modify your implementation to add batch normalization.\nConcretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.\nHINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.",
"N, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor reg in [0, 3.14]:\n print 'Running check with reg = ', reg\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n use_batchnorm=True)\n\n loss, grads = model.loss(X, y)\n print 'Initial loss: ', loss\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))\n if reg == 0: print",
"Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.",
"# Try training a very deep net with batchnorm\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nbn_solver.train()\n\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nsolver.train()",
"Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.",
"plt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 1)\nplt.plot(solver.loss_history, 'o', label='baseline')\nplt.plot(bn_solver.loss_history, 'o', label='batchnorm')\n\nplt.subplot(3, 1, 2)\nplt.plot(solver.train_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')\n\nplt.subplot(3, 1, 3)\nplt.plot(solver.val_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()",
"Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second layer will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.",
"# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers = {}\nsolvers = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print 'Running weight scale %d / %d' % (i + 1, len(weight_scales))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers[weight_scale] = solver\n\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))\n \n best_val_accs.append(max(solvers[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\n\nplt.gcf().set_size_inches(10, 15)\nplt.show()",
"Question:\nDescribe the results of this experiment, and try to give a reason why the experiment gave the results that it did.\nAnswer:\nWith atch normalization, nerual network learning is far less senstive to weight initialization, and generally performs better. Batch normalization can be interpreted as doing preprocessing at every layer of the network, but integrated into the network itself in a differentiably manner."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jorgemauricio/INIFAP_Course
|
ejercicios/Pandas/Ejercicio_Estaciones_Aguascalientes_Solucion.ipynb
|
mit
|
[
"Ejercicio de análisis, exploración y visualización de bases de datos.\nPara el siguiente ejercicio vamos a utilizar la base de datos de las Estaciones del Estado de Aguascalientes con un tiempo de registro de 15 minutos.\nLa base de datos contiene los siguientes campos:\nnombre: Nombre de la estación\nfecha: Fecha en que se tomo el registro\nprec: Precipitación\n\nEsta base se encuentra localizada en la carpeta data\nTienes 15 minutos como máximo para resolver las siguientes preguntas:\nNúmero de estaciones que se encuentran en la base de datos?\nPrecipitación acumulada de la base de datos?\nLos 5 años con mayor precipitación de la base de datos?\nLa estación con la mayor acumulación de precipitación de la base de datos?\nAño y mes en la que se presenta la mayor acumulación de precipitación en la base de datos?",
"# importar librerías\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\nplt.style.use(\"ggplot\")\n\n# leer csv\ndf = pd.read_csv(\"/Users/jorgemauricio/Documents/Research/INIFAP_Course/data/ags_ejercicio_curso.csv\")\n\n# estructura de la base de datos\ndf.head()",
"Número de estaciones que se encuentran en la base de datos?",
"df[\"nombre\"].nunique()",
"Precipitación acumulada de la base de datos?",
"df[\"prec\"].sum()",
"Los 5 años con mayor precipitación de la base de datos?",
"# debemos de generar la columna año\ndf[\"year\"] = pd.DatetimeIndex(df[\"fecha\"]).year\n\n# agrupar la información por años\ngrouped = df.groupby(\"year\").sum()[\"prec\"]\ngrouped.sort_values(ascending=False).head()",
"La estación con la mayor acumulación de precipitación de la base de datos?",
"grouped_st = df.groupby(\"nombre\").sum()[\"prec\"]\ngrouped_st.sort_values(ascending=False).head(2)",
"Año y mes en la que se presenta la mayor acumulación de precipitación en la base de datos?",
"# debemos de generar la columna mes\ndf[\"month\"] = pd.DatetimeIndex(df[\"fecha\"]).month\n\n# agrupar la información por año y mes\ngrouped_ym = df.groupby([\"year\", \"month\"]).sum()[\"prec\"]\ngrouped_ym.sort_values(ascending=False).head()",
"Bonus\nDesplegar la información en un heatmap",
"# clasificar los datos a modo de tabla para desplegarlos en un heatmap\ntable = pd.pivot_table(df, values=\"prec\", index=[\"year\"], columns=[\"month\"], aggfunc=np.sum)\n\n# visualizar la tabla de datos\ntable\n\n# visualización de la información en heatmap\nsns.heatmap(table)\n\n# cambiar los colores del heatmap\nsns.heatmap(table, cmap=\"jet\")\n\n# agregar línea divisora\nsns.heatmap(table, cmap=\"jet\", linewidths=.2)\n\n# agregar el valor de cada una de las celdas\nsns.heatmap(table, cmap=\"jet\", linewidths=.2, annot=True)\n\n# disminuir el tamaño de la letra del valor de cada una de las celdas\nsns.heatmap(table, cmap=\"jet\", linewidths=.2, annot=True, annot_kws={\"size\": 5})"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
daniel-severo/dask-ml
|
docs/source/examples/dask-glm.ipynb
|
bsd-3-clause
|
[
"Dask GLM\ndask-glm is a library for fitting generalized linear models on large datasets.\nThe heart of the project is the set of optimization routines that work on either NumPy or dask arrays.\nSee these two blogposts describing how dask-glm works internally.\nThis notebook is shows an example of the higher-level scikit-learn style API built on top of these optimization routines.",
"import os\nimport s3fs\nimport pandas as pd\nimport dask.array as da\nimport dask.dataframe as dd\nfrom distributed import Client\n\nfrom dask import persist, compute\nfrom dask_glm.estimators import LogisticRegression",
"We'll setup a distributed.Client locally. In the real world you could connect to a cluster of dask-workers.",
"client = Client()",
"For demonstration, we'll use the perennial NYC taxi cab dataset.\nSince I'm just running things on my laptop, we'll just grab the first month's worth of data.",
"if not os.path.exists('trip.csv'):\n s3 = S3FileSystem(anon=True)\n s3.get(\"dask-data/nyc-taxi/2015/yellow_tripdata_2015-01.csv\", \"trip.csv\")\n\nddf = dd.read_csv(\"trip.csv\")\nddf = ddf.repartition(npartitions=8)",
"I happen to know that some of the values in this dataset are suspect, so let's drop them.\nScikit-learn doesn't support filtering observations inside a pipeline (yet), so we'll do this before anything else.",
"# these filter out less than 1% of the observations\nddf = ddf[(ddf.trip_distance < 20) &\n (ddf.fare_amount < 150)]\nddf = ddf.repartition(npartitions=8)",
"Now, we'll split our DataFrame into a train and test set, and select our feature matrix and target column (whether the passenger tipped).",
"df_train, df_test = ddf.random_split([0.8, 0.2], random_state=2)\n\ncolumns = ['VendorID', 'passenger_count', 'trip_distance', 'payment_type', 'fare_amount']\n\nX_train, y_train = df_train[columns], df_train['tip_amount'] > 0\nX_test, y_test = df_test[columns], df_test['tip_amount'] > 0\n\nX_train, y_train, X_test, y_test = persist(\n X_train, y_train, X_test, y_test\n)\n\nX_train.head()\n\ny_train.head()\n\nprint(f\"{len(X_train):,d} observations\")",
"With our training data in hand, we fit our logistic regression.\nNothing here should be surprising to those familiar with scikit-learn.",
"%%time\n# this is a *dask-glm* LogisticRegresion, not scikit-learn\nlm = LogisticRegression(fit_intercept=False)\nlm.fit(X_train.values, y_train.values)",
"Again, following the lead of scikit-learn we can measure the performance of the estimator on the training dataset using the .score method.\nFor LogisticRegression this is the mean accuracy score (what percent of the predicted matched the actual).",
"%%time\nlm.score(X_train.values, y_train.values).compute()",
"and on the test dataset:",
"%%time\nlm.score(X_test.values, y_test.values).compute()",
"Pipelines\nThe bulk of my time \"doing data science\" is data cleaning and pre-processing.\nActually fitting an estimator or making predictions is a relatively small proportion of the work.\nYou could manually do all your data-processing tasks as a sequence of function calls starting with the raw data.\nOr, you could use scikit-learn's Pipeline to accomplish this and then some.\nPipelines offer a few advantages over the manual solution.\nFirst, your entire modeling process from raw data to final output is in a self-contained object. No more wondering \"did I remember to scale this version of my model?\" It's there in the Pipeline for you to check.\nSecond, Pipelines combine well with scikit-learn's model selection utilties, specifically GridSearchCV and RandomizedSearchCV. You're able to search over hyperparameters of the pipeline stages, just like you would for an estimator.\nThird, Pipelines help prevent leaking information from your test and validation sets to your training set.\nA common mistake is to compute some pre-processing statistic on the entire dataset (before you've train-test split) rather than just the training set. For example, you might normalize a column by the average of all the observations.\nThese types of errors can lead you overestimate the performance of your model on new observations.\nSince dask-glm follows the scikit-learn API, we can reuse scikit-learn's Pipeline machinery, with a few caveats.\nMany of the tranformers built into scikit-learn will validate their inputs. As part of this,\narray-like things are cast to numpy arrays. Since dask-arrays are array-like they are converted\nand things \"work\", but this might not be ideal when your dataset doesn't fit in memory.\nSecond, some things are just fundamentally hard to do on large datasets.\nFor example, naively dummy-encoding a dataset requires a full scan of the data to determine the set of unique values per categorical column.\nWhen your dataset fits in memory, this isn't a huge deal. But when it's scattered across a cluster, this could become\na bottleneck.\nIf you know the set of possible values ahead of time, you can do much better.\nYou can encode the categorical columns as pandas Categoricals, and then convert with get_dummies, without having to do an expensive full-scan, just to compute the set of unique values.\nWe'll do that on the VendorID and payment_type columnms.",
"from sklearn.base import TransformerMixin, BaseEstimator\nfrom sklearn.pipeline import make_pipeline",
"First let's write a little transformer to convert columns to Categoricals.\nIf you aren't familar with scikit-learn transformers, the basic idea is that the transformer must implement two methods: .fit and .tranform.\n.fit is called during training.\nIt learns something about the data and records it on self.\nThen .transform uses what's learned during .fit to transform the feature matrix somehow.\nA Pipeline is simply a chain of transformers, each one is fit on some data, and passes the output of .transform onto the next step; the final output is an Estimator, like LogisticRegression.",
"class CategoricalEncoder(BaseEstimator, TransformerMixin):\n \"\"\"Encode `categories` as pandas `Categorical`\n \n Parameters\n ----------\n categories : Dict[str, list]\n Mapping from column name to list of possible values\n \"\"\"\n def __init__(self, categories):\n self.categories = categories\n \n def fit(self, X, y=None):\n # \"stateless\" transformer. Don't have anything to learn here\n return self\n \n def transform(self, X, y=None):\n X = X.copy()\n for column, categories in self.categories.items():\n X[column] = X[column].astype('category').cat.set_categories(categories)\n return X",
"We'll also want a daskified version of scikit-learn's StandardScaler, that won't eagerly\nconvert a dask.array to a numpy array (N.B. the scikit-learn version has more features and error handling, but this will work for now).",
"class StandardScaler(BaseEstimator, TransformerMixin):\n def __init__(self, columns=None, with_mean=True, with_std=True):\n self.columns = columns\n self.with_mean = with_mean\n self.with_std = with_std\n\n def fit(self, X, y=None):\n if self.columns is None:\n self.columns_ = X.columns\n else:\n self.columns_ = self.columns\n if self.with_mean:\n self.mean_ = X[self.columns_].mean(0)\n if self.with_std:\n self.scale_ = X[self.columns_].std(0)\n return self\n \n def transform(self, X, y=None):\n X = X.copy()\n if self.with_mean:\n X[self.columns_] = X[self.columns_] - self.mean_\n if self.with_std:\n X[self.columns_] = X[self.columns_] / self.scale_\n return X.values",
"Finally, I've written a dummy encoder transformer that converts categoricals\nto dummy-encoded interger columns. The full implementation is a bit long for a blog post, but you can see it here.",
"from dummy_encoder import DummyEncoder\n\npipe = make_pipeline(\n CategoricalEncoder({\"VendorID\": [1, 2],\n \"payment_type\": [1, 2, 3, 4, 5]}),\n DummyEncoder(),\n StandardScaler(columns=['passenger_count', 'trip_distance', 'fare_amount']),\n LogisticRegression(fit_intercept=False)\n)",
"So that's our pipeline.\nWe can go ahead and fit it just like before, passing in the raw data.",
"%%time\npipe.fit(X_train, y_train.values)",
"And we can score it as well. The Pipeline ensures that all of the nescessary transformations take place before calling the estimator's score method.",
"pipe.score(X_train, y_train.values).compute()\n\npipe.score(X_test, y_test.values).compute()",
"Grid Search\nAs explained earlier, Pipelines and grid search go hand-in-hand.\nLet's run a quick example with dask-searchcv.",
"from sklearn.model_selection import GridSearchCV\nimport dask_searchcv as dcv",
"We'll search over two hyperparameters\n\nWhether or not to standardize the variance of each column in StandardScaler\nThe strength of the regularization in LogisticRegression\n\nThis involves fitting many models, one for each combination of paramters.\ndask-searchcv is smart enough to know that early stages in the pipeline (like the categorical and dummy encoding) are shared among all the combinations, and so only fits them once.",
"param_grid = {\n 'standardscaler__with_std': [True, False],\n 'logisticregression__lamduh': [.001, .01, .1, 1],\n}\n\npipe = make_pipeline(\n CategoricalEncoder({\"VendorID\": [1, 2],\n \"payment_type\": [1, 2, 3, 4, 5]}),\n DummyEncoder(),\n StandardScaler(columns=['passenger_count', 'trip_distance', 'fare_amount']),\n LogisticRegression(fit_intercept=False)\n)\n\ngs = dcv.GridSearchCV(pipe, param_grid)\n\n%%time\ngs.fit(X_train, y_train.values)",
"Now we have access to the usual attributes like cv_results_ learned by the grid search object:",
"pd.DataFrame(gs.cv_results_)",
"And we can do our usual checks on model fit for the training set:",
"gs.score(X_train, y_train.values).compute()",
"And the test set:",
"gs.score(X_test, y_test.values).compute()",
"Hopefully your reaction to everything here is somewhere between a nodding head and a yawn.\nIf you're familiar with scikit-learn, everything here should look pretty routine.\nIt's the same API you know and love, scaled out to larger datasets thanks to dask-glm."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
usantamaria/iwi131
|
ipynb/25a-C3_2015_S1/Certamen3_2015_S1_CC.ipynb
|
cc0-1.0
|
[
"\"\"\"\nIPython Notebook v4.0 para python 2.7\nLibrerías adicionales: Ninguna.\nContenido bajo licencia CC-BY 4.0. Código bajo licencia MIT. (c) Sebastian Flores.\n\"\"\"\n\n# Configuracion para recargar módulos y librerías \n%reload_ext autoreload\n%autoreload 2\n\nfrom IPython.core.display import HTML\n\nHTML(open(\"style/iwi131.css\", \"r\").read())",
"<header class=\"w3-container w3-teal\">\n<img src=\"images/utfsm.png\" alt=\"\" align=\"left\"/>\n<img src=\"images/inf.png\" alt=\"\" align=\"right\"/>\n</header>\n<br/><br/><br/><br/><br/>\nIWI131\nProgramación de Computadores\nSebastián Flores\nhttp://progra.usm.cl/ \nhttps://www.github.com/usantamaria/iwi131\nSoluciones a Certamen 3, S1 2015, Casa Central\nPregunta 1 [25%]\n(a) La web Linkedpy analiza los procesos de postulación de recién titulados a empresas. Para ello tiene el archivo titulados.txt, donde cada línea tiene a los titulados en el formato nombre;rut, y el archivo postulaciones.txt, donde cada línea tiene un rut del titulado, el puesto y la empresa a la que cada titulado postula en el formato rut#puesto#empresa.\nA partir de estos archivos se desea generar un archivo por empresa, los cuales deben tener los titulados que postularon a algún puesto en la empresa con el formato rut;nombre;puesto.\nA continuación se presentan las líneas de código que resuelven este problema, pero que están desordenadas. Usted debe ordenarlas e indentarlas (dejar los espacios correspondientes de python) para que ambas funciones estén correctas.\nLa primera función retorna una lista con todas las empresas en el archivo con postulaciones (recibido como parámetro). Y la segunda función resuelve el problema antes descrito, recibiendo como parámetro el nombre del archivo con titulados y el nombre del archivo con postulaciones.\nLa primera función retorna una lista con todas las empresas en el archivo con postulaciones (recibido como parámetro).",
"def empresas(post):\nemp.append(e)\narch_P.close()\nfor li in arch_P:\nr, p, e = li.strip().split('#')\nif e not in emp:\narch_P = open(post)\nemp = list()\nreturn emp\n\n# Solucion Ordenada\ndef empresas(post):\n arch_P = open(post)\n emp = list()\n for li in arch_P:\n r, p, e = li.strip().split('#')\n if e not in emp:\n emp.append(e)\n arch_P.close()\n return emp\n\nprint empresas(\"data/postulaciones.txt\")",
"La segunda función resuelve el problema antes descrito (generar un archivo por empresa, los cuales deben tener los titulados que postularon a algún puesto en la empresa con el formato rut;nombre;puesto), recibiendo como parámetro el nombre del archivo con titulados y el nombre del archivo con postulaciones.",
"arch_E = open(e + '.txt', 'w')\nfor pos in arch_P:\narch_T.close()\ndef registros(tit, post):\narch_E.write(li.format(r, n, p))\nemp = empresas(post)\narch_T = open(tit)\nn, r2 = titu.strip().split(';')\narch_P.close()\nfor e in emp:\narch_E.close()\nif e2 == e:\nr, p, e2 = pos.strip().split('#')\nif r2 == r:\nli = '{0};{1};{2}\\n'\narch_P = open(post)\nreturn None\nfor titu in arch_T:\n\n# Solución ordenada\ndef registros(tit, post):\n li = '{0};{1};{2}\\n'\n emp = empresas(post)\n for e in emp:\n arch_E = open(e + '.txt', 'w')\n arch_P = open(post)\n for pos in arch_P:\n r, p, e2 = pos.strip().split('#')\n if e2 == e:\n arch_T = open(tit)\n for titu in arch_T:\n n, r2 = titu.strip().split(';')\n if r2 == r:\n arch_E.write(li.format(r, n, p))\n arch_T.close()\n arch_E.close()\n arch_P.close()\n return None\n\n# Utilización\nregistros(\"data/titulados.txt\", \"data/postulaciones.txt\")",
"Pregunta 2 [35%]\nAndrónico Bank es un banco muy humilde que hasta hace poco usaban sólo papel y lápiz para manejar toda la información de sus clientes, también humildes. Como una manera de mejorar sus procesos, Andrónico Bank quiere\nutilizar un sistema computacional basado en Python. Por eso se traspasa la información de sus clientes a un archivo de texto, indicando su rut, nombre y clase cliente. El archivo clientes.txt es un ejemplo de lo anterior:\n9234539-9;Sebastian Davalos;VIP\n11231709-k;Choclo Delano;Pendiente\n5555555-6;Sebastian Pinera;VIP\n9999999-k;Gladis Maryn;RIP\n12312312-1;Michel Bachelet;VIP\n8888888-8;Companero Yuri;Estandar\n7987655-1;Sergio Estandarte;RIP\nPregunta 2.a\nEscriba una función buscar_clientes(archivo, clase) que reciba como parámetros el nombre del archivo de clientes y una clase, y retorne un diccionario con los rut de los clientes como llaves y los nombres como valor de todos los clientes pertenecientes a la clase entregada como parámetro.\n```Python\n\n\n\nbuscar_clientes('clientes.txt', 'Pendiente')\n{'11231709-k': 'Choclo Delano'}\n```\n\n\n\nEstrategia de solución:\n* ¿Qué estructura tienen los datos de entrada?\n* ¿Qué estructura deben tener los datos de salida?\n* ¿Cómo proceso los inputs para generar el output deseado?",
"def buscar_clientes(nombre_archivo, clase_buscada):\n archivo = open(nombre_archivo)\n clientes_buscados = {}\n for linea in archivo:\n rut,nombre,clase = linea[:-1].split(\";\")\n if clase==clase_buscada:\n clientes_buscados[rut] = nombre\n archivo.close()\n return clientes_buscados\n\nprint buscar_clientes('data/clientes.txt', 'Pendiente')\nprint buscar_clientes('data/clientes.txt', 'VIP')\nprint buscar_clientes('data/clientes.txt', 'RIP')\nprint buscar_clientes('data/clientes.txt', 'Estandar')",
"Pregunta 2.b\nEscriba una función dar_credito(archivo, rut) que reciba como parámetros el nombre del archivo de clientes y el rut de un cliente, y que retorne True si éste es VIP o False si no lo es.\nSi no encuentra el cliente la función retorna False \n```Python\n\n\n\ndar_credito('clientes.txt', '9999999-k')\nFalse\n```\n\n\n\nEstrategia de solución:\n* ¿Qué estructura tienen los datos de entrada?\n* ¿Qué estructura deben tener los datos de salida?\n* ¿Cómo proceso los inputs para generar el output deseado?",
"def dar_credito(nombre_archivo, rut):\n clientes_VIP = buscar_clientes(nombre_archivo, \"VIP\")\n return rut in clientes_VIP\n\nprint dar_credito('data/clientes.txt', '9999999-k')\nprint dar_credito('data/clientes.txt', '11231709-k')\nprint dar_credito('data/clientes.txt', '9234539-9')",
"Pregunta 2.c\nEscriba una función contar_clientes(archivo) que reciba como parámetros el nombre del archivo de clientes y que retorne un diccionario con la cantidad de clientes de cada clase en el archivo.\n```Python\n\n\n\ncontar_clientes('clientes.txt')\n{'VIP': 3, 'Pendiente': 1, 'RIP': 2, 'Estandar': 1}\n```\n\n\n\nEstrategia de solución:\n* ¿Qué estructura tienen los datos de entrada?\n* ¿Qué estructura deben tener los datos de salida?\n* ¿Cómo proceso los inputs para generar el output deseado?",
"def contar_clientes(nombre_archivo):\n archivo = open(nombre_archivo)\n cantidad_clases = {}\n for linea in archivo:\n rut,nombre,clase = linea.strip().split(\";\")\n if clase in cantidad_clases:\n cantidad_clases[clase] += 1\n else:\n cantidad_clases[clase] = 1\n archivo.close()\n return cantidad_clases\n\nprint contar_clientes('data/clientes.txt')",
"Pregunta 3 [40%]\nComplementando la pregunta 2, se le solicita:\nPregunta 3.a\nEscriba la función nuevo_cliente(archivo, rut, nombre, clase) que reciba como parámetro el nombre del archivo de clientes y el rut, nombre y clase de un nuevo cliente. La función debe agregar el nuevo cliente al final del archivo. Esta función retorna None.\n```Python\n\n\n\nnuevo_cliente('clientes.txt', '2121211-2', 'Sergio Lagos', 'VIP')\n```\n\n\n\nEstrategia de solución:\n* ¿Qué estructura tienen los datos de entrada?\n* ¿Qué estructura deben tener los datos de salida?\n* ¿Cómo proceso los inputs para generar el output deseado?",
"def nuevo_cliente(nombre_archivo, rut, nombre, clase):\n archivo = open(nombre_archivo,\"a\")\n formato_linea = \"{0};{1};{2}\\n\"\n linea = formato_linea.format(rut, nombre, clase)\n archivo.write(linea)\n archivo.close()\n return None\n \nprint nuevo_cliente('data/clientes.txt', '2121211-2', 'Sergio Lagos', 'VIP')",
"Pregunta 3.b\nEscriba la función actualizar_clase(archivo, rut, clase) que reciba como parámetro el nombre del archivo de clientes, el rut de un cliente y una nueva clase. La función debe modificar la clase del cliente con el rut indicado, cambiándola por clase en el archivo. Esta función retorna True si logra hacer el cambio o False si no encuentra al cliente con el rut indicado.\n```Python\n\n\n\nactualizar_clase('clientes.txt', '9234539-9', 'Estandar')\nTrue\n```\n\n\n\nEstrategia de solución:\n* ¿Qué estructura tienen los datos de entrada?\n* ¿Qué estructura deben tener los datos de salida?\n* ¿Cómo proceso los inputs para generar el output deseado?",
"def actualizar_clase(nombre_archivo, rut_buscado, nueva_clase):\n archivo = open(nombre_archivo)\n lista_lineas = []\n formato_linea = \"{0};{1};{2}\\n\"\n rut_hallado = False\n for linea in archivo:\n rut,nombre,clase = linea[:-1].split(\";\")\n if rut==rut_buscado:\n nueva_linea = formato_linea.format(rut,nombre,nueva_clase)\n lista_lineas.append(nueva_linea)\n rut_hallado = True\n else:\n lista_lineas.append(linea)\n archivo.close()\n # Ahora escribir todas las lineas, si es necesario\n if rut_hallado:\n archivo = open(nombre_archivo, \"w\")\n for linea in lista_lineas:\n archivo.write(linea)\n archivo.close()\n return rut_hallado\n\nactualizar_clase('data/clientes.txt', '9234539-9', 'Estandar')",
"Pregunta 3.c\nEscriba una función filtrar_clientes(archivo, clase) que reciba como parámetros el nombre del archivo de clientes y una clase de cliente. La función debe crear un archivo clientes_[clase].txt con los rut y los nombres de los clientes pertenecientes a esa clase. Note que el archivo debe ser nombrado según la clase solicitada. Esta función retorna None.\n```Python\n\n\n\nfiltrar_clientes('clientes.txt', 'VIP')\n``\ngenera el archivoclientes_VIP.txt` con el siguiente contenido\n\n\n\n5555555-6;Sebastian Pinera\n12312312-1;Michel Bachelet\n2121211-2;Sergio Lagos\nEstrategia de solución:\n* ¿Qué estructura tienen los datos de entrada?\n* ¿Qué estructura deben tener los datos de salida?\n* ¿Cómo proceso los inputs para generar el output deseado?",
"def filtrar_clientes(nombre_archivo, clase_buscada):\n archivo_original = open(nombre_archivo)\n nombre_archivo_clase = \"data\\clientes_\"+clase_buscada+\".txt\"\n archivo_clase = open(nombre_archivo_clase,\"w\")\n formato_linea = \"{0};{1}\\n\"\n for linea in archivo_original:\n rut,nombre,clase = linea[:-1].split(\";\")\n if clase==clase_buscada:\n nueva_linea = formato_linea.format(rut,nombre)\n archivo_clase.write(nueva_linea)\n archivo_original.close()\n archivo_clase.close()\n return None\n \nfiltrar_clientes('data/clientes.txt', 'VIP')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
aldian/tensorflow
|
tensorflow/lite/micro/examples/magic_wand/train/train_magic_wand_model.ipynb
|
apache-2.0
|
[
"Train a gesture recognition model for microcontroller use\nThis notebook demonstrates how to train a 20kb gesture recognition model for TensorFlow Lite for Microcontrollers. It will produce the same model used in the magic_wand example application.\nThe model is designed to be used with Google Colaboratory.\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/magic_wand/train/train_magic_wand_model.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/magic_wand/train/train_magic_wand_model.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nTraining is much faster using GPU acceleration. Before you proceed, ensure you are using a GPU runtime by going to Runtime -> Change runtime type and selecting GPU. Training will take around 5 minutes on a GPU runtime.\nConfigure dependencies\nRun the following cell to ensure the correct version of TensorFlow is used.\nWe'll also clone the TensorFlow repository, which contains the training scripts, and copy them into our workspace.",
"# Clone the repository from GitHub\n!git clone --depth 1 -q https://github.com/tensorflow/tensorflow\n# Copy the training scripts into our workspace\n!cp -r tensorflow/tensorflow/lite/micro/examples/magic_wand/train train",
"Prepare the data\nNext, we'll download the data and extract it into the expected location within the training scripts' directory.",
"# Download the data we will use to train the model\n!wget http://download.tensorflow.org/models/tflite/magic_wand/data.tar.gz\n# Extract the data into the train directory\n!tar xvzf data.tar.gz -C train 1>/dev/null",
"We'll then run the scripts that split the data into training, validation, and test sets.",
"# The scripts must be run from within the train directory\n%cd train\n# Prepare the data\n!python data_prepare.py\n# Split the data by person\n!python data_split_person.py",
"Load TensorBoard\nNow, we set up TensorBoard so that we can graph our accuracy and loss as training proceeds.",
"# Load TensorBoard\n%load_ext tensorboard\n%tensorboard --logdir logs/scalars",
"Begin training\nThe following cell will begin the training process. Training will take around 5 minutes on a GPU runtime. You'll see the metrics in TensorBoard after a few epochs.",
"!python train.py --model CNN --person true",
"Create a C source file\nThe train.py script writes a model, model.tflite, to the training scripts' directory.\nIn the following cell, we convert this model into a C++ source file we can use with TensorFlow Lite for Microcontrollers.",
"# Install xxd if it is not available\n!apt-get -qq install xxd\n# Save the file as a C source file\n!xxd -i model.tflite > /content/model.cc\n# Print the source file\n!cat /content/model.cc"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
km-Poonacha/python4phd
|
Session 2/ipython/.ipynb_checkpoints/Lesson 4 - Web API -checkpoint.ipynb
|
gpl-3.0
|
[
"Lesson 4 - Web API\nRequesting information from the web\nPython 'requests' module.\n\nThis module provides functions to send a HTTP request and get the response from the server \nRequests is a third party module. If not installed, we will need to do \"pip install requests\" in the mac terminal or in the command pronpt of windows.\nhttp://docs.python-requests.org/en/master/user/quickstart/#make-a-request\n\nUsing 'requests' module\nUse the requests module to make a HTTP request to http://www.github.com/ibm\n- Check the status of the request \n- Display the response header information\n<center>\n <img src=\"HTTPresponse.gif\" width=\"200\" title=\"HTTP response packet\">\n</center>\nGet status code for the request",
"import requests \nurl = 'http://www.github.com/ibm'\nresponse = requests.get(url)\n\nprint(response.status_code)",
"Get header information",
"import requests \nurl = 'http://www.github.com/ibm'\nresponse = requests.get(url)\n\nprint(response.status_code)\n\nif response.status_code == 200:\n print('Response status - OK ')\n print(response.headers)\nelse: \n print('Error making the HTTP request ',response.status_code )",
"Get the body Information",
"import requests \nurl = 'http://www.github.com/ibm'\nresponse = requests.get(url)\n\nprint(response.status_code)\n\nif response.status_code == 200:\n print('Response status - OK ')\n print(response.text)\nelse: \n print('Error making the HTTP request ',response.status_code )",
"Using a Web API to Collect Data\n\nAn application programming interface is a set of functions that you call to get access to some service.\nAn API is basically a list of functions and datatsructures for interfacting with websites's data. \n\nThe way these work is similar to viewing a web page. When you point your browser to a website, you do it with a URL (http://www.github.com/ibm for instance). Github sends you back data containing HTML, CSS, and Javascript. Your browser uses this data to construct the page that you see. The API works similarly, you request data with a URL (http://api.github.com/org/ibm), but instead of getting HTML and such, you get data formatted as JSON.\nAccess data using web APIs\nWrite a program to access all the public OSS projects hosted by IBM on github.com using the web apis\nStep 1: Access the Web API service and check rate limits",
"import requests\n \nurl = \"https://api.github.com/orgs/ibm\"\nresponse = requests.get(url)\n\nif response.status_code == 200:\n print('Response status - OK ')\n print(response.headers['X-RateLimit-Remaining'])\nelse:\n print('Error making the HTTP request ',response.status_code )",
"Step 2: Authentication (if required)\nAuthenticate requests to increase the API request limit. Access data that requires authentication.\nBasic Authentication\n\nPass the userid and password as parameters in the response.get function\nLittle risky and prone to hacking. Create dummy user ID and password\n\nOAUTH\n\nOAuth 2 is an authorization framework that enables a user to connect to their account using a third party application\nWhile this is more secure thant the basic authentication (i.e. passing the userid and password while you make a http request), it is a little more difficult to code.\nIt needs a personal token and a consumer key to be generated and passed to the webserver\n\nUnfortunately different websites have different ways of generating and using the token and consumer keys. Hence we will need to write the authorization code for each website seperately. HOwever, every website provides detailed information on how you can generate and send the token and keys.",
"import requests\n \ndef GithubAPI(url):\n \"\"\" Make a HTTP request for the given URL and send the response body\n back to the calling function\"\"\"\n # Use basic authentication\n response = requests.get(url, auth=(\"ENTER USER ID\",\"ENTER PASSWORD\"))\n if response.status_code == 200:\n print('Response status - OK ')\n print(response.headers['X-RateLimit-Remaining'])\n return response.text\n else:\n print('Error making the HTTP request ',response.status_code )\n return None\n\ndef main():\n url = \"https://api.github.com/orgs/ibm\"\n txt_response = GithubAPI(url)\n \n if txt_response:\n print(txt_response)\n\nmain() ",
"Step 3: Parse the response\nThe json module gives us functions to convert the JSON response to a python readable data structure.\nWrite a program to get the number of OSS projects started by IBM",
"import requests\nimport json\n\ndef GithubAPI(url):\n \"\"\" Make a HTTP request for the given URL and send the response body\n back to the calling function\"\"\"\n response = requests.get(url)\n if response.status_code == 200:\n print('Response status - OK ')\n return response.json()\n else:\n print('Error making the HTTP request ',response.status_code )\n return None\n\ndef main():\n url = \"https://api.github.com/orgs/ibm\"\n txt_response = GithubAPI(url)\n \n if txt_response:\n print('The number of public repos are : ',txt_response['public_repos'])\n\nmain() ",
"Step 3: Follow the url information from the Web API to find what you need\n*Let us collect the information regarding the different projects started by IBM *",
"import requests\nimport json\n \ndef GithubAPI(url):\n \"\"\" Make a HTTP request for the given URL and send the response body\n back to the calling function\"\"\"\n response = requests.get(url, auth(\"ENTER USER ID\",\"ENTER PASSWORD\"))\n if response.status_code == 200:\n print('Response status - OK ')\n return response.json()\n else:\n print('Error making the HTTP request ',response.status_code )\n return None\n\ndef main():\n url = \"https://api.github.com/orgs/ibm\"\n response_json = GithubAPI(url)\n \n if response_json:\n print('The number of public repos are : ',response_json['public_repos'])\n repo_url = response_json['repos_url']\n repo_response = GithubAPI(repo_url)\n for repo in repo_response:\n print([repo['id'],repo['name']])\n \nmain() ",
"Step 4: Paginate to get data from other pages\nTraverse the pages if the data is spread across multiple pages",
"import requests\nimport json\n \ndef GithubAPI(url):\n \"\"\" Make a HTTP request for the given URL and send the response body\n back to the calling function\"\"\"\n response = requests.get(url, auth = (\"ENTER USER ID\",\"ENTER PASSWORD\"))\n if response.status_code == 200:\n print('Response status - OK ')\n return response.json()\n else:\n print('Error making the HTTP request ',response.status_code )\n return None\n\ndef main():\n url = \"https://api.github.com/orgs/ibm\"\n response_json = GithubAPI(url)\n \n if response_json:\n print('The number of public repos are : ',response_json['public_repos'])\n repo_url = response_json['repos_url']\n total_no = response_json['public_repos']\n per_page = 100\n page_count = 1\n while page_count < total_no/100:\n #Display 100 repos per page and traverse the pages untill we get the last page\n page_url = repo_url+\"?per_page=100&page_no=\"+str(page_count)\n print(page_url)\n repo_response = GithubAPI(page_url)\n # Increment page number\n page_count = page_count+1\n for repo in repo_response:\n print([repo['id'],repo['name']])\n \nmain() ",
"3. Write a CSV\nLets try to write the repos into a CSV file.\nWrite a code to append data row wise to a csv file",
"import csv\nWRITE_CSV = \"C:/Users/kmpoo/Dropbox/HEC/Teaching/Python for PhD Mar 2018/python4phd/Session 2/ipython/Repo_csv.csv\"\nwith open(WRITE_CSV, 'at',encoding = 'utf-8', newline='') as csv_obj:\n write = csv.writer(csv_obj) # Note it is csv.writer not reader\n \n write.writerow(['REPO ID','REPO NAME'])",
"What do you think will happen if we use 'wt' as mode instead of 'at' ?\nWrite a program so that you save the IBM repositories into the CSV file. So that each row is a new repository and column 1 is the ID and column 2 is the name",
"#Enter code here\n\nimport requests\nimport json\nimport csv\n\nWRITE_CSV = \"C:/Users/kmpoo/Dropbox/HEC/Teaching/Python for PhD Mar 2018/python4phd/Session 2/ipython/Repo_csv.csv\"\n\ndef appendcsv(data_list):\n with open(WRITE_CSV, 'at',encoding = 'utf-8', newline='') as csv_obj:\n write = csv.writer(csv_obj) # Note it is csv.writer not reader \n write.writerow(data_list)\n \ndef GithubAPI(url):\n \"\"\" Make a HTTP request for the given URL and send the response body\n back to the calling function\"\"\"\n response = requests.get(url, auth = (\"ENTER USER ID\",\"ENTER PASSWORD\"))\n if response.status_code == 200:\n print('Response status - OK ')\n return response.json()\n else:\n print('Error making the HTTP request ',response.status_code )\n return None\n\ndef main():\n url = \"https://api.github.com/orgs/ibm\"\n response_json = GithubAPI(url)\n \n if response_json:\n print('The number of public repos are : ',response_json['public_repos'])\n repo_url = response_json['repos_url']\n total_no = response_json['public_repos']\n per_page = 100\n page_count = 1\n while page_count < total_no/100:\n #Display 100 repos per page and traverse the pages untill we get the last page\n page_url = repo_url+\"?per_page=100&page_no=\"+str(page_count)\n print(page_url)\n repo_response = GithubAPI(page_url)\n # Increment page number\n page_count = page_count+1\n for repo in repo_response:\n print([repo['id'],repo['name']])\n appendcsv([repo['id'],repo['name']])\n \n \nmain() "
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
turbomanage/training-data-analyst
|
courses/machine_learning/deepdive2/structured/solutions/4b_keras_dnn_babyweight.ipynb
|
apache-2.0
|
[
"LAB 4b: Create Keras DNN model.\nLearning Objectives\n\nSet CSV Columns, label column, and column defaults\nMake dataset of features and label from CSV files\nCreate input layers for raw features\nCreate feature columns for inputs\nCreate DNN dense hidden layers and output layer\nCreate custom evaluation metric\nBuild DNN model tying all of the pieces together\nTrain and evaluate\n\nIntroduction\nIn this notebook, we'll be using Keras to create a DNN model to predict the weight of a baby before it is born.\nWe'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create inputs layers for the raw features. Next, we'll set up feature columns for the model inputs and build a deep neural network in Keras. We'll create a custom evaluation metric and build our DNN model. Finally, we'll train and evaluate our model.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.\nLoad necessary libraries",
"import datetime\nimport os\nimport shutil\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nprint(tf.__version__)",
"Verify CSV files exist\nIn the seventh lab of this series 4a_sample_babyweight, we sampled from BigQuery our train, eval, and test CSV files. Verify that they exist, otherwise go back to that lab and create them.",
"%%bash\nls *.csv\n\n%%bash\nhead -5 *.csv",
"Create Keras model\nSet CSV Columns, label column, and column defaults.\nNow that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.\n* CSV_COLUMNS are going to be our header names of our columns. Make sure that they are in the same order as in the CSV files\n* LABEL_COLUMN is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.\n* DEFAULTS is a list with the same length as CSV_COLUMNS, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column.",
"# Determine CSV, label, and key columns\n# Create list of string column headers, make sure order matches.\nCSV_COLUMNS = [\"weight_pounds\",\n \"is_male\",\n \"mother_age\",\n \"plurality\",\n \"gestation_weeks\"]\n\n# Add string name for label column\nLABEL_COLUMN = \"weight_pounds\"\n\n# Set default values for each CSV column as a list of lists.\n# Treat is_male and plurality as strings.\nDEFAULTS = [[0.0], [\"null\"], [0.0], [\"null\"], [0.0]]",
"Make dataset of features and label from CSV files.\nNext, we will write an input_fn to read the data. Since we are reading from CSV files we can save ourself from trying to recreate the wheel and can use tf.data.experimental.make_csv_dataset. This will create a CSV dataset object. However we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.",
"def features_and_labels(row_data):\n \"\"\"Splits features and labels from feature dictionary.\n\n Args:\n row_data: Dictionary of CSV column names and tensor values.\n Returns:\n Dictionary of feature tensors and label tensor.\n \"\"\"\n label = row_data.pop(LABEL_COLUMN)\n\n return row_data, label # features, label\n\n\ndef load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):\n \"\"\"Loads dataset using the tf.data API from CSV files.\n\n Args:\n pattern: str, file pattern to glob into list of files.\n batch_size: int, the number of examples per batch.\n mode: tf.estimator.ModeKeys to determine if training or evaluating.\n Returns:\n `Dataset` object.\n \"\"\"\n # Make a CSV dataset\n dataset = tf.data.experimental.make_csv_dataset(\n file_pattern=pattern,\n batch_size=batch_size,\n column_names=CSV_COLUMNS,\n column_defaults=DEFAULTS)\n\n # Map dataset to features and label\n dataset = dataset.map(map_func=features_and_labels) # features, label\n\n # Shuffle and repeat for training\n if mode == tf.estimator.ModeKeys.TRAIN:\n dataset = dataset.shuffle(buffer_size=1000).repeat()\n\n # Take advantage of multi-threading; 1=AUTOTUNE\n dataset = dataset.prefetch(buffer_size=1)\n\n return dataset",
"Create input layers for raw features.\nWe'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers (tf.Keras.layers.Input) by defining:\n* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.\n* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.\n* dtype: The data type expected by the input, as a string (float32, float64, int32...)",
"def create_input_layers():\n \"\"\"Creates dictionary of input layers for each feature.\n\n Returns:\n Dictionary of `tf.Keras.layers.Input` layers for each feature.\n \"\"\"\n inputs = {\n colname: tf.keras.layers.Input(\n name=colname, shape=(), dtype=\"float32\")\n for colname in [\"mother_age\", \"gestation_weeks\"]}\n\n inputs.update({\n colname: tf.keras.layers.Input(\n name=colname, shape=(), dtype=\"string\")\n for colname in [\"is_male\", \"plurality\"]})\n\n return inputs",
"Create feature columns for inputs.\nNext, define the feature columns. mother_age and gestation_weeks should be numeric. The others, is_male and plurality, should be categorical. Remember, only dense feature columns can be inputs to a DNN.",
"def categorical_fc(name, values):\n \"\"\"Helper function to wrap categorical feature by indicator column.\n\n Args:\n name: str, name of feature.\n values: list, list of strings of categorical values.\n Returns:\n Indicator column of categorical feature.\n \"\"\"\n cat_column = tf.feature_column.categorical_column_with_vocabulary_list(\n key=name, vocabulary_list=values)\n\n return tf.feature_column.indicator_column(categorical_column=cat_column)\n\n\ndef create_feature_columns():\n \"\"\"Creates dictionary of feature columns from inputs.\n\n Returns:\n Dictionary of feature columns.\n \"\"\"\n feature_columns = {\n colname : tf.feature_column.numeric_column(key=colname)\n for colname in [\"mother_age\", \"gestation_weeks\"]\n }\n\n feature_columns[\"is_male\"] = categorical_fc(\n \"is_male\", [\"True\", \"False\", \"Unknown\"])\n feature_columns[\"plurality\"] = categorical_fc(\n \"plurality\", [\"Single(1)\", \"Twins(2)\", \"Triplets(3)\",\n \"Quadruplets(4)\", \"Quintuplets(5)\", \"Multiple(2+)\"])\n\n return feature_columns",
"Create DNN dense hidden layers and output layer.\nSo we've figured out how to get our inputs ready for machine learning but now we need to connect them to our desired output. Our model architecture is what links the two together. Let's create some hidden dense layers beginning with our inputs and end with a dense output layer. This is regression so make sure the output layer activation is correct and that the shape is right.",
"def get_model_outputs(inputs):\n \"\"\"Creates model architecture and returns outputs.\n\n Args:\n inputs: Dense tensor used as inputs to model.\n Returns:\n Dense tensor output from the model.\n \"\"\"\n # Create two hidden layers of [64, 32] just in like the BQML DNN\n h1 = tf.keras.layers.Dense(64, activation=\"relu\", name=\"h1\")(inputs)\n h2 = tf.keras.layers.Dense(32, activation=\"relu\", name=\"h2\")(h1)\n\n # Final output is a linear activation because this is regression\n output = tf.keras.layers.Dense(\n units=1, activation=\"linear\", name=\"weight\")(h2)\n\n return output",
"Create custom evaluation metric.\nWe want to make sure that we have some useful way to measure model performance for us. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset, however, this does not exist as a standard evaluation metric, so we'll have to create our own by using the true and predicted labels.",
"def rmse(y_true, y_pred):\n \"\"\"Calculates RMSE evaluation metric.\n\n Args:\n y_true: tensor, true labels.\n y_pred: tensor, predicted labels.\n Returns:\n Tensor with value of RMSE between true and predicted labels.\n \"\"\"\n return tf.sqrt(tf.reduce_mean((y_pred - y_true) ** 2))",
"Build DNN model tying all of the pieces together.\nExcellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is a simple feedforward model with no branching, side inputs, etc. so we could have used Keras' Sequential Model API but just for fun we're going to use Keras' Functional Model API. Here we will build the model using tf.keras.models.Model giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics.",
"def build_dnn_model():\n \"\"\"Builds simple DNN using Keras Functional API.\n\n Returns:\n `tf.keras.models.Model` object.\n \"\"\"\n # Create input layer\n inputs = create_input_layers()\n\n # Create feature columns\n feature_columns = create_feature_columns()\n\n # The constructor for DenseFeatures takes a list of numeric columns\n # The Functional API in Keras requires: LayerConstructor()(inputs)\n dnn_inputs = tf.keras.layers.DenseFeatures(\n feature_columns=feature_columns.values())(inputs)\n\n # Get output of model given inputs\n output = get_model_outputs(dnn_inputs)\n\n # Build model and compile it all together\n model = tf.keras.models.Model(inputs=inputs, outputs=output)\n model.compile(optimizer=\"adam\", loss=\"mse\", metrics=[rmse, \"mse\"])\n\n return model\n\nprint(\"Here is our DNN architecture so far:\\n\")\nmodel = build_dnn_model()\nprint(model.summary())",
"We can visualize the DNN using the Keras plot_model utility.",
"tf.keras.utils.plot_model(\n model=model, to_file=\"dnn_model.png\", show_shapes=False, rankdir=\"LR\")",
"Run and evaluate model\nTrain and evaluate.\nWe've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the callback to TensorBoard.",
"TRAIN_BATCH_SIZE = 32\nNUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around\nNUM_EVALS = 5 # how many times to evaluate\n# Enough to get a reasonable sample, but not so much that it slows down\nNUM_EVAL_EXAMPLES = 10000\n\ntrainds = load_dataset(\n pattern=\"train*\",\n batch_size=TRAIN_BATCH_SIZE,\n mode=tf.estimator.ModeKeys.TRAIN)\n\nevalds = load_dataset(\n pattern=\"eval*\",\n batch_size=1000,\n mode=tf.estimator.ModeKeys.EVAL).take(count=NUM_EVAL_EXAMPLES // 1000)\n\nsteps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)\n\nlogdir = os.path.join(\n \"logs\", datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\"))\ntensorboard_callback = tf.keras.callbacks.TensorBoard(\n log_dir=logdir, histogram_freq=1)\n\nhistory = model.fit(\n trainds,\n validation_data=evalds,\n epochs=NUM_EVALS,\n steps_per_epoch=steps_per_epoch,\n callbacks=[tensorboard_callback])",
"Visualize loss curve",
"# Plot\nimport matplotlib.pyplot as plt\nnrows = 1\nncols = 2\nfig = plt.figure(figsize=(10, 5))\n\nfor idx, key in enumerate([\"loss\", \"rmse\"]):\n ax = fig.add_subplot(nrows, ncols, idx+1)\n plt.plot(history.history[key])\n plt.plot(history.history[\"val_{}\".format(key)])\n plt.title(\"model {}\".format(key))\n plt.ylabel(key)\n plt.xlabel(\"epoch\")\n plt.legend([\"train\", \"validation\"], loc=\"upper left\");",
"Save the model",
"OUTPUT_DIR = \"babyweight_trained\"\nshutil.rmtree(OUTPUT_DIR, ignore_errors=True)\nEXPORT_PATH = os.path.join(\n OUTPUT_DIR, datetime.datetime.now().strftime(\"%Y%m%d%H%M%S\"))\ntf.saved_model.save(\n obj=model, export_dir=EXPORT_PATH) # with default serving function\nprint(\"Exported trained model to {}\".format(EXPORT_PATH))\n\n!ls $EXPORT_PATH",
"Lab Summary:\nIn this lab, we started by defining the CSV column names, label column, and column defaults for our data inputs. Then, we constructed a tf.data Dataset of features and the label from the CSV files and created inputs layers for the raw features. Next, we set up feature columns for the model inputs and built a deep neural network in Keras. We created a custom evaluation metric and built our DNN model. Finally, we trained and evaluated our model.\nCopyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
|
doc/notebooks/automaton.delay_automaton.ipynb
|
gpl-3.0
|
[
"automaton.delay_automaton\nCreate a new transducer, equivalent to the first one, with the states labeled with the delay of the state, i.e. the difference of input length on each tape.\nPreconditions:\n- The input automaton is a transducer.\n- Input.has_bounded_lag\nSee also:\n- automaton.has_bounded_lag\n- automaton.is_synchronized\nExamples",
"import vcsn\n\nctx = vcsn.context(\"lat<law_char, law_char>, b\")\nctx\n\na = ctx.expression(r\"'abc, \\e''d,v'*'\\e,wxyz'\").standard()\na",
"The lag is bounded, because every cycle (here, the loop) produces a delay of 0.",
"a.delay_automaton()",
"State 1 has a delay of $(3, 0)$ because the first tape is 3 characters longer than the shortest tape (the second one) for all possible inputs leading to this state.",
"s = ctx.expression(r\"(abc|x+ab|y)(d|z)\").automaton()\ns\n\ns.delay_automaton()",
"Here, state 1 is split in two, because for one input the delay is $(1, 0)$, and for the other the delay is $(2, 0)$."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nguyenphucdev/BookManagementSample
|
NguyenPhuc_Ecommerce+Purchases+Exercise+_.ipynb
|
mit
|
[
"<a href=\"https://colab.research.google.com/github/nguyenphucdev/BookManagementSample/blob/master/NguyenPhuc_Ecommerce%2BPurchases%2BExercise%2B_.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nEcommerce Purchases Exercise\nIn this Exercise you will be given some Fake Data about some purchases done through Amazon! Just go ahead and follow the directions and try your best to answer the questions and complete the tasks. Feel free to reference the solutions. Most of the tasks can be solved in different ways. For the most part, the questions get progressively harder.\nPlease excuse anything that doesn't make \"Real-World\" sense in the dataframe, all the data is fake and made-up.\nAlso note that all of these questions can be answered with one line of code.\n\n Import pandas and read in the Ecommerce Purchases csv file and set it to a DataFrame called ecom.",
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\n\ndata = pd.read_csv('https://s3-ap-southeast-1.amazonaws.com/intro-to-ml-minhdh/EcommercePurchases.csv')",
"Check the head of the DataFrame.",
"data.head()",
"How many rows and columns are there?",
"data.shape",
"What is the average Purchase Price?",
"data[\"Purchase Price\"].mean()",
"What were the highest and lowest purchase prices?",
"data[\"Purchase Price\"].max()\n\ndata[\"Purchase Price\"].min()",
"How many people have English 'en' as their Language of choice on the website?",
"data[data['Language'] == 'en'].count()[0]",
"How many people have the job title of \"Lawyer\" ?",
"data[data['Job'] == 'Lawyer'].count()[0]",
"How many people made the purchase during the AM and how many people made the purchase during PM ? \n(Hint: Check out value_counts() )",
"data['AM or PM'].value_counts()",
"What are the 5 most common Job Titles?",
"data['Job'].value_counts().head()",
"Someone made a purchase that came from Lot: \"90 WT\" , what was the Purchase Price for this transaction?",
"data['Purchase Price'][data['Lot'] == '90 WT']",
"What is the email of the person with the following Credit Card Number: 4926535242672853",
"data['Email'][data['Credit Card'] == 4926535242672853]",
"How many people have American Express as their Credit Card Provider and made a purchase above $95 ?",
"data2 = data[data['Purchase Price'] > 95]\ndata2[data2['CC Provider'] == 'American Express'].count()[0]",
"Hard: How many people have a credit card that expires in 2025?",
"data[data['CC Exp Date'].str.contains('/25')].shape[0]",
"Hard: What are the top 5 most popular email providers/hosts (e.g. gmail.com, yahoo.com, etc...)",
"\ndata[data['Email'].split('@')]",
"Data Visualization\n Implement a bar plot for top 5 most popular email providers/hosts \n Plot distribution of Purchase Price",
"sns.distplot(data['Purchase Price'])",
"Implement countplot on Language",
"sns.countplot(data['Language'])\n\nFeel free to plot more graphs to dive deeper into the dataset.",
"Great Job!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
gdementen/larray
|
doc/source/tutorial/tutorial_presenting_larray_objects.ipynb
|
gpl-3.0
|
[
"Presenting LArray objects (Axis, Groups, Array, Session)\nImport the LArray library:",
"from larray import *",
"<div class=\"alert alert-warning\">\n\n**Note:** \nThe tutorial is generated from Jupyter notebooks which work in the \"interactive\" mode (like in the LArray Editor console). \nIn the interactive mode, there is no need to use the print() function to display the content of a variable.\nSimply writing its name is enough. The same remark applies for the returned value of an expression.<br><br>\nIn a Python script (file with .py extension), you always need to use the print() function to display the content of \na variable or the value returned by a function or an expression.\n\n</div>",
"s = 1 + 2\n\n# In the interactive mode, there is no need to use the print() function \n# to display the content of the variable 's'.\n# Simply typing 's' is enough\ns\n\n# In the interactive mode, there is no need to use the print() function \n# to display the result of an expression\n1 + 2",
"Axis\nAn Axis represents a dimension of an Array object.\nIt consists of a name and a list of labels. \nThey are several ways to create an axis:",
"# labels given as a list\ntime = Axis([2007, 2008, 2009, 2010], 'time')\n# create an axis using one string\ngender = Axis('gender=M,F')\n# labels generated using the special syntax start..end\nage = Axis('age=0..100')\n\ntime, gender, age",
"<div class=\"alert alert-warning\">\n\n**Warning:** \n When using the string syntax `\"axis_name=list,of,labels\"` or `\"axis_name=start..end\"`, LArray will automatically infer the type of labels.<br>\n For instance, the command line `age = Axis(\"age=0..100\")` will create an age axis with labels of type `int`.<br><br>\n Mixing special characters like `+` with numbers will lead to create an axis with labels of type `str` instead of `int`.\n As a consequence, the command line `age = Axis(\"age=0..98,99+\")` will create an age axis with labels of type `str` instead of `int`! \n\n</div>",
"# When a string is passed to the Axis() constructor, LArray will automatically infer the type of the labels\nage = Axis(\"age=0..5\")\nage\n\n# Mixing special characters like + with numbers will lead to create an axis with labels of type str instead of int.\nage = Axis(\"age=0..4,5+\")\nage",
"See the Axis section of the API Reference to explore all methods of Axis objects.\nGroups\nA Group represents a selection of labels from an Axis. It can optionally have a name (using operator >>). \nGroups can be used when selecting a subset of an array and in aggregations. \nGroup objects are created as follow:",
"age = Axis('age=0..100')\n\n# create an anonymous Group object 'teens'\nteens = age[10:18]\nteens\n\n# create a Group object 'pensioners' with a name \npensioners = age[67:] >> 'pensioners'\npensioners",
"It is possible to set a name or to rename a group after its declaration:",
"# method 'named' returns a new group with the given name\nteens = teens.named('teens')\n\n# operator >> is just a shortcut for the call of the method named\nteens = teens >> 'teens'\n\nteens",
"<div class=\"alert alert-warning\">\n\n**Warning:** Mixing slices and individual labels inside the `[ ]` will generate **several groups** (a tuple of groups) instead of a single group.<br>If you want to create a single group using both slices and individual labels, you need to use the `.union()` method (see below). \n\n</div>",
"# mixing slices and individual labels leads to the creation of several groups (a tuple of groups)\nage[0:10, 20, 30, 40]\n\n# the union() method allows to mix slices and individual labels to create a single group\nage[0:10].union(age[20, 30, 40])",
"See the Group section of the API Reference to explore all methods of Group objects.\nArray\nAn Array object represents a multidimensional array with labeled axes.\nCreate an array from scratch\nTo create an array from scratch, you need to provide the data and a list of axes. \nOptionally, metadata (title, description, creation date, authors, ...) can be associated to the array:",
"# define axes\nage = Axis('age=0-9,10-17,18-66,67+')\ngender = Axis('gender=female,male')\ntime = Axis('time=2015..2017')\n# list of the axes\naxes = [age, gender, time]\n\n# define some data. This is the belgian population (in thousands). Source: eurostat.\ndata = [[[633, 635, 634],\n [663, 665, 664]],\n [[484, 486, 491],\n [505, 511, 516]],\n [[3572, 3581, 3583],\n [3600, 3618, 3616]],\n [[1023, 1038, 1053],\n [756, 775, 793]]]\n\n# metadata\nmeta = {'title': 'random array'}\n\narr = Array(data, axes, meta=meta)\narr",
"Metadata can be added to an array at any time using:",
"arr.meta.description = 'array containing random values between 0 and 100'\n\narr.meta",
"<div class=\"alert alert-warning\">\n\n**Warning:** \n <ul>\n <li>Currently, only the HDF (.h5) file format supports saving and loading array metadata.</li>\n <li>Metadata is not kept when actions or methods are applied on an array\n except for operations modifying the object in-place, such as `population[age < 10] = 0`,\n and when the method `copy()` is called. Do not add metadata to an array if you know\n you will apply actions or methods on it before dumping it.</li>\n </ul>\n\n</div>\n\nArray creation functions\nArrays can also be generated in an easier way through creation functions:\n\nndtest : creates a test array with increasing numbers as data\nempty : creates an array but leaves its allocated memory\n unchanged (i.e., it contains \"garbage\". Be careful !)\nzeros: fills an array with 0\nones : fills an array with 1\nfull : fills an array with a given value\nsequence : creates an array from an axis by iteratively applying a function to a given initial value.\n\nExcept for ndtest, a list of axes must be provided.\nAxes can be passed in different ways:\n\nas Axis objects\nas integers defining the lengths of auto-generated wildcard axes\nas a string : 'gender=M,F;time=2007,2008,2009' (name is optional)\nas pairs (name, labels)\n\nOptionally, the type of data stored by the array can be specified using argument dtype.",
"# start defines the starting value of data\nndtest((3, 3), start=-1)\n\n# start defines the starting value of data\n# label_start defines the starting index of labels\nndtest((3, 3), start=-1, label_start=2)\n\n# empty generates uninitialised array with correct axes\n# (much faster but use with care!).\n# This not really random either, it just reuses a portion\n# of memory that is available, with whatever content is there.\n# Use it only if performance matters and make sure all data\n# will be overridden.\nempty([age, gender, time])\n\nzeros([age, gender, time])\n\n# dtype=int forces to store int data instead of default float\nones([age, gender, time], dtype=int)\n\nfull([age, gender, time], fill_value=1.23)",
"All the above functions exist in (func)_like variants which take axes from another array",
"ones_like(arr)",
"Create an array using the special sequence function (see link to documention of sequence in API reference for more examples):",
"# With initial=1.0 and inc=0.5, we generate the sequence 1.0, 1.5, 2.0, 2.5, 3.0, ...\nsequence(age, initial=1.0, inc=0.5)",
"Inspecting Array objects",
"# create a test array\nndtest([age, gender, time])",
"Get array summary : metadata + dimensions + description of axes + dtype + size in memory",
"arr.info",
"Get axes",
"arr.axes",
"Get number of dimensions",
"arr.ndim",
"Get length of each dimension",
"arr.shape",
"Get total number of elements of the array",
"arr.size",
"Get type of internal data (int, float, ...)",
"arr.dtype",
"Get size in memory",
"arr.memory_used",
"Display the array in the viewer (graphical user interface) in read-only mode.\nThis will open a new window and block execution of the rest of code until the windows is closed! Required PyQt installed.\npython\nview(arr)\nOr load it in Excel:\npython\narr.to_excel()\nExtract an axis from an array\nIt is possible to extract an axis belonging to an array using its name:",
"# extract the 'time' axis belonging to the 'arr' array\ntime = arr.time\ntime",
"More on Array objects\nTo know how to save and load arrays in CSV, Excel or HDF format, please refer to the Loading and Dumping Arrays section of the tutorial.\nSee the Array section of the API Reference to explore all methods of Array objects.\nSession\nA Session object is a dictionary-like object used to gather several arrays, axes and groups. \nA session is particularly adapted to gather all input objects of a model or to gather the output arrays from different scenarios. Like with arrays, it is possible to associate metadata to sessions.\nCreating Sessions\nTo create a session, you can first create an empty session and then populate it with arrays, axes and groups:",
"gender = Axis(\"gender=Male,Female\")\ntime = Axis(\"time=2013..2017\")\n\n# create an empty session\ndemography_session = Session()\n\n# add axes to the session\ndemography_session.gender = gender\ndemography_session.time = time\n\n# add arrays to the session\ndemography_session.population = zeros((gender, time))\ndemography_session.births = zeros((gender, time))\ndemography_session.deaths = zeros((gender, time))\n\n# add metadata after creation\ndemography_session.meta.title = 'Demographic Model of Belgium'\ndemography_session.meta.description = 'Models the demography of Belgium'\n\n# print content of the session\nprint(demography_session.summary())",
"or you can create and populate a session in one step:",
"gender = Axis(\"gender=Male,Female\")\ntime = Axis(\"time=2013..2017\")\n\ndemography_session = Session(gender=gender, time=time, population=zeros((gender, time)), \n births=zeros((gender, time)), deaths=zeros((gender, time)), \n meta=Metadata(title='Demographic Model of Belgium', description='Modelize the demography of Belgium'))\n\n# print content of the session\nprint(demography_session.summary())",
"<div class=\"alert alert-warning\">\n\n**Warning:**\n <ul>\n <li>Contrary to array metadata, saving and loading session metadata is supported for\n all current session file formats: Excel, CSV and HDF (.h5).</li>\n <li>Metadata is not kept when actions or methods are applied on a session\n except for operations modifying a session in-place, such as: `s.arr1 = 0`.\n Do not add metadata to a session if you know you will apply actions or methods\n on it before dumping it.</li>\n </ul>\n\n</div>\n\nMore on Session objects\nTo know how to save and load sessions in CSV, Excel or HDF format, please refer to the Loading and Dumping Sessions section of the tutorial.\nTo see how to work with sessions, please read the Working With Sessions section of the tutorial.\nFinally, see the Session section of the API Reference to explore all methods of Session objects."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
briennakh/BIOF509
|
Wk11/Wk11-regression-classification-in-class-exercises.ipynb
|
mit
|
[
"Week 11 - Regression and Classification\nIn previous weeks we have looked at the steps needed in preparing different types of data for use by machine learning algorithms.",
"import matplotlib.pyplot as plt\nimport numpy as np\n\n%matplotlib inline\n\nfrom sklearn import datasets\n\ndiabetes = datasets.load_diabetes()\n\n# Description at http://www4.stat.ncsu.edu/~boos/var.select/diabetes.html\n\nX = diabetes.data\ny = diabetes.target\n\nprint(X.shape, y.shape)\n\nfrom sklearn import linear_model\n\nclf = linear_model.LinearRegression()\nclf.fit(X, y)\n\nplt.plot(y, clf.predict(X), 'k.')\nplt.show()",
"All the different models in scikit-learn follow a consistent structure. \n\nThe class is passed any parameters needed at initialization. In this case none are needed.\nThe fit method takes the features and the target as the parameters X and y.\nThe predict method takes an array of features and returns the predicted values\n\nThese are the basic components with additional methods added when needed. For example, classifiers also have \n\nA predict_proba method that gives the probability that a sample belongs to each of the classes.\nA predict_log_proba method that gives the log of the probability that a sample belongs to each of the classes.\n\nEvaluating models\nBefore we consider whether we have a good model, or which model to choose, we must first decide on how we will evaluate our models.\nMetrics\nAs part of our evaluation having a single number with which to compare models can be very useful. Choosing a metric that is as close a representation of our goal as possible enables many models to be automatically compared. This can be important when choosing model parameters or comparing different types of algorithm. \nEven if we have a metric we feel is reasonable it can be worthwhile considering in detail the predictions made by any model. Some questions to ask:\n\nIs the model sufficiently sensitive for our use case?\nIs the model sufficiently specific for our use case?\nIs there any systemic bias?\nDoes the model perform equally well over the distribution of features?\nHow does the model perform outside the range of the training data?\nIs the model overly dependent on one or two samples in the training dataset?\n\nThe metric we decide to use will depend on the type of problem we have (regression or classification) and what aspects of the prediction are most important to us. For example, a decision we might have to make is between:\n\nA model with intermediate errors for all samples\nA model with low errors for the majority of samples but with a small number of samples that have large errors.\n\nFor these two situations in a regression task we might choose mean_squared_error and mean_absolute_error.\nThere are lists for regression metrics and classification metrics.\nWe can apply the mean_squared_error metric to the linear regression model on the diabetes dataset:",
"diabetes = datasets.load_diabetes()\nX = diabetes.data\ny = diabetes.target\n\nclf = linear_model.LinearRegression()\nclf.fit(X, y)\n\nplt.plot(y, clf.predict(X), 'k.')\nplt.show()\n\nfrom sklearn import metrics\n\nmetrics.mean_squared_error(y, clf.predict(X))",
"Although this single number might seem unimpressive, metrics are a key component for model evaluation. As a simple example, we can perform a permutation test to determine whether we might see this performance by chance.",
"diabetes = datasets.load_diabetes()\nX = diabetes.data\ny = diabetes.target\n\nclf = linear_model.LinearRegression()\nclf.fit(X, y)\n\nerror = metrics.mean_squared_error(y, clf.predict(X))\n\nrounds = 1000\nnp.random.seed(0)\nerrors = []\n\nfor i in range(rounds):\n y_shuffle = y.copy()\n np.random.shuffle(y_shuffle)\n clf_shuffle = linear_model.LinearRegression()\n clf_shuffle.fit(X, y_shuffle)\n errors.append(metrics.mean_squared_error(y_shuffle, clf_shuffle.predict(X)))\n\nbetter_models_by_chance = len([i for i in errors if i <= error])\n\nif better_models_by_chance > 0:\n print('Probability of observing a mean_squared_error of {0} by chance is {1}'.format(error, \n better_models_by_chance / rounds))\nelse:\n print('Probability of observing a mean_squared_error of {0} by chance is <{1}'.format(error, \n 1 / rounds))",
"Training, validation, and test datasets\nWhen evaluating different models the approach taken above is not going to work. Particularly for models with high variance, that overfit the training data, we will get very good performance on the training data but perform no better than chance on new data.",
"from sklearn import tree\n\ndiabetes = datasets.load_diabetes()\nX = diabetes.data\ny = diabetes.target\n\nclf = tree.DecisionTreeRegressor()\nclf.fit(X, y)\n\nplt.plot(y, clf.predict(X), 'k.')\nplt.show()\n\n\nmetrics.mean_squared_error(y, clf.predict(X))\n\nfrom sklearn import neighbors\n\ndiabetes = datasets.load_diabetes()\nX = diabetes.data\ny = diabetes.target\n\nclf = neighbors.KNeighborsRegressor(n_neighbors=1)\nclf.fit(X, y)\n\nplt.plot(y, clf.predict(X), 'k.')\nplt.show()\n\n\nmetrics.mean_squared_error(y, clf.predict(X))",
"Both these models appear to give perfect solutions but all they do is map our test samples back to the training samples and return the associated value.\nTo understand how our model truly performs we need to evaluate the performance on previously unseen samples. The general approach is to divide a dataset into training, validation and test datasets. Each model is trained on the training dataset. Multiple models can then be compared by evaluating the model against the validation dataset. There is still the potential of choosing a model that performs well on the validation dataset by chance so a final check is made against a test dataset.\nThis unfortunately means that part of our, often expensively gathered, data can't be used to train our model. Although it is important to leave out a test dataset an alternative approach can be used for the validation dataset. Rather than just building one model we can build multiple models, each time leaving out a different validation dataset. Our validation score is then the average across each of the models. This is known as cross-validation.\nScikit-learn provides classes to support cross-validation but a simple solution can also be implemented directly. Below we will separate out a test dataset to evaluate the nearest neighbor model.",
"from sklearn import neighbors\n\ndiabetes = datasets.load_diabetes()\nX = diabetes.data\ny = diabetes.target\n\nnp.random.seed(0)\n\nsplit = np.random.random(y.shape) > 0.3\n\nX_train = X[split]\ny_train = y[split]\nX_test = X[np.logical_not(split)]\ny_test = y[np.logical_not(split)]\n\nprint(X_train.shape, X_test.shape)\n\nclf = neighbors.KNeighborsRegressor(1)\nclf.fit(X_train, y_train)\n\nplt.plot(y_test, clf.predict(X_test), 'k.')\nplt.show()\n\n\nmetrics.mean_squared_error(y_test, clf.predict(X_test))\n\ndiabetes = datasets.load_diabetes()\nX = diabetes.data\ny = diabetes.target\n\nnp.random.seed(0)\n\nsplit = np.random.random(y.shape) > 0.3\n\nX_train = X[split]\ny_train = y[split]\nX_test = X[np.logical_not(split)]\ny_test = y[np.logical_not(split)]\n\nprint(X_train.shape, X_test.shape)\n\nclf = linear_model.LinearRegression()\nclf.fit(X_train, y_train)\n\nplt.plot(y_test, clf.predict(X_test), 'k.')\nplt.show()\n\n\nmetrics.mean_squared_error(y_test, clf.predict(X_test))",
"Model types\nScikit-learn includes a variety of different models. The most commonly used algorithms probably include the following:\n\nRegression\nSupport Vector Machines\nNearest neighbors\nDecision trees\nEnsembles & boosting\n\nRegression\nWe have already seen several examples of regression. The basic form is: \n$$f(X) = \\beta_{0} + \\sum_{j=1}^p X_j\\beta_j$$\nEach feature is multipled by a coefficient and then the sum returned. This value is then transformed for classification to limit the value to the range 0 to 1.\nSupport Vector Machines\nSupport vector machines attempt to project samples into a higher dimensional space such that they can be divided by a hyperplane. A good explanation can be found in this article.\nNearest neighbors\nNearest neighbor methods identify a number of samples from the training set that are close to the new sample and then return the average or most common value depending on the task. \nDecision trees\nDecision trees attempt to predict the value of a new sample by learning simple rules from the training samples.\nEnsembles & boosting\nEnsembles are combinations of other models. Combining different models together can improve performance by boosting generalizability. An average or most common value from the models is returned.\nBoosting builds one model and then attempts to reduce the errors with the next model. At each stage the bias in the model is reduced. In this way many weak predictors can be combined into one much more powerful predictor.\nI often begin with an ensemble or boosting approach as they typically give very good performance without needing to be carefully optimized. Many of the other algorithms are sensitive to their parameters.\nParameter selection\nMany of the models require several different parameters to be specified. Their performance is typically heavily influenced by these parameters and choosing the best values is vital in developing the best model.\nSome models have alternative implementations that handle parameter selection in an efficient way.",
"from sklearn import datasets\n\ndiabetes = datasets.load_diabetes()\n\n# Description at http://www4.stat.ncsu.edu/~boos/var.select/diabetes.html\n\nX = diabetes.data\ny = diabetes.target\n\nprint(X.shape, y.shape)\n\nfrom sklearn import linear_model\n\nclf = linear_model.LassoCV(cv=20)\nclf.fit(X, y)\n\nprint('Alpha chosen was ', clf.alpha_)\n\nplt.plot(y, clf.predict(X), 'k.')",
"There is an expanded example in the documentation.\nThere are also general classes to handle parameter selection for situations when dedicated classes are not available. As we will often have parameters in preprocessing steps these general classes will be used much more often.",
"from sklearn import grid_search\n\nfrom sklearn import neighbors\n\ndiabetes = datasets.load_diabetes()\nX = diabetes.data\ny = diabetes.target\n\nnp.random.seed(0)\n\nsplit = np.random.random(y.shape) > 0.3\n\nX_train = X[split]\ny_train = y[split]\nX_test = X[np.logical_not(split)]\ny_test = y[np.logical_not(split)]\n\nprint(X_train.shape, X_test.shape)\n\nknn = neighbors.KNeighborsRegressor()\n\nparameters = {'n_neighbors':[1,2,3,4,5,6,7,8,9,10]}\nclf = grid_search.GridSearchCV(knn, parameters)\n\nclf.fit(X_train, y_train)\n\nplt.plot(y_test, clf.predict(X_test), 'k.')\nplt.show()\n\n\nprint(metrics.mean_squared_error(y_test, clf.predict(X_test)))\n\nclf.get_params()",
"Exercises\n\nLoad the handwritten digits dataset and choose an appropriate metric\nDivide the data into a training and test dataset\nBuild a RandomForestClassifier on the training dataset, using cross-validation to evaluate performance\nChoose another classification algorithm and apply it to the digits dataset. \nUse grid search to find the optimal parameters for the chosen algorithm.\nComparing the true values with the predictions from the best model identify the numbers that are most commonly confused.",
"# 1. Load the handwritten digits dataset and choose an appropriate metric\n# 2. Divide the data into a training and test dataset\n\nfrom sklearn import datasets, metrics, ensemble\n\ndigits = datasets.load_digits()\nX = digits.data\ny = digits.target\nprint(X.shape, y.shape)\n\nnp.random.seed(0)\n\nsplit = np.random.random(y.shape) > 0.3\n\nX_train = X[split]\ny_train = y[split]\nX_test = X[np.logical_not(split)]\ny_test = y[np.logical_not(split)]\n\n# 3. Build a RandomForestClassifier on the training dataset, \n# using cross-validation to evaluate performance\n\nscores = []\ncv = 10\nfor i in range(cv):\n split = np.random.random(y_train.shape) > 1/cv\n X_train_train = X_train[split]\n y_train_train = y_train[split]\n X_val = X_train[np.logical_not(split)]\n y_val = y_train[np.logical_not(split)]\n \n clf = ensemble.RandomForestClassifier(n_estimators=100)\n clf.fit(X_train_train, y_train_train)\n scores.append(metrics.accuracy_score(y_val, clf.predict(X_val)))\nprint(scores, np.array(scores).mean())\n\nfrom sklearn import cross_validation\n\nclf = ensemble.RandomForestClassifier(n_estimators=100)\n\ncross_validation.cross_val_score(estimator=clf, X=X_train, y=y_train, cv=10, scoring='accuracy')\n\n# 6. Comparing the true values with the predictions from the best model identify the \n# numbers that are most commonly confused.\n\nclf = ensemble.RandomForestClassifier(n_estimators=100)\nclf.fit(X_train, y_train)\n\nmetrics.confusion_matrix(y_test, clf.predict(X_test))\n\n# 4. Choose another classification algorithm and apply it to the digits dataset. \n# 5. Use grid search to find the optimal parameters for the chosen algorithm.\n\nfrom sklearn import svm, grid_search\n\nclf = svm.SVC()\n\nparameters = {'C':[10, 1, 0.1, 0.001, 0.0001, 0.00001],\n 'kernel':['linear', 'poly', 'rbf', 'sigmoid']}\nclf = grid_search.GridSearchCV(clf, parameters)\n\nclf.fit(X_train, y_train)\n\nclf.get_params()\n\nmetrics.accuracy_score(y_test, clf.predict(X_test))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/nerc/cmip6/models/ukesm1-0-ll/atmos.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: NERC\nSource ID: UKESM1-0-LL\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:26\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nerc', 'ukesm1-0-ll', 'atmos')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Overview\n2. Key Properties --> Resolution\n3. Key Properties --> Timestepping\n4. Key Properties --> Orography\n5. Grid --> Discretisation\n6. Grid --> Discretisation --> Horizontal\n7. Grid --> Discretisation --> Vertical\n8. Dynamical Core\n9. Dynamical Core --> Top Boundary\n10. Dynamical Core --> Lateral Boundary\n11. Dynamical Core --> Diffusion Horizontal\n12. Dynamical Core --> Advection Tracers\n13. Dynamical Core --> Advection Momentum\n14. Radiation\n15. Radiation --> Shortwave Radiation\n16. Radiation --> Shortwave GHG\n17. Radiation --> Shortwave Cloud Ice\n18. Radiation --> Shortwave Cloud Liquid\n19. Radiation --> Shortwave Cloud Inhomogeneity\n20. Radiation --> Shortwave Aerosols\n21. Radiation --> Shortwave Gases\n22. Radiation --> Longwave Radiation\n23. Radiation --> Longwave GHG\n24. Radiation --> Longwave Cloud Ice\n25. Radiation --> Longwave Cloud Liquid\n26. Radiation --> Longwave Cloud Inhomogeneity\n27. Radiation --> Longwave Aerosols\n28. Radiation --> Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --> Boundary Layer Turbulence\n31. Turbulence Convection --> Deep Convection\n32. Turbulence Convection --> Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --> Large Scale Precipitation\n35. Microphysics Precipitation --> Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --> Optical Cloud Properties\n38. Cloud Scheme --> Sub Grid Scale Water Distribution\n39. Cloud Scheme --> Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --> Isscp Attributes\n42. Observation Simulation --> Cosp Attributes\n43. Observation Simulation --> Radar Inputs\n44. Observation Simulation --> Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --> Orographic Gravity Waves\n47. Gravity Waves --> Non Orographic Gravity Waves\n48. Solar\n49. Solar --> Solar Pathways\n50. Solar --> Solar Constant\n51. Solar --> Orbital Parameters\n52. Solar --> Insolation Ozone\n53. Volcanos\n54. Volcanos --> Volcanoes Treatment \n1. Key Properties --> Overview\nTop level key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of atmospheric model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.4. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.5. High Top\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the orography.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n",
"4.2. Changes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n",
"5. Grid --> Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Discretisation --> Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n",
"6.3. Scheme Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation function order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.4. Horizontal Pole\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal discretisation pole singularity treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7. Grid --> Discretisation --> Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType of vertical coordinate system",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere dynamical core",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the dynamical core of the model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Timestepping Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTimestepping framework type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of the model prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Dynamical Core --> Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTop boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Top Heat\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary heat treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Top Wind\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary wind treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Dynamical Core --> Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nType of lateral boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Dynamical Core --> Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nHorizontal diffusion scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal diffusion scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Dynamical Core --> Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nTracer advection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.3. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.4. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracer advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Dynamical Core --> Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMomentum advection schemes name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Scheme Staggering Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Radiation --> Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nShortwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nShortwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Radiation --> Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.3. Other Flourinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Radiation --> Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18. Radiation --> Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Radiation --> Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Radiation --> Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21. Radiation --> Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Radiation --> Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLongwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLongwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23. Radiation --> Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Other Flourinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Radiation --> Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Physical Reprenstation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25. Radiation --> Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Radiation --> Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27. Radiation --> Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28. Radiation --> Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere convection and turbulence",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. Turbulence Convection --> Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nBoundary layer turbulence scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBoundary layer turbulence scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.3. Closure Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nBoundary layer turbulence scheme closure order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Counter Gradient\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"31. Turbulence Convection --> Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDeep convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Turbulence Convection --> Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nShallow convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nshallow convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nshallow convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n",
"32.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Microphysics Precipitation --> Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.2. Hydrometeors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35. Microphysics Precipitation --> Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLarge scale cloud microphysics processes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the atmosphere cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.3. Atmos Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n",
"36.4. Uses Separate Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.6. Prognostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.7. Diagnostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.8. Prognostic Variables\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37. Cloud Scheme --> Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37.2. Cloud Inhomogeneity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Cloud Scheme --> Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale water distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"38.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale water distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale water distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"38.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale water distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"39. Cloud Scheme --> Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale ice distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"39.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale ice distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"39.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale ice distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"39.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of observation simulator characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Observation Simulation --> Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator ISSCP top height estimation methodUo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. Top Height Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator ISSCP top height direction",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42. Observation Simulation --> Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator COSP run configuration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42.2. Number Of Grid Points\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of grid points",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.3. Number Of Sub Columns\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of sub-cloumns used to simulate sub-grid variability",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.4. Number Of Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of levels",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43. Observation Simulation --> Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nCloud simulator radar frequency (Hz)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43.2. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator radar type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"43.3. Gas Absorption\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses gas absorption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"43.4. Effective Radius\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses effective radius",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"44. Observation Simulation --> Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator lidar ice type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"44.2. Overlap\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator lidar overlap",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"45.2. Sponge Layer\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.3. Background\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBackground wave distribution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.4. Subgrid Scale Orography\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSubgrid scale orography effects taken into account.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46. Gravity Waves --> Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"46.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47. Gravity Waves --> Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"47.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n",
"47.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of solar insolation of the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"49. Solar --> Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"50. Solar --> Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the solar constant.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"50.2. Fixed Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"50.3. Transient Characteristics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nsolar constant transient characteristics (W m-2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51. Solar --> Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"51.2. Fixed Reference Date\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"51.3. Transient Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescription of transient orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51.4. Computation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used for computing orbital parameters.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"52. Solar --> Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"54. Volcanos --> Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
StudyExchange/Udacity
|
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
|
mit
|
[
"Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.\n图像分类\n在该项目中,你将会对来自 CIFAR-10 数据集 中的图像进行分类。数据集中图片的内容包括飞机(airplane)、狗(dogs)、猫(cats)及其他物体。你需要处理这些图像,接着对所有的样本训练一个卷积神经网络。\n具体而言,在项目中你要对图像进行正规化处理(normalization),同时还要对图像的标签进行 one-hot 编码。接着你将会应用到你所学的技能来搭建一个具有卷积层、最大池化(Max Pooling)层、Dropout 层及全连接(fully connected)层的神经网络。最后,你会训练你的神经网络,会得到你神经网络在样本图像上的预测结果。\n下载数据\n运行如下代码下载 CIFAR-10 dataset for python。",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('cifar-10-python.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n 'cifar-10-python.tar.gz',\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open('cifar-10-python.tar.gz') as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)",
"Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:\n* airplane\n* automobile\n* bird\n* cat\n* deer\n* dog\n* frog\n* horse\n* ship\n* truck\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.\n探索数据集\n为防止在运行过程中内存不足的问题,该数据集已经事先被分成了5批(batch),名为data_batch_1、data_batch_2等。每一批中都含有 图像 及对应的 标签,都是如下类别中的一种:\n\n飞机\n汽车\n鸟\n猫\n鹿\n狗\n青蛙\n马\n船\n卡车\n\n理解数据集也是对数据进行预测的一部分。修改如下代码中的 batch_id 和 sample_id,看看输出的图像是什么样子。其中,batch_id 代表着批次数(1-5),sample_id 代表着在该批内图像及标签的编号。\n你可以尝试回答如下问题:\n* 可能出现的 标签 都包括哪些?\n* 图像数据的取值范围是多少?\n* 标签 的排列顺序是随机的还是有序的?\n对这些问题的回答,会有助于更好地处理数据,并能更好地进行预测。\n答:\n\n可能出现的标签范围是0到9,分别是依次对应飞机、汽车、鸟、猫、鹿、狗、青蛙、马、船、卡车。\n图像数据的取值范围是0到255。\n给出的数据中标签的排列顺序是随机的。",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 2\nsample_id = 1\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)",
"Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.\n图像预处理功能的实现\n正规化\n在如下的代码中,修改 normalize 函数,使之能够对输入的图像数据 x 进行处理,输出一个经过正规化的、Numpy array 格式的图像数据。\n注意:\n处理后的值应当在 $[0,1]$ 的范围之内。返回值应当和输入值具有相同的形状。",
"def normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n : x: List of image data. The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n return x/255.0\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)",
"One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.\nHint:\nLook into LabelBinarizer in the preprocessing module of sklearn.\nOne-hot 编码\n在如下代码中,你将继续实现预处理的功能,实现一个 one_hot_encode 函数。函数的输入 x 是 标签 构成的列表,返回值是经过 One_hot 处理过后的这列 标签 对应的 One_hot 编码,以 Numpy array 储存。其中,标签 的取值范围从0到9。每次调用该函数时,对相同的标签值,它输出的编码也是相同的。请确保在函数外保存编码的映射(map of encodings)。\n提示:\n你可以尝试使用 sklearn preprocessing 模块中的 LabelBinarizer 函数。\n【CodeReview170905】\n170905-19:34发现函数one_hot_encode()实现错了,本来应该是0-9,也就是range(1,10),而我直接写成了[1,2,3,4,5,6,7,8,9,10],导致错误,进一步地导致网络发散。",
"from sklearn import preprocessing\n\ndef one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n # TODO: Implement Function\n lb = preprocessing.LabelBinarizer()\n labels = list(range(0, 10))\n lb.fit(labels)\n one_hot = lb.transform(x)\n return one_hot\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)",
"Randomize Data\nAs you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.\n随机打乱数据\n正如你在上方探索数据部分所看到的,样本的顺序已经被随机打乱了。尽管再随机处理一次也没问题,不过对于该数据我们没必要再进行一次相关操作了。\nPreprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.\n对所有图像数据进行预处理并保存结果\n运行如下代码,它将会预处理所有的 CIFAR-10 数据并将它另存为文件。此外,如下的代码还将会把 10% 的训练数据留出作为验证数据。",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.\n检查点\n这是你的首个检查点。因为预处理完的数据已经被保存到硬盘上了,所以如果你需要回顾或重启该 notebook,你可以在这里重新开始。",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))",
"Build the network\nFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.\n\nNote: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the \"Convolutional and Max Pooling Layer\" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.\nHowever, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. \n\nLet's begin!\nInput\nThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions\n* Implement neural_net_image_input\n * Return a TF Placeholder\n * Set the shape using image_shape with batch size set to None.\n * Name the TensorFlow placeholder \"x\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_label_input\n * Return a TF Placeholder\n * Set the shape using n_classes with batch size set to None.\n * Name the TensorFlow placeholder \"y\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_keep_prob_input\n * Return a TF Placeholder for dropout keep probability.\n * Name the TensorFlow placeholder \"keep_prob\" using the TensorFlow name parameter in the TF Placeholder.\nThese names will be used at the end of the project to load your saved model.\nNote: None for shapes in TensorFlow allow for a dynamic size.\n搭建神经网络\n为搭建神经网络,你需要将搭建每一层的过程封装到一个函数中。大部分的代码你在函数外已经见过。为能够更透彻地测试你的代码,我们要求你把每一层都封装到一个函数中。这能够帮助我们给予你更好的回复,同时还能让我们使用 unittests 在你提交报告前检测出你项目中的小问题。\n\n注意: 如果你时间紧迫,那么在该部分我们为你提供了一个便捷方法。在接下来的一些问题中,你可以使用来自 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的函数来搭建各层,不过不可以用他们搭建卷积-最大池化层。TF Layers 和 Keras 及 TFLean 中对层的抽象比较相似,所以你应该很容易上手。\n\nHowever, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. 
\n不过,如果你希望能够更多地实践,我们希望你能够在不使用 TF Layers 的情况下解决所有问题。你依然能使用来自其他包但和 layers 中重名的函数。例如,你可以使用 TF Neural Network 版本的 `conv_2d\n让我们开始吧!\n输入\n神经网络需要能够读取图像数据、经 one-hot 编码之后的标签及 dropout 中的保留概率。修改如下函数:\n\n修改 neural_net_image_input 函数:\n返回 TF Placeholder。\n使用 image_shape 设定形状,设定批大小(batch size)为 None。\n使用 TF Placeholder 中的 Name 参数,命名该 TensorFlow placeholder 为 \"x\"。\n修改 neural_net_label_input 函数: \n返回 TF Placeholder。\n使用 n_classes 设定形状,设定批大小(batch size)为 None。\n使用 TF Placeholder 中的 Name 参数,命名该 TensorFlow placeholder 为 \"y\"。\n修改 neural_net_keep_prob_input 函数:\n返回 TF Placeholder 作为 dropout 的保留概率(keep probability)。\n使用 TF Placeholder 中的 Name 参数,命名该 TensorFlow placeholder 为 \"keep_prob\"。\n\n我们会在项目最后使用这些名字,来载入你储存的模型。\n注意:在 TensorFlow 中,对形状设定为 None,能帮助设定一个动态的大小。\n这里本来是想用tensorflow-gpu的,但是,gpu报错,原因是卷积网络和神经网络连接那两层节点太多,GPU内存不够。另外,data_validation的过程中,5000个实例没有做分割,一起加载,也导致了tensor过大。",
"import tensorflow as tf\n\n# There are tensorflow-gpu settings, but gpu can not work becourse of the net is too big.\nfrom keras.backend.tensorflow_backend import set_session\nconfig = tf.ConfigProto()\nconfig.gpu_options.per_process_gpu_memory_fraction = 0.3\nset_session(tf.Session(config=config))\n \ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a batch of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n # TODO: Implement Function\n x = tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name='x')\n return x\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n # TODO: Implement Function\n y = tf.placeholder(tf.float32, shape=(None, n_classes), name='y')\n return y\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n # TODO: Implement Function\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n return keep_prob\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)",
"Convolution and Max Pooling Layer\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\n* Apply a convolution to x_tensor using weight and conv_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\n* Add bias\n* Add a nonlinear activation to the convolution.\n* Apply Max Pooling using pool_ksize and pool_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.\n Hint: \nWhen unpacking values as an argument in Python, look into the unpacking operator. \n卷积-最大池(Convolution and Max Pooling)化层\n卷积层在图像处理中取得了不小的成功。在这部分的代码中,你需要修改 conv2d_maxpool 函数来先后实现卷积及最大池化的功能。\n\n使用 conv_ksize、conv_num_outputs 及 x_tensor 来创建权重(weight)及偏差(bias)变量。\n对 x_tensor 进行卷积,使用 conv_strides 及权重。\n我们建议使用 SAME padding,不过你也可尝试其他 padding 模式。 \n加上偏差。\n对卷积结果加上一个非线性函数作为激活层。\n基于 pool_kszie 及 pool_strides 进行最大池化。\n我们建议使用 SAME padding,不过你也可尝试其他 padding 模式。\n\n注意:\n你不可以使用来自 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的函数来实现这一层的功能。但是你可以使用 TensorFlow 的Neural Network包。\n对于如上的快捷方法,你在其他层中可以尝试使用。\n提示:\n当你在 Python 中希望展开(unpacking)某个变量的值作为函数的参数,你可以参考 unpacking 运算符。",
"def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):\n \"\"\"\n Apply convolution then max pooling to x_tensor\n :param x_tensor: TensorFlow Tensor\n :param conv_num_outputs: Number of outputs for the convolutional layer\n :param conv_ksize: kernal size 2-D Tuple for the convolutional layer\n :param conv_strides: Stride 2-D Tuple for convolution\n :param pool_ksize: kernal size 2-D Tuple for pool\n :param pool_strides: Stride 2-D Tuple for pool\n : return: A tensor that represents convolution and max pooling of x_tensor\n \"\"\"\n #input = tf.placeholder(tf.float32, (None, 32, 32, 3))\n x_tensor_shape = x_tensor.get_shape().as_list()\n print('x_tensor_shape:\\t{0}'.format(x_tensor_shape))\n print('conv_num_outputs:{0}'.format(conv_num_outputs))\n print('conv_ksize:\\t{0}'.format(conv_ksize))\n print('conv_strides:\\t{0}'.format(conv_strides))\n print('pool_ksize:\\t{0}'.format(pool_ksize))\n print('pool_strides:\\t{0}'.format(pool_strides))\n \n filter_weights = tf.Variable(tf.truncated_normal((conv_ksize[0], conv_ksize[1], x_tensor_shape[3], conv_num_outputs), mean=0.0, stddev = 0.05)) # (height, width, input_depth, output_depth)\n filter_bias = tf.Variable(tf.zeros(conv_num_outputs))\n strides = [1, conv_strides[0], conv_strides[1], 1] # (batch, height, width, depth)\n\n conv_layer = tf.nn.conv2d(x_tensor, filter_weights, strides=strides, padding='SAME')\n conv_layer = tf.nn.bias_add(conv_layer, filter_bias)\n# conv_layer = conv_layer + filter_bias\n conv_layer = tf.nn.relu(conv_layer)\n # Apply Max Pooling\n conv_layer = tf.nn.max_pool(\n conv_layer,\n ksize=[1, pool_ksize[0], pool_ksize[1], 1],\n strides=[1, pool_strides[0], pool_strides[1], 1],\n padding='SAME')\n return conv_layer \n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)",
"Flatten Layer\nImplement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.\n展开层\n修改 flatten 函数,来将4维的输入张量 x_tensor 转换为一个二维的张量。输出的形状应当是 (Batch Size, Flattened Image Size)。\n快捷方法:你可以使用来自 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的函数来实现该功能。不过你也可以只使用 TensorFlow 包中的函数来挑战自己。",
"def flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n # TODO: Implement Function\n return tf.contrib.layers.flatten(x_tensor)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)",
"Fully-Connected Layer\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.\n全连接层\n修改 fully_conn 函数,来对形如 (batch Size, num_outputs) 的输入 x_tensor 应用一个全连接层。快捷方法:你可以使用来自 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的函数来实现该功能。不过你也可以只使用 TensorFlow 包中的函数来挑战自己。",
"def fully_conn(x_tensor, num_outputs):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n return tf.contrib.layers.fully_connected(x_tensor, num_outputs)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)",
"Output Layer\nImplement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.\nNote: Activation, softmax, or cross entropy should not be applied to this.\n输出层\n修改 output 函数,来对形如 (batch Size, num_outputs) 的输入 x_tensor 应用一个全连接层。快捷方法:你可以使用来自 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的函数来实现该功能。不过你也可以只使用 TensorFlow 包中的函数来挑战自己。\n注意:\n激活函数、softmax 或者交叉熵(corss entropy)不应被加入到该层。\n【Code review 20170901】\n这个地方请注意下题目要求, 注意:该层级不应应用 Activation、softmax 或交叉熵(cross entropy)\nTensorflow提供的全链接函数tf.contrib.layers.fully_connected, 这里 tf.contrib.layers.fully_connected 预设了使用relu 作为非线性激活函数, 所以这里如果使用tf.contrib.layers.fully_connected 需要把激活函数设为None",
"def output(x_tensor, num_outputs):\n \"\"\"\n Apply a output layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=None)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)",
"Create Convolutional Model\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:\n\nApply 1, 2, or 3 Convolution and Max Pool layers\nApply a Flatten Layer\nApply 1, 2, or 3 Fully Connected Layers\nApply an Output Layer\nReturn the output\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob. \n\n创建卷积模型\n修改 conv_net 函数,使之能够生成一个卷积神经网络模型。该函数的输入为一批图像数据 x,输出为 logits。在函数中,使用上方你修改的创建各种层的函数来创建该模型:\n\n使用 1 到 3 个卷积-最大池化层\n使用一个展开层\n使用 1 到 3 个全连接层\n使用一个输出层\n返回呼出结果\n在一个或多个层上使用 TensorFlow's Dropout,对应的保留概率为 keep_prob.",
"def conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that hold dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers\n # Play around with different number of outputs, kernel size and stride\n # Function Definition from Above:\n # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n \n conv_num_outputs1 = 32\n conv_ksize1 = (4, 4)\n conv_strides1 = (1, 1)\n pool_ksize1 = (2, 2)\n pool_strides1 = (2, 2)\n conv_layer1 = conv2d_maxpool(x, conv_num_outputs1, conv_ksize1, conv_strides1, pool_ksize1, pool_strides1)\n conv_layer1 = tf.nn.dropout(conv_layer1, keep_prob)\n \n conv_num_outputs2 = 64\n conv_ksize2 = (4, 4)\n conv_strides2 = (1, 1)\n pool_ksize2 = (2, 2)\n pool_strides2 = (2, 2)\n conv_layer2 = conv2d_maxpool(x, conv_num_outputs2, conv_ksize2, conv_strides2, pool_ksize2, pool_strides2)\n conv_layer2 = tf.nn.dropout(conv_layer2, keep_prob)\n \n conv_num_outputs3 = 128\n conv_ksize3 = (4, 4)\n conv_strides3 = (1, 1)\n pool_ksize3 = (2, 2)\n pool_strides3 = (2, 2)\n conv_layer3 = conv2d_maxpool(x, conv_num_outputs3, conv_ksize3, conv_strides3, pool_ksize3, pool_strides3)\n conv_layer3 = tf.nn.dropout(conv_layer3, keep_prob)\n \n # TODO: Apply a Flatten Layer\n # Function Definition from Above:\n # flatten(x_tensor)\n conv_layer_flatten = flatten(conv_layer3)\n print('conv_layer_flatten.shape:%s' %conv_layer_flatten.shape)\n \n # TODO: Apply 1, 2, or 3 Fully Connected Layers\n # Play around with different number of outputs\n # Function Definition from Above:\n # fully_conn(x_tensor, num_outputs)\n \n fc_num_outputs1 = 1024\n fc_layer1 = fully_conn(conv_layer_flatten, fc_num_outputs1)\n fc_layer1 = tf.nn.dropout(fc_layer1, keep_prob)\n \n fc_num_outputs2 = 512\n fc_layer2 = fully_conn(fc_layer1, fc_num_outputs2)\n fc_layer2 = tf.nn.dropout(fc_layer2, keep_prob)\n \n # TODO: Apply an Output Layer\n # Set this to the number of classes\n # Function Definition from Above:\n # output(x_tensor, num_outputs)\n \n num_outputs = 10\n nn_output = output(fc_layer2, num_outputs)\n \n print('fc_num_outputs1:\\t{0}'.format(fc_num_outputs1))\n print('fc_num_outputs2:\\t{0}'.format(fc_num_outputs2))\n print('num_outputs:\\t\\t{0}'.format(num_outputs))\n print('')\n # TODO: return output\n return nn_output\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that is can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)",
"Train the Neural Network\nSingle Optimization\nImplement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:\n* x for image input\n* y for labels\n* keep_prob for keep probability for dropout\nThis function will be called for each batch, so tf.global_variables_initializer() has already been called.\nNote: Nothing needs to be returned. This function is only optimizing the neural network.\n训练该神经网络\n最优化\n修改 train_neural_network 函数以执行单次最优化。该最优化过程应在一个 session 中使用 optimizer 来进行该过程,它的 feed_dict 包括:\n* x 代表输入图像\n* y 代表标签\n* keep_prob 为 Dropout 过程中的保留概率\n对每批数据该函数都会被调用,因而 tf.global_variables_initializer() 已经被调用过。\n注意:该函数并不要返回某个值,它只对神经网络进行最优化。",
"def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n \"\"\"\n Optimize the session on a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n # TODO: Implement Function\n session.run(optimizer, feed_dict={keep_prob: keep_probability, x: feature_batch, y: label_batch})\n pass\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)",
"Show Stats\nImplement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.\n显示状态\n修改 print_stats 函数来打印 loss 值及验证准确率。 使用全局的变量 valid_features 及 valid_labels 来计算验证准确率。 设定保留概率为 1.0 来计算 loss 值及验证准确率。",
"def print_stats(session, feature_batch, label_batch, cost, accuracy):\n \"\"\"\n Print information about loss and validation accuracy\n : session: Current TensorFlow session\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n : cost: TensorFlow cost function\n : accuracy: TensorFlow accuracy function\n \"\"\"\n # TODO: Implement Function\n loss = session.run(cost, feed_dict={ x: feature_batch, y: label_batch, keep_prob: 1.0 })\n valid_accuracy = session.run(accuracy, feed_dict={ x: valid_features[0:400], y: valid_labels[0:400], keep_prob: 1.0 })\n print('Loss: %.6f' %loss, end=' ')\n print('Validation Accuracy: %.6f' %valid_accuracy)\n pass",
"Hyperparameters\nTune the following parameters:\n* Set epochs to the number of iterations until the network stops learning or start overfitting\n* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:\n * 64\n * 128\n * 256\n * ...\n* Set keep_probability to the probability of keeping a node using dropout\n超参数调节\n你需要调节如下的参数:\n* 设定 epoches 为模型停止学习或开始过拟合时模型的迭代次数。\n* 设定 batch_size 为你内存能支持的最大值。一般我们设定该值为:\n * 64\n * 128\n * 256\n * ...\n* 设定 keep_probability 为在 dropout 过程中保留一个节点的概率。",
"# TODO: Tune Parameters\nepochs = 20\nbatch_size = 256\nkeep_probability = 0.5",
"Train on a Single CIFAR-10 Batch\nInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.\n对单批 CIFAR-10 数据进行训练\n相比于在所有 CIFAR-10 数据上训练神经网络,我们首先使用一批数据进行训练。这会帮助你在调节模型提高精度的过程中节省时间。当最终的验证精度超过 50% 之后,你就可以前往下一节在所有数据上运行该模型了。\n0902上午开始,因为这个activation_fn=None,导致神经网络发散。我用keras搭了一个一摸一样的网络,收拾收敛的,但是这个网络不收敛,估计是哪里实现错了,可以帮忙看一下为什么网络会发散吗?\n【CodeReview170905】\n170905-19:34发现函数one_hot_encode()实现错了,本来应该是0-9,也就是range(1,10),而我直接写成了[1,2,3,4,5,6,7,8,9,10],导致错误,进一步地导致网络发散。",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)",
"Fully Train the Model\nNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.\n完全训练该模型\n因为你在单批 CIFAR-10 数据上已经得到了一个不错的准确率了,那你可以尝试在所有五批数据上进行训练。",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)",
"Checkpoint\nThe model has been saved to disk.\nTest Model\nTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.\n检查点\n该模型已经被存储到你的硬盘中。\n测试模型\n这部分将在测试数据集上测试你的模型。这边得到的准确率将作为你的最终准确率。你应该得到一个高于 50% 准确率。如果它没有超过 50%,那么你需要继续调整模型架构及参数。",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)\n\n\ntest_model()",
"Why 50-80% Accuracy?\nYou might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. That's because there are many more techniques that can be applied to your model and we recemmond that once you are done with this project, you explore!\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"image_classification.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission.\n为什么仅有 50%~ 80% 的准确率?\n你也许会觉得奇怪,为什么你的准确率总是提高不上去。对于简单的 CNN 网络而言,50% 并非是很差的表现。纯粹的猜测只会得到 10% 的准确率(因为一共有 10 类)。这是因为还有许多许多能够应用到你模型的技巧。在你做完了该项目之后,你可以探索探索我们给你推荐的一些方法。\n提交该项目\n在提交项目前,请确保你在运行了所有的 cell 之后保存了项目。将项目储存为 \"image_classification.ipynb\" 并导出为一个 HTML 文件。你可以再菜单栏中选择 File -> Download as 进行导出。请将 \"helper.py\" 及 \"problem_unittests.py\" 文件也放在你的提交文件中。"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
wgong/open_source_learning
|
learn_stem/python/dive-into-python-xml.ipynb
|
apache-2.0
|
[
"Source : Dive Into Python - Chapter 12 XML by Mark Pilgrim\nXML overview\nXML is a generalized way of describing hierarchical structured data. \nAn xml document contains one or more elements, which are delimited by start and end tags. Elements can be nested to any depth.\nThe first element in every xml document is called the root element. An xml document can only have one root element. \nElements can have attributes, which are name-value pairs. Attributes are listed within the start tag of an element and separated by whitespace. Attribute names can not be repeated within an element. Attribute values must be quoted. You may use either single or double quotes.\nAn element’s attributes form an unordered set of keys and values, like a Python dictionary. \nElements can have text content.\nLike Python functions can be declared in different modules, xml elements can be declared in different namespaces. Namespaces usually look like URLs.\nYou can also use an xmlns:prefix declaration to define a namespace and associate it with a prefix. Then each element in that namespace must be explicitly declared with the prefix.\nxml documents can contain character encoding information on the first line, before the root element.\nParsing XML",
"#import lxml.etree as etree\n\ntry:\n from lxml import etree as etree\nexcept ImportError:\n import xml.etree.ElementTree as etree\n\ntree = etree.parse('feed.xml')\nroot = tree.getroot()\nroot",
"Elements Are Lists",
"root.tag\n\nlen(root)\n\nfor child in root:\n print(child)",
"Attributes Are Dictonaries",
"root.attrib\n\nc4_att = root[4].attrib\nc4_att\n\nc4_att['rel'],c4_att['href']",
"Searching",
"# find 1st matching entry\ntree.find('//{http://www.w3.org/2005/Atom}entry')\n\n# find all entry elements\ntree.findall('//{http://www.w3.org/2005/Atom}entry')\n\n# find all category elements\ntree.findall('//{http://www.w3.org/2005/Atom}category')\n\n# find all category element with attribute term=\"mp4\"\ntree.findall('//{http://www.w3.org/2005/Atom}category[@term=\"mp4\"]')\n\n# find all elements with href attribute\nhref_nodes = tree.findall('//{http://www.w3.org/2005/Atom}*[@href]')\nfor e in href_nodes:\n print(e.attrib['href']) # get link url\n\n# advanced search with XPath\nNSMAP = {'atom': 'http://www.w3.org/2005/Atom'}\nentries = tree.xpath(\"//atom:category[@term='accessibility']/..\", namespaces=NSMAP)\nentries[0].tag\n\ntitle = entries[0].xpath('./atom:title/text()', namespaces=NSMAP)\ntitle",
"Generating XML",
"new_feed = etree.Element('{http://www.w3.org/2005/Atom}feed', \n attrib={'{http://www.w3.org/XML/1998/namespace}lang': 'en'}) \nprint(etree.tostring(new_feed))\n\n# add more element/text\ntitle = etree.SubElement(new_feed, 'title', attrib={'type':'html'})\nprint(etree.tounicode(new_feed))\n\ntitle.text = 'Dive into Python!'\nprint(etree.tounicode(new_feed))\n\n# pretty print XML\nprint(etree.tounicode(new_feed, pretty_print=True))",
"You might also want to check out xmlwitch,\nanother third-party library for generating xml. It makes extensive use of the with statement to make xml generation code more readable.\nFurther Reading\n\n\nlxml \n\nTutorial: http://lxml.de/tutorial.html\nAPI: http://lxml.de/api/index.html\n\n\n\nxml on Wikipedia.org http://en.wikipedia.org/wiki/XML\n\n\nThe ElementTree xml API http://docs.python.org/3.1/library/xml.etree.elementtree.html\n\n\nElements and Element Trees http://effbot.org/zone/element.htm\n\n\nXPath Support in ElementTree http://effbot.org/zone/element-xpath.htm\n\n\nThe ElementTree iterparse Function http://effbot.org/zone/element-iterparse.htm\n\n\nParsing xml and html with lxml http://codespeak.net/lxml/1.3/parsing.html\n\n\nXPath and xslt with lxml http://codespeak.net/lxml/1.3/xpathxslt.html\n\n\nxmlwitch http://github.com/galvez/xmlwitch/tree/master"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
IBMDecisionOptimization/docplex-examples
|
examples/cp/jupyter/truck_fleet.ipynb
|
apache-2.0
|
[
"The Truck Fleet puzzle\nThis tutorial includes everything you need to set up decision optimization engines, build constraint programming models.\nWhen you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.\n\nThis notebook is part of Prescriptive Analytics for Python\nIt requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account\nand you can start using IBM Cloud Pak for Data as a Service right away).\nCPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:\n - <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:\n - <i>Python 3.x</i> runtime: Community edition\n - <i>Python 3.x + DO</i> runtime: full edition\n - <i>Cloud Pack for Data</i>: Community edition is installed by default. Please install DO addon in Watson Studio Premium for the full edition\n\nTable of contents:\n\nDescribe the business problem\nHow decision optimization (prescriptive analytics) can help\nUse decision optimization\nStep 1: Download the library\nStep 2: Model the Data\nStep 3: Set up the prescriptive model\nPrepare data for modeling\nDefine the decision variables\nExpress the business constraints\nExpress the objective\nSolve with Decision Optimization solve service\n\n\nStep 4: Investigate the solution and run an example analysis\n\n\nSummary\n\n\nDescribe the business problem\n\nThe problem is to deliver some orders to several clients with a single truck.\nEach order consists of a given quantity of a product of a certain type.\nA product type is an integer in {0, 1, 2}.\nLoading the truck with at least one product of a given type requires some specific installations. \nThe truck can be configured in order to handle one, two or three different types of product. \nThere are 7 different configurations for the truck, corresponding to the 7 possible combinations of product types:\nconfiguration 0: all products are of type 0,\nconfiguration 1: all products are of type 1,\nconfiguration 2: all products are of type 2,\nconfiguration 3: products are of type 0 or 1,\nconfiguration 4: products are of type 0 or 2,\nconfiguration 5: products are of type 1 or 2,\nconfiguration 6: products are of type 0 or 1 or 2.\nThe cost for configuring the truck from a configuration A to a configuration B depends on A and B.\nThe configuration of the truck determines its capacity and its loading cost.\nA delivery consists of loading the truck with one or several orders for the same customer.\nBoth the cost (for configuring and loading the truck) and the number of deliveries needed to deliver all the orders must be minimized, the cost being the most important criterion.\n\nPlease refer to documentation for appropriate setup of solving configuration.\n\nHow decision optimization can help\n\n\nPrescriptive analytics technology recommends actions based on desired outcomes, taking into account specific scenarios, resources, and knowledge of past and current events. This insight can help your organization make better decisions and have greater control of business outcomes. \n\n\nPrescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. \n\n\nPrescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. 
Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.\n<br/>\n\n\nFor example:\n\nAutomate complex decisions and trade-offs to better manage limited resources.\nTake advantage of a future opportunity or mitigate a future risk.\nProactively update recommendations based on changing events.\nMeet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.\n\n\n\nUse decision optimization\nStep 1: Download the library\nRun the following code to install Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.",
"from sys import stdout\ntry:\n import docplex.cp\nexcept:\n if hasattr(sys, 'real_prefix'):\n #we are in a virtual env.\n !pip install docplex\n else:\n !pip install --user docplex",
"Note that the more global package <i>docplex</i> contains another subpackage <i>docplex.mp</i> that is dedicated to Mathematical Programming, another branch of optimization.\nStep 2: Model the data\nNext section defines the data of the problem.",
"from docplex.cp.model import *\n\n# List of possible truck configurations. Each tuple is (load, cost) with:\n# load: max truck load for this configuration,\n# cost: cost for loading the truck in this configuration\nTRUCK_CONFIGURATIONS = ((11, 2), (11, 2), (11, 2), (11, 3), (10, 3), (10, 3), (10, 4))\n\n# List of customer orders.\n# Each tuple is (customer index, volume, product type)\nCUSTOMER_ORDERS = ((0, 3, 1), (0, 4, 2), (0, 3, 0), (0, 2, 1), (0, 5, 1), (0, 4, 1), (0, 11, 0),\n (1, 4, 0), (1, 5, 0), (1, 2, 0), (1, 4, 2), (1, 7, 2), (1, 3, 2), (1, 5, 0), (1, 2, 2),\n (2, 5, 1), (2, 6, 0), (2, 11, 2), (2, 1, 0), (2, 6, 0), (2, 3, 0))\n\n# Transition costs between configurations.\n# Tuple (A, B, TCost) means that the cost of modifying the truck from configuration A to configuration B is TCost\nCONFIGURATION_TRANSITION_COST = tuple_set(((0, 0, 0), (0, 1, 0), (0, 2, 0), (0, 3, 10), (0, 4, 10),\n (0, 5, 10), (0, 6, 15), (1, 0, 0), (1, 1, 0), (1, 2, 0),\n (1, 3, 10), (1, 4, 10), (1, 5, 10), (1, 6, 15), (2, 0, 0),\n (2, 1, 0), (2, 2, 0), (2, 3, 10), (2, 4, 10), (2, 5, 10),\n (2, 6, 15), (3, 0, 3), (3, 1, 3), (3, 2, 3), (3, 3, 0),\n (3, 4, 10), (3, 5, 10), (3, 6, 15), (4, 0, 3), (4, 1, 3),\n (4, 2, 3), (4, 3, 10), (4, 4, 0), (4, 5, 10), (4, 6, 15),\n (5, 0, 3), (5, 1, 3), (5, 2, 3), (5, 3, 10), (5, 4, 10),\n (5, 5, 0), (5, 6, 15), (6, 0, 3), (6, 1, 3), (6, 2, 3),\n (6, 3, 10), (6, 4, 10), (6, 5, 10), (6, 6, 0)\n ))\n\n# Compatibility between the product types and the configuration of the truck\n# allowedContainerConfigs[i] = the array of all the configurations that accept products of type i\nALLOWED_CONTAINER_CONFIGS = ((0, 3, 4, 6),\n (1, 3, 5, 6),\n (2, 4, 5, 6))\n",
"Step 3: Set up the prescriptive model\nPrepare data for modeling\nNext section extracts from problem data the parts that are frequently used in the modeling section.",
"nbTruckConfigs = len(TRUCK_CONFIGURATIONS)\nmaxTruckConfigLoad = [tc[0] for tc in TRUCK_CONFIGURATIONS]\ntruckCost = [tc[1] for tc in TRUCK_CONFIGURATIONS]\nmaxLoad = max(maxTruckConfigLoad)\n\nnbOrders = len(CUSTOMER_ORDERS)\nnbCustomers = 1 + max(co[0] for co in CUSTOMER_ORDERS)\nvolumes = [co[1] for co in CUSTOMER_ORDERS]\nproductType = [co[2] for co in CUSTOMER_ORDERS]\n\n# Max number of truck deliveries (estimated upper bound, to be increased if no solution)\nmaxDeliveries = 15",
"Create CPO model",
"mdl = CpoModel(name=\"trucks\")",
"Define the decision variables",
"# Configuration of the truck for each delivery\ntruckConfigs = integer_var_list(maxDeliveries, 0, nbTruckConfigs - 1, \"truckConfigs\")\n# In which delivery is an order\nwhere = integer_var_list(nbOrders, 0, maxDeliveries - 1, \"where\")\n# Load of a truck\nload = integer_var_list(maxDeliveries, 0, maxLoad, \"load\")\n# Number of deliveries that are required\nnbDeliveries = integer_var(0, maxDeliveries)\n# Identification of which customer is assigned to a delivery\ncustomerOfDelivery = integer_var_list(maxDeliveries, 0, nbCustomers, \"customerOfTruck\")\n# Transition cost for each delivery\ntransitionCost = integer_var_list(maxDeliveries - 1, 0, 1000, \"transitionCost\")",
"Express the business constraints",
"# transitionCost[i] = transition cost between configurations i and i+1\nfor i in range(1, maxDeliveries):\n auxVars = (truckConfigs[i - 1], truckConfigs[i], transitionCost[i - 1])\n mdl.add(allowed_assignments(auxVars, CONFIGURATION_TRANSITION_COST))\n\n# Constrain the volume of the orders in each truck\nmdl.add(pack(load, where, volumes, nbDeliveries))\nfor i in range(0, maxDeliveries):\n mdl.add(load[i] <= element(truckConfigs[i], maxTruckConfigLoad))\n\n# Compatibility between the product type of an order and the configuration of its truck\nfor j in range(0, nbOrders):\n configOfContainer = integer_var(ALLOWED_CONTAINER_CONFIGS[productType[j]])\n mdl.add(configOfContainer == element(truckConfigs, where[j]))\n\n# Only one customer per delivery\nfor j in range(0, nbOrders):\n mdl.add(element(customerOfDelivery, where[j]) == CUSTOMER_ORDERS[j][0])\n\n# Non-used deliveries are at the end\nfor j in range(1, maxDeliveries):\n mdl.add((load[j - 1] > 0) | (load[j] == 0))\n\n# Dominance: the non used deliveries keep the last used configuration\nmdl.add(load[0] > 0)\nfor i in range(1, maxDeliveries):\n mdl.add((load[i] > 0) | (truckConfigs[i] == truckConfigs[i - 1]))\n\n# Dominance: regroup deliveries with same configuration\nfor i in range(maxDeliveries - 2, 0, -1):\n ct = true()\n for p in range(i + 1, maxDeliveries):\n ct = (truckConfigs[p] != truckConfigs[i - 1]) & ct\n mdl.add((truckConfigs[i] == truckConfigs[i - 1]) | ct)",
"Express the objective",
"# Objective: first criterion for minimizing the cost for configuring and loading trucks \n# second criterion for minimizing the number of deliveries\ncost = sum(transitionCost) + sum(element(truckConfigs[i], truckCost) * (load[i] != 0) for i in range(maxDeliveries))\nmdl.add(minimize_static_lex([cost, nbDeliveries]))",
"Solve with Decision Optimization solve service",
"# Search strategy: first assign order to truck\nmdl.set_search_phases([search_phase(where)])\n\n# Solve model\nprint(\"\\nSolving model....\")\nmsol = mdl.solve(TimeLimit=20)",
"Step 4: Investigate the solution and then run an example analysis",
"if msol.is_solution():\n print(\"Solution: \")\n ovals = msol.get_objective_values()\n print(\" Configuration cost: {}, number of deliveries: {}\".format(ovals[0], ovals[1]))\n for i in range(maxDeliveries):\n ld = msol.get_value(load[i])\n if ld > 0:\n stdout.write(\" Delivery {:2d}: config={}\".format(i,msol.get_value(truckConfigs[i])))\n stdout.write(\", items=\")\n for j in range(nbOrders):\n if (msol.get_value(where[j]) == i):\n stdout.write(\" <{}, {}, {}>\".format(j, productType[j], volumes[j]))\n stdout.write('\\n')\nelse:\n stdout.write(\"Solve status: {}\\n\".format(msol.get_solve_status()))",
"Summary\nYou learned how to set up and use the IBM Decision Optimization CPLEX Modeling for Python to formulate and solve a Constraint Programming model.\nReferences\n\nCPLEX Modeling for Python documentation\nIBM Decision Optimization\nNeed help with DOcplex or to report a bug? Please go here\nContact us at dofeedback@wwpdl.vnet.ibm.com\n\nCopyright © 2017, 2021 IBM. IPLA licensed Sample Materials."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
datastax-demos/Muvr-Analytics
|
ipython-analysis/exercise-cnn.ipynb
|
bsd-3-clause
|
[
"CNN Experiments on muvr data\nFirst we need to setup the environment and import all the necessary stuff.",
"%matplotlib inline\n\nimport logging\n\nlogging.basicConfig(level=10)\nlogger = logging.getLogger()\n\nimport shutil\nfrom os import remove\nimport cPickle as pkl\nfrom os.path import expanduser, exists",
"This time we are not going to generate the data but rather use real world annotated training examples.",
"# Dataset creation\n\nimport numpy as np\nimport math\nimport random\nimport csv\nfrom neon.datasets.dataset import Dataset\n\nclass WorkoutDS(Dataset):\n # Number of features per example\n feature_count = None\n\n # Number of examples\n num_train_examples = None\n num_test_examples = None\n\n # Number of classes\n num_labels = None\n \n # Indicator if the data has been loaded yet\n initialized = False\n \n # Mapping of integer class labels to strings\n human_labels = {}\n \n def human_label_for(self, id):\n return self.human_labels[id]\n \n # Convert an integer representation to a one-hot-vector\n def as_one_hot(self, i, n):\n v = np.zeros(n)\n v[i] = 1\n return v\n \n # Convert an one-hot-vector to an integer representation\n def as_int_rep(self, oh):\n return np.where(oh == 1)[0][0]\n \n # Loads a label mapping from file. The file should contain a CSV tabel mapping integer labels to human readable\n # labels. Integer class labels should start with 1\n def load_label_mapping(self, filename):\n with open(expanduser(filename), 'rb') as csvfile:\n dialect = csv.Sniffer().sniff(csvfile.read(1024))\n csvfile.seek(0)\n csv_data = csv.reader(csvfile, dialect)\n next(csv_data, None) # skip the headers\n label_mapping = {}\n for row in csv_data:\n # We need to offset the labels by one since counting starts at 0 in python...\n label_mapping[int(row[0]) - 1] = row[1]\n return label_mapping\n \n # Load examples from given CSV file. The dataset should already be splitted into test and train externally\n def load_examples(self, filename):\n with open(expanduser(filename), 'rb') as csvfile:\n dialect = csv.Sniffer().sniff(csvfile.read(1024))\n csvfile.seek(0)\n csv_data = csv.reader(csvfile, dialect)\n next(csv_data, None) # skip the headers\n y = []\n X = []\n for row in csv_data:\n label = int(row[2]) - 1\n y.append(self.as_one_hot(label, self.num_labels))\n X.append(map(int, row[3:]))\n \n X = np.reshape(np.asarray(X, dtype = float), (len(X), len(X[0]))) \n y = np.reshape(np.asarray(y, dtype = float), (X.shape[0], self.num_labels))\n \n return X,y\n \n # Load label mapping and train / test data from disk.\n def initialize(self):\n logger.info(\"Loading DS from files...\")\n self.human_labels = self.load_label_mapping(expanduser('~/data/labeled_exercise_data_f400_LABELS.csv'))\n self.num_labels = len(self.human_labels)\n \n X_train, y_train = self.load_examples(expanduser('~/data/labeled_exercise_data_f400_TRAIN.csv'))\n X_test, y_test = self.load_examples(expanduser('~/data/labeled_exercise_data_f400_TEST.csv'))\n \n self.num_train_examples = X_train.shape[0]\n self.num_test_examples = X_test.shape[0]\n self.feature_count = X_train.shape[1]\n self.X_train = X_train\n self.y_train = y_train\n self.X_test = X_test\n self.y_test = y_test\n self.initialized = True\n\n # Get the dataset ready for Neon training\n def load(self, **kwargs):\n if not self.initialized:\n self.initialize()\n\n # Assign training and test datasets\n # INFO: This assumes the data is already shuffeled! Make sure it is!\n self.inputs['train'] = self.X_train\n self.targets['train'] = self.y_train\n\n self.inputs['test'] = self.X_test\n self.targets['test'] = self.y_test\n\n self.format()\n \ndataset = WorkoutDS()\ndataset.initialize()\nprint \"Number of training examples:\", dataset.num_train_examples\nprint \"Number of test examples:\", dataset.num_test_examples\nprint \"Number of features:\", dataset.feature_count\nprint \"Number of labels:\", dataset.num_labels",
"At first we want to inspect the class distribution of the training and test examples.",
"from ipy_table import *\nfrom operator import itemgetter\nimport numpy as np\n\ntrain_dist = np.reshape(np.transpose(np.sum(dataset.y_train, axis=0)), (dataset.num_labels,1))\ntest_dist = np.reshape(np.transpose(np.sum(dataset.y_test, axis=0)), (dataset.num_labels,1))\n\ntrain_ratio = train_dist / dataset.num_train_examples\ntest_ratio = test_dist / dataset.num_test_examples\n\n# Fiddle around to get it into table shape\ntable = np.hstack((np.zeros((dataset.num_labels,1), dtype=int), train_dist, train_ratio, test_dist, test_ratio))\ntable = np.vstack((np.zeros((1, 5), dtype=int), table)).tolist()\n\nhuman_labels = map(dataset.human_label_for, range(0,dataset.num_labels))\n\nfor i,s in enumerate(human_labels):\n table[i + 1][0] = s\n \ntable.sort(lambda x,y: cmp(x[1], y[1]))\n\ntable[0][0] = \"\"\ntable[0][1] = \"Train\"\ntable[0][2] = \"Train %\"\ntable[0][3] = \"Test\"\ntable[0][4] = \"Test %\"\n\nmake_table(table)\nset_global_style(float_format='%0.0f', align=\"center\")\nset_column_style(2, float_format='%0.2f%%')\nset_column_style(4, float_format='%0.2f%%')\nset_column_style(0, align=\"left\")",
"Let's have a look at the generated data. We will plot some of the examples of the different classes.",
"from matplotlib import pyplot, cm\nfrom pylab import *\n\n# Choose some random examples to plot from the training data\nnumber_of_examples_to_plot = 3\nplot_ids = np.random.random_integers(0, dataset.num_train_examples - 1, number_of_examples_to_plot)\n\nprint \"Ids of plotted examples:\",plot_ids\n\n# Retrieve a human readable label given the idx of an example\ndef label_of_example(i):\n label_id = np.where(dataset.y_train[i] == 1)[0][0]\n return dataset.human_label_for(label_id)\n\nfigure(figsize=(20,10))\nax1 = subplot(311)\nsetp(ax1.get_xticklabels(), visible=False)\nax1.set_ylabel('X - Acceleration')\n\nax2 = subplot(312, sharex=ax1)\nsetp(ax2.get_xticklabels(), visible=False)\nax2.set_ylabel('Y - Acceleration')\n\nax3 = subplot(313, sharex=ax1)\nax3.set_ylabel('Z - Acceleration')\n\nfor i in plot_ids:\n c = np.random.random((3,))\n\n ax1.plot(range(0, dataset.feature_count / 3), dataset.X_train[i,0:400], '-o', c=c)\n ax2.plot(range(0, dataset.feature_count / 3), dataset.X_train[i,400:800], '-o', c=c)\n ax3.plot(range(0, dataset.feature_count / 3), dataset.X_train[i,800:1200], '-o', c=c)\n \nlegend(map(label_of_example, plot_ids))\nsuptitle('Feature values for the first three training examples', fontsize=16)\nxlabel('Time')\nshow()",
"Now we are going to create a neon model. We will start with a realy simple one layer preceptron having 500 hidden units.",
"from neon.backends import gen_backend\nfrom neon.layers import *\nfrom neon.models import MLP\nfrom neon.transforms import RectLin, Tanh, Logistic, CrossEntropy\nfrom neon.experiments import FitPredictErrorExperiment\nfrom neon.params import val_init\nfrom neon.util.persist import serialize\n\n# General settings\nmax_epochs = 75\nepoch_step_size = 1\nbatch_size = 30 # max(10, min(100, dataset.num_train_examples/10))\nrandom_seed = 42 # Take your lucky number\n\n\n# Storage director of the model and its snapshots\nfile_path = expanduser('~/data/workout-cnn/workout-cnn.prm')\n#if exists(file_path):\n# remove(file_path)\n\n# Captured errors for the different epochs\ntrain_err = []\ntest_err = []\n\nprint 'Epochs: %d Batch-Size: %d' % (max_epochs, batch_size)\n\n# Generate layers and a MLP model using the given settings\ndef model_gen(lrate, momentum_coef, num_epochs, batch_size):\n layers = []\n\n lrule = {\n 'lr_params': {\n 'learning_rate': lrate,\n 'momentum_params': {\n 'coef': momentum_coef,\n 'type': 'constant'\n }},\n 'type': 'gradient_descent_momentum'\n }\n\n weight_init = val_init.UniformValGen(low=-0.1, high=0.1)\n\n layers.append(DataLayer(\n nofm=3,\n ofmshape=[400,1],\n is_local=True\n ))\n \n layers.append(ConvLayer(\n name=\"cv_1\",\n nofm=16,\n fshape = [5,1],\n stride = 1,\n lrule_init=lrule,\n weight_init= weight_init,\n activation=RectLin()\n ))\n\n layers.append(PoolingLayer(\n name=\"po_1\",\n op=\"max\",\n fshape=[2,1],\n stride=2,\n ))\n\n layers.append(ConvLayer(\n name=\"cv_2\",\n nofm=32,\n fshape = [5,1],\n stride = 1,\n lrule_init=lrule,\n weight_init= weight_init,\n activation=RectLin()\n ))\n\n layers.append(PoolingLayer(\n name=\"po_2\",\n op=\"max\",\n fshape=[2,1],\n stride=2,\n ))\n \n layers.append(DropOutLayer(\n name=\"do_1\",\n keep = 0.9\n )\n )\n\n layers.append(FCLayer(\n name=\"fc_1\",\n nout=100,\n lrule_init=lrule,\n weight_init=weight_init,\n activation=RectLin()\n ))\n \n layers.append(DropOutLayer(\n name=\"do_2\",\n keep = 0.9\n )\n )\n\n layers.append(FCLayer(\n name=\"fc_2\",\n nout=dataset.num_labels,\n lrule_init=lrule,\n weight_init=weight_init,\n activation = Logistic()\n ))\n\n layers.append(CostLayer(\n name='cost',\n ref_layer=layers[0],\n cost=CrossEntropy()\n ))\n\n model = MLP(num_epochs=num_epochs, batch_size=batch_size, layers=layers, serialized_path=file_path)\n return model\n\n# Set logging output...\nfor name in [\"neon.util.persist\", \"neon.datasets.dataset\", \"neon.models.mlp\"]:\n dslogger = logging.getLogger(name)\n dslogger.setLevel(20)\n\nprint \"Starting training...\"\nfor num_epochs in range(26,max_epochs+1, epoch_step_size):\n\n if num_epochs > 230:\n lrate = 0.0000003\n elif num_epochs > 60:\n lrate = 0.00001\n elif num_epochs > 40:\n lrate = 0.00003\n else:\n lrate = 0.0001\n\n # set up the model and experiment\n model = model_gen(lrate = lrate,\n momentum_coef = 0.9,\n num_epochs = num_epochs,\n batch_size = batch_size)\n\n # Uncomment line below to run on CPU backend\n backend = gen_backend(rng_seed=random_seed)\n # Uncomment line below to run on GPU using cudanet backend\n # backend = gen_backend(rng_seed=0, gpu='cudanet')\n experiment = FitPredictErrorExperiment(model=model,\n backend=backend,\n dataset=dataset)\n\n # Run the training, and dump weights\n dest_path = expanduser('~/data/workout-cnn/workout-ep' + str(num_epochs) + '.prm')\n if num_epochs > 0:\n res = experiment.run()\n train_err.append(res['train']['MisclassPercentage_TOP_1'])\n test_err.append(res['test']['MisclassPercentage_TOP_1'])\n # Save 
the weights at this epoch\n shutil.copy2(file_path, dest_path)\n print \"Finished epoch \" + str(num_epochs)\n else:\n model.epochs_complete = 0\n serialize(model.get_params(), dest_path)\n\nprint \"Finished training!\"\n",
"To check weather the network is learning something we will plot the weight matrices of the different training epochs.",
"import numpy as np\nimport math\nfrom matplotlib import pyplot, cm\nfrom pylab import *\nfrom IPython.html import widgets\nfrom IPython.html.widgets import interact\n\ndef closestSqrt(i):\n N = int(math.sqrt(i))\n while True:\n M = int(i / N)\n if N * M == i:\n return N, M\n N -= 1\n \ndef plot_filters(**kwargs):\n n = kwargs['n']\n layer_name = kwargs['layer']\n dest_path = expanduser('~/data/workout-cnn/workout-ep' + str(n) + '.prm')\n params = pkl.load(open(dest_path, 'r'))\n\n wts = params[layer_name]['weights']\n\n nrows, ncols = closestSqrt(wts.shape[0])\n fr, fc = closestSqrt(wts.shape[1])\n \n fi = 0\n\n W = np.zeros((fr*nrows, fc*ncols))\n for row, col in [(row, col) for row in range(nrows) for col in range(ncols)]:\n W[fr*row:fr*(row+1):,fc*col:fc*(col+1)] = wts[fi].reshape(fr,fc)\n fi = fi + 1\n\n matshow(W, cmap=cm.gray)\n title('Visualizing weights of '+layer_name+' in epoch ' + str(n) )\n show()\n \nlayer_names = map(lambda l: l[1].name+\"_\"+str(l[0]), filter(lambda l: l[1].has_params, enumerate(model.layers)))\n\n_i = interact(plot_filters,\n layer=widgets.widget_selection.ToggleButtons(options = layer_names),\n n=widgets.IntSliderWidget(description='epochs',\n min=0, max=max_epochs, value=0, step=epoch_step_size))\n\n\n\nprint \"Lowest test error: %0.2f%%\" % np.min(test_err)\nprint \"Lowest train error: %0.2f%%\" % np.min(train_err)\n\npyplot.plot(range(epoch_step_size*26, max_epochs+1, epoch_step_size), train_err, linewidth=3, label='train')\npyplot.plot(range(epoch_step_size*26, max_epochs+1, epoch_step_size), test_err, linewidth=3, label='test')\npyplot.grid()\npyplot.legend()\npyplot.xlabel(\"epoch\")\npyplot.ylabel(\"error %\")\npyplot.show()",
"Let's also have a look at the confusion matrix for the test dataset.",
"from sklearn.metrics import confusion_matrix\nfrom ipy_table import *\n\n# confusion_matrix(y_true, y_pred)\npredicted, actual = model.predict_fullset(dataset, \"test\")\n\ny_pred = np.argmax(predicted.asnumpyarray(), axis = 0) \ny_true = np.argmax(actual.asnumpyarray(), axis = 0) \n\nconfusion_mat = confusion_matrix(y_true, y_pred, range(0,dataset.num_labels))\n\n# Fiddle around with cm to get it into table shape\nconfusion_mat = vstack((np.zeros((1,dataset.num_labels), dtype=int), confusion_mat))\nconfusion_mat = hstack((np.zeros((dataset.num_labels + 1, 1), dtype=int), confusion_mat))\n\ntable = confusion_mat.tolist()\n\nhuman_labels = map(dataset.human_label_for, range(0,dataset.num_labels))\n\nfor i,s in enumerate(human_labels):\n table[0][i+1] = s\n table[i+1][0] = s\n\ntable[0][0] = \"actual \\ predicted\"\n\nmt = make_table(table)\nset_row_style(0, color='lightGray', rotate = \"315deg\")\nset_column_style(0, color='lightGray')\nset_global_style(align='center')\n\nfor i in range(1, dataset.num_labels + 1):\n for j in range(1, dataset.num_labels + 1):\n if i == j:\n set_cell_style(i,j, color='lightGreen', width = 80)\n elif table[i][j] > 20:\n set_cell_style(i,j, color='Pink')\n elif table[i][j] > 0:\n set_cell_style(i,j, color='lightYellow')\nmt"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
wzbozon/statsmodels
|
examples/notebooks/statespace_sarimax_stata.ipynb
|
bsd-3-clause
|
[
"SARIMAX: Introduction\nThis notebook replicates examples from the Stata ARIMA time series estimation and postestimation documentation.\nFirst, we replicate the four estimation examples http://www.stata.com/manuals13/tsarima.pdf:\n\nARIMA(1,1,1) model on the U.S. Wholesale Price Index (WPI) dataset.\nVariation of example 1 which adds an MA(4) term to the ARIMA(1,1,1) specification to allow for an additive seasonal effect.\nARIMA(2,1,0) x (1,1,0,12) model of monthly airline data. This example allows a multiplicative seasonal effect.\nARMA(1,1) model with exogenous regressors; describes consumption as an autoregressive process on which also the money supply is assumed to be an explanatory variable.\n\nSecond, we demonstrate postestimation capabilitites to replicate http://www.stata.com/manuals13/tsarimapostestimation.pdf. The model from example 4 is used to demonstrate:\n\nOne-step-ahead in-sample prediction\nn-step-ahead out-of-sample forecasting\nn-step-ahead in-sample dynamic prediction",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nfrom scipy.stats import norm\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nfrom datetime import datetime\nimport requests\nfrom io import BytesIO",
"ARIMA Example 1: Arima\nAs can be seen in the graphs from Example 2, the Wholesale price index (WPI) is growing over time (i.e. is not stationary). Therefore an ARMA model is not a good specification. In this first example, we consider a model where the original time series is assumed to be integrated of order 1, so that the difference is assumed to be stationary, and fit a model with one autoregressive lag and one moving average lag, as well as an intercept term.\nThe postulated data process is then:\n$$\n\\Delta y_t = c + \\phi_1 \\Delta y_{t-1} + \\theta_1 \\epsilon_{t-1} + \\epsilon_{t}\n$$\nwhere $c$ is the intercept of the ARMA model, $\\Delta$ is the first-difference operator, and we assume $\\epsilon_{t} \\sim N(0, \\sigma^2)$. This can be rewritten to emphasize lag polynomials as (this will be useful in example 2, below):\n$$\n(1 - \\phi_1 L ) \\Delta y_t = c + (1 + \\theta_1 L) \\epsilon_{t}\n$$\nwhere $L$ is the lag operator.\nNotice that one difference between the Stata output and the output below is that Stata estimates the following model:\n$$\n(\\Delta y_t - \\beta_0) = \\phi_1 ( \\Delta y_{t-1} - \\beta_0) + \\theta_1 \\epsilon_{t-1} + \\epsilon_{t}\n$$\nwhere $\\beta_0$ is the mean of the process $y_t$. This model is equivalent to the one estimated in the Statsmodels SARIMAX class, but the interpretation is different. To see the equivalence, note that:\n$$\n(\\Delta y_t - \\beta_0) = \\phi_1 ( \\Delta y_{t-1} - \\beta_0) + \\theta_1 \\epsilon_{t-1} + \\epsilon_{t} \\\n\\Delta y_t = (1 - \\phi_1) \\beta_0 + \\phi_1 \\Delta y_{t-1} + \\theta_1 \\epsilon_{t-1} + \\epsilon_{t}\n$$\nso that $c = (1 - \\phi_1) \\beta_0$.",
"# Dataset\nwpi1 = requests.get('http://www.stata-press.com/data/r12/wpi1.dta').content\ndata = pd.read_stata(BytesIO(wpi1))\ndata.index = data.t\n\n# Fit the model\nmod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1))\nres = mod.fit()\nprint(res.summary())",
"Thus the maximum likelihood estimates imply that for the process above, we have:\n$$\n\\Delta y_t = 0.1050 + 0.8740 \\Delta y_{t-1} - 0.4206 \\epsilon_{t-1} + \\epsilon_{t}\n$$\nwhere $\\epsilon_{t} \\sim N(0, 0.5226)$. Finally, recall that $c = (1 - \\phi_1) \\beta_0$, and here $c = 0.1050$ and $\\phi_1 = 0.8740$. To compare with the output from Stata, we could calculate the mean:\n$$\\beta_0 = \\frac{c}{1 - \\phi_1} = \\frac{0.1050}{1 - 0.8740} = 0.83$$\nNote: these values are slightly different from the values in the Stata documentation because the optimizer in Statsmodels has found parameters here that yield a higher likelihood. Nonetheless, they are very close.\nARIMA Example 2: Arima with additive seasonal effects\nThis model is an extension of that from example 1. Here the data is assumed to follow the process:\n$$\n\\Delta y_t = c + \\phi_1 \\Delta y_{t-1} + \\theta_1 \\epsilon_{t-1} + \\theta_4 \\epsilon_{t-4} + \\epsilon_{t}\n$$\nThe new part of this model is that there is allowed to be a annual seasonal effect (it is annual even though the periodicity is 4 because the dataset is quarterly). The second difference is that this model uses the log of the data rather than the level.\nBefore estimating the dataset, graphs showing:\n\nThe time series (in logs)\nThe first difference of the time series (in logs)\nThe autocorrelation function\nThe partial autocorrelation function.\n\nFrom the first two graphs, we note that the original time series does not appear to be stationary, whereas the first-difference does. This supports either estimating an ARMA model on the first-difference of the data, or estimating an ARIMA model with 1 order of integration (recall that we are taking the latter approach). The last two graphs support the use of an ARMA(1,1,1) model.",
"# Dataset\ndata = pd.read_stata(BytesIO(wpi1))\ndata.index = data.t\ndata['ln_wpi'] = np.log(data['wpi'])\ndata['D.ln_wpi'] = data['ln_wpi'].diff()\n\n# Graph data\nfig, axes = plt.subplots(1, 2, figsize=(15,4))\n\n# Levels\naxes[0].plot(data.index._mpl_repr(), data['wpi'], '-')\naxes[0].set(title='US Wholesale Price Index')\n\n# Log difference\naxes[1].plot(data.index._mpl_repr(), data['D.ln_wpi'], '-')\naxes[1].hlines(0, data.index[0], data.index[-1], 'r')\naxes[1].set(title='US Wholesale Price Index - difference of logs');\n\n# Graph data\nfig, axes = plt.subplots(1, 2, figsize=(15,4))\n\nfig = sm.graphics.tsa.plot_acf(data.ix[1:, 'D.ln_wpi'], lags=40, ax=axes[0])\nfig = sm.graphics.tsa.plot_pacf(data.ix[1:, 'D.ln_wpi'], lags=40, ax=axes[1])",
"To understand how to specify this model in Statsmodels, first recall that from example 1 we used the following code to specify the ARIMA(1,1,1) model:\npython\nmod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1))\nThe order argument is a tuple of the form (AR specification, Integration order, MA specification). The integration order must be an integer (for example, here we assumed one order of integration, so it was specified as 1. In a pure ARMA model where the underlying data is already stationary, it would be 0).\nFor the AR specification and MA specification components, there are two possiblities. The first is to specify the maximum degree of the corresponding lag polynomial, in which case the component is an integer. For example, if we wanted to specify an ARIMA(1,1,4) process, we would use:\npython\nmod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,4))\nand the corresponding data process would be:\n$$\ny_t = c + \\phi_1 y_{t-1} + \\theta_1 \\epsilon_{t-1} + \\theta_2 \\epsilon_{t-2} + \\theta_3 \\epsilon_{t-3} + \\theta_4 \\epsilon_{t-4} + \\epsilon_{t}\n$$\nor\n$$\n(1 - \\phi_1 L)\\Delta y_t = c + (1 + \\theta_1 L + \\theta_2 L^2 + \\theta_3 L^3 + \\theta_4 L^4) \\epsilon_{t}\n$$\nWhen the specification parameter is given as a maximum degree of the lag polynomial, it implies that all polynomial terms up to that degree are included. Notice that this is not the model we want to use, because it would include terms for $\\epsilon_{t-2}$ and $\\epsilon_{t-3}$, which we don't want here.\nWhat we want is a polynomial that has terms for the 1st and 4th degrees, but leaves out the 2nd and 3rd terms. To do that, we need to provide a tuple for the specifiation parameter, where the tuple describes the lag polynomial itself. In particular, here we would want to use:\npython\nar = 1 # this is the maximum degree specification\nma = (1,0,0,1) # this is the lag polynomial specification\nmod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(ar,1,ma)))\nThis gives the following form for the process of the data:\n$$\n\\Delta y_t = c + \\phi_1 \\Delta y_{t-1} + \\theta_1 \\epsilon_{t-1} + \\theta_4 \\epsilon_{t-4} + \\epsilon_{t} \\\n(1 - \\phi_1 L)\\Delta y_t = c + (1 + \\theta_1 L + \\theta_4 L^4) \\epsilon_{t}\n$$\nwhich is what we want.",
"# Fit the model\nmod = sm.tsa.statespace.SARIMAX(data['ln_wpi'], trend='c', order=(1,1,1))\nres = mod.fit()\nprint(res.summary())",
"ARIMA Example 3: Airline Model\nIn the previous example, we included a seasonal effect in an additive way, meaning that we added a term allowing the process to depend on the 4th MA lag. It may be instead that we want to model a seasonal effect in a multiplicative way. We often write the model then as an ARIMA $(p,d,q) \\times (P,D,Q)_s$, where the lowercast letters indicate the specification for the non-seasonal component, and the uppercase letters indicate the specification for the seasonal component; $s$ is the periodicity of the seasons (e.g. it is often 4 for quarterly data or 12 for monthly data). The data process can be written generically as:\n$$\n\\phi_p (L) \\tilde \\phi_P (L^s) \\Delta^d \\Delta_s^D y_t = A(t) + \\theta_q (L) \\tilde \\theta_Q (L^s) \\epsilon_t\n$$\nwhere:\n\n$\\phi_p (L)$ is the non-seasonal autoregressive lag polynomial\n$\\tilde \\phi_P (L^s)$ is the seasonal autoregressive lag polynomial\n$\\Delta^d \\Delta_s^D y_t$ is the time series, differenced $d$ times, and seasonally differenced $D$ times.\n$A(t)$ is the trend polynomial (including the intercept)\n$\\theta_q (L)$ is the non-seasonal moving average lag polynomial\n$\\tilde \\theta_Q (L^s)$ is the seasonal moving average lag polynomial\n\nsometimes we rewrite this as:\n$$\n\\phi_p (L) \\tilde \\phi_P (L^s) y_t^* = A(t) + \\theta_q (L) \\tilde \\theta_Q (L^s) \\epsilon_t\n$$\nwhere $y_t^* = \\Delta^d \\Delta_s^D y_t$. This emphasizes that just as in the simple case, after we take differences (here both non-seasonal and seasonal) to make the data stationary, the resulting model is just an ARMA model.\nAs an example, consider the airline model ARIMA $(2,1,0) \\times (1,1,0)_{12}$, with an intercept. The data process can be written in the form above as:\n$$\n(1 - \\phi_1 L - \\phi_2 L^2) (1 - \\tilde \\phi_1 L^{12}) \\Delta \\Delta_{12} y_t = c + \\epsilon_t\n$$\nHere, we have:\n\n$\\phi_p (L) = (1 - \\phi_1 L - \\phi_2 L^2)$\n$\\tilde \\phi_P (L^s) = (1 - \\phi_1 L^12)$\n$d = 1, D = 1, s=12$ indicating that $y_t^*$ is derived from $y_t$ by taking first-differences and then taking 12-th differences.\n$A(t) = c$ is the constant trend polynomial (i.e. just an intercept)\n$\\theta_q (L) = \\tilde \\theta_Q (L^s) = 1$ (i.e. there is no moving average effect)\n\nIt may still be confusing to see the two lag polynomials in front of the time-series variable, but notice that we can multiply the lag polynomials together to get the following model:\n$$\n(1 - \\phi_1 L - \\phi_2 L^2 - \\tilde \\phi_1 L^{12} + \\phi_1 \\tilde \\phi_1 L^{13} + \\phi_2 \\tilde \\phi_1 L^{14} ) y_t^* = c + \\epsilon_t\n$$\nwhich can be rewritten as:\n$$\ny_t^ = c + \\phi_1 y_{t-1}^ + \\phi_2 y_{t-2}^ + \\tilde \\phi_1 y_{t-12}^ - \\phi_1 \\tilde \\phi_1 y_{t-13}^ - \\phi_2 \\tilde \\phi_1 y_{t-14}^ + \\epsilon_t\n$$\nThis is similar to the additively seasonal model from example 2, but the coefficients in front of the autoregressive lags are actually combinations of the underlying seasonal and non-seasonal parameters.\nSpecifying the model in Statsmodels is done simply by adding the seasonal_order argument, which accepts a tuple of the form (Seasonal AR specification, Seasonal Integration order, Seasonal MA, Seasonal periodicity). The seasonal AR and MA specifications, as before, can be expressed as a maximum polynomial degree or as the lag polynomial itself. 
Seasonal periodicity is an integer.\nFor the airline model ARIMA $(2,1,0) \\times (1,1,0)_{12}$ with an intercept, the command is:\npython\nmod = sm.tsa.statespace.SARIMAX(data['lnair'], order=(2,1,0), seasonal_order=(1,1,0,12))",
"# Dataset\nair2 = requests.get('http://www.stata-press.com/data/r12/air2.dta').content\ndata = pd.read_stata(BytesIO(air2))\ndata.index = pd.date_range(start=datetime(data.time[0], 1, 1), periods=len(data), freq='MS')\ndata['lnair'] = np.log(data['air'])\n\n# Fit the model\nmod = sm.tsa.statespace.SARIMAX(data['lnair'], order=(2,1,0), seasonal_order=(1,1,0,12), simple_differencing=True)\nres = mod.fit()\nprint(res.summary())",
"Notice that here we used an additional argument simple_differencing=True. This controls how the order of integration is handled in ARIMA models. If simple_differencing=True, then the time series provided as endog is literatlly differenced and an ARMA model is fit to the resulting new time series. This implies that a number of initial periods are lost to the differencing process, however it may be necessary either to compare results to other packages (e.g. Stata's arima always uses simple differencing) or if the seasonal periodicity is large.\nThe default is simple_differencing=False, in which case the integration component is implemented as part of the state space formulation, and all of the original data can be used in estimation.\nARIMA Example 4: ARMAX (Friedman)\nThis model demonstrates the use of explanatory variables (the X part of ARMAX). When exogenous regressors are included, the SARIMAX module uses the concept of \"regression with SARIMA errors\" (see http://robjhyndman.com/hyndsight/arimax/ for details of regression with ARIMA errors versus alternative specifications), so that the model is specified as:\n$$\ny_t = \\beta_t x_t + u_t \\\n \\phi_p (L) \\tilde \\phi_P (L^s) \\Delta^d \\Delta_s^D u_t = A(t) +\n \\theta_q (L) \\tilde \\theta_Q (L^s) \\epsilon_t\n$$\nNotice that the first equation is just a linear regression, and the second equation just describes the process followed by the error component as SARIMA (as was described in example 3). One reason for this specification is that the estimated parameters have their natural interpretations.\nThis specification nests many simpler specifications. For example, regression with AR(2) errors is:\n$$\ny_t = \\beta_t x_t + u_t \\\n(1 - \\phi_1 L - \\phi_2 L^2) u_t = A(t) + \\epsilon_t\n$$\nThe model considered in this example is regression with ARMA(1,1) errors. The process is then written:\n$$\n\\text{consump}_t = \\beta_0 + \\beta_1 \\text{m2}_t + u_t \\\n(1 - \\phi_1 L) u_t = (1 - \\theta_1 L) \\epsilon_t\n$$\nNotice that $\\beta_0$ is, as described in example 1 above, not the same thing as an intercept specified by trend='c'. Whereas in the examples above we estimated the intercept of the model via the trend polynomial, here, we demonstrate how to estimate $\\beta_0$ itself by adding a constant to the exogenous dataset. In the output, the $beta_0$ is called const, whereas above the intercept $c$ was called intercept in the output.",
"# Dataset\nfriedman2 = requests.get('http://www.stata-press.com/data/r12/friedman2.dta').content\ndata = pd.read_stata(BytesIO(friedman2))\ndata.index = data.time\n\n# Variables\nendog = data.ix['1959':'1981', 'consump']\nexog = sm.add_constant(data.ix['1959':'1981', 'm2'])\n\n# Fit the model\nmod = sm.tsa.statespace.SARIMAX(endog, exog, order=(1,0,1))\nres = mod.fit()\nprint(res.summary())",
"ARIMA Postestimation: Example 1 - Dynamic Forecasting\nHere we describe some of the post-estimation capabilities of Statsmodels' SARIMAX.\nFirst, using the model from example, we estimate the parameters using data that excludes the last few observations (this is a little artificial as an example, but it allows considering performance of out-of-sample forecasting and facilitates comparison to Stata's documentation).",
"# Dataset\nraw = pd.read_stata(BytesIO(friedman2))\nraw.index = raw.time\ndata = raw.ix[:'1981']\n\n# Variables\nendog = data.ix['1959':, 'consump']\nexog = sm.add_constant(data.ix['1959':, 'm2'])\nnobs = endog.shape[0]\n\n# Fit the model\nmod = sm.tsa.statespace.SARIMAX(endog.ix[:'1978-01-01'], exog=exog.ix[:'1978-01-01'], order=(1,0,1))\nfit_res = mod.fit()\nprint(fit_res.summary())",
"Next, we want to get results for the full dataset but using the estimated parameters (on a subset of the data).",
"mod = sm.tsa.statespace.SARIMAX(endog, exog=exog, order=(1,0,1))\nres = mod.filter(fit_res.params)",
"The predict command is first applied here to get in-sample predictions. We use the full_results=True argument to allow us to calculate confidence intervals (the default output of predict is just the predicted values).\nWith no other arguments, predict returns the one-step-ahead in-sample predictions for the entire sample.",
"# In-sample one-step-ahead predictions\npredict = res.get_prediction()\npredict_ci = predict.conf_int()",
"We can also get dynamic predictions. One-step-ahead prediction uses the true values of the endogenous values at each step to predict the next in-sample value. Dynamic predictions use one-step-ahead prediction up to some point in the dataset (specified by the dynamic argument); after that, the previous predicted endogenous values are used in place of the true endogenous values for each new predicted element.\nThe dynamic argument is specified to be an offset relative to the start argument. If start is not specified, it is assumed to be 0.\nHere we perform dynamic prediction starting in the first quarter of 1978.",
"# Dynamic predictions\npredict_dy = res.get_prediction(dynamic='1978-01-01')\npredict_dy_ci = predict_dy.conf_int()",
"We can graph the one-step-ahead and dynamic predictions (and the corresponding confidence intervals) to see their relative performance. Notice that up to the point where dynamic prediction begins (1978:Q1), the two are the same.",
"# Graph\nfig, ax = plt.subplots(figsize=(9,4))\nnpre = 4\nax.set(title='Personal consumption', xlabel='Date', ylabel='Billions of dollars')\n\n# Plot data points\ndata.ix['1977-07-01':, 'consump'].plot(ax=ax, style='o', label='Observed')\n\n# Plot predictions\npredict.predicted_mean.ix['1977-07-01':].plot(ax=ax, style='r--', label='One-step-ahead forecast')\nci = predict_ci.ix['1977-07-01':]\nax.fill_between(ci.index, ci.ix[:,0], ci.ix[:,1], color='r', alpha=0.1)\npredict_dy.predicted_mean.ix['1977-07-01':].plot(ax=ax, style='g', label='Dynamic forecast (1978)')\nci = predict_dy_ci.ix['1977-07-01':]\nax.fill_between(ci.index, ci.ix[:,0], ci.ix[:,1], color='g', alpha=0.1)\n\nlegend = ax.legend(loc='lower right')",
"Finally, graph the prediction error. It is obvious that, as one would suspect, one-step-ahead prediction is considerably better.",
"# Prediction error\n\n# Graph\nfig, ax = plt.subplots(figsize=(9,4))\nnpre = 4\nax.set(title='Forecast error', xlabel='Date', ylabel='Forecast - Actual')\n\n# In-sample one-step-ahead predictions and 95% confidence intervals\npredict_error = predict.predicted_mean - endog\npredict_error.ix['1977-10-01':].plot(ax=ax, label='One-step-ahead forecast')\nci = predict_ci.ix['1977-10-01':].copy()\nci.iloc[:,0] -= endog.loc['1977-10-01':]\nci.iloc[:,1] -= endog.loc['1977-10-01':]\nax.fill_between(ci.index, ci.ix[:,0], ci.ix[:,1], alpha=0.1)\n\n# Dynamic predictions and 95% confidence intervals\npredict_dy_error = predict_dy.predicted_mean - endog\npredict_dy_error.ix['1977-10-01':].plot(ax=ax, style='r', label='Dynamic forecast (1978)')\nci = predict_dy_ci.ix['1977-10-01':].copy()\nci.iloc[:,0] -= endog.loc['1977-10-01':]\nci.iloc[:,1] -= endog.loc['1977-10-01':]\nax.fill_between(ci.index, ci.ix[:,0], ci.ix[:,1], color='r', alpha=0.1)\n\nlegend = ax.legend(loc='lower left');\nlegend.get_frame().set_facecolor('w')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gamaanderson/2017-AMS-Short-Course-on-Open-Source-Radar-Software
|
5b_PyART_visualization.ipynb
|
bsd-2-clause
|
[
"Visualizations with Py-ART\nFirst we'll import needed packages",
"import pyart\nfrom matplotlib import pyplot as plt\nimport numpy as np\nimport os\nfrom datetime import datetime as dt\n\n%matplotlib inline\nprint(pyart.__version__)\n\nimport warnings\nwarnings.simplefilter(\"ignore\", category=DeprecationWarning)\n#warnings.simplefilter('ignore')",
"The following function will not work with the original VM for the Short course. To use, install s3fs package (via conda or pip). It is a file system interface for AWS S3 buckets and provides a nice interface similar to unix/ftp command line arguments.",
"def open_nexrad_file(filename, io='radx'):\n \"\"\"\n Open file using pyart nexrad archive method.\n\n Parameters\n ----------\n filename: str\n Radar filename to open.\n io: str\n Py-ART open method. If radx then file is opened via Radx\n otherwise via native Py-ART function.\n Using Radx will handle split-cut sweeps\n \"\"\"\n filename, zipped = try_file_gunzip(filename)\n if io.lower() == 'radx':\n radar = pyart.aux_io.read_radx(filename)\n else:\n radar = pyart.io.read_nexrad_archive(filename)\n if zipped:\n os.system('gzip {}'.format(filename))\n return radar\n\ndef get_latest_file(radar_id, bucket='noaa-nexrad-level2', engine='s3fs', io='radx'):\n \"\"\"Return latest NEXRAD data file name.\"\"\"\n try:\n import s3fs\n import tempfile\n \n s3conn = s3fs.S3FileSystem(anon=True)\n latest_year = os.path.join(\n bucket, os.path.basename(s3conn.ls(bucket)[-1]))\n latest_month = os.path.join(\n latest_year, os.path.basename(s3conn.ls(latest_year)[-1]))\n latest_day = os.path.join(\n latest_month, os.path.basename(s3conn.ls(latest_month)[-1]))\n s3key = s3conn.ls(os.path.join(latest_day, radar_id))[-1]\n\n path, filename = os.path.split(s3key)\n with tempfile.TemporaryFile() as temp88d:\n s3fs.get(s3key, temp88d)\n return open_nexrad_file(temp88d, io=io)\n except:\n print(\"Missing s3fs package, please install via conda or pip.\")",
"Py-ART Colormaps\nRetrieve the names of colormaps and the colormap list dictionary.\nThe colormaps are registered with matplotlib and can be accessed by inserting 'pyart_' in front of any name.",
"cm_names = pyart.graph.cm._cmapnames\ncms = pyart.graph.cm.cmap_d\n\nnrows = len(cm_names)\ngradient = np.linspace(0, 1, 256)\ngradient = np.vstack((gradient, gradient))\n\n# Create a figure and axes instance\nfig, axes = plt.subplots(nrows=nrows, figsize=(5,10))\nfig.subplots_adjust(top=0.95, bottom=0.01, left=0.2, right=0.99)\naxes[0].set_title('Py-Art Colormaps', fontsize=14)\n\n# Loop through the possibilities\nfor nn, pymap in enumerate(cm_names):\n axes[nn].imshow(gradient, aspect='auto', cmap=cms[pymap])\n pos = list(axes[nn].get_position().bounds)\n x_text = pos[0] - 0.01\n y_text = pos[1] + pos[3]/2.\n fig.text(x_text, y_text, pymap, va='center', ha='right', fontsize=8)\n\n# Turn off *all* ticks & spines, not just the ones with colormaps.\nfor ax in axes:\n ax.set_axis_off()",
"The RadarDisplay\nThis is the most commonly used class designed for surface-based scanning radar\nPlot a NEXRAD file",
"nexf = \"data/KILN20140429_231254_V06\"\nnexr = pyart.io.read(nexf)\nnexd = pyart.graph.RadarDisplay(nexr)\n\nnexr.fields.keys()\n\nfig, ax = plt.subplots(nrows=2, ncols=2, figsize=(16, 12))\nnexd.plot('reflectivity', sweep=1, cmap='pyart_NWSRef', vmin=0., vmax=55., mask_outside=False, ax=ax[0, 0])\nnexd.plot_range_rings([50, 100], ax=ax[0, 0])\nnexd.set_limits((-150., 150.), (-150., 150.), ax=ax[0, 0])\n\nnexd.plot('velocity', sweep=1, cmap='pyart_NWSVel', vmin=-30, vmax=30., mask_outside=False, ax=ax[0, 1])\nnexd.plot_range_rings([50, 100], ax=ax[0, 1])\nnexd.set_limits((-150., 150.), (-150., 150.), ax=ax[0, 1])\n\nnexd.plot('cross_correlation_ratio', sweep=0, cmap='pyart_BrBu12', vmin=0.85, vmax=1., mask_outside=False, ax=ax[1, 0])\nnexd.plot_range_rings([50, 100], ax=ax[0, 1])\nnexd.set_limits((-150., 150.), (-150., 150.), ax=ax[0, 1])\n\nnexd.plot('differential_reflectivity', sweep=0, cmap='pyart_BuDOr12', vmin=-2, vmax=2., mask_outside=False, ax=ax[1, 1])\nnexd.plot_range_rings([50, 100], ax=ax[1, 1])\nnexd.set_limits((-150., 150.), (-150., 150.), ax=ax[1, 1])",
"There are many keyword values we can employe to refine the plot\nKeywords exist for title, labels, colorbar, along with others.\nIn addition, there are many methods that can be employed. For example, pull out a constructed RHI at a given azimuth.",
"nexd.plot_azimuth_to_rhi('reflectivity', 305., cmap='pyart_NWSRef', vmin=0., vmax=55.)\nnexd.set_limits((0., 150.), (0., 15.))",
"Py-ART RHI\nNot only can we construct an RHI from a PPI volume, but RHI scans may be plotted as well.",
"rhif = \"data/noxp_rhi_140610232635.RAWHJFH\"\nrhir = pyart.io.read(rhif)\nrhid = pyart.graph.RadarDisplay(rhir)\n\nrhid.plot_rhi('reflectivity', 0, vmin=-5.0, vmax=70,)\nrhid.set_limits(xlim=(0, 50), ylim=(0, 15))",
"Py-ART RadarMapDisplay or RadarMapDisplayCartopy\nThis converts the x-y coordinates to latitude and longitude overplotting on a map\nLet us see what version we have. The first is works on Py-ART which uses a standard definition. For other packages that may not the second method should work",
"pyart_ver = pyart.__version__\n\nimport pkg_resources\npyart_ver2 = pkg_resources.get_distribution(\"arm_pyart\").version\n\nif int(pyart_ver.split('.')[1]) == 8:\n print(\"8\")\n nexmap = pyart.graph.RadarMapDisplayCartopy(nexr)\nelse:\n print(\"7\")\n nexmap = pyart.graph.RadarMapDisplay(nexr)\n\nlimits = [-87., -82., 37., 42.]\nfig, ax = plt.subplots(1, 1, figsize=(7, 7))\n\nnexmap.plot_ppi_map('reflectivity', sweep=1, vmin=0., vmax=55., ax=ax,\n min_lon=limits[0], max_lon=limits[1], min_lat=limits[2], max_lat=limits[3])",
"Use what you have learned!\nUsing all that you have learned, make a two panel plot of reflectivity and doppler velocity using the data file from an RHI of NOXP data/noxp_rhi_140610232635.RAWHJFH. Use Cartopy to overlay the plots on a map of Austrailia and play around with differing colormaps and axes limits! \nSolution\nThe following cell loads a solution to the above problem.",
"# %load solution.py"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sequana/resources
|
coverage/05-sensitivity/sensitivity.ipynb
|
bsd-3-clause
|
[
"# Sensitivity of sequana_coverage in detecting CNVs",
"Author: Thomas Cokelaer\nJan 2018\nLocal time execution: about 10 minutes\n\nIn this notebook, we will simulate fastq reads and inject CNVs. We will then look at the sensitivity (proportion of true positive by the sum of positives) of sequana_coverage.\nWe use the data and strategy described in section 3.2 of \"CNOGpro: detection and quantification \nof CNVs in prokaryotic whole-genome sequencing data, bioinformatics 31(11), 2015 (Brynildsrud et al)\"\nHere, we will use the same reference: FN433596 (staphylococcus aureus) as in the paper above, which is also used in the manuscript \nThe main goal is to generate simulated data, and check that the sensitivity is high (keeping specificity low) by injecting various CNVs.\nRequirements\n\nsequana version 0.7.0 was used\nart_illumina\n\nGet the reference\nThere are many ways to download the reference (FN433596). Here below we use sequana_coverage tool but of course, you can use your own tool, or simply go to http://github.com/sequana/resources/coverage (look for FN433596.fasta.bz2).",
"!sequana_coverage --download-reference FN433596",
"Simulated FastQ data\nInstallation: conda install art\nSimulation of data coverage 100X\n-l: length of the reads\n-f: coverage\n-m: mean size of fragments\n-s: standard deviation of fragment size\n-ss: type of hiseq\nThis takes a few minutes to produce",
"! art_illumina -sam -i FN433596.fa -p -l 100 -ss HS20 -f 20 -m 500 -s 40 -o paired_dat -f 100",
"Creating the BAM (mapping) and BED files",
"# no need for the *aln and *sam, let us remove them to save space\n!rm -f paired*.aln paired_dat.sam\n!sequana_mapping --reference FN433596.fa --file1 paired_dat1.fq --file2 paired_dat2.fq 1>out 2>err",
"This uses bwa and samtools behind the scene. Then, we will convert the resulting BAM file (FN433596.fasta.sorted.bam) into a BED file once for all. To do so, we use bioconvert (http://bioconvert.readthedocs.io) that uses bedtools behind the scene:",
"# bioconvert FN433596.fa.sorted.bam simulated.bed -f\n# or use e.g. bedtools:\n!bedtools genomecov -d -ibam FN433596.fa.sorted.bam > simulated.bed",
"sequana_coverage\nWe execute sequana_coverage to find the ROI (region of interest). We should a few detections (depends on the threshold and length of the genome of course). \nLater, we will inject events as long as 8000 bases. So, we should use at least 16000 bases for the window parameter length. As shown in the window_impact notebook a 20,000 bases is a good choice to keep false detection rate low.",
"!sequana_coverage --input simulated.bed --reference FN433596.fa -w 20001 -o --level WARNING -C .5\n!cp report/*/*/rois.csv rois_noise_20001.csv\n\n# An instance of coverage signal (yours may be slightly different)\nfrom IPython.display import Image\nImage(\"coverage.png\")",
"The false positives",
"%pylab inline\n\n# Here is a convenient function to plot the ROIs in terms of sizes\n# and max zscore\n\ndef plot_results(file_roi, choice=\"max\"):\n import pandas as pd\n roi = pd.read_csv(file_roi) #\"rois_cnv_deletion.csv\")\n roi = roi.query(\"start>100 and end<3043210\")\n plot(roi[\"size\"], roi[\"{}_zscore\".format(choice)], \"or\", label=\"candidate ROIs\")\n for this in [3,4,5,-3,-4,-5]:\n if this == 3: label = \"thresholds\"\n else: label=\"_nolegend_\"\n axhline(this, ls=\"--\", label=label)\n print(\"{} ROIs found\".format(len(roi)))\n xlabel(\"length of the ROIs\")\n ylabel(\"z-scores\")\n legend()\n return roi\n\nroi = plot_results(\"rois_noise_20001.csv\", \"max\")",
"Most of the detected events have a zscore close to the chosen thresholds (-4 and 4). Moreover, \nmost events have a size below 50. \nSo for the detection of CNVs with size above let us say 2000, the False positives is (FP = 0).\nMore simulations would be required to get a more precise idea of the FP for short CNVs but the FP would remain small. For instance on this example, FP=1 for CNV size >100, FP=2 for CNV size >40, which remains pretty small given the length of the genome (3Mbp).\nChecking CNV detection\nEvent injections (deletion, duplication, or mix of both)",
"import random\nimport pandas as pd\n\ndef create_deletion():\n df = pd.read_csv(\"simulated.bed\", sep=\"\\t\", header=None)\n positions = []\n sizes = []\n for i in range(80):\n # the + and -4000 shift are there to guarantee the next\n # CNV does not overlap with the previous one since\n # CNV length can be as much as 8000\n pos = random.randint(37000*i+4000, 37000*(i+1)-4000)\n size = random.randint(1,8) * 1000\n positions.append(pos) \n #size = 2000\n df.loc[pos:pos+size,2] = 0 #deletion\n sizes.append(size)\n df.to_csv(\"cnv_deletion.bed\", sep=\"\\t\", header=None, index=None)\n return positions, sizes\n\n\ndef create_duplicated():\n df = pd.read_csv(\"simulated.bed\", sep=\"\\t\", header=None)\n positions = []\n sizes = []\n for i in range(80):\n pos = random.randint(37000*i+4000, 37000*(i+1)-4000)\n size = random.randint(1,8) * 1000\n positions.append(pos) \n \n df.loc[pos:pos+size,2] += 100 #duplicated\n sizes.append(size)\n df.to_csv(\"cnv_duplicated.bed\", sep=\"\\t\", header=None, index=None)\n return positions, sizes\n\n\ndef create_cnvs_mixed():\n df = pd.read_csv(\"simulated.bed\", sep=\"\\t\", header=None)\n # we will place 10% of CNV of size from 1000 to 8000\n import random\n positions = []\n sizes = []\n for i in range(80):\n pos = random.randint(37000*i+4000, 37000*(i+1)-4000)\n size = random.randint(1,8) * 1000\n positions.append(pos)\n \n status = random.randint(0,1)\n \n if status == 0:\n df.loc[pos:pos+size,2] -= 50 \n elif status == 1:\n df.loc[pos:pos+size,2] += 50 \n \n sizes.append(size)\n df.to_csv(\"cnv_mixed.bed\", sep=\"\\t\", header=None, index=None)\n return positions, sizes\n\n\n\ndef check_found(positions, sizes, roi, precision=200, min_size=150):\n \"\"\"A simple function to check given the position and size that\n the injected CNVs are detected in the ROIs\n \n We check that the starting or ending position of at least one\n ROI coincide with one ROI and that this ROI has at least a length of 200.\n \n Indeed, injections are at least 1000 bases and noise are generally below 100 bases\n as shown above.\n \n \n \"\"\"\n found = [False] * len(positions)\n i = 0\n zscores = []\n for position,size in zip(positions, sizes):\n\n for this in roi.iterrows():\n this = this[1] \n if (abs(this.start-position)<precision or abs(this.end-position-size)<precision )and this['size'] > min_size:\n #print(this.start, this.end, position, size)\n found[i] = True\n zscores.append(this.mean_zscore)\n continue\n \n if found[i] is False:\n print(\"position not found {} size={}\".format(position, size))\n i+=1\n print(\"Found {}\".format(sum(found)))\n return zscores\n",
"Deleted regions are all detected",
"# call this only once !!!!\npositions_deletion, sizes_deletion = create_deletion()\n!sequana_coverage --input cnv_deletion.bed -o -w 20001 --level WARNING \n!cp report/*/*/rois.csv rois_cnv_deleted.csv\n\nrois_deletion = plot_results(\"rois_cnv_deleted.csv\")\n\n# as precise as 2 base positions but for safety, we put precision of 10 and we can check that the detection rate is 100%\nzscores = check_found(positions_deletion, sizes_deletion, rois_deletion, \n precision=5)",
"duplicated regions",
"positions_duplicated, sizes_duplicated = create_duplicated()\n!sequana_coverage --input cnv_duplicated.bed -o -w 40001 --level ERROR -C .3 --no-html --no-multiqc\n!cp report/*/*/rois.csv rois_cnv_duplicated_40001.csv\n\nrois_duplicated = plot_results(\"rois_cnv_duplicated_40001.csv\", choice=\"max\")",
"Same results with W=20000,40000,60000,100000 but recovered CN is better \nwith larger W",
"rois_duplicated = plot_results(\"rois_cnv_duplicated_20000.csv\", choice=\"max\")",
"Note that you may see events with negative zscore. Those are false detection due to the presence of two CNVs close to each other. This can be avoided by increasing the window size e.g. to 40000",
"check_found(positions_duplicated, sizes_duplicated, rois_duplicated, \n precision=5)",
"Mixes of duplicated and deleted regions",
"positions_mix, sizes_mix = create_cnvs_mixed()\n!sequana_coverage --input cnv_mixed.bed -o -w 40001 --level ERROR --no-multiqc --no-html --cnv-clustering 1000 \n!cp report/*/*/rois.csv rois_cnv_mixed.csv\n\nImage(\"coverage_with_cnvs.png\")\n\nrois_mixed = plot_results(\"rois_cnv_mixed.csv\", choice=\"max\")\n\n# note that here we increase the precision to 100 bases. The positions\n# are not as precise as in the duplication or deletion cases. \ncheck_found(positions_mix, sizes_mix, rois_mixed, precision=20)",
"Some events (about 1%) may be labelled as not found but visual inspection will show that there are actually detected. This is due to a starting position being offset due to noise data set that interfer with the injected CNVs.\nConclusions\n\nwith simulated data and no CNV injections, sequana coverage detects some events that cross the threshold. However, there all have low zscores (close to the chosen threshold) and exhibit short lengths (below 100 bases). \nSimulated CNVs:\nthe 80 deletions are all detected with the correct position (+/- 1 base) and sizes (+/- 1 base)\nthe 80 duplications are all detected with the correct position (+/- 1 base) and sizes (+/- 1 base !)\nthe mix of 80 detection with coverage at 50 and 150 are all detected. Note, however, that some CNVs detections are split in several events. We implemented an additional clustering for CNVs (use --cnv-clustering 1000). This solve this issue. As for the position, there are usually correct (+-20 bases). Visual inspection of the ROIs files show that the events are all detected. However, they may be split or the actual starting point of the event is not precise.\n\n\n\nSo, for those simulated data and type of CNVs injection (CN 0, 2, 0.5, 1.5), we get a sensitivity close to 100%. \nExtra notes about False positives\nwhen we applied sequana coverage on the simulated mapped reads to estimate the rate of False Positives, we got about 20 ROIs with events having (max) zscore below 5 but up to 50 bases. \nOne question we had is\nDoes ROIs shorter than 50bp and with z-scores below 5 should be\nignored in CNV or other analyses, or are these bases identifying \ngenuine features in the genome (such as unmappable sequence)?\n\nIn brief, we think that those events are part of the background noise. So, such events should not be interpreted as genuine features. Here is why",
"roi = plot_results(\"rois_noise_20001.csv\")\n\nwhat is happening here is that we detect many events close to the threshold. \nSo for instance all short events on the left hand side have z-score close to 4, \nwhich is our threshold. \n\nBy pure chance, we get longer events of 40 or 50bp. This is quite surprinsing and wanted to know whether those \nare real false positives or due to a genuine feature in the genome (e.g. repeated regions that prevent a good mapping)\n\nWhat is not shown in this plot is the position of the event. We can simulate the same data again (different seed). \nIf those long events appear at the same place, they ca be considered as genuine, otherwise, they should be considered \nas potential background appearing just by chance. \n\nso, we generated 50 simulated data set and reproduce the image above. We store the data in 50_rois.csv\n\n\n\nfrom easydev import execute as shell\n\ndef create_data(start=0,end=10):\n for i in range(start, end):\n print(\"---------------- {}\".format(i))\n cmd = \"art_illumina -sam -i FN433596.fa -p -l 100 -ss HS20 -f 20 -m 500 -s 40 -o paired_dat -f 100\"\n shell(cmd)\n\n cmd = \"rm -f paired*.aln paired_dat.sam\"\n shell(cmd)\n\n cmd = \"sequana_mapping --reference FN433596.fa --file1 paired_dat1.fq --file2 paired_dat2.fq 1>out 2>err\"\n shell(cmd)\n\n cmd = \"bedtools genomecov -d -ibam FN433596.fa.sorted.bam > simulated.bed\"\n shell(cmd)\n\n cmd = \"sequana_coverage --input simulated.bed --reference FN433596.fa -w 20001 -o --no-html --no-multiqc\"\n shell(cmd)\n\n cmd = \"cp report/*/*/rois.csv rois_{}.csv\".format(i)\n shell(cmd)\n#create_data(0,50)\n\nimport pandas as pd\nrois = pd.read_csv(\"50_simulated_rois.csv\")\nrois = rois.query(\"start>100 and end <3043210\")\n\nroi = plot_results(\"50_simulated_rois.csv\", choice=\"max\")",
"With 50 simulations, we get 826 events. (100 are removed because on the edge of the origin of replication), which means about 16 events per simulation. The max length is 90. \nNone of the long events (above 50) appear at the same position (distance by more than 500 bases at least) so long events are genuine false positives.",
"roi = plot_results(\"100_simulated_rois.csv\", choice=\"mean\")\n\nroi = plot_results(\"100_simulated_rois.csv\", choice=\"max\")"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
maviator/Kaggle_home_price_prediction
|
Script/SKlearn models.ipynb
|
mit
|
[
"How to score 0.11952 and get top 19%\nby Mohtadi Ben Fraj\nCredits\nPart of the code for data exploration is taken for this notebook (https://www.kaggle.com/neviadomski/how-to-get-to-top-25-with-simple-model-sklearn/notebook). \nThe idea of averaging the models is inspired from this notebook (https://www.kaggle.com/serigne/stacked-regressions-top-4-on-leaderboard)\nImporting libraries and data",
"# Adding needed libraries and reading data\nimport pandas as pd\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom sklearn.linear_model import ElasticNet, Lasso, BayesianRidge, LassoLarsIC\nfrom sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor\nfrom sklearn.kernel_ridge import KernelRidge\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import RobustScaler\nfrom sklearn.base import BaseEstimator, TransformerMixin, RegressorMixin, clone\nfrom sklearn.model_selection import KFold, cross_val_score, train_test_split\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.model_selection import train_test_split, cross_val_score\nfrom sklearn.metrics import r2_score, mean_squared_error\nfrom sklearn.utils import shuffle\n\nfrom xgboost.sklearn import XGBRegressor\n\n%matplotlib inline\nimport warnings\nwarnings.filterwarnings('ignore')\n\n# get home price train & test csv files as a DataFrame\ntrain = pd.read_csv(\"../Data/train.csv\")\ntest = pd.read_csv(\"../Data/test.csv\")\nfull = train.append(test, ignore_index=True)\nprint (train.shape, test.shape, full.shape)\n\ntrain.head()",
"Check for missing data",
"#Checking for missing data\nNAs = pd.concat([train.isnull().sum(), test.isnull().sum()], axis=1, keys=['Train', 'Test'])\nNAs[NAs.sum(axis=1) > 0]",
"Helper functions",
"# Prints R2 and RMSE scores\ndef get_score(prediction, lables): \n print('R2: {}'.format(r2_score(prediction, lables)))\n print('RMSE: {}'.format(np.sqrt(mean_squared_error(prediction, lables))))\n\n# Shows scores for train and validation sets \ndef train_test(estimator, x_trn, x_tst, y_trn, y_tst):\n prediction_train = estimator.predict(x_trn)\n # Printing estimator\n print(estimator)\n # Printing train scores\n get_score(prediction_train, y_trn)\n prediction_test = estimator.predict(x_tst)\n # Printing test scores\n print(\"Test\")\n get_score(prediction_test, y_tst)",
"Removing outliers",
"sns.lmplot(x='GrLivArea', y='SalePrice', data=train)\n\ntrain = train[train.GrLivArea < 4500]\n\nsns.lmplot(x='GrLivArea', y='SalePrice', data=train)",
"Splitting to features and labels and deleting variables I don't need",
"# Spliting to features and lables\ntrain_labels = train.pop('SalePrice')\n\nfeatures = pd.concat([train, test], keys=['train', 'test'])\n\n# Deleting features that are more than 50% missing\nfeatures.drop(['PoolQC', 'MiscFeature', 'FireplaceQu', 'Fence', 'Alley'],\n axis=1, inplace=True)\nfeatures.shape",
"Filling missing values",
"# MSZoning NA in pred. filling with most popular values\nfeatures['MSZoning'] = features['MSZoning'].fillna(features['MSZoning'].mode()[0])\n\n# LotFrontage NA in all. I suppose NA means 0\nfeatures['LotFrontage'] = features['LotFrontage'].fillna(features['LotFrontage'].mean())\n\n# MasVnrType NA in all. filling with most popular values\nfeatures['MasVnrType'] = features['MasVnrType'].fillna(features['MasVnrType'].mode()[0])\n\n# MasVnrArea NA in all. filling with mean value\nfeatures['MasVnrArea'] = features['MasVnrArea'].fillna(features['MasVnrArea'].mean())\n\n# BsmtQual, BsmtCond, BsmtExposure, BsmtFinType1, BsmtFinType2\n# NA in all. NA means No basement\nfor col in ('BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2'):\n features[col] = features[col].fillna('NoBSMT')\n\n# BsmtFinSF1 and BsmtFinSF2 NA in pred. I suppose NA means 0\nfeatures['BsmtFinSF1'] = features['BsmtFinSF1'].fillna(0) \nfeatures['BsmtFinSF2'] = features['BsmtFinSF2'].fillna(0) \n \n# BsmtFullBath and BsmtHalfBath NA in all. filling with most popular value\nfeatures['BsmtFullBath'] = features['BsmtFullBath'].fillna(features['BsmtFullBath'].median())\nfeatures['BsmtHalfBath'] = features['BsmtHalfBath'].fillna(features['BsmtHalfBath'].median())\n\n# BsmtUnfSF NA in all. Filling with mean value\nfeatures['BsmtUnfSF'] = features['BsmtUnfSF'].fillna(features['BsmtUnfSF'].mean())\n\n# Exterior1st and Exterior2nd NA in all. filling with most popular value\nfeatures['Exterior1st'] = features['Exterior1st'].fillna(features['Exterior1st'].mode()[0])\nfeatures['Exterior2nd'] = features['Exterior2nd'].fillna(features['Exterior2nd'].mode()[0])\n\n# Functional NA in all. filling with most popular value\nfeatures['Functional'] = features['Functional'].fillna(features['Functional'].mode()[0])\n\n# TotalBsmtSF NA in pred. I suppose NA means 0\nfeatures['TotalBsmtSF'] = features['TotalBsmtSF'].fillna(0)\n\n# Electrical NA in pred. filling with most popular values\nfeatures['Electrical'] = features['Electrical'].fillna(features['Electrical'].mode()[0])\n\n# KitchenQual NA in pred. filling with most popular values\nfeatures['KitchenQual'] = features['KitchenQual'].fillna(features['KitchenQual'].mode()[0])\n\n# GarageArea NA in all. NA means no garage so 0\nfeatures['GarageArea'] = features['GarageArea'].fillna(0.0)\n\n# GarageType, GarageFinish, GarageQual NA in all. NA means No Garage\nfor col in ('GarageType', 'GarageFinish', 'GarageQual', 'GarageQual', 'GarageCond'):\n features[col] = features[col].fillna('NoGRG')\n\n# GarageCars NA in pred. I suppose NA means 0\nfeatures['GarageCars'] = features['GarageCars'].fillna(0.0)\n\n# SaleType NA in pred. filling with most popular values\nfeatures['SaleType'] = features['SaleType'].fillna(features['SaleType'].mode()[0])\n\n# Utilities NA in all. filling with most popular value\nfeatures['Utilities'] = features['Utilities'].fillna(features['Utilities'].mode()[0])\n\n# Adding total sqfootage feature and removing Basement, 1st and 2nd floor features\nfeatures['TotalSF'] = features['TotalBsmtSF'] + features['1stFlrSF'] + features['2ndFlrSF']\nfeatures.drop(['TotalBsmtSF', '1stFlrSF', '2ndFlrSF', 'GarageYrBlt'], axis=1, inplace=True)\n\nfeatures.shape",
"Log transformation",
"# Our SalesPrice is skewed right (check plot below). I'm logtransforming it. \nax = sns.distplot(train_labels)\n\n## Log transformation of labels\ntrain_labels = np.log(train_labels)\n\n## Now it looks much better\nax = sns.distplot(train_labels)",
"Converting categorical features with order to numerical\nConverting categorical variables with choices: Ex, Gd, TA, FA and Po\ndef cat2numCondition(x):\n if x == 'Ex':\n return 5\n if x == 'Gd':\n return 4\n if x == 'TA':\n return 3\n if x == 'Fa':\n return 2\n if x == 'Po':\n return 1\n return -1\nfeatures.shape\ncols = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond', 'HeatingQC',\n 'KitchenQual', 'GarageQual', 'GarageCond']\nfor col in cols:\n features[col+'_num'] = features[col].apply(cat2numCondition)\n features.pop(col)\nfeatures.shape\nConverting categorical condition: Gd, Av, Mn, No\ndef cat2numBsmnt(x):\n if x == 'Gd':\n return 3\n if x == 'Av':\n return 2\n if x == 'Mn':\n return 1\n if x == 'No':\n return 0\n return -1\nfeatures['BsmtExposure_num'] = features['BsmtExposure'].apply(cat2numBsmnt)\nfeatures.pop('BsmtExposure')\nfeatures.shape\nConverting categorical values: GLQ, ALQ, BLQ, Rec, LwQ, Unf\n''''\ndef cat2numQual(x):\n if x == 'GLQ':\n return 5\n if x == 'ALQ':\n return 4\n if x == 'BLQ':\n return 3\n if x == 'Rec':\n return 2\n if x == 'LwQ':\n return 1\n if x == 'Unf':\n return 0\n return -1\n ''''\ncols = ['BsmtFinType1', 'BsmtFinType2']\nfor col in cols:\n features[col+'_num'] = features[col].apply(cat2numCondition)\n features.pop(col)\nfeatures.shape",
"def num2cat(x):\n return str(x)\n\nfeatures['MSSubClass_str'] = features['MSSubClass'].apply(num2cat)\nfeatures.pop('MSSubClass')\nfeatures.shape",
"Converting categorical features to binary",
"# Getting Dummies from all other categorical vars\nfor col in features.dtypes[features.dtypes == 'object'].index:\n for_dummy = features.pop(col)\n features = pd.concat([features, pd.get_dummies(for_dummy, prefix=col)], axis=1)\n\nfeatures.shape\n\nfeatures.head()",
"Overfitting columns",
"#features.drop('MSZoning_C (all)',axis=1)",
"Splitting train and test features",
"### Splitting features\ntrain_features = features.loc['train'].drop('Id', axis=1).select_dtypes(include=[np.number]).values\ntest_features = features.loc['test'].drop('Id', axis=1).select_dtypes(include=[np.number]).values",
"Splitting to train and validation sets",
"### Splitting\nx_train, x_test, y_train, y_test = train_test_split(train_features,\n train_labels,\n test_size=0.1,\n random_state=200)",
"Modeling\n1. Gradient Boosting Regressor",
"GBR = GradientBoostingRegressor(n_estimators=12000,\n learning_rate=0.05, max_depth=3, max_features='sqrt',\n min_samples_leaf=15, min_samples_split=10, loss='huber')\n\nGBR.fit(x_train, y_train)\n\ntrain_test(GBR, x_train, x_test, y_train, y_test)\n\n# Average R2 score and standart deviation of 5-fold cross-validation\nscores = cross_val_score(GBR, train_features, train_labels, cv=5)\nprint(\"Accuracy: %0.2f (+/- %0.2f)\" % (scores.mean(), scores.std() * 2))",
"2. LASSO regression",
"lasso = make_pipeline(RobustScaler(), Lasso(alpha =0.0005, random_state=1))\n\nlasso.fit(x_train, y_train)\n\ntrain_test(lasso, x_train, x_test, y_train, y_test)\n\n# Average R2 score and standart deviation of 5-fold cross-validation\nscores = cross_val_score(lasso, train_features, train_labels, cv=5)\nprint(\"Accuracy: %0.2f (+/- %0.2f)\" % (scores.mean(), scores.std() * 2))",
"3. Elastic Net Regression",
"ENet = make_pipeline(RobustScaler(), ElasticNet(alpha=0.0005, l1_ratio=.9, random_state=3))\n\nENet.fit(x_train, y_train)\n\ntrain_test(ENet, x_train, x_test, y_train, y_test)\n\n# Average R2 score and standart deviation of 5-fold cross-validation\nscores = cross_val_score(ENet, train_features, train_labels, cv=5)\nprint(\"Accuracy: %0.2f (+/- %0.2f)\" % (scores.mean(), scores.std() * 2))",
"Averaging models",
"# Retraining models on all train data\nGBR.fit(train_features, train_labels)\nlasso.fit(train_features, train_labels)\nENet.fit(train_features, train_labels)\n\ndef averaginModels(X, train, labels, models=[]):\n for model in models:\n model.fit(train, labels)\n predictions = np.column_stack([\n model.predict(X) for model in models\n ])\n return np.mean(predictions, axis=1)\n\ntest_y = averaginModels(test_features, train_features,\n train_labels, [GBR, lasso, ENet])\ntest_y = np.exp(test_y)",
"Submission",
"test_id = test.Id\ntest_submit = pd.DataFrame({'Id': test_id, 'SalePrice': test_y})\ntest_submit.shape\ntest_submit.head()\ntest_submit.to_csv('house_price_pred_avg_gbr_lasso_enet.csv', index=False)",
"History\n\nUsing Gradient boosting regression model: 0.12217\nUsing Random Forest regression mode: 0.14146\nWeighted average of GBR and RF with .75 and .25 weights respectively produces better result: 0.12178\nLink on ensembling and stacking models: https://mlwave.com/kaggle-ensembling-guide/ \nAveraging of 4 models: GBR, RF, lasso and ENet: error: 0.11952\nRemoving outliers and averaging GBR, lasso and ENEt: error: 0.11793\nRetraining final models on all train data: error: 0.11739\nConverted MSSubClass to categorical: error: 0.11660"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ktakagaki/kt-2015-DSPHandsOn
|
MedianFilter/Python/01. Basic Tests Median Filter/basic median filter with window length 16.ipynb
|
gpl-2.0
|
[
"Basic median filter\n2015.10.06 DW KT",
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.backends.backend_pdf import PdfPages\nimport sys\nsys.path.insert(0, 'C:\\Users\\Dominik\\Documents\\GitRep\\kt-2015-DSPHandsOn\\MedianFilter\\Python') #Add a new path with needed .py files\n\nimport functions\nimport gitInformation\n\n%matplotlib inline\n\ngitInformation.printInformation()",
"Testing the median filter with a fixed window length of 16.",
"median = plt.figure(figsize=(30,20))\nfor x in range(1, 5):\n for y in range(1, 6):\n plt.subplot(5, 5, x + (y-1)*4)\n wavenum = (x-1) + (y-1)*4\n functions.medianSinPlot(wavenum, 15)\n plt.suptitle('Median filtered Sine Waves with window length 15', fontsize = 60)\n plt.xlabel((\"Wave number = \"+str((x-1) + (y-1)*4)), fontsize=18)",
"Summary\n\nwith higher wave numbers (n=10), the filter makes the signal even worse with a phase (amplitude) reversal!\na bit of ailiasing, would benefit from more sample points\n\nGraphic Export",
"pp=PdfPages( 'median sin window length 15.pdf' )\npp.savefig( median )\npp.close()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
batfish/pybatfish
|
jupyter_notebooks/Safely refactoring ACLs and firewall rules.ipynb
|
apache-2.0
|
[
"Safely refactoring ACLs and firewall rules\nChanging ACLs or firewall rules (or filters) is one of the riskiest updates to a network. Even a small error can block connectivity for a large set of critical services or open up sensitive resources to the world at large. Earlier notebooks showed how to analyze filters for what they do and do not allow and how to make specific changes in a provably safe manner.\nThis notebook shows how to refactor complex filters in a way that the full impact of refactoring can be understood and analyzed for correctness before refactored filters are pushed to the network. \nOriginal ACL\nWe will use the following ACL as a running example in this notebook. The ACL can be read as a few separate sections:\n\nLine 10: Deny ICMP redirects\nLines 20, 23: Permit BFD traffic on certain blocks\nLines 40-80: Permit BGP traffic\nLines 90-100: Permit DNS traffic a /24 subnet while denying it from a /32 within that\nLines 110-500: Permit or deny IP traffic from certain subnets\nLine 510: Permit ICMP echo reply\nLines 520-840: Deny IP traffic to certain subnets\nLines 850-880: Deny all other types of traffic\n\n(The IP address space in the ACL appears all over the place because it has been anonymized via Netconan. Netconan preserves the super- and sub-prefix relationships when anonymizing IP addresses and prefixes.)",
"# The ACL before refactoring\noriginal_acl = \"\"\"\nip access-list acl\n 10 deny icmp any any redirect\n 20 permit udp 117.186.185.0/24 range 49152 65535 117.186.185.0/24 eq 3784\n 30 permit udp 117.186.185.0/24 range 49152 65535 117.186.185.0/24 eq 3785\n 40 permit tcp 11.36.216.170/32 11.36.216.169/32 eq bgp\n 50 permit tcp 11.36.216.176/32 11.36.216.179/32 eq bgp\n 60 permit tcp 204.150.33.175/32 204.150.33.83/32 eq bgp\n 70 permit tcp 205.248.59.64/32 205.248.59.67/32 eq bgp\n 80 permit tcp 205.248.58.190/32 205.248.58.188/32 eq bgp\n 90 deny udp 10.10.10.42/32 218.8.104.58/32 eq domain\n 100 permit udp 10.10.10.0/24 218.8.104.58/32 eq domain\n 110 deny ip 54.0.0.0/8 any\n 120 deny ip 163.157.0.0/16 any\n 130 deny ip 166.144.0.0/12 any\n 140 deny ip 198.170.50.0/24 any\n 150 deny ip 198.120.0.0/16 any\n 160 deny ip 11.36.192.0/19 any\n 170 deny ip 11.125.64.0/19 any\n 180 permit ip 166.146.58.184/32 any\n 190 deny ip 218.66.57.0/24 any\n 200 deny ip 218.66.56.0/24 any\n 210 deny ip 218.67.71.0/24 any\n 220 deny ip 218.67.72.0/24 any\n 230 deny ip 218.67.96.0/22 any\n 240 deny ip 8.89.120.0/22 any\n 250 deny ip 54.203.159.1/32 any\n 260 permit ip 218.8.104.0/25 any\n 270 permit ip 218.8.104.128/25 any\n 280 permit ip 218.8.103.0/24 any\n 290 deny ip 144.49.45.40/32 any\n 300 deny ip 163.255.18.63/32 any\n 310 deny ip 202.45.130.141/32 any\n 320 deny ip 212.26.132.18/32 any\n 330 deny ip 218.111.16.132/32 any\n 340 deny ip 218.246.165.90/32 any\n 350 deny ip 29.228.179.210/32 any\n 360 deny ip 194.181.135.214/32 any\n 370 deny ip 10.64.90.249/32 any\n 380 deny ip 207.70.46.217/32 any\n 390 deny ip 219.185.241.117/32 any\n 400 deny ip 2.80.3.219/32 any\n 410 deny ip 27.212.145.150/32 any\n 420 deny ip 131.159.53.215/32 any\n 430 deny ip 214.220.213.107/32 any\n 440 deny ip 196.64.84.239/32 any\n 450 deny ip 28.69.250.136/32 any\n 460 deny ip 200.45.87.238/32 any\n 470 deny ip any 11.125.89.32/30\n 480 deny ip any 11.125.89.36/30\n 490 deny ip any 11.125.89.40/30\n 500 deny ip any 11.125.89.44/30\n 510 permit icmp any any echo-reply\n 520 deny ip any 11.36.199.216/30\n 530 deny ip any 11.36.199.36/30\n 540 deny ip any 11.36.199.2/30\n 550 deny ip any 11.36.199.52/30\n 560 deny ip any 11.36.199.20/30\n 570 deny ip any 11.125.82.216/30\n 580 deny ip any 11.125.82.220/32\n 590 deny ip any 11.125.82.36/30\n 600 deny ip any 11.125.82.12/30\n 610 deny ip any 11.125.80.136/30\n 620 deny ip any 11.125.80.141/32\n 630 deny ip any 11.125.87.48/30\n 640 deny ip any 11.125.87.168/30\n 650 deny ip any 11.125.87.173/32\n 660 deny ip any 11.125.90.56/30\n 670 deny ip any 11.125.90.240/30\n 680 deny ip any 11.125.74.224/30\n 690 deny ip any 11.125.91.132/30\n 700 deny ip any 11.125.89.132/30\n 710 deny ip any 11.125.89.12/30\n 720 deny ip any 11.125.92.108/30\n 730 deny ip any 11.125.92.104/32\n 740 deny ip any 11.125.92.28/30\n 750 deny ip any 11.125.92.27/32\n 760 deny ip any 11.125.92.160/30\n 770 deny ip any 11.125.92.164/32\n 780 deny ip any 11.125.92.204/30\n 790 deny ip any 11.125.92.202/32\n 800 deny ip any 11.125.93.192/29\n 810 deny ip any 11.125.95.204/30\n 820 deny ip any 11.125.95.224/30\n 830 deny ip any 11.125.95.180/30\n 840 deny ip any 11.125.95.156/30\n 850 deny tcp any any\n 860 deny icmp any any\n 870 deny udp any any\n 880 deny ip any any\n\"\"\"",
"Compressed ACL\nNow, assume that we want to compress this ACL to make it more manageable. We do the following operations:\n\nMerge the two BFD permit statements on lines 20-30 into one statement using the range directive.\nRemove the BGP session on line 80 because it has been decommissioned\nRemove lines 180 and 250 because they are shadowed by earlier lines and will never match a packet. Such lines can be found via the filterLineReachability question, as shown here.\nMerge pairs of lines (190, 200), (210, 220), and (260, 270) by combining their prefixes into a less specific prefix.\nRemove all deny statements on lines 520-870. They are not needed given the final deny on line 880.\n\nThe result of these actions, which halve the ACL size, is shown below. To enable easy observation of changes, we have preserved the line numbers.",
"compressed_acl = \"\"\"\nip access-list acl\n 10 deny icmp any any redirect\n 20 permit udp 117.186.185.0/24 range 49152 65535 117.186.185.0/24 range 3784 3785\n! 30 MERGED WITH LINE ABOVE \n 40 permit tcp 11.36.216.170/32 11.36.216.169/32 eq bgp\n 50 permit tcp 11.36.216.176/32 11.36.216.179/32 eq bgp\n 60 permit tcp 204.150.33.175/32 204.150.33.83/32 eq bgp\n 70 permit tcp 205.248.59.64/32 205.248.59.67/32 eq bgp\n! 80 DECOMMISSIONED BGP SESSION\n 90 deny udp 10.10.10.42/32 218.8.104.58/32 eq domain\n 100 permit udp 10.10.10.0/24 218.8.104.58/32 eq domain\n 110 deny ip 54.0.0.0/8 any\n 120 deny ip 163.157.0.0/16 any\n 130 deny ip 166.144.0.0/12 any\n 140 deny ip 198.170.50.0/24 any\n 150 deny ip 198.120.0.0/16 any\n 160 deny ip 11.36.192.0/19 any\n 170 deny ip 11.125.64.0/19 any\n! 180 REMOVED UNREACHABLE LINE\n 190 deny ip 218.66.56.0/23 any\n! 200 MERGED WITH LINE ABOVE \n 210 deny ip 218.67.71.0/23 any\n! 220 MERGED WITH LINE ABOVE \n 230 deny ip 218.67.96.0/22 any\n 240 deny ip 8.89.120.0/22 any\n! 250 REMOVED UNREACHABLE LINE\n 260 permit ip 218.8.104.0/24 any\n! 270 MERGED WITH LINE ABOVE\n 280 permit ip 218.8.103.0/24 any\n 290 deny ip 144.49.45.40/32 any\n 300 deny ip 163.255.18.63/32 any\n 310 deny ip 202.45.130.141/32 any\n 320 deny ip 212.26.132.18/32 any\n 330 deny ip 218.111.16.132/32 any\n 340 deny ip 218.246.165.90/32 any\n 350 deny ip 29.228.179.210/32 any\n 360 deny ip 194.181.135.214/32 any\n 370 deny ip 10.64.90.249/32 any\n 380 deny ip 207.70.46.217/32 any\n 390 deny ip 219.185.241.117/32 any\n 400 deny ip 2.80.3.219/32 any\n 410 deny ip 27.212.145.150/32 any\n 420 deny ip 131.159.53.215/32 any\n 430 deny ip 214.220.213.107/32 any\n 440 deny ip 196.64.84.239/32 any\n 450 deny ip 28.69.250.136/32 any\n 460 deny ip 200.45.87.238/32 any\n 470 deny ip any 11.125.89.32/28\n 510 permit icmp any any echo-reply\n! 520-870 REMOVED UNNECESSARY DENIES\n 880 deny ip any any\n\"\"\"",
"The challenge for us is to find out if and how this compressed ACL differs from the original. That is, is there is traffic that is treated differently by the two ACLs, and if so, which lines are responsible for the difference.\nThis task is difficult to get right through manual reasoning alone, which is why we developed the compareFilters question in Batfish.\nComparing filters\nWe can compare the two ACLs above as follows. To initialize snapshots, we will use Batfish's init_snapshot_from_text function which creates a snapshot with a single device who configuration is the provided text. The analysis shown below can be done even when the filters are embedded within bigger device configurations.",
"# Import packages \n%run startup.py\nbf = Session(host=\"localhost\")\n\n# Initialize a snapshot with the original ACL\noriginal_snapshot = bf.init_snapshot_from_text(\n original_acl, \n platform=\"cisco-nx\", \n snapshot_name=\"original\", \n overwrite=True)\n\n# Initialize a snapshot with the compressed ACL\ncompressed_snapshot = bf.init_snapshot_from_text(\n compressed_acl, \n platform=\"cisco-nx\", \n snapshot_name=\"compressed\", \n overwrite=True)\n\n# Now, compare the two ACLs in the two snapshots\nanswer = bf.q.compareFilters().answer(snapshot=compressed_snapshot, reference_snapshot=original_snapshot)\nshow(answer.frame())",
"The compareFilters question compares two filters and returns pairs of lines, one from each filter, that match the same flow(s) but treat them differently. If it reports no output, the filters are guaranteed to be identical. The analysis is exhaustive and considers all possible flows.\nAs we can see from the output above, our compressed ACL is not the same as the original one. In particular, line 210 of the compressed ACL will deny some flows that were being permitted by line 510 of the original; and line 510 of the compressed ACL will permit some flows that were being denied by line 220 of the original ACL. Because the permit statements correspond to ICMP traffic, we can tell that the traffic treated by the two filters is ICMP. To narrow learn specific source and destination IPs that are impacted, one may run the searchFilters question, as shown here. \nBy looking at the output above, we can immediately understand the difference: \n\nThe first line is showing that the compressed ACL is denying some traffic on line 210 (with index 16) that the original ACL was permitting via line 510, and the compressed ACL is permitting some traffic on line 510 that the original ACL was denying via line 220. \n\nIt turns out that the address space merger we did for lines 210 and 220 in the original ACL, where we combined 218.67.72.0/24 and 218.67.71.0/24 into 218.67.71.0/23, was not correct. The other similar mergers of 218.66.57.0/24 and 218.66.56.0/24 into 218.66.56.0/23 and of 218.8.104.0/25 and 218.8.104.128/25 into 218.8.104.0/24 were correct.\n\nThe third line is showing that the compressed ACL is denying some traffic at the end of the ACL that the original ACL was permitting via line 80. This is an expected change of decommissioning the BGP session on line 80. \n\nIt is not always the case that refactoring is semantics preserving. Where compareFilters helps is succinctly enumerating all differences. Engineers can look at the differences and decide if the refactored filter meets their intent.\nSplitting ACLs\nCompressing large ACLs is one type of refactoring engineers do; another one is splitting a large ACL into multiple smaller ACLs and composing them on the same device or spreading across multiple devices in the network. Smaller ACLs are easier to maintain and evolve. However, the split operation is risky. We may forget to include in the smaller ACLs some protections that exist in the original ACL. We show how such splits can be safely done using Batfish.\nSuppose we want to split the compressed ACL above into multiple smaller ACLs that handle different concerns. So, we should have different ACLs for different types of traffic and different ACLs for different logical groups of nodes in the network. The result of such splitting is shown below. For ease of exposition, we have retained the line numbers from the original ACL and mimic a scenario in which all ACLs live on the same device.",
"smaller_acls = \"\"\"\nip access-list deny-icmp-redirect\n 10 deny icmp any any redirect\n\nip access-list permit-bfd\n 20 permit udp 117.186.185.0/24 range 49152 65535 117.186.185.0/24 range 3784 3785\n\nip access-list permit-bgp-session\n 40 permit tcp 11.36.216.170/32 11.36.216.169/32 eq bgp\n 50 permit tcp 11.36.216.176/32 11.36.216.179/32 eq bgp\n 60 permit tcp 204.150.33.175/32 204.150.33.83/32 eq bgp\n 70 permit tcp 205.248.59.64/32 205.248.59.67/32 eq bgp\n\nip access-list acl-dns\n 90 deny udp 10.10.10.42/32 218.8.104.58/32 eq domain\n 100 permit udp 10.10.10.0/24 218.8.104.58/32 eq domain\n\nip access-list deny-untrusted-sources-group1\n 110 deny ip 54.0.0.0/8 any\n 120 deny ip 163.157.0.0/16 any\n 130 deny ip 166.144.0.0/12 any\n 140 deny ip 198.170.50.0/24 any\n 150 deny ip 198.120.0.0/16 any\n 160 deny ip 11.36.192.0/19 any\n\nip access-list deny-untrusted-sources-group2\n 160 deny ip 11.36.192.0/20 any\n 190 deny ip 218.66.56.0/23 any\n 210 deny ip 218.67.71.0/23 any\n 230 deny ip 218.67.96.0/22 any\n 240 deny ip 8.89.120.0/22 any\n \nip access-list permit-trusted-sources\n 260 permit ip 218.8.104.0/24 any\n 280 permit ip 218.8.103.0/24 any\n\nip access-list deny-untrusted-sources-group3\n 290 deny ip 144.49.45.40/32 any\n 300 deny ip 163.255.18.63/32 any\n 310 deny ip 202.45.130.141/32 any\n 320 deny ip 212.26.132.18/32 any\n 300 deny ip 218.111.16.132/32 any\n 340 deny ip 218.246.165.90/32 any\n 350 deny ip 29.228.179.210/32 any\n 360 deny ip 194.181.135.214/32 any\n 370 deny ip 10.64.90.249/32 any\n 380 deny ip 207.70.46.217/32 any\n 390 deny ip 219.185.241.117/32 any\n \nip access-list deny-untrusted-sources-group4\n 400 deny ip 2.80.3.219/32 any\n 410 deny ip 27.212.145.150/32 any\n 420 deny ip 131.159.53.215/32 any\n 430 deny ip 214.220.213.107/32 any\n 440 deny ip 196.64.84.239/32 any\n 450 deny ip 28.69.250.136/32 any\n 460 deny ip 200.45.87.238/32 any\n\nip access-list acl-tail\n 470 deny ip any 11.125.89.32/28\n 510 permit icmp any any echo-reply\n 880 deny ip any any\n\"\"\"",
"Given the split ACLs above, one analysis may be to figure out if each untrusted source subnet was included in a smaller ACL. Otherwise, we have lost protection that was present in the original ACL. We can accomplish this analysis via the findMatchingFilterLines question, as shown below. \nOnce we are satisfied with analysis of filters, for an end-to-end safety guarantee, we should also analyze if there are new flows that the network will allow (or disallow) after the change. Such an analysis can be done via the differentialReachability question, as shown here.",
"# Initialize a snapshot with the smaller ACLs\nsmaller_snapshot = bf.init_snapshot_from_text(\n smaller_acls, \n platform=\"cisco-nx\", \n snapshot_name=\"smaller\", \n overwrite=True)\n\n# All untrusted subnets\nuntrusted_source_subnets = [\"54.0.0.0/8\", \n \"163.157.0.0/16\", \n \"166.144.0.0/12\", \n \"198.170.50.0/24\", \n \"198.120.0.0/16\", \n \"11.36.192.0/19\", \n \"11.125.64.0/19\", \n \"218.66.56.0/24\", \n \"218.66.57.0/24\", \n \"218.67.71.0/23\", \n \"218.67.96.0/22\", \n \"8.89.120.0/22\"\n ]\n\nfor subnet in untrusted_source_subnets:\n # Find which ACLs match traffic from this source subnet\n answer = bf.q.findMatchingFilterLines(\n headers=HeaderConstraints(srcIps=subnet),\n filters=\"/deny-untrusted/\").answer(snapshot=smaller_snapshot)\n\n # Each source subnet should match exactly one ACL\n af = answer.frame()\n if len(af) == 1:\n print(\"{} .... OK\".format(subnet))\n elif len(af) == 0:\n print(\"{} .... ABSENT\".format(subnet))\n else:\n print(\"{} .... Multiply present\".format(subnet))\n show(af)",
"In the code above, we first enumerate all untrusted subnets in the network. The granularity of this specification need not be the same as that in the ACL. For instance, we enumerate 218.66.56.0/24 and 218.66.57.0/24 as untrusted subnets but the ACL has a less specific prefix 218.66.56.0/23. Batfish understands such relationships and provides an accurate analysis that is not possible with simple string matching.\nThe for loop above uses the findMatchingFilterLines question to find out which lines across all ACLs whose names contain \"deny-untrusted\" will match packets starting the the specified subnet. Our expectation is that each subnet should match exactly one line in exactly one ACL, and the output shows \"OK\" against such subnets. It shows \"Absent\" for subnets that do not match any line and shows the multiple matching lines for subnets where that happens.\nWe see that during the split above, we ended up matching the subnet 11.36.192.0/19 twice, once as a /19 in ACL deny-untrusted-sources-group1 and then as /20 in ACL deny-untrusted-sources-group2. More dangerously, we completely forgot to match the 11.125.64.0/19, which will open a security hole in the network if these smaller ACLs were applied.\nSummary\nIn this notebook, we showed how to use the compareFilters and findMatchingFilterLines questions of Batfish to safely refactor complex filters. \n\ncompareFilters analyzes the original and revised filter to enumerate all cases that will treat any flow differently. \nfindMatchingFilterLines enumerates all lines across all specified filters that match the given space of flows.\n\nFor additional ways to analyze filter using Batfish, see the \"Analyzing ACLs and Firewall Rules\" and the \"Provably Safe ACL and Firewall Changes\" notebooks.\n\nGet involved with the Batfish community\nJoin our community on Slack and GitHub."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
turbomanage/training-data-analyst
|
quests/endtoendml/labs/2_sample.ipynb
|
apache-2.0
|
[
"<h1> 2. Creating a sampled dataset </h1>\n\nIn this notebook, you will implement:\n<ol>\n<li> Sampling a BigQuery dataset to create datasets for ML\n<li> Preprocessing with Pandas\n</ol>",
"# TODO: change these to reflect your environment\nBUCKET = 'cloud-training-demos-ml'\nPROJECT = 'cloud-training-demos'\nREGION = 'us-central1'\n\nimport os\nos.environ['BUCKET'] = BUCKET\nos.environ['PROJECT'] = PROJECT\nos.environ['REGION'] = REGION\n\n%%bash\nif ! gsutil ls | grep -q gs://${BUCKET}/; then\n gsutil mb -l ${REGION} gs://${BUCKET}\nfi",
"<h2> Create ML dataset by sampling using BigQuery </h2>\n<p>\nSample the BigQuery table publicdata.samples.natality to create a smaller dataset of approximately 10,000 training and 3,000 evaluation records. Restrict your samples to data after the year 2000.\n</p>",
"# TODO",
"Preprocess data using Pandas\nCarry out the following preprocessing operations:\n\nAdd extra rows to simulate the lack of ultrasound. \nChange the plurality column to be one of the following strings:\n\n<pre>\n['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)']\n</pre>\n\nRemove rows where any of the important numeric fields are missing.",
"## TODO",
"<h2> Write out </h2>\n<p>\nIn the final versions, we want to read from files, not Pandas dataframes. So, write the Pandas dataframes out as CSV files. \nUsing CSV files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.\n\nModify this code appropriately (i.e. change the name of the Pandas dataframe to reflect your variable names)",
"traindf.to_csv('train.csv', index=False, header=False)\nevaldf.to_csv('eval.csv', index=False, header=False)\n\n%%bash\nwc -l *.csv\nhead *.csv\ntail *.csv",
"Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
aaossa/Dear-Notebooks
|
More/DefaultParametersInPython_ES.ipynb
|
gpl-3.0
|
[
"El \"problema\"\nEs posible que al usar funciones con parámetros por defecto se encuentren con cierto comportamiento inesperado o poco intuitivo de Python. Por estas cosas siempre hay que revisar el código, conocerlo lo mejor posible y saber responder cuando las cosas no funcionan como uno espera.\nVeamos el comportamiento de los parametros por defecto en funciones",
"def funcion(lista=[]):\n lista.append(1)\n print(\"La lista vale: {}\".format(lista))",
"Si llamamos a la función una vez...",
"funcion()",
"... todo funciona como lo suponemos, pero y si probamos otra vez...",
"funcion()\nfuncion()",
"... ok? No funciona como lo supondriamos.\nEsto también podemos extenderlo a clases, donde es comun usar parámetros por defecto:",
"class Clase:\n\n def __init__(self, lista=[]):\n self.lista = lista\n self.lista.append(1)\n print(\"Lista de la clase: {}\".format(self.lista))\n\n# Instanciamos dos objetos\nA = Clase()\nB = Clase()\n\n# Modificamos el parametro en una\nA.lista.append(5)\n\n# What??\nprint(A.lista)\nprint(B.lista)",
"Investigando nuestro código\nVeamos un poco qué está pasando en nuestro código:",
"# Instanciemos algunos objetos\nA = Clase()\nB = Clase()\nC = Clase(lista=[\"GG\"]) # Usaremos esta isntancia como control\n\nprint(\"\\nLos objetos son distintos!\")\nprint(\"id(A): {} \\nid(B): {} \\nid(C): {}\".format(id(A), id(B), id(C)))\n\nprint(\"\\nPero la lista es la misma para A y para B :O\")\nprint(\"id(A.lista): {} \\nid(B.lista): {} \\nid(C.lista): {}\".format(id(A.lista), id(B.lista), id(C.lista)))",
"¿Qué está pasando? D:\nEn Python, las funciones son objetos del tipo callable, es decir, que son llamables, ejecutan una operación.",
"# De hecho, tienen atributos...\n\ndef funcion(lista=[]):\n lista.append(5)\n \n# En la funcion \"funcion\"...\nprint(\"{}\".format(funcion.__defaults__))\n\n# ... si la invocamos...\nfuncion()\n\n# ahora tenemos...\nprint(\"{}\".format(funcion.__defaults__))\n\n\n# Si vemos como quedo el metodo \"__init__\" de la clase Clase...\nprint(\"{}\".format(Clase.__init__.__defaults__))",
"El código que define a función es evaluado una vez y dicho valor evaluado es el que se usa en cada llamado posterior. Por lo tanto, al modificar el valor de un parámetro por defecto que es mutable (list, dict, etc.) se modifica el valor por defecto para el siguiente llamado.\n¿Cómo evitar esto?\nUna solución simple es usar None como el valor predeterminado para los parámetros por defecto. Y otra solución es la declaración de variables condicionales:",
"class Clase:\n \n def __init__(self, lista=None):\n # Version \"one-liner\":\n self.lista = lista if lista is not None else list()\n \n # En su version extendida:\n if lista is not None:\n self.lista = lista\n else:\n self.lista = list()",
"Importante: Esto no es un bug/error/magia negra... Es Python. En Python todo es un objeto, incluso las funciones...\nRecursos sobre el tema:\n\nStackOverflow - “Least Astonishment” in Python: The Mutable Default Argument [link]\nEffbot.org - Default Parameter Values in Python [link]\nPython Docs - Compound statements > Function definitions [link]\nPython Docs - Data model > The standard type hierarchy [link]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
readywater/caltrain-predict
|
06initial_analysis.ipynb
|
mit
|
[
"import sys\nimport re\nimport time\nimport datetime\nimport pandas as pd\nimport numpy as np\nimport func\n# inline plot\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndf = pd.read_csv(\"data/merged_concat_final.csv\",sep='\\t',error_bad_lines=False)\ndel df['Unnamed: 0']\nprint df.shape\n\ndf.columns.values\n\ndf[['del_min','del_med','del_maj','del_cat']].sum()\n\ndf.set_index('timestamp')\ndf['timestamp'] = pd.to_datetime(df['timestamp'],format=\"%Y-%m-%d %H:%M:%S\")\n\ndef delay_to_ordinal(r):\n v = 0 #no delay\n v = 1 if r['del_min'] == 1 else v\n v = 2 if r['del_med'] == 1 else v\n v = 3 if r['del_maj'] == 1 else v\n v = 4 if r['del_cat'] == 1 else v\n return v\n\ndf['ord_del'] = df.apply(lambda x:delay_to_ordinal(x),axis=1)\n\ndef days_to_ordinal(r):\n v = 0 # Sunday\n v = 1 if r['d_monday'] == 1 else v\n v = 2 if r['d_tuesday'] == 1 else v\n v = 3 if r['d_wednesday'] == 1 else v\n v = 4 if r['d_thursday'] == 1 else v\n v = 5 if r['d_friday'] == 1 else v\n v = 5 if r['d_saturday'] == 1 else v\n return v\n\ndf['ord_weekdays'] = df.apply(lambda x:days_to_ordinal(x),axis=1)\n\nonly_delay = df[(df['is_delay']==1)]\n\n# df.plot.scatter(x=df['timestamp'],y=df['ord_del'],figsize=[15,6], alpha='0.2')\n\n# only_delay[['del_min','del_med','del_maj','del_cat','is_bullet','is_limited']].plot.hist(color='k',alpha=0.5,stacked=True,bins=4,figsize=[12,6])\n\nprint \"relative to delay\"\nprint (df[['is_delay','del_min','del_med','del_maj','del_cat']].sum()/float(df['is_delay'].sum()))*100 , '%'\nprint \"Relative to total\"\nprint (df[['is_delay','del_min','del_med','del_maj','del_cat']].sum()/float(len(df)))*100 , '%'\n\n# Train IDs swapped into cat variables and concat into main dataset\ntrain_id_dummies = pd.get_dummies(df['train_id'],prefix='tid')\ntrain_id_dummies.shape\ntrain_id_dummies.columns.values\ndel train_id_dummies['tid_101.0'] # Delete as base var\ntid_col = train_id_dummies.columns.values\ndf = pd.concat([df, train_id_dummies], axis=1)",
"Pick one of these to explore re: below models",
"# Look only at train IDs\nfeatures = df.columns.values\nX = train_id_dummies\ny = df['ord_del']\n\n# Non Delay Specific\nfeatures = df.columns.values\ntarget_cols = ['temp','precipiation',\n 'visability','windspeed','humidity','cloudcover',\n 'is_bullet','is_limited','t_northbound',\n 'd_monday','d_tuesday','d_wednesday','d_thursday','d_friday','d_saturday']\nX = df[target_cols]\n# del X['is_delay']\n# del X['tweet_id']\n# X['timestamp'] = X['timestamp'].apply(lambda x: (np.datetime64(x).astype('uint64') / 1e6).astype('uint32'))\n# y = df['ord_del']\ny = df['is_delay']\n\n# Including train IDs\nfeatures = df.columns.values\ntarget_cols = ['temp','precipiation',\n 'visability','windspeed','humidity','cloudcover',\n 'is_bullet','is_limited','t_northbound',\n 'd_monday','d_tuesday','d_wednesday','d_thursday','d_friday','d_saturday'] + list(tid_col)\nX = df[target_cols]\n# del X['is_delay']\n# del X['tweet_id']\n# X['timestamp'] = X['timestamp'].apply(lambda x: (np.datetime64(x).astype('uint64') / 1e6).astype('uint32'))\n# y = df['ord_del']\ny = df['is_delay']\n\n# If there IS a delay...\nfeatures = df.columns.values\nX = only_delay[['is_backlog', 'is_canceled',\n 'is_passing', 'is_accident', 'is_medical', 'is_mechanical',\n 'is_customer', 'is_event']]\n# del X['is_delay']\n# del X['tweet_id']\n# X['timestamp'] = X['timestamp'].apply(lambda x: (np.datetime64(x).astype('uint64') / 1e6).astype('uint32'))\ny = df['ord_del']\n\n# X['timestamp'] = X['timestamp'].apply(lambda x:int(x))\n# X['stop_pa'] = X['stop_pa'].apply(lambda x:int(x))\n# X['train_id'] = X['train_id'].apply(lambda x:int(x))\nX['t_northbound'] = X['t_northbound'].apply(lambda x:int(x))\n\nX['cloudcover'] = X['cloudcover'].fillna(X['cloudcover'].mean())\n\n# X.isnull().sum()\n\n# df.plot.scatter(x='timestamp',y='del_ord',figsize=[15,5])\n\nX_y = only_delay[['is_delay','ord_del','temp','precipiation',\n 'visability','windspeed','humidity','cloudcover',\n 'is_bullet','is_limited','t_northbound',\n 'd_monday','d_tuesday','d_wednesday','d_thursday','d_friday','d_saturday']]\ncr = X_y.corr()\nnp.round(cr, 4)\n# \n\nX_y.sum()",
"Run Decision Trees, Prune, and consider False Positives",
"from sklearn.tree import DecisionTreeClassifier\nTreeClass = DecisionTreeClassifier(\n max_depth = 2,\n min_samples_leaf = 5)\nTreeClass.fit(X,y)\n\nfrom sklearn.cross_validation import cross_val_score\nscores = cross_val_score(TreeClass, X, y, cv=10)\nprint(scores.mean()) # Score = More is better, error is 1-score\n\nfrom sklearn.metrics import confusion_matrix\ny_hat = TreeClass.predict(X)\ncmat = confusion_matrix(y, y_hat)\nprint cmat\n\nfrom sklearn.metrics import roc_curve, auc,roc_auc_score\ny_hat_probability = TreeClass.predict_proba(X).T[1] \nprint(y_hat_probability)\nprint(roc_auc_score(y, y_hat_probability))\nvals = roc_curve(y, y_hat_probability) \n\nRoc_DataFrame = pd.DataFrame({'False_Positive_Rate':vals[0],'True_Positive_Rate':vals[1]})\nRoc_DataFrame.plot(x = 'False_Positive_Rate' , y = 'True_Positive_Rate' ) ",
"As a check, consider Feature selection",
"from sklearn import feature_selection\npvals = feature_selection.f_regression(X,y)[1] \nsorted(zip(X.columns.values,np.round(pvals,4)),key=lambda x:x[1],reverse=True)\n\nX_lr=df[['windspeed','t_northbound','precipiation','d_friday']]\n# localize your search around the maximum value you found\nc_list = np.logspace(-1,1,21) \nc_index = np.linspace(-1,1,21)\n#C is just the inverse of Lambda - the smaller the C - the stronger the\n#regulatization. The smaller C's choose less variables\ncv_scores = []\nfor c_score in c_list:\n lm = LogisticRegression(C = c_score, penalty = \"l1\")\n cv_scores.append(cross_val_score(lm,X,y,cv=10).mean())\n\n\nC_Choice_df = pd.DataFrame({'cv_scores': cv_scores ,'Log_C': c_index })\nC_Choice_df.plot(x ='Log_C',y = 'cv_scores' )\n# it sounds like our best choice is C = -0.1 (we chose the most restrictive option)",
"Find the Principal Components",
"X = only_delay[['temp','precipiation',\n 'visability','windspeed','humidity','cloudcover',\n 'is_bullet','is_limited','t_northbound',\n 'd_monday','d_tuesday','d_wednesday','d_thursday','d_friday','d_saturday']]\n\nfrom sklearn.decomposition import PCA\nclf = PCA(.99)\nX_trans = clf.fit_transform(X)\nX_trans.shape\n\nprint \"Exp Var ratio:\",clf.explained_variance_ratio_\nprint \"PCA Score:\",clf.score(X,y)\n\n\nplt.scatter(X_trans[:, 0], X_trans[:, 1],c=y, alpha=0.2)\nplt.colorbar();\n\nfrom sklearn.linear_model import LogisticRegression\nlm = LogisticRegression()\nlm.fit(X_trans,y)\n\nprint(lm.intercept_)\nprint(lm.coef_)\n\nfrom sklearn.cross_validation import cross_val_score\nprint(cross_val_score(lm,X_trans,y,cv=10).mean()) \nMisClassificationError = 1 - (cross_val_score(lm,X_trans,y,cv=10).mean())\nprint(MisClassificationError)",
"Seeing if I can get anything interesting out of KNN given above\nLecture 10, look at Confusion matrix and ROC curve. Fiddle with the thresholds and AUC",
"print df['windspeed'].max()\nprint df['windspeed'].min()\n\ndf['windspeed_st'] = df['windspeed'].apply(lambda x:x/15.0) # Ballparking \n\n\nX_reg = df[['precipiation','d_friday','t_northbound','windspeed_st']]\ny_reg = df['is_delay']\n\nfrom sklearn import cross_validation\nfrom sklearn import neighbors, metrics\nkf = cross_validation.KFold(len(X_reg), n_folds = 10, shuffle = True) #10 fold CV\nScore_KNN_CV = []\nRangeOfK = range(1,20) \nscores = []\nfor k in RangeOfK:\n knn = neighbors.KNeighborsClassifier(n_neighbors=k, weights='uniform')\n scores = []\n for train_index, test_index in kf: \n knn.fit(X_reg.iloc[train_index], y_reg.iloc[train_index])\n scores.append(knn.score(X_reg.iloc[test_index],y_reg.iloc[test_index]))\n Score_KNN_CV.append(np.mean(scores))\n\nScore_KNN_CV_df = pd.DataFrame({'Score_KNN_CV': Score_KNN_CV ,'K': RangeOfK })\nScore_KNN_CV_df.plot(x = 'K',y = 'Score_KNN_CV',figsize=[15,5])",
"Cross Validation and Random Forest",
"from sklearn.ensemble import RandomForestClassifier\nfrom sklearn.cross_validation import cross_val_score\nRFClass = RandomForestClassifier(n_estimators = 10000, \n max_features = 4, # You can set it to a number or 'sqrt', 'log2', etc\n min_samples_leaf = 5,\n oob_score = True)\nRFClass.fit(X,y)\nprint(RFClass.oob_score_)\nscores = cross_val_score(RFClass, X, y, cv=10)\nprint(scores.mean())\n#out of bag error = 25% , CV_error is 35%\n\nRFClass.fit(X,y)\nImportanceDataFrame = pd.DataFrame({'feature':X.columns.values, 'importance':RFClass.feature_importances_})\nImportanceDataFrame.sort_values(by = ['importance'],ascending = 0)\n\nDepth_Choice_df = pd.DataFrame({'cv_scores': score,'Number of Features': Features})\nDepth_Choice_df.plot(x ='Number of Features',y = 'cv_scores')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
joelgrus/codefellows-data-science-week
|
ipynb/pandas bikes.ipynb
|
unlicense
|
[
"Standard pandas imports",
"from pandas import DataFrame, Series\nimport pandas as pd\nimport numpy as np",
"We'll be working with data from the bike rental setup:",
"weather = pd.read_table('daily_weather.tsv', parse_dates=['date'])\nstations = pd.read_table('stations.tsv')\nusage = pd.read_table('usage_2012.tsv', parse_dates=['time_start', 'time_end'])",
"Let's inspect it",
"usage.describe()",
"That must be the only numeric value, let's try again:",
"usage\n\nweather.describe()\n\nweather",
"Wow, those season_desc values look wrong! We'll come back to that in a minute.\nNow let's look at some of the columns",
"weather.columns\n\nweather['is_holiday'].value_counts()\n\nweather['temp'].describe()",
"Sanity checking the weather data",
"weather.groupby(['season_code', 'season_desc'])['date'].agg([min, max])",
"The season codes look OK (1 = Winter, 2 = Spring ...), but the season descriptions look wrong. Let's fix them:",
"weather.season_desc = weather.season_desc.map({'Spring' : 'Winter', 'Winter' : 'Fall', 'Fall' : 'Summer', 'Summer' : 'Spring' })\n\nweather.season_desc",
"That looks right, but we would rather the date be our index:",
"weather.index = pd.DatetimeIndex(weather['date'])",
"Let's start asking some questions\nHow many unique bikes are there?",
"usage.columns\n\nusage['bike_id'].describe()\n\nusage['bike_id'].nunique()",
"How many unique stations are there?",
"stations.shape\n\nlen(stations)",
"How many of those actually appear in our usage data?",
"usage['station_start'].nunique()\n\nusage['station_end'].nunique()\n\nusage['station_start'].unique()",
"Let's look for correlations in the weather:",
"weather[['temp', 'subjective_temp', 'humidity', 'windspeed', 'total_riders']].corr()",
"When temperature is higher, there are more riders, and when windspeed is higher there are fewer riders. Maybe these trends are different at different times of the year?",
"weather[weather.season_desc=='Winter'][['temp', 'subjective_temp', 'humidity', 'windspeed', 'total_riders']].corr()\n\nweather[weather.season_desc=='Summer'][['temp', 'subjective_temp', 'humidity', 'windspeed', 'total_riders']].corr()",
"So in winter higher temperatures mean more riders, but in summer higher temperatures mean fewer riders. This makes a good bit of sense.\nStation Success\nWe'll measure a station's success by its average number of daily rentals. We can take a couple of different approaches. \nThe first is to use groupby. We'll start by adding a date column:",
"usage['date'] = usage.time_start.dt.date\n\n\nusage.groupby('date')\n\nstation_counts = usage.groupby(['station_start']).size() / 366\nstation_counts.sort()\nstation_counts",
"We can also use a pivot table:",
"pivot = pd.pivot_table(usage, index='date', columns='station_start', values='bike_id', aggfunc=len, fill_value=0)\npivot\n\navg_daily_trips = pivot.mean()\navg_daily_trips.sort()\navg_daily_trips.index.name = 'station'\navg_daily_trips",
"There are invariably other ways to do this\nJoins\nWe'll want to look at the avg daily trips by geographic location, which is in the stations data frame. Let's pull it out into its own:",
"station_geos = stations[['lat','long']]\nstation_geos.index = stations['station']\nstation_geos",
"And then we need to make trips into a data frame",
"trips = DataFrame({ 'avg_daily_trips' : avg_daily_trips})\ntrips\n\ntrips_by_geo = station_geos.join(trips, how='inner')\ntrips_by_geo",
"Getting Our Data Ready\nBefore we merge, we'd like to aggregate the usage data to the daily level:",
"daily_usage = usage.groupby(['date', 'station_start', 'cust_type'], as_index=False)['duration_mins'].agg(['mean', len])\ndaily_usage.columns = ['avg_trip_duration', 'num_trips']\ndaily_usage\n\ndaily_usage = daily_usage.reset_index()\n\ndaily_usage.columns\n\ndaily_usage.index\n\nstations.columns\n\nweather.columns\n\nweather_rentals = daily_usage.merge(weather, left_on='date', right_on='date')\nweather_rentals\n\nweather['date'] = weather['date'].dt.date\n\nusage_weather = daily_usage.merge(weather)\nusage_weather\n\nuws = usage_weather.merge(stations, left_on='station_start', right_on='station')\n\nuws\n\nsorted(uws.columns)\n\nuws['crossing']"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tgquintela/temporal_tasks
|
deliver/Report-ToyProblem.ipynb
|
mit
|
[
"A toy problem: implement training a Support Vector Machine with SGD\nAntonio G. Quintela\nSVM\nSupport Vectors Machines are generalizations of linear decision boundaries for linearly nonseparable cases. It is an optimization based algorithm based on the idea of soft constraint optimization in which nondesirable decisions are penalized following a cost function.\nDepending on the loss function, the optimization problem could be a convex optimization problem. That convex optimization problems are highly interesting for machine learning because of simplicity and the quantity of possiblities we have to optimize it.\nThe case of the SVM we have a linear model that we can define as:\n$$f(x) = x^Tw + w_0$$\nand we can do predictions in the binary classification by applying $sign$ function,\n$$y_{pred} = sign(x^Tw + w_0)$$\nThe margin can be defined as $m(w) = y\\cdot (x^Tw + w_0)$. If $m > 0$ it is the margin safety by which f(x) is correct. If $m < 0$ then $m$ is a measure of the margin by which f(x) is wrong.\nThe problem could be defined in the form,\n$$w = \\underset{w}{\\mathrm{argmin}} \\quad \\mathbb{L}(w)$$\nThe part of the cost it could be composed for different terms. The term represented the loss and a term to penalize high weight parameters, called regularization. The term of the cost can be descomposed by the samples allowing us to apply SGD.\n$$ \\mathbb{L}(w) = \\sum_{i=0}^{n_samp}\\ell(m_t(w)) + \\frac{\\lambda}{2} \\| w \\|^2$$\nWe could say that this optimization problem is a SVM when $\\ell(m_t(w))$ is a Hinge loss function.\nHinge Loss\nHinge loss is the function used to define the margin optimization. \n$${\\displaystyle \\ell(y) = \\max(0, 1-y_{pred} \\cdot y)}$$\nThe interesting of Hinge is that is a convex function. It is not differenciable but its gradient with respect the weight parameters can be analytical expressed in a two part function in which,\n$${\\frac {\\partial \\ell }{\\partial w_{i}}}={\\begin{cases}-y_{pred}\\cdot x_{i}&{\\text{if }}y_{pred}\\cdot y<1\\0&{\\text{otherwise}}\\end{cases}}$$\nStochastic Gradient Descent\nStochastic Gradient Descent or SGD for short is an iterative Gradient Descent in which it is used some parts of the sample to optimize locally using the gradient of the curve in a given point of the parameter space trying to find the maxima or the minima.\n$${\\displaystyle w:=w-\\eta \\nabla L_{i}(w)}$$\nin which $\\eta$ is the learning rate.\nImports",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom svm import SVM\n\n## Apply test to the code\n%run tests.py",
"Load data",
"data = np.loadtxt('data/features.txt', delimiter=',')\ntarget = np.loadtxt('data/target.txt', delimiter=',')",
"Function definitions",
"def weights_histogram(model):\n title = r'Histogram of the distribution of weight values '\n title += r'($\\eta=0.0001$, $\\lambda=1.$, $b_s={}$)'.format(model.batch_size)\n fig = plt.figure(figsize=(10, 5))\n _ = plt.hist(model.w, bins=20)\n _ = plt.title(title)\n _ = plt.xlabel('weights')\n _ = plt.ylabel('Counts')\n\ndef losses_accross_training(model):\n title = r'Loss through epochs of SGD '\n title += r'($\\eta=0.0001$, $\\lambda=1.$, $b_s={}$)'.format(model.batch_size)\n fig = plt.figure(figsize=(14, 6))\n plt.xlim([0, len(model.train_loss_history)])\n _ = plt.plot(model.train_loss_history, 'b--', label='train_loss')\n _ = plt.plot(model.test_loss_history, 'r-', label='test_loss')\n _ = plt.title(title)\n _ = plt.xlabel(\"epochs\")\n _ = plt.ylabel(\"Loss\")\n legend = plt.legend(loc='upper right')\n\ndef acc_accross_training(model):\n title = r'Accuracy through epochs of SGD '\n title += r'($\\eta=0.0001$, $\\lambda=1.$, $b_s={}$)'.format(model.batch_size)\n fig = plt.figure(figsize=(14, 6))\n plt.xlim([0, len(model.train_accuracy_history)])\n _ = plt.plot(model.train_accuracy_history, 'b--', label='train_loss')\n _ = plt.plot(model.test_accuracy_history, 'r-', label='test_loss')\n _ = plt.title(title)\n _ = plt.xlabel(\"epochs\")\n _ = plt.ylabel(\"Accuracy\")\n legend = plt.legend(loc='lower right')",
"Study with batch_size=1",
"model = SVM(n_epochs=1000, batch_size=1, learning_rate=0.0001)\nmodel = model.fit(data[:-1000], target[:-1000], data[-1000:], target[-1000:])\n\nweights_histogram(model)\n\nlosses_accross_training(model)\n\nacc_accross_training(model)\n\nmodel.report_results()",
"Study with batch_size=10",
"model = SVM(n_epochs=1000, batch_size=10, learning_rate=0.0001)\nmodel = model.fit(data[:-1000], target[:-1000], data[-1000:], target[-1000:])\n\nweights_histogram(model)\n\nlosses_accross_training(model)\n\nacc_accross_training(model)\n\nmodel.report_results()",
"Study with batch_size=100",
"model = SVM(n_epochs=1000, batch_size=100, learning_rate=0.0001)\nmodel = model.fit(data[:-1000], target[:-1000], data[-1000:], target[-1000:])\n\nweights_histogram(model)\n\nlosses_accross_training(model)\n\nacc_accross_training(model)\n\nmodel.report_results()",
"Comments\nThe results are self-conclusives. Quicker convergence for smaller batch size but lower accuracy reached. On the other hand, for higher batch_size, you need more epochs to converge but reach higher accuracy.\nComments about the code\nThe code has lack of completitude. The design is open to new improvements that are not in the code because of the lack of time and the purpose of that exercise.\nThere are several improvements over that code design. From control values of the step in order to keep numerical stability (it could be certain regions with huge slope, due to the nonderivative property of Hinge in certain points of the parameter space.\nAlso we can make dynamic learning step in order to skip quickly uninteresting places of the loss landscape and put more effort in exploring the interesting ones.\nIt could be easily implmented other \nIf we would desire to use different optimizers probably it had been better to keep separated the optimizer and the model."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Ironlors/SmartIntersection-Ger
|
Journal/Seminararbeit.ipynb
|
apache-2.0
|
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"Seminararbeit - autonome Verkehrsleitsysteme\nvon Kay Kleinvogel und Lisa-Marie Nehring\nÜbersicht:\nDie Hauptaufgabe dieser Arbeit ist das Forschen an effizienteren Ampelsystemen. Dies ist von Nöten, da jedes Jahr immer mehr Fahrzeuge auf den Straßen unterwegs sind. Um die Kapazitäten zu schaffen, muss entweder die Fläche der Infrastruktur erhöht werden, welches durch die bereits dichte Bebauung in den Städten nur begrenzt möglich ist, oder die vorhandenen Flächen müssen effizienter genutzt werden.\nAmpelkreuzungen bieten dabei das höchste Optimierungspotential, da hier oft Wartezeiten zustande kommen, in welchen sich kein Verkehrsteilnehmer bewegt. Diese Wartezeiten können durch Optimierung der Schaltphasen minimiert werden.\nAmpelschaltung\nFür die Optimierung der Ampel müssen wir uns erst einmal bewusst machen, wie diese funktioniert. Einfach ausgedrückt schaltet diese nur verschiedene LEDs entweder aus oder an. Dies tut sie Abhängig von der genauen Art der Schaltung. Für eine einfache Schaltung wird zum Beispiel nur darauf geachtet welche Zeit die verschiedenen Spuren haben. So hat zum Beispiel jede Spur ein Zeitintervall von 20 Sekunden unabhängig von der Anzahl der Fahrzeugen auf der Spur. Die Anzahl an Fahrzeugen die diese Ampel in einen bestimmten Zeitintervall verwalten kann, ist fest und kann nicht verändert werden. Bei doppelter Anzahl der Fahrzeuge verdoppelt sich auch die Wartezeit. Die Wartezeit ist dadurch also direkt proportional zu der Anzahl der Verkehrsteilnehmer an dieser Ampel.\n// Code zur Anschauung der Wartezeit in Verhältnis zu Autos/Stunde\nAmpelalgorithmus\nEin wichtiger Teil des Algorithmuses ist die konstante Abfrage nach der Anzahl der Fahrzeuge auf dein einzelnen Spuren. Nehmen wir dafür ein einfaches Beispiel. Wir haben eine Kreuzung von Zwei gleichstark befahrenen Straßen. Beide Straßen können nur geradeaus überquert werden. Um diese beiden Interferierenden Verkehrsströme nun zu sammeln benötigen wir eine Ampelschaltung. Um nun zu entscheiden, welche Spur fahren darf, müssen wir überprüfen wie viele Fahrzeuge auf der jeweiligen Spur sind. Dafür nehmen wir die Summe der beiden einzelnen Spuren jeder Orientierung (v1,v2 = vertikale Spuren ; h1,h2 = horizontale Spuren). Die Summe, also die Gesamtzahl der Fahrzeuge wird dann mit v respektive h dargestellt. Dies kann durch den Folgenden Code wiedergegeben werden.",
"v1 = int(input('v1: '))\nv2 = int(input('v2: '))\nh1 = int(input('h1: '))\nh2 = int(input('h2: '))\nv = v1+v2\nh = h1+h2\n\nprint ('V', v)\nprint ('H', h)",
"Nun müssen wir diese Werte miteinander vergleichen. Dafür bietet es sich an die totale Anzahl an Fahrzeugen zu vergleichen. Dazu nutzen wir eine einfache Abfrage welche nach jeden Fahrzeug überprüft welche Spur mehr Fahrzeuge beeinhaltet. In diesen Beispiel haben wir zwei Möglichkeiten. Es können entweder die horizontalen oder vertikalen Fahrzeuge fahren. Hier schauen wir uns ein Beispiel an, was passieren würde, falls eine Ampelphase jeweils ein Fahrzeug lang wäre. Die Problematik dabei ist, dass die einzelnen Fahrzeuge eine jeweilige Reaktionszeit haben und daher eine lange Verzögerung entsteht, welche sich negativ auf die Kapazität der Kreuzung auswirkt.",
"print(\"h v\")\nwhile(v > 0 or h > 0):\n if(h>v):\n h = h-1\n else :\n v = v-1\n print(h,v)",
"Der Aufbau des Modells\nFür den entsprechenden Aufbau des Modells nutzen wir eine Kreuzung von 2 vierspurigen Straßen, welche sich schneiden.(siehe Abbildung 1). Die rechte Spur ist jeweils für die Rechtsabbieger vorgesehen, und die linke für die Linksabbieger beziehungsweise die Fahrer, welche geradeaus fahren möchten. In jeder einzelnen Spur sind dabei Sensoren eingebaut, welche Die Anzahl der Fahrzeuge auf der jeweiligen Spur anzeigen. Diese Zahl kann als binärer Wert ausgegebn werden. Gehen wir zum Beispiel von der Situation in der Abbildung 2 aus, und legen für ein Fahrzeug auf dem Sensor den Wert 1 und für kein Fahrzeug den Wert 0 fest, so erhalten wir für die Spur in diesen Moment einen Wert von 0011. Diese Ampel kann nun mit einen anderen Wert von der schneidenen Spur verglichen werden. Hat diese zum Beispiel den Wert 1001 so ist im Moment ein Fahrzeug vorne an der Kreuzung und ein anderes auf dem Weg. Hierbei würde der Algorythmus nun 0011 und 1001 vergleichen. Es würde sich anbieten den kleineren Wert zu wählen, aber hierbei würde dann eine leere Spur immer die Priorität erhalten. Um dieses Problem zu umgehen, können die Werte umgedreht werden (abcd --> dcba) und der höhere Wert erhält die Priorität, dies löst unser Problem, da eine leere Spur nun den Wert 0000 hat und daher jede Spur mit einen Fahrzeug an beliebiger Stelle einen höheren Wert erreicht. Nehmen wir dafür unser vorheriges Beispiel so vergleichen wir die beiden Werte 1100 und 1001. Da 1100 nun einen höheren numerischen Wert hat bekommt diese Spur die Grünphase zugesprochen. Betrachten wir dies nun mit Fahrzeugen erscheint dies auch logisch, da dort bereits 2 Fahrzeuge warten, während auf der anderen Spur nur 1 Fahrzeug wartet und das zweite erst dabei ist den Ampelbereich zu betreten.\nStraßenkreuzung 2.0\nIn dieser Version der Kreuzung nutzen wir eine vierspurige Straße mit 2 Spuren in jede Richtung. Diese besteht aus der Linksabbiegerspur und der Spur für die Geradeausfahrer. Die Rechtsabbierger werden durch eine vorherige seperate Spur von der Kreuzung entfernt, da dies den Verkehrfluss erheblich verbessert.\nBetrachten wir nun die Spuren so erkenenn wir Zwei weitere Markierungen. Dies sind Sensoren, welche die Anzahl der Fahrzeuge auf der jeweiligen Spur erkennen. Fährt ein Fahrzeug über den gelben Sensor so wird der Wert der Spur um 1 erhöht, und sobald ein Fahrzeug die Spur verlässt, also den roten Sensor überquert wird dieser Wert um 1 verringert. Daher haben wir stets den Momentanwert der Anzahl aller Fahrzeuge an der Kreuzung, und das Wissen an welcher Stelle sich diese befinden. \nVergleich der Spuren\nUm diese Spuren nun zu vergleichen, können wir betrachten, welche Kombination an sich nicht kollidiernden Spuren die höchste Anzahl an Fahrzeugen besitzt. Diese können dann Grün erhalten.\nAm Anfang suchen wir uns Werte für die jeweiligen Spuren, dazu wird im eigentlichen Versuch der Momentanwert einer jeden Spur genutzt. In dieser Simulation verwenden wir dazu User Input.",
"# Beispiel für \"Vergleich der Spuren\"\n#Kompatible Kombinationen: A(G1,G3) B(G1,L1) C(L1,L3) D(G3,L3) E(L2,L4) F(G2,G4) G(G2,L2) H(G4,L3)\n#<X>#S = Status der Spur (0 = rot ; 1 = grün)\n\nG1 = int(input('G1: '))\nG2 = int(input('G2: '))\nG3 = int(input('G3: '))\nG4 = int(input('G4: '))\nL1 = int(input('L1: '))\nL2 = int(input('L2: '))\nL3 = int(input('L3: '))\nL4 = int(input('L4: '))",
"Nun definieren wir unsere Reset Funktion. Diese wird benötigt, da die Ampeln vor jeden Umschalten auf rot gesetzt werden.",
"def reset():\n G1S = 0\n G2S = 0\n G3S = 0\n G4S = 0\n L1S = 0\n L2S = 0\n L3S = 0\n L4S = 0",
"Nun addieren wir die jeweiligen numerischen Werte der Spuren, in den Kombinationen welche sich nicht beeinträchtigen. Daher finden wir heraus, welche beiden Spuren zeitgleich grün erhalten müssen, wenn wir die Anzahl an Fahrzeugen maximieren möchten. Danach erstellen wir eine Liste mit den einzelnen Werten, aus welcher wir nun den höchsten Wert herausnehmen.",
"A = G1+G2\nB = G1+L1\nC = L1+L3\nD = G3+L3\nE = L2+L4\nF = G2+G4\nG = G2+L2\nH = G4+L3\n\nl = [A,B,C,D,E,F,G]\nm = max(l)",
"Nun definieren wir unsere check Funktion, welche uns erlaubt entsprechend der Werte die Ampeln umzuschalten.\nDafür nutzen wir l.index(m) um herauszufinden an welcher Stelle der Liste der Maximalwert steht. Diese Position gleichen wir nun mit der Kombination an Spuren ab, und setzen den Status der entsprechenden Spuren auf 1 (grün).\nDes weiteren nutzen wir print() um eine Ausgabe mit der jeweiligen Kombination zu erhalten.",
"# A(G1,G3) B(G1,L1) C(L1,L3) D(G3,L3) E(L2,L4) F(G2,G4) G(G2,L2) H(G4,L3)\n\ndef check():\n reset()\n\n if(l.index(m)==1):\n print ('G1+G3')\n G1S = 1\n G3S = 1\n elif(l.index(m)==2):\n print ('G1+L1')\n G1S = 1\n G3S = 1\n elif(l.index(m)==3):\n print ('L1+L2')\n L1S = 1\n L2S = 1\n elif(l.index(m)==4):\n print('G3+L3')\n G3S = 1\n L3S = 1\n elif(l.index(m)==5):\n print('L2+L4')\n L2S = 1\n L4S = 1\n elif(l.index(m)==6):\n print('G2+G4')\n G2S = 1\n G4S = 1\n elif(l.index(m)==7):\n print('G2+L2')\n G2S = 1\n L2S = 1\n elif(l.index(m)==8):\n print('G4+L3')\n G4S = 1\n L3S = 1",
"Hier nutzen wir jetzt die oben beschriebene Funktion check() und geben die Menge an Fahrzeugen aus, welche mit dieser Möglichkeit der Schaltung bedient werden.",
"check()\nprint (m)",
"TO-DO\nAbbruchbedingung\nFür die Abbruchbedingung muss geschaut werden, ob die Summe der aktiven Spuren = 0 ist bevor die Grünphase vorbei ist, da sobald sich keine Fahrzeuge mehr auf der Spur befinden, auch kein Bedarf mehr für eine grüne Ampel besteht.\nGrünphasentimer\nBei dem Timer für die Grünphase sollte variables geschtaltet werden. Hierfür sollte die Ampel schauen, wie viele Fahrzeuge in einer gegeben Phasenzeit es geschafft haben die Ampel zu überqueren. Sollte sich herausstellen, dass die Länge der Grünphase unzureichend war, sollte die Software eine entsprechende Verlängerung der Phase vornehmen.\nDes weiteren sollte die Länge der Phase abhängig von der Anzahl an Fahrzeugen sein. Dies sollte über das Verhältniss der Menge beider Spuren geregelt sein. Ein Beispiel hierfür wäre, dass eine Spur mit 30% mehr Fahrzeugen als die andere Spur auch eine ca. 30% längere Grünphase erhält. Dieses Verhältniss hat ein Maximum, welches durch den Abstand von Ampel und Sensor gegeben ist.\nFahrzeug Simulation Unity 3d\nDatengewinnung\nUm die optimalen Attribute des Fahrzeuges für einen optimalen Verkehrsfluss in der 3D Simulation zu garantieren, benötigen wir Daten. Da zu dieser Kombination aus Fahrzeug und Strecke keine vorgefassten Datenbanken gibt, müssen wir unsere eigenen Daten generieren. Hierfür wurde ein Pfad erstellt, welcher alle Möglichkeiten ausnutzte, und eine gewisse Strecke beträgt, um einigermaßen genaue Werte zu liefern. Die Werte, welche verändert werden ist auf der einen Seite die Maximalgeschwindigkeit des Fahrzeuges, sowie die Stärke des Abtriebes. Hierfür wurde eine Unendlichschleife implementiert, und bei jeder absolvierten Runde auf der Strecke wurden die beiden Werte um 10% erhöht. Paralel dazu läuft ein Datenlogger, welche jede Sekunde den aktuellen Wegpunkt, die Zeit in dieser Runde, die momentane Geschwindigkeit sowie die Position in der X und Z Achse. Diese Daten sind benötigt, um zu allererst die schnellste Rundenzeit zu ermitteln, aber auch um eine Überwachung der Kurvenlage und Geschwindigkeit an bestimmten Stellen im Parkour zu beobachten. Die aktuelle Maximalgeschwindigkeit und Motorstärke werden am Anfang jeder Runde angegeben.\nEine Idee wäre es, sobald eine Verschlechterung der Zeit ermittelt wird, welche bei erhöhter Geschwindigkeit durch zum Beispiel starkes schleudern in der Kurve entstehen können, sollte die Geschwindigkeit auf den vorherigen Wert zurückgesetzt werden. Danach sollte der Wert zur Erhöhung der Geschwindigkeit halbiert werden (5% ; 2.5% ; 1.75% ; ...). Dies stellt sicher, dass ein präziser Wert für die beste Runde gesichert werden kann.\nDiese Daten wurden in der Datei data1.txt gespeichert. Die Auswertung dieser Daten befindet sich im gleichnamigen Journal.\nAnfahrt an Kreuzung\nUm eine geeignete Kontrolle der Ampel zu erreichen, müssen wir sicher stellen, dass die Fahrzeuge mit konstanter Geschwindigkeit anfahren. Hierzu versuchten wir zuerst die Position des Fahrzeuges beim erreichen des letzten Wegpunktes (Ende der Strecke) einfach auf den Startwert zu setzen. Das Problem bei dieser Technik ist allerdings, dass die Geschwindigkeit der vorherigen Runde beibehalten wird, und das Auto daher aus der Fahrt anfährt, und nicht aus dem Stand. 
Daher nutze ich den Unity Befehl rigidbody.velocity = vector3.null um alle Geschwindigkeiten auf 0 zu setzen, und dadurch das Auto zu zwingen aus dem Stand zu starten.\nDazu führte ich eine Simulation durch, in welcher ich analog zum obrigen Abschnitt ('Datengewinnung'), einen Datenlogger nutze, welche hier nur die Anzahl der Runden, sowie die benötigte Zeit in eine Textdatei schrieb. Die Entsprechende Datei trägt den Namen lap1.txt. Der Nutzen dieser Datei ist, dass wir in der Lage sind zu überprüfen, ob das Fahrzeug sich mit einer konstanten Geschwindigkeit der Ampel nähert, da alle anderen Umstände gleich sind, müsste die Zeit, welche das Fahrzeug für die verschiedenen Runden benötigte konstant sein, und der Graph müsste einer Konstanten gleichen.",
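The TO-DO above describes the intended timing logic only in prose. A minimal sketch of that proportional green-phase timing is given below, assuming two opposing lanes whose waiting-vehicle counts come from the sensors; the function name compute_green_phase, the base duration and the ratio cap are hypothetical choices for illustration, not taken from the notebook.

```python
def compute_green_phase(count_a, count_b, base_duration=10.0, max_ratio=2.0):
    """Return green-phase durations in seconds for two lanes a and b."""
    if count_a == 0 and count_b == 0:
        # Termination condition from the TO-DO: no waiting vehicles, no green phase needed.
        return 0.0, 0.0
    # Ratio of the two lane counts (add 1 to avoid division by zero),
    # capped at max_ratio, which stands in for the sensor-distance limit.
    ratio = (count_a + 1) / (count_b + 1)
    ratio = min(max(ratio, 1.0 / max_ratio), max_ratio)
    return base_duration * ratio, base_duration / ratio

# Example: a lane with roughly 30% more vehicles gets a correspondingly longer phase.
print(compute_green_phase(13, 10))
```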
"data = pd.read_csv('/Users/kay/SmartIntersection-Ger/Data/DrivingStraight/lap.txt',sep=' ')\ndata.columns=['lap','time']\ndata.set_index('lap')\n\nnp_data= data.values\nprint(np_data)\n\nplot = plt.plot(np_data[:,0],np_data[:,1])\nplt.xlabel('Anzahl an Durchgängen')\nplt.ylabel('s / runde')\nplt.title('Geschwindigkeit des Fahrzeuges')\nplt.yticks([16.5,17,17.5,18])\nplt.show()",
"Hier ist zu betrachten, dass der Unterschied zwischen den Geschwindigkeiten recht gering ist, und daher können wir davon ausgehen, dass die Fahrzeuge mit einer konstanten Geschwindigkeit die Kreuzung erreichen.\nDies bedeutet, dass unser Script zum abbremsen des Fahrzeuges erfolgreich war."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
maxis42/ML-DA-Coursera-Yandex-MIPT
|
2 Supervised learning/Homework/7 gradient boosting/grad_boosting.ipynb
|
mit
|
[
"Градиентный бустинг своими руками\nВнимание: в тексте задания произошли изменения - поменялось число деревьев (теперь 50), правило изменения величины шага в задании 3 и добавился параметр random_state у решающего дерева. Правильные ответы не поменялись, но теперь их проще получить. Также исправлена опечатка в функции gbm_predict.\nВ этом задании будет использоваться датасет boston из sklearn.datasets. Оставьте последние 25% объектов для контроля качества, разделив X и y на X_train, y_train и X_test, y_test.\nЦелью задания будет реализовать простой вариант градиентного бустинга над регрессионными деревьями для случая квадратичной функции потерь.",
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom sklearn.datasets import load_boston\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.linear_model import LinearRegression\n\nboston = load_boston()\nprint(boston.data.shape)\nprint(boston.DESCR)\n\np = 0.75\n\nidx = int(p * boston.data.shape[0]) + 1\n\nX_train, X_test = np.split(boston.data, [idx])\ny_train, y_test = np.split(boston.target, [idx])",
"Задание 1\nКак вы уже знаете из лекций, бустинг - это метод построения композиций базовых алгоритмов с помощью последовательного добавления к текущей композиции нового алгоритма с некоторым коэффициентом. \nГрадиентный бустинг обучает каждый новый алгоритм так, чтобы он приближал антиградиент ошибки по ответам композиции на обучающей выборке. Аналогично минимизации функций методом градиентного спуска, в градиентном бустинге мы подправляем композицию, изменяя алгоритм в направлении антиградиента ошибки.\nВоспользуйтесь формулой из лекций, задающей ответы на обучающей выборке, на которые нужно обучать новый алгоритм (фактически это лишь чуть более подробно расписанный градиент от ошибки), и получите частный ее случай, если функция потерь L - квадрат отклонения ответа композиции a(x) от правильного ответа y на данном x.\nЕсли вы давно не считали производную самостоятельно, вам поможет таблица производных элементарных функций (которую несложно найти в интернете) и правило дифференцирования сложной функции. После дифференцирования квадрата у вас возникнет множитель 2 — т.к. нам все равно предстоит выбирать коэффициент, с которым будет добавлен новый базовый алгоритм, проигноируйте этот множитель при дальнейшем построении алгоритма.",
"def L_derivative(y_train, z):\n return (y_train - z)",
"Задание 2\nЗаведите массив для объектов DecisionTreeRegressor (будем их использовать в качестве базовых алгоритмов) и для вещественных чисел (это будут коэффициенты перед базовыми алгоритмами). \nВ цикле обучите последовательно 50 решающих деревьев с параметрами max_depth=5 и random_state=42 (остальные параметры - по умолчанию). В бустинге зачастую используются сотни и тысячи деревьев, но мы ограничимся 50, чтобы алгоритм работал быстрее, и его было проще отлаживать (т.к. цель задания разобраться, как работает метод). Каждое дерево должно обучаться на одном и том же множестве объектов, но ответы, которые учится прогнозировать дерево, будут меняться в соответствие с полученным в задании 1 правилом. \nПопробуйте для начала всегда брать коэффициент равным 0.9. Обычно оправдано выбирать коэффициент значительно меньшим - порядка 0.05 или 0.1, но т.к. в нашем учебном примере на стандартном датасете будет всего 50 деревьев, возьмем для начала шаг побольше.\nВ процессе реализации обучения вам потребуется функция, которая будет вычислять прогноз построенной на данный момент композиции деревьев на выборке X:\ndef gbm_predict(X):\n return [sum([coeff * algo.predict([x])[0] for algo, coeff in zip(base_algorithms_list, coefficients_list)]) for x in X]\n(считаем, что base_algorithms_list - список с базовыми алгоритмами, coefficients_list - список с коэффициентами перед алгоритмами)\nЭта же функция поможет вам получить прогноз на контрольной выборке и оценить качество работы вашего алгоритма с помощью mean_squared_error в sklearn.metrics. \nВозведите результат в степень 0.5, чтобы получить RMSE. Полученное значение RMSE — ответ в пункте 2.",
"def gbm_predict(X):\n return [sum([coeff * algo.predict([x])[0] for algo, coeff in zip(base_algorithms_list, coefficients_list)]) for x in X]\n\nbase_algorithms_list = []\ncoefficients_list = []\n\nz = np.zeros( (y_train.shape) )\n\nfor _ in range(50):\n coefficients_list.append(0.9)\n dt_regressor = DecisionTreeRegressor(max_depth=5, random_state=42)\n dt_regressor.fit(X_train, L_derivative(y_train, z))\n base_algorithms_list.append(dt_regressor)\n z = gbm_predict(X_train)\n \nalg_predict = gbm_predict(X_test)\nalg_rmse = np.sqrt(mean_squared_error(y_test, alg_predict))\nprint(alg_rmse)\n\nwith open('answer2.txt', 'w') as fout:\n fout.write(str(alg_rmse))",
"Задание 3\nВас может также беспокоить, что двигаясь с постоянным шагом, вблизи минимума ошибки ответы на обучающей выборке меняются слишком резко, перескакивая через минимум. \nПопробуйте уменьшать вес перед каждым алгоритмом с каждой следующей итерацией по формуле 0.9 / (1.0 + i), где i - номер итерации (от 0 до 49). Используйте качество работы алгоритма как ответ в пункте 3. \nВ реальности часто применяется следующая стратегия выбора шага: как только выбран алгоритм, подберем коэффициент перед ним численным методом оптимизации таким образом, чтобы отклонение от правильных ответов было минимальным. Мы не будем предлагать вам реализовать это для выполнения задания, но рекомендуем попробовать разобраться с такой стратегией и реализовать ее при случае для себя.",
"base_algorithms_list = []\ncoefficients_list = []\n\nz = np.zeros( (y_train.shape) )\n\nfor i in range(50):\n coeff = 0.9 / (1. + i)\n coefficients_list.append(coeff)\n dt_regressor = DecisionTreeRegressor(max_depth=5, random_state=42)\n dt_regressor.fit(X_train, L_derivative(y_train, z))\n base_algorithms_list.append(dt_regressor)\n z = gbm_predict(X_train)\n \nalg_predict = gbm_predict(X_test)\nalg_rmse = np.sqrt(mean_squared_error(y_test, alg_predict))\nprint(alg_rmse)\n\nwith open('answer3.txt', 'w') as fout:\n fout.write(str(alg_rmse))",
"Задание 4\nРеализованный вами метод - градиентный бустинг над деревьями - очень популярен в машинном обучении. Он представлен как в самой библиотеке sklearn, так и в сторонней библиотеке XGBoost, которая имеет свой питоновский интерфейс. На практике XGBoost работает заметно лучше GradientBoostingRegressor из sklearn, но для этого задания вы можете использовать любую реализацию. \nИсследуйте, переобучается ли градиентный бустинг с ростом числа итераций (и подумайте, почему), а также с ростом глубины деревьев. На основе наблюдений выпишите через пробел номера правильных из приведенных ниже утверждений в порядке возрастания номера (это будет ответ в п.4):\n1. С увеличением числа деревьев, начиная с некоторого момента, качество работы градиентного бустинга не меняется существенно.\n\n2. С увеличением числа деревьев, начиная с некоторого момента, градиентный бустинг начинает переобучаться.\n\n3. С ростом глубины деревьев, начиная с некоторого момента, качество работы градиентного бустинга на тестовой выборке начинает ухудшаться.\n\n4. С ростом глубины деревьев, начиная с некоторого момента, качество работы градиентного бустинга перестает существенно изменяться",
"answer = str(2) + ' ' + str(3)\nwith open('answer4.txt', 'w') as fout:\n fout.write(answer)",
"Задание 5\nСравните получаемое с помощью градиентного бустинга качество с качеством работы линейной регрессии. \nДля этого обучите LinearRegression из sklearn.linear_model (с параметрами по умолчанию) на обучающей выборке и оцените для прогнозов полученного алгоритма на тестовой выборке RMSE. Полученное качество - ответ в пункте 5. \nВ данном примере качество работы простой модели должно было оказаться хуже, но не стоит забывать, что так бывает не всегда. В заданиях к этому курсу вы еще встретите пример обратной ситуации.",
"lr_regressor = LinearRegression()\nlr_regressor.fit(X_train, y_train)\n \nalg_predict = lr_regressor.predict(X_test)\nalg_rmse = np.sqrt(mean_squared_error(y_test, alg_predict))\nprint(alg_rmse)\n\nwith open('answer5.txt', 'w') as fout:\n fout.write(str(alg_rmse))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/mri/cmip6/models/sandbox-2/toplevel.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: MRI\nSource ID: SANDBOX-2\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:19\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mri', 'sandbox-2', 'toplevel')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Flux Correction\n3. Key Properties --> Genealogy\n4. Key Properties --> Software Properties\n5. Key Properties --> Coupling\n6. Key Properties --> Tuning Applied\n7. Key Properties --> Conservation --> Heat\n8. Key Properties --> Conservation --> Fresh Water\n9. Key Properties --> Conservation --> Salt\n10. Key Properties --> Conservation --> Momentum\n11. Radiative Forcings\n12. Radiative Forcings --> Greenhouse Gases --> CO2\n13. Radiative Forcings --> Greenhouse Gases --> CH4\n14. Radiative Forcings --> Greenhouse Gases --> N2O\n15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\n16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\n17. Radiative Forcings --> Greenhouse Gases --> CFC\n18. Radiative Forcings --> Aerosols --> SO4\n19. Radiative Forcings --> Aerosols --> Black Carbon\n20. Radiative Forcings --> Aerosols --> Organic Carbon\n21. Radiative Forcings --> Aerosols --> Nitrate\n22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\n23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\n24. Radiative Forcings --> Aerosols --> Dust\n25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\n26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\n27. Radiative Forcings --> Aerosols --> Sea Salt\n28. Radiative Forcings --> Other --> Land Use\n29. Radiative Forcings --> Other --> Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop level overview of coupled model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of coupled model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Genealogy\nGenealogy and history of the model\n3.1. Year Released\nIs Required: TRUE Type: STRING Cardinality: 1.1\nYear the model was released",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. CMIP3 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP3 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. CMIP5 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP5 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Previous Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPreviously known as",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.4. Components Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.5. Coupler\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nOverarching coupling framework for model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5. Key Properties --> Coupling\n**\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of coupling in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Atmosphere Double Flux\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nWhere are the air-sea fluxes calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.4. Atmosphere Relative Winds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Key Properties --> Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.5. Energy Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.6. Fresh Water Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Conservation --> Heat\nGlobal heat convervation properties of the model\n7.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.6. Land Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation --> Fresh Water\nGlobal fresh water convervation properties of the model\n8.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh_water is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh_water is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.6. Runoff\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how runoff is distributed and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.7. Iceberg Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.8. Endoreic Basins\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.9. Snow Accumulation\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Key Properties --> Conservation --> Salt\nGlobal salt convervation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Key Properties --> Conservation --> Momentum\nGlobal momentum convervation properties of the model\n10.1. Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how momentum is conserved in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Radiative Forcings --> Greenhouse Gases --> CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Radiative Forcings --> Greenhouse Gases --> CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14. Radiative Forcings --> Greenhouse Gases --> N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\nTroposheric ozone forcing\n15.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Radiative Forcings --> Greenhouse Gases --> CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Equivalence Concentration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDetails of any equivalence concentrations used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Radiative Forcings --> Aerosols --> SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Radiative Forcings --> Aerosols --> Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Radiative Forcings --> Aerosols --> Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Radiative Forcings --> Aerosols --> Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.3. RFaci From Sulfate Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"24. Radiative Forcings --> Aerosols --> Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Radiative Forcings --> Aerosols --> Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Radiative Forcings --> Other --> Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28.2. Crop Change Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nLand use change represented via crop change only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Radiative Forcings --> Other --> Solar\nSolar forcing\n29.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow solar forcing is provided",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs-l10n
|
site/zh-cn/io/tutorials/prometheus.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow IO Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"从 Prometheus 服务器加载指标\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://tensorflow.google.cn/io/tutorials/prometheus\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\">在 TensorFlow.org 上查看</a> </td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/prometheus.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">在 Google Colab 中运行</a></td>\n <td> <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/prometheus.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">在 Github 上查看源代码</a> </td>\n <td> <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/io/tutorials/prometheus.ipynb\"><img src=\"https://tensorflow.google.cn/images/download_logo_32px.png\">下载笔记本</a> </td>\n</table>\n\n小心:除了 Python 软件包以外,此笔记本还使用 sudo apt-get install 安装了第三方软件包。\n概述\n本教程会将 Prometheus 服务器中的 CoreDNS 指标加载到 tf.data.Dataset 中,然后使用 tf.keras 进行训练和推理。\nCoreDNS 是一种专注于服务发现的 DNS 服务器,作为 Kubernetes 集群的一部分广泛部署。因此,CoreDNS 常通过 DevOps 运算进行密切监控。\n本教程中提供的示例可帮助 DevOps 通过机器学习实现自动化运算。\n设置和用法\n安装所需的 tensorflow-io 软件包,然后重新启动运行时",
"import os\n\ntry:\n %tensorflow_version 2.x\nexcept Exception:\n pass\n\n!pip install tensorflow-io\n\nfrom datetime import datetime\n\nimport tensorflow as tf\nimport tensorflow_io as tfio",
"安装并设置 CoreDNS 和 Prometheus\n出于演示目的,CoreDNS 服务器在本地开放了 9053 端口用于接收 DNS 查询,并开放了 9153 端口(默认)用于公开抓取指标。以下为 CoreDNS 的基本 Corefile 配置,可供下载:\n.:9053 {\n prometheus\n whoami\n}\n有关安装的更多详细信息,请参阅 CoreDNS 文档。",
"!curl -s -OL https://github.com/coredns/coredns/releases/download/v1.6.7/coredns_1.6.7_linux_amd64.tgz\n!tar -xzf coredns_1.6.7_linux_amd64.tgz\n\n!curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/Corefile\n\n!cat Corefile\n\n# Run `./coredns` as a background process.\n# IPython doesn't recognize `&` in inline bash cells.\nget_ipython().system_raw('./coredns &')",
"下一步是设置 Prometheus 服务器,并使用 Prometheus 抓取在上述 9153 端口上公开的 CoreDNS 指标。用于配置的 prometheus.yml 文件同样可供下载:",
"!curl -s -OL https://github.com/prometheus/prometheus/releases/download/v2.15.2/prometheus-2.15.2.linux-amd64.tar.gz\n!tar -xzf prometheus-2.15.2.linux-amd64.tar.gz --strip-components=1\n\n!curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/prometheus.yml\n\n!cat prometheus.yml\n\n# Run `./prometheus` as a background process.\n# IPython doesn't recognize `&` in inline bash cells.\nget_ipython().system_raw('./prometheus &')",
"为了展示一些活动,可以使用 dig 命令针对已设置的 CoreDNS 服务器生成一些 DNS 查询:",
"!sudo apt-get install -y -qq dnsutils\n\n!dig @127.0.0.1 -p 9053 demo1.example.org\n\n!dig @127.0.0.1 -p 9053 demo2.example.org",
"现在设置的是 CoreDNS 服务器,Prometheus 服务器将抓取该 CoreDNS 服务器的指标并准备用于 TensorFlow。\n为 CoreDNS 指标创建数据集并在 TensorFlow 中使用\n可以使用 tfio.experimental.IODataset.from_prometheus 为 CoreDNS 指标创建可在 PostgreSQL 服务器上访问的数据集。至少需要两个参数。需要将 query 传递至 Prometheus 服务器以选择指标,length 为要加载到数据集的时间段。\n您可以从 \"coredns_dns_request_count_total\" 和 \"5\"(秒)开始来创建以下数据集。由于在本教程前面部分中已发送了两个 DNS 查询,因此在时间序列末尾,\"coredns_dns_request_count_total\" 的指标将为 \"2.0\"。",
"dataset = tfio.experimental.IODataset.from_prometheus(\n \"coredns_dns_request_count_total\", 5, endpoint=\"http://localhost:9090\")\n\n\nprint(\"Dataset Spec:\\n{}\\n\".format(dataset.element_spec))\n\nprint(\"CoreDNS Time Series:\")\nfor (time, value) in dataset:\n # time is milli second, convert to data time:\n time = datetime.fromtimestamp(time // 1000)\n print(\"{}: {}\".format(time, value['coredns']['localhost:9153']['coredns_dns_request_count_total']))",
"进一步研究数据集的规范:\n```\n(\n TensorSpec(shape=(), dtype=tf.int64, name=None),\n {\n 'coredns': {\n 'localhost:9153': {\n 'coredns_dns_request_count_total': TensorSpec(shape=(), dtype=tf.float64, name=None)\n }\n }\n }\n)\n```\n显而易见,数据集由 (time, values) 元组组成,其中 values 字段为 Python 字典,扩展为:\n\"job_name\": {\n \"instance_name\": {\n \"metric_name\": value,\n },\n}\n在上例中,'coredns' 为作业名称,'localhost:9153' 为实例名称,而 'coredns_dns_request_count_total' 为指标名称。请注意,根据所使用的 Prometheus 查询,可能会返回多个作业/实例/指标。这也是在数据集结构中使用 Python 字典的原因。\n以另一项查询 \"go_memstats_gc_sys_bytes\" 为例。由于 CoreDNS 和 Prometheus 均使用 Go 语言进行编写,\"go_memstats_gc_sys_bytes\" 指标可用于 \"coredns\" 作业和 \"prometheus\" 作业:\n注:此单元在您第一次运行时可能会出错。再次运行将通过。",
"dataset = tfio.experimental.IODataset.from_prometheus(\n \"go_memstats_gc_sys_bytes\", 5, endpoint=\"http://localhost:9090\")\n\nprint(\"Time Series CoreDNS/Prometheus Comparision:\")\nfor (time, value) in dataset:\n # time is milli second, convert to data time:\n time = datetime.fromtimestamp(time // 1000)\n print(\"{}: {}/{}\".format(\n time,\n value['coredns']['localhost:9153']['go_memstats_gc_sys_bytes'],\n value['prometheus']['localhost:9090']['go_memstats_gc_sys_bytes']))",
"现在,可以将创建的 Dataset 直接传递至 tf.keras 用于训练或推理了。\n使用数据集进行模型训练\n在指标数据集创建完成后,可以将数据集直接传递至 tf.keras 用于模型训练或推理。\n出于演示目的,本教程将仅使用一种非常简单的 LSTM 模型,该模型以 1 个特征和 2 个步骤作为输入:",
"n_steps, n_features = 2, 1\nsimple_lstm_model = tf.keras.models.Sequential([\n tf.keras.layers.LSTM(8, input_shape=(n_steps, n_features)),\n tf.keras.layers.Dense(1)\n])\n\nsimple_lstm_model.compile(optimizer='adam', loss='mae')\n",
"要使用的数据集为带有 10 个样本的 CoreDNS 的 'go_memstats_sys_bytes' 的值。但是,由于形成了 window=n_steps 和 shift=1 的滑动窗口,因此还需要使用其他样本(对于任意两个连续元素,将第一个元素作为 x,将第二个元素作为 y 用于训练)。总计为 10 + n_steps - 1 + 1 = 12 秒。\n数据值还将缩放到 [0, 1]。",
"n_samples = 10\n\ndataset = tfio.experimental.IODataset.from_prometheus(\n \"go_memstats_sys_bytes\", n_samples + n_steps - 1 + 1, endpoint=\"http://localhost:9090\")\n\n# take go_memstats_gc_sys_bytes from coredns job \ndataset = dataset.map(lambda _, v: v['coredns']['localhost:9153']['go_memstats_sys_bytes'])\n\n# find the max value and scale the value to [0, 1]\nv_max = dataset.reduce(tf.constant(0.0, tf.float64), tf.math.maximum)\ndataset = dataset.map(lambda v: (v / v_max))\n\n# expand the dimension by 1 to fit n_features=1\ndataset = dataset.map(lambda v: tf.expand_dims(v, -1))\n\n# take a sliding window\ndataset = dataset.window(n_steps, shift=1, drop_remainder=True)\ndataset = dataset.flat_map(lambda d: d.batch(n_steps))\n\n\n# the first value is x and the next value is y, only take 10 samples\nx = dataset.take(n_samples)\ny = dataset.skip(1).take(n_samples)\n\ndataset = tf.data.Dataset.zip((x, y))\n\n# pass the final dataset to model.fit for training\nsimple_lstm_model.fit(dataset.batch(1).repeat(10), epochs=5, steps_per_epoch=10)",
"以上训练模型在实际场景中并不实用,因为本教程中设置的 CoreDNS 服务器没有任何工作负载。不过,这是一条可用于从真正的生产服务器加载指标的工作流水线。开发者可以改进该模型,以解决 DevOps 自动化中的现实问题。"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
chagaz/ma2823_2016
|
lab_notebooks/Lab 6 2016-11-04 Tree-based methods.ipynb
|
mit
|
[
"2016-11-04: Tree-based methods\nIn this lab, we will apply tree-based classification methods to the Endometrium vs. Uterus cancer data. For documentation see: http://scikit-learn.org/0.17/modules/tree.html\nLet us start, as usual, by setting up our environment, loading the data, and setting up our cross-validation.",
"import numpy as np\n%pylab inline\n\n# Load the data\n# TODO\n\n# Normalize the data\nfrom sklearn import preprocessing\nX = preprocessing.normalize(X)\n\n# Set up a stratified 10-fold cross-validation\nfrom sklearn import cross_validation\nfolds = cross_validation.StratifiedKFold(y, 10, shuffle=True)\n\ndef cross_validate(design_matrix, labels, classifier, cv_folds):\n \"\"\" Perform a cross-validation and returns the predictions. \n \n Parameters:\n -----------\n design_matrix: (n_samples, n_features) np.array\n Design matrix for the experiment.\n labels: (n_samples, ) np.array\n Vector of labels.\n classifier: sklearn classifier object\n Classifier instance; must have the following methods:\n - fit(X, y) to train the classifier on the data X, y\n - predict_proba(X) to apply the trained classifier to the data X and return probability estimates \n cv_folds: sklearn cross-validation object\n Cross-validation iterator.\n \n Return:\n -------\n pred: (n_samples, ) np.array\n Vectors of predictions (same order as labels).\n \"\"\"\n pred = np.zeros(labels.shape)\n for tr, te in cv_folds:\n # Restrict data to train/test folds\n Xtr = design_matrix[tr, :]\n ytr = labels[tr]\n Xte = design_matrix[te, :]\n #print Xtr.shape, ytr.shape, Xte.shape\n\n # Fit classifier\n classifier.fit(Xtr, ytr)\n\n # Predict probabilities (of belonging to +1 class) on test data\n yte_pred = classifier.predict_proba(Xte)\n index_of_class_1 = (1-classifier.classes_[0])/2 # 0 if the first sample is positive, 1 otherwise\n pred[te] = yte_pred[:, index_of_class_1]\n return pred",
"1. Decision trees\nQuestion Cross-validate 5 different decision trees, with default parameters.",
"from sklearn import tree\nfrom sklearn import metrics\n# Use: clf = tree.DecisionTreeClassifier()\n\nypred_dt = [] # will hold the 5 arrays of predictions (1 per tree)\nfor tree_index in range(5):\n # TODO",
"Question Compute the mean and standard deviation of the area under the ROC curve of these 5 trees. Plot the ROC curves of these 5 trees.",
"fpr_dt = [] # will hold the 5 arrays of false positive rates (1 per tree)\ntpr_dt = [] # will hold the 5 arrays of true positive rates (1 per tree)\nauc_dt = [] # will hold the 5 areas under the ROC curve (1 per tree)\nfor tree_index in range(5):\n # TODO\n \nfor tree_index in range(4):\n plt.plot(fpr_dt[tree_index], tpr_dt[tree_index], '-', color='orange') \nplt.plot(fpr_dt[-1], tpr_dt[-1], '-', color='orange', \n label='DT (AUC = %0.2f (+/- %0.2f))' % (np.mean(auc_dt), np.std(auc_dt)))\n\nplt.xlabel('False Positive Rate', fontsize=16)\nplt.ylabel('True Positive Rate', fontsize=16)\nplt.title('ROC curves', fontsize=16)\nplt.legend(loc=\"lower right\")",
"Question What parameters of DecisionTreeClassifier can you play with to define trees differently than with the default parameters? Cross-validate these using a grid search, and plot the optimal decision tree on the previous plot. Did you manage to improve performance?",
"from sklearn import grid_search\nparam_grid = # TODO\nclf = grid_search.GridSearchCV(tree.DecisionTreeClassifier(), param_grid, \n scoring='roc_auc')\nypred_dt_opt = cross_validate(X, y, clf, folds)\nfpr_dt_opt, tpr_dt_opt, thresholds = metrics.roc_curve(y, ypred_dt_opt, pos_label=1)\nauc_dt_opt = metrics.auc(fpr_dt_opt, tpr_dt_opt)\n\n# Plot the 5 decision trees from earlier\nfor tree_index in range(4):\n plt.plot(fpr_dt[tree_index], tpr_dt[tree_index], '-', color='blue') \nplt.plot(fpr_dt[-1], tpr_dt[-1], '-', color='blue', \n label='DT (AUC = %0.2f (+/- %0.2f))' % (np.mean(auc_dt), np.std(auc_dt)))\n# Plot the optimized decision tree \nplt.plot(fpr_dt_opt, tpr_dt_opt, color='orange', label='DT optimized (AUC=%0.2f)' % auc)\n\nplt.xlabel('False Positive Rate', fontsize=16)\nplt.ylabel('True Positive Rate', fontsize=16)\nplt.title('ROC curves', fontsize=16)\nplt.legend(loc=\"lower right\")",
"Question How does the performance of decision trees compare to the performance of classifiers we have used previously on this data? Does this match your expectations?\n2. Bagging trees\nWe will resort to ensemble methods to try to improve the performance of single decision trees. Let us start with bagging trees: The different trees are to be built using a bagging sample of the data, that is to say, a sample built by using as many data points, drawn with replacement from the original data.\nNote: Bagging trees and random forests start making sense when using large number of trees (several hundreds). This is computationally more intensive, especially when the number of features is large, as in this lab. For the sake of computational time, I suggested using small numbers of trees, but you might want to repeat this lab for larger number of trees at home.\nQuestion Cross-validate a bagging ensemble of 5 decision trees on the data. Plot the resulting ROC curve, compared to the 5 decision trees you trained earlier.",
"from sklearn import ensemble\n# By default, the base estimator is a decision tree with default parameters\n# TODO: Use clf = ensemble.BaggingClassifier(n_estimators=5) \n",
"Question Use cross_validate_optimize (as defined in the previous lab) to optimize the number of decision trees to use in the bagging method. How many trees did you find to be an optimal choice?",
"def cross_validate_optimize(design_matrix, labels, classifier, cv_folds):\n \"\"\" Perform a cross-validation and returns the predictions. \n \n Parameters:\n -----------\n design_matrix: (n_samples, n_features) np.array\n Design matrix for the experiment.\n labels: (n_samples, ) np.array\n Vector of labels.\n classifier: sklearn GridSearchCV object\n GridSearchCV instance; must have the following methods/attributes:\n - fit(X, y) to train the classifier on the data X, y\n - predict_proba(X) to apply the trained classifier to the data X and return probability estimates \n cv_folds: sklearn cross-validation object\n - best_params_ the best parameter dictionary\n Cross-validation iterator.\n \n Return:\n -------\n pred: (n_samples, ) np.array\n Vector of predictions (same order as labels).\n \"\"\"\n pred = np.zeros(labels.shape)\n for tr, te in cv_folds:\n # Restrict data to train/test folds\n Xtr = design_matrix[tr, :]\n ytr = labels[tr]\n Xte = design_matrix[te, :]\n #print Xtr.shape, ytr.shape, Xte.shape\n\n # Fit classifier\n classifier.fit(Xtr, ytr)\n \n # Print best parameter\n print classifier.best_params_\n\n # Predict probabilities (of belonging to +1 class) on test data\n yte_pred = classifier.predict_proba(Xte)\n index_of_class_1 = 1 - ytr[0] # 0 if the first sample is positive, 1 otherwise\n pred[te] = yte_pred[:, index_of_class_1] \n return pred\n\nparam_grid = {'n_estimators': [5, 15, 25, 50]}\n# TODO",
"Question Plot the ROC curve of the optimized cross-validated bagging tree classifier obtained with cross_validate_optimize, and compare it to the previous ROC curves (non-optimized bagging tree, decision trees). \n3. Random forests\nWe will now use random forests.\nQuestion What is the difference between bagging trees and random forests?\nQuestion Cross-validate a random forest of 5 decision trees on the data. Plot the resulting ROC curve, compared to the 5 decision trees you trained earlier, and the bagging tree made of 5 decision trees.",
"clf = ensemble.RandomForestClassifier(n_estimators=5) \n# TODO",
"Question Use cross_validate_optimize (as defined in the previous lab) to optimize the number of decision trees to use in the random forest. How many trees do you find to be an optimal choice? How does the optimal random forest compare to the optimal bagging trees? How do the training times of the random forest and the bagging trees compare?",
"param_grid = {'n_estimators': [5, 15, 25, 50]}\n# TODO",
"Question How do your tree-based classifiers compare to the linear regression (regularized or not)? Plot ROC curves.",
"from sklearn import linear_model\nparam_grid = {'C':[1e-3, 1e-2, 1e-1, 1., 1e2, 1e3]}\nclf = grid_search.GridSearchCV(linear_model.LogisticRegression(penalty='l1'), \n param_grid, scoring='roc_auc')\nypred_l1 = cross_validate_optimize(X, y, clf, folds)\nfpr_l1, tpr_l1, thresholds_l1 = metrics.roc_curve(y, ypred_l1, pos_label=1)\n\nclf = grid_search.GridSearchCV(linear_model.LogisticRegression(penalty='l2'), \n param_grid, scoring='roc_auc')\nypred_l2 = cross_validate_optimize(X, y, clf, folds)\nfpr_l2, tpr_l2, thresholds_l2 = metrics.roc_curve(y, ypred_l2, pos_label=1)\n\n# TODO\n\nplt.xlabel('False Positive Rate', fontsize=16)\nplt.ylabel('True Positive Rate', fontsize=16)\nplt.title('ROC curves', fontsize=16)\nplt.legend(loc=\"lower right\")",
"Kaggle challenge\nYou can find the documentation for tree-based regression here: \n* What parameters can you change?\n* Cross-validate several different tree-based regressors (trees and tree ensembles) on your data, using the folds you previously set up. How do the different variants of decision trees compare to each other? How do they compare to performance obtained with other algorithms?\n* Submit predictions to the leaderboard for the best of your tree-based models. Do the results on the leaderboard data match your expectations?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/ipsl/cmip6/models/sandbox-2/ocnbgchem.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: IPSL\nSource ID: SANDBOX-2\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:45\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-2', 'ocnbgchem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\n3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\n4. Key Properties --> Transport Scheme\n5. Key Properties --> Boundary Forcing\n6. Key Properties --> Gas Exchange\n7. Key Properties --> Carbon Chemistry\n8. Tracers\n9. Tracers --> Ecosystem\n10. Tracers --> Ecosystem --> Phytoplankton\n11. Tracers --> Ecosystem --> Zooplankton\n12. Tracers --> Disolved Organic Matter\n13. Tracers --> Particules\n14. Tracers --> Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Elemental Stoichiometry\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n",
"1.5. Elemental Stoichiometry Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.7. Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Damping\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for passive tracers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"2.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for passive tracers (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for biology sources and sinks",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"3.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transport scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"4.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTransport scheme used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4.3. Use Different Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDecribe transport scheme if different than that of ocean model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how atmospheric deposition is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n",
"5.2. River Input\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how river input is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n",
"5.3. Sediments From Boundary Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Sediments From Explicit Model\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from explicit sediment model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.2. CO2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe CO2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.3. O2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs O2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.4. O2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe O2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. DMS Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs DMS gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.6. DMS Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify DMS gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.7. N2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.8. N2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.9. N2O Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2O gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.10. N2O Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2O gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.11. CFC11 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC11 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.12. CFC11 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.13. CFC12 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC12 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.14. CFC12 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.15. SF6 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs SF6 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.16. SF6 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify SF6 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.17. 13CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.18. 13CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.19. 14CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.20. 14CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.21. Other Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any other gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how carbon chemistry is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n",
"7.2. PH Scale\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.3. Constants If Not OMIP\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Sulfur Cycle Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sulfur cycle modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Nutrients Present\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Nitrous Species If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous species.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.5. Nitrous Processes If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous processes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Tracers --> Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Upper Trophic Levels Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefine how upper trophic level are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Tracers --> Ecosystem --> Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of phytoplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n",
"10.2. Pft\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Tracers --> Ecosystem --> Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of zooplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nZooplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Tracers --> Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there bacteria representation ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Lability\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Tracers --> Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Types If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Size If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n",
"13.4. Size If Discrete\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.5. Sinking Speed If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Tracers --> Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n",
"14.2. Abiotic Carbon\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs abiotic carbon modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.3. Alkalinity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is alkalinity modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
decisionstats/pythonfordatascience
|
titanic forked.ipynb
|
apache-2.0
|
[
"# This Python 3 environment comes with many helpful analytics libraries installed\n# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python\n# For example, here's several helpful packages to load in \n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\nimport seaborn as sb\n\n# Input data files are available in the \"../input/\" directory.\n# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory\n\nimport os\n\nprint(os.listdir(\"C:\\\\Users\\\\ajaohri\\\\Desktop\\\\all\"))\n\n# Any results you write to the current directory are saved as output.\n\nurl_train = 'C:\\\\Users\\\\ajaohri\\\\Desktop\\\\all/train.csv'\ntitanic = pd.read_csv(url_train)\ntitanic.head()\n\n#Checking if our target variable is binary or not\nsb.countplot(x='Survived',data=titanic)\n\n#Checking Null values\ntitanic.isnull().sum()",
"Dropping PassengerId, Name and Ticket because they are unique.\nDropping Cabin because of too many null values.",
"titanic_data = titanic.drop(['PassengerId','Name','Ticket'],1)\ntitanic_data.head()",
"Now need to take care of the missing data for Age variable. Need to approximate- one way, to take mean age for all the missing values.\nOr, find if age is related to Pclass, and assign respective means.",
"sb.boxplot(x='Pclass',y='Age',data=titanic_data)",
"If Passenger belongs to Pclass 3, age assigned is 24, if 2, age is assigned 29, if 1 then 37.",
"def age_approx(cols):\n age = cols[0]\n pclass = cols[1]\n if pd.isnull(age):\n if pclass == 1:\n return 37\n elif pclass == 2:\n return 29\n else:\n return 24\n else:\n return age\n\ntitanic_data['Age'] = titanic_data[['Age', 'Pclass']].apply(age_approx, axis=1)\ntitanic_data.isnull().sum()\n\ndef cabin_approx(cols):\n cabin = cols[0]\n pclass = cols[1]\n if pd.isnull(cabin):\n return 0\n elif cabin[0] == ('C' or 'B'):\n return 3\n elif cabin[0] == ('A' or 'D' or 'E' or 'T'):\n return 2\n elif cabin[0] == ('F' or 'G'):\n return 1\n else:\n return 0\n\ntitanic_data['Cabin'] = titanic_data[['Cabin', 'Pclass']].apply(cabin_approx, axis=1)\n#titanic_data.isnull().sum()\nsb.boxplot(x='Cabin',y='Fare',data=titanic_data)",
"There are two null values in Embarked, we can just drop them.",
"titanic_data.dropna(inplace=True)\ntitanic_data.isnull().sum()",
"Getting dummy variables from categorical ones.",
"gender = pd.get_dummies(titanic_data['Sex'],drop_first=True)\ngender.head()\n\nembark_location = pd.get_dummies(titanic_data['Embarked'],drop_first=True)\nembark_location.head()\n\ntitanic_data.drop(['Sex','Embarked'],axis=1,inplace=True)\ntitanic_data.head()\n\ntitanic_dmy = pd.concat([titanic_data, gender, embark_location],axis=1)\ntitanic_dmy.tail()\n\n#Checking for correlation between variables.\nsb.heatmap(titanic_dmy.corr(),square=True)\n#print(titanic_dmy.corr())\n\nX = titanic_dmy.ix[:,(1,2,3,4,5,6,7,8,9)].values\ny = titanic_dmy.ix[:,0].values\nfrom sklearn.cross_validation import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.1, random_state=2)",
"The train test split is done for parameter tuning.\nWe now deploy the models.",
"!pip install xgboost\n\nfrom sklearn.ensemble import RandomForestClassifier\n#from sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import SVC\nfrom xgboost import XGBClassifier\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.ensemble import VotingClassifier\n\nclf1 = SVC(kernel='linear',C=1.0,random_state=3)\nclf2 = XGBClassifier(random_state=3)\nclf3 = RandomForestClassifier(n_estimators=30, max_depth=10, random_state=300)\neclf = VotingClassifier(estimators=[('clf1', clf1), ('clf2', clf2),('clf3',clf3)], voting='hard')\n\neclf.fit(X_train, y_train)\ny_pred = eclf.predict(X_test)\nprint(confusion_matrix(y_test, y_pred))\nprint(eclf.score(X_test, y_test))",
"Now taking in Competition Data.",
"url = 'C:\\\\Users\\\\ajaohri\\\\Desktop\\\\all/test.csv'\ntest = pd.read_csv(url)\ntest.head()\n\ntest.isnull().sum()",
"There are 86 null values in Age, so we approximate them like we did earlier.\nThere are 327 null values in Cabin, so we drop it altogether.\nThere is 1 null value in Fare, so we approximate it according to the median of each class of the null position.",
"test.describe()\n\nsb.set(rc={'figure.figsize':(11.7,8.27)})\nax = sb.boxplot(x='Pclass',y='Fare',data=test,width=0.9)\n\ndef fare_approx(cols):\n fare = cols[0]\n pclass = cols[1]\n if pd.isnull(fare):\n if pclass == 1:\n return 55\n elif pclass == 2:\n return 20\n else:\n return 10\n else:\n return fare",
"Cleaning up the test data:\nDropping variables, approximating age and fare, dummy variables.",
"test_data = test.drop(['Name','Ticket'],1)\ntest_data['Age'] = test_data[['Age', 'Pclass']].apply(age_approx, axis=1)\ntest_data['Fare'] = test_data[['Fare','Pclass']].apply(fare_approx, axis=1)\ntest_data['Cabin'] = test_data[['Cabin','Pclass']].apply(cabin_approx, axis=1)\n#\ngender_test = pd.get_dummies(test_data['Sex'],drop_first=True)\nembark_location_test = pd.get_dummies(test_data['Embarked'],drop_first=True)\ntest_data.drop(['Sex','Embarked'],axis=1,inplace=True)\ntest_dmy = pd.concat([test_data, gender_test, embark_location_test],axis=1)\n\n#test_dmy.describe()\ntest_data.dropna(inplace=True)\ntest_dmy.isnull().sum()\n\ntest_dmy.head()\n\nX_competition = test_dmy.ix[:,(1,2,3,4,5,6,7,8,9)].values",
"Prediction for Competition Data",
"y_comp = eclf.predict(X_competition)\n\nsubmission = pd.DataFrame({'PassengerId':test_data['PassengerId'],'Survived':y_comp})\nsubmission.head()\n\nfilename = 'Titanic Predictions 1.csv'\n\nsubmission.to_csv(filename,index=False)\n\nprint('Saved file: ' + filename)\n\nos.getcwd()\n"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
EmuKit/emukit
|
notebooks/Emukit-tutorial-bayesian-optimization-integrating-model-hyperparameters.ipynb
|
apache-2.0
|
[
"Bayesian optimization integrating model hyper-parameters\nIn this notebook we are going to see how to use Emukit to solve optimization problems when the acquisition function is integrated with respect to the hyper-parameters of the model. \nTo show this with an example, use the Six-hump camel function \n$$f(x_1,x_2) = \\left(4-2.1x_1^2 = \\frac{x_1^4}{3} \\right)x_1^2 + x_1x_2 + (-4 +4x_2^2)x_2^2,$$\nin $[-3,3]\\times [-2,2]$. This functions has two global minima, at $(0.0898,-0.7126)$ and $(-0.0898,0.7126)$.",
"import numpy as np\n%pylab inline",
"Loading the problem and generate initial data",
"from emukit.test_functions import sixhumpcamel_function\nf, parameter_space = sixhumpcamel_function()",
"Now we define the domain of the function to optimize.",
"### --- Generate data\nfrom emukit.core.initial_designs import RandomDesign\n\ndesign = RandomDesign(parameter_space) # Collect random points\nnum_data_points = 5\nX = design.get_samples(num_data_points)\nY = f(X)",
"Train the model on the initial data",
"import GPy\n\nmodel_gpy_mcmc = GPy.models.GPRegression(X,Y)\nmodel_gpy_mcmc.kern.set_prior(GPy.priors.Uniform(0,5))\nmodel_gpy_mcmc.likelihood.variance.constrain_fixed(0.001)",
"We wrap the model in Emukit.",
"from emukit.model_wrappers import GPyModelWrapper\nmodel_emukit = GPyModelWrapper(model_gpy_mcmc)\nmodel_emukit.model.plot()\nmodel_emukit.model",
"Create the aquisition function\nWe use a combination of IntegratedHyperParameterAcquisition and ExpectedImprovement classes to create the integrated expected improvement acquisition object. The IntegratedHyperParameterAcquisition can convert any acquisition function into one that is integrated over model hyper-parameters.\nWe need to pass a function that will return an acquisition object to IntegratedHyperParameterAcquisition, this function takes in the model as an input only.",
"from emukit.core.acquisition import IntegratedHyperParameterAcquisition\nfrom emukit.bayesian_optimization.acquisitions import ExpectedImprovement\n\nacquisition_generator = lambda model: ExpectedImprovement(model, jitter=0)\nexpected_improvement_integrated = IntegratedHyperParameterAcquisition(model_emukit, acquisition_generator)\n\nfrom emukit.bayesian_optimization.loops import BayesianOptimizationLoop\n\nbayesopt_loop = BayesianOptimizationLoop(model = model_emukit,\n space = parameter_space,\n acquisition = expected_improvement_integrated,\n batch_size = 1)",
"We run the loop for 10 iterations.",
"max_iter = 10\nbayesopt_loop.run_loop(f, max_iter)",
"Now, once the loop is completed we can visualize the distribution of the hyperparameters given the data.",
"labels = ['rbf variance', 'rbf lengthscale']\n\nplt.figure(figsize=(14,5))\nsamples = bayesopt_loop.candidate_point_calculator.acquisition.samples\n\nplt.subplot(1,2,1)\nplt.plot(samples,label = labels)\nplt.title('Hyperparameters samples',size=25)\nplt.xlabel('Sample index',size=15)\nplt.ylabel('Value',size=15)\n\nplt.subplot(1,2,2)\nfrom scipy import stats\nxmin = samples.min()\nxmax = samples.max()\nxs = np.linspace(xmin,xmax,100)\nfor i in range(samples.shape[1]):\n kernel = stats.gaussian_kde(samples[:,i])\n plot(xs,kernel(xs),label=labels[i])\n_ = legend()\nplt.title('Hyperparameters densities',size=25)\nplt.xlabel('Value',size=15)\nplt.ylabel('Frequency',size=15)",
"And we can check how the optimization evolved when you integrate out the acquisition.",
"plt.plot(np.minimum.accumulate(bayesopt_loop.loop_state.Y))\nplt.ylabel('Current best')\nplt.xlabel('Iteration');"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
n-witt/MachineLearningWithText_SS2017
|
tutorials/12 Model Selection.ipynb
|
gpl-3.0
|
[
"Hyperparameters and Model Validation\nPreviously, we saw the basic recipe for applying a supervised machine learning model:\n\nChoose a class of model\nChoose model hyperparameters\nFit the model to the training data\nUse the model to predict labels for new data\n\nThe first two pieces of this are the most important part of using these tools and techniques effectively.\nThe question that comes up is: How can we make informed choice for these parameters?\nWe've touched upon questions from this realm already, but here we are going to examine it in more detail.\nThinking about Model Validation\nIn principle, model validation is very simple: \n* choosing a model and its hyperparameters\n* Estimation: applying it to some of the training data and comparing the prediction to the known value.\nLet's first go through a naive approach and see why it fails.\nModel validation the wrong way\nLet's demonstrate the naive approach to validation using the Iris data, which we saw previously:",
"from sklearn.datasets import load_iris\niris = load_iris()\nX = iris.data\ny = iris.target",
"Next we choose a model and hyperparameters:",
"from sklearn.neighbors import KNeighborsClassifier\nmodel = KNeighborsClassifier(n_neighbors=1)",
"Then we train the model, and use it to predict labels for data we already know:",
"model.fit(X, y)\ny_model = model.predict(X)",
"Finally, we compute the fraction of correctly labeled points:",
"from sklearn.metrics import accuracy_score\naccuracy_score(y, y_model)",
"We see an accuracy score of 1.0, which indicates that 100% of points were correctly labeled by our model!\nBut is this truly measuring the expected accuracy?\nHave we really come upon a model that we expect to be correct 100% of the time?\nModel validation the right way: Holdout sets\nWe need to use a holdout set: that is, we hold back some subset of the data from the training of the model, and then use this holdout set to check the model performance.\nThis splitting can be done using the train_test_split utility in Scikit-Learn:",
"from sklearn.model_selection import train_test_split\n# split the data with 50% in each set\nX1, X2, y1, y2 = train_test_split(X, y, random_state=0, train_size=0.5)\n\n# fit the model on one set of data\nmodel.fit(X1, y1)\n\n# evaluate the model on the second set of data\ny2_model = model.predict(X2)\naccuracy_score(y2, y2_model)",
"The nearest-neighbor classifier is about 90% accurate on this hold-out set, which is more inline with out expactation.\nThe hold-out set is similar to unknown data, because the model has not \"seen\" it before.\nBut, we have lost a portion of our data to the model training.\nIn the above case, half the dataset does not contribute to the training of the model! This is not optimal.\nModel validation via cross-validation\nOne way to address this is to use cross-validation; that is, to do a sequence of fits where each subset of the data is used both as a training set and as a validation set.\nVisually, it might look something like this:\n\nHere we do two validation trials, alternately using each half of the data as a holdout set.\nUsing the split data from before, we could implement it like this:",
"y2_model = model.fit(X1, y1).predict(X2)\ny1_model = model.fit(X2, y2).predict(X1)\naccuracy_score(y1, y1_model), accuracy_score(y2, y2_model)",
"We could compute the mean of the two accuracy scores to get a better measure of the global model performance.\nThis particular form of cross-validation is a two-fold cross-validation.\nWe could expand on this idea to use even more trials, and more folds.\nHere is a visual depiction of five-fold cross-validation:\n\n\nWe split the data into five groups\nEach of them is used to evaluate the model fit on the other 4/5 of the data.\nWhat is the advantage of higher-degree crossvalidation? What is the drawback?\n\nThis would be rather tedious to do by hand, and so we can use Scikit-Learn's cross_val_score convenience routine to do it succinctly:",
"from sklearn.model_selection import cross_val_score\ncross_val_score(model, X, y, cv=5)",
"This gives us an even better idea of the performance of the algorithm.\nHow could we take this idea to its extreme?\nThe case in which our number of folds is equal to the number of data points.\nThis type of cross-validation is known as leave-one-out cross validation, and can be used as follows:",
"from sklearn.model_selection import LeaveOneOut\nimport numpy as np\n\nloo = LeaveOneOut()\nloo.get_n_splits(X)\nscores = []\nfor train_index, test_index in loo.split(X):\n model.fit(X[train_index], y[train_index])\n scores.append(accuracy_score(y[test_index], model.predict(X[test_index])))\nscores = np.array(scores)\nscores",
"Because we have 150 samples, the leave one out cross-validation yields scores for 150 trials, and the score indicates either successful (1.0) or unsuccessful (0.0) prediction.\nTaking the mean of these gives an estimate of the error rate:",
"scores.mean()",
"This gives us a good impression on the performance of our model. But there is also a problem. Can you spot it?\nSelecting the Best Model\nNow that we can evaluate a model's performance, we will tackle the question how to select a model and its hyperparameters.\nA very important question to ask is: If my estimator is underperforming, how should I move forward? What are the options?\n\nUse a more complicated/more flexible model\nUse a less complicated/less flexible model\nGather more training samples\nGather more data to add features to each sample\n\nSometimes the results are counter-intuitive:\n* Using a more complicated model will give worse results \n* Adding more training samples may not improve your results\nThe Bias-variance trade-off\nFundamentally, the question of \"the best model\" is about finding a sweet spot in the tradeoff between bias and variance.\nConsider the following figure, which presents two regression fits to the same dataset:\n\n\nComment on the models.\nWhich model is better?\nIs either of the models 'good'? Why?\n\nConsider what happens if we use these two models to predict the y-value for some new data.\nIn the following diagrams, the red/lighter points indicate data that is omitted from the training set:\n\nIt is clear that neither of these models is a particularly good fit to the data, but they fail in different ways.\nThe score here is the $R^2$ score, or coefficient of determination, which measures how well a model performs relative to a simple mean of the target values. \n* $R^2=1$ indicates a perfect match\n* $R^2=0$ indicates the model does no better than simply taking the mean of the data\n* Negative values mean even worse models. \nLeft model\n\nAttempts to find a straight-line fit through the data.\nThe data are intrinsically more complicated than a straight line, the straight-line model will never be able to describe this dataset well.\nSuch a model is said to underfit the data: that is, it does not have enough model flexibility to suitably account for all the features in the data\nAnother way of saying this is that the model has high bias\n\nRight model\n\nAttempts to fit a high-order polynomial through the data.\nThe model fit has enough flexibility to nearly perfectly account for the fine features in the data\nIt very accurately describes the training data\nIts precise form seems to be more reflective of the noise properties rather than the intrinsic properties of whatever process generated that data\nSuch a model is said to overfit the data: that is, it has so much model flexibility that the model ends up accounting for random errors as well as the underlying data distribution\nAnother way of saying this is that the model has high variance.\n\nFrom the scores associated with these two models, we can make an observation that holds more generally:\n\nFor high-bias models, the performance of the model on the validation set is similar to the performance on the training set, but the overall score is low.\nFor high-variance models, the performance of the model on the validation set is far worse than the performance on the training set.\n\nImagine we have the ability to tune the model complexity, then we can expect the training score and validation score to behave as illustrated in the following figure:\n\nThe diagram shown here is often called a validation curve, and we see the following essential features:\n\nThe training score is everywhere higher than the validation score. This is generally the case. 
Why?\nFor very low model complexity (a high-bias model), the training data is under-fit, which means that the model is a poor predictor both for the training data and for any previously unseen data.\nFor very high model complexity (a high-variance model), the training data is over-fit, which means that the model predicts the training data very well, but fails for any previously unseen data.\nWhich model/level of complexity should we choose?\n\nValidation curves in Scikit-Learn\nLet's look at an example. We will use a polynomial regression model: this is a generalized linear model in which the degree of the polynomial is a tunable parameter.\nFor example, a degree-1 polynomial fits a straight line to the data; for model parameters $a$ and $b$:\n$$\ny = ax + b\n$$\nA degree-3 polynomial fits a cubic curve to the data; for model parameters $a, b, c, d$:\n$$\ny = ax^3 + bx^2 + cx + d\n$$\nWe can generalize this to any number of polynomial features.\nIn Scikit-Learn, we can implement this with a simple linear regression combined with the polynomial preprocessor.\nWe will use a pipeline to string these operations together:",
"from sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.pipeline import make_pipeline\n\ndef PolynomialRegression(degree=2, **kwargs):\n return make_pipeline(PolynomialFeatures(degree), LinearRegression(**kwargs))",
"Now let's create some data to which we will fit our model:",
"import numpy as np\n\ndef make_data(N, err=1.0, rseed=1):\n # randomly sample the data\n rng = np.random.RandomState(rseed)\n X = rng.rand(N, 1) ** 2\n y = 10 - 1. / (X.ravel() + 0.1)\n if err > 0:\n y += err * rng.randn(N)\n return X, y\n\nX, y = make_data(40)",
"We can now visualize our data, along with polynomial fits of several degrees:",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn; seaborn.set() # for beautiful plotting\n\nX_test = np.linspace(-0.1, 1.1, 500)[:, None]\n\nplt.scatter(X.ravel(), y, color='black') # plot data\naxis = plt.axis()\n\n# plot ploynomials\nfor degree in [1, 3, 5]:\n y_test = PolynomialRegression(degree).fit(X, y).predict(X_test)\n plt.plot(X_test.ravel(), y_test, label='degree={0}'.format(degree))\n\nplt.xlim(-0.1, 1.0)\nplt.ylim(-2, 12)\nplt.legend(loc='best');",
"We can controll the model complexity (the degree of the polynomial), which can be any non-negative integer.\n\nA useful question to answer is this: what degree of polynomial provides a suitable trade-off between bias (under-fitting) and variance (over-fitting)?\n\n\nWe can make progress in this by visualizing the validation curve\n\nThis can be done straightforwardly using the validation_curve convenience routine\nGiven a model, data, parameter name, and a range to explore, this function will automatically compute both the training score and validation score across the range:",
"from sklearn.model_selection import validation_curve\ndegree = np.arange(0, 21)\ntrain_score, val_score = validation_curve(PolynomialRegression(), X, y,\n 'polynomialfeatures__degree', degree, cv=10)\n\nplt.plot(degree, np.median(train_score, 1), color='blue', label='training score')\nplt.plot(degree, np.median(val_score, 1), color='red', label='validation score')\nplt.legend(loc='best')\nplt.ylim(0, 1)\nplt.xlabel('degree')\nplt.ylabel('score');",
"This shows precisely the behavior we expect:\n* The training score is everywhere higher than the validation score\n* The training score is monotonically improving with increased model complexity\n* The validation score reaches a maximum before dropping off as the model becomes over-fit.\nWhich degree ist optimal?\nLet's plot this.",
"plt.scatter(X.ravel(), y)\nlim = plt.axis()\ny_test = PolynomialRegression(3).fit(X, y).predict(X_test)\nplt.plot(X_test.ravel(), y_test);\nplt.axis(lim);",
"Learning Curves\nOne important aspect of model complexity is that the optimal model will generally depend on the size of your training data.\nFor example, let's generate a new five times larger dataset:",
"X2, y2 = make_data(200)\nplt.scatter(X2.ravel(), y2);",
"We will duplicate the preceding code to plot the validation curve for this larger dataset; for reference let's over-plot the previous results as well:",
"degree = np.arange(51)\ntrain_score2, val_score2 = validation_curve(PolynomialRegression(), X2, y2,\n 'polynomialfeatures__degree', degree, cv=10)\n\nplt.plot(degree, np.median(train_score2, 1), color='blue', label='training score')\nplt.plot(degree, np.median(val_score2, 1), color='red', label='validation score')\nplt.plot(degree[:train_score.shape[0]], np.median(train_score, 1), color='blue', alpha=0.3, linestyle='dashed')\nplt.plot(degree[:train_score.shape[0]], np.median(val_score, 1), color='red', alpha=0.3, linestyle='dashed')\nplt.legend(loc='best')\nplt.ylim(0, 1)\nplt.xlabel('degree')\nplt.ylabel('score');",
"From the validation curve it is clear that the larger dataset can support a much more complicated model:\n* The peak here is probably around a degree of 6\n* Even a degree-20 model is not seriously overfitting the data\nThus we see that the behavior of the validation curve has not one but two important inputs: \n* the model complexity.\n* the number of training points.\nIt is often useful to explore the behavior of the model as a function of the number of training points.\nThis can be done by using increasingly larger subsets of the data to fit our model.\nThis is called a learning curve.\nThe general behavior of learning curves is this:\n* A model of a given complexity will overfit a small dataset: this means the training score will be relatively high, while the validation score will be relatively low.\n* A model of a given complexity will underfit a large dataset: this means that the training score will decrease, but the validation score will increase.\n* A model will never, except by chance, give a better score to the validation set than the training set: this means the curves should keep getting closer together but never cross.\nWith these features in mind, we would expect a learning curve to look qualitatively like that shown in the following figure:\n\n\nThe Learning curve converges to a particular score as the number of training samples grows\nOnce we have enough training data a particular model has converged\nThat means: adding more training data will not help!\nTo increase the model performance another (often more complex) model must be chosen\n\nLearning curves in Scikit-Learn\nScikit-Learn offers a convenient utility for computing such learning curves:",
"from sklearn.model_selection import learning_curve\n\nfig, ax = plt.subplots(1, 2, figsize=(16, 6))\nfig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)\n\nfor i, degree in enumerate([2, 9]):\n N, train_lc, val_lc = learning_curve(PolynomialRegression(degree),\n X, y, cv=10,\n train_sizes=np.linspace(0.3, 1, 25))\n\n ax[i].plot(N, np.mean(train_lc, 1), color='blue', label='training score')\n ax[i].plot(N, np.mean(val_lc, 1), color='red', label='validation score')\n ax[i].hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1],\n color='gray', linestyle='dashed')\n\n ax[i].set_ylim(0, 1)\n ax[i].set_xlim(N[0], N[-1])\n ax[i].set_xlabel('training size')\n ax[i].set_ylabel('score')\n ax[i].set_title('degree = {0}'.format(degree), size=14)\n ax[i].legend(loc='best')",
"This plot gives us a visual depiction of how our model responds to increasing training data.\nWhen the learning curve has already converged (i.e., when the training and validation curves are already close to each other) adding more training data will not significantly improve the fit!.\nThis situation is seen in the left panel, with the learning curve for the degree-2 model.\nTo increase the converged score a more complicated model must be used.\nIn the right panel: by moving to a much more complicated model, we increase the score of convergence\nThe drawback is a higher model variance (indicated by the difference between the training and validation scores).\nIf we were to add even more data points, the learning curve for the more complicated model would eventually converge.\n\nPlotting a learning curve for your particular choice of model and dataset can help you to make this type of decision about how to move forward in improving your analysis.\nWhat is the difference between a validation curve and a learning curve?\nValidation in Practice: Grid Search\n\nThe preceding discussion gave you some intuition into the trade-off between bias and variance, and its dependence on model complexity and training set size.\nIn practice, models generally have more than one knob to turn, and thus plots of validation and learning curves change from lines to multi-dimensional surfaces.\nSuch visualizations are difficult.\nWe would like to find the particular model that maximizes the validation score directly, instead.\n\nScikit-Learn provides automated tools to do this in the grid search module.\nHere is an example of using grid search to find the optimal polynomial model.\nWe will explore a three-dimensional grid of model features:\n* Polynomial degree\n* The flag telling us whether to fit the intercept\n* The flag telling us whether to normalize the problem.\nThis can be set up using Scikit-Learn's GridSearchCV meta-estimator:",
"from sklearn.model_selection import GridSearchCV\n\nparam_grid = {'polynomialfeatures__degree': np.arange(21),\n 'linearregression__fit_intercept': [True, False],\n 'linearregression__normalize': [True, False]}\n\ngrid = GridSearchCV(PolynomialRegression(), param_grid, cv=10)",
"Notice that like a normal estimator, this has not yet been applied to any data.\nCalling the fit() method will fit the model at each grid point, keeping track of the scores along the way:",
"grid.fit(X, y);",
"Now that this is fit, we can ask for the best parameters as follows:",
"grid.best_params_",
"Finally, if we wish, we can use the best model and show the fit to our data using code from before:",
"model = grid.best_estimator_\n\nplt.scatter(X.ravel(), y)\nlim = plt.axis()\ny_test = model.fit(X, y).predict(X_test)\nplt.plot(X_test.ravel(), y_test);\nplt.axis(lim);"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ziky5/F4500_Python_pro_fyziky
|
lekce_08/N2spec.ipynb
|
mit
|
[
"Zpracování rotačních spekter $N_2(C\\,^3\\Pi_g\\rightarrow B\\,^3\\Pi_u)$\nV atmosférických výbojích často pozorujeme záření molekuly dusíky v důsledku přechodu $N_2(C\\,^3\\Pi_g\\rightarrow B\\,^3\\Pi_u)$. Ve starší literatuře se setkáme s označením \"druhý pozitivní systém\". Většina vibračních pásů tohoto systému se nachází v blízké UV oblasti a zasahuje i do viditelné části spektra. Toto záření je tedy často původcem charakteristické fialové barvy atmosférických výbojů. \nS rostoucí teplotou se více populují stavy s vyšším rotačním číslem. Ve spektrech se to projevuje tak, že roste relativní intenzita části pásů směrem ke kratším vlnovým délkám (tedy směrem \"doleva\").\nToho se dá využít k rychlému odhady teploty, pokud máme k dispozici kalibrační křivku \n$$\n\\frac{I_0}{ I_1}= f(T, {\\rm integrační~limity})\n$$",
"#kod v teto bunce neni soucasti lekce,\n#presto ho ale netajime\n\nimport massiveOES\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom matplotlib import colors as mcolors\ncolors = dict(mcolors.BASE_COLORS, **mcolors.CSS4_COLORS)\n\nN2 = massiveOES.SpecDB('N2CB.db')\nspec_cold = N2.get_spectrum(Trot=300, Tvib=300, wmin=325, wmax=337.6)\nspec_hot = N2.get_spectrum(Trot=3000, Tvib=3000, wmin=325, wmax=337.6)\ndump=spec_cold.refine_mesh()\ndump=spec_hot.refine_mesh()\n\nspec_cold.convolve_with_slit_function(gauss=5e-2)\nspec_hot.convolve_with_slit_function(gauss=5e-2)\n\nplt.rcParams['font.size'] = 13\nplt.rcParams['figure.figsize'] = (15,4)\n\nfig, axs = plt.subplots(1,2)\n\naxs[0].plot(spec_cold.x, spec_cold.y, color='blue', label = 'T$_{rot}$ = 300 K')\naxs[0].plot(spec_hot.x, spec_hot.y, color='green', label = 'T$_{rot}$ = 3000 K')\n\naxs[0].legend(loc='upper left')\naxs[0].set_xlabel('wavelength [nm]')\naxs[0].set_ylabel('relative photon flux [arb. u.]')\n\nint_lims = 320, 335, 336.9\n\naxs[1].plot(spec_hot.x, spec_hot.y, color='green', label = 'T$_{rot}$ = 3000 K')\naxs[1].fill_between(spec_hot.x, spec_hot.y, \n where=(spec_hot.x > int_lims[0]) & (spec_hot.x<int_lims[1]),\n color='green', alpha=0.5)\naxs[1].fill_between(spec_hot.x, spec_hot.y, \n where=(spec_hot.x > int_lims[1]) & (spec_hot.x<int_lims[2]),\n color='darkorange', alpha=0.5)\n\naxs[1].annotate('$I_0$', xy=(334, 70), xytext = (330, 300), \n arrowprops=dict(facecolor='green', width = 2, headwidth=7),\n color = 'green', alpha=0.9, size=20)\n\naxs[1].annotate('$I_1$', xy=(336, 220), xytext = (334, 500), \n arrowprops=dict(facecolor='darkorange', width = 2, headwidth=7),\n color = 'darkorange', alpha=1, size=20)\n\n\naxs[1].set_xlabel('wavelength [nm]')\ntxt = axs[1].set_ylabel('relative photon flux [arb. u.]')\n\n\nimport numpy \nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ncal = numpy.genfromtxt('N2spec/calibration.txt')\n\nplt.rcParams['figure.figsize'] = (7.5,4)\nax = plt.plot(cal[:,0], cal[:,1])\n\nplt.xlabel('temperature [K]')\nplt.ylabel('I$_0$ / I$_1$')\n\ntxt = plt.title('limits = (320, 335, 336.9) nm')\n",
"Úkoly\n\nProjděte všechny soubory v adresáři N2spec. Každý z nich obsahuje dusíkové spektrum získané při určité teplotě. Výše popsanou metodou s využitím kalibrační křivky (N2spec/calibration.txt) přiřaďte každému souboru správnou teplotu. \nJméno každého souboru skrývá souřadnice x a y. Vyneste získané hodnoty teploty do dvourozměrného pole podle těchto souřadnic a vykreslete takto získaný obraz.\n\nMůžeme začít tím, že zjistíme, co se v daném adresáři skrývá:",
"!ls N2spec/ | head #!ls vypíše obsah adresáře, head jej omezí na prvních 10 řádků",
"Předchozí přístup vlastně nevyužívá python. Pro práci s obsahem adresářů je v pythonu např. knihovna glob. S její pomocí můžeme zjistit, kolik souborů je třeba zpracovat.",
"import glob\n\nfilelist = glob.glob('N2spec/y*x*.txt')\n\nprint('Dnes zpracujeme ' + str(len(filelist)) + ' souborů')",
"Měli bychom si ověřit, jakou mají naše soubory strukturu:",
"!head N2spec/y0x10.txt ",
"Vidíme, že soubory skrývají spektra ve dvou sloupcích oddělených tabulátorem. První sloupec obsahuje údaje o vlnové délce (podle něj tedy můžeme rozhodnout, v jakém intervalu budeme integrovat), ve druhém sloupci najdeme intenzitu, tedy to, co bylo ve výše ukázaných grafech na ose y. První řádek obsahuje hlavičku a začíná znakem \"#\". Toto je tedy symbol označující komentáře - každý řádek od tohoto symbolu dál by měl být při načítání přeskočen. \nPro načítání takovýchto souborů nám poslouží funkce numpy.genfromtxt().",
"import numpy\n\nnumpy.genfromtxt?\n#parametr converters umozni zpracovat i nestandardni vstupy\n#napr. desetinna carka misto tecky\n\nsample = numpy.genfromtxt(filelist[0], comments='#')\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(sample[:,0], sample[:,1])\n#dvojtečková notace [:,0]: čti jako \"vezmi všechny řádky, nultý sloupec\"",
"A jak se počítá integrál? Podívejme se na funkci numpy.trapz.",
"numpy.trapz?",
"trapz sice umí spočítat integrál, ale neumí se omezit na daný interval. Na to budeme využívat výběr z pole s podmínkou.",
"condition0 = (sample[:,0] > 320) & (sample[:,0] < 335)\ncondition1 = (sample[:,0] > 335) & (sample[:,0] < 336.9)\ncondition2 = sample[:,0] > 336.9\n\nplt.plot(sample[condition0,0], sample[condition0,1], color = 'blue')\nplt.plot(sample[condition1,0], sample[condition1,1], color = 'red')\nplt.plot(sample[condition2,0], sample[condition2,1], color = 'green')",
"Můžeme tedy přistoupit k výpočtu obou integrálů i jejich podílu.",
"I0 = numpy.trapz(y = sample[condition0,1], x = sample[condition0,0])\nI1 = numpy.trapz(y = sample[condition1,1], x = sample[condition1, 0])\n\nI0_over_I1 = I0 / I1\nprint(I0_over_I1)",
"Teď už jen zbývá správně použít kalibrační křivku a k podílu integrálů přiřadit správnou teplotu. Soubor musíme nejdříve načíst. Nejdřív zkontrolujeme, zda můžeme zase použít numpy.genfromtxt bez zvlzvláštních nastavení:",
"!head N2spec/calibration.txt\n#hurá, půjde to po dobrém\n\ncal = numpy.genfromtxt('N2spec/calibration.txt')\ncondition_temp = (cal[:,1] == I0_over_I1)\nprint(cal[condition_temp,0])",
"Jenomže přesná hodnota I0_over_I1 v souboru calibration.txt není! Co teď?\nBudeme muset použít google. Klíčová slova: numpy find nearest value.\n.\n.\n.\n.\n.\n.\n(obrázek tu máte, ale přece to nebudete opisovat...)\n<img src=\"http://physics.muni.cz/~janvorac/stack_overflow.png\"></img>",
"def find_nearest(array,value):\n idx = (numpy.abs(array-value)).argmin()\n return idx\n\nnearest_index = find_nearest(cal[:,1], I0_over_I1)\nprint(nearest_index)\nprint(cal[nearest_index, 0])",
"Takže teď už zbývá jenom sepsat výše uvedené do cyklu a vytvořit si vhodnou strukturu na podržení výsledných teplot v paměti. Např. slovník.",
"temp_dict = {}\n\nfor fname in glob.glob('N2spec/y*x*.txt'):\n data = numpy.genfromtxt(fname)\n I0 = numpy.trapz(y = data[condition0,1], x = data[condition0,0])\n I1 = numpy.trapz(y = data[condition1,1], x = data[condition1, 0])\n I0_over_I1 = I0 / I1\n index = find_nearest(cal[:,1], I0_over_I1)\n T = cal[index][0]\n temp_dict[fname] = T",
"Je třeba dostat informace o souřadnicích ze jména souboru:",
"xs = [] \nys = []\nfor fname in temp_dict:\n #nejdrive rozdelime jmeno souboru podle \"/\" a vezmeme jen druhy prvek [1]\n #prvni bude vzdy jen tentyz adresar\n fname = fname.split('/')[1] \n\n #abychom ziskali y, rozdelime jmenu souboru podle \"x\" a vezmeme prvni prvek [0]\n y = fname.split('x')[0]\n #jmeno souboru zacina znakem \"y\", ten preskocime [1:]\n y = int(y[1:])\n ys.append(y)\n \n #tady rozdelime jmeno souboru podle \"x\" a vezmeme druhy prvek [1]\n x = fname.split('x')[1]\n #zbytek rozdelime podle \".\" a vezmeme prvni prvek [0]\n x = x.split('.')[0]\n x = int(x)\n xs.append(x)\n\nprint(numpy.unique(xs))\nprint(numpy.unique(ys))",
"Vidíme, že naše výsledné pole by mělo být 50x50 pixelů velké, aby se do něj data vešla.",
"result = numpy.zeros((50, 50)) #tady bude vysledek\nfor i, fname in enumerate(temp_dict):\n #print(i, fname)\n x = xs[i]\n y = ys[i]\n T = temp_dict[fname]\n result[y, x] = T",
"Pomocí plt.imshow() můžeme výsledek vykreslit.",
"#plt.imshow(result)\n#plt.colorbar()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Misteir/Machine_Learning
|
linear_regression/linear_regression1.ipynb
|
gpl-3.0
|
[
"Librairies",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"read file content",
"data = pd.read_csv('ex1data1.txt', header=None, names=['population', 'profit'])\ndata.head()\n\ndata.plot.scatter('population', 'profit')",
"Dots seem to follow a line, we could have done a correlation test to check if the two variabes are linked. Now we transform the data matrix into two numpy arrays.",
"X = np.array(data[\"population\"])\ny = np.array(data[\"profit\"])",
"now we will developp the two functions predict (apply theta to the X matrix) and gradient_descent1 (update theta)",
"def predict(X, theta):\n return (X * theta[1] + theta[0])\n\ndef gradient_descent1(X, y, theta, alpha, num_iters):\n m = X.shape[0]\n for i in range(0, num_iters):\n theta0 = theta[0] - (alpha / m) * np.sum(predict(X, theta) - y)\n theta1 = theta[1] - (alpha / m) * np.dot(predict(X, theta) - y, np.transpose(X))\n theta = [theta0, theta1]\n return theta\n\ntheta = np.zeros(2, dtype=float)\ntheta = gradient_descent1(X, y, theta, 0.01, 1500)\ntheta",
"Expected output (for alpha 0.01 and 1500 iterations):[-3.6302914394043597, 1.166362350335582]\nThe visualize plot our dataset with the regression line corresponding to theta",
"def visualize(theta):\n fig = plt.figure()\n ax = plt.axes()\n ax.set_xlim([4.5,22.5])\n ax.set_ylim([-5, 25])\n ax.scatter(X, y)\n line_x = np.linspace(0,22.5, 20)\n line_y = theta[0] + line_x * theta[1]\n ax.plot(line_x, line_y)\n plt.show()\n\nvisualize(theta)",
"the cost function will allow us to record the evolution of the cost during the gradient descent",
"def cost(X, y, theta):\n loss = predict(X, theta) - y\n cost = (1 / (2 * X.shape[0])) * np.dot(loss, np.transpose(loss))\n return(cost)\n\ncost(X, y, [0, 0])",
"expected output for [0, 0]: 32.072733877455676\nthe full version of gradient descent now records the cost history",
"def gradient_descent(X, y, theta, alpha, num_iters):\n m = X.shape[0]\n J_history = []\n for i in range(0, num_iters):\n theta0 = theta[0] - (alpha / m) * np.sum(predict(X, theta) - y)\n theta1 = theta[1] - (alpha / m) * np.dot(predict(X, theta) - y, np.transpose(X))\n theta = [theta0, theta1]\n J_history.append(cost(X, y, theta))\n return theta, J_history\n\ntheta = np.zeros(2, dtype=float)\ntheta, J_history = gradient_descent(X, y, theta, 0.01, 1500)\ntheta",
"Expected output for alhpa 0.01 and 1500 iterations: [-3.6302914394043597, 1.166362350335582]",
"fit = plt.figure()\nax = plt.axes()\nax.plot(J_history)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Mashimo/datascience
|
01-Regression/LogisticRegression.ipynb
|
apache-2.0
|
[
"Example of Logistic regression\nPredict student admission based on exams result\nData is taken from Andrew Ng's CS229 course on Machine Learning at Stanford.",
"import pandas as pd\n\ndata = pd.read_csv(\"datasets/ex2data1.txt\", header=None, \n names=['Exam1', 'Exam2', 'Admitted']) \n\ndata.head()",
"Historical data from previous students: each student has two exams scores associated and the final admission result (1=yes, 0= no).\nLet's plot the points in a chart (green means admitted, red not admitted).",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\ncolours = ['red' if i==0 else 'green' for i in data.Admitted]\n\nfig,ax = plt.subplots()\nax.scatter(data.Exam1, data.Exam2, c=colours)\nax.grid(True)\nax.set_xlabel(\"Exam 1 score\")\nax.set_ylabel(\"Exam 2 score\")\nfig.suptitle(\"Student admission vs. past two exams\")",
"If the score of the first or the second exam was too low, it might be not enough to be admitted. You need a good balance.\nLet's try to quantify it.\nThe sigmoid function\nLogistic regression uses a special function to model how the probability of the event \"Admitted\" P(y=1) is affected by our variables (the exams score).\nThis function is the sigmoid function:\n$$ g(z) = \\frac{1}{1 + e^{-z}}$$",
"import numpy as np\n\ndef sigmoid(z): \n \"\"\"\n Compute the sigmoid function of each input value.\n It uses numpy to leverage the vectorised format.\n\n Argument:\n z: matrix, vector or scalar (float)\n\n Returns:\n matrix, vector or float\n \"\"\"\n \n return 1 / (1 + np.exp(-z))\n",
"Let's plot it:",
"x = np.arange(-10., 10., 0.2)\nsig = sigmoid(x)\n\nfig,ax = plt.subplots()\nax.plot(x,sig)\nax.grid(True)\nax.set_xlabel(\"x\")\nax.set_ylabel(\"Sigmoid(x)\")\nfig.suptitle(\"The sigmoid function\")",
"Unit tests:",
"sigmoid(1)\n\nsigmoid(np.array([2,3]))",
"Logistic Response function: cost and gradient\nThis is the logistic function to model our admission: \n$P(y=1) = \\frac{1}{1 + e^{-(\\beta_{0} + \\beta_{1} \\cdot x_{1} + ... + \\beta_{n} \\cdot x_{n}) }} $\nwhere y is the admission result (0 or 1) and x are the exams scores. \nWe have in our example x1 and x2 (two exams).\nOur next step is to find the correct beta parameters for the model.\nAnd we will do it by using our historical data as a training set, like we did for the linear regression, using a gradient descent algorithm (see the blog post for details). \nThe algorithm will find the optimal beta parameters that minimise the cost. We need to define a function to calculate the cost and the gradient:",
"def getCostGradient(beta, X, y):\n \"\"\"\n Compute the cost of a particular choice of beta as the\n parameter for logistic regression and the gradient of the cost\n w.r.t. to the parameters.\n Returns cost and gradient\n \n Arguments:\n beta: parameters, list\n X: input data points, array\n y : output data points, array\n\n Returns:\n float - the cost\n array of float - the gradient (same dimension as beta parameters)\n \"\"\"\n # Initialize some useful values\n y = np.squeeze(y) # this is to avoid broadcasting when element-wise multiply\n m = len(y) # number of training examples\n grad = np.zeros(beta.shape) # grad should have the same dimensions as beta\n \n # Compute the partial derivatives and set grad to the partial\n # derivatives of the cost w.r.t. each parameter in theta\n \n h = sigmoid(np.dot(X, beta))\n \n # J cost function\n y0 = y * np.log(h)\n y1 = (1 - y) * np.log(1 - h)\n cost = -np.sum(y0 + y1) / m\n \n # gradient\n error = h - y\n grad = np.dot(error, X) / m\n\n return (cost, grad)\n",
"Unit test:",
"getCostGradient(np.array([-1,0.2]), np.array([[1,34], [1,35]]), np.array([0,1]))",
"Split data into X (training data) and y (target variable)",
"cols = data.shape[1]\ncols\n\n# add the intercept\ndata.insert(0, 'Ones', 1)\n\nX = data.iloc[:,0:cols] # the first columns but the last are X\nX = np.array(X.values)\n\ny = data.iloc[:,cols:cols+1] # last column is the y\ny = np.array(y.values) \n\ninitialBeta = np.zeros(cols) # could be random also",
"what is the cost given these initial beta parameters?",
"getCostGradient(initialBeta, X, y)",
"Initial cost is 0.69\nFit the beta parameters\nTo find the optimal beta parameters we use a highly tuned function (minimize) from the package SciPy.\nWe need to provide the cost and the gradient function, the input data and which method to use (we use the classic Newton). The argument Jac=True tells that cost and gradient are together in the same function.",
"import scipy.optimize as opt \n\nresult = opt.minimize(fun = getCostGradient, x0 = initialBeta, args = (X, y),\n method = 'Newton-CG',jac = True) \n\nresult.message\n\noptimalBeta = result.x",
"and here we have our final beta parameters:",
"optimalBeta",
"$$P(y=1) = \\frac{1}{1 + e^{25.17 - 0.21 \\cdot x_{1} - 0.20 \\cdot x_{2} }} $$\nPlot the decision boundary\nWe can use these beta parameters to plot the decision boundary on the training data.\nWe only need two points to plot a line, so we choose two endpoints: the min and the max among the X training data (we add a small margin of 2 to have a longer line in the plot, looks better).",
"plot_x = np.array([min(X[:,2])-2, max(X[:,2])+2])\nplot_x",
"The boundary lies where the P(y=1) = P(y=0) = 0.5\nwhich means that beta * X shall be zero",
"plot_y = (-1./optimalBeta[2]) * (optimalBeta[1] * plot_x + optimalBeta[0])\nplot_y\n\nfig,ax = plt.subplots()\nax.scatter(data.Exam1, data.Exam2, c=colours)\nax.plot(plot_x, plot_y)\nax.grid(True)\nax.set_xlabel(\"Exam 1 score\")\nax.set_ylabel(\"Exam 2 score\")\nfig.suptitle(\"Student admission vs. past two exams\")",
"The blue line is our decision boundary: when your exams score lie below the line then probably (that is the prediction) you will not be admitted to University. If they lie above, probably you will.\nAs you can see, the boundary is not predicting perfectly on the training historical data. It's a model. Not perfect but useful.\nWhat we can do is to measure its accuracy.\nAccuracy",
"def predict(beta, X): \n probabilities = sigmoid(np.dot(X, beta))\n return [1 if x >= 0.5 else 0 for x in probabilities]\n\npredictions = predict(optimalBeta, X)\n\ncorrect = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) \n else 0 for (a, b) in zip(predictions, y)] \naccuracy = (sum(map(int, correct)) % len(correct)) \nprint ('accuracy = {0}%'.format(accuracy) )",
"Just for fun, let's say that my scores are 40 in the first exam and 78 in the second one:",
"myExams = np.array([1., 40., 78.])\n\nsigmoid(np.dot(myExams, optimalBeta))",
"Uh oh, looks's like my probability to be admitted at University is only 23% ..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
archman/phantasy
|
docs/source/src/notebooks/phantasy_lattice.ipynb
|
bsd-3-clause
|
[
"Lattice\nPS: Since phantasy is still under development, this notebook might be updated frequently.\nImport modules/packages",
"import phantasy\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Model Machine",
"mp = phantasy.MachinePortal(machine='FRIB_FLAME', segment='LINAC')",
"Inspect mp",
"mp.inspect_mconf()",
"Load another lattice (segment)",
"mp.load_lattice('LS1')\n# Please note: 'LS1' maybe not consistent with the real configuration, just for demonstration.\n# The configuration for 'LS1' segment should be updated.",
"List all loaded lattices",
"mp.lattice_names\n\n# Current working lattice:\nmp.work_lattice_name\n\n# Switch lattice to 'LINAC'\nmp.use_lattice('LINAC')\n\n# Current working lattice now:\nmp.work_lattice_name",
"Currently (phantasy of version 0.5.0), one can operate lattice by explicitly getting working lattice,\nThe future plan is most operation should be able to reach through MachinePortal.\nGet working lattice",
"lat = mp.work_lattice_conf",
"Inspect lattice",
"print(\"Lattice name : %s\" % lat.name)\nprint(\"Machine name : %s\" % lat.mname)\nprint(\"Lattice groups : %s\" % lat.group.keys())\n## more to be shown, not final version.",
"Locate Elements\nTwo methods (to date) could be used to locate elements: get_elements() and next_elements().\nget_elements()",
"# Invalid name:\nlat.get_elements(name='NOEXISTS')\n\n# name:\nlat.get_elements(name='FS1_BMS:DCV_D2662')\n\n# name pattern\nlat.get_elements(name=['FS1_B?*D266?', 'LS1_B*DCV*'])\n\n# multiple filters, e.g. type:\nlat.get_elements(name=['FS1_B?*D266?', 'LS1_B*DCV*'], type='BPM')\nassert lat.get_elements(name=['FS1_B?*D266?', 'LS1_B*DCV*'], type='BPM') == \\\n lat.get_elements(name=['FS1_B?*D266?', 'LS1_B*DCV*'], type='BP?')\n\n# with hybrid types:\nlat.get_elements(name=['FS1_B?*D266?', 'LS1_B*DCV*'], type=['BPM', 'QUAD'])\n\n# get sub-lattice regarding to s-position range:\nlat.get_elements(srange=(10, 11))\n\n# multiple filters with srange:\nlat.get_elements(name=['FS1_B?*D266?', 'LS1_B*DCV*'], type=['BPM', 'QUAD'], srange=(154, 155))",
"next_elements()",
"# locate reference element:\nref_elem = lat.get_elements(name='*')[6]\n\nref_elem\n\n# get the next element of ref_elem, i.e. the first element downstream\nlat.next_elements(ref_elem)\n\n# get the last one of the next two elements\nlat.next_elements(ref_elem, count=2)\n\n# get all of the next two elements\nlat.next_elements(ref_elem, count=2, range='0::1')\n\n# get all of the two elements before ref_elem\nlat.next_elements(ref_elem, count=-2, range='0::1')\n\n# get the next two BPM elements of ref_elem,\n# return including ref_elem itself\nlat.next_elements(ref_elem, count=2, type=['BPM'], ref_include=True, range='0::1')\n\n# with hybrid types\nlat.next_elements(ref_elem, count=2, type=['BPM', 'CAV'], range='0::1')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
konstantinstadler/pymrio
|
doc/source/notebooks/buildflowmatrix.ipynb
|
gpl-3.0
|
[
"Analysing the source of stressors (flow matrix)\nTo calculate the source (in terms of regions and sectors) of a certain stressor or impact driven by consumption, one needs to diagonalize this stressor/impact. This section shows how to do this based on the \nsmall test mrio included in pymrio. The same procedure can be use for any other MRIO, but keep in mind that\ndiagonalizing a stressor dramatically increases the memory need for the calculations.\nBasic example\nFirst we load the test mrio:",
"import pymrio\nio = pymrio.load_test()",
"The test mrio includes several extensions:",
"list(io.get_extensions())",
"For the example here, we use 'emissions' - 'emission_type1':",
"io.emissions.F\n\net1_diag = io.emissions.diag_stressor(('emission_type1', 'air'), name = 'emtype1_diag')",
"The parameter name is optional, if not given the name is set to the stressor name + '_diag'\nThe new emission matrix now looks like this:",
"et1_diag.F.head(15)",
"And can be connected back to the system with:",
"io.et1_diag = et1_diag",
"Finally we can calulate the all stressor accounts with:",
"io.calc_all()",
"This results in a square footprint matrix. In this matrix, every column respresents the amount of stressor occuring in each region - sector driven by the consumption stated in the column header. Conversly, each row states where the stressor impacts occuring in the row are distributed due (from where they are driven).",
"io.et1_diag.D_cba.head(20)",
"The total footprints of a region - sector are given by summing the footprints along rows:",
"io.et1_diag.D_cba.sum(axis=0).reg1\n\nio.emissions.D_cba.reg1",
"The total stressor in a sector corresponds to the sum of the columns:",
"io.et1_diag.D_cba.sum(axis=1).reg1\n\nio.emissions.F.reg1",
"Aggregation of source footprints\nIf only one specific aspect of the source is of interest for the analysis, the footprint matrix can easily be aggregated with the standard pandas groupby function. \nFor example, to aggregate to the source region of stressor, do:",
"io.et1_diag.D_cba.groupby(level='region', axis=0).sum()",
"In addition, the aggregation function of pymrio also work on the diagonalized footprints. Here as example together with the country converter coco:",
"import country_converter as coco\nio.aggregate(region_agg = coco.agg_conc(original_countries=io.get_regions(), \n aggregates={'reg1': 'World Region A',\n 'reg2': 'World Region A',\n 'reg3': 'World Region A',},\n missing_countries='World Region B'))\n\nio.et1_diag.D_cba"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fluxcapacitor/source.ml
|
jupyterhub.ml/notebooks/train_deploy/zz_under_construction/zz_old/talks/DataWeekends/SparkMLDeployment/DataWeekends-Mar182017-SparkMLDeployment.ipynb
|
apache-2.0
|
[
"Who Am I?\nChris Fregly\n\n\nResearch Scientist, Founder @ PipelineIO\n\nVideo Series Author \"High Performance Tensorflow in Production\" @ OReilly (Coming Soon)\n\nFounder @ Advanced Spark and Tensorflow Meetup\n\nGithub Repo\n\nDockerHub Repo\n\nSlideshare\n\nYouTube\n\nWho Was I?\nSoftware Engineer @ Netflix, Databricks, IBM Spark Tech Center\n \n \n\nTypes of Model Deployments\nKeyValue\nie. Recommendations\nIn-memory: Redis, Memcache\nOn-disk: Cassandra, RocksDB\nFirst-class Servable in Tensorflow Serving\nPMML\nIt's Useful and Well-Supported\nApple, Cisco, Airbnb, HomeAway, etc\nPlease Don't Re-build It - Reduce Your Technical Debt!\n\nNative Code Generation (CPU and GPU)\nHand-coded (Python + Pickling)\nGenerate Java Code from PMML?\n\nTensorflow Models\nfreeze_graph.py: Combine Tensorflow Graph (Static) with Trained Weights (Checkpoints) into Single Deployable Model\nDemos!!",
"# You may need to Reconnect (more than Restart) the Kernel to pick up changes to these sett\nimport os\n\nmaster = '--master spark://spark-master-2-1-0:7077'\nconf = '--conf spark.cores.max=1 --conf spark.executor.memory=512m'\npackages = '--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.1'\njars = '--jars /root/lib/jpmml-sparkml-package-1.0-SNAPSHOT.jar'\npy_files = '--py-files /root/lib/jpmml.py'\n\nos.environ['PYSPARK_SUBMIT_ARGS'] = master \\\n + ' ' + conf \\\n + ' ' + packages \\\n + ' ' + jars \\\n + ' ' + py_files \\\n + ' ' + 'pyspark-shell'\n\nprint(os.environ['PYSPARK_SUBMIT_ARGS'])",
"Deploy Spark ML Models",
"from pyspark.ml.linalg import Vectors\nfrom pyspark.ml.feature import VectorAssembler, StandardScaler\nfrom pyspark.ml.feature import OneHotEncoder, StringIndexer\nfrom pyspark.ml import Pipeline, PipelineModel\nfrom pyspark.ml.regression import LinearRegression\n\nfrom pyspark.sql import SparkSession\n\nspark = SparkSession.builder.getOrCreate()",
"Step 0: Load Libraries and Data",
"df = spark.read.format(\"csv\") \\\n .option(\"inferSchema\", \"true\").option(\"header\", \"true\") \\\n .load(\"s3a://datapalooza/airbnb/airbnb.csv.bz2\")\n\ndf.registerTempTable(\"df\")\n\nprint(df.head())\n\nprint(df.count())",
"Step 1: Clean, Filter, and Summarize the Data",
"df_filtered = df.filter(\"price >= 50 AND price <= 750 AND bathrooms > 0.0 AND bedrooms is not null\")\n\ndf_filtered.registerTempTable(\"df_filtered\")\n\ndf_final = spark.sql(\"\"\"\n select\n id,\n city,\n case when state in('NY', 'CA', 'London', 'Berlin', 'TX' ,'IL', 'OR', 'DC', 'WA')\n then state\n else 'Other'\n end as state,\n space,\n cast(price as double) as price,\n cast(bathrooms as double) as bathrooms,\n cast(bedrooms as double) as bedrooms,\n room_type,\n host_is_super_host,\n cancellation_policy,\n cast(case when security_deposit is null\n then 0.0\n else security_deposit\n end as double) as security_deposit,\n price_per_bedroom,\n cast(case when number_of_reviews is null\n then 0.0\n else number_of_reviews\n end as double) as number_of_reviews,\n cast(case when extra_people is null\n then 0.0\n else extra_people\n end as double) as extra_people,\n instant_bookable,\n cast(case when cleaning_fee is null\n then 0.0\n else cleaning_fee\n end as double) as cleaning_fee,\n cast(case when review_scores_rating is null\n then 80.0\n else review_scores_rating\n end as double) as review_scores_rating,\n cast(case when square_feet is not null and square_feet > 100\n then square_feet\n when (square_feet is null or square_feet <=100) and (bedrooms is null or bedrooms = 0)\n then 350.0\n else 380 * bedrooms\n end as double) as square_feet\n from df_filtered\n\"\"\").persist()\n\ndf_final.registerTempTable(\"df_final\")\n\ndf_final.select(\"square_feet\", \"price\", \"bedrooms\", \"bathrooms\", \"cleaning_fee\").describe().show()\n\nprint(df_final.count())\n\nprint(df_final.schema)\n\n# Most popular cities\n\nspark.sql(\"\"\"\n select \n state,\n count(*) as ct,\n avg(price) as avg_price,\n max(price) as max_price\n from df_final\n group by state\n order by count(*) desc\n\"\"\").show()\n\n# Most expensive popular cities\n\nspark.sql(\"\"\"\n select \n city,\n count(*) as ct,\n avg(price) as avg_price,\n max(price) as max_price\n from df_final\n group by city\n order by avg(price) desc\n\"\"\").filter(\"ct > 25\").show()",
"Step 2: Define Continous and Categorical Features",
"continuous_features = [\"bathrooms\", \\\n \"bedrooms\", \\\n \"security_deposit\", \\\n \"cleaning_fee\", \\\n \"extra_people\", \\\n \"number_of_reviews\", \\\n \"square_feet\", \\\n \"review_scores_rating\"]\n\ncategorical_features = [\"room_type\", \\\n \"host_is_super_host\", \\\n \"cancellation_policy\", \\\n \"instant_bookable\", \\\n \"state\"]",
"Step 3: Split Data into Training and Validation",
"[training_dataset, validation_dataset] = df_final.randomSplit([0.8, 0.2])",
"Step 4: Continous Feature Pipeline",
"continuous_feature_assembler = VectorAssembler(inputCols=continuous_features, outputCol=\"unscaled_continuous_features\")\n\ncontinuous_feature_scaler = StandardScaler(inputCol=\"unscaled_continuous_features\", outputCol=\"scaled_continuous_features\", \\\n withStd=True, withMean=False)",
"Step 5: Categorical Feature Pipeline",
"categorical_feature_indexers = [StringIndexer(inputCol=x, \\\n outputCol=\"{}_index\".format(x)) \\\n for x in categorical_features]\n\ncategorical_feature_one_hot_encoders = [OneHotEncoder(inputCol=x.getOutputCol(), \\\n outputCol=\"oh_encoder_{}\".format(x.getOutputCol() )) \\\n for x in categorical_feature_indexers]",
"Step 6: Assemble our Features and Feature Pipeline",
"feature_cols_lr = [x.getOutputCol() \\\n for x in categorical_feature_one_hot_encoders]\nfeature_cols_lr.append(\"scaled_continuous_features\")\n\nfeature_assembler_lr = VectorAssembler(inputCols=feature_cols_lr, \\\n outputCol=\"features_lr\")",
"Step 7: Train a Linear Regression Model",
"linear_regression = LinearRegression(featuresCol=\"features_lr\", \\\n labelCol=\"price\", \\\n predictionCol=\"price_prediction\", \\\n maxIter=10, \\\n regParam=0.3, \\\n elasticNetParam=0.8)\n\nestimators_lr = \\\n [continuous_feature_assembler, continuous_feature_scaler] \\\n + categorical_feature_indexers + categorical_feature_one_hot_encoders \\\n + [feature_assembler_lr] + [linear_regression]\n\npipeline = Pipeline(stages=estimators_lr)\n\npipeline_model = pipeline.fit(training_dataset)\n\nprint(pipeline_model)",
"Step 8: Serialize PipelineModel",
"from jpmml import toPMMLBytes\n\nmodel_bytes = toPMMLBytes(spark, training_dataset, pipeline_model)\n\nprint(model_bytes.decode(\"utf-8\"))",
"Step 9: Push Model to Live, Running Spark ML Model Server (Mutable)",
"import urllib.request\n\nnamespace = 'default'\nmodel_name = 'airbnb'\nversion = '1'\nupdate_url = 'http://prediction-pmml-aws.demo.pipeline.io/update-pmml-model/%s/%s/%s' % (namespace, model_name, version)\n\nupdate_headers = {}\nupdate_headers['Content-type'] = 'application/xml'\n\nreq = urllib.request.Request(update_url, \\\n headers=update_headers, \\\n data=model_bytes)\n\nresp = urllib.request.urlopen(req)\n\nprint(resp.status) # Should return Http Status 200 ",
"Step 10: Evalute Model",
"import urllib.parse\nimport json\n\nnamespace = 'default'\nmodel_name = 'airbnb'\nversion = '1'\nevaluate_url = 'http://prediction-pmml-aws.demo.pipeline.io/evaluate-pmml-model/%s/%s/%s' % (namespace, model_name, version)\n\nevaluate_headers = {}\nevaluate_headers['Content-type'] = 'application/json'\n\ninput_params = '{\"bathrooms\":5.0, \\\n \"bedrooms\":4.0, \\\n \"security_deposit\":175.00, \\\n \"cleaning_fee\":25.0, \\\n \"extra_people\":1.0, \\\n \"number_of_reviews\": 2.0, \\\n \"square_feet\": 250.0, \\\n \"review_scores_rating\": 2.0, \\\n \"room_type\": \"Entire home/apt\", \\\n \"host_is_super_host\": \"0.0\", \\\n \"cancellation_policy\": \"flexible\", \\\n \"instant_bookable\": \"1.0\", \\\n \"state\": \"CA\"}' \nencoded_input_params = input_params.encode('utf-8')\n\nreq = urllib.request.Request(evaluate_url, \\\n headers=evaluate_headers, \\\n data=encoded_input_params)\n\nresp = urllib.request.urlopen(req)\n\nprint(resp.read())",
"Bonus Demos!\nDeploy Java-based Model\nCreate Java-based Model",
"from urllib import request\n\nsourceBytes = ' \\n\\\n private String str; \\n\\\n \\n\\\n public void initialize(Map<String, Object> args) { \\n\\\n } \\n\\\n \\n\\\n public Object predict(Map<String, Object> inputs) { \\n\\\n String id = (String)inputs.get(\"id\"); \\n\\\n \\n\\\n return id.equals(\"21619\"); \\n\\\n } \\n\\\n'.encode('utf-8')",
"Deploy Java-based Model",
"from urllib import request\n\nnamespace = 'default'\nmodel_name = 'java_equals'\nversion = '1'\n\nupdate_url = 'http://prediction-java-aws.demo.pipeline.io/update-java/%s/%s/%s' % (namespace, model_name, version)\n\nupdate_headers = {}\nupdate_headers['Content-type'] = 'text/plain'\n\nreq = request.Request(\"%s\" % update_url, headers=update_headers, data=sourceBytes)\nresp = request.urlopen(req)\n\ngenerated_code = resp.read()\nprint(generated_code.decode('utf-8'))",
"Evaluate Java-based Model",
"from urllib import request\n\nnamespace = 'default'\nmodel_name = 'java_equals'\nversion = '1'\n\nevaluate_url = 'http://prediction-java-aws.demo.pipeline.io/evaluate-java/%s/%s/%s' % (namespace, model_name, version)\n\nevaluate_headers = {}\nevaluate_headers['Content-type'] = 'application/json'\ninput_params = '{\"id\":\"21618\"}' \nencoded_input_params = input_params.encode('utf-8')\n\nreq = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)\nresp = request.urlopen(req)\n\nprint(resp.read()) # Should return false \n\nfrom urllib import request\n\nnamespace = 'default'\nmodel_name = 'java_equals'\nversion = '1'\nevaluate_url = 'http://prediction-java-aws.demo.pipeline.io/evaluate-java/%s/%s/%s' % (namespace, model_name, version)\n\nevaluate_headers = {}\nevaluate_headers['Content-type'] = 'application/json'\ninput_params = '{\"id\":\"21619\"}' \nencoded_input_params = input_params.encode('utf-8')\n\nreq = request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)\nresp = request.urlopen(req)\n\nprint(resp.read()) # Should return true",
"Deploy Scikit-Learn Model",
"!pip install sklearn_pandas\n!pip install git+https://github.com/jpmml/sklearn2pmml.git",
"Create Scikit-Learn Model",
"import pandas as pd\nimport numpy as np\nimport urllib.request\nimport urllib.parse\nimport json\n\nfrom sklearn.datasets import load_diabetes,load_iris\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.metrics import mean_squared_error as mse, r2_score\nfrom sklearn2pmml import PMMLPipeline\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn2pmml import sklearn2pmml\n\niris = load_iris()\niris_df = pd.DataFrame(iris.data,columns=iris.feature_names)\niris_df['Species'] = iris.target\niris_pipeline = PMMLPipeline([\n (\"classifier\", DecisionTreeClassifier())\n])\niris_pipeline.fit(iris_df[iris_df.columns.difference([\"Species\"])], iris_df[\"Species\"])",
"Serialize Scikit-Learn Model",
"sklearn2pmml(iris_pipeline, \"DecisionTreeIris.pmml\", with_repr = True)\nmodel_bytes = bytearray(open('DecisionTreeIris.pmml', 'rb').read())",
"Deploy Scikit-Learn Model",
"import urllib.request\nimport urllib.parse\n\nnamespace = 'default'\nmodel_name = 'iris'\nversion = '1'\n\nupdate_url = 'http://prediction-pmml-aws.demo.pipeline.io/update-pmml-model/%s/%s/%s' % (namespace, model_name, version)\n\nupdate_headers = {}\nupdate_headers[\"Content-type\"] = \"application/xml\"\n\nreq = urllib.request.Request(update_url, headers=update_headers, data=model_bytes)\n\nresp = urllib.request.urlopen(req)\nprint(resp.status)\n\nnamespace = 'default'\nmodel_name = 'iris'\nversion = '1'\n\nevaluate_url = 'http://prediction-pmml-aws.demo.pipeline.io/evaluate-pmml-model/%s/%s/%s' % (namespace, model_name, version)\nevaluate_headers = {}\nevaluate_headers['Content-type'] = 'application/json'\n\ninput_params = iris_df.ix[0,:-1].to_json()\nencoded_input_params = input_params.encode('utf-8')\n\nreq = urllib.request.Request(evaluate_url, headers=evaluate_headers, data=encoded_input_params)\nresp = urllib.request.urlopen(req)\n\nprint(resp.read())",
"Monitoring Your Models\nNetflix Microservices Dashboard (Hystrix)",
"from IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))\n\nfrom IPython.display import clear_output, Image, display, HTML\n\nhtml = '<iframe width=1200px height=500px src=\"http://hystrix.demo.pipeline.io/hystrix-dashboard/monitor/monitor.html?streams=%5B%7B%22name%22%3A%22Predictions%20-%20AWS%22%2C%22stream%22%3A%22http%3A%2F%2Fturbine-aws.demo.pipeline.io%2Fturbine.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%2C%7B%22name%22%3A%22Predictions%20-%20GCP%22%2C%22stream%22%3A%22http%3A%2F%2Fturbine-gcp.demo.pipeline.io%2Fturbine.stream%22%2C%22auth%22%3A%22%22%2C%22delay%22%3A%22%22%7D%5D\">'\ndisplay(HTML(html))",
"Grafana + Prometheus Dashboard",
"from IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))\n\nfrom IPython.display import clear_output, Image, display, HTML\n\nhtml = '<iframe width=1200px height=500px src=\"http://grafana.demo.pipeline.io\">'\ndisplay(HTML(html))",
"Load-Test Your Model Servers\nRun JMeter Tests from Local Laptop (Limited by Laptop Performance)\nRun Headless JMeter Tests from Training Clusters in Cloud",
"# Spark ML - Airbnb\n!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-airbnb-rc.yaml\n\n# Codegen - Java - Simple\n!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-equals-rc.yaml\n\n# Tensorflow AI - Tensorflow Serving - Simple \n!kubectl create --context=awsdemo -f /root/pipeline/loadtest.ml/loadtest-aws-minimal-rc.yaml",
"End Load Tests",
"!kubectl delete --context=awsdemo rc loadtest-aws-airbnb\n!kubectl delete --context=awsdemo rc loadtest-aws-equals\n!kubectl delete --context=awsdemo rc loadtest-aws-minimal",
"Rolling Deploy",
"!kubectl rolling-update prediction-tensorflow --context=awsdemo --image-pull-policy=Always --image=fluxcapacitor/prediction-tensorflow",
"PipelineIO Premium Edition\n\nA/B and Multi-armed Bandit Testing\n\n\nContinuous, Hybrid-Cloud Deployments\n\n\n\n\n\nOnline Model Training and Deploying\n\nGPU-based Deployments"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
gouthambs/karuth-source
|
content/extra/notebooks/pandas_vs_numpy.ipynb
|
artistic-2.0
|
[
"Numpy and Pandas Performance Comparison\nGoutham Balaraman\nPandas and Numpy are two packages that are core to a lot of data analysis. In this post I will compare the performance of numpy and pandas. \ntl;dr:\n- numpy consumes less memory compared to pandas\n- numpy generally performs better than pandas for 50K rows or less\n- pandas generally performs better than numpy for 500K rows or more\n- for 50K to 500K rows, it is a toss up between pandas and numpy depending on the kind of operation",
"import pandas as pd\nimport matplotlib.pyplot as plt\nplt.style.use(\"seaborn-pastel\")\n%matplotlib inline\nimport seaborn.apionly as sns\nimport numpy as np\nfrom timeit import timeit \nimport sys\n\niris = sns.load_dataset('iris')\n\ndata = pd.concat([iris]*100000)\ndata_rec = data.to_records()\n\nprint (len(data), len(data_rec))",
"Here I have loaded the iris dataset and replicated it so as to have 15MM rows of data. The space requirement for 15MM rows of data in a pandas dataframe is more than twice that of a numpy recarray.",
"MB = 1024*1024\nprint(\"Pandas %d MB \" % (sys.getsizeof(data)/MB))\nprint(\"Numpy %d MB \" % (sys.getsizeof(data_rec)/MB))",
"A snippet of the data shown below.",
"data.head()\n\n# <!-- collapse=True -->\ndef perf(inp, statement, grid=None):\n length = len(inp) \n gap = int(length/5)\n #grid = np.array([int(x) for x in np.logspace(np.log10(gap), np.log10(length+1) , 5)])\n if grid is None:\n grid = np.array([10000, 100000, 1000000, 5000000, 10000000])\n num = 100\n time = []\n data = {'pd': pd, 'np': np}\n for i in grid:\n if isinstance(inp, pd.DataFrame):\n sel = inp.iloc[:i]\n data['data'] = sel\n else:\n sel = inp[:i]\n data['data_rec'] = sel\n t = timeit(stmt=statement, globals=data, number=num)\n time.append(t/num)\n return grid, np.array(time)\n\ndef bench(pd_inp, pd_stmt, np_inp, np_stmt, title=\"\", grid=None):\n g,v1 = perf(pd_inp, pd_stmt, grid)\n g,v2 = perf(np_inp, np_stmt, grid)\n fig, ax = plt.subplots()\n ax.loglog()\n ax.plot(g, v1, label=\"pandas\",marker=\"o\", lw=2)\n ax.plot(g, v2, label=\"numpy\", marker=\"v\", lw=2)\n ax.set_xticks(g)\n plt.legend(loc=2)\n plt.xlabel(\"Number of Records\")\n plt.ylabel(\"Time (s)\")\n plt.grid(True)\n plt.xlim(min(g)/2,max(g)*2)\n plt.title(title)",
"In this post, performance metrics for a few different categories are compared between numpy and pandas:\n- operations on a column of data, such as mean or applying a vectorised function\n- operations on a filtered column of data\n- vector operations on a column or filtered column\nOperations on a Column\nHere some performance metrics with operations on one column of data. The operations involved in here include fetching a view, and a reduction operation such as mean, vectorised log or a string based unique operation. All these are O(n) calculations. The mean calculation is orders of magnitude faster in numpy compared to pandas for array sizes of 100K or less. For sizes larger than 100K pandas maintains a lead over numpy.",
"bench(data, \"data.loc[:, 'sepal_length'].mean()\", \n data_rec, \"np.mean(data_rec.sepal_length)\",\n title=\"Mean on Unfiltered Column\")",
"Below, the vectorized log operation is faster in numpy for sizes less than 100K but pandas costs about the same for sizes larger than 100K.",
"bench(data, \"np.log(data.loc[:, 'sepal_length'])\",\n data_rec, \"np.log(data_rec.sepal_length)\",\n title=\"Vectorised log on Unfiltered Column\")",
"The one differentiating aspect about the test below is that the column species is of string type. The operation demonstrated is a unique calculation. We observe that the unique calculation is roughly an order of magnitude faster in pandas for sizes larger than 1K rows.",
"bench(data, \"data.loc[:,'species'].unique()\",\n data_rec, \"np.unique(data_rec.species)\", \n grid=np.array([100, 1000, 10000, 100000, 1000000]),\n title=\"Unique on Unfiltered String Column\")",
"Operations on a Filtered Column\nBelow we perform the same tests as above, except that the column is not a full view, but is instead a filtered view. The filters are simple filters with an arithmetic bool comparison for the first two and a string comparison for the third below. \nBelow, mean is calculated for a filtered column sepal_length. Here performance of pandas is better for row sizes larger than 10K. In the mean on unfiltered column shown above, pandas performed better for 1MM or more. Just having selection operations has shifted performance chart in favor of pandas for even smaller number of records.",
"bench(data, \"data.loc[(data.sepal_width>3) & \\\n (data.petal_length<1.5), 'sepal_length'].mean()\",\n data_rec, \"np.mean(data_rec[(data_rec.sepal_width>3) & \\\n (data_rec.petal_length<1.5)].sepal_length)\",\n grid=np.array([1000, 10000, 100000, 1000000]),\n title=\"Mean on Filtered Column\")",
"For vectorised log operation on a unfiltered column shown above, numpy performed better than pandas for number of records less than 100K while the performance was comparable for the two for sizes larger than 100K. But the moment you introduce a filter on a column, pandas starts to show an edge over numpy for number of records larger than 10K.",
"bench(data, \"np.log(data.loc[(data.sepal_width>3) & \\\n (data.petal_length<1.5), 'sepal_length'])\",\n data_rec, \"np.log(data_rec[(data_rec.sepal_width>3) & \\\n (data_rec.petal_length<1.5)].sepal_length)\",\n grid=np.array([1000, 10000, 100000, 1000000]),\n title=\"Vectorised log on Filtered Column\")",
"Here is another example of a mean reduction on a column but with a string filter. We see a similar behavior where numpy performs significantly better at small sizes and pandas takes a gentle lead for larger number of records.",
"bench(data, \"data[data.species=='setosa'].sepal_length.mean()\",\n data_rec, \"np.mean(data_rec[data_rec.species=='setosa'].sepal_length)\",\n grid=np.array([1000, 10000, 100000, 1000000]),\n title=\"Mean on (String) Filtered Column\")",
"Vectorized Operation on a Column\nIn this last section, we do vectorised arithmetic using multiple columns. This involves creating a view and vectorised math on these views. Even when there is no filter, pandas has a slight edge over numpy for large number of records. For smaller than 100K records, numpy performs significantly better.",
"bench(data, \"data.petal_length * data.sepal_length + \\\n data.petal_width * data.sepal_width\",\n data_rec, \"data_rec.petal_length*data_rec.sepal_length + \\\n data_rec.petal_width * data_rec.sepal_width\",\n title=\"Vectorised Math on Unfiltered Columns\")",
"In the following figure, the filter involves vectorised arithmetic operation, and mean reduction is computed on the filtered column. The presence of a filter makes pandas significantly faster for sizes larger than 100K, while numpy maitains a lead for smaller than 10K number of records.",
"bench(data, \"data.loc[data.sepal_width * data.petal_length > \\\n data.sepal_length, 'sepal_length'].mean()\",\n data_rec, \"np.mean(data_rec[data_rec.sepal_width * data_rec.petal_length \\\n > data_rec.sepal_length].sepal_length)\",\n title=\"Vectorised Math in Filtering Columns\",\n grid=np.array([100, 1000, 10000, 100000, 1000000]))",
"Conclusion\nPandas is often used in an interactive environment such as through Jupyter notebooks. In such a case, any performance loss from pandas will be in significant. But if you have smaller pandas dataframes (<50K number of records) in a production environment, then it is worth considering numpy recarrays. \n- numpy consumes (roughtly 1/3) less memory compared to pandas\n- numpy generally performs better than pandas for 50K rows or less\n- pandas generally performs better than numpy for 500K rows or more\n- for 50K to 500K rows, it is a toss up between pandas and numpy depending on the kind of operation"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
TheMitchWorksPro/DataTech_Playground
|
PY_Basics/TMWP_Pandas_Dataframe_Basics.ipynb
|
mit
|
[
"<div align=\"right\">Python [conda env:PY27_rtclone]</div>\n<div align=\"right\">Python [conda env:PY36_clone]</div>\n\nWorking with Pandas DataFrames\nThis notebook was created in Python 2.7 and cross-tested in Python 3.6. Code should work in both versions. It explores basic syntax of working with Pandas Dataframes. Some useful functions and one line snippets to know:\n- DataFrame(data={key1:[data, data2], key2:[d, d2]}, columns=[\"col1\", \"col2\"], index=indexVar, dtype=int64)\n- data_df.astype(dtype= {\"wheel_number\":\"int64\", \"car_name\":\"object\",\"minutes_spent\":\"float64\"})\n- as.dataframe(df_data)\nTOC\n\nBuilding a Simple DataFrame\nfrom a Dictionary\nCreate Empty and Add Rows One by One\nSetting dtypes on Each Column of DF\nRenaming Columns\nDisplaying Dataframe with nice Notebook Formatting<br/><br/>\nIndexing and Selecting Dataframe Slices<br/><br/>\nMore Help on DFs on The Web<br/><br/>",
"# libraries used throughout this notebook\nimport pandas as pd\nimport numpy as np",
"<a id=\"bld1\" name=\"bld1\"></a>\nBuilding a Simple DataFrame\n<a id=\"dixndf\" name=\"dixndf\"></a>\nCreate DF from Dictionary",
"# create a dictionary\nstateData = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'], \\\n 'year': [2000, 2001, 2002, 2001, 2002],\n 'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}\n# convert to DataFrame\ndf_stdt = pd.DataFrame(stateData)\ndf_stdt\n\nprint(\"Index: %s\" %df_stdt.index)\nprint(\"Values: \\n%s\" %df_stdt.values)\nprint(\"Columns: %s\" %df_stdt.columns)\nprint(\"dataframe.describe():\")\ndf_stdt.describe()",
"<a id=\"lbldf\" name=\"lbldf\"></a>\nCreate Empty DF And Add Rows One At A Time\nA common scenario in code: a loop or function call needs to add a row to a dataframe but first you need a blank one. This code may come in handy as that scenario comes up.",
"solutionPD = pd.DataFrame({ 'disk':[],'fromPeg':[], 'toPeg':[]}, dtype=np.int64 ) \nsolutionPD = solutionPD.append(pd.DataFrame({ 'disk':[5],'fromPeg':[1], 'toPeg':[3]}), ignore_index=True)\nsolutionPD = solutionPD.append(pd.DataFrame({ 'disk':[4],'fromPeg':[1], 'toPeg':[2] }), ignore_index=True)\nprint(\"disk column type: %s\" %type(solutionPD['disk'][0]))\nsolutionPD",
"<a id=\"setDtype1\" name=\"setDtype1\"></a>\nSetting Data Types on Columns\nRead comments as well as code in cells which follow:",
"# Recommended: To set dtype for all columns, build the Dataframe, \n# and then use astype() to set the columns as shown here\n\n# though online help topics indicate it should be possible to pass in a dictionary or list of tuples\n# to set all the datatypes initially, under Python 2.7, this does not seem to work.\n# came closest with this sample:\n# solutionPD2 = pd.DataFrame({ 'disk':[],'fromPeg':[], 'toPeg':[], 'text':[], 'notes':[]}, \n# dtype=[('disk', 'int64'), ('fromPeg', 'int64' ), ('toPeg', 'float64'), \n# ('text', 'str'), ('notes', 'object')] ) \n# Related links:\n# http://stackoverflow.com/questions/21197774/assign-pandas-dataframe-column-dtypes\n# http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.append.html\n\nsolutionPD2 = pd.DataFrame({ 'disk':[],'fromPeg':[], 'toPeg':[], 'notes':[], 'text':[]}, dtype=np.int64) \nsolutionPD2 = solutionPD2.astype({ 'disk':'int64','fromPeg':'float64', 'toPeg':'float64', 'notes':'object', \n 'text':'str'})\n\nsolutionPD2 = solutionPD2.append(pd.DataFrame({ 'disk':[5],'fromPeg':[1], 'toPeg':[3], \n 'notes':'hello', 'text':'more text' }), ignore_index=True)\nsolutionPD2 = solutionPD2.append(pd.DataFrame({ 'disk':[4],'fromPeg':[1], 'toPeg':[2], \n 'notes':'good bye', 'text':'even more text'}), ignore_index=True)\n\n# print(\"disk column type: %s\" %type(solutionPD['disk'][0]))\nprint(solutionPD2.dtypes)\nsolutionPD2\n\n# disk is currently an int\n# we will convert this numeric column to text here\nsolutionPD2.disk = solutionPD2.disk.astype('str')\nprint(solutionPD2.dtypes) # disk changes to 'object' which permits text",
"<a id=\"rname\" name=\"rname\"></a>\nRenaming Dataframe Columns\nStudy these tests. They show what works and what doesn't as counterintuitive as that is. This code was written python 3.6.1",
"testDF = pd.DataFrame({\"color\":[\"red\", \"green\", \"blue\"],\n \"thing\":[\"house\", \"dog\", \"machine\"]})\n\ntestDF\n\n## get single column name\ntestDF.columns[0] \n\n## rename one column\ntestDF.columns.values[0]='primaries'\ntestDF\n\ntestDF.primaries\n\ntestDF.color\n\ntestDF.color.name\n\ntestDF.columns = ['primary', 'something']\ntestDF\n\ntestDF.something\n\ntestDF.primary\n\n## stackoverflow: https://stackoverflow.com/questions/11346283/renaming-columns-in-pandas/11346337\n## sample: df.rename(columns={'oldName1': 'newName1', 'oldName2': 'newName2'}, inplace=True)\n\ntestDF.rename(columns={'primary':'myColors'}, inplace=True)\ntestDF\n\ntestDF.rename(columns={'myColors':'TruColors', 'something':'anything'}, inplace=True)\ntestDF",
"<a id=\"display1\" name=\"display1\"></a>\nDisplaying DataFrames in Python Notebooks\nTo the uninitiated, dataframes appear to display nicely in notebooks if they are the last command in a cell. But printing them and/or outputting them from functions does not work unless you \"know the trick\" illustrated in this section.\nThe Problem",
"## this does not work:\n\ndef printDF(df):\n print(df)\n\ndef displayDF(df):\n df\n\ndf_stdt # fails to output since this is not the last command in the cell\nprint(df_stdt) # outputs but without the nice built-in formatting\nprint(\"=\"*72)\nprintDF(df_stdt) # outputs without format\ndisplayDF(df_stdt) # fails to output since it is not the last command in the dell",
"The Solution\nA Stack Overflow post indicates to use the import statement that is commented out in the cell that follows.\nThe one that is not commented out works with the Python 3.6 Anaconda implementation tested here.\nBy default: this Anaconda implementation installed ipython and ipython_genutils packages. If these are missing, they would need to be installed using conda install <package>. You can use the --dry-run switch to test that the install is compatible with your specific package environment first.",
"## use this line if it works ... if not, try the one used at start of this code cell\n## this cell tested with Anaconda / Python 3.6\n# from Ipython.display import display\n\nimport ipython_genutils\n\ndef presentDF(df):\n display(df)\n \npresentDF(df_stdt) \nprint(\"=\"*72)\ndisplay(solutionPD)\nprint(\"=\"*72)\ndisplay(solutionPD2)\nprint(\"=\"*72)\ndf_stdt",
"<a id=\"indx\" name=\"indx\"></a>\nIndexing for DataFrame Slices\nThere are some good resources on the web about this. This list will get added to as they are discovered.\n\nTutorial: selecting dataframes\nPandas Dataframe by Exaple - see the sections on selecting and indexing\n\nQuick Example: .loc[], .iloc[]",
"testDF\n\ntestDF.iloc[:,0] # all rows, column 0\n\ntestDF.iloc[0,0] # row 0, column 0\n\ntestDF.iloc[1,:] # row 1, all columns\n\ntestDF.loc[1] # row at index 1\n\n## note: this DF had the names changed but indicies for columns are different\ntestDF.loc[:, 'TruColors'] # all rows, column by index name\n\ntestDF.loc[:, 'TruColors':'anything'] # if there were more columns - this would get all column in this range\n\ntestDF.loc[1:2, 'anything'] # using index names ... with names it is 1:2 including 2\n\ntestDF.iloc[1:3, 1:] # using index numbers ... note that with numbers it is up to 3 (so 1 and 2)",
"<a id=\"resources\" name=\"resources\"></a>\nAdditional Resources for Pandas Dataframes\n\nPandas Dataframe by Exaple - has extensive coverage of lots of pandas DF concepts\nPandas - apply operations to groups - has good coverage of groupby and aggregate functions for DFs"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
misken/hillmaker
|
hillmaker/examples/hillPyTesting-Core.ipynb
|
apache-2.0
|
[
"# Set up autoreloading of modules so that I can debug code in external files\n%load_ext autoreload\n%autoreload 2\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib as mp\nimport matplotlib.pyplot as plt\n\nimport bydatetime\nimport hillpylib as hm\nfrom pandas import Timestamp\n\n# Let's check what version of pandas, numpy and matplotlib we are using\nprint (\"pandas version \", pd.__version__)\nprint (\"numpy version \", np.version.version)\nprint (\"matplotlib version \", mp.__version__)",
"Put it all together\nBelow I've strung together all the pieces to do an entire Hillmaker run. Change inputs as needed (e.g. scenario_name and associated parameter values) and run all the cells below. You can skip rereading the main input file if that isn't changing.\nRead main stop data file",
"file_stopdata = 'data/ShortStay.csv'\ndf = pd.read_csv(file_stopdata, parse_dates=['InRoomTS','OutRoomTS'])\ndf.info()",
"Set input parameters",
"# Required inputs\nscenario_name = 'sstest_60'\nin_fld_name = 'InRoomTS'\nout_fld_name = 'OutRoomTS'\ncat_fld_name = 'PatType'\nstart_analysis = '1/1/1996'\nend_analysis = '3/30/1996 23:45'\n\n\n# Optional inputs\n\n# This next field wasn't in original Hillmaker. Use it to specify the name to use for the overall totals.\n# At this point the totals actually aren't being calculated.\ntot_fld_name = 'SSU'\n\nbin_size_mins = 60\n\nincludecats = ['ART','IVT']\n\n## Convert string dates to actual datetimes\nstart_analysis_dt = pd.Timestamp(start_analysis)\nend_analysis_dt = pd.Timestamp(end_analysis)\n\n# Mapper from weekday integer to string\ndaynum_to_dayname = {0: 'Mon', 1: 'Tue', 2: 'Wed', 3: 'Thu', 4: 'Fri', 5: 'Sat', 6: 'Sun'}\n",
"Create the by datetime table",
"df2 = df[df['PatType'].isin(includecats)]\n\ndf2.info()\n\ndf2.groupby('PatType').describe()\n\ndf = df[df['PatType'].isin(includecats)]\n\ndf.groupby('PatType').describe()\n\nbydt_df = bydatetime.make_bydatetime(df,\n in_fld_name,\n out_fld_name,\n cat_fld_name,\n start_analysis,\n end_analysis,\n tot_fld_name,\n bin_size_mins)\n\nbydt_df.dtypes\n\nbydt_df\n\nbydt_group = bydt_df.groupby(['datetime'])\n\ntot_arrivals = bydt_group.arrivals.sum()\ntot_departures = bydt_group.departures.sum()\ntot_occ = bydt_group.occupancy.sum()\n\n#bydt_totals = pd.DataFrame(tot_arrivals)\n\n\ntot_data = [tot_arrivals,tot_departures,tot_occ]\ntot_df = pd.concat(tot_data, axis = 1, keys = [s.name for s in tot_data])\n\ntot_data = [tot_arrivals,tot_departures,tot_occ]\ntot_df = pd.concat(tot_data, axis = 1, keys = [s.name for s in tot_data])\ntot_df['day_of_week'] = tot_df.index.map(lambda x: x.weekday())\ntot_df['bin_of_day'] = tot_df.index.map(lambda x: hm.bin_of_day(x,bin_size_mins))\ntot_df['bin_of_week'] = tot_df.index.map(lambda x: hm.bin_of_week(x,bin_size_mins))\n\ntot_df['category'] = tot_fld_name\ntot_df.set_index('category', append=True, inplace=True, drop=False)\ntot_df = tot_df.reorder_levels(['category', 'datetime'])\ntot_df['datetime'] = tot_df.index.levels[1]\n\ntot_df\n\ntot_df.info()\n\nbydt_df = pd.concat([bydt_df,tot_df])\n\nbydt_df.tail(n=25)",
"Compute summary stats",
"def get_occstats(group, stub=''):\n return {stub+'count': group.count(), stub+'mean': group.mean(), \n stub+'min': group.min(),\n stub+'max': group.max(), 'stdev': group.std(), \n stub+'p50': group.quantile(0.5), stub+'p55': group.quantile(0.55),\n stub+'p60': group.quantile(0.6), stub+'p65': group.quantile(0.65),\n stub+'p70': group.quantile(0.7), stub+'p75': group.quantile(0.75),\n stub+'p80': group.quantile(0.8), stub+'p85': group.quantile(0.85),\n stub+'p90': group.quantile(0.9), stub+'p95': group.quantile(0.95),\n stub+'p975': group.quantile(0.975), \n stub+'p99': group.quantile(0.99)}\n\nbydt_dfgrp2 = bydt_df.groupby(['category','day_of_week','bin_of_day'])\n\nocc_stats = bydt_dfgrp2['occupancy'].apply(get_occstats)\narr_stats = bydt_dfgrp2['arrivals'].apply(get_occstats)\ndep_stats = bydt_dfgrp2['departures'].apply(get_occstats)\n\nocc_stats_summary = occ_stats.unstack()\narr_stats_summary = arr_stats.unstack()\ndep_stats_summary = dep_stats.unstack()\n\n\n\nocc_stats.dtype\n\ntype(occ_stats)\n\nocc_stats_summary.info()",
"Write summaries and by datetime out to CSV",
"file_bydt_csv = 'testing/bydate_' + scenario_name + '.csv'\nbydt_df.to_csv(file_bydt_csv, index=False)\n\nfile_occ_csv = 'testing/occ_stats_' + scenario_name + '.csv'\nfile_arr_csv = 'testing/arr_stats_' + scenario_name + '.csv'\nfile_dep_csv = 'testing/dep_stats_' + scenario_name + '.csv'\n\nocc_stats_summary.to_csv(file_occ_csv)\narr_stats_summary.to_csv(file_arr_csv)\ndep_stats_summary.to_csv(file_dep_csv)",
"Debugging",
"ts = pd.Timestamp('19960103 00:00:00')\nprint(ts)\n\n24000/24\n\ndf_ART = df[(df.PatType == 'ART') & (df.InRoomTS < ts)]\n\ndf_ART.info()\n\ndf_ART\n\nbydt_df.head()\n\nbydt_df[25:50]\n\nimport numpy as np\nimport pandas as pd\nfrom pandas import Timestamp\n\nimport hillmaker as hm\n\nfile_stopdata = 'data/unit_stop_log_Experiment1_Scenario1_Rep1.csv'\n\nscenario_name = 'log_unitocc_test'\nin_fld_name = 'EnteredTS'\nout_fld_name = 'ExitedTS'\ncat_fld_name = 'Unit'\nstart_analysis = '3/24/2015 00:00'\nend_analysis = '6/16/2016 00:00'\n\n# Optional inputs\n\ntot_fld_name = 'OBTot'\nbin_size_mins = 60\nincludecats = ['LDR','PP']\n\n\nstops_df = pd.read_csv(file_stopdata,index_col=0)\nbasedate = Timestamp('20150215 00:00:00')\nstops_df['EnteredTS'] = df.apply(lambda row:\n Timestamp(round((basedate + pd.DateOffset(hours=row['Entered'])).value,-9)), axis=1)\n\nstops_df['ExitedTS'] = df.apply(lambda row:\n Timestamp(round((basedate + pd.DateOffset(hours=row['Exited'])).value,-9)), axis=1)\n\nstops_df = stops_df[stops_df[cat_fld_name].isin(includecats)]\n\nstops_df.info()\n\nstops_df[100:125]\n\nstart = stops_df.ix[188]['EnteredTS']\nend = stops_df.ix[188]['ExitedTS']\nprint(start, end)\nprint(type(start))\n\nstart_str = '2015-02-18 09:25:46'\nend_str = '2015-02-19 21:06:03'\n\nstart_analysis_timestamp = Timestamp(start_str)\nend_analysis_timestamp = Timestamp(end_str)\n\nstart_analysis_dt64 = np.datetime64(start_str)\nend_analysis_dt64 = np.datetime64(end_str)\n\nprint(start_analysis_timestamp, start_analysis_dt64)\n\nnum_days_fromts = end_analysis_timestamp - start_analysis_timestamp\nnum_days_fromdt64 = end_analysis_dt64 - start_analysis_dt64\n\nprint(num_days_fromts, num_days_fromdt64)\n\nprint(type(num_days_fromts))\nprint(type(num_days_fromdt64))\n\n\nprint(start)\nprint(start.date())\nstart_tsdate = Timestamp(start.date())\nprint (start_tsdate)\n\ngap = start - Timestamp(start.date())\nprint(gap)\nprint(type(gap))\n\nminutes = 60\ndt = start\n\nfloor_seconds = minutes * 60\ndt_date = Timestamp(dt.date())\ndelta = dt - dt_date\nprint(delta)\ntot_seconds = delta.total_seconds()\nprint(tot_seconds)\n\nfloor_time = (tot_seconds // floor_seconds) * floor_seconds\nprint(floor_time)\ngap_seconds = tot_seconds - floor_time\nprint(dt_date + pd.DateOffset(seconds=floor_time))\n\n\n#%time hm.run_hillmaker(scenario_name,df,in_fld_name, out_fld_name,cat_fld_name,start_analysis,end_analysis,tot_fld_name,bin_size_mins,categories=includecats,outputpath='./testing')\n\ndf.head()\n\ndf.info()",
"Computing occupancy statistics\nNeed to compute a bunch of output stats to use for visualization, metamodeling and to evaluate scenarios.\nOverall utilization\nIt would be nice if we could just Hillmaker with bin size of one week. Let's try it.",
"scenario_name = 'log_unitocc_test_steadystate'\nhm.run_hillmaker(scenario_name,df,in_fld_name, out_fld_name,cat_fld_name,\n start_analysis,end_analysis,tot_fld_name,1440,\n categories=includecats,totals=False,outputpath='./testing')\n\nocc_df = pd.read_csv('testing/occ_stats_summary_log_unitocc_test_steadystate.csv')\n\nocc_df\n\nbydt_df\n\n%matplotlib inline\n\nimport numpy as np\nfrom numpy.random import randn\nimport pandas as pd\nfrom scipy import stats\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nbydt_df = pd.read_csv('testing/bydatetime_log_unitocc_test_steadystate.csv')\n\npp_occ = bydt_df[(bydt_df['category'] == 'PP')]['occupancy']\n\nplt.hist(pp_occ.values,20)\n\ng = sns.FacetGrid(bydt_df, col=\"category\", margin_titles=True)\nbins = np.linspace(0, 60, 13)\ng.map(plt.hist, \"occupancy\", color=\"steelblue\", bins=bins, lw=0)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
saashimi/code_guild
|
wk4/notebooks/wk4.1.ipynb
|
mit
|
[
"wk4.1\nA word on virtual environments\nconda create -n virtualenv_name anaconda # Can use any other package besides anaconda for instance python=2\nsource activate virtualenv_name\nsource deactivate\nRegular expressions",
"import re\nhand = open('mbox-short.txt')\nfor line in hand:\n line = line.rstrip()\n if re.search('^From:', line) :\n print(line)\n\nimport re\nhand = open('mbox-short.txt')\nfor line in hand:\n line = line.rstrip()\n if re.search('^F..m:', line) :\n print(line)\n\nimport re\nhand = open('mbox-short.txt')\nfor line in hand:\n line = line.rstrip()\n if re.search('^R.+: <.+@.+>', line) : # This is greedy!\n print(line)",
"Extracting information\nFrom stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008\nReturn-Path: <postmaster@collab.sakaiproject.org>\n for <source@collab.sakaiproject.org>;\nReceived: (from apache@localhost)\nAuthor: stephen.marquard@uct.ac.za",
"import re\ns = 'Hello from csev@umich.edu to cwen@iupui.edu about the meeting @2PM'\nlst = re.findall('\\S+@\\S+', s)\nprint(lst)",
"The regular expression would match twice (csev@umich.edu and cwen@iupui.edu) but it would not match the string \"@2PM\" because there are no non-blank characters before the at-sign. We can use this regular expression in a program to read all the lines in a file and print out anything that looks like an e-mail address as follows:",
"import re\nhand = open('mbox-short.txt')\nfor line in hand:\n line = line.rstrip()\n x = re.findall('\\S+@\\S+', line) # some emails contain gross < characters!\n if len(x) > 0 :\n print(x)",
"Some of our E-mail addresses have incorrect characters like \"<\" or \";\" at the beginning or end. Let's declare that we are only interested in the portion of the string that starts and ends with a letter or a number.\nTo do this, we use another feature of regular expressions. Square brackets are used to indicate a set of multiple acceptable characters we are willing to consider matching. In a sense, the \"\\S\" is asking to match the set of \"non-whitespace characters\". Now we will be a little more explicit in terms of the characters we will match.\nHere is our new regular expression:\n[a-zA-Z0-9]\\S*@\\S*[a-zA-Z]",
"import re\nhand = open('mbox-short.txt')\nfor line in hand:\n line = line.rstrip()\n x = re.findall('[a-zA-Z0-9]\\S*@\\S*[a-zA-Z]', line)\n if len(x) > 0 :\n print(x)",
"Combining search and extraction\nIf we want to find numbers on lines that start with the string \"X-\" such as:\nX-DSPAM-Confidence: 0.8475\nX-DSPAM-Probability: 0.0000\nWe don't just want any floating point numbers from any lines. We only to extract numbers from lines that have the above syntax.\nTo match we write\n^X-.*: [0-9.]+",
"import re\nhand = open('mbox-short.txt')\nfor line in hand:\n line = line.rstrip()\n if re.search('^X\\S*: [0-9.]+', line) :\n print(line)",
"But now we have to solve the problem of extracting the numbers using split. While it would be simple enough to use split, we can use another feature of regular expressions to both search and parse the line at the same time. \nParentheses are another special character in regular expressions. When you add parentheses to a regular expression they are ignored when matching the string, but when you are using findall(), parentheses indicate that while you want the whole expression to match, you only are interested in extracting a portion of the substring that matches the regular expression. \nSo we make the following change to our program:",
"import re\nhand = open('mbox-short.txt')\nfor line in hand:\n line = line.rstrip()\n x = re.findall('^X\\S*: ([0-9]\\.[0-9]+)', line)\n if len(x) > 0 :\n print(x)",
"Escape characters\nWhat if we want to match $, ^, * etc?\nUse escape character (forward slash)",
"import re\nx = 'We just received $10.00 for cookies.'\ny = re.findall('\\$[0-9.]+',x)\nprint(y)\n\nimport re\nx = 'We just received $10.00 for cookies.'\ny = re.findall('c\\S+$',x)\nprint(y)",
"Summary\n^ \nMatches the beginning of the line.\n$ \nMatches the end of the line.\n. \nMatches any character (a wildcard).\n\\s \nMatches a whitespace character.\n\\S \nMatches a non-whitespace character (opposite of \\s).\n* \nApplies to the immediately preceding character and indicates to match zero or more of the preceding character.\n*? \nApplies to the immediately preceding character and indicates to match zero or more of the preceding character in \"non-greedy mode\".\n+ \nApplies to the immediately preceding character and indicates to match zero or more of the preceding character.\n+? \nApplies to the immediately preceding character and indicates to match zero or more of the preceding character in \"non-greedy mode\".\n[aeiou] \nMatches a single character as long as that character is in the specified set. In this example, it would match \"a\", \"e\", \"i\", \"o\" or \"u\" but no other characters.\n[a-z0-9] \nYou can specify ranges of characters using the minus sign. This example is a single character that must be a lower case letter or a digit.\n[^A-Za-z] \nWhen the first character in the set notation is a caret, it inverts the logic. This example matches a single character that is anything other than an upper or lower case character.\n( ) \nWhen parentheses are added to a regular expression, they are ignored for the purpose of matching, but allow you to extract a particular subset of the matched string rather than the whole string when using findall().\n\\b \nMatches the empty string, but only at the start or end of a word.\n\\B \nMatches the empty string, but not at the start or end of a word.\n\\d \nMatches any decimal digit; equivalent to the set [0-9].\n\\D \nMatches any non-digit character; equivalent to the set [^0-9].\nBonus section for Unix users\nSupport for searching files using regular expressions was built into the Unix operating system since the 1960's and it is available in nearly all programming languages in one form or another.\nAs a matter of fact, there is a command-line program built into Unix called grep (Generalized Regular Expression Parser) that does pretty much the same as the search() examples in this chapter. So if you have a Macintosh or Linux system, you can try the following commands in your command line window.\n$ grep '^From:' mbox-short.txt\nFrom: stephen.marquard@uct.ac.za\nFrom: louis@media.berkeley.edu\nFrom: zqian@umich.edu\nFrom: rjlowe@iupui.edu\nExercises\n\nExercise 1 Write a simple program to simulate the operation of the the grep command on Unix. Ask the user to enter a regular expression and count the number of lines that matched the regular expression:\n\n```\n$ python grep.py\nEnter a regular expression: ^Author\nmbox.txt had 1798 lines that matched ^Author\n$ python grep.py\nEnter a regular expression: ^X-\nmbox.txt had 14368 lines that matched ^X-\n$ python grep.py\nEnter a regular expression: java$\nmbox.txt had 4218 lines that matched java$\n* Exercise 2 Write a program to look for lines of the form\nNew Revision: 39772\nAnd extract the number from each of the lines using a regular expression and the findall() method. Compute the average of the numbers and print out the average.\nEnter file:mbox.txt \n38549.7949721\nEnter file:mbox-short.txt\n39756.9259259\n```\nWorld's simplest browser",
"import socket\n\nmysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nmysock.connect(('www.py4inf.com', 80))\nmysock.send('GET http://www.py4inf.com/code/romeo.txt HTTP/1.0\\n\\n')\n\nwhile True:\n data = mysock.recv(512)\n if ( len(data) < 1 ) :\n break\n print(data)\n\nmysock.close()\n\nimport socket\nimport time\n\nmysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nmysock.connect(('www.py4inf.com', 80))\nmysock.send('GET http://www.py4inf.com/cover.jpg HTTP/1.0\\n\\n')\n\n\ncount = 0\npicture = \"\";\nwhile True:\n data = mysock.recv(5120)\n if ( len(data) < 1 ) : break\n time.sleep(0.25)\n count = count + len(data)\n print len(data),count\n #print data\n picture = picture + data\n\nmysock.close()\n\n# Look for the end of the header (2 CRLF)\npos = picture.find(\"\\r\\n\\r\\n\");\nprint 'Header length',pos\nprint picture[:pos]\n\n# Skip past the header and save the picture data\npicture = picture[pos+4:]\nfhand = open(\"stuff.jpg\",\"w\")\nfhand.write(picture);\nfhand.close()\n\nimport urllib\nimport re\n\nurl = raw_input('Enter - ')\nhtml = urllib.urlopen(url).read()\nlinks = re.findall('href=\"(http://.*?)\"', html)\nfor link in links:\n print link\n\nimport urllib\nfrom bs4 import BeautifulSoup\n\nurl = raw_input('Enter - ')\nhtml = urllib.urlopen(url).read()\nsoup = BeautifulSoup(html)\n\n# Retrieve all of the anchor tags\ntags = soup('a')\nfor tag in tags:\n print tag.get('href', None)",
"Exercises\nhttp://www.pythonlearn.com/html-008/cfbook013.html#toc140"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
karlstroetmann/Formal-Languages
|
Python/FixedPoint.ipynb
|
gpl-2.0
|
[
"from IPython.core.display import HTML\nwith open (\"style.css\", \"r\") as file:\n css = file.read()\nHTML(css)",
"Fixed-Point Iteration\nThe function fixpoint takes two arguments:\n - $S_0$ is a set of elements, \n - $f$ is a function mapping elements to sets of elements. \nThe function fixpoint computes the smallest set $S$ such the following conditions holds: \n- $S_0 \\subseteq S$,\n- $S = \\bigcup { f(x) \\mid x \\in S }$.",
"def fixpoint(S0, f):\n Result = S0.copy() # don't change S0\n while True:\n NewElements = { x for o in Result \n for x in f(o) \n }\n if NewElements.issubset(Result):\n return Result\n Result |= NewElements",
"The function fixpoint2 takes two arguments:\n - $S_0$ is a set of elements, \n - $f$ is a function mapping sets of elements to sets of elements. \nThe function fixpoint2 computes the smallest set $S$ such the following conditions holds: \n- $S_0 \\subseteq S$,\n- $S = f(S)$.",
"def fixpoint2(S0, f):\n Result = S0.copy() # don't change S0\n while True:\n NewElements = f(Result)\n if NewElements.issubset(Result):\n return Result\n Result |= NewElements"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
fifabsas/talleresfifabsas
|
python/Extras/Fisica1/simulacion.ipynb
|
mit
|
[
"Ejemplo de simulación numérica",
"import numpy as np\nfrom scipy.integrate import odeint\nfrom matplotlib import rc\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nrc(\"text\", usetex=True)\nrc(\"font\", size=18)\nrc(\"figure\", figsize=(6,4))\nrc(\"axes\", grid=True)",
"Problema físico\n\nDefinimos un SR con el origen en el orificio donde el hilo atravieza el plano, la coordenada $\\hat{z}$ apuntando hacia abajo. Con esto sacamos, de la segunda ley de Newton para las particulas:\n$$\n\\begin{align}\n\\text{Masa 1)}\\quad&\\vec{F}_1 = m_1 \\vec{a}_1 \\\n&-T \\hat{r} = m_1 \\vec{a}_1 \\\n&-T \\hat{r} = m_1 \\left{ \\left(\\ddot{r} - r \\dot{\\theta}^2\\right) \\hat{r} + \\left(r\\ddot{\\theta} + 2\\dot{r}\\dot{\\theta}\\right)\\hat{\\theta} \\right} \\\n&\\begin{cases}\n\\hat{r})\\ - T = m_1\\left( \\ddot{r} - r\\, \\dot{\\theta}^2\\right)\\\n\\hat{\\theta})\\ 0 = m_1 \\left(r \\ddot{\\theta} + 2 \\dot{r}\\dot{\\theta}\\right)\\\n\\end{cases}\\\n\\\n\\text{Masa 2)}\\quad&\\vec{F}_2 = m_2 \\vec{a}_2 \\\n&-T \\hat{z} + m_2 g \\hat{z} = m_2 \\ddot{z} \\hat{z} \\\n\\implies & \\boxed{T = m_2 \\left( g - \\ddot{z} \\right)}\\\n\\end{align}\n$$\nAhora reemplazando este resultado para la tension (que es igual en ambas expresiones) y entendiendo que $\\ddot{z} = -\\ddot{r}$ pues la soga es ideal y de largo constante, podemos rescribir las ecuaciones obtenidas para la masa 1 como:\n$$\n\\begin{cases}\n\\hat{r})\\quad - m_2 \\left( g + \\ddot{r} \\right) = m_1\\left( \\ddot{r} - r\\, \\dot{\\theta}^2\\right)\\\n\\\n\\hat{\\theta})\\quad 0 = m_1 \\left(r \\ddot{\\theta} + 2 \\dot{r}\\dot{\\theta}\\right)\n\\end{cases}\n\\implies\n\\begin{cases}\n\\hat{r})\\quad \\ddot{r} = \\dfrac{- m_2 g + m_1 r \\dot{\\theta}^2}{m_1 + m_2}\\\n\\\n\\hat{\\theta})\\quad \\ddot{\\theta} = -2 \\dfrac{\\dot{r}\\dot{\\theta}}{r}\\\n\\end{cases}\n$$\nLa gracia de estos métodos es lograr encontrar una expresión de la forma $y'(x) = f(x,t)$ donde x será la solución buscada, aca como estamos en un sistema de segundo orden en dos variables diferentes ($r$ y $\\theta$) sabemos que nuestra solución va a tener que involucrar 4 componentes. 
Es como en el oscilador armónico, que uno tiene que definir posicion y velocidad inicial para poder conocer el sistema, solo que aca tenemos dos para $r$ y dos para $\\theta$.\nSe puede ver entonces que vamos a necesitar una solucion del tipo:\n$$\\mathbf{X} = \\begin{pmatrix} r \\ \\dot{r}\\ \\theta \\ \\dot{\\theta} \\end{pmatrix} $$\nY entonces\n$$\n\\dot{\\mathbf{X}} = \n\\begin{pmatrix} \\dot{r} \\ \\ddot{r}\\ \\dot{\\theta} \\ \\ddot{\\theta} \\end{pmatrix} =\n\\begin{pmatrix} \\dot{r} \\ \\dfrac{-m_2 g + m_1 r \\dot{\\theta}^2}{m_1 + m_2} \\ \\dot{\\theta} \\ -2 \\dfrac{\\dot{r}\\dot{\\theta}}{r} \\end{pmatrix} =\n\\mathbf{f}(\\mathbf{X}, t)\n$$\n\nSi alguno quiere, tambien se puede escribir la evolucion del sistema de una forma piola, que no es otra cosa que una querida expansión de Taylor a orden lineal.\n$$\n\\begin{align}\n r(t+dt) &= r(t) + \\dot{r}(t)\\cdot dt \\\n \\dot{r}(t+dt) &= \\dot{r}(t) + \\ddot{r}(t)\\cdot dt \\\n \\theta(t+dt) &= \\theta(t) + \\dot{\\theta}(t)\\cdot dt \\\n \\dot{\\theta}(t+dt) &= \\dot{\\theta}(t) + \\ddot{\\theta}(t)\\cdot dt\n\\end{align}\n\\implies\n\\begin{pmatrix}\n r\\\n \\dot{r}\\\n \\theta\\\n \\ddot{\\theta}\n\\end{pmatrix}(t + dt) = \n\\begin{pmatrix}\n r\\\n \\dot{r}\\\n \\theta\\\n \\ddot{\\theta}\n\\end{pmatrix}(t) + \n\\begin{pmatrix}\n \\dot{r}\\\n \\ddot{r}\\\n \\dot{\\theta}\\\n \\ddot{\\theta}\n\\end{pmatrix}(t) \\cdot dt\n$$\nAca tenemos que recordar que la compu no puede hacer cosas continuas, porque son infinitas cuentas, entones si o si hay que discretizar el tiempo y el paso temporal!\n$$\n\\begin{pmatrix}\nr\\\n\\dot{r}\\\n\\theta\\\n\\ddot{\\theta}\n\\end{pmatrix}_{i+1} = \n\\begin{pmatrix}\nr\\\n\\dot{r}\\\n\\theta\\\n\\ddot{\\theta}\n\\end{pmatrix}_i + \n\\begin{pmatrix}\n\\dot{r}\\\n\\ddot{r}\\\n\\dot{\\theta}\\\n\\ddot{\\theta}\n\\end{pmatrix}_i \\cdot dt\n$$\nSi entonces decido llamar a este vector columna $\\mathbf{X}$, el sistema queda escrito como:\n$$\n\\mathbf{X}_{i+1} = \\mathbf{X}_i + \\dot{\\mathbf{X}}_i\\ dt\n$$\nDonde sale denuevo que $\\dot{\\mathbf{X}}$ es lo que está escrito arriba.\nEs decir que para encontrar cualquier valor, solo hace falta saber el vector anterior y la derivada, pero las derivadas ya las tenemos (es todo el trabajo que hicimos de fisica antes)!!\n---\nDe cualquier forma que lo piensen, ojala hayan entendido que entonces con tener las condiciones iniciales y las ecuaciones diferenciales ya podemos resolver (tambien llamado integrar) el sistema.",
"# Constantes del problema:\nM1 = 3\nM2 = 3\ng = 9.81\n\n# Condiciones iniciales del problema:\nr0 = 2\nr_punto0 = 0\ntita0 = 0\ntita_punto0 = 1\n\nC1 = (M2*g)/(M1+M2) # Defino constantes utiles\nC2 = (M1)/(M1+M2)\ncond_iniciales = [r0, r_punto0, tita0, tita_punto0]\n\ndef derivada(X, t, c1, c2): # esto sería la f del caso { x' = f(x,t) }\n r, r_punto, tita, tita_punto = X\n deriv = [0, 0, 0, 0] # es como el vector columna de arriba pero en filado\n \n deriv[0] = r_punto # derivada de r\n deriv[1] = -c1 + c2*r*(tita_punto)**2 # r dos puntos\n deriv[2] = tita_punto # derivada de tita\n deriv[3] = -2*r_punto*tita_punto/r\n return deriv\n\n\ndef resuelvo_sistema(m1, m2, tmax = 20):\n t0 = 0\n c1 = (m2*g)/(m1+m2) # Defino constantes utiles\n c2 = (m1)/(m1+m2)\n t = np.arange(t0, tmax, 0.001)\n \n # aca podemos definirnos nuestro propio algoritmo de integracion\n # o bien usar el que viene a armado de scipy. \n # Ojo que no es perfecto eh, a veces es mejor escribirlo uno\n out = odeint(derivada, cond_iniciales, t, args = (c1, c2,))\n\n return [t, out.T]\n\nt, (r, rp, tita, titap) = resuelvo_sistema(M1, M2, tmax=10)\n\nplt.figure()\nplt.plot(t, r/r0, 'r')\nplt.ylabel(r\"$r / r_0$\")\nplt.xlabel(r\"tiempo\")\n# plt.savefig(\"directorio/r_vs_t.pdf\", dpi=300)\n\nplt.figure()\nplt.plot(t, tita-tita0, 'b')\nplt.ylabel(r\"$\\theta - \\theta_0$\")\nplt.xlabel(r\"tiempo\")\n# plt.savefig(\"directorio/tita_vs_t.pdf\", dpi=300)\n\n\nplt.figure()\nplt.plot(r*np.cos(tita-tita0)/r0, r*np.sin(tita-tita0)/r0, 'g')\nplt.ylabel(r\"$r/r_0\\ \\sin\\left(\\theta - \\theta_0\\right)$\")\nplt.xlabel(r\"$r/r_0\\ \\cos\\left(\\theta - \\theta_0\\right)$\")\n# plt.savefig(\"directorio/trayectoria.pdf\", dpi=300)",
"Todo muy lindo!!\nCómo podemos verificar si esto está andando ok igual? Porque hasta acá solo sabemos que dio razonable, pero el ojímetro no es una medida cuantitativa.\nUna opción para ver que el algoritmo ande bien (y que no hay errores numéricos, y que elegimos un integrador apropiado ojo con esto eh... te estoy mirando a vos, Runge-Kutta), es ver si se conserva la energía.\nLes recuerdo que la energía cinética del sistema es $K = \\frac{1}{2} m_1 \\left|\\vec{v}_1 \\right|^2 + \\frac{1}{2} m_2 \\left|\\vec{v}_2 \\right|^2$, cuidado con cómo se escribe cada velocidad, y que la energía potencial del sistema únicamente depende de la altura de la pelotita colgante.\nHace falta conocer la longitud $L$ de la cuerda para ver si se conserva la energía mecánica total? (Spoiler: No. Pero piensen por qué)\nLes queda como ejercicio a ustedes verificar eso, y también pueden experimentar con distintos metodos de integración a ver qué pasa con cada uno, abajo les dejamos una ayudita para que prueben.",
"from scipy.integrate import solve_ivp\n\ndef resuelvo_sistema(m1, m2, tmax = 20, metodo='RK45'):\n t0 = 0\n c1 = (m2*g)/(m1+m2) # Defino constantes utiles\n c2 = (m1)/(m1+m2)\n t = np.arange(t0, tmax, 0.001)\n \n # acá hago uso de las lambda functions, solamente para usar \n # la misma funcion que definimos antes. Pero como ahora\n # voy a usar otra funcion de integracion (no odeint)\n # que pide otra forma de definir la funcion, en vez de pedir\n # f(x,t) esta te pide f(t, x), entonces nada, hay que dar vuelta\n # parametros y nada mas...\n \n deriv_bis = lambda t, x: derivada(x, t, c1, c2)\n out = solve_ivp(fun=deriv_bis, t_span=(t0, tmax), y0=cond_iniciales,\\\n method=metodo, t_eval=t)\n\n return out\n\n# Aca armo dos arrays con los metodos posibles y otro con colores\nall_metodos = ['RK45', 'RK23', 'Radau', 'BDF', 'LSODA']\nall_colores = ['r', 'b', 'm', 'g', 'c']\n\n# Aca les dejo la forma piola de loopear sobre dos arrays a la par\nfor met, col in zip(all_metodos, all_colores):\n result = resuelvo_sistema(M1, M2, tmax=30, metodo=met)\n t = result.t\n r, rp, tita, titap = result.y\n plt.plot(t, r/r0, col, label=met)\n \nplt.xlabel(\"tiempo\")\nplt.ylabel(r\"$r / r_0$\")\nplt.legend(loc=3)",
"Ven cómo los distintos métodos van modificando más y más la curva de $r(t)$ a medida que van pasando los pasos de integración. Tarea para ustedes es correr el mismo código con la conservación de energía.\nCuál es mejor, por qué y cómo saberlo son preguntas que deberán hacerse e investigar si en algún momento trabajan con esto.\nPor ejemplo, pueden buscar en Wikipedia \"Symplectic Integrator\" y ver qué onda.\nLes dejamos también abajo la simulación de la trayectoria de la pelotita",
"from matplotlib import animation\n%matplotlib notebook\n\nresult = resuelvo_sistema(M1, M2, tmax=30, metodo='Radau')\nt = result.t\nr, rp, tita, titap = result.y\n\nfig, ax = plt.subplots()\nax.set_xlim([-1, 1])\nax.set_ylim([-1, 1])\nax.plot(r*np.cos(tita)/r0, r*np.sin(tita)/r0, 'm', lw=0.2)\nline, = ax.plot([], [], 'ko', ms=5)\n\nN_SKIP = 50\nN_FRAMES = int(len(r)/N_SKIP)\n\ndef animate(frame_no):\n i = frame_no*N_SKIP\n r_i = r[i]/r0\n tita_i = tita[i]\n line.set_data(r_i*np.cos(tita_i), r_i*np.sin(tita_i))\n return line,\n \nanim = animation.FuncAnimation(fig, animate, frames=N_FRAMES,\n interval=50, blit=False)",
"Recuerden que esta animación no va a parar eh, sabemos que verla te deja en una especie de trance místico, pero recuerden pararla cuando haya transcurrido suficiente tiempo\nAnimación Interactiva\nUsando ipywidgets podemos agregar sliders a la animación, para modificar el valor de las masitas",
"from ipywidgets import interactive, interact, FloatProgress\nfrom IPython.display import clear_output, display\n%matplotlib inline\n\n@interact(m1=(0,5,0.5), m2=(0,5,0.5), tmax=(0.01,20,0.5)) #Permite cambiar el parámetro de la ecuación\ndef resuelvo_sistema(m1, m2, tmax = 20):\n t0 = 0\n c1 = (m2*g)/(m1+m2) # Defino constantes utiles\n c2 = (m1)/(m1+m2)\n t = np.arange(t0, tmax, 0.05)\n# out = odeint(derivada, cond_iniciales, t, args = (c1, c2,))\n r, rp, tita, titap = odeint(derivada, cond_iniciales, t, args=(c1, c2,)).T\n plt.xlim((-1,1))\n plt.ylim((-1,1))\n plt.plot(r*np.cos(tita)/r0, r*np.sin(tita)/r0,'b-')\n \n# plt.xlabel(\"tiempo\")\n# plt.ylabel(r\"$r / r_0$\")\n# plt.show()\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kabrapratik28/Stanford_courses
|
cs231n/assignment3/StyleTransfer-TensorFlow.ipynb
|
apache-2.0
|
[
"Style Transfer\nIn this notebook we will implement the style transfer technique from \"Image Style Transfer Using Convolutional Neural Networks\" (Gatys et al., CVPR 2015).\nThe general idea is to take two images, and produce a new image that reflects the content of one but the artistic \"style\" of the other. We will do this by first formulating a loss function that matches the content and style of each respective image in the feature space of a deep network, and then performing gradient descent on the pixels of the image itself.\nThe deep network we use as a feature extractor is SqueezeNet, a small model that has been trained on ImageNet. You could use any network, but we chose SqueezeNet here for its small size and efficiency.\nHere's an example of the images you'll be able to produce by the end of this notebook:\n\nSetup",
"\n%load_ext autoreload\n%autoreload 2\nfrom scipy.misc import imread, imresize\nimport numpy as np\n\nfrom scipy.misc import imread\nimport matplotlib.pyplot as plt\n\n# Helper functions to deal with image preprocessing\nfrom cs231n.image_utils import load_image, preprocess_image, deprocess_image\n\n%matplotlib inline\n\ndef get_session():\n \"\"\"Create a session that dynamically allocates memory.\"\"\"\n # See: https://www.tensorflow.org/tutorials/using_gpu#allowing_gpu_memory_growth\n config = tf.ConfigProto()\n config.gpu_options.allow_growth = True\n session = tf.Session(config=config)\n return session\n\ndef rel_error(x,y):\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Older versions of scipy.misc.imresize yield different results\n# from newer versions, so we check to make sure scipy is up to date.\ndef check_scipy():\n import scipy\n vnum = int(scipy.__version__.split('.')[1])\n assert vnum >= 16, \"You must install SciPy >= 0.16.0 to complete this notebook.\"\n\ncheck_scipy()",
"Load the pretrained SqueezeNet model. This model has been ported from PyTorch, see cs231n/classifiers/squeezenet.py for the model architecture. \nTo use SqueezeNet, you will need to first download the weights by changing into the cs231n/datasets directory and running get_squeezenet_tf.sh . Note that if you ran get_assignment3_data.sh then SqueezeNet will already be downloaded.",
"from cs231n.classifiers.squeezenet import SqueezeNet\nimport tensorflow as tf\n\ntf.reset_default_graph() # remove all existing variables in the graph \nsess = get_session() # start a new Session\n\n# Load pretrained SqueezeNet model\nSAVE_PATH = 'cs231n/datasets/squeezenet.ckpt'\nif not os.path.exists(SAVE_PATH):\n raise ValueError(\"You need to download SqueezeNet!\")\nmodel = SqueezeNet(save_path=SAVE_PATH, sess=sess)\n\n# Load data for testing\ncontent_img_test = preprocess_image(load_image('styles/tubingen.jpg', size=192))[None]\nstyle_img_test = preprocess_image(load_image('styles/starry_night.jpg', size=192))[None]\nanswers = np.load('style-transfer-checks-tf.npz')\n",
"Computing Loss\nWe're going to compute the three components of our loss function now. The loss function is a weighted sum of three terms: content loss + style loss + total variation loss. You'll fill in the functions that compute these weighted terms below.\nContent loss\nWe can generate an image that reflects the content of one image and the style of another by incorporating both in our loss function. We want to penalize deviations from the content of the content image and deviations from the style of the style image. We can then use this hybrid loss function to perform gradient descent not on the parameters of the model, but instead on the pixel values of our original image.\nLet's first write the content loss function. Content loss measures how much the feature map of the generated image differs from the feature map of the source image. We only care about the content representation of one layer of the network (say, layer $\\ell$), that has feature maps $A^\\ell \\in \\mathbb{R}^{1 \\times C_\\ell \\times H_\\ell \\times W_\\ell}$. $C_\\ell$ is the number of filters/channels in layer $\\ell$, $H_\\ell$ and $W_\\ell$ are the height and width. We will work with reshaped versions of these feature maps that combine all spatial positions into one dimension. Let $F^\\ell \\in \\mathbb{R}^{N_\\ell \\times M_\\ell}$ be the feature map for the current image and $P^\\ell \\in \\mathbb{R}^{N_\\ell \\times M_\\ell}$ be the feature map for the content source image where $M_\\ell=H_\\ell\\times W_\\ell$ is the number of elements in each feature map. Each row of $F^\\ell$ or $P^\\ell$ represents the vectorized activations of a particular filter, convolved over all positions of the image. Finally, let $w_c$ be the weight of the content loss term in the loss function.\nThen the content loss is given by:\n$L_c = w_c \\times \\sum_{i,j} (F_{ij}^{\\ell} - P_{ij}^{\\ell})^2$",
"def content_loss(content_weight, content_current, content_original):\n \"\"\"\n Compute the content loss for style transfer.\n \n Inputs:\n - content_weight: scalar constant we multiply the content_loss by.\n - content_current: features of the current image, Tensor with shape [1, height, width, channels]\n - content_target: features of the content image, Tensor with shape [1, height, width, channels]\n \n Returns:\n - scalar content loss\n \"\"\"\n pass\n",
"Test your content loss. You should see errors less than 0.001.",
"def content_loss_test(correct):\n content_layer = 3\n content_weight = 6e-2\n c_feats = sess.run(model.extract_features()[content_layer], {model.image: content_img_test})\n bad_img = tf.zeros(content_img_test.shape)\n feats = model.extract_features(bad_img)[content_layer]\n student_output = sess.run(content_loss(content_weight, c_feats, feats))\n error = rel_error(correct, student_output)\n print('Maximum error is {:.3f}'.format(error))\n\ncontent_loss_test(answers['cl_out'])",
"Style loss\nNow we can tackle the style loss. For a given layer $\\ell$, the style loss is defined as follows:\nFirst, compute the Gram matrix G which represents the correlations between the responses of each filter, where F is as above. The Gram matrix is an approximation to the covariance matrix -- we want the activation statistics of our generated image to match the activation statistics of our style image, and matching the (approximate) covariance is one way to do that. There are a variety of ways you could do this, but the Gram matrix is nice because it's easy to compute and in practice shows good results.\nGiven a feature map $F^\\ell$ of shape $(1, C_\\ell, M_\\ell)$, the Gram matrix has shape $(1, C_\\ell, C_\\ell)$ and its elements are given by:\n$$G_{ij}^\\ell = \\sum_k F^{\\ell}{ik} F^{\\ell}{jk}$$\nAssuming $G^\\ell$ is the Gram matrix from the feature map of the current image, $A^\\ell$ is the Gram Matrix from the feature map of the source style image, and $w_\\ell$ a scalar weight term, then the style loss for the layer $\\ell$ is simply the weighted Euclidean distance between the two Gram matrices:\n$$L_s^\\ell = w_\\ell \\sum_{i, j} \\left(G^\\ell_{ij} - A^\\ell_{ij}\\right)^2$$\nIn practice we usually compute the style loss at a set of layers $\\mathcal{L}$ rather than just a single layer $\\ell$; then the total style loss is the sum of style losses at each layer:\n$$L_s = \\sum_{\\ell \\in \\mathcal{L}} L_s^\\ell$$\nBegin by implementing the Gram matrix computation below:",
"def gram_matrix(features, normalize=True):\n \"\"\"\n Compute the Gram matrix from features.\n \n Inputs:\n - features: Tensor of shape (1, H, W, C) giving features for\n a single image.\n - normalize: optional, whether to normalize the Gram matrix\n If True, divide the Gram matrix by the number of neurons (H * W * C)\n \n Returns:\n - gram: Tensor of shape (C, C) giving the (optionally normalized)\n Gram matrices for the input image.\n \"\"\"\n pass\n",
"Test your Gram matrix code. You should see errors less than 0.001.",
"def gram_matrix_test(correct):\n gram = gram_matrix(model.extract_features()[5])\n student_output = sess.run(gram, {model.image: style_img_test})\n error = rel_error(correct, student_output)\n print('Maximum error is {:.3f}'.format(error))\n\ngram_matrix_test(answers['gm_out'])",
"Next, implement the style loss:",
"def style_loss(feats, style_layers, style_targets, style_weights):\n \"\"\"\n Computes the style loss at a set of layers.\n \n Inputs:\n - feats: list of the features at every layer of the current image, as produced by\n the extract_features function.\n - style_layers: List of layer indices into feats giving the layers to include in the\n style loss.\n - style_targets: List of the same length as style_layers, where style_targets[i] is\n a Tensor giving the Gram matrix the source style image computed at\n layer style_layers[i].\n - style_weights: List of the same length as style_layers, where style_weights[i]\n is a scalar giving the weight for the style loss at layer style_layers[i].\n \n Returns:\n - style_loss: A Tensor contataining the scalar style loss.\n \"\"\"\n # Hint: you can do this with one for loop over the style layers, and should\n # not be very much code (~5 lines). You will need to use your gram_matrix function.\n pass\n",
"Test your style loss implementation. The error should be less than 0.001.",
"def style_loss_test(correct):\n style_layers = [1, 4, 6, 7]\n style_weights = [300000, 1000, 15, 3]\n \n feats = model.extract_features()\n style_target_vars = []\n for idx in style_layers:\n style_target_vars.append(gram_matrix(feats[idx]))\n style_targets = sess.run(style_target_vars,\n {model.image: style_img_test})\n \n s_loss = style_loss(feats, style_layers, style_targets, style_weights)\n student_output = sess.run(s_loss, {model.image: content_img_test})\n error = rel_error(correct, student_output)\n print('Error is {:.3f}'.format(error))\n\nstyle_loss_test(answers['sl_out'])",
"Total-variation regularization\nIt turns out that it's helpful to also encourage smoothness in the image. We can do this by adding another term to our loss that penalizes wiggles or \"total variation\" in the pixel values. \nYou can compute the \"total variation\" as the sum of the squares of differences in the pixel values for all pairs of pixels that are next to each other (horizontally or vertically). Here we sum the total-variation regualarization for each of the 3 input channels (RGB), and weight the total summed loss by the total variation weight, $w_t$:\n$L_{tv} = w_t \\times \\sum_{c=1}^3\\sum_{i=1}^{H-1} \\sum_{j=1}^{W-1} \\left( (x_{i,j+1, c} - x_{i,j,c})^2 + (x_{i+1, j,c} - x_{i,j,c})^2 \\right)$\nIn the next cell, fill in the definition for the TV loss term. To receive full credit, your implementation should not have any loops.",
"def tv_loss(img, tv_weight):\n \"\"\"\n Compute total variation loss.\n \n Inputs:\n - img: Tensor of shape (1, H, W, 3) holding an input image.\n - tv_weight: Scalar giving the weight w_t to use for the TV loss.\n \n Returns:\n - loss: Tensor holding a scalar giving the total variation loss\n for img weighted by tv_weight.\n \"\"\"\n # Your implementation should be vectorized and not require any loops!\n pass\n",
"Test your TV loss implementation. Error should be less than 0.001.",
"def tv_loss_test(correct):\n tv_weight = 2e-2\n t_loss = tv_loss(model.image, tv_weight)\n student_output = sess.run(t_loss, {model.image: content_img_test})\n error = rel_error(correct, student_output)\n print('Error is {:.3f}'.format(error))\n\ntv_loss_test(answers['tv_out'])",
"Style Transfer\nLets put it all together and make some beautiful images! The style_transfer function below combines all the losses you coded up above and optimizes for an image that minimizes the total loss.",
"def style_transfer(content_image, style_image, image_size, style_size, content_layer, content_weight,\n style_layers, style_weights, tv_weight, init_random = False):\n \"\"\"Run style transfer!\n \n Inputs:\n - content_image: filename of content image\n - style_image: filename of style image\n - image_size: size of smallest image dimension (used for content loss and generated image)\n - style_size: size of smallest style image dimension\n - content_layer: layer to use for content loss\n - content_weight: weighting on content loss\n - style_layers: list of layers to use for style loss\n - style_weights: list of weights to use for each layer in style_layers\n - tv_weight: weight of total variation regularization term\n - init_random: initialize the starting image to uniform random noise\n \"\"\"\n # Extract features from the content image\n content_img = preprocess_image(load_image(content_image, size=image_size))\n feats = model.extract_features(model.image)\n content_target = sess.run(feats[content_layer],\n {model.image: content_img[None]})\n\n # Extract features from the style image\n style_img = preprocess_image(load_image(style_image, size=style_size))\n style_feat_vars = [feats[idx] for idx in style_layers]\n style_target_vars = []\n # Compute list of TensorFlow Gram matrices\n for style_feat_var in style_feat_vars:\n style_target_vars.append(gram_matrix(style_feat_var))\n # Compute list of NumPy Gram matrices by evaluating the TensorFlow graph on the style image\n style_targets = sess.run(style_target_vars, {model.image: style_img[None]})\n\n # Initialize generated image to content image\n \n if init_random:\n img_var = tf.Variable(tf.random_uniform(content_img[None].shape, 0, 1), name=\"image\")\n else:\n img_var = tf.Variable(content_img[None], name=\"image\")\n\n # Extract features on generated image\n feats = model.extract_features(img_var)\n # Compute loss\n c_loss = content_loss(content_weight, feats[content_layer], content_target)\n s_loss = style_loss(feats, style_layers, style_targets, style_weights)\n t_loss = tv_loss(img_var, tv_weight)\n loss = c_loss + s_loss + t_loss\n \n # Set up optimization hyperparameters\n initial_lr = 3.0\n decayed_lr = 0.1\n decay_lr_at = 180\n max_iter = 200\n\n # Create and initialize the Adam optimizer\n lr_var = tf.Variable(initial_lr, name=\"lr\")\n # Create train_op that updates the generated image when run\n with tf.variable_scope(\"optimizer\") as opt_scope:\n train_op = tf.train.AdamOptimizer(lr_var).minimize(loss, var_list=[img_var])\n # Initialize the generated image and optimization variables\n opt_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=opt_scope.name)\n sess.run(tf.variables_initializer([lr_var, img_var] + opt_vars))\n # Create an op that will clamp the image values when run\n clamp_image_op = tf.assign(img_var, tf.clip_by_value(img_var, -1.5, 1.5))\n \n f, axarr = plt.subplots(1,2)\n axarr[0].axis('off')\n axarr[1].axis('off')\n axarr[0].set_title('Content Source Img.')\n axarr[1].set_title('Style Source Img.')\n axarr[0].imshow(deprocess_image(content_img))\n axarr[1].imshow(deprocess_image(style_img))\n plt.show()\n plt.figure()\n \n # Hardcoded handcrafted \n for t in range(max_iter):\n # Take an optimization step to update img_var\n sess.run(train_op)\n if t < decay_lr_at:\n sess.run(clamp_image_op)\n if t == decay_lr_at:\n sess.run(tf.assign(lr_var, decayed_lr))\n if t % 100 == 0:\n print('Iteration {}'.format(t))\n img = sess.run(img_var)\n plt.imshow(deprocess_image(img[0], rescale=True))\n 
plt.axis('off')\n plt.show()\n print('Iteration {}'.format(t))\n img = sess.run(img_var) \n plt.imshow(deprocess_image(img[0], rescale=True))\n plt.axis('off')\n plt.show()",
"Generate some pretty pictures!\nTry out style_transfer on the three different parameter sets below. Make sure to run all three cells. Feel free to add your own, but make sure to include the results of style transfer on the third parameter set (starry night) in your submitted notebook.\n\nThe content_image is the filename of content image.\nThe style_image is the filename of style image.\nThe image_size is the size of smallest image dimension of the content image (used for content loss and generated image).\nThe style_size is the size of smallest style image dimension.\nThe content_layer specifies which layer to use for content loss.\nThe content_weight gives weighting on content loss in the overall loss function. Increasing the value of this parameter will make the final image look more realistic (closer to the original content).\nstyle_layers specifies a list of which layers to use for style loss. \nstyle_weights specifies a list of weights to use for each layer in style_layers (each of which will contribute a term to the overall style loss). We generally use higher weights for the earlier style layers because they describe more local/smaller scale features, which are more important to texture than features over larger receptive fields. In general, increasing these weights will make the resulting image look less like the original content and more distorted towards the appearance of the style image.\ntv_weight specifies the weighting of total variation regularization in the overall loss function. Increasing this value makes the resulting image look smoother and less jagged, at the cost of lower fidelity to style and content. \n\nBelow the next three cells of code (in which you shouldn't change the hyperparameters), feel free to copy and paste the parameters to play around them and see how the resulting image changes.",
"# Composition VII + Tubingen\nparams1 = {\n 'content_image' : 'styles/tubingen.jpg',\n 'style_image' : 'styles/composition_vii.jpg',\n 'image_size' : 192,\n 'style_size' : 512,\n 'content_layer' : 3,\n 'content_weight' : 5e-2, \n 'style_layers' : (1, 4, 6, 7),\n 'style_weights' : (20000, 500, 12, 1),\n 'tv_weight' : 5e-2\n}\n\nstyle_transfer(**params1)\n\n# Scream + Tubingen\nparams2 = {\n 'content_image':'styles/tubingen.jpg',\n 'style_image':'styles/the_scream.jpg',\n 'image_size':192,\n 'style_size':224,\n 'content_layer':3,\n 'content_weight':3e-2,\n 'style_layers':[1, 4, 6, 7],\n 'style_weights':[200000, 800, 12, 1],\n 'tv_weight':2e-2\n}\n\nstyle_transfer(**params2)\n\n# Starry Night + Tubingen\nparams3 = {\n 'content_image' : 'styles/tubingen.jpg',\n 'style_image' : 'styles/starry_night.jpg',\n 'image_size' : 192,\n 'style_size' : 192,\n 'content_layer' : 3,\n 'content_weight' : 6e-2,\n 'style_layers' : [1, 4, 6, 7],\n 'style_weights' : [300000, 1000, 15, 3],\n 'tv_weight' : 2e-2\n}\n\nstyle_transfer(**params3)",
"Feature Inversion\nThe code you've written can do another cool thing. In an attempt to understand the types of features that convolutional networks learn to recognize, a recent paper [1] attempts to reconstruct an image from its feature representation. We can easily implement this idea using image gradients from the pretrained network, which is exactly what we did above (but with two different feature representations).\nNow, if you set the style weights to all be 0 and initialize the starting image to random noise instead of the content source image, you'll reconstruct an image from the feature representation of the content source image. You're starting with total noise, but you should end up with something that looks quite a bit like your original image.\n(Similarly, you could do \"texture synthesis\" from scratch if you set the content weight to 0 and initialize the starting image to random noise, but we won't ask you to do that here.) \n[1] Aravindh Mahendran, Andrea Vedaldi, \"Understanding Deep Image Representations by Inverting them\", CVPR 2015",
"# Feature Inversion -- Starry Night + Tubingen\nparams_inv = {\n 'content_image' : 'styles/tubingen.jpg',\n 'style_image' : 'styles/starry_night.jpg',\n 'image_size' : 192,\n 'style_size' : 192,\n 'content_layer' : 3,\n 'content_weight' : 6e-2,\n 'style_layers' : [1, 4, 6, 7],\n 'style_weights' : [0, 0, 0, 0], # we discard any contributions from style to the loss\n 'tv_weight' : 2e-2,\n 'init_random': True # we want to initialize our image to be random\n}\n\nstyle_transfer(**params_inv)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rbiswas4/simlib
|
example/ExploringOpSimOutputs_others.ipynb
|
mit
|
[
"This is a notebook to explore opSim outputs in different ways, mostly useful to supernova analysis. We will look at the opsim output called Enigma_1189.\nUsing the notebook requires installing\n- lsst_sims\n-",
"import numpy as np\n%matplotlib inline \nimport matplotlib.pyplot as plt\nimport sncosmo\nimport os\n\nimport OpSimSummary\nprint OpSimSummary.__file__\n\n# Required packages sqlachemy, pandas (both are part of anaconda distribution, or can be installed with a python installer)\n# One step requires the LSST stack, can be skipped for a particular OPSIM database in question\nimport OpSimSummary.summarize_opsim as so\nfrom sqlalchemy import create_engine\nimport pandas as pd\nprint so.__file__\n\n# This step requires LSST SIMS package MAF. The main goal of this step is to set DD and WFD to integer keys that \n# label an observation as Deep Drilling or for Wide Fast Deep.\n# If you want to skip this step, you can use the next cell by uncommenting it, and commenting out this cell, if all you\n# care about is the database used in this example. But there is no guarantee that the numbers in the cell below will work\n# on other versions of opsim database outputs\n\nfrom lsst.sims.maf import db\nfrom lsst.sims.maf.utils import opsimUtils\n\nfrom LSSTmetrics import PerSNMetric\nimport LSSTmetrics\nprint LSSTmetrics.__file__\nfrom lsst.sims.photUtils import BandpassDict\n\n# DD = 366\n# WFD = 364",
"Read in OpSim output for modern versions: (sqlite formats)\nDescription of OpSim outputs are available on the page https://confluence.lsstcorp.org/display/SIM/OpSim+Datasets+for+Cadence+Workshop+LSST2015http://tusken.astro.washington.edu:8080\nHere we will use the opsim output http://ops2.tuc.noao.edu/runs/enigma_1189/data/enigma_1189_sqlite.db.gz\nI have downloaded this database, unzipped and use the variable dbname to point to its location",
"# Change dbname to point at your own location of the opsim output\ndbname = '/Users/rbiswas/data/LSST/OpSimData/minion_1016_sqlite.db'\nopsdb = db.OpsimDatabase(dbname)\npropID, propTags = opsdb.fetchPropInfo()\nDD = propTags['DD'][0]\nWFD = propTags['WFD'][0]\n\nprint(\"The propID for the Deep Drilling Field {0:2d}\".format(DD))\nprint(\"The propID for the Wide Fast Deep Field {0:2d}\".format(WFD))",
"Read in the OpSim DataBase into a pandas dataFrame",
"engine = create_engine('sqlite:///' + dbname)",
"The opsim database is a large file (approx 4.0 GB), but still possible to read into memory on new computers. You usually only need the Summary Table, which is about 900 MB. If you are only interested in the Deep Drilling Fields, you can use the read_sql_query to only select information pertaining to Deep Drilling Observations. This has a memory footprint of about 40 MB.\nObviously, you can reduce this further by narrowing down the columns to those of interest only. For the entire Summary Table, this step takes a few minutes on my computer. \nIf you are going to do the read from disk step very often, you can further reduce the time used by storing the output on disk as a hdf5 file and reading that into memory\nWe will look at three different Summaries of OpSim Runs. A summary of the \n1. Deep Drilling fields: These are the observations corresponding to propID of the variable DD above, and are restricted to a handful of fields\n2. WFD (Main) Survey: These are the observations corresponding to the propID of the variables WFD\n3. Combined Survey: These are observations combining DEEP and WFD in the DDF. Note that this leads to duplicate observations which must be subsequently dropped.",
"# Load to a dataframe\n# Summary = pd.read_hdf('storage.h5', 'table')\n# Summary = pd.read_sql_table('Summary', engine, index_col='obsHistID')\n# EnigmaDeep = pd.read_sql_query('SELECT * FROM SUMMARY WHERE PROPID is 366', engine)\n# EnigmaD = pd.read_sql_query('SELECT * FROM SUMMARY WHERE PROPID is 366', engine)",
"If we knew ahead of time the proposal ID, then we could have done this quicker using",
"OpSim_combined = pd.read_sql_query('SELECT * FROM SUMMARY WHERE PROPID is ' + str(DD) + ' or ' + str(WFD), engine, index_col='obsHistID')\n\nOpSim_Deep = pd.read_sql_query('SELECT * FROM SUMMARY WHERE PROPID is ' + str(DD), engine, index_col='obsHistID')\n",
"We can also sub-select this from the all-encompassing Summay Table. This can be done in two way:\nSome properties of the OpSim Outputs\nConstruct our Summary",
"OpSimDeepSummary = so.SummaryOpsim(OpSim_Deep)\nOpSimCombinedSummary = so.SummaryOpsim(OpSim_combined)\n\nfig = plt.figure(figsize=(10, 5))\nax = fig.add_subplot(111, projection='mollweide');\nfig = OpSimDeepSummary.showFields(ax=fig.axes[0], marker='o', s=40)\n\nOpSimCombinedSummary.showFields(ax=ax, marker='o', color='r', s=8)\n\n#fieldList = EnigmaDeepSummary.fieldIds",
"First Season\nWe can visualize the cadence during the first season using the cadence plot for a particular field: The following plot shows how many visits we have in different filters on a particular night:",
"firstSeasonDeep = OpSimDeepSummary.cadence_plot(fieldID=1427, observedOnly=False, sql_query='night < 366')\n\nfirstSeasonCombined = OpSimCombinedSummary.cadence_plot(fieldID=1427, observedOnly=False, sql_query='night < 366')\n\nOpSimCombinedSummary.mjdvalfornight(300)\n\nfirstSeasonCombined[0].savefig('minion_1427.pdf')\n\nfirstSeason = OpSimCombinedSummary.cadence_plot(fieldID=1430, observedOnly=False, sql_query='night <366', \n nightMin=0, nightMax=366)",
"Suppose we have a supernova with a peak around a particular MJD of 49540, and we want to see what the observations happened around it:",
"SN = OpSimCombinedSummary.cadence_plot(fieldID=1427, #racol='fieldRA', deccol='fieldDec',\n observedOnly=False, mjd_center=59580 + 270., mjd_range=[-30., 50.])\n# ax = plt.gca()\n# ax.axvline(49540, color='r', lw=2.)\n# ax.xaxis.get_major_formatter().set_useOffset(False)\n\nSN[0].savefig(\"Minion_1427_59850.pdf\")\n\nlsst_bp = BandpassDict.loadTotalBandpassesFromFiles()\n# sncosmo Bandpasses required for fitting\nthroughputsdir = os.getenv('THROUGHPUTS_DIR')\n\nfrom astropy.units import Unit\nbandPassList = ['u', 'g', 'r', 'i', 'z', 'y']\nbanddir = os.path.join(os.getenv('THROUGHPUTS_DIR'), 'baseline')\n\nfor band in bandPassList:\n\n # setup sncosmo bandpasses\n bandfname = banddir + \"/total_\" + band + '.dat'\n\n\n # register the LSST bands to the SNCosmo registry\n # Not needed for LSST, but useful to compare independent codes\n # Usually the next two lines can be merged,\n # but there is an astropy bug currently which affects only OSX.\n numpyband = np.loadtxt(bandfname)\n print(band)\n sncosmoband = sncosmo.Bandpass(wave=numpyband[:, 0],\n trans=numpyband[:, 1],\n wav\\e_unit=Unit('nm'),\n name=band)\n sncosmo.registry.register(sncosmoband, force=True)\n\nsnDaily = PerSNMetric(summarydf=OpSimCombinedSummary.simlib(1427),t0=59580 + 270., \n raCol='fieldRA', decCol='fieldDec', lsst_bp=lsst_bp)\n\nsnDaily.lightcurve\n\nnotCoadded = snDaily.lcplot(scattered=True)\n\nCoaddedNights = snDaily.lcplot(nightlyCoadd=True, scattered=True)\n\nnotCoadded.savefig('SingleVisitNights.pdf')\nCoaddedNights.savefig('CoaddedNights.pdf')\n\n!pwd\n\nOpSimCombinedSummary.df.night.min()\n\nSN[0].savefig('SN_observaton.pdf')",
"Scratch",
"SN_matrix.sum(axis=1).sum()\n\nEnigmaDeep.query('fieldID == 744 and expMJD < 49590 and expMJD > 49510').expMJD.size\n\nSN_matrix[SN_matrix > 0.5] = 1\n\nSN_matrix.sum().sum()\n\nlen(SN_matrix.sum(axis=1).dropna())\n\nnightlySN_matrix = SN_matrix.copy(deep=True)\n\nnightlySN_matrix[SN_matrix > 0.5] =1\n\nnightlySN_matrix.sum(axis=1).dropna().sum()\n\nnightlySN_matrix.sum(axis=1).dropna().size\n\nnightlySN_matrix.sum(ax)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
deepankarsharma/udacity_traffic_signs
|
Traffic_Sign_Classifier.ipynb
|
mit
|
[
"Self-Driving Car Engineer Nanodegree\nDeep Learning\nProject: Build a Traffic Sign Recognition Classifier\nIn this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. \n\nNote: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \\n\",\n \"File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission. \n\nIn addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.\nThe rubric contains \"Stand Out Suggestions\" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the \"stand out suggestions\", you can include the code in this Ipython notebook and also discuss the results in the writeup file.\n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\n\nStep 0: Load The Data",
"# Load pickled data\nimport pickle\nimport tensorflow as tf\nimport numpy as np\n\n# TODO: Fill this in based on where you saved the training and testing data\n\ntraining_file = 'traffic-signs-data/train.p'\nvalidation_file= 'traffic-signs-data/valid.p'\ntesting_file = 'traffic-signs-data/test.p'\n\nwith open(training_file, mode='rb') as f:\n train = pickle.load(f)\nwith open(validation_file, mode='rb') as f:\n valid = pickle.load(f)\nwith open(testing_file, mode='rb') as f:\n test = pickle.load(f)\n \nX_train, y_train = train['features'], train['labels']\nX_valid, y_valid = valid['features'], valid['labels']\nX_test, y_test = test['features'], test['labels']",
"Step 1: Dataset Summary & Exploration\nThe pickled data is a dictionary with 4 key/value pairs:\n\n'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).\n'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.\n'sizes' is a list containing tuples, (width, height) representing the original width and height the image.\n'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES\n\nComplete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results. \nProvide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas",
"### Replace each question mark with the appropriate value. \n### Use python, pandas or numpy methods rather than hard coding the results\n\n# TODO: Number of training examples\nn_train = X_train.shape[0]\n\n# TODO: Number of validation examples\nn_validation = X_valid.shape[0]\n\n# TODO: Number of testing examples.\nn_test = X_test.shape[0]\n\n# TODO: What's the shape of an traffic sign image?\nimage_shape = X_train.shape\n\nIMG_W, IMG_H = image_shape[1], image_shape[2]\n\n# TODO: How many unique classes/labels there are in the dataset.\nclasses = np.array(list(set(list(y_train)+list(y_valid)+list(y_test))))\nn_classes = len(classes)\n\nprint(\"Number of training examples =\", n_train)\nprint(\"Number of testing examples =\", n_test)\nprint(\"Image data shape =\", image_shape)\nprint(\"Number of classes =\", n_classes)",
"Include an exploratory visualization of the dataset\nVisualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. \nThe Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.\nNOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\nfig, (ax1, ax2, ax3) = plt.subplots(nrows=1, ncols=3)\n\nbins = range(n_classes + 1)\n\nax1.hist(y_train, bins=bins)\nax1.set_title('Train dist', fontsize=12)\n\nax2.hist(y_test, bins=bins)\nax2.set_title('Test dist', fontsize=12)\n\nax3.hist(y_valid, bins=bins)\nax3.set_title('Valid dist', fontsize=12)\n\nplt.tight_layout()",
"Step 2: Design and Test a Model Architecture\nDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.\nThe LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! \nWith the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. \nThere are various aspects to consider when thinking about this problem:\n\nNeural network architecture (is the network over or underfitting?)\nPlay around preprocessing techniques (normalization, rgb to grayscale, etc)\nNumber of examples per label (some have more than others).\nGenerate fake data.\n\nHere is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.\nPre-process the Data Set (normalization, grayscale, etc.)\nMinimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128)/ 128 is a quick way to approximately normalize the data and can be used in this project. \nOther pre-processing steps are optional. You can try different techniques to see if it improves performance. \nUse the code cell (or multiple code cells, if necessary) to implement the first step of your project.",
"# Function to generate transformed images\n# from https://nbviewer.jupyter.org/github/vxy10/SCND_notebooks/blob/master/preprocessing_stuff/img_transform_NB.ipynb\nimport random\nimport cv2\nfrom IPython.core.display import Image, display\n\ndef transform_image(img,ang_range, trans_range):\n '''\n This function transforms images to generate new images.\n The function takes in following arguments,\n 1- Image\n 2- ang_range: Range of angles for rotation\n 3- trans_range: Range of values to apply translations over. \n \n A Random uniform distribution is used to generate different parameters for transformation\n \n '''\n ang_rot = np.random.uniform(ang_range)-ang_range/2\n rows,cols,ch = img.shape \n Rot_M = cv2.getRotationMatrix2D((cols/2,rows/2),ang_rot,1)\n tr_x = trans_range*np.random.uniform()-trans_range/2\n tr_y = trans_range*np.random.uniform()-trans_range/2\n Trans_M = np.float32([[1,0,tr_x],[0,1,tr_y]])\n img = cv2.warpAffine(img,Rot_M,(cols,rows))\n img = cv2.warpAffine(img,Trans_M,(cols,rows))\n return img\n\nclasses, counts = np.unique(y_train, return_counts=True)\nmin_count = 1200\n\nfor klass in classes:\n klass_indices = np.where(y_train == klass)[0]\n img = X_train[random.choice(klass_indices)]\n fig, ax1 = plt.subplots(nrows=1, ncols=1)\n ax1.set_xlabel(str(klass))\n ax1.tick_params(labelbottom='off') \n ax1.imshow(img)\n if counts[klass] >= min_count:\n continue\n n_add = min_count - counts[klass]\n \n klass = klass.reshape([1])\n tmp_x = []\n tmp_y = []\n for num in range(n_add):\n img = X_train[random.choice(klass_indices)]\n orig = img\n img = transform_image(img,15,5)\n img = img.reshape([1, 32, 32, 3])\n tmp_x.append(img)\n tmp_y.append(klass)\n tmp_x = np.concatenate(tmp_x)\n tmp_y = np.concatenate(tmp_y)\n X_train = np.concatenate([X_train, tmp_x])\n y_train = np.concatenate([y_train, tmp_y])\n\n\n%matplotlib inline\n\nfig, ax1 = plt.subplots(nrows=1, ncols=1)\nbins = range(n_classes + 1)\nax1.hist(y_train, bins=bins)\nax1.set_title('Train dist', fontsize=12)\n\n### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include \n### converting to grayscale, etc.\n### Feel free to use as many code cells as needed.\nfrom skimage.color import rgb2gray\nfrom skimage import exposure\nimport random\n\ndef preprocess(d):\n tmp = []\n for img in d:\n i = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n i = cv2.equalizeHist(i)\n i = np.dstack((i, i, i))\n tmp.append(i)\n data = np.array(tmp)\n data = data - np.mean(data)\n data = data / np.std(data)\n return data\n\nX_train = preprocess(X_train)\nX_valid = preprocess(X_valid)\nX_test = preprocess(X_test)\n\nn_disp = 8\nfig, axes = plt.subplots(nrows=1, ncols=n_disp)\nfor i in range(n_disp):\n axes[i].imshow(X_train[random.randint(0, len(X_train) - 1)])\n",
"Model Architecture",
"from tensorflow.contrib.layers import flatten\nfrom sklearn.utils import shuffle\n\n\n\ndef model_lenet(x): \n # Hyperparameters\n mu = 0\n sigma = 0.1\n\n # Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.\n conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean = mu, stddev = sigma))\n conv1_b = tf.Variable(tf.zeros(6))\n conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b\n\n # Activation.\n conv1 = tf.nn.relu(conv1)\n\n # Pooling. Input = 28x28x6. Output = 14x14x6.\n conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n\n # Layer 2: Convolutional. Output = 10x10x16.\n conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))\n conv2_b = tf.Variable(tf.zeros(16))\n conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b\n\n # Activation.\n conv2 = tf.nn.relu(conv2)\n\n # Pooling. Input = 10x10x16. Output = 5x5x16.\n conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n\n # Flatten. Input = 5x5x16. Output = 400.\n fc0 = flatten(conv2)\n\n # Layer 3: Fully Connected. Input = 400. Output = 120.\n fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))\n fc1_b = tf.Variable(tf.zeros(120))\n fc1 = tf.matmul(fc0, fc1_W) + fc1_b\n\n # Activation.\n fc1 = tf.nn.relu(fc1)\n fc1 = tf.nn.dropout(fc1, keep_prob)\n\n # Layer 4: Fully Connected. Input = 120. Output = 84.\n fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))\n fc2_b = tf.Variable(tf.zeros(84))\n fc2 = tf.matmul(fc1, fc2_W) + fc2_b\n\n # Activation.\n fc2 = tf.nn.relu(fc2)\n fc2 = tf.nn.dropout(fc2, keep_prob)\n\n # Layer 5: Fully Connected. Input = 84. Output = 43.\n fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))\n fc3_b = tf.Variable(tf.zeros(43))\n logits = tf.matmul(fc2, fc3_W) + fc3_b\n\n return logits\n\n\nx = tf.placeholder(tf.float32, (None, 32, 32, 3))\ny = tf.placeholder(tf.int32, (None))\nkeep_prob = tf.placeholder(tf.float32) \none_hot_y = tf.one_hot(y, 43)\n\nrate = 0.001\nEPOCHS = 100\nBATCH_SIZE = 128\n\nlogits = model_lenet(x)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_y)\nloss_operation = tf.reduce_mean(cross_entropy)\noptimizer = tf.train.AdamOptimizer(learning_rate = rate)\ntraining_operation = optimizer.minimize(loss_operation)\npredict = tf.nn.softmax(logits)\n\ncorrect_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))\naccuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\nsaver = tf.train.Saver()\n\ndef evaluate(X_data, y_data):\n num_examples = len(X_data)\n total_accuracy = 0\n sess = tf.get_default_session()\n all_y = []\n for offset in range(0, num_examples, BATCH_SIZE):\n batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]\n all_y.append(batch_y)\n accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})\n total_accuracy += (accuracy * len(batch_x))\n return (total_accuracy / num_examples), all_y\n",
"Train, Validate and Test the Model\nA validation set can be used to assess how well the model is performing. A low accuracy on the training and validation\nsets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.",
"### Train your model here.\n### Calculate and report the accuracy on the training and validation set.\n### Once a final model architecture is selected, \n### the accuracy on the test set should be calculated and reported as well.\n### Feel free to use as many code cells as needed.\nwith tf.Session() as sess:\n sess.run(tf.initialize_all_variables())\n num_examples = len(X_train)\n\n print(\"Training...\")\n print()\n for i in range(EPOCHS):\n X_train, y_train = shuffle(X_train, y_train)\n for offset in range(0, num_examples, BATCH_SIZE):\n end = offset + BATCH_SIZE\n batch_x, batch_y = X_train[offset:end], y_train[offset:end]\n sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.7})\n\n validation_accuracy, _ = evaluate(X_valid, y_valid)\n if i%10 == 0:\n print(\"EPOCH {} ...\".format(i+1))\n print(\"Validation Accuracy = {:.3f}\".format(validation_accuracy))\n print()\n\n saver.save(sess, 'model')\n print(\"Model saved\")\n # Evaluate on the test data\n test_accuracy, _ = evaluate(X_test, y_test)\n print(\"Test accuracy = {:.3f}\".format(test_accuracy))\n\n",
"Step 3: Test a Model on New Images\nTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.\nYou may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.\nLoad and Output the Images",
"### Load the images and plot them here.\n### Feel free to use as many code cells as needed.\nimport os\nimport matplotlib.image as mpimg\n\ndef load_extra_images(folder):\n images = []\n for fname in sorted(os.listdir(folder)):\n img = mpimg.imread(os.path.join(folder, fname))\n img=cv2.resize(img,(32,32))\n if img is not None:\n images.append(img)\n return images\n\nimages = load_extra_images(\"/home/dman/CarND-Traffic-Sign-Classifier-Project/extra_signs\")\n\nfig, axes = plt.subplots(nrows=1, ncols=len(images))\nfor i in range(len(images)):\n axes[i].imshow(images[i])\n\nX_extra = np.concatenate([img.reshape([1, 32, 32, 3]) for img in images])\ny_extra = np.array([28, 12, 14, 3, 25, 13])",
"Predict the Sign Type for Each Image",
"### Run the predictions here and use the model to output the prediction for each image.\n### Make sure to pre-process the images with the same pre-processing pipeline used earlier.\n### Feel free to use as many code cells as needed\n \nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('.'))\n extra_prediction_softmax = tf.nn.softmax(logits)\n top_k_ = tf.nn.top_k(extra_classes, k=5, sorted=True)\n top_k = sess.run(top_k_, feed_dict={x: X_extra, y: y_extra, keep_prob: 1.0})\n pred = np.array([x[0] for x in top_k[1]])\n print(pred)\n",
"Analyze Performance",
"### Calculate the accuracy for these 5 new images. \n### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.\narr = np.array([x[0] for x in top_k[1]])\nprint('accuracy is %s' % (np.sum(arr == y_extra)/float(len(arr))))\n",
"Output Top 5 Softmax Probabilities For Each Image Found on the Web\nFor each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here. \nThe example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.\ntf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.\nTake this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tk.nn.top_k is used to choose the three classes with the highest probability:\n```\n(5, 6) array\na = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,\n 0.12789202],\n [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,\n 0.15899337],\n [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,\n 0.23892179],\n [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,\n 0.16505091],\n [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,\n 0.09155967]])\n```\nRunning it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:\nTopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],\n [ 0.28086119, 0.27569815, 0.18063401],\n [ 0.26076848, 0.23892179, 0.23664738],\n [ 0.29198961, 0.26234032, 0.16505091],\n [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],\n [0, 1, 4],\n [0, 5, 1],\n [1, 3, 5],\n [1, 4, 3]], dtype=int32))\nLooking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.",
"### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. \n### Feel free to use as many code cells as needed.\n \nfor i in range(len(top_k[0])):\n print('Image', i, '\\n\\tprobabilities:', top_k[0][i], '\\n\\tclasses:', top_k[1][i])",
"Project Writeup\nOnce you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file. \n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \\n\",\n \"File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.\n\n\nStep 4 (Optional): Visualize the Neural Network's State with Test Images\nThis Section is not required to complete but acts as an additional excersise for understaning the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what it's feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.\nProvided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the LeNet lab's feature maps looked like for it's second convolutional layer you could enter conv2 as the tf_activation variable.\nFor an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.\n<figure>\n <img src=\"visualize_cnn.png\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your output should look something like this (above)</p> \n </figcaption>\n</figure>\n<p></p>",
"### Visualize your network's feature maps here.\n### Feel free to use as many code cells as needed.\n\n# image_input: the test image being fed into the network to produce the feature maps\n# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer\n# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output\n# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry\n\ndef outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):\n # Here make sure to preprocess your image_input in a way your network expects\n # with size, normalization, ect if needed\n # image_input =\n # Note: x should be the same name as your network's tensorflow data placeholder variable\n # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function\n activation = tf_activation.eval(session=sess,feed_dict={x : image_input})\n featuremaps = activation.shape[3]\n plt.figure(plt_num, figsize=(15,15))\n for featuremap in range(featuremaps):\n plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column\n plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number\n if activation_min != -1 & activation_max != -1:\n plt.imshow(activation[0,:,:, featuremap], interpolation=\"nearest\", vmin =activation_min, vmax=activation_max, cmap=\"gray\")\n elif activation_max != -1:\n plt.imshow(activation[0,:,:, featuremap], interpolation=\"nearest\", vmax=activation_max, cmap=\"gray\")\n elif activation_min !=-1:\n plt.imshow(activation[0,:,:, featuremap], interpolation=\"nearest\", vmin=activation_min, cmap=\"gray\")\n else:\n plt.imshow(activation[0,:,:, featuremap], interpolation=\"nearest\", cmap=\"gray\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kaleoyster/nbi-data-science
|
Bridge Life-Cycle Models/CDF+Probability+Reconstruction+vs+Age+of+Bridges+in+the+Midwestern+United+States.ipynb
|
gpl-2.0
|
[
"Libraries and Packages",
"import pymongo\nfrom pymongo import MongoClient\nimport time\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nfrom matplotlib.pyplot import *\nimport matplotlib.pyplot as plt\nimport folium\nimport datetime as dt\nimport random as rnd\nimport warnings\nimport datetime as dt\nimport csv\n%matplotlib inline",
"Connecting to National Data Service: The Lab Benchwork's NBI - MongoDB instance",
"warnings.filterwarnings(action=\"ignore\")\nClient = MongoClient(\"mongodb://bridges:readonly@nbi-mongo.admin/bridge\")\ndb = Client.bridge\ncollection = db[\"bridges\"]",
"Extracting Data of Midwestern states of the United states from 1992 - 2016.\nThe following query will extract data from the mongoDB instance and project only selected attributes such as structure number, yearBuilt, deck, year, superstructure, owner, countryCode, structure type, type of wearing surface, and subtructure.",
"def getData(state):\n pipeline = [{\"$match\":{\"$and\":[{\"year\":{\"$gt\":1991, \"$lt\":2017}},{\"stateCode\":state}]}},\n {\"$project\":{\"_id\":0,\n \"structureNumber\":1,\n \"yearBuilt\":1,\n \"yearReconstructed\":1,\n \"deck\":1, ## Rating of deck\n \"year\":1,\n 'owner':1,\n \"countyCode\":1,\n \"substructure\":1, ## rating of substructure\n \"superstructure\":1, ## rating of superstructure\n \"Structure Type\":\"$structureTypeMain.typeOfDesignConstruction\",\n \"Type of Wearing Surface\":\"$wearingSurface/ProtectiveSystem.typeOfWearingSurface\",\n }}]\n dec = collection.aggregate(pipeline)\n conditionRatings = pd.DataFrame(list(dec))\n\n ## Creating new column: Age\n conditionRatings['Age'] = conditionRatings['year']- conditionRatings['yearBuilt']\n \n return conditionRatings\n",
"Filteration of NBI Data\nThe following routine removes the missing data such as 'N', 'NA' from deck, substructure,and superstructure , and also removing data with structure Type - 19 and type of wearing surface - 6.",
"## filter and convert them into interger\ndef filterConvert(conditionRatings):\n before = len(conditionRatings)\n print(\"Total Records before filteration: \",len(conditionRatings))\n conditionRatings = conditionRatings.loc[~conditionRatings['deck'].isin(['N','NA'])]\n conditionRatings = conditionRatings.loc[~conditionRatings['substructure'].isin(['N','NA'])]\n conditionRatings = conditionRatings.loc[~conditionRatings['superstructure'].isin(['N','NA'])]\n conditionRatings = conditionRatings.loc[~conditionRatings['Structure Type'].isin([19])]\n conditionRatings = conditionRatings.loc[~conditionRatings['Type of Wearing Surface'].isin(['6'])]\n after = len(conditionRatings)\n print(\"Total Records after filteration: \",len(conditionRatings))\n print(\"Difference: \", before - after)\n return conditionRatings\n\n",
"Particularly in the area of determining a deterioration model of bridges, There is an observed sudden increase in condition ratings of bridges over the period of time, This sudden increase in the condition rating is attributed to the reconstruction of the bridges. NBI dataset contains an attribute to record this reconstruction of the bridge. An observation of an increase in condition rating of bridges over time without any recorded information of reconstruction of that bridge in NBI dataset suggests that dataset is not updated consistently. In order to have an accurate deterioration model, such unrecorded reconstruction activities must be accounted in the deterioration model of the bridges.",
"## make it into a function\ndef findSurvivalProbablities(conditionRatings):\n \n i = 1\n j = 2\n probabilities = []\n while j < 121:\n v = list(conditionRatings.loc[conditionRatings['Age'] == i]['deck'])\n k = list(conditionRatings.loc[conditionRatings['Age'] == i]['structureNumber'])\n Age1 = {key:int(value) for key, value in zip(k,v)}\n #v = conditionRatings.loc[conditionRatings['Age'] == j]\n\n v_2 = list(conditionRatings.loc[conditionRatings['Age'] == j]['deck'])\n k_2 = list(conditionRatings.loc[conditionRatings['Age'] == j]['structureNumber'])\n Age2 = {key:int(value) for key, value in zip(k_2,v_2)}\n\n\n intersectedList = list(Age1.keys() & Age2.keys())\n reconstructed = 0\n for structureNumber in intersectedList:\n if Age1[structureNumber] < Age2[structureNumber]:\n if (Age1[structureNumber] - Age2[structureNumber]) < -1:\n reconstructed = reconstructed + 1\n try:\n probability = reconstructed / len(intersectedList)\n except ZeroDivisionError:\n probability = 0\n\n probabilities.append(probability*100)\n\n i = i + 1\n j = j + 1\n \n return probabilities\n",
"A utility function to plot the graphs.",
"def plotCDF(cumsum_probabilities):\n fig = plt.figure(figsize=(15,8))\n ax = plt.axes()\n\n plt.title('CDF of Reonstruction Vs Age')\n plt.xlabel('Age')\n plt.ylabel('CDF of Reonstruction')\n plt.yticks([0,10,20,30,40,50,60,70,80,90,100])\n plt.ylim(0,100)\n\n x = [i for i in range(1,120)]\n y = cumsum_probabilities\n ax.plot(x,y)\n return plt.show()\n\n",
"The following script will select all the bridges in the midwestern United States, filter missing and not required data. The script also provides information of how much of the data is being filtered.",
"states = ['31','19','17','18','20','26','27','29','38','46','39','55']\n\n# Mapping state code to state abbreviation \nstateNameDict = {'25':'MA',\n '04':'AZ',\n '08':'CO',\n '38':'ND',\n '09':'CT',\n '19':'IA',\n '26':'MI',\n '48':'TX',\n '35':'NM',\n '17':'IL',\n '51':'VA',\n '23':'ME',\n '16':'ID',\n '36':'NY',\n '56':'WY',\n '29':'MO',\n '39':'OH',\n '28':'MS',\n '11':'DC',\n '21':'KY',\n '18':'IN',\n '06':'CA',\n '47':'TN',\n '12':'FL',\n '24':'MD',\n '34':'NJ',\n '46':'SD',\n '13':'GA',\n '55':'WI',\n '30':'MT',\n '54':'WV',\n '15':'HI',\n '32':'NV',\n '37':'NC',\n '10':'DE',\n '33':'NH',\n '44':'RI',\n '50':'VT',\n '42':'PA',\n '05':'AR',\n '20':'KS',\n '45':'SC',\n '22':'LA',\n '40':'OK',\n '72':'PR',\n '41':'OR',\n '27':'MN',\n '53':'WA',\n '01':'AL',\n '31':'NE',\n '02':'AK',\n '49':'UT'\n }\n\ndef getProbs(states, stateNameDict):\n # Initializaing the dataframes for deck, superstructure and subtructure\n df_prob_recon = pd.DataFrame({'Age':range(1,61)})\n df_cumsum_prob_recon = pd.DataFrame({'Age':range(1,61)})\n \n\n for state in states:\n conditionRatings_state = getData(state)\n stateName = stateNameDict[state]\n print(\"STATE - \",stateName)\n conditionRatings_state = filterConvert(conditionRatings_state)\n print(\"\\n\")\n probabilities_state = findSurvivalProbablities(conditionRatings_state)\n cumsum_probabilities_state = np.cumsum(probabilities_state)\n \n df_prob_recon[stateName] = probabilities_state[:60]\n df_cumsum_prob_recon[stateName] = cumsum_probabilities_state[:60]\n \n# df_prob_recon.set_index('Age', inplace = True)\n# df_cumsum_prob_recon.set_index('Age', inplace = True)\n \n return df_prob_recon, df_cumsum_prob_recon\n \ndf_prob_recon, df_cumsum_prob_recon = getProbs(states, stateNameDict)\n\ndf_prob_recon.to_csv('prsmidwest.csv')\ndf_cumsum_prob_recon.to_csv('cprsmidwest.csv')",
"In following figures, shows the cumulative distribution function of the probability of reconstruction over the bridges' lifespan, of bridges in the midwestern United States, as the bridges grow older the probability of reconstruction increases.",
"plt.figure(figsize=(12,8))\nplt.title(\"CDF Probability of Reconstruction vs Age\")\n\npalette = [\n 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'\n]\nlinestyles =[':','-.','--','-',':','-.','--','-',':','-.','--','-']\nfor num, state in enumerate(df_cumsum_prob_recon.drop('Age', axis = 1)):\n \n plt.plot(df_cumsum_prob_recon[state], color = palette[num], linestyle = linestyles[num], linewidth = 4)\n \nplt.xlabel('Age'); plt.ylabel('Probablity of Reconstruction'); \nplt.legend([state for state in df_cumsum_prob_recon.drop('Age', axis = 1)], loc='upper left', ncol = 2)\nplt.ylim(1,25)\nplt.show()",
"The below figure presents CDF Probability of reconstruction, of bridge in the midwestern United States.",
"plt.figure(figsize = (16,12))\nplt.xlabel('Age')\nplt.ylabel('Mean')\n\n# Initialize the figure\nplt.style.use('seaborn-darkgrid')\n \n# create a color palette\npalette = [\n 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'\n]\n# multiple line plot\nnum = 1\nlinestyles = [':','-.','--','-',':','-.','--','-',':','-.','--','-']\nfor n, column in enumerate(df_cumsum_prob_recon.drop('Age', axis=1)):\n \n # Find the right spot on the plot\n plt.subplot(4,3, num)\n \n # Plot the lineplot\n plt.plot(df_cumsum_prob_recon['Age'], df_cumsum_prob_recon[column], linestyle = linestyles[n] , color=palette[num], linewidth=4, alpha=0.9, label=column)\n \n # Same limits for everybody!\n plt.xlim(1,60)\n plt.ylim(1,100)\n \n # Not ticks everywhere\n if num in range(10) :\n plt.tick_params(labelbottom='off')\n if num not in [1,4,7,10]:\n plt.tick_params(labelleft='off')\n \n # Add title\n plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])\n plt.text(30, -1, 'Age', ha='center', va='center')\n plt.text(1, 50, 'Probability', ha='center', va='center', rotation='vertical')\n num = num + 1\n \n# general title\nplt.suptitle(\"CDF Probability of Reconstruction vs Age\", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)\n ",
"In the following figures, provides the probability of reconstruction at every age. Note this is not a cumulative probability function. the constant number of reconstruction of the bridges can be explained by various factors.\none particularly interesting reason could be funding provided to reconstruct bridges, this explain why some of the states have perfect linear curve.",
"plt.figure(figsize=(12,8))\nplt.title(\"Probability of Reconstruction vs Age\")\n\npalette = [\n 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'\n]\nlinestyles =[':','-.','--','-',':','-.','--','-',':','-.','--','-']\nfor num, state in enumerate(df_cumsum_prob_recon.drop('Age', axis = 1)):\n \n plt.plot(df_prob_recon[state], color = palette[num], linestyle = linestyles[num], linewidth = 4)\n \nplt.xlabel('Age'); plt.ylabel('Probablity of Reconstruction'); \nplt.legend([state for state in df_cumsum_prob_recon.drop('Age', axis = 1)], loc='upper left', ncol = 2)\nplt.ylim(1,25)\nplt.show()",
"A key observation in this investigation of several state reveals a constant number of bridges are reconstructed every year, this could be an effect of fixed budget allocated for reconstruction by the state. This also highlights the fact that not all bridges that might require reconstruction are reconstructed.\nTo Understand this phenomena in clearing, the following figure presents probability of reconstruction vs age of all individual states in the midwestern United States.",
"plt.figure(figsize = (16,12))\nplt.xlabel('Age')\nplt.ylabel('Mean')\n\n\n# Initialize the figure\nplt.style.use('seaborn-darkgrid')\n \n# create a color palette\npalette = [\n 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'\n]\n# multiple line plot\nnum = 1\nlinestyles = [':','-.','--','-',':','-.','--','-',':','-.','--','-']\nfor n, column in enumerate(df_prob_recon.drop('Age', axis=1)):\n \n # Find the right spot on the plot\n plt.subplot(4,3, num)\n \n # Plot the lineplot\n plt.plot(df_prob_recon['Age'], df_prob_recon[column], linestyle = linestyles[n] , color=palette[num], linewidth=4, alpha=0.9, label=column)\n \n # Same limits for everybody!\n plt.xlim(1,60)\n plt.ylim(1,25)\n \n # Not ticks everywhere\n if num in range(10) :\n plt.tick_params(labelbottom='off')\n if num not in [1,4,7,10]:\n plt.tick_params(labelleft='off')\n \n # Add title\n plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])\n plt.text(30, -1, 'Age', ha='center', va='center')\n plt.text(1, 12.5, 'Probability', ha='center', va='center', rotation='vertical')\n num = num + 1\n \n# general title\nplt.suptitle(\"Probability of Reconstruction vs Age\", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)\n "
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.24/_downloads/1a105d401683707ed0696f30397d6253/40_artifact_correction_ica.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Repairing artifacts with ICA\nThis tutorial covers the basics of independent components analysis (ICA) and\nshows how ICA can be used for artifact repair; an extended example illustrates\nrepair of ocular and heartbeat artifacts. For conceptual background on ICA, see\nthis scikit-learn tutorial\n<sphx_glr_auto_examples_decomposition_plot_ica_blind_source_separation.py>.\nWe begin as always by importing the necessary Python modules and loading some\nexample data <sample-dataset>. Because ICA can be computationally\nintense, we'll also crop the data to 60 seconds; and to save ourselves from\nrepeatedly typing mne.preprocessing we'll directly import a few functions\nand classes from that submodule:",
"import os\nimport mne\nfrom mne.preprocessing import (ICA, create_eog_epochs, create_ecg_epochs,\n corrmap)\n\nsample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_filt-0-40_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file)\n# Here we'll crop to 60 seconds and drop gradiometer channels for speed\nraw.crop(tmax=60.).pick_types(meg='mag', eeg=True, stim=True, eog=True)\nraw.load_data()",
"<div class=\"alert alert-info\"><h4>Note</h4><p>Before applying ICA (or any artifact repair strategy), be sure to observe\n the artifacts in your data to make sure you choose the right repair tool.\n Sometimes the right tool is no tool at all — if the artifacts are small\n enough you may not even need to repair them to get good analysis results.\n See `tut-artifact-overview` for guidance on detecting and\n visualizing various types of artifact.</p></div>\n\nWhat is ICA?\nIndependent components analysis (ICA) is a technique for estimating\nindependent source signals from a set of recordings in which the source\nsignals were mixed together in unknown ratios. A common example of this is\nthe problem of blind source separation_: with 3 musical instruments playing\nin the same room, and 3 microphones recording the performance (each picking\nup all 3 instruments, but at varying levels), can you somehow \"unmix\" the\nsignals recorded by the 3 microphones so that you end up with a separate\n\"recording\" isolating the sound of each instrument?\nIt is not hard to see how this analogy applies to EEG/MEG analysis: there are\nmany \"microphones\" (sensor channels) simultaneously recording many\n\"instruments\" (blinks, heartbeats, activity in different areas of the brain,\nmuscular activity from jaw clenching or swallowing, etc). As long as these\nvarious source signals are statistically independent_ and non-gaussian, it\nis usually possible to separate the sources using ICA, and then re-construct\nthe sensor signals after excluding the sources that are unwanted.\nICA in MNE-Python\n.. sidebar:: ICA and dimensionality reduction\nIf you want to perform ICA with *no* dimensionality reduction (other than\nthe number of Independent Components (ICs) given in ``n_components``, and\nany subsequent exclusion of ICs you specify in ``ICA.exclude``), simply\npass ``n_components``.\n\nHowever, if you *do* want to reduce dimensionality, consider this\nexample: if you have 300 sensor channels and you set ``n_components=50``\nduring instantiation and pass ``n_pca_components=None`` to\n`~mne.preprocessing.ICA.apply`, then the the first 50\nPCs are sent to the ICA algorithm (yielding 50 ICs), and during\nreconstruction `~mne.preprocessing.ICA.apply` will use the 50 ICs\nplus PCs number 51-300 (the full PCA residual). If instead you specify\n``n_pca_components=120`` in `~mne.preprocessing.ICA.apply`, it will\nreconstruct using the 50 ICs plus the first 70 PCs in the PCA residual\n(numbers 51-120), thus discarding the smallest 180 components.\n\n**If you have previously been using EEGLAB**'s ``runica()`` and are\nlooking for the equivalent of its ``'pca', n`` option to reduce\ndimensionality, set ``n_components=n`` during initialization and pass\n``n_pca_components=n`` to `~mne.preprocessing.ICA.apply`.\n\nMNE-Python implements three different ICA algorithms: fastica (the\ndefault), picard, and infomax. FastICA and Infomax are both in fairly\nwidespread use; Picard is a newer (2017) algorithm that is expected to\nconverge faster than FastICA and Infomax, and is more robust than other\nalgorithms in cases where the sources are not completely independent, which\ntypically happens with real EEG/MEG data. See\n:footcite:AblinEtAl2018 for more information.\nThe ICA interface in MNE-Python is similar to the interface in\nscikit-learn_: some general parameters are specified when creating an\n~mne.preprocessing.ICA object, then the ~mne.preprocessing.ICA object is\nfit to the data using its ~mne.preprocessing.ICA.fit method. 
The results of\nthe fitting are added to the ~mne.preprocessing.ICA object as attributes\nthat end in an underscore (_), such as ica.mixing_matrix_ and\nica.unmixing_matrix_. After fitting, the ICA component(s) that you want\nto remove must be chosen, and the ICA fit must then be applied to the\n~mne.io.Raw or ~mne.Epochs object using the ~mne.preprocessing.ICA\nobject's ~mne.preprocessing.ICA.apply method.\nAs is typically done with ICA, the data are first scaled to unit variance and\nwhitened using principal components analysis (PCA) before performing the ICA\ndecomposition. This is a two-stage process:\n\nTo deal with different channel types having different units\n (e.g., Volts for EEG and Tesla for MEG), data must be pre-whitened.\n If noise_cov=None (default), all data of a given channel type is\n scaled by the standard deviation across all channels. If noise_cov is\n a ~mne.Covariance, the channels are pre-whitened using the covariance.\nThe pre-whitened data are then decomposed using PCA.\n\nFrom the resulting principal components (PCs), the first n_components are\nthen passed to the ICA algorithm if n_components is an integer number.\nIt can also be a float between 0 and 1, specifying the fraction of\nexplained variance that the PCs should capture; the appropriate number of\nPCs (i.e., just as many PCs as are required to explain the given fraction\nof total variance) is then passed to the ICA.\nAfter visualizing the Independent Components (ICs) and excluding any that\ncapture artifacts you want to repair, the sensor signal can be reconstructed\nusing the ~mne.preprocessing.ICA object's\n~mne.preprocessing.ICA.apply method. By default, signal\nreconstruction uses all of the ICs (less any ICs listed in ICA.exclude)\nplus all of the PCs that were not included in the ICA decomposition (i.e.,\nthe \"PCA residual\"). If you want to reduce the number of components used at\nthe reconstruction stage, it is controlled by the n_pca_components\nparameter (which will in turn reduce the rank of your data; by default\nn_pca_components=None resulting in no additional dimensionality\nreduction). The fitting and reconstruction procedures and the\nparameters that control dimensionality at various stages are summarized in\nthe diagram below:\n.. raw:: html\n<a href=\n \"../../_images/graphviz-7483cb1cf41f06e2a4ef451b17f073dbe584ba30.png\">\n.. graphviz:: ../../_static/diagrams/ica.dot\n :alt: Diagram of ICA procedure in MNE-Python\n :align: left\n.. raw:: html\n</a>\nSee the Notes section of the ~mne.preprocessing.ICA documentation\nfor further details. Next we'll walk through an extended example that\nillustrates each of these steps in greater detail.\nExample: EOG and ECG artifact repair\nVisualizing the artifacts\nLet's begin by visualizing the artifacts that we want to repair. In this\ndataset they are big enough to see easily in the raw data:",
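"Before the extended example, here is a minimal sketch of the fit / exclude / apply cycle just described, assuming raw has been loaded as above. The excluded component indices are placeholders; in practice you would choose them only after inspecting the fitted ICs, as shown in the rest of this tutorial.",
"# Minimal sketch of the ICA workflow (the excluded indices below are placeholders, not a recommendation)\nica_sketch = ICA(n_components=15, random_state=97)\nica_sketch.fit(raw.copy().filter(l_freq=1., h_freq=None))  # fit on a high-pass filtered copy\nica_sketch.exclude = [0, 1]  # indices of ICs judged to capture artifacts\nreconst_raw = raw.copy()\nica_sketch.apply(reconst_raw)  # reconstruct the signal without the excluded ICs",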
"# pick some channels that clearly show heartbeats and blinks\nregexp = r'(MEG [12][45][123]1|EEG 00.)'\nartifact_picks = mne.pick_channels_regexp(raw.ch_names, regexp=regexp)\nraw.plot(order=artifact_picks, n_channels=len(artifact_picks),\n show_scrollbars=False)",
"We can get a summary of how the ocular artifact manifests across each channel\ntype using ~mne.preprocessing.create_eog_epochs like we did in the\ntut-artifact-overview tutorial:",
"eog_evoked = create_eog_epochs(raw).average()\neog_evoked.apply_baseline(baseline=(None, -0.2))\neog_evoked.plot_joint()",
"Now we'll do the same for the heartbeat artifacts, using\n~mne.preprocessing.create_ecg_epochs:",
"ecg_evoked = create_ecg_epochs(raw).average()\necg_evoked.apply_baseline(baseline=(None, -0.2))\necg_evoked.plot_joint()",
"Filtering to remove slow drifts\nBefore we run the ICA, an important step is filtering the data to remove\nlow-frequency drifts, which can negatively affect the quality of the ICA fit.\nThe slow drifts are problematic because they reduce the independence of the\nassumed-to-be-independent sources (e.g., during a slow upward drift, the\nneural, heartbeat, blink, and other muscular sources will all tend to have\nhigher values), making it harder for the algorithm to find an accurate\nsolution. A high-pass filter with 1 Hz cutoff frequency is recommended.\nHowever, because filtering is a linear operation, the ICA solution found from\nthe filtered signal can be applied to the unfiltered signal (see\n:footcite:WinklerEtAl2015 for\nmore information), so we'll keep a copy of the unfiltered\n~mne.io.Raw object around so we can apply the ICA solution to it\nlater.",
"filt_raw = raw.copy().filter(l_freq=1., h_freq=None)",
"Fitting and plotting the ICA solution\n.. sidebar:: Ignoring the time domain\nThe ICA algorithms implemented in MNE-Python find patterns across\nchannels, but ignore the time domain. This means you can compute ICA on\ndiscontinuous `~mne.Epochs` or `~mne.Evoked` objects (not\njust continuous `~mne.io.Raw` objects), or only use every Nth\nsample by passing the ``decim`` parameter to ``ICA.fit()``.\n\n.. note:: `~mne.Epochs` used for fitting ICA should not be\n baseline-corrected. Because cleaning the data via ICA may\n introduce DC offsets, we suggest to baseline correct your data\n **after** cleaning (and not before), should you require\n baseline correction.\n\nNow we're ready to set up and fit the ICA. Since we know (from observing our\nraw data) that the EOG and ECG artifacts are fairly strong, we would expect\nthose artifacts to be captured in the first few dimensions of the PCA\ndecomposition that happens before the ICA. Therefore, we probably don't need\na huge number of components to do a good job of isolating our artifacts\n(though it is usually preferable to include more components for a more\naccurate solution). As a first guess, we'll run ICA with n_components=15\n(use only the first 15 PCA components to compute the ICA decomposition) — a\nvery small number given that our data has over 300 channels, but with the\nadvantage that it will run quickly and we will able to tell easily whether it\nworked or not (because we already know what the EOG / ECG artifacts should\nlook like).\nICA fitting is not deterministic (e.g., the components may get a sign\nflip on different runs, or may not always be returned in the same order), so\nwe'll also specify a random seed_ so that we get identical results each\ntime this tutorial is built by our web servers.",
"ica = ICA(n_components=15, max_iter='auto', random_state=97)\nica.fit(filt_raw)\nica",
"Some optional parameters that we could have passed to the\n~mne.preprocessing.ICA.fit method include decim (to use only\nevery Nth sample in computing the ICs, which can yield a considerable\nspeed-up) and reject (for providing a rejection dictionary for maximum\nacceptable peak-to-peak amplitudes for each channel type, just like we used\nwhen creating epoched data in the tut-overview tutorial).\nNow we can examine the ICs to see what they captured.\n~mne.preprocessing.ICA.plot_sources will show the time series of the\nICs. Note that in our call to ~mne.preprocessing.ICA.plot_sources we\ncan use the original, unfiltered ~mne.io.Raw object:",
"raw.load_data()\nica.plot_sources(raw, show_scrollbars=False)",
"Here we can pretty clearly see that the first component (ICA000) captures\nthe EOG signal quite well, and the second component (ICA001) looks a lot\nlike a heartbeat <qrs_> (for more info on visually identifying Independent\nComponents, this EEGLAB tutorial is a good resource). We can also\nvisualize the scalp field distribution of each component using\n~mne.preprocessing.ICA.plot_components. These are interpolated based\non the values in the ICA mixing matrix:",
"ica.plot_components()",
"<div class=\"alert alert-info\"><h4>Note</h4><p>`~mne.preprocessing.ICA.plot_components` (which plots the scalp\n field topographies for each component) has an optional ``inst`` parameter\n that takes an instance of `~mne.io.Raw` or `~mne.Epochs`.\n Passing ``inst`` makes the scalp topographies interactive: clicking one\n will bring up a diagnostic `~mne.preprocessing.ICA.plot_properties`\n window (see below) for that component.</p></div>\n\nIn the plots above it's fairly obvious which ICs are capturing our EOG and\nECG artifacts, but there are additional ways visualize them anyway just to\nbe sure. First, we can plot an overlay of the original signal against the\nreconstructed signal with the artifactual ICs excluded, using\n~mne.preprocessing.ICA.plot_overlay:",
"# blinks\nica.plot_overlay(raw, exclude=[0], picks='eeg')\n# heartbeats\nica.plot_overlay(raw, exclude=[1], picks='mag')",
"We can also plot some diagnostics of each IC using\n~mne.preprocessing.ICA.plot_properties:",
"ica.plot_properties(raw, picks=[0, 1])",
"In the remaining sections, we'll look at different ways of choosing which ICs\nto exclude prior to reconstructing the sensor signals.\nSelecting ICA components manually\nOnce we're certain which components we want to exclude, we can specify that\nmanually by setting the ica.exclude attribute. Similar to marking bad\nchannels, merely setting ica.exclude doesn't do anything immediately (it\njust adds the excluded ICs to a list that will get used later when it's\nneeded). Once the exclusions have been set, ICA methods like\n~mne.preprocessing.ICA.plot_overlay will exclude those component(s)\neven if no exclude parameter is passed, and the list of excluded\ncomponents will be preserved when using mne.preprocessing.ICA.save\nand mne.preprocessing.read_ica.",
"ica.exclude = [0, 1] # indices chosen based on various plots above",
"Now that the exclusions have been set, we can reconstruct the sensor signals\nwith artifacts removed using the ~mne.preprocessing.ICA.apply method\n(remember, we're applying the ICA solution from the filtered data to the\noriginal unfiltered signal). Plotting the original raw data alongside the\nreconstructed data shows that the heartbeat and blink artifacts are repaired.",
"# ica.apply() changes the Raw object in-place, so let's make a copy first:\nreconst_raw = raw.copy()\nica.apply(reconst_raw)\n\nraw.plot(order=artifact_picks, n_channels=len(artifact_picks),\n show_scrollbars=False)\nreconst_raw.plot(order=artifact_picks, n_channels=len(artifact_picks),\n show_scrollbars=False)\ndel reconst_raw",
"Using an EOG channel to select ICA components\nIt may have seemed easy to review the plots and manually select which ICs to\nexclude, but when processing dozens or hundreds of subjects this can become\na tedious, rate-limiting step in the analysis pipeline. One alternative is to\nuse dedicated EOG or ECG sensors as a \"pattern\" to check the ICs against, and\nautomatically mark for exclusion any ICs that match the EOG/ECG pattern. Here\nwe'll use ~mne.preprocessing.ICA.find_bads_eog to automatically find\nthe ICs that best match the EOG signal, then use\n~mne.preprocessing.ICA.plot_scores along with our other plotting\nfunctions to see which ICs it picked. We'll start by resetting\nica.exclude back to an empty list:",
"ica.exclude = []\n# find which ICs match the EOG pattern\neog_indices, eog_scores = ica.find_bads_eog(raw)\nica.exclude = eog_indices\n\n# barplot of ICA component \"EOG match\" scores\nica.plot_scores(eog_scores)\n\n# plot diagnostics\nica.plot_properties(raw, picks=eog_indices)\n\n# plot ICs applied to raw data, with EOG matches highlighted\nica.plot_sources(raw, show_scrollbars=False)\n\n# plot ICs applied to the averaged EOG epochs, with EOG matches highlighted\nica.plot_sources(eog_evoked)",
"Note that above we used ~mne.preprocessing.ICA.plot_sources on both\nthe original ~mne.io.Raw instance and also on an\n~mne.Evoked instance of the extracted EOG artifacts. This can be\nanother way to confirm that ~mne.preprocessing.ICA.find_bads_eog has\nidentified the correct components.\nUsing a simulated channel to select ICA components\nIf you don't have an EOG channel,\n~mne.preprocessing.ICA.find_bads_eog has a ch_name parameter that\nyou can use as a proxy for EOG. You can use a single channel, or create a\nbipolar reference from frontal EEG sensors and use that as virtual EOG\nchannel. This carries a risk however: you must hope that the frontal EEG\nchannels only reflect EOG and not brain dynamics in the prefrontal cortex (or\nyou must not care about those prefrontal signals).\nFor ECG, it is easier: ~mne.preprocessing.ICA.find_bads_ecg can use\ncross-channel averaging of magnetometer or gradiometer channels to construct\na virtual ECG channel, so if you have MEG channels it is usually not\nnecessary to pass a specific channel name.\n~mne.preprocessing.ICA.find_bads_ecg also has two options for its\nmethod parameter: 'ctps' (cross-trial phase statistics\n:footcite:DammersEtAl2008) and\n'correlation' (Pearson correlation between data and ECG channel).",
"ica.exclude = []\n# find which ICs match the ECG pattern\necg_indices, ecg_scores = ica.find_bads_ecg(raw, method='correlation',\n threshold='auto')\nica.exclude = ecg_indices\n\n# barplot of ICA component \"ECG match\" scores\nica.plot_scores(ecg_scores)\n\n# plot diagnostics\nica.plot_properties(raw, picks=ecg_indices)\n\n# plot ICs applied to raw data, with ECG matches highlighted\nica.plot_sources(raw, show_scrollbars=False)\n\n# plot ICs applied to the averaged ECG epochs, with ECG matches highlighted\nica.plot_sources(ecg_evoked)",
"The last of these plots is especially useful: it shows us that the heartbeat\nartifact is coming through on two ICs, and we've only caught one of them.\nIn fact, if we look closely at the output of\n~mne.preprocessing.ICA.plot_sources (online, you can right-click →\n\"view image\" to zoom in), it looks like ICA014 has a weak periodic\ncomponent that is in-phase with ICA001. It might be worthwhile to re-run\nthe ICA with more components to see if that second heartbeat artifact\nresolves out a little better:",
"# refit the ICA with 30 components this time\nnew_ica = ICA(n_components=30, max_iter='auto', random_state=97)\nnew_ica.fit(filt_raw)\n\n# find which ICs match the ECG pattern\necg_indices, ecg_scores = new_ica.find_bads_ecg(raw, method='correlation',\n threshold='auto')\nnew_ica.exclude = ecg_indices\n\n# barplot of ICA component \"ECG match\" scores\nnew_ica.plot_scores(ecg_scores)\n\n# plot diagnostics\nnew_ica.plot_properties(raw, picks=ecg_indices)\n\n# plot ICs applied to raw data, with ECG matches highlighted\nnew_ica.plot_sources(raw, show_scrollbars=False)\n\n# plot ICs applied to the averaged ECG epochs, with ECG matches highlighted\nnew_ica.plot_sources(ecg_evoked)",
"Much better! Now we've captured both ICs that are reflecting the heartbeat\nartifact (and as a result, we got two diagnostic plots: one for each IC that\nreflects the heartbeat). This demonstrates the value of checking the results\nof automated approaches like ~mne.preprocessing.ICA.find_bads_ecg\nbefore accepting them.",
"# clean up memory before moving on\ndel raw, ica, new_ica",
"Selecting ICA components using template matching\nWhen dealing with multiple subjects, it is also possible to manually select\nan IC for exclusion on one subject, and then use that component as a\ntemplate for selecting which ICs to exclude from other subjects' data,\nusing mne.preprocessing.corrmap :footcite:CamposViolaEtAl2009.\nThe idea behind ~mne.preprocessing.corrmap is that the artifact patterns\nare similar\nenough across subjects that corresponding ICs can be identified by\ncorrelating the ICs from each ICA solution with a common template, and\npicking the ICs with the highest correlation strength.\n~mne.preprocessing.corrmap takes a list of ICA solutions, and a\ntemplate parameter that specifies which ICA object and which component\nwithin it to use as a template.\nSince our sample dataset only contains data from one subject, we'll use a\ndifferent dataset with multiple subjects: the EEGBCI dataset\n:footcite:SchalkEtAl2004,GoldbergerEtAl2000. The\ndataset has 109 subjects, we'll just download one run (a left/right hand\nmovement task) from each of the first 4 subjects:",
"mapping = {\n 'Fc5.': 'FC5', 'Fc3.': 'FC3', 'Fc1.': 'FC1', 'Fcz.': 'FCz', 'Fc2.': 'FC2',\n 'Fc4.': 'FC4', 'Fc6.': 'FC6', 'C5..': 'C5', 'C3..': 'C3', 'C1..': 'C1',\n 'Cz..': 'Cz', 'C2..': 'C2', 'C4..': 'C4', 'C6..': 'C6', 'Cp5.': 'CP5',\n 'Cp3.': 'CP3', 'Cp1.': 'CP1', 'Cpz.': 'CPz', 'Cp2.': 'CP2', 'Cp4.': 'CP4',\n 'Cp6.': 'CP6', 'Fp1.': 'Fp1', 'Fpz.': 'Fpz', 'Fp2.': 'Fp2', 'Af7.': 'AF7',\n 'Af3.': 'AF3', 'Afz.': 'AFz', 'Af4.': 'AF4', 'Af8.': 'AF8', 'F7..': 'F7',\n 'F5..': 'F5', 'F3..': 'F3', 'F1..': 'F1', 'Fz..': 'Fz', 'F2..': 'F2',\n 'F4..': 'F4', 'F6..': 'F6', 'F8..': 'F8', 'Ft7.': 'FT7', 'Ft8.': 'FT8',\n 'T7..': 'T7', 'T8..': 'T8', 'T9..': 'T9', 'T10.': 'T10', 'Tp7.': 'TP7',\n 'Tp8.': 'TP8', 'P7..': 'P7', 'P5..': 'P5', 'P3..': 'P3', 'P1..': 'P1',\n 'Pz..': 'Pz', 'P2..': 'P2', 'P4..': 'P4', 'P6..': 'P6', 'P8..': 'P8',\n 'Po7.': 'PO7', 'Po3.': 'PO3', 'Poz.': 'POz', 'Po4.': 'PO4', 'Po8.': 'PO8',\n 'O1..': 'O1', 'Oz..': 'Oz', 'O2..': 'O2', 'Iz..': 'Iz'\n}\n\nraws = list()\nicas = list()\n\nfor subj in range(4):\n # EEGBCI subjects are 1-indexed; run 3 is a left/right hand movement task\n fname = mne.datasets.eegbci.load_data(subj + 1, runs=[3])[0]\n raw = mne.io.read_raw_edf(fname).load_data().resample(50)\n # remove trailing `.` from channel names so we can set montage\n raw.rename_channels(mapping)\n raw.set_montage('standard_1005')\n # high-pass filter\n raw_filt = raw.copy().load_data().filter(l_freq=1., h_freq=None)\n # fit ICA, using low max_iter for speed\n ica = ICA(n_components=30, max_iter=100, random_state=97)\n ica.fit(raw_filt, verbose='error')\n raws.append(raw)\n icas.append(ica)",
"Now let's run ~mne.preprocessing.corrmap:",
"# use the first subject as template; use Fpz as proxy for EOG\nraw = raws[0]\nica = icas[0]\neog_inds, eog_scores = ica.find_bads_eog(raw, ch_name='Fpz')\ncorrmap(icas, template=(0, eog_inds[0]))",
"The first figure shows the template map, while the second figure shows all\nthe maps that were considered a \"match\" for the template (including the\ntemplate itself). There is one match for each subject, but it's a good idea\nto also double-check the ICA sources for each subject:",
"for index, (ica, raw) in enumerate(zip(icas, raws)):\n fig = ica.plot_sources(raw, show_scrollbars=False)\n fig.subplots_adjust(top=0.9) # make space for title\n fig.suptitle('Subject {}'.format(index))",
"Notice that subjects 2 and 3 each seem to have two ICs that reflect ocular\nactivity (components ICA000 and ICA002), but only one was caught by\n~mne.preprocessing.corrmap. Let's try setting the threshold manually:",
"corrmap(icas, template=(0, eog_inds[0]), threshold=0.9)",
"This time it found 2 ICs for each of subjects 2 and 3 (which is good).\nAt this point we'll re-run ~mne.preprocessing.corrmap with\nparameters label='blink', plot=False to label the ICs from each subject\nthat capture the blink artifacts (without plotting them again).",
"corrmap(icas, template=(0, eog_inds[0]), threshold=0.9, label='blink',\n plot=False)\nprint([ica.labels_ for ica in icas])",
"Notice that the first subject has 3 different labels for the IC at index 0:\n\"eog/0/Fpz\", \"eog\", and \"blink\". The first two were added by\n~mne.preprocessing.ICA.find_bads_eog; the \"blink\" label was added by the\nlast call to ~mne.preprocessing.corrmap. Notice also that each subject has\nat least one IC index labelled \"blink\", and subjects 2 and 3 each have two\ncomponents (0 and 2) labelled \"blink\" (consistent with the plot of IC sources\nabove). The labels_ attribute of ~mne.preprocessing.ICA objects can\nalso be manually edited to annotate the ICs with custom labels. They also\ncome in handy when plotting:",
"icas[3].plot_components(picks=icas[3].labels_['blink'])\nicas[3].exclude = icas[3].labels_['blink']\nicas[3].plot_sources(raws[3], show_scrollbars=False)",
"As a final note, it is possible to extract ICs numerically using the\n~mne.preprocessing.ICA.get_components method of\n~mne.preprocessing.ICA objects. This will return a :class:NumPy\narray <numpy.ndarray> that can be passed to\n~mne.preprocessing.corrmap instead of the :class:tuple of\n(subject_index, component_index) we passed before, and will yield the\nsame result:",
"template_eog_component = icas[0].get_components()[:, eog_inds[0]]\ncorrmap(icas, template=template_eog_component, threshold=0.9)\nprint(template_eog_component)",
"An advantage of using this numerical representation of an IC to capture a\nparticular artifact pattern is that it can be saved and used as a template\nfor future template-matching tasks using ~mne.preprocessing.corrmap\nwithout having to load or recompute the ICA solution that yielded the\ntemplate originally. Put another way, when the template is a NumPy array, the\n~mne.preprocessing.ICA object containing the template does not need\nto be in the list of ICAs provided to ~mne.preprocessing.corrmap.\n.. LINKS\nhttps://en.wikipedia.org/wiki/Signal_separation\n https://en.wikipedia.org/wiki/Independence_(probability_theory)\nCompute ICA components on Epochs\nICA is now fit to epoched MEG data instead of the raw data.\nWe assume that the non-stationary EOG artifacts have already been removed.\nThe sources matching the ECG are automatically found and displayed.\n<div class=\"alert alert-info\"><h4>Note</h4><p>This example is computationally intensive, so it might take a few minutes\n to complete.</p></div>\n\nAfter reading the data, preprocessing consists of:\n\nMEG channel selection\n1-30 Hz band-pass filter\nepoching -0.2 to 0.5 seconds with respect to events\nrejection based on peak-to-peak amplitude\n\nNote that we don't baseline correct the epochs here – we'll do this after\ncleaning with ICA is completed. Baseline correction before ICA is not\nrecommended by the MNE-Python developers, as it doesn't guarantee optimal\nresults.",
"filt_raw.pick_types(meg=True, eeg=False, exclude='bads', stim=True).load_data()\nfilt_raw.filter(1, 30, fir_design='firwin')\n\n# peak-to-peak amplitude rejection parameters\nreject = dict(mag=4e-12)\n# create longer and more epochs for more artifact exposure\nevents = mne.find_events(filt_raw, stim_channel='STI 014')\n# don't baseline correct epochs\nepochs = mne.Epochs(filt_raw, events, event_id=None, tmin=-0.2, tmax=0.5,\n reject=reject, baseline=None)",
"Fit ICA model using the FastICA algorithm, detect and plot components\nexplaining ECG artifacts.",
"ica = ICA(n_components=15, method='fastica', max_iter=\"auto\").fit(epochs)\n\necg_epochs = create_ecg_epochs(filt_raw, tmin=-.5, tmax=.5)\necg_inds, scores = ica.find_bads_ecg(ecg_epochs, threshold='auto')\n\nica.plot_components(ecg_inds)",
"Plot the properties of the ECG components:",
"ica.plot_properties(epochs, picks=ecg_inds)",
"Plot the estimated sources of detected ECG related components:",
"ica.plot_sources(filt_raw, picks=ecg_inds)",
"References\n.. footbibliography::"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jmschrei/pomegranate
|
examples/hmm_infinite.ipynb
|
mit
|
[
"Infinite Hidden Markov Model\nauthors:<br>\nJacob Schreiber [<a href=\"mailto:jmschreiber91@gmail.com\">jmschreiber91@gmail.com</a>]<br>\nNicholas Farn [<a href=\"mailto:nicholasfarn@gmail.com\">nicholasfarn@gmail.com</a>]\nThis example shows how to use pomegranate to sample from an infinite HMM. The premise is that you have an HMM which does not have transitions to the end state, and so can continue on forever. This is done by not adding transitions to the end state. If you bake a model with no transitions to the end state, you get an infinite model, with no extra work! This change is passed on to all the algorithms.",
"from pomegranate import *\nimport itertools as it\nimport numpy as np",
"First we define the possible states in the model. In this case we make them all have normal distributions.",
"s1 = State( NormalDistribution( 5, 2 ), name=\"S1\" )\ns2 = State( NormalDistribution( 15, 2 ), name=\"S2\" )\ns3 = State( NormalDistribution( 25, 2 ), name=\"S3\" )",
"We then create the HMM object, naming it, logically, \"infinite\".",
"model = HiddenMarkovModel( \"infinite\" )",
"We then add the possible transition, making sure not to add an end state. Thus with no end state, the model is infinite!",
"model.add_transition( model.start, s1, 0.7 )\nmodel.add_transition( model.start, s2, 0.2 )\nmodel.add_transition( model.start, s3, 0.1 )\nmodel.add_transition( s1, s1, 0.6 )\nmodel.add_transition( s1, s2, 0.1 )\nmodel.add_transition( s1, s3, 0.3 )\nmodel.add_transition( s2, s1, 0.4 )\nmodel.add_transition( s2, s2, 0.4 )\nmodel.add_transition( s2, s3, 0.2 )\nmodel.add_transition( s3, s1, 0.05 )\nmodel.add_transition( s3, s2, 0.15 )\nmodel.add_transition( s3, s3, 0.8 )",
"Finally we \"bake\" the model, finalizing the model.",
"model.bake()",
"Now we can check whether or not our model is infinite.",
"# Not implemented: print model.is_infinite()",
"Now lets the possible states in the model.",
"print(\"States\")\nprint(\"\\n\".join( state.name for state in model.states ))",
"Now lets test out our model by feeding it a sequence of values. We feed our sequence of values first through a forward algorithm in our HMM.",
"sequence = [ 4.8, 5.6, 24.1, 25.8, 14.3, 26.5, 15.9, 5.5, 5.1 ]\n\nprint(\"Forward\")\nprint(model.forward( sequence ))",
"That looks good as well. Now lets feed our sequence into the model through a backwards algorithm.",
"print(\"Backward\")\nprint(model.backward( sequence ))",
"Continuing on we now feed the sequence in through a forward-backward algorithm.",
"print(\"Forward-Backward\")\ntrans, emissions = model.forward_backward( sequence )\nprint(trans)\nprint(emissions)",
"Finally we feed the sequence through a Viterbi algorithm to find the most probable sequence of states.",
"print(\"Viterbi\")\nprob, states = model.viterbi( sequence )\nprint(\"Prob: {}\".format( prob ))\nprint(\"\\n\".join( state[1].name for state in states ))\nprint()\nprint(\"MAP\")\nprob, states = model.maximum_a_posteriori( sequence )\nprint(\"Prob: {}\".format( prob ))\nprint(\"\\n\".join( state[1].name for state in states ))",
"Finally we try and reproduce the transition matrix from 100,000 samples.",
"print(\"Should produce a matrix close to the following: \")\nprint(\" [ [ 0.60, 0.10, 0.30 ] \")\nprint(\" [ 0.40, 0.40, 0.20 ] \")\nprint(\" [ 0.05, 0.15, 0.80 ] ] \")\nprint()\nprint(\"Transition Matrix From 100000 Samples:\")\nsample, path = model.sample( 100000, path=True )\ntrans = np.zeros((3,3))\n\nfor state, n_state in it.izip( path[1:-2], path[2:-1] ):\n\tstate_name = int( state.name[1:] )-1\n\tn_state_name = int( n_state.name[1:] )-1\n\ttrans[ state_name, n_state_name ] += 1\n\ntrans = (trans.T / trans.sum( axis=1 )).T\nprint(trans)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
hanezu/cs231n-assignment
|
assignment1/two_layer_net.ipynb
|
mit
|
[
"Implementing a Neural Network\nIn this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.",
"# A bit of setup\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom cs231n.classifiers.neural_net import TwoLayerNet\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))",
"We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.",
"# Create a small net and some toy data to check your implementations.\n# Note that we set the random seed for repeatable experiments.\n\ninput_size = 4\nhidden_size = 10\nnum_classes = 3\nnum_inputs = 5\n\ndef init_toy_model():\n np.random.seed(0)\n return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)\n\ndef init_toy_data():\n np.random.seed(1)\n X = 10 * np.random.randn(num_inputs, input_size)\n y = np.array([0, 1, 2, 2, 1])\n return X, y\n\nnet = init_toy_model()\nX, y = init_toy_data()",
"Forward pass: compute scores\nOpen the file cs231n/classifiers/neural_net.py and look at the method TwoLayerNet.loss. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters. \nImplement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.",
"scores = net.loss(X)\nprint 'Your scores:'\nprint scores\nprint\nprint 'correct scores:'\ncorrect_scores = np.asarray([\n [-0.81233741, -1.27654624, -0.70335995],\n [-0.17129677, -1.18803311, -0.47310444],\n [-0.51590475, -1.01354314, -0.8504215 ],\n [-0.15419291, -0.48629638, -0.52901952],\n [-0.00618733, -0.12435261, -0.15226949]])\nprint correct_scores\nprint\n\n# The difference should be very small. We get < 1e-7\nprint 'Difference between your scores and correct scores:'\nprint np.sum(np.abs(scores - correct_scores))",
"Forward pass: compute loss\nIn the same function, implement the second part that computes the data and regularizaion loss.",
"loss, _ = net.loss(X, y, reg=0.1)\ncorrect_loss = 1.30378789133\n\n# should be very small, we get < 1e-12\nprint 'Difference between your loss and correct loss:'\nprint np.sum(np.abs(loss - correct_loss))\nprint loss",
"Backward pass\nImplement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:",
"from cs231n.gradient_check import eval_numerical_gradient\n\n# Use numeric gradient checking to check your implementation of the backward pass.\n# If your implementation is correct, the difference between the numeric and\n# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.\n\nloss, grads = net.loss(X, y, reg=0.1)\n\n# these should all be less than 1e-8 or so\nfor param_name in grads:\n f = lambda W: net.loss(X, y, reg=0.1)[0]\n param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)\n print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))",
"Train the network\nTo train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.\nOnce you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.",
"net = init_toy_model()\nstats = net.train(X, y, X, y,\n learning_rate=1e-1, reg=1e-5,\n num_iters=100, verbose=False)\n\nprint 'Final training loss: ', stats['loss_history'][-1]\n\n# plot the loss history\nplt.plot(stats['loss_history'])\nplt.xlabel('iteration')\nplt.ylabel('training loss')\nplt.title('Training Loss history')\nplt.show()",
"Load the data\nNow that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.",
"from cs231n.data_utils import load_CIFAR10\n\ndef get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):\n \"\"\"\n Load the CIFAR-10 dataset from disk and perform preprocessing to prepare\n it for the two-layer neural net classifier. These are the same steps as\n we used for the SVM, but condensed to a single function. \n \"\"\"\n # Load the raw CIFAR-10 data\n cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\n X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n \n # Subsample the data\n mask = range(num_training, num_training + num_validation)\n X_val = X_train[mask]\n y_val = y_train[mask]\n mask = range(num_training)\n X_train = X_train[mask]\n y_train = y_train[mask]\n mask = range(num_test)\n X_test = X_test[mask]\n y_test = y_test[mask]\n\n # Normalize the data: subtract the mean image\n mean_image = np.mean(X_train, axis=0)\n X_train -= mean_image\n X_val -= mean_image\n X_test -= mean_image\n\n # Reshape data to rows\n X_train = X_train.reshape(num_training, -1)\n X_val = X_val.reshape(num_validation, -1)\n X_test = X_test.reshape(num_test, -1)\n\n return X_train, y_train, X_val, y_val, X_test, y_test\n\n\n# Invoke the above function to get our data.\nX_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()\nprint 'Train data shape: ', X_train.shape\nprint 'Train labels shape: ', y_train.shape\nprint 'Validation data shape: ', X_val.shape\nprint 'Validation labels shape: ', y_val.shape\nprint 'Test data shape: ', X_test.shape\nprint 'Test labels shape: ', y_test.shape",
"Train a network\nTo train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.",
"input_size = 32 * 32 * 3\nhidden_size = 50\nnum_classes = 10\nnet = TwoLayerNet(input_size, hidden_size, num_classes)\n\n# Train the network\nstats = net.train(X_train, y_train, X_val, y_val,\n num_iters=1000, batch_size=200,\n learning_rate=1e-4, learning_rate_decay=0.95,\n reg=0.5, verbose=True)\n\n# Predict on the validation set\nval_acc = (net.predict(X_val) == y_val).mean()\nprint 'Validation accuracy: ', val_acc\n\n",
"Debug the training\nWith the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.\nOne strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.\nAnother strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.",
"# Plot the loss function and train / validation accuracies\nplt.subplot(2, 1, 1)\nplt.plot(stats['loss_history'])\nplt.title('Loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Loss')\n\nplt.subplot(2, 1, 2)\nplt.plot(stats['train_acc_history'], label='train')\nplt.plot(stats['val_acc_history'], label='val')\nplt.title('Classification accuracy history')\nplt.xlabel('Epoch')\nplt.ylabel('Clasification accuracy')\nplt.show()\n\nfrom cs231n.vis_utils import visualize_grid\n\n# Visualize the weights of the network\n\ndef show_net_weights(net):\n W1 = net.params['W1']\n W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)\n plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))\n plt.gca().axis('off')\n plt.show()\n\nshow_net_weights(net)",
"Tune your hyperparameters\nWhat's wrong?. Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.\nTuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, numer of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.\nApproximate results. You should be aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.\nExperiment: You goal in this exercise is to get as good of a result on CIFAR-10 as you can, with a fully-connected Neural Network. For every 1% above 52% on the Test set we will award you with one extra bonus point. Feel free implement your own techniques (e.g. PCA to reduce dimensionality, or adding dropout, or adding features to the solver, etc.).",
"best_net = None # store the best model into this \nbest_val = 0\n\n#################################################################################\n# TODO: Tune hyperparameters using the validation set. Store your best trained #\n# model in best_net. #\n# #\n# To help debug your network, it may help to use visualizations similar to the #\n# ones we used above; these visualizations will have significant qualitative #\n# differences from the ones we saw above for the poorly tuned network. #\n# #\n# Tweaking hyperparameters by hand can be fun, but you might find it useful to #\n# write code to sweep through possible combinations of hyperparameters #\n# automatically like we did on the previous exercises. #\n#################################################################################\npass\ninput_size = 32 * 32 * 3\nnum_classes = 10\nfor hidden_size in np.arange(50,200,30):\n for learning_rate in np.arange(1e-4, 1e-3, 1e-4):\n for num_iters in [1000, 1500]:\n for reg in [.1, .3, .5]:\n net = TwoLayerNet(input_size, hidden_size, num_classes)\n stats = net.train(X_train, y_train, X_val, y_val,\n num_iters=num_iters, batch_size=200,\n learning_rate=learning_rate, learning_rate_decay=0.95,\n reg=reg, verbose=False)\n # Predict on the validation set\n val_acc = (net.predict(X_val) == y_val).mean()\n print 'Validation accuracy: ', val_acc\n if val_acc > best_val:\n best_val = val_acc\n best_net = net\n \n\n\n\n\n#################################################################################\n# END OF YOUR CODE #\n#################################################################################\n\n# visualize the weights of the best network\nshow_net_weights(best_net)",
"Run on the test set\nWhen you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.\nWe will give you extra bonus point for every 1% of accuracy above 52%.",
"test_acc = (best_net.predict(X_test) == y_test).mean()\nprint 'Test accuracy: ', test_acc"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
turbomanage/training-data-analyst
|
courses/machine_learning/deepdive2/structured/solutions/3b_bqml_linear_transform_babyweight.ipynb
|
apache-2.0
|
[
"LAB 3b: BigQuery ML Model Linear Feature Engineering/Transform.\nLearning Objectives\n\nCreate and evaluate linear model with BigQuery's ML.FEATURE_CROSS\nCreate and evaluate linear model with BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE\nCreate and evaluate linear model with ML.TRANSFORM\n\nIntroduction\nIn this notebook, we will create multiple linear models to predict the weight of a baby before it is born, using increasing levels of feature engineering using BigQuery ML. If you need a refresher, you can go back and look how we made a baseline model in the previous notebook BQML Baseline Model.\nWe will create and evaluate a linear model using BigQuery's ML.FEATURE_CROSS, create and evaluate a linear model using BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE, and create and evaluate a linear model using BigQuery's ML.TRANSFORM.\nLoad necessary libraries\nCheck that the Google BigQuery library is installed and if not, install it.",
"%%bash\nsudo pip freeze | grep google-cloud-bigquery==1.6.1 || \\\nsudo pip install google-cloud-bigquery==1.6.1",
"Verify tables exist\nRun the following cells to verify that we previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them.",
"%%bigquery\n-- LIMIT 0 is a free query; this allows us to check that the table exists.\nSELECT * FROM babyweight.babyweight_data_train\nLIMIT 0\n\n%%bigquery\n-- LIMIT 0 is a free query; this allows us to check that the table exists.\nSELECT * FROM babyweight.babyweight_data_eval\nLIMIT 0",
"Model 1: Apply the ML.FEATURE_CROSS clause to categorical features\nBigQuery ML now has ML.FEATURE_CROSS, a pre-processing clause that performs a feature cross with syntax ML.FEATURE_CROSS(STRUCT(features), degree) where features are comma-separated categorical columns and degree is highest degree of all combinations.\nCreate model with feature cross.",
"%%bigquery\nCREATE OR REPLACE MODEL\n babyweight.model_1\n\nOPTIONS (\n MODEL_TYPE=\"LINEAR_REG\",\n INPUT_LABEL_COLS=[\"weight_pounds\"],\n L2_REG=0.1,\n DATA_SPLIT_METHOD=\"NO_SPLIT\") AS\n\nSELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks,\n ML.FEATURE_CROSS(\n STRUCT(\n is_male,\n plurality)\n ) AS gender_plurality_cross\nFROM\n babyweight.babyweight_data_train",
"Create two SQL statements to evaluate the model.",
"%%bigquery\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL babyweight.model_1,\n (\n SELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks,\n ML.FEATURE_CROSS(\n STRUCT(\n is_male,\n plurality)\n ) AS gender_plurality_cross\n FROM\n babyweight.babyweight_data_eval\n ))\n\n%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL babyweight.model_1,\n (\n SELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks,\n ML.FEATURE_CROSS(\n STRUCT(\n is_male,\n plurality)\n ) AS gender_plurality_cross\n FROM\n babyweight.babyweight_data_eval\n ))",
"Model 2: Apply the BUCKETIZE Function\nBucketize is a pre-processing function that creates \"buckets\" (e.g bins) - e.g. it bucketizes a continuous numerical feature into a string feature with bucket names as the value with syntax ML.BUCKETIZE(feature, split_points) with split_points being an array of numerical points to determine bucket bounds.",
"%%bigquery\nCREATE OR REPLACE MODEL\n babyweight.model_2\n\nOPTIONS (\n MODEL_TYPE=\"LINEAR_REG\",\n INPUT_LABEL_COLS=[\"weight_pounds\"],\n L2_REG=0.1,\n DATA_SPLIT_METHOD=\"NO_SPLIT\") AS\n\nSELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks,\n ML.FEATURE_CROSS(\n STRUCT(\n is_male,\n ML.BUCKETIZE(\n mother_age,\n GENERATE_ARRAY(15, 45, 1)\n ) AS bucketed_mothers_age,\n plurality,\n ML.BUCKETIZE(\n gestation_weeks,\n GENERATE_ARRAY(17, 47, 1)\n ) AS bucketed_gestation_weeks\n )\n ) AS crossed\nFROM\n babyweight.babyweight_data_train",
"Let's now retrieve the training statistics and evaluate the model.",
"%%bigquery\nSELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_2)",
"We now evaluate our model on our eval dataset:",
"%%bigquery\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL babyweight.model_2,\n (\n SELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks,\n ML.FEATURE_CROSS(\n STRUCT(\n is_male,\n ML.BUCKETIZE(\n mother_age,\n GENERATE_ARRAY(15, 45, 1)\n ) AS bucketed_mothers_age,\n plurality,\n ML.BUCKETIZE(\n gestation_weeks,\n GENERATE_ARRAY(17, 47, 1)\n ) AS bucketed_gestation_weeks\n )\n ) AS crossed\n FROM\n babyweight.babyweight_data_eval))",
"Let's select the mean_squared_error from the evaluation table we just computed and square it to obtain the rmse.",
"%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL babyweight.model_2,\n (\n SELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks,\n ML.FEATURE_CROSS(\n STRUCT(\n is_male,\n ML.BUCKETIZE(\n mother_age,\n GENERATE_ARRAY(15, 45, 1)\n ) AS bucketed_mothers_age,\n plurality,\n ML.BUCKETIZE(\n gestation_weeks,\n GENERATE_ARRAY(17, 47, 1)\n ) AS bucketed_gestation_weeks\n )\n ) AS crossed\n FROM\n babyweight.babyweight_data_eval))",
"Model 3: Apply the TRANSFORM clause\nBefore we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause. This way we can have the same transformations applied for training and prediction without modifying the queries.\nLet's apply the TRANSFORM clause to the model_3 and run the query.",
"%%bigquery\nCREATE OR REPLACE MODEL\n babyweight.model_3\n\nTRANSFORM(\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks,\n ML.FEATURE_CROSS(\n STRUCT(\n is_male,\n ML.BUCKETIZE(\n mother_age,\n GENERATE_ARRAY(15, 45, 1)\n ) AS bucketed_mothers_age,\n plurality,\n ML.BUCKETIZE(\n gestation_weeks,\n GENERATE_ARRAY(17, 47, 1)\n ) AS bucketed_gestation_weeks\n )\n ) AS crossed\n)\n\nOPTIONS (\n MODEL_TYPE=\"LINEAR_REG\",\n INPUT_LABEL_COLS=[\"weight_pounds\"],\n L2_REG=0.1,\n DATA_SPLIT_METHOD=\"NO_SPLIT\") AS\n\nSELECT\n *\nFROM\n babyweight.babyweight_data_train",
"Let's retrieve the training statistics:",
"%%bigquery\nSELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_3)",
"We now evaluate our model on our eval dataset:",
"%%bigquery\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL babyweight.model_3,\n (\n SELECT\n *\n FROM\n babyweight.babyweight_data_eval\n ))",
"Let's select the mean_squared_error from the evaluation table we just computed and square it to obtain the rmse.",
"%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL babyweight.model_3,\n (\n SELECT\n *\n FROM\n babyweight.babyweight_data_eval\n ))",
"Lab Summary:\nIn this lab, we created and evaluated a linear model using BigQuery's ML.FEATURE_CROSS, created and evaluated a linear model using BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE, and created and evaluated a linear model using BigQuery's ML.TRANSFORM.\nCopyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ML4DS/ML4all
|
U_lab1.Clustering/Lab_ShapeSegmentation_student/LabSessionClustering_student.ipynb
|
mit
|
[
"Lab Session: Clustering algorithms for Image Segmentation\nAuthor: Jesús Cid Sueiro\nJan. 2017",
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.misc import imread",
"1. Introduction\nIn this notebook we explore an application of clustering algorithms to shape segmentation from binary images. We will carry out some exploratory work with a small set of images provided with this notebook. Most of them are not binary images, so we must do some preliminary work to extract he binary shape images and apply the clustering algorithms to them. We will have the opportunity to test the differences between $k$-means and spectral clustering in this problem.\n1.1. Load Image\nSeveral images are provided with this notebook:\n\nBinarySeeds.png\nbirds.jpg\nblood_frog_1.jpg\ncKyDP.jpg\nMatricula.jpg\nMatricula2.jpg\nSeeds.png\n\nSelect and visualize image birds.jpg from file and plot it in grayscale",
"name = \"birds.jpg\"\nname = \"Seeds.jpg\"\n\nbirds = imread(\"Images/\" + name)\nbirdsG = np.sum(birds, axis=2)\n\n# <SOL>\n# </SOL>\n",
"2. Thresholding\nSelect an intensity threshold by manual inspection of the image histogram",
"# <SOL>\n# </SOL>\n",
"Plot the binary image after thresholding.",
"# <SOL>\n# </SOL>\n",
"3. Dataset generation\nExtract pixel coordinates dataset from image and plot them in a scatter plot.",
"# <SOL>\n# </SOL>\n\nprint X\nplt.scatter(X[:, 0], X[:, 1], s=5);\nplt.axis('equal')\nplt.show()",
"4. k-means clustering algorithm\nUse the pixel coordinates as the input data for a k-means algorithm. Plot the result of the clustering by means of a scatter plot, showing each cluster with a different colour.",
"from sklearn.cluster import KMeans\n\n# <SOL>\n# </SOL>\n",
"5. Spectral clustering algorithm\n5.1. Affinity matrix\nCompute and visualize the affinity matrix for the given dataset, using a rbf kernel with $\\gamma=5$.",
"from sklearn.metrics.pairwise import rbf_kernel\n\n# <SOL>\n# </SOL>\n\n# Visualization\n# <SOL>\n# </SOL>",
"5.2. Spectral clusering\nApply the spectral clustering algorithm, and show the clustering results using a scatter plot.",
"# <SOL>\n# </SOL>\n\nplt.scatter(Xsub[:,0], Xsub[:,1], c=y_kmeans, s=5, cmap='rainbow', linewidth=0.0)\nplt.axis('equal')\nplt.show()",
"Try now with other images in the dataset. You will need to re-adjust some free parameters to get a better performance."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
beangoben/HistoriaDatos_Higgs
|
Dia4/.ipynb_checkpoints/6_Arboles_de_decisión-checkpoint.ipynb
|
gpl-2.0
|
[
"<i class=\"fa fa-diamond\"></i> Primero pimpea tu libreta!",
"from IPython.core.display import HTML\nimport os\ndef css_styling():\n \"\"\"Load default custom.css file from ipython profile\"\"\"\n base = os.getcwd()\n styles = \"<style>\\n%s\\n</style>\" % (open(os.path.join(base,'files/custom.css'),'r').read())\n return HTML(styles)\ncss_styling()",
"<i class=\"fa fa-book\"></i> Primero librerias",
"import numpy as np\nimport sklearn as sk\nimport matplotlib.pyplot as plt\nimport sklearn.datasets as datasets\nimport seaborn as sns\n%matplotlib inline",
"<i class=\"fa fa-database\"></i> Vamos a crear datos de jugete\nCrea varios \"blobs\"\nrecuerda la funcion de scikit-learn datasets.make_blobs()\n Tambien prueba\npython\n centers = [[1, 1], [-1, -1], [1, -1]]\n X,Y = datasets.make_blobs(n_samples=10000, centers=centers, cluster_std=0.6)\n<i class=\"fa fa-tree\"></i> Ahora vamos a crear un modelo de arbol\n\npodemos usar DecisionTreeClassifier como clasificador",
"from sklearn.tree import DecisionTreeClassifier\n",
"<i class=\"fa fa-question-circle\"></i> Que parametros y funciones tiene el classificador?\nHint: usa help(cosa)!",
"help(clf)",
"vamos a ajustar nuestro modelo con fit y sacar su puntaje con score\n<i class=\"fa fa-question-circle\"></i>\nPor que no queremos 100%?\nEste problema se llama \"Overfitting\"\n\n<i class=\"fa fa-list\"></i> Pasos para un tipico algoritmo ML:\n\nCrear un modelo\nParticionar tus datos en diferentes pedazos (10% entrenar y 90% prueba)\nEntrenar tu modelo sobre cada pedazo de los datos\nEscogete el mejor modelo o el promedio de los modelos\nPredice!\n\nPrimero vamos a particionar los datos usando",
"from sklearn.cross_validation import train_test_split\n",
"cuales son los tamanios de estos nuevos datos?\ny ahora entrenamos nuestro modelo y checamos el error\n<i class=\"fa fa-question-circle\"></i>\nComo se ve nuestro modelo?\nQue fue mas importante para hacer una decision?\nComo podemos mejorar y controlar como dividimos nuestros datos?\nValidación cruzada y\nK-fold\n\n\nY lo mejor es que podemos hacer todo de usa sola patada con sci-kit!\nHay que usar cross_val_score",
"from sklearn.cross_validation import cross_val_score\n",
"<i class=\"fa fa-question-circle\"></i>\nY como podemos mejorar un arbol de decision?\n\n\n RandomForestClassifier(n_estimators=n_estimators) Al rescate!",
"from sklearn.ensemble import RandomForestClassifier",
"a probarlo!\nmejoro?\nPero ahora tenemos un parametro nuevo, cuantos arboles queremos usar?\n<i class=\"fa fa-tree\"></i>,<i class=\"fa fa-tree\"></i>,<i class=\"fa fa-tree\"></i> ...\nQue tal si probamos con un for loop!? Y checamos el error conforme al numero de arboles?\nActividad!\nHay que :\n\nDefinir nuestro rango de arboles a probar en un arreglo\nhacer un for loop sobre este arreglo\nPara cada elemento, entrena un bosque y saca el score\nGuarda el score en una lista\ngraficalo!\n\n\n\n<i class=\"fa fa-pagelines\"></i> El conjunto de datos Iris\nUn modelo multi-dimensional",
"g = sns.PairGrid(iris, hue=\"species\")\ng = g.map(plt.scatter)\ng = g.add_legend()",
"Actividad:\nObjetivo: Entrena un arbol para predecir la especie de la planta\n\nCheca las graficas, que variables podrian ser mas importante?\nAgarra los datos, que dimensiones son?\nRompelos en pedacitos y entrena tus modelos\nQue scores te da? Que resulto ser importante?",
"iris = datasets.load_iris()\nX = iris.data\nY = iris.target"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dsiufl/2015-Fall-Hadoop
|
notes/.ipynb_checkpoints/1-hadoop-streaming-py-wordcount-checkpoint.ipynb
|
mit
|
[
"Hadoop Short Course\n1. Hadoop Distributed File System\nHadoop Distributed File System (HDFS)\nHDFS is the primary distributed storage used by Hadoop applications. A HDFS cluster primarily consists of a NameNode that manages the file system metadata and DataNodes that store the actual data. The HDFS Architecture Guide describes HDFS in detail. To learn more about the interaction of users and administrators with HDFS, please refer to HDFS User Guide. \nAll HDFS commands are invoked by the bin/hdfs script. Running the hdfs script without any arguments prints the description for all commands. For all the commands, please refer to HDFS Commands Reference\nStart HDFS",
"hadoop_root = '/home/ubuntu/shortcourse/hadoop-2.7.1/'\nhadoop_start_hdfs_cmd = hadoop_root + 'sbin/start-dfs.sh'\nhadoop_stop_hdfs_cmd = hadoop_root + 'sbin/stop-dfs.sh'\n\n# start the hadoop distributed file system\n! {hadoop_start_hdfs_cmd}\n\n# show the jave jvm process summary\n# You should see NamenNode, SecondaryNameNode, and DataNode\n! jps",
"Normal file operations and data preparation for later example\nlist recursively everything under the root dir\nDownload some files for later use. The files should already be there.",
"# We will use three ebooks from Project Gutenberg for later example\n# Pride and Prejudice by Jane Austen: http://www.gutenberg.org/ebooks/1342.txt.utf-8\n! wget http://www.gutenberg.org/ebooks/1342.txt.utf-8 -O /home/ubuntu/shortcourse/data/wordcount/pride-and-prejudice.txt\n\n# Alice's Adventures in Wonderland by Lewis Carroll: http://www.gutenberg.org/ebooks/11.txt.utf-8\n! wget http://www.gutenberg.org/ebooks/11.txt.utf-8 -O /home/ubuntu/shortcourse/data/wordcount/alice.txt\n \n# The Adventures of Sherlock Holmes by Arthur Conan Doyle: http://www.gutenberg.org/ebooks/1661.txt.utf-8\n! wget http://www.gutenberg.org/ebooks/1661.txt.utf-8 -O /home/ubuntu/shortcourse/data/wordcount/sherlock-holmes.txt",
"Delete existing folders under /user/ubuntu/ in hdfs\nCreate input folder: /user/ubuntu/input\nCopy the three books to the input folder in HDFS.\nSimiliar to normal bash cmd: \ncp /home/ubuntu/shortcourse/data/wordcount/* /user/ubuntu/input/\n\nbut copy to hdfs.\nShow if the files are there. \n2. WordCount Example\nLet's count the single word frequency in the uploaded three books.\nStart Yarn, the resource allocator for Hadoop.",
"Start the hadoop distributed file system\n\n! {hadoop_root + 'sbin/start-yarn.sh'}",
"Test locally the mapper.py and reduce.py",
"# wordcount 1 the scripts\n# Map: /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py\n# Test locally the map script\n! echo \"go gators gators beat everyone go glory gators\" | \\\n /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py\n\n# Reduce: /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py\n# Test locally the reduce script\n! echo \"go gators gators beat everyone go glory gators\" | \\\n /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py | \\\n sort -k1,1 | \\\n /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py\n\n# run them with Hadoop against the uploaded three books\ncmd = hadoop_root + 'bin/hadoop jar ' + hadoop_root + 'hadoop-streaming-2.7.1.jar ' + \\\n '-input input ' + \\\n '-output output ' + \\\n '-mapper /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py ' + \\\n '-reducer /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py ' + \\\n '-file /home/ubuntu/shortcourse/notes/scripts/wordcount1/mapper.py ' + \\\n '-file /home/ubuntu/shortcourse/notes/scripts/wordcount1/reducer.py'\n\n! {cmd}",
"List the output\nDownload the output file (part-00000) to local fs.",
"# Let's see what's in the output file\n# delete if previous results exist\n! tail -n 20 $(THE_DOWNLOADED_FILE)",
"3. Exercise: WordCount2\nCount the single word frequency, where the words are given in a pattern file. \nFor example, given pattern.txt file, which contains: \n\"a b c d\"\n\nAnd the input file is: \n\"d e a c f g h i a b c d\".\n\nThen the output shoule be:\n\"a 1\n b 1\n c 2\n d 2\"\n\nPlease copy the mapper.py and reduce.py from the first wordcount example to foler \"/home/ubuntu/shortcourse/notes/scripts/wordcount2/\". The pattern file is given in the wordcount2 folder with name \"wc2-pattern.txt\"\nHint:\n1. pass the pattern file using \"-file option\" and use -cmdenv to pass the file name as environment variable\n2. in the mapper, read the pattern file into a set\n3. only print out the words that exist in the set",
"# 1. go to wordcount2 folder, modify the mapper\n\n# 2. test locally if the mapper is working\n\n# 3. run with hadoop streaming. Input is still the three books, output to 'output2'",
"Verify Results\n\nCopy the output file to local\n\nrun the following command, and compare with the downloaded output\nsort -nrk 2,2 part-00000 | head -n 20\n\n\nThe wc1-part-00000 is the output of the previous wordcount (wordcount1)",
"# 1. list the output, download the output to local, and cat the output file\n\n# 2. use bash cmd to find out the most frequently used 20 words from the previous example, \n# and compare the results with this output\n\n# stop dfs and yarn\n!{hadoop_root + 'sbin/stop-yarn.sh'}\n# don't stop hdfs for now, later use\n# !{hadoop_stop_hdfs_cmd}"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
resendislab/cobrame-docker
|
getting_started.ipynb
|
apache-2.0
|
[
"Building and solving the E. coli ME model\nThe image includes the COBRAme and ECOLIme Python packages to get you started quickly. The docker image includes a prebuild version of the E. coli ME model iLE1678 at at me_models/iLE1678.pickle.\nIf you need more info about the construction process you can find it in the build_ME_model.ipynb notebook.",
"import pickle\n\nwith open(\"me_models/iLE1678.pickle\", \"rb\") as model_file:\n ecoli = pickle.load(model_file)",
"This will read the saved model into the variable ecoli.",
"print(ecoli)\nprint(\"Reactions:\", len(ecoli.reactions))\nprint(\"Metabolites:\", len(ecoli.metabolites))",
"We can now run the optimization for the model. This will take around 10 minutes.",
"from cobrame.solve.algorithms import binary_search\n\n%time binary_search(ecoli, min_mu=0.1, max_mu=1.0, debug=True, mu_accuracy=1e-2)",
"If we want to we could also visualize the model fluxes on a map of the E. coli central carbon metabolism obtained from iJO1366.",
"import escher\nview = escher.Builder(\"iJO1366.Central metabolism\")\nview.reaction_data = ecoli.get_metabolic_flux()\nview.display_in_notebook()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ealogar/curso-python
|
basic/3_Booleans_Flow_Control_and_Comprehension.ipynb
|
apache-2.0
|
[
"Booleans",
"# Let's declare some bools\n\nspam = True\nprint spam\n\nprint type(spam)\n\n\neggs = False\nprint eggs\n\nprint type(eggs)",
"Python truth value testing\n\nAny object can be tested for truth value\nTruth value testing is used in flow control or in Boolean operations\nAll objects are evaluated as True except:\nNone (aka. null)\nFalse\nZero of any numeric type: 0, 0L, 0.0, 0j, 0x0, 00\nAny empty sequence or mapping: '', [], (), {}\nInstances of user-defined classes implementing nonzero or len method\n and returning 0 or False",
"# Let's try boolean operations\n\nprint True or True\nprint True or False\nprint False or True # Boolean or. Short-circuited, so it only evaluates the second argument if the first one is False\n\nprint True and True\nprint True and False\nprint False and True # Boolean or. Short-circuited, so it only evaluates the second argument if the first one is True\n\nprint not True\nprint not False\n\n# So, if all objects can be tested for truth, let's try something different\n\nspam = [1.2345, 7, \"x\"]\neggs = (\"a\", 0x07, True)\nfooo = \"aeiou\"\n\nprint spam or eggs\n\n# Did you expect it to print True?\nprint fooo or []\nprint \"\" or eggs\nprint spam and eggs\n\nprint fooo and ()\nprint [] and eggs\nprint not spam\nprint not \"\"\n\nprint spam and eggs or \"abcd\" and False\nprint (spam and eggs) or (\"abcd\" and False)\nprint spam and (eggs or \"abcd\") and False\nprint spam and (eggs or \"abcd\" and False)",
"Python boolean operands:\n\nALWAYS return one of the incoming arguments!\nx or y => if x is false, then y, else x\nx and y => if x is false, then x, else y\nnot x => if x is false, then True, else False\n\n\nThey are short-circuited, so second argument is not always evaluated\nCan take any object type as arguments\nEven function calls, so boolean operands are used for flow control\n\n\nParentheses may be used to change order of boolean operands or comparissons\n\nWhat about comparisons?",
"spam = 2\neggs = 2.5\n\nprint spam == 2 # equal\n\nprint spam != eggs # not equal\n\nprint spam >= eggs # greater than or equal\n\nprint spam > eggs # strictly greater than\n\nprint spam <= eggs # less than or equal\n\nprint spam < eggs # strictly less than\n\nprint spam is 2 # object identity, useful to compare with None (discussed latter)\n\nprint spam is not None # negated object identity",
"Flow Control\nLet's start with the conditional execution",
"spam = [1, 2, 3] # True\neggs = \"\" # False\n\nif spam:\n print \"spam is True\"\nelse:\n print \"spam is False\"\n\nprint \"outside the conditional\" # Notice that theres is no closing fi statement\n\n\nif spam:\n print \"spam is True\"\nelse:\n print \"spam is False\"\n\n print \"still inside the conditional\"",
"REMEMBER:\n\nIndentation is Python's way of grouping statements!!\nTypically four spaces per indentation level\nNo curly brackets { } or semicolons ; used anywhere\nThis enforces a more readable code",
"if eggs:\n print \"eggs is True\"\nelif spam:\n print \"eggs is False and spam is True\"\nelse:\n print \"eggs and spam are False\"\n\nif eggs:\n print \"eggs is True\"\nelif max(spam) > 5:\n print \"eggs is False and second condition is True\"\nelif len(spam) == 3 and not eggs is None:\n print \"third condition is true\"\nelse:\n print \"everything is False\"",
"Let's see the ternary operator",
"spam = [1, 2, 3] # True\neggs = \"\" # False\n\nprint \"first option\" if spam else \"second option\"\n\nprint \"first option\" if eggs else \"second option\"\n\nprint \"first option\" if eggs else \"second option\" if spam else \"last option\" # We can even concatenate them\n\nprint \"first option\" if eggs else (\"second option\" if spam else \"last option\")",
"Time for the while loop",
"spam = [1, 2, 3]\nwhile len(spam) > 0:\n print spam.pop(0)\n\nspam = [1, 2, 3]\nidx = 0\nwhile idx < len(spam):\n print spam[idx]\n idx += 1",
"What about the for loop?",
"spam = [1, 2, 3]\nfor item in spam: # The for loop only iterates over the items of a sequence\n print item\n\nspam = [1, 2, 3]\nfor item in spam[::-1]: # As we saw, slicing may be slow. Keep it in mind\n print item\n\neggs = \"eggs\"\nfor letter in eggs: # It can loop over characters of a string\n print letter\n\nspam = {\"one\": 1,\n \"two\": 2,\n \"three\": 3}\nfor key in spam: # Or even it can interate through a dictionary\n print spam[key] # Note that it iterates over the keys of the dictionary",
"Let's see how to interact with loops iterations",
"spam = [1, 2, 3]\nfor item in spam:\n if item == 2:\n break\n print item",
"break statement halts a loop execution (inside while or for)\nOnly affects the closer inner (or smallest enclosing) loop",
"# A bit more complicated example\nspam = [\"one\", \"two\", \"three\"]\nfor item in spam: # This loop is never broken\n for letter in item:\n if letter in \"wh\": # Check if letter is either 'w' or 'h'\n break # Break only the immediate inner loop\n print letter\n print # It prints a break line (empty line)\n\n# A bit different example\nspam = [\"one\", \"two\", \"three\"]\nfor item in spam:\n for letter in item:\n if letter in \"whe\": # Check if letter is either 'w', 'h' or 'e'\n continue # Halt only current iteration, but continue the loop\n print letter\n print",
"continue statement halts current iteration (inside while or for)\nloops continue its normal execution",
"spam = [1, 2, 3, 4, 5, 6, 7, 8]\neggs = 5\nwhile len(spam) > 0:\n value = spam.pop()\n if value == eggs:\n print \"Value found:\", value\n break\nelse: # Note that else has the same indentation than while\n print \"The right value was not found\"\n\nspam = [1, 2, 3, 4, 6, 7, 8]\neggs = 5\nwhile len(spam) > 0:\n value = spam.pop()\n if value == eggs:\n print \"Value found:\", value\n break\nelse:\n print \"The right value was not found\"",
"else clause after a loop is executed if all iterations were run without break statement called",
"spam = [1, 2, 3]\nfor item in spam:\n pass",
"pass statement is Python's noop (does nothing)\nLet's check exception handling",
"spam = [1, 2, 3]\ntry:\n print spam[5]\nexcept: # Use try and except to capture exceptions\n print \"Failed\"\n\nspam = {\"one\": 1, \"two\": 2, \"three\": 3}\ntry:\n print spam[5]\nexcept IndexError as e: # Inside the except clause 'e' will contain the exception instance\n print \"IndexError\", e\nexcept KeyError as e: # Use several except clauses for different types of exceptions\n print \"KeyError\", e\n\ntry:\n print 65 + \"spam\"\nexcept (IndexError, KeyError) as e: # Or even group exception types\n print \"Index or Key Error\", e\nexcept TypeError as e:\n print \"TypeError\", e\n\ntry:\n print 65 + 2\nexcept (IndexError, KeyError), e:\n print \"Index or Key Error\", e\nexcept TypeError, e:\n print \"TypeError\", e\nelse:\n print \"No exception\" # Use else clause to run code in case no exception was raised\n\ntry:\n print 65 + \"spam\"\n raise AttributeError # Use 'raise' to launch yourself exceptions\nexcept (IndexError, KeyError), e:\n print \"Index or Key Error\", e\nexcept TypeError, e:\n print \"TypeError\", e\nelse:\n print \"No exception\"\nfinally:\n print \"Finally we clean up\" # Use finally clause to ALWAYS execute clean up code\n\ntry:\n print 65 + 2\nexcept (IndexError, KeyError), e:\n print \"Index or Key Error\", e\n raise # Use 'raise' without arguments to relaunch the exception\nexcept TypeError, e:\n print \"TypeError\", e\nelse:\n print \"No exception\"\nfinally:\n print \"Finally we clean up\" # Use finally clause to ALWAYS execute clean up code",
"Let's see another construction",
"try:\n f = open(\"tmp_file.txt\", \"a\")\nexcept:\n print \"Exception opening file\"\nelse:\n try:\n f.write(\"I'm writing to a file...\\n\")\n except:\n print \"Can not write to a file\"\n finally:\n f.close()",
"Not pythonic, too much code for only three real lines",
"try:\n with open(\"tmp_file.txt\", \"a\") as f:\n f.write(\"I'm writing to a file...\\n\")\nexcept:\n print \"Can not open file for writing\"",
"Where is the file closed? What happens if an exception is raised?\nPython context managers\n\nEncapsulate common patterns used wrapping code blocks where real runs the program logic\nUsually try/except/finally patterns\n\n\nSeveral uses:\nAutomatic cleanup, closing files or network or DB connections when exiting the context block\nSet temporary environment, like enable/disable logging, timing, profiling...\n\n\nUse the 'with' and optionally the 'as' statements to open a context manager\nIt is automatically closed when code execution goes outside the block\n\n\n\nComprehension",
"spam = [0, 1, 2, 3, 4]\neggs = [0, 10, 20, 30]\nfooo = []\n\nfor s in spam:\n for e in eggs:\n if s > 1 and e > 1:\n fooo.append(s * e)\n\nprint fooo",
"Short code, right?",
"spam = [0, 1, 2, 3, 4]\neggs = [0, 10, 20, 30]\nfooo = [s * e for s in spam for e in eggs if s > 1 and e > 1]\nprint fooo",
"What about now?",
"fooo = [s * s for s in spam] # This is the most basic list comprehension construction\nprint fooo\n\nfooo = [s * s for s in spam if s > 1] # We can add 'if' clauses\nprint fooo\n\nspam = [1, 2, 3, 4]\neggs = [0, -1, -2, -3]\nfooo = [l.upper() * (s + e) for s in spam\n for e in eggs\n for l in \"SpaM aNd eGgs aNd stuFf\"\n if (s + e) >= 1\n if l.islower()\n if ord(l) % 2 == 0] # We can add lots of 'for' and 'if' clauses\nprint fooo \n\nspam = [1, 2, 3, 4]\neggs = [10, 20, 30, 40]\nfooo = [[s * e for s in spam] for e in eggs] # It is possible to nest list comprehensions\nprint fooo",
"List comprehension is faster than standard loops (low level C optimizations)\nHowever, built-in functions are still faster (see Functional and iterables tools module)\n\nThere is also dict comprehension (2.7 or higher)",
"spam = ['monday', 'tuesday',\n 'wednesday', 'thursday',\n 'friday']\nfooo = {s: len(s) for s in spam} # The syntax is a merge of list comprehension and dicts\nprint fooo\n\nspam = [(0, 'monday'), (1, 'tuesday'),\n (2, 'wednesday'), (3, 'thursday'),\n (4, 'friday')]\nfooo = {s: idx for idx, s in spam} # Tuple unpacking is useful here\nprint fooo\n\nspam = ['monday', 'tuesday',\n 'wednesday', 'thursday',\n 'friday']\nfooo = {s: len(s) for s in spam if s[0] in \"tm\"} # Ofc, you can add more 'for' and 'if' clauses\nprint fooo",
"Sources\n\nhttp://docs.python.org/2/library/stdtypes.html#boolean-operations-and-or-not\nhttp://docs.python.org/2/tutorial/controlflow.html#if-statements\nhttp://docs.python.org/2/reference/compound_stmts.html\nhttp://docs.python.org/2/reference/expressions.html#conditional-expressions\nhttp://docs.python.org/2/reference/simple_stmts.html\nhttp://www.python.org/dev/peps/pep-0343/\nhttp://docs.python.org/2/reference/compound_stmts.html#the-with-statement\nhttp://docs.python.org/2/tutorial/classes.html#iterators\nhttp://docs.python.org/2/library/stdtypes.html#iterator-types\nhttp://docs.python.org/2/tutorial/datastructures.html#list-comprehensions\nhttp://www.python.org/dev/peps/pep-0274/"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
maxalbert/pandas-snippets
|
pandas-snippets.ipynb
|
unlicense
|
[
"import pandas as pd\nfrom numpy import NaN",
"Create a sample DataFrame with some missing values.",
"df = pd.DataFrame({\n 'colA': ['aaa', NaN, NaN, NaN, 'bbb', 'ccc'],\n 'colB': ['xxx', 'yyy', NaN, 'zzz', NaN, 'www'], \n #'colC': [NaN, 3, NaN, 1, 0, 9]\n })\n\ndf",
"Task: replace missing values in column colA with those in colB (if they exist).\nFirst we define a filtering expression (\"condition\") cond which encodes the condition which we'd like to use for filling in the values. In this case we could actually use the simpler condition cond = df.colA.isnull() because it doesn't matter if the value in colB is also missing (since we would just replace NaN with NaN), but for the sake of illustration let's use this slightly more complicated expression.",
"cond = df.colA.isnull() & ~df.colB.isnull()\ncond",
"We can use this to extract the desired columns if we wish.",
"df[cond]",
"Now we can do the assignment. Note that we use the .loc operator to avoid a warning about \"trying to set values on a copy of a slice from a DataFrame\" which would happen if we used for example the following expression\ndf[cond]['colA'] = df[cond]['colB']\n(See http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy for details.)",
"df.loc[cond, 'colA'] = df.loc[cond, 'colB']",
"The resulting DataFrame does indeed have the values yyy and zzz filled in column colA.",
"df"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kinnala/sp.fem
|
learning/Example 1 - Stokes equations.ipynb
|
agpl-3.0
|
[
"Problem statement\nThe Stokes problem is a classical example of a mixed problem.\nInitialize",
"import sys\nsys.path.append('../')\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom spfem.geometry import GeometryMeshPyTriangle\n%matplotlib inline",
"Geometry and mesh generation",
"g = GeometryMeshPyTriangle(np.array([(0, 0), (1, 0), (1, 0.2), (2, 0.4), (2, 0.6), (1, 0.8), (1, 1), (0, 1)]))\nm = g.mesh(0.03)\nm.draw()\nm.show()",
"Assembly",
"from spfem.element import ElementTriP1, ElementTriP2, ElementH1Vec\nfrom spfem.assembly import AssemblerElement",
"Next we create assemblers for the elements. We can give different elements for the solution vector and the test function. In this case we form the blocks $A$ and $B$ separately.",
"a = AssemblerElement(m, ElementH1Vec(ElementTriP2()))\nb = AssemblerElement(m, ElementH1Vec(ElementTriP2()), ElementTriP1())\nc = AssemblerElement(m, ElementTriP1())\n\ndef stokes_bilinear_a(du, dv):\n def inner_product(a, b):\n return a[0][0]*b[0][0] +\\\n a[0][1]*b[0][1] +\\\n a[1][0]*b[1][0] +\\\n a[1][1]*b[1][1]\n def eps(dw): # symmetric part of the velocity gradient\n import copy\n dW = copy.deepcopy(dw)\n dW[0][1] = .5*(dw[0][1] + dw[1][0])\n dW[1][0] = dW[0][1]\n return dW\n return inner_product(eps(du), eps(dv))\n\nA = a.iasm(stokes_bilinear_a) # iasm takes a function handle defining the weak form\n\ndef stokes_bilinear_b(du, v):\n return (du[0][0]+du[1][1])*v\n\nB = b.iasm(stokes_bilinear_b)\n\nfrom spfem.utils import stack\nfrom scipy.sparse import csr_matrix\neps = 1e-3\nC = c.iasm(lambda u, v: u*v)\nK = stack(np.array([[A, B.T], [B, -eps*C]])).tocsr()\n\nfrom spfem.utils import direct\nimport copy\nx = np.zeros(K.shape[0])\nf = copy.deepcopy(x)\n\n# find DOF sets\ndirichlet_dofs, _ = a.find_dofs(lambda x, y: x >= 1.0)\ninflow_dofs, inflow_locs = a.find_dofs(lambda x, y: x == 2.0, dofrows=[0])\n\n# set inflow condition and solve with direct method\ndef inflow_profile(y):\n return (y-0.4)*(y-0.6)\nx[inflow_dofs] = inflow_profile(inflow_locs[1, :])\nI = np.setdiff1d(np.arange(K.shape[0]), dirichlet_dofs)\nx = direct(K, f, x=x, I=I)\n\nm.plot(x[np.arange(C.shape[0]) + A.shape[0]])\nm.plot(np.sqrt(x[a.dofnum_u.n_dof[0, :]]**2+x[a.dofnum_u.n_dof[0, :]]**2), smooth=True)\nplt.figure()\nplt.quiver(m.p[0, :], m.p[1, :], x[a.dofnum_u.n_dof[0, :]], x[a.dofnum_u.n_dof[1, :]])\nm.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
feststelltaste/software-analytics
|
prototypes/Checking the modularization of software systems by analyzing co-changing source code files (Conway's law edition).ipynb
|
gpl-3.0
|
[
"Introduction\nIn my previous blog post, we've seen how we can identify files that change together in one commit.\nIn this blog post, we take the analysis to an advanced level:\n\nWe're using a more robust model for determining the similarity of co-changing source code files\nWe're checking the existing modularization of a software system and compare it to the change behavior of the development teams\nWe're creating a visualization that lets us determine the underlying, \"hidden\" modularization of our software system based on conjoint changes\nWe discuss the results for a concrete software system in detail (with more systems to come in the upcoming blog posts).\n\nWe're using Python and pandas as well as some algorithms from the machine learning library scikit-learn and the visualization libraries matplotlib, seaborn and pygal for these purposes.\nThe System under Investigation\nFor this analysis, we use a closed-source project that I developed with some friends of mine. It's called \"DropOver\", a web application that can manage events with features like events' sites, scheduling, comments, todos, file uploads, mail notifications and so on. The architecture of the software system mirrored the feature-based development process: You could quickly locate where code has to be added or changed because the software system's \"screaming architecture\". This architecture style lead you to the right place because of the explicit, feature-based modularization that was used for the Java packages/namespaces:\n\nIt's also important to know, that we developed the software almost strictly feature-based by feature teams (OK, one developer was one team in our case). Nevertheless, the history of this repository should perfectly fit for our analysis of checking the modularization based on co-changing source code files. \nThe main goal of our analysis is to see if the modules of the software system were changed independently or if they were code was changed randomly across modules boundaries. If the latter would be the case, we should reorganize the software system or the development teams to let software development activities and the surrounding more naturally fit together.\nIdea\nWe can do this kind of analysis pretty easily by using the version control data of a software system like Git. A version control system tracks each change to a file. If more files are changed within one commit, we can assume that those files somehow have something to do with each other. This could be e. g. a direct dependency because two files depend on each other or a semantic dependency which causes an underlying concepts to change across module boundaries.\nIn this blog post, we take the idea further: We want to find out the degree of similarity of two co-changing files, making the analysis more robust and reliable on one side, but also enabling a better analysis of bigger software systems on the other side by comparing all files of a software system with each other regarding the co-changing properties.\nData\nWe use a little helper library for importing the data of our project. It's a simple git log with change statistics for each commit and file (you can see here how to retrieve it if you want to do it manually).",
"from lib.ozapfdis.git_tc import log_numstat\n\nGIT_REPO_DIR = \"../../dropover_git/\"\ngit_log = log_numstat(GIT_REPO_DIR)[['sha', 'file', 'author']]\ngit_log.head()",
"In our case, we only want to check the modularization of our software for Java production code. So we just leave the files that are belonging to the main source code. What to keep here exactly is very specific to your own project. With Jupyter and pandas, we can make our decisions for this transparent and thus retraceable.",
"prod_code = git_log.copy()\nprod_code = prod_code[prod_code.file.str.endswith(\".java\")]\nprod_code = prod_code[prod_code.file.str.startswith(\"backend/src/main\")]\nprod_code = prod_code[~prod_code.file.str.endswith(\"package-info.java\")]\nprod_code.head()",
"Analysis\nWe want to see which files are changing (almost) together. A good start for this is to create this view onto our dataset with the pivot_table method of the underlying pandas' DataFrame. \nBut before this, we need a marker column that signals that a commit occurred. We can create an additional column named hit for this easily.",
"prod_code['hit'] = 1\nprod_code.head()",
"Now, we can transform the data as we need it: For the index, we choose the filename, as columns, we choose the unique sha key of a commit. Together with the commit hits as values, we are now able to see which file changes occurred in which commit. Note that the pivoting also change the order of both indexes. They are now sorted alphabetically.",
"commit_matrix = prod_code.reset_index().pivot_table(\n index='file',\n columns='sha',\n values='hit',\n fill_value=0)\ncommit_matrix.iloc[0:5,50:55]",
"As already mentioned in a previous blog post, we are now able to look at our problem from a mathematician' s perspective. What we have here now with the commit_matrix is a collection of n-dimensional vectors. Each vector represents a filename and the components/dimensions of such a vector are the commits with either the value 0 or 1. \nCalculating similarities between such vectors is a well-known problem with a variety of solutions. In our case, we calculate the distance between the various vectors with the cosines distance metric. The machine learning library scikit-learn provides us with an easy to use implementation.",
"from sklearn.metrics.pairwise import cosine_distances\n\ndissimilarity_matrix = cosine_distances(commit_matrix)\ndissimilarity_matrix[:5,:5]",
"To be able to better understand the result, we add the file names from the commit_matrix as index and column index to the dissimilarity_matrix.",
"import pandas as pd\ndissimilarity_df = pd.DataFrame(\n dissimilarity_matrix,\n index=commit_matrix.index,\n columns=commit_matrix.index)\ndissimilarity_df.iloc[:5,:2]",
"Now, we see the result in a better representation: For each file pair, we get the distance of the commit vectors. This means that we have now a distance measure that says how dissimilar two files were changed in respect to each other.\nVisualization\nHeatmap\nTo get an overview of the result's data, we can plot the matrix with a little heatmap first.",
"%matplotlib inline\nimport seaborn as sns\n\nsns.heatmap(\n dissimilarity_df,\n xticklabels=False,\n yticklabels=False\n);",
"Because of the alphabetically ordered filenames and the \"feature-first\" architecture of the software under investigation, we get the first glimpse of how changes within modules are occurring together and which are not.\nTo get an even better view, we can first extract the module's names with an easy string operation and use this for the indexes.",
"modules = dissimilarity_df.copy()\nmodules.index = modules.index.str.split(\"/\").str[6]\nmodules.index.name = 'module'\nmodules.columns = modules.index\nmodules.iloc[25:30,25:30]",
"Then, we can create another heatmap that shows the name of the modules on both axes for further evaluation. We also just take a look at a subset of the data for representational reasons.",
"import matplotlib.pyplot as plt\nplt.figure(figsize=[10,9])\nsns.heatmap(modules.iloc[:180,:180]);",
"Discussion\n\nStarting at the upper left, we see the \"comment\" module with a pretty dark area very clearly. This means, that files around this module changed together very often.\nIf we go to the middle left, we see dark areas between the \"comment\" module and the \"framework\" module as well as the \"site\" module further down. This shows a change dependency between the \"comment\" module and the other two (I'll explain later, why it is that way).\nIf we take a look in the middle of the heatmap, we see that the very dark area represents changes of the \"mail\" module. This module was pretty much changed without touching any other modules. This shows a nice separation of concerns.\nFor the \"scheduling\" module, we can also see that the changes occurred mostly cohesive within the module.\nAnother interesting aspect is the horizontal line within the \"comment\" region: These files were changed independently from all other files within the module. These files were the code for an additional data storage technology that was added in later versions of the software system. This pattern repeats for all other modules more or less strongly.\n\nWith this visualization, we can get a first impression of how good our software architecture fits the real software development activities. In this case, I would say that you can see most clearly that the source code of the modules changed mostly within the module boundaries. But we have to take a look at the changes that occur in other modules as well when changing a particular module. These could be signs of unwanted dependencies and may lead us to an architectural problem.\nMulti-dimensional Scaling\nWe can create another kind of visualization to check \n* if the code within the modules is only changed altogether and\n* if not, what other modules were changed.\nHere, we can help ourselves with a technique called \"multi-dimensional scaling\" or \"MDS\" for short. With MDS, we can break down an n-dimensional space to a lower-dimensional space representation. MDS tries to keep the distance proportions of the higher-dimensional space when breaking it down to a lower-dimensional space.\nIn our case, we can let MDS figure out a 2D representation of our dissimilarity matrix (which is, overall, just a plain multi-dimensional vector space) to see which files get change together. With this, we'll able to see which files are changes together regardless of the modules they belong to.\nThe machine learning library scikit-learn gives us easy access to the algorithm that we need for this task as well. We just need to say that we have a precomputed dissimilarity matrix when initializing the algorithm and then pass our dissimilarity_df DataFrame to the fit_transform method of the algorithm.",
"from sklearn.manifold import MDS\n\n# uses a fixed seed for random_state for reproducibility\nmodel = MDS(dissimilarity='precomputed', random_state=0)\ndissimilarity_2d = model.fit_transform(dissimilarity_df)\ndissimilarity_2d[:5]",
"The result is a 2D matrix that we can plot with matplotlib to get a first glimpse of the distribution of the calculated distances.",
"plt.figure(figsize=(8,8))\nx = dissimilarity_2d[:,0]\ny = dissimilarity_2d[:,1]\nplt.scatter(x, y);",
"With the plot above, we see that the 2D transformation somehow worked. But we can't see\n* which filenames are which data points\n* how the modules are grouped all together\nSo we need to enrich the data a little bit more and search for a better, interactive visualization technique.\nLet's add the filenames to the matrix as well as nice column names. We, again, add the information about the module of a source code file to the DataFrame.",
"dissimilarity_2d_df = pd.DataFrame(\n dissimilarity_2d,\n index=commit_matrix.index,\n columns=[\"x\", \"y\"])\ndissimilarity_2d_df.head()",
"Author",
"prod_code.groupby(['file', 'author'])['hit'].count().groupby(['file', 'author']).max()\n\ndissimilarity_2d_df['module'] = dissimilarity_2d_df.index.str.split(\"/\").str[6]\n",
"OK, here comes the ugly part: We have to transform all the data to the format our interactive visualization library pygal needs for its XY chart. We need to \n* group the data my modules\n* add every distance information \n * for each file as well as\n * the filename itself \nin a specific dictionary-like data structure.\nBut there is nothing that can hinder us in Python and pandas. So let's do this!\n\nWe create a separate DataFrame named plot_data with the module names as index\nWe join the coordinates x and y into a tuple data structure\nWe use the filenames from dissimilarity_2d_df's index as labels\nWe convert both data items to a dictionary\nWe append each entry for a module to only on module entry\n\nThis gives us a new DataFrame with modules as index and per module a list of dictionary-like entries with \n* the filenames as labels and\n* the coordinates as values.",
"plot_data = pd.DataFrame(index=dissimilarity_2d_df['module'])\nplot_data['value'] = tuple(zip(dissimilarity_2d_df['x'], dissimilarity_2d_df['y']))\nplot_data['label'] = dissimilarity_2d_df.index\nplot_data['data'] = plot_data[['label', 'value']].to_dict('records')\nplot_dict = plot_data.groupby(plot_data.index).data.apply(list)\nplot_dict",
"With this nice little data structure, we can fill pygal's XY chart and create an interactive chart.",
"import pygal\n\nxy_chart = pygal.XY(stroke=False)\n[xy_chart.add(entry[0], entry[1]) for entry in plot_dict.iteritems()] \n# uncomment to create the interactive chart\n# xy_chart.render_in_browser()\nxy_chart",
"This view is a pretty cool way for checking the real change behavior of your software including an architectural perspective. \nExample\nBelow, you see the complete data for a data point if you hover over that point:\n\nYou can see the following here:\n* In the upper left, you find the name of the module in the gray color\n* You find the complete name of the source code file in the middle\n* You can see the coordinates that MDS assigned to this data point in the color of the selected module\nLet's dive even deeper into the chart to get some insights that we can gain from our result.\nDiscussion\nModule \"mail\"\n\nAs already seen in the heatmap, we can see that all files of the \"mail\" module are very close together. This means that the files changed together very often. \nIn the XY chart, we can see this clearly when we hover over the \"mail\" entry in the legend on the upper left. The corresponding data points will be magnified a little bit.\nModule \"scheduling\"\n\nAnother interesting result can be found if we take a look at the distribution of the files of the module \"scheduling\". Especially the data points in the lower region of the chart indicate clearly that these files were changed almost exclusive together.\nIn the XY chart, we can take a look at the relevant data points by selecting just the \"scheduling\" data points by deselecting all the other entries in the legend.\nModules \"comment\", \"framework\" and \"site\"\n\nThe last thing I want to show you in our example is the common change pattern for the files of the modules \"comment\", \"framework\" and \"site\". The files of these modules changed together very often, leading to a very mixed colored region in the upper middle. In case of our system under investigation, this is perfectly explainable: These three modules were developed at the beginning of the project. Due to many redesigns and refactorings, those files had to be changed all together. For these modules, it would make sense to only look at the recent development activities to find out if the code within these modules is still co-changing.\nIn the XY chart, just select the modules \"comment\", \"framework\" and \"site\" to see the details.\nSummary\nWe've seen how you can check the modularization of your software system by also taking a look at the development activities that is stored in the version control system. This gives you plenty of hints if you've chosen a software architecture that also fits the commit behavior of your development teams.\nBut there is more that we could add here: You cannot only check for modularization. You could also e. g. take a look at the commits of your teams, spotting parts that are getting changed from too many teams. You could also see if your actions taken had any effect by checking only the recent history of the commits. You can also redefine what co-changing means e. g. you define it as files that were changed on the same day, which would kind of balance out different commit styles of different developers.\nBut for now, we are leaving it here. You can experiment with further options on your own. You can find the complete Jupyter notebook on GitHub."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
WNoxchi/Kaukasos
|
FAI_old/Lesson6/L6_Theano_RNN_Notes.ipynb
|
mit
|
[
"Wayne H Nixalo - 18 June 2017\nCodeAlong of Lesson 6 at 1:29:00 onwards link --- note: this JNB not meant to be run.\nCoding in Theano is different to coding in Python because Theano's job in life is to provide a way for you to describe a computation, and compile it for the GPU and run it there.",
"n_input = vocab_size\nn_output = vocab_size\n\ndef init_wgts(rows, cols):\n scale = math.sqrt(2/rows)\n return shared(normal(scale=scale, size=(rwos, cols)).astype(np.float32))\ndef init_bias(rows):\n vec = np.zeros(rows, dtype=np.float32)\n return shared(vec)",
"For our hidden weights (the arrow in the diagram which loops back to its matrix), we initialize it using an identity matrix.",
"def wgts_and_bias(n_in, n_out):\n return init_wgts(n_in, n_out), init_bias(n_out)\ndef id_and_bias(n):\n return shared(np.eye(n, dtype=np.float32)), init_bias(n)",
"We have to build up a computation-graph, a series of steps saying 'in the future I'm going to give you some data, and when I do I want you to do these steps...'\nSo we start off by describing the types of data we'll give it",
"# we'll give it some input data\nt_inp = T.matrix('inp')\n# some output data\nt_outp = T.matrix('outp')\n# give it some way of initializing the first hidden state\nt_h0 = T.vector('h0')\n# also give it a learning rate which we can change later\nlr = T.scalar('lr')\n\n# we create a list of all args we provide to Theano later\nall_args = [t_h0, t_inp, t_outp, lr]",
"To create the weights & biases, up above we have a function called wgts_and_bias(..) in which we tell Theano the size of the matrix we want to create.\nThe matrix that goes from input to hidden has n_input rows and n_hidden collumns.\nwgts_and_bias returns a tuple of weights and biases.\nTo create the weights, we first calculate the Glorot number, sqrt(2/n) -- the scale of the random numbers we're going to use -- then we create those random numbers using the Numpy normal(..) random number function, and then we use a special Theano keyword called shared. shared(..) tells Theano that the data inside is something we want it to pass off to the GPU later and keep track of.\nSo once you wrap something in shared, it kind of belongs to Theano now.",
"# weights & bias to hidden layer\nW_h = id_and_bias(b_hidden)\n# weights & bias to input\nW_x = wgts_and_bias(n_input, n_hidden)\n# weights & bias to output\nW_y = wgts_and_bias(n_hidden, n_output)\n# stick all manually constructed weight matrices and bias vectors \n# in a list:\nw_all = list(chain.from_iterable([W_h, W_x, W_y]))",
"Python has a function, chain.from_iterable(..), that takes a list of tuples and turns them into one big list.\nThe next thing we have to do is tell Theano what happens each time we take a single step of this RNN.\nThere's no such thing as a for-loop on a GPU bc the GPU is built to & wants to parallelize things & do multiple things at the same time.\nThere is something very similar to a for-loop that you can parallelize, it's called a scan operation.\nA scan operation is smth where you call some function (step) for every element (t_inp) of some sequence, and at every point the function returns some output, and the next time thru that function is called, it gets the output of the previous time to you called it along with the next element of the sequence.",
"# example of scan:\ndef scan(fn, start, seq):\n res = [] # array of results\n prev = start\n for s in seq:\n app = fn(prev, s) # apply function to previous result & next elem in sequence\n res.append(app)\n prev = app\n return res\n\nscan(lambda prev,curr: prev+curr, 0, range(5))\n\n# output:\n# [0, 1, 3, 6, 10]\n\n# the scan operation defines a cumulative sum.\n\n# It is possible to write a parallel version of this.\n# If you can turn you algorithm into a scan, you can run it quickly \n# on a GPU.",
"The fucntion we'll call on ea. step thru is a fn called step. step takes the input, x, does a dot-product by the weight-matrix we created earlier, W_x, adds on the bias term we created earlier, b_x. then we do the same thing, taking our previous hidden state h, multiplying it by the hidden weight-matrix W_h and adding the biases b_h; then putting that whole thing through the activation function, nnet.relu.\nAfter we do that we want to create an output each time. Output is exactly the same thing. It takes the result of h, the hidden state, multiply it by the outputs weight-vector, adding on the bias; and this time we use nnnet.softmax.\nAt the end of that, we return the hidden state we have so far, and the output, T.flatten(y, 1)\nAll that happens each step.",
"def step(x, h, W_h, b_h, W_x, b_x, W_y, b_y):\n h = nnet.relu(T.dot(x, W_x) + b_x + T.dot(h, W_h) + b_h)\n y = nnet.softmax(T.dot(h, W_y) + b_y)\n return h, T.flatten(y, 1)",
"For the sequence we're passing into it -- we're just describing a computation here, so we tell Theano it will will be a matrix.\nFor the starting point, outputs_info=[t_h0, None], we tell Theano via t_h0 = T.vector('h0') we'll provide it an initial hidden state.\nFinally, in Theano you have to tell it what all the other things you'll pass to the function, and we'll pass it the whole list of weights we created up above via: non_sequences=w_all",
"[v_h, v_y], = theano.scan(step, sequences=t_inp,\n outputs_info=[t_h0, None], non_sequences=w_all)",
"By this point we've described how to execute a whole sequence of stesp for an RNN. We haven't given it any data to do it; we've just set up the computation.\nWhen that computation is run, it's going to return 2 things bc step returns 2 things: the hidden state v_h, and our output activations v_y.\nNow we need to calculate our error. Using cat-crossent we compare the output of our scan, v_y to some matrix t_outp. Then add it all together.\nWe want to apply SGD after every step, meaning we have to take the derivative of w_all wrt. all the weights, and use that along with the learning rate to update all the weights. Theano has a simple function call to do that: T.grad(..)",
"error = nnet.categorical_crossentropy(v_y, t_outp).sum()\ng_all = T.grad(error, w_all)\n# \"please tell me the gradient, `g_all`, of the function `error`, \n# wrt. these gradients: `w_all`\"\n# Theano will symbolically calculate all the derivatives for you.",
"We're ready now to build our final function. It takes as input, all our arguments, all_args. It'll create the error as its output. At each step it's going to do some updates via upd_dict.\nWhat upd_dict does is create a dictionary that's going to map every noe of our weights to the weight w minus each one of our gradients g times the learning rate lr.\nw comes from wgts, g comes from grads.\nSo it updates every weight to itself minus its gradient times the learning rate.\nWhat Theano does via updates is it says: every time you calculate the next step, I want you to change your shared variables as follows: (and upd is the list of changes to make). \nAnd that's it.",
"def upd_dict(w_all, g_all, lr):\n return OrderedDict({w: w-g*lr for (w,g) in zip(wgts,grads)})\n\nupd = upd_dict(w_all, g_all, lr)\nfn = theano.function(all_args, error, updates=upd, allow_input_downcast=True)",
"We create our one-hot encoded X's and Y's, and we have to manually create our own loop; Theano has nothing built-in for this.\nWe go through every element of our input X, and call that function that we just created above, and pass in all its inputs, all_args, for initial hidden state t_h0, input t_inp, target t_outp, and learning rate lr.\nInitial hidden state, a bunch of zeros: np.zeros(n_hidden).\nThe condition if i % 1000 == 999: ... just prints out the error every 1000 loops.\n(we're using stochastic gradient descent with a minibatch size of 1)\ngradient descent w/o stochastic means your using a minibatch size of the whole dataset\nso this is 'online' gradient descent",
"X = oh_x_rnn\nY = oh_y_rnn\n# X.shape, Y.shape\n\nerr=0.0; l_rate=0.01\nfor i in range(len(X)):\n err+=fn(np.zeros(n_hidden), X[i], Y[i], l_rate)\n if i % 1000 == 999:\n print(\"Error:{:.3f}\".format(err/1000))\n err=0.0",
"At the end of learning (via the loop above), we create a new Theano fn which takes some piece of input along with some initial hidden state, and produce not the loss, but the output.\nThis is to do testing: the fn goes from out inputs to our vector of outputs v_y.\nOur predictions preds, will be to take that fn f_y, pass it our initial hidden state, np.zeros(n_hidden), and some input, say X[6], and that'll give us some predictions.",
"f_y = theano.function([t_h0, t_inp], v_y, allow_input_downcast=True)\n\npred = np.argmax(f_y(np.zeros(n_hidden), X[6]), axis=1)\n\n# grabbing sample input text\nact = np.argmax(X[6], axis=1)\n\n# displaying input text\n[indices_char[o] for o in act]\n\n# displaying expected outputs\n[indices_char[o] for o in pred]",
"Running the above 2 lines will show what the model expected to come after each character.\nIn lecture:\n```\nact: ['t', 'h', 'e', 'n', '?', ' ', 'I', 's']\n\npred: ['h', 'e', ' ', ' ', ' ', 'T', 'n', ' ']\n```\nAnd that's building a Recurrent Neural Network from Scratch using Theano."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jhprinz/openpathsampling
|
examples/misc/move_strategies_and_schemes.ipynb
|
lgpl-2.1
|
[
"This document is intended for intermediate to advanced users. It deals with the internals of the MoveStrategy and MoveScheme objects, as well how to create custom versions of them. For most users, the default behaviors are sufficient.",
"%matplotlib inline\nimport openpathsampling as paths\nfrom openpathsampling.visualize import PathTreeBuilder, PathTreeBuilder\nfrom IPython.display import SVG, HTML\n\nimport openpathsampling.high_level.move_strategy as strategies # TODO: handle this better\n\n# real fast setup of a small network\nfrom openpathsampling import VolumeFactory as vf\ncvA = paths.FunctionCV(name=\"xA\", f=lambda s : s.xyz[0][0])\ncvB = paths.FunctionCV(name=\"xB\", f=lambda s : -s.xyz[0][0])\nstateA = paths.CVDefinedVolume(cvA, float(\"-inf\"), -0.5).named(\"A\")\nstateB = paths.CVDefinedVolume(cvB, float(\"-inf\"), -0.5).named(\"B\")\ninterfacesA = paths.VolumeInterfaceSet(cvA, float(\"-inf\"),\n [-0.5, -0.3, -0.1])\ninterfacesB = paths.VolumeInterfaceSet(cvB, float(\"-inf\"),\n [-0.5, -0.3, -0.1])\nnetwork = paths.MSTISNetwork(\n [(stateA, interfacesA),\n (stateB, interfacesB)],\n ms_outers=paths.MSOuterTISInterface.from_lambdas(\n {interfacesA: 0.0, interfacesB: 0.0}\n )\n)",
"MoveStrategy and MoveScheme\nAfter you've set up your ensembles, you need to create a scheme to sample those ensembles. This is done by the MoveStrategy and MoveScheme objects.\nOpenPathSampling uses a simple default scheme for any network, in which first you choose a type of move to do (shooting, replica exchange, etc), and then you choose a specific instance of that move type (i.e., which ensembles to use). This default scheme works for most cases, but you might find yourself in a situation where the default scheme isn't very efficient, or where you think you have an idea for a more efficient scheme. OpenPathSampling makes it easy to modify the underlying move scheme.\nDefinitions of terms\n\nmove scheme: for a given simulation, the move scheme is the \"move decision tree\". Every step of the MC is done by starting with some root move, and tracing a series of decision points to generate (and then accept) a trial.\nmove strategy: a general approach to building a move scheme (or a subset thereof). SRTIS is a move strategy. Nearest-neighbor replica exchange is a move strategy. All-replica exchange is a move strategy.\n\nSo we use \"strategy\" to talk about the general idea, and \"scheme\" to talk about a specific implementation of that idea. This document will describe both how to modify the default scheme for one-time modifications and how to develop new move strategies to be re-used on many problems.\nFor the simplest cases, you don't need to get into all of this. All you need to do is to use the DefaultScheme, getting the move decision tree as follows:",
"scheme = paths.DefaultScheme(network)",
"OpenPathSampling comes with a nice tool to visualize the move scheme. There are two main columns in the output of this visualization: at the left, you see a visualization of the move decision tree. On the right, you see the input and output ensembles for each PathMover.\nThe move decision tree part of the visualization should be read as follows: each RandomChoiceMover (or related movers, such as OneWayShooting) randomly select one of the movers at the next level of indentation. Any form of SequentialMover performs the moves at the next level of indentation in the order from top to bottom.\nThe input/output ensembles part shows possible input ensembles to the move marked with a green bar at the top, and possible output ensembles to the move marked with a red bar on the bottom.\nThe example below shows this visualization for the default scheme with this network.",
"move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme)\nSVG(move_vis.svg())",
"MoveSchemes are built from MoveStrategy objects\nIn the end, you must give your PathSimulator object a single MoveScheme. However, this scheme might involve several different strategies (for example, whether you want to do one-way shooting or two-way shooting is one strategy decision, and it each can be combined with either nearest-neightbor replica exchange strategy or all-replica exchange strategy: these strategy decisions are completely independent.)\nCreating a strategy\nA strategy should be thought of as a way to either add new PathMovers to a MoveScheme or to change those PathMovers which already exist in some way.\nEvery MoveStrategy therefore has an ensembles parameter. If the ensembles parameter is not given, it is assumed that the user intended all normal ensembles in the scheme's transitions. Every strategy also has an initialization parameter called group. This defines the \"category\" of the move. There are several standard categories (described below), but you can also create custom categories (some examples are given later).\nFinally, there is another parameter which can be given in the initialization of any strategy, but which must be given as a named parameter. This is replace, which is a boolean stating whether the movers created using this should replace those in the scheme at this point.\nStrategy groups\nIntuitively, we often think of moves in groups: the shooting moves, the replica exchange moves, etc. For organizational and analysis purposes, we include that structure in the MoveScheme, and each MoveStrategy must declare what groups it applies to. OpenPathSampling allows users to define arbitrary groups (using strings as labels). The standard schemes use the following groups:\n\n'shooting'\n'repex'\n'pathreversal'\n'minus'\n\nStrategy levels\nIn order to apply the strategies in a reasonable order, OpenPathSampling distinguishes several levels at which move strategies work. For example, one level determines which swaps define the replica exchange strategy to be used (SIGNATURE), and another level determines whether the swaps are done as replica exchange or ensemble hopping (GROUP). Yet another level creates the structures that determine when to do a replica exchange vs. when to do a shooting move (GLOBAL).\nWhen building the move decision tree, the strategies are applied in the order of their levels. Each level is given a numerical value, meaning that it is simple to create custom orderings. Here are the built-in levels, their numeric values, and brief description:\n\nlevels.SIGNATURE = 10: \nlevels.MOVER = 30: \nlevels.GROUP = 50: \nlevels.SUPERGROUP = 70: \nlevels.GLOBAL = 90: \n\nApplying the strategy to a move scheme\nTo add a strategy to the move scheme, you use MoveScheme's .append() function. This function can take two arguments: the list of items to append (which is required) and the levels associated with each item. By default, every strategy has a level associated with it, so under most circumstances you don't need to use the levels argument.\nNow let's look at a specific example. Say that, instead of doing nearest-neighbor replica exchange (as is the default), we wanted to allow all exchanges within each transition. This is as easy as appending an AllSetRepExStrategy to our scheme.",
"# example: switching between AllSetRepEx and NearestNeighborRepEx\nscheme = paths.DefaultScheme(network)\nscheme.append(strategies.AllSetRepExStrategy())",
"Now when we visualize this, note the difference in the replica exchange block: we have 6 movers instead of 4, and now we allow the exchanges between the innermost and outermost ensembles.",
"move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme)\nSVG(move_vis.svg())",
"What if you changed your mind, or wanted to go the other way? Of course, you could just create a new scheme from scratch. However, you can also append a NearestNeighborRepExStrategy after the AllSetRepExStrategy and, from that, return to nearest-neighbor replica exchange.\nFor NearestNeighborRepExStrategy, the default is replace=True: this is required in order to replace the AllSetRepExStrategy. Also, to obtain the new move decision tree, you have to pass the argument rebuild=True. This is because, once you've built the tree once, the function scheme.mover_decision_tree() will otherwise skip building the scheme and return the root of the already-built decision tree. This allows advanced custom changes, as discussed much later in this document.",
"scheme.append(strategies.NearestNeighborRepExStrategy(), force=True)\n\nmove_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme)\nSVG(move_vis.svg())",
"Combination strategies\nOpenPathSampling provides a few shortcuts to strategies which combine several substrategies into a whole.\nDefaultMoveStrategy\nThe DefaultMoveStrategy converts the move scheme to one which follows the default OpenPathSampling behavior.\nTODO: note that this isn't always the same as the default scheme you get from an empty move scheme. If other movers exist, they are converted to the default strategy. So if you added movers which are not part of the default for your network, they will still get included in the scheme.\nSingleReplicaStrategy\nThe SingleReplicaStrategy converts all replica exchanges to ensemble hops (bias parameter required). It then reshapes the move decision tree so that is organized by ensemble, TODO",
"# example: single replica",
"Examples of practical uses\nIn the examples above, we saw how to change from nearest neighbor replica exchange to all (in-set) replica exchange, and we saw how to switch to a single replica move strategy. In the next examples, we'll look at several other uses for move strategies.\nAdding a specific extra replica exchange move\nIn the examples above, we showed how to get either a nearest neighbor replica exchange attempt graph, or an all in-set replica exchange attempt graph. If you want something in-between, there's also the NthNearestNeighborRepExStrategy, which works like those above. But what if (probably in addition to one of these schemes) you want to allow a certain few replica exchange? For example, in a multiple interface set approach you might want to include a few exchanges between interfaces in different sets which share the same initial state.\nTo do this, we start with an acceptable strategy (we'll assume the default NearestNeighborRepExStrategy is our starting point) and we add more moves using SelectedPairsRepExStrategy, with replace=False.",
"ens00 = network.sampling_transitions[0].ensembles[0]\nens02 = network.sampling_transitions[0].ensembles[2]\nextra_repex = strategies.SelectedPairsRepExStrategy(ensembles=[ens00, ens02], replace=False)\nscheme = paths.DefaultScheme(network)\nscheme.append(extra_repex)",
"Now we have 7 replica exchange movers (5 not including MS-outer), as can be seen in the move tree visualization.",
"move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme)\nSVG(move_vis.svg())",
"First crossing shooting point selection for some ensembles\nFor ensembles which are far from the state, sometimes uniform shooting point selection doesn't work. If the number of frames inside the interface is much larger than the number outside the interface, then you are very likely to select a shooting point inside the interface. If that point is far enough from the interface, it may be very unlikely for the trial path to cross the interface.\nOne remedy for this is to use the first frame after the first crossing of the interface as the shooting point. This leads to 100% acceptance of the shooting move (every trial satisfies the ensemble, and since there is only one such point -- which is conserved in the trial -- the selection probability is equal in each direction.)\nThe downside of this approach is that the paths decorrelate much more slowly, since only that one point is allowed for shooting (path reversal moves change which is the \"first\" crossing, otherwise there would be never be complete decorrelation). So while it may be necessary to do it for outer interfaces, doing the same for inner interfaces may slow convergence. \nThe trick we'll show here is to apply the first crossing shooting point selection only to the outer interfaces. This can increase the acceptance probability of the outer interfaces without affecting the decorrelation of the inner interfaces.",
"# select the outermost ensemble in each sampling transition\nspecial_ensembles = [transition.ensembles[-1] for transition in network.sampling_transitions]\n\nalternate_shooting = strategies.OneWayShootingStrategy(\n selector=paths.UniformSelector(), # TODO: change this\n ensembles=special_ensembles\n)\n# note that replace=True is the default\n\nscheme = paths.DefaultScheme(network)\nscheme.movers = {} # TODO: this will be removed, and lines on either side combined, when all is integrated\nscheme.append(alternate_shooting)\nmove_decision_tree = scheme.move_decision_tree()\n\n# TODO: find a way to visualize",
"Two different kinds of shooting for one ensemble\nIn importance sampling approaches like TIS, you're seeking a balance between two sampling goals. On the one hand, most of space has a negligible (or zero) contribution to the property being measured, so you don't want your steps to be so large that your trials are never accepted. On the other hand, if you make very small steps, it takes a long time to diffuse through the important region (i.e., to decorrelate).\nOne approach which could be used to fix this would be to allow two different kinds of moves: one which makes small changes with a relatively high acceptance probability to get accepted samples, and one which makes larger changes in an attempt to decorrelate.\nThis section will show you how to do that by adding a small_step_shooting group which does uses the first crossing shooting point selection. (In reality, a better way to get this effect would be to use the standard one-way shooting to do the small steps, and use two-way shooting -- not yet implemented -- to get the larger steps.)",
"# example: add extra shooting (in a different group, preferably)\nextra_shooting = strategies.OneWayShootingStrategy(\n selector=paths.UniformSelector(), # TODO: change this\n group='small_step_shooting'\n)\nscheme = paths.DefaultScheme(network)\nscheme.append(extra_shooting)",
"In the visualization of this, you'll see that we have 2 blocks of shooting moves: one is the pre-existing group called 'shooting', and the other is this new group 'small_step_shooting'.",
"move_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme)\nSVG(move_vis.svg())",
"RepEx-Shoot-RepEx\nOne of the mains goals of OpenPathSampling is to allow users to develop new approaches. New move strategies certainly represents one direction of possible research. This particular example also shows you how to implement such features. It includes both implementation of a custom PathMover and a custom MoveStrategy.\nSay that, instead of doing the standard replica exchange and shooting moves, you wanted to combine them all into one move buy first doing all the replica exchanges in one order, then doing all the shooting moves, then doing all the replica exchanges in the other order.\nTo implement this, we'll create a custom subclass of MoveStrategy. When making the movers for this strategy, we'll use the built-in SequentialMover object to create the move we're interested in.",
"# example: custom subclass of `MoveStrategy`\nclass RepExShootRepExStrategy(strategies.MoveStrategy):\n _level = strategies.levels.GROUP\n # we define an init function mainly to set defaults for `replace` and `group`\n def __init__(self, ensembles=None, group=\"repex_shoot_repex\", replace=True, network=None):\n super(RepExShootRepExStrategy, self).__init__(\n ensembles=ensembles, group=group, replace=replace\n )\n \n def make_movers(self, scheme):\n # if we replace, we remove these groups from the scheme.movers dictionary\n if self.replace:\n repex_movers = scheme.movers.pop('repex')\n shoot_movers = scheme.movers.pop('shooting')\n else:\n repex_movers = scheme.movers['repex']\n shoot_movers = scheme.movers['shooting']\n # combine into a list for the SequentialMover\n mover_list = repex_movers + shoot_movers + list(reversed(repex_movers))\n combo_mover = paths.SequentialMover(mover_list)\n return [combo_mover]\n\nrepex_shoot_repex = RepExShootRepExStrategy()\nscheme = paths.DefaultScheme(network)\nscheme.append(repex_shoot_repex)",
"You'll notice that the combo_mover we defined above is within a RandomChoiceMover: that random choice is for the group 'repex_shoot_repex', which has only this one member.\nIn this, we have used the default replace=True, which removes the old groups for the shooting movers and replica exchange movers. If you would like to keep the old shooting and replica exchange moves around as well, you can use replace=False.",
"# TODO: there appears to be a bug in MoveTreeBuilder with this scheme\nmove_vis = paths.visualize.MoveTreeBuilder.from_scheme(scheme)\nSVG(move_vis.svg())",
"Modifying the probabilities of moves\nThe DefaultStrategy includes default choices for the probability of making each move type, and then treats all moves within a given type with equal probability. Above, we described how to change the probability of a specific move type; now we're going to discuss changing the probability of a specific move within that type.\nOne approach would be to create a custom MoveStrategy at the GLOBAL level. However, in this section we're going to use a different paradigm to approach this problem. Instead of using a MoveStrategy to change the MoveScheme, we will manually modify it.\nKeep in mind that this involves really diving into the guts of the MoveScheme object, with all the caveats that involves. Although this paradigm can be used in this and other cases, it is only recommended for advanced users.\nOne you've created the move decision tree, you can make any custom modifications to it that you would desire. However, it is important to remember that modifying certain aspects can lead to a nonsensical result. For example, appending a move to a RandomChoiceMover without also appending an associated weight will lead to nonsense. For the most part, it is better to use MoveStrategy objects to modify your move decision tree. But to make your own MoveStrategy subclasses, you will need to know how to work with the details of the MoveScheme and the move decision tree.\nIn this example, we find the shooting movers associated with a certain ensemble, and double the probability of choosing that ensemble if a shooting move is selected.",
"# TODO: This is done differently (and more easily) now\n# example: getting into the details\n#scheme = paths.DefaultScheme(network)\n#move_decision_tree = scheme.move_decision_tree()\n#ens = network.sampling_transitions[0].ensembles[-1]\n#shooting_chooser = [m for m in move_decision_tree.movers if m.movers==scheme.movers['shooting']][0]\n#idx_ens = [shooting_chooser.movers.index(m) \n# for m in shooting_chooser.movers \n# if m.ensemble_signature==((ens,), (ens,))]\n#print shooting_chooser.weights\n#for idx in idx_ens:\n# shooting_chooser.weights[idx] *= 2\n#print shooting_chooser.weights",
"List of built-in MoveStrategy classes\nReplica exchange strategies (level=SIGNATURE)\n\nNearestNeighborRepExStrategy\nTODO: NthNeighborRepExStrategy\nAllSetRepExStrategy\nSelectedPairsRepExStrategy\n\nEnsemble change strategies (level=GROUP)\n\nTODO: StateSwapStrategy\nTODO: ReplicaExchangeStrategy\nTODO: EnsembleHopStrategy\n\nShooting strategies (level=MOVER)\n\nOneWayShootingStrategy\n(TwoWayShootingStrategy) : not yet implemented\n\nPath reversal strategies (level=MOVER)\n\nPathReversalStrategy\n\nMinus move strategies (level=MOVER)\n\nTODO: MinusMoveStrategy\nTODO: SingleReplicaMinusStrategy\n\nOverall move decision tree strategies (combinations: level=GLOBAL)\n\nDefaultMoveStrategy\nSingleReplicaStrategy"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/cnrm-cerfacs/cmip6/models/cnrm-cm6-1-hr/land.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: CNRM-CERFACS\nSource ID: CNRM-CM6-1-HR\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:52\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-cm6-1-hr', 'land')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Conservation Properties\n3. Key Properties --> Timestepping Framework\n4. Key Properties --> Software Properties\n5. Grid\n6. Grid --> Horizontal\n7. Grid --> Vertical\n8. Soil\n9. Soil --> Soil Map\n10. Soil --> Snow Free Albedo\n11. Soil --> Hydrology\n12. Soil --> Hydrology --> Freezing\n13. Soil --> Hydrology --> Drainage\n14. Soil --> Heat Treatment\n15. Snow\n16. Snow --> Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --> Vegetation\n21. Carbon Cycle --> Vegetation --> Photosynthesis\n22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\n23. Carbon Cycle --> Vegetation --> Allocation\n24. Carbon Cycle --> Vegetation --> Phenology\n25. Carbon Cycle --> Vegetation --> Mortality\n26. Carbon Cycle --> Litter\n27. Carbon Cycle --> Soil\n28. Carbon Cycle --> Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --> Oceanic Discharge\n32. Lakes\n33. Lakes --> Method\n34. Lakes --> Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nFluxes exchanged with the atmopshere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Atmospheric Coupling Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Land Cover\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTypes of land cover defined in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.7. Land Cover Change\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Tiling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Water\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Carbon\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Timestepping Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Total Depth\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe total depth of the soil (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of soil in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Heat Water Coupling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the coupling between heat and water in the soil",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Number Of Soil layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the soil scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Soil --> Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of soil map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil structure map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Texture\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil texture map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Organic Matter\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil organic matter map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Albedo\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil albedo map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.6. Water Table\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil water table map, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.7. Continuously Varying Soil Depth\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the soil properties vary continuously with depth?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.8. Soil Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil depth map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Soil --> Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow free albedo prognostic?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"10.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Direct Diffuse\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.4. Number Of Wavelength Bands\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11. Soil --> Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the soil hydrological model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river soil hydrology in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Number Of Ground Water Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers that may contain water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.6. Lateral Connectivity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe the lateral connectivity between tiles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.7. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Soil --> Hydrology --> Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nHow many soil layers may contain ground ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.2. Ice Storage Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of ice storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.3. Permafrost\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Soil --> Hydrology --> Drainage\nTODO\n13.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDifferent types of runoff represented by the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Soil --> Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of how heat treatment properties are defined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil heat scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.5. Heat Storage\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the method of heat storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.6. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe processes included in the treatment of soil heat",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of snow in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Number Of Snow Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Density\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow density",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Water Equivalent\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the snow water equivalent",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.6. Heat Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the heat content of snow",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.7. Temperature\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow temperature",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.8. Liquid Water Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow liquid water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.9. Snow Cover Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.10. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSnow related processes in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.11. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Snow --> Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\n*If prognostic, *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vegetation in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of vegetation scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Dynamic Vegetation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there dynamic evolution of vegetation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.4. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vegetation tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.5. Vegetation Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nVegetation classification used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.6. Vegetation Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of vegetation types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.7. Biome Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of biome types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.8. Vegetation Time Variation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.9. Vegetation Map\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.10. Interception\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs vegetation interception of rainwater represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.11. Phenology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.12. Phenology Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.13. Leaf Area Index\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.14. Leaf Area Index Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.15. Biomass\nIs Required: TRUE Type: ENUM Cardinality: 1.1\n*Treatment of vegetation biomass *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.16. Biomass Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.17. Biogeography\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.18. Biogeography Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.19. Stomatal Resistance\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.20. Stomatal Resistance Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.21. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the vegetation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of energy balance in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the energy balance tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. Number Of Surface Temperatures\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.4. Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of carbon cycle in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of carbon cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Anthropogenic Carbon\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.5. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the carbon scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Carbon Cycle --> Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"20.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.3. Forest Stand Dynamics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Carbon Cycle --> Vegetation --> Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for maintainence respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Growth Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for growth respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Carbon Cycle --> Vegetation --> Allocation\nTODO\n23.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the allocation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.2. Allocation Bins\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify distinct carbon bins used in allocation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Allocation Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how the fractions of allocation are calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Carbon Cycle --> Vegetation --> Phenology\nTODO\n24.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the phenology scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Carbon Cycle --> Vegetation --> Mortality\nTODO\n25.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the mortality scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Carbon Cycle --> Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Carbon Cycle --> Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Carbon Cycle --> Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs permafrost included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.2. Emitted Greenhouse Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the GHGs emitted",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.4. Impact On Soil Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the impact of permafrost on soil properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of nitrogen cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"29.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of river routing in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the river routing, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river routing scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Grid Inherited From Land Surface\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the grid inherited from land surface?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.5. Grid Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.6. Number Of Reservoirs\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of reservoirs",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.7. Water Re Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTODO",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.8. Coupled To Atmosphere\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.9. Coupled To Land\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the coupling between land and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.11. Basin Flow Direction Map\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of basin flow direction map is being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.12. Flooding\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the representation of flooding, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.13. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the river routing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. River Routing --> Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify how rivers are discharged to the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Quantities Transported\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lakes in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Coupling With Rivers\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre lakes coupled to the river routing model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of lake scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"32.4. Quantities Exchanged With Rivers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Vertical Grid\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vertical grid of lakes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the lake scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33. Lakes --> Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs lake ice included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.2. Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of lake albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.3. Dynamics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.4. Dynamic Lake Extent\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a dynamic lake extent scheme included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.5. Endorheic Basins\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nBasins not flowing to ocean included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"34. Lakes --> Wetlands\nTODO\n34.1. Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of wetlands, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
prashantas/MyDataScience
|
DeepNetwork/Keras/BankChurn.ipynb
|
bsd-2-clause
|
[
"https://medium.com/@pushkarmandot/build-your-first-deep-learning-neural-network-model-using-keras-in-python-a90b5864116d\nData and Business Problem:\nOur basic aim is to predict customer churn for a certain bank i.e. which customer is going to leave this bank service.",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\ndataset = pd.read_csv('Churn_Modelling.csv')\n\ndataset.head()\n",
"Create matrix of features and matrix of target variable. In this case we are excluding column 1, 2 & 3 as those are ‘row_number’, ‘customerid’ & 'Surname' which are not useful in our analysis. Column 14, ‘Exited’ is our Target Variable",
"from sklearn.preprocessing import LabelEncoder, OneHotEncoder\n## Read this for categorical Encoding : http://pbpython.com/categorical-encoding.html\n##pd.get_dummies(dataset, columns=[\"Geography\", \"Gender\"], prefix=[\"Geography\", \"Gender\"]).head()\n\ndef getXy_1(dataset,target):\n df = pd.get_dummies(dataset, columns=[\"Geography\", \"Gender\"], prefix=[\"Geography\", \"Gender\"])\n \n y = df[target]\n X = df.loc[:, df.columns != target]\n return X,y\n\n\ndef getXy_2(dataset,target): \n lb = LabelEncoder()\n dataset['Gender'] = lb.fit_transform(dataset['Gender'])\n dataset['Geography'] = lb.fit_transform(dataset['Geography'])\n ## One-Hot Coding\n dataset = pd.get_dummies(dataset, columns = ['Geography','Gender'])\n y = dataset[target]\n X = dataset.loc[:, dataset.columns != target]\n return X,y\n \n\nX,y = getXy_2(dataset,target='Exited')\nprint(\"X.columns:\",X.columns)\nX = X.iloc[:, 3:].values\ny = y.values\ny\n\n# Splitting the dataset into the Training set and Test set\nfrom sklearn.model_selection import train_test_split\nX_train,X_test, y_train, y_test = train_test_split(X,y,test_size=0.2)",
"I know you are tired of data preprocessing but I promise this is the last step. If you carefully observe data, you will find that data is not scaled properly. Some variable has value in thousands while some have value is tens or ones. We don’t want any of our variable to dominate on other so let’s go and scale data.\n‘StandardScaler’ is available in ScikitLearn. In the following code we are fitting and transforming StandardScaler method on train data. We have to standardize our scaling so we will use the same fitted method to transform/scale test data.",
"from sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.fit_transform(X_test)\n\nX_train\n\nX_train.shape\n\nimport keras\nfrom keras.models import Sequential\n\nfrom keras.layers.core import Dense,Dropout, Dropout,Activation\n\nmodel = Sequential()\n\nmodel.add(Dense(16,input_dim=13))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.2))\n\nmodel.add(Dense(16))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.2))\n\nmodel.add(Dense(8))\nmodel.add(Activation('relu'))\nmodel.add(Dropout(0.2))\n\nmodel.add(Dense(1))\nmodel.add(Activation('sigmoid'))\n\nmodel.summary()\n\n# Compiling Neural Network\nmodel.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])\n\n# Fitting our model \nmodel.fit(X_train, y_train, batch_size = 10, nb_epoch = 100)\n\n##Predicting the test set results\ny_pred = model.predict(X_test)\ny_pred = (y_pred>0.4)\ny_pred\n\n# Creating the Confusion Matrix\nfrom sklearn.metrics import confusion_matrix\ncm = confusion_matrix(y_test, y_pred)\n\ncm"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ffmmjj/desafio-dados-2016
|
experiments/Analise Exploratoria.ipynb
|
apache-2.0
|
[
"%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport sys\nsys.path.append('../src')",
"Carregamento e junção dos dados",
"from preprocessamento_escola_2011 import escolas_info_train, escolas_info_test",
"O módulo acima carrega os dados e os divide entre conjunto de treinamento(Para análise exploratória) e conjunto de teste(para validação de hipóteses). Os conjuntos já agrupam os questionários com as médias em língua portuguesa e matemática, além de renomear as questões com um nome mais significativo.\nAs questões também foram agrupadas em categorias(INFRA, SEGURANCA, RECURSOS e BIBLIOTECA) para facilitar possíveis agrupamentos posteriores.",
"escolas_info_train.info()",
"Limpeza inicial dos dados",
"def codificar_questoes(questionario_pd):\n questionario_tratado_pd = questionario_pd.copy()\n for questoes in questionario_tratado_pd.filter(regex='Q_.*').columns:\n questionario_tratado_pd[questoes] = questionario_pd[questoes].map({'A': 3, 'B': 2, 'C': 1, 'D': 0}).fillna(-1)\n \n return questionario_tratado_pd\n\n\ndef limpar_score(questionario_pd, score_names):\n questionario_tratado_pd = questionario_pd.copy()\n for score in score_names:\n questionario_tratado_pd[score] = questionario_pd[score].str.strip().str.replace(',', '.').str.replace('^$', '-1').astype(float)\n \n return questionario_tratado_pd\n\nescolas_train_clean = limpar_score(codificar_questoes(escolas_info_train), ['MEDIA_MT', 'MEDIA_LP'])",
"Checagem de correlações",
"escolas_info_clean_pd = escolas_train_clean.copy()\nescolas_info_clean_pd['MEDIA_LP'] = escolas_train_clean['MEDIA_LP']\nescolas_questoes_lp = escolas_train_clean.filter(regex='(Q_.*|MEDIA_LP)')\nmedia_lp_corrs = escolas_questoes_lp.corr()['MEDIA_LP']\nmedia_lp_corrs.sort_values(ascending=False).index[1:11]",
"Distribuição das médias em Língua Portuguesa",
"plt.hist(escolas_train_clean['MEDIA_LP'], bins=20)\nplt.xlabel('Média em Língua Portuguesa')\nplt.show()",
"Gráfico de correlação entre Qualidade da Biblioteca e Média em Língua Portuguesa",
"# Qualidade da biblioteca, de acordo com o dicionario de dados, encontra-se no campo TX_RESP_Q050\nplt.scatter(escolas_train_clean['Q_RECURSOS_BIBLIOTECA'], escolas_train_clean['MEDIA_LP'])\nplt.xlabel('Qualidade da biblioteca')\nplt.ylabel('Média em Língua Portuguesa')\nplt.xticks([-1, 0, 1, 2, 3], ['S/R', 'Inexistente', 'Ruim', 'Regular', 'Bom'])\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sysid/nbs
|
LP/Introduction-to-linear-programming/LaTeX_formatted_ipynb_files/Introduction to Linear Programming with Python - Part 6.ipynb
|
mit
|
[
"Introduction to Linear Programming with Python - Part 6\nMocking conditional statements using binary constraints\nIn part 5, I mentioned that in some cases it is possible to construct conditional statements using binary constraints.\nWe will explore not only conditional statements using binary constraints, but combining them with logical operators, 'and' and 'or'.\nFirst we'll work through some theory, then a real world example as an extension of part 5's example at the end.\nConditional statement\nTo start simply, if we have the binary constraint x<sub>1</sub> and we want:\npython\nif x1 == 1:\n y1 == 1\nelif x1 == 0:\n y1 == 0\nWe can achieve this easily using the following constraint:\npython\ny1 == x1\nHowever, if we wanted the opposite: \npython\nif x1 == 1:\n y1 == 0\nelif x1 == 0:\n y1 == 1\nGiven that they most both be 1 or 0, we just need the following constraint:\npython\nx1 + y1 == 1\nLogical 'AND' operator\nNow for something a little more complex, we can coerce a particular binary constraint to be 1 based on the states of 2 other binary constraints.\nIf we have binary constraints x<sub>1</sub> and x<sub>2</sub> and we want to achieve the following:\npython\nif x1 == 1 and x2 == 0:\n y1 == 1\nelse:\n y1 == 0\nSo that $y_1$ is only 1 in the case that x<sub>1</sub> is 1 and x<sub>2</sub> is 0. We can use the following 3 constraints to achieve this:\npython\n[\ny1 >= x1 - x2,\ny1 <= x1,\ny1 <= (1 - x2)\n]\nWe'll take a moment to deconstruct this. In our preferred case that x<sub>1</sub> = 1 and x<sub>2</sub> = 0, the three statments resolve to:\n* y<sub>1</sub> ≥ 1\n* y<sub>1</sub> ≤ 1\n* y<sub>1</sub> ≤ 1\nThe only value of $y_1$ that fulfils each of these is 1.\nIn any other case, however, y<sub>1</sub> will be zero. Let's take another example, say x<sub>1</sub> = 0 and x<sub>2</sub> = 1. This resolves to:\n* y<sub>1</sub> ≥ -1\n* y<sub>1</sub> ≤ 0\n* y<sub>1</sub> ≤ 0\nGiven that y<sub>1</sub> is a binary variable and must be 0 or 1, the only value of y<sub>1</sub> that can fulfil each of these is 0.\nYou can construct 3 constraints so that y<sub>1</sub> is equal to 1, only in the case you're interested in out of the 4 following options:\n* x<sub>1</sub> = 1 and x<sub>2</sub> = 1 \n* x<sub>1</sub> = 1 and x<sub>2</sub> = 0 \n* x<sub>1</sub> = 0 and x<sub>2</sub> = 1 \n* x<sub>1</sub> = 0 and x<sub>2</sub> = 0 \nI have created a function for exactly this purpose to cover all cases:",
"def make_io_and_constraint(y1, x1, x2, target_x1, target_x2):\n \"\"\"\n Returns a list of constraints for a linear programming model\n that will constrain y1 to 1 when\n x1 = target_x1 and x2 = target_x2; \n where target_x1 and target_x2 are 1 or 0\n \"\"\"\n binary = [0,1]\n assert target_x1 in binary\n assert target_x2 in binary\n \n if IOx1 == 1 and IOx2 == 1:\n return [\n y1 >= x1 + x2 - 1,\n y1 <= x1,\n y1 <= x2\n ]\n elif IOx1 == 1 and IOx2 == 0:\n return [\n y1 >= x1 - x2,\n y1 <= x1,\n y1 <= (1 - x2)\n ]\n elif IOx1 == 0 and IOx2 == 1:\n return [\n y1 >= x2 - x1,\n y1 <= (1 - x1),\n y1 <= x2\n ]\n else:\n return [\n y1 >= - (x1 + x2 -1),\n y1 <= (1 - x1),\n y1 <= (1 - x2)\n ]",
"Logical 'OR' operator\nThis is all well and good for the 'and' logical operator. What about the 'or' logical operator.\nIf we would like the following:\npython\nif x1 == 1 or x2 == 1:\n y1 == 1\nelse:\n y1 == 0\nWe can use the following linear constraints:\npython\ny1 <= x1 + x2\ny1 * 2 >= x1 + x2\nSo that:\n* if x<sub>1</sub> is 1 and x<sub>2</sub> is 1:\n * y<sub>1</sub> ≤ 2\n * 2y<sub>1</sub> ≥ 2\n * y<sub>1</sub> must equal 1\n* if x<sub>1</sub> is 1 and x<sub>2</sub> is 0:\n * y<sub>1</sub> ≤ 1\n * 2y<sub>1</sub> ≥ 1\n * y<sub>1</sub> must equal 1\n* if x<sub>1</sub> is 0 and x<sub>2</sub> is 1:\n * y<sub>1</sub> ≤ 1\n * 2y<sub>1</sub> ≥ 1\n * y<sub>1</sub> must equal 1\n* if x<sub>1</sub> is 0 and x<sub>2</sub> is 0:\n * y<sub>1</sub> ≤ 0\n * 2y<sub>1</sub> ≥ 0\n * y<sub>1</sub> must equal 0\nAgain, we'll consider the alternative option:\npython\nif x1 == 0 or x2 == 0:\n y1 == 1\nelse:\n y1 == 0\nWe can use the following linear constraints:\npython\ny1 * 2 <= 2 - (x1 + x2)\ny1 >= 1 - (x1 + x2)\nAn Example - Scheduling Example Extended\nIn our last example, we explored the scheduling of 2 factories.\nBoth factories had 2 costs:\n* Fixed Costs - Costs incurred while the factory is running\n* Variable Costs - Cost per unit of production\nWe're going to introduce a third cost - Start up cost.\nThis will be a cost incurred by turning on the machines at one of the factories.\nIn this example, our start-up costs will be:\n* Factory A - €20,000\n* Factory B - €400,000\nLet's start by reminding ourselves of the input data.",
"import pandas as pd\nimport pulp\n\nfactories = pd.DataFrame.from_csv('csv/factory_variables.csv', index_col=['Month', 'Factory'])\nfactories\n\ndemand = pd.DataFrame.from_csv('csv/monthly_demand.csv', index_col=['Month'])\ndemand",
"We'll begin by defining our decision variables, we have an additional binary variable for switching on the factory.",
"# Production\nproduction = pulp.LpVariable.dicts(\"production\",\n ((month, factory) for month, factory in factories.index),\n lowBound=0,\n cat='Integer')\n\n# Factory Status, On or Off\nfactory_status = pulp.LpVariable.dicts(\"factory_status\",\n ((month, factory) for month, factory in factories.index),\n cat='Binary')\n\n# Factory switch on or off\nswitch_on = pulp.LpVariable.dicts(\"switch_on\",\n ((month, factory) for month, factory in factories.index),\n cat='Binary')",
"We instantiate our model and define our objective function, including start up costs",
"# Instantiate the model\nmodel = pulp.LpProblem(\"Cost minimising scheduling problem\", pulp.LpMinimize)\n\n# Select index on factory A or B\nfactory_A_index = [tpl for tpl in factories.index if tpl[1] == 'A']\nfactory_B_index = [tpl for tpl in factories.index if tpl[1] == 'B']\n\n# Define objective function\nmodel += pulp.lpSum(\n [production[m, f] * factories.loc[(m, f), 'Variable_Costs'] for m, f in factories.index]\n + [factory_status[m, f] * factories.loc[(m, f), 'Fixed_Costs'] for m, f in factories.index]\n + [switch_on[m, f] * 20000 for m, f in factory_A_index]\n + [switch_on[m, f] * 400000 for m, f in factory_B_index]\n)",
"Now we begin to build up our constraints as in Part 5",
"# Production in any month must be equal to demand\nmonths = demand.index\nfor month in months:\n model += production[(month, 'A')] + production[(month, 'B')] == demand.loc[month, 'Demand']\n\n# Production in any month must be between minimum and maximum capacity, or zero.\nfor month, factory in factories.index:\n min_production = factories.loc[(month, factory), 'Min_Capacity']\n max_production = factories.loc[(month, factory), 'Max_Capacity']\n model += production[(month, factory)] >= min_production * factory_status[month, factory]\n model += production[(month, factory)] <= max_production * factory_status[month, factory]\n\n# Factory B is off in May\nmodel += factory_status[5, 'B'] == 0\nmodel += production[5, 'B'] == 0",
"But now we want to add in our constraints for switching on.\nA factory switches on if:\n* It is off in the previous month (m-1)\n* AND it on in the current month (m).\nAs we don't know if the factory is on before month 0, we'll assume that the factory has switched on if it is on in month 1.",
"for month, factory in factories.index:\n # In month 1, if the factory ison, we assume it turned on\n if month == 1:\n model += switch_on[month, factory] == factory_status[month, factory]\n \n # In other months, if the factory is on in the current month AND off in the previous month, switch on = 1\n else:\n model += switch_on[month, factory] >= factory_status[month, factory] - factory_status[month-1, factory]\n model += switch_on[month, factory] <= 1 - factory_status[month-1, factory]\n model += switch_on[month, factory] <= factory_status[month, factory]\n ",
"We'll then solve our model",
"model.solve()\npulp.LpStatus[model.status]\n\noutput = []\nfor month, factory in production:\n var_output = {\n 'Month': month,\n 'Factory': factory,\n 'Production': production[(month, factory)].varValue,\n 'Factory Status': factory_status[(month, factory)].varValue,\n 'Switch On': switch_on[(month, factory)].varValue\n }\n output.append(var_output)\noutput_df = pd.DataFrame.from_records(output).sort_values(['Month', 'Factory'])\noutput_df.set_index(['Month', 'Factory'], inplace=True)\noutput_df",
"Interestingly, we see that it now makes economic sense to keep factory B on after it turns off in month 5 up until month 12.\nPreviously, we had the case that it was not economic to run factory B in month 10, but as there is now a significant cost to switching off and back on, the factory runs through month 10 at its lowest capacity (20,000 units).\n\nFor those interested in using my function defined above (make_io_and_constraint). Instead of:\npython\nmodel += switch_on[month, factory] >= factory_status[month, factory] - factory_status[month-1, factory]\nmodel += switch_on[month, factory] <= 1 - factory_status[month-1, factory]\nmodel += switch_on[month, factory] <= factory_status[month, factory]\nYou could write:\npython\nfor constraint in make_io_and_constraint(switch_on[month, factory], \n factory_status[month, factory], \n factory_status[month-1, factory], 0, 1):\n model += constriant"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
UCSBarchlab/PyRTL
|
ipynb-examples/example4-debuggingtools.ipynb
|
bsd-3-clause
|
[
"Example 4: Debugging\nDebugging is half the coding process in software, and in PyRTL, it's no\ndifferent. PyRTL provides some additional challenges when it comes to\ndebugging as a problem may surface long after the error was made. Fortunately,\nPyRTL comes with various features to help you find mistakes.",
"import random\nimport io\nfrom pyrtl.rtllib import adders, multipliers\nimport pyrtl\npyrtl.reset_working_block()\n\nrandom.seed(93729473) # used to make random calls deterministic for this example",
"This example covers debugging strategies for PyRTL. For general python debugging\nwe recommend healthy use of the \"assert\" statement, and use of \"pdb\" for\ntracking down bugs. However, PyRTL introduces some new complexities because\nthe place where functionality is defined (when you construct and operate\non PyRTL classes) is separate in time from where that functionality is executed\n(i.e. during simulation). Thus, sometimes it hard to track down where a wire\nmight have come from, or what exactly it is doing.\nIn this example specifically, we will be building a circuit that adds up three values.\nHowever, instead of building an add function ourselves or using the\nbuilt-in \"+\" function in PyRTL, we will instead use the Kogge-Stone adders\nin RtlLib, the standard library for PyRTL.",
"# building three inputs\nin1, in2, in3 = (pyrtl.Input(8, \"in\" + str(x)) for x in range(1, 4))\nout = pyrtl.Output(10, \"out\")\n\nadd1_out = adders.kogge_stone(in1, in2)\nadd2_out = adders.kogge_stone(add1_out, in2)\nout <<= add2_out",
"The most basic way of debugging PyRTL is to connect a value to an output wire\nand use the simulation to trace the output. A simple \"print\" statement doesn't work\nbecause the values in the wires are not populated during creation time\nIf we want to check the result of the first addition, we can connect an output wire\nto the result wire of the first adder",
"debug_out = pyrtl.Output(9, \"debug_out\")\ndebug_out <<= add1_out",
"Now simulate the circuit. Let's create some random inputs to feed our adder.",
"vals1 = [int(2**random.uniform(1, 8) - 2) for _ in range(20)]\nvals2 = [int(2**random.uniform(1, 8) - 2) for _ in range(20)]\nvals3 = [int(2**random.uniform(1, 8) - 2) for _ in range(20)]\n\nsim_trace = pyrtl.SimulationTrace()\nsim = pyrtl.Simulation(tracer=sim_trace)\nfor cycle in range(len(vals1)):\n sim.step({\n 'in1': vals1[cycle],\n 'in2': vals2[cycle],\n 'in3': vals3[cycle]})",
"In order to get the result data, you do not need to print a waveform of the trace\nYou always have the option to just pull the data out of the tracer directly",
"print(\"---- Inputs and debug_out ----\")\nprint(\"in1: \", str(sim_trace.trace['in1']))\nprint(\"in2: \", str(sim_trace.trace['in2']))\nprint(\"debug_out: \", str(sim_trace.trace['debug_out']))\nprint('\\n')",
"Below, I am using the ability to directly retrieve the trace data to\nverify the correctness of the first adder",
"for i in range(len(vals1)):\n assert(sim_trace.trace['debug_out'][i] == sim_trace.trace['in1'][i] + sim_trace.trace['in2'][i])",
"Probe\nNow that we have built some stuff, let's clear it so we can try again in a\ndifferent way. We can start by clearing all of the hardware from the current working\nblock. The working block is a global structure that keeps track of all the\nhardware you have built thus far. A \"reset\" will clear it so we can start fresh.",
"pyrtl.reset_working_block()",
"In this example, we will be multiplying two numbers using tree_multiplier()\nAgain, create the two inputs and an output",
"print(\"---- Using Probes ----\")\nin1, in2 = (pyrtl.Input(8, \"in\" + str(x)) for x in range(1, 3))\nout1, out2 = (pyrtl.Output(8, \"out\" + str(x)) for x in range(1, 3))\n\nmultout = multipliers.tree_multiplier(in1, in2)\n\n#The following line will create a probe named \"std_probe for later use, like an output.\npyrtl.probe(multout, 'std_probe')",
"We could also do the same thing during assignment. The next command will\ncreate a probe (named 'stdout_probe') that refers to multout (returns the wire multout).\nThis achieves virtually the same thing as 4 lines above, but it is done during assignment,\nso we skip a step by probing the wire before the multiplication.\nThe probe returns multout, the original wire, and out will be assigned multout * 2",
"out1 <<= pyrtl.probe(multout, 'stdout_probe') * 2",
"Probe can also be used with other operations like this:",
"pyrtl.probe(multout + 32, 'adder_probe')\n\npyrtl.probe(multout[2:7], 'select_probe')\n\nout2 <<= pyrtl.probe(multout)[2:16] # notice probe names are not absolutely necessary",
"As one can see, probe can be used on any wire any time,\nsuch as before or during its operation, assignment, etc.\nNow on to the simulation...\nFor variation, we'll recreate the random inputs:",
"vals1 = [int(2**random.uniform(1, 8) - 2) for _ in range(10)]\nvals2 = [int(2**random.uniform(1, 8) - 2) for _ in range(10)]\n\nsim_trace = pyrtl.SimulationTrace()\nsim = pyrtl.Simulation(tracer=sim_trace)\nfor cycle in range(len(vals1)):\n sim.step({\n 'in1': vals1[cycle],\n 'in2': vals2[cycle]})",
"Now we will show the values of the inputs and probes\nand look at that, we didn't need to make any outputs!\n(although we did, to demonstrate the power and convenience of probes)",
"sim_trace.render_trace()\nsim_trace.print_trace()",
"Say we wanted to have gotten more information about\none of those probes above at declaration.\nWe could have used pyrtl.set_debug_mode() before their creation, like so:",
"print(\"--- Probe w/ debugging: ---\")\npyrtl.set_debug_mode()\npyrtl.probe(multout - 16, 'debugsubtr_probe)')\npyrtl.set_debug_mode(debug=False)",
"WireVector Stack Trace\nAnother case that might arise is that a certain wire is causing an error to occur\nin your program. WireVector Stack Traces allow you to find out more about where a particular\nWireVector was made in your code. With this enabled the WireVector will\nstore exactly were it was created, which should help with issues where\nthere is a problem with an identified wire.\nLike above, just add the following line before the relevant WireVector\nmight be made or at the beginning of the program.",
"pyrtl.set_debug_mode()\n\ntest_out = pyrtl.Output(9, \"test_out\")\ntest_out <<= adders.kogge_stone(in1, in2)",
"Now to retrieve information:",
"wire_trace = test_out.init_call_stack",
"This data is generated using the traceback.format_stack() call from the Python\nstandard library's Traceback module (look at the Python standard library docs for\ndetails on the function). Therefore, the stack traces are stored as a list with the\noutermost call first.",
"print(\"---- Stack Trace ----\")\nfor frame in wire_trace:\n print(frame)",
"Storage of Additional Debug Data\nWARNING: the debug information generated by the following two processes are\nnot guaranteed to be preserved when functions (eg. pyrtl.synthesize() ) are\ndone over the block.\nHowever, if the stack trace does not give you enough information about the\nWireVector, you can also embed additional information into the wire itself.\nTwo ways of doing so is either through manipulating the name of the\nWireVector, or by adding your own custom metadata to the WireVector.\nSo far, each input and output WireVector have been given their own names, but\nnormal WireVectors can also be given names by supplying the name argument to\nthe constructor",
"dummy_wv = pyrtl.WireVector(1, name=\"blah\")",
"Also, because of the flexible nature of Python, you can also add custom\nproperties to the WireVector.",
"dummy_wv.my_custom_property_name = \"John Clow is great\"\ndummy_wv.custom_value_028493 = 13\n\n# removing the WireVector from the block to prevent problems with the rest of\n# this example\npyrtl.working_block().remove_wirevector(dummy_wv)",
"Trivial Graph Format\nFinally, there is a handy way to view your hardware creations as a graph.\nThe function output_to_trivialgraph will render your hardware a formal that\nyou can then open with the free software \"yEd\"\n(http://en.wikipedia.org/wiki/YEd). There are options under the\n\"hierarchical\" rendering to draw something that looks quite like a circuit.",
"pyrtl.working_block().sanity_check()\npyrtl.passes._remove_unused_wires(pyrtl.working_block()) # so that trivial_graph() will work\n\nprint(\"--- Trivial Graph Format ---\")\nwith io.StringIO() as tgf:\n pyrtl.output_to_trivialgraph(tgf)\n print(tgf.getvalue())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sampathweb/movie-sentiment-analysis
|
01-load-vectorize-count.ipynb
|
mit
|
[
"Objective\n\nLoad Data, vectorize reviews to numbers\nBuild a basic model based on counting\nEvaluate the Model\nMake a first Kaggle Submission\n\nDownload Data from Kaggle:\n\n\nCompetition Link: https://www.kaggle.com/c/movie-sentiment-analysis\n\n\nUnzip into Data Directory",
"from __future__ import print_function # Python 2/3 compatibility\nimport numpy as np\nimport pandas as pd\nfrom collections import Counter\n\nfrom IPython.display import Image",
"Load Data",
"train_df = pd.read_csv(\"data/train.tsv\", sep=\"\\t\")\n\ntrain_df.sample(10)\n\n# Load the Test Dataset\n# Note that it's missing the Sentiment Column. That's what we need to Predict\n#\ntest_df = pd.read_csv(\"data/test.tsv\", sep=\"\\t\")\ntest_df.head()",
"Explore Dataset",
"# Equal Number of Positive and Negative Sentiments\ntrain_df.sentiment.value_counts()\n\n# Lets take a look at some examples\ndef print_reviews(reviews, max_words=500):\n for review in reviews:\n print(review[:500], end=\"\\n\\n\")\n\n# Some Positive Reviews\nprint(\"Sample **Positive** Reviews: \", \"\\n\")\nprint_reviews(train_df[train_df[\"sentiment\"] == 1].sample(3).review)\n\n# Some Negative Reviews\nprint(\"Sample **Negative** Reviews: \", \"\\n\")\nprint_reviews(train_df[train_df[\"sentiment\"] == 0].sample(3).review)",
"Vectorize Data (a.k.a. covert text to numbers)\nComputers don't understand Texts, so we need to convert texts to numbers before we could do any math on it and see if we can build a system to classify a review as Positive or Negative.\nWays to vectorize data:\n\nBag of Words\nTF-IDF\nWord Embeddings (Word2Vec) \n\nBag of Words\nTake each sentence and count how many occurances of a particular word.",
"## Doing it by Hand\n\ndef bag_of_words_vocab(reviews):\n \"\"\"Returns words in the reviews\"\"\"\n # all_words = []\n # for review in reviews:\n # for word in review.split():\n # all_words.append(word)\n ## List comprehension method of the same lines above\n all_words = [word.lower() for review in reviews for word in review.split(\" \")]\n return Counter(all_words)\n\nwords_vocab = bag_of_words_vocab(train_df.review)\n\nwords_vocab.most_common(20)",
"Observations:\n\nCommon words are not that meaningful (also called Stop words - unfortunately)\nThese words are likely to appear in both Positive and Negative Reviews\n\nWe need a way to find what words are mroe likely to cocur in Postive Review as compared to Negative Review",
"pos_words_vocab = bag_of_words_vocab(train_df[train_df.sentiment == 1].review)\nneg_words_vocab = bag_of_words_vocab(train_df[train_df.sentiment == 0].review)\n\npos_words_vocab.most_common(10)\n\nneg_words_vocab.most_common(10)\n\npos_neg_freq = Counter()\n\nfor word in words_vocab:\n pos_neg_freq[word] = (pos_words_vocab[word] + 1e-3) / (neg_words_vocab[word] + 1e-3)\n\nprint(\"Neutral words:\")\nprint(\"Pos-to-neg for 'the' = {:.2f}\".format(pos_neg_freq[\"is\"]))\nprint(\"Pos-to-neg for 'movie' = {:.2f}\".format(pos_neg_freq[\"is\"]))\n\nprint(\"\\nPositive and Negative review words:\")\nprint(\"Pos-to-neg for 'amazing' = {:.2f}\".format(pos_neg_freq[\"great\"]))\nprint(\"Pos-to-neg for 'terrible' = {:.2f}\".format(pos_neg_freq[\"terrible\"]))",
"Let's Amplify the difference using Log Scale\n\nNeutral Values are Close to 1\nNegative Sentiment Words are less than 1\nPositive Sentiment Words are greater than 1\n\nWhen Converted to Log Scale -\n\nNeutral Values are Close to 0\nNegative Sentiment Words are negative\nPositive Sentiment Words are postive\n\nThat not only makes lot of sense when looking at the numbers, but we could use it for our first classifier",
"# https://www.desmos.com/calculator \nImage(\"images/log-function.png\", width=960)\n\nfor word in pos_neg_freq:\n pos_neg_freq[word] = np.log(pos_neg_freq[word])\n\nprint(\"Neutral words:\")\nprint(\"Pos-to-neg for 'the' = {:.2f}\".format(pos_neg_freq[\"is\"]))\nprint(\"Pos-to-neg for 'movie' = {:.2f}\".format(pos_neg_freq[\"is\"]))\n\nprint(\"\\nPositive and Negative review words:\")\nprint(\"Pos-to-neg for 'amazing' = {:.2f}\".format(pos_neg_freq[\"great\"]))\nprint(\"Pos-to-neg for 'terrible' = {:.2f}\".format(pos_neg_freq[\"terrible\"]))",
"Time to build a Counting Model\n\nFor each Review, we will ADD all the pos_neg_freq values and if the Total for all words in the given review is > 0, we will call it Positive Review and if it's a negative total, we will call it a Negative Review. Sounds good?",
"class CountingClassifier(object):\n \n def __init__(self, pos_neg_freq):\n self.pos_neg_freq = pos_neg_freq\n \n def fit(self, X, y=None):\n # No Machine Learing here. It's just counting\n pass\n \n def predict(self, X):\n predictions = []\n for review in X:\n all_words = [word.lower() for word in review.split()]\n result = np.sum(self.pos_neg_freq.get(word, 0) for word in all_words)\n predictions.append(result)\n return np.array(predictions)\n\ncounting_model = CountingClassifier(pos_neg_freq)\ntrain_predictions = counting_model.predict(train_df.review)\n\ntrain_predictions[:10]\n\n# Covert to Binary Classifier\ntrain_predictions > 0\n\ny_pred = (train_predictions > 0).astype(int)\ny_pred\n\ny_true = train_df.sentiment\nlen(y_true)\n\nnp.sum(y_pred == y_true)\n\n## Accuracy\ntrain_accuracy = np.sum(y_pred == y_true) / len(y_true)\n\nprint(\"Accuracy on Train Data: {:.2f}\".format(train_accuracy))",
"Machine Learning Easy? What Gives?\nRemember this is Training Accuracy. We have not split our Data into Train and Validation (which we will do in our next notebook when we actualy build a Machine Learning Model)\nMake a Submission to Kaggle\nPredict on Test Data and Submit to Kaggle. May be we could end the tutorial right here :-D",
"## Test Accracy\ntest_predictions = counting_model.predict(test_df.review)\n\ntest_predictions\n\ny_pred = (test_predictions > 0).astype(int)\n\ndf = pd.DataFrame({\n \"document_id\": test_df.document_id,\n \"sentiment\": y_pred\n})\n\ndf.head()\n\ndf.to_csv(\"data/count-submission.csv\", index=False)",
"Reasons for Testing Accuracy Being Lower?\n\nOne Hypothesis, Since we are just Adding up ALL of the scores for each word in the review, the length of the reivew could have an impact. Let's look at length of reviews in train and test dataset",
"import matplotlib.pyplot as plt\n\n%matplotlib inline\n\ntrain_df.review.str.len().hist(log=True)\n\ntest_df.review.str.len().hist(log=True)",
"Next Steps\n\nSplit the Training Data into Training and Validation to avoid surprises on New Data(might not have helped in our counting method)\nBuild a Machine Learning Model beyond the rule based system of Counting values"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Housebeer/Natural-Gas-Model
|
Data Analytics/.ipynb_checkpoints/Fitting curve-checkpoint.ipynb
|
mit
|
[
"Fitting curve to data\nWithin this notebook we do some data analytics on historical data to feed some real numbers into the model. Since we assume the consumer data to be resemble a sinus, due to the fact that demand is seasonal, we will focus on fitting data to this kind of curve.",
"import pandas as pd\nimport numpy as np\nfrom scipy.optimize import leastsq\nimport pylab as plt\n\nN = 1000 # number of data points\nt = np.linspace(0, 4*np.pi, N)\ndata = 3.0*np.sin(t+0.001) + 0.5 + np.random.randn(N) # create artificial data with noise\n\nguess_mean = np.mean(data)\nguess_std = 3*np.std(data)/(2**0.5)\nguess_phase = 0\n\n# we'll use this to plot our first estimate. This might already be good enough for you\ndata_first_guess = guess_std*np.sin(t+guess_phase) + guess_mean\n\n# Define the function to optimize, in this case, we want to minimize the difference\n# between the actual data and our \"guessed\" parameters\noptimize_func = lambda x: x[0]*np.sin(t+x[1]) + x[2] - data\nest_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]\n\n# recreate the fitted curve using the optimized parameters\ndata_fit = est_std*np.sin(t+est_phase) + est_mean\n\nplt.plot(data, '.')\nplt.plot(data_fit, label='after fitting')\nplt.plot(data_first_guess, label='first guess')\nplt.legend()\nplt.show()",
"import data for our model\nThis is data imported from statline CBS webportal.",
"importfile = 'CBS Statline Gas Usage.xlsx'\ndf = pd.read_excel(importfile, sheetname='Month', skiprows=1)\ndf.drop(['Onderwerpen_1', 'Onderwerpen_2', 'Perioden'], axis=1, inplace=True)\n\n#df\n\n# transpose\ndf = df.transpose()\n\n\n# provide headers\nnew_header = df.iloc[0]\ndf = df[1:]\ndf.rename(columns = new_header, inplace=True)\n\n\n#df.drop(['nan'], axis=0, inplace=True)\ndf\n\n\nx = range(len(df.index))\ndf['Via regionale netten'].plot(figsize=(18,5))\nplt.xticks(x, df.index, rotation='vertical')\nplt.show()\n",
"now let fit different consumer groups",
"#b = self.base_demand\n#m = self.max_demand\n#y = b + m * (.5 * (1 + np.cos((x/6)*np.pi)))\n#b = 603\n#m = 3615\n\nN = 84 # number of data points\nt = np.linspace(0, 83, N)\n#data = b + m*(.5 * (1 + np.cos((t/6)*np.pi))) + 100*np.random.randn(N) # create artificial data with noise\ndata = np.array(df['Via regionale netten'].values, dtype=np.float64)\n\nguess_mean = np.mean(data)\nguess_std = 2695.9075546 #2*np.std(data)/(2**0.5)\nguess_phase = 0\n\n# we'll use this to plot our first estimate. This might already be good enough for you\ndata_first_guess = guess_mean + guess_std*(.5 * (1 + np.cos((t/6)*np.pi + guess_phase)))\n\n# Define the function to optimize, in this case, we want to minimize the difference\n# between the actual data and our \"guessed\" parameters\noptimize_func = lambda x: x[0]*(.5 * (1 + np.cos((t/6)*np.pi+x[1]))) + x[2] - data\nest_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]\n\n# recreate the fitted curve using the optimized parameters\ndata_fit = est_mean + est_std*(.5 * (1 + np.cos((t/6)*np.pi + est_phase)))\n\nplt.plot(data, '.')\nplt.plot(data_fit, label='after fitting')\nplt.plot(data_first_guess, label='first guess')\nplt.legend()\nplt.show()\nprint('Via regionale netten')\nprint('max_demand: %s' %(est_std))\nprint('phase_shift: %s' %(est_phase))\nprint('base_demand: %s' %(est_mean))\n\n#data = b + m*(.5 * (1 + np.cos((t/6)*np.pi))) + 100*np.random.randn(N) # create artificial data with noise\ndata = np.array(df['Elektriciteitscentrales'].values, dtype=np.float64)\n\nguess_mean = np.mean(data)\nguess_std = 3*np.std(data)/(2**0.5)\nguess_phase = 0\n\n# we'll use this to plot our first estimate. This might already be good enough for you\ndata_first_guess = guess_mean + guess_std*(.5 * (1 + np.cos((t/6)*np.pi + guess_phase)))\n\n# Define the function to optimize, in this case, we want to minimize the difference\n# between the actual data and our \"guessed\" parameters\noptimize_func = lambda x: x[0]*(.5 * (1 + np.cos((t/6)*np.pi+x[1]))) + x[2] - data\nest_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]\n\n# recreate the fitted curve using the optimized parameters\ndata_fit = est_mean + est_std*(.5 * (1 + np.cos((t/6)*np.pi + est_phase)))\n\nplt.plot(data, '.')\nplt.plot(data_fit, label='after fitting')\nplt.plot(data_first_guess, label='first guess')\nplt.legend()\nplt.show()\nprint('Elektriciteitscentrales')\nprint('max_demand: %s' %(est_std))\nprint('phase_shift: %s' %(est_phase))\nprint('base_demand: %s' %(est_mean))\n\n#data = b + m*(.5 * (1 + np.cos((t/6)*np.pi))) + 100*np.random.randn(N) # create artificial data with noise\ndata = np.array(df['Overige verbruikers'].values, dtype=np.float64)\n\nguess_mean = np.mean(data)\nguess_std = 3*np.std(data)/(2**0.5)\nguess_phase = 0\nguess_saving = .997\n\n# we'll use this to plot our first estimate. 
This might already be good enough for you\ndata_first_guess = (guess_mean + guess_std*(.5 * (1 + np.cos((t/6)*np.pi + guess_phase)))) #* np.power(guess_saving,t)\n\n# Define the function to optimize, in this case, we want to minimize the difference\n# between the actual data and our \"guessed\" parameters\noptimize_func = lambda x: x[0]*(.5 * (1 + np.cos((t/6)*np.pi+x[1]))) + x[2] - data\nest_std, est_phase, est_mean = leastsq(optimize_func, [guess_std, guess_phase, guess_mean])[0]\n\n# recreate the fitted curve using the optimized parameters\ndata_fit = est_mean + est_std*(.5 * (1 + np.cos((t/6)*np.pi + est_phase)))\n\nplt.plot(data, '.')\nplt.plot(data_fit, label='after fitting')\nplt.plot(data_first_guess, label='first guess')\nplt.legend()\nplt.show()\nprint('Overige verbruikers')\nprint('max_demand: %s' %(est_std))\nprint('phase_shift: %s' %(est_phase))\nprint('base_demand: %s' %(est_mean))",
"price forming\nIn order to estimate willingness to sell en willingness to buy we look at historical data over the past view years. We look at the DayAhead market at the TTF. Altough this data does not reflect real consumption necessarily",
"\ninputexcel = 'TTFDA.xlsx'\noutputexcel = 'pythonoutput.xlsx'\n\nprice = pd.read_excel(inputexcel, sheetname='Sheet1', index_col=0)\nquantity = pd.read_excel(inputexcel, sheetname='Sheet2', index_col=0)\n\nprice.index = pd.to_datetime(price.index, format=\"%d-%m-%y\")\nquantity.index = pd.to_datetime(quantity.index, format=\"%d-%m-%y\")\n\npq = pd.concat([price, quantity], axis=1, join_axes=[price.index])\npqna = pq.dropna()\n\nyear = np.arange(2008,2017,1)\n\ncoefficientyear = []\n\nfor i in year:\n x= pqna['Volume'].sort_index().ix[\"%s\"%i]\n y= pqna['Last'].sort_index().ix[\"%s\"%i]\n #plot the trendline\n plt.plot(x,y,'o')\n # calc the trendline\n z = np.polyfit(x, y, 1)\n p = np.poly1d(z)\n plt.plot(x,p(x),\"r--\", label=\"%s\"%i)\n plt.xlabel(\"Volume\")\n plt.ylabel(\"Price Euro per MWH\")\n plt.title('%s: y=%.10fx+(%.10f)'%(i,z[0],z[1]))\n # plt.savefig('%s.png' %i)\n plt.show()\n # the line equation:\n print(\"y=%.10fx+(%.10f)\"%(z[0],z[1]))\n # save the variables in a list\n coefficientyear.append([i, z[0], z[1]])\n\nlen(year)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
authman/DAT210x
|
Module6/Module6 - Lab2.ipynb
|
mit
|
[
"DAT210x - Programming with Python for DS\nModule6- Lab2",
"import pandas as pd\n\nimport matplotlib.pyplot as plt\nfrom sklearn import svm",
"The dataset used in this lab comes from https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits\nAt face value, this looks like an easy lab, but it has many parts to it, so prepare yourself by reading through it fully before starting.\nConvenience Functions",
"def load(path_train, path_test):\n # Load up the data.\n \n # You probably could have written this easily:\n with open(path_test, 'r') as f: testing = pd.read_csv(f)\n with open(path_train, 'r') as f: training = pd.read_csv(f)\n\n # The number of samples between training and testing can vary\n # But the number of features better remain the same!\n n_features = testing.shape[1]\n\n X_test = testing.ix[:,:n_features-1]\n X_train = training.ix[:,:n_features-1]\n y_test = testing.ix[:,n_features-1:].values.ravel()\n y_train = training.ix[:,n_features-1:].values.ravel()\n\n # Special:\n # ...\n \n return X_train, X_test, y_train, y_test\n\ndef peekData(X_train):\n # The 'targets' or labels are stored in y. The 'samples' or data is stored in X\n print(\"Peeking your data...\")\n fig = plt.figure()\n fig.set_tight_layout(True)\n\n cnt = 0\n for col in range(5):\n for row in range(10):\n plt.subplot(5, 10, cnt + 1)\n plt.imshow(X_train.ix[cnt,:].reshape(8,8), cmap=plt.cm.gray_r, interpolation='nearest')\n plt.axis('off')\n cnt += 1\n \n plt.show()\n\ndef drawPredictions(X_train, X_test, y_train, y_test):\n fig = plt.figure()\n fig.set_tight_layout(True)\n \n # Make some guesses\n y_guess = model.predict(X_test)\n\n # INFO: This is the second lab we're demonstrating how to\n # do multi-plots using matplot lab. In the next assignment(s),\n # it'll be your responsibility to use this and assignment #1\n # as tutorials to add in the plotting code yourself!\n num_rows = 10\n num_cols = 5\n\n index = 0\n for col in range(num_cols):\n for row in range(num_rows):\n plt.subplot(num_cols, num_rows, index + 1)\n\n # 8x8 is the size of the image, 64 pixels\n plt.imshow(X_test.ix[index,:].reshape(8,8), cmap=plt.cm.gray_r, interpolation='nearest')\n\n # Green = Guessed right\n # Red = Fail!\n fontcolor = 'g' if y_test[index] == y_guess[index] else 'r'\n plt.title('Label: %i' % y_guess[index], fontsize=6, color=fontcolor)\n plt.axis('off')\n index += 1\n plt.show()",
"The Assignment",
"# TODO: Pass in the file paths to the .tra and the .tes files:\nX_train, X_test, y_train, y_test = load('', '')",
"Get to know your data. It seems its already well organized in [n_samples, n_features] form. Your dataset looks like (4389, 784). Also your labels are already shaped as [n_samples].",
"peekData(X_train)",
"Create an SVC classifier. Leave C=1, but set gamma to 0.001 and set the kernel to linear. Then train the model on the training data and labels:",
"print(\"Training SVC Classifier...\")\n\n# .. your code here ..",
"Calculate the score of your SVC against the testing data:",
"print(\"Scoring SVC Classifier...\")\n\n# .. your code here ..\nprint(\"Score:\\n\", score)\n\n# Let's get some visual confirmation of accuracy:\ndrawPredictions(X_train, X_test, y_train, y_test)",
"Print out the TRUE value of the 1000th digit in the test set. By TRUE value, we mean, the actual provided, ground-truth label for that sample:",
"# .. your code here ..\n\nprint(\"1000th test label: \", true_1000th_test_value)",
"Predict the value of the 1000th digit in the test set. Was your model's prediction correct? If you get a warning on your predict line, look at the notes from the previous module's labs.",
"# .. your code here ..\n\nprint(\"1000th test prediction: \", guess_1000th_test_value)",
"Use imshow() to display the 1000th test image, so you can visually check if it was a hard image, or an easy image:",
"# .. your code here ..",
"To the Goal\n\n\nWere you able to beat the USPS advertised accuracy score of 98%? If so, STOP and answer the lab questions. But if you weren't able to get that high of an accuracy score, go back and change your SVC's kernel to 'poly' and re-run your lab again.\n\n\nWere you able to beat the USPS advertised accuracy score of 98%? If so, STOP and answer the lab questions. But if you weren't able to get that high of an accuracy score, go back and change your SVC's kernel to 'rbf' and re-run your lab again.\n\n\nWere you able to beat the USPS advertised accuracy score of 98%? If so, STOP and answer the lab questions. But if you weren't able to get that high of an accuracy score, go back and tinker with your gamma value and C value until you're able to beat the USPS. Don't stop tinkering until you do. =).\n\n\nMore Tasks\nOnly after you're able to beat the +98% accuracy score of the USPS, go back into the load() method and look for the line that reads # Special:\nImmediately under that line, ONLY alter X_train and y_train. Keep just the FIRST 4% of the samples. In other words, for every 100 samples found, throw away 96 of them. To make this easy, keep the samples and labels from th beginning of your X_train and y_train vectors.\nIf the first 4% of your train vector's size yields is a decimal number, then use ceil to round up to the nearest whole integer.\nThis operation might require some Pandas indexing skills, or rather some numpy indexing skills, if you'd like to go that route. Feel free to ask on the class forum if you'd like a tip on how to do this; but try to exercise your own muscles first! \nRe-Run your application after throwing away 96% your training data. What accuracy score do you get now?\nEven More Tasks...\nChange your kernel back to linear and run your assignment one last time. What's the accuracy score this time?\nSurprised?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mspieg/principals-appmath
|
Modes.ipynb
|
cc0-1.0
|
[
"<table>\n <tr align=left><td><img align=left src=\"./images/CC-BY.png\">\n <td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman</td>\n</table>",
"import numpy as np\nimport scipy.linalg as la\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Fun with Fourier modes\nGOAL: visualize some basic behavior of Simple Eigenmodes\nModes of a square drum\nHere we will visualize the Fourier Modes for the eigenfunctions of \n$$-\\nabla^2 \\phi = \\lambda\\phi$$\nwith dirichlet Boundary conditions on the unit square $\\Omega = [0,1]\\times[0,1]$. The modes are \n$$\n \\phi_{nm}(x,y)=\\sin(n\\pi x)\\sin(m\\pi y)\n$$\nwith positive eigenvalues \n$$\n \\lambda_{nm} = \\pi^2(n^2 + m^2)\n$$",
"x = np.linspace(0,1,200)\nX,Y = np.meshgrid(x,x)\n\ndef seeMode(n,m,X,Y):\n phi_nm = lambda n,m: np.sin(n*np.pi*X)*np.sin(m*np.pi*Y)\n plt.figure(figsize=(8,6.2))\n plt.contourf(X,Y,phi_nm(n,m))\n plt.colorbar()\n title = '$\\phi_{'+'{},{}'.format(n,m)+'}$'\n plt.title(title,fontsize=24)\n plt.axis('equal')\n plt.show()\n\nseeMode(1,1,X,Y)\nseeMode(1,2,X,Y)\nseeMode(2,1,X,Y)\nseeMode(2,2,X,Y)\nseeMode(10,20,X,Y)\n\n",
"S## solution of Poisson's Equation with unit forcing\nsolve\n$$\n -\\nabla^2 u = 1\n$$ \non $x\\in[0,1]\\times[0,1]$ with $u=0$ on $\\partial\\Omega$",
"x = np.linspace(0,1,200)\nX,Y = np.meshgrid(x,x)\n \n\ndef solvePoisson(N,X,Y,c=0):\n phi_nm = lambda n,m: np.sin(n*np.pi*X)*np.sin(m*np.pi*Y)\n \n u = np.zeros(X.shape)\n for n in range(1,N+1):\n for m in range(1,N+1):\n bnm = 4./(np.pi**2*m*n)*(np.cos(n*np.pi)-1.)*(np.cos(m*np.pi) - 1.)\n bnm -= 4.*c*m/n*(np.cos(n*np.pi)-1.)\n lambda_nm = np.pi**2*(n*n + m*m)\n u += bnm/lambda_nm*phi_nm(n,m)\n \n return u\n\n\nN=50\nu = solvePoisson(N,X,Y)\n\nplt.figure(figsize=(8,6.2))\nplt.contourf(X,Y,u)\nplt.colorbar()\ntitle = 'Poisson: $f=1$, N={}, $u_{{max}}={:5.5f}$'.format(N,np.max(u))\nplt.title(title,fontsize=24)\nplt.axis('equal')\nplt.show()\n\nN=200\nu = solvePoisson(N,X,Y,c=.1)\nplt.figure(figsize=(8,6.2))\nplt.contourf(X,Y,u)\nplt.colorbar()\ntitle = 'Poisson: $f=1$, N={}, $u_{{max}}={:5.5f}$'.format(N,np.max(u))\nplt.title(title,fontsize=24)\nplt.axis('equal')\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ImAlexisSaez/deep-learning-specialization-coursera
|
course_2/week_1/assignment_3/gradient_checking.ipynb
|
mit
|
[
"Gradient Checking\nWelcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking. \nYou are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker. \nBut backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, \"Give me a proof that your backpropagation is actually working!\" To give this reassurance, you are going to use \"gradient checking\".\nLet's do it!",
"# Packages\nimport numpy as np\nfrom testCases import *\nfrom gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector",
"1) How does gradient checking work?\nBackpropagation computes the gradients $\\frac{\\partial J}{\\partial \\theta}$, where $\\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.\nBecause forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\\frac{\\partial J}{\\partial \\theta}$. \nLet's look back at the definition of a derivative (or gradient):\n$$ \\frac{\\partial J}{\\partial \\theta} = \\lim_{\\varepsilon \\to 0} \\frac{J(\\theta + \\varepsilon) - J(\\theta - \\varepsilon)}{2 \\varepsilon} \\tag{1}$$\nIf you're not familiar with the \"$\\displaystyle \\lim_{\\varepsilon \\to 0}$\" notation, it's just a way of saying \"when $\\varepsilon$ is really really small.\"\nWe know the following:\n\n$\\frac{\\partial J}{\\partial \\theta}$ is what you want to make sure you're computing correctly. \nYou can compute $J(\\theta + \\varepsilon)$ and $J(\\theta - \\varepsilon)$ (in the case that $\\theta$ is a real number), since you're confident your implementation for $J$ is correct. \n\nLets use equation (1) and a small value for $\\varepsilon$ to convince your CEO that your code for computing $\\frac{\\partial J}{\\partial \\theta}$ is correct!\n2) 1-dimensional gradient checking\nConsider a 1D linear function $J(\\theta) = \\theta x$. The model contains only a single real-valued parameter $\\theta$, and takes $x$ as input.\nYou will implement code to compute $J(.)$ and its derivative $\\frac{\\partial J}{\\partial \\theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct. \n<img src=\"images/1Dgrad_kiank.png\" style=\"width:600px;height:250px;\">\n<caption><center> <u> Figure 1 </u>: 1D linear model<br> </center></caption>\nThe diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ (\"forward propagation\"). Then compute the derivative $\\frac{\\partial J}{\\partial \\theta}$ (\"backward propagation\"). \nExercise: implement \"forward propagation\" and \"backward propagation\" for this simple function. I.e., compute both $J(.)$ (\"forward propagation\") and its derivative with respect to $\\theta$ (\"backward propagation\"), in two separate functions.",
"# GRADED FUNCTION: forward_propagation\n\ndef forward_propagation(x, theta):\n \"\"\"\n Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)\n \n Arguments:\n x -- a real-valued input\n theta -- our parameter, a real number as well\n \n Returns:\n J -- the value of function J, computed using the formula J(theta) = theta * x\n \"\"\"\n \n ### START CODE HERE ### (approx. 1 line)\n J = theta * x\n ### END CODE HERE ###\n \n return J\n\nx, theta = 2, 4\nJ = forward_propagation(x, theta)\nprint (\"J = \" + str(J))",
"Expected Output:\n<table style=>\n <tr>\n <td> ** J ** </td>\n <td> 8</td>\n </tr>\n</table>\n\nExercise: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\\theta) = \\theta x$ with respect to $\\theta$. To save you from doing the calculus, you should get $dtheta = \\frac { \\partial J }{ \\partial \\theta} = x$.",
"# GRADED FUNCTION: backward_propagation\n\ndef backward_propagation(x, theta):\n \"\"\"\n Computes the derivative of J with respect to theta (see Figure 1).\n \n Arguments:\n x -- a real-valued input\n theta -- our parameter, a real number as well\n \n Returns:\n dtheta -- the gradient of the cost with respect to theta\n \"\"\"\n \n ### START CODE HERE ### (approx. 1 line)\n dtheta = x\n ### END CODE HERE ###\n \n return dtheta\n\nx, theta = 2, 4\ndtheta = backward_propagation(x, theta)\nprint (\"dtheta = \" + str(dtheta))",
"Expected Output:\n<table>\n <tr>\n <td> ** dtheta ** </td>\n <td> 2 </td>\n </tr>\n</table>\n\nExercise: To show that the backward_propagation() function is correctly computing the gradient $\\frac{\\partial J}{\\partial \\theta}$, let's implement gradient checking.\nInstructions:\n- First compute \"gradapprox\" using the formula above (1) and a small value of $\\varepsilon$. Here are the Steps to follow:\n 1. $\\theta^{+} = \\theta + \\varepsilon$\n 2. $\\theta^{-} = \\theta - \\varepsilon$\n 3. $J^{+} = J(\\theta^{+})$\n 4. $J^{-} = J(\\theta^{-})$\n 5. $gradapprox = \\frac{J^{+} - J^{-}}{2 \\varepsilon}$\n- Then compute the gradient using backward propagation, and store the result in a variable \"grad\"\n- Finally, compute the relative difference between \"gradapprox\" and the \"grad\" using the following formula:\n$$ difference = \\frac {\\mid\\mid grad - gradapprox \\mid\\mid_2}{\\mid\\mid grad \\mid\\mid_2 + \\mid\\mid gradapprox \\mid\\mid_2} \\tag{2}$$\nYou will need 3 Steps to compute this formula:\n - 1'. compute the numerator using np.linalg.norm(...)\n - 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.\n - 3'. divide them.\n- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.",
"# GRADED FUNCTION: gradient_check\n\ndef gradient_check(x, theta, epsilon = 1e-7):\n \"\"\"\n Implement the backward propagation presented in Figure 1.\n \n Arguments:\n x -- a real-valued input\n theta -- our parameter, a real number as well\n epsilon -- tiny shift to the input to compute approximated gradient with formula(1)\n \n Returns:\n difference -- difference (2) between the approximated gradient and the backward propagation gradient\n \"\"\"\n \n # Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.\n ### START CODE HERE ### (approx. 5 lines)\n thetaplus = theta + epsilon # Step 1\n thetaminus = theta - epsilon # Step 2\n J_plus = forward_propagation(x, thetaplus) # Step 3\n J_minus = forward_propagation(x, thetaminus) # Step 4\n gradapprox = (J_plus - J_minus) / (2 * epsilon) # Step 5\n ### END CODE HERE ###\n \n # Check if gradapprox is close enough to the output of backward_propagation()\n ### START CODE HERE ### (approx. 1 line)\n grad = backward_propagation(x, theta)\n ### END CODE HERE ###\n \n ### START CODE HERE ### (approx. 1 line)\n numerator = np.linalg.norm(grad - gradapprox) # Step 1'\n denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'\n difference = numerator / denominator # Step 3'\n ### END CODE HERE ###\n \n if difference < 1e-7:\n print (\"The gradient is correct!\")\n else:\n print (\"The gradient is wrong!\")\n \n return difference\n\nx, theta = 2, 4\ndifference = gradient_check(x, theta)\nprint(\"difference = \" + str(difference))",
"Expected Output:\nThe gradient is correct!\n<table>\n <tr>\n <td> ** difference ** </td>\n <td> 2.9193358103083e-10 </td>\n </tr>\n</table>\n\nCongrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in backward_propagation(). \nNow, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!\n3) N-dimensional gradient checking\nThe following figure describes the forward and backward propagation of your fraud detection model.\n<img src=\"images/NDgrad_kiank.png\" style=\"width:600px;height:400px;\">\n<caption><center> <u> Figure 2 </u>: deep neural network<br>LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID</center></caption>\nLet's look at your implementations for forward propagation and backward propagation.",
"def forward_propagation_n(X, Y, parameters):\n \"\"\"\n Implements the forward propagation (and computes the cost) presented in Figure 3.\n \n Arguments:\n X -- training set for m examples\n Y -- labels for m examples \n parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\":\n W1 -- weight matrix of shape (5, 4)\n b1 -- bias vector of shape (5, 1)\n W2 -- weight matrix of shape (3, 5)\n b2 -- bias vector of shape (3, 1)\n W3 -- weight matrix of shape (1, 3)\n b3 -- bias vector of shape (1, 1)\n \n Returns:\n cost -- the cost function (logistic cost for one example)\n \"\"\"\n \n # retrieve parameters\n m = X.shape[1]\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n W3 = parameters[\"W3\"]\n b3 = parameters[\"b3\"]\n\n # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID\n Z1 = np.dot(W1, X) + b1\n A1 = relu(Z1)\n Z2 = np.dot(W2, A1) + b2\n A2 = relu(Z2)\n Z3 = np.dot(W3, A2) + b3\n A3 = sigmoid(Z3)\n\n # Cost\n logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)\n cost = 1./m * np.sum(logprobs)\n \n cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)\n \n return cost, cache",
"Now, run backward propagation.",
"def backward_propagation_n(X, Y, cache):\n \"\"\"\n Implement the backward propagation presented in figure 2.\n \n Arguments:\n X -- input datapoint, of shape (input size, 1)\n Y -- true \"label\"\n cache -- cache output from forward_propagation_n()\n \n Returns:\n gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.\n \"\"\"\n \n m = X.shape[1]\n (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache\n \n dZ3 = A3 - Y\n dW3 = 1./m * np.dot(dZ3, A2.T)\n db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)\n \n dA2 = np.dot(W3.T, dZ3)\n dZ2 = np.multiply(dA2, np.int64(A2 > 0))\n dW2 = 1./m * np.dot(dZ2, A1.T) * 2\n db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)\n \n dA1 = np.dot(W2.T, dZ2)\n dZ1 = np.multiply(dA1, np.int64(A1 > 0))\n dW1 = 1./m * np.dot(dZ1, X.T)\n db1 = 4./m * np.sum(dZ1, axis=1, keepdims = True)\n \n gradients = {\"dZ3\": dZ3, \"dW3\": dW3, \"db3\": db3,\n \"dA2\": dA2, \"dZ2\": dZ2, \"dW2\": dW2, \"db2\": db2,\n \"dA1\": dA1, \"dZ1\": dZ1, \"dW1\": dW1, \"db1\": db1}\n \n return gradients",
"You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.\nHow does gradient checking work?.\nAs in 1) and 2), you want to compare \"gradapprox\" to the gradient computed by backpropagation. The formula is still:\n$$ \\frac{\\partial J}{\\partial \\theta} = \\lim_{\\varepsilon \\to 0} \\frac{J(\\theta + \\varepsilon) - J(\\theta - \\varepsilon)}{2 \\varepsilon} \\tag{1}$$\nHowever, $\\theta$ is not a scalar anymore. It is a dictionary called \"parameters\". We implemented a function \"dictionary_to_vector()\" for you. It converts the \"parameters\" dictionary into a vector called \"values\", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.\nThe inverse function is \"vector_to_dictionary\" which outputs back the \"parameters\" dictionary.\n<img src=\"images/dictionary_to_vector.png\" style=\"width:600px;height:400px;\">\n<caption><center> <u> Figure 2 </u>: dictionary_to_vector() and vector_to_dictionary()<br> You will need these functions in gradient_check_n()</center></caption>\nWe have also converted the \"gradients\" dictionary into a vector \"grad\" using gradients_to_vector(). You don't need to worry about that.\nExercise: Implement gradient_check_n().\nInstructions: Here is pseudo-code that will help you implement the gradient check.\nFor each i in num_parameters:\n- To compute J_plus[i]:\n 1. Set $\\theta^{+}$ to np.copy(parameters_values)\n 2. Set $\\theta^{+}_i$ to $\\theta^{+}_i + \\varepsilon$\n 3. Calculate $J^{+}_i$ using to forward_propagation_n(x, y, vector_to_dictionary($\\theta^{+}$ )). \n- To compute J_minus[i]: do the same thing with $\\theta^{-}$\n- Compute $gradapprox[i] = \\frac{J^{+}_i - J^{-}_i}{2 \\varepsilon}$\nThus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameter_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute: \n$$ difference = \\frac {\\| grad - gradapprox \\|_2}{\\| grad \\|_2 + \\| gradapprox \\|_2 } \\tag{3}$$",
"# GRADED FUNCTION: gradient_check_n\n\ndef gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):\n \"\"\"\n Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n\n \n Arguments:\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\":\n grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters. \n x -- input datapoint, of shape (input size, 1)\n y -- true \"label\"\n epsilon -- tiny shift to the input to compute approximated gradient with formula(1)\n \n Returns:\n difference -- difference (2) between the approximated gradient and the backward propagation gradient\n \"\"\"\n \n # Set-up variables\n parameters_values, _ = dictionary_to_vector(parameters)\n grad = gradients_to_vector(gradients)\n num_parameters = parameters_values.shape[0]\n J_plus = np.zeros((num_parameters, 1))\n J_minus = np.zeros((num_parameters, 1))\n gradapprox = np.zeros((num_parameters, 1))\n \n # Compute gradapprox\n for i in range(num_parameters):\n \n # Compute J_plus[i]. Inputs: \"parameters_values, epsilon\". Output = \"J_plus[i]\".\n # \"_\" is used because the function you have to outputs two parameters but we only care about the first one\n ### START CODE HERE ### (approx. 3 lines)\n thetaplus = np.copy(parameters_values) # Step 1\n thetaplus[i][0] = thetaplus[i][0] + epsilon # Step 2\n J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3\n ### END CODE HERE ###\n \n # Compute J_minus[i]. Inputs: \"parameters_values, epsilon\". Output = \"J_minus[i]\".\n ### START CODE HERE ### (approx. 3 lines)\n thetaminus = np.copy(parameters_values) # Step 1\n thetaminus[i][0] = thetaminus[i][0] - epsilon # Step 2 \n J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3\n ### END CODE HERE ###\n \n # Compute gradapprox[i]\n ### START CODE HERE ### (approx. 1 line)\n gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)\n ### END CODE HERE ###\n \n # Compare gradapprox to backward propagation gradients by computing difference.\n ### START CODE HERE ### (approx. 1 line)\n numerator = np.linalg.norm(grad - gradapprox) # Step 1'\n denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'\n difference = numerator / denominator # Step 3'\n ### END CODE HERE ###\n\n if difference > 1e-7:\n print (\"\\033[93m\" + \"There is a mistake in the backward propagation! difference = \" + str(difference) + \"\\033[0m\")\n else:\n print (\"\\033[92m\" + \"Your backward propagation works perfectly fine! difference = \" + str(difference) + \"\\033[0m\")\n \n return difference\n\nX, Y, parameters = gradient_check_n_test_case()\n\ncost, cache = forward_propagation_n(X, Y, parameters)\ngradients = backward_propagation_n(X, Y, cache)\ndifference = gradient_check_n(parameters, gradients, X, Y)",
"Expected output:\n<table>\n <tr>\n <td> ** There is a mistake in the backward propagation!** </td>\n <td> difference = 0.285093156781 </td>\n </tr>\n</table>\n\nIt seems that there were errors in the backward_propagation_n code we gave you! Good that you've implemented the gradient check. Go back to backward_propagation and try to find/correct the errors (Hint: check dW2 and db1). Rerun the gradient check when you think you've fixed it. Remember you'll need to re-execute the cell defining backward_propagation_n() if you modify the code. \nCan you get gradient check to declare your derivative computation correct? Even though this part of the assignment isn't graded, we strongly urge you to try to find the bug and re-run gradient check until you're convinced backprop is now correctly implemented. \nNote \n- Gradient Checking is slow! Approximating the gradient with $\\frac{\\partial J}{\\partial \\theta} \\approx \\frac{J(\\theta + \\varepsilon) - J(\\theta - \\varepsilon)}{2 \\varepsilon}$ is computationally costly. For this reason, we don't run gradient checking at every iteration during training. Just a few times to check if the gradient is correct. \n- Gradient Checking, at least as we've presented it, doesn't work with dropout. You would usually run the gradient check algorithm without dropout to make sure your backprop is correct, then add dropout. \nCongrats, you can be confident that your deep learning model for fraud detection is working correctly! You can even use this to convince your CEO. :) \n<font color='blue'>\nWhat you should remember from this notebook:\n- Gradient checking verifies closeness between the gradients from backpropagation and the numerical approximation of the gradient (computed using forward propagation).\n- Gradient checking is slow, so we don't run it in every iteration of training. You would usually run it only to make sure your code is correct, then turn it off and use backprop for the actual learning process."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Ttl/scikit-rf
|
doc/source/tutorials/Plotting.ipynb
|
bsd-3-clause
|
[
"Plotting\nIntroduction\nThis tutorial describes skrf's plotting features. If you would like to use skrf's matplotlib interface with skrf styling, start with this",
"%matplotlib inline\nimport skrf as rf\nrf.stylely()",
"Plotting Methods\nPlotting functions are implemented as methods of the Network class.\n\nNetwork.plot_s_re\nNetwork.plot_s_im\nNetwork.plot_s_mag\nNetwork.plot_s_db\n...\n\nSimilar methods exist for Impedance (Network.z) and Admittance Parameters (Network.y), \n\nNetwork.plot_z_re\nNetwork.plot_z_im\n...\nNetwork.plot_y_re\nNetwork.plot_y_im\n...\n\nSmith Chart\nAs a first example, load a Network and plot all four s-parameters on the Smith chart.",
"from skrf import Network\n\nring_slot = Network('data/ring slot.s2p')\nring_slot.plot_s_smith()\n\nring_slot.plot_s_smith(draw_labels=True)",
"Another common option is to draw addmitance contours, instead of impedance. This is controled through the chart_type argument.",
"ring_slot.plot_s_smith(chart_type='y')",
"See skrf.plotting.smith() for more info on customizing the Smith Chart. \nComplex Plane\nNetwork parameters can also be plotted in the complex plane without a Smith Chart through Network.plot_s_complex.",
"ring_slot.plot_s_complex()\n\nfrom matplotlib import pyplot as plt\nplt.axis('equal') # otherwise circles wont be circles",
"Log-Magnitude\nScalar components of the complex network parameters can be plotted vs \nfrequency as well. To plot the log-magnitude of the s-parameters vs. frequency,",
"ring_slot.plot_s_db()",
"When no arguments are passed to the plotting methods, all parameters are plotted. Single parameters can be plotted by passing indices m and n to the plotting commands (indexing start from 0). Comparing the simulated reflection coefficient off the ring slot to a measurement,",
"from skrf.data import ring_slot_meas\nring_slot.plot_s_db(m=0,n=0, label='Theory') \nring_slot_meas.plot_s_db(m=0,n=0, label='Measurement') ",
"Phase\nPlot phase,",
"ring_slot.plot_s_deg()",
"Or unwrapped phase,",
"ring_slot.plot_s_deg_unwrap()",
"Phase is radian (rad) is also available\nGroup Delay\nA Network has a plot() method which creates a rectangular plot of the argument vs frequency. This can be used to make plots are arent 'canned'. For example group delay",
"gd = abs(ring_slot.s21.group_delay) *1e9 # in ns\n\nring_slot.plot(gd)\nplt.ylabel('Group Delay (ns)')\nplt.title('Group Delay of Ring Slot S21')",
"Impedance, Admittance\nThe components the Impendance and Admittance parameters can be plotted \nsimilarly,",
"ring_slot.plot_z_im()\n\nring_slot.plot_y_im()",
"Customizing Plots\nThe legend entries are automatically filled in with the Network's Network.name. The entry can be overidden by passing the label argument to the plot method.",
"ring_slot.plot_s_db(m=0,n=0, label = 'Simulation')",
"The frequency unit used on the x-axis is automatically filled in from \nthe Networks Network.frequency.unit attribute. To change\nthe label, change the frequency's unit.",
"ring_slot.frequency.unit = 'mhz'\nring_slot.plot_s_db(0,0)",
"Other key word arguments given to the plotting methods are passed through to the matplotlib matplotlib.pyplot.plot function.",
"ring_slot.frequency.unit='ghz'\nring_slot.plot_s_db(m=0,n=0, linewidth = 3, linestyle = '--', label = 'Simulation')\nring_slot_meas.plot_s_db(m=0,n=0, marker = 'o', markevery = 10,label = 'Measured')\n",
"All components of the plots can be customized through matplotlib functions, and styles can be used with a context manager.",
"from matplotlib import pyplot as plt\nfrom matplotlib import style \n\nwith style.context('seaborn-ticks'):\n ring_slot.plot_s_smith()\n plt.xlabel('Real Part');\n plt.ylabel('Imaginary Part');\n plt.title('Smith Chart With Legend Room');\n plt.axis([-1.1,2.1,-1.1,1.1])\n plt.legend(loc=5)\n ",
"Saving Plots\nPlots can be saved in various file formats using the GUI provided by the matplotlib. However, skrf provides a convenience function, called skrf.plotting.save_all_figs, that allows all open figures to be saved to disk in multiple file formats, with filenames pulled from each figure's title,\nfrom skrf.plotting import save_all_figs\nsave_all_figs('data/', format=['png','eps','pdf'])\n\nAdding Markers Post Plot\nA common need is to make a color plot, interpretable in greyscale print. \nThe skrf.plotting.add_markers_to_lines adds different markers each line in a plots after the plot has been made, which is usually when you remember to add them.",
"from skrf import plotting\nwith style.context('printable'):\n ring_slot.plot_s_deg()\n plotting.add_markers_to_lines()\n plt.legend() # have to re-generate legend\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
SevData/fast_kmeans
|
Fast KMeans Notebook.ipynb
|
mit
|
[
"Fast KMEANS example\nThis is an example on how to use the accelerated KMEANS function written in C and built with cython.\nFirst let's import some libraries and the external KMEANS C extention built with Cython :",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport random\nfrom Cfast_km import Cfast_km",
"Then, set the number of target clusters, number of features and number of samples:",
"nb_cluster = 5\nnb_features = 2\nnb_samples = 500\nnb_cluster, nb_features,nb_samples",
"Randomly draw some samples and initial centroids. It is important to convert the lists to numpy arrays with 'double' type so the C function recognizes them.",
"X = [[random.random() for i in range(nb_features)] for j in range(nb_samples)] \nmu = random.sample(X, nb_cluster)\nX = np.array(X, dtype=\"double\")\nmu = np.array(mu, dtype=\"double\")\nmu",
"Set the initial weights for each features at 1 (all features have the same importance).\nAlso we need create the labels list, which will be passed to Cfast_km function to get the cluster number for each sample",
"weights = np.array([1 for i in range(nb_features)], dtype=\"double\")\nlabels = np.zeros(len(X), dtype=\"int\")",
"Let's execute the Fast KMEANS functions. Results will be collected through mu (centroids) and labels (samples labels). The last argument (set as 0) is the tolerance : increasing it will stop the process in earlier steps without a full convergence, but will save some computing time.",
"Cfast_km(X , mu , labels, weights, 0)\nmu, labels",
"Let's plot the results. We keep the first 2 features for 2D plotting",
"cmap = { 0:'k',1:'b',2:'y',3:'g',4:'r' }\n\nfig = plt.figure(figsize=(5,5))\nplt.xlim(0,1)\nplt.ylim(0,1)\n\n\nfor i in range(nb_cluster):\n X_extract = X[labels == i]\n X1 = np.transpose(X_extract[:,0]).tolist()\n X2 = np.transpose(X_extract[:,1]).tolist()\n plt.plot(X1, X2, cmap[i%5]+'.', alpha=0.5)\n\n\nmu1 = np.transpose(mu[:,0]).tolist()\nmu2 = np.transpose(mu[:,1]).tolist()\nplt.plot(mu1, mu2, 'ro')\n\nplt.show()\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
danielfather7/teach_Python
|
lecture/06.Procedural_Python.part3.ipynb
|
gpl-3.0
|
[
"Procedural programming in python\nTopics\n\nFlow control, part 2\nFunctions\nIn class exercise:\nFunctionalize this!\nFrom nothing to something:\nPairwise correlation between rows in a pandas dataframe\nSketch of the process\nIn class exercise:\nWrite the code!\n\n\nRejoining, sharing ideas, problems, thoughts\n\n<hr>\n\n<hr>\nFlow control\n<img src=\"https://docs.oracle.com/cd/B19306_01/appdev.102/b14261/lnpls008.gif\">Flow control figure</img>\nFlow control refers how to programs do loops, conditional execution, and order of functional operations. \nIf\nIf statements can be use to execute some lines or block of code if a particular condition is satisfied. E.g. Let's print something based on the entries in the list.",
"instructors = ['Dave', 'Jim', 'Dorkus the Clown']\n\nif 'Dorkus the Clown' in instructors:\n print('#fakeinstructor')",
"There is a special do nothing word: pass that skips over some arm of a conditional, e.g.",
"if 'Jim' in instructors:\n print(\"Congratulations! Jim is teaching, your class won't stink!\")\nelse:\n pass",
"For\nFor loops are the standard loop, though while is also common. For has the general form:\nfor items in list:\n do stuff\nFor loops and collections like tuples, lists and dictionaries are natural friends.",
"for instructor in instructors:\n print(instructor)",
"You can combine loops and conditionals:",
"for instructor in instructors:\n if instructor.endswith('Clown'):\n print(instructor + \" doesn't sound like a real instructor name!\")\n else:\n print(instructor + \" is so smart... all those gooey brains!\")",
"range()\nSince for operates over lists, it is common to want to do something like:\nNOTE: C-like\nfor (i = 0; i < 3; ++i) {\n print(i);\n}\nThe Python equivalent is:\nfor i in [0, 1, 2]:\n do something with i\nWhat happens when the range you want to sample is big, e.g.\nNOTE: C-like\nfor (i = 0; i < 1000000000; ++i) {\n print(i);\n}\nThat would be a real pain in the rear to have to write out the entire list from 1 to 1000000000.\nEnter, the range() function. E.g.\n range(3) is [0, 1, 2]",
"sum = 0\nfor i in range(10):\n sum += i\nprint(sum)",
"<hr>\n\nFunctions\nFor loops let you repeat some code for every item in a list. Functions are similar in that they run the same lines of code for new values of some variable. They are different in that functions are not limited to looping over items.\nFunctions are a critical part of writing easy to read, reusable code.\nCreate a function like:\ndef function_name (parameters):\n \"\"\"\n docstring\n \"\"\"\n function expressions\n return [variable]\nNote: Sometimes I use the word argument in place of parameter.\nHere is a simple example. It prints a string that was passed in and returns nothing.",
"def print_string(str):\n \"\"\"This prints out a string passed as the parameter.\"\"\"\n print(str)\n for c in str:\n print(c)\n if c == 'r':\n break\n print(\"done\")\n return\n\n\nprint_string(\"string\")",
"To call the function, use:\nprint_string(\"Dave is awesome!\")\nNote: The function has to be defined before you can call it!",
"print_string(\"Dave is awesome!\")",
"If you don't provide an argument or too many, you get an error.",
"#print_string()",
"Parameters (or arguments) in Python are all passed by reference. This means that if you modify the parameters in the function, they are modified outside of the function.\nSee the following example:\n```\ndef change_list(my_list):\n \"\"\"This changes a passed list into this function\"\"\"\n my_list.append('four');\n print('list inside the function: ', my_list)\n return\nmy_list = [1, 2, 3];\nprint('list before the function: ', my_list)\nchange_list(my_list);\nprint('list after the function: ', my_list)\n```",
"def change_list(my_list):\n \"\"\"This changes a passed list into this function\"\"\"\n my_list.append('four');\n print('list inside the function: ', my_list)\n return\n\nmy_list = [1, 2, 3];\nprint('list before the function: ', my_list)\nchange_list(my_list);\nprint('list after the function: ', my_list)",
"Variables have scope: global and local\nIn a function, new variables that you create are not saved when the function returns - these are local variables. Variables defined outside of the function can be accessed but not changed - these are global variables, Note there is a way to do this with the global keyword. Generally, the use of global variables is not encouraged, instead use parameters.\n```\nmy_global_1 = 'bad idea'\nmy_global_2 = 'another bad one'\nmy_global_3 = 'better idea'\ndef my_function():\n print(my_global_1)\n my_global_2 = 'broke your global, man!'\n global my_global_3\n my_global_3 = 'still a better idea'\n return\nmy_function()\nprint(my_global_2)\nprint(my_global_3)\n```",
"my_global_1 = 'bad idea'\nmy_global_2 = 'another bad one'\nmy_global_3 = 'better idea'\n\ndef my_function():\n print(my_global_1)\n my_global_2 = 'broke your global, man!'\n print(my_global_2)\n global my_global_3\n my_global_3 = 'still a better idea'\n return\n\nmy_function()\nprint(my_global_2)\nprint(my_global_3)",
"In general, you want to use parameters to provide data to a function and return a result with the return. E.g.\ndef sum(x, y):\n my_sum = x + y\n return my_sum\nIf you are going to return multiple objects, what data structure that we talked about can be used? Give and example below.",
"def a_function(parameter):\n return None\n\n\n\nfoo = a_function('bar')\nprint(foo)",
"Parameters have three different types:\n| type | behavior |\n|------|----------|\n| required | positional, must be present or error, e.g. my_func(first_name, last_name) |\n| keyword | position independent, e.g. my_func(first_name, last_name) can be called my_func(first_name='Dave', last_name='Beck') or my_func(last_name='Beck', first_name='Dave') |\n| default | keyword params that default to a value if not provided |",
"def print_name(first, last='the Clown'):\n print('Your name is %s %s' % (first, last))\n return",
"Take a minute and play around with the above function. Which are required? Keyword? Default?",
"def massive_correlation_analysis(data, method='pearson'):\n pass\n return",
"Functions can contain any code that you put anywhere else including:\n* if...elif...else\n* for...else\n* while\n* other function calls",
"def print_name_age(first, last, age):\n print_name(first, last)\n print('Your age is %d' % (age))\n print('Your age is ' + str(age))\n if age > 35:\n print('You are really old.')\n return\n\nprint_name_age(age=40, last='Beck', first='Dave')\n",
"Once you have some code that is functionalized and not going to change, you can move it to a file that ends in .py, check it into version control, import it into your notebook and use it!\nLet's do this now for the above two functions.\n...\nSee you after the break!\n\nImport the function...\nCall them!\n<hr>\nHacky Hack Time with Functions!\nNotes from last class:\n* The os package has tools for checking if a file exists: os.path.exists\nimport os\nfilename = 'HCEPDB_moldata.zip'\nif os.path.exists(filename):\n print(\"wahoo!\")\n* Use the requests package to get the file given a url (got this from the requests docs)\nimport requests\nurl = 'http://faculty.washington.edu/dacb/HCEPDB_moldata.zip'\nreq = requests.get(url)\nassert req.status_code == 200 # if the download failed, this line will generate an error\nwith open(filename, 'wb') as f:\n f.write(req.content)\n* Use the zipfile package to decompress the file while reading it into pandas\nimport pandas as pd\nimport zipfile\ncsv_filename = 'HCEPDB_moldata.csv'\nzf = zipfile.ZipFile(filename)\ndata = pd.read_csv(zf.open(csv_filename))\nHere was my solution\n```\nimport os\nimport requests\nimport pandas as pd\nimport zipfile\nfilename = 'HCEPDB_moldata.zip'\nurl = 'http://faculty.washington.edu/dacb/HCEPDB_moldata.zip'\ncsv_filename = 'HCEPDB_moldata.csv'\nif os.path.exists(filename):\n pass\nelse:\n req = requests.get(url)\n assert req.status_code == 200 # if the download failed, this line will generate an error\n with open(filename, 'wb') as f:\n f.write(req.content)\nzf = zipfile.ZipFile(filename)\ndata = pd.read_csv(zf.open(csv_filename))\n```\nMy solution:",
"def download_if_not_exists(url, filename):\n if os.path.exists(filename):\n pass\n else:\n req = requests.get(url)\n assert req.status_code == 200 # if the download failed, this line will generate an error\n with open(filename, 'wb') as f:\n f.write(req.content)\n\ndef load_HCEPDB_data(url, zip_filename, csv_filename):\n download_if_not_exists(url, zip_filename)\n zf = zipfile.ZipFile(zip_filename)\n data = pd.read_csv(zf.open(csv_filename))\n return data\n\nimport os\nimport requests\nimport pandas as pd\nimport zipfile\n\nload_HCEPDB_data('http://faculty.washington.edu/dacb/HCEPDB_moldata_set1.zip', 'HCEPDB_moldata_set1.zip', 'HCEPDB_moldata_set1.csv')",
"How many functions did you use?\nWhy did you choose to use functions for these pieces?\n<HR>\nFrom something to nothing\nTask: Compute the pairwise Pearson correlation between rows in a dataframe.\nLet's say we have three molecules (A, B, C) with three measurements each (v1, v2, v3). So for each molecule we have a vector of measurements:\n$$X=\\begin{bmatrix}\n X_{v_{1}} \\\n X_{v_{2}} \\\n X_{v_{3}} \\\n \\end{bmatrix} $$\nWhere X is a molecule and the components are the values for each of the measurements. These make up the rows in our matrix.\nOften, we want to compare molecules to determine how similar or different they are. One measure is the Pearson correlation.\nPearson correlation: <img src=\"https://wikimedia.org/api/rest_v1/media/math/render/svg/01d103c10e6d4f477953a9b48c69d19a954d978a\"/>\nExpressed graphically, when you plot the paired measurements for two samples (in this case molecules) against each other you can see positively correlated, no correlation, and negatively correlated. Eg.\n<img src=\"http://www.statisticshowto.com/wp-content/uploads/2012/10/pearson-2-small.png\"/>\nSimple input dataframe (note when you are writing code it is always a good idea to have a simple test case where you can readily compute by hand or know the output):\n| index | v1 | v2 | v3 |\n|-------|----|----|----|\n| A | -1 | 0 | 1 |\n| B | 1 | 0 | -1 |\n| C | .5 | 0 | .5 |\n\n\nIf the above is a dataframe what shape and size is the output?\n\n\nWhare are some unique features of the output?\n\n\nFor our test case, what will the output be?\n| | A | B | C |\n|---|---|---|---|\n| A | 1 | -1 | 0 |\n| B | -1 | 1 | 0 |\n| C | 0 | 0 | 1 |\nLet's sketch the idea...\nIn class exercise\n20-30 minutes\nObjectives:\n\nWrite code using functions to compute the pairwise Pearson correlation between rows in a pandas dataframe. You will have to use for and possibly if.\nUse a cell to test each function with an input that yields an expected output. Think about the shape and values of the outputs.\nPut the code in a .py file in the directory with the Jupyter notebook, import and run!\n\nTo help you get started...\nTo create the sample dataframe:\ndf = pd.DataFrame([[-1, 0, 1], [1, 0, -1], [.5, 0, .5]])\nTo loop over rows in a dataframe, check out (Google is your friend):\nDataFrame.iterrows\n<hr>\nHow do we know it is working?\nUse the test case!\nOur three row example is a useful tool for checking that our code is working. We can write some tests that compare the output of our functions to our expectations.\nE.g. The diagonals should be 1, and corr(A, B) = -1, ...\nBut first, let's talk assert and raise\nWe've already briefly been exposed to assert in this code:\nif os.path.exists(filename):\n pass\nelse:\n req = requests.get(url)\n # if the download failed, next line will raise an error\n assert req.status_code == 200\n with open(filename, 'wb') as f:\n f.write(req.content)\nWhat is the assert doing there?\nLet's play with assert. What should the following asserts do?\nassert True == False, \"You assert wrongly, sir!\"\nassert 'Dave' in instructors\nassert function_that_returns_True_or_False(parameters)\nSo when an assert statement is true, the code keeps executing and when it is false, it raises an exception (also known as an error).\nWe've all probably seen lots of exception. E.g.\n```\ndef some_function(parameter):\n return\nsome_function()\n```\nsome_dict = { }\nprint(some_dict['invalid key'])\n'fourty' + 2\nLike C++ and other languages, Python let's you raise your own exception. 
You can do it with raise (surprise!). Exceptions are special objects and you can create your own type of exceptions. For now, we are going to look at the simplest Exception.\nWe create an Exception object by calling the generator:\nException()\nThis isn't very helpful. We really want to supply a description. The Exception object takes any number of strings. One good form if you are using the generic exception object is:\nException('Short description', 'Long description')\nCreating an exception object isn't useful alone, however. We need to send it down the software stack to the Python interpreter so that it can handle the exception condition. We do this with raise.\nraise Exception(\"An error has occurred.\")\nNow you can create your own error messages like a pro!\nDETOUR!\nThere are lots of types of exceptions beyond the generic class Exception. You can use them in your own code if they make sense. E.g.\nimport math\nmy_variable = math.inf\nif my_variable == math.inf:\n raise ValueError('my_variable cannot be infinity')\n<p>List of Standard Exceptions −</p>\n<table class=\"table table-bordered\">\n<tr>\n<th><b>EXCEPTION NAME</b></th>\n<th><b>DESCRIPTION</b></th>\n</tr>\n<tr>\n<td>Exception</td>\n<td>Base class for all exceptions</td>\n</tr>\n<tr>\n<td>StopIteration</td>\n<td>Raised when the next() method of an iterator does not point to any object.</td>\n</tr>\n<tr>\n<td>SystemExit</td>\n<td>Raised by the sys.exit() function.</td>\n</tr>\n<tr>\n<td>StandardError</td>\n<td>Base class for all built-in exceptions except StopIteration and SystemExit.</td>\n</tr>\n<tr>\n<td>ArithmeticError</td>\n<td>Base class for all errors that occur for numeric calculation.</td>\n</tr>\n<tr>\n<td>OverflowError</td>\n<td>Raised when a calculation exceeds maximum limit for a numeric type.</td>\n</tr>\n<tr>\n<td>FloatingPointError</td>\n<td>Raised when a floating point calculation fails.</td>\n</tr>\n<tr>\n<td>ZeroDivisonError</td>\n<td>Raised when division or modulo by zero takes place for all numeric types.</td>\n</tr>\n<tr>\n<td>AssertionError</td>\n<td>Raised in case of failure of the Assert statement.</td>\n</tr>\n<tr>\n<td>AttributeError</td>\n<td>Raised in case of failure of attribute reference or assignment.</td>\n</tr>\n<tr>\n<td>EOFError</td>\n<td>Raised when there is no input from either the raw_input() or input() function and the end of file is reached.</td>\n</tr>\n<tr>\n<td>ImportError</td>\n<td>Raised when an import statement fails.</td>\n</tr>\n<tr>\n<td>KeyboardInterrupt</td>\n<td>Raised when the user interrupts program execution, usually by pressing Ctrl+c.</td>\n</tr>\n<tr>\n<td>LookupError</td>\n<td>Base class for all lookup errors.</td>\n</tr>\n<tr>\n<td><p>IndexError</p><p>KeyError</p></td>\n<td><p>Raised when an index is not found in a sequence.</p><p>Raised when the specified key is not found in the dictionary.</p></td>\n</tr>\n<tr>\n<td>NameError</td>\n<td>Raised when an identifier is not found in the local or global namespace.</td>\n</tr>\n<tr>\n<td><p>UnboundLocalError</p><p>EnvironmentError</p></td>\n<td><p>Raised when trying to access a local variable in a function or method but no value has been assigned to it.</p><p>Base class for all exceptions that occur outside the Python environment.</p></td>\n</tr>\n<tr>\n<td><p>IOError</p><p>IOError</p></td>\n<td><p>Raised when an input/ output operation fails, such as the print statement or the open() function when trying to open a file that does not exist.</p><p>Raised for operating system-related 
errors.</p></td>\n</tr>\n<tr>\n<td><p>SyntaxError</p><p>IndentationError</p></td>\n<td><p>Raised when there is an error in Python syntax.</p><p>Raised when indentation is not specified properly.</p></td>\n</tr>\n<tr>\n<td>SystemError</td>\n<td>Raised when the interpreter finds an internal problem, but when this error is encountered the Python interpreter does not exit.</td>\n</tr>\n<tr>\n<td>SystemExit</td>\n<td>Raised when Python interpreter is quit by using the sys.exit() function. If not handled in the code, causes the interpreter to exit.</td>\n</tr>\n<tr>\n<td>Raised when Python interpreter is quit by using the sys.exit() function. If not handled in the code, causes the interpreter to exit.</td>\n<td>Raised when an operation or function is attempted that is invalid for the specified data type.</td>\n</tr>\n<tr>\n<td>ValueError</td>\n<td>Raised when the built-in function for a data type has the valid type of arguments, but the arguments have invalid values specified.</td>\n</tr>\n<tr>\n<td>RuntimeError</td>\n<td>Raised when a generated error does not fall into any category.</td>\n</tr>\n<tr>\n<td>NotImplementedError</td>\n<td>Raised when an abstract method that needs to be implemented in an inherited class is not actually implemented.</td>\n</tr>\n</table>\n\nPut it all together... assert and raise\nBreaking assert down, it is really just an if test followed by a raise. So the code below:\nassert <some_test>, <message>\nis equivalent to a short hand for:\nif not <some_test>:\n raise AssertionError(<message>)\nProve it? OK.\ninstructors = ['Dorkus the Clown', 'Jim']\nassert 'Dave' in instructors, \"Dave isn't in the instructor list!\"\ninstructors = ['Dorkus the Clown', 'Jim']\nassert 'Dave' in instructors, \"Dave isn't in the instructor list!\"\nif not 'Dave' in instructors:\n raise AssertionError(\"Dave isn't in the instructor list!\")\nQuestions?\nAll of this was in preparation for some testing...\nCan we write some quick tests that make sure our code is doing what we think it is? Something of the form:\ncorr_matrix = pairwise_row_correlations(my_sample_dataframe)\nassert corr_matrix looks like what we expect, \"The function is broken!\"\nWhat are the smallest units of code that we can test?\nWhat asserts can we make for these pieces of code?\nRemember, in computers, 1.0 does not necessarily = 1\nPut the following in an empty cell:\n.99999999999999999999\nHow can we test for two floating point numbers being (almost) equal? Pro tip: Google!\nFrom nothing to something wrap up\nHere we created some functions from just a short description of our needs.\n* Before we wrote any code, we walked through the flow control and decided on the parts that were necessary.\n* Before we wrote any code, we created a simple test example with simple predictable output.\n* We wrote some code according to our specifications.\n* We wrote tests using assert to verify our code against the simple test example.\nNext: errors, part 2; unit tests; debugging;\nQUESTIONS?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
xpharry/Udacity-DLFoudation
|
image-classification/dlnd_image_classification.ipynb
|
mit
|
[
"Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('cifar-10-python.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n 'cifar-10-python.tar.gz',\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open('cifar-10-python.tar.gz') as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)",
"Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:\n* airplane\n* automobile\n* bird\n* cat\n* deer\n* dog\n* frog\n* horse\n* ship\n* truck\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 1\nsample_id = 5\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)",
"Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.",
"def normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n : x: List of image data. The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n # TODO: Implement Function\n return x / 255\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)",
"One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.\nHint: Don't reinvent the wheel.",
"def one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n # TODO: Implement Function\n one_hot = np.zeros((len(x), 10))\n for i in range(len(x)):\n one_hot[i][x[i]] = 1\n return one_hot\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)",
"Randomize Data\nAs you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.\nPreprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))",
"Build the network\nFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.\nIf you're finding it hard to dedicate enough time for this course a week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use TensorFlow Layers or TensorFlow Layers (contrib) to build each layer, except \"Convolutional & Max Pooling\" layer. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.\nIf you would like to get the most of this course, try to solve all the problems without TF Layers. Let's begin!\nInput\nThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions\n* Implement neural_net_image_input\n * Return a TF Placeholder\n * Set the shape using image_shape with batch size set to None.\n * Name the TensorFlow placeholder \"x\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_label_input\n * Return a TF Placeholder\n * Set the shape using n_classes with batch size set to None.\n * Name the TensorFlow placeholder \"y\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_keep_prob_input\n * Return a TF Placeholder for dropout keep probability.\n * Name the TensorFlow placeholder \"keep_prob\" using the TensorFlow name parameter in the TF Placeholder.\nThese names will be used at the end of the project to load your saved model.\nNote: None for shapes in TensorFlow allow for a dynamic size.",
"import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a batch of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32,\n shape = [None, image_shape[0], image_shape[1], image_shape[2]],\n name = 'x')\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32,\n shape = [None, n_classes],\n name = 'y')\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32, name = 'keep_prob')\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)",
"Convolution and Max Pooling Layer\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\n* Apply a convolution to x_tensor using weight and conv_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\n* Add bias\n* Add a nonlinear activation to the convolution.\n* Apply Max Pooling using pool_ksize and pool_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer. You're free to use any TensorFlow package for all the other layers.",
"def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):\n \"\"\"\n Apply convolution then max pooling to x_tensor\n :param x_tensor: TensorFlow Tensor\n :param conv_num_outputs: Number of outputs for the convolutional layer\n :param conv_strides: Stride 2-D Tuple for convolution\n :param pool_ksize: kernal size 2-D Tuple for pool\n :param pool_strides: Stride 2-D Tuple for pool\n : return: A tensor that represents convolution and max pooling of x_tensor\n \"\"\"\n # TODO: Implement Function\n \n # Input/Image\n input = x_tensor\n \n # Weight and bias\n weight = tf.Variable(tf.truncated_normal(\n [conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[-1], conv_num_outputs], stddev=0.1))\n bias = tf.Variable(tf.zeros(conv_num_outputs))\n \n # Apply Convolution\n conv_layer = tf.nn.conv2d(input, weight, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')\n # Add bias\n conv_layer = tf.nn.bias_add(conv_layer, bias)\n # Apply activation function\n conv_layer = tf.nn.relu(conv_layer)\n \n # Apply maxpooling\n conv_layer = tf.nn.max_pool(\n conv_layer,\n ksize=[1, pool_ksize[0], pool_ksize[1], 1],\n strides=[1, pool_strides[0], pool_strides[1], 1],\n padding='SAME')\n return conv_layer \n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)",
"Flatten Layer\nImplement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.",
"def flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n # TODO: Implement Function\n batch_size = x_tensor.get_shape().as_list()[0]# i tried as_list()[] \n width = x_tensor.get_shape().as_list()[1]\n height = x_tensor.get_shape().as_list()[2]\n depth = x_tensor.get_shape().as_list()[3]\n\n image_flat_size = width * height * depth\n\n return tf.contrib.layers.flatten(x_tensor, [batch_size, image_flat_size])\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)",
"Fully-Connected Layer\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.",
"def fully_conn(x_tensor, num_outputs):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n return tf.contrib.layers.fully_connected(x_tensor, num_outputs)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)",
"Output Layer\nImplement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.\nNote: Activation, softmax, or cross entropy shouldn't be applied to this.",
"def output(x_tensor, num_outputs):\n \"\"\"\n Apply a output layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs])) \n mul = tf.matmul(x_tensor, weights, name='mul')\n bias = tf.Variable(tf.zeros(num_outputs))\n \n return tf.add(mul, bias)\n\n# y = tf.add(mul, bias)\n# fc = tf.contrib.layers.fully_connected(y, num_outputs)\n# return fc\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)",
"Create Convolutional Model\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:\n\nApply 1, 2, or 3 Convolution and Max Pool layers\nApply a Flatten Layer\nApply 1, 2, or 3 Fully Connected Layers\nApply an Output Layer\nReturn the output\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob.",
"def conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that hold dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers\n # Play around with different number of outputs, kernel size and stride\n # Function Definition from Above:\n # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n conv_num_outputs = 10\n conv_ksize = [2, 2]\n conv_strides = [2, 2]\n pool_ksize = [2, 2]\n pool_strides = [2, 2]\n conv_layer = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n\n # TODO: Apply a Flatten Layer\n # Function Definition from Above:\n # flatten(x_tensor)\n conv_layer = flatten(conv_layer)\n \n\n # TODO: Apply 1, 2, or 3 Fully Connected Layers\n # Play around with different number of outputs\n # Function Definition from Above:\n # fully_conn(x_tensor, num_outputs)\n num_outputs = 10\n conv_layer = fully_conn(conv_layer, num_outputs)\n \n # TODO: Apply an Output Layer\n # Set this to the number of classes\n # Function Definition from Above:\n # output(x_tensor, num_outputs)\n num_classes = 10\n conv_layer = output(conv_layer, num_classes)\n \n # TODO: return output\n return conv_layer\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that is can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)",
"Train the Neural Network\nSingle Optimization\nImplement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:\n* x for image input\n* y for labels\n* keep_prob for keep probability for dropout\nThis function will be called for each batch, so tf.global_variables_initializer() has already been called.\nNote: Nothing needs to be returned. This function is only optimizing the neural network.",
"def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n \"\"\"\n Optimize the session on a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n # TODO: Implement Function\n session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)",
"Show Stats\nImplement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.",
"def print_stats(session, feature_batch, label_batch, cost, accuracy):\n \"\"\"\n Print information about loss and validation accuracy\n : session: Current TensorFlow session\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n : cost: TensorFlow cost function\n : accuracy: TensorFlow accuracy function\n \"\"\"\n # TODO: Implement Function\n loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})\n valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})\n print('Loss: {:>10.4f} Accuracy: {:.6f}'.format(loss, valid_acc))",
"Hyperparameters\nTune the following parameters:\n* Set epochs to the number of iterations until the network stops learning or start overfitting\n* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:\n * 64\n * 128\n * 256\n * ...\n* Set keep_probability to the probability of keeping a node using dropout",
"# TODO: Tune Parameters\nepochs = 100\nbatch_size = 256\nkeep_probability = 0.2",
"Train on a Single CIFAR-10 Batch\nInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)",
"Fully Train the Model\nNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)",
"Checkpoint\nThe model has been saved to disk.\nTest Model\nTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)\n\n\ntest_model()",
"Why 50-70% Accuracy?\nYou might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 70%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_image_classification.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
steinam/teacher
|
jup_notebooks/data-science-ipython-notebooks-master/deep-learning/tensor-flow-exercises/3_regularization.ipynb
|
mit
|
[
"Deep Learning with TensorFlow\nCredits: Forked from TensorFlow by Google\nSetup\nRefer to the setup instructions.\nExercise 3\nPreviously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model.\nThe goal of this exercise is to explore regularization techniques.",
"# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nimport cPickle as pickle\nimport numpy as np\nimport tensorflow as tf",
"First reload the data we generated in notmist.ipynb.",
"pickle_file = 'notMNIST.pickle'\n\nwith open(pickle_file, 'rb') as f:\n save = pickle.load(f)\n train_dataset = save['train_dataset']\n train_labels = save['train_labels']\n valid_dataset = save['valid_dataset']\n valid_labels = save['valid_labels']\n test_dataset = save['test_dataset']\n test_labels = save['test_labels']\n del save # hint to help gc free up memory\n print 'Training set', train_dataset.shape, train_labels.shape\n print 'Validation set', valid_dataset.shape, valid_labels.shape\n print 'Test set', test_dataset.shape, test_labels.shape",
"Reformat into a shape that's more adapted to the models we're going to train:\n- data as a flat matrix,\n- labels as float 1-hot encodings.",
"image_size = 28\nnum_labels = 10\n\ndef reformat(dataset, labels):\n dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]\n labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n return dataset, labels\ntrain_dataset, train_labels = reformat(train_dataset, train_labels)\nvalid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\ntest_dataset, test_labels = reformat(test_dataset, test_labels)\nprint 'Training set', train_dataset.shape, train_labels.shape\nprint 'Validation set', valid_dataset.shape, valid_labels.shape\nprint 'Test set', test_dataset.shape, test_labels.shape\n\ndef accuracy(predictions, labels):\n return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n / predictions.shape[0])",
"Problem 1\nIntroduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compue the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy.\n\n\nProblem 2\nLet's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?\n\n\nProblem 3\nIntroduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training.\nWhat happens to our extreme overfitting case?\n\n\nProblem 4\nTry to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is 97.1%.\nOne avenue you can explore is to add multiple layers.\nAnother one is to use learning rate decay:\nglobal_step = tf.Variable(0) # count the number of steps taken.\nlearning_rate = tf.train.exponential_decay(0.5, step, ...)\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
aboSamoor/polyglot
|
notebooks/Embeddings.ipynb
|
gpl-3.0
|
[
"Word Embeddings\nWord embedding is a mapping of a word to a d-dimensional vector space.\nThis real valued vector representation captures semantic and syntactic features.\nPolyglot offers a simple interface to load several formats of word embeddings.",
"from polyglot.mapping import Embedding",
"Formats\nThe Embedding class can read word embeddings from different sources:\n\nGensim word2vec objects: (from_gensim method)\nWord2vec binary/text models: (from_word2vec method)\nGloVe models (from_glove method)\npolyglot pickle files: (load method)",
"embeddings = Embedding.load(\"/home/rmyeid/polyglot_data/embeddings2/en/embeddings_pkl.tar.bz2\")",
"Nearest Neighbors\nA common way to investigate the space capture by the embeddings is to query for the nearest neightbors of any word.",
"neighbors = embeddings.nearest_neighbors(\"green\")\nneighbors",
"to calculate the distance between a word and the nieghbors, we can call the distances method",
"embeddings.distances(\"green\", neighbors)",
"The word embeddings are not unit vectors, actually the more frequent the word is the larger the norm of its own vector.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nnorms = np.linalg.norm(embeddings.vectors, axis=1)\nwindow = 300\nsmooth_line = np.convolve(norms, np.ones(window)/float(window), mode='valid')\nplt.plot(smooth_line)\nplt.xlabel(\"Word Rank\"); _ = plt.ylabel(\"$L_2$ norm\")",
"This could be problematic for some applications and training algorithms.\nWe can normalize them by $L_2$ norms to get unit vectors to reduce effects of word frequency, as the following",
"embeddings = embeddings.normalize_words()\n\nneighbors = embeddings.nearest_neighbors(\"green\")\nfor w,d in zip(neighbors, embeddings.distances(\"green\", neighbors)):\n print(\"{:<8}{:.4f}\".format(w,d))",
"Vocabulary Expansion",
"from polyglot.mapping import CaseExpander, DigitExpander",
"Not all the words are available in the dictionary defined by the word embeddings.\nSometimes it would be useful to map new words to similar ones that we have embeddings for.\nCase Expansion\nFor example, the word GREEN is not available in the embeddings,",
"\"GREEN\" in embeddings",
"we would like to return the vector that represents the word Green, to do that we apply a case expansion:",
"embeddings.apply_expansion(CaseExpander)\n\n\"GREEN\" in embeddings\n\nembeddings.nearest_neighbors(\"GREEN\")",
"Digit Expansion\nWe reduce the size of the vocabulary while training the embeddings by grouping special classes of words.\nOnce common case of such grouping is digits.\nEvery digit in the training corpus get replaced by the symbol #.\nFor example, a number like 123.54 becomes ###.##.\nTherefore, querying the embedding for a new number like 434 will result in a failure",
"\"434\" in embeddings",
"To fix that, we apply another type of vocabulary expansion DigitExpander.\nIt will map any number to a sequence of #s.",
"embeddings.apply_expansion(DigitExpander)\n\n\"434\" in embeddings",
"As expected, the neighbors of the new number 434 will be other numbers:",
"embeddings.nearest_neighbors(\"434\")",
"Demo\nDemo is available here.\nCitation\nThis work is a direct implementation of the research being described in the Polyglot: Distributed Word Representations for Multilingual NLP paper.\nThe author of this library strongly encourage you to cite the following paper if you are using this software.\n@InProceedings{polyglot:2013:ACL-CoNLL,\n author = {Al-Rfou, Rami and Perozzi, Bryan and Skiena, Steven},\n title = {Polyglot: Distributed Word Representations for Multilingual NLP},\n booktitle = {Proceedings of the Seventeenth Conference on Computational Natural Language Learning},\n month = {August},\n year = {2013},\n address = {Sofia, Bulgaria},\n publisher = {Association for Computational Linguistics},\n pages = {183--192}, \n url = {http://www.aclweb.org/anthology/W13-3520}\n}"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ioam/scipy-2017-holoviews-tutorial
|
notebooks/02-customizing-visual-appearance.ipynb
|
bsd-3-clause
|
[
"<a href='http://www.holoviews.org'><img src=\"assets/hv+bk.png\" alt=\"HV+BK logos\" width=\"40%;\" align=\"left\"/></a>\n<div style=\"float:right;\"><h2>02. Customizing Visual Appearance</h2></div>\n\nSection 01 focused on specifying elements and simple collections of them. This section explains how the visual appearance can be adjusted to bring out the most salient aspects of your data, or just to make the style match the overall theme of your document.\nPreliminaries\nIn the introduction to elements, hv.extension('bokeh') was used at the start to load and activate the bokeh plotting extension. In this notebook, we will also briefly use matplotlib which we will load, but not yet activate, by listing it second:",
"import pandas as pd\nimport holoviews as hv\nhv.extension('bokeh', 'matplotlib')",
"Visualizing eclipse data\nLet us find some interesting data to generate elements from, before we consider how to customize them. Here is a dataset containing information about all the eclipses of the 21st century:",
"eclipses = pd.read_csv('../data/eclipses_21C.csv', parse_dates=['date'])\neclipses.head()",
"Here we have the date of each eclipse, what time of day the eclipse reached its peak in both local time and in UTC, the type of eclipse, its magnitude (fraction of the Sun's diameter obscured by the Moon) and the position of the peak in latitude and longitude.\nLet's see what happens if we pass this dataframe to the Curve element:",
"hv.Curve(eclipses)",
"We see that, by default, the first dataframe column becomes the key dimension (corresponding to the x-axis) and the second column becomes the value dimension (corresponding to the y-axis). There is clearly structure in this data, but the plot is too highly compressed in the x direction to see much detail, and you may not like the particular color or line style. So we can start customizing the appearance of this curve using the HoloViews options system.\nTypes of option\nIf we want to change the appearance of what we can already see in the plot, we're no longer focusing on the data and metadata stored in the elements, but about details of the presentation. Details specific to the final plo tare handled by the separate \"options\" system, not the element objects. HoloViews allows you to set three types of options:\n\nplot options: Options that tell HoloViews how to construct the plot.\nstyle options: Options that tell the underlying plotting extension (Bokeh, matplotlib, etc.) how to style the plot\nnormalization options: Options that tell HoloViews how to normalize the various elements in the plot against each other (not covered in this tutorial)\n\nPlot options\nWe noted that the data is too compressed in the x direction. Let us fix that by specifying the width plot option:",
"%%opts Curve [width=900]\nhour_curve = hv.Curve(eclipses).redim.label(hour_local='Hour (local time)', date='Date (21st century)')\nhour_curve",
"The top line uses a special IPython/Jupyter syntax called the %%opts cell magic to specify the width plot option for all Curve objects in this cell. %%opts accepts a simple specification where we pass the width=900 keyword argument to Curve as a plot option (denoted by the square brackets).\nOf course, there are other ways of applying options in HoloViews that do not require this IPython-specific syntax, but for this tutorial, we will only be covering the more-convenient magic-based syntax. You can read about the alternative approaches in the user guide.",
"# Exercise: Try setting the height plot option of the Curve above.\n# Hint: the magic supports tab completion when the cursor is in the square brackets!\n\n\n# Exercise: Try enabling the boolean show_grid plot option for the curve above\n\n\n# Exercise: Try set the x-axis label rotation (in degrees) with the xrotation plot option\n",
"Aside: hv.help\nTab completion helps discover what keywords are available but you can get more complete help using the hv.help utility. For instance, to learn more about the options for hv.Curve run hv.help(hv.Curve):",
"# hv.help(hv.Curve)\n",
"Style options\nThe plot options earlier instructed HoloViews to build a plot 900 pixels wide, when rendered with the Bokeh plotting extension. Now let's specify that the Bokeh glyph should be 'red' and slightly thicker, which is information passed on directly to Bokeh (making it a style option):",
"%%opts Curve (color='red' line_width=2)\nhour_curve",
"Note how the plot options applied above to hour_curve are remembered! The %%opts magic is used to customize the object displayed as output for a particular code cell: behind the scenes HoloViews has linked the specified options to the hour_curve object via a hidden integer id attribute.\nHaving used the %%opts magic on hour_curve again, we have now associated the 'red' color style option to it. In the options specification syntax, style options are the keywords in parentheses and are keywords defined and used by Bokeh to style line glyphs.",
"# Exercise: Display hour_curve without any new options to verify it stays red\n\n\n# Exercise: Try setting the line_width style options to 1\n\n\n# Exercise: Try setting the line_dash style option to 'dotdash'\n",
"Switching to matplotlib\nLet us now view our curve with matplotlib using the %%output cell magic:",
"%%output backend='matplotlib'\nhour_curve",
"All our options are gone! This is because the options are associated with the corresponding plotting extension---if you switch back to 'bokeh', the options will be applicable again. In general, options have to be specific to backends; e.g. the line_width style option accepted by Bokeh is called linewidth in matplotlib:",
"%%output backend='matplotlib'\n%%opts Curve [aspect=4 fig_size=400 xrotation=90] (color='blue' linewidth=2)\nhour_curve\n\n# Exercise: Apply the matplotlib equivalent to line_dash above using linestyle='-.'",
"The %output line magic\nIn the two cells above we repeated %%output backend='matplotlib' to use matplotlib to render those two cells. Instead of repeating ourselves with the cell magic, we can use a \"line magic\" (similar syntax to the cell magic but with one %) to set things globally. Let us switch to matplotlib with a line magic and specify that we want SVG output:",
"%output backend='matplotlib' fig='svg'",
"Unlike the cell magic, the line magic doesn't need to be followed by any expression and can be used anywhere in the notebook. Both the %output and %opts line magics set things globally so it is recommended you declare them at the top of your notebooks. Now let us look at the SVG matplotlib output we requested:",
"%%opts Curve [aspect=4 fig_size=400 xrotation=70] (color='green' linestyle='--')\nhour_curve\n\n# Exercise: Verify for yourself that the output above is SVG and not PNG\n# You can do this by right-clicking above then selecting 'Open Image in a new Tab' (Chrome) or 'View Image' (Firefox)",
"Switching back to bokeh\nIn previous releases of HoloViews, it was typical to switch to matplotlib in order to export to PNG or SVG, because Bokeh did not support these file formats. Since Bokeh 0.12.6 we can now easily use HoloViews to export Bokeh plots to a PNG file, as we will now demonstrate:",
"%output backend='bokeh'",
"By passing fig='png' and a filename='eclipses' to %output we can both render to PNG and save the output to file:",
"%%output fig='png' filename='eclipses'\nhour_curve.clone()",
"Here we have requested PNG format using fig='png' and that the output is output to eclipses.png using filename='eclipses':",
"ls *.png",
"Bokeh also has some SVG support, but it is not yet exposed in HoloViews.\nUsing group and label\nThe above examples showed how to customize by type, but HoloViews offers multiple additional levels of customization that should be sufficient to cover any purpose. For our last example, let us split our eclipse dataframe based on the type ('Total' or 'Partial'):",
"total_eclipses = eclipses[eclipses.type=='Total']\npartial_eclipses = eclipses[eclipses.type=='Partial']",
"We'll now introduce the Spikes element, and display it with a large width and without a y-axis. We can specify those options for all following Spikes elements using the %opts line magic:",
"%opts Spikes [width=900 yaxis=None] ",
"Now let us look at the hour of day at which these two types of eclipses occur (local time) by overlaying the two types of eclipse as Spikes elements. The problem then is finding a way to visually distinguish the spikes corresponding to the different ellipse types.\nWe can do this using the element group and label introduced in the introduction to elements section as follows:",
"%%opts Spikes.Eclipses.Total (line_dash='solid')\n%%opts Spikes.Eclipses.Partial (line_dash='dotted')\ntotal = hv.Spikes(total_eclipses, kdims=['hour_local'], vdims=[], group='Eclipses', label='Total')\npartial = hv.Spikes(partial_eclipses, kdims=['hour_local'], vdims=[], group='Eclipses', label='Partial')\n(total * partial).redim.label(hour_local='Local time (hour)')",
"Using these options to distinguish between the two categories of data with the same type, you can now see clear patterns of grouping between the two types, with many more total eclipses around noon in local time. Similar techniques can be used to provide arbitrarily specific customizations when needed.",
"# Exercise: Remove the two %%opts lines above and observe the effect\n\n\n# Exercise: Show all spikes with 'solid' line_dash, total eclipses in black and the partial ones in 'lightgray'\n\n\n# Optional Exercise: Try differentiating the two sets of spikes by group and not label\n",
"Onwards\nWe have now seen some of the ways you can customize the appearance of your visualizations. You can consult our Customizing Plots user guide to learn about other approaches, including the hv.opts and hv.output utilities which do not rely on notebook specific syntax. One last approach worth mentioning is the .opts method which accepts a customization specification dictionary or string to customize a particular object directly. When called without any arguments .opts() clears any customizations that may be set on that object.\nIn the exploration with containers section that follows, you will also see a few examples of how the appearance of elements can be customized when viewed in containers."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sbussmann/sensor-fusion
|
Code/Extract Gravity Signal.ipynb
|
mit
|
[
"Goal: extract the signal due to gravity from accelerometer measurements\nExperiment: I drove my car from home to Censio and back and used SensorLog on my iPhone to track the trip. The total time for the trip was about 15 minutes.",
"import pandas as pd\n%matplotlib inline\n\n# load the raw data\ndf = pd.read_csv('../Data/shaneiphone_exp2.csv')\n\n# plot accelerometer signal = gravity + user acceleration\ndf[['accelerometerAccelerationX', 'accelerometerAccelerationY', 'accelerometerAccelerationZ']].plot()\n\n# plot gravity signal (this is what we're trying to reproduce)\ndf[['motionGravityX', 'motionGravityY', 'motionGravityZ']].plot()",
"Around sample 10500, the orientation of my phone in the X and Y directions changed because Nick and I swapped phones.\nExtracting the gravity signal is largely a matter of applying a low pass filter, since the acceleration due to gravity is not changing with time.",
"# a Gaussian filter with a large window width is essentially a low pass filter\nfrom scipy.ndimage import gaussian_filter\n\n# a window that is too small retains noise from non-gravity signals\n# a window that is too large eliminates variations in GravityXYZ due to orientation variations.\nwindowwidth = 25\n\n# apply the smoothing function and store the results in a new column in the dataframe\ntemp = gaussian_filter(df['accelerometerAccelerationX'], windowwidth)\ndf['smoothaccelerometerAccelerationX'] = temp\ntemp = gaussian_filter(df['accelerometerAccelerationY'], windowwidth)\ndf['smoothaccelerometerAccelerationY'] = temp\ntemp = gaussian_filter(df['accelerometerAccelerationZ'], windowwidth)\ndf['smoothaccelerometerAccelerationZ'] = temp\n\n# plot the result\ndf[['smoothaccelerometerAccelerationX', 'smoothaccelerometerAccelerationY', 'smoothaccelerometerAccelerationZ']].plot(figsize=(12,4))\n\n# and compare again to the gravity signal\ndf[['motionGravityX', 'motionGravityY', 'motionGravityZ']].plot(figsize=(12,4))",
"Looks like a pretty good result."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
PyDataMadrid2016/Conference-Info
|
talks_materials/20160409_1130_Embrace_conda_packages/Embrace conda packages.ipynb
|
mit
|
[
"Embrace conda packages\nThe build system we always needed, but never deserved\nJuan Luis Cano Rodríguez\nMadrid, 2016-04-08\nOutline\n\nIntroduction\nMotivation: What brought us here?\nOur first conda package\nSome more tricks\nWorking with other languages\nconda-forge: a community repository\nLimitations and future work\nConclusions\n\nWho is this guy?\n<img src=\"static/juanlu.jpg\" width=\"350px\" style=\"float: right\" />\n\nAlmost Aerospace Engineer\nQuant Developer for BBVA at Indizen (yeah, lots of Python there!)\nWriter and furious tweeter at Pybonacci\nChair ~~and BDFL~~ of Python España\nCo-creator and charismatic leader of AeroPython (*not the Lorena Barba course)\nWhen time permits (rare) writes some open source Python code\n\nYou know, I've been giving talks on Python and its scientific ecosystem for about three years now... And I always write this bit there, that \"Almost\" word in italics before my background. You may reasonably wonder now what the heck I've been doing all these years to always introduce myself as an \"almost\" Aerospace Engineer, right? Well, I promise that I'm taking the required steps to graduate not later than this Autumn, but anyway this talk reflects one of the severe pains I've been going through while carrying my final project.\nMotivation: What brought us here?\nLet's begin with some questions:\n\nWho writes Python code here, either for a living or for fun?\nWho can write a setup.py... without copying a working one from the Internet?\nHow many Linux users... can configure a Visual Studio project properly?\nHow many of you are using Anaconda... because it was the only way to survive?\n\n...or: \"The sad state of scientific software\"\n\n\nThe scientific Python community was told to \"fix the packaging problem themselves\" in 2014, Christoph Gohlke packages were the only practical way to use Python on Windows for years before Python(x,y), Canopy and Anaconda were born\n\n\nOne of the FAQ items of the Sage project: \"Wouldn’t it be way better if Sage did not ship as a gigantic bundle?\", they started a SaaS to end the pain\n\n\nPETSc (solution of PDEs): They are forced to maintain their own forks because upstream projects won't fix bugs, even with patches and reproducible tests\n\n\nDOLFIN (part of the FEniCS project): Extremely difficult to make it work outside Ubuntu, pure Python alternatives are being developed, my fenics-recipes project has at least 7 meaningful forks already\n\n\n<img src=\"static/STAHP.jpg\" style=\"margin: 0 auto;\" />\nSome inconvenient truths:\nPortability is hard (unless you stick to pure Python)\nProperly distributing software libraries is very hard\nResult:\n<img src=\"static/keep-calm-it-works-on-my-machine.png\" width=\"400px\" style=\"margin: 0 auto;\" />\nWhat horror have we created\n\nIf you’re missing a library or program, and that library or program happens to be written in C, you either need root to install it from your package manager, or you will descend into a lovecraftian nightmare of attempted local builds from which there is no escape. You say you need lxml on shared hosting and they don’t have libxml2 installed? Well, fuck you.\n— Eevee, \"The sad state of web app deployment\"\n\nAre virtual machines and containers the solution?\n\n\"It's easy to build a VM if you automate the install process, and providing that install script for even one OS can demystify the install process for others; conversely, just because you provide a VM doesn't mean that anyone other than you can install your software\"\n— C. 
Titus Brown, \"Virtual machines considered harmful for reproducibility\"\n\nOur first conda package\nLet's install conda-build!",
"!conda install -y conda-build -q -n root",
"conda packages are created from conda recipes. We can create a bare recipe using conda skeleton to build it from a PyPI package.",
"!conda skeleton pypi pytest-benchmark > /dev/null\n\n!ls pytest-benchmark",
"These are the minimum files for the recipe:\n\nmeta.yaml contains all the metadata\nbuild.sh and bld.bat are the build scripts for Linux/OS X and Windows respectively\n\nThe meta.yaml file\nIt contains the metadata in YAML format.\n\npackage, source and build specify the name, version and source of the package\nrequirements specify the build (install time) and run (runtime) requirements\ntest specify imports, commands and scripts to test\nabout adds some additional data for the package",
"!grep -v \"#\" pytest-benchmark/meta.yaml | head -n24",
"The build.sh and bld.bat files\nThey specify how to build the package.",
"!cat pytest-benchmark/build.sh\n\n!grep -v \"::\" pytest-benchmark/bld.bat",
"The build process\nAdapted from http://conda.pydata.org/docs/building/recipe.html#conda-recipe-files-overview\n\nDownloads the source\nApplies patches (if any)\nInstall build dependencies\nRuns the build script\nPackages new files\nRun tests against newly created package\n\nSeems legit!",
"!conda build pytest-benchmark --python 3.5 > /dev/null # It works!\n\n!ls ~/.miniconda3/conda-bld/linux-64/pytest-benchmark-3.0.0-py35_0.tar.bz2",
"<img src=\"static/conda-names.png\" style=\"margin: 0 auto;\" />\n(From http://conda.pydata.org/docs/building/pkg-name-conv.html)",
"!conda install pytest-benchmark --use-local --yes",
"Build, test, upload, repeat\n\nCustom packages can be uploaded to Anaconda Cloud https://anaconda.org/\nThis process can be automated through Anaconda Build http://docs.anaconda.org/build.html\nLater on we can use our custom channels to install non-official packages\n\n<img src=\"static/anaconda-cloud.png\" style=\"margin: 0 auto;\" />\nLet's upload the package first using anaconda-client:",
"!conda install anaconda-client --quiet --yes\n\n!anaconda upload ~/.miniconda3/conda-bld/linux-64/pytest-benchmark-3.0.0-py35_0.tar.bz2",
"And now, let's install it!",
"!conda remove pytest-benchmark --yes > /dev/null\n\n!conda install pytest-benchmark --channel juanlu001 --yes",
"Some more tricks\nRunning the tests\nYou can run your tests with Python, Perl or shell scripts (run_test.[py,pl,sh,bat])\nConvert pure Python packages to other platforms\nUsing conda convert for pure Python packages, we can quickly provide packages for other platforms",
"!conda convert ~/.miniconda3/conda-bld/linux-64/pytest-benchmark-3.0.0-py35_0.tar.bz2 --platform all | grep Converting",
"Platform-specific metadata\nTemplating for meta.yaml\nMetadata files support templating using Jinja2!\nWorking with other languages\nor: conda as a cross-platform package manager\n\nconda can be used to build software written in any language\nJust don't include python as a build or run dependency!\nIt's already being used to distribute pure C and C++ libraries, R packages...\n\nImportant caveat:\nThe burden is on you\nThere be dragons\n\nconda-build does not solve cross-compiling so you will need to build compiled packages on each platform\nRegarding Linux, there are a lot of sources of binary incompatibility\nBuilding on a clean operative system is key\nUsing an old version of Linux (CentOS 5?) also helps, because many core system libraries have strict backwards compatibility policies\nPackages that assume everything is on root locations will fail to compile\nSometimes careful editing of compiler flags and event patching is necessary\n\nIf the recipe builds on a fresh, headless, old Linux it will work everywhere\nconda-forge: a community repository\n<img src=\"static/conda-forge.png\" style=\"margin: 0 auto;\" />\n\nconda-forge is a github organization containing repositories of conda recipes. Thanks to some awesome continuous integration providers (AppVeyor, CircleCI and TravisCI), each repository, also known as a feedstock, automatically builds its own recipe in a clean and repeatable way on Windows, Linux and OSX.\n\nFeatures:\n\nAutomatic linting of recipes\nContinuous integration of recipes in Linux, OS X and Windows\nAutomatic upload of packages\n\nWhat I love:\n\nHaving a blessed community channel (like Arch Linux AUR)\nEnsuring recipes run everywhere\nHigh quality standards!\n\nLimitations and future work\nconda (2012?) and conda-build (2013) are very young projects and still have some pain points that ought to be addressed\n\n\nSupport for gcc and libgfortran is not yet polished in Anaconda and there are still some portability issues\n\n\nNo way to include custom channels on a meta.yaml, the only option is to keep a copy of all dependencies\n\n\nPinning NumPy versions on meta.yaml can be a mess\n\n\nThe state of Python packaging is improving upstream too!\n\npip builds and caches wheels locally - the problem of compiling NumPy over and over again was addressed a while ago\nWindows and OS X wheels are easy to build and widely available for many scientific packages\nPEP 0513 provides a way to finally upload Linux wheels to PyPI which are compatible with many Linux distributions\nPEP 0516 proposes \"a simple and standard sdist format that isn't intertwined with distutils\"!!!1!\n\nStill, there are some remaining irks:\n\npip does not have a dependency solver\nconda-build has a more streamlined process to build and test packages in an isolated way\n\nConclusion\n<img src=\"static/keep-calm-and-conda-install.png\" style=\"margin: 0 auto;\" />\n<img src=\"static/pydata-logo-madrid-2016.png\" style=\"margin: 0 auto;\" />\n\nThis talk: https://github.com/AeroPython/embrace-conda-packages\nMy GitHub: https://github.com/Juanlu001/\nMe on Twitter: @astrojuanlu, @Pybonacci, @PyConES, @AeroPython\n\nApproach me during the conference, interrupt me while I'm on a conversation, ask me questions, let's talk about your ideas and projects! 😊\nThanks for yor attention!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
podondra/bt-spectraldl
|
notebooks/01-data-download.ipynb
|
gpl-3.0
|
[
"Data Download\nThis notebook download all FITS data.\nList of files is in file ondrejov-labeled-spectra.csv.\nThese spectra has been classified with\nSpectral View tool.",
"%matplotlib inline\n\nimport urllib.request\nimport urllib.parse\nimport io\nimport os\nimport csv\nimport glob\nfrom functools import partial\nfrom itertools import count\nimport numpy as np\nfrom astropy.io import fits\nimport matplotlib.pyplot as plt\n\nLABELS_FILE = 'data/ondrejov-dataset.csv'\n!head $LABELS_FILE",
"Read CSV with Labels",
"with open(LABELS_FILE, newline='') as f:\n reader = csv.DictReader(f)\n # each public id is unique and set operation will be usefull later\n spectra_idents = set(map(lambda x: x['id'], reader))\n\nlen(spectra_idents)",
"Simple Spectral Access protocol\nThis is not much revlevant now since only datalink is used\nto download normalized spectra.\nSSAP, SSA defines a uniform intreface to remotely discover\nand access one dimenisonal spectra. Spectral data access\nmmay involve active transformation of data. SSA also\ndefines complete metadata to describe the available\ndatasets. It makes use of VOTable for metadata exchange.\nArchitecture\nA query is used for data discovery and to negotiate the\ndetails of the static or dynamically created dataset\nto be retrieved. SSA allows to mediate not only dataset\nmetadata but the actual dataset itself. Direct access to\ndata is also provided.\nA single service may support multiple operation to perform\nvarious functions. The current interface use an HTTP GET\nrequest to submit parametrized requests with responses\nbeing returned as for example FITS or VOTable. Defined\noperations are the following:\n\nA queryData operation return a VOTable describing\ncandidate datasets.\nA getData operation is used to access an individual\ndataset.",
"def request_url(url):\n '''Make HTTP request and return response data.'''\n try:\n with urllib.request.urlopen(url) as response:\n data = response.read()\n except Exception as e:\n print(e)\n return None\n return data",
"Datalink\nDatalink is a service for working with spectra.\nFor information about the one which is used here see http://voarchive.asu.cas.cz/ccd700/q/sdl/info.",
"datalink_service = 'http://voarchive.asu.cas.cz/ccd700/q/sdl/dlget'\n\ndef make_datalink_url(\n pub_id, fluxcalib=None, wave_min=None, wave_max=None,\n file_format='application/fits', url=datalink_service\n):\n url_parameters = {'ID': pub_id}\n if fluxcalib:\n url_parameters['FLUXCALIB'] = fluxcalib\n if wave_min and wave_max:\n url_parameters['BAND'] = str(wave_min) + ' ' + str(wave_max)\n if file_format:\n url_parameters['FORMAT'] = file_format\n \n return url + '?' + urllib.parse.urlencode(url_parameters)\n\nmake_datalink_url(\n 'ivo://asu.cas.cz/stel/ccd700/sh270028',\n fluxcalib='normalized',\n wave_min=6500e-10, wave_max=6600e-10\n)",
"Show fluxcalib Parameters\nTo show how to work with datalink\nand what it offers.\nFrom this is obvious that the 'normalized' setting is the desired.",
"def plot_fluxcalib(fluxcalib, ax):\n # create the datalink service URL\n datalink_url = make_datalink_url('ivo://asu.cas.cz/stel/ccd700/sh270028', fluxcalib=fluxcalib)\n # download the data\n fits_data = request_url(datalink_url)\n # open the data as file\n hdulist = fits.open(io.BytesIO(fits_data))\n # plot it\n ax.set_title('fluxcalib is ' + str(fluxcalib))\n ax.plot(hdulist[1].data['spectral'], hdulist[1].data['flux'])\n\nfluxcalibs = [None, 'normalized', 'relative', 'UNCALIBRATED']\nfif, axs = plt.subplots(4, 1)\n\nfor fluxcalib, ax in zip(fluxcalibs, axs):\n plot_fluxcalib(fluxcalib, ax)\n\nfig.tight_layout()",
"FITS Download",
"def download_spectrum(pub_id, n, directory, fluxcalib, minimum=None, maximum=None):\n # get the name from public id\n name = pub_id.split('/')[-1]\n # directory HAS TO end with '/'\n path = directory + name + '.fits'\n url = make_datalink_url(pub_id, fluxcalib, minimum, maximum)\n \n print('{:5} downloading {}'.format(n, name))\n \n try:\n data = request_url(url)\n except Exception as e:\n print(e)\n return name\n \n with open(path, 'wb') as f:\n f.write(data)\n\nFITS_DIR = 'data/ondrejov/'\n%mkdir $FITS_DIR 2> /dev/null\n\nondrejov_downloader = partial(\n download_spectrum,\n directory=FITS_DIR,\n fluxcalib='normalized'\n)\n\nccd700_prefix = 'ivo://asu.cas.cz/stel/ccd700/'\n\ndef get_pub_id(path, prefix=ccd700_prefix):\n return prefix + os.path.splitext(os.path.split(path)[-1])[0]\n\nget_pub_id('ssap/uh260033.fits')\n\nspectra_idents -= set(map(get_pub_id, glob.glob(FITS_DIR + '*.fits')))\nif len(spectra_idents) != 0:\n donwload_info = list(map(ondrejov_downloader, spectra_idents, count(start=1)))\nprint('All spectra downloaded.')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cathalmccabe/PYNQ
|
boards/Pynq-Z1/base/notebooks/microblaze/microblaze_c_libraries.ipynb
|
bsd-3-clause
|
[
"PYNQ Microblaze Libraries in C\nThis document describes the various libraries that ship with PYNQ Microblaze.\npynqmb\nThe main library is pynqmb which consists of functions for interacting with a variety of I/O devices. pynqmb is split into separate i2c.h, gpio.h, spi.h, timer.h and uart.h header files with each one being self contained. In this notebook we will look just at the I2C and GPIO headers however the full function reference for all of the components can be found on http://pynq.readthedocs.io \nAll of the components follow the same pattern in having _open function calls that take one or more pins depending on the protocol. These function use an I/O switch in the subsystem to connect the protocol controller to the output pins. For devices not connected to output pins there are _open_device functions which take either the base address of the controller or it's index as defined in the board support package.\nFor this example we are going to use a Grove ADC connected via Pmod-Grove adapter and using the I2C protocol. One ancillary header file that is useful when using the Pmod-Grove adapter is pmod_grove.h which includes the pin definitions for the adapter board. In this case we are using the G4 port on the adapter which is connected to pins 6 and 2 of the Pmod connector.",
"from pynq.overlays.base import BaseOverlay\nbase = BaseOverlay('base.bit')\n\n%%microblaze base.PMODA\n#include <i2c.h>\n#include <pmod_grove.h>\n\nint read_adc() {\n i2c device = i2c_open(PMOD_G4_B, PMOD_G4_A);\n unsigned char buf[2];\n buf[0] = 0;\n i2c_write(device, 0x50, buf, 1);\n i2c_read(device, 0x50, buf, 2);\n return ((buf[0] & 0x0F) << 8) | buf[1];\n}\n\nread_adc()",
"We can use the gpio and timer components in concert to flash an LED connected to G1. The timer header provides PWM and program delay functionality, although only one can be used simultaneously.",
"%%microblaze base.PMODA\n#include <timer.h>\n#include <gpio.h>\n#include <pmod_grove.h>\n\nvoid flash_led() {\n gpio led = gpio_open(PMOD_G1_A);\n gpio_set_direction(led, GPIO_OUT);\n int state = 0;\n while (1) {\n gpio_write(led, state);\n state = !state;\n delay_ms(500);\n }\n}\n\nflash_led()",
"pyprintf\nThe pyprint library exposes a single pyprintf function which acts similarly to a regular printf function but forwards arguments to Python for formatting and display result in far lower code overhead than a regular printf as well as not requiring access to standard in and out.",
"%%microblaze base.PMODA\n#include <pyprintf.h>\n\nint test_print(float value) {\n pyprintf(\"Printing %f from the Microblaze!\\n\", value);\n return 0;\n}\n\ntest_print(1.5)",
"At present, pyprintf can support the common subset of datatype between Python and C - in particular %{douxXfFgGeEsc}. Long data types and additional format modifiers are not supported yet."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
justanr/notebooks
|
top-ten-experiment/analyzing_top_ten_lists.ipynb
|
mit
|
[
"Analyzing Shreddit's Q2 Top 5 voting\nThis started out as a curiosity. I was interested in what I'd need to do to take a bunch of \"Top X\" lists, combine them and then ask questions to the data like, \"What thing was number one the most?\" or \"If the votes are weighted, what does the actual top X look like?\" I then remembered that Shreddit just did a voting. ;)\nThis isn't a scientifically accurate analysis rooted in best practices. But I'm also just getting started with data analysis. So there's that.",
"# set up all the data for the rest of the notebook\nimport json\nfrom collections import Counter\nfrom itertools import chain\nfrom IPython.display import HTML\n\ndef vote_table(votes):\n \"\"\"Render a crappy HTML table for easy display. I'd use Pandas, but that seems like\n complete overkill for this simple task.\n \"\"\"\n base_table = \"\"\"\n <table>\n <tr><td>Position</td><td>Album</td><td>Votes</td></tr>\n {}\n </table>\n \"\"\"\n \n base_row = \"<tr><td>{0}</td><td>{1}</td><td>{2}</td></tr>\"\n vote_rows = [base_row.format(idx, name, vote) for idx, (name, vote) in enumerate(votes, 1)]\n return HTML(base_table.format('\\n'.join(vote_rows)))\n\nwith open('shreddit_q2_votes.json', 'r') as fh:\n ballots = json.load(fh)\n\nwith open('tallied_votes.json', 'r') as fh:\n tallied = Counter(json.load(fh))\n\nequal_placement_ballots = Counter(chain.from_iterable(ballots))",
"Equal Placement Ballots\nThe equal placement ballot assumes that any position on the ballot is equal to any other. And given that this how the voting was designed, it makes the most sense to look at this first. There are some differences, but given that /u/kaptain_carbon was tallying by hand, and I manually copy-pasted ballots (regex is hard) and then had to manually massage some data (fixing names and the like), differences are to be expected. Another note, all the data in my set is lower cased in an effort to normalize to make the data more accurate. My analysis also includes submissions from after voting was closed, mostly because I was too lazy to check dates.\nI'm also playing fast and loose with items that end up with the same total, rather than doing the \"right thing\" and marking them at the same position. So, there's that.\nHere's the top ten of the table in the post.",
"vote_table(tallied.most_common(10))",
"And here's the top ten from my computed tally:",
"vote_table(equal_placement_ballots.most_common(10))",
"Weighted Tally Ballot\nBut that's boring. What if we pretended for a second that everyone submitted a ballot where the albums were actually ranked one through five. What would the top ten look like then? There's a few ways to figure this one out. Initially, my thought was to provide a number 1 to 5 based on position to each vote and then find the lowest sum. However, the problem is that an item that only appears once will be considered the most preferred. That won't work. But going backwards from five to one for each item and then finding the largest total probably would:",
"weighted_ballot = Counter()\n\nfor ballot in ballots:\n for item, weight in zip(ballot, range(5, 0, -1)):\n weighted_ballot[item] += weight",
"This handles the situation where a ballot may not be full (five votes), which make up a surpsingly non trival amount of the ballots:",
"sum(1 for _ in filter(lambda x: len(x) < 5, ballots)) / len(ballots)",
"Anyways, what does a top ten for weighted votes end up looking like?",
"vote_table(weighted_ballot.most_common(10))",
"Hm, it's not actually all the different. Some bands move around a little bit, Deathhammer moves into the top ten using this method. But overall, the general spread is pretty much the same.\nIt's also interesting to look at the difference in position from the weighted tally vs the way it's done in the thread. There's major differences between the two due to the voting difference and from including submissions from after voting expired. There's also a missing band. :?",
"regular_tally_spots = {name.lower(): pos for pos, (name, _) in enumerate(tallied.most_common(), 1)}\n\nbase_table = \"\"\"\n<table>\n <tr><td>Album</td><td>Regular Spot</td><td>Weighted Spot</td></tr>\n {}\n</table>\n\"\"\"\nbase_row = \"<tr><td>{0}</td><td>{1}</td><td>{2}</td></tr>\"\n\nrows = [base_row.format(name, regular_tally_spots[name], pos) \n for pos, (name, _) in enumerate(weighted_ballot.most_common(), 1)\n # some albums didn't make it, like Arcturian D:\n if name in regular_tally_spots]\n\nHTML(base_table.format('\\n'.join(rows)))",
"What album appeared at number one most often?\nAnother question I've been pondering is, \"How do you figure out what thing appears at number one most often?\" Again, this is assuming everyone submitted a ballot with the intention of it being read as ranked. Turns out, doing this isn't that hard either:",
"number_one = Counter([b[0] for b in ballots]) \nvote_table(number_one.most_common(10))",
"This paints a slightly different picture of the top ten. While the names are largely the same, Scar Sighted was thought of as the top album most often, despite being at two or three through the other methods. And Misþyrming is at four (okay, \"2\", again fast and loose with numbering) despite being the solid top choice for all other methods.\nThe Take Away\nThere's lot of different ways to look at the ballots and different ways to tally them. Weighted voting is certainly an interesting avenue to explore.\nOriginally, I had wondered if something like something along the lines of Instant Runoff Voting or data processing packages like Panadas, Numpy or SciPy would be needed. But for basic prodding and poking, it turns out the stdlib is just fine.\nAlso: a lot of awesome music I haven't listened to at all this year (been tied up with Peace is the Mission the last few weeks, too, sorry guys).\nThe full tables\nBecause someone will ask for them, here's the full tables from my analysis:",
"#regular tallying\nvote_table(equal_placement_ballots.most_common())\n\n#weighted ballot\nvote_table(weighted_ballot.most_common())\n\n#number one count\nvote_table(number_one.most_common())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
natematias/research_in_python
|
instrumental_variables_estimation/Instrumental-Variables Estimation.ipynb
|
mit
|
[
"# RESEARCH IN PYTHON: INSTRUMENTAL VARIABLES ESTIMATION\n# by J. NATHAN MATIAS March 18, 2015\n\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.",
"Instrumental Variables Estimation\nThis section is taken from Chapter 10 of Methods Matter by Richard Murnane and John Willett. The descriptions are taken from Wikipedia, for copyright reasons.\nIn statistics, econometrics, epidemiology and related disciplines, the method of instrumental variables (IV) is used to estimate causal relationships when controlled experiments are not feasible or when a treatment is not successfully delivered to every unit in a randomized experiment.\nIn linear models, there are two main requirements for using an IV:\n\nThe instrument must be correlated with the endogenous explanatory variables, conditional on the other covariates.\nThe instrument cannot be correlated with the error term in the explanatory equation (conditional on the other covariates), that is, the instrument cannot suffer from the same problem as the original predicting variable.\n\nExample: Predicting Civic Engagement from College Attainment\nCan we use college attainment (COLLEGE) to predict the probability of civic engagement (REGISTER)? College attainment is not randomized, and the arrow of causality may move in the opposite direction, so all we can do with standard regression is to establish a correlation.\nIn this example, we use an Instrumental Variable of distance between the student's school and a community college (DISTANCE), to estimate a causal relationship. This is possible only if this variable is related to college attainment and NOT related to the residuals of regressing COLLEGE on REGISTER. \nThe python code listed here is roughly parallel to the code listed in the textbook example for Methods Matter Chapter 10. If you're curious about how to do a similar example in R, check out \"A Simple Instrumental Variables Problem\" by Adam Hyland in R-Bloggers or Ani Katchova's \"Instrumental Variables in R video on YouTube.",
"# THINGS TO IMPORT\n# This is a baseline set of libraries I import by default if I'm rushed for time.\n\nimport codecs # load UTF-8 Content\nimport json # load JSON files\nimport pandas as pd # Pandas handles dataframes\nimport numpy as np # Numpy handles lots of basic maths operations\nimport matplotlib.pyplot as plt # Matplotlib for plotting\nimport seaborn as sns # Seaborn for beautiful plots\nfrom dateutil import * # I prefer dateutil for parsing dates\nimport math # transformations\nimport statsmodels.formula.api as smf # for doing statistical regression\nimport statsmodels.api as sm # access to the wider statsmodels library, including R datasets\nfrom collections import Counter # Counter is useful for grouping and counting\nimport scipy",
"Acquire Dee Dataset from Methods Matter",
"import urllib2\nimport os.path\nif(os.path.isfile(\"dee.dta\")!=True):\n response = urllib2.urlopen(\"http://www.ats.ucla.edu/stat/stata/examples/methods_matter/chapter10/dee.dta\")\n if(response.getcode()==200):\n f = open(\"dee.dta\",\"w\")\n f.write(response.read())\n f.close()\ndee_df = pd.read_stata(\"dee.dta\")",
"Summary Statistics",
"dee_df[['register','college', 'distance']].describe()",
"Cross-Tabulation",
"print pd.crosstab(dee_df.register, dee_df.college)\nchi2 = scipy.stats.chi2_contingency(pd.crosstab(dee_df.register, dee_df.college))\nprint \"chi2: %(c)d\" % {\"c\":chi2[0]}\nprint \"p: %(p)0.03f\" % {\"p\":chi2[1]}\nprint \"df: %(df)0.03f\" % {\"df\":chi2[2]}\nprint \"expected:\"\nprint chi2[3]",
"Correlation Matrix",
"sns.corrplot(dee_df[['register','college','distance']])",
"Linear Regression of REGISTER on COLLEGE",
"result = smf.ols(formula = \"register ~ college\", data = dee_df).fit()\nprint result.summary()\n",
"Two-Stage Least Squares Regression of REGISTER ~ COLLEGE where IV=DISTANCE\nusing statsmodels.formula.api.ols\nIn two-stage least squares regression, we regress COLLEGE on DISTANCE and use the predictions from that model as the predictors for REGISTER.",
"print \"==============================================================================\"\nprint \" FIRST STAGE\"\nprint \"==============================================================================\"\nresult = smf.ols(formula = \"college ~ distance\", data = dee_df).fit()\nprint result.summary()\ndee_df['college_fitted'] = result.predict()\n\nprint\nprint\nprint \"==============================================================================\"\nprint \" SECOND STAGE\"\nprint \"==============================================================================\"\n\nresult = smf.ols(formula = \"register ~ college_fitted\", data=dee_df).fit()\nprint result.summary()",
"^^^^^^ Not sure what's going on with the R2 statistic here (it's 0.001 here, versus 0.022 in the example), although everything else matches what we see from the Stata output in the published example \nAdding Covariates that do not satisfy the requirements of instrumental variables\nIn the case of the covariate of race/ethicity, we expect that there might be a relationship between race/ethnicity and distance to a community college, as well as a relationship between race/ethnicity and voter registration. \nWhile race/ethnicity fails the test for instrumental variables, it can still be included as a covariate in a multiple regression model. In such cases, it is essential to include covariates at both stages of a two-stage test.\nCorrelation Matrix",
"sns.corrplot(dee_df[['register','college','distance', 'black','hispanic','otherrace']])",
"Two-Stage Least Squares Regression of REGISTER ~ COLLEGE + BLACK + HISPANIC + OTHERRACE where IV=DISTANCE",
"print \"==============================================================================\"\nprint \" FIRST STAGE\"\nprint \"==============================================================================\"\nresult = smf.ols(formula = \"college ~ distance + black + hispanic + otherrace\", data = dee_df).fit()\nprint result.summary()\ndee_df['college_fitted'] = result.predict()\n\nprint\nprint\nprint \"==============================================================================\"\nprint \" SECOND STAGE\"\nprint \"==============================================================================\"\n\nresult = smf.ols(formula = \"register ~ college_fitted + black + hispanic + otherrace\", data=dee_df).fit()\nprint result.summary()",
"Interactions Between the Endogenous Question Predictor and Exogenous Covariates in the Second Stage Model\nIn this case, we explore whether interactions between college and race/ethnicity are significant predictors of voter registration. Here, it's important to meet the \"rank condition\": that \"for every endogenous predictor included in the second stage, there must be at least one instrument included in the first stage.\"\nTo do this, we need to create a series of stage-one instruments, one for the main effect, and one for each interaction. In",
"print \"==============================================================================\"\nprint \" FIRST STAGE\"\nprint \"==============================================================================\"\n# generate the stage one main effect instrument\nresult = smf.ols(formula = \"college ~ distance + black + hispanic + otherrace +\" +\n \"distance:black + distance:hispanic + distance:otherrace\", data = dee_df).fit()\ndee_df['college_fitted'] = result.predict()\nprint result.summary()\n\n# generate the stage one interaction instrument for distance:black\n# note that we have DROPPED the irrelevant terms. \n# The full form for each interaction, which gives the exact same result, is:\n# result = smf.ols(formula = \"college:black ~ distance + black + hispanic + otherrace +\" +\n# \"distance:black + distance:hispanic + distance:otherrace\", data = dee_df).fit()\n\nresult = smf.ols(formula = \"college:black ~ distance + black + distance:black\", data = dee_df).fit()\ndee_df['collegeXblack'] = result.predict()\n\n\n# generate the stage one interaction instrument for distance:hispanic\nresult = smf.ols(formula = \"college:hispanic ~ distance + hispanic + distance:hispanic\", data = dee_df).fit()\ndee_df['collegeXhispanic'] = result.predict()\n\n# generate the stage one interaction instrument for distance:hispanic\nresult = smf.ols(formula = \"college:otherrace ~ distance + otherrace + distance:otherrace\", data = dee_df).fit()\ndee_df['collegeXotherrace'] = result.predict()\n\n# generate the final model, that includes these interactions as predictors\nresult = smf.ols(formula = \"register ~ college_fitted + black + hispanic + otherrace +\" +\n \"collegeXblack + collegeXhispanic + collegeXotherrace\", data = dee_df).fit()\nprint result.summary()",
"^^^ in this particular case, we find no significant interactions and fall back on our previous model, which simply included race/ethnicity as a covariate\nBinomial Regression: Logistic Model\nIn this example, we use a logistic model with a two-stage least-squares regression. NOTE: This is not attempted in the textbook example, so I cannot be completely certain about this, unlike the above results.",
"print \"==============================================================================\"\nprint \" FIRST STAGE\"\nprint \"==============================================================================\"\nresult = smf.glm(formula = \"college ~ distance + black + hispanic + otherrace\", \n data=dee_df,\n family=sm.families.Binomial()).fit()\nprint result.summary()\ndee_df['college_fitted'] = result.predict()\n\nprint\nprint\nprint \"==============================================================================\"\nprint \" SECOND STAGE\"\nprint \"==============================================================================\"#\nresult = smf.glm(formula = \"register ~ college_fitted + black + hispanic + otherrace\",\n data=dee_df,\n family=sm.families.Binomial()).fit()\nprint result.summary()",
"Binomial Regression: Probit Model\nIn this example, we use a probit model with a two-stage least-squares regression. NOTE: This is not attempted in the textbook example, so I cannot be completely certain about this, unlike the above results.",
"import patsy\nprint \"==============================================================================\"\nprint \" FIRST STAGE\"\nprint \"==============================================================================\"\na,b = patsy.dmatrices(\"college ~ distance + black + hispanic + otherrace\",\n dee_df,return_type=\"dataframe\")\nresult = sm.Probit(a,b).fit()\nprint result.summary()\ndee_df['college_fitted'] = result.predict()\n\n\nprint\nprint\nprint \"==============================================================================\"\nprint \" SECOND STAGE\"\nprint \"==============================================================================\"#\n\na,b = patsy.dmatrices(\"register ~ college_fitted + black + hispanic + otherrace\",\n dee_df,return_type=\"dataframe\")\nresult = sm.Probit(a,b).fit()\n\nprint result.summary()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
xpmanoj/content
|
Chart_Samples.ipynb
|
mit
|
[
"Chart Samples",
"%matplotlib inline\nfrom collections import defaultdict\nimport json\n\nimport numpy as np\nimport scipy as sp\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport requests\nfrom bs4 import BeautifulSoup as bs\n\nfrom matplotlib import rcParams\nimport matplotlib.cm as cm\nimport matplotlib as mpl\n\n#colorbrewer2 Dark2 qualitative color table\ndark2_colors = [(0.10588235294117647, 0.6196078431372549, 0.4666666666666667),\n (0.8509803921568627, 0.37254901960784315, 0.00784313725490196),\n (0.4588235294117647, 0.4392156862745098, 0.7019607843137254),\n (0.9058823529411765, 0.1607843137254902, 0.5411764705882353),\n (0.4, 0.6509803921568628, 0.11764705882352941),\n (0.9019607843137255, 0.6705882352941176, 0.00784313725490196),\n (0.6509803921568628, 0.4627450980392157, 0.11372549019607843)]\n\nrcParams['figure.figsize'] = (10, 6)\nrcParams['figure.dpi'] = 150\nrcParams['axes.color_cycle'] = dark2_colors\nrcParams['lines.linewidth'] = 2\nrcParams['axes.facecolor'] = 'white'\nrcParams['font.size'] = 14\nrcParams['patch.edgecolor'] = 'white'\nrcParams['patch.facecolor'] = dark2_colors[0]\n#rcParams['font.family'] = 'sans-serif'\n#rcParams['font.sans-serif'] = 'Helvetica'\n\n\ndef remove_border(axes=None, top=False, right=False, left=True, bottom=True):\n \"\"\"\n Minimize chartjunk by stripping out unnecesasry plot borders and axis ticks\n \n The top/right/left/bottom keywords toggle whether the corresponding plot border is drawn\n \"\"\"\n ax = axes or plt.gca()\n ax.spines['top'].set_visible(top)\n ax.spines['right'].set_visible(right)\n ax.spines['left'].set_visible(left)\n ax.spines['bottom'].set_visible(bottom)\n \n #turn off all ticks\n ax.yaxis.set_ticks_position('none')\n ax.xaxis.set_ticks_position('none')\n \n #now re-enable visibles\n if top:\n ax.xaxis.tick_top()\n if bottom:\n ax.xaxis.tick_bottom()\n if left:\n ax.yaxis.tick_left()\n if right:\n ax.yaxis.tick_right()\n \npd.set_option('display.width', 500)\npd.set_option('display.max_columns', 100)\n\n#utility functions\n\ndef histogram_style():\n remove_border(left=False)\n plt.grid(False)\n plt.grid(axis='y', color='w', linestyle='-', lw=1)\n \ndef histogram_labels(xlabel, ylabel, title, loc): \n plt.xlabel(xlabel)\n plt.ylabel(ylabel)\n plt.title(title)\n plt.legend(frameon=False, loc=loc)\n \ndef histogram_settings(xlabel, ylabel, title, loc = 'upper left'):\n histogram_style() \n histogram_labels(xlabel, ylabel, title, loc) \n\n\nfrom matplotlib.colors import ListedColormap\ncmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])\ncmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])\ncm = plt.cm.RdBu\ncm_bright = ListedColormap(['#FF0000', '#0000FF'])\n\ndef points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=True, colorscale=cmap_light, cdiscrete=cmap_bold, alpha=0.1, psize=10, zfunc=False, predicted=False):\n h = .02\n X=np.concatenate((Xtr, Xte))\n x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5\n y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5\n xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),\n np.linspace(y_min, y_max, 100))\n\n #plt.figure(figsize=(10,6))\n if zfunc:\n p0 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 0]\n p1 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]\n Z=zfunc(p0, p1)\n else:\n Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])\n ZZ = Z.reshape(xx.shape)\n if mesh:\n plt.pcolormesh(xx, yy, ZZ, cmap=cmap_light, alpha=alpha, axes=ax)\n if predicted:\n showtr = clf.predict(Xtr)\n showte = clf.predict(Xte)\n else:\n showtr = ytr\n showte = yte\n 
ax.scatter(Xtr[:, 0], Xtr[:, 1], c=showtr-1, cmap=cmap_bold, s=psize, alpha=alpha,edgecolor=\"k\")\n # and testing points\n ax.scatter(Xte[:, 0], Xte[:, 1], c=showte-1, cmap=cmap_bold, alpha=alpha, marker=\"s\", s=psize+10)\n ax.set_xlim(xx.min(), xx.max())\n ax.set_ylim(yy.min(), yy.max())\n return ax,xx,yy\n\ndef points_plot_prob(ax, Xtr, Xte, ytr, yte, clf, colorscale=cmap_light, cdiscrete=cmap_bold, ccolor=cm, psize=10, alpha=0.1):\n ax,xx,yy = points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=False, colorscale=colorscale, cdiscrete=cdiscrete, psize=psize, alpha=alpha, predicted=True) \n Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]\n Z = Z.reshape(xx.shape)\n plt.contourf(xx, yy, Z, cmap=ccolor, alpha=.2, axes=ax)\n cs2 = plt.contour(xx, yy, Z, cmap=ccolor, alpha=.6, axes=ax)\n plt.clabel(cs2, fmt = '%2.1f', colors = 'k', fontsize=14, axes=ax)\n return ax ",
"Some useful methods to train",
"from sklearn.cross_validation import train_test_split\nfrom sklearn.grid_search import GridSearchCV\n\n\"\"\"\nFunction\n--------\ndef cv_optimize(clf, parameters, Xtrain, ytrain, n_folds=5)\nPerforms cross validation of n_folds using a list of values for a chosen parameter (say regularization parameter, C) and returns\nthe parameter with the best performance\n\n\nParameters\n----------\nclf : Model (eg: LogisticRegression())\nparameters: List of parameter values to test eg. {\"C\": [0.01, 0.1, 1, 10, 100]}\nXtrain: Training features\nytrain: Training labels\nn_folds: No of cross validation folds eg. 5\n \nReturns\n-------\nbest: Best parameter from the values given\n \n\"\"\"\ndef cv_optimize(clf, parameters, Xtrain, ytrain, n_folds=5):\n gs = GridSearchCV(clf, param_grid=parameters, cv=n_folds)\n gs.fit(Xtrain, ytrain)\n print \"BEST PARAMS\", gs.best_params_\n best = gs.best_estimator_\n return best\n\"\"\"\nFunction\n--------\ndef do_classify(clf, parameters, indf, featurenames, targetname, target1val, standardize=False, train_size=0.8)\nTrains a model and computes the accuracy on train and test data. Sub-tasks included:\n1. Subset data frame by the given features\n2. Split in to train and test data\n3. Cross validation using cv_optimize\n4. Fit the model\n5. Compute and print train and test accuracy\n\n\nParameters\n----------\nclf : Model (eg: LogisticRegression())\nparameters: List of parameter values to test eg. {\"C\": [0.01, 0.1, 1, 10, 100]}\nindf: data frame\nfeaturenames: List of feature names\ntargetname: Name of prediction variable\ntarget1val: Value of prediction variable\n \nReturns\n-------\nclf, Xtrain, ytrain, Xtest, ytest\n\"\"\"\ndef do_classify(clf, parameters, indf, featurenames, targetname, target1val, standardize=False, train_size=0.8):\n subdf=indf[featurenames]\n if standardize:\n subdfstd=(subdf - subdf.mean())/subdf.std()\n else:\n subdfstd=subdf\n X=subdfstd.values\n y=(indf[targetname].values==target1val)*1\n Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, train_size=train_size)\n clf = cv_optimize(clf, parameters, Xtrain, ytrain)\n clf=clf.fit(Xtrain, ytrain)\n training_accuracy = clf.score(Xtrain, ytrain)\n test_accuracy = clf.score(Xtest, ytest)\n print \"Accuracy on training data: %0.2f\" % (training_accuracy)\n print \"Accuracy on test data: %0.2f\" % (test_accuracy)\n return clf, Xtrain, ytrain, Xtest, ytest",
"Getting the data\nGet json data",
"import simplejson as json\nwith open('./data/us_state_map.geojson','r') as fp:\n statedata = json.load(fp)\nwith open('./data/us_county_map.geojson','r') as fp:\n data = json.load(fp)",
"Get json data from url",
"#your code here\ndef get_senate_vote(vote):\n url = 'https://www.govtrack.us/data/congress/113/votes/2013/s'+str(vote)+'/data.json'\n vote_data = requests.get(url)\n return json.loads(vote_data.text)",
"Get data from a url using beautiful soup",
"#Your code here\ndef get_all_votes():\n url = 'http://www.govtrack.us/data/congress/113/votes/2013'\n response = requests.get(url)\n soup = bs(response.content, \"lxml\")\n s_votes = [a['href'][1:-1] for a in soup.find_all('a') \n if a['href'].startswith('s')]\n return [get_senate_vote(vote) for vote in s_votes]",
"Useful functions and tricks",
"fulldf=pd.read_csv(\"bigdf.csv\")",
"Recompute dataframe\nThe following function is used to re-compute review counts and averages whenever you subset a reviews data frame. We'll use it soon to construct a smaller, more computationally tractable data frame.",
"def recompute_frame(ldf):\n \"\"\"\n takes a dataframe ldf, makes a copy of it, and returns the copy\n with all averages and review counts recomputed\n this is used when a frame is subsetted.\n \"\"\"\n ldfu=ldf.groupby('user_id')\n ldfb=ldf.groupby('business_id')\n user_avg=ldfu.stars.mean()\n user_review_count=ldfu.review_id.count()\n business_avg=ldfb.stars.mean()\n business_review_count=ldfb.review_id.count()\n nldf=ldf.copy()\n nldf.set_index(['business_id'], inplace=True)\n nldf['business_avg']=business_avg\n nldf['business_review_count']=business_review_count\n nldf.reset_index(inplace=True)\n nldf.set_index(['user_id'], inplace=True)\n nldf['user_avg']=user_avg\n nldf['user_review_count']=user_review_count\n nldf.reset_index(inplace=True)\n return nldf\n\nsmalldf = fulldf[(fulldf.business_review_count > 150) & (fulldf.user_review_count > 60)] \nsmalldf_new = recompute_frame(smalldf)",
"Compute Common Support\nThe common support is an important concept, as for each pair of restaurants, its the number of people who reviewed both. It will be used to modify similarity between restaurants. If the common support is low, the similarity is less believable.",
"restaurants=smalldf.business_id.unique() #get all the unique restaurant ids\nsupports=[]\nfor i,rest1 in enumerate(restaurants): # first restaurant (rest1) in the pair\n for j,rest2 in enumerate(restaurants): # second restaurant(rest2) in the pair\n if i < j: #skip pairing same restaurants and forming duplicate pairs\n rest1_reviewers = smalldf[smalldf.business_id==rest1].user_id.unique() #find all unique users who reviewed restaurant 1\n rest2_reviewers = smalldf[smalldf.business_id==rest2].user_id.unique() #find all unique users who reviewed restaurant 2\n common_reviewers = set(rest1_reviewers).intersection(rest2_reviewers) # find common reviewers by taking intersection\n supports.append(len(common_reviewers)) # add the no of common reviewers to list\nprint \"Mean support is:\",np.mean(supports)\nplt.hist(supports) #plot hist of list\nhistogram_style()",
"Return X (column value) from a df given Y (another column value)",
"# test business id\ntestbizid=\"eIxSLxzIlfExI6vgAbn2JA\"\n\n# Return business name from a df given id\ndef biznamefromid(df, theid):\n return df['biz_name'][df['business_id']==theid].values[0]\n\nprint testbizid, biznamefromid(smalldf,testbizid)",
"Plotting the data\nHistograms\nSimulating Elections",
"predictwise = pd.read_csv('data/predictwise.csv').set_index('States')\npredictwise.head(10)\n\n#Your code here\ndef simulate_election(model, n_sim):\n simulations = np.random.uniform(size= (51,n_sim))\n obama_votes = (simulations < model.Obama.values.reshape(-1,1))* model.Votes.values.reshape(-1,1)\n results = obama_votes.sum(axis = 0)\n return results\n\n\nresult = simulate_election(predictwise, 10000)\nresult\n\n#your code here\ndef plot_simulation(simulation): \n plt.hist(simulation, bins=np.arange(200, 538, 1), \n label='simulations', align='left', normed=True)\n plt.axvline(332, 0, .5, color='r', label='Actual Outcome')\n plt.axvline(269, 0, .5, color='k', label='Victory Threshold')\n p05 = np.percentile(simulation, 5.)\n p95 = np.percentile(simulation, 95.)\n iq = int(p95 - p05)\n pwin = ((simulation >= 269).mean() * 100)\n plt.title(\"Chance of Obama Victory: %0.2f%%, Spread: %d votes\" % (pwin, iq))\n plt.legend(frameon=False, loc='upper left')\n plt.xlabel(\"Obama Electoral College Votes\")\n plt.ylabel(\"Probability\")\n remove_border()\n\n\nplot_simulation(result)\nplt.xlim(240,380)",
"Movie Reviews",
"# import data\ncritics = pd.read_csv('critics.csv') # Rotten Tomatoes Top Critics Reviews Data for about 3000 movies\n\n# clean data\ncritics = critics[~critics.quote.isnull()]\ncritics = critics[critics.fresh.notnull()]\ncritics = critics[critics.quote.str.len() > 0]\ncritics = critics.reset_index(drop = True)\ncritics.head()",
"What does the distribution of number of reviews per reviewer look like?",
"plt.hist(critics.groupby('critic').rtid.count(),log = True, bins=range(20), edgecolor='white')\nplt.xlabel(\"Number of reviews per critic\")\nplt.ylabel(\"N\")\nhistogram_style()\n",
"Of the critics with > 100 reviews, plot the distribution of average \"freshness\" rating per critic",
"df = critics.copy()\ndf['fresh'] = df.fresh == 'fresh'\n\ngrp = df.groupby('critic')\ncounts = grp.rtid.count()\nmeans = grp.fresh.mean()\nmeans[counts > 100].hist(bins=10, edgecolor='w', lw=1)\nplt.xlabel(\"Average rating per critic\")\nplt.ylabel(\"N\")\nplt.yticks([0, 2, 4, 6, 8, 10])\nhistogram_style()",
"Using the original movies dataframe, plot the rotten tomatoes Top Critics Rating as a function of year. Overplot the average for each year, ignoring the score=0 examples (some of these are missing data).",
"# Get the data\nfrom io import StringIO \nmovie_txt = requests.get('https://raw.github.com/cs109/cs109_data/master/movies.dat').text\nmovie_file = StringIO(movie_txt) # treat a string like a file\nmovies = pd.read_csv(movie_file, delimiter='\\t')\nmovies = movies.dropna()\nmovies = movies.reset_index(drop = True)\n\n# process data\ndata = movies[['year','rtTopCriticsRating']]\n#data = data.convert_objects(convert_numeric=True) #deprecated\ndata = data.apply(pd.to_numeric)\ndata = data[(data.rtTopCriticsRating > 0.00)]\nmeans = data.groupby('year').mean()\n\n# plotting\nplt.plot(data['year'], data['rtTopCriticsRating'], 'o', mec = 'none', alpha = .5, label = 'Data' )\nplt.plot(means.index, means['rtTopCriticsRating'], '-', label = 'Yearly Average' )\nplt.legend(loc ='lower left', frameon = False)\nplt.xlabel(\"Year\")\nplt.ylabel(\"Average Score\")\nremove_border()",
"This graph shows a trend towards a lower average score, as well as a greater abundance of low scores, with time. This is probably at least partially a selection effect -- Rotten Tomatoes probably doesn't archive reviews for all movies, especially ones that came out before the website existed. Thus, reviews of old movies are more often \"the classics\". Mediocre old movies have been partially forgotten, and are underrepresented in the data. Other questions worth exploring:\nDoes the down trend mean:\n* the quality of movies has dropped ?\n* the top critics became more stringent in giving higher ratings?\n* more top critics with stringent ratings entered the scene?",
"df=pd.read_csv(\"01_heights_weights_genders.csv\")\ndf.head()\n\nfrom sklearn.linear_model import LogisticRegression\nclf_l, Xtrain_l, ytrain_l, Xtest_l, ytest_l = do_classify(LogisticRegression(), {\"C\": [0.01, 0.1, 1, 10, 100]}, df, ['Weight', 'Height'], 'Gender','Male')\n\nplt.figure(figsize =(12,8))\nplt.xlabel(\"Weight\")\nplt.ylabel(\"Height\")\nax=plt.gca()\npoints_plot(ax, Xtrain_l, Xtest_l, ytrain_l, ytest_l, clf_l, alpha=0.2);",
"Let us plot the probabilities obtained from predict_proba, overlayed on the samples with their true labels:",
"plt.figure(figsize =(12,8))\nplt.grid(True)\nax=plt.gca()\npoints_plot_prob(ax, Xtrain_l, Xtest_l, ytrain_l, ytest_l, clf_l, psize=20, alpha=0.1);",
"Linear Discriminant Analysis",
"from sklearn.lda import LDA\nclflda = LDA(solver=\"svd\", store_covariance=True)\nclflda.fit(Xtrain_l, ytrain_l)\n\n#from REF\nfrom scipy import linalg\n\ndef plot_ellipse(splot, mean, cov, color):\n v, w = linalg.eigh(cov)\n u = w[0] / linalg.norm(w[0])\n angle = np.arctan(u[1] / u[0])\n angle = 180 * angle / np.pi # convert to degrees\n # filled Gaussian at 2 standard deviation\n ell = mpl.patches.Ellipse(mean, 2 * v[0] ** 0.5, 2 * v[1] ** 0.5,\n 180 + angle, color=color, lw=3, fill=False)\n ell.set_clip_box(splot.bbox)\n ell1 = mpl.patches.Ellipse(mean, 1 * v[0] ** 0.5, 1 * v[1] ** 0.5,\n 180 + angle, color=color, lw=3, fill=False)\n ell1.set_clip_box(splot.bbox)\n ell3 = mpl.patches.Ellipse(mean, 3 * v[0] ** 0.5, 3 * v[1] ** 0.5,\n 180 + angle, color=color, lw=3, fill=False)\n ell3.set_clip_box(splot.bbox)\n #ell.set_alpha(0.2)\n splot.add_artist(ell)\n splot.add_artist(ell1)\n splot.add_artist(ell3)\n\n\n #splot.set_xticks(())\n #splot.set_yticks(())\ndef plot_lda_cov(lda, splot):\n plot_ellipse(splot, lda.means_[0], lda.covariance_, 'red')\n plot_ellipse(splot, lda.means_[1], lda.covariance_, 'blue')\n#plt.bivariate_normal(X, Y, sigmax=1.0, sigmay=1.0, mux=0.0, muy=0.0, sigmaxy=0.0)¶\n\nplt.figure(figsize =(12,8))\nax=plt.gca()\nspl,_,_=points_plot(ax,Xtrain_l, Xtest_l, ytrain_l, ytest_l, clflda)\nplot_lda_cov(clflda, spl)",
"Calibration Plot",
"dfd=pd.read_csv(\"https://dl.dropboxusercontent.com/u/75194/pima.csv\")\ndfd.head()\n\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.metrics import confusion_matrix\nfeatures=['npreg', 'bmi','diaped', 'age', 'glconc','insulin']\nX=dfd[features].values\ny=dfd.diabetes.values\nfrom sklearn.naive_bayes import GaussianNB\nclf=GaussianNB()\nitr,ite=train_test_split(range(y.shape[0]))\nXtr=X[itr]\nXte=X[ite]\nytr=y[itr]\nyte=y[ite]\nclf.fit(Xtr,ytr)\nprint \"Frac of mislabeled points\",float((yte != clf.predict(Xte)).sum())/yte.shape[0]\nconfusion_matrix(clf.predict(Xte),yte)\n\ndef calibration_plot(clf, xtest, ytest):\n prob = clf.predict_proba(xtest)[:, 1]\n outcome = ytest\n data = pd.DataFrame(dict(prob=prob, outcome=outcome))\n\n #group outcomes into bins of similar probability\n bins = np.linspace(0, 1, 20)\n cuts = pd.cut(prob, bins)\n binwidth = bins[1] - bins[0]\n \n #freshness ratio and number of examples in each bin\n cal = data.groupby(cuts).outcome.agg(['mean', 'count'])\n cal['pmid'] = (bins[:-1] + bins[1:]) / 2\n cal['sig'] = np.sqrt(cal.pmid * (1 - cal.pmid) / cal['count'])\n \n #the calibration plot\n ax = plt.subplot2grid((3, 1), (0, 0), rowspan=2)\n p = plt.errorbar(cal.pmid, cal['mean'], cal['sig'])\n plt.plot(cal.pmid, cal.pmid, linestyle='--', lw=1, color='k')\n plt.ylabel(\"Empirical Fraction\")\n\n \n #the distribution of P(fresh)\n ax = plt.subplot2grid((3, 1), (2, 0), sharex=ax)\n #calsum = cal['count'].sum()\n plt.bar(left=cal.pmid - binwidth / 2, height=cal['count'],\n width=.95 * (bins[1] - bins[0]),\n fc=p[0].get_color())\n plt.xlabel(\"Classifier Probability\")\n\ncalibration_plot(clf, Xte, yte)",
"Coin tosses: Binomial-Beta",
"plt.figure(figsize=(11, 9))\n\nimport scipy.stats as stats\n\nbeta = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using Binomial's conj. prior.\nfor k, N in enumerate(n_trials):\n sx = plt.subplot(len(n_trials) / 2, 2, k + 1)\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials) - 1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n #posterior distribution.\n y = beta.pdf(x, 1 + heads, 1 + N - heads)\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n y=1.02,\n fontsize=14)\n\nplt.tight_layout()",
"Gaussian with known $\\sigma$",
"Y = [16.4, 17.0, 17.2, 17.4, 18.2, 18.2, 18.2, 19.9, 20.8]\n# Prior mean\nmu_prior = 19.5\n# prior std\ntau = 10 \nN = 15000\n\n#Data Quantities\nsig = np.std(Y) # assume that is the value of KNOWN sigma (in the likelihood)\nmu_data = np.mean(Y)\nn = len(Y)\nkappa = sig**2 / tau**2\nsig_post =np.sqrt(1./( 1./tau**2 + n/sig**2));\n# posterior mean\nmu_post = kappa / (kappa + n) *mu_prior + n/(kappa+n)* mu_data\n\n#samples\ntheta_prior = np.random.normal(loc=mu_prior, scale=tau, size=N);\ntheta_post = np.random.normal(loc=mu_post, scale=sig_post, size=N);\n\nplt.hist(theta_post, bins=30, alpha=0.9, label=\"posterior\");\nplt.hist(theta_prior, bins=30, alpha=0.2, label=\"prior\");\nplt.xlim([10, 30])\nplt.xlabel(\"wing length (mm)\")\nplt.ylabel(\"Number of samples\")\nplt.legend();",
"Other Resources\n\nUse of python class to implement a database of similarities: See CS109 HW4\nPrint formatting: See CS109 HW4 or HW3\nUse of JSON and complex dictionaries: HW5"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
statsmodels/statsmodels.github.io
|
v0.13.0/examples/notebooks/generated/gee_score_test_simulation.ipynb
|
bsd-3-clause
|
[
"GEE score tests\nThis notebook uses simulation to demonstrate robust GEE score tests. These tests can be used in a GEE analysis to compare nested hypotheses about the mean structure. The tests are robust to miss-specification of the working correlation model, and to certain forms of misspecification of the variance structure (e.g. as captured by the scale parameter in a quasi-Poisson analysis).\nThe data are simulated as clusters, where there is dependence within but not between clusters. The cluster-wise dependence is induced using a copula approach. The data marginally follow a negative binomial (gamma/Poisson) mixture.\nThe level and power of the tests are considered below to assess the performance of the tests.",
"import pandas as pd\nimport numpy as np\nfrom scipy.stats.distributions import norm, poisson\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt",
"The function defined in the following cell uses a copula approach to simulate correlated random values that marginally follow a negative binomial distribution. The input parameter u is an array of values in (0, 1). The elements of u must be marginally uniformly distributed on (0, 1). Correlation in u will induce correlations in the returned negative binomial values. The array parameter mu gives the marginal means, and the scalar parameter scale defines the mean/variance relationship (the variance is scale times the mean). The lengths of u and mu must be the same.",
"def negbinom(u, mu, scale):\n p = (scale - 1) / scale\n r = mu * (1 - p) / p\n x = np.random.gamma(r, p / (1 - p), len(u))\n return poisson.ppf(u, mu=x)",
"Below are some parameters that govern the data used in the simulation.",
"# Sample size\nn = 1000\n\n# Number of covariates (including intercept) in the alternative hypothesis model\np = 5\n\n# Cluster size\nm = 10\n\n# Intraclass correlation (controls strength of clustering)\nr = 0.5\n\n# Group indicators\ngrp = np.kron(np.arange(n/m), np.ones(m))",
"The simulation uses a fixed design matrix.",
"# Build a design matrix for the alternative (more complex) model\nx = np.random.normal(size=(n, p))\nx[:, 0] = 1",
"The null design matrix is nested in the alternative design matrix. It has rank two less than the alternative design matrix.",
"x0 = x[:, 0:3]",
"The GEE score test is robust to dependence and overdispersion. Here we set the overdispersion parameter. The variance of the negative binomial distribution for each observation is equal to scale times its mean value.",
"# Scale parameter for negative binomial distribution\nscale = 10",
"In the next cell, we set up the mean structures for the null and alternative models",
"# The coefficients used to define the linear predictors\ncoeff = [[4, 0.4, -0.2], [4, 0.4, -0.2, 0, -0.04]]\n\n# The linear predictors\nlp = [np.dot(x0, coeff[0]), np.dot(x, coeff[1])]\n\n# The mean values\nmu = [np.exp(lp[0]), np.exp(lp[1])]",
"Below is a function that carries out the simulation.",
"# hyp = 0 is the null hypothesis, hyp = 1 is the alternative hypothesis.\n# cov_struct is a statsmodels covariance structure\ndef dosim(hyp, cov_struct=None, mcrep=500):\n \n # Storage for the simulation results\n scales = [[], []]\n \n # P-values from the score test\n pv = []\n \n # Monte Carlo loop\n for k in range(mcrep):\n\n # Generate random \"probability points\" u that are uniformly \n # distributed, and correlated within clusters\n z = np.random.normal(size=n)\n u = np.random.normal(size=n//m)\n u = np.kron(u, np.ones(m))\n z = r*z +np.sqrt(1-r**2)*u\n u = norm.cdf(z)\n\n # Generate the observed responses\n y = negbinom(u, mu=mu[hyp], scale=scale)\n\n # Fit the null model\n m0 = sm.GEE(y, x0, groups=grp, cov_struct=cov_struct, family=sm.families.Poisson())\n r0 = m0.fit(scale='X2')\n scales[0].append(r0.scale)\n \n # Fit the alternative model\n m1 = sm.GEE(y, x, groups=grp, cov_struct=cov_struct, family=sm.families.Poisson())\n r1 = m1.fit(scale='X2')\n scales[1].append(r1.scale)\n \n # Carry out the score test\n st = m1.compare_score_test(r0)\n pv.append(st[\"p-value\"])\n\n pv = np.asarray(pv)\n rslt = [np.mean(pv), np.mean(pv < 0.1)]\n \n return rslt, scales",
"Run the simulation using the independence working covariance structure. We expect the mean to be around 0 under the null hypothesis, and much lower under the alternative hypothesis. Similarly, we expect that under the null hypothesis, around 10% of the p-values are less than 0.1, and a much greater fraction of the p-values are less than 0.1 under the alternative hypothesis.",
"rslt, scales = [], []\n\nfor hyp in 0, 1:\n s, t = dosim(hyp, sm.cov_struct.Independence())\n rslt.append(s)\n scales.append(t)\n \nrslt = pd.DataFrame(rslt, index=[\"H0\", \"H1\"], columns=[\"Mean\", \"Prop(p<0.1)\"])\n\nprint(rslt)",
"Next we check to make sure that the scale parameter estimates are reasonable. We are assessing the robustness of the GEE score test to dependence and overdispersion, so here we are confirming that the overdispersion is present as expected.",
"_ = plt.boxplot([scales[0][0], scales[0][1], scales[1][0], scales[1][1]])\nplt.ylabel(\"Estimated scale\")",
"Next we conduct the same analysis using an exchangeable working correlation model. Note that this will be slower than the example above using independent working correlation, so we use fewer Monte Carlo repetitions.",
"rslt, scales = [], []\n\nfor hyp in 0, 1:\n s, t = dosim(hyp, sm.cov_struct.Exchangeable(), mcrep=100)\n rslt.append(s)\n scales.append(t)\n \nrslt = pd.DataFrame(rslt, index=[\"H0\", \"H1\"], columns=[\"Mean\", \"Prop(p<0.1)\"])\n\nprint(rslt)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
steven-murray/pydftools
|
docs/example_notebooks/basic_example.ipynb
|
mit
|
[
"Basic Example\nThis example is a basic introduction to using pydftools. It mimics example 1 of dftools.",
"# Import relevant libraries\n%matplotlib inline\n\nimport pydftools as df\nimport time\n\n # Make figures a little bigger in the notebook\nimport matplotlib as mpl\nmpl.rcParams['figure.dpi'] = 120 \n\n# For displaying equations\nfrom IPython.display import display, Markdown",
"Choose some parameters to use throughout",
"n = 1000\nseed = 1234\nsigma = 0.5\nmodel =df.model.Schechter()\np_true = model.p0",
"Generate mock data with observing errors:",
"data, selection, model, other = df.mockdata(n = n, seed = seed, sigma = sigma, model=model, verbose=True)",
"Create a fitting object (the fit is not performed until the fit object is accessed):",
"survey = df.DFFit(data=data, selection=selection, model=model)",
"Perform the fit and get the best set of parameters:",
"start = time.time()\nprint(survey.fit.p_best)\nprint(\"Time for fitting: \", time.time() - start, \" seconds\")",
"Plot the covariances:",
"fig = df.plotting.plotcov([survey], p_true=p_true, figsize=1.3)",
"Plot the mass function itself:",
"fig, ax = df.mfplot(survey, xlim=(1e7,2e12), ylim=(1e-4,2), p_true = p_true, bin_xmin=7.5, bin_xmax=12)",
"Write out fitted parameters with (Gaussian) uncertainties:",
"display(Markdown(survey.fit_summary(format_for_notebook=True)))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
david-hoffman/scripts
|
notebooks/montecarlo_numbapro.ipynb
|
apache-2.0
|
[
"A Monte Carlo Option Pricer\nThis notebook introduces the vectorize and CUDA Python features in NumbaPro to speedup a monte carlo option pricer.\nA Numpy Implementation\nThe following is a NumPy implementatation of a simple monte carlo pricer.\nIt consists of two functions.\nThe mc_numpy function is the entry point of the pricer.\nThe entire simulation is divided into small time step dt.\nThe step_numpy function simulates the next batch of prices for each dt.",
"import numpy as np # numpy namespace\nfrom timeit import default_timer as timer # for timing\nfrom matplotlib import pyplot # for plotting\nimport math\n\ndef step_numpy(dt, prices, c0, c1, noises):\n return prices * np.exp(c0 * dt + c1 * noises)\n\ndef mc_numpy(paths, dt, interest, volatility):\n c0 = interest - 0.5 * volatility ** 2\n c1 = volatility * np.sqrt(dt)\n\n for j in range(1, paths.shape[1]): # for each time step\n prices = paths[:, j - 1] # last prices\n # gaussian noises for simulation\n noises = np.random.normal(0., 1., prices.size)\n # simulate\n paths[:, j] = step_numpy(dt, prices, c0, c1, noises)",
"Configurations",
"# stock parameter\n\nStockPrice = 20.83\nStrikePrice = 21.50\nVolatility = 0.021\nInterestRate = 0.20\nMaturity = 5. / 12.\n\n# monte-carlo parameter \n\nNumPath = 3000000\nNumStep = 100\n\n# plotting\nMAX_PATH_IN_PLOT = 50",
"Driver\nThe driver measures the performance of the given pricer and plots the simulation paths.",
"def driver(pricer, do_plot=False):\n paths = np.zeros((NumPath, NumStep + 1), order='F')\n paths[:, 0] = StockPrice\n DT = Maturity / NumStep\n\n ts = timer()\n pricer(paths, DT, InterestRate, Volatility)\n te = timer()\n elapsed = te - ts\n\n ST = paths[:, -1]\n PaidOff = np.maximum(paths[:, -1] - StrikePrice, 0)\n print('Result')\n fmt = '%20s: %s'\n print(fmt % ('stock price', np.mean(ST)))\n print(fmt % ('standard error', np.std(ST) / np.sqrt(NumPath)))\n print(fmt % ('paid off', np.mean(PaidOff)))\n optionprice = np.mean(PaidOff) * np.exp(-InterestRate * Maturity)\n print(fmt % ('option price', optionprice))\n\n print('Performance')\n NumCompute = NumPath * NumStep\n print(fmt % ('Mstep/second', '%.2f' % (NumCompute / elapsed / 1e6)))\n print(fmt % ('time elapsed', '%.3fs' % (te - ts)))\n\n if do_plot:\n pathct = min(NumPath, MAX_PATH_IN_PLOT)\n for i in range(pathct):\n pyplot.plot(paths[i])\n print('Plotting %d/%d paths' % (pathct, NumPath))\n pyplot.show()\n return elapsed",
"Result",
"numpy_time = driver(mc_numpy, do_plot=True)",
"Basic Vectorize\nThe vectorize decorator compiles a scalar function into a Numpy ufunc-like object for operation on arrays.\nThe decorator must be provided with a list of possible signatures.\nThe step_cpuvec takes 5 double arrays and return a double array.",
"from numbapro import vectorize\n\n@vectorize(['f8(f8, f8, f8, f8, f8)'])\ndef step_cpuvec(last, dt, c0, c1, noise):\n return last * math.exp(c0 * dt + c1 * noise)\n\ndef mc_cpuvec(paths, dt, interest, volatility):\n c0 = interest - 0.5 * volatility ** 2\n c1 = volatility * np.sqrt(dt)\n\n for j in range(1, paths.shape[1]):\n prices = paths[:, j - 1]\n noises = np.random.normal(0., 1., prices.size)\n paths[:, j] = step_cpuvec(prices, dt, c0, c1, noises)\n\ncpuvec_time = driver(mc_cpuvec, do_plot=True)",
"Parallel Vectorize\nBy setting the target to parallel, the vectorize decorator produces a multithread implementation.",
"@vectorize(['f8(f8, f8, f8, f8, f8)'], target='parallel')\ndef step_parallel(last, dt, c0, c1, noise):\n return last * math.exp(c0 * dt + c1 * noise)\n\ndef mc_parallel(paths, dt, interest, volatility):\n c0 = interest - 0.5 * volatility ** 2\n c1 = volatility * np.sqrt(dt)\n\n for j in range(1, paths.shape[1]):\n prices = paths[:, j - 1]\n noises = np.random.normal(0., 1., prices.size)\n paths[:, j] = step_parallel(prices, dt, c0, c1, noises)\n\nparallel_time = driver(mc_parallel, do_plot=True)",
"CUDA Vectorize\nTo take advantage of the CUDA GPU, user can simply set the target to gpu.\nThere are no different other than the target keyword argument.",
"@vectorize(['f8(f8, f8, f8, f8, f8)'], target='gpu')\ndef step_gpuvec(last, dt, c0, c1, noise):\n return last * math.exp(c0 * dt + c1 * noise)\n\ndef mc_gpuvec(paths, dt, interest, volatility):\n c0 = interest - 0.5 * volatility ** 2\n c1 = volatility * np.sqrt(dt)\n\n for j in range(1, paths.shape[1]):\n prices = paths[:, j - 1]\n noises = np.random.normal(0., 1., prices.size)\n paths[:, j] = step_gpuvec(prices, dt, c0, c1, noises)\n\ngpuvec_time = driver(mc_gpuvec, do_plot=True)",
"In the above simple CUDA vectorize example, the speedup is not significant due to the memory transfer overhead. Since the kernel has relatively low compute intensity, explicit management of memory transfer would give a significant speedup.\nCUDA JIT\nThis implementation uses the CUDA JIT feature with explicit memory transfer and asynchronous kernel call. A cuRAND random number generator is used instead of the NumPy implementation.",
"from numbapro import cuda, jit\nfrom numbapro.cudalib import curand\n\n@jit('void(double[:], double[:], double, double, double, double[:])', target='gpu')\ndef step_cuda(last, paths, dt, c0, c1, normdist):\n i = cuda.grid(1)\n if i >= paths.shape[0]:\n return\n noise = normdist[i]\n paths[i] = last[i] * math.exp(c0 * dt + c1 * noise)\n\ndef mc_cuda(paths, dt, interest, volatility):\n n = paths.shape[0]\n\n blksz = cuda.get_current_device().MAX_THREADS_PER_BLOCK\n gridsz = int(math.ceil(float(n) / blksz))\n\n # instantiate a CUDA stream for queueing async CUDA cmds\n stream = cuda.stream()\n # instantiate a cuRAND PRNG\n prng = curand.PRNG(curand.PRNG.MRG32K3A, stream=stream)\n\n # Allocate device side array\n d_normdist = cuda.device_array(n, dtype=np.double, stream=stream)\n \n c0 = interest - 0.5 * volatility ** 2\n c1 = volatility * np.sqrt(dt)\n\n # configure the kernel\n # similar to CUDA-C: step_cuda<<<gridsz, blksz, 0, stream>>>\n step_cfg = step_cuda[gridsz, blksz, stream]\n \n # transfer the initial prices\n d_last = cuda.to_device(paths[:, 0], stream=stream)\n for j in range(1, paths.shape[1]):\n # call cuRAND to populate d_normdist with gaussian noises\n prng.normal(d_normdist, mean=0, sigma=1)\n # setup memory for new prices\n # device_array_like is like empty_like for GPU\n d_paths = cuda.device_array_like(paths[:, j], stream=stream)\n # invoke step kernel asynchronously\n step_cfg(d_last, d_paths, dt, c0, c1, d_normdist)\n # transfer memory back to the host\n d_paths.copy_to_host(paths[:, j], stream=stream)\n d_last = d_paths\n # wait for all GPU work to complete\n stream.synchronize()\n\ncuda_time = driver(mc_cuda, do_plot=True)",
"Performance Comparision",
"def perf_plot(rawdata, xlabels):\n data = [numpy_time / x for x in rawdata]\n idx = np.arange(len(data))\n fig = pyplot.figure()\n width = 0.5\n ax = fig.add_subplot(111)\n ax.bar(idx, data, width)\n ax.set_ylabel('normalized speedup')\n ax.set_xticks(idx + width / 2)\n ax.set_xticklabels(xlabels)\n ax.set_ylim(0.9)\n pyplot.show()\n\nperf_plot([numpy_time, cpuvec_time, parallel_time, gpuvec_time], \n ['numpy', 'cpu-vect', 'parallel-vect', 'gpu-vect'])\n\nperf_plot([numpy_time, cpuvec_time, parallel_time, gpuvec_time, cuda_time],\n ['numpy', 'cpu-vect', 'parallel-vect', 'gpu-vect', 'cuda'])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
probml/pyprobml
|
notebooks/book1/14/densenet_torch.ipynb
|
mit
|
[
"Please find jax implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/14/densenet_jax.ipynb\n<a href=\"https://colab.research.google.com/github/Nirzu97/pyprobml/blob/densenet-torch/notebooks/densenet_torch.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nDense networks\nWe implement DenseNet.\nBased on 7.7 of http://d2l.ai/chapter_convolutional-modern/densenet.html",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport math\nfrom IPython import display\n\ntry:\n import torch\nexcept ModuleNotFoundError:\n %pip install -qq torch\n import torch\ntry:\n import torchvision\nexcept ModuleNotFoundError:\n %pip install -qq torchvision\n import torchvision\nfrom torch import nn\nfrom torch.nn import functional as F\nfrom torch.utils import data\nfrom torchvision import transforms\n\nimport random\nimport os\nimport time\n\nnp.random.seed(seed=1)\ntorch.manual_seed(1)\n!mkdir figures # for saving plots",
"Dense blocks\nA conv block uses BN-activation-conv in order.",
"def conv_block(input_channels, num_channels):\n return nn.Sequential(\n nn.BatchNorm2d(input_channels), nn.ReLU(), nn.Conv2d(input_channels, num_channels, kernel_size=3, padding=1)\n )",
"A DenseBlock is a sequence of conv-blocks, each consuming as input all previous outputs.",
"class DenseBlock(nn.Module):\n def __init__(self, num_convs, input_channels, num_channels):\n super(DenseBlock, self).__init__()\n layer = []\n for i in range(num_convs):\n layer.append(conv_block(num_channels * i + input_channels, num_channels))\n self.net = nn.Sequential(*layer)\n\n def forward(self, X):\n for blk in self.net:\n Y = blk(X)\n # Concatenate the input and output of each block on the channel\n # dimension\n X = torch.cat((X, Y), dim=1)\n return X",
"Example: we start with 3 channels, make a DenseBlock with 2 conv-blocks each with 10 channels, to get an output with 23 channels.",
"blk = DenseBlock(2, 3, 10)\nX = torch.randn(4, 3, 8, 8)\nY = blk(X)\nY.shape",
"Transition layers\nTo prevent the number of channels exploding, we can add a transition layer, that uses 1x1 convolution. We can also reduce the spatial resolution using stride 2 average pooling.",
"def transition_block(input_channels, num_channels):\n return nn.Sequential(\n nn.BatchNorm2d(input_channels),\n nn.ReLU(),\n nn.Conv2d(input_channels, num_channels, kernel_size=1),\n nn.AvgPool2d(kernel_size=2, stride=2),\n )",
"Below we show an example where we map the 23 channels back down to 10, and halve the spatial dimensions.",
"blk = transition_block(23, 10)\nblk(Y).shape",
"Full model\nThe first part of the model is similar to resnet.",
"b1 = nn.Sequential(\n nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),\n nn.BatchNorm2d(64),\n nn.ReLU(),\n nn.MaxPool2d(kernel_size=3, stride=2, padding=1),\n)",
"The \"backbone\" is 4 dense-blocks, each of which has 4 conv-blocks with 32 channels. Since the number of channels increases for each conv-block, the parameter 32 is called the \"growth rate\". We insert a transition block between each dense-block to keep things from getting too large.",
"# `num_channels`: the current number of channels\nnum_channels = 64 # output of first part of model\ngrowth_rate = 32\nnum_convs_in_dense_blocks = [4, 4, 4, 4]\nblks = []\nfor i, num_convs in enumerate(num_convs_in_dense_blocks):\n blks.append(DenseBlock(num_convs, num_channels, growth_rate))\n # This is the number of output channels in the previous dense block\n num_channels += num_convs * growth_rate\n # A transition layer that halves the number of channels is added between\n # the dense blocks\n if i != len(num_convs_in_dense_blocks) - 1:\n blks.append(transition_block(num_channels, num_channels // 2))\n num_channels = num_channels // 2",
"Finally we add average pooling and an FC layer. We assume 10 classes, for MNIST.",
"net = nn.Sequential(\n b1,\n *blks,\n nn.BatchNorm2d(num_channels),\n nn.ReLU(),\n nn.AdaptiveMaxPool2d((1, 1)),\n nn.Flatten(),\n nn.Linear(num_channels, 10)\n)\n\nnet\n\nX = torch.rand(size=(1, 1, 224, 224))\nfor layer in net:\n X = layer(X)\n print(layer.__class__.__name__, \"output shape:\\t\", X.shape)\n\nX = torch.rand(size=(1, 1, 96, 96))\nfor layer in net:\n X = layer(X)\n print(layer.__class__.__name__, \"output shape:\\t\", X.shape)",
"Training\nWe fit the model to Fashion-MNIST. We rescale images from 28x28 to 96x96, so that the input to the final average pooling layer has size 3x3. We notice that the training speed is much less than for ResNet.",
"def load_data_fashion_mnist(batch_size, resize=None):\n \"\"\"Download the Fashion-MNIST dataset and then load it into memory.\"\"\"\n trans = [transforms.ToTensor()]\n if resize:\n trans.insert(0, transforms.Resize(resize))\n trans = transforms.Compose(trans)\n mnist_train = torchvision.datasets.FashionMNIST(root=\"../data\", train=True, transform=trans, download=True)\n mnist_test = torchvision.datasets.FashionMNIST(root=\"../data\", train=False, transform=trans, download=True)\n return (\n data.DataLoader(mnist_train, batch_size, shuffle=True, num_workers=4),\n data.DataLoader(mnist_test, batch_size, shuffle=False, num_workers=4),\n )\n\nclass Animator:\n \"\"\"For plotting data in animation.\"\"\"\n\n def __init__(\n self,\n xlabel=None,\n ylabel=None,\n legend=None,\n xlim=None,\n ylim=None,\n xscale=\"linear\",\n yscale=\"linear\",\n fmts=(\"-\", \"m--\", \"g-.\", \"r:\"),\n nrows=1,\n ncols=1,\n figsize=(3.5, 2.5),\n ):\n # Incrementally plot multiple lines\n if legend is None:\n legend = []\n display.set_matplotlib_formats(\"svg\")\n self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize)\n if nrows * ncols == 1:\n self.axes = [\n self.axes,\n ]\n # Use a lambda function to capture arguments\n self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)\n self.X, self.Y, self.fmts = None, None, fmts\n\n def add(self, x, y):\n # Add multiple data points into the figure\n if not hasattr(y, \"__len__\"):\n y = [y]\n n = len(y)\n if not hasattr(x, \"__len__\"):\n x = [x] * n\n if not self.X:\n self.X = [[] for _ in range(n)]\n if not self.Y:\n self.Y = [[] for _ in range(n)]\n for i, (a, b) in enumerate(zip(x, y)):\n if a is not None and b is not None:\n self.X[i].append(a)\n self.Y[i].append(b)\n self.axes[0].cla()\n for x, y, fmt in zip(self.X, self.Y, self.fmts):\n self.axes[0].plot(x, y, fmt)\n self.config_axes()\n display.display(self.fig)\n display.clear_output(wait=True)\n\n\nclass Timer:\n \"\"\"Record multiple running times.\"\"\"\n\n def __init__(self):\n self.times = []\n self.start()\n\n def start(self):\n \"\"\"Start the timer.\"\"\"\n self.tik = time.time()\n\n def stop(self):\n \"\"\"Stop the timer and record the time in a list.\"\"\"\n self.times.append(time.time() - self.tik)\n return self.times[-1]\n\n def avg(self):\n \"\"\"Return the average time.\"\"\"\n return sum(self.times) / len(self.times)\n\n def sum(self):\n \"\"\"Return the sum of time.\"\"\"\n return sum(self.times)\n\n def cumsum(self):\n \"\"\"Return the accumulated time.\"\"\"\n return np.array(self.times).cumsum().tolist()\n\n\nclass Accumulator:\n \"\"\"For accumulating sums over `n` variables.\"\"\"\n\n def __init__(self, n):\n self.data = [0.0] * n\n\n def add(self, *args):\n self.data = [a + float(b) for a, b in zip(self.data, args)]\n\n def reset(self):\n self.data = [0.0] * len(self.data)\n\n def __getitem__(self, idx):\n return self.data[idx]\n\ndef set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend):\n \"\"\"Set the axes for matplotlib.\"\"\"\n axes.set_xlabel(xlabel)\n axes.set_ylabel(ylabel)\n axes.set_xscale(xscale)\n axes.set_yscale(yscale)\n axes.set_xlim(xlim)\n axes.set_ylim(ylim)\n if legend:\n axes.legend(legend)\n axes.grid()\n\ndef try_gpu(i=0):\n \"\"\"Return gpu(i) if exists, otherwise return cpu().\"\"\"\n if torch.cuda.device_count() >= i + 1:\n return torch.device(f\"cuda:{i}\")\n return torch.device(\"cpu\")\n\ndef accuracy(y_hat, y):\n \"\"\"Compute the number of correct predictions.\"\"\"\n if len(y_hat.shape) 
> 1 and y_hat.shape[1] > 1:\n y_hat = torch.argmax(y_hat, axis=1)\n cmp_ = y_hat.type(y.dtype) == y\n return float(cmp_.type(y.dtype).sum())\n\n\ndef evaluate_accuracy_gpu(net, data_iter, device=None):\n \"\"\"Compute the accuracy for a model on a dataset using a GPU.\"\"\"\n if isinstance(net, torch.nn.Module):\n net.eval() # Set the model to evaluation mode\n if not device:\n device = next(iter(net.parameters())).device\n # No. of correct predictions, no. of predictions\n metric = Accumulator(2)\n for X, y in data_iter:\n X = X.to(device)\n y = y.to(device)\n metric.add(accuracy(net(X), y), y.numel())\n return metric[0] / metric[1]",
"Train function",
"def train(net, train_iter, test_iter, num_epochs, lr, device):\n \"\"\"Train a model with a GPU (defined in Chapter 6).\"\"\"\n\n def init_weights(m):\n if type(m) == nn.Linear or type(m) == nn.Conv2d:\n nn.init.xavier_uniform_(m.weight)\n\n net.apply(init_weights)\n print(\"training on\", device)\n net.to(device)\n optimizer = torch.optim.SGD(net.parameters(), lr=lr)\n loss = nn.CrossEntropyLoss()\n animator = Animator(xlabel=\"epoch\", xlim=[1, num_epochs], legend=[\"train loss\", \"train acc\", \"test acc\"])\n timer, num_batches = Timer(), len(train_iter)\n for epoch in range(num_epochs):\n # Sum of training loss, sum of training accuracy, no. of examples\n metric = Accumulator(3)\n net.train()\n for i, (X, y) in enumerate(train_iter):\n timer.start()\n optimizer.zero_grad()\n X, y = X.to(device), y.to(device)\n y_hat = net(X)\n l = loss(y_hat, y)\n l.backward()\n optimizer.step()\n with torch.no_grad():\n metric.add(l * X.shape[0], accuracy(y_hat, y), X.shape[0])\n timer.stop()\n train_l = metric[0] / metric[2]\n train_acc = metric[1] / metric[2]\n if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:\n animator.add(epoch + (i + 1) / num_batches, (train_l, train_acc, None))\n test_acc = evaluate_accuracy_gpu(net, test_iter)\n animator.add(epoch + 1, (None, None, test_acc))\n print(f\"loss {train_l:.3f}, train acc {train_acc:.3f}, \" f\"test acc {test_acc:.3f}\")\n print(f\"{metric[2] * num_epochs / timer.sum():.1f} examples/sec \" f\"on {str(device)}\")",
"Learning curve",
"lr, num_epochs, batch_size = 0.1, 10, 256\ntrain_iter, test_iter = load_data_fashion_mnist(batch_size, resize=96)\ntrain(net, train_iter, test_iter, num_epochs, lr, try_gpu())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tedunderwood/horizon
|
chapter3/notebooks/chapter3table3.ipynb
|
mit
|
[
"Chapter 3, Table 3\nThis notebook explains how I used the Harvard General Inquirer to streamline interpretation of a predictive model.\nI'm italicizing the word \"streamline\" because I want to emphasize that I place very little weight on the Inquirer: as I say in the text, \"The General Inquirer has no special authority, and I have tried not to make it a load-bearing element of this argument.\" \nTo interpret a model, I actually spend a lot of time looking at lists of features, as well as predictions about individual texts. But to explain my interpretation, I need some relatively simple summary. Given real-world limits on time and attention, going on about lists of individual words for five pages is rarely an option. So, although wordlists are crude and arbitrary devices, flattening out polysemy and historical change, I am willing to lean on them rhetorically, where I find that they do in practice echo observations I have made in other ways.\nI should also acknowledge that I'm not using the General Inquirer as it was designed to be used. The full version of this tool is not just a set of wordlists, it's a software package that tries to get around polysemy by disambiguating different word senses. I haven't tried to use it in that way: I think it would complicate my explanation, in order to project an impression of accuracy and precision that I don't particularly want to project. Instead, I have stressed that word lists are crude tools, and I'm using them only as crude approximations.\nThat said, how do I do it?\nTo start with, we'll load an array of modules. Some standard, some utilities that I've written myself.",
"# some standard modules\n\nimport csv, os, sys\nfrom collections import Counter\nimport numpy as np\nfrom scipy.stats import pearsonr\n\n# now a module that I wrote myself, located\n# a few directories up, in the software\n# library for this repository\n\nsys.path.append('../../lib')\nimport FileCabinet as filecab\n",
"Loading the General Inquirer.\nThis takes some doing, because the General Inquirer doesn't start out as a set of wordlists. I have to translate it into that form.\nI start by loading an English dictionary.",
"# start by loading the dictionary\n\ndictionary = set()\n\nwith open('../../lexicons/MainDictionary.txt', encoding = 'utf-8') as f:\n reader = csv.reader(f, delimiter = '\\t')\n for row in reader:\n word = row[0]\n count = int(row[2])\n if count < 10000:\n continue\n # that ignores very rare words\n # we end up with about 42,700 common ones\n else:\n dictionary.add(word)",
"The next stage is to translate the Inquirer. It begins as a table where word senses are row labels, and the Inquirer categories are columns (except for two columns at the beginning and two at the end). This is, by the way, the \"basic spreadsheet\" described at this site:\nhttp://www.wjh.harvard.edu/~inquirer/spreadsheet_guide.htm\nI translate this into a dictionary where the keys are Inquirer categories, and the values are sets of words associated with each category.\nBut to do that, I have to do some filtering and expanding. Different senses of a word are broken out in the spreadsheet thus:\nABOUT#1\nABOUT#2\nABOUT#3\netc.\nI need to separate the hashtag part. Also, because I don't want to allow rare senses of a word too much power, I ignore everything but the first sense of a word.\nHowever, I also want to allow singular verb forms and plural nouns to count. So there's some code below that expands words by adding -s -ed, etc to the end. See the suffixes defined below for more details. Note that I use the English dictionary to determine which possible forms are real words.",
"inquirer = dict()\n\nsuffixes = dict()\nsuffixes['verb'] = ['s', 'es', 'ed', 'd', 'ing']\nsuffixes['noun'] = ['s', 'es']\n\nallinquirerwords = set()\n\nwith open('../../lexicons/inquirerbasic.csv', encoding = 'utf-8') as f:\n reader = csv.DictReader(f)\n fields = reader.fieldnames[2:-2]\n for field in fields:\n inquirer[field] = set()\n\n for row in reader:\n term = row['Entry']\n\n if '#' in term:\n parts = term.split('#')\n word = parts[0].lower()\n sense = int(parts[1].strip('_ '))\n partialsense = True\n else:\n word = term.lower()\n sense = 0\n partialsense = False\n\n if sense > 1:\n continue\n # we're ignoring uncommon senses\n\n pos = row['Othtags']\n if 'Noun' in pos:\n pos = 'noun'\n elif 'SUPV' in pos:\n pos = 'verb'\n\n forms = {word}\n if pos == 'noun' or pos == 'verb':\n for suffix in suffixes[pos]:\n if word + suffix in dictionary:\n forms.add(word + suffix)\n if pos == 'verb' and word.rstrip('e') + suffix in dictionary:\n forms.add(word.rstrip('e') + suffix)\n\n for form in forms:\n for field in fields:\n if len(row[field]) > 1:\n inquirer[field].add(form)\n allinquirerwords.add(form)\n \nprint('Inquirer loaded')\nprint('Total of ' + str(len(allinquirerwords)) + \" words.\")",
"Load model predictions about volumes\nThe next step is to create some vectors that store predictions about volumes. In this case, these are predictions about the probability that a volume is fiction, rather than biography.",
"# the folder where wordcounts will live\n# we're only going to load predictions\n# that correspond to files located there\nsourcedir = '../sourcefiles/'\n\ndocs = []\nlogistic = []\n\nwith open('../modeloutput/fullfiction.results.csv', encoding = 'utf-8') as f:\n reader = csv.DictReader(f)\n for row in reader:\n genre = row['realclass']\n docid = row['volid']\n if not os.path.exists(sourcedir + docid + '.tsv'):\n continue\n docs.append(row['volid'])\n logistic.append(float(row['logistic']))\n\nlogistic = np.array(logistic)\nnumdocs = len(docs)\n\nassert numdocs == len(logistic)\n\nprint(\"We have information about \" + str(numdocs) + \" volumes.\")",
"And get the wordcounts themselves\nThis cell of the notebook is very short (one line), but it takes a lot of time to execute. There's a lot of file i/o that happens inside the function get_wordfreqs, in the FileCabinet module, which is invoked here. We come away with a dictionary of wordcounts, keyed in the first instance by volume ID.\nNote that these are normalized frequencies rather than the raw integer counts we had in the analogous notebook in chapter 1.",
"wordcounts = filecab.get_wordfreqs(sourcedir, '.tsv', docs)",
"Now calculate the representation of each Inquirer category in each doc\nWe normalize by the total wordcount for a volume.\nThis cell also takes a long time to run. I've added a counter so you have some confidence that it's still running.",
"# Initialize empty category vectors\n\ncategories = dict()\nfor field in fields:\n categories[field] = np.zeros(numdocs)\n \n# Now fill them\n\nfor i, doc in enumerate(docs):\n ctcat = Counter()\n allcats = 0\n for word, count in wordcounts[doc].items():\n if word in dictionary:\n allcats += count\n \n if word not in allinquirerwords:\n continue\n for field in fields:\n if word in inquirer[field]:\n ctcat[field] += count\n for field in fields:\n categories[field][i] = ctcat[field] / (allcats + 0.00000001)\n # Laplacian smoothing there to avoid div by zero, among other things.\n # notice that, since these are normalized freqs, we need to use a very small decimal\n # If these are really normalized freqs, it may not matter very much\n # that we divide at all. The denominator should always be 1, more or less.\n # But I'm not 100% sure about that.\n \n if i % 100 == 1:\n print(i, allcats)",
"Calculate correlations\nNow that we have all the information, calculating correlations is easy. We iterate through Inquirer categories, in each case calculating the correlation between a vector of model predictions for docs, and a vector of category-frequencies for docs.",
"logresults = []\n\nfor inq_category in fields:\n l = pearsonr(logistic, categories[inq_category])[0]\n logresults.append((l, inq_category))\n\nlogresults.sort()\n",
"Load expanded names of Inquirer categories\nThe terms used in the inquirer spreadsheet are not very transparent. DAV for instance is \"descriptive action verbs.\" BodyPt is \"body parts.\" To make these more transparent, I have provided expanded names for many categories that turned out to be relevant in the book, trying to base my description on the accounts provided here: http://www.wjh.harvard.edu/~inquirer/homecat.htm\nWe load these into a dictionary.",
"short2long = dict()\nwith open('../../lexicons/long_inquirer_names.csv', encoding = 'utf-8') as f:\n reader = csv.DictReader(f)\n for row in reader:\n short2long[row['short_name']] = row['long_name']\n",
"Print results\nI print the top 12 correlations and the bottom 12, skipping categories that are drawn from the \"Laswell value dictionary.\" The Laswell categories are very finely discriminated (things like \"enlightenment gain\" or \"power loss\"), and I have little faith that they're meaningful. I especially doubt that they could remain meaningful when the Inquirer is used crudely as a source of wordlists.",
"print('Printing the correlations of General Inquirer categories')\nprint('with the predicted probabilities of being fiction in allsubset2.csv:')\nprint()\nprint('First, top positive correlations: ')\nprint()\nfor prob, n in reversed(logresults[-15 : ]):\n if n in short2long:\n n = short2long[n]\n if 'Laswell' in n:\n continue\n else:\n print(str(prob) + '\\t' + n)\n\nprint()\nprint('Now, negative correlations: ')\nprint()\nfor prob, n in logresults[0 : 15]:\n if n in short2long:\n n = short2long[n]\n if 'Laswell' in n:\n continue\n else:\n print(str(prob) + '\\t' + n)\n",
"Comments\nIf you compare the printout above to the book's version of Table 3.3, you may notice a few things have been dropped. In particular, I have skipped categories that contain a small number of words, like \"Sky\" (34). \"Sky\" is in effect rolled into \"natural objects.\"\n\"Verbs that imply an interpretation or explanation of an action\" has also been skipped--because I simply don't know how to convey that clearly in a table. In the Inquirer, there's a contrast between DAV and IAV, but it would take a paragraph to explain, and the whole point of this exercise is to produce something concise.\nHowever, on the whole, Table 3.3 corresponds very closely to the list above."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Hyperparticle/deep-learning-foundation
|
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis.ipynb
|
mit
|
[
"Sentiment analysis with TFLearn\nIn this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.\nWe'll start off by importing all the modules we'll need, then load and prepare the data.",
"import pandas as pd\nimport numpy as np\nimport tensorflow as tf\nimport tflearn\nfrom tflearn.data_utils import to_categorical",
"Preparing the data\nFollowing along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.\nRead the data\nUse the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.",
"reviews = pd.read_csv('reviews.txt', header=None)\nlabels = pd.read_csv('labels.txt', header=None)",
"Counting word frequency\nTo start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.\n\nExercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stores in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.",
"from collections import Counter\n\ntotal_counts = Counter(word for review in reviews.values for word in review[0].split(' '))\n\nprint(\"Total words in data set: \", len(total_counts))",
"Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.",
"vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]\nprint(vocab[:60])",
"What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.",
"print(vocab[-1], ': ', total_counts[vocab[-1]])",
"The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.\nNote: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.\nNow for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.\n\nExercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.",
"word2idx = {word: idx for idx,word in enumerate(vocab)}",
"Text to vector function\nNow we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:\n\nInitialize the word vector with np.zeros, it should be the length of the vocabulary.\nSplit the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.\nFor each word in that list, increment the element in the index associated with that word, which you get from word2idx.\n\nNote: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.",
"def text_to_vector(text):\n \n pass",
"If you do this right, the following code should return\n```\ntext_to_vector('The tea is for a party to celebrate '\n 'the movie so she has no time for a cake')[:65]\narray([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,\n 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])\n```",
"text_to_vector('The tea is for a party to celebrate '\n 'the movie so she has no time for a cake')[:65]",
"Now, run through our entire review data set and convert each review to a word vector.",
"word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)\nfor ii, (_, text) in enumerate(reviews.iterrows()):\n word_vectors[ii] = text_to_vector(text[0])\n\n# Printing out the first 5 word vectors\nword_vectors[:5, :23]",
"Train, Validation, Test sets\nNow that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.",
"Y = (labels=='positive').astype(np.int_)\nrecords = len(labels)\n\nshuffle = np.arange(records)\nnp.random.shuffle(shuffle)\ntest_fraction = 0.9\n\ntrain_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]\ntrainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)\ntestX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)\n\ntrainY",
"Building the network\nTFLearn lets you build the network by defining the layers. \nInput layer\nFor the input layer, you just need to tell it how many units you have. For example, \nnet = tflearn.input_data([None, 100])\nwould create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.\nThe number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.\nAdding layers\nTo add new hidden layers, you use \nnet = tflearn.fully_connected(net, n_units, activation='ReLU')\nThis adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).\nOutput layer\nThe last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.\nnet = tflearn.fully_connected(net, 2, activation='softmax')\nTraining\nTo set how you train the network, use \nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nAgain, this is passing in the network you've been building. The keywords: \n\noptimizer sets the training method, here stochastic gradient descent\nlearning_rate is the learning rate\nloss determines how the network error is calculated. In this example, with the categorical cross-entropy.\n\nFinally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like \nnet = tflearn.input_data([None, 10]) # Input\nnet = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden\nnet = tflearn.fully_connected(net, 2, activation='softmax') # Output\nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nmodel = tflearn.DNN(net)\n\nExercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.",
"# Network building\ndef build_model():\n # This resets all parameters and variables, leave this here\n tf.reset_default_graph()\n \n #### Your code ####\n \n model = tflearn.DNN(net)\n return model",
"Intializing the model\nNext we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.\n\nNote: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.",
"model = build_model()",
"Training the network\nNow that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.\nYou can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.",
"# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)",
"Testing\nAfter you're satisified with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.",
"predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)\ntest_accuracy = np.mean(predictions == testY[:,0], axis=0)\nprint(\"Test accuracy: \", test_accuracy)",
"Try out your own text!",
"# Helper function that uses your model to predict sentiment\ndef test_sentence(sentence):\n positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]\n print('Sentence: {}'.format(sentence))\n print('P(positive) = {:.3f} :'.format(positive_prob), \n 'Positive' if positive_prob > 0.5 else 'Negative')\n\nsentence = \"Moonlight is by far the best movie of 2016.\"\ntest_sentence(sentence)\n\nsentence = \"It's amazing anyone could be talented enough to make something this spectacularly awful\"\ntest_sentence(sentence)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/tf-estimator-tutorials
|
02_Classification/04.0 - TF Classification Model - Custom Estimator + Experiment + Dataset + CSV.ipynb
|
apache-2.0
|
[
"import tensorflow as tf\nfrom tensorflow import data\nimport numpy as np\nimport shutil\nimport math\nfrom datetime import datetime\nfrom tensorflow.python.feature_column import feature_column\n\nfrom tensorflow.contrib.learn import learn_runner\nfrom tensorflow.contrib.learn import make_export_strategy\n\nprint(tf.__version__)",
"Steps to use the TF Experiment APIs\n\nDefine dataset metadata\nDefine data input function to read the data from .tfrecord files + feature processing\nCreate TF feature columns based on metadata + extended feature columns\nDefine an a model function with the required feature columns, EstimatorSpecs, & parameters\nRun an Experiment with learn_runner to train, evaluate, and export the model\nEvaluate the model using test data\nPerform predictions & serving the exported model (using CSV/JSON input)",
"MODEL_NAME = 'class-model-02'\n\nTRAIN_DATA_FILES_PATTERN = 'data/train-*.csv'\nVALID_DATA_FILES_PATTERN = 'data/valid-*.csv'\nTEST_DATA_FILES_PATTERN = 'data/test-*.csv'\n\nRESUME_TRAINING = False\nPROCESS_FEATURES = True\nEXTEND_FEATURE_COLUMNS = True\nMULTI_THREADING = True",
"1. Define Dataset Metadata\n\ntf.example feature names and defaults\nNumeric and categorical feature names\nTarget feature name\nTarget feature labels\nUnused features",
"HEADER = ['key','x','y','alpha','beta','target']\nHEADER_DEFAULTS = [[0], [0.0], [0.0], ['NA'], ['NA'], ['NA']]\n\nNUMERIC_FEATURE_NAMES = ['x', 'y'] \n\nCATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY = {'alpha':['ax01', 'ax02'], 'beta':['bx01', 'bx02']}\nCATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.keys())\n\nFEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES\n\nTARGET_NAME = 'target'\n\nTARGET_LABELS = ['positive', 'negative']\n\nUNUSED_FEATURE_NAMES = list(set(HEADER) - set(FEATURE_NAMES) - {TARGET_NAME})\n\nprint(\"Header: {}\".format(HEADER))\nprint(\"Numeric Features: {}\".format(NUMERIC_FEATURE_NAMES))\nprint(\"Categorical Features: {}\".format(CATEGORICAL_FEATURE_NAMES))\nprint(\"Target: {} - labels: {}\".format(TARGET_NAME, TARGET_LABELS))\nprint(\"Unused Features: {}\".format(UNUSED_FEATURE_NAMES))",
"2. Define Data Input Function\n\nInput csv files name pattern\nUse TF Dataset APIs to read and process the data\nParse CSV lines to feature tensors\nApply feature processing\nReturn (features, target) tensors\n\na. Parsing and preprocessing logic",
"def parse_csv_row(csv_row):\n \n columns = tf.decode_csv(csv_row, record_defaults=HEADER_DEFAULTS)\n features = dict(zip(HEADER, columns))\n \n for column in UNUSED_FEATURE_NAMES:\n features.pop(column)\n \n target = features.pop(TARGET_NAME)\n\n return features, target\n\ndef process_features(features):\n\n features[\"x_2\"] = tf.square(features['x'])\n features[\"y_2\"] = tf.square(features['y'])\n features[\"xy\"] = tf.multiply(features['x'], features['y']) # features['x'] * features['y']\n features['dist_xy'] = tf.sqrt(tf.squared_difference(features['x'],features['y']))\n \n return features",
"b. Data pipeline input function",
"def parse_label_column(label_string_tensor):\n table = tf.contrib.lookup.index_table_from_tensor(tf.constant(TARGET_LABELS))\n return table.lookup(label_string_tensor)\n\ndef csv_input_fn(files_name_pattern, mode=tf.estimator.ModeKeys.EVAL, \n skip_header_lines=0, \n num_epochs=None, \n batch_size=200):\n \n shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False\n \n print(\"\")\n print(\"* data input_fn:\")\n print(\"================\")\n print(\"Input file(s): {}\".format(files_name_pattern))\n print(\"Batch size: {}\".format(batch_size))\n print(\"Epoch Count: {}\".format(num_epochs))\n print(\"Mode: {}\".format(mode))\n print(\"Shuffle: {}\".format(shuffle))\n print(\"================\")\n print(\"\")\n\n file_names = tf.matching_files(files_name_pattern)\n dataset = data.TextLineDataset(filenames=file_names)\n \n dataset = dataset.skip(skip_header_lines)\n \n if shuffle:\n dataset = dataset.shuffle(buffer_size=2 * batch_size + 1)\n \n dataset = dataset.batch(batch_size)\n dataset = dataset.map(lambda csv_row: parse_csv_row(csv_row))\n \n if PROCESS_FEATURES:\n dataset = dataset.map(lambda features, target: (process_features(features), target))\n \n dataset = dataset.repeat(num_epochs)\n iterator = dataset.make_one_shot_iterator()\n \n features, target = iterator.get_next()\n return features, parse_label_column(target)\n\nfeatures, target = csv_input_fn(files_name_pattern=\"\")\nprint(\"Feature read from CSV: {}\".format(list(features.keys())))\nprint(\"Target read from CSV: {}\".format(target))",
"3. Define Feature Columns",
"def extend_feature_columns(feature_columns, hparams):\n \n num_buckets = hparams.num_buckets\n embedding_size = hparams.embedding_size\n\n buckets = np.linspace(-3, 3, num_buckets).tolist()\n\n alpha_X_beta = tf.feature_column.crossed_column(\n [feature_columns['alpha'], feature_columns['beta']], 4)\n\n x_bucketized = tf.feature_column.bucketized_column(\n feature_columns['x'], boundaries=buckets)\n\n y_bucketized = tf.feature_column.bucketized_column(\n feature_columns['y'], boundaries=buckets)\n\n x_bucketized_X_y_bucketized = tf.feature_column.crossed_column(\n [x_bucketized, y_bucketized], num_buckets**2)\n\n x_bucketized_X_y_bucketized_embedded = tf.feature_column.embedding_column(\n x_bucketized_X_y_bucketized, dimension=embedding_size)\n\n\n feature_columns['alpha_X_beta'] = alpha_X_beta\n feature_columns['x_bucketized_X_y_bucketized'] = x_bucketized_X_y_bucketized\n feature_columns['x_bucketized_X_y_bucketized_embedded'] = x_bucketized_X_y_bucketized_embedded\n \n return feature_columns\n \n\ndef get_feature_columns(hparams):\n \n CONSTRUCTED_NUMERIC_FEATURES_NAMES = ['x_2', 'y_2', 'xy', 'dist_xy']\n all_numeric_feature_names = NUMERIC_FEATURE_NAMES.copy() \n \n if PROCESS_FEATURES:\n all_numeric_feature_names += CONSTRUCTED_NUMERIC_FEATURES_NAMES\n\n numeric_columns = {feature_name: tf.feature_column.numeric_column(feature_name)\n for feature_name in all_numeric_feature_names}\n\n categorical_column_with_vocabulary = \\\n {item[0]: tf.feature_column.categorical_column_with_vocabulary_list(item[0], item[1])\n for item in CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.items()}\n \n feature_columns = {}\n\n if numeric_columns is not None:\n feature_columns.update(numeric_columns)\n\n if categorical_column_with_vocabulary is not None:\n feature_columns.update(categorical_column_with_vocabulary)\n \n if EXTEND_FEATURE_COLUMNS:\n feature_columns = extend_feature_columns(feature_columns, hparams)\n \n return feature_columns\n\nfeature_columns = get_feature_columns(tf.contrib.training.HParams(num_buckets=5,embedding_size=3))\nprint(\"Feature Columns: {}\".format(feature_columns))",
"4. Define Model Function",
"def get_input_layer_feature_columns(hparams):\n \n feature_columns = list(get_feature_columns(hparams).values())\n \n dense_columns = list(\n filter(lambda column: isinstance(column, feature_column._NumericColumn) |\n isinstance(column, feature_column._EmbeddingColumn),\n feature_columns\n )\n )\n\n categorical_columns = list(\n filter(lambda column: isinstance(column, feature_column._VocabularyListCategoricalColumn) |\n isinstance(column, feature_column._BucketizedColumn),\n feature_columns)\n )\n \n\n indicator_columns = list(\n map(lambda column: tf.feature_column.indicator_column(column),\n categorical_columns)\n )\n \n return dense_columns+indicator_columns\n\ndef classification_model_fn(features, labels, mode, params):\n\n hidden_units = params.hidden_units\n output_layer_size = len(TARGET_LABELS)\n\n feature_columns = get_input_layer_feature_columns(hparams)\n\n # Create the input layers from the feature columns\n input_layer = tf.feature_column.input_layer(features= features, \n feature_columns=feature_columns)\n\n\n # Create a fully-connected layer-stack based on the hidden_units in the params\n hidden_layers = tf.contrib.layers.stack(inputs= input_layer,\n layer= tf.contrib.layers.fully_connected,\n stack_args= hidden_units)\n\n # Connect the output layer (logits) to the hidden layer (no activation fn)\n logits = tf.layers.dense(inputs=hidden_layers, \n units=output_layer_size)\n\n # Reshape output layer to 1-dim Tensor to return predictions\n output = tf.squeeze(logits)\n\n # Provide an estimator spec for `ModeKeys.PREDICT`.\n if mode == tf.estimator.ModeKeys.PREDICT:\n probabilities = tf.nn.softmax(logits)\n predicted_indices = tf.argmax(probabilities, 1)\n\n # Convert predicted_indices back into strings\n predictions = {\n 'class': tf.gather(TARGET_LABELS, predicted_indices),\n 'probabilities': probabilities\n }\n export_outputs = {\n 'prediction': tf.estimator.export.PredictOutput(predictions)\n }\n \n # Provide an estimator spec for `ModeKeys.PREDICT` modes.\n return tf.estimator.EstimatorSpec(mode,\n predictions=predictions,\n export_outputs=export_outputs)\n\n # Calculate loss using softmax cross entropy\n loss = tf.reduce_mean(\n tf.nn.sparse_softmax_cross_entropy_with_logits(\n logits=logits, labels=labels))\n \n tf.summary.scalar('loss', loss)\n \n \n if mode == tf.estimator.ModeKeys.TRAIN:\n # Create Optimiser\n optimizer = tf.train.AdamOptimizer()\n\n # Create training operation\n train_op = optimizer.minimize(\n loss=loss, global_step=tf.train.get_global_step())\n\n # Provide an estimator spec for `ModeKeys.TRAIN` modes.\n return tf.estimator.EstimatorSpec(mode=mode,\n loss=loss, \n train_op=train_op)\n \n\n\n if mode == tf.estimator.ModeKeys.EVAL:\n probabilities = tf.nn.softmax(logits)\n predicted_indices = tf.argmax(probabilities, 1)\n\n # Return accuracy and area under ROC curve metrics\n labels_one_hot = tf.one_hot(\n labels,\n depth=len(TARGET_LABELS),\n on_value=True,\n off_value=False,\n dtype=tf.bool\n )\n \n eval_metric_ops = {\n 'accuracy': tf.metrics.accuracy(labels, predicted_indices),\n 'auroc': tf.metrics.auc(labels_one_hot, probabilities)\n }\n \n # Provide an estimator spec for `ModeKeys.EVAL` modes.\n return tf.estimator.EstimatorSpec(mode, \n loss=loss, \n eval_metric_ops=eval_metric_ops)\n\n\n\ndef create_estimator(run_config, hparams):\n estimator = tf.estimator.Estimator(model_fn=classification_model_fn, \n params=hparams, \n config=run_config)\n \n print(\"\")\n print(\"Estimator Type: {}\".format(type(estimator)))\n print(\"\")\n\n return 
estimator",
"6. Run Experiment\na. Define experiment function",
"def generate_experiment_fn(**experiment_args):\n\n def _experiment_fn(run_config, hparams):\n\n train_input_fn = lambda: csv_input_fn(\n TRAIN_DATA_FILES_PATTERN,\n mode = tf.estimator.ModeKeys.TRAIN,\n num_epochs=hparams.num_epochs,\n batch_size=hparams.batch_size\n )\n\n eval_input_fn = lambda: csv_input_fn(\n VALID_DATA_FILES_PATTERN,\n mode=tf.estimator.ModeKeys.EVAL,\n num_epochs=1,\n batch_size=hparams.batch_size\n )\n\n estimator = create_estimator(run_config, hparams)\n\n return tf.contrib.learn.Experiment(\n estimator,\n train_input_fn=train_input_fn,\n eval_input_fn=eval_input_fn,\n eval_steps=None,\n **experiment_args\n )\n\n return _experiment_fn",
"b. Set HParam and RunConfig",
"TRAIN_SIZE = 12000\nNUM_EPOCHS = 1 #1000\nBATCH_SIZE = 500\nNUM_EVAL = 1 #10\nCHECKPOINT_STEPS = int((TRAIN_SIZE/BATCH_SIZE) * (NUM_EPOCHS/NUM_EVAL))\n\nhparams = tf.contrib.training.HParams(\n num_epochs = NUM_EPOCHS,\n batch_size = BATCH_SIZE,\n hidden_units=[16, 12, 8],\n num_buckets = 6,\n embedding_size = 3,\n dropout_prob = 0.001)\n\nmodel_dir = 'trained_models/{}'.format(MODEL_NAME)\n\nrun_config = tf.contrib.learn.RunConfig(\n save_checkpoints_steps=CHECKPOINT_STEPS,\n tf_random_seed=19830610,\n model_dir=model_dir\n)\n\nprint(hparams)\nprint(\"Model Directory:\", run_config.model_dir)\nprint(\"\")\nprint(\"Dataset Size:\", TRAIN_SIZE)\nprint(\"Batch Size:\", BATCH_SIZE)\nprint(\"Steps per Epoch:\",TRAIN_SIZE/BATCH_SIZE)\nprint(\"Total Steps:\", (TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS)\nprint(\"Required Evaluation Steps:\", NUM_EVAL) \nprint(\"That is 1 evaluation step after each\",NUM_EPOCHS/NUM_EVAL,\" epochs\")\nprint(\"Save Checkpoint After\",CHECKPOINT_STEPS,\"steps\")",
"c. Define JSON serving function",
"def json_serving_input_fn():\n \n receiver_tensor = {}\n\n for feature_name in FEATURE_NAMES:\n dtype = tf.float32 if feature_name in NUMERIC_FEATURE_NAMES else tf.string\n receiver_tensor[feature_name] = tf.placeholder(shape=[None], dtype=dtype)\n\n if PROCESS_FEATURES:\n features = process_features(receiver_tensor)\n\n return tf.estimator.export.ServingInputReceiver(\n features, receiver_tensor)",
"d. Run the Experiment via learn_runner",
"if not RESUME_TRAINING:\n print(\"Removing previous artifacts...\")\n shutil.rmtree(model_dir, ignore_errors=True)\nelse:\n print(\"Resuming training...\") \n\n\ntf.logging.set_verbosity(tf.logging.INFO)\n \ntime_start = datetime.utcnow() \nprint(\"Experiment started at {}\".format(time_start.strftime(\"%H:%M:%S\")))\nprint(\".......................................\") \n\nlearn_runner.run(\n experiment_fn=generate_experiment_fn(\n\n export_strategies=[\n make_export_strategy(\n json_serving_input_fn,\n exports_to_keep=1,\n as_text=True\n )\n ]\n ),\n run_config=run_config,\n schedule=\"train_and_evaluate\",\n hparams=hparams\n)\n\ntime_end = datetime.utcnow() \nprint(\".......................................\")\nprint(\"Experiment finished at {}\".format(time_end.strftime(\"%H:%M:%S\")))\nprint(\"\")\ntime_elapsed = time_end - time_start\nprint(\"Experiment elapsed time: {} seconds\".format(time_elapsed.total_seconds()))\n ",
"6. Evaluate the Model",
"TRAIN_SIZE = 12000\nVALID_SIZE = 3000\nTEST_SIZE = 5000\ntrain_input_fn = lambda: csv_input_fn(files_name_pattern= TRAIN_DATA_FILES_PATTERN, \n mode= tf.estimator.ModeKeys.EVAL,\n batch_size= TRAIN_SIZE)\n\nvalid_input_fn = lambda: csv_input_fn(files_name_pattern= VALID_DATA_FILES_PATTERN, \n mode= tf.estimator.ModeKeys.EVAL,\n batch_size= VALID_SIZE)\n\ntest_input_fn = lambda: csv_input_fn(files_name_pattern= TEST_DATA_FILES_PATTERN, \n mode= tf.estimator.ModeKeys.EVAL,\n batch_size= TEST_SIZE)\n\nestimator = create_estimator(run_config, hparams)\n\ntrain_results = estimator.evaluate(input_fn=train_input_fn, steps=1)\nprint()\nprint(\"######################################################################################\")\nprint(\"# Train Measures: {}\".format(train_results))\nprint(\"######################################################################################\")\n\nvalid_results = estimator.evaluate(input_fn=valid_input_fn, steps=1)\nprint()\nprint(\"######################################################################################\")\nprint(\"# Valid Measures: {}\".format(valid_results))\nprint(\"######################################################################################\")\n\ntest_results = estimator.evaluate(input_fn=test_input_fn, steps=1)\nprint()\nprint(\"######################################################################################\")\nprint(\"# Test Measures: {}\".format(test_results))\nprint(\"######################################################################################\")",
"7. Prediction",
"import itertools\n\npredict_input_fn = lambda: csv_input_fn(files_name_pattern= TEST_DATA_FILES_PATTERN, \n mode= tf.estimator.ModeKeys.PREDICT,\n batch_size= 5)\n\npredictions = list(itertools.islice(estimator.predict(input_fn=predict_input_fn),5))\n\nprint(\"\")\n\nprint(\"* Predicted Classes: {}\".format(list(map(lambda item: item[\"class\"]\n ,predictions))))\n\nprint(\"* Predicted Probabilities: {}\".format(list(map(lambda item: list(item[\"probabilities\"])\n ,predictions))))",
"Serving Exported Model",
"import os\n\nexport_dir = model_dir +\"/export/Servo/\"\n\nsaved_model_dir = export_dir + \"/\" + os.listdir(path=export_dir)[-1] \n\nprint(saved_model_dir)\nprint(\"\")\n\npredictor_fn = tf.contrib.predictor.from_saved_model(\n export_dir = saved_model_dir,\n signature_def_key=\"prediction\"\n)\n\noutput = predictor_fn(\n {\n 'x': [0.5, -1],\n 'y': [1, 0.5],\n 'alpha': ['ax01', 'ax01'],\n 'beta': ['bx02', 'bx01']\n \n }\n)\nprint(output)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
craigrshenton/home
|
notebooks/notebook.ipynb
|
mit
|
[
"1.0 Load data from http://media.wiley.com/product_ancillary/6X/11186614/DOWNLOAD/ch01.zip, Concessions.xlsx",
"# code written in python_3. (for py_2.7 users some changes may be required)\n\nimport pandas # load pandas dataframe lib\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')\nimport numpy as np\n\n# find path to your Concessions.xlsx\n# df = short for dataframe == excel worksheet\n# zero indexing in python, so first worksheet = 0\ndf_sales = pandas.read_excel(open('C:/Users/craigrshenton/Desktop/Dropbox/excel_data_sci/ch01_complete/Concessions.xlsx','rb'), sheetname=0) \ndf_sales = df_sales.iloc[0:, 0:4]\ndf_sales.head() # use .head() to just show top 4 results\n\ndf_sales.dtypes # explore the dataframe\n\ndf_sales['Item'].head() # how to select a col\n\ndf_sales['Price'].describe() # basic stats",
"1.2 Calculate Actual Profit",
"df_sales = df_sales.assign(Actual_Profit = df_sales['Price']*df_sales['Profit']) # adds new col\ndf_sales.head()",
"1.3 Load data from 'Calories' worksheet and plot",
"# find path to your Concessions.xlsx \ndf_cals = pandas.read_excel(open('C:/Users/craigrshenton/Desktop/Dropbox/excel_data_sci/ch01_complete/Concessions.xlsx','rb'), sheetname=1) \ndf_cals = df_cals.iloc[0:14, 0:2] # take data from 'Calories' worksheet\ndf_cals.head()\n\ndf_cals = df_cals.set_index('Item') # index df by items\n# Items ranked by calories = .sort_values(by='Calories',ascending=True) \n# rot = axis rotation\nax = df_cals.sort_values(by='Calories',ascending=True).plot(kind='bar', title =\"Calories\",figsize=(15,5),legend=False, fontsize=10, alpha=0.75, rot=20,)\nplt.xlabel(\"\") # no x-axis lable\nplt.show()",
"1.4 add calorie data to sales worksheet",
"df_sales = df_sales.assign(Calories=df_sales['Item'].map(df_cals['Calories'])) # map num calories from df_cals per item in df_sales (==Vlookup)\ndf_sales.head()",
"1.5 pivot table: number of sales per item",
"pivot = pandas.pivot_table(df_sales, index=[\"Item\"], values=[\"Price\"], aggfunc=len) # len == 'count of price'\npivot.columns = ['Count'] # renames col\npivot.index.name = None # removes intex title which is not needed\npivot",
"1.6 pivot table: revenue per item / category",
"# revenue = price * number of sales\npivot = pandas.pivot_table(df_sales, index=[\"Item\"], values=[\"Price\"], columns=[\"Category\"], aggfunc=np.sum, fill_value='')\npivot.index.name = None\npivot.columns = pivot.columns.get_level_values(1) # sets cols to product categories\npivot\n\n# set up decision variables\nitems = df_cals.index.tolist()\nitems\n\ncost = dict(zip(df_cals.index, df_cals.Calories)) # calarific cost of each item\ncost\n\nfrom pulp import *\n# create the LinProg object, set up as a minimisation problem\nprob = pulp.LpProblem('Diet', pulp.LpMinimize)\n\nvars = LpVariable.dicts(\"Number of\",items, lowBound = 0, cat='Integer')\n# Obj Func\nprob += lpSum([cost[c]*vars[c] for c in items])\n\nprob += sum(vars[c] for c in items)\n\n# add constraint representing demand for soldiers\nprob += (lpSum([cost[c]*vars[c] for c in items]) == 2400)\n\nprint(prob)\n\nprob.solve()\n\n# Is the solution optimal?\nprint(\"Status:\", LpStatus[prob.status])\n# Each of the variables is printed with it's value\nfor v in prob.variables():\n print(v.name, \"=\", v.varValue)\n# The optimised objective function value is printed to the screen \nprint(\"Minimum Number of Items = \", value(prob.objective))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gabrielrezzonico/dogsandcats
|
notebooks/03 - VGG16 pre-trained model.ipynb
|
mit
|
[
"VGG16 pre-trained model\nIn order to explore tranfer learning we are going to use a VGG16 model pre-trained with the ImageNet dataset. The model we are going to use are the ones provided by keras. We are not going to used the pre-trained top layers, instead we are going to define our small Fully Connected Network. This FCN is going to be trained to try to use the features calculated by the pre-trained layers with the new dataset of \"cats and dogs\"\nCommon configuration",
"IMAGE_SIZE = (180,202) # The dimensions to which all images found will be resized.\nBATCH_SIZE = 16\nNUMBER_EPOCHS = 8\n\nTENSORBOARD_DIRECTORY = \"../logs/simple_model/tensorboard\"\nTRAIN_DIRECTORY = \"../data/train/\"\nVALID_DIRECTORY = \"../data/valid/\"\nWEIGHTS_DIRECTORY = \"../weights/\"\nTEST_DIRECTORY = \"../data/test/\"\n\nNUMBER_TRAIN_SAMPLES = 20000\nNUMBER_VALIDATION_SAMPLES = 5000\nNUMBER_TEST_SAMPLES = 2500\n\nPRECOMPUTED_DIRECTORY = \"../precomputed/vgg16/\"",
"Check that we are using the GPU:",
"from tensorflow.python.client import device_lib\ndef get_available_gpus():\n local_device_protos = device_lib.list_local_devices()\n return [x.name for x in local_device_protos if x.device_type == 'GPU']\n \nget_available_gpus()\n\nimport tensorflow as tf\n# Creates a graph.\nwith tf.device('/gpu:0'):\n a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')\n b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')\n c = tf.matmul(a, b)\n# Creates a session with log_device_placement set to True.\nsess = tf.Session(config=tf.ConfigProto(log_device_placement=True))\n# Runs the op.\nprint(sess.run(c))",
"Model\nModel definition",
"from keras.applications.vgg16 import VGG16\n\n# create the base pre-trained model\nbase_model = VGG16(weights='imagenet', include_top=False)",
"Base Model arquitecture\nWe have the following model arquitecture:",
"base_model.summary()",
"Complete model with FCN Classifier on top",
"from keras.layers import Dense, Dropout, GlobalAveragePooling2D\n\n# add a global spatial average pooling layer\nx = base_model.output\n\nx = GlobalAveragePooling2D()(x)\n\n# let's add a fully-connected layer\nx = Dense(64, activation='relu')(x)\nx = Dropout(0.3)(x)\n\n# and a logistic layer \npredictions = Dense(2, activation='softmax')(x)\n\nfrom keras.models import Model\n# this is the model we will train\nmodel = Model(inputs=base_model.input, outputs=predictions)",
"Set the non trainable layers\nWe are going to set all the vgg16 layers as non trainables:",
"TRAINABLE_LAST_LAYERS = 0\n\nassert TRAINABLE_LAST_LAYERS >= 0\n\n# first: train only the top layers (which were randomly initialized)\n# i.e. freeze all convolutional InceptionV3 layers\nif TRAINABLE_LAST_LAYERS == 0:\n for layer in base_model.layers:\n layer.trainable = False\n print(len(base_model.layers))\nelse:\n for layer in base_model.layers[:-TRAINABLE_LAST_LAYERS]:\n layer.trainable = False\n print(len(base_model.layers[:-TRAINABLE_LAST_LAYERS]))\n\nmodel.summary()\n\nimport pandas as pd\ndf = pd.DataFrame(([layer.name, layer.trainable] for layer in model.layers), columns=['layer', 'trainable'])\ndf",
"Training the top layer\nKeras callbacks",
"from keras.callbacks import EarlyStopping\nfrom keras.callbacks import TensorBoard\n\n# Early stop in case of getting worse\nearly_stop = EarlyStopping(monitor = 'val_loss', patience = 3, verbose = 0)\n\n#TensorBoard\n# run tensorboard with tensorboard --logdir=/full_path_to_your_logs\n#tensorboard_path = TENSORBOARD_DIRECTORY\n#tensorboard_logger = TensorBoard(log_dir=tensorboard_path, histogram_freq=0, write_graph=False, write_images=False)\n#print('Logging basic info to be used by TensorBoard to {}. To see this log run:'.format(tensorboard_path))\n#print('tensorboard --logdir={}'.format(tensorboard_path))\n\ncallbacks = [early_stop]#, tensorboard_logger]",
"Model optimizer",
"OPTIMIZER_LEARNING_RATE = 1e-2\nOPTIMIZER_DECAY = 1e-4\nOPTIMIZER_MOMENTUM = 0.89\nOPTIMIZER_NESTEROV_ENABLED = False\n\nfrom keras.optimizers import SGD\n\noptimizer = SGD(lr=OPTIMIZER_LEARNING_RATE, \n decay=OPTIMIZER_DECAY, \n momentum=OPTIMIZER_MOMENTUM, \n nesterov=OPTIMIZER_NESTEROV_ENABLED)",
"Model compilation",
"model.compile(loss='categorical_crossentropy', \n optimizer=optimizer, \n metrics=[\"accuracy\"])",
"Model Training\nTrain data generator",
"from keras.preprocessing.image import ImageDataGenerator\n\n## train generator with shuffle but no data augmentation\ntrain_datagen = ImageDataGenerator(rescale = 1./255,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True)\n\ntrain_batch_generator = train_datagen.flow_from_directory(TRAIN_DIRECTORY, \n target_size = IMAGE_SIZE,\n class_mode = 'categorical', \n batch_size = BATCH_SIZE)",
"Validation data generator",
"from keras.preprocessing.image import ImageDataGenerator\n\n## train generator with shuffle but no data augmentation\nvalidation_datagen = ImageDataGenerator(rescale = 1./255)\n\nvalid_batch_generator = validation_datagen.flow_from_directory(VALID_DIRECTORY, \n target_size = IMAGE_SIZE,\n class_mode = 'categorical', \n batch_size = BATCH_SIZE)",
"Model fitting",
"# fine-tune the model\nhist = model.fit_generator(\n train_batch_generator,\n steps_per_epoch=NUMBER_TRAIN_SAMPLES/BATCH_SIZE,\n epochs=NUMBER_EPOCHS, # epochs: Integer, total number of iterations on the data.\n validation_data=valid_batch_generator,\n validation_steps=NUMBER_VALIDATION_SAMPLES/BATCH_SIZE,\n callbacks=callbacks,\n verbose=2)",
"Filenames and labels\nSave and load classes and filenames:",
"(val_classes, trn_classes, val_labels, trn_labels, \n val_filenames, filenames, test_filenames) = get_all_classes()\n\nimport pickle\n\nfile = PRECOMPUTED_DIRECTORY + '/classes_and_filenames.dat'\n\n# Saving the objects:\nwith open(file, 'wb') as file: # Python 2: open(..., 'w')\n pickle.dump([val_classes, trn_classes, val_labels, trn_labels, \n val_filenames, filenames, test_filenames], file)\n ",
"Keras callbacks\nWe are going to define two callbacks that are going to be called in the training. EarlyStopping to stop the training if its not getting better. And a tensorboard callback to log information to be used by tensorboard.",
"from keras.callbacks import EarlyStopping\nfrom keras.callbacks import TensorBoard\n\n# Early stop in case of getting worse\nearly_stop = EarlyStopping(monitor = 'val_loss', patience = 3, verbose = 0)\n\n#TensorBoard\n# run tensorboard with tensorboard --logdir=/full_path_to_your_logs\ntensorboard_path = TENSORBOARD_DIRECTORY\ntensorboard_logger = TensorBoard(log_dir=tensorboard_path, histogram_freq=0, write_graph=False, write_images=False)\nprint('Logging basic info to be used by TensorBoard to {}. To see this log run:'.format(tensorboard_path))\nprint('tensorboard --logdir={}'.format(tensorboard_path))\n\ncallbacks = [early_stop, tensorboard_logger]",
"Model Optimizer",
"OPTIMIZER_LEARNING_RATE = 1e-2\nOPTIMIZER_DECAY = 1e-4 # LearningRate = LearningRate * 1/(1 + decay * epoch)\nOPTIMIZER_MOMENTUM = 0.89\nOPTIMIZER_NESTEROV_ENABLED = False\n\nfrom keras.optimizers import SGD\n\noptimizer = SGD(lr=OPTIMIZER_LEARNING_RATE, \n decay=OPTIMIZER_DECAY, \n momentum=OPTIMIZER_MOMENTUM, \n nesterov=OPTIMIZER_NESTEROV_ENABLED)",
"Compile the model",
"model.compile(loss='categorical_crossentropy', \n optimizer=optimizer, \\\n metrics=[\"accuracy\"])",
"Training\nTrain data generator",
"from keras.preprocessing.image import ImageDataGenerator\n\n## train generator with shuffle but no data augmentation\ntrain_datagen = ImageDataGenerator(rescale = 1./255)\n\ntrain_batch_generator = train_datagen.flow_from_directory(TRAIN_DIRECTORY, \n target_size = IMAGE_SIZE,\n class_mode = 'categorical', \n batch_size = BATCH_SIZE)",
"Validation data generator",
"from keras.preprocessing.image import ImageDataGenerator\n\n## train generator with shuffle but no data augmentation\nvalidation_datagen = ImageDataGenerator(rescale = 1./255)\n\nvalid_batch_generator = validation_datagen.flow_from_directory(VALID_DIRECTORY, \n target_size = IMAGE_SIZE,\n class_mode = 'categorical', \n batch_size = BATCH_SIZE)",
"Model fitting",
"# fine-tune the model\nhist = model.fit_generator(\n train_batch_generator,\n steps_per_epoch=NUMBER_TRAIN_SAMPLES/BATCH_SIZE,\n epochs=NUMBER_EPOCHS, # epochs: Integer, total number of iterations on the data.\n validation_data=valid_batch_generator,\n validation_steps=NUMBER_VALIDATION_SAMPLES/BATCH_SIZE,\n callbacks=callbacks,\n verbose=2)",
"Training plots",
"import matplotlib.pyplot as plt\n\n# summarize history for accuracy\nplt.figure(figsize=(15, 5))\nplt.subplot(1, 2, 1)\nplt.plot(hist.history['acc']); plt.plot(hist.history['val_acc']);\nplt.title('model accuracy'); plt.ylabel('accuracy');\nplt.xlabel('epoch'); plt.legend(['train', 'valid'], loc='upper left');\n\n# summarize history for loss\nplt.subplot(1, 2, 2)\nplt.plot(hist.history['loss']); plt.plot(hist.history['val_loss']);\nplt.title('model loss'); plt.ylabel('loss');\nplt.xlabel('epoch'); plt.legend(['train', 'valid'], loc='upper left');\nplt.show()",
"Plot a few examples\nEvaluate the model",
"############\n# load weights\n############\nmodel_save_path = WEIGHTS_DIRECTORY + 'vgg16_pretrained_v2.h5'\nprint(\"Loading weights from: {}\".format(model_save_path))\nmodel.load_weights(model_save_path)\n\nfrom keras.preprocessing.image import ImageDataGenerator\n\n## train generator with shuffle but no data augmentation\nvalidation_datagen = ImageDataGenerator(rescale = 1./255)\n\ntest_batch_generator = validation_datagen.flow_from_directory(TEST_DIRECTORY, \n target_size = IMAGE_SIZE,\n class_mode = 'categorical', \n batch_size = BATCH_SIZE)\n\nmodel.evaluate_generator(test_batch_generator,\n steps = NUMBER_TEST_SAMPLES/BATCH_SIZE)",
"Test the model",
"from keras.preprocessing.image import ImageDataGenerator\n\n## train generator with shuffle but no data augmentation\ntest_datagen = ImageDataGenerator(rescale = 1./255)\n\ntest_batch_generator = test_datagen.flow_from_directory(\n TEST_DIRECTORY,\n target_size = IMAGE_SIZE,\n batch_size=1,\n shuffle = False, # Important !!!\n classes = None,\n class_mode = None)\n\ntest_batch_generator.classes.shape\n\nimport pickle\ntest_classes_file = open(\"../results/vgg16_true.pickle\", \"wb\" )\npickle.dump( test_batch_generator.classes, test_classes_file )\n\ntrue_values = test_batch_generator.classes\n\nlen(test_batch_generator.filenames)\n\ntest_filenames = open(\"../results/vgg16_filenames.pickle\", \"wb\" )\npickle.dump( test_batch_generator.filenames, test_filenames )\n\nimport numpy as np\n\npred = []\n\nfor i in range(int(NUMBER_TEST_SAMPLES)):\n X = next(test_batch_generator) # get the next batch\n #print(X.shape)\n pred1 = model.predict(X, batch_size = 1, verbose = 0) #predict on a batch\n pred = pred + pred1.tolist()\n\nprobabilities = np.array(pred)\nprint(probabilities.shape)\nassert probabilities.shape == (NUMBER_TEST_SAMPLES, 2)\n\ntest_filenames = open(\"../results/vgg16_probabilities.pickle\", \"wb\")\npickle.dump( probabilities, test_filenames )\n\nprobabilities[0]\n\npredictions=np.argmax(probabilities,1)\n\ntest_filenames = open(\"../results/vgg16_predictions.pickle\", \"wb\" )\npickle.dump( predictions, test_filenames )\n\npredictions[0]\n\nimport matplotlib.pyplot as plt\n\ndef plot_confusion_matrix(cm, classes,\n normalize=False,\n title='Confusion matrix',\n cmap=plt.cm.Blues):\n \"\"\"\n This function prints and plots the confusion matrix.\n Normalization can be applied by setting `normalize=True`.\n \"\"\"\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(classes))\n plt.xticks(tick_marks, classes, rotation=45)\n plt.yticks(tick_marks, classes)\n\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n print(\"Normalized confusion matrix\")\n else:\n print('Confusion matrix, without normalization')\n\n print(cm)\n\n thresh = cm.max() / 2.\n for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n plt.text(j, i, cm[i, j],\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')\n\nimport itertools\nfrom sklearn.metrics import confusion_matrix\n\nclass_names = ['cat', 'dog']\ncnf_matrix = confusion_matrix(true_values, predictions)\n# Plot normalized confusion matrix\nplt.figure()\nplot_confusion_matrix(cnf_matrix, classes=class_names,\n title='Confusion matrix')\nplt.show()\n\nfrom numpy.random import random, permutation\n#1. A few correct labels at random\ncorrect = np.where(predictions==true_values)[0]\n\nidx = permutation(correct)[:4]\n#plots_idx(idx, probs[idx])\n\nlen(correct)\n\nfrom scipy import ndimage\nfrom PIL import Image\nimport matplotlib.pyplot as plt\n\nim = ndimage.imread(\"../data/test/\" + test_batch_generator.filenames[idx[0]])\nimage = Image.fromarray(im)\nplt.imshow(image)\nplt.title(probabilities[idx[0]])\nplt.show()\n\nim = ndimage.imread(\"../data/test/\" + test_batch_generator.filenames[idx[1]])\nimage = Image.fromarray(im)\nplt.imshow(image)\nplt.title(probabilities[idx[1]])\nplt.show()\n\nfrom numpy.random import random, permutation\n#1. 
A few correct labels at random\ncorrect = np.where(predictions != true_values)[0]\n\nidx = permutation(correct)[:4]\n#plots_idx(idx, probs[idx])\n\nim = ndimage.imread(\"../data/test/\" + test_batch_generator.filenames[idx[0]])\nimage = Image.fromarray(im)\nplt.imshow(image)\nplt.title(probabilities[idx[0]])\nplt.show()\n\nim = ndimage.imread(\"../data/test/\" + test_batch_generator.filenames[idx[1]])\nimage = Image.fromarray(im)\nplt.imshow(image)\nplt.title(probabilities[idx[1]])\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mohanbolisetty/MEDS6498-SingleCell
|
AnalysisNotebook.ipynb
|
mit
|
[
"%%bash\nrm -r Data\nmkdir Data\ncp ../ubuntu/ZZ17001a/mm10/barcodes.tsv Data/\ncp ../ubuntu/ZZ17001a/mm10/genes.tsv Data/\ncp ../ubuntu/ZZ17001a/mm10/matrix.mtx Data/\n\n%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport scipy.io\nimport scipy.stats as stats\nfrom statsmodels.robust.scale import mad\npd.core.config.option_context('mode.use_inf_as_null',True)\nimport seaborn as sns\nimport os \nimport sys\nimport csv\nimport shlex\nimport subprocess\n\nsys.setrecursionlimit(10000)\n\nfrom plotly.graph_objs import Scatter3d, Data, Marker,Layout, Figure, Scene, XAxis, YAxis, ZAxis\nimport plotly.plotly as py\nfrom plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\n\ninit_notebook_mode(connected=True)\n\nmatplotlib.rcParams['axes.edgecolor']='k'\nmatplotlib.rcParams['axes.linewidth']=3\nmatplotlib.rcParams['axes.spines.top']='off'\nmatplotlib.rcParams['axes.spines.right']='off'\nmatplotlib.rcParams['axes.facecolor']='white'\n\ndef read10X(path):\n mat = scipy.io.mmread(os.path.join(path,\"matrix.mtx\"))\n \n genes_path =os.path.join(path,\"genes.tsv\")\n gene_ids = [row[0] for row in csv.reader(open(genes_path), delimiter=\"\\t\")]\n gene_names = [row[1] for row in csv.reader(open(genes_path), delimiter=\"\\t\")]\n\n barcodes_path = os.path.join(path,\"barcodes.tsv\")\n barcodes = [row[0] for row in csv.reader(open(barcodes_path), delimiter=\"\\t\")]\n \n featureData=pd.DataFrame(data=gene_names, index=gene_ids, columns=['Associated.Gene.Name'])\n \n counts=pd.DataFrame(index=gene_ids,columns=barcodes,data=mat.todense())\n \n return counts, featureData\n\ndef filterCells(counts):\n umi_counts=counts.sum()\n cells1000=umi_counts[umi_counts>500].index\n \n return cells1000\n\ndef filterGenes(counts):\n filteredGenes=counts.index[(counts >= 2).sum(1) >=2]\n return filteredGenes\n\ndef plotQC(counts):\n genesdetected=(counts>=1.).sum()\n umi_counts=counts.sum()\n fig,(ax,ax1)=plt.subplots(1,2,figsize=(10, 5))\n\n genesdetected.plot(kind='hist',bins=np.arange(0,5000,100),lw=0,ax=ax)\n ax.grid('off')\n ax.patch.set_facecolor('white')\n ax.axvline(x=np.median(genesdetected),ls='--',lw=2,c='k')\n ax.set_xlabel('Genes',fontsize=13)\n ax.set_ylabel('Cells',fontsize=13)\n\n umi_counts.plot(kind='hist',bins=np.arange(0,10000,500),lw=0,ax=ax1,color=sns.color_palette()[1])\n ax1.grid('off')\n ax1.patch.set_facecolor('white')\n ax1.axvline(x=np.median(umi_counts),ls='--',lw=2,c='k')\n ax1.set_xlabel('Transcripts - UMI',fontsize=13)\n ax1.set_ylabel('Cells',fontsize=13)\n\ndef normalize(counts):\n cells1000=filterCells(counts)\n filteredGenes=filterGenes(counts)\n umi_counts=counts.sum()\n \n cpt=counts*np.median(umi_counts)/umi_counts\n cpt=cpt.loc[filteredGenes,cells1000]\n cpt=(cpt+1).apply(np.log)\n \n return cpt\n\ndef overdispersion(cpt,nGenes):\n \n meanExpression=np.log(np.mean(np.exp(cpt)-1,1)+1)\n dispersion=np.log(np.var(np.exp(cpt)-1,1)/np.mean(np.exp(cpt)-1,1))\n bins = np.linspace(min(meanExpression),max(meanExpression),20)\n pos = np.digitize(meanExpression, bins)\n overDispersion=[]\n\n for index,gene in enumerate(meanExpression.index):\n medianBin=dispersion[pos==pos[index]].median()\n madBin=mad(dispersion[pos==pos[index]])\n normalizedDispersion=abs(dispersion.ix[gene]-medianBin)/madBin\n overDispersion.append([ gene, normalizedDispersion ])\n\n overDispersion=pd.DataFrame(overDispersion)\n overDispersion.set_index(0,inplace=True)\n 
top1000=overDispersion.sort_values(1,ascending=False)[:nGenes].index\n \n return top1000\n \ndef variance(cpt,nGenes):\n variance=cpt.var(1)\n top1000=variance.sort_values(inplace=True,ascending=False)[:nGenes].index\n \n return top1000\n \ndef runTSNE(cpt,genes):\n np.savetxt('Data/filtered.tsv', cpt.loc[top1000].T.values, delimiter='\\t')\n cmd='/Users/mby/Downloads/bhtsne-master/bhtsne.py -d 3 -i Data/filtered.tsv --no_pca -r 1024 -o Data/out.tsv'\n cmd=shlex.split(cmd) \n proc=subprocess.Popen(cmd,stdout=subprocess.PIPE,stderr=subprocess.PIPE)\n stdout, stderr=proc.communicate()\n tsne=np.loadtxt('Data/out.tsv')\n tsneData=pd.DataFrame(tsne,index=cpt.columns, columns=['V1','V2','V3'])\n return tsneData\n\ndef PCA(cpt,genes):\n from sklearn.decomposition import PCA as sklearnPCA\n sklearn_pca = sklearnPCA(n_components=50)\n Y_sklearn = sklearn_pca.fit_transform(cpt.ix[top1000].T)\n \n pcaData=pd.DataFrame(Y_sklearn,index=cpt.columns)\n \n eig_vals=sklearn_pca.explained_variance_\n tot = sum(eig_vals)\n var_exp = [(i / tot)*100 for i in sorted(eig_vals, reverse=True)]\n cum_var_exp = np.cumsum(var_exp)\n \n return pcaData,cum_var_exp\n\ndef getEnsid(featureData,gene):\n return featureData[featureData['Associated.Gene.Name']==gene].index\n\ndef plotTSNE(cpt,tsnedata,gene,featureData,dim1,dim2):\n fig,ax=plt.subplots(1)\n ax.scatter(tsnedata[dim1],tsnedata[dim2],c=cpt.loc[getEnsid(featureData,gene),],s=10, \n linewidths=1, cmap=plt.cm.Greens,vmax=2,vmin=0.1)\n ax.set_title(gene)\n \n #return fig\n\ndef dbscan(tsnedata,eps,minCells):\n from sklearn.cluster import DBSCAN\n db = DBSCAN(eps=eps, min_samples=minCells).fit(tsnedata.values)\n tsnedata['dbCluster'] = db.labels_+1\n \n return tsnedata\n\ndef plotTSNEClusters(tsnedata,dim1,dim2):\n colors=['#a6cee3','#1f78b4','#b2df8a',\n '#33a02c','#fb9a99','#e31a1c',\n '#fdbf6f','#ff7f00','#cab2d6',\n '#6a3d9a','#ffff99','#b15928',\n '#000000','#bdbdbd','#ffff99']\n\n k2=sns.lmplot(dim1, dim2, data=tsnedata, hue='dbCluster', fit_reg=False,palette=colors,scatter_kws={\"s\": 5})\n k2.ax.grid('off')\n k2.ax.patch.set_facecolor('white')\n #k2.savefig('../Figures/TSNE-KM.pdf',format='pdf',dpi=300)\n\ndef mkRds(cpt,featureData,tsnedata):\n \n cpt.to_csv('Data/Expression-G.csv')\n featureData['Chromosome.Name']=1\n featureData.to_csv('Data/MM10_10X-FeatureData.csv')\n tsnedata.to_csv('Data/TSNEData-Dbscan.csv')\n \n rscript='''\n rm(list=ls())\n\n setwd('%s')\n\n log2cpm<-read.csv('%s',row.names=1,stringsAsFactors = F, as.is=T, check.names=F)\n featuredata<-read.csv('%s',row.names=1,stringsAsFactors = F, as.is=T,sep=',',check.names=F)\n tsne.data<-read.csv('%s',row.names=1,stringsAsFactors = F,as.is=T,check.names=F)\n\n save(log2cpm,featuredata,tsne.data,file='%s')\n\n '''%(os.getcwd(),'Data/Expression-G.csv','Data/MM10_10X-FeatureData.csv',\n 'Data/TSNEData-Dbscan.csv','Data/Data.Rds')\n \n with open('Data/setupRds.R','w') as fout:\n fout.writelines(rscript)\n \n cmd='R --no-save -f Data/setupRds.R'\n os.system(cmd)\n\ndef tsne3d(tsnedata):\n walkers=[]\n colors=['#a6cee3','#1f78b4','#b2df8a','#33a02c','#fb9a99','#e31a1c','#fdbf6f','#ff7f00','#cab2d6',\n '#6a3d9a','#ffff99','#b15928','#000000','#bdbdbd','#ffff99']\n colors=colors*3\n\n for ii in range(0,44,1):\n tsne_subset=tsnedata[tsnedata['dbCluster']==ii]\n\n cellnames=tsne_subset.index\n a=tsne_subset['V1'].values\n b=tsne_subset['V2'].values\n c=tsne_subset['V3'].values\n\n trace = Scatter3d(\n x=a,\n y=b,\n z=c,\n text=['CellName: %s' %(i) for i in cellnames],\n mode='markers',\n name=ii,\n 
marker=dict(\n color=colors[ii],\n size=3,\n symbol='circle',\n line=dict(\n color=colors[ii],\n width=0\n )\n ))\n walkers.append(trace)\n\n data = Data(walkers)\n\n layout = Layout(\n title='BS16001-TE1',\n hovermode='closest',\n\n xaxis=dict(\n title='TSNE-1',\n ticklen=0,\n showline=True,\n zeroline=True\n ),\n yaxis=dict(\n title='TSNE-2',\n ticklen=5,\n ), \n scene=Scene(\n xaxis=XAxis(title='TSNE-1',showgrid=True,zeroline=True,showticklabels=True),\n yaxis=YAxis(title='TSNE-2',showgrid=True,zeroline=True,showticklabels=True),\n zaxis=ZAxis(title='TSNE-3',showgrid=True,zeroline=True,showticklabels=True)\n )\n )\n\n\n fig = Figure(data=data, layout=layout)\n iplot(fig)\n \ndef findMarkers(cpt,cells1,cells2,genes):\n aucScores=[]\n from sklearn import metrics\n for gene in genes:\n y=[1]*len(cells2)+[2]*len(cells1)\n pred = np.concatenate((cpt.loc[gene,cells2],cpt.loc[gene,cells1]))\n fpr, tpr, thresholds = metrics.roc_curve(y, pred, pos_label=2)\n aucScores.append(metrics.auc(fpr, tpr))\n return pd.DataFrame(aucScores,index=genes,columns=['Score'])\n\ndef expMean(x):\n return(np.log(np.mean(np.exp(x)-1)+1))\n\ndef markerHeatmap(cpt,genes,tsnedata_dbscan,featureData):\n hdata=cpt.loc[genes,].dropna()\n colorMap=dict(zip(range(1,8,1),sns.color_palette('Set1',9)))\n\n hetmap=sns.clustermap(hdata,z_score=0,yticklabels=False,vmin=-3,vmax=3,\\\n xticklabels=featureData.loc[genes,'Associated.Gene.Name']\n ,row_cluster=True,col_cluster=True\n ,col_colors=colorMap,metric='correlation'\n )\n b=plt.setp(hetmap.ax_heatmap.yaxis.get_majorticklabels(), rotation=0)\n\n",
"Read Input data",
"counts, featuredata=read10X('Data/')",
"Shape of dataset: Genes, Cells",
"counts.shape\n\nplotQC(counts)",
"Normalize data\nSince number of genes and transcipts detected is directly dependent on read depth, library size normalization is essential. This function will normalize gene expression based on total transcripts detected in each cell, multiply with a constant and log transform.",
"cpt=normalize(counts)",
"Feature Selection\nOne of the most important steps in single cell RNA seq processing, is selecting genes that describe most of the biological variance. However, this is confounded by the high levels of technical noise associated with single cell RNA-seq data. \nThis jupyter notebook contains 2 functions to enable feature selection:\n1. variance - select the top variable genes in the dataset\n2. overdispersion - select the top variable genes in the dataset corrected for technical variance",
"top1000=overdispersion(cpt,1000)",
"Dimensionality reduction\nAfter gene selection, the high dimensionality of single cell RNA-seq data is commnly reduced to cluster similar cells together. \nThis jupyter notebook contains 2 functions for dimensionality reduction:\n1. PCA \n2. tSNE - for the purposes of the demonstration, we will use tSNE and reduce data to 3 dimensions",
"tsnedata=runTSNE(cpt,top1000)\n\ntsnedata=pd.read_csv('')",
"Visualization\nVisualization is an important part of an single cell experiment. Exploring data with genes of interest helps validate clustering as wells begins the process of identifying the cell type of each cluster\nLets take a look at our dimensionality reduction by plotting cells.",
"plt.scatter(tsnedata['V2'],tsnedata['V3'],s=5)",
"Visualization\nVisualization is an important part of an single cell experiment. Exploring data with genes of interest helps validate clustering as wells begins the process of identifying the cell type of each cluster\nLets take a look at our dimensionality reduction by plotting cells, but this time color each cell by the expression of particular gene. Pick from Emcn, Olig1, Olig2, Pdgra, Fyn, Aqp4,Mog,Slc32a1,Slc17a6,Cx3cr1.",
"plotTSNE(cpt,tsnedata,'Snap25',featuredata,'V2','V3')",
"Cluster identification\nAfter dimensionality reduction, clusters are identified using a variety of approaches. We will use a simple algorithm called DBSCAN to identify clusters\nThis jupyter notebook contains 1 functions for dimensionality reduction:\n1. DBSCAN",
"tsnedata_dbscan=dbscan(tsnedata,3.2,20)",
"Visualization\nLets take a look at our dimensionality reduction by plotting cells, but this time color each cell by the cluster assignment as determined by DBSCAN",
"plotTSNEClusters(tsnedata_dbscan,'V2','V3')",
"Visualization\nLets take a look at our dimensionality reduction by plotting cells, but this time color each cell by the cluster assignment as determined by DBSCAN. Remember that our data was reduced to 3 dimensions. So, lets plot all 3 dimensions",
"walkers=[]\ncolors=['#a6cee3','#1f78b4','#b2df8a',\n '#33a02c','#fb9a99','#e31a1c',\n '#fdbf6f','#ff7f00','#cab2d6',\n '#6a3d9a','#ffff99','#b15928',\n '#000000','#bdbdbd','#ffff99']\n\nfor ii in range(0,44,1):\n tsne_subset=tsne[tsne['dbCluster']==ii]\n\n cellnames=tsne_subset.index\n a=tsne_subset['V1'].values\n b=tsne_subset['V2'].values\n c=tsne_subset['V3'].values\n\n trace = Scatter3d(\n x=a,\n y=b,\n z=c,\n text=['CellName: %s' %(i) for i in cellnames],\n mode='markers',\n name=ii,\n marker=dict(\n color=colors[ii],\n size=3,\n symbol='circle',\n line=dict(\n color=colors[ii],\n width=0\n )\n ))\n walkers.append(trace)\n\ndata = Data(walkers)\n\nlayout = Layout(\n title='BS16001-TE1',\n hovermode='closest',\n \n xaxis=dict(\n title='TSNE-1',\n ticklen=0,\n showline=True,\n zeroline=True\n ),\n yaxis=dict(\n title='TSNE-2',\n ticklen=5,\n ), \n scene=Scene(\n xaxis=XAxis(title='TSNE-1',showgrid=True,zeroline=True,showticklabels=True),\n yaxis=YAxis(title='TSNE-2',showgrid=True,zeroline=True,showticklabels=True),\n zaxis=ZAxis(title='TSNE-3',showgrid=True,zeroline=True,showticklabels=True)\n )\n )\n\n \nfig = Figure(data=data, layout=layout)\npy.iplot(fig, filename='BS16001-TE1-KMEANS.html')\n\ntsne3d(tsnedata_dbscan)",
"Marker Identification\nIdentifying genes that differentiate each of these cell populations is an important aspect of single cell RNA-seq data. There are many different methods to do this type of analysis. Given the size of the dataset ome of these are compute heavy. For the sake of brevity, we will use AUROC classification of differentially expressed genes.",
"aurocScoresAll=pd.DataFrame()\nfor cluster in range(1,8,1):\n\n cells1=tsnedata_dbscan[tsnedata_dbscan['dbCluster']==cluster].index\n cells2=tsnedata_dbscan.index.difference(cells1)\n\n data1=cpt.loc[cpt.index,cells1].apply(expMean,1)\n data2=cpt.loc[cpt.index,cells2].apply(expMean,1)\n totaldiff=(data1-data2)\n genes=totaldiff[totaldiff>1.].index\n\n aurocScores=findMarkers(cpt,\n cells1,\n cells2,\n genes\n )\n aurocScores['Associated.Gene.Name']=featuredata['Associated.Gene.Name']\n aurocScores['dbCluster']=cluster\n aurocScoresAll=aurocScoresAll.append(aurocScores)",
"Visualization\nLet's make a heatmap of all markergenes",
"markerHeatmap(cpt,aurocScoresAll.index,tsnedata_dbscan,featuredata)",
"Make .Rds file for CellView\nAnd finally, let's summarize this analysis into an .Rds file that we can share with others",
"mkRds(cpt,featuredata,tsnedata_dbscan)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |