Columns: repo_name (string), path (string), license (15 classes), cells (list), types (list)
darienmt/intro-to-tensorflow
LeNet-Lab.ipynb
mit
[ "LeNet Lab\n\nSource: Yann LeCun\nLoad Data\nLoad the MNIST data, which comes pre-loaded with TensorFlow.\nYou do not need to modify this section.", "from tensorflow.examples.tutorials.mnist import input_data\n\nmnist = input_data.read_data_sets(\"./datasets/\", reshape=False)\nX_train, y_train = mnist.train.images, mnist.train.labels\nX_validation, y_validation = mnist.validation.images, mnist.validation.labels\nX_test, y_test = mnist.test.images, mnist.test.labels\n\nassert(len(X_train) == len(y_train))\nassert(len(X_validation) == len(y_validation))\nassert(len(X_test) == len(y_test))\n\nprint()\nprint(\"Image Shape: {}\".format(X_train[0].shape))\nprint()\nprint(\"Training Set: {} samples\".format(len(X_train)))\nprint(\"Validation Set: {} samples\".format(len(X_validation)))\nprint(\"Test Set: {} samples\".format(len(X_test)))", "The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.\nHowever, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.\nIn order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).\nYou do not need to modify this section.", "import numpy as np\n\n# Pad images with 0s\nX_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')\nX_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')\nX_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')\n \nprint(\"Updated Image Shape: {}\".format(X_train[0].shape))", "Visualize Data\nView a sample from the dataset.\nYou do not need to modify this section.", "import random\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# random.randint is inclusive on both ends, so the upper bound is len(X_train) - 1\nindex = random.randint(0, len(X_train) - 1)\nimage = X_train[index].squeeze()\n\nplt.figure(figsize=(1,1))\nplt.imshow(image, cmap=\"gray\")\nprint(y_train[index])", "Preprocess Data\nShuffle the training data.\nYou do not need to modify this 
section.", "from sklearn.utils import shuffle\n\nX_train, y_train = shuffle(X_train, y_train)", "Setup TensorFlow\nThe EPOCHS and BATCH_SIZE values affect the training speed and model accuracy.\nYou do not need to modify this section.", "import tensorflow as tf\n\nEPOCHS = 10\nBATCH_SIZE = 128", "TODO: Implement LeNet-5\nImplement the LeNet-5 neural network architecture.\nThis is the only cell you need to edit.\nInput\nThe LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.\nArchitecture\nLayer 1: Convolutional. The output shape should be 28x28x6.\nActivation. Your choice of activation function.\nPooling. The output shape should be 14x14x6.\nLayer 2: Convolutional. The output shape should be 10x10x16.\nActivation. Your choice of activation function.\nPooling. The output shape should be 5x5x16.\nFlatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you.\nLayer 3: Fully Connected. This should have 120 outputs.\nActivation. Your choice of activation function.\nLayer 4: Fully Connected. This should have 84 outputs.\nActivation. Your choice of activation function.\nLayer 5: Fully Connected (Logits). This should have 10 outputs.\nOutput\nReturn the result of the 3rd fully connected layer (the logits).", "from tensorflow.contrib.layers import flatten\n\ndef LeNet(x): \n # Arguments used for tf.truncated_normal, which randomly initializes the weights and biases for each layer\n mu = 0\n sigma = 0.1\n \n # TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.\n conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))\n conv1_b = tf.Variable(tf.zeros(6))\n conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b\n \n # TODO: Activation.\n conv1 = tf.nn.relu(conv1)\n\n # TODO: Pooling. 
Input = 28x28x6. Output = 14x14x6.\n conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n\n # TODO: Layer 2: Convolutional. Output = 10x10x16.\n conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))\n conv2_b = tf.Variable(tf.zeros(16))\n conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b\n \n # TODO: Activation.\n conv2 = tf.nn.relu(conv2)\n\n # TODO: Pooling. Input = 10x10x16. Output = 5x5x16.\n conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')\n\n # TODO: Flatten. Input = 5x5x16. Output = 400.\n fc0 = flatten(conv2)\n \n # TODO: Layer 3: Fully Connected. Input = 400. Output = 120.\n fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))\n fc1_b = tf.Variable(tf.zeros(120))\n fc1 = tf.matmul(fc0, fc1_W) + fc1_b \n \n # TODO: Activation.\n fc1 = tf.nn.relu(fc1)\n\n # TODO: Layer 4: Fully Connected. Input = 120. Output = 84.\n fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))\n fc2_b = tf.Variable(tf.zeros(84))\n fc2 = tf.matmul(fc1, fc2_W) + fc2_b\n \n # TODO: Activation.\n fc2 = tf.nn.relu(fc2)\n\n # TODO: Layer 5: Fully Connected. Input = 84. 
Output = 10.\n fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma))\n fc3_b = tf.Variable(tf.zeros(10))\n logits = tf.matmul(fc2, fc3_W) + fc3_b\n \n return logits", "Features and Labels\nTrain LeNet to classify MNIST data.\nx is a placeholder for a batch of input images.\ny is a placeholder for a batch of output labels.\nYou do not need to modify this section.", "x = tf.placeholder(tf.float32, (None, 32, 32, 1))\ny = tf.placeholder(tf.int32, (None))\none_hot_y = tf.one_hot(y, 10)", "Training Pipeline\nCreate a training pipeline that uses the model to classify MNIST data.\nYou do not need to modify this section.", "rate = 0.001\n\nlogits = LeNet(x)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)\nloss_operation = tf.reduce_mean(cross_entropy)\noptimizer = tf.train.AdamOptimizer(learning_rate = rate)\ntraining_operation = optimizer.minimize(loss_operation)", "Model Evaluation\nEvaluate the loss and accuracy of the model for a given dataset.\nYou do not need to modify this section.", "correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))\naccuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\nsaver = tf.train.Saver()\n\ndef evaluate(X_data, y_data):\n num_examples = len(X_data)\n total_accuracy = 0\n sess = tf.get_default_session()\n for offset in range(0, num_examples, BATCH_SIZE):\n batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]\n accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})\n total_accuracy += (accuracy * len(batch_x))\n return total_accuracy / num_examples", "Train the Model\nRun the training data through the training pipeline to train the model.\nBefore each epoch, shuffle the training set.\nAfter each epoch, measure the loss and accuracy of the validation set.\nSave the model after training.\nYou do not need to modify this section.", "with tf.Session() as sess:\n 
sess.run(tf.global_variables_initializer())\n num_examples = len(X_train)\n \n print(\"Training...\")\n print()\n for i in range(EPOCHS):\n X_train, y_train = shuffle(X_train, y_train)\n for offset in range(0, num_examples, BATCH_SIZE):\n end = offset + BATCH_SIZE\n batch_x, batch_y = X_train[offset:end], y_train[offset:end]\n sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})\n \n validation_accuracy = evaluate(X_validation, y_validation)\n print(\"EPOCH {} ...\".format(i+1))\n print(\"Validation Accuracy = {:.3f}\".format(validation_accuracy))\n print()\n \n saver.save(sess, './models/lenet')\n print(\"Model saved\")", "Evaluate the Model\nOnce you are completely satisfied with your model, evaluate the performance of the model on the test set.\nBe sure to only do this once!\nIf you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.\nYou do not need to modify this section.", "with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('./models'))\n\n test_accuracy = evaluate(X_test, y_test)\n print(\"Test Accuracy = {:.3f}\".format(test_accuracy))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
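The zero-padding step in the LeNet notebook above (28x28x1 to 32x32x1) is easy to verify in isolation. A minimal sketch with a dummy batch (the batch size of 4 is illustrative):

```python
import numpy as np

# Stand-in for a batch of MNIST images in NHWC layout: (batch, 28, 28, 1).
batch = np.zeros((4, 28, 28, 1), dtype=np.float32)

# Pad two rows/columns of zeros on each side of the height and width axes,
# leaving the batch and channel axes untouched: 28 + 2 + 2 = 32.
padded = np.pad(batch, ((0, 0), (2, 2), (2, 2), (0, 0)), 'constant')

print(padded.shape)  # (4, 32, 32, 1)
```

The per-axis pad widths mirror the array's axes one-to-one, which is why the batch and channel entries are `(0, 0)`.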
AllenDowney/ModSimPy
notebooks/jump2.ipynb
mit
[ "Modeling and Simulation in Python\nBungee dunk example, taking into account the mass of the bungee cord\nCopyright 2019 Allen Downey\nLicense: Creative Commons Attribution 4.0 International", "# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim.py module\nfrom modsim import *", "Bungee jumping\nIn the previous case study, we simulated a bungee jump with a model that took into account gravity, air resistance, and the spring force of the bungee cord, but we ignored the weight of the cord.\nIt is tempting to say that the weight of the cord doesn't matter, because it falls along with the jumper. But that intuition is incorrect, as explained by Heck, Uylings, and Kędzierska. As the cord falls, it transfers energy to the jumper. They derive a differential equation that relates the acceleration of the jumper to position and velocity:\n$a = g + \frac{\mu v^2/2}{\mu(L+y) + 2L}$ \nwhere $a$ is the net acceleration of the jumper, $g$ is acceleration due to gravity, $v$ is the velocity of the jumper, $y$ is the position of the jumper relative to the starting point (usually negative), $L$ is the length of the cord, and $\mu$ is the mass ratio of the cord and jumper.\nIf you don't believe this model is correct, this video might convince you.\nFollowing the example in Chapter 21, we'll model the jump with the following modeling assumptions:\n\n\nInitially the bungee cord hangs from a crane with the attachment point 80 m above a cup of tea.\n\n\nUntil the cord is fully extended, it applies a force to the jumper as explained above.\n\n\nAfter the cord is fully extended, it obeys Hooke's Law; that is, it applies a force to the jumper proportional to the extension of the cord beyond its resting length.\n\n\nThe jumper is subject to drag force proportional to the square of 
their velocity, opposite to their direction of motion.\n\n\nFirst I'll create a Params object to contain the quantities we'll need:\n\n\nLet's assume that the jumper's mass is 75 kg and the cord's mass is also 75 kg, so mu=1.\n\n\nThe jumper's frontal area is 1 square meter, and terminal velocity is 60 m/s. I'll use these values to back out the coefficient of drag.\n\n\nThe length of the bungee cord is L = 25 m.\n\n\nThe spring constant of the cord is k = 40 N / m when the cord is stretched, and 0 when it's compressed.\n\n\nI adopt the coordinate system and most of the variable names from Heck, Uylings, and Kędzierska.", "m = UNITS.meter\ns = UNITS.second\nkg = UNITS.kilogram\nN = UNITS.newton\n\nparams = Params(v_init = 0 * m / s,\n g = 9.8 * m/s**2,\n M = 75 * kg, # mass of jumper\n m_cord = 75 * kg, # mass of cord\n area = 1 * m**2, # frontal area of jumper\n rho = 1.2 * kg/m**3, # density of air\n v_term = 60 * m / s, # terminal velocity of jumper\n L = 25 * m, # length of cord\n k = 40 * N / m) # spring constant of cord", "Now here's a version of make_system that takes a Params object as a parameter.\nmake_system uses the given value of v_term to compute the drag coefficient C_d.\nIt also computes mu and the initial State object.", "def make_system(params):\n \"\"\"Makes a System object for the given params.\n \n params: Params object\n \n returns: System object\n \"\"\"\n M, m_cord = params.M, params.m_cord\n g, rho, area = params.g, params.rho, params.area\n v_init, v_term = params.v_init, params.v_term\n \n # back out the coefficient of drag\n C_d = 2 * M * g / (rho * area * v_term**2)\n \n mu = m_cord / M\n init = State(y=0*m, v=v_init)\n t_end = 10 * s\n\n return System(params, C_d=C_d, mu=mu,\n init=init, t_end=t_end)", "Let's make a System", "system = make_system(params)", "drag_force computes drag as a function of velocity:", "def drag_force(v, system):\n \"\"\"Computes drag force in the opposite direction of \`v\`.\n \n v: velocity\n \n returns: 
drag force in N\n \"\"\"\n rho, C_d, area = system.rho, system.C_d, system.area\n\n f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2\n return f_drag", "Here's drag force at 20 m/s.", "drag_force(20 * m/s, system)", "The following function computes the acceleration of the jumper due to tension in the cord.\n$a_{cord} = \frac{\mu v^2/2}{\mu(L+y) + 2L}$", "def cord_acc(y, v, system):\n \"\"\"Computes the acceleration of the jumper due to the bungee cord.\n \n y: height of the jumper\n v: velocity of the jumper\n \n returns: acceleration in m/s**2\n \"\"\"\n L, mu = system.L, system.mu\n \n a_cord = -v**2 / 2 / (2*L/mu + (L+y))\n return a_cord", "Here's acceleration due to tension in the cord if we're going 20 m/s after falling 20 m.", "y = -20 * m\nv = -20 * m/s\ncord_acc(y, v, system)", "Now here's the slope function:", "def slope_func1(state, t, system):\n \"\"\"Compute derivatives of the state.\n \n state: position, velocity\n t: time\n system: System object containing g, rho,\n C_d, area, and mass\n \n returns: derivatives of y and v\n \"\"\"\n y, v = state\n M, g = system.M, system.g\n \n a_drag = drag_force(v, system) / M\n a_cord = cord_acc(y, v, system)\n dvdt = -g + a_cord + a_drag\n \n return v, dvdt", "As always, let's test the slope function with the initial params.", "slope_func1(system.init, 0, system)", "We'll need an event function to stop the simulation when we get to the end of the cord.", "def event_func(state, t, system):\n \"\"\"Run until y=-L.\n \n state: position, velocity\n t: time\n system: System object containing g, rho,\n C_d, area, and mass\n \n returns: difference between y and -L\n \"\"\"\n y, v = state \n return y + system.L", "We can test it with the initial conditions.", "event_func(system.init, 0, system)", "And then run the simulation.", "results, details = run_ode_solver(system, slope_func1, events=event_func)\ndetails.message", "Here's how long it takes to drop 25 meters.", "t_final = get_last_label(results)", "Here's the plot of position 
as a function of time.", "def plot_position(results, **options):\n plot(results.y, **options)\n decorate(xlabel='Time (s)',\n ylabel='Position (m)')\n \nplot_position(results)", "We can use min to find the lowest point:", "min(results.y)", "Here's velocity as a function of time:", "def plot_velocity(results):\n plot(results.v, color='C1', label='v')\n \n decorate(xlabel='Time (s)',\n ylabel='Velocity (m/s)')\n \nplot_velocity(results)", "Velocity when we reach the end of the cord.", "min(results.v)", "Although we compute acceleration inside the slope function, we don't get acceleration as a result from run_ode_solver.\nWe can approximate it by computing the numerical derivative of v:", "a = gradient(results.v)\nplot(a)\ndecorate(xlabel='Time (s)',\n ylabel='Acceleration (m/$s^2$)')", "The maximum downward acceleration, as a factor of g", "max_acceleration = max(abs(a)) * m/s**2 / params.g", "Using Equation (1) from Heck, Uylings, and Kędzierska, we can compute the peak acceleration due to interaction with the cord, neglecting drag.", "def max_acceleration(system):\n mu = system.mu\n return 1 + mu * (4+mu) / 8\n\nmax_acceleration(system)", "If you set C_d=0, the simulated acceleration approaches the theoretical result, although you might have to reduce max_step to get a good numerical estimate.\nSweeping cord weight\nNow let's see how velocity at the crossover point depends on the weight of the cord.", "def sweep_m_cord(m_cord_array, params):\n sweep = SweepSeries()\n\n for m_cord in m_cord_array:\n system = make_system(Params(params, m_cord=m_cord))\n results, details = run_ode_solver(system, slope_func1, events=event_func)\n min_velocity = min(results.v) * m/s\n sweep[m_cord.magnitude] = min_velocity\n \n return sweep\n\nm_cord_array = linspace(1, 201, 21) * kg\nsweep = sweep_m_cord(m_cord_array, params)", "Here's what it looks like. 
As expected, a heavier cord gets the jumper going faster.\nThere's a hitch near 25 kg that seems to be due to numerical error.", "plot(sweep)\n\ndecorate(xlabel='Mass of cord (kg)',\n ylabel='Fastest downward velocity (m/s)')", "Phase 2\nOnce the jumper falls past the length of the cord, acceleration due to energy transfer from the cord stops abruptly. As the cord stretches, it starts to exert a spring force. So let's simulate this second phase.\nspring_force computes the force of the cord on the jumper:", "def spring_force(y, system):\n \"\"\"Computes the force of the bungee cord on the jumper:\n \n y: height of the jumper\n \n Uses these variables from system:\n y_attach: height of the attachment point\n L: resting length of the cord\n k: spring constant of the cord\n \n returns: force in N\n \"\"\"\n L, k = system.L, system.k\n \n distance_fallen = -y\n extension = distance_fallen - L\n f_spring = k * extension\n return f_spring", "The spring force is 0 until the cord is fully extended. 
When it is extended 1 m, the spring force is 40 N.", "spring_force(-25*m, system)\n\nspring_force(-26*m, system)", "The slope function for Phase 2 includes the spring force, and drops the acceleration due to the cord.", "def slope_func2(state, t, system):\n \"\"\"Compute derivatives of the state.\n \n state: position, velocity\n t: time\n system: System object containing g, rho,\n C_d, area, and mass\n \n returns: derivatives of y and v\n \"\"\"\n y, v = state\n M, g = system.M, system.g\n \n a_drag = drag_force(v, system) / M\n a_spring = spring_force(y, system) / M\n dvdt = -g + a_drag + a_spring\n \n return v, dvdt", "I'll run Phase 1 again so we can get the final state.", "system1 = make_system(params)\n\nevent_func.direction=-1\nresults1, details1 = run_ode_solver(system1, slope_func1, events=event_func)\nprint(details1.message)", "Now I need the final time, position, and velocity from Phase 1.", "t_final = get_last_label(results1)\n\ninit2 = results1.row[t_final]", "And that gives me the starting conditions for Phase 2.", "system2 = System(system1, t_0=t_final, init=init2)", "Here's how we run Phase 2, setting the direction of the event function so it doesn't stop the simulation immediately.", "event_func.direction=+1\nresults2, details2 = run_ode_solver(system2, slope_func2, events=event_func)\nprint(details2.message)\nt_final = get_last_label(results2)", "We can plot the results on the same axes.", "plot_position(results1, label='Phase 1')\nplot_position(results2, label='Phase 2')", "And get the lowest position from Phase 2.", "min(results2.y)", "To see how big the effect of the cord is, I'll collect the previous code in a function.", "def simulate_system2(params):\n \n system1 = make_system(params)\n event_func.direction=-1\n results1, details1 = run_ode_solver(system1, slope_func1, events=event_func)\n\n t_final = get_last_label(results1)\n init2 = results1.row[t_final]\n \n system2 = System(system1, t_0=t_final, init=init2)\n results2, details2 = 
run_ode_solver(system2, slope_func2, events=event_func)\n t_final = get_last_label(results2)\n return TimeFrame(pd.concat([results1, results2]))", "Now we can run both phases and get the results in a single TimeFrame.", "results = simulate_system2(params);\n\nplot_position(results)\n\nparams_no_cord = Params(params, m_cord=1*kg)\nresults_no_cord = simulate_system2(params_no_cord);\n\nplot_position(results, label='m_cord = 75 kg')\nplot_position(results_no_cord, label='m_cord = 1 kg')\n\nsavefig('figs/jump.png')\n\nmin(results_no_cord.y)\n\ndiff = min(results.y) - min(results_no_cord.y)", "The difference is more than 2 meters, which could certainly be the difference between a successful bungee dunk and a bad day." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
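The `max_acceleration` function near the end of the notebook above encodes Equation (1) from Heck, Uylings, and Kędzierska: neglecting drag, the peak acceleration in units of g depends only on the cord-to-jumper mass ratio mu. A standalone sketch of that relationship (the function name here is illustrative):

```python
def max_acceleration_factor(mu):
    """Peak acceleration during the fall, as a multiple of g, for
    cord-to-jumper mass ratio mu (Heck et al., Eq. 1, neglecting drag)."""
    return 1 + mu * (4 + mu) / 8

# Cord mass equal to jumper mass (mu = 1), as in the notebook's params:
print(max_acceleration_factor(1))  # 1.625
# A massless cord (mu = 0) recovers plain free fall at 1 g:
print(max_acceleration_factor(0))  # 1.0
```

This is the theoretical value the simulated acceleration approaches when `C_d=0`, as the notebook notes.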
astro4dev/OAD-Data-Science-Toolkit
Teaching Materials/Machine Learning/ml-training-intro/notebooks/04 - Preprocessing.ipynb
gpl-3.0
[ "import numpy as np\nimport matplotlib.pyplot as plt\n% matplotlib inline\nplt.rcParams[\"figure.dpi\"] = 200\n\nfrom sklearn.datasets import load_boston\nboston = load_boston()\nfrom sklearn.model_selection import train_test_split\nX, y = boston.data, boston.target\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, random_state=0)\n\nprint(boston.DESCR)\n\nfig, axes = plt.subplots(3, 5, figsize=(20, 10))\nfor i, ax in enumerate(axes.ravel()):\n if i > 12:\n ax.set_visible(False)\n continue\n ax.plot(X[:, i], y, 'o', alpha=.5)\n ax.set_title(\"{}: {}\".format(i, boston.feature_names[i]))\n ax.set_ylabel(\"MEDV\")\n\nplt.boxplot(X)\nplt.xticks(np.arange(1, X.shape[1] + 1),\n boston.feature_names, rotation=30, ha=\"right\")\nplt.ylabel(\"MEDV\")\n\nfrom sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train)\n\nfrom sklearn.model_selection import cross_val_score\n\nfrom sklearn.neighbors import KNeighborsRegressor\nscores = cross_val_score(KNeighborsRegressor(),\n X_train, y_train, cv=10)\nnp.mean(scores), np.std(scores)\n\nfrom sklearn.neighbors import KNeighborsRegressor\nscores = cross_val_score(KNeighborsRegressor(),\n X_train_scaled, y_train, cv=10)\nnp.mean(scores), np.std(scores)", "Categorical Variables", "import pandas as pd\ndf = pd.DataFrame({'salary': [103, 89, 142, 54, 63, 219],\n 'boro': ['Manhatten', 'Queens', 'Manhatten', 'Brooklyn', 'Brooklyn', 'Bronx']})\ndf\n\npd.get_dummies(df)\n\ndf = pd.DataFrame({'salary': [103, 89, 142, 54, 63, 219],\n 'boro': [0, 1, 0, 2, 2, 3]})\ndf\n\npd.get_dummies(df, columns=['boro'])", "Exercise\nApply dummy encoding and scaling to the \"adult\" dataset consisting of income data from the census.\nBonus: visualize the data.", "data = pd.read_csv(\"adult.csv\", index_col=0)\n\n# %load solutions/load_adult.py" ]
[ "code", "markdown", "code", "markdown", "code" ]
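The `pd.get_dummies` calls in the notebook above expand one categorical column into one 0/1 indicator column per category. A plain-NumPy sketch of the same transformation, to make it explicit (the borough labels are illustrative sample data, with the standard spelling):

```python
import numpy as np

def one_hot_encode(values):
    """Dummy-encode a 1-D sequence of category labels, mimicking what
    pandas.get_dummies does: one 0/1 column per distinct category."""
    categories = sorted(set(values))
    index = {cat: j for j, cat in enumerate(categories)}
    encoded = np.zeros((len(values), len(categories)), dtype=int)
    for i, v in enumerate(values):
        encoded[i, index[v]] = 1  # mark the column for this row's category
    return categories, encoded

boros = ['Manhattan', 'Queens', 'Manhattan', 'Brooklyn', 'Brooklyn', 'Bronx']
cats, enc = one_hot_encode(boros)
print(cats)    # ['Bronx', 'Brooklyn', 'Manhattan', 'Queens']
print(enc[0])  # [0 0 1 0]
```

Each row sums to 1, since every sample belongs to exactly one category; this is also why dummy-encoding an integer-coded column (as in the second DataFrame of the notebook) requires passing `columns=['boro']` so pandas treats it as categorical rather than numeric.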
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/production_ml/labs/samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
apache-2.0
[ "# Copyright 2019 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================", "Composing a pipeline from reusable, pre-built, and lightweight components\nThis tutorial describes how to build a Kubeflow pipeline from reusable, pre-built, and lightweight components. The following provides a summary of the steps involved in creating and using a reusable component:\n\nWrite the program that contains your component’s logic. The program must use files and command-line arguments to pass data to and from the component.\nContainerize the program.\nWrite a component specification in YAML format that describes the component for the Kubeflow Pipelines system.\nUse the Kubeflow Pipelines SDK to load your component, use it in a pipeline and run that pipeline.\n\nThen, we will compose a pipeline from a reusable component, a pre-built component, and a lightweight component. 
The pipeline will perform the following steps:\n- Train an MNIST model and export it to Google Cloud Storage.\n- Deploy the exported TensorFlow model on AI Platform Prediction service.\n- Test the deployment by calling the endpoint with test data.\nNote: Ensure that you have Docker installed, if you want to build the image locally, by running the following command:\nwhich docker\nThe result should be something like:\n/usr/bin/docker", "import kfp\nimport kfp.gcp as gcp\nimport kfp.dsl as dsl\nimport kfp.compiler as compiler\nimport kfp.components as comp\nimport datetime\n\nimport kubernetes as k8s\n\n# Required Parameters\nPROJECT_ID='<ADD GCP PROJECT HERE>'\nGCS_BUCKET='gs://<ADD STORAGE LOCATION HERE>'", "Create client\nIf you run this notebook outside of a Kubeflow cluster, run the following command:\n- host: The URL of your Kubeflow Pipelines instance, for example \"https://&lt;your-deployment&gt;.endpoints.&lt;your-project&gt;.cloud.goog/pipeline\"\n- client_id: The client ID used by Identity-Aware Proxy\n- other_client_id: The client ID used to obtain the auth codes and refresh tokens.\n- other_client_secret: The client secret used to obtain the auth codes and refresh tokens.\npython\nclient = kfp.Client(host, client_id, other_client_id, other_client_secret)\nIf you run this notebook within a Kubeflow cluster, run the following command:\npython\nclient = kfp.Client()\nYou'll need to create OAuth client ID credentials of type Other to get other_client_id and other_client_secret. 
Learn more about creating OAuth credentials", "# Optional Parameters, but required for running outside Kubeflow cluster\n\n# The host for 'AI Platform Pipelines' ends with 'pipelines.googleusercontent.com'\n# The host for pipeline endpoint of 'full Kubeflow deployment' ends with '/pipeline'\n# Examples are:\n# https://7c021d0340d296aa-dot-us-central2.pipelines.googleusercontent.com\n# https://kubeflow.endpoints.kubeflow-pipeline.cloud.goog/pipeline\nHOST = '<ADD HOST NAME TO TALK TO KUBEFLOW PIPELINE HERE>'\n\n# For 'full Kubeflow deployment' on GCP, the endpoint is usually protected through IAP, therefore the following \n# will be needed to access the endpoint.\nCLIENT_ID = '<ADD OAuth CLIENT ID USED BY IAP HERE>'\nOTHER_CLIENT_ID = '<ADD OAuth CLIENT ID USED TO OBTAIN AUTH CODES HERE>'\nOTHER_CLIENT_SECRET = '<ADD OAuth CLIENT SECRET USED TO OBTAIN AUTH CODES HERE>'\n\n# This is to ensure the proper access token is present to reach the end point for 'AI Platform Pipelines'\n# If you are not working with 'AI Platform Pipelines', this step is not necessary\n! gcloud auth print-access-token\n\n# Create kfp client\nin_cluster = True\ntry:\n k8s.config.load_incluster_config()\nexcept:\n in_cluster = False\n pass\n\nif in_cluster:\n client = kfp.Client()\nelse:\n if HOST.endswith('googleusercontent.com'):\n CLIENT_ID = None\n OTHER_CLIENT_ID = None\n OTHER_CLIENT_SECRET = None\n\n client = kfp.Client(host=HOST, \n client_id=CLIENT_ID,\n other_client_id=OTHER_CLIENT_ID, \n other_client_secret=OTHER_CLIENT_SECRET)", "Build reusable components\nWriting the program code\nThe following cell creates a file app.py that contains a Python script. The script downloads MNIST dataset, trains a Neural Network based classification model, writes the training log and exports the trained model to Google Cloud Storage.\nYour component can create outputs that the downstream components can use as inputs. 
Each output must be a string and the container image must write each output to a separate local text file. For example, if a training component needs to output the path of the trained model, the component writes the path into a local file, such as /output.txt.", "%%bash\n\n# Create folders if they don't exist.\nmkdir -p tmp/reuse_components_pipeline/mnist_training\n\n# Create the Python file that lists GCS blobs.\ncat > ./tmp/reuse_components_pipeline/mnist_training/app.py <<HERE\nimport argparse\nfrom datetime import datetime\nimport tensorflow as tf\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\n '--model_path', type=str, required=True, help='Name of the model file.')\nparser.add_argument(\n '--bucket', type=str, required=True, help='GCS bucket name.')\nargs = parser.parse_args()\n\nbucket=args.bucket\nmodel_path=args.model_path\n\nmodel = tf.keras.models.Sequential([\n tf.keras.layers.Flatten(input_shape=(28, 28)),\n tf.keras.layers.Dense(512, activation=tf.nn.relu),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n])\n\nmodel.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\nprint(model.summary()) \n\nmnist = tf.keras.datasets.mnist\n(x_train, y_train),(x_test, y_test) = mnist.load_data()\nx_train, x_test = x_train / 255.0, x_test / 255.0\n\ncallbacks = [\n tf.keras.callbacks.TensorBoard(log_dir=bucket + '/logs/' + datetime.now().date().__str__()),\n # Interrupt training if val_loss stops improving for over 2 epochs\n tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),\n]\n\nmodel.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=callbacks,\n validation_data=(x_test, y_test))\n\nfrom tensorflow import gfile\n\ngcs_path = bucket + \"/\" + model_path\n# The export require the folder is new\nif gfile.Exists(gcs_path):\n gfile.DeleteRecursively(gcs_path)\ntf.keras.experimental.export_saved_model(model, gcs_path)\n\nwith open('/output.txt', 'w') as 
f:\n f.write(gcs_path)\nHERE", "Create a Docker container\nCreate your own container image that includes your program. \nCreating a Dockerfile\nNow create a container that runs the script. Start by creating a Dockerfile. A Dockerfile contains the instructions to assemble a Docker image. The FROM statement specifies the Base Image from which you are building. WORKDIR sets the working directory. When you assemble the Docker image, COPY copies the required files and directories (for example, app.py) to the file system of the container. RUN executes a command (for example, install the dependencies) and commits the results.", "%%bash\n\n# Create Dockerfile.\n# AI platform only support tensorflow 1.14\ncat > ./tmp/reuse_components_pipeline/mnist_training/Dockerfile <<EOF\nFROM tensorflow/tensorflow:1.14.0-py3\nWORKDIR /app\nCOPY . /app\nEOF", "Build docker image\nNow that we have created our Dockerfile for creating our Docker image. Then we need to build the image and push to a registry to host the image. There are three possible options:\n- Use the kfp.containers.build_image_from_working_dir to build the image and push to the Container Registry (GCR). This requires kaniko, which will be auto-installed with 'full Kubeflow deployment' but not 'AI Platform Pipelines'.\n- Use Cloud Build, which would require the setup of GCP project and enablement of corresponding API. If you are working with GCP 'AI Platform Pipelines' with GCP project running, it is recommended to use Cloud Build.\n- Use Docker installed locally and push to e.g. GCR.\nNote:\nIf you run this notebook within Kubeflow cluster, with Kubeflow version >= 0.7 and exploring kaniko option, you need to ensure that valid credentials are created within your notebook's namespace.\n- With Kubeflow version >= 0.7, the credential is supposed to be copied automatically while creating notebook through Configurations, which doesn't work properly at the time of creating this notebook. 
\n- You can also add credentials to the new namespace by either copying credentials from an existing Kubeflow namespace, or by creating a new service account.\n- The following cell demonstrates how to copy the default secret to your own namespace.\n```bash\n%%bash\nNAMESPACE=<your notebook namespace>\nSOURCE=kubeflow\nNAME=user-gcp-sa\nSECRET=$(kubectl get secrets \\${NAME} -n \\${SOURCE} -o jsonpath=\"{.data.\\${NAME}.json}\" | base64 -D)\nkubectl create -n \\${NAMESPACE} secret generic \\${NAME} --from-literal=\"\\${NAME}.json=\\${SECRET}\"\n```", "IMAGE_NAME=\"mnist_training_kf_pipeline\"\nTAG=\"latest\" # \"v_$(date +%Y%m%d_%H%M%S)\"\n\nGCR_IMAGE=\"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}\".format(\n    PROJECT_ID=PROJECT_ID,\n    IMAGE_NAME=IMAGE_NAME,\n    TAG=TAG\n)\n\nAPP_FOLDER='./tmp/reuse_components_pipeline/mnist_training/'\n\n# In the following, for the purpose of demonstration,\n# Cloud Build is chosen for 'AI Platform Pipelines' and\n# kaniko is chosen for 'full Kubeflow deployment'\n\nif HOST.endswith('googleusercontent.com'):\n    # kaniko is not pre-installed with 'AI Platform Pipelines'\n    import subprocess\n    # ! 
gcloud builds submit --tag ${IMAGE_NAME} ${APP_FOLDER}\n    cmd = ['gcloud', 'builds', 'submit', '--tag', GCR_IMAGE, APP_FOLDER]\n    build_log = (subprocess.run(cmd, stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))\n    print(build_log)\n    \nelse:\n    if kfp.__version__ <= '0.1.36':\n        # kfp versions after 0.1.36 introduce a breaking change that makes the following code fail\n        import subprocess\n        \n        builder = kfp.containers._container_builder.ContainerBuilder(\n            gcs_staging=GCS_BUCKET + \"/kfp_container_build_staging\"\n        )\n\n        kfp.containers.build_image_from_working_dir(\n            image_name=GCR_IMAGE,\n            working_dir=APP_FOLDER,\n            builder=builder\n        )\n    else:\n        raise RuntimeError(\"Please build the Docker image using either Docker or Cloud Build\")", "If you want to use Docker to build the image\nRun the following in a cell:\n```bash\n%%bash -s \"{PROJECT_ID}\"\nIMAGE_NAME=\"mnist_training_kf_pipeline\"\nTAG=\"latest\" # \"v_$(date +%Y%m%d_%H%M%S)\"\n# Create script to build docker image and push it.\ncat > ./tmp/components/mnist_training/build_image.sh <<HERE\nPROJECT_ID=\"${1}\"\nIMAGE_NAME=\"${IMAGE_NAME}\"\nTAG=\"${TAG}\"\nGCR_IMAGE=\"gcr.io/\\${PROJECT_ID}/\\${IMAGE_NAME}:\\${TAG}\"\ndocker build -t \\${IMAGE_NAME} .\ndocker tag \\${IMAGE_NAME} \\${GCR_IMAGE}\ndocker push \\${GCR_IMAGE}\ndocker image rm \\${IMAGE_NAME}\ndocker image rm \\${GCR_IMAGE}\nHERE\ncd tmp/components/mnist_training\nbash build_image.sh\n```", "image_name = GCR_IMAGE"
The notebook provides enough information to complete the tutorial.\nStart writing the component definition (component.yaml) by specifying your container image in the component’s implementation section:", "%%bash -s \"{image_name}\"\n\nGCR_IMAGE=\"${1}\"\necho ${GCR_IMAGE}\n\n# Create Yaml\n# the image uri should be changed according to the above docker image push output\n\ncat > mnist_pipeline_component.yaml <<HERE\nname: Mnist training\ndescription: Train a mnist model and save to GCS\ninputs:\n - name: model_path\n description: 'Path of the tf model.'\n type: String\n - name: bucket\n description: 'GCS bucket name.'\n type: String\noutputs:\n - name: gcs_model_path\n description: 'Trained model path.'\n type: GCSPath\nimplementation:\n container:\n image: ${GCR_IMAGE}\n command: [\n python, /app/app.py,\n --model_path, {inputValue: model_path},\n --bucket, {inputValue: bucket},\n ]\n fileOutputs:\n gcs_model_path: /output.txt\nHERE\n\nimport os\nmnist_train_op = kfp.components.load_component_from_file(os.path.join('./', 'mnist_pipeline_component.yaml')) \n\nmnist_train_op.component_spec", "Define deployment operation on AI Platform", "mlengine_deploy_op = comp.load_component_from_url(\n 'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.0/components/gcp/ml_engine/deploy/component.yaml')\n\ndef deploy(\n project_id,\n model_uri,\n model_id,\n runtime_version,\n python_version):\n \n return mlengine_deploy_op(\n model_uri=model_uri,\n project_id=project_id, \n model_id=model_id, \n runtime_version=runtime_version, \n python_version=python_version,\n replace_existing_version=True, \n set_default=True)", "Kubeflow serving deployment component as an option. 
Note that the deployed endpoint URI is not available as an output of this component.\n```python\nkubeflow_deploy_op = comp.load_component_from_url(\n    'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.0/components/gcp/ml_engine/deploy/component.yaml')\ndef deploy_kubeflow(\n    model_dir,\n    tf_server_name):\n    return kubeflow_deploy_op(\n        model_dir=model_dir,\n        server_name=tf_server_name,\n        cluster_name='kubeflow', \n        namespace='kubeflow',\n        pvc_name='', \n        service_type='ClusterIP')\n```\nCreate a lightweight component for testing the deployment", "def deployment_test(project_id: str, model_name: str, version: str) -> str:\n\n    model_name = model_name.split(\"/\")[-1]\n    version = version.split(\"/\")[-1]\n    \n    import googleapiclient.discovery\n    \n    def predict(project, model, data, version=None):\n        \"\"\"Run predictions on a list of instances.\n\n        Args:\n        project: (str), project where the Cloud ML Engine Model is deployed.\n        model: (str), model name.\n        data: ([[any]]), list of input instances, where each input instance is a\n            list of attributes.\n        version: str, version of the model to target.\n\n        Returns:\n        Mapping[str: any]: dictionary of prediction results defined by the model.\n        \"\"\"\n\n        service = googleapiclient.discovery.build('ml', 'v1')\n        name = 'projects/{}/models/{}'.format(project, model)\n\n        if version is not None:\n            name += '/versions/{}'.format(version)\n\n        response = service.projects().predict(\n            name=name, body={\n                'instances': data\n            }).execute()\n\n        if 'error' in response:\n            raise RuntimeError(response['error'])\n\n        return response['predictions']\n\n    import tensorflow as tf\n    import json\n    \n    mnist = tf.keras.datasets.mnist\n    (x_train, y_train),(x_test, y_test) = mnist.load_data()\n    x_train, x_test = x_train / 255.0, x_test / 255.0\n\n    result = predict(\n        project=project_id,\n        model=model_name,\n        data=x_test[0:2].tolist(),\n        version=version)\n    print(result)\n    \n    return json.dumps(result)\n\n# # Test the function with already deployed version\n# 
deployment_test(\n# project_id=PROJECT_ID,\n# model_name=\"mnist\",\n# version='ver_bb1ebd2a06ab7f321ad3db6b3b3d83e6' # previous deployed version for testing\n# )\n\ndeployment_test_op = comp.func_to_container_op(\n func=deployment_test, \n base_image=\"tensorflow/tensorflow:1.15.0-py3\",\n packages_to_install=[\"google-api-python-client==1.7.8\"])", "Create your workflow as a Python function\nDefine your pipeline as a Python function. @kfp.dsl.pipeline is a required decoration, and must include name and description properties. Then compile the pipeline function. After the compilation is completed, a pipeline file is created.", "# Define the pipeline\n@dsl.pipeline(\n name='Mnist pipeline',\n description='A toy pipeline that performs mnist model training.'\n)\ndef mnist_reuse_component_deploy_pipeline(\n project_id: str = PROJECT_ID,\n model_path: str = 'mnist_model', \n bucket: str = GCS_BUCKET\n):\n train_task = mnist_train_op(\n model_path=model_path, \n bucket=bucket\n ).apply(gcp.use_gcp_secret('user-gcp-sa'))\n \n deploy_task = deploy(\n project_id=project_id,\n model_uri=train_task.outputs['gcs_model_path'],\n model_id=\"mnist\", \n runtime_version=\"1.14\",\n python_version=\"3.5\"\n ).apply(gcp.use_gcp_secret('user-gcp-sa')) \n \n deploy_test_task = deployment_test_op(\n project_id=project_id,\n model_name=deploy_task.outputs[\"model_name\"], \n version=deploy_task.outputs[\"version_name\"],\n ).apply(gcp.use_gcp_secret('user-gcp-sa'))\n \n return True", "Submit a pipeline run", "pipeline_func = mnist_reuse_component_deploy_pipeline\n\nexperiment_name = 'minist_kubeflow'\n\narguments = {\"model_path\":\"mnist_model\",\n \"bucket\":GCS_BUCKET}\n\nrun_name = pipeline_func.__name__ + ' run'\n\n# Submit pipeline directly from pipeline function\nrun_result = client.create_run_from_pipeline_func(pipeline_func, \n experiment_name=experiment_name, \n run_name=run_name, \n arguments=arguments)", "As an alternative, you can compile the pipeline into a package. 
The compiled pipeline can easily be shared and reused by others to run the pipeline.\n```python\npipeline_filename = pipeline_func.__name__ + '.pipeline.zip'\ncompiler.Compiler().compile(pipeline_func, pipeline_filename)\nexperiment = client.create_experiment('python-functions-mnist')\nrun_result = client.run_pipeline(\n    experiment_id=experiment.id, \n    job_name=run_name, \n    pipeline_package_path=pipeline_filename, \n    params=arguments)\n```" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
babraham123/script-runner
notebooks/bubble_sort.ipynb
mit
[ "{\n \"nb_display_name\": \"Bubble Sort\",\n \"nb_description\": \"An example implementation of bubble sort\",\n \"nb_filename\": \"bubble_sort.ipynb\",\n \"params\":[\n {\n \"name\":\"user_id\",\n \"display_name\":\"Test num\",\n \"description\":\"\",\n \"input_type\":\"integer\"\n },\n {\n \"name\":\"username\",\n \"display_name\":\"Test str\",\n \"description\":\"\",\n \"input_type\":\"string\"\n }\n ]\n}", "Sebastian Raschka", "import time\nprint('Last updated: %s' %time.strftime('%d/%m/%Y'))", "Sorting Algorithms\nOverview", "import platform\nimport multiprocessing\n\ndef print_sysinfo():\n \n print('\\nPython version :', platform.python_version())\n print('compiler :', platform.python_compiler())\n\n print('\\nsystem :', platform.system())\n print('release :', platform.release())\n print('machine :', platform.machine())\n print('processor :', platform.processor())\n print('CPU count :', multiprocessing.cpu_count())\n print('interpreter :', platform.architecture()[0])\n print('\\n\\n')", "Bubble sort\n[back to top]\nQuick note about Bubble sort\nI don't want to get into the details about sorting algorithms here, but there is a great report\n\"Sorting in the Presence of Branch Prediction and Caches - Fast Sorting on Modern Computers\" written by Paul Biggar and David Gregg, where they describe and analyze elementary sorting algorithms in very nice detail (see chapter 4). 
\nAnd for a quick reference, this website has a nice animation of this algorithm.\nA long story short: The \"worst-case\" complexity of the Bubble sort algorithm (i.e., \"Big-O\")\n $\\Rightarrow \\pmb O(n^2)$", "print_sysinfo()", "Bubble sort implemented in (C)Python", "def python_bubblesort(a_list):\n    \"\"\" Bubblesort in Python for list objects (sorts in place).\"\"\"\n    length = len(a_list)\n    for i in range(length):\n        for j in range(1, length):\n            if a_list[j] < a_list[j-1]:\n                a_list[j-1], a_list[j] = a_list[j], a_list[j-1]\n    return a_list", "<br>\nBelow is an improved version that quits early if no further swap is needed.", "def python_bubblesort_improved(a_list):\n    \"\"\" Bubblesort in Python for list objects (sorts in place).\"\"\"\n    length = len(a_list)\n    swapped = 1\n    for i in range(length):\n        if swapped: \n            swapped = 0\n            for ele in range(length-i-1):\n                if a_list[ele] > a_list[ele + 1]:\n                    temp = a_list[ele + 1]\n                    a_list[ele + 1] = a_list[ele]\n                    a_list[ele] = temp\n                    swapped = 1\n    return a_list", "Verifying that all implementations work correctly", "import random\nimport copy\nrandom.seed(4354353)\n\nl = [random.randint(1,1000) for num in range(1, 1000)]\nl_sorted = sorted(l)\nfor f in [python_bubblesort, python_bubblesort_improved]:\n    assert(l_sorted == f(copy.copy(l)))\nprint('Bubblesort works correctly')\n", "Performance comparison", "# small list\n\nl_small = [random.randint(1,100) for num in range(1, 100)]\nl_small_cp = copy.copy(l_small)\n\n%timeit python_bubblesort(l_small)\n%timeit python_bubblesort_improved(l_small_cp)\n\n# larger list\n\nl_small = [random.randint(1,10000) for num in range(1, 10000)]\nl_small_cp = copy.copy(l_small)\n\n%timeit python_bubblesort(l_small)\n%timeit python_bubblesort_improved(l_small_cp)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
statsmodels/statsmodels.github.io
v0.13.1/examples/notebooks/generated/statespace_arma_0.ipynb
bsd-3-clause
[ "Autoregressive Moving Average (ARMA): Sunspots data\nThis notebook replicates the existing ARMA notebook using the statsmodels.tsa.statespace.SARIMAX class rather than the statsmodels.tsa.ARMA class.", "%matplotlib inline\n\nimport numpy as np\nfrom scipy import stats\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nimport statsmodels.api as sm\n\nfrom statsmodels.graphics.api import qqplot", "Sunspots Data", "print(sm.datasets.sunspots.NOTE)\n\ndta = sm.datasets.sunspots.load_pandas().data\n\ndta.index = pd.Index(pd.date_range(\"1700\", end=\"2009\", freq=\"A-DEC\"))\ndel dta[\"YEAR\"]\n\ndta.plot(figsize=(12,4));\n\nfig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(211)\nfig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1)\nax2 = fig.add_subplot(212)\nfig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2)\n\narma_mod20 = sm.tsa.statespace.SARIMAX(dta, order=(2,0,0), trend='c').fit(disp=False)\nprint(arma_mod20.params)\n\narma_mod30 = sm.tsa.statespace.SARIMAX(dta, order=(3,0,0), trend='c').fit(disp=False)\n\nprint(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic)\n\nprint(arma_mod30.params)\n\nprint(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic)", "Does our model obey the theory?", "sm.stats.durbin_watson(arma_mod30.resid)\n\nfig = plt.figure(figsize=(12,4))\nax = fig.add_subplot(111)\nax = plt.plot(arma_mod30.resid)\n\nresid = arma_mod30.resid\n\nstats.normaltest(resid)\n\nfig = plt.figure(figsize=(12,4))\nax = fig.add_subplot(111)\nfig = qqplot(resid, line='q', ax=ax, fit=True)\n\nfig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(211)\nfig = sm.graphics.tsa.plot_acf(resid, lags=40, ax=ax1)\nax2 = fig.add_subplot(212)\nfig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)\n\nr,q,p = sm.tsa.acf(resid, fft=True, qstat=True)\ndata = np.c_[r[1:], q, p]\nindex = pd.Index(range(1,q.shape[0]+1), name=\"lag\")\ntable = pd.DataFrame(data, columns=[\"AC\", \"Q\", \"Prob(>Q)\"], index=index)\nprint(table)", "This indicates a lack 
of fit.\n\n\nIn-sample dynamic prediction. How good does our model do?", "predict_sunspots = arma_mod30.predict(start='1990', end='2012', dynamic=True)\n\nfig, ax = plt.subplots(figsize=(12, 8))\ndta.loc['1950':].plot(ax=ax)\npredict_sunspots.plot(ax=ax, style='r');\n\ndef mean_forecast_err(y, yhat):\n return y.sub(yhat).mean()\n\nmean_forecast_err(dta.SUNACTIVITY, predict_sunspots)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dkirkby/astroml-study
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
mit
[ "Overfitting demo\nCreate a dataset based on a true sinusoidal relationship\nLet's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \\sin(4x)$:", "import graphlab\nimport math\nimport random\nimport numpy\nfrom matplotlib import pyplot as plt\n%matplotlib inline", "Create random values for x in interval [0,1)", "random.seed(98103)\nn = 30\nx = graphlab.SArray([random.random() for i in range(n)]).sort()", "Compute y", "y = x.apply(lambda x: math.sin(4*x))", "Add random Gaussian noise to y", "random.seed(1)\ne = graphlab.SArray([random.gauss(0,1.0/3.0) for i in range(n)])\ny = y + e", "Put data into an SFrame to manipulate later", "data = graphlab.SFrame({'X1':x,'Y':y})\ndata", "Create a function to plot the data, since we'll do it many times", "def plot_data(data): \n plt.plot(data['X1'],data['Y'],'k.')\n plt.xlabel('x')\n plt.ylabel('y')\n\nplot_data(data)", "Define some useful polynomial regression functions\nDefine a function to create our features for a polynomial regression model of any degree:", "def polynomial_features(data, deg):\n data_copy=data.copy()\n for i in range(1,deg):\n data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']\n return data_copy", "Define a function to fit a polynomial linear regression model of degree \"deg\" to the data in \"data\":", "def polynomial_regression(data, deg):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=0.,l1_penalty=0.,\n validation_set=None,verbose=False)\n return model", "Define function to plot data and predictions made, since we are going to use it many times.", "def plot_poly_predictions(data, model):\n plot_data(data)\n\n # Get the degree of the polynomial\n deg = len(model.coefficients['value'])-1\n \n # Create 200 points in the x axis and compute the predicted value for each point\n x_pred = graphlab.SFrame({'X1':[i/200.0 for i in range(200)]})\n y_pred = model.predict(polynomial_features(x_pred,deg))\n \n # plot 
predictions\n plt.plot(x_pred['X1'], y_pred, 'g-', label='degree ' + str(deg) + ' fit')\n plt.legend(loc='upper left')\n plt.axis([0,1,-1.5,2])", "Create a function that prints the polynomial coefficients in a pretty way :)", "def print_coefficients(model): \n # Get the degree of the polynomial\n deg = len(model.coefficients['value'])-1\n\n # Get learned parameters as a list\n w = list(model.coefficients['value'])\n\n # Numpy has a nifty function to print out polynomials in a pretty way\n # (We'll use it, but it needs the parameters in the reverse order)\n print 'Learned polynomial for degree ' + str(deg) + ':'\n w.reverse()\n print numpy.poly1d(w)", "Fit a degree-2 polynomial\nFit our degree-2 polynomial to the data generated above:", "model = polynomial_regression(data, deg=2)", "Inspect learned parameters", "print_coefficients(model)", "Form and plot our predictions along a grid of x values:", "plot_poly_predictions(data,model)", "Fit a degree-4 polynomial", "model = polynomial_regression(data, deg=4)\nprint_coefficients(model)\nplot_poly_predictions(data,model)", "Fit a degree-16 polynomial", "model = polynomial_regression(data, deg=16)\nprint_coefficients(model)", "Woah!!!! Those coefficients are crazy! On the order of 10^6.", "plot_poly_predictions(data,model)", "Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients.\n\n\n# \n# \nRidge Regression\nRidge regression aims to avoid overfitting by adding a cost to the RSS term of standard least squares that depends on the 2-norm of the coefficients $\\|w\\|$. The result is penalizing fits with large coefficients. The strength of this penalty, and thus the fit vs. 
model complexity balance, is controlled by a parameter lambda (here called \"L2_penalty\").\nDefine our function to solve the ridge objective for a polynomial regression model of any degree:", "def polynomial_ridge_regression(data, deg, l2_penalty):\n    model = graphlab.linear_regression.create(polynomial_features(data,deg), \n                                              target='Y', l2_penalty=l2_penalty,\n                                              validation_set=None,verbose=False)\n    return model", "Perform a ridge fit of a degree-16 polynomial using a very small penalty strength", "model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)\nprint_coefficients(model)\n\nplot_poly_predictions(data,model)", "Perform a ridge fit of a degree-16 polynomial using a very large penalty strength", "model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)\nprint_coefficients(model)\n\nplot_poly_predictions(data,model)", "Let's look at fits for a sequence of increasing lambda values", "for l2_penalty in [1e-25, 1e-10, 1e-6, 1e-3, 1e0, 1e2]:\n    model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)\n    print 'lambda = %.2e' % l2_penalty\n    print_coefficients(model)\n    print '\\n'\n    plt.figure()\n    plot_poly_predictions(data,model)\n    plt.title('Ridge, lambda = %.2e' % l2_penalty)", "Perform a ridge fit of a degree-16 polynomial using a \"good\" penalty strength\nWe will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider \"leave one out\" (LOO) cross validation, which one can show approximates average mean square error (MSE). 
As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.", "# LOO cross validation -- return the average MSE\ndef loo(data, deg, l2_penalty_values):\n # Create polynomial features\n polynomial_features(data, deg)\n \n # Create as many folds for cross validatation as number of data points\n num_folds = len(data)\n folds = graphlab.cross_validation.KFold(data,num_folds)\n \n # for each value of l2_penalty, fit a model for each fold and compute average MSE\n l2_penalty_mse = []\n min_mse = None\n best_l2_penalty = None\n for l2_penalty in l2_penalty_values:\n next_mse = 0.0\n for train_set, validation_set in folds:\n # train model\n model = graphlab.linear_regression.create(train_set,target='Y', \n l2_penalty=l2_penalty,\n validation_set=None,verbose=False)\n \n # predict on validation set \n y_test_predicted = model.predict(validation_set)\n # compute squared error\n next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()\n \n # save squared error in list of MSE for each l2_penalty\n next_mse = next_mse/num_folds\n l2_penalty_mse.append(next_mse)\n if min_mse is None or next_mse < min_mse:\n min_mse = next_mse\n best_l2_penalty = l2_penalty\n \n return l2_penalty_mse,best_l2_penalty", "Run LOO cross validation for \"num\" values of lambda, on a log scale", "l2_penalty_values = numpy.logspace(-4, 10, num=10)\nl2_penalty_mse,best_l2_penalty = loo(data, 16, l2_penalty_values)", "Plot results of estimating LOO for each value of lambda", "plt.plot(l2_penalty_values,l2_penalty_mse,'k-')\nplt.xlabel('$\\L2_penalty$')\nplt.ylabel('LOO cross validation error')\nplt.xscale('log')\nplt.yscale('log')", "Find the value of lambda, $\\lambda_{\\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit", "best_l2_penalty\n\nmodel = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)\nprint_coefficients(model)\n\nplot_poly_predictions(data,model)", "Lasso 
Regression\nLasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called \"L1_penalty\"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost of the coefficients $\\|w\\|$.\nDefine our function to solve the lasso objective for a polynomial regression model of any degree:", "def polynomial_lasso_regression(data, deg, l1_penalty):\n model = graphlab.linear_regression.create(polynomial_features(data,deg), \n target='Y', l2_penalty=0.,\n l1_penalty=l1_penalty,\n validation_set=None, \n solver='fista', verbose=False,\n max_iterations=3000, convergence_threshold=1e-10)\n return model", "Explore the lasso solution as a function of a few different penalty strengths\nWe refer to lambda in the lasso case below as \"l1_penalty\"", "for l1_penalty in [0.0001, 0.01, 0.1, 10]:\n model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)\n print 'l1_penalty = %e' % l1_penalty\n print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()\n print_coefficients(model)\n print '\\n'\n plt.figure()\n plot_poly_predictions(data,model)\n plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))", "Above: We see that as lambda increases, we get sparser and sparser solutions. However, even for our non-sparse case for lambda=0.0001, the fit of our high-order polynomial is not too wild. This is because, like in ridge, coefficients included in the lasso solution are shrunk relative to those of the least squares (unregularized) solution. This leads to better behavior even without sparsity. Of course, as lambda goes to 0, the amount of this shrinkage decreases and the lasso solution approaches the (wild) least squares solution." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
stable/_downloads/709b65f447b790ec915e9d00176f0746/virtual_evoked.ipynb
bsd-3-clause
[ "%matplotlib inline", "Remap MEG channel types\nIn this example, MEG data are remapped from one channel type to another.\nThis is useful to:\n- visualize combined magnetometers and gradiometers as magnetometers\n  or gradiometers.\n- run statistics from both magnetometers and gradiometers while\n  working with a single type of channels.", "# Author: Mainak Jas <mainak.jas@telecom-paristech.fr>\n\n# License: BSD-3-Clause\n\nimport mne\nfrom mne.datasets import sample\n\nprint(__doc__)\n\n# read the evoked\ndata_path = sample.data_path()\nmeg_path = data_path / 'MEG' / 'sample'\nfname = meg_path / 'sample_audvis-ave.fif'\nevoked = mne.read_evokeds(fname, condition='Left Auditory', baseline=(None, 0))", "First, let's remap gradiometers to magnetometers, and plot\nthe original and remapped topomaps of the magnetometers.", "# go from grad + mag to mag and plot original mag\nvirt_evoked = evoked.as_type('mag')\nevoked.plot_topomap(ch_type='mag', title='mag (original)', time_unit='s')\n\n# plot interpolated grad + mag\nvirt_evoked.plot_topomap(ch_type='mag', time_unit='s',\n                         title='mag (interpolated from mag + grad)')", "Now, we remap magnetometers to gradiometers, and plot\nthe original and remapped topomaps of the gradiometers.", "# go from grad + mag to grad and plot original grad\nvirt_evoked = evoked.as_type('grad')\nevoked.plot_topomap(ch_type='grad', title='grad (original)', time_unit='s')\n\n# plot interpolated grad + mag\nvirt_evoked.plot_topomap(ch_type='grad', time_unit='s',\n                         title='grad (interpolated from mag + grad)')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gfeiden/Notebook
Projects/ngc2516_spots/ngc2516_vs_pleiades.ipynb
mit
[ "NGC 2516 vs the Pleiades\nThese two clusters have similar ages, but do their CMDs show a similar morphology for low-mass stars?", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np", "Import NGC 2516 low-mass star data.", "ngc2516 = np.genfromtxt('data/ngc2516_Christophe_v3.dat') # data for this study from J&J (2012)\nirwin07 = np.genfromtxt('data/irwin2007.phot') # data from Irwin+ (2007)\njeffr01 = np.genfromtxt('data/jeff_2001.tsv', delimiter=';', comments='#') # data from Jeffries+ (2001)\njeffr01 = np.array([star for star in jeffr01 if star[9] == 1]) # extract candidate members", "Jackson et al. (2009) recommend a small correction to I-band magnitudes from Irwin et al. (2007) to place them on the same photometric scale as Jeffries et al. (2001), which they deem to be \"better calibrated.\" Jackson & Jeffries (2012) suggest that the tabulated data (on Vizier) has been transformed to the \"better calibrated\" system. Key to understanding their results, however, is to also transform $(V-I_C)$ and then calculate a correction to $V$-band magnitudes, as well.", "irwinVI = (irwin07[:, 7] - irwin07[:, 8])*(1.0 - 0.153) + 0.300\nirwin07[:, 8] = (1.0 - 0.0076)*irwin07[:, 8] + 0.080\nirwin07[:, 7] = irwinVI + irwin07[:, 8]", "~~Note that it is not immediately clear whether this correction should be applied to photometric data cataloged by Jackson & Jeffries (2012).~~ Reading through Irwin et al. (2007) and Jackson & Jeffries (2012), it appears that the transformations are largely performed to transform the Irwin+ photometric system (Johnson $I$-band) into Cousins $I_C$ magnitudes. There may be reasons related to a \"better calibration,\" but the issue is to first and foremost put them in the same photometric system. 
Why that involves altering the $V$-band magnitudes is not abundantly clear.\nNow data for the Pleiades.", "pleiades_s07 = np.genfromtxt('../pleiades_colors/data/Stauffer_Pleiades_litPhot.txt', usecols=(2, 3, 5, 6, 8, 9, 13, 14, 15))\npleiades_k14 = np.genfromtxt('../pleiades_colors/data/Kamai_Pleiades_cmd.dat', usecols=(0, 1, 2, 3, 4, 5))\niso_emp_k14 = np.genfromtxt('../pleiades_colors/data/Kamai_Pleiades_emp.iso') # empirical Pleiades isochrone", "Adopt literature values for reddening, neglecting differential reddening across the Pleiades.", "pl_dis = 5.61\npl_ebv = 0.034\npl_evi = 1.25*pl_ebv\npl_evk = 2.78*pl_ebv\npl_eik = pl_evk - pl_evi\npl_av = 3.12*pl_ebv\n\nng_dis = 7.95\nng_ebv = 0.12\nng_evi = 1.25*ng_ebv\nng_evk = 2.78*ng_ebv\nng_eik = ng_evk - ng_evi\nng_av = 3.12*ng_ebv", "Overlay the CMDs for each cluster, corrected for reddening and distance.", "fig, ax = plt.subplots(1, 2, figsize=(12., 8.), sharex=True, sharey=True)\n\nfor axis in ax:\n axis.grid(True)\n axis.tick_params(which='major', axis='both', length=15., labelsize=16.)\n axis.set_ylim(12., 5.)\n axis.set_xlim(0.5, 3.0)\n axis.set_xlabel('$(V - I_C)$', fontsize=20.)\n\nax[0].set_ylabel('$M_V$', fontsize=20.)\n\nax[0].plot(jeffr01[:,5] - ng_evi, jeffr01[:,3] - ng_av - ng_dis, \n 'o', markersize=4.0, c='#555555', alpha=0.2)\nax[0].plot(irwin07[:, 7] - irwin07[:, 8] - ng_evi, irwin07[:, 7] - ng_av - ng_dis, \n 'o', c='#1e90ff', markersize=4.0, alpha=0.6)\nax[0].plot(ngc2516[:, 1] - ngc2516[:, 2] - ng_evi, ngc2516[:, 1] - ng_av - ng_dis, \n 'o', c='#555555', markersize=4.0, alpha=0.8)\nax[0].plot(iso_emp_k14[:, 2] - pl_evi, iso_emp_k14[:, 0] - pl_av - pl_dis, \n dashes=(20., 5.), lw=3, c='#b22222')\n\nax[1].plot(irwin07[:, 7] - irwin07[:, 8] - ng_evi, irwin07[:, 7] - ng_av - ng_dis, \n 'o', c='#1e90ff', markersize=4.0, alpha=0.6)\nax[1].plot(ngc2516[:, 1] - ngc2516[:, 2] - ng_evi, ngc2516[:, 1] - ng_av - ng_dis, \n 'o', c='#555555', markersize=4.0, alpha=0.6)\nax[1].plot(iso_emp_k14[:, 2] - 
pl_evi, iso_emp_k14[:, 0] - pl_av - pl_dis, \n           dashes=(20., 5.), lw=3, c='#b22222')", "While the Stauffer et al. (2007) and Jackson et al. (2009) samples lie a bit redward of the median sequence of the Jeffries et al. (2001) sample, the former two samples compare well against the empirical cluster sequence (shown as a red dashed line; Kamai et al. 2014) from the Pleiades in a $M_V/(V-I_C)$ CMD. \nWhat about $M_V/(V-K)$ and $M_V/(I_C-K)$ CMDs?", "fig, ax = plt.subplots(1, 2, figsize=(12., 8.), sharey=True)\n\nfor axis in ax:\n    axis.grid(True)\n    axis.tick_params(which='major', axis='both', length=15., labelsize=16.)\n    axis.set_ylim(12., 5.)\n    \nax[0].set_xlim(1.0, 6.0)\nax[0].set_xlabel('$(V - K)$', fontsize=20.)\nax[0].set_ylabel('$M_V$', fontsize=20.)\n\n# include K_CIT --> K_2mass correction for NGC 2516\nax[0].plot(ngc2516[:, 1] - ngc2516[:, 3] - 0.024 - ng_evk, ngc2516[:, 1] - ng_av - ng_dis, \n           'o', c='#555555', markersize=4.0, alpha=0.6)\nax[0].plot(iso_emp_k14[:, 3] - pl_evk, iso_emp_k14[:, 0] - pl_av - pl_dis, \n           dashes=(20., 5.), lw=3, c='#b22222')\n\nax[1].set_xlim(0.5, 3.0)\nax[1].set_xlabel('$(I_C - K)$', fontsize=20.)\n\nax[1].plot(ngc2516[:, 2] - ngc2516[:, 3] - 0.024 - ng_eik, ngc2516[:, 1] - ng_av - ng_dis, \n           'o', c='#555555', markersize=4.0, alpha=0.6)\nax[1].plot(iso_emp_k14[:, 3] - iso_emp_k14[:, 2] - pl_eik, iso_emp_k14[:, 0] - pl_av - pl_dis, \n           dashes=(20., 5.), lw=3, c='#b22222')", "While data in the $M_V/(V-I_C)$ CMD appears to be bluer for early M-dwarf stars and redder for later M-dwarf stars, we find that M-dwarfs in NGC 2516 appear to be generally bluer than low-mass stars in the Pleiades. \nAn interesting implication is that empirical isochrones based on the Pleiades or NGC 2516 may not reliably fit other clusters. Something is different between the two. Is it magnetic activity, or perhaps chemical composition?\n\n$(B-V)/(V-I_C)$ color-color diagram using data from Jeffries et al. 
(2001) for NGC 2516.", "fig, ax = plt.subplots(1, 1, figsize=(6., 8.))\n\nax.grid(True)\nax.tick_params(which='major', axis='both', length=15., labelsize=16.)\nax.set_xlim(-0.5, 2.0)\nax.set_ylim( 0.0, 2.5)\nax.set_ylabel('$(V - I_C)$', fontsize=20.)\nax.set_xlabel('$(B - V)$', fontsize=20.)\n\nax.plot(jeffr01[:,4] - ng_ebv, jeffr01[:,5] - ng_evi, \n        'o', markersize=4.0, c='#555555', alpha=0.2)\nax.plot(iso_emp_k14[:, 1] - pl_ebv, iso_emp_k14[:, 2] - pl_evi, \n        dashes=(20., 5.), lw=3, c='#b22222')", "The empirical isochrone from Kamai et al. (2014) agrees well with photometric data for NGC 2516, with both corrected for differential extinction. There may be some small disagreements at various locations along the sequence, but the morphology of the empirical isochrone is broadly consistent with NGC 2516. However, stars in NGC 2516 appear to have bluer $(B-V)$ colors for $(V-I_C) > 1.8$.\n\nExploring a transformation of the Irwin+ data from Johnson $(V-I)$ to Cousins $(V-I_C)$ from Bessell (1979). This is ignoring issues related to photometric calibrations that Jackson & Jeffries posit are important. I'm not claiming JJ are wrong, just wondering how using a more standard photometric transformation would affect the resulting CMDs.\nWAIT: The Irwin et al. (2007) data file suggests the I band photometry is quoted in terms of Cousins I, not Johnson... unlike what is stated in their paper.", "tmp_data = np.genfromtxt('data/irwin2007.phot') # re-load Irwin et al. 
(2007) data into new array", "Now, applying the transformation from Bessell (1979), which states that $(V-I_C) = 0.835 (V-I_J) - 0.130$:", "old_vmi = tmp_data[:, 7] - tmp_data[:, 8]\nnew_vmi = old_vmi*0.835 - 0.130\n\nfig, ax = plt.subplots(1, 1, figsize=(6., 8.))\n\nax.grid(True)\nax.tick_params(which='major', axis='both', length=15., labelsize=16.)\nax.set_xlim(2.0, 3.0)\nax.set_ylim(22., 16.)\nax.set_xlabel('$(V - I_C)$', fontsize=20.)\nax.set_ylabel('$M_V$', fontsize=20.)\n\nax.plot(jeffr01[:, 5], jeffr01[:, 3], 'o', markersize=4.0, c='#555555', alpha=0.2)\nax.plot(old_vmi, tmp_data[:, 7], 'o', c='#b22222', alpha=0.6)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
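The reddening and extinction corrections applied repeatedly in the notebook above can be collected into a small helper. This is only a sketch that restates the coefficients already used there ($E(V-I) = 1.25\,E(B-V)$, $E(V-K) = 2.78\,E(B-V)$, $A_V = 3.12\,E(B-V)$, distance modulus 5.61 for the Pleiades); the function names are ours, not the notebook's.

```python
# Sketch of the extinction bookkeeping used above; the coefficients and
# the Pleiades E(B-V) = 0.034 / distance modulus 5.61 come from the notebook.

def extinction_terms(ebv):
    """Return (E(V-I), E(V-K), E(I-K), A_V) for a given E(B-V)."""
    evi = 1.25 * ebv
    evk = 2.78 * ebv
    eik = evk - evi
    av = 3.12 * ebv
    return evi, evk, eik, av

def absolute_v(v_apparent, ebv, dist_mod):
    """Apparent V magnitude -> extinction-corrected absolute M_V."""
    _, _, _, av = extinction_terms(ebv)
    return v_apparent - av - dist_mod

pl_evi, pl_evk, pl_eik, pl_av = extinction_terms(0.034)
print(pl_av)                          # ~0.106 mag
print(absolute_v(10.0, 0.034, 5.61))  # ~4.284 for a hypothetical V = 10 star
```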
Diyago/Machine-Learning-scripts
DEEP LEARNING/Pytorch from scratch/TODO/Autoencoders/linear-autoencoder/Simple_Autoencoder_Solution.ipynb
apache-2.0
[ "A Simple Autoencoder\nWe'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.\n<img src='notebook_ims/autoencoder_1.png' />\nCompressed Representation\nA compressed representation can be great for saving and sharing any kind of data in a way that is more efficient than storing raw data. In practice, the compressed representation often holds key information about an input image and we can use it for denoising images or other kinds of reconstruction and transformation!\n<img src='notebook_ims/denoising.png' width=60%/>\nIn this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.", "import torch\nimport numpy as np\nfrom torchvision import datasets\nimport torchvision.transforms as transforms\n\n# convert data to torch.FloatTensor\ntransform = transforms.ToTensor()\n\n# load the training and test datasets\ntrain_data = datasets.MNIST(root='data', train=True,\n                                   download=True, transform=transform)\ntest_data = datasets.MNIST(root='data', train=False,\n                                  download=True, transform=transform)\n\n# Create training and test dataloaders\n\n# number of subprocesses to use for data loading\nnum_workers = 0\n# how many samples per batch to load\nbatch_size = 20\n\n# prepare data loaders\ntrain_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers)\ntest_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers)", "Visualize the Data", "import matplotlib.pyplot as plt\n%matplotlib inline\n    \n# obtain one batch of training images\ndataiter = iter(train_loader)\nimages, labels = 
next(dataiter)\nimages = images.numpy()\n\n# get one image from the batch\nimg = np.squeeze(images[0])\n\nfig = plt.figure(figsize = (5,5)) \nax = fig.add_subplot(111)\nax.imshow(img, cmap='gray')", "Linear Autoencoder\nWe'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building a simple autoencoder. The encoder and decoder should be made of one linear layer. The units that connect the encoder and decoder will be the compressed representation.\nSince the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values that match this input value range.\n<img src='notebook_ims/simple_autoencoder.png' width=50% />\nTODO: Build the graph for the autoencoder in the cell below.\n\nThe input images will be flattened into 784 length vectors. The targets are the same as the inputs. \nTogether, the encoder and decoder will be made of two linear layers.\nThe depth dimensions should change as follows: 784 inputs > encoding_dim > 784 outputs.\nAll layers will have ReLU activations applied except for the final output layer, which has a sigmoid activation.\n\nThe compressed representation should be a vector with dimension encoding_dim=32.", "import torch.nn as nn\nimport torch.nn.functional as F\n\n# define the NN architecture\nclass Autoencoder(nn.Module):\n    def __init__(self, encoding_dim):\n        super(Autoencoder, self).__init__()\n        ## encoder ##\n        # linear layer (784 -> encoding_dim)\n        self.fc1 = nn.Linear(28 * 28, encoding_dim)\n        \n        ## decoder ##\n        # linear layer (encoding_dim -> input size)\n        self.fc2 = nn.Linear(encoding_dim, 28*28)\n        \n\n    def forward(self, x):\n        # add layer, with relu activation function\n        x = F.relu(self.fc1(x))\n        # output layer (sigmoid for scaling from 0 to 1)\n        x = F.sigmoid(self.fc2(x))\n        return x\n\n# initialize the NN\nencoding_dim = 32\nmodel = 
Autoencoder(encoding_dim)\nprint(model)", "Training\nHere I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards. \nWe are not concerned with labels in this case, just images, which we can get from the train_loader. Because we're comparing pixel values in input and output images, it will be best to use a loss that is meant for a regression task. Regression is all about comparing quantities rather than probabilistic values. So, in this case, I'll use MSELoss. And compare output images and input images as follows:\nloss = criterion(outputs, images)\nOtherwise, this is pretty straightforward training with PyTorch. We flatten our images, pass them into the autoencoder, and record the training loss as we go.", "# specify loss function\ncriterion = nn.MSELoss()\n\n# specify optimizer\noptimizer = torch.optim.Adam(model.parameters(), lr=0.001)\n\n# number of epochs to train the model\nn_epochs = 20\n\nfor epoch in range(1, n_epochs+1):\n    # monitor training loss\n    train_loss = 0.0\n    \n    ###################\n    # train the model #\n    ###################\n    for data in train_loader:\n        # _ stands in for labels, here\n        images, _ = data\n        # flatten images\n        images = images.view(images.size(0), -1)\n        # clear the gradients of all optimized variables\n        optimizer.zero_grad()\n        # forward pass: compute predicted outputs by passing inputs to the model\n        outputs = model(images)\n        # calculate the loss\n        loss = criterion(outputs, images)\n        # backward pass: compute gradient of the loss with respect to model parameters\n        loss.backward()\n        # perform a single optimization step (parameter update)\n        optimizer.step()\n        # update running training loss\n        train_loss += loss.item()*images.size(0)\n            \n    # print avg training statistics (per sample, so divide by the dataset size)\n    train_loss = train_loss/len(train_loader.dataset)\n    print('Epoch: {} \tTraining Loss: {:.6f}'.format(\n        epoch, \n        train_loss\n        ))", "Checking out the results\nBelow I've 
plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.", "# obtain one batch of test images\ndataiter = iter(test_loader)\nimages, labels = next(dataiter)\n\nimages_flatten = images.view(images.size(0), -1)\n# get sample outputs\noutput = model(images_flatten)\n# prep images for display\nimages = images.numpy()\n\n# output is resized into a batch of images\noutput = output.view(batch_size, 1, 28, 28)\n# use detach when it's an output that requires_grad\noutput = output.detach().numpy()\n\n# plot the first ten input images and then reconstructed images\nfig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25,4))\n\n# input images on top row, reconstructions on bottom\nfor images, row in zip([images, output], axes):\n    for img, ax in zip(images, row):\n        ax.imshow(np.squeeze(img), cmap='gray')\n        ax.get_xaxis().set_visible(False)\n        ax.get_yaxis().set_visible(False)", "Up Next\nWe're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
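The 784 → encoding_dim → 784 pipeline in the autoencoder notebook above can be sanity-checked without PyTorch at all. Below is a plain-numpy sketch of the same forward pass using random stand-in weights (not the trained model), just to confirm the shapes and the effect of the sigmoid output:

```python
import numpy as np

rng = np.random.default_rng(0)
encoding_dim = 32

# random stand-ins for the two linear layers (784 -> 32 -> 784)
W1 = rng.normal(scale=0.1, size=(784, encoding_dim))
b1 = np.zeros(encoding_dim)
W2 = rng.normal(scale=0.1, size=(encoding_dim, 784))
b2 = np.zeros(784)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)               # ReLU encoder
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid decoder

batch = rng.random((20, 784))  # a fake "flattened MNIST" batch
out = forward(batch)
print(out.shape)  # (20, 784), every value strictly between 0 and 1
```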
dtamayo/rebound
ipython_examples/Units.ipynb
gpl-3.0
[ "Unit convenience functions\nFor convenience, REBOUND offers simple functionality for converting units. One implicitly sets the units for the simulation through the values used for the initial conditions, but one has to set the appropriate value for the gravitational constant G, and sometimes it is convenient to get the output in different units.\nThe default value for G is 1, so one can:\na) use units for the initial conditions where G=1 (e.g., AU, $M_\\odot$, yr/$2\\pi$)\nb) set G manually to the value appropriate for the adopted initial conditions, e.g., to use SI units,", "import rebound\nimport math\nsim = rebound.Simulation()\nsim.G = 6.674e-11", "c) set rebound.units:", "sim.units = ('yr', 'AU', 'Msun')\nprint(\"G = {0}.\".format(sim.G))", "When you set the units, REBOUND converts G to the appropriate value for the units passed (must pass exactly 3 units for mass length and time, but they can be in any order). Note that if you are interested in high precision, you have to be quite particular about the exact units. \nAs an aside, the reason why G differs from $4\\pi^2 \\approx 39.47841760435743$ is mostly that we follow the convention of defining a \"year\" as 365.25 days (a Julian year), whereas the Earth's sidereal orbital period is closer to 365.256 days (and at even finer level, Venus and Mercury modify the orbital period). G would only equal $4\\pi^2$ in units where a \"year\" was exactly equal to one orbital period at $1 AU$ around a $1 M_\\odot$ star.\nAdding particles\nIf you use sim.units at all, you need to set the units before adding any particles. You can then add particles in any of the ways described in WHFast.ipynb. You can also add particles drawing from the horizons database (see Churyumov-Gerasimenko.ipynb). If you don't set the units ahead of time, HORIZONS will return initial conditions in units of AU, $M_\\odot$ and yrs/$2\\pi$, such that G=1. 
\nAbove we switched to units of AU, $M_\\odot$ and yrs, so when we add Earth:", "sim.add('Earth')\nps = sim.particles\nimport math\nprint(\"v = {0}\".format(math.sqrt(ps[0].vx**2 + ps[0].vy**2 + ps[0].vz**2)))", "we see that the velocity is correctly set to approximately $2\\pi$ AU/yr.\nIf you'd like to enter the initial conditions in one set of units, and then use a different set for the simulation, you can use the sim.convert_particle_units function, which converts both the initial conditions and G. Since we added Earth above, we restart with a new Simulation instance; otherwise we'll get an error saying that we can't set the units with particles already loaded:", "sim = rebound.Simulation()\nsim.units = ('m', 's', 'kg')\nsim.add(m=1.99e30)\nsim.add(m=5.97e24,a=1.5e11)\n\nsim.convert_particle_units('AU', 'yr', 'Msun')\nsim.status()", "We first set the units to SI, added (approximate values for) the Sun and Earth in these units, and switched to AU, yr, $M_\\odot$. You can see that the particle states were converted correctly--the Sun has a mass of about 1, and the Earth has a distance of about 1.\nNote that when you pass orbital elements to sim.add, you must make sure G is set correctly ahead of time (through either 3 of the methods above), since it will use the value of sim.G to generate the velocities:", "sim = rebound.Simulation()\nprint(\"G = {0}\".format(sim.G))\nsim.add(m=1.99e30)\nsim.add(m=5.97e24,a=1.5e11)\nsim.status()", "The orbital speed of Earth is $\\sim 3\\times 10^4$ m/s, but since we didn't correctly set G ahead of time, we get $\\sim 3\\times 10^9$ m/s, so the Earth would fly off the Sun in this simulation." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
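The unit conversion that `sim.units` performs on G in the REBOUND notebook above can be reproduced by hand with plain dimensional analysis. The constants below are our own assumptions (IAU astronomical unit, Julian year, a commonly quoted solar mass), so the last digits need not match REBOUND's internal values:

```python
import math

G_SI = 6.674e-11         # m^3 kg^-1 s^-2, as set in the notebook
AU = 1.495978707e11      # m (assumed IAU value)
YR = 365.25 * 86400.0    # s (Julian year, as the notebook notes)
MSUN = 1.98892e30        # kg (assumed)

# G [m^3 kg^-1 s^-2] -> [AU^3 Msun^-1 yr^-2]
G_converted = G_SI * MSUN * YR**2 / AU**3
print(G_converted)     # ~39.49: close to, but not exactly, 4*pi^2
print(4 * math.pi**2)  # ~39.478
```

This makes the notebook's point concrete: the converted G is near $4\pi^2$ but differs slightly because a Julian year is not exactly one orbital period at 1 AU.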
mdeff/ntds_2016
algorithms/04_sol_tensorflow.ipynb
mit
[ "A Network Tour of Data Science\n&nbsp; &nbsp; &nbsp; Xavier Bresson, Winter 2016/17\nExercise 4 : Introduction to TensorFlow", "# Import libraries\nimport tensorflow as tf\nimport numpy as np\nimport time\nimport collections\nimport os\n\n# Import MNIST data with TensorFlow\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(os.path.join('datasets', 'mnist'), one_hot=True) # load data in local folder\n\ntrain_data = mnist.train.images.astype(np.float32)\ntrain_labels = mnist.train.labels\n\ntest_data = mnist.test.images.astype(np.float32)\ntest_labels = mnist.test.labels\n\nprint(train_data.shape)\nprint(train_labels.shape)\nprint(test_data.shape)\nprint(test_labels.shape)", "1st Step: Construct Computational Graph\nQuestion 1: Prepare the input variables (x,y_label) of the computational graph\nHint: You may use the function tf.placeholder()", "# computational graph inputs\nbatch_size = 100\nd = train_data.shape[1]\nnc = 10\nx = tf.placeholder(tf.float32,[batch_size,d]); print('x=',x,x.get_shape())\ny_label = tf.placeholder(tf.float32,[batch_size,nc]); print('y_label=',y_label,y_label.get_shape())", "Question 2: Prepare the variables (W,b) of the computational graph\nHint: You may use the function tf.Variable(), tf.truncated_normal()", "# computational graph variables\ninitial = tf.truncated_normal([d,nc], stddev=0.1); W = tf.Variable(initial); print('W=',W.get_shape())\nb = tf.Variable(tf.zeros([nc],tf.float32)); print('b=',b.get_shape())", "Question 3: Compute the classifier such that\n$$\ny=softmax(Wx +b)\n$$\nHint: You may use the function tf.matmul(), tf.nn.softmax()", "# Construct CG / output value\ny = tf.matmul(x, W); print('y1=',y,y.get_shape())\ny += b; print('y2=',y,y.get_shape())\ny = tf.nn.softmax(y); print('y3=',y,y.get_shape())", "Question 4: Construct the loss of the computational graph such that\n$$\nloss = cross\\ entropy(y_{label},y) = mean_{all\\ data} \\ \\sum_{all\\ classes} -\\ 
y_{label}.\\log(y)\n$$\nHint: You may use the function tf.Variable(), tf.truncated_normal()", "# Loss\ncross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))", "Question 5: Construct the L2 regularization of (W,b) to the computational graph such that\n$$\nR(W) = \\|W\\|_2^2\\\nR(b) = \\|b\\|_2^2\n$$\nHint: You may use the function tf.nn.l2_loss()", "reg_loss = tf.nn.l2_loss(W)\nreg_loss += tf.nn.l2_loss(b)", "Question 6: Form the total loss\n$$\ntotal\\ loss = cross\\ entropy(y_{label},y) + reg_par* (R(W) + R(b))\n$$", "reg_par = 1e-3\ntotal_loss = cross_entropy + reg_par* reg_loss", "Question 7: Perform optimization of the total loss for learning weight variables of the computational graph\nHint: You may use the function tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)", "# Update CG variables / backward pass\ntrain_step = tf.train.GradientDescentOptimizer(0.25).minimize(total_loss)", "Question 8: Evaluate the accuracy\nHint: You may use the function tf.equal(tf.argmax(y,1), tf.argmax(y_label,1)) and tf.reduce_mean()", "# Accuracy\ncorrect_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))", "2nd Step: Run the Computational Graph with batches of training data\nCheck out the accuracy of test set", "# Create test set \nidx = np.random.permutation(test_data.shape[0]) # rand permutation\nidx = idx[:batch_size]\ntest_x, test_y = test_data[idx,:], test_labels[idx]\n\nn = train_data.shape[0]\nindices = collections.deque()\n\n# Running Computational Graph\ninit = tf.initialize_all_variables()\nsess = tf.Session()\nsess.run(init)\nfor i in range(50):\n \n # Batch extraction\n if len(indices) < batch_size:\n indices.extend(np.random.permutation(n)) # rand permutation\n idx = [indices.popleft() for i in range(batch_size)] # extract n_batch data\n batch_x, batch_y = train_data[idx,:], train_labels[idx]\n \n # Run CG for variable training\n _,acc_train,total_loss_o 
= sess.run([train_step,accuracy,total_loss], feed_dict={x: batch_x, y_label: batch_y})\n print('\\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)\n \n # Run CG for testset\n acc_test = sess.run(accuracy, feed_dict={x: test_x, y_label: test_y})\n print('test accuracy=',acc_test)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
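The softmax classifier and cross-entropy loss built step by step in the TensorFlow exercise above can be checked against a small plain-numpy sketch (shapes and values here are made up for illustration):

```python
import numpy as np

def softmax(z):
    # subtract the row max for numerical stability before exponentiating
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(y_label, y):
    # mean over all data of sum over all classes of -y_label * log(y)
    return np.mean(-np.sum(y_label * np.log(y), axis=1))

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 3))       # 4 samples, 3 "classes"
y = softmax(x)
print(y.sum(axis=1))              # each row sums to 1

onehot = np.eye(3)[[0, 1, 2, 0]]  # fake one-hot labels
print(cross_entropy(onehot, y))   # a positive scalar
```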
tpin3694/tpin3694.github.io
machine-learning/delete_observations_with_missing_values.ipynb
mit
[ "Title: Delete Observations With Missing Values \nSlug: delete_observations_with_missing_values \nSummary: How to delete observations with missing values. \nDate: 2017-09-05 12:00\nCategory: Machine Learning\nTags: Preprocessing Structured Data\nAuthors: Chris Albon \nPreliminaries", "# Load libraries\nimport numpy as np\nimport pandas as pd", "Create Feature Matrix", "# Create feature matrix\nX = np.array([[1.1, 11.1], \n [2.2, 22.2], \n [3.3, 33.3], \n [4.4, 44.4], \n [np.nan, 55]])", "Delete Observations With Missing Values", "# Remove observations with missing values\nX[~np.isnan(X).any(axis=1)]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
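The numpy mask used in the notebook above has a direct pandas equivalent. A quick sketch (column names are made up for illustration):

```python
import numpy as np
import pandas as pd

X = np.array([[1.1, 11.1],
              [2.2, 22.2],
              [3.3, 33.3],
              [4.4, 44.4],
              [np.nan, 55]])

# pandas equivalent of X[~np.isnan(X).any(axis=1)]:
# dropna() removes any row containing a missing value by default
df = pd.DataFrame(X, columns=['feature_1', 'feature_2'])
clean = df.dropna()
print(clean.shape)  # (4, 2)
```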
QuantStack/quantstack-talks
2019-07-10-CICM/notebooks/DrawControl.ipynb
bsd-3-clause
[ "from ipyleaflet import (\n    Map,\n    Marker,\n    TileLayer, ImageOverlay,\n    Polyline, Polygon, Rectangle, Circle, CircleMarker,\n    GeoJSON,\n    DrawControl\n)\n\nfrom traitlets import link\n\ncenter = [34.6252978589571, -77.34580993652344]\nzoom = 10\n\nm = Map(center=center, zoom=zoom)\nm\n\nm.zoom", "Now create the DrawControl and add it to the Map using add_control. We also register a handler for draw events. This will fire when a drawn path is created, edited or deleted (these are the actions). The geo_json argument is the serialized geometry of the drawn path, along with its embedded style.", "dc = DrawControl(marker={'shapeOptions': {'color': '#0000FF'}},\n                 rectangle={'shapeOptions': {'color': '#0000FF'}},\n                 circle={'shapeOptions': {'color': '#0000FF'}},\n                 circlemarker={},\n                 )\n\ndef handle_draw(self, action, geo_json):\n    print(action)\n    print(geo_json)\n\ndc.on_draw(handle_draw)\nm.add_control(dc)", "In addition, the DrawControl also has last_action and last_draw attributes that are created dynamically anytime a new drawn path arrives.", "dc.last_action\n\ndc.last_draw", "It's possible to remove all drawings from the map:", "dc.clear_circles()\n\ndc.clear_polylines()\n\ndc.clear_rectangles()\n\ndc.clear_markers()\n\ndc.clear_polygons()\n\ndc.clear()", "Let's draw a second map and try to import this GeoJSON data into it.", "m2 = Map(center=center, zoom=zoom, layout=dict(width='600px', height='400px'))\nm2", "We can use link to synchronize traitlets of the two maps:", "map_center_link = link((m, 'center'), (m2, 'center'))\nmap_zoom_link = link((m, 'zoom'), (m2, 'zoom'))\n\nnew_poly = GeoJSON(data=dc.last_draw)\n\nm2.add_layer(new_poly)", "Note that the style is preserved! If you wanted to change the style, you could edit the properties.style dictionary of the GeoJSON data. Or, you could even style the original path in the DrawControl by setting the polygon dictionary of that object. See the code for details.\nNow let's add a DrawControl to this second map. 
For fun we will disable lines and enable circles as well and change the style a bit.", "dc2 = DrawControl(polygon={'shapeOptions': {'color': '#0000FF'}}, polyline={},\n circle={'shapeOptions': {'color': '#0000FF'}})\nm2.add_control(dc2)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
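The dictionary that DrawControl hands to handle_draw and stores in last_draw is a GeoJSON Feature. A sketch of its shape in plain Python (the coordinates and style values below are made up for illustration, and the exact keys ipyleaflet emits may vary slightly):

```python
# Illustrative GeoJSON Feature, mimicking what dc.last_draw holds.
feature = {
    'type': 'Feature',
    'properties': {
        'style': {'color': '#0000FF', 'weight': 4, 'fillOpacity': 0.2},
    },
    'geometry': {
        'type': 'Polygon',
        'coordinates': [[[-77.4, 34.6], [-77.3, 34.6],
                         [-77.3, 34.7], [-77.4, 34.6]]],
    },
}

# the properties.style dictionary mentioned in the notebook is what you
# would edit to restyle the layer before passing it to GeoJSON(data=...)
feature['properties']['style']['color'] = '#FF0000'
print(feature['geometry']['type'])  # Polygon
```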
olgabot/cshl-singlecell-2017
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
mit
[ "Note: This is not really a \"network analysis\" - we are only looking at the graph and seeing what cells are there. If you want to do more than just zoom in and look around at the cells in the graphs, I recommend using Cytoscape for visualizing networks.", "# Interactive jupyter widgets - use IntSlider directly for more control\nfrom ipywidgets import IntSlider, interact\n\n# Convert RGB colors to hex for portability\nfrom matplotlib.colors import rgb2hex\n\n# Visualize networks\nimport networkx\n\n# Numerical python\nimport numpy as np\n\n# Pandas for dataframes\nimport pandas as pd\n\n# K-nearest neighbors cell clustering from Dana Pe'er's lab\nimport phenograph\n\n# Make color palettes\nimport seaborn as sns\n%matplotlib inline\n\n# Bokeh - interactive plotting in the browser\nfrom bokeh.plotting import figure, show, output_file\nfrom bokeh.models import HoverTool, ColumnDataSource\nfrom bokeh.models.widgets import Panel, Tabs\nfrom bokeh.layouts import widgetbox\nfrom bokeh.io import output_notebook\n\n# Local file: networkplots.py\nimport networkplots\n\n# This line is required for the plots to appear in the notebooks\noutput_notebook()", "At this point, you can follow along with either the pre-baked Macosko2015 amacrine data, or you can load in your own expression matrices. For the best experience, make sure that the rows are cells and the columns are gene names.", "import macosko2015\ncounts, cell_metadata, gene_metadata = macosko2015.load_big_clusters()\ncounts.head()", "Calculate correlation between cells:", "correlations = counts.T.rank().corr()\nprint(correlations.shape)\ncorrelations.head()", "Correlation != distance\nCorrelation is not equal to distance. If two things are exactly the same, their correlation value is 1. But in space, if two things are exactly the same, the distance between them is 0. Therefore, correlation is not a distance! Correlation is a similarity metric, where bigger = more similar. 
But we want a dissimilarity (aka distance) metric.\nTake a look for yourself. Many values in the distribution of all correlation values are near zero (not correlated), and a blip near 1 ( self-correlations).", "sns.distplot(correlations.values.flat)", "But for building a K-nearest neighbors graph, we want the closest things (in distance space) to be actually close. So we'll convert our correlation ($\\rho$) into a distance ($d$) using this equation:\n$$\nd = \\sqrt{2(1-\\rho)}\n$$\nYou can look at the code for networkplots.correlation_to_distance to convince yourself that's actually what it's doing:", "networkplots.correlation_to_distance??", "Exercise 1\nCreate a dataframe called distance using the correlation_to_distance function from networkplots on your corr dataframe.", "# YOUR CODE HERE", "", "distances = networkplots.correlation_to_distance(correlations)\ndistances.head()", "Exercise 2\nLet's take a look at our values to make sure we have most of our values far away from zero. Use sns.distplot to look the flattened values of the distances dataframe.", "# YOUR CODE HERE", "", "sns.distplot(distances.values.flat)", "Now we'll run phenograph.cluster, which returns three items:\n\ncommunities: the cluster labels of each cell\nsparse_matrix: a sparse matrix representing the connections between cells in the graph\nQ: the modularity score. Higher is better, and the highest is 1. 
\n0 means your graph is randomly connected and -1 means your graph isn't connected at all.", "communities, sparse_matrix, Q = phenograph.cluster(distances, k=10)", "Let's take a look at each of these returned values", "communities\n\nsparse_matrix\n\nQ", "It looks like the communities labels each cell as belonging to a particular cluster, the sparse_matrix is some data type that we can't directly investigate, and Q is the modularity value.\nMake a graph from the sparse matrix\nTo be able to lay out our graph in two dimensions, we'll use the networkx Python Package to build the graph and lay out the cells and edges.", "graph = networkx.from_scipy_sparse_matrix(sparse_matrix)\ngraph", "We'll use the \"Spring layout\" which is a force-directed layout that pushes cells and edges away from each other. We'll use the built-in networkx function called spring_layout on our graph:", "positions = networkx.spring_layout(graph)\npositions", "Convert positions dict to dataframe with node information\nThis positions dataframe is a dictionary mapping the node id (in this case, a number) and the $(x, y)$ position. The nodes are in exactly the same order as the rows of the distances dataframe we gave phenograph.cluster.", "networkplots.get_nodes_specs??", "Looks like this function can deal with if we already have some clusters defined in our metadata! 
Let's look at our cell_metadata and remind ourselves of which column we might like to use for the other_cluster_col value.", "cell_metadata.head()", "In this case, I'd like to use the cluster_n_celltype column.\nLet's take a look at the code again to see how the networkplots.get_nodes_specs function uses the metadata:", "networkplots.get_nodes_specs??", "Looks like this function uses another one, called labels_to_colors -- what does that do?", "networkplots.labels_to_colors??", "Now let's use get_nodes_specs to create a dataframe of information about nodes so we can plot them.", "nodes_specs = networkplots.get_nodes_specs(\n positions, cell_metadata, distances.index, \n communities, other_cluster_col='cluster_n_celltype',\n palette='Set2')\nprint(nodes_specs.shape)\nnodes_specs.head()", "Convert positions dict to dataframe with edge information\nWe've now created a dataframe containing the x,y positions, the community labels, and the colors for the communities and other clusters we were interested in. Now we want to do the same for the edges (lines between cells).\nLet's take a look at the function we'll use:", "networkplots.get_edges_specs??", "What arguments does it take? What does it do with them? 
What does it return?\nExercise 3\nCreate a variable called edges_specs using the networkplots.get_edges_specs and the correct inputs.", "# YOUR CODE HERE", "", "edges_specs = networkplots.get_edges_specs(graph, positions)\nprint(edges_specs.shape)\nedges_specs.head()", "To be able to use the dataframes with the Bokeh plotting language, we need to convert our dataframes into ColumnDataSource objects.", "nodes_source = ColumnDataSource(nodes_specs)\nedges_source = ColumnDataSource(edges_specs)\n\n# --- First tab: KNN clustering --- #\ntab1 = networkplots.plot_graph(nodes_source, edges_source, \n legend_col='community',\n color_col='community_color', tab=True,\n title='KNN Clustering')\n\n# --- Second tab: Clusters from paper --- #\ntab2 = networkplots.plot_graph(nodes_source, edges_source,\n legend_col='cluster_n_celltype', tab=True,\n color_col='other_cluster_color',\n title=\"Clusters from paper\")\n\ntabs = Tabs(tabs=[tab1, tab2])\nshow(tabs)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
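The correlation-to-distance conversion quoted in the notebook above, $d = \sqrt{2(1-\rho)}$, is easy to re-implement and sanity-check in plain numpy. This is a minimal re-implementation of the formula as stated there, not the actual code from networkplots.py:

```python
import numpy as np

def correlation_to_distance(rho):
    """d = sqrt(2 * (1 - rho)): rho=1 -> d=0, rho=-1 -> d=2."""
    return np.sqrt(2.0 * (1.0 - np.asarray(rho)))

print(correlation_to_distance(1.0))   # 0.0  (identical cells sit on top of each other)
print(correlation_to_distance(0.0))   # sqrt(2) ~ 1.414
print(correlation_to_distance(-1.0))  # 2.0  (perfectly anti-correlated)
```

This is why the conversion matters for the KNN graph: self-correlations of 1 become distances of 0, so the most similar cells really are the "nearest" neighbors.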
csieber/yt-dataset
notebooks/avg_quality.ipynb
mit
[ "Average video quality\nThe basic example shows how to reproduce the shaping to average quality level plot from the IFIP Networking 2016 publication.\nReading the dataset with pandas\nRemove warnings and show plots inline:", "import warnings\nwarnings.filterwarnings(\"ignore\")\n%matplotlib inline", "Import the required modules:", "import numpy as np\nimport pandas as pd\nimport matplotlib.pylab as plt\nimport scipy.stats", "Read the dataset:", "data = pd.read_csv(\"../data/ifip_networking.csv.gz\")", "Convert the shaping to Mbps:", "data.loc[:,'shaping_mbps'] = data.loc[:,'net_avg_shaping_rate']*8/1000/1000\ndata.loc[:,'shaping_mbps_rounded'] = data.loc[:,'shaping_mbps'].round(1)", "Definitions\nDict for translating itags to quality levels and vice-versa:", "ITAG_TO_QL = {160: 0,\n              133: 1,\n              134: 2,\n              135: 3,\n              136: 4}\nQL_TO_ITAG = {v: k for k, v in ITAG_TO_QL.items()}\n\nVIDDEF = {160: {'label': '144p', 'color': 'green', 'resolution': '256x144'},\n          133: {'label': '240p', 'color': 'red'  , 'resolution': '320x240'},\n          134: {'label': '360p', 'color': 'blue' , 'resolution': '480x360'},\n          135: {'label': '480p', 'color': 'grey' , 'resolution': '640x480'},\n          136: {'label': '720p', 'color': 'cyan' , 'resolution': '1280x720'}}", "Confidence Interval:", "def confintv_yerr(values):\n    n, min_max, mean, var, skew, kurt = scipy.stats.describe(values)\n    std = np.sqrt(var)\n\n    intv = scipy.stats.t.interval(0.95,len(values)-1,loc=mean,scale=std/np.sqrt(len(values)))\n\n    yerr = ((intv[1] - intv[0]) / 2)\n    \n    return yerr", "Plotting shaping to average quality level\nThe subsequent plot shows the fraction of time the video spent on a certain quality level and the overall average quality level for a specific network shaping value. 
For example, at 2.2 Mbps, the player spends nearly 100% of the time on the highest quality level (480p).", "fig = plt.figure(figsize=(9, 7))\n\nplt.hold(True)\nax1 = fig.add_subplot(111)\n\nby_shaping = data.groupby('shaping_mbps').mean() \n\ny_offset = 0\ncmap = plt.get_cmap('copper')\ncolors = iter(cmap(np.linspace(0,1,len(QL_TO_ITAG))))\n\nfor ql,itag in list(QL_TO_ITAG.items())[0:4]:\n\n idx_itag = 'pl_time_spent_norm_itag%d' % itag\n ax1.fill_between(by_shaping.index, \n y_offset,\n by_shaping[idx_itag],\n alpha=0.35,\n facecolor=next(colors))\n\n y_offset = by_shaping[idx_itag]\n\nplt.annotate(s=VIDDEF[QL_TO_ITAG[0]]['label'], xy=(0.46, 0.014))\nplt.annotate(s=VIDDEF[QL_TO_ITAG[1]]['label'], xy=(0.65, 0.42))\nplt.annotate(s=VIDDEF[QL_TO_ITAG[2]]['label'], xy=(1.05, 0.42))\nplt.annotate(s=VIDDEF[QL_TO_ITAG[3]]['label'], xy=(1.6, 0.42))\n\nplt.ylabel(r\"Relative Playback Time $T_{fq}$\")\nplt.xlabel(r\"Bandwidth $f$ (Mbps)\") \n\nax2 = ax1.twinx()\n\nax2_data = pd.DataFrame(columns=['shaping', 'avg_ql', 'yerr'])\nfor shaping,group in data.groupby('shaping_mbps'):\n\n ql_median = group['pl_avg_pl_quality_ql'].mean()\n ql_yerr = confintv_yerr(group['pl_avg_pl_quality_ql'])\n\n ax2_data = ax2_data.append(pd.DataFrame([[shaping, ql_median, ql_yerr]], columns=ax2_data.columns))\n\nax2_data.reset_index(drop=True)\n\nax2.errorbar(ax2_data['shaping'], ax2_data['avg_ql'], yerr=list(ax2_data['yerr']), color='black')\n\nplt.ylabel(r\"Average Quality $J_f$\")\n\nmax_mbps = 2.2\ntl = [\"\"]*int(2.2/0.1)\ntl[1] = \"0.5\"\ntl[6] = \"1.0\"\ntl[11] = \"1.5\"\ntl[16] = \"2.0\"\nplt.xticks(np.arange(by_shaping.index.min(), max_mbps, 0.1), tl)\n\nplt.xlim([by_shaping.index.min(), max_mbps])\n_ = plt.ylim([0, 3])", "Export notebook to HTML:", "!ipython nbconvert avg_quality.ipynb --to html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
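The confidence-interval helper in the record above (`confintv_yerr`) can be exercised in isolation. The sketch below is not part of the notebook record: the sample values are invented for illustration, and it computes the standard error directly rather than via `scipy.stats.describe`, which should give the same 95% Student-t half-width:

```python
import numpy as np
import scipy.stats


def confintv_yerr(values):
    # Half-width of the 95% Student-t confidence interval of the mean,
    # i.e. the symmetric y-error bar used when plotting the averages.
    n = len(values)
    mean = np.mean(values)
    sem = np.std(values, ddof=1) / np.sqrt(n)  # sample std / sqrt(n)
    lo, hi = scipy.stats.t.interval(0.95, n - 1, loc=mean, scale=sem)
    return (hi - lo) / 2


# Hypothetical quality-level samples for one shaping rate:
samples = np.array([2.0, 2.1, 1.9, 2.2, 2.0])
print(confintv_yerr(samples))  # half-width of the 95% CI around the mean
```

A wider spread in the samples yields a proportionally wider error bar, which is what the per-shaping-rate `errorbar` call in the plot visualizes.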
liufuyang/deep_learning_tutorial
course-deeplearning.ai/course4-cnn/week1-cnn/Convolution+model+-+Step+by+Step+-+v2.ipynb
mit
[ "Convolutional Neural Networks: Step by Step\nWelcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. \nNotation:\n- Superscript $[l]$ denotes an object of the $l^{th}$ layer. \n - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.\n\n\nSuperscript $(i)$ denotes an object from the $i^{th}$ example. \n\nExample: $x^{(i)}$ is the $i^{th}$ training example input.\n\n\n\nSubscript $i$ denotes the $i^{th}$ entry of a vector.\n\nExample: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer.\n\n\n\n$n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$. \n\n$n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$. \n\nWe assume that you are already familiar with numpy and/or have completed the previous courses of the specialization. Let's get started!\n1 - Packages\nLet's first import all the packages that you will need during this assignment. \n- numpy is the fundamental package for scientific computing with Python.\n- matplotlib is a library to plot graphs in Python.\n- np.random.seed(1) is used to keep all the random function calls consistent. 
It will help us grade your work.", "import numpy as np\nimport h5py\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n%load_ext autoreload\n%autoreload 2\n\nnp.random.seed(1)", "2 - Outline of the Assignment\nYou will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed:\n\nConvolution functions, including:\nZero Padding\nConvolve window \nConvolution forward\nConvolution backward (optional)\n\n\nPooling functions, including:\nPooling forward\nCreate mask \nDistribute value\nPooling backward (optional)\n\n\n\nThis notebook will ask you to implement these functions from scratch in numpy. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:\n<img src=\"images/model.png\" style=\"width:800px;height:300px;\">\nNote that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. \n3 - Convolutional Neural Networks\nAlthough programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below. \n<img src=\"images/conv_nn.png\" style=\"width:350px;height:200px;\">\nIn this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself. 
\n3.1 - Zero-Padding\nZero-padding adds zeros around the border of an image:\n<img src=\"images/PAD.png\" style=\"width:600px;height:400px;\">\n<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : Zero-Padding<br> Image (3 channels, RGB) with a padding of 2. </center></caption>\nThe main benefits of padding are the following:\n\n\nIt allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the \"same\" convolution, in which the height/width is exactly preserved after one layer. \n\n\nIt helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image.\n\n\nExercise: Implement the following function, which pads all the images of a batch of examples X with zeros. Use np.pad. Note that if you want to pad the array \"a\" of shape $(5,5,5,5,5)$ with pad = 1 for the 2nd dimension, pad = 3 for the 4th dimension and pad = 0 for the rest, you would do:\npython\na = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))", "# GRADED FUNCTION: zero_pad\n\ndef zero_pad(X, pad):\n \"\"\"\n Pad with zeros all images of the dataset X. 
The padding is applied to the height and width of an image, \n as illustrated in Figure 1.\n \n Argument:\n X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images\n pad -- integer, amount of padding around each image on vertical and horizontal dimensions\n \n Returns:\n X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line)\n X_pad = np.pad(X, ((0,0), (pad, pad), (pad, pad), (0,0)), 'constant', constant_values = 0)\n ### END CODE HERE ###\n \n return X_pad\n\nnp.random.seed(1)\nx = np.random.randn(4, 3, 3, 2)\nx_pad = zero_pad(x, 2)\nprint (\"x.shape =\", x.shape)\nprint (\"x_pad.shape =\", x_pad.shape)\nprint (\"x[1,1] =\", x[1,1])\nprint (\"x_pad[1,1] =\", x_pad[1,1])\n\nfig, axarr = plt.subplots(1, 2)\naxarr[0].set_title('x')\naxarr[0].imshow(x[0,:,:,0])\naxarr[1].set_title('x_pad')\naxarr[1].imshow(x_pad[0,:,:,0])", "Expected Output:\n<table>\n <tr>\n <td>\n **x.shape**:\n </td>\n <td>\n (4, 3, 3, 2)\n </td>\n </tr>\n <tr>\n <td>\n **x_pad.shape**:\n </td>\n <td>\n (4, 7, 7, 2)\n </td>\n </tr>\n <tr>\n <td>\n **x[1,1]**:\n </td>\n <td>\n [[ 0.90085595 -0.68372786]\n [-0.12289023 -0.93576943]\n [-0.26788808 0.53035547]]\n </td>\n </tr>\n <tr>\n <td>\n **x_pad[1,1]**:\n </td>\n <td>\n [[ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]\n [ 0. 0.]]\n </td>\n </tr>\n\n</table>\n\n3.2 - Single step of convolution\nIn this part, implement a single step of convolution, in which you apply the filter to a single position of the input. 
This will be used to build a convolutional unit, which: \n\nTakes an input volume \nApplies a filter at every position of the input\nOutputs another volume (usually of different size)\n\n<img src=\"images/Convolution_schematic.gif\" style=\"width:500px;height:300px;\">\n<caption><center> <u> <font color='purple'> Figure 2 </u><font color='purple'> : Convolution operation<br> with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption>\nIn a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. \nLater in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. \nExercise: Implement conv_single_step(). Hint.", "# GRADED FUNCTION: conv_single_step\n\ndef conv_single_step(a_slice_prev, W, b):\n \"\"\"\n Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation \n of the previous layer.\n \n Arguments:\n a_slice_prev -- slice of input data of shape (f, f, n_C_prev)\n W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)\n b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)\n \n Returns:\n Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data\n \"\"\"\n\n ### START CODE HERE ### (≈ 2 lines of code)\n # Element-wise product between a_slice and W. Do not add the bias yet.\n s = np.multiply(a_slice_prev, W)\n # Sum over all entries of the volume s.\n Z = np.sum(s)\n # Add bias b to Z. 
Cast b to a float() so that Z results in a scalar value.\n Z = Z + float(b)\n ### END CODE HERE ###\n\n return Z\n\nnp.random.seed(1)\na_slice_prev = np.random.randn(4, 4, 3)\nW = np.random.randn(4, 4, 3)\nb = np.random.randn(1, 1, 1)\n\nZ = conv_single_step(a_slice_prev, W, b)\nprint(\"Z =\", Z)", "Expected Output:\n<table>\n <tr>\n <td>\n **Z**\n </td>\n <td>\n -6.99908945068\n </td>\n </tr>\n\n</table>\n\n3.3 - Convolutional Neural Networks - Forward pass\nIn the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: \n<center>\n<video width=\"620\" height=\"440\" src=\"images/conv_kiank.mp4\" type=\"video/mp4\" controls>\n</video>\n</center>\nExercise: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding. \nHint: \n1. To select a 2x2 slice at the upper left corner of a matrix \"a_prev\" (shape (5,5,3)), you would do:\npython\na_slice_prev = a_prev[0:2,0:2,:]\nThis will be useful when you will define a_slice_prev below, using the start/end indexes you will define.\n2. To define a_slice you will need to first define its corners vert_start, vert_end, horiz_start and horiz_end. This figure may be helpful for you to find how each of the corner can be defined using h, w, f and s in the code below.\n<img src=\"images/vert_horiz_kiank.png\" style=\"width:400px;height:300px;\">\n<caption><center> <u> <font color='purple'> Figure 3 </u><font color='purple'> : Definition of a slice using vertical and horizontal start/end (with a 2x2 filter) <br> This figure shows only a single channel. 
</center></caption>\nReminder:\nThe formulas relating the output shape of the convolution to the input shape is:\n$$ n_H = \\lfloor \\frac{n_{H_{prev}} - f + 2 \\times pad}{stride} \\rfloor +1 $$\n$$ n_W = \\lfloor \\frac{n_{W_{prev}} - f + 2 \\times pad}{stride} \\rfloor +1 $$\n$$ n_C = \\text{number of filters used in the convolution}$$\nFor this exercise, we won't worry about vectorization, and will just implement everything with for-loops.", "# GRADED FUNCTION: conv_forward\n\ndef conv_forward(A_prev, W, b, hparameters):\n \"\"\"\n Implements the forward propagation for a convolution function\n \n Arguments:\n A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)\n W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)\n b -- Biases, numpy array of shape (1, 1, 1, n_C)\n hparameters -- python dictionary containing \"stride\" and \"pad\"\n \n Returns:\n Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)\n cache -- cache of values needed for the conv_backward() function\n \"\"\"\n \n ### START CODE HERE ###\n # Retrieve dimensions from A_prev's shape (≈1 line) \n (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape\n \n # Retrieve dimensions from W's shape (≈1 line)\n (f, f, n_C_prev, n_C) = W.shape\n \n # Retrieve information from \"hparameters\" (≈2 lines)\n stride = hparameters['stride']\n pad = hparameters['pad']\n \n # Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)\n n_H = int((n_H_prev - f + 2*pad)/stride) + 1\n n_W = int((n_W_prev - f + 2*pad)/stride) + 1\n \n # Initialize the output volume Z with zeros. 
(≈1 line)\n Z = np.zeros((m, n_H, n_W, n_C))\n \n # Create A_prev_pad by padding A_prev\n A_prev_pad = zero_pad(A_prev, pad)\n \n for i in range(m): # loop over the batch of training examples\n a_prev_pad = A_prev_pad[i] # Select ith training example's padded activation\n for h in range(n_H): # loop over vertical axis of the output volume\n for w in range(n_W): # loop over horizontal axis of the output volume\n for c in range(n_C): # loop over channels (= #filters) of the output volume\n \n # Find the corners of the current \"slice\" (≈4 lines)\n vert_start = h*stride\n vert_end = vert_start+f\n horiz_start = w*stride\n horiz_end = horiz_start+f\n \n # Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)\n a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end,:]\n \n # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)\n Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:,:,:,c], b[:,:,:,c])\n \n ### END CODE HERE ###\n \n # Making sure your output shape is correct\n assert(Z.shape == (m, n_H, n_W, n_C))\n \n # Save information in \"cache\" for the backprop\n cache = (A_prev, W, b, hparameters)\n \n return Z, cache\n\nnp.random.seed(1)\nA_prev = np.random.randn(10,4,4,3)\nW = np.random.randn(2,2,3,8)\nb = np.random.randn(1,1,1,8)\nhparameters = {\"pad\" : 2,\n \"stride\": 2}\n\nZ, cache_conv = conv_forward(A_prev, W, b, hparameters)\nprint(\"Z's mean =\", np.mean(Z))\nprint(\"Z[3,2,1] =\", Z[3,2,1])\nprint(\"cache_conv[0][1][2][3] =\", cache_conv[0][1][2][3])", "Expected Output:\n<table>\n <tr>\n <td>\n **Z's mean**\n </td>\n <td>\n 0.0489952035289\n </td>\n </tr>\n <tr>\n <td>\n **Z[3,2,1]**\n </td>\n <td>\n [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437\n 5.18531798 8.75898442]\n </td>\n </tr>\n <tr>\n <td>\n **cache_conv[0][1][2][3]**\n </td>\n <td>\n [-0.20075807 0.18656139 0.41005165]\n </td>\n </tr>\n\n</table>\n\nFinally, CONV layer 
should also contain an activation, in which case we would add the following line of code:\n```python\nConvolve the window to get back one output neuron\nZ[i, h, w, c] = ...\nApply activation\nA[i, h, w, c] = activation(Z[i, h, w, c])\n```\nYou don't need to do it here. \n4 - Pooling layer\nThe pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to its position in the input. The two types of pooling layers are: \n\n\nMax-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.\n\n\nAverage-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.\n\n\n<table>\n<td>\n<img src=\"images/max_pool1.png\" style=\"width:500px;height:300px;\">\n<td>\n\n<td>\n<img src=\"images/a_pool.png\" style=\"width:500px;height:300px;\">\n<td>\n</table>\n\nThese pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the fxf window you would compute a max or average over. \n4.1 - Forward Pooling\nNow, you are going to implement MAX-POOL and AVG-POOL, in the same function. \nExercise: Implement the forward pass of the pooling layer. 
Follow the hints in the comments below.\nReminder:\nAs there's no padding, the formulas binding the output shape of the pooling to the input shape is:\n$$ n_H = \\lfloor \\frac{n_{H_{prev}} - f}{stride} \\rfloor +1 $$\n$$ n_W = \\lfloor \\frac{n_{W_{prev}} - f}{stride} \\rfloor +1 $$\n$$ n_C = n_{C_{prev}}$$", "# GRADED FUNCTION: pool_forward\n\ndef pool_forward(A_prev, hparameters, mode = \"max\"):\n \"\"\"\n Implements the forward pass of the pooling layer\n \n Arguments:\n A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)\n hparameters -- python dictionary containing \"f\" and \"stride\"\n mode -- the pooling mode you would like to use, defined as a string (\"max\" or \"average\")\n \n Returns:\n A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)\n cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters \n \"\"\"\n \n # Retrieve dimensions from the input shape\n (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape\n \n # Retrieve hyperparameters from \"hparameters\"\n f = hparameters[\"f\"]\n stride = hparameters[\"stride\"]\n \n # Define the dimensions of the output\n n_H = int(1 + (n_H_prev - f) / stride)\n n_W = int(1 + (n_W_prev - f) / stride)\n n_C = n_C_prev\n \n # Initialize output matrix A\n A = np.zeros((m, n_H, n_W, n_C)) \n \n ### START CODE HERE ###\n for i in range(m): # loop over the training examples\n for h in range(n_H): # loop on the vertical axis of the output volume\n for w in range(n_W): # loop on the horizontal axis of the output volume\n for c in range (n_C): # loop over the channels of the output volume\n \n # Find the corners of the current \"slice\" (≈4 lines)\n vert_start = h*stride\n vert_end = vert_start+f\n horiz_start = w*stride\n horiz_end = horiz_start+f\n \n # Use the corners to define the current slice on the ith training example of A_prev, channel c. 
(≈1 line)\n                    a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]\n                    \n                    # Compute the pooling operation on the slice. Use an if statement to differentiate the modes. Use np.max/np.mean.\n                    if mode == \"max\":\n                        A[i, h, w, c] = np.max(a_prev_slice)\n                    elif mode == \"average\":\n                        A[i, h, w, c] = np.mean(a_prev_slice)\n    \n    ### END CODE HERE ###\n    \n    # Store the input and hparameters in \"cache\" for pool_backward()\n    cache = (A_prev, hparameters)\n    \n    # Making sure your output shape is correct\n    assert(A.shape == (m, n_H, n_W, n_C))\n    \n    return A, cache\n\nnp.random.seed(1)\nA_prev = np.random.randn(2, 4, 4, 3)\nhparameters = {\"stride\" : 2, \"f\": 3}\n\nA, cache = pool_forward(A_prev, hparameters)\nprint(\"mode = max\")\nprint(\"A =\", A)\nprint()\nA, cache = pool_forward(A_prev, hparameters, mode = \"average\")\nprint(\"mode = average\")\nprint(\"A =\", A)", "Expected Output:\n<table>\n\n <tr>\n <td>\n A =\n </td>\n <td>\n [[[[ 1.74481176 0.86540763 1.13376944]]]\n\n\n [[[ 1.13162939 1.51981682 2.18557541]]]]\n\n </td>\n </tr>\n <tr>\n <td>\n A =\n </td>\n <td>\n [[[[ 0.02105773 -0.20328806 -0.40389855]]]\n\n\n [[[-0.22154621 0.51716526 0.48155844]]]]\n\n </td>\n </tr>\n\n</table>\n\nCongratulations! You have now implemented the forward passes of all the layers of a convolutional network. \nThe remainder of this notebook is optional, and will not be graded.\n5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)\nIn modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. 
\nWhen in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you need to calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we briefly present them below.\n5.1 - Convolutional layer backward pass\nLet's start by implementing the backward pass for a CONV layer. \n5.1.1 - Computing dA:\nThis is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:\n$$ dA += \\sum_{h=0}^{n_H} \\sum_{w=0}^{n_W} W_c \\times dZ_{hw} \\tag{1}$$\nWhere $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that each time, we multiply the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices. \nIn code, inside the appropriate for-loops, this formula translates into:\npython\nda_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]\n5.1.2 - Computing dW:\nThis is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:\n$$ dW_c += \\sum_{h=0}^{n_H} \\sum_{w=0}^{n_W} a_{slice} \\times dZ_{hw} \\tag{2}$$\nWhere $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. 
\nIn code, inside the appropriate for-loops, this formula translates into:\npython\ndW[:,:,:,c] += a_slice * dZ[i, h, w, c]\n5.1.3 - Computing db:\nThis is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:\n$$ db = \\sum_h \\sum_w dZ_{hw} \\tag{3}$$\nAs you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. \nIn code, inside the appropriate for-loops, this formula translates into:\npython\ndb[:,:,:,c] += dZ[i, h, w, c]\nExercise: Implement the conv_backward function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.", "def conv_backward(dZ, cache):\n \"\"\"\n Implement the backward propagation for a convolution function\n \n Arguments:\n dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)\n cache -- cache of values needed for the conv_backward(), output of conv_forward()\n \n Returns:\n dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),\n numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)\n dW -- gradient of the cost with respect to the weights of the conv layer (W)\n numpy array of shape (f, f, n_C_prev, n_C)\n db -- gradient of the cost with respect to the biases of the conv layer (b)\n numpy array of shape (1, 1, 1, n_C)\n \"\"\"\n \n ### START CODE HERE ###\n # Retrieve information from \"cache\"\n (A_prev, W, b, hparameters) = None\n \n # Retrieve dimensions from A_prev's shape\n (m, n_H_prev, n_W_prev, n_C_prev) = None\n \n # Retrieve dimensions from W's shape\n (f, f, n_C_prev, n_C) = None\n \n # Retrieve information from \"hparameters\"\n stride = None\n pad = None\n \n # Retrieve dimensions from dZ's shape\n (m, n_H, n_W, n_C) = None\n \n # Initialize dA_prev, dW, db with the correct 
shapes\n    dA_prev = None                           \n    dW = None\n    db = None\n\n    # Pad A_prev and dA_prev\n    A_prev_pad = None\n    dA_prev_pad = None\n    \n    for i in range(None):                       # loop over the training examples\n        \n        # select ith training example from A_prev_pad and dA_prev_pad\n        a_prev_pad = None\n        da_prev_pad = None\n        \n        for h in range(None):                   # loop over vertical axis of the output volume\n            for w in range(None):               # loop over horizontal axis of the output volume\n                for c in range(None):           # loop over the channels of the output volume\n                    \n                    # Find the corners of the current \"slice\"\n                    vert_start = None\n                    vert_end = None\n                    horiz_start = None\n                    horiz_end = None\n                    \n                    # Use the corners to define the slice from a_prev_pad\n                    a_slice = None\n\n                    # Update gradients for the window and the filter's parameters using the code formulas given above\n                    da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += None\n                    dW[:,:,:,c] += None\n                    db[:,:,:,c] += None\n                    \n        # Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])\n        dA_prev[i, :, :, :] = None\n    ### END CODE HERE ###\n    \n    # Making sure your output shape is correct\n    assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))\n    \n    return dA_prev, dW, db\n\nnp.random.seed(1)\ndA, dW, db = conv_backward(Z, cache_conv)\nprint(\"dA_mean =\", np.mean(dA))\nprint(\"dW_mean =\", np.mean(dW))\nprint(\"db_mean =\", np.mean(db))", "Expected Output: \n<table>\n <tr>\n <td>\n **dA_mean**\n </td>\n <td>\n 1.45243777754\n </td>\n </tr>\n <tr>\n <td>\n **dW_mean**\n </td>\n <td>\n 1.72699145831\n </td>\n </tr>\n <tr>\n <td>\n **db_mean**\n </td>\n <td>\n 7.83923256462\n </td>\n </tr>\n\n</table>\n\n5.2 Pooling layer - backward pass\nNext, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. 
Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer. \n5.2.1 Max pooling - backward pass\nBefore jumping into the backpropagation of the pooling layer, you are going to build a helper function called create_mask_from_window() which does the following: \n$$ X = \\begin{bmatrix}\n1 && 3 \\\n4 && 2\n\\end{bmatrix} \\quad \\rightarrow \\quad M =\\begin{bmatrix}\n0 && 0 \\\n1 && 0\n\\end{bmatrix}\\tag{4}$$\nAs you can see, this function creates a \"mask\" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask. \nExercise: Implement create_mask_from_window(). This function will be helpful for pooling backward. \nHints:\n- np.max() may be helpful. 
It computes the maximum of an array.\n- If you have a matrix X and a scalar x: A = (X == x) will return a matrix A of the same size as X such that:\nA[i,j] = True if X[i,j] = x\nA[i,j] = False if X[i,j] != x\n- Here, you don't need to consider cases where there are several maxima in a matrix.", "def create_mask_from_window(x):\n \"\"\"\n Creates a mask from an input matrix x, to identify the max entry of x.\n \n Arguments:\n x -- Array of shape (f, f)\n \n Returns:\n mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.\n \"\"\"\n \n ### START CODE HERE ### (≈1 line)\n mask = None\n ### END CODE HERE ###\n \n return mask\n\nnp.random.seed(1)\nx = np.random.randn(2,3)\nmask = create_mask_from_window(x)\nprint('x = ', x)\nprint(\"mask = \", mask)", "Expected Output: \n<table> \n<tr> \n<td>\n\n**x =**\n</td>\n\n<td>\n\n[[ 1.62434536 -0.61175641 -0.52817175] <br>\n [-1.07296862 0.86540763 -2.3015387 ]]\n\n </td>\n</tr>\n\n<tr> \n<td>\n**mask =**\n</td>\n<td>\n[[ True False False] <br>\n [False False False]]\n</td>\n</tr>\n\n\n</table>\n\nWhy do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will \"propagate\" the gradient back to this particular input value that had influenced the cost. \n5.2.2 - Average pooling - backward pass\nIn max pooling, for each input window, all the \"influence\" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. 
So to implement backprop, you will now implement a helper function that reflects this.\nFor example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: \n$$ dZ = 1 \\quad \\rightarrow \\quad dZ =\\begin{bmatrix}\n1/4 && 1/4 \\\n1/4 && 1/4\n\\end{bmatrix}\\tag{5}$$\nThis implies that each position in the $dZ$ matrix contributes equally to output because in the forward pass, we took an average. \nExercise: Implement the function below to equally distribute a value dz through a matrix of dimension shape. Hint", "def distribute_value(dz, shape):\n \"\"\"\n Distributes the input value in the matrix of dimension shape\n \n Arguments:\n dz -- input scalar\n shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz\n \n Returns:\n a -- Array of size (n_H, n_W) for which we distributed the value of dz\n \"\"\"\n \n ### START CODE HERE ###\n # Retrieve dimensions from shape (≈1 line)\n (n_H, n_W) = None\n \n # Compute the value to distribute on the matrix (≈1 line)\n average = None\n \n # Create a matrix where every entry is the \"average\" value (≈1 line)\n a = None\n ### END CODE HERE ###\n \n return a\n\na = distribute_value(2, (2,2))\nprint('distributed value =', a)", "Expected Output: \n<table> \n<tr> \n<td>\ndistributed_value =\n</td>\n<td>\n[[ 0.5 0.5]\n<br\\> \n[ 0.5 0.5]]\n</td>\n</tr>\n</table>\n\n5.2.3 Putting it together: Pooling backward\nYou now have everything you need to compute backward propagation on a pooling layer.\nExercise: Implement the pool_backward function in both modes (\"max\" and \"average\"). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an if/elif statement to see if the mode is equal to 'max' or 'average'. If it is equal to 'average' you should use the distribute_value() function you implemented above to create a matrix of the same shape as a_slice. 
Otherwise, the mode is equal to 'max', and you will create a mask with create_mask_from_window() and multiply it by the corresponding value of dZ.", "def pool_backward(dA, cache, mode = \"max\"):\n \"\"\"\n Implements the backward pass of the pooling layer\n \n Arguments:\n dA -- gradient of cost with respect to the output of the pooling layer, same shape as A\n cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters \n mode -- the pooling mode you would like to use, defined as a string (\"max\" or \"average\")\n \n Returns:\n dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev\n \"\"\"\n \n ### START CODE HERE ###\n \n # Retrieve information from cache (≈1 line)\n (A_prev, hparameters) = None\n \n # Retrieve hyperparameters from \"hparameters\" (≈2 lines)\n stride = None\n f = None\n \n # Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)\n m, n_H_prev, n_W_prev, n_C_prev = None\n m, n_H, n_W, n_C = None\n \n # Initialize dA_prev with zeros (≈1 line)\n dA_prev = None\n \n for i in range(None): # loop over the training examples\n \n # select training example from A_prev (≈1 line)\n a_prev = None\n \n for h in range(None): # loop on the vertical axis\n for w in range(None): # loop on the horizontal axis\n for c in range(None): # loop over the channels (depth)\n \n # Find the corners of the current \"slice\" (≈4 lines)\n vert_start = None\n vert_end = None\n horiz_start = None\n horiz_end = None\n \n # Compute the backward propagation in both modes.\n if mode == \"max\":\n \n # Use the corners and \"c\" to define the current slice from a_prev (≈1 line)\n a_prev_slice = None\n # Create the mask from a_prev_slice (≈1 line)\n mask = None\n # Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)\n dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += None\n \n elif mode == \"average\":\n \n # Get the value a from dA 
(≈1 line)\n da = None\n # Define the shape of the filter as fxf (≈1 line)\n shape = None\n # Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)\n dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += None\n \n ### END CODE ###\n \n # Making sure your output shape is correct\n assert(dA_prev.shape == A_prev.shape)\n \n return dA_prev\n\nnp.random.seed(1)\nA_prev = np.random.randn(5, 5, 3, 2)\nhparameters = {\"stride\" : 1, \"f\": 2}\nA, cache = pool_forward(A_prev, hparameters)\ndA = np.random.randn(5, 4, 2, 2)\n\ndA_prev = pool_backward(dA, cache, mode = \"max\")\nprint(\"mode = max\")\nprint('mean of dA = ', np.mean(dA))\nprint('dA_prev[1,1] = ', dA_prev[1,1]) \nprint()\ndA_prev = pool_backward(dA, cache, mode = \"average\")\nprint(\"mode = average\")\nprint('mean of dA = ', np.mean(dA))\nprint('dA_prev[1,1] = ', dA_prev[1,1]) ", "Expected Output: \nmode = max:\n<table> \n<tr> \n<td>\n\n**mean of dA =**\n</td>\n\n<td>\n\n0.145713902729\n\n </td>\n</tr>\n\n<tr> \n<td>\n**dA_prev[1,1] =** \n</td>\n<td>\n[[ 0. 0. ] <br>\n [ 5.05844394 -1.68282702] <br>\n [ 0. 0. ]]\n</td>\n</tr>\n</table>\n\nmode = average\n<table> \n<tr> \n<td>\n\n**mean of dA =**\n</td>\n\n<td>\n\n0.145713902729\n\n </td>\n</tr>\n\n<tr> \n<td>\n**dA_prev[1,1] =** \n</td>\n<td>\n[[ 0.08485462 0.2787552 ] <br>\n [ 1.26461098 -0.25749373] <br>\n [ 1.17975636 -0.53624893]]\n</td>\n</tr>\n</table>\n\nCongratulations !\nCongratulation on completing this assignment. You now understand how convolutional neural networks work. You have implemented all the building blocks of a neural network. In the next assignment you will implement a ConvNet using TensorFlow." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
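The notebook entry above describes distributing a gradient value equally over an average-pooling window but leaves the graded scaffold blank by design. As a minimal NumPy sketch (independent of the assignment's `### START CODE HERE ###` solution), the helper might look like:

```python
import numpy as np

def distribute_value(dz, shape):
    """Spread the scalar gradient dz evenly over an (n_H, n_W) window,
    mirroring the backward pass of average pooling."""
    n_H, n_W = shape
    # each position of the window contributed equally in the forward pass
    average = dz / (n_H * n_W)
    return np.full((n_H, n_W), average)

print(distribute_value(2, (2, 2)))  # a 2x2 matrix of 0.5, as in the expected output
```

This matches the notebook's expected output of `[[0.5 0.5] [0.5 0.5]]` for `distribute_value(2, (2,2))`.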
kkhenriquez/python-for-data-science
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
mit
[ "<p style=\"font-family: Arial; font-size:2.75em;color:purple; font-style:bold\"><br>\n\nClustering with scikit-learn\n\n<br><br></p>\n\nIn this notebook, we will learn how to perform k-means clustering using scikit-learn in Python. \nWe will use cluster analysis to generate a big picture model of the weather at a local station using minute-granularity data. In this dataset, we have on the order of millions of records. How do we create 12 clusters out of them?\nNOTE: The dataset we will use is in a large CSV file called minute_weather.csv. Please download it into the weather directory in your Week-7-MachineLearning folder. The download link is: https://drive.google.com/open?id=0B8iiZ7pSaSFZb3ItQ1l4LWRMTjg \n<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nImporting the Necessary Libraries<br></p>", "from sklearn.preprocessing import StandardScaler\nfrom sklearn.cluster import KMeans\nimport python_utils\nimport pandas as pd\nimport numpy as np\nfrom itertools import cycle, islice\nimport matplotlib.pyplot as plt\nfrom pandas.tools.plotting import parallel_coordinates\n\n%matplotlib inline", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nCreating a Pandas DataFrame from a CSV file<br><br></p>", "data = pd.read_csv('./weather/minute_weather.csv')", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\">Minute Weather Data Description</p>\n<br>\nThe minute weather dataset comes from the same source as the daily weather dataset that we used in the decision tree based classifier notebook. The main difference between these two datasets is that the minute weather dataset contains raw sensor measurements captured at one-minute intervals. The daily weather dataset, by contrast, contained processed and well-curated data. 
The data is in the file minute_weather.csv, which is a comma-separated file.\nAs with the daily weather data, this data comes from a weather station located in San Diego, California. The weather station is equipped with sensors that capture weather-related measurements such as air temperature, air pressure, and relative humidity. Data was collected for a period of three years, from September 2011 to September 2014, to ensure that sufficient data for different seasons and weather conditions is captured.\nEach row in minute_weather.csv contains weather data captured for a one-minute interval. Each row, or sample, consists of the following variables:\n\nrowID: unique number for each row (Unit: NA)\nhpwren_timestamp: timestamp of measurement (Unit: year-month-day hour:minute:second)\nair_pressure: air pressure measured at the timestamp (Unit: hectopascals)\nair_temp: air temperature measured at the timestamp (Unit: degrees Fahrenheit)\navg_wind_direction: wind direction averaged over the minute before the timestamp (Unit: degrees, with 0 meaning coming from the North, and increasing clockwise)\navg_wind_speed: wind speed averaged over the minute before the timestamp (Unit: meters per second)\nmax_wind_direction: highest wind direction in the minute before the timestamp (Unit: degrees, with 0 being North and increasing clockwise)\nmax_wind_speed: highest wind speed in the minute before the timestamp (Unit: meters per second)\nmin_wind_direction: smallest wind direction in the minute before the timestamp (Unit: degrees, with 0 being North and increasing clockwise)\nmin_wind_speed: smallest wind speed in the minute before the timestamp (Unit: meters per second)\nrain_accumulation: amount of accumulated rain measured at the timestamp (Unit: millimeters)\nrain_duration: length of time rain has fallen as measured at the timestamp (Unit: seconds)\nrelative_humidity: relative humidity measured at the timestamp (Unit: percent)", "data.shape\n\ndata.head()", "<p style=\"font-family: 
Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nData Sampling<br></p>\n\nLots of rows, so let us sample down by taking every 10th row. <br>", "sampled_df = data[(data['rowID'] % 10) == 0]\nsampled_df.shape", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nStatistics\n<br><br></p>", "sampled_df.describe().transpose()\n\nsampled_df[sampled_df['rain_accumulation'] == 0].shape\n\nsampled_df[sampled_df['rain_duration'] == 0].shape", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nDrop all the Rows with Empty rain_duration and rain_accumulation\n<br><br></p>", "del sampled_df['rain_accumulation']\ndel sampled_df['rain_duration']\n\nrows_before = sampled_df.shape[0]\nsampled_df = sampled_df.dropna()\nrows_after = sampled_df.shape[0]", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nHow many rows did we drop ?\n<br><br></p>", "rows_before - rows_after\n\nsampled_df.columns", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nSelect Features of Interest for Clustering\n<br><br></p>", "features = ['air_pressure', 'air_temp', 'avg_wind_direction', 'avg_wind_speed', 'max_wind_direction', \n 'max_wind_speed','relative_humidity']\n\nselect_df = sampled_df[features]\n\nselect_df.columns\n\nselect_df", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nScale the Features using StandardScaler\n<br><br></p>", "X = StandardScaler().fit_transform(select_df)\nX", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nUse k-Means Clustering\n<br><br></p>", "kmeans = KMeans(n_clusters=12)\nmodel = kmeans.fit(X)\nprint(\"model\\n\", model)", "<p style=\"font-family: Arial; font-size:1.75em;color:purple; font-style:bold\"><br>\n\nWhat are the centers of 12 clusters we formed ?\n<br><br></p>", "centers = model.cluster_centers_\ncenters", "<p 
style=\"font-family: Arial; font-size:2.75em;color:purple; font-style:bold\"><br>\n\nPlots\n<br><br></p>\n\nLet us first create some utility functions which will help us in plotting graphs:", "# Function that creates a DataFrame with a column for Cluster Number\n\ndef pd_centers(featuresUsed, centers):\n\tcolNames = list(featuresUsed)\n\tcolNames.append('prediction')\n\n\t# Zip with a column called 'prediction' (index)\n\tZ = [np.append(A, index) for index, A in enumerate(centers)]\n\n\t# Convert to pandas data frame for plotting\n\tP = pd.DataFrame(Z, columns=colNames)\n\tP['prediction'] = P['prediction'].astype(int)\n\treturn P\n\n# Function that creates Parallel Plots\n\ndef parallel_plot(data):\n\tmy_colors = list(islice(cycle(['b', 'r', 'g', 'y', 'k']), None, len(data)))\n\tplt.figure(figsize=(15,8)).gca().axes.set_ylim([-3,+3])\n\tparallel_coordinates(data, 'prediction', color = my_colors, marker='o')\n\nP = pd_centers(features, centers)\nP", "Dry Days", "parallel_plot(P[P['relative_humidity'] < -0.5])", "Warm Days", "parallel_plot(P[P['air_temp'] > 0.5])", "Cool Days", "parallel_plot(P[(P['relative_humidity'] > 0.5) & (P['air_temp'] < 0.5)])", "Type 6 is similar to 9 only windier" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
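The clustering notebook above scales the selected features with `StandardScaler` and then fits `KMeans`. On a small synthetic two-regime dataset (a stand-in for the weather features; the blob centers here are made-up values, not from the notebook), the same pattern can be sketched as:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# two synthetic "weather regimes": cool/humid vs. warm/dry (temp, humidity)
X = np.vstack([rng.normal([15.0, 80.0], 1.0, size=(50, 2)),
               rng.normal([30.0, 20.0], 1.0, size=(50, 2))])

# standardize so both features contribute comparably to the distance metric
X_scaled = StandardScaler().fit_transform(X)
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_scaled)

print(model.cluster_centers_.shape)  # one center per cluster, per feature
```

Standardizing first matters because k-means uses Euclidean distance: without it, the feature with the larger raw range (here, humidity) would dominate the clustering.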
npdoty/bigbang
examples/Single Word Trend.ipynb
agpl-3.0
[ "This notebook gives the trend of a single word in a single mailing list.", "%matplotlib inline\n\nfrom bigbang.archive import Archive\nimport bigbang.parse as parse\nimport bigbang.graph as graph\nimport bigbang.mailman as mailman\nimport bigbang.process as process\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom pprint import pprint as pp\nimport pytz\nimport numpy as np\nimport math\nimport nltk\nfrom itertools import repeat\nfrom nltk.stem.lancaster import LancasterStemmer\nst = LancasterStemmer()\nfrom nltk.corpus import stopwords\nimport re\n\nurls = [\"http://mail.scipy.org/pipermail/ipython-dev/\"]#,\n #\"http://mail.scipy.org/pipermail/ipython-user/\"],\n #\"http://mail.scipy.org/pipermail/scipy-dev/\",\n #\"http://mail.scipy.org/pipermail/scipy-user/\",\n #\"http://mail.scipy.org/pipermail/numpy-discussion/\"]\n\n\narchives= [Archive(url,archive_dir=\"../archives\") for url in urls]\n\ncheckword = \"python\" #can change words, should be lower case", "You'll need to download some resources for NLTK (the natural language toolkit) in order to do the kind of processing we want on all the mailing list text. 
In particular, for this notebook you'll need punkt, the Punkt Tokenizer Models.\nTo download, from an interactive Python shell, run:\nimport nltk\nnltk.download()\n\nAnd in the graphical UI that appears, choose \"punkt\" from the All Packages tab and Download.", "df = pd.DataFrame(columns=[\"MessageId\",\"Date\",\"From\",\"In-Reply-To\",\"Count\"])\nfor row in archives[0].data.iterrows():\n try: \n w = row[1][\"Body\"].replace(\"'\", \"\")\n k = re.sub(r'[^\\w]', ' ', w)\n k = k.lower()\n t = nltk.tokenize.word_tokenize(k)\n subdict = {}\n count = 0\n for g in t:\n try:\n word = st.stem(g)\n except:\n print g\n pass\n if word == checkword:\n count += 1\n if count == 0:\n continue\n else:\n subdict[\"MessageId\"] = row[0]\n subdict[\"Date\"] = row[1][\"Date\"]\n subdict[\"From\"] = row[1][\"From\"]\n subdict[\"In-Reply-To\"] = row[1][\"In-Reply-To\"]\n subdict[\"Count\"] = count\n df = df.append(subdict,ignore_index=True)\n except:\n if row[1][\"Body\"] is None: \n print '!!! Detected an email with an empty Body field...'\n else: print 'error'\n\ndf[:5] #dataframe of informations of the particular word.", "Group the dataframe by the month and year, and aggregate the counts for the checkword during each month to get a quick histogram of how frequently that word has been used over time.", "df.groupby([df.Date.dt.year, df.Date.dt.month]).agg({'Count':np.sum}).plot(y='Count')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
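The final cell of the word-trend notebook above groups the per-message counts by year and month and sums them before plotting. On a tiny synthetic frame (the dates and counts here are illustrative, not from the archive), the same aggregation pattern looks like:

```python
import pandas as pd

# toy stand-in for the per-message word-count DataFrame built in the notebook
df = pd.DataFrame({
    "Date": pd.to_datetime(["2014-01-03", "2014-01-20", "2014-02-05"]),
    "Count": [2, 1, 4],
})

# aggregate counts per (year, month), as the notebook's last cell does
monthly = df.groupby([df.Date.dt.year, df.Date.dt.month]).agg({"Count": "sum"})
print(monthly["Count"].tolist())  # [3, 4]: January's messages sum to 3, February's to 4
```

The grouped result is indexed by `(year, month)` pairs, which is why the notebook can plot it directly as a frequency-over-time histogram.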
ES-DOC/esdoc-jupyterhub
notebooks/ipsl/cmip6/models/sandbox-1/aerosol.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: IPSL\nSource ID: SANDBOX-1\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 70 (38 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:45\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-1', 'aerosol')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Meteorological Forcings\n5. Key Properties --&gt; Resolution\n6. Key Properties --&gt; Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --&gt; Absorption\n12. Optical Radiative Properties --&gt; Mixtures\n13. Optical Radiative Properties --&gt; Impact Of H2o\n14. Optical Radiative Properties --&gt; Radiative Scheme\n15. Optical Radiative Properties --&gt; Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of aerosol model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrognostic variables in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. 
Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of tracers in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre aerosol calculations generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. 
Key Properties --&gt; Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the aerosol model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. 
Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nThree dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Variables 2D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTwo dimensionsal forcing variables, e.g. land-sea mask definition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Frequency\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nFrequency with which meteological forcings are applied (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Resolution\nResolution in the aersosol model grid\n5.1. 
Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. 
Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Transport\nAerosol transport\n7.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of transport in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for aerosol transport modeling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n", "7.3. Mass Conservation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to ensure mass conservation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.4. 
Convention\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTransport by convention", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of emissions in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prescribed Climatology\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify the climatology type for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n", "8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Other Method Characteristics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCharacteristics of the &quot;other method&quot; used for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of concentrations in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as mass mixing ratios.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Prescribed Fields Aod Plus Ccn\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of optical and radiative properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Optical Radiative Properties --&gt; Absorption\nAbsorption properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.2. 
Dust\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Organics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12. Optical Radiative Properties --&gt; Mixtures\n**\n12.1. External\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there external mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Internal\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.3. Mixing Rule\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixing rule", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Optical Radiative Properties --&gt; Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact size?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.2. Internal Mixture\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact aerosol internal mixture?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.3. External Mixture\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact aerosol external mixture?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Optical Radiative Properties --&gt; Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. 
Shortwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of shortwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Optical Radiative Properties --&gt; Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol-cloud interactions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Twomey\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the Twomey effect included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.3. Twomey Minimum Ccn\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Drizzle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect drizzle?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.5. Cloud Lifetime\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect cloud lifetime?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the Aerosol model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n", "16.3. Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther model components coupled to the Aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.4. Gas Phase Precursors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of gas phase aerosol precursors.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.5. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.6. Bulk Scheme Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of species covered by the bulk scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
WNoxchi/Kaukasos
pytorch/transfer_learning_tutorial.ipynb
mit
[ "Transfer Learning Tutorial\nhttp://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html", "%matplotlib inline\n%reload_ext autoreload\n%autoreload 2\n\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch.optim import lr_scheduler\nfrom torch.autograd import Variable\nimport torchvision\nfrom torchvision import datasets, models, transforms\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport time\nimport os", "1. Load Data\nUsing torchvision and torch.utils.data for data loading. Training a model to classify ants and bees; 120 training images each cat. 75 val images each. data link", "# Data augmentation and normalization for training\n# Just normalization for validation\ndata_transforms = {\n 'train': transforms.Compose([\n transforms.RandomSizedCrop(224),\n transforms.RandomHorizontalFlip(),\n transforms.ToTensor(),\n transforms.Normalize([0.485,0.456,0.406],[0.229, 0.224, 0.225])\n ]),\n 'val': transforms.Compose([\n transforms.Scale(256),\n transforms.CenterCrop(224),\n transforms.ToTensor(),\n transforms.Normalize([0.485,0.456,0.406],[0.229, 0.224, 0.225])\n ]),\n}\n\ndata_dir = 'hymenoptera_data'\nimage_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),\n data_transforms[x])\n for x in ['train', 'val']}\ndataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,\n shuffle=True, num_workers=4)\n for x in ['train','val']}\ndataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}\nclass_names = image_datasets['train'].classes\n\nuse_gpu = torch.cuda.is_available()\n\ntorchvision.transforms.Scale??", "Init signature: torchvision.transforms.Scale(*args, **kwargs)\nSource: \nclass Scale(Resize):\n \"\"\"\n Note: This transform is deprecated in favor of Resize.\n \"\"\"\n def __init__(self, *args, **kwargs):\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n \"please use transforms.Resize instead.\")\n super(Scale, self).__init__(*args, 
**kwargs)", "torchvision.transforms.Resize??", "```\nInit signature: torchvision.transforms.Resize(size, interpolation=2)\nSource: \nclass Resize(object):\n \"\"\"Resize the input PIL Image to the given size.\nArgs:\n size (sequence or int): Desired output size. If size is a sequence like\n (h, w), output size will be matched to this. If size is an int,\n smaller edge of the image will be matched to this number.\n i.e, if height &gt; width, then image will be rescaled to\n (size * height / width, size)\n interpolation (int, optional): Desired interpolation. Default is\n ``PIL.Image.BILINEAR``\n\"\"\"\n\ndef __init__(self, size, interpolation=Image.BILINEAR):\n assert isinstance(size, int) or (isinstance(size, collections.Iterable) and len(size) == 2)\n self.size = size\n self.interpolation = interpolation\n\n```\n2. Visualize a few images", "plt.pause?\n\ndef imshow(inp, title=None):\n \"\"\"Imshow for Tensor\"\"\"\n inp = inp.numpy().transpose((1,2,0))\n mean = np.array([0.485, 0.456, 0.406])\n std = np.array([0.229, 0.224, 0.225])\n inp = std * inp + mean\n inp = np.clip(inp, 0, 1)\n plt.imshow(inp)\n if title is not None:\n plt.title(title)\n plt.pause(0.001) # pause a bit so that plots are updates\n\n# Get a batch of training data\ninputs, classes = next(iter(dataloaders['train']))\n\n# Make a grid from batch\nout = torchvision.utils.make_grid(inputs)\n\nimshow(out, title=[class_names[x] for x in classes])", "Huh, cool\n3. 
Training the model\n\nScheduling the learning rate\nSaving the best model\n\nParameter scheduler is an LR scheduler object from torch.optim.lr_scheduler", "def train_model(model, criterion, optimizer, scheduler, num_epochs=25):\n since = time.time()\n \n best_model_wts = model.state_dict()\n best_acc = 0.0\n \n for epoch in range(num_epochs):\n print(f'Epoch {epoch}/{num_epochs-1}')\n print('-' * 10)\n \n # Each epoch has a training and validation phase\n for phase in ['train', 'val']:\n if phase == 'train':\n scheduler.step()\n model.train(True) # Set model to training mode\n else:\n model.train(False) # Set model to evaluation mode\n \n running_loss = 0.0\n running_corrects = 0\n \n # Iterate over data.\n for data in dataloaders[phase]:\n # get the inputs\n inputs, labels = data\n \n # wrap them in Variable\n if use_gpu:\n inputs = Variable(inputs.cuda())\n labels = Variable(labels.cuda())\n else:\n inputs, labels = Variable(inputs), Variable(labels)\n \n # zero the parameter gradients\n optimizer.zero_grad()\n \n # forward\n outputs = model(inputs)\n _, preds = torch.max(outputs.data, 1)\n loss = criterion(outputs, labels)\n \n # backward + optimize only if in training phase\n if phase == 'train':\n loss.backward()\n optimizer.step()\n \n # statistics\n running_loss += loss.data[0]\n running_corrects += torch.sum(preds == labels.data)\n \n epoch_loss = running_loss / dataset_sizes[phase]\n epoch_acc = running_corrects / dataset_sizes[phase]\n \n print(f'{phase} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}')\n \n # deep copy the model ### <-- ooo this is very cool. .state_dict() & acc\n if phase == 'val' and epoch_acc > best_acc:\n best_acc = epoch_acc\n best_model_wts = model.state_dict()\n \n print()\n\n time_elapsed = time.time() - since\n print(f'Training complete in {time_elapsed//60:.0f}m {time_elapsed%60:.0f}s')\n print(f'Best val Acc: {best_acc:.4f}')\n\n # load best model weights\n model.load_state_dict(best_model_wts)\n return model", "4. 
Visualizing the model's predictions", "def visualize_model(model, num_images=6):\n images_so_far = 0\n fig = plt.figure()\n \n for i, data in enumerate(dataloaders['val']):\n inputs, labels = data\n if use_gpu:\n inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())\n else:\n inputs, labels = Variable(inputs), Variable(labels)\n \n outputs = model(inputs)\n _, preds = torch.max(outputs.data, 1)\n \n for j in range(inputs.size()[0]):\n images_so_far += 1\n ax = plt.subplot(num_images//2, 2, images_so_far)\n ax.axis('off')\n ax.set_title(f'predicted: {class_names[preds[j]]}')\n imshow(inputs.cpu().data[j])\n \n if images_so_far == num_images:\n return", "```\nVariable.cpu(self)\nSource: \n def cpu(self):\n return self.type(getattr(torch, type(self.data).__name__))\n```", "# looking at the cpu() method\ntemp = Variable(torch.FloatTensor([1,2]))\ntemp.cpu()", "5. Finetuning the ConvNet\nLoad a pretrained model and reset final fully-connected layer", "model_ft = models.resnet18(pretrained=True)\nnum_ftrs = model_ft.fc.in_features\nmodel_ft.fc = nn.Linear(num_ftrs, 2)\n\nif use_gpu:\n model_ft = model_ft.cuda()\n\ncriterion = nn.CrossEntropyLoss()\n\n# Observe that all parameters are being optimized\noptimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)\n\n# Decay LR by a factor of 0.1 every 7 epochs\nexp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)", "```\ntorch.optim.lr_scheduler.StepLR\n--> defines `get_lr(self)`:\ndef get_lr(self):\n return [base_lr * self.gamma ** (self.last_epoch // self.step_size)\n for base_lr in self.base_lrs]\n```\nso gamma is exponentiated by ( last_epoch // step_size )\n5.1 Train and Evaluate\nShould take 15-25 min on CPU; < 1 min on GPU.", "model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)\n\nvisualize_model(model_ft)", "6. ConvNet as a fixed feature extractor\nFreeze entire network except final layer. 
Need to set requires_grad = False to freeze parameters so that gradients aren't computed in backward().\nLink to Documentation", "model_conv = torchvision.models.resnet18(pretrained=True)\nfor par in model_conv.parameters():\n par.requires_grad = False\n\n# Parameters of newly constructed modules have requires_grad=True by default\nnum_ftrs = model_conv.fc.in_features\nmodel_conv.fc = nn.Linear(num_ftrs, 2)\n\nif use_gpu:\n model_conv = model_conv.cuda()\n \ncriterion = nn.CrossEntropyLoss()\n\n# Observe that only parameters of the final layer are being optimized as \n# opposed to before.\noptimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)\n\n# Decay LR by a factor of 0.1 every 7 epochs\nexp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)", "6.1 Train and evaluate\nFor CPU: will take about half the time as before. This is expected as grads don't need to be computed for most of the network -- the forward pass though, has to be computed.", "model_conv = train_model(model_conv, criterion, optimizer_conv,\n exp_lr_scheduler, num_epochs=25)\n\nvisualize_model(model_conv)\n\nplt.ioff()\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
christophmark/bayesloop
docs/source/tutorials/priordistributions.ipynb
mit
[ "Prior distributions\nOne important aspect of Bayesian inference has not yet been discussed in this tutorial: prior distributions. In Bayesian statistics, one has to provide probability (density) values for every possible parameter value before taking into account the data at hand. This prior distribution thus reflects all prior knowledge of the system that is to be investigated. In the case that no prior knowledge is available, a non-informative prior in the form of the so-called Jeffreys prior allows to minimize the effect of the prior on the results. The next two sub-sections discuss how one can set custom prior distributions for the parameters of the observation model and for hyper-parameters in a hyper-study or change-point study.", "%matplotlib inline\nimport matplotlib.pyplot as plt # plotting\nimport seaborn as sns # nicer plots\nsns.set_style('whitegrid') # plot styling\n\nimport numpy as np\nimport bayesloop as bl\n\n# prepare study for coal mining data\nS = bl.Study()\nS.loadExampleData()", "Parameter prior\nbayesloop employs a forward-backward algorithm that is based on Hidden Markov models. This inference algorithm iteratively produces a parameter distribution for each time step, but it has to start these iterations from a specified probability distribution - the parameter prior. All built-in observation models already have a predefined prior, stored in the attribute prior. Here, the prior distribution is stored as a Python function that takes as many arguments as there are parameters in the observation model. The prior distributions can be looked up directly within observationModels.py. For the Poisson model discussed in this tutorial, the default prior distribution is defined in a method called jeffreys as\ndef jeffreys(x):\n return np.sqrt(1. / x)\ncorresponding to the non-informative Jeffreys prior, $p(\\lambda) \\propto 1/\\sqrt{\\lambda}$. 
This type of prior can also be determined automatically for arbitrary user-defined observation models, see here.\nPrior functions and arrays\nTo change the predefined prior of a given observation model, one can add the keyword argument prior when defining an observation model. There are different ways of defining a parameter prior in bayesloop: If prior=None is set, bayesloop will assign equal probability to all parameter values, resulting in a uniform prior distribution within the specified parameter boundaries. One can also directly supply a Numpy array with prior probability (density) values. The shape of the array must match the shape of the parameter grid! Another way to define a custom prior is to provide a function that takes exactly as many arguments as there are parameters in the defined observation model. bayesloop will then evaluate the function for all parameter values and assign the corresponding probability values.\n<div style=\"background-color: #e7f2fa; border-left: 5px solid #6ab0de; padding: 0.5em; margin-top: 1em; margin-bottom: 1em\">\n**Note:** In all of the cases described above, *bayesloop* will re-normalize the provided prior values, so they do not need to be passed in a normalized form. 
Below, we describe the possibility of using probability distributions from the SymPy stats module as prior distributions, which are not re-normalized by *bayesloop*.\n</div>\n\nNext, we illustrate the difference between the Jeffreys prior and a flat, uniform prior with a very simple inference example: We fit the coal mining example data set using the Poisson observation model and further assume the rate parameter to be static:", "# we assume a static rate parameter for simplicity\nS.set(bl.tm.Static())\n\nprint('Fit with built-in Jeffreys prior:')\nS.set(bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000)))\nS.fit()\njeffreys_mean = S.getParameterMeanValues('accident_rate')[0]\nprint('-----\\n')\n \nprint('Fit with custom flat prior:')\nS.set(bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000), \n prior=lambda x: 1.))\n# alternatives: prior=None, prior=np.ones(1000)\nS.fit()\nflat_mean = S.getParameterMeanValues('accident_rate')[0]", "First note that the model evidence indeed slightly changes due to the different choices of the parameter prior. Second, one may notice that the posterior mean value of the flat-prior-fit does not exactly match the arithmetic mean of the data. This small deviation shows that a flat/uniform prior is not completely non-informative for a Poisson model! The fit using the Jeffreys prior, however, succeeds in reproducing the frequentist estimate, i.e. the arithmetic mean:", "print('arithmetic mean = {}'.format(np.mean(S.rawData)))\nprint('flat-prior mean = {}'.format(flat_mean))\nprint('Jeffreys prior mean = {}'.format(jeffreys_mean))", "SymPy prior\nThe second option is based on the SymPy module that introduces symbolic mathematics to Python. Its sub-module sympy.stats covers a wide range of discrete and continuous random variables. The keyword argument prior also accepts a list of sympy.stats random variables, one for each parameter (if there is only one parameter, the list can be omitted). 
The multiplicative joint probability density of these random variables is then used as the prior distribution. The following example defines an exponential prior for the Poisson model, favoring small values of the rate parameter:", "import sympy.stats\nS.set(bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000), \n prior=sympy.stats.Exponential('expon', 1)))\nS.fit()", "Note that one needs to assign a name to each sympy.stats variable. In this case, the output of bayesloop shows the mathematical formula that defines the prior. This is possible because of the symbolic representation of the prior by SymPy.\n<div style=\"background-color: #e7f2fa; border-left: 5px solid #6ab0de; padding: 0.5em; margin-top: 1em; margin-bottom: 1em\">\n**Note:** The support interval of a prior distribution defined via SymPy can deviate from the parameter interval specified in *bayesloop*. In the example above, we specified the parameter interval ]0, 6[, while the exponential prior has the support ]0, $\\infty$[. SymPy priors are not re-normalized with respect to the specified parameter interval. Be aware that the resulting model evidence value will only be correct if no parameter values outside of the parameter boundaries gain significant probability values. In most cases, one can simply check whether the parameter distribution has sufficiently *fallen off* at the parameter boundaries.\n</div>\n\nHyper-parameter priors\nAs shown before, hyper-studies and change-point studies can be used to determine the full distribution of hyper-parameters (the parameters of the transition model). As for the time-varying parameters of the observation model, one might have prior knowledge about the values of certain hyper-parameters that can be included into the study to refine the resulting distribution of these hyper-parameters. 
Hyper-parameter priors can be defined just as regular priors, either by an arbitrary function or by a list of sympy.stats random variables.\nIn a first example, we return to the simple change-point model of the coal-mining data set and perform two fits of the change-point: first, we specify no hyper-prior for the time step of our change-point, assuming equal probability for each year in our data set. Second, we define a Normal distribution around the year 1920 with a (rather unrealistic) standard deviation of 5 years as the hyper-prior using a SymPy random variable. For both fits, we plot the change-point distribution to show the differences induced by the different priors:", "print('Fit with flat hyper-prior:')\nS = bl.ChangepointStudy()\nS.loadExampleData()\n\nL = bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000))\nT = bl.tm.ChangePoint('tChange', 'all')\n\nS.set(L, T)\nS.fit()\n\nplt.figure(figsize=(8,4))\nS.plot('tChange', facecolor='g', alpha=0.7)\nplt.xlim([1870, 1930])\nplt.show()\nprint('-----\\n')\n \nprint('Fit with custom normal prior:')\nT = bl.tm.ChangePoint('tChange', 'all', prior=sympy.stats.Normal('norm', 1920, 5))\nS.set(T)\nS.fit()\n\nplt.figure(figsize=(8,4))\nS.plot('tChange', facecolor='g', alpha=0.7)\nplt.xlim([1870, 1930]);", "Since we used a quite narrow prior (containing a lot of information) in the second case, the resulting distribution is strongly shifted towards the prior. The following example revisits the two-break-point model from here, with a linear decrease whose slope is treated as a hyper-parameter. Here, we define a Gaussian prior for the slope hyper-parameter, which is centered around the value -0.2 with a standard deviation of 0.4, via a lambda-function. 
For simplification, we set the break-points to fixed years.", "S = bl.HyperStudy()\nS.loadExampleData()\n\nL = bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000))\nT = bl.tm.SerialTransitionModel(bl.tm.Static(),\n bl.tm.BreakPoint('t_1', 1880),\n bl.tm.Deterministic(lambda t, slope=np.linspace(-2.0, 0.0, 30): t*slope, \n target='accident_rate',\n prior=lambda slope: np.exp(-0.5*((slope + 0.2)/(2*0.4))**2)/0.4),\n bl.tm.BreakPoint('t_2', 1900),\n bl.tm.Static()\n )\n\nS.set(L, T)\nS.fit()", "Finally, note that you can mix SymPy- and function-based hyper-priors for nested transition models." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mbeyeler/opencv-machine-learning
notebooks/08.04-Implementing-Agglomerative-Hierarchical-Clustering.ipynb
mit
[ "<!--BOOK_INFORMATION-->\n<a href=\"https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv\" target=\"_blank\"><img align=\"left\" src=\"data/cover.jpg\" style=\"width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;\"></a>\nThis notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.\nThe code is released under the MIT license,\nand is available on GitHub.\nNote that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.\nIf you find this content useful, please consider supporting the work by\nbuying the book!\n<!--NAVIGATION-->\n< Classifying handwritten digits using k-means | Contents | 9. Using Deep Learning to Classify Handwritten Digits >\nImplementing Agglomerative Hierarchical Clustering\nAlthough OpenCV does not provide an implementation of agglomerative hierarchical\nclustering, it is a popular algorithm that should, by all means, belong to our machine\nlearning repertoire.\nWe start out by generating 10 random data points, just like in the previous figure:", "from sklearn.datasets import make_blobs\nX, y = make_blobs(n_samples=10, random_state=100)", "Using the familiar statistical modeling API, we import the AgglomerativeClustering\nalgorithm and specify the desired number of clusters:", "from sklearn import cluster\nagg = cluster.AgglomerativeClustering(n_clusters=3)", "Fitting the model to the data works, as usual, via the fit_predict method:", "labels = agg.fit_predict(X)", "We can generate a scatter plot where every data point is colored according to the predicted\nlabel:", "import matplotlib.pyplot as plt\n%matplotlib inline\nplt.style.use('ggplot')\nplt.figure(figsize=(10, 6))\nplt.scatter(X[:, 0], X[:, 1], c=labels, s=100)", "That's it! This marks the end of another wonderful adventure.\n<!--NAVIGATION-->\n< Classifying handwritten digits using k-means | Contents | 9. 
Using Deep Learning to Classify Handwritten Digits >" ]
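Since OpenCV lacks a built-in implementation, it can also be instructive to sketch the algorithm from scratch. The following is a naive single-linkage variant in plain NumPy (an illustrative O(n^3) toy, not the scikit-learn implementation used above; the sample points are made up):

```python
import numpy as np

def agglomerate(points, n_clusters):
    """Naive single-linkage agglomerative clustering (O(n^3) toy version)."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best, best_d = None, np.inf
        # Find the pair of clusters with the smallest single-linkage distance
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.linalg.norm(points[i] - points[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best_d:
                    best_d, best = d, (a, b)
        a, b = best
        # Merge cluster b into cluster a
        clusters[a].extend(clusters[b])
        del clusters[b]
    labels = np.empty(len(points), dtype=int)
    for k, members in enumerate(clusters):
        labels[members] = k
    return labels

# Two tight pairs plus one far-away outlier (made-up sample data)
points = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [10.0, 0.0]])
labels = agglomerate(points, 3)
```

On these toy points the two tight pairs merge first, leaving the outlier as its own cluster.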
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
markovmodel/adaptivemd
examples/rp/3_example_adaptive.ipynb
lgpl-2.1
[ "AdaptiveMD\nExample 3 - Running an adaptive loop", "import sys, os\n\n# stop RP from printing logs until severe\n# verbose = os.environ.get('RADICAL_PILOT_VERBOSE', 'REPORT')\nos.environ['RADICAL_PILOT_VERBOSE'] = 'ERROR'\n\nfrom adaptivemd import (\n Project,\n Event, FunctionalEvent,\n File\n)\n\n# We need this to be part of the imports. You can only restore known objects\n# Once these are imported you can load these objects.\nfrom adaptivemd.engine.openmm import OpenMMEngine\nfrom adaptivemd.analysis.pyemma import PyEMMAAnalysis", "Let's open our test project by its name. If you completed the first examples this should all work out of the box.", "project = Project('test')", "Open all connections to the MongoDB and Session so we can get started.\n\nAn interesting thing to note here is that, since we use a DB in the back, data is synced between notebooks. If you want to see how this works, just run some tasks in the last example, go back here and check on the change of the contents of the project.\n\nLet's see where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal.", "print project.files\nprint project.generators\nprint project.models", "Now restore our old ways to generate tasks by loading the previously used generators.", "engine = project.generators['openmm']\nmodeller = project.generators['pyemma']\npdb_file = project.files['initial_pdb']", "Run simulations\nNow we really start simulations. The general way to do so is to create a simulation task and then submit it to a cluster to be executed. A Task object is a general description of what should be done and boils down to staging some files to your working directory, executing a bash script and finally moving files back from your working directory to a shared storage.
RP takes care of most of this very elegantly, and hence a Task is designed to cover these capabilities in a somewhat simpler and more pythonic way.\nFor example there is an RPC Python Call Task that allows you to execute a function remotely and pull back the results. \nFunctional Events\nWe want to first look into a way to run Python code asynchronously in the project. For this, write a function that should be executed. Start with opening a scheduler or using an existing one (in the latter case you need to make sure that when it is executed - which can take a while - the scheduler still exists).\nIf the function should pause, write yield {condition_to_continue}. This will interrupt your script until the condition you yield returns True when called.", "def strategy():\n # create a new scheduler\n with project.get_scheduler(cores=2) as local_scheduler:\n for loop in range(10):\n tasks = local_scheduler(project.new_ml_trajectory(\n length=100, number=10))\n yield tasks.is_done()\n\n task = local_scheduler(modeller.execute(list(project.trajectories)))\n yield task.is_done", "To turn your function into a generator, pass strategy() and not strategy to the FunctionalEvent", "ev = FunctionalEvent(strategy())", "and execute the event inside your project", "project.add_event(ev)", "after some time you will have 10 more trajectories.
Just like that.\nLet's see how our project is growing", "import time\nfrom IPython.display import clear_output\n\ntry:\n while True:\n clear_output(wait=True)\n print '# of files %8d : %s' % (len(project.trajectories), '#' * len(project.trajectories))\n print '# of models %8d : %s' % (len(project.models), '#' * len(project.models))\n sys.stdout.flush()\n time.sleep(1)\n \nexcept KeyboardInterrupt:\n pass", "And some analysis", "trajs = project.trajectories\nq = {}\nins = {}\nfor f in trajs:\n source = f.frame if isinstance(f.frame, File) else f.frame.trajectory\n ind = 0 if isinstance(f.frame, File) else f.frame.index\n ins[source] = ins.get(source, []) + [ind]", "Event", "scheduler = project.get_scheduler(cores=2)\n\ndef strategy1():\n for loop in range(10):\n tasks = scheduler(project.new_ml_trajectory(\n length=100, number=10))\n yield tasks.is_done()\n\ndef strategy2():\n for loop in range(10):\n num = len(project.trajectories)\n task = scheduler(modeller.execute(list(project.trajectories)))\n yield task.is_done\n yield project.on_ntraj(num + 5)\n\nproject._events = []\n\nproject.add_event(FunctionalEvent(strategy1))\nproject.add_event(FunctionalEvent(strategy2))\n\nproject.close()", "Tasks\nTo actually run simulations you need to have a scheduler (maybe a better name?). This instance can execute tasks or more precise you can use it to submit tasks which will be converted to ComputeUnitDescriptions and executed on the cluster previously chosen.", "scheduler = project.get_scheduler(cores=2) # get the default scheduler using 2 cores", "Now we are good to go and can run a first simulation\nThis works by creating a Trajectory object with a filename, a length and an initial frame. 
Then the engine will take this information and create a real trajectory with exactly this name, this initial frame and the given length.\nSince this is such a common task you can also submit just a Trajectory without the need to convert it to a Task first (which the engine can also do).\nOur project can create new names automatically and so we want 4 new trajectories of length 100 and starting at the existing pdb_file we use to initialize the engine.", "trajs = project.new_trajectory(pdb_file, 100, 4)", "Let's submit and see", "scheduler.submit(trajs)", "Once the trajectories exist these objects will be saved to the database. It might be a little confusing to have objects before they exist, but this way you can actually work with these trajectories - referencing them even before they exist.\nThis would allow us to write a function that triggers when the trajectory comes into existence. But we are not doing this right now.\nWait is dangerous since it is blocking and you cannot do anything until all tasks are finished. Normally you do not need it. Especially in interactive sessions.", "scheduler.wait()", "Look at all the files our project now contains.", "print '# of files', len(project.files)", "Great! That was easy (I hope you agree). \nNext we want to run a simple analysis.", "t = modeller.execute(list(project.trajectories))\n\nscheduler(t)\n\nscheduler.wait()", "Let's look at the model we generated", "print project.models.last.data.keys()", "And pick some information", "print project.models.last.data['msm']['P']", "The next example will demonstrate how to write a full adaptive loop\nEvents\nA new concept. Tasks are great and do work for us. But so far we needed to submit tasks ourselves. In adaptive simulations we want this to happen automagically. To help with some of this, events exist.
These are basically a task_generator coupled with conditions on when to be executed.\nLet's write a little task generator (in essence a function that returns tasks)", "def task_generator():\n return [\n engine.task_run_trajectory(traj) for traj in\n project.new_ml_trajectory(100, 4)]\n\ntask_generator()", "Now create an event.", "ev = Event().on(project.on_ntraj(range(20,22,2))).do(task_generator)", ".on specifies when something should be executed. In our case when the project has a number of 20 trajectories. This is not yet the case so this event will not do anything unless we simulate more trajectories.\n.do specifies the function to be called.\nThe concept is borrowed from event-based languages like JavaScript. \nYou can build quite complex execution patterns with this. An event for example also knows when it is finished and this can be used as another trigger.", "def hello():\n print 'DONE!!!'\n return [] # todo: allow for None here\n\nfinished = Event().on(ev.on_done).do(hello)\n\nscheduler.add_event(ev)\nscheduler.add_event(finished)", "All events and tasks run in parallel or at least get submitted and queue for execution in parallel. RP takes care of the actual execution.", "print '# of files', len(project.files)", "So for now let's run more trajectories and schedule computation of models in regular intervals.", "ev1 = Event().on(project.on_ntraj(range(30, 70, 4))).do(task_generator)\nev2 = Event().on(project.on_ntraj(38)).do(lambda: modeller.execute(list(project.trajectories))).repeat().until(ev1.on_done)\nscheduler.add_event(ev1)\nscheduler.add_event(ev2)\n\nlen(project.trajectories)\n\nlen(project.models)", ".repeat means to redo the same task when the last is finished (it will just append an infinite list of conditions to keep on running).\n.until specifies a termination condition. The event will not be executed once this condition is met. Makes most sense if you use .repeat or if the trigger condition and stopping should be independent.
You might say, run 100 times unless you have a good enough model.", "print project.files", "Strategies (aka the brain)\nThe brain is just a collection of events. This makes it reusable and easy to extend.", "project.close()" ]
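The trigger/action pattern that Event, .on and .do provide can be mimicked in a few lines of plain Python. This is only a conceptual toy (the class and helper names here are made up, and it is not how adaptivemd implements events), but it shows the core idea of coupling a condition to an action:

```python
class ToyEvent:
    """Minimal condition/action pair, mimicking the .on/.do idea."""
    def __init__(self, condition, action):
        self.condition = condition  # callable returning True when ready
        self.action = action        # callable executed once, when triggered
        self.done = False

    def check(self):
        # Fire the action the first time the condition holds
        if not self.done and self.condition():
            self.action()
            self.done = True

trajectories = []

def have_n(n):
    # Trigger condition: the project holds at least n trajectories
    return lambda: len(trajectories) >= n

events = [ToyEvent(have_n(3), lambda: trajectories.append('model'))]

# A toy "simulation loop": add trajectories and poll the events
for step in range(5):
    trajectories.append('traj-%d' % step)
    for ev in events:
        ev.check()
```

Each pass over the events list polls the conditions, and the action fires exactly once, when its condition first becomes true. Event generalizes this with repetition (.repeat) and termination (.until) conditions.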
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
oscarmore2/deep-learning-study
intro-to-rnns/Anna_KaRNNa.ipynb
mit
[ "Anna KaRNNa\nIn this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\nThis network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.\n<img src=\"assets/charseq.jpeg\" width=\"500\">", "import time\nfrom collections import namedtuple\n\nimport numpy as np\nimport tensorflow as tf", "First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.", "with open('jinpingmei.txt', 'r') as f:\n text=f.read()\nvocab = sorted(set(text))\nvocab_to_int = {c: i for i, c in enumerate(vocab)}\nint_to_vocab = dict(enumerate(vocab))\nencoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)", "Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.", "text[:100]", "And we can see the characters encoded as integers.", "encoded[:100]", "Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.", "len(vocab)", "Making training mini-batches\nHere is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:\n<img src=\"assets/sequence_batching@1x.png\" width=500px>\n<br>\nWe have our text encoded as integers as one long array in encoded. 
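The character encoding is just a bidirectional lookup table; a minimal round-trip sketch on a toy string (self-contained, not using the notebook's variables) shows that no information is lost:

```python
import numpy as np

text = 'hello world'
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))

encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)

# Decoding inverts the mapping and recovers the original text exactly
decoded = ''.join(int_to_vocab[i] for i in encoded)
```

Decoding with int_to_vocab inverts vocab_to_int exactly, which is why the network can emit integers and we can still read the sampled text.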
Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.\nThe first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \\times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the number of characters per batch ($N \\times M$). Once you know the number of batches and the batch size, you can get the total number of characters to keep.\nAfter that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \\times (M * K)$ where $K$ is the number of batches.\nNow that we have this array, we can iterate through it to get our batches. The idea is each batch is an $N \\times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:\npython\ny[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]\nwhere x is the input batch and y is the target batch.\nThe way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence.
That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.", "def get_batches(arr, n_seqs, n_steps):\n '''Create a generator that returns batches of size\n n_seqs x n_steps from arr.\n \n Arguments\n ---------\n arr: Array you want to make batches from\n n_seqs: Batch size, the number of sequences per batch\n n_steps: Number of sequence steps per batch\n '''\n # Get the number of characters per batch and number of batches we can make\n characters_per_batch = n_seqs * n_steps\n n_batches = len(arr)//characters_per_batch\n \n # Keep only enough characters to make full batches\n arr = arr[:n_batches * characters_per_batch]\n \n # Reshape into n_seqs rows\n arr = arr.reshape((n_seqs, -1))\n \n for n in range(0, arr.shape[1], n_steps):\n # The features\n x = arr[:, n:n+n_steps]\n # The targets, shifted by one\n y = np.zeros_like(x)\n y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]\n yield x, y", "Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.", "batches = get_batches(encoded, 10, 50)\nx, y = next(batches)\n\nprint('x\\n', x[:10, :10])\nprint('\\ny\\n', y[:10, :10])", "If you implemented get_batches correctly, the above output should look something like \n```\nx\n [[55 63 69 22 6 76 45 5 16 35]\n [ 5 69 1 5 12 52 6 5 56 52]\n [48 29 12 61 35 35 8 64 76 78]\n [12 5 24 39 45 29 12 56 5 63]\n [ 5 29 6 5 29 78 28 5 78 29]\n [ 5 13 6 5 36 69 78 35 52 12]\n [63 76 12 5 18 52 1 76 5 58]\n [34 5 73 39 6 5 12 52 36 5]\n [ 6 5 29 78 12 79 6 61 5 59]\n [ 5 78 69 29 24 5 6 52 5 63]]\ny\n [[63 69 22 6 76 45 5 16 35 35]\n [69 1 5 12 52 6 5 56 52 29]\n [29 12 61 35 35 8 64 76 78 28]\n [ 5 24 39 45 29 12 56 5 63 29]\n [29 6 5 29 78 28 5 78 29 45]\n [13 6 5 36 69 78 35 52 12 43]\n [76 12 5 18 52 1 76 5 58 52]\n [ 5 73 39 6 5 12 52 36 5 78]\n [ 5 29 78 12 79 6 61 5 59 63]\n [78 69 29 24 5 6 52 5 63 76]]\n ```\n although the exact numbers will be different.
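The one-liner that builds the targets is easy to verify on a tiny array; this quick sanity check mirrors the y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0] line in get_batches:

```python
import numpy as np

x = np.array([[0, 1, 2, 3],
              [4, 5, 6, 7]])

# Targets are the inputs shifted left by one, with the first column wrapped around
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
```

Each row of y is the corresponding row of x shifted left by one step, with the first input character wrapped around as the last target.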
Check to make sure the data is shifted over one step for `y`.\nBuilding the model\nBelow is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.\n<img src=\"assets/charRNN.png\" width=500px>\nInputs\nFirst off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.", "def build_inputs(batch_size, num_steps):\n ''' Define placeholders for inputs, targets, and dropout \n \n Arguments\n ---------\n batch_size: Batch size, number of sequences per batch\n num_steps: Number of sequence steps in a batch\n \n '''\n # Declare placeholders we'll feed into the graph\n inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')\n targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')\n \n # Keep probability placeholder for drop out layers\n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n return inputs, targets, keep_prob", "LSTM Cell\nHere we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.\nWe first create a basic LSTM cell with\npython\nlstm = tf.contrib.rnn.BasicLSTMCell(num_units)\nwhere num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with \npython\ntf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\nYou pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell.
Previously with TensorFlow 1.0, you could do this\npython\ntf.contrib.rnn.MultiRNNCell([cell]*num_layers)\nThis might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like\n```python\ndef build_cell(num_units, keep_prob):\n lstm = tf.contrib.rnn.BasicLSTMCell(num_units)\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\nreturn drop\n\ntf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])\n```\nEven though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.\nWe also need to create an initial cell state of all zeros. This can be done like so\npython\ninitial_state = cell.zero_state(batch_size, tf.float32)\nBelow, we implement the build_lstm function to create these LSTM cells and the initial state.", "def build_lstm(lstm_size, num_layers, batch_size, keep_prob):\n ''' Build LSTM cell.\n \n Arguments\n ---------\n keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability\n lstm_size: Size of the hidden layers in the LSTM cells\n num_layers: Number of LSTM layers\n batch_size: Batch size\n\n '''\n ### Build the LSTM Cell\n \n def build_cell(lstm_size, keep_prob):\n # Use a basic LSTM cell\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n \n # Add dropout to the cell\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n return drop\n \n \n # Stack up multiple LSTM layers, for deep learning\n cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])\n initial_state = cell.zero_state(batch_size, tf.float32)\n \n return cell, initial_state", "RNN Output\nHere we'll create the output layer. 
We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.\nIf our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \\times M \\times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \\times M \\times L$.\nWe are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \\times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.\nOnce we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default.
To avoid this, we wrap the variables in a variable scope so we can give them unique names.", "def build_output(lstm_output, in_size, out_size):\n ''' Build a softmax layer, return the softmax output and logits.\n \n Arguments\n ---------\n \n lstm_output: Input tensor from the LSTM layer\n in_size: Size of the input tensor, for example, size of the LSTM cells\n out_size: Size of this softmax layer\n \n '''\n\n # Reshape output so it's a bunch of rows, one row for each step for each sequence.\n # That is, the shape should be batch_size*num_steps rows by lstm_size columns\n seq_output = tf.concat(lstm_output, axis=1)\n x = tf.reshape(seq_output, [-1, in_size])\n \n # Connect the RNN outputs to a softmax layer\n with tf.variable_scope('softmax'):\n softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))\n softmax_b = tf.Variable(tf.zeros(out_size))\n \n # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch\n # of rows of logit outputs, one for each step and sequence\n logits = tf.matmul(x, softmax_w) + softmax_b\n \n # Use softmax to get the probabilities for predicted characters\n out = tf.nn.softmax(logits, name='predictions')\n \n return out, logits", "Training loss\nNext up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \\times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units.
So our logits will also have size $(MN) \\times C$.\nThen we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.", "def build_loss(logits, targets, lstm_size, num_classes):\n ''' Calculate the loss from the logits and the targets.\n \n Arguments\n ---------\n logits: Logits from final fully connected layer\n targets: Targets for supervised learning\n lstm_size: Number of LSTM hidden units\n num_classes: Number of classes in targets\n \n '''\n \n # One-hot encode targets and reshape to match logits, one row per batch_size per step\n y_one_hot = tf.one_hot(targets, num_classes)\n y_reshaped = tf.reshape(y_one_hot, logits.get_shape())\n \n # Softmax cross entropy loss\n loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)\n loss = tf.reduce_mean(loss)\n return loss", "Optimizer\nHere we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.", "def build_optimizer(loss, learning_rate, grad_clip):\n ''' Build optimizer for training, using gradient clipping.\n \n Arguments:\n loss: Network loss\n learning_rate: Learning rate for optimizer\n \n '''\n \n # Optimizer for training, using gradient clipping to control exploding gradients\n tvars = tf.trainable_variables()\n grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)\n train_op = tf.train.AdamOptimizer(learning_rate)\n optimizer = train_op.apply_gradients(zip(grads, tvars))\n \n return optimizer", "Build the network\nNow we can put all the pieces together and build a class for the network.
To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.", "class CharRNN:\n \n def __init__(self, num_classes, batch_size=64, num_steps=50, \n lstm_size=128, num_layers=2, learning_rate=0.001, \n grad_clip=5, sampling=False):\n \n # When we're using this network for sampling later, we'll be passing in\n # one character at a time, so providing an option for that\n if sampling == True:\n batch_size, num_steps = 1, 1\n else:\n batch_size, num_steps = batch_size, num_steps\n\n tf.reset_default_graph()\n \n # Build the input placeholder tensors\n self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)\n\n # Build the LSTM cell\n cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)\n\n ### Run the data through the RNN layers\n # First, one-hot encode the input tokens\n x_one_hot = tf.one_hot(self.inputs, num_classes)\n \n # Run each sequence step through the RNN and collect the outputs\n outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)\n self.final_state = state\n \n # Get softmax predictions and logits\n self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)\n \n # Loss and optimizer (with gradient clipping)\n self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)\n self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)", "Hyperparameters\nHere I'm defining the hyperparameters for the
network. \n\nbatch_size - Number of sequences running through the network in one pass.\nnum_steps - Number of characters in the sequence the network is trained on. Larger is typically better; the network will learn more long range dependencies, but it takes longer to train. 100 is typically a good number here.\nlstm_size - The number of units in the hidden layers.\nnum_layers - Number of hidden LSTM layers to use\nlearning_rate - Learning rate for training\nkeep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.\n\nHere's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.\n\nTips and Tricks\nMonitoring Validation Loss vs. Training Loss\nIf you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:\n\nIf your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.\nIf your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)\n\nApproximate number of parameters\nThe two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:\n\nThe number of parameters in your model.
This is printed when you start training.\nThe size of your dataset. 1MB file is approximately 1 million characters.\n\nThese two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:\n\nI have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.\nI have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.\n\nBest models strategy\nThe winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.\nIt is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.\nBy the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.", "batch_size = 128 # Sequences per batch\nnum_steps = 100 # Number of sequence steps per batch\nlstm_size = 512 # Size of hidden layers in LSTMs\nnum_layers = 2 # Number of LSTM layers\nlearning_rate = 0.0003 # Learning rate\nkeep_prob = 0.5 # Dropout keep probability", "Time for training\nThis is typical training code, passing inputs and targets into the network, then running the optimizer. 
Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.\nHere I'm saving checkpoints with the format\ni{iteration number}_l{# hidden layer units}.ckpt", "epochs = 50\n# Save every N iterations\nsave_every_n = 200\n\nmodel = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,\n                lstm_size=lstm_size, num_layers=num_layers, \n                learning_rate=learning_rate)\n\nsaver = tf.train.Saver(max_to_keep=100)\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    \n    # Use the line below to load a checkpoint and resume training\n    #saver.restore(sess, 'checkpoints/______.ckpt')\n    counter = 0\n    for e in range(epochs):\n        # Train network\n        new_state = sess.run(model.initial_state)\n        loss = 0\n        for x, y in get_batches(encoded, batch_size, num_steps):\n            counter += 1\n            start = time.time()\n            feed = {model.inputs: x,\n                    model.targets: y,\n                    model.keep_prob: keep_prob,\n                    model.initial_state: new_state}\n            batch_loss, new_state, _ = sess.run([model.loss, \n                                                 model.final_state, \n                                                 model.optimizer], \n                                                 feed_dict=feed)\n            \n            end = time.time()\n            print('Epoch: {}/{}... '.format(e+1, epochs),\n                  'Training Step: {}... '.format(counter),\n                  'Training loss: {:.4f}... '.format(batch_loss),\n                  '{:.4f} sec/batch'.format((end-start)))\n        \n            if (counter % save_every_n == 0):\n                saver.save(sess, \"checkpoints/i{}_l{}.ckpt\".format(counter, lstm_size))\n    \n    saver.save(sess, \"checkpoints/i{}_l{}.ckpt\".format(counter, lstm_size))", "Saved checkpoints\nRead up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables", "tf.train.get_checkpoint_state('checkpoints')", "Sampling\nNow that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. 
We then feed that new character back in to predict the one after it, and keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\nThe network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.", "def pick_top_n(preds, vocab_size, top_n=5):\n    p = np.squeeze(preds)\n    p[np.argsort(p)[:-top_n]] = 0\n    p = p / np.sum(p)\n    c = np.random.choice(vocab_size, 1, p=p)[0]\n    return c\n\ndef sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n    samples = [c for c in prime]\n    model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)\n    saver = tf.train.Saver()\n    with tf.Session() as sess:\n        saver.restore(sess, checkpoint)\n        new_state = sess.run(model.initial_state)\n        for c in prime:\n            x = np.zeros((1, 1))\n            x[0,0] = vocab_to_int[c]\n            feed = {model.inputs: x,\n                    model.keep_prob: 1.,\n                    model.initial_state: new_state}\n            preds, new_state = sess.run([model.prediction, model.final_state], \n                                         feed_dict=feed)\n\n        c = pick_top_n(preds, len(vocab))\n        samples.append(int_to_vocab[c])\n\n        for i in range(n_samples):\n            x[0,0] = c\n            feed = {model.inputs: x,\n                    model.keep_prob: 1.,\n                    model.initial_state: new_state}\n            preds, new_state = sess.run([model.prediction, model.final_state], \n                                         feed_dict=feed)\n\n            c = pick_top_n(preds, len(vocab))\n            samples.append(int_to_vocab[c])\n        \n    return ''.join(samples)", "Here, pass in the path to a checkpoint and sample from the network.", "tf.train.latest_checkpoint('checkpoints')\n\ncheckpoint = tf.train.latest_checkpoint('checkpoints')\nsamp = sample(checkpoint, 7000, lstm_size, len(vocab), prime=\"浪\")\nprint(samp)\n\ncheckpoint = 'checkpoints/i200_l512.ckpt'\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = 'checkpoints/i600_l512.ckpt'\nsamp = sample(checkpoint, 1000, 
lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = 'checkpoints/i1200_l512.ckpt'\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
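Karpathy's "approximate number of parameters" advice above can be made concrete. The sketch below is a back-of-the-envelope formula of my own, not part of the notebook: it assumes a one-hot input of `vocab_size` characters feeding the first LSTM layer, stacked layers of `lstm_size` units, and a dense softmax projection on top, which matches the shape of the CharRNN described here (TensorFlow's reported count may differ slightly).

```python
# Rough parameter count for a stacked-LSTM char-RNN (back-of-the-envelope,
# not the exact number a framework would report).
def approx_char_rnn_params(vocab_size, lstm_size, num_layers):
    # Each LSTM layer has 4 gates; each gate sees [input, hidden] plus a bias.
    first = 4 * lstm_size * (vocab_size + lstm_size + 1)
    deeper = (num_layers - 1) * 4 * lstm_size * (2 * lstm_size + 1)
    softmax = lstm_size * vocab_size + vocab_size  # output projection
    return first + deeper + softmax

# With the settings above (lstm_size=512, num_layers=2) and a vocabulary of,
# say, 83 characters, this lands around 3.4 million parameters.
print(approx_char_rnn_params(83, 512, 2))
```

By the rule of thumb quoted above (parameters roughly matching dataset characters), a model this size pairs reasonably with a few MB of text.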
MingChen0919/learning-apache-spark
notebooks/04-miscellaneous/add-python-files-to-spark-cluster.ipynb
mit
[ "The SparkContext.addPyFiles() function can be used to add py files. We can define objects and variables in these files and make them available to the Spark cluster.\nCreate a SparkContext object", "from pyspark import SparkConf, SparkContext, SparkFiles\nfrom pyspark.sql import SparkSession\n\nsc = SparkContext(conf=SparkConf())", "Add py files", "sc.addPyFile('pyFiles/my_module.py')\n\nSparkFiles.get('my_module.py')", "Use my_module.py\nWe can import my_module as a python module", "from my_module import *\n\naddPyFiles_is_successfull()\n\nsum_two_variables(4,5)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
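For reference, a minimal `pyFiles/my_module.py` that would satisfy the calls in the notebook above could look like the sketch below. The function bodies are my guess; only the names (kept verbatim, including the spelling of `addPyFiles_is_successfull`) and the `sum_two_variables(4,5)` call are implied by the notebook.

```python
# Hypothetical contents of pyFiles/my_module.py, matching the calls
# made after SparkContext.addPyFile('pyFiles/my_module.py').
def addPyFiles_is_successfull():
    # Placeholder body: just confirm the module was shipped to the cluster.
    return 'addPyFiles() worked: my_module is importable on the workers.'

def sum_two_variables(a, b):
    return a + b

print(sum_two_variables(4, 5))  # prints 9
```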
European-XFEL/h5tools-py
docs/Demo.ipynb
bsd-3-clause
[ "Reading data with karabo_data\nThis command creates the sample data files used in the rest of this example. These files contain no real data, but they have the same structure as European XFEL's HDF5 data files.", "!python3 -m karabo_data.tests.make_examples", "Single files", "!h5ls fxe_control_example.h5\n\nfrom karabo_data import H5File\nf = H5File('fxe_control_example.h5')\n\nf.control_sources\n\nf.instrument_sources", "Get data by train", "for tid, data in f.trains():\n print(\"Processing train\", tid)\n print(\"beam iyPos:\", data['SA1_XTD2_XGM/DOOCS/MAIN']['beamPosition.iyPos.value'])\n \n break\n\ntid, data = f.train_from_id(10005)\ndata['FXE_XAD_GEC/CAM/CAMERA:daqOutput']['data.image.dims']", "These are just a few of the ways to access data. The attributes and methods described below for run directories also work with individual files. We expect that it will normally make sense to access a run directory as a single object, rather than working with the files separately.\nRun directories\nAn experimental run is recorded as a collection of files in a directory.\nAnother dummy example:", "!ls fxe_example_run/\n\nfrom karabo_data import RunDirectory\nrun = RunDirectory('fxe_example_run/')\n\nrun.files[:3] # The objects for the individual files (see above)", "What devices were recording in this run?\nControl devices are slow data, recording once per train. Instrument devices includes detector data, but also some other data sources such as cameras. 
They can have more than one reading per train.", "run.control_sources\n\nrun.instrument_sources", "Which trains are in this run?", "print(run.train_ids[:10])", "See the available keys for a given source:", "run.keys_for_source('SPB_XTD9_XGM/DOOCS/MAIN:output')", "This collects data from across files, including detector data:", "for tid, data in run.trains():\n    print(\"Processing train\", tid)\n    print(\"Detector data module 0 shape:\", data['FXE_DET_LPD1M-1/DET/0CH0:xtdf']['image.data'].shape)\n\n    break  # Stop after the first train to keep the demo short", "Train IDs are meant to be globally unique (although there were some glitches with this in the past). A train index is only meaningful within this run.", "tid, data = run.train_from_id(10005)\ntid, data = run.train_from_index(5)", "Series data to pandas\nData which holds a single number per train (or per pulse) can be extracted as series (individual columns) and dataframes (tables) for pandas, a widely-used tool for data manipulation.\nkarabo_data chains sequence files, which contain successive data from the same source. In this example, trains 10000–10399 are in one sequence file (...DA01-S00000.h5), and 10400–10479 are in another (...DA01-S00001.h5). 
They are concatenated into one series:", "ixPos = run.get_series('SA1_XTD2_XGM/DOOCS/MAIN', 'beamPosition.ixPos.value')\nixPos.tail(10)", "To extract a dataframe, you can select interesting data fields with glob syntax, as often used for selecting files on Unix platforms.\n\n[abc]: one character, a/b/c\n?: any one character\n*: any sequence of characters", "run.get_dataframe(fields=[(\"*_XGM/*\", \"*.i[xy]Pos\")])", "Labelled arrays\nData with extra dimensions can be handled as xarray labelled arrays.\nThese are a wrapper around Numpy arrays with indexes which can be used to align them and select data.", "xtd2_intensity = run.get_array('SA1_XTD2_XGM/DOOCS/MAIN:output', 'data.intensityTD', extra_dims=['pulseID'])\nxtd2_intensity", "Here's a brief example of using xarray to align the data and select by train ID. See the examples in the xarray docs for more on what it can do.\nIn this example data, all the data sources have the same range of train IDs, so aligning them doesn't change anything. 
In real data, devices may miss some trains that other devices did record.", "import xarray as xr\nxtd9_intensity = run.get_array('SPB_XTD9_XGM/DOOCS/MAIN:output', 'data.intensityTD', extra_dims=['pulseID'])\n\n# Align two arrays, keep only trains which they both have data for:\nxtd2_intensity, xtd9_intensity = xr.align(xtd2_intensity, xtd9_intensity, join='inner')\n\n# Select data for a single train by train ID:\nxtd2_intensity.sel(trainId=10004)\n\n# Select data from a range of train IDs.\n# This includes the end value, unlike normal Python indexing\nxtd2_intensity.loc[10004:10006]", "You can also specify a region of interest from an array to load only part of the data:", "from karabo_data import by_index\n\n# Select the first 5 trains in this run:\nsel = run.select_trains(by_index[:5])\n\n# Get the whole of this array:\narr = sel.get_array('FXE_XAD_GEC/CAM/CAMERA:daqOutput', 'data.image.pixels')\nprint(\"Whole array shape:\", arr.shape)\n\n# Get a region of interest\narr2 = sel.get_array('FXE_XAD_GEC/CAM/CAMERA:daqOutput', 'data.image.pixels', roi=by_index[100:200, :512])\nprint(\"ROI array shape:\", arr2.shape)", "General information\nkarabo_data provides a few ways to get general information about what's in data files. First, from Python code:", "run.info()\n\nrun.detector_info('FXE_DET_LPD1M-1/DET/0CH0:xtdf')", "The lsxfel command provides similar information at the command line:", "!lsxfel fxe_example_run/RAW-R0450-LPD00-S00000.h5\n\n!lsxfel fxe_example_run/RAW-R0450-DA01-S00000.h5\n\n!lsxfel fxe_example_run" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
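The inner-join behaviour of `xr.align(..., join='inner')` used in the notebook above can be illustrated without xarray. This plain-Python sketch uses hypothetical train-ID/value pairs (not data from the example files) and keeps only the train IDs present in both sources, which is exactly what happens when one device misses trains that another recorded.

```python
# Inner-join alignment on train ID, mimicking xr.align(..., join='inner'):
# only trains recorded by both devices survive.
def align_inner(a, b):
    common = sorted(set(a) & set(b))
    return ({tid: a[tid] for tid in common},
            {tid: b[tid] for tid in common})

xtd2 = {10000: 1.2, 10001: 0.9, 10003: 1.1}   # this device missed train 10002
xtd9 = {10001: 0.8, 10002: 1.0, 10003: 1.3}   # this device missed train 10000
left, right = align_inner(xtd2, xtd9)
print(sorted(left))  # [10001, 10003]
```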
tensorflow/docs-l10n
site/en-snapshot/hub/tutorials/tf2_object_detection.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Hub Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "#@title Copyright 2020 The TensorFlow Hub Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================", "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/hub/tutorials/tf2_object_detection\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_object_detection.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_object_detection.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/tf2_object_detection.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n <td>\n <a href=\"https://tfhub.dev/tensorflow/collections/object_detection/1\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" />See TF Hub models</a>\n </td>\n</table>\n\nTensorFlow Hub Object Detection 
Colab\nWelcome to the TensorFlow Hub Object Detection Colab! This notebook will take you through the steps of running an \"out-of-the-box\" object detection model on images.\nMore models\nThis collection contains TF2 object detection models that have been trained on the COCO 2017 dataset. Here you can find all object detection models that are currently hosted on tfhub.dev.\nImports and Setup\nLet's start with the base imports.", "# This Colab requires TF 2.5.\n!pip install -U \"tensorflow>=2.5\"\n\nimport os\nimport pathlib\n\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nimport io\nimport scipy.misc\nimport numpy as np\nfrom six import BytesIO\nfrom PIL import Image, ImageDraw, ImageFont\nfrom six.moves.urllib.request import urlopen\n\nimport tensorflow as tf\nimport tensorflow_hub as hub\n\ntf.get_logger().setLevel('ERROR')", "Utilities\nRun the following cell to create some utils that will be needed later:\n\nHelper method to load an image\nMap of Model Name to TF Hub handle\nList of tuples with Human Keypoints for the COCO 2017 dataset. 
This is needed for models with keypoints.", "# @title Run this!!\n\ndef load_image_into_numpy_array(path):\n \"\"\"Load an image from file into a numpy array.\n\n Puts image into numpy array to feed into tensorflow graph.\n Note that by convention we put it into a numpy array with shape\n (height, width, channels), where channels=3 for RGB.\n\n Args:\n path: the file path to the image\n\n Returns:\n uint8 numpy array with shape (img_height, img_width, 3)\n \"\"\"\n image = None\n if(path.startswith('http')):\n response = urlopen(path)\n image_data = response.read()\n image_data = BytesIO(image_data)\n image = Image.open(image_data)\n else:\n image_data = tf.io.gfile.GFile(path, 'rb').read()\n image = Image.open(BytesIO(image_data))\n\n (im_width, im_height) = image.size\n return np.array(image.getdata()).reshape(\n (1, im_height, im_width, 3)).astype(np.uint8)\n\n\nALL_MODELS = {\n'CenterNet HourGlass104 512x512' : 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512/1',\n'CenterNet HourGlass104 Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512_kpts/1',\n'CenterNet HourGlass104 1024x1024' : 'https://tfhub.dev/tensorflow/centernet/hourglass_1024x1024/1',\n'CenterNet HourGlass104 Keypoints 1024x1024' : 'https://tfhub.dev/tensorflow/centernet/hourglass_1024x1024_kpts/1',\n'CenterNet Resnet50 V1 FPN 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v1_fpn_512x512/1',\n'CenterNet Resnet50 V1 FPN Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v1_fpn_512x512_kpts/1',\n'CenterNet Resnet101 V1 FPN 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet101v1_fpn_512x512/1',\n'CenterNet Resnet50 V2 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v2_512x512/1',\n'CenterNet Resnet50 V2 Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v2_512x512_kpts/1',\n'EfficientDet D0 512x512' : 'https://tfhub.dev/tensorflow/efficientdet/d0/1',\n'EfficientDet D1 640x640' : 
'https://tfhub.dev/tensorflow/efficientdet/d1/1',\n'EfficientDet D2 768x768' : 'https://tfhub.dev/tensorflow/efficientdet/d2/1',\n'EfficientDet D3 896x896' : 'https://tfhub.dev/tensorflow/efficientdet/d3/1',\n'EfficientDet D4 1024x1024' : 'https://tfhub.dev/tensorflow/efficientdet/d4/1',\n'EfficientDet D5 1280x1280' : 'https://tfhub.dev/tensorflow/efficientdet/d5/1',\n'EfficientDet D6 1280x1280' : 'https://tfhub.dev/tensorflow/efficientdet/d6/1',\n'EfficientDet D7 1536x1536' : 'https://tfhub.dev/tensorflow/efficientdet/d7/1',\n'SSD MobileNet v2 320x320' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2',\n'SSD MobileNet V1 FPN 640x640' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v1/fpn_640x640/1',\n'SSD MobileNet V2 FPNLite 320x320' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_320x320/1',\n'SSD MobileNet V2 FPNLite 640x640' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_640x640/1',\n'SSD ResNet50 V1 FPN 640x640 (RetinaNet50)' : 'https://tfhub.dev/tensorflow/retinanet/resnet50_v1_fpn_640x640/1',\n'SSD ResNet50 V1 FPN 1024x1024 (RetinaNet50)' : 'https://tfhub.dev/tensorflow/retinanet/resnet50_v1_fpn_1024x1024/1',\n'SSD ResNet101 V1 FPN 640x640 (RetinaNet101)' : 'https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_640x640/1',\n'SSD ResNet101 V1 FPN 1024x1024 (RetinaNet101)' : 'https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_1024x1024/1',\n'SSD ResNet152 V1 FPN 640x640 (RetinaNet152)' : 'https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_640x640/1',\n'SSD ResNet152 V1 FPN 1024x1024 (RetinaNet152)' : 'https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_1024x1024/1',\n'Faster R-CNN ResNet50 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_640x640/1',\n'Faster R-CNN ResNet50 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_1024x1024/1',\n'Faster R-CNN ResNet50 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_800x1333/1',\n'Faster R-CNN ResNet101 V1 640x640' : 
'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_640x640/1',\n'Faster R-CNN ResNet101 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_1024x1024/1',\n'Faster R-CNN ResNet101 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_800x1333/1',\n'Faster R-CNN ResNet152 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_640x640/1',\n'Faster R-CNN ResNet152 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_1024x1024/1',\n'Faster R-CNN ResNet152 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_800x1333/1',\n'Faster R-CNN Inception ResNet V2 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_640x640/1',\n'Faster R-CNN Inception ResNet V2 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_1024x1024/1',\n'Mask R-CNN Inception ResNet V2 1024x1024' : 'https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1'\n}\n\nIMAGES_FOR_TEST = {\n 'Beach' : 'models/research/object_detection/test_images/image2.jpg',\n 'Dogs' : 'models/research/object_detection/test_images/image1.jpg',\n # By Heiko Gorski, Source: https://commons.wikimedia.org/wiki/File:Naxos_Taverna.jpg\n 'Naxos Taverna' : 'https://upload.wikimedia.org/wikipedia/commons/6/60/Naxos_Taverna.jpg',\n # Source: https://commons.wikimedia.org/wiki/File:The_Coleoptera_of_the_British_islands_(Plate_125)_(8592917784).jpg\n 'Beatles' : 'https://upload.wikimedia.org/wikipedia/commons/1/1b/The_Coleoptera_of_the_British_islands_%28Plate_125%29_%288592917784%29.jpg',\n # By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg\n 'Phones' : 'https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg',\n # Source: 
https://commons.wikimedia.org/wiki/File:The_smaller_British_birds_(8053836633).jpg\n  'Birds' : 'https://upload.wikimedia.org/wikipedia/commons/0/09/The_smaller_British_birds_%288053836633%29.jpg',\n}\n\nCOCO17_HUMAN_POSE_KEYPOINTS = [(0, 1),\n (0, 2),\n (1, 3),\n (2, 4),\n (0, 5),\n (0, 6),\n (5, 7),\n (7, 9),\n (6, 8),\n (8, 10),\n (5, 6),\n (5, 11),\n (6, 12),\n (11, 12),\n (11, 13),\n (13, 15),\n (12, 14),\n (14, 16)]", "Visualization tools\nTo visualize the images with the proper detected boxes, keypoints and segmentation, we will use the TensorFlow Object Detection API. To install it we will clone the repo.", "# Clone the tensorflow models repository\n!git clone --depth 1 https://github.com/tensorflow/models", "Installing the Object Detection API", "%%bash\nsudo apt install -y protobuf-compiler\ncd models/research/\nprotoc object_detection/protos/*.proto --python_out=.\ncp object_detection/packages/tf2/setup.py .\npython -m pip install .\n", "Now we can import the dependencies we will need later", "from object_detection.utils import label_map_util\nfrom object_detection.utils import visualization_utils as viz_utils\nfrom object_detection.utils import ops as utils_ops\n\n%matplotlib inline", "Load label map data (for plotting).\nLabel maps map index numbers to category names, so that when our convolutional network predicts 5, we know that this corresponds to airplane. 
Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.\nFor simplicity, we will load it from the same repository from which we loaded the Object Detection API code", "PATH_TO_LABELS = './models/research/object_detection/data/mscoco_label_map.pbtxt'\ncategory_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)", "Build a detection model and load pre-trained model weights\nHere we will choose which Object Detection model we will use.\nSelect the architecture and it will be loaded automatically.\nIf you want to change the model to try other architectures later, just change the next cell and execute the following ones.\nTip: if you want to read more details about the selected model, you can follow the link (model handle) and read additional documentation on TF Hub. After you select a model, we will print the handle to make it easier.", "#@title Model Selection { display-mode: \"form\", run: \"auto\" }\nmodel_display_name = 'CenterNet HourGlass104 Keypoints 512x512' # @param ['CenterNet HourGlass104 512x512','CenterNet HourGlass104 Keypoints 512x512','CenterNet HourGlass104 1024x1024','CenterNet HourGlass104 Keypoints 1024x1024','CenterNet Resnet50 V1 FPN 512x512','CenterNet Resnet50 V1 FPN Keypoints 512x512','CenterNet Resnet101 V1 FPN 512x512','CenterNet Resnet50 V2 512x512','CenterNet Resnet50 V2 Keypoints 512x512','EfficientDet D0 512x512','EfficientDet D1 640x640','EfficientDet D2 768x768','EfficientDet D3 896x896','EfficientDet D4 1024x1024','EfficientDet D5 1280x1280','EfficientDet D6 1280x1280','EfficientDet D7 1536x1536','SSD MobileNet v2 320x320','SSD MobileNet V1 FPN 640x640','SSD MobileNet V2 FPNLite 320x320','SSD MobileNet V2 FPNLite 640x640','SSD ResNet50 V1 FPN 640x640 (RetinaNet50)','SSD ResNet50 V1 FPN 1024x1024 (RetinaNet50)','SSD ResNet101 V1 FPN 640x640 (RetinaNet101)','SSD ResNet101 V1 FPN 1024x1024 (RetinaNet101)','SSD ResNet152 
V1 FPN 640x640 (RetinaNet152)','SSD ResNet152 V1 FPN 1024x1024 (RetinaNet152)','Faster R-CNN ResNet50 V1 640x640','Faster R-CNN ResNet50 V1 1024x1024','Faster R-CNN ResNet50 V1 800x1333','Faster R-CNN ResNet101 V1 640x640','Faster R-CNN ResNet101 V1 1024x1024','Faster R-CNN ResNet101 V1 800x1333','Faster R-CNN ResNet152 V1 640x640','Faster R-CNN ResNet152 V1 1024x1024','Faster R-CNN ResNet152 V1 800x1333','Faster R-CNN Inception ResNet V2 640x640','Faster R-CNN Inception ResNet V2 1024x1024','Mask R-CNN Inception ResNet V2 1024x1024']\nmodel_handle = ALL_MODELS[model_display_name]\n\nprint('Selected model:'+ model_display_name)\nprint('Model Handle at TensorFlow Hub: {}'.format(model_handle))", "Loading the selected model from TensorFlow Hub\nHere we just need the model handle that was selected and use the TensorFlow Hub library to load it into memory.", "print('loading model...')\nhub_model = hub.load(model_handle)\nprint('model loaded!')", "Loading an image\nLet's try the model on a simple image. To help with this, we provide a list of test images.\nHere are some simple things to try out if you are curious:\n* Try running inference on your own images, just upload them to Colab and load the same way it's done in the cell below.\n* Modify some of the input images and see if detection still works. Some simple things to try out here include flipping the image horizontally, or converting to grayscale (note that we still expect the input image to have 3 channels).\nBe careful: when using images with an alpha channel, the model expects 3-channel images, and the alpha channel will count as a 4th.", "#@title Image Selection (don't forget to execute the cell!) 
{ display-mode: \"form\"}\nselected_image = 'Beach' # @param ['Beach', 'Dogs', 'Naxos Taverna', 'Beatles', 'Phones', 'Birds']\nflip_image_horizontally = False #@param {type:\"boolean\"}\nconvert_image_to_grayscale = False #@param {type:\"boolean\"}\n\nimage_path = IMAGES_FOR_TEST[selected_image]\nimage_np = load_image_into_numpy_array(image_path)\n\n# Flip horizontally\nif(flip_image_horizontally):\n image_np[0] = np.fliplr(image_np[0]).copy()\n\n# Convert image to grayscale\nif(convert_image_to_grayscale):\n image_np[0] = np.tile(\n np.mean(image_np[0], 2, keepdims=True), (1, 1, 3)).astype(np.uint8)\n\nplt.figure(figsize=(24,32))\nplt.imshow(image_np[0])\nplt.show()", "Doing the inference\nTo do the inference we just need to call our TF Hub loaded model.\nThings you can try:\n* Print out result['detection_boxes'] and try to match the box locations to the boxes in the image. Notice that coordinates are given in normalized form (i.e., in the interval [0, 1]).\n* inspect other output keys present in the result. 
Full documentation can be seen on the model's documentation page (point your browser to the model handle printed earlier)", "# running inference\nresults = hub_model(image_np)\n\n# different object detection models have additional results\n# all of them are explained in the documentation\nresult = {key:value.numpy() for key,value in results.items()}\nprint(result.keys())", "Visualizing the results\nHere is where we will need the TensorFlow Object Detection API to show the boxes from the inference step (and the keypoints when available).\nThe full documentation of this method can be seen here\nHere you can, for example, set min_score_thresh to other values (between 0 and 1) to allow more detections in or to filter out more detections.", "label_id_offset = 0\nimage_np_with_detections = image_np.copy()\n\n# Use keypoints if available in detections\nkeypoints, keypoint_scores = None, None\nif 'detection_keypoints' in result:\n  keypoints = result['detection_keypoints'][0]\n  keypoint_scores = result['detection_keypoint_scores'][0]\n\nviz_utils.visualize_boxes_and_labels_on_image_array(\n      image_np_with_detections[0],\n      result['detection_boxes'][0],\n      (result['detection_classes'][0] + label_id_offset).astype(int),\n      result['detection_scores'][0],\n      category_index,\n      use_normalized_coordinates=True,\n      max_boxes_to_draw=200,\n      min_score_thresh=.30,\n      agnostic_mode=False,\n      keypoints=keypoints,\n      keypoint_scores=keypoint_scores,\n      keypoint_edges=COCO17_HUMAN_POSE_KEYPOINTS)\n\nplt.figure(figsize=(24,32))\nplt.imshow(image_np_with_detections[0])\nplt.show()", "[Optional]\nAmong the available object detection models there's Mask R-CNN and the output of this model allows instance segmentation.\nTo visualize it we will use the same method we did before but adding an additional parameter: instance_masks=output_dict.get('detection_masks_reframed', None)", "# Handle models with masks:\nimage_np_with_mask = image_np.copy()\n\nif 'detection_masks' in result:\n  # we need to 
convert np.arrays to tensors\n detection_masks = tf.convert_to_tensor(result['detection_masks'][0])\n detection_boxes = tf.convert_to_tensor(result['detection_boxes'][0])\n\n # Reframe the bbox mask to the image size.\n detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(\n detection_masks, detection_boxes,\n image_np.shape[1], image_np.shape[2])\n detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,\n tf.uint8)\n result['detection_masks_reframed'] = detection_masks_reframed.numpy()\n\nviz_utils.visualize_boxes_and_labels_on_image_array(\n image_np_with_mask[0],\n result['detection_boxes'][0],\n (result['detection_classes'][0] + label_id_offset).astype(int),\n result['detection_scores'][0],\n category_index,\n use_normalized_coordinates=True,\n max_boxes_to_draw=200,\n min_score_thresh=.30,\n agnostic_mode=False,\n instance_masks=result.get('detection_masks_reframed', None),\n line_thickness=8)\n\nplt.figure(figsize=(24,32))\nplt.imshow(image_np_with_mask[0])\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
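The `min_score_thresh` filtering that `visualize_boxes_and_labels_on_image_array` applies in the notebook above can be sketched in plain Python. This is a simplified stand-in of my own, not the API's actual implementation, using made-up boxes and scores.

```python
# Keep only detections whose confidence reaches the threshold, as the
# visualization call above does with min_score_thresh=.30.
def filter_detections(boxes, classes, scores, min_score_thresh=0.30):
    keep = [i for i, s in enumerate(scores) if s >= min_score_thresh]
    return ([boxes[i] for i in keep],
            [classes[i] for i in keep],
            [scores[i] for i in keep])

boxes = [[0.1, 0.1, 0.5, 0.5], [0.2, 0.2, 0.9, 0.9], [0.0, 0.0, 0.3, 0.3]]
classes = [1, 18, 3]        # e.g. person, dog, car in the COCO label map
scores = [0.95, 0.40, 0.05]
_, kept_classes, kept_scores = filter_detections(boxes, classes, scores)
print(kept_classes)  # [1, 18] - the 0.05 detection is dropped
```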
RoebideBruijn/datascience-intensive-course
exercises/statistics project 2/sliderule_dsi_inferential_statistics_exercise_2.ipynb
mit
[ "Examining racial discrimination in the US job market\nBackground\nRacial discrimination continues to be pervasive in cultures throughout the world. Researchers examined the level of racial discrimination in the United States labor market by randomly assigning identical résumés black-sounding or white-sounding names and observing the impact on requests for interviews from employers.\nData\nIn the dataset provided, each row represents a resume. The 'race' column has two values, 'b' and 'w', indicating black-sounding and white-sounding. The column 'call' has two values, 1 and 0, indicating whether the resume received a call from employers or not.\nNote that the 'b' and 'w' values in race are assigned randomly to the resumes.\nExercise\nYou will perform a statistical analysis to establish whether race has a significant impact on the rate of callbacks for resumes.\nAnswer the following questions in this notebook below and submit to your Github account. \n\nWhat test is appropriate for this problem? 
Does CLT apply?\nWhat are the null and alternate hypotheses?\nCompute margin of error, confidence interval, and p-value.\nDiscuss statistical significance.\n\nYou can include written notes in notebook cells using Markdown: \n - In the control panel at the top, choose Cell > Cell Type > Markdown\n - Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet\nResources\n\nExperiment information and data source: http://www.povertyactionlab.org/evaluation/discrimination-job-market-united-states\nScipy statistical methods: http://docs.scipy.org/doc/scipy/reference/stats.html \nMarkdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet", "%matplotlib inline \n\nimport pandas as pd\nimport numpy as np\nfrom scipy import stats\nimport seaborn as sns\nfrom matplotlib import pyplot as plt\n\nsns.set_style('white')\n\ndata = pd.io.stata.read_stata('data/us_job_market_discrimination.dta')\n\n# number of callbacks for black-sounding names\nprint(sum(data[data.race=='b'].call))\n# number of callbacks for white-sounding names\nprint(sum(data[data.race=='w'].call))\n# difference\nsum(data[data.race=='w'].call) - sum(data[data.race=='b'].call)\n\nsns.countplot(data.race)\nplt.show()\nsns.countplot(data.call)\nplt.show()\nprint(sum(data.race == 'w'))\nprint(sum(data.race == 'b'))\n\n# 1. A permutation test to see whether the difference can be based on coincidence. \n# No, CLT does not apply, there are only 2 values, not multiple values from which you extract a mean and std.\n# We can use the permuted distribution which will be normally distributed and CLT will apply there. (more than 30 samples)\n# On the other hand we could see it as a proportion of callbacks for two populations with n=2435 and k=#calls\n# In that way CLT does apply. n>30, hence assume normal distribution. So do Z-test.\n# Can't find a package that made a Z-test, hence I'll be using the T-test instead (gives similar results with many samples)\n\n# 2. 
H0: Race has no effect on callback. H1: Race has an effect on callback.\n# The question is whether race has a significant impact, not whether being black has a significant impact,\n# hence the test is two-sided.", "Permutation", "from numpy.random import permutation\n\ndef permutate(X):\n new_array = permutation(X)\n return sum(new_array[0:2435]) - sum(new_array[2435::]) # calculate difference between first group and second\n\ndifference = [] \nfor i in range(0,100000):\n difference.append(permutate(data.call))\n\n# Confidence interval 95%, \n# our result is very much outside the confidence interval of the difference between two groups\nprint(np.percentile(difference, [2.5, 97.5]))\nsns.distplot(difference) # permuted data, normally distributed (CLT applies on this)\n\n# Margin of error with Z-table\n# Critical value is 1.96 in the Z-statistic for 95% (more than 30 samples and not skewed, hence normally distributed)\nprint(1.96 * np.std(difference)) # hence our value is outside the margin of error\n# margin of error is the difference between the border of the confidence interval and the mean,\n# which is 0 in this case, hence the margin of error calculated that way is 38.\nnp.percentile(difference, [2.5, 97.5])[1] - np.mean(difference)\n\ndiff = sum(data[data.race=='w'].call) - sum(data[data.race=='b'].call)\ntimes = sum(difference > diff) + sum(difference < -diff) # times the difference is bigger than the found difference\nprint(times)\nprint(times / 100000) # p-value, hence clearly significant\n\n# All measurements lead to the conclusion that it's very unlikely that our value would come from the permuted distribution.\n# Therefore race is concluded to have an effect on callback.", "T-test", "nw = sum(data.race == 'w')\nnb = sum(data.race == 'b')\nkw = sum(data[data.race=='w'].call)\nkb = sum(data[data.race=='b'].call)\npw = kw/nw\npb = kb/nb\npw - pb # difference in means\n\n# You should actually use the Z-test, since it's normally distributed and over 30 
samples, \n# but T-test gives similar results.\nfrom scipy.stats import ttest_ind\n\nttest_ind(data[data.race=='w'].call, data[data.race=='b'].call) # p-value clearly significant\n\n# 95% confidence interval\nprint((pw - pb) - 1.96 * np.sqrt(((pw*(1-pw))/nw) + ((pb*(1-pb))/nb))) # lower limit\nprint((pw - pb) + 1.96 * np.sqrt(((pw*(1-pw))/nw) + ((pb*(1-pb))/nb))) # upper limit\n# No difference: 0, lies outside the confidence interval, hence race seems to have an effect\n\n# margin of error\n(pw - pb) + 1.96 * np.sqrt(((pw*(1-pw))/nw) + ((pb*(1-pb))/nb)) - (pw - pb)\n# Our mean is 0.03, hence outside the margin of error, if the true mean would be 0.\n\n# Also from this calculation it's clear that it's very likely that race has an effect on callback." ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
alexmojaki/flower
docs/api.ipynb
bsd-3-clause
[ "flower REST API\nThis document shows how to use the flower REST API. \nWe will use requests for accessing the API. (See here on how to install it.) \nCode\nWe'll use the following code throughout the documentation.\ntasks.py", "from celery import Celery\nfrom time import sleep\n\ncelery = Celery()\ncelery.config_from_object({\n 'BROKER_URL': 'amqp://localhost',\n 'CELERY_RESULT_BACKEND': 'amqp://',\n 'CELERYD_POOL_RESTARTS': True, # Required for /worker/pool/restart API\n})\n\n\n@celery.task\ndef add(x, y):\n return x + y\n\n\n@celery.task\ndef sub(x, y):\n sleep(30) # Simulate work\n return x - y", "Running\nYou'll need a celery worker instance and a flower instance running. In one terminal window run\ncelery worker --loglevel INFO -A proj -E --autoscale 10,3\n\nand in another terminal run\ncelery flower -A proj\n\nTasks API\nThe tasks API is async, meaning calls will return immediately and you'll need to poll on task status.", "# Done once for the whole docs\nimport requests, json\napi_root = 'http://localhost:5555/api'\ntask_api = '{}/task'.format(api_root)", "async-apply", "args = {'args': [1, 2]}\nurl = '{}/async-apply/tasks.add'.format(task_api)\nprint(url)\nresp = requests.post(url, data=json.dumps(args))\nreply = resp.json()\nreply", "We can see that we created a new task and it's pending. Note that the API is async, meaning it won't wait until the task finishes.\napply\nTo create a task and wait for its results, you can use the 'apply' API.", "args = {'args': [1, 2]}\nurl = '{}/apply/tasks.add'.format(task_api)\nprint(url)\nresp = requests.post(url, data=json.dumps(args))\nreply = resp.json()\nreply", "result\nGets the task result. 
This is async and will return immediately even if the task didn't finish (with state 'PENDING')", "url = '{}/result/{}'.format(task_api, reply['task-id'])\nprint(url)\nresp = requests.get(url)\nresp.json()", "revoke\nRevoke a running task.", "# Run a task\nargs = {'args': [1, 2]}\nresp = requests.post('{}/async-apply/tasks.sub'.format(task_api), data=json.dumps(args))\nreply = resp.json()\n\n# Now revoke it\nurl = '{}/revoke/{}'.format(task_api, reply['task-id'])\nprint(url)\nresp = requests.post(url, data='terminate=True')\nresp.json()", "rate-limit\nUpdate the rate limit for a task.", "worker = 'miki-manjaro' # You'll need to get the worker name from the worker API (see below)\nurl = '{}/rate-limit/{}'.format(task_api, worker)\nprint(url)\nresp = requests.post(url, params={'taskname': 'tasks.add', 'ratelimit': '10'})\nresp.json()", "timeout\nSet the timeout (both hard and soft) for a task.", "url = '{}/timeout/{}'.format(task_api, worker)\nprint(url)\nresp = requests.post(url, params={'taskname': 'tasks.add', 'hard': '3.14', 'soft': '3'}) # You can omit soft or hard\nresp.json()", "Worker API", "# Once for the documentation\nworker_api = '{}/worker'.format(api_root)", "workers\nList workers.", "url = '{}/workers'.format(api_root) # Only one not under /worker\nprint(url)\nresp = requests.get(url)\nworkers = resp.json()\nworkers", "pool/shutdown\nShut down a worker.", "worker = list(workers.keys())[0]\nurl = '{}/shutdown/{}'.format(worker_api, worker)\nprint(url)\nresp = requests.post(url)\nresp.json()", "pool/restart\nRestart a worker pool (you need to have CELERYD_POOL_RESTARTS enabled in your configuration).", "pool_api = '{}/pool'.format(worker_api)\nurl = '{}/restart/{}'.format(pool_api, worker)\nprint(url)\nresp = requests.post(url)\nresp.json()", "pool/grow\nGrow the worker pool.", "url = '{}/grow/{}'.format(pool_api, worker)\nprint(url)\nresp = requests.post(url, params={'n': '10'})\nresp.json()", "pool/shrink\nShrink the worker pool.", "url = '{}/shrink/{}'.format(pool_api, 
worker)\nprint(url)\nresp = requests.post(url, params={'n': '3'})\nresp.json()", "pool/autoscale\nAutoscale a pool.", "url = '{}/autoscale/{}'.format(pool_api, worker)\nprint(url)\nresp = requests.post(url, params={'min': '3', 'max': '10'})\nresp.json()", "queue/add-consumer\nAdd a consumer to a queue.", "queue_api = '{}/queue'.format(worker_api)\nurl = '{}/add-consumer/{}'.format(queue_api, worker)\nprint(url)\nresp = requests.post(url, params={'queue': 'jokes'})\nresp.json()", "queue/cancel-consumer\nCancel a queue consumer.", "url = '{}/cancel-consumer/{}'.format(queue_api, worker)\nprint(url)\nresp = requests.post(url, params={'queue': 'jokes'})\nresp.json()", "Queue API\nWe assume that we have two queues: the default one, 'celery', and 'all'.", "url = '{}/queues/length'.format(api_root)\nprint(url)\nresp = requests.get(url)\nresp.json()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
vallis/libstempo
demo/libstempo-toasim-demo.ipynb
mit
[ "libstempo tutorial: simulating residuals with toasim\nMichele Vallisneri, vallis@vallis.org, 2014/10/31\nThis notebook demonstrates the libstempo module toasim, which allows the simple simulation of various kinds of noise.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nfrom __future__ import print_function\nimport sys\n\nimport numpy as N\nimport libstempo as T\nimport libstempo.plot as LP, libstempo.toasim as LT\n\nT.data = T.__path__[0] + '/data/' # example files\n\nprint(\"Python version :\",sys.version.split()[0])\nprint(\"libstempo version:\",T.__version__)\nprint(\"Tempo2 version :\",T.libstempo.tempo2version())", "We open up a NANOGrav par/tim file combination with libstempo, and plot the residuals.", "psr = T.tempopulsar(parfile = T.data + 'B1953+29_NANOGrav_dfg+12.par',\n timfile = T.data + 'B1953+29_NANOGrav_dfg+12.tim')\nLP.plotres(psr)", "We now remove the computed residuals from the TOAs, obtaining (in effect) a perfect realization of the deterministic timing model. The pulsar parameters will have changed somewhat, so make_ideal calls fit() on the pulsar object.", "LT.make_ideal(psr)\nLP.plotres(psr)", "We now add a single line of noise at $10^{6.5}$ Hz, with an amplitude of 10 us. We also put back radiometer noise, with rms amplitude equal to 1x the nominal TOA errors.\nAll the noise-generating commands take an optional argument seed that will reseed the numpy pseudorandom-number generator, so you are able to reproduce the same instance of noise. 
However, if you issue several noise-generating commands in sequence, you should use different seeds.", "#LT.add_line(psr,f=10**6.5,A=1e-5)\nLT.add_efac(psr,efac=1.0,seed=1234)\nLP.plotres(psr)", "We could also add EQUAD quadrature noise (with add_equad) or its coarse-grained version (with add_jitter), but instead we prefer some red noise of \"GW-like\" amplitude $10^{-12}$ and spectral slope $\gamma = -3$.", "LT.add_rednoise(psr,1e-12,3)\nLP.plotres(psr)", "Or, we may add a GW background as simulated by the tempo2 GWbkgrd plugin (see the docstring below).", "LT.add_gwb(psr,flow=1e-8,gwAmp=5e-12)\nLP.plotres(psr)\n\nhelp(LT.add_gwb)\n\nLT.createGWB([psr],Amp=5e-15,gam=13./3.)\nLP.plotres(psr)", "Refitting will remove some of the power.", "psr.fit()\nLP.plotres(psr)", "All done! We can save the resulting par and tim file, and analyze them with a favorite pipeline.", "psr.savepar('B1953+29-simulate.par')\npsr.savetim('B1953+29-simulate.tim')", "Note that currently the tim file that is output by tempo2 has a spurious \"MODE 1\" line that tempo2 does not like upon reloading. To erase it, you can do", "T.purgetim('B1953+29-simulate.tim')", "And if we reload the files we get back the same thing...", "psr2 = T.tempopulsar(parfile = 'B1953+29-simulate.par',\n timfile = 'B1953+29-simulate.tim')\nLP.plotres(psr2)", "It's also possible to obtain a perfect realization of the timing model described in a par file without a tim file, by specifying a new set of observation times (in MJD) and errors (in us). The observation frequency, observatory, and flags can also be specified (see the docstring below).", "psr = LT.fakepulsar(parfile=T.data+'B1953+29_NANOGrav_dfg+12.par',\n obstimes=N.arange(53000,54800,30)+N.random.randn(60), # observe every 30+-1 days\n toaerr=0.1)\n\nLT.add_efac(psr,efac=1.0,seed=1234)\nLP.plotres(psr)\n\nhelp(LT.fakepulsar)", "Rather than generating fake TOAs you might want to calculate a pulsar's phase at a particular set of times. 
Using the tempopulsar object you can input an arbitrary set of observation times and use the residuals to get the pulsar's relative phase. For example:", "# create a set of times (in MJD)\nobstimes = N.arange(53000, 54800, 10, dtype=N.float128)\ntoaerr = 1e-3 # set the (probably arbitrary) errors in the times (us)\nobservatory = \"ao\" # the observatory\nobsfreq = 1440.0 # the observation frequency (MHz)\n\npsr = T.tempopulsar(\n parfile=\"B1953+29-simulate.par\",\n toas=obstimes,\n toaerrs=toaerr,\n observatory=observatory,\n obsfreq=obsfreq,\n dofit=False,\n)\n\n# get the phases in cycles (mod 1) referenced to the initial observation time\nphases = psr.phaseresiduals(removemean=False)", "The observation times can be input as an array of astropy Time objects. The TOA error values, observatory values, and observation frequencies, can also be arrays of the same length as array of observation times.\nIf you want to extract phases referenced to a particular epoch, observatory and frequency, you can use the refphs argument to the residuals (or phaseresiduals) method of the tempopulsar object. For example, to reference the phase to an epoch of 52973 at the solar system barycentre you could use:", "phaseref = psr.phaseresiduals(removemean=\"refphs\", epoch=52973.0, site=\"@\")", "Note: this can also be set by using a parameter file containing the line REFPHS TZR and having that values TZRMJD, TZRSITE and TZRFREQ set." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ALEXKIRNAS/DataScience
CS231n/assignment2/ConvolutionalNetworks.ipynb
mit
[ "Convolutional Networks\nSo far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.\nFirst you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.", "# As usual, a bit of setup\nfrom __future__ import print_function\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.cnn import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient\nfrom cs231n.layers import *\nfrom cs231n.fast_layers import *\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.items():\n print('%s: ' % k, v.shape)", "Convolution: Naive forward pass\nThe core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive. 
\nYou don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.\nYou can test your implementation by running the following:", "x_shape = (2, 3, 4, 4)\nw_shape = (3, 3, 4, 4)\nx = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)\nw = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)\nb = np.linspace(-0.1, 0.2, num=3)\n\nconv_param = {'stride': 2, 'pad': 1}\nout, _ = conv_forward_naive(x, w, b, conv_param)\ncorrect_out = np.array([[[[-0.08759809, -0.10987781],\n [-0.18387192, -0.2109216 ]],\n [[ 0.21027089, 0.21661097],\n [ 0.22847626, 0.23004637]],\n [[ 0.50813986, 0.54309974],\n [ 0.64082444, 0.67101435]]],\n [[[-0.98053589, -1.03143541],\n [-1.19128892, -1.24695841]],\n [[ 0.69108355, 0.66880383],\n [ 0.59480972, 0.56776003]],\n [[ 2.36270298, 2.36904306],\n [ 2.38090835, 2.38247847]]]])\n\n# Compare your output to ours; difference should be around 2e-8\nprint('Testing conv_forward_naive')\nprint('difference: ', rel_error(out, correct_out))", "Aside: Image processing via convolutions\nAs a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. 
We can then visualize the results as a sanity check.", "from scipy.misc import imread, imresize\n\nkitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')\n# kitten is wide, and puppy is already square\nd = kitten.shape[1] - kitten.shape[0]\nkitten_cropped = kitten[:, d//2:-d//2, :]\n\nimg_size = 200 # Make this smaller if it runs too slow\nx = np.zeros((2, 3, img_size, img_size))\nx[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))\nx[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))\n\n# Set up a convolutional weights holding 2 filters, each 3x3\nw = np.zeros((2, 3, 3, 3))\n\n# The first filter converts the image to grayscale.\n# Set up the red, green, and blue channels of the filter.\nw[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]\nw[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]\nw[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]\n\n# Second filter detects horizontal edges in the blue channel.\nw[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]\n\n# Vector of biases. 
We don't need any bias for the grayscale\n# filter, but for the edge detection filter we want to add 128\n# to each output so that nothing is negative.\nb = np.array([0, 128])\n\n# Compute the result of convolving each input in x with each filter in w,\n# offsetting by b, and storing the results in out.\nout, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})\n\ndef imshow_noax(img, normalize=True):\n \"\"\" Tiny helper to show images as uint8 and remove axis labels \"\"\"\n if normalize:\n img_max, img_min = np.max(img), np.min(img)\n img = 255.0 * (img - img_min) / (img_max - img_min)\n plt.imshow(img.astype('uint8'))\n plt.gca().axis('off')\n\n# Show the original images and the results of the conv operation\nplt.subplot(2, 3, 1)\nimshow_noax(puppy, normalize=False)\nplt.title('Original image')\nplt.subplot(2, 3, 2)\nimshow_noax(out[0, 0])\nplt.title('Grayscale')\nplt.subplot(2, 3, 3)\nimshow_noax(out[0, 1])\nplt.title('Edges')\nplt.subplot(2, 3, 4)\nimshow_noax(kitten_cropped, normalize=False)\nplt.subplot(2, 3, 5)\nimshow_noax(out[1, 0])\nplt.subplot(2, 3, 6)\nimshow_noax(out[1, 1])\nplt.show()", "Convolution: Naive backward pass\nImplement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. 
Again, you don't need to worry too much about computational efficiency.\nWhen you are done, run the following to check your backward pass with a numeric gradient check.", "np.random.seed(231)\nx = np.random.randn(4, 3, 5, 5)\nw = np.random.randn(2, 3, 3, 3)\nb = np.random.randn(2,)\ndout = np.random.randn(4, 2, 5, 5)\nconv_param = {'stride': 1, 'pad': 1}\n\ndx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)\n\nout, cache = conv_forward_naive(x, w, b, conv_param)\ndx, dw, db = conv_backward_naive(dout, cache)\n\n# Your errors should be around 1e-8'\nprint('Testing conv_backward_naive function')\nprint('dx error: ', rel_error(dx, dx_num))\nprint('dw error: ', rel_error(dw, dw_num))\nprint('db error: ', rel_error(db, db_num))", "Max pooling: Naive forward\nImplement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency.\nCheck your implementation by running the following:", "x_shape = (2, 3, 4, 4)\nx = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)\npool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}\n\nout, _ = max_pool_forward_naive(x, pool_param)\n\ncorrect_out = np.array([[[[-0.26315789, -0.24842105],\n [-0.20421053, -0.18947368]],\n [[-0.14526316, -0.13052632],\n [-0.08631579, -0.07157895]],\n [[-0.02736842, -0.01263158],\n [ 0.03157895, 0.04631579]]],\n [[[ 0.09052632, 0.10526316],\n [ 0.14947368, 0.16421053]],\n [[ 0.20842105, 0.22315789],\n [ 0.26736842, 0.28210526]],\n [[ 0.32631579, 0.34105263],\n [ 0.38526316, 0.4 ]]]])\n\n# Compare your output with ours. 
Difference should be around 1e-8.\nprint('Testing max_pool_forward_naive function:')\nprint('difference: ', rel_error(out, correct_out))", "Max pooling: Naive backward\nImplement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.\nCheck your implementation with numeric gradient checking by running the following:", "np.random.seed(231)\nx = np.random.randn(3, 2, 8, 8)\ndout = np.random.randn(3, 2, 4, 4)\npool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n\ndx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)\n\nout, cache = max_pool_forward_naive(x, pool_param)\ndx = max_pool_backward_naive(dout, cache)\n\n# Your error should be around 1e-12\nprint('Testing max_pool_backward_naive function:')\nprint('dx error: ', rel_error(dx, dx_num))", "Fast layers\nMaking convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.\nThe fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:\nbash\npython setup.py build_ext --inplace\nThe API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.\nNOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. 
If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.\nYou can compare the performance of the naive and fast versions of these layers by running the following:", "from cs231n.fast_layers import conv_forward_fast, conv_backward_fast\nfrom time import time\nnp.random.seed(231)\nx = np.random.randn(100, 3, 31, 31)\nw = np.random.randn(25, 3, 3, 3)\nb = np.random.randn(25,)\ndout = np.random.randn(100, 25, 16, 16)\nconv_param = {'stride': 2, 'pad': 1}\n\nt0 = time()\nout_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)\nt1 = time()\nout_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)\nt2 = time()\n\nprint('Testing conv_forward_fast:')\nprint('Naive: %fs' % (t1 - t0))\nprint('Fast: %fs' % (t2 - t1))\nprint('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('Difference: ', rel_error(out_naive, out_fast))\n\nt0 = time()\ndx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)\nt1 = time()\ndx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)\nt2 = time()\n\nprint('\\nTesting conv_backward_fast:')\nprint('Naive: %fs' % (t1 - t0))\nprint('Fast: %fs' % (t2 - t1))\nprint('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('dx difference: ', rel_error(dx_naive, dx_fast))\nprint('dw difference: ', rel_error(dw_naive, dw_fast))\nprint('db difference: ', rel_error(db_naive, db_fast))\n\nfrom cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast\nnp.random.seed(231)\nx = np.random.randn(100, 3, 32, 32)\ndout = np.random.randn(100, 3, 16, 16)\npool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n\nt0 = time()\nout_naive, cache_naive = max_pool_forward_naive(x, pool_param)\nt1 = time()\nout_fast, cache_fast = max_pool_forward_fast(x, pool_param)\nt2 = time()\n\nprint('Testing pool_forward_fast:')\nprint('Naive: %fs' % (t1 - t0))\nprint('fast: %fs' % (t2 - t1))\nprint('speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('difference: ', 
rel_error(out_naive, out_fast))\n\nt0 = time()\ndx_naive = max_pool_backward_naive(dout, cache_naive)\nt1 = time()\ndx_fast = max_pool_backward_fast(dout, cache_fast)\nt2 = time()\n\nprint('\\nTesting pool_backward_fast:')\nprint('Naive: %fs' % (t1 - t0))\nprint('speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('dx difference: ', rel_error(dx_naive, dx_fast))", "Convolutional \"sandwich\" layers\nPreviously we introduced the concept of \"sandwich\" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.", "from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward\nnp.random.seed(231)\nx = np.random.randn(2, 3, 16, 16)\nw = np.random.randn(3, 3, 3, 3)\nb = np.random.randn(3,)\ndout = np.random.randn(2, 3, 8, 8)\nconv_param = {'stride': 1, 'pad': 1}\npool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n\nout, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)\ndx, dw, db = conv_relu_pool_backward(dout, cache)\n\ndx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)\n\nprint('Testing conv_relu_pool')\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dw error: ', rel_error(dw_num, dw))\nprint('db error: ', rel_error(db_num, db))\n\nfrom cs231n.layer_utils import conv_relu_forward, conv_relu_backward\nnp.random.seed(231)\nx = np.random.randn(2, 3, 8, 8)\nw = np.random.randn(3, 3, 3, 3)\nb = np.random.randn(3,)\ndout = np.random.randn(2, 3, 8, 8)\nconv_param = {'stride': 1, 'pad': 1}\n\nout, cache = conv_relu_forward(x, w, b, conv_param)\ndx, dw, db = conv_relu_backward(dout, 
cache)\n\ndx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)\n\nprint('Testing conv_relu:')\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dw error: ', rel_error(dw_num, dw))\nprint('db error: ', rel_error(db_num, db))", "Three-layer ConvNet\nNow that you have implemented all the necessary layers, we can put them together into a simple convolutional network.\nOpen the file cs231n/classifiers/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:\nSanity check loss\nAfter you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.", "model = ThreeLayerConvNet()\n\nN = 50\nX = np.random.randn(N, 3, 32, 32)\ny = np.random.randint(10, size=N)\n\nloss, grads = model.loss(X, y)\nprint('Initial loss (no regularization): ', loss)\n\nmodel.reg = 0.5\nloss, grads = model.loss(X, y)\nprint('Initial loss (with regularization): ', loss)", "Gradient check\nAfter the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer. 
Note: correct implementations may still have relative errors up to 1e-2.", "num_inputs = 2\ninput_dim = (3, 16, 16)\nreg = 0.0\nnum_classes = 10\nnp.random.seed(231)\nX = np.random.randn(num_inputs, *input_dim)\ny = np.random.randint(num_classes, size=num_inputs)\n\nmodel = ThreeLayerConvNet(num_filters=3, filter_size=3,\n input_dim=input_dim, hidden_dim=7,\n dtype=np.float64)\nloss, grads = model.loss(X, y)\nfor param_name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)\n e = rel_error(param_grad_num, grads[param_name])\n print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))", "Overfit small data\nA nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.", "np.random.seed(231)\n\nnum_train = 100\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nmodel = ThreeLayerConvNet(weight_scale=1e-2)\n\nsolver = Solver(model, small_data,\n num_epochs=15, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=1)\nsolver.train()", "Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:", "plt.subplot(2, 1, 1)\nplt.plot(solver.loss_history, 'o')\nplt.xlabel('iteration')\nplt.ylabel('loss')\n\nplt.subplot(2, 1, 2)\nplt.plot(solver.train_acc_history, '-o')\nplt.plot(solver.val_acc_history, '-o')\nplt.legend(['train', 'val'], loc='upper left')\nplt.xlabel('epoch')\nplt.ylabel('accuracy')\nplt.show()", "Train the net\nBy training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:", "model = ThreeLayerConvNet(weight_scale=0.001, 
hidden_dim=500, reg=0.001)\n\nsolver = Solver(model, data,\n num_epochs=1, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=20)\nsolver.train()", "Visualize Filters\nYou can visualize the first-layer convolutional filters from the trained network by running the following:", "from cs231n.vis_utils import visualize_grid\n\ngrid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))\nplt.imshow(grid.astype('uint8'))\nplt.axis('off')\nplt.gcf().set_size_inches(5, 5)\nplt.show()", "Spatial Batch Normalization\nWe already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called \"spatial batch normalization.\"\nNormally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.\nIf the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.\nSpatial batch normalization: forward\nIn the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. 
Check your implementation by running the following:", "np.random.seed(231)\n# Check the training-time forward pass by checking means and variances\n# of features both before and after spatial batch normalization\n\nN, C, H, W = 2, 3, 4, 5\nx = 4 * np.random.randn(N, C, H, W) + 10\n\nprint('Before spatial batch normalization:')\nprint(' Shape: ', x.shape)\nprint(' Means: ', x.mean(axis=(0, 2, 3)))\nprint(' Stds: ', x.std(axis=(0, 2, 3)))\n\n# Means should be close to zero and stds close to one\ngamma, beta = np.ones(C), np.zeros(C)\nbn_param = {'mode': 'train'}\nout, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)\nprint('After spatial batch normalization:')\nprint(' Shape: ', out.shape)\nprint(' Means: ', out.mean(axis=(0, 2, 3)))\nprint(' Stds: ', out.std(axis=(0, 2, 3)))\n\n# Means should be close to beta and stds close to gamma\ngamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])\nout, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)\nprint('After spatial batch normalization (nontrivial gamma, beta):')\nprint(' Shape: ', out.shape)\nprint(' Means: ', out.mean(axis=(0, 2, 3)))\nprint(' Stds: ', out.std(axis=(0, 2, 3)))\n\nnp.random.seed(231)\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\nN, C, H, W = 10, 4, 11, 12\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(C)\nbeta = np.zeros(C)\nfor t in range(50):\n x = 2.3 * np.random.randn(N, C, H, W) + 13\n spatial_batchnorm_forward(x, gamma, beta, bn_param)\nbn_param['mode'] = 'test'\nx = 2.3 * np.random.randn(N, C, H, W) + 13\na_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After spatial batch normalization (test-time):')\nprint(' means: ', a_norm.mean(axis=(0, 2, 3)))\nprint(' stds: ', 
a_norm.std(axis=(0, 2, 3)))", "Spatial batch normalization: backward\nIn the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:", "np.random.seed(231)\nN, C, H, W = 2, 3, 4, 5\nx = 5 * np.random.randn(N, C, H, W) + 12\ngamma = np.random.randn(C)\nbeta = np.random.randn(C)\ndout = np.random.randn(N, C, H, W)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: spatial_batchnorm_forward(x, a, beta, bn_param)[0]\nfb = lambda b: spatial_batchnorm_forward(x, gamma, b, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma, dout)\ndb_num = eval_numerical_gradient_array(fb, beta, dout)\n\n_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))", "Extra Credit Description\nIf you implement any additional features for extra credit, clearly describe them here with pointers to any code in this or other files if applicable." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
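The gradient check above assumes a working `spatial_batchnorm_forward` in `cs231n/layers.py`. As a standalone reference, here is a minimal NumPy sketch of the training-mode forward pass using the usual reshape trick; `spatial_batchnorm_forward_sketch` is a hypothetical name (the real assignment function also maintains running averages and returns a cache for the backward pass).

```python
import numpy as np

def spatial_batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    """Training-mode spatial batch norm via the reshape trick.

    Normalizes each of the C channels over the (N, H, W) axes, then
    applies the per-channel scale/shift gamma, beta.  A sketch only:
    the course's version also tracks running mean/var and a cache.
    """
    N, C, H, W = x.shape
    # Move channels last and flatten so each row is one spatial location:
    # shape (N*H*W, C), the same layout vanilla batchnorm expects.
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)
    mu = x_flat.mean(axis=0)
    var = x_flat.var(axis=0)
    x_hat = (x_flat - mu) / np.sqrt(var + eps)
    out_flat = gamma * x_hat + beta
    # Restore the original (N, C, H, W) layout.
    return out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)

np.random.seed(231)
x = 4 * np.random.randn(2, 3, 4, 5) + 10
out = spatial_batchnorm_forward_sketch(x, np.ones(3), np.zeros(3))
print(out.mean(axis=(0, 2, 3)))  # close to zero per channel
print(out.std(axis=(0, 2, 3)))   # close to one per channel
```

Because the normalization statistics are shared across the (N, H, W) axes, reusing the fully-connected batchnorm logic on the flattened array gives exactly the per-channel means and variances the check above verifies.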
TESScience/httm
test/notebooks/tutorial.ipynb
gpl-3.0
[ "Tutorial\nThis tutorial demonstrates basic usage of httm.\nGetting Started\nImporting matplotlib\nTo start, we will import matplotlib and increase the figure size so we can reasonably see artifacts in the various FITS images we will be looking at.", "%matplotlib inline\n%config InlineBackend.figure_format = 'png'\n\nimport matplotlib\nmatplotlib.rcParams['figure.figsize'] = (8, 8)", "Viewing a RAW FITS File\nAssume you have a file: \nfits_data/raw_fits/single_ccd.fits\n\n...containing an unmodified FITS full frame image.\nTo get started, open this file and extract an httm.data_structures.raw_converter.SingleCCDRawConverter object.\nThis is done by calling httm.fits_utilities.raw_fits.raw_converter_from_fits.", "import httm\n\nfrom httm.fits_utilities.raw_fits import raw_converter_from_fits\n\nraw_data = raw_converter_from_fits('fits_data/raw_fits/single_ccd.fits')", "Each raw image contains the data for a single CCD. It holds 4 slices if it was taken by the instrument, and either 1 or 4 if it was created synthetically.\nBelow, we visualize the first slice of the image.", "matplotlib.pyplot.imshow(raw_data.slices[0].pixels)\nmatplotlib.pyplot.gca().invert_yaxis()", "Viewing an Electron Flux FITS Image", "from httm.fits_utilities.electron_flux_fits import electron_flux_converter_from_fits\n\nelectron_flux_data = electron_flux_converter_from_fits('fits_data/electron_flux_fits/small_simulated_data.fits')\n\nmatplotlib.pyplot.imshow(electron_flux_data.slices[0].pixels)\nmatplotlib.pyplot.gca().invert_yaxis()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
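The tutorial above accesses pixel data through `raw_data.slices[i].pixels`. For readers without httm or the sample FITS files installed, the following is a self-contained sketch of that slice layout using hypothetical stand-in classes; `Slice`, `RawImage`, and `fake_raw_image` are illustrative only and are not part of httm's actual API.

```python
from collections import namedtuple
import numpy as np

# Minimal stand-ins for the raw-converter data structures described above
# (the real SingleCCDRawConverter also carries FITS metadata and
# conversion parameters).
Slice = namedtuple('Slice', ['index', 'pixels'])
RawImage = namedtuple('RawImage', ['slices'])

def fake_raw_image(n_slices=4, height=10, width=11):
    """Build a synthetic single-CCD image split into slices, mimicking
    the 4 slices an instrument-taken image contains."""
    rng = np.random.RandomState(0)
    return RawImage(slices=tuple(
        Slice(index=i, pixels=rng.poisson(100.0, size=(height, width)))
        for i in range(n_slices)))

raw = fake_raw_image()
print(len(raw.slices))            # 4 slices, as for an instrument image
print(raw.slices[0].pixels.shape) # 2-D pixel array, ready for imshow
```

With an object shaped like this, the tutorial's plotting cells (`imshow(raw.slices[0].pixels)` followed by inverting the y-axis) work unchanged.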
hich28/mytesttxx
tests/python/formulas.ipynb
gpl-3.0
[ "Handling LTL and PSL formulas", "import spot", "For interactive use, formulas can be entered as text strings and passed to the spot.formula constructor.", "f = spot.formula('p1 U p2 R (p3 & !p4)')\nf\n\ng = spot.formula('{a;b*;c[+]}<>->GFb'); g", "By default the parser recognizes an infix syntax, but when this fails, it tries to read the formula with the LBT syntax:", "h = spot.formula('& | a b c'); h", "By default, a formula object is presented using MathJax as above.\nWhen a formula is converted to string you get Spot's syntax by default:", "str(f)", "If you prefer to print the string in another syntax, you may use the to_str() method, with an argument that indicates the output format to use. The latex format assumes that you will define macros such as \\U, \\R to render all operators as you wish. On the other hand, the sclatex (with sc for self-contained) format hard-codes the rendering of each of those operators: this is typically the output that is used to render formulas using MathJax in a notebook.", "for i in ['spot', 'spin', 'lbt', 'wring', 'utf8', 'latex', 'sclatex']:\n print(\"%-10s%s\" % (i, f.to_str(i)))", "Formulas output via format() can also use some convenient shorthand to select the syntax:", "print(\"\"\"\\\nSpin: {0:s}\nSpin+parentheses: {0:sp}\nSpot (default): {0}\nSpot+shell quotes: {0:q}\nLBT, right aligned: {0:l:~>40}\nLBT, no M/W/R: {0:[MWR]l}\"\"\".format(f))", "The specifiers that can be used with format are documented as follows:", "help(spot.formula.__format__)", "A spot.formula object has a number of built-in predicates whose values have been computed when the formula was constructed. For instance you can check whether a formula is in negative normal form using is_in_nenoform(), and you can make sure it is an LTL formula (i.e. 
not a PSL formula) using is_ltl_formula():", "f.is_in_nenoform() and f.is_ltl_formula()\n\ng.is_ltl_formula()", "Similarly, is_syntactic_stutter_invariant() tells whether the structure of the formula guarantees it to be stutter invariant. For LTL formulas, this means the X operator should not be used. For PSL formulas, this function captures all formulas built using the siPSL grammar.", "f.is_syntactic_stutter_invariant()\n\nspot.formula('{a[*];b}<>->c').is_syntactic_stutter_invariant()\n\nspot.formula('{a[+];b[*]}<>->d').is_syntactic_stutter_invariant()", "spot.relabel renames the atomic propositions that occur in a formula, using either letters, or numbered propositions:", "gf = spot.formula('(GF_foo_) && \"a > b\" && \"proc[2]@init\"'); gf\n\nspot.relabel(gf, spot.Abc)\n\nspot.relabel(gf, spot.Pnn)", "The AST of any formula can be displayed with show_ast(). Despite the name, this is not a tree but a DAG, because identical subtrees are merged. Binary operators have their left and right operands denoted with L and R, while non-commutative n-ary operators have their operands numbered.", "print(g); g.show_ast()", "Any formula can also be classified in the temporal hierarchy of Manna & Pnueli", "g.show_mp_hierarchy()\n\nspot.mp_class(g, 'v')\n\nf = spot.formula('F(a & X(!a & b))'); f", "Etessami's rule for removing X (valid only in stutter-invariant formulas)", "spot.remove_x(f)", "Removing abbreviated operators", "f = spot.formula(\"G(a xor b) -> F(a <-> b)\")\nspot.unabbreviate(f, \"GF^\")\n\nspot.unabbreviate(f, \"GF^ei\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NeuroDataDesign/pan-synapse
pipeline_1/background/connectLib_revised.md.ipynb
apache-2.0
[ "connectLib Pipeline\nIntroduction\nThe connectLib Pipeline filters out the background noise of an n-dimensional image and then segments the resulting image into groups of data type Cluster presented in list form. The pipeline uses Otsu's Binarization to filter out the background noise. Next, Connected Components clusters the remaining foreground. We then remove outlier clusters using the Interquartile Range Rule. These outliers result from the filtered background, which gets labeled as one large cluster. Since we know this background cluster is large, we just need to threshold our clusters so that the upper outlier volumes get removed. The final step is to coregister our clusters with the raw image. This is a consequence of the PLOS Pipeline (see PLOS_Pipeline_Revised.md), which degrades the original clusters.\nSimulation Data\nEasy Simulation:\nOur simulated data will be a 100x100x100 volume with a voxel intensity distribution approximately the same as that of the true image volumes (i.e., 98% noise, 2% synapse). The synapse voxels will be grouped together in clusters as they would in the true data. Based on research into the true size of synapses, these synthetic synapse clusters will be given a volume of ~0.2 microns^3, or about 27 voxels (assuming the synthetic data here and the real world data have identical resolutions). We will differentiate the background from the foreground in this simulation by assigning intensity values. Background voxels will be assigned a value from 0-10,000; foreground points will be given a value of 60,000. After the data goes through the pipeline, we will gauge performance based on the following:\n\naverage volume of synapses (should be about 27 voxels) \nvolumetric density of data (should be about 2% of the data)\n\nWe believe our pipeline will yield perfect results on this simulated data. This is because the main filtering from Otsu's Binarization requires the distribution of voxel values to be bimodal. 
That is, there is a clear differentiation between background and foreground. \nEasy Simulation Code", "import sys\nsys.path.insert(0,'../code/functions/')\nfrom random import randrange as rand\nfrom skimage.measure import label\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nimport pickle\n\ndef generatePointSet():\n center = (rand(0, 99), rand(0, 99), rand(0, 99))\n toPopulate = []\n for z in range(-1, 2):\n for y in range(-1, 2):\n for x in range(-1, 2):\n curPoint = (center[0]+z, center[1]+y, center[2]+x)\n #only populate valid points\n valid = True\n for dim in range(3):\n if curPoint[dim] < 0 or curPoint[dim] >= 100:\n valid = False\n if valid:\n toPopulate.append(curPoint)\n return set(toPopulate)\n \ndef generateTestVolume():\n #create a test volume\n volume = np.zeros((100, 100, 100))\n myPointSet = set()\n for _ in range(rand(500, 800)):\n potentialPointSet = generatePointSet()\n #be sure there is no overlap\n while len(myPointSet.intersection(potentialPointSet)) > 0:\n potentialPointSet = generatePointSet()\n for elem in potentialPointSet:\n myPointSet.add(elem)\n #populate the true volume\n for elem in myPointSet:\n volume[elem[0], elem[1], elem[2]] = 60000\n #introduce noise\n noiseVolume = np.copy(volume)\n for z in range(noiseVolume.shape[0]):\n for y in range(noiseVolume.shape[1]):\n for x in range(noiseVolume.shape[2]):\n if not (z, y, x) in myPointSet:\n noiseVolume[z][y][x] = rand(0, 10000)\n return volume, noiseVolume\n\nrandIm = generateTestVolume()\nforeground = randIm[0]\ncombinedIm = randIm[1]", "What We Expect Our Simulation Data Will Look Like:\nThe above code should generate a 100x100x100 volume and populate it with various non-intersecting point sets (representing foreground synapses). Once the foreground is generated, random background noise is then introduced to fill the rest of the volume. 
\nEasy Simulation Plots", "#displaying the random clusters\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nz, y, x = foreground.nonzero()\nax.scatter(x, y, z, zdir='z', c='r')\nplt.title('Random Foreground')\nplt.show()\n\n#displaying the noise\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nz, y, x = combinedIm.nonzero()\nax.scatter(x, y, z, zdir='z', c='r')\nplt.title('Random Noise + Foreground')\nplt.show()", "Why Our Simulation is Correct: Real microscopic images of synapses usually contain a majority of background noise and relatively few synapse clusters. As shown above, the generated test volume follows this expectation. \nDifficult Simulation\nWe will now simulate data on which our algorithm will not perform well. We will generate a 100x100x100 test volume populated with background and foreground voxels containing the same intensity. Since the distribution of voxel intensities is now unimodal (no clear difference between background and foreground), our filtering algorithm should not work well. However, the intensity values will not appear in our matplotlib plots. 
Therefore, our difficult simulation will appear to be the same as the Easy Simulation, but should fail after it goes through the connectLib pipeline.\nDifficult Simulation Code and Plot", "def generateDifficultTestVolume():\n #create a test volume\n volume = np.zeros((100, 100, 100))\n myPointSet = set()\n for _ in range(rand(500, 800)):\n potentialPointSet = generatePointSet()\n #be sure there is no overlap\n while len(myPointSet.intersection(potentialPointSet)) > 0:\n potentialPointSet = generatePointSet()\n for elem in potentialPointSet:\n myPointSet.add(elem)\n #populate the true volume\n for elem in myPointSet:\n volume[elem[0], elem[1], elem[2]] = 60000\n #introduce noise\n noiseVolume = np.copy(volume)\n for z in range(noiseVolume.shape[0]):\n for y in range(noiseVolume.shape[1]):\n for x in range(noiseVolume.shape[2]):\n if not (z, y, x) in myPointSet:\n noiseVolume[z][y][x] = 60000\n return volume, noiseVolume\n\nrandImHard = generateDifficultTestVolume()\nforegroundHard = randImHard[0]\ncombinedImHard = randImHard[1]\n\n#displaying the random clusters\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nz, y, x = foregroundHard.nonzero()\nax.scatter(x, y, z, zdir='z', c='r')\nplt.title('Random Foreground')\nplt.show()\n\n#displaying the noise\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nz, y, x = combinedImHard.nonzero()\nax.scatter(x, y, z, zdir='z', c='r')\nplt.title('Random Noise + Foreground')\nplt.show()", "Simulation Analysis\nPseudocode\nInputs: 3D image array that has been processed through plosLib pipeline, raw image file that hasn't been through plosLib\nOutputs: List of synapse clusters", "####Pseudocode: Will not run!####\n\n#Step 1 Otsu's Binarization to threshold out background noise intensity to 0.\nfor(each 2D image slice in 3D plos_image): \n threshold_otsu on slice #uses Otsu's Binarization to threshold background noise to 0. 
\nreturn thresholded_image\n\n#Step 2 Cluster foreground using connected components\nconnected_components on thresholded_image #labels and clusters 'connected' regions in foreground \nfor(each labeled region): \n MAKE Cluster object #instance that contains voxel members that made up labeled region \n plos_ClusterList.append(Cluster) #list of synapse/foreground clusters \nreturn plos_ClusterList\n\n#Step 3 Use Naive Fencing (IQR Range Rule) to remove large background cluster that formed \nIQR = getIQR(plos_ClusterList.getVolumes()) #calculate IQR of Cluster volumes\nUpperOutlierFence = 75thpercentile(plos_ClusterList.getVolumes()) + 1.5*IQR #get upper volume threshold (third quartile + 1.5*IQR) \nfor (Cluster in plos_ClusterList):\n if (Cluster.getVolume() > UpperOutlierFence) #if volume is considered an upper outlier, remove it\n plos_ClusterList.remove(Cluster) \n\n#Step 4 Coregister Degraded clusters found above with Raw clusters\nthreshold_otsu on raw_image #Thresholds raw image background\nrawClusterList = connected_components on thresholded_raw_image #Clusters raw image\nfor raw_cluster in rawClusterList: \n for plos_cluster in plos_ClusterList:\n if plos_cluster in raw_cluster: #if degraded cluster is contained in the raw cluster\n actualClusterList.append(raw_cluster) #add raw cluster to actual Cluster list.\n \nreturn actualClusterList", "Algorithm Code", "from skimage.filters import threshold_otsu\nfrom skimage.measure import label\nfrom cluster import Cluster\nimport numpy as np\nimport cv2\nimport plosLib as pLib\n\n### Step 1: Threshold the image using Otsu Binarization \ndef otsuVox(argVox):\n probVox = np.nan_to_num(argVox)\n bianVox = np.zeros_like(probVox)\n for zIndex, curSlice in enumerate(probVox):\n #if the array contains all the same values\n if np.max(curSlice) == np.min(curSlice):\n #otsu thresh will fail here, leave bianVox as all 0's\n continue\n thresh = threshold_otsu(curSlice)\n bianVox[zIndex] = curSlice > thresh\n return bianVox\n\n### 
Step 2: Cluster foreground using Connected Components\ndef connectedComponents(voxel):\n labelMap = label(voxel)\n clusterList = []\n #plus 1 since max label should be included\n for uniqueLabel in range(0, np.max(labelMap)+1):\n memberList = [list(elem) for elem in zip(*np.where(labelMap == uniqueLabel))]\n if not len(memberList) == 0:\n clusterList.append(Cluster(memberList))\n return clusterList\n\n### Step 3: Remove outlier clusters using IQR Rule\ndef thresholdByVolumePercentile(clusterList):\n #putting the plosPipeline clusters volumes in a list\n plosClusterVolList =[]\n for cluster in (range(len(clusterList))):\n plosClusterVolList.append(clusterList[cluster].getVolume())\n\n #finding the upper outlier fence\n Q3 = np.percentile(plosClusterVolList, 75)\n Q1 = np.percentile(plosClusterVolList, 25)\n IQR = Q3 - Q1\n upperThreshFence = Q3 + 1.5*IQR\n\n #filtering out the background cluster\n upperThreshClusterList = []\n for cluster in (range(len(clusterList))):\n if clusterList[cluster].getVolume() < upperThreshFence:\n upperThreshClusterList.append(clusterList[cluster])\n\n return upperThreshClusterList\n\n### Step 4: Coregister clusters with raw data.\ndef clusterCoregister(plosClusterList, rawClusterList):\n #creating a list of all the member indices of the plos cluster list\n plosClusterMemberList = []\n for cluster in range(len(plosClusterList)):\n plosClusterMemberList.extend(plosClusterList[cluster].members)\n\n #creating a list of all the clusters without any decay\n finalClusterList =[]\n for rawCluster in range(len(rawClusterList)):\n for index in range(len(plosClusterMemberList)):\n if ((plosClusterMemberList[index] in rawClusterList[rawCluster].members) and (not(rawClusterList[rawCluster] in finalClusterList))):\n finalClusterList.append(rawClusterList[rawCluster])\n\n return finalClusterList\n\n########## Complete Pipeline ##########\ndef completePipeline(image):\n #Plos Pipeline Results\n plosOut = pLib.pipeline(image)\n #Otsu's Binarization 
Thresholding\n bianOut = otsuVox(plosOut)\n #Connected Components\n connectList = connectedComponents(bianOut)\n #Remove outlier clusters\n threshClusterList = thresholdByVolumePercentile(connectList)\n #finding the clusters without plosPipeline - lists the entire clusters\n bianRawOut = otsuVox(image)\n clusterRawList = connectedComponents(bianRawOut)\n #coregistering with raw data\n clusters = clusterCoregister(threshClusterList, clusterRawList)\n return clusters", "Easy Simulation Analysis\nWhat We Expect\nAs previously mentioned, we believe the pipeline will work very well on the easy simulation (See Simulation Data: Easy Simulation for explanation).\nGenerate Easy Simulation Data: See Simulation Data Above.\nPipeline Run on Easy Data", "completeClusterMemberList = completePipeline(combinedIm)", "Easy Simulation Results", "### Get Cluster Volumes\ndef getClusterVolumes(clusterList):\n completeClusterVolumes = []\n for cluster in clusterList:\n completeClusterVolumes.append(cluster.getVolume())\n return completeClusterVolumes\n\nimport mouseVis as mv\n\n#plotting results\ncompleteClusterVolumes = getClusterVolumes(completeClusterMemberList)\nmv.generateHist(completeClusterVolumes, title = 'Cluster Volumes for Easy Simulation', bins = 25, xaxis = 'Volumes', yaxis = 'Relative Frequency')\n", "Performance Metrics:\nWe will be judging our algorithm's performance through two metrics: average cluster volume and cluster density per volume. This is based off of the 2 parameters we used to generate the test volume (see Simulation Data: Easy Simulation).\nIf our algorithm was successful, the average volume of detected synapse clusters should be equal to the average volume of the total foreground clusters that we generated. That is, our pipeline labeled synapses into correctly sized clusters (27 voxels). \nCluster density basically returns how many clusters were detected given a certain volume size. 
This is to show how many of the synapse clusters our algorithm was actually able to label. If the algorithm performs correctly, the relative number of synapse clusters per volume should equal around 2% (the volumetric density of synapses we generated in the test volume).", "#test stats\n\n# get actual cluster volumes from foreground (for 'Expected' values)\ndef getForegroundClusterVols(foreground):\n foregroundClusterList = connectedComponents(foreground)\n del foregroundClusterList[0] #background cluster\n foregroundClusterVols = []\n for cluster in foregroundClusterList:\n foregroundClusterVols.append(cluster.getVolume())\n return foregroundClusterVols\n \ndef getAverageMetric(coClusterVols, foreClusterVols):\n #no clusters found\n if (len(coClusterVols)==0):\n avgClusterVol = 0\n else:\n #average volume of detected clusters\n avgClusterVol = np.mean(coClusterVols)\n #average volume of total foreground clusters\n avgExpectedVol = np.mean(foreClusterVols)\n print 'Average Volume'\n print \"\\tExpected: \" + str(avgExpectedVol) + '\\tActual: ' + str(avgClusterVol)\n return avgExpectedVol, avgClusterVol\n\ndef getDensityMetric(coClusterVols, foreClusterVols):\n #no clusters found\n if (len(coClusterVols)==0):\n coClusterVols.append(0)\n print 'Cluster Density of Data By Volume'\n print \"\\tExpected: \" + str(np.sum(foreClusterVols)/(100*100*100.0)) + '\\tActual: ' + str(np.sum(coClusterVols)/(100*100*100.0))", "Quantify Performance for Easy Simulation", "foregroundClusterVols = getForegroundClusterVols(foreground)\ngetAverageMetric(completeClusterVolumes, foregroundClusterVols)\ngetDensityMetric(completeClusterVolumes, foregroundClusterVols)", "As shown above, our connectLib pipeline worked extremely well on the easy simulation. The small difference between the actual and expected values comes from the generated synapse point sets. Foreground synapses can potentially be adjacent to each other in the test volume. 
Connected Components will label the multiple, connected synapses as one cluster, which explains the cluster volumes at roughly 56 (2 synapses) and 81 (3 synapses) [See Histogram in Easy Simulation Results]. \nDifficult Simulation Analysis\nWhat We Expect: Since Otsu's Binarization depends on a bimodal distribution of voxel intensities, the background should not get thresholded for the difficult simulation. Furthermore, since all the voxels are identical in terms of intensity, connectedComponents should label the entire volume as just one cluster.\nGenerate Difficult Simulation Data: See Simulate Data: Difficult Simulation.\nPipeline Run on Difficult Data:", "completeClusterMemberListHard = completePipeline(combinedImHard)\nprint len(completeClusterMemberListHard)", "Difficult Simulation Results:", "#Plos Pipeline Results\nplosOut = pLib.pipeline(combinedImHard)\n#Otsu's Binarization Thresholding\nbianOut = otsuVox(plosOut)\n#Connected Components\nconnectList = connectedComponents(bianOut)\n#get total volume for hard simulation clusters\ntotalClusterHard = []\nfor cluster in connectList:\n totalClusterHard.append(cluster.getVolume())\n#get coregistered (complete) cluster volumes\ncompleteClusterVolumesHard = getClusterVolumes(completeClusterMemberListHard)\n\nprint 'Number of Clusters: ' + str(len(totalClusterHard))\nprint 'Cluster Volume: ' + str(totalClusterHard[0])\nprint 'Coregistered Clusters: ' + str(len(completeClusterMemberListHard))", "Performance Metrics\nSee Easy Simulation Analysis: Performance Metrics.\nQuantify Performance for Difficult Simulation", "foregroundClusterVolsHard = getForegroundClusterVols(foregroundHard)\ngetAverageMetric(completeClusterVolumesHard, foregroundClusterVolsHard)\ngetDensityMetric(completeClusterVolumesHard, foregroundClusterVolsHard)", "As predicted, the foreground and background was combined into one cluster through the connectLib Pipeline (see Results). 
This large cluster does not coregister with any of the original foreground clusters. Clearly, our pipeline performed very poorly on the difficult simulation as zero clusters were actually detected. This ultimately proves our earlier thesis that the connectLib pipeline is dependent on the foreground and background voxels having significantly different intensities.\nVerify Simulation Analysis\nRepeat Easy and Hard simulation analysis 10 times each.", "easySimulationVolumes = []\nhardSimulationVolumes = []\n\nfor i in range(10):\n #Easy Simulation\n randIm = generateTestVolume()\n foreground = randIm[0]\n combinedIm = randIm[1]\n completeClusterMemberList = completePipeline(combinedIm)\n completeClusterVolumes = getClusterVolumes(completeClusterMemberList)\n foregroundClusterVols = getForegroundClusterVols(foreground)\n easySimulationVolumes.append(getAverageMetric(completeClusterVolumes, foregroundClusterVols))\n getDensityMetric(completeClusterVolumes, foregroundClusterVols)\n \n #Hard Simulation\n randImHard = generateDifficultTestVolume()\n foregroundHard = randImHard[0]\n combinedImHard = randImHard[1]\n completeClusterMemberListHard = completePipeline(combinedImHard)\n completeClusterVolumesHard = getClusterVolumes(completeClusterMemberListHard)\n foregroundClusterVolsHard = getForegroundClusterVols(foregroundHard)\n hardSimulationVolumes.append(getAverageMetric(completeClusterVolumesHard, foregroundClusterVolsHard))\n getDensityMetric(completeClusterVolumesHard, foregroundClusterVolsHard)", "Plotting Expected and Average Cluster Volumes for each easy simulation.\nRed = Expected Average Volume\nBlue = Observed Average Volume", "#separate expected and actual values into separate indices\nesv = [list(t) for t in zip(*easySimulationVolumes)]\n#outlier\ndel esv[0][6]\ndel esv [1][6]\n\nfig = plt.figure()\nplt.title('Easy Simulation: Average and Expected Cluster Volumes (10 Trials)')\nplt.xlabel('Simulation #')\nplt.ylabel('Volume (voxels)')\nx = 
np.arange(9)\nplt.scatter(x, esv[0], c='r')\nplt.scatter(x, esv[1], c='b')\nplt.show()", "Plotting Expected and Average Cluster Volumes for each difficult simulation.", "hsv = [list(t) for t in zip(*hardSimulationVolumes)]\nfig = plt.figure()\nplt.title('Difficult Simulation: Average and Expected Cluster Volumes (10 Trials)')\nplt.xlabel('Simulation #')\nplt.ylabel('Volume (voxels)')\nx = np.arange(10)\nplt.scatter(x, hsv[0], c='r')\nplt.scatter(x, hsv[1], c='b')\nplt.show()", "Summary of Simulation Analysis\nOur difficult and easy simulation data demonstrate how our connectLib pipeline depends on how different the background and foreground voxel intensity values are. When the background and foreground are not distinguishable, connectLib cannot threshold and filter out the background clusters, thus creating one large cluster combining all the voxels in the volume. As a result, essentially no synapses (clusters) can be detected correctly. On the other hand, if the foreground voxels are very distinguishable from the background noise (easy simulation), our connectLib pipeline works extremely well. For the easy simulations, 100% of the background noise was filtered out and almost all of the foreground point sets (representing synapses) were clustered correctly. The only errors were from adjacent 'synapses' that were clustered together. \nReal Data\nOur sample data will come from five slices (z = 5) of a tiff image (3D). The tiff file is a photon microscope image of a mouse brain. The dimensions of our data will be 1024 x 1024 x 5 voxels^3 (x,y,z axis respectively). 
\nDisplaying Real Data", "import pickle\n\nrealData = pickle.load(open('../data/realDataRaw_t0.synth'))\nrealDataSection = realData[5: 10]\n\nplosDataSection = pLib.pipeline(realDataSection)\nmv.generateHist(plosDataSection, bins = 50, title = \"Voxel Intensity Distribution after PLOS\", xaxis = 'Relative Voxel Intensity', yaxis = 'Frequency')", "Predicting Performance:\nMouse brains have a lot more activity than can be portrayed in our simulated data. There are different captured cell types and a wide variation of background/foreground noise. Our Naive Fencing method and Otsu's Binarization may not be enough to produce clean synapse clusters. Because of this added complexity present in mouse brain images, we believe our connectLib pipeline might not work perfectly on the real data. What is more concerning is that the distribution of voxel intensities is unimodal. The foreground does not appear to be significantly different from the background. Thus, Otsu's Binarization might not threshold the background successfully. \nconnectLib Algorithm Run on Real Data", "print 'Running'\nrealClusterList = completePipeline(plosDataSection)\nrealClusterVols = getClusterVolumes(realClusterList)", "Results", "mv.generateHist(realClusterVols, title = 'Cluster Volumes for Real Data', bins = 50, xaxis = 'Volumes', yaxis = 'Relative Frequency')\n\nprint realClusterVols\n\n\ndel realClusterVols[0]\nmv.generateHist(realClusterVols, title = 'Cluster Volumes for Real Data', axisStart = 0, axisEnd = 200, bins = 25, xaxis = 'Volumes', yaxis = 'Relative Frequency')", "Potential Corrections to connectLib Pipeline\nBecause the distribution of intensities is not clearly bimodal, a simple binary threshold, with the lower 98% of voxel intensities getting thresholded to 0, might be a better method for filtering background noise than Otsu's Method. Furthermore, there is actually a bigger issue in filtering out additional foreground elements that are not synapses. 
These elements, such as glial cells, still get clustered and labeled as synapses even though they are not (as you can tell from their voxel volumes)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
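The IQR upper-fence rule from Step 3 of the pipeline above can be illustrated on plain volume lists; `iqr_upper_fence` and `drop_upper_outliers` are illustrative helpers written for this sketch, not functions from connectLib.

```python
import numpy as np

# Standalone illustration of Step 3 (the rule inside
# thresholdByVolumePercentile), applied to bare volume numbers
# instead of Cluster objects.
def iqr_upper_fence(volumes):
    q1, q3 = np.percentile(volumes, [25, 75])
    return q3 + 1.5 * (q3 - q1)

def drop_upper_outliers(volumes):
    fence = iqr_upper_fence(volumes)
    return [v for v in volumes if v < fence]

# Many synapse-sized clusters (20-39 voxels) plus one huge cluster,
# as produced when connected components labels the filtered background.
volumes = list(range(20, 40)) * 3 + [981000]
kept = drop_upper_outliers(volumes)
print(len(kept), max(kept))  # -> 60 39: the background cluster is gone
```

Because the background cluster is orders of magnitude larger than any synapse cluster, it always lands far above the fence, while the tightly grouped synapse volumes keep the IQR (and hence the fence) small.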
sot/aca_stats
fit_acq_model-2019-08-binned-poly-binom-floor.ipynb
bsd-3-clause
[ "Fit binned-floor acquisition probability model in 2019-08\nThis is the acquisition probability model calculated in 2019-08. \nIt is NOT promoted to flight due to lack of change from 2018-11.\nCopied from the 2018-11 notebook, modified slightly and re-run. \nChanges\n\nFactored out the date and model name to single values at the top.\nMade the data start time be a fixed 4.5 years before end time.\nFixed the calculation of probability confidence intervals in the\n comparison to flight data at 120 arcsec box size.\nAdded another temperature bin in the flight comparison plot to\n show higher temperature data.\nUsed chandra_aca.star_probs to compute the new model instead of a\n local replica of that code.\n\nKey features of the model for color != 1.5 stars (with good mag estimates)\n\nIncorporates ASVT data for t_ccd &gt;= -10 C in order to provide reasonable estimates\n of probability in regimes with little or no flight data.\nFits a quadratic model for p_fail (probit) as a \n function of t_ccd in a series of magnitude bins. The mag bins are driven by magnitudes \n of ASVT simulated data.\nModel now includes a floor parameter that sets a hard lower limit on p_fail.\n This is seen in data and represents other factors that cause acquisition failure\n independent of t_ccd. In other words, even for an arbitrarily cold CCD there will\n still be a small fraction of acquisition failures. For flight data this can include\n spoilers or an ionizing radiation flag.\nAs in past models, the p_fail model is adjusted by a box_delta term which applies\n a search-box dependent offset in probit space. The box_delta term is defined to\n have a value of 0.0 for box halfwidth = 120.\nThe global model (for arbitrary mag) is computed by linearly interpolating the\n binned quadratic coefficients as a function of mag. 
The previous flight model\n (spline) did a global mag - t_ccd fit using a 5-element spline in the\n mag direction.\n\nKey features of the model for color == 1.5 stars (with poor mag estimates)\n\nPost AGASC 1.7, there is inadequate data to independently perform the binned\n fitting.\nInstead assume a magnitude error distribution which is informed by examining\n the observed distribution of dmag = mag_obs - mag_aca (observed - catalog). This\n turns out to be well-represented by an exp(-abs(dmag) / dmag_scale)\n distribution. This contrasts with a gaussian that scales as exp(-dmag^2).\nUse the assumed mag error distribution and sample the color != 1.5 star\n probabilities accordingly and compute the weighted mean failure probability.\nFlight data show a steeper falloff for dmag &gt; 0 (stars observed to be fainter\n than expected) than for dmag &lt; 0. As noted by JC this likely includes a\n survival effect whereby stars that are actually much fainter don't get acquired\n and do not get into the sample. Indeed using the observed distribution gives\n a poor fit to flight data, so dmag_scale for dmag &gt; 0 was arbitrarily\n increased from 2.8 to 4.0 in order to better fit flight data.\n\nModel details\n\nIn order to get a good match to flight data for faint stars near -11 C, it\n was necessary to apply an ad-hoc correction to ASVT data for mag &gt; 10.1.\n The correction effectively made the model assume smaller search box sizes,\n so for the canonical 120 arcsec box the model p_fail is slightly increased\n relative to the raw failure rate from ASVT.\nThe mag = 8.0 data from ASVT show a dependence on search-box size that is\n flipped from usual. There are more failures for smaller search boxes,\n though we are dealing with small number statistics (up to 3 fails per bin).\n This caused problems in the fitting, so for this bin the box_delta term\n was simply zeroed out and a good fit was obtained in the automatic fit\n process. 
Since p_fail is quite low in all cases this has little practical\n impact either way.\nFitting now uses binomial statistics to compute the fit statistic during\n model parameter optimization. Previously it was using a poisson statistic\n which is similar except near 1.0. The poisson statistic is built in to\n Sherpa and was easier, but the new binomial statistic is formally correct\n and behaves better for probabilities near 1.0.\n\nParadigm shift for production model implementation\n\nThis model is complicated, and the color = 1.5 star case is computationally\n intensive.\nInstead of transferring the analytic algorithm and fit values into \n chandra_aca.star_probs for production use, take a new approach of generating\n a 3-d grid of p_fail (in probit space) as a function of mag, t_ccd,\n and halfwidth. Do this for color != 1.5 and color = 1.5.\nThe ranges are 5.0 <= mag <= 12.0, -16 <= t_ccd <= -1, and\n 60 <= halfwidth <= 180. Values outside that range are clipped.\nThis separates the model generation from the production model calculation.\nGridded 3-d linear interpolation is used in chandra_aca and is quite fast.\nThe gridded value files are about 150 kb, and make it easy to generate\n new models without changing code in chandra_aca (except for a hard-coded\n value for the default model).", "import sys\nimport os\nfrom itertools import count\nfrom pathlib import Path\n\n# Include utils.py for asvt_utils\nsys.path.insert(0, str(Path(os.environ['HOME'], 'git', 'skanb', 'pea-test-set')))\nimport utils as asvt_utils\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom astropy.table import Table, vstack\nfrom astropy.time import Time\nimport tables\nfrom scipy import stats\nfrom scipy.interpolate import CubicSpline\nfrom Chandra.Time import DateTime\nfrom chandra_aca.star_probs import (get_box_delta, broadcast_arrays, \n acq_success_prob, grid_model_acq_prob)\nfrom chandra_aca import star_probs\n\n%matplotlib 
inline\n\nMODEL_DATE = '2019-08'\nMODEL_NAME = f'grid-floor-{MODEL_DATE}'\n\nnp.random.seed(0)\n\nSKA = Path(os.environ['SKA'])", "Get acq stats data and clean", "# Make a map of AGASC_ID to AGACS 1.7 MAG_ACA. The acq_stats.h5 file has whatever MAG_ACA\n# was in place at the time of planning the loads.\n# Define new term `red_mag_err` which is used here in place of the \n# traditional COLOR1 == 1.5 test.\nwith tables.open_file(str(SKA / 'data' / 'agasc' / 'miniagasc_1p7.h5'), 'r') as h5:\n agasc_mag_aca = h5.root.data.col('MAG_ACA')\n agasc_id = h5.root.data.col('AGASC_ID')\n has_color3 = h5.root.data.col('RSV3') != 0 # \n red_star = np.isclose(h5.root.data.col('COLOR1'), 1.5)\n mag_aca_err = h5.root.data.col('MAG_ACA_ERR') / 100\n red_mag_err = red_star & ~has_color3 # MAG_ACA, MAG_ACA_ERR is potentially inaccurate\n\nagasc1p7_idx = {id: idx for id, idx in zip(agasc_id, count())}\nagasc1p7 = Table([agasc_mag_aca, mag_aca_err, red_mag_err], \n names=['mag_aca', 'mag_aca_err', 'red_mag_err'], copy=False)\n\nacq_file = str(SKA / 'data' / 'acq_stats' / 'acq_stats.h5')\nwith tables.open_file(str(acq_file), 'r') as h5:\n cols = h5.root.data.cols\n names = {'tstart': 'guide_tstart',\n 'obsid': 'obsid',\n 'obc_id': 'acqid',\n 'halfwidth': 'halfw',\n 'warm_pix': 'n100_warm_frac',\n 'mag_aca': 'mag_aca',\n 'mag_obs': 'mean_trak_mag',\n 'known_bad': 'known_bad',\n 'color': 'color1',\n 'img_func': 'img_func', \n 'ion_rad': 'ion_rad',\n 'sat_pix': 'sat_pix',\n 'agasc_id': 'agasc_id',\n 't_ccd': 'ccd_temp',\n 'slot': 'slot'}\n acqs = Table([getattr(cols, h5_name)[:] for h5_name in names.values()],\n names=list(names.keys())) \n\nyear_q0 = 1999.0 + 31. 
/ 365.25 # Jan 31 approximately\nacqs['year'] = Time(acqs['tstart'], format='cxcsec').decimalyear.astype('f4')\nacqs['quarter'] = (np.trunc((acqs['year'] - year_q0) * 4)).astype('f4')\n\n# Create 'fail' column, rewriting history as if the OBC always\n# ignored the MS flag in ID'ing acq stars.\n#\n# CHECK: is ion_rad being ignored on-board?\n# Answer: Not as of 2019-09\n#\nobc_id = acqs['obc_id']\nobc_id_no_ms = (acqs['img_func'] == 'star') & ~acqs['sat_pix'] & ~acqs['ion_rad']\nacqs['fail'] = np.where(obc_id | obc_id_no_ms, 0.0, 1.0)\n\n# Re-map acq_stats database magnitudes for AGASC 1.7\nacqs['mag_aca'] = [agasc1p7['mag_aca'][agasc1p7_idx[agasc_id]] for agasc_id in acqs['agasc_id']]\nacqs['red_mag_err'] = [agasc1p7['red_mag_err'][agasc1p7_idx[agasc_id]] for agasc_id in acqs['agasc_id']]\nacqs['mag_aca_err'] = [agasc1p7['mag_aca_err'][agasc1p7_idx[agasc_id]] for agasc_id in acqs['agasc_id']]\n\n# Add a flag to distinguish flight from ASVT data\nacqs['asvt'] = False\n\n# Filter for year and mag\n#\nyear_max = Time(f'{MODEL_DATE}-01').decimalyear\nyear_min = year_max - 4.5\nacq_ok = ((acqs['year'] > year_min) & (acqs['year'] < year_max) & \n (acqs['mag_aca'] > 7.0) & (acqs['mag_aca'] < 11) &\n (~np.isclose(acqs['color'], 0.7)))\n\n# Filter known bad obsids. 
NOTE: this is no longer doing anything, but\n# consider updating the list of known bad obsids or obtaining them programmatically?\n\nprint('Filtering known bad obsids, start len = {}'.format(np.count_nonzero(acq_ok)))\nbad_obsids = [\n # Venus\n 2411,2414,6395,7306,7307,7308,7309,7311,7312,7313,7314,7315,7317,7318,7406,583,\n 7310,9741,9742,9743,9744,9745,9746,9747,9749,9752,9753,9748,7316,15292,16499,\n 16500,16501,16503,16504,16505,16506,16502,\n ]\nfor badid in bad_obsids:\n acq_ok = acq_ok & (acqs['obsid'] != badid)\nprint('Filtering known bad obsids, end len = {}'.format(np.count_nonzero(acq_ok)))", "Get ASVT data and make it look more like acq stats data", "peas = Table.read('pea_analysis_results_2018_299_CCD_temp_performance.csv', format='ascii.csv')\npeas = asvt_utils.flatten_pea_test_data(peas)\npeas = peas[peas['ccd_temp'] > -10.5]\n\n# Version of ASVT PEA data that is more flight-like\nfpeas = Table([peas['star_mag'], peas['ccd_temp'], peas['search_box_hw']],\n names=['mag_aca', 't_ccd', 'halfwidth'])\nfpeas['year'] = np.random.uniform(2019.0, 2019.5, size=len(peas))\nfpeas['color'] = 1.0\nfpeas['quarter'] = (np.trunc((fpeas['year'] - year_q0) * 4)).astype('f4')\nfpeas['fail'] = 1.0 - peas['search_success']\nfpeas['asvt'] = True\nfpeas['red_mag_err'] = False\nfpeas['mag_obs'] = 0.0", "Combine flight acqs and ASVT data", "data_all = vstack([acqs[acq_ok]['year', 'fail', 'mag_aca', 't_ccd', 'halfwidth', 'quarter', \n 'color', 'asvt', 'red_mag_err', 'mag_obs'], \n fpeas])\ndata_all.sort('year')", "Compute box probit delta term based on box size", "# Adjust probability (in probit space) for box size. \ndata_all['box_delta'] = get_box_delta(data_all['halfwidth'])\n\n# Put in an ad-hoc penalty on ASVT data that introduces up to a -0.3 shift\n# on probit probability. 
It goes from 0.0 for mag < 10.1 up to 0.3 at mag=10.4.\nok = data_all['asvt']\nbox_delta_tweak = (data_all['mag_aca'][ok] - 10.1).clip(0, 0.3)\ndata_all['box_delta'][ok] -= box_delta_tweak\n\n# Another ad-hoc tweak: the mag=8.0 data show more failures at smaller\n# box sizes. This confounds the fitting. For this case only just\n# set the box deltas to zero and this makes the fit work.\nok = data_all['asvt'] & (data_all['mag_aca'] == 8)\ndata_all['box_delta'][ok] = 0.0\n\ndata_all = data_all.group_by('quarter')\ndata_all0 = data_all.copy() # For later augmentation with simulated red_mag_err stars\ndata_mean = data_all.groups.aggregate(np.mean)", "Model definition", "def t_ccd_normed(t_ccd):\n return (t_ccd + 8.0) / 8.0\n\ndef p_fail(pars, \n t_ccd, tc2=None,\n box_delta=0, rescale=True, probit=False):\n \"\"\"\n Acquisition probability model\n\n :param pars: p0, p1, p2 (quadratic in t_ccd) and floor (min p_fail)\n :param t_ccd: t_ccd (degC) or scaled t_ccd if rescale is False.\n :param tc2: (scaled t_ccd) ** 2, this is just for faster fitting\n :param box_delta: delta p_fail for search box size\n :param rescale: rescale t_ccd to about -1 to 1 (makes P0, P1, P2 better-behaved)\n :param probit: return probability as probit instead of 0 to 1.\n \"\"\"\n p0, p1, p2, floor = pars\n\n tc = t_ccd_normed(t_ccd) if rescale else t_ccd\n \n if tc2 is None:\n tc2 = tc ** 2\n \n # Make sure box_delta has right dimensions\n tc, box_delta = np.broadcast_arrays(tc, box_delta)\n\n # Compute the model. 
Also clip at +10 to avoid values that are\n # exactly 1.0 at 64-bit precision.\n probit_p_fail = (p0 + p1 * tc + p2 * tc2 + box_delta).clip(floor, 10)\n\n # Possibly transform from probit to linear probability\n out = probit_p_fail if probit else stats.norm.cdf(probit_p_fail)\n return out\n\ndef p_acq_fail(data=None):\n \"\"\"\n Sherpa fit function wrapper to ensure proper use of data in fitting.\n \"\"\"\n if data is None:\n data = data_all\n \n tc = t_ccd_normed(data['t_ccd'])\n tc2 = tc ** 2\n box_delta = data['box_delta']\n \n def sherpa_func(pars, x=None):\n return p_fail(pars, tc, tc2, box_delta, rescale=False)\n\n return sherpa_func", "Model fitting functions", "def calc_binom_stat(data, model, staterror=None, syserror=None, weight=None, bkg=None):\n \"\"\"\n Calculate log-likelihood for a binomial probability distribution\n for a single trial at each point.\n \n Defining p = model, then probability of seeing data == 1 is p and\n probability of seeing data == 0 is (1 - p). Note here that ``data``\n is strictly either 0.0 or 1.0, and np.where interprets those float\n values as False or True respectively.\n \"\"\"\n fit_stat = -np.sum(np.log(np.where(data, model, 1.0 - model))) \n return fit_stat, np.ones(1)\n\ndef fit_poly_model(data):\n from sherpa import ui\n \n comp_names = ['p0', 'p1', 'p2', 'floor']\n\n data_id = 1\n ui.set_method('simplex')\n \n # Set up the custom binomial statistics\n ones = np.ones(len(data))\n ui.load_user_stat('binom_stat', calc_binom_stat, lambda x: ones)\n ui.set_stat(binom_stat)\n\n # Define the user model\n ui.load_user_model(p_acq_fail(data), 'model')\n ui.add_user_pars('model', comp_names)\n ui.set_model(data_id, 'model')\n ui.load_arrays(data_id, np.array(data['year']), np.array(data['fail'], dtype=np.float))\n\n # Initial fit values from fit of all data\n fmod = ui.get_model_component('model')\n\n # Define initial values / min / max\n # This is the p_fail value at t_ccd = -8.0\n fmod.p0 = -2.605\n fmod.p0.min = -10\n 
fmod.p0.max = 10\n\n # Linear slope of p_fail\n fmod.p1 = 2.5\n fmod.p1.min = 0.0\n fmod.p1.max = 10\n \n # Quadratic term. Only allow negative curvature, and not too much at that.\n fmod.p2 = 0.0\n fmod.p2.min = -1\n fmod.p2.max = 0\n\n # Floor to p_fail.\n fmod.floor = -2.6\n fmod.floor.min = -2.6\n fmod.floor.max = -0.5\n\n ui.fit(data_id)\n\n return ui.get_fit_results()", "Plotting and validation", "def plot_fails_mag_aca_vs_t_ccd(mag_bins, data_all, year0=2015.0):\n ok = (data_all['year'] > year0) & ~data_all['fail'].astype(bool)\n da = data_all[ok]\n fuzzx = np.random.uniform(-0.3, 0.3, len(da))\n fuzzy = np.random.uniform(-0.125, 0.125, len(da))\n plt.plot(da['t_ccd'] + fuzzx, da['mag_aca'] + fuzzy, '.C0', markersize=4)\n\n ok = (data_all['year'] > year0) & data_all['fail'].astype(bool)\n da = data_all[ok]\n fuzzx = np.random.uniform(-0.3, 0.3, len(da))\n fuzzy = np.random.uniform(-0.125, 0.125, len(da))\n plt.plot(da['t_ccd'] + fuzzx, da['mag_aca'] + fuzzy, '.C1', markersize=4, alpha=0.8)\n \n # plt.xlim(-18, -10)\n # plt.ylim(7.0, 11.1)\n x0, x1 = plt.xlim()\n for y in mag_bins:\n plt.plot([x0, x1], [y, y], '-', color='r', linewidth=2, alpha=0.8)\n plt.xlabel('T_ccd (C)')\n plt.ylabel('Mag_aca')\n plt.title(f'Acq successes (blue) and failures (orange) since {year0}')\n plt.grid()\n\ndef plot_fit_grouped(data, group_col, group_bin, log=False, colors='br', label=None, probit=False):\n \n group = np.trunc(data[group_col] / group_bin)\n data = data.group_by(group)\n data_mean = data.groups.aggregate(np.mean)\n len_groups = np.diff(data.groups.indices)\n data_fail = data_mean['fail']\n model_fail = np.array(data_mean['model'])\n \n fail_sigmas = np.sqrt(data_fail * len_groups) / len_groups\n \n # Possibly plot the data and model probabilities in probit space\n if probit:\n dp = stats.norm.ppf(np.clip(data_fail + fail_sigmas, 1e-6, 1-1e-6))\n dm = stats.norm.ppf(np.clip(data_fail - fail_sigmas, 1e-6, 1-1e-6))\n data_fail = stats.norm.ppf(data_fail)\n model_fail 
= stats.norm.ppf(model_fail)\n fail_sigmas = np.vstack([data_fail - dm, dp - data_fail])\n \n plt.errorbar(data_mean[group_col], data_fail, yerr=fail_sigmas, \n fmt='.' + colors[1], label=label, markersize=8)\n plt.plot(data_mean[group_col], model_fail, '-' + colors[0])\n \n if log:\n ax = plt.gca()\n ax.set_yscale('log')\n\ndef mag_filter(mag0, mag1):\n ok = (data_all['mag_aca'] > mag0) & (data_all['mag_aca'] < mag1)\n return ok\n\ndef t_ccd_filter(t_ccd0, t_ccd1):\n ok = (data_all['t_ccd'] > t_ccd0) & (data_all['t_ccd'] < t_ccd1)\n return ok\n\ndef wp_filter(wp0, wp1):\n ok = (data_all['warm_pix'] > wp0) & (data_all['warm_pix'] < wp1)\n return ok", "Define magnitude bins for fitting and show data", "mag_centers = np.array([6.3, 8.1, 9.1, 9.55, 9.75, 10.0, 10.25, 10.55, 10.75, 11.0])\nmag_bins = (mag_centers[1:] + mag_centers[:-1]) / 2\nmag_means = np.array([8.0, 9.0, 9.5, 9.75, 10.0, 10.25, 10.5, 10.75])\n\nfor m0, m1, mm in zip(mag_bins[:-1], mag_bins[1:], mag_means):\n ok = (data_all['asvt'] == False) & (data_all['mag_aca'] >= m0) & (data_all['mag_aca'] < m1)\n print(f\"m0={m0:.2f} m1={m1:.2f} mean_mag={data_all['mag_aca'][ok].mean():.2f} vs. 
{mm}\")\n\nplt.figure(figsize=(10, 14))\nfor subplot, halfwidth in enumerate([60, 80, 100, 120, 140, 160, 180]):\n plt.subplot(4, 2, subplot + 1)\n ok = (data_all['halfwidth'] > halfwidth - 10) & (data_all['halfwidth'] <= halfwidth + 10)\n plot_fails_mag_aca_vs_t_ccd(mag_bins, data_all[ok])\n plt.title(f'Acq success (blue) fail (orange) box={halfwidth}')\nplt.tight_layout()", "Color != 1.5 fit (this is MOST acq stars)", "# fit = fit_sota_model(data_all['color'] == 1.5, ms_disabled=True)\nmask_no_1p5 = ((data_all['red_mag_err'] == False) & \n (data_all['t_ccd'] > -18) &\n (data_all['t_ccd'] < -0.5))\n\nmag0s, mag1s = mag_bins[:-1], mag_bins[1:]\nfits = {}\nmasks = []\nfor m0, m1 in zip(mag0s, mag1s):\n print(m0, m1)\n mask = mask_no_1p5 & mag_filter(m0, m1) # & t_ccd_filter(-10.5, 0)\n print(np.count_nonzero(mask))\n masks.append(mask)\n fits[m0, m1] = fit_poly_model(data_all[mask])\n\ncolors = [f'kC{i}' for i in range(9)]\n\nplt.figure(figsize=(13, 4))\nfor subplot in (1, 2):\n plt.subplot(1, 2, subplot)\n probit = (subplot == 2)\n for m0_m1, color, mask, mag_mean in zip(list(fits), colors, masks, mag_means):\n fit = fits[m0_m1]\n data = data_all[mask]\n data['model'] = p_acq_fail(data)(fit.parvals) \n plot_fit_grouped(data, 't_ccd', 2.0, \n probit=probit, colors=[color, color], label=str(mag_mean))\n plt.grid()\n if probit:\n plt.ylim(-3.5, 2.5)\n plt.ylabel('Probit(p_fail)' if probit else 'p_fail')\n plt.xlabel('T_ccd');\n plt.legend(fontsize='small')\n\n# This computes probabilities for 120 arcsec boxes, corresponding to raw data\nt_ccds = np.linspace(-16, -0, 20)\nplt.figure(figsize=(13, 4))\n\nfor subplot in (1, 2):\n plt.subplot(1, 2, subplot)\n probit = (subplot == 2)\n for m0_m1, color, mag_mean in zip(list(fits), colors, mag_means):\n fit = fits[m0_m1]\n probs = p_fail(fit.parvals, t_ccds)\n if probit:\n probs = stats.norm.ppf(probs)\n plt.plot(t_ccds, probs, label=f'{mag_mean:.2f}')\n\n plt.legend()\n plt.xlabel('T_ccd')\n plt.ylabel('P_fail' if subplot 
== 1 else 'Probit(p_fail)')\n plt.title('P_fail for halfwidth=120')\n plt.grid()\n\nmag_bin_centers = np.concatenate([[5.0], mag_means, [13.0]])\nfit_parvals = []\nfor fit in fits.values():\n fit_parvals.extend(fit.parvals)\n\nfit_parvals = np.array(fit_parvals).reshape(-1, 4)\nparvals_mag12 = [[5, 0, 0, 0]]\nparvals_mag5 = [[-5, 0, 0, -3]]\nfit_parvals = np.concatenate([parvals_mag5, fit_parvals, parvals_mag12])\nfit_parvals = fit_parvals.transpose()\nfor ps, parname in zip(fit_parvals, fit.parnames):\n plt.plot(mag_bin_centers, ps, '.-', label=parname)\n\nplt.legend(fontsize='small')\nplt.title('Model coefficients vs. mag')\nplt.xlabel('Mag_aca')\nplt.grid()", "Define model for color=1.5 stars\n\nPost AGASC 1.7, there is inadequate data to independently perform the binned\n fitting.\nInstead assume a magnitude error distribution which is informed by examining\n the observed distribution of dmag = mag_obs - mag_aca (observed - catalog). This\n turns out to be well-represented by an exp(-abs(dmag) / dmag_scale)\n distribution. 
This contrasts with a gaussian that scales as exp(dmag^2).\nUse the assumed mag error distribution and sample the color != 1.5 star\n probabilities accordingly and compute the weighted mean failure probability.\n\nExamine distribution of mag error for color=1.5 stars", "def plot_mag_errs(acqs, red_mag_err):\n ok = ((acqs['red_mag_err'] == red_mag_err) & \n (acqs['mag_obs'] > 0) & \n (acqs['img_func'] == 'star'))\n dok = acqs[ok]\n dmag = dok['mag_obs'] - dok['mag_aca']\n plt.figure(figsize=(14, 4.5))\n plt.subplot(1, 3, 1)\n plt.plot(dok['mag_aca'], dmag, '.')\n plt.plot(dok['mag_aca'], dmag, ',', alpha=0.3)\n plt.xlabel('mag_aca (catalog)')\n plt.ylabel('Mag err')\n plt.title('Mag err (observed - catalog) vs mag_aca')\n plt.xlim(5, 11.5)\n plt.ylim(-4, 2)\n plt.grid()\n \n plt.subplot(1, 3, 2)\n plt.hist(dmag, bins=np.arange(-3, 4, 0.2), log=True);\n plt.grid()\n plt.xlabel('Mag err')\n plt.title('Mag err (observed - catalog)')\n plt.xlim(-4, 2)\n \n plt.subplot(1, 3, 3)\n plt.hist(dmag, bins=100, cumulative=-1, normed=True)\n plt.xlim(-1, 1)\n plt.xlabel('Mag err')\n plt.title('Mag err (observed - catalog)')\n plt.grid()\n\nplot_mag_errs(acqs, red_mag_err=True)\nplt.subplot(1, 3, 2)\nplt.plot([-2.8, 0], [1, 7000], 'r');\nplt.plot([0, 4.0], [7000, 1], 'r');\nplt.xlim(-4, 4);", "Define an analytical approximation for distribution with ad-hoc positive tail", "# Define parameters / metadata for floor model\nFLOOR = {'fit_parvals': fit_parvals,\n 'mag_bin_centers': mag_bin_centers}\n\ndef calc_1p5_mag_err_weights():\n x = np.linspace(-2.8, 4, 18)\n ly = 3.8 * (1 - np.abs(x) / np.where(x > 0, 4.0, 2.8))\n y = 10 ** ly\n return x, y / y.sum() \n\nFLOOR['mag_errs_1p5'], FLOOR['mag_err_weights_1p5'] = calc_1p5_mag_err_weights()\n\nplt.semilogy(FLOOR['mag_errs_1p5'], FLOOR['mag_err_weights_1p5'])\nplt.grid()", "Global model for arbitrary mag, t_ccd, color, and halfwidth", "def floor_model_acq_prob(mag, t_ccd, color=0.6, halfwidth=120, probit=False):\n \"\"\"\n Acquisition 
probability model\n\n :param mag: Star magnitude(s)\n :param t_ccd: CCD temperature(s)\n :param color: Star color (compared to 1.5 to decide which p_fail model to use)\n :param halfwidth: Search box size (arcsec)\n :param probit: Return probit of failure probability\n \n :returns: acquisition failure probability\n \"\"\"\n\n parvals = FLOOR['fit_parvals']\n mag_bin_centers = FLOOR['mag_bin_centers']\n mag_errs_1p5 = FLOOR['mag_errs_1p5']\n mag_err_weights_1p5 = FLOOR['mag_err_weights_1p5']\n\n # Make sure inputs have right dimensions\n is_scalar, t_ccds, mags, halfwidths, colors = broadcast_arrays(t_ccd, mag, halfwidth, color)\n box_deltas = get_box_delta(halfwidths) \n\n p_fails = []\n for t_ccd, mag, box_delta, color in zip(t_ccds.flat, mags.flat, box_deltas.flat, colors.flat):\n if np.isclose(color, 1.5):\n pars_list = [[np.interp(mag + mag_err_1p5, mag_bin_centers, ps) for ps in parvals]\n for mag_err_1p5 in mag_errs_1p5]\n weights = mag_err_weights_1p5\n if probit:\n raise ValueError('cannot use probit=True with color=1.5 stars')\n else:\n pars_list = [[np.interp(mag, mag_bin_centers, ps) for ps in parvals]]\n weights = [1]\n\n pf = sum(weight * p_fail(pars, t_ccd, box_delta=box_delta, probit=probit)\n for pars, weight in zip(pars_list, weights))\n p_fails.append(pf)\n \n out = np.array(p_fails).reshape(t_ccds.shape)\n return out\n\nmags, t_ccds = np.mgrid[8.75:10.75:30j, -16:-4:30j]\nplt.figure(figsize=(13, 4))\nfor subplot, color in enumerate([1.0, 1.5]):\n plt.subplot(1, 2, subplot + 1)\n p_fails = floor_model_acq_prob(mags, t_ccds, probit=False, color=color)\n\n cs = plt.contour(t_ccds, mags, p_fails, levels=[0.05, 0.1, 0.2, 0.5, 0.75, 0.9], \n colors=['g', 'g', 'b', 'c', 'm', 'r'])\n plt.clabel(cs, inline=1, fontsize=10)\n plt.grid()\n plt.xlim(-17, -4)\n plt.ylim(8.5, 11.0)\n plt.xlabel('T_ccd (degC)')\n plt.ylabel('Mag_ACA')\n plt.title(f'Failure probability color={color}');\n\nmags = np.linspace(8, 11, 301)\nplt.figure()\nfor t_ccd in np.arange(-16, 
-0.9, 1):\n p_fails = floor_model_acq_prob(mags, t_ccd, probit=True)\n plt.plot(mags, p_fails)\nplt.grid()\nplt.xlim(8, 11)", "Compare to flight data for halfwidth=120\nSelecting only data with halfwidth=120 is a clean, model-independent way to\ncompare the model to raw flight statistics.\nSetup functions to get appropriate data", "# NOTE this is in chandra_aca.star_probs as of version 4.27\n\nfrom scipy.stats import binom\n\ndef binom_ppf(k, n, conf, n_sample=1000):\n \"\"\"\n Compute percent point function (inverse of CDF) for binomial, where\n the percentage is with respect to the \"p\" (binomial probability) parameter\n not the \"k\" parameter.\n \n The following example returns the 1-sigma (0.17 - 0.84) confidence interval\n on the true binomial probability for an experiment with 4 successes in 5 trials.\n \n Example::\n \n >>> binom_ppf(4, 5, [0.17, 0.84])\n array([ 0.55463945, 0.87748177])\n \n :param k: int, number of successes (0 < k <= n)\n :param n: int, number of trials\n :param conf: float, array of floats, percent point values\n :param n_sample: number of PMF samples for interpolation\n \n :return: percent point function values corresponding to ``conf``\n \"\"\"\n ps = np.linspace(0, 1, n_sample)\n vals = binom.pmf(k=k, n=n, p=ps)\n return np.interp(conf, xp=np.cumsum(vals) / np.sum(vals), fp=ps)\n\nbinom_ppf(4, 5, [0.17, 0.84])\n\nn = 156\nk = 127\nbinom_ppf(k, n, [0.17, 0.84])\n\ndef calc_binned_pfail(data_all, mag, dmag, t_ccd, dt, halfwidth=120):\n da = data_all[~data_all['asvt'] & (data_all['halfwidth'] == halfwidth)]\n fail = da['fail'].astype(bool)\n ok = (np.abs(da['mag_aca'] - mag) < dmag) & (np.abs(da['t_ccd'] - t_ccd) < dt)\n n_fail = np.count_nonzero(fail[ok])\n n_acq = np.count_nonzero(ok)\n p_fail = n_fail / n_acq\n p_fail_lower, p_fail_upper = binom_ppf(n_fail, n_acq, [0.17, 0.84])\n mean_t_ccd = np.mean(da['t_ccd'][ok])\n mean_mag = np.mean(da['mag_aca'][ok])\n return p_fail, p_fail_lower, p_fail_upper, mean_t_ccd, mean_mag, n_fail, 
n_acq\n\nhalfwidth = 120\npfs_list = []\nfor mag in (10.0, 10.3, 10.55):\n pfs = []\n for t_ccd in np.linspace(-15, -10, 6):\n pf = calc_binned_pfail(data_all, mag, 0.2, t_ccd, 0.5, halfwidth=halfwidth)\n pfs.append(pf)\n print(f'mag={mag} mean_mag_aca={pf[4]:.2f} t_ccd={pf[3]:.2f} p_fail={pf[-2]}/{pf[-1]}={pf[0]:.2f}')\n pfs_list.append(pfs)", "Compare model to flight for color != 1.5 stars", "def plot_floor_and_flight(color, halfwidth=120):\n\n # This computes probabilities for 120 arcsec boxes, corresponding to raw data\n t_ccds = np.linspace(-16, -6, 20)\n mag_acas = np.array([9.5, 10.0, 10.25, 10.5, 10.75])\n\n for ii, mag_aca in enumerate(reversed(mag_acas)):\n flight_probs = 1 - acq_success_prob(date='2018-05-01T00:00:00', \n t_ccd=t_ccds, mag=mag_aca, color=color, halfwidth=halfwidth)\n new_probs = floor_model_acq_prob(mag_aca, t_ccds, color=color, halfwidth=halfwidth)\n plt.plot(t_ccds, flight_probs, '--', color=f'C{ii}')\n plt.plot(t_ccds, new_probs, '-', color=f'C{ii}', label=f'mag_aca={mag_aca}')\n\n if color != 1.5:\n # pf1, pf2 have p_fail, p_fail_lower, p_fail_upper, mean_t_ccd, mean_mag_aca, n_fail, n_acq\n for pfs, clr in zip(pfs_list, ('C3', 'C2', 'C1')):\n for pf in pfs:\n yerr = np.array([pf[0] - pf[1], pf[2] - pf[0]]).reshape(2, 1)\n plt.errorbar(pf[3], pf[0], xerr=0.5, yerr=yerr, color=clr)\n\n # plt.xlim(-16, None)\n plt.legend()\n plt.xlabel('T_ccd')\n plt.ylabel('P_fail')\n plt.title(f'P_fail (color={color}): new (solid) and flight (dashed)')\n plt.grid()\n\nplot_floor_and_flight(color=1.0)", "Compare model to flight for color = 1.5 stars", "plt.figure(figsize=(13, 4))\nplt.subplot(1, 2, 1)\nfor m0, m1, color in [(9, 9.5, 'C0'), (9.5, 10, 'C1'), (10, 10.3, 'C2'), (10.3, 10.7, 'C3')]:\n ok = data_all['red_mag_err'] & mag_filter(m0, m1) & t_ccd_filter(-16, -10)\n data = data_all[ok]\n data['model'] = floor_model_acq_prob(data['mag_aca'], data['t_ccd'], color=1.5, halfwidth=data['halfwidth'])\n plot_fit_grouped(data, 't_ccd', 2.0, \n 
probit=False, colors=[color, color], label=f'{m0}-{m1}')\nplt.ylim(0, 1.0)\nplt.legend(fontsize='small')\nplt.grid()\nplt.xlabel('T_ccd')\nplt.title('COLOR1=1.5 acquisition probabilities')\n\nplt.subplot(1, 2, 2)\nplot_floor_and_flight(color=1.5)", "Write model as a 3-d grid to a gzipped FITS file", "def write_model_as_fits(model_name,\n comment=None,\n mag0=5, mag1=12, n_mag=141, # 0.05 mag spacing\n t_ccd0=-16, t_ccd1=-1, n_t_ccd=31, # 0.5 degC spacing\n halfw0=60, halfw1=180, n_halfw=7, # 20 arcsec spacing\n ):\n from astropy.io import fits\n \n mags = np.linspace(mag0, mag1, n_mag)\n t_ccds = np.linspace(t_ccd0, t_ccd1, n_t_ccd)\n halfws = np.linspace(halfw0, halfw1, n_halfw)\n mag, t_ccd, halfw = np.meshgrid(mags, t_ccds, halfws, indexing='ij')\n\n print('Computing probs, stand by...')\n \n # COLOR = 1.5 (stars with poor mag estimates)\n p_fails = floor_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=False, color=1.5)\n p_fails_probit_1p5 = stats.norm.ppf(p_fails)\n\n # COLOR not 1.5 (most stars)\n p_fails_probit = floor_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=True, color=1.0)\n \n hdu = fits.PrimaryHDU()\n if comment:\n hdu.header['comment'] = comment\n hdu.header['date'] = DateTime().fits\n hdu.header['mdl_name'] = model_name\n hdu.header['mag_lo'] = mags[0]\n hdu.header['mag_hi'] = mags[-1]\n hdu.header['mag_n'] = len(mags)\n hdu.header['t_ccd_lo'] = t_ccds[0]\n hdu.header['t_ccd_hi'] = t_ccds[-1]\n hdu.header['t_ccd_n'] = len(t_ccds)\n hdu.header['halfw_lo'] = halfws[0]\n hdu.header['halfw_hi'] = halfws[-1]\n hdu.header['halfw_n'] = len(halfws)\n\n hdu1 = fits.ImageHDU(p_fails_probit.astype(np.float32))\n hdu1.header['comment'] = 'COLOR1 != 1.5 (good mag estimates)'\n \n hdu2 = fits.ImageHDU(p_fails_probit_1p5.astype(np.float32))\n hdu2.header['comment'] = 'COLOR1 == 1.5 (poor mag estimates)'\n\n hdus = fits.HDUList([hdu, hdu1, hdu2])\n hdus.writeto(f'{model_name}.fits.gz', overwrite=True)\n\ncomment = f'Created with 
fit_acq_model-{MODEL_DATE}-binned-poly-binom-floor.ipynb in aca_stats repository'\nwrite_model_as_fits(MODEL_NAME, comment=comment)\n\n# Fudge the chandra_aca.star_probs global STAR_PROBS_DATA_DIR temporarily\n# in order to load the dev model that was just created locally\n_dir_orig = star_probs.STAR_PROBS_DATA_DIR\nstar_probs.STAR_PROBS_DATA_DIR = '.'\ngrid_model_acq_prob(model=MODEL_NAME)\nstar_probs.STAR_PROBS_DATA_DIR = _dir_orig\n\n# Remake standard plot comparing grouped data to model, but now use\n# chandra_aca.star_probs grid_model_acq_prob function with the newly\n# generated 3-d FITS model that we just loaded.\n\ncolors = [f'kC{i}' for i in range(9)]\n\nplt.figure(figsize=(13, 4))\nfor subplot in (1, 2):\n plt.subplot(1, 2, subplot)\n probit = (subplot == 2)\n for m0_m1, color, mask, mag_mean in zip(list(fits), colors, masks, mag_means):\n fit = fits[m0_m1]\n data = data_all[mask]\n data['model'] = 1 - grid_model_acq_prob(data['mag_aca'], data['t_ccd'],\n halfwidth=data['halfwidth'],\n model=MODEL_NAME)\n plot_fit_grouped(data, 't_ccd', 2.0, \n probit=probit, colors=[color, color], label=str(mag_mean))\n plt.grid()\n if probit:\n plt.ylim(-3.5, 2.5)\n plt.ylabel('Probit(p_fail)' if probit else 'p_fail')\n plt.xlabel('T_ccd');\n plt.legend(fontsize='small')\n\n# Check chandra_aca implementation vs. native model from this notebook\nmags = np.linspace(5, 12, 40)\nt_ccds = np.linspace(-16, -1, 40)\nhalfws = np.linspace(60, 180, 7)\nmag, t_ccd, halfw = np.meshgrid(mags, t_ccds, halfws, indexing='ij')\n\n# First color != 1.5\n# Notebook\nnb_probs = floor_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=True, color=1.0)\n# Chandra_aca. 
Note that grid_model returns p_success, so need to negate it.\nca_probs = -grid_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=True, color=1.0, \n model=MODEL_NAME)\n\nassert nb_probs.shape == ca_probs.shape\nprint('Max difference is {:.3f}'.format(np.max(np.abs(nb_probs - ca_probs))))\nassert np.allclose(nb_probs, ca_probs, rtol=0, atol=0.1)\n\n\nd_probs = (nb_probs - ca_probs)[:, :, 3]\nplt.imshow(d_probs, origin='lower', extent=[-16, -1, 5, 12], aspect='auto', cmap='jet')\nplt.colorbar();\nplt.title('Delta between probit p_fail: analytical vs. gridded');\n\nmags = np.linspace(8, 11, 200)\nplt.figure()\nfor ii, t_ccd in enumerate(np.arange(-16, -0.9, 2)):\n p_fails = floor_model_acq_prob(mags, t_ccd, probit=True)\n plt.plot(mags, p_fails, color=f'C{ii}')\n p_success = grid_model_acq_prob(mags, t_ccd, probit=True, model=MODEL_NAME)\n plt.plot(mags, -p_success, color=f'C{ii}')\nplt.grid()\nplt.xlim(8, 11)", "Generate regression data for chandra_aca\nThe real testing is done here with a copy of the functions from chandra_aca, but\nnow generate some regression test data as a smoke test that things are working\non all platforms.", "mags = [9, 9.5, 10.5]\nt_ccds = [-10, -5]\nhalfws = [60, 120, 160]\nmag, t_ccd, halfw = np.meshgrid(mags, t_ccds, halfws, indexing='ij')\n\nprobs = floor_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=True, color=1.0)\nprint(repr(probs.round(3).flatten()))\n\nprobs = floor_model_acq_prob(mag, t_ccd, halfwidth=halfw, probit=False, color=1.5)\nprobs = stats.norm.ppf(probs)\nprint(repr(probs.round(3).flatten()))" ]
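The heart of the notebook above is the quadratic-in-t_ccd failure model with a probit-space floor. As a minimal self-contained sketch of that idea (the coefficient values below are illustrative placeholders, not the fitted flight values):

```python
import numpy as np
from scipy import stats

def p_fail_sketch(pars, t_ccd, box_delta=0.0, probit=False):
    """Quadratic failure model in rescaled t_ccd with a probit-space floor.

    pars = (p0, p1, p2, floor). t_ccd is rescaled to roughly -1..1,
    mirroring t_ccd_normed() in the notebook; the upper clip at +10
    avoids probabilities that are exactly 1.0 at 64-bit precision.
    """
    p0, p1, p2, floor = pars
    tc = (np.asarray(t_ccd, dtype=float) + 8.0) / 8.0
    probit_p = np.clip(p0 + p1 * tc + p2 * tc ** 2 + box_delta, floor, 10.0)
    return probit_p if probit else stats.norm.cdf(probit_p)

pars = (-2.6, 2.5, -0.3, -2.6)     # illustrative (p0, p1, p2, floor)
cold = p_fail_sketch(pars, -16.0)  # floor applies: cold CCDs still fail occasionally
warm = p_fail_sketch(pars, -1.0)   # quadratic term dominates at warm temperatures
```

The floor is what keeps `cold` from going to zero: with these placeholder coefficients the raw probit value at t_ccd = -16 (-5.4) is clipped up to the floor (-2.6), so the failure probability never drops below roughly half a percent, matching the "even an arbitrarily cold CCD still fails occasionally" behavior described in the first markdown cell.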
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
SylvainCorlay/bqplot
examples/Interactions/Mark Interactions.ipynb
apache-2.0
[ "from __future__ import print_function\nfrom bqplot import *\nimport numpy as np\nimport pandas as pd\nfrom ipywidgets import Layout, Dropdown, Button\nfrom ipywidgets import Image as ImageIpy", "Scatter Chart\nScatter Chart Selections\nClick a point on the Scatter plot to select it. Now, run the cell below to check the selection. After you've done this, try holding the ctrl (or command key on Mac) and clicking another point. Clicking the background will reset the selection.", "x_sc = LinearScale()\ny_sc = LinearScale()\n\nx_data = np.arange(20)\ny_data = np.random.randn(20)\n\nscatter_chart = Scatter(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, colors=['dodgerblue'],\n interactions={'click': 'select'},\n selected_style={'opacity': 1.0, 'fill': 'DarkOrange', 'stroke': 'Red'},\n unselected_style={'opacity': 0.5})\n\nax_x = Axis(scale=x_sc)\nax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')\n\nFigure(marks=[scatter_chart], axes=[ax_x, ax_y])\n\nscatter_chart.selected", "Alternately, the selected attribute can be directly set on the Python side (try running the cell below):", "scatter_chart.selected = [1, 2, 3]", "Scatter Chart Interactions and Tooltips", "x_sc = LinearScale()\ny_sc = LinearScale()\n\nx_data = np.arange(20)\ny_data = np.random.randn(20)\n\ndd = Dropdown(options=['First', 'Second', 'Third', 'Fourth'])\nscatter_chart = Scatter(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, colors=['dodgerblue'],\n names=np.arange(100, 200), names_unique=False, display_names=False, display_legend=True,\n labels=['Blue'])\nins = Button(icon='fa-legal')\nscatter_chart.tooltip = ins\nline = Lines(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, colors=['dodgerblue'])\nscatter_chart2 = Scatter(x=x_data, y=np.random.randn(20), \n scales= {'x': x_sc, 'y': y_sc}, colors=['orangered'],\n tooltip=dd, names=np.arange(100, 200), names_unique=False, display_names=False, \n display_legend=True, labels=['Red'])\n\nax_x = Axis(scale=x_sc)\nax_y = 
Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')\n\nfig = Figure(marks=[scatter_chart, scatter_chart2, line], axes=[ax_x, ax_y])\nfig\n\ndef print_event(self, target):\n    print(target)\n\n# Adding call back to scatter events\n# print custom message on hover and background click of Blue Scatter\nscatter_chart.on_hover(print_event)\nscatter_chart.on_background_click(print_event)\n\n# print custom message on click of an element or legend of Red Scatter\nscatter_chart2.on_element_click(print_event)\nscatter_chart2.on_legend_click(print_event)\nline.on_element_click(print_event)\n\n# Changing interaction from hover to click for tooltip\nscatter_chart.interactions = {'click': 'tooltip'}\n\n# Adding figure as tooltip\nx_sc = LinearScale()\ny_sc = LinearScale()\n\nx_data = np.arange(10)\ny_data = np.random.randn(10)\n\nlc = Lines(x=x_data, y=y_data, scales={'x': x_sc, 'y':y_sc})\nax_x = Axis(scale=x_sc)\nax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')\ntooltip_fig = Figure(marks=[lc], axes=[ax_x, ax_y], layout=Layout(min_width='600px'))\n\nscatter_chart.tooltip = tooltip_fig", "Image\nFor images, on_element_click returns the location of the mouse click.", "import os\n\ni = ImageIpy.from_file(os.path.abspath('../data_files/trees.jpg'))\nbqi = Image(image=i, scales={'x': x_sc, 'y': y_sc}, x=(0, 10), y=(-1, 1))\n\nfig_image = Figure(marks=[bqi], axes=[ax_x, ax_y])\nfig_image\n\nbqi.on_element_click(print_event)", "Line Chart", "# Adding default tooltip to Line Chart\nx_sc = LinearScale()\ny_sc = LinearScale()\n\nx_data = np.arange(100)\ny_data = np.random.randn(3, 100)\n\ndef_tt = Tooltip(fields=['name', 'index'], formats=['', '.2f'], labels=['id', 'line_num'])\nline_chart = Lines(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, \n tooltip=def_tt, display_legend=True, labels=[\"line 1\", \"line 2\", \"line 3\"] )\n\nax_x = Axis(scale=x_sc)\nax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')\n\nFigure(marks=[line_chart], axes=[ax_x, ax_y])\n\n# 
Adding call back to print event when legend or the line is clicked\nline_chart.on_legend_click(print_event)\nline_chart.on_element_click(print_event)", "Bar Chart", "# Adding interaction to select bar on click for Bar Chart\nx_sc = OrdinalScale()\ny_sc = LinearScale()\n\nx_data = np.arange(10)\ny_data = np.random.randn(2, 10)\n\nbar_chart = Bars(x=x_data, y=[y_data[0, :].tolist(), y_data[1, :].tolist()], scales= {'x': x_sc, 'y': y_sc},\n interactions={'click': 'select'},\n selected_style={'stroke': 'orange', 'fill': 'red'},\n labels=['Level 1', 'Level 2'],\n display_legend=True)\nax_x = Axis(scale=x_sc)\nax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')\n\nFigure(marks=[bar_chart], axes=[ax_x, ax_y])\n\n# Adding a tooltip on hover in addition to select on click\ndef_tt = Tooltip(fields=['x', 'y'], formats=['', '.2f'])\nbar_chart.tooltip=def_tt\nbar_chart.interactions = {\n 'legend_hover': 'highlight_axes',\n 'hover': 'tooltip', \n 'click': 'select',\n}\n\n# Changing tooltip to be on click\nbar_chart.interactions = {'click': 'tooltip'}\n\n# Call back on legend being clicked\nbar_chart.type='grouped'\nbar_chart.on_legend_click(print_event)", "Histogram", "# Adding tooltip for Histogram\nx_sc = LinearScale()\ny_sc = LinearScale()\n\nsample_data = np.random.randn(100)\n\ndef_tt = Tooltip(formats=['', '.2f'], fields=['count', 'midpoint'])\nhist = Hist(sample=sample_data, scales= {'sample': x_sc, 'count': y_sc},\n tooltip=def_tt, display_legend=True, labels=['Test Hist'], select_bars=True)\nax_x = Axis(scale=x_sc, tick_format='0.2f')\nax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f')\n\nFigure(marks=[hist], axes=[ax_x, ax_y])\n\n# Changing tooltip to be displayed on click\nhist.interactions = {'click': 'tooltip'}\n\n# Changing tooltip to be on click of legend\nhist.interactions = {'legend_click': 'tooltip'}", "Pie Chart\nSet up a pie chart with click to show the tooltip.", "pie_data = np.abs(np.random.randn(10))\n\nsc = 
ColorScale(scheme='Reds')\ntooltip_widget = Tooltip(fields=['size', 'index', 'color'], formats=['0.2f', '', '0.2f'])\npie = Pie(sizes=pie_data, scales={'color': sc}, color=np.random.randn(10), \n tooltip=tooltip_widget, interactions = {'click': 'tooltip'}, selected_style={'fill': 'red'})\n\npie.selected_style = {\"opacity\": \"1\", \"stroke\": \"white\", \"stroke-width\": \"2\"}\npie.unselected_style = {\"opacity\": \"0.2\"}\n\nFigure(marks=[pie])\n\n# Changing interaction to select on click and tooltip on hover\npie.interactions = {'click': 'select', 'hover': 'tooltip'}" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
JelleAalbers/xeshape
notebooks/extraction/extract_s1s.ipynb
mit
[ "import numpy as np\nimport pandas as pd\nfrom tqdm import tqdm\nfrom multihist import Histdd, Hist1d\n\nimport matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport hax\nfrom hax import cuts\nhax.init(pax_version_policy='6.8',\n minitree_paths=['./sr1_s1shape_minitrees/', \n '/project2/lgrandi/xenon1t/minitrees/pax_v6.8.0/',\n '/project/lgrandi/xenon1t/minitrees/pax_v6.8.0/'])\n\nfrom pax import units, configuration\npax_config = configuration.load_configuration('XENON1T')\ntpc_r = pax_config['DEFAULT']['tpc_radius']\ntpc_z = -pax_config['DEFAULT']['tpc_length']", "Select clean 83mKr events\nKR83m cuts similar to Adam's note: \nhttps://github.com/XENON1T/FirstResults/blob/master/PositionReconstructionSignalCorrections/S2map/s2-correction-xy-kr83m-fit-in-bins.ipynb\n\nValid second interaction\nTime between S1s in [0.6, 2] $\\mu s$\nz in [-90, -5] cm", "# Get SR1 krypton datasets\ndsets = hax.runs.datasets\ndsets = dsets[dsets['source__type'] == 'Kr83m']\ndsets = dsets[dsets['trigger__events_built'] > 10000] # Want a lot of Kr, not diffusion mode \ndsets = hax.runs.tags_selection(dsets, include='sciencerun0')\n\n# Sample ten datasets randomly (with fixed seed, so the analysis is reproducible)\ndsets = dsets.sample(10, random_state=0)\ndsets.number.values\n\n# Suppress rootpy warning about root2rec.. too lazy to fix. 
\nimport warnings\nwith warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n \n data = hax.minitrees.load(dsets.number, \n 'Basics DoubleScatter Corrections'.split(),\n num_workers=5,\n preselection=['int_b_x>-60.0',\n '600 < s1_b_center_time - s1_a_center_time < 2000',\n '-90 < z < -5'])", "Get S1s from these events", "from hax.treemakers.peak_treemakers import PeakExtractor\n\ndt = 10 * units.ns\nwv_length = pax_config['BasicProperties.SumWaveformProperties']['peak_waveform_length']\nwaveform_ts = np.arange(-wv_length/2, wv_length/2 + 0.1, dt)\n\nclass GetS1s(PeakExtractor):\n __version__ = '0.0.1'\n uses_arrays = True\n # (don't actually need all properties, but useful to check if there's some problem)\n peak_fields = ['area', 'range_50p_area', 'area_fraction_top', \n 'n_contributing_channels', 'left', 'hit_time_std', 'n_hits',\n 'type', 'detector', 'center_time', 'index_of_maximum',\n 'sum_waveform',\n ]\n peak_cut_list = ['detector == \"tpc\"', 'type == \"s1\"']\n \n def get_data(self, dataset, event_list=None):\n # Get the event list from the dataframe selected above\n event_list = data[data['run_number'] == hax.runs.get_run_number(dataset)]['event_number'].values\n \n return PeakExtractor.get_data(self, dataset, event_list=event_list)\n \n def extract_data(self, event):\n peak_data = PeakExtractor.extract_data(self, event)\n \n # Convert sum waveforms from arcane pyroot buffer type to proper numpy arrays\n for p in peak_data:\n p['sum_waveform'] = np.array(list(p['sum_waveform']))\n \n return peak_data\n\ns1s = hax.minitrees.load(dsets.number, GetS1s, num_workers=5)", "Save to disk\nPandas object array is very memory-inefficient. Takes about 25 MB/dataset to store it in this format (even compressed). If we wanted to extract more than O(10) datasets we'd get into trouble already at the extraction stage.\nThe least we can do is convert to a sensible format (waveform matrix, ordinary dataframe) now. 
Unfortunately dataframe retains 'object' mark even after deleting sum waveform column. Converting to and from a record array removes this.", "waveforms = np.vstack(s1s['sum_waveform'].values)\ndel s1s['sum_waveform']\ns1s = pd.DataFrame(s1s.to_records())", "Merge with the per-event data (which is useful e.g. for making position-dependent selections)", "merged_data = hax.minitrees._merge_minitrees(s1s, data)\ndel merged_data['index']\n\nnp.savez_compressed('sr0_kr_s1s.npz', waveforms=waveforms)\nmerged_data.to_hdf('sr0_kr_s1s.hdf5', 'data')", "Quick look", "len(s1s)\n\nfrom pax import units\nplt.hist(s1s.left * 10 * units.ns / units.ms, bins=np.linspace(0, 2.5, 100));\nplt.yscale('log')", "S1 is usually at trigger.", "plt.hist(s1s.area, bins=np.logspace(0, 3, 100));\nplt.axvline(35, color='r')\nplt.yscale('log')\nplt.xscale('log')\n\nnp.sum(s1s['area'] > 35)/len(s1s)", "Single electron contamination is not so severe." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
MatthewDaws/OSMDigest
notebooks/Geopandas.ipynb
mit
[ "# Allow to import without installing\nimport sys\nsys.path.insert(0, \"..\")\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nimport numpy as np\nimport pandas as pd\nimport geopandas as gpd", "How to build a GeoDataFrame\nWe firstly explore how to do this by using the GeoJSON schema.\n\nSee https://gist.github.com/sgillies/2217756 for the \"__geo_interface__\".\nBut this basically copies GeoJSON, for which see https://tools.ietf.org/html/rfc7946\n\nIt's then as simple as this...", "point_features = [{\"geometry\": {\n \"type\": \"Point\",\n \"coordinates\": [102.0, 0.5]\n },\n \"properties\": {\n \"prop0\": \"value0\", \"prop1\": \"value1\"\n }\n }]\n\npoint_data = gpd.GeoDataFrame.from_features(point_features)\npoint_data\n\npoint_data.ix[0].geometry\n\nline_features = [{\"geometry\": {\n \"type\": \"LineString\",\n \"coordinates\": [[102.0, 0.5], [104, 3], [103, 2]]\n },\n \"properties\": {\n \"prop3\": \"value3\"\n }\n }]\n\nline_data = gpd.GeoDataFrame.from_features(line_features)\nline_data\n\nline_data.ix[0].geometry\n\npolygon_features = [{\"geometry\": {\n \"type\": \"Polygon\",\n \"coordinates\": [[[102.0, 0.5], [104, 3], [102, 2], [102,0.5]]]\n },\n \"properties\": {\n \"prop4\": \"value4\", \"prop1\": \"value1\"\n }\n }]\n\ndata = gpd.GeoDataFrame.from_features(polygon_features)\ndata\n\ndata.plot()\n\ndata.ix[0].geometry\n\nfeatures = []\nfeatures.extend(point_features)\nfeatures.extend(line_features)\nfeatures.extend(polygon_features)\ngpd.GeoDataFrame.from_features(features)", "Notes\nSome things that jumped out at me as I read the GeoJSON spec:\n\nCoordinates are always in the order: longitude, latitude.\nA \"Polygon\" is allowed to contain holes. The \"outer\" edge should be ordered counter-clockwise, and each \"inner\" edge (i.e. a \"hole\") should be clockwise.\nIf a polygon contains more than one array of points, then the first array is the outer edge, and the rest inner edges.\nLines crossing the anti-meridian need to be split. 
(I wonder what OSM does?)\n\nVia using shapely\nUnder the hood, geopandas uses the shapely library, and we can alternatively build data frames by directly building shapely objects.", "type(point_data.geometry[0]), type(line_data.geometry[0]), type(data.geometry[0])\n\nimport shapely.geometry\n\npts = shapely.geometry.LineString([shapely.geometry.Point(0,0), shapely.geometry.Point(1,0), shapely.geometry.Point(1,1)])\ndf = gpd.GeoDataFrame({\"geometry\": [pts], \"key1\":[\"value1\"], \"key2\":[\"value2\"]})\ndf\n\ndf.ix[0].geometry", "Support in the library", "import osmdigest.geometry as geometry\nimport osmdigest.sqlite as sq\n\nimport os\nfilename = os.path.join(\"..\", \"..\", \"..\", \"Data\", \"california-latest.db\")\n\ndb = sq.OSM_SQLite(filename)\nway = db.complete_way(33088737)\nseries = geometry.geoseries_from_way(way)\nseries\n\ngpd.GeoDataFrame(series).T.plot()\n\nway = db.complete_way(285549437)\nseries = geometry.geoseries_from_way(way)\nseries\n\ndf = gpd.GeoDataFrame(series).T\ndf\n\ndf.plot()", "For relations\nWe can build a geo data frame with the raw data from a relation.", "relation = db.complete_relation(2866485)\ngeometry.geodataframe_from_relation(relation)\n\ngeometry.geodataframe_from_relation( db.complete_relation(63222) )", "Looking at relations\nThese are harder to compute automatically, because the exact interpretation of the sub-elements depends upon context. However, most relations which have \"interesting\" geometry (as opposed to giving contextual information on other elements) are of \"multi-polygon\" type, and can be recognised by the presence of ways with the \"role\" of \"inner\" or \"outer\".\nI found that using the shapely library itself was the easiest way to convert the geometry.\nThere are some cases of geometry which shapely cannot handle. 
For example:\n- http://www.openstreetmap.org/relation/70986 (A lot of self-intersection, I think).\n- http://www.openstreetmap.org/relation/184199 (Ditto).\n- http://www.openstreetmap.org/relation/1483140 (Adjoining polygons).", "gen = db.relations()\nfor _ in range(15):\n next(gen)\n\nrelation = next(gen)\nprint(relation)\nseries = geometry.geoseries_from_relation(db.complete_relation(relation))\nseries\n\ngpd.GeoDataFrame(series).T.plot()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/starthinker
colabs/bigquery_query.ipynb
apache-2.0
[ "BigQuery Query To Table\nSave query results into a BigQuery table.\nLicense\nCopyright 2020 Google LLC,\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nDisclaimer\nThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.\nThis code was generated (see starthinker/scripts for possible source):\n - Command: \"python starthinker_ui/manage.py colab\"\n - Command: \"python starthinker/tools/colab.py [JSON RECIPE]\"\n1. Install Dependencies\nFirst install the libraries needed to execute recipes; this only needs to be done once, then click play.", "!pip install git+https://github.com/google/starthinker\n", "2. Set Configuration\nThis code is required to initialize the project. 
Fill in required fields and press play.\n\nIf the recipe uses a Google Cloud Project:\n\nSet the configuration project value to the project identifier from these instructions.\n\n\nIf the recipe has auth set to user:\n\nIf you have user credentials:\nSet the configuration user value to your user credentials JSON.\n\n\n\nIf you DO NOT have user credentials:\n\nSet the configuration client value to downloaded client credentials.\n\n\n\nIf the recipe has auth set to service:\n\nSet the configuration service value to downloaded service credentials.", "from starthinker.util.configuration import Configuration\n\n\nCONFIG = Configuration(\n project=\"\",\n client={},\n service={},\n user=\"/content/user.json\",\n verbose=True\n)\n\n", "3. Enter BigQuery Query To Table Recipe Parameters\n\nSpecify a single query and choose legacy or standard mode.\nFor PLX use user authentication and: SELECT * FROM [plx.google:FULL_TABLE_NAME.all] WHERE...\nEvery time the query runs it will overwrite the table.\nModify the values below for your use case, can be done multiple times, then click play.", "FIELDS = {\n 'auth_write':'service', # Credentials used for writing data.\n 'query':'', # SQL with newlines and all.\n 'dataset':'', # Existing BigQuery dataset.\n 'table':'', # Table to create from this query.\n 'legacy':True, # Query type must match source tables.\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n", "4. 
Execute BigQuery Query To Table\nThis does NOT need to be modified unless you are changing the recipe, click play.", "from starthinker.util.configuration import execute\nfrom starthinker.util.recipe import json_set_fields\n\nTASKS = [\n {\n 'bigquery':{\n 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},\n 'from':{\n 'query':{'field':{'name':'query','kind':'text','order':1,'default':'','description':'SQL with newlines and all.'}},\n 'legacy':{'field':{'name':'legacy','kind':'boolean','order':4,'default':True,'description':'Query type must match source tables.'}}\n },\n 'to':{\n 'dataset':{'field':{'name':'dataset','kind':'string','order':2,'default':'','description':'Existing BigQuery dataset.'}},\n 'table':{'field':{'name':'table','kind':'string','order':3,'default':'','description':'Table to create from this query.'}}\n }\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\n\nexecute(CONFIG, TASKS, force=True)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Fifth-Cohort-Awesome/NightThree
three_agd.ipynb
mit
[ "Goal One\nIngest a csv file as pure text... (just 500 chars)", "\nwith open('tmdb_5000_movies.csv','r') as f:\n    rtext=''\n    for line in f:\n        rtext += line\nrtext[:500]", "Then as a list of lines... (just one line)", "with open('tmdb_5000_movies.csv','r') as f:\n    lines = [line for line in f]\nlines[0]", "Then as a data frame... (just Avatar)", "import pandas as pd\ndf = pd.read_csv(\"tmdb_5000_movies.csv\")\ndf.query('id == 19995')", "Goal Two\nRight now, the file is in a 'narrow' format. In other words, several interesting bits are collapsed into a single field. Let's attempt to make the data frame a 'wide' format, with all the collapsed items expanded horizontally.\nReferences:\nhttps://www.kaggle.com/fabiendaniel/film-recommendation-engine\nhttp://www.jeannicholashould.com/tidy-data-in-python.html", "import json\nimport pandas as pd\nimport numpy as np\n\ndf = pd.read_csv(\"tmdb_5000_movies.csv\")\n\n#convert to json\njson_columns = ['genres', 'keywords', 'production_countries',\n                'production_companies', 'spoken_languages']\nfor column in json_columns:\n    df[column] = df[column].apply(json.loads)\n\n\ndef get_unique_inner_json(feature):\n    tmp = []\n    for i, row in df[feature].iteritems():\n        for x in range(0,len(df[feature].iloc[i])):\n            tmp.append(df[feature].iloc[i][x]['name'])\n\n    unique_values = set(tmp)\n    return unique_values\n\ndef widen_data(df, feature):\n    unique_json = get_unique_inner_json(feature)\n    \n    tmp = []\n    #rearrange genres\n    for i, row in df.iterrows():\n        for x in range(0,len(row[feature])):\n            for val in unique_json:\n                if row[feature][x]['name'] == val:\n                    row[val] = 1\n        \n        tmp.append(row)\n    \n    new_df = pd.DataFrame(tmp)\n    new_df[list(unique_json)] = new_df[list(unique_json)].fillna(value=0)\n    return new_df\n\ngenres_arranged_df = widen_data(df, \"genres\")\ngenres_arranged_df[list(get_unique_inner_json(\"genres\"))] = genres_arranged_df[list(get_unique_inner_json(\"genres\"))].astype(int)\n\n\n\ngenres_arranged_df.query('title == \"Avatar\"')", "Goal 
Three", "genres_long_df = pd.melt(genres_arranged_df, id_vars=df.columns, value_vars=get_unique_inner_json(\"genres\"), var_name=\"genre\", value_name=\"genre_val\")\ngenres_long_df = genres_long_df[genres_long_df['genre_val'] == 1]\ngenres_long_df.query('title == \"Avatar\"')\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
karst87/ml
dev/pyml/2001_使用sklearn做单机特征工程.ipynb
mit
[ "Single-machine feature engineering with sklearn\nhttp://www.cnblogs.com/jasonfreak/p/5448385.html\n1 What is feature engineering?\n2 Data preprocessing\n 2.1 Making features dimensionless\n 2.1.1 Standardization\n 2.1.2 Interval scaling\n 2.1.3 The difference between standardization and normalization\n 2.2 Binarizing quantitative features\n 2.3 Dummy encoding of qualitative features\n 2.4 Missing value imputation\n 2.5 Data transformation\n 2.6 Review\n3 Feature selection\n 3.1 Filter\n 3.1.1 Variance threshold method\n 3.1.2 Correlation coefficient method\n 3.1.3 Chi-squared test\n 3.1.4 Mutual information method\n 3.2 Wrapper\n 3.2.1 Recursive feature elimination\n 3.3 Embedded\n 3.3.1 Penalty-based feature selection\n 3.3.2 Tree-model-based feature selection\n 3.4 Review\n4 Dimensionality reduction\n 4.1 Principal component analysis (PCA)\n 4.2 Linear discriminant analysis (LDA)\n 4.3 Review\n5 Summary\n\n1 What is feature engineering?\nA saying circulates widely in the industry: data and features determine the upper limit of machine learning, while models and algorithms merely approach that limit. So what exactly is feature engineering? As the name suggests, it is essentially an engineering activity whose goal is to extract as much information as possible from raw data as features for algorithms and models to use. By way of summary, feature engineering is generally considered to cover the following aspects: (see images/2001_1.png)\n\nFeature processing is the core part of feature engineering. sklearn provides a fairly complete set of feature-processing methods, including data preprocessing, feature selection and dimensionality reduction. On first contact with sklearn, one is usually attracted by its rich and convenient library of algorithms and models, but the feature-processing library introduced here is also very powerful!\n\nThis article uses the IRIS (iris flower) dataset from sklearn to illustrate the feature-processing functionality. The IRIS dataset was compiled by Fisher in 1936 and contains 4 features (Sepal.Length (sepal length), Sepal.Width (sepal width), Petal.Length (petal length), Petal.Width (petal width)); all feature values are positive floating-point numbers measured in centimeters. The target value is the iris species (Iris Setosa, Iris Versicolour, Iris Virginica). The code to import the IRIS dataset is as follows:", "from sklearn.datasets import load_iris\n\n# Import the IRIS dataset\niris = load_iris()\n\n# Feature matrix\niris.data\n\n# Target vector\niris.target", "2 Data preprocessing\nThrough feature extraction we obtain unprocessed features, which may suffer from the following problems:\n\nFeatures are not on the same scale: their ranges differ, so they cannot be compared directly. Making the features dimensionless solves this problem.\n\nInformation redundancy: for some quantitative features, the useful information is an interval partition. For example, for exam scores we may only care about \"pass\" or \"fail\", so the quantitative score must be converted into \"1\" and \"0\" for pass and fail. Binarization solves this problem.\n\nQualitative features cannot be used directly: some machine-learning algorithms and models only accept quantitative features as input, so qualitative features must be converted into quantitative ones. The simplest way is to assign a quantitative value to each qualitative value, but this is too arbitrary and increases the tuning effort. Usually dummy encoding is used instead: given N qualitative values, the feature is expanded into N features; when the original value is the i-th qualitative value, the i-th expanded feature is set to 1 and the others to 0. Compared with direct assignment, dummy encoding requires no extra tuning, and for a linear model it can achieve a nonlinear effect.\n\nMissing values: missing values need to be filled in.\n\nLow information utilization: different machine-learning algorithms and models exploit the information in the data differently. As mentioned above, in linear models dummy encoding of qualitative features can achieve a nonlinear effect. Similarly, polynomial expansion of quantitative variables, or other transformations, can also achieve nonlinear effects.\n\nWe use sklearn's preprocessing library for data preprocessing, which covers solutions to all of the problems above.\n\n2.1 Making features dimensionless\nThis converts data of different scales to a common scale. Common methods are standardization and interval scaling. Standardization assumes the feature values follow a normal distribution; after standardization they follow a standard normal distribution. Interval scaling uses boundary-value information to scale the feature's value range to a specific range, e.g. [0, 1].\n\n2.1.1 Standardization\nStandardization requires computing the mean and standard deviation of each feature; the formula is:\nx' = (x - mean) / std\n\nThe code to standardize the data with the preprocessing library's StandardScaler class is as follows:", "from sklearn.preprocessing import StandardScaler\n\n# 
Standardization; returns the standardized data\nStandardScaler().fit_transform(iris.data)", "2.1.2 Interval scaling\nThere are several approaches to interval scaling; a common one uses the minimum and maximum for scaling, with the formula:\nx' = (x - min) / (max - min)\n\nThe code to interval-scale the data with the preprocessing library's MinMaxScaler class is as follows:", "from sklearn.preprocessing import MinMaxScaler\n\n# Interval scaling; returns the data scaled to the interval [0, 1]\nMinMaxScaler().fit_transform(iris.data)", "2.1.3 The difference between standardization and normalization\nSimply put, standardization processes the data by the columns of the feature matrix: by computing z-scores it brings the samples' feature values onto a common scale. Normalization processes the data by the rows of the feature matrix: its purpose is to give the sample vectors a common standard when computing similarity via dot products or other kernel functions, i.e. they are all converted into \"unit vectors\". The normalization formula with the l2 rule is:\nx' = x / ((sum(x[j] ^ 2)) ^ 0.5)\n\nThe code to normalize the data with the preprocessing library's Normalizer class is as follows:", "from sklearn.preprocessing import Normalizer\n\n# Normalization; returns the normalized data\nNormalizer().fit_transform(iris.data)", "2.2 Binarizing quantitative features\nThe core of binarizing a quantitative feature is setting a threshold: values above the threshold become 1, values at or below it become 0. The formula is:\nx = 1 if x > threshold else 0\n\nThe code to binarize the data with the preprocessing library's Binarizer class is as follows:", "from sklearn.preprocessing import Binarizer\n\n# Binarization with the threshold set to 3; returns the binarized data\nBinarizer(threshold=3).fit_transform(iris.data)", "2.3 Dummy encoding of qualitative features\nSince the features of the IRIS dataset are all quantitative, we dummy-encode its target values instead (not actually needed in practice). The code to dummy-encode the data with the preprocessing library's OneHotEncoder class is as follows:", "from sklearn.preprocessing import OneHotEncoder\n\n# Dummy encoding of the data's target values; returns the dummy-encoded data\nOneHotEncoder().fit_transform(iris.target.reshape((-1,1)))", "2.4 Missing value imputation\nSince the IRIS dataset has no missing values, we add one sample to the dataset with all 4 features set to NaN to represent missing data. The code to impute missing values with the preprocessing library's Imputer class is as follows:", "import numpy as np\nfrom sklearn.preprocessing import Imputer\n\n# Missing value imputation; returns the data with missing values filled in\n# The parameter missing_values is the representation of a missing value, NaN by default\n# The parameter strategy is the fill method, mean by default\nImputer().fit_transform(\\\n    np.vstack((np.array([np.nan, np.nan, np.nan, np.nan]),iris.data)))", "2.5 Data transformation\nCommon data transformations are polynomial-based, exponential-based and logarithm-based. For 4 features, the degree-2 polynomial transformation formula is:\n(x1',x2',x3',...,xn')\n=(1, x1, x2, ..., xn, x1^2, x1*x2, ..., xn^2)\n\nThe code to apply a polynomial transformation with the preprocessing library's PolynomialFeatures class is as follows:", "from sklearn.preprocessing import PolynomialFeatures\n\n# Polynomial transformation\n# The parameter degree is the polynomial degree, 2 by default\nPolynomialFeatures().fit_transform(iris.data)", "Data transformations based on univariate functions can be done in a unified way. The code to apply a logarithmic transformation with the preprocessing library's FunctionTransformer is as follows:", "from sklearn.preprocessing import 
FunctionTransformer\n\n# Data transformation with a custom function: the logarithm\n# The first argument is the univariate function\nFunctionTransformer(np.log1p).fit_transform(iris.data)", "2.6 Review\nClass  Function  Description\nStandardScaler  dimensionless scaling  Standardization: based on the columns of the feature matrix, converts feature values to follow a standard normal distribution\nMinMaxScaler  dimensionless scaling  Interval scaling: based on the minimum and maximum, converts feature values to the interval [0, 1]\nNormalizer  normalization  Based on the rows of the feature matrix, converts sample vectors into \"unit vectors\"\nBinarizer  binarization  Splits a quantitative feature at a given threshold\nOneHotEncoder  dummy encoding  Encodes qualitative data as quantitative data\nImputer  missing value imputation  Computes missing values; they can be filled with the mean, etc.\nPolynomialFeatures  polynomial transformation  Polynomial data transformation\nFunctionTransformer  custom univariate transformation  Transforms the data with a univariate function\n\n3 Feature selection\nOnce data preprocessing is done, we need to select meaningful features to feed into machine-learning algorithms and models for training. Generally, features are selected from two points of view:\n\nWhether the feature varies: if a feature does not vary, e.g. its variance is close to 0, the samples show essentially no difference on this feature, and it is useless for distinguishing samples.\n\nCorrelation between the feature and the target: obviously, features highly correlated with the target should be preferred. Except for the variance method, all methods introduced here consider correlation.\n\nBy the form of feature selection, the methods can be divided into 3 kinds:\nFilter: filtering methods score each feature by variability or correlation, then select features by setting a threshold or the number of features to keep.\n\nWrapper: wrapper methods select or exclude several features at a time according to an objective function (usually a predictive performance score).\n\nEmbedded: embedded methods first train some machine-learning algorithm or model to obtain weight coefficients for each feature, then select features by these coefficients from large to small. Similar to Filter methods, but the merit of each feature is determined by training.\n\nWe use sklearn's feature_selection library for feature selection.\n\n3.1 Filter\n3.1.1 Variance threshold method\nWith the variance threshold method, we first compute the variance of each feature, then select the features whose variance exceeds a threshold. The code to select features with the feature_selection library's VarianceThreshold class is as follows:", "from sklearn.feature_selection import VarianceThreshold\n\n# Variance threshold method; returns the data after feature selection\n# The parameter threshold is the variance threshold\nVarianceThreshold(threshold=3).fit_transform(iris.data)", "3.1.2 Correlation coefficient method\nWith the correlation coefficient method, we first compute each feature's correlation coefficient with the target values, along with its P-value. The code to select features with the feature_selection library's SelectKBest class combined with correlation coefficients is as follows:", "from sklearn.feature_selection import SelectKBest\nfrom scipy.stats import pearsonr\n\n# Select the K best features; returns the data after feature selection\n# The first argument is a function that evaluates how good the features are: it takes the\n# feature matrix and the target vector and outputs an array of pairs (score, P-value),\n# whose i-th item is the score and P-value of the i-th feature.\n# Here it is defined to compute correlation coefficients\n# The parameter k is the number of features to select\nSelectKBest(lambda X, Y: tuple(map(tuple,np.array(list(map(lambda x:pearsonr(x, Y), X.T))).T)), k=2).fit_transform(iris.data, iris.target)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Olsthoorn/TransientGroundwaterFlow
exercises_notebooks/TransientFlowToAWell.ipynb
gpl-3.0
[ "Transient flow to a well\nThe Theis' well function (a well in a confined aquifer)\nThe Theis well function is perhaps the most famous, most often used, and most practical analytical solution in groundwater science. It describes the transient flow to a fully penetrating well in a confined aquifer after the well starts pumping at time zero. The solution is also used for unconfined flow, but then it is an approximation that is good as long as the thickness of the aquifer does not change substantially, not more than 20%, say, from its initial value.\n\nFigure: The situation considered by Theis (confined aquifer)\n\nFigure: The situation considered by Theis (unconfined aquifer, s<<h)\nIn cases with wells that are only partially penetrating the aquifer, we can add the influence of that separately as we will see.\nAlthough the solution was derived for a uniform and unchanging ambient groundwater head, it can still be applied in much more general situations, because we can use superposition, that is, we can add the influence of different and independent actors that change the groundwater level in space and/or in time separately. Therefore, if we can, with a solution like that of Theis, compute the effect of a single well everywhere in the aquifer at any time, we can do so for an arbitrary number of wells, simply, by adding their individual effects. 
Not only this, we can also superimpose other effects that are not due to wells, if we have their analytical solution available.\nGoverning partial differential equation solved by Theis\nTheis solved the following partial differential equation\n\nFigure: Situation to derive the partial differential equation\nContinuity for a ring of width $dr$ at radius $r$, see figure, yields:\n$$ \\frac {\\partial Q} {\\partial r} = \\frac \\partial {\\partial r} \\left(-2 \\pi r kD \\frac {\\partial \\phi} {\\partial r} \\right)= - 2 \\pi r S \\frac {\\partial \\phi} {\\partial t} $$\nFor convenience, use drawdown $s$ instead of head $\\phi$\n$$ s = \\phi_0 - \\phi $$\n$$ \\frac {\\partial} {\\partial r} \\left( 2 \\pi r kD \\frac {\\partial s} {\\partial r} \\right) = 2\\pi r S \\frac {\\partial s} {\\partial t} $$\n$$ kD \\frac {\\partial} {\\partial r} \\left( r \\frac {\\partial s} {\\partial r} \\right) = r S \\frac {\\partial s} {\\partial t} $$\nWhich yields the governing partial differential equation for transient horizontal flow to a well that starts pumping at a fixed flow $Q_0$ at $t=0$:\n$$\\frac 1 r \\frac {\\partial s} {\\partial r} + \\frac {\\partial^2 s} {\\partial r^2} = \\frac S {kD} \\frac {\\partial s} {\\partial t}$$\nWhich was solved by Theis (1935) subject to the initial condition $s(r,0) = 0$ and boundary conditions $s(\\infty, t)=0$ and $2\\pi r kD \\frac{\\partial s}{\\partial r} = Q_0$ for $r \\rightarrow 0$. 
(This solution can be readily obtained by means of the Laplace transform).\nThe drawdown according to Theis is mathematically described, by hydrologists, as\n$$ s = \\frac Q {4 \\pi kD} W \\left( \\frac {r^2 S} {4 kD t} \\right) $$\nWhere lowercase $s$ [L] is the transient drawdown of the groundwater head due to the well, $Q$ [L3/T] is the well extraction, $kD$ [L2/T] the transmissivity of the aquifer, $S$ [-] the storage coefficient of the aquifer, $r$ [L] the distance to the well center and $t$ [T] time since the well was switched on.\n$W(u)$ is the so-called Theis well function, which is a function of only one dimensionless parameter, $u$, that is a combination of $r$, $t$, $S$ and $kD$ as shown.\nThe name Well Function was given by C.V. Theis (1930). The well function turned out to be a regular mathematical function that was already available under the name exponential integral at the time that Theis developed his formula. Its form is:\n$$ W \\left( u \\right) = Ei \\left( u \\right) = \\intop_u^\\infty \\frac {e^{-y}} y dy $$\nThe function has been tabulated in many books on groundwater hydrology and pumping test analysis, among which the book\nKruseman, G.P. and N.A. de Ridder (1994) Analysis of Pumping Test Data. ILRI publication 47, Wageningen, The Netherlands, 1970 to 1994. ISBN 90 70754 207.\nThe 2000 printing of the book is available on the internet: KrdR 2000\nFor verification of self-implemented well functions here is the table of its values from page 294 of the mentioned book:\n\nHow to get the well function?\nIn the past we used to look up the well function in a table like the one given. Nowadays, with computing power everywhere, we only use such tables to verify our version of the function when we programmed it ourselves or use one from a scientific library. This is what we'll do here as well.\nOne way is to see if the function is already available on our computer. Well, if you have Maple, Matlab or Python.scipy it is in one form or another. 
If you don't know where, then searching the internet is always a good start.\nThis shows that we have to look for the function expi in the module scipy.special", "import numpy as np\nfrom scipy.special import expi\n\n#help(expi) # remove the first # to show the help for the function expi", "This reveals that we have the function\n$$ expi(u) = \\intop_{-\\infty}^u \\frac {e^y} y dy $$\nBy just changing the sign of y to -y we obtain\n$$ W(u) = \\intop_u^\\infty \\frac {e^{-y}} y dy = - \\intop_{y = -\\infty}^{y = -u} \\frac {e^{y}} y dy $$\nRenaming the integration variable $y$ as $\\xi$, $W(u)$ becomes\n$$ W(u) = - \\intop_{\\xi = -\\infty}^{\\xi = -u} \\frac {e^{\\xi}} \\xi d \\xi = - expi(-u) $$\nSo that\n$$ W(u) = -expi(-u) $$\naccording to the definition used in scipy.special.expi.\nNotice that different libraries and books may define the exponential integral differently. The famous `Abramowitz M & Stegun, I (1964) Handbook of Mathematical Functions. Dover`, for example, defines the exponential integral exactly as the Theis well function.\n\nWe can readily check the expi function using the table from Kruseman and De Ridder (2000) p294 that was referenced above. Verifying for example the values for u = 4, 0.4, 0.04, 0.004 etc. down to $4 \\cdot 10^{-10}$ can be done as follows:", "u = 4 * 10** -np.arange(11.) # generates values 4, 4e-1, 4e-2 .. 4e-10\nprint(\"{:>10s} {:>10s}\".format('u ', 'wu '))\nfor u, wu in zip(u, -expi(-u)): # makes a list of value pairs [u, W(u)]\n    print(\"{0:10.1e} {1:10.4e}\".format(u, wu))", "which is equal to the values in the table.\nIt's now convenient to use the familiar form W(u) instead of -expi(-u).\nWe can define a function for W either as an anonymous function or a regular function. Anonymous functions are called lambda functions or lambda expressions in Python. 
In this case:", "from scipy.special import expi\nW = lambda u : -expi(-u)", "Or, alternatively as a regular one-line function:", "def W(u): return -expi(-u)", "or in full, so that we don't need the import above and we directly see where the function comes from:", "import scipy.special\nW = lambda u: -scipy.special.expi( -u ) # Theis well function", "Now we can put this well function immediately to use for answering practical questions. For example: what is the drawdown after $t=1\\,d$ at distance $r=350 \\, m$ by a well extracting $Q = 2400\\, m^3/d$ in a confined aquifer with transmissivity $kD = 2400\\, m^2/d$ and storage coefficient $S=0.001$ [-] ?", "r = 350; t = 1.; kD=2400; S=0.001; Q=2400\nu = r**2 * S / (4 * kD * t)\n\ns = Q/(4 * np.pi * kD) * W(u) # applying the Theis well function according to the book\n\nprint(\" r = {} m\\n\\\n t = {} d\\n\\\n kD = {} m2/d\\n\\\n S = {} [-]\\n\\\n Q = {} m3/d\\n\\\n u = {:.5g} [-]\\n\\\n W(u) = {:.5g} [-]\\n\\\n s(r, t) = {:.5g} m\".\n format(r, t, kD, S, Q, u, W(u), s))", "Above we computed $u$ separately to prevent cluttering the expression. Of course, you can define a lambda or regular function to compute it, like so", "u = lambda r, t: r**2 * S / (4 * kD * t)", "The lambda function $u$ now takes two parameters like $u(r,t)$ and uses the other parameters $S$ and $kD$, which it looks up in the workspace every time it is called. 
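Note that a Python lambda looks up free variables such as $S$ and $kD$ at call time, not at definition time. The sketch below demonstrates this late binding and shows how default arguments can freeze the values instead (the numbers are the ones used above):

```python
kD, S = 2400.0, 0.001                        # transmissivity [m2/d], storage coefficient [-]

u = lambda r, t: r**2 * S / (4 * kD * t)     # reads S and kD from the workspace at call time

u1 = u(350.0, 1.0)                           # evaluated with S = 0.001

S = 0.002                                    # change S in the workspace ...
u2 = u(350.0, 1.0)                           # ... and u silently returns twice the old value

# default arguments are evaluated once, at definition time, which freezes the values
S = 0.001
u_frozen = lambda r, t, S=S, kD=kD: r**2 * S / (4 * kD * t)
S = 0.002
u3 = u_frozen(350.0, 1.0)                    # still uses S = 0.001

print(u1, u2, u3)
```

Freezing parameters as default arguments is a common defensive idiom in notebooks, where cells may be re-run in any order.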
So don't change $S$ and $kD$ afterwards, or $u(r,t)$ will silently use the new values.\nTry this out:", "u(r,t) # yields u as a function of r and t\n\nW(u(r,t)) # gives W(u) as a function of r and t\n\nQ/(4 * np.pi * kD) * W(u(r,t)) # gives the drawdown that we had before", "It's now straightforward to compute the drawdown for many times like so:", "t = np.logspace(-3, 2, 51) # gives 51 times on log scale between 10^(-3) = 0.001 and 10^(2) = 100", "This gives the following times:", "for it, tt in enumerate(t):\n if it % 10 == 0: print()\n print(\"%8.3g\" % tt, end=\" \")", "With these times we can compute the drawdown for all of them in one go, without changing anything in our formula:", "s = Q / (4 * np.pi * kD) * W(u(r,t)) # computes s(r,t)\ns # shows s(r,t)", "For a nicer overview, print t and s next to each other:", "print(\"{:>10s} {:>10s}\".format('time', 'drawdown'))\nfor tt, ss in zip(t, s):\n print(\"{0:10.3g} {1:10.3g}\".format(tt,ss))", "And of course we can make a plot of these results:", "import matplotlib.pyplot as plt # imports plot functions (matlab style)\n\nfig = plt.figure()\n\n# Drawdown versus log(t)\nax1 = fig.add_subplot(121)\nax1.set(xlabel='time [d]', ylabel='drawdown [m]', xscale='log', title='Drawdown versus log(t)')\nax1.invert_yaxis()\nax1.grid(True)\nplt.plot(t, s)\n\n# Drawdown versus t\nax2 = fig.add_subplot(122)\nax2.set(xlabel='time [d]', ylabel='', xscale='linear', title='Drawdown versus t')\nax2.invert_yaxis()\nax2.grid(True)\nplt.plot(t, s)\n\nplt.show()", "Exercises\n\nShow the drawdown as a function of r instead of t, for t=2 d and r between 0.1 and 1000 m\nFor the 5 wells of which the locations and extractions are given below, show the combined drawdown for time between 0.01 and 10 days at x= 0 and y = 0.", "well_names = ['School', 'Lazaret', 'Square', 'Mosque', 'Water_company']\nQ = [400., 1200., 1150., 600., 1900]\nx = [-300., -250., 100., 55., 125.]\ny = [-450., +230., 50., -300., 250.]\nNwells = len(well_names)\nx0 = 0.\ny0 = 
0.\n\nt = np.logspace(-2, 2, 41)\ns = np.zeros((Nwells, len(t)))\nfor iw, Q0, xw, yw in zip(range(Nwells), Q, x, y):\n r = np.sqrt((xw-x0) ** 2 + (yw - y0) **2)\n s[iw,:] = Q0 / (4 * np.pi * kD) * W(u(r,t))\n \nfig = plt.figure()\nax = fig.add_subplot(111)\nax.set(xlabel='time [d]', ylabel='drawdown[m]', title='Drawdown due to multiple wells')\nax.invert_yaxis()\nax.grid(True)\nfor iw, name in zip(range(Nwells), well_names):\n ax.plot(t, s[iw,:], label=name)\nax.plot(t, np.sum(s, axis=0), label='total_drawdown')\nax.legend()\nplt.show()", "Show the drawdown for the case that the wells start at different times as given here:\n(hint: use tw = t - ts[iw] for each well and deal with times tw < 0). ts = [0., 5., 2., 8., 1.5]\nThere is a vertical impermeable boundary at x = -500 m. Show the head as a function of time at x = -10 m and y = 0 m for both the situation\nwith\nand without the boundary\n\n\nThere is a fixed-head boundary (e.g. a fully penetrating river) at x = -500 m. Show the head as a function of time at x = -10 m and y = 0 m for both the situation\nwith\nand without the boundary\n\n\nWith the fixed-head boundary, show the head as a function of time at x = -10 m and y = 0 m for all three cases, where time runs as in t = np.logspace(-2, 2, 41)\nShow the heads between r = -500 and r = 500 for time = 5 d for the three cases, that is\nwithout impermeable or fixed-head boundary at x = -500 m\nwith the impermeable boundary at x = -500 m\nwith the fixed-head boundary at x = -500 m.\n\n\nTwo vertical impermeable boundaries: Consider not only an impermeable boundary at x = -500 m but also one at y = +200 m. Then, using an extraction of Q = 1200 m3/d and times between 0.01 and 100 days, compute the drawdown as a function of time at point x = -100 m and y = +50 m.\nConsider a fixed-head boundary at x = -500 m and one at y = +200 m, with an extraction of Q = 1200 m3/d at x = -100 m and y = 50 m, compute the drawdown for times between 0.001 and 100 days.\nExtraction from two different aquifers: Compare two situations where both aquifers have the same transmissivity kD [m2/d] but different storage coefficients. The extraction from both aquifers is the same, Q = 1200 m3/d. Plot the drawdown as a function of time on a logarithmic scale and determine the shift of the two curves along the time axis. Give an explanation for that shift.\n\nimportant Don't forget to rerun the lambda expressions above if you change kD or S, or redefine them so that they take kD and/or S as input parameters." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
aflaxman/SmartVA-Analyze-Mapping-Example
01_example_mapping_in_python.ipynb
gpl-3.0
[ "import numpy as np, pandas as pd\n\n# load codebook\nfname = 'https://github.com/aflaxman/SmartVA-Analyze-Mapping-Example/raw/master/Guide%20for%20data%20entry.xlsx'\ncb = pd.read_excel(fname, index_col=2)\ncb.head()", "Minimal example\nGenerate a .csv file that is accepted as input to SmartVA-Analyze 1.1", "# SmartVA-Analyze 1.1 accepts a csv file as input\n# and expects a column for every field name in the \"Guide for data entry.xlsx\" spreadsheet\n\ndf = pd.DataFrame(index=[0], columns=cb.index.unique())\n\n# SmartVA-Analyze 1.1 also requires a handful of columns that are not in the Guide\ndf['child_3_10'] = np.nan\ndf['agedays'] = np.nan\ndf['child_5_7e'] = np.nan\ndf['child_5_6e'] = np.nan\ndf['adult_2_9a'] = np.nan\n\ndf.loc[0,'sid'] = 'example'\n\n# if we save this dataframe as a csv, we can run it through SmartVA-Analyze 1.1\n\nfname = 'example_1.csv'\ndf.to_csv(fname, index=False)\n\n# here are the results of running this example through SmartVA-Analyze 1.1\npd.read_csv('neonate-predictions.csv')", "Example of simple, hypothetical mapping\nIf we have data on a set of verbal autopsies (VAs) that did not use the PHMRC Shortened Questionnaire, we must map them to the expected format. 
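The core operation in any such mapping is recoding one questionnaire's answer codes into the codes the target format expects; with pandas this is typically done per column with `Series.map` and an explicit translation dictionary. A minimal sketch (the column name and codes are invented for illustration):

```python
import pandas as pd

# hypothetical source data that uses its own coding for sex
src = pd.DataFrame({'sex': ['M', 'F', 'M', 'X']})

# explicit translation dictionary: source code -> target code
sex_map = {'M': '1', 'F': '2'}
mapped = src['sex'].map(sex_map)

# codes missing from the dictionary become NaN, so they are easy to spot and fix
unmapped = src.loc[mapped.isnull(), 'sex'].unique()
print(mapped.tolist(), unmapped)
```

Checking for NaN after each `map` guards against source codes that were silently dropped from the translation table.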
This is a simple, hypothetical example for a set of VAs that asked only about injuries, hypertension, chest pain:", "hypothetical_data = pd.DataFrame(index=range(5))\n\nhypothetical_data['sex'] = ['M', 'M', 'F', 'M', 'F']\nhypothetical_data['age'] = [35, 45, 75, 67, 91]\n\nhypothetical_data['injury'] = ['rti', 'fall', '', '', '']\nhypothetical_data['heart_disease'] = ['N', 'N', 'Y', 'Y', 'Y']\nhypothetical_data['chest_pain'] = ['N', 'N', 'Y', 'N', '']\n\nhypothetical_data\n\n# SmartVA-Analyze 1.1 accepts a csv file as input\n# and expects a column for every field name in the \"Guide for data entry.xlsx\" spreadsheet\n\ndf = pd.DataFrame(index=hypothetical_data.index, columns=cb.index.unique())\n\n# SmartVA-Analyze 1.1 also requires a handful of columns that are not in the Guide\ndf['child_3_10'] = np.nan\ndf['agedays'] = np.nan\ndf['child_5_7e'] = np.nan\ndf['child_5_6e'] = np.nan\ndf['adult_2_9a'] = np.nan\n\n# to find the coding of specific variables, look in the Guide, and \n# as necessary refer to the numbers in paper form for the PHMRC Shortened Questionnaire\n# http://www.healthdata.org/sites/default/files/files/Tools/SmartVA/2015/PHMRC%20Shortened%20VAI_all-modules_2015.zip\n\n# set id\ndf['sid'] = hypothetical_data.index\n\n# set sex\ndf['gen_5_2'] = hypothetical_data['sex'].map({'M': '1', 'F': '2'})\n\n# set age\ndf['gen_5_4'] = 1 # units are years\ndf['gen_5_4a'] = hypothetical_data['age'].astype(int)\n\n\n# good place to save work and confirm that it runs through SmartVA\nfname = 'example_2.csv'\ndf.to_csv(fname, index=False)\n\n# here are the results of running this example\npd.read_csv('adult-predictions.csv')\n\n# map injuries to appropriate codes\n# suffered injury?\ndf['adult_5_1'] = hypothetical_data['injury'].map({'rti':'1', 'fall':'1', '':'0'})\n# injury type\ndf['adult_5_2'] = hypothetical_data['injury'].map({'rti':'1', 'fall':'2'})\n\n# _another_ good place to save work and confirm that it runs through SmartVA\nfname = 
'example_3.csv'\ndf.to_csv(fname, index=False)\n\n# here are the results of running this example\npd.read_csv('adult-predictions.csv')\n\n# map heart disease (to column adult_1_1i, see Guide)\ndf['adult_1_1i'] = hypothetical_data['heart_disease'].map({'Y':'1', 'N':'0'})\n\n# map chest pain (to column adult_2_43, see Guide)\ndf['adult_2_43'] = hypothetical_data['chest_pain'].map({'Y':'1', 'N':'0', '':'9'})\n\n# and that completes the work for a simple, hypothetical mapping\nfname = 'example_4.csv'\ndf.to_csv(fname, index=False)\n\n# have a look at the non-empty entries in the mapped database:\ndf.T.dropna()\n\n# here are the results of running this example\npd.read_csv('adult-predictions.csv')" ]
[ "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/ec-earth-consortium/cmip6/models/ec-earth3-aerchem/land.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: EC-EARTH-CONSORTIUM\nSource ID: EC-EARTH3-AERCHEM\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:59\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-aerchem', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Conservation Properties\n3. Key Properties --&gt; Timestepping Framework\n4. Key Properties --&gt; Software Properties\n5. Grid\n6. Grid --&gt; Horizontal\n7. Grid --&gt; Vertical\n8. Soil\n9. Soil --&gt; Soil Map\n10. Soil --&gt; Snow Free Albedo\n11. Soil --&gt; Hydrology\n12. Soil --&gt; Hydrology --&gt; Freezing\n13. Soil --&gt; Hydrology --&gt; Drainage\n14. Soil --&gt; Heat Treatment\n15. Snow\n16. Snow --&gt; Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --&gt; Vegetation\n21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\n22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\n23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\n24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\n25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\n26. 
Carbon Cycle --&gt; Litter\n27. Carbon Cycle --&gt; Soil\n28. Carbon Cycle --&gt; Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --&gt; Oceanic Discharge\n32. Lakes\n33. Lakes --&gt; Method\n34. Lakes --&gt; Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nFluxes exchanged with the atmopshere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. 
Atmospheric Coupling Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Conservation Properties\nTODO\n2.1. 
Energy\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Total Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --&gt; Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Organic Matter\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. Continuously Varying Soil Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. 
Soil Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --&gt; Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. 
Number Of Wavelength Bands\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --&gt; Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. 
Number Of Ground Water Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --&gt; Hydrology --&gt; Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. Ice Storage Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --&gt; Hydrology --&gt; Drainage\nTODO\n13.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --&gt; Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. Heat Storage\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Water Equivalent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --&gt; Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\n*If prognostic, specify the variables that snow albedo is a function of*", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. Vegetation Map\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. 
Leaf Area Index\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Treatment of vegetation biomass *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDescribe the treatment of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --&gt; Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. Forest Stand Dynamics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of forest stand dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\nTODO\n22.1. 
Maintenance Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for maintenance respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\nTODO\n23.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. 
Allocation Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\nTODO\n24.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\nTODO\n25.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --&gt; Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. 
Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --&gt; Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --&gt; Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the nitrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the river routing tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. 
Water Re Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupled to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --&gt; Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. 
Vertical Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --&gt; Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. 
Dynamic Lake Extent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --&gt; Wetlands\nTODO\n34.1. Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", 
"code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.17/_downloads/8b68ef11c9dcc68ed3cd0ccec9a41a34/plot_decoding_unsupervised_spatial_filter.ipynb
bsd-3-clause
[ "%matplotlib inline", "Analysis of evoked response using ICA and PCA reduction techniques\nThis example computes PCA and ICA of evoked or epochs data. Then the\nPCA / ICA components, a.k.a. spatial filters, are used to transform\nthe channel data to new sources / virtual channels. The output is\nvisualized on the average of all the epochs.", "# Authors: Jean-Remi King <jeanremi.king@gmail.com>\n# Asish Panda <asishrocks95@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.decoding import UnsupervisedSpatialFilter\n\nfrom sklearn.decomposition import PCA, FastICA\n\nprint(__doc__)\n\n# Preprocess data\ndata_path = sample.data_path()\n\n# Load and filter data, set up epochs\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\ntmin, tmax = -0.1, 0.3\nevent_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)\n\nraw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.filter(1, 20, fir_design='firwin')\nevents = mne.read_events(event_fname)\n\npicks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,\n exclude='bads')\n\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=False,\n picks=picks, baseline=None, preload=True,\n verbose=False)\n\nX = epochs.get_data()", "Transform data with PCA computed on the average, i.e. the evoked response", "pca = UnsupervisedSpatialFilter(PCA(30), average=False)\npca_data = pca.fit_transform(X)\nev = mne.EvokedArray(np.mean(pca_data, axis=0),\n mne.create_info(30, epochs.info['sfreq'],\n ch_types='eeg'), tmin=tmin)\nev.plot(show=False, window_title=\"PCA\", time_unit='s')", "Transform data with ICA computed on the raw epochs (no averaging)", "ica = UnsupervisedSpatialFilter(FastICA(30), average=False)\nica_data = ica.fit_transform(X)\nev1 = mne.EvokedArray(np.mean(ica_data, axis=0),\n mne.create_info(30, epochs.info['sfreq'],\n ch_types='eeg'), tmin=tmin)\nev1.plot(show=False, window_title='ICA', time_unit='s')\n\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
abhi1509/deep-learning
intro-to-tflearn/TFLearn_Digit_Recognition.ipynb
mit
[ "Handwritten Number Recognition with TFLearn and MNIST\nIn this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. \nThis kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.\nWe'll be using TFLearn, a high-level library built on top of TensorFlow, to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.", "# Import Numpy, TensorFlow, TFLearn, and MNIST data\nimport numpy as np\nimport tensorflow as tf\nimport tflearn\nimport tflearn.datasets.mnist as mnist", "Retrieving training and test data\nThe MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.\nEach MNIST data point has:\n1. an image of a handwritten digit and \n2. a corresponding label (a number 0-9 that identifies the image)\nWe'll call the images, which will be the input to our neural network, X and their corresponding labels Y.\nWe're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and one 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].\nFlattened data\nFor this example, we'll be using flattened data, or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values. 
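As a small sketch of this flattening step (using a stand-in array with made-up values rather than a real MNIST image):

```python
import numpy as np

# A stand-in 28x28 "image" (illustrative values, not real MNIST pixels)
image = np.arange(28 * 28, dtype=np.float32).reshape(28, 28)

# Flatten the 2D image into the 784-element vector the network expects
flat = image.reshape(784)
print(flat.shape)  # (784,)

# The reshape is lossless: restoring the 2D shape recovers the original image
restored = flat.reshape(28, 28)
print(np.array_equal(image, restored))  # True
```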
\nFlattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.", "# Retrieve the training and test data\ntrainX, trainY, testX, testY = mnist.load_data(one_hot=True)\n\nprint(trainX.shape)\nprint(trainY.shape)", "Visualize the training data\nProvided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.", "# Visualizing the data\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Function for displaying a training image by its index in the MNIST set\ndef show_digit(index):\n label = trainY[index].argmax(axis=0)\n # Reshape 784 array into 28x28 image\n image = trainX[index].reshape([28,28])\n plt.title('Training data, index: %d, Label: %d' % (index, label))\n plt.imshow(image, cmap='gray_r')\n plt.show()\n \n# Display a sample training image (here, index 5)\nshow_digit(5)", "Building the network\nTFLearn lets you build the network by defining the layers in that network. \nFor this example, you'll define:\n\nThe input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. \nHidden layers, which recognize patterns in data and connect the input to the output layer, and\nThe output layer, which defines how the network learns and outputs a label for a given image.\n\nLet's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,\nnet = tflearn.input_data([None, 100])\nwould create a network with 100 inputs. 
The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.\nAdding layers\nTo add new hidden layers, you use \nnet = tflearn.fully_connected(net, n_units, activation='ReLU')\nThis adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units). \nThen, to set how you train the network, use:\nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nAgain, this is passing in the network you've been building. The keywords: \n\noptimizer sets the training method, here stochastic gradient descent\nlearning_rate is the learning rate\nloss determines how the network error is calculated. In this example, with categorical cross-entropy.\n\nFinally, you put all this together to create the model with tflearn.DNN(net).\nExercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.\nHint: The final output layer must have 10 output nodes (one for each digit 0-9). 
It's also recommended to use a softmax activation layer as your final output layer.", "# Define the neural network\ndef build_model():\n # This resets all parameters and variables, leave this here\n tf.reset_default_graph()\n \n #### Your code ####\n # Include the input layer, hidden layer(s), and set how you want to train the model\n \n #Input layer\n net = tflearn.input_data([None, 784])#len(trainX[0])])\n \n #Hidden layers\n net = tflearn.fully_connected(net, 392, activation=\"ReLU\") #ReLU -> f(x)=max(x,0)\n net = tflearn.fully_connected(net, 196, activation=\"ReLU\") #ReLU -> f(x)=max(x,0)\n \n #Output layer\n net = tflearn.fully_connected(net, 10, activation=\"softmax\") #softmax turns the 10 outputs into class probabilities\n net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\n # This model assumes that your network is named \"net\" \n model = tflearn.DNN(net)\n return model\n\n# Build the model\nmodel = build_model()", "Training the network\nNow that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. \nToo few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!", "# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)", "Testing\nAfter you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.\nA good result will be higher than 95% accuracy. 
Some simple models have been known to get up to 99.7% accuracy!", "# Compare the labels that our model predicts with the actual labels\n\n# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.\npredictions = np.array(model.predict(testX)).argmax(axis=1)\n\n# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels\nactual = testY.argmax(axis=1)\ntest_accuracy = np.mean(predictions == actual, axis=0)\n\n# Print out the result\nprint(\"Test accuracy: \", test_accuracy)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
eneskemalergin/OldBlog
_oldnotebooks/Inferential_Statistics.ipynb
mit
[ "Inferential Statistics\nLet's say you have collected the height of 1,000 people living in Hong Kong. The mean of their height would be a descriptive statistic, but that sample mean alone does not tell us the average height of the whole of Hong Kong. Here, inferential statistics will help us in determining what the average height of the whole of Hong Kong would be, which is described in depth in this chapter.\n\nInferential statistics is all about describing the larger picture of the analysis with a limited set of data and deriving conclusions from it.\n\nDistribution Types\nNormal Distribution\n\nThe most common distribution.\nAlso known as the \"Gaussian curve\" or \"bell curve\".\nThe numbers in the plot are the standard deviation numbers from the mean, which is zero.\n\n\nA normal distribution from a binomial distribution:\nLet's take a coin and flip it. The probability of getting a head or a tail is 50%. If you take the same coin and flip it six times, the probability of getting a head three times can be computed using the following formula:\n$$\nP(x) = \frac{n!}{x!(n-x)!}p^{x}q^{n-x}\n$$\nIn the preceding formula, n is the number of times the coin is flipped, x is the number of successes desired, p is the probability of success, and q is (1 - p), which is the probability of failure.", "# Calling the binom module from scipy stats package\nfrom scipy.stats import binom \n# Plotting Function\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nx = list(range(7))\nn, p = 6, 0.5\nrv = binom(n, p)\nplt.vlines(x, 0, rv.pmf(x), colors='r', linestyles='-', lw=1, label='Probability')\nplt.legend(loc='best', frameon=False)\nplt.xlabel(\"No. of instances\")\nplt.ylabel(\"Probability\")\nplt.show()\n\nx = range(1001)\nn, p = 1000, 0.4\nrv = binom(n, p)\nplt.vlines(x,0,rv.pmf(x), colors='g', linestyles='-', lw=1, label='Probability')\nplt.legend(loc='best', frameon=True)\nplt.xlabel(\"No. of instances\")\nplt.ylabel(\"Probability\")\nplt.show()", "Poisson Distribution\n\nModels the number of independent occurrences within a fixed interval.\nUsed for count-based distributions.\n\n$$\nf(k;\lambda)=Pr(X = k)=\frac{\lambda^{k}e^{-\lambda}}{k!}\n$$\nHere, e is Euler's number, k is the number of occurrences for which the probability is going to be determined, and lambda is the mean number of occurrences.\nExample:\nLet's understand this with an example. The number of cars that pass through a bridge in an hour is 20. What would be the probability of 23 cars passing through the bridge in an hour?\n```Python\nfrom scipy.stats import poisson\nrv = poisson(20)\nrv.pmf(23)\nResult: 0.066881473662401172\n```\nWith the Poisson function, we define the mean value, which is 20 cars. The rv.pmf function gives the probability, which is around 6.7%, that 23 cars will pass the bridge.\nBernoulli Distribution\n\nAn experiment with two possible outcomes: success or failure.\n\nSuccess has a probability of p, and failure has a probability of 1 - p. A random variable that takes the value 1 in case of a success and 0 in case of a failure follows a Bernoulli distribution. The probability distribution function can be written as:\n$$\n P(n)=\begin{cases}1-p & for & n = 0\\p & for & n = 1\end{cases} \n$$\nIt can also be written like this:\n$$\nP(n)=p^n(1-p)^{1-n}\n$$\nThe distribution function can be written like this:\n$$\nD(n) = \begin{cases}1-p & for & n=0\\1 & for & n=1\end{cases}\n$$\nExample: Voting in an election is a good example of the Bernoulli distribution. 
A Bernoulli distribution can be generated using the bernoulli.rvs() function of the SciPy package.", "from scipy.stats import bernoulli\nbernoulli.rvs(0.7, size=100)", "z-score\n\nExpresses a value in terms of the number of standard deviations it lies from the mean.\n\n$$\nz = \frac{X - \mu}{\sigma}\n$$\nHere, X is the value in the distribution, μ is the mean of the distribution, and σ is the\nstandard deviation of the distribution.\nExample: A classroom has 60 students in it and they have just got their mathematics examination score. We simulate the score of these 60 students with a normal distribution using the following command:", "import numpy as np\nclass_score = np.random.normal(50, 10, 60).round()\n\nplt.hist(class_score, 30, normed=True) # Number of breaks is 30\nplt.show()", "The score of each student can be converted to a z-score using the following function:", "from scipy import stats\nstats.zscore(class_score)", "So, a student with a score of 60 out of 100 has a z-score of 1.334. To make more sense of the z-score, we'll use the standard normal table.\nThis table helps in determining the probability of a score.\nWe would like to know what the probability of getting a score above 60 would be.\nThe standard normal table can help us in determining the probability of the occurrence of the score, but we do not have to perform the cumbersome task of finding the value by looking through the table and finding the probability. This task is made simple by the cdf function, which is the cumulative distribution function:", "prob = 1 - stats.norm.cdf(1.334)\nprob", "The cdf function gives the probability of getting values up to the z-score of 1.334, and subtracting it from one gives us the probability of getting a z-score above it. In other words, 0.09 is the probability of getting marks above 60. 
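As a cross-check (a sketch, not part of the original text), SciPy's survival function sf gives the same upper-tail probability directly, since sf is defined as 1 - cdf:

```python
from scipy import stats

z = 1.334
# sf (survival function) is defined as 1 - cdf, so it gives the
# upper-tail probability without the manual subtraction
tail_from_cdf = 1 - stats.norm.cdf(z)
tail_from_sf = stats.norm.sf(z)
print(abs(tail_from_cdf - tail_from_sf) < 1e-12)  # True
```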
\nLet's ask another question, \"how many students made it to the top 20% of the class?\"\nNow, to get the z-score above which the top 20% of marks lie, we can use the ppf function (the inverse of the cdf) in SciPy:", "stats.norm.ppf(0.80)", "The preceding output shows that the top 20% of marks lie above a z-score of about 0.84. Converting this z-score back to a mark in the distribution is done as follows:", "(0.84 * class_score.std()) + class_score.mean()", "We multiply the z-score with the standard deviation and then add the result with the mean of the distribution. This helps in converting the z-score to a value in the distribution. The 55.83 marks means that students who have marks more than this are in the top 20% of the distribution.\nThe z-score is an essential concept in statistics, which is widely used. Now you can understand that it is basically used in standardizing any distribution so that it can be compared or inferences can be derived from it.\n### p-value\nA p-value is the probability of obtaining a result at least as extreme as the one observed, under the assumption that the null hypothesis is true.\nIf the p-value is equal to or less than the significance level (α), then the null hypothesis is inconsistent with the data and it needs to be rejected.\nLet's understand this concept with an example where the null hypothesis is that it is common for students to score 68 marks in mathematics.\nLet's define the significance level at 5%. 
If the p-value is less than 5%, then the null hypothesis is rejected and it is not common to score 68 marks in mathematics.\nLet's get the z-score of 68 marks:", "zscore = ( 68 - class_score.mean() ) / class_score.std()\nzscore", "", "prob = 1 - stats.norm.cdf(zscore)\nprob", "One-tailed and two-tailed tests\nThe example in the previous section was an instance of a one-tailed test where the null hypothesis is rejected or accepted based on one direction of the normal distribution.\nIn a two-tailed test, both the tails of the null hypothesis are used to test the hypothesis.\n\nIn a two-tailed test, when a significance level of 5% is used, then it is distributed equally in both directions, that is, 2.5% of it in one direction and 2.5% in the other direction.\nLet's understand this with an example. The mean score of the mathematics exam at a national level is 60 marks and the standard deviation is 3 marks.\nThe mean marks of a class are 53. The null hypothesis is that the mean marks of the class are similar to the national average. Let's test this hypothesis by first getting the z-score of the class mean of 53:", "zscore = (53-60)/3.0\nzscore\n\nprob = stats.norm.cdf(zscore)\nprob", "Type 1 and Type 2 errors\nA Type 1 error occurs when the null hypothesis is rejected even though it is actually true. This kind of error is also called an error of the first kind and is equivalent to a false positive.\n\nLet's understand this concept using an example. There is a new drug that is being developed and it needs to be tested on whether it is effective in combating diseases. The null hypothesis is that it is not effective in combating diseases.\nThe significance level is kept at 5% so that the null hypothesis can be accepted confidently 95% of the time. 
However, 5% of the time, we'll reject the null hypothesis even though it should have been accepted, which means that even though the drug is ineffective, it is assumed to be effective.\n\nThe Type 1 error is controlled by controlling the significance level, which is alpha. Alpha is the highest probability of having a Type 1 error. The lower the alpha, the lower the Type 1 error will be.\nThe Type 2 error is the kind of error that occurs when we do not reject a null hypothesis that is false. This error is also called the error of the second kind and is equivalent to a false negative.\n\nThis kind of error occurs in the drug scenario when the drug is assumed to be ineffective but it is actually effective.\nThese errors trade off against each other: if one of the errors is lowered, then the other one increases. Which error should be reduced depends on the use case and the problem statement that the analysis is trying to address. In the case of this drug scenario, typically, the Type 1 error should be lowered, because it is better to ship a drug that is confidently effective.\nConfidence Interval\nA confidence interval is a type of interval estimate for a population parameter. The confidence interval helps in determining the interval within which the population mean lies.\n\nLet's try to understand this concept by using an example. 
Let's take the height of men in Kenya and determine, with a 95% confidence interval, the average height of Kenyan men at a national level.\nLet's take 50 men and their height in centimeters:", "height_data = np.array([ 186.0, 180.0, 195.0, 189.0, 191.0,\n 177.0, 161.0, 177.0, 192.0, 182.0,\n 185.0, 192.0, 173.0, 172.0, 191.0, \n 184.0, 193.0, 182.0, 190.0, 185.0, \n 181.0,188.0, 179.0, 188.0, 170.0, 179.0, \n 180.0, 189.0, 188.0, 185.0, 170.0, \n 197.0, 187.0,182.0, 173.0, 179.0,184.0, \n 177.0, 190.0, 174.0, 203.0, 206.0, 173.0, \n 169.0, 178.0,201.0, 198.0, 166.0,171.0, 180.0])\n\nplt.hist(height_data, 30, normed=True, color='r')\nplt.show()\n\n# The mean of the distribution\nheight_data.mean()", "So, the average height of a man from the sample is 183.24 cm.\nTo determine the confidence interval, we'll now define the standard error of the mean.\nThe standard error of the mean is the deviation of the sample mean from the population mean. It is defined using the following formula:\n$$\nSE_{\\overline{x}} = \\frac{s}{\\sqrt{n}}\n$$\nHere, s is the standard deviation of the sample, and n is the number of elements of the sample.\nThis can be calculated using the sem() function of the SciPy package:", "stats.sem(height_data)", "So, there is a standard error of the mean of 1.38 cm. The lower and upper limits of the confidence interval can be determined by using the following formula:\n Upper/Lower limit = mean(height) + / - z * SEmean(x)\n\nFor the upper limit:\n 183.24 + (1.96 * 1.38) = 185.94\n\nFor the lower limit:\n 183.24 - (1.96 * 1.38) = 180.53\n\nA z-value of 1.96 covers the central 95% of the area under the normal distribution.\nWe can confidently say that the population mean lies between 180.53 cm and 185.94 cm of height.\nNew Example: Let's assume we take a sample of 50 people, record their height, and then repeat this process 30 times. 
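The limits worked out by hand can also be obtained in one call with SciPy's norm.interval. The sketch below uses a small hypothetical height sample rather than the full data, so the numbers will differ:

```python
import numpy as np
from scipy import stats

# Hypothetical heights in cm, for illustration only
heights = np.array([186.0, 180.0, 195.0, 189.0, 191.0,
                    177.0, 161.0, 177.0, 192.0, 182.0])

mean = heights.mean()
sem = stats.sem(heights)  # standard error of the mean, s / sqrt(n)

# 95% normal-approximation confidence interval for the population mean
low, high = stats.norm.interval(0.95, loc=mean, scale=sem)
print(low, high)
```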
We can then plot the averages of each sample and observe the distribution.", "average_height = []\nfor i in range(30):\n    # Create a sample of 50 with mean 183 and standard deviation 10\n    sample50 = np.random.normal(183, 10, 50).round()\n    # Append the mean of the sample of 50 to the average_height list\n    average_height.append(sample50.mean())\n\n# Plot it with 10 bars and normalization\nplt.hist(average_height, 10, normed=True)\nplt.show()", "You can observe that the mean ranges from 180 to 187 cm when we simulated the average height of 50 sampled men, taken 30 times.\nLet's see what happens when we sample 1000 men and repeat the process 30 times:", "average_height = []\nfor i in range(30):\n    # Create a sample of 1000 with mean 183 and standard deviation 10\n    sample1000 = np.random.normal(183, 10, 1000).round()\n    average_height.append(sample1000.mean())\n\nplt.hist(average_height, 10, normed=True)\nplt.show()", "As you can see, the height varies from 182.4 cm to 183.5 cm. What does this mean?\n\nIt means that as the sample size increases, the standard error of the mean decreases, which also means that the confidence interval becomes narrower, and we can state with greater certainty the interval in which the population mean lies.\nCorrelation\nIn statistics, correlation defines the similarity between two random variables. The most commonly used correlation is the Pearson correlation and it is defined by the following:\n$$\n\\rho_{X,Y} = \\frac{cov(X,Y)}{\\sigma_{x}\\sigma_{y}} = \\frac{E[(X - \\mu_{X})(Y - \\mu_{Y})]}{\\sigma_{x}\\sigma_{y}}\n$$\nThe preceding formula defines the Pearson correlation as the covariance between X and Y divided by the product of the standard deviations of X and Y, or, equivalently, as the expected value of the product of the deviations of X and Y from their means, divided by the product of the standard deviations of X and Y. Let's understand this with an example. 
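As a quick sanity check of this formula, the sketch below recomputes the Pearson correlation by hand on synthetic data and compares it with SciPy's pearsonr; the x and y values are made up for illustration:

```python
import numpy as np
from scipy import stats

# Synthetic, roughly linear data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.3])

# Covariance divided by the product of the standard deviations;
# population (1/n) forms are used throughout, so the 1/n factors cancel
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
r_manual = cov_xy / (x.std() * y.std())

r_scipy, _ = stats.pearsonr(x, y)
print(r_manual, r_scipy)
```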
Let's take the mileage and horsepower of various cars and see if there is a relation between the two. This can be achieved using the pearsonr function in the SciPy package:", "mpg = [21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.2, 17.8,\n 16.4, 17.3, 15.2, 10.4, 10.4, 14.7, 32.4, 30.4, 33.9, 21.5, 15.5,\n 15.2, 13.3, 19.2, 27.3, 26.0, 30.4, 15.8,19.7, 15.0, 21.4]\n\nhp = [110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180, 180, 180,\n 205, 215, 230, 66, 52, 65, 97, 150, 150, 245, 175, 66, 91, 113, 264,\n 175, 335, 109]\n\nstats.pearsonr(mpg,hp)", "The first value of the output gives the correlation between the horsepower and the mileage \nThe second value gives the p-value.\n\nSo, the first value tells us that it is highly negatively correlated and the p-value tells us that there is significant correlation between them:", "plt.scatter(mpg, hp, color='r')\nplt.show()", "Let's look into another correlation called the Spearman correlation. The Spearman correlation applies to the rank order of the values and so it provides a monotonic relation between the two distributions. It is useful for ordinal data (data that has an order, such as movie ratings or grades in class) and is not affected by outliers.\nLet's get the Spearman correlation between the miles per gallon and horsepower. 
This can be achieved using the spearmanr() function in the SciPy package:", "stats.spearmanr(mpg, hp)", "We can see that the Spearman correlation is -0.89 and the p-value is significant.\nLet's do an experiment in which we introduce a few outlier values in the data and see how the Pearson and Spearman correlations get affected:", "mpg = [21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8,\n 19.2, 17.8, 16.4, 17.3, 15.2, 10.4, 10.4, 14.7, 32.4, 30.4,\n 33.9, 21.5, 15.5, 15.2, 13.3, 19.2, 27.3, 26.0, 30.4, 15.8,\n 19.7, 15.0, 21.4, 120, 3]\nhp = [110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180,\n 180, 180, 205, 215, 230, 66, 52, 65, 97, 150, 150, 245,\n 175, 66, 91, 113, 264, 175, 335, 109, 30, 600]\n\nplt.scatter(mpg, hp)\nplt.show()", "From the plot, you can clearly make out the outlier values. Let's see how the correlations get affected, for both the Pearson and Spearman correlation:", "stats.pearsonr(mpg, hp)\n\nstats.spearmanr(mpg, hp)", "We can clearly see that the Pearson correlation has been drastically affected by the outliers, dropping in magnitude from a correlation of 0.89 to 0.47.\nThe Spearman correlation did not get affected much, as it is based on the rank order rather than the actual values in the data.\nZ-test vs T-test\nWe have already done a few Z-tests before, where we validated our null hypothesis.\n\nA T-distribution is similar to a Z-distribution: it is centered at zero and has a basic bell shape, but it is shorter and flatter around the center than the Z-distribution.\nThe T-distribution's standard deviation is usually proportionally larger than that of the Z-distribution, which is why you see fatter tails on each side.\nThe T-distribution is usually used to analyze the population when the sample is small.\nThe Z-test is used to compare the population mean against a sample, or to compare the population means of two distributions, with a sample size greater than 30. 
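A minimal sketch of the z-test mechanics, reusing the earlier class-average example (class mean 53, population mean 50, standard deviation 3; all illustrative numbers):

```python
from scipy import stats

pop_mean, pop_sd = 50.0, 3.0
class_mean = 53.0

# z-score of the observed class mean under the null hypothesis
z = (class_mean - pop_mean) / pop_sd

# Two-tailed p-value: the probability of a deviation at least this large
# in either direction
p = 2 * (1 - stats.norm.cdf(abs(z)))
print(z, p)
```

Since p is well above 0.05, the null hypothesis that the class matches the national average is not rejected.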
An example of a Z-test would be comparing the heights of men from different ethnicity groups.\nThe T-test is used to compare the population mean against a sample, or to compare the population means of two distributions, with a sample size less than 30 and when you don't know the population's standard deviation.\nLet's do a T-test on two classes that are given a mathematics test and have 10 students in each class.\nTo perform the T-test, we can use the ttest_ind() function in the SciPy package:", "class1_score = np.array([45.0, 40.0, 49.0, 52.0, 54.0, 64.0, 36.0, 41.0, 42.0, 34.0])\nclass2_score = np.array([75.0, 85.0, 53.0, 70.0, 72.0, 93.0, 61.0, 65.0, 65.0, 72.0])\nstats.ttest_ind(class1_score,class2_score)", "The first value in the output is the calculated t-statistic, whereas the second value is the p-value, which shows that the two distributions are not identical.\nThe F distribution\nThe F distribution is also known as Snedecor's F distribution or the Fisher–Snedecor distribution.\nAn f statistic is given by the following formula:\n$$\nf = \\frac{s_1^2/\\sigma_1^2}{s_2^2/\\sigma_2^2}\n$$\nHere, $s_1$ is the standard deviation of sample 1, which has size $n_1$; $s_2$ is the standard deviation of sample 2, which has size $n_2$; $\\sigma_1$ is the population standard deviation corresponding to sample 1; and $\\sigma_2$ is the population standard deviation corresponding to sample 2.\nThe distribution of all the possible values of the f statistic is called the F distribution. 
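A small sketch of how an f statistic arises, with made-up samples and assumed population variances of 1 (so the statistic reduces to a ratio of sample variances):

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(1)  # fixed seed, illustrative data
sample1 = rng.normal(0, 1, 21)  # n1 = 21, so d1 = 20 degrees of freedom
sample2 = rng.normal(0, 1, 16)  # n2 = 16, so d2 = 15 degrees of freedom

# With sigma1 = sigma2 = 1, the f statistic is the ratio of sample variances
f_stat = sample1.var(ddof=1) / sample2.var(ddof=1)

# Probability of an f statistic at least this large under F(20, 15)
p = stats.f.sf(f_stat, 20, 15)
print(f_stat, p)
```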
The d1 and d2 represent the degrees of freedom in the following chart:\n\nThe chi-square distribution\nThe chi-square statistic is defined by the following formula:\n$$\nX^2 = \\frac{(n-1)s^2}{\\sigma^2}\n$$\nHere, n is the size of the sample, s is the standard deviation of the sample, and σ is the standard deviation of the population.\nIf we repeatedly take samples and compute the chi-square statistic, then we can form a chi-square distribution, which is defined by the following probability density function:\n$$\nY = Y_0 * (X^2)^{(v/2-1)} * e^{-X^2/2}\n$$\nHere, $Y_0$ is a constant that depends on the number of degrees of freedom, $X^2$ is the chi-square statistic, $v = n - 1$ is the number of degrees of freedom, and e is a constant equal to the base of the natural logarithm system.\n$Y_0$ is defined so that the area under the chi-square curve is equal to one.\n\nThe chi-square test can be used to test whether the observed data differs significantly from the expected data. Let's take the example of a die. The die is rolled 36 times and the probability that each face should turn upwards is 1/6. So, the expected and observed distributions are as follows:", "expected = np.array([6,6,6,6,6,6])\nobserved = np.array([7, 5, 3, 9, 6, 6])", "The null hypothesis in the chi-square test is that the observed values are similar to the expected values.\nThe chi-square test can be performed using the chisquare function in the SciPy package:", "stats.chisquare(observed,expected)", "The first value is the chi-square value and the second value is the p-value, which is very high. 
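The chisquare() result can be reproduced from the defining sum; this sketch recomputes the statistic for the dice example by hand and checks it against SciPy:

```python
import numpy as np
from scipy import stats

expected = np.array([6, 6, 6, 6, 6, 6], dtype=float)
observed = np.array([7, 5, 3, 9, 6, 6], dtype=float)

# Chi-square statistic: sum of (observed - expected)^2 / expected
chi2_manual = np.sum((observed - expected) ** 2 / expected)

chi2_scipy, p = stats.chisquare(observed, expected)
print(chi2_manual, chi2_scipy, p)
```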
This means that the null hypothesis is valid and the observed values are similar to the expected values.\n\nThe chi-square test of independence is a statistical test used to determine whether two categorical variables are independent of each other or not.\nLet's take the following example to see whether there is a preference for a book genre based on the gender of the people reading it.\nThe chi-square test of independence can be performed using the chi2_contingency function in the SciPy package:", "men_women = np.array([[100, 120, 60],[350, 200, 90]])\nstats.chi2_contingency(men_women)", "The first value is the chi-square value.\nThe second value is the p-value, which is very small, and means that there is an association between the gender of people and the genre of the book they read.\nThe third value is the degrees of freedom.\nThe fourth value, which is an array, is the expected frequencies.\n\nAnova\nAnalysis of Variance (ANOVA) is a statistical method used to test differences between two or more means. This test basically compares the means between groups and determines whether any of these means are significantly different from each other:\n$$\nH_0 : \\mu_1 = \\mu_2 = \\mu_3 = ... = \\mu_k\n$$\nANOVA tells you whether at least one of the group means is significantly different from the others; it does not tell you which one. 
Let's take the height of men who are from three different countries and see if their heights are significantly different from others:", "country1 = np.array([ 176., 201., 172., 179., 180., 188., 187., 184., 171.,\n 181., 192., 187., 178., 178., 180., 199., 185., 176.,\n 207., 177., 160., 174., 176., 192., 189., 187., 183.,\n 180., 181., 200., 190., 187., 175., 179., 181., 183.,\n 171., 181., 190., 186., 185., 188., 201., 192., 188.,\n 181., 172., 191., 201., 170., 170., 192., 185., 167.,\n 178., 179., 167., 183., 200., 185.])\ncountry2 = np.array([177., 165., 185., 187., 175., 172.,179., 192.,169.,\n 167., 162., 165., 188., 194., 187., 175., 163., 178.,\n 197., 172., 175., 185., 176., 171., 172., 186., 168.,\n 178., 191., 192., 175., 189., 178., 181., 170., 182.,\n 166., 189., 196., 192., 189., 171., 185., 198., 181.,\n 167., 184., 179., 178., 193., 179., 177., 181., 174.,\n 171., 184., 156., 180., 181., 187.])\ncountry3 = np.array([ 191.,173., 175., 200., 190.,191.,185.,190.,184.,190.,\n 191., 184., 167., 194., 195., 174., 171., 191.,\n 174., 177., 182., 184., 176., 180., 181., 186., 179.,\n 176., 186., 176., 184., 194., 179., 171., 174., 174.,\n 182., 198., 180., 178., 200., 200., 174., 202., 176.,\n 180., 163., 159., 194., 192., 163., 194., 183., 190.,\n 186., 178., 182., 174., 178., 182.])\nstats.f_oneway(country1,country2,country3)", "The first value of the output gives the F-value and the second value gives the p-value. Since the p-value is greater than 5% by a small margin, we can tell that the mean of the heights in the three countries is not significantly different from each other.\nSummary\nIn this tutorial we have seen various probability distributions. We also covered how to use z-score, p-value, Type 1, and Type 2 errors. We gained an insight into the Z-test and T-test followed by the chi-square distribution and saw how it can be used to test a hypothesis." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
macks22/gensim
docs/notebooks/FastText_Tutorial.ipynb
lgpl-2.1
[ "Using FastText via Gensim\nThis tutorial is about using the Gensim wrapper for the FastText library for training FastText models, loading them, and performing similarity operations and vector lookups analogous to Word2Vec.\nWhen to use FastText?\nThe main principle behind FastText is that the morphological structure of a word carries important information about the meaning of the word, which is not taken into account by traditional word embeddings, which train a unique word embedding for every individual word. This is especially significant for morphologically rich languages (German, Turkish) in which a single word can have a large number of morphological forms, each of which might occur rarely, thus making it hard to train good word embeddings.\nFastText attempts to solve this by treating each word as the aggregation of its subwords. For the sake of simplicity and language-independence, subwords are taken to be the character ngrams of the word. The vector for a word is simply taken to be the sum of all vectors of its component char-ngrams.\nAccording to a detailed comparison of Word2Vec and FastText in this notebook, FastText does significantly better on syntactic tasks as compared to the original Word2Vec, especially when the size of the training corpus is small. Word2Vec slightly outperforms FastText on semantic tasks though. The differences grow smaller as the size of the training corpus increases.\nTraining time for FastText is significantly higher than for the Gensim version of Word2Vec (15min 42s vs 6min 42s on text8, 17 mil tokens, 5 epochs, and a vector size of 100).\nFastText can be used to obtain vectors for out-of-vocabulary (oov) words, by summing up the vectors for their component char-ngrams, provided at least one of the char-ngrams was present in the training data.\nTraining models\nFor the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim).\nYou need to have FastText set up locally to be able to train models. 
See installation instructions for FastText if you don't have FastText installed.", "import gensim, os\nfrom gensim.models.wrappers.fasttext import FastText\n\n# Set FastText home to the path to the FastText executable\nft_home = '/home/jayant/Projects/fastText/fasttext'\n\n# Set file names for train and test data\ndata_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data']) + os.sep\nlee_train_file = data_dir + 'lee_background.cor'\n\nmodel = FastText.train(ft_home, lee_train_file)\n\nprint(model)", "Hyperparameters for training the model follow the same pattern as Word2Vec. FastText supports the following parameters from the original word2vec - \n - model: Training architecture. Allowed values: cbow, skipgram (Default cbow)\n - size: Size of embeddings to be learnt (Default 100)\n - alpha: Initial learning rate (Default 0.025)\n - window: Context window size (Default 5)\n - min_count: Ignore words with number of occurrences below this (Default 5)\n - loss: Training objective. Allowed values: ns, hs, softmax (Default ns)\n - sample: Threshold for downsampling higher-frequency words (Default 0.001)\n - negative: Number of negative words to sample, for ns (Default 5)\n - iter: Number of epochs (Default 5)\n - sorted_vocab: Sort vocab by descending frequency (Default 1)\n - threads: Number of threads to use (Default 12)\nIn addition, FastText has two additional parameters - \n - min_n: min length of char ngrams to be used (Default 3)\n - max_n: max length of char ngrams to be used (Default 6)\nThese control the lengths of character ngrams that each word is broken down into while training and looking up embeddings. 
If max_n is set to 0, or to a value less than min_n, no character ngrams are used, and the model effectively reduces to Word2Vec.", "model = FastText.train(ft_home, lee_train_file, size=50, alpha=0.05, min_count=10)\nprint(model)", "Continuation of training with FastText models is not supported.\nSaving/loading models\nModels can be saved and loaded via the load and save methods.", "model.save('saved_fasttext_model')\nloaded_model = FastText.load('saved_fasttext_model')\nprint(loaded_model)", "The save_word2vec_format method causes the vectors for ngrams to be lost. As a result, a model loaded in this way will behave as a regular word2vec model. \nWord vector lookup\nFastText models support vector lookups for out-of-vocabulary words by summing up character ngrams belonging to the word.", "print('night' in model.wv.vocab)\nprint('nights' in model.wv.vocab)\nprint(model['night'])\nprint(model['nights'])", "The word vector lookup operation only works if at least one of the component character ngrams is present in the training corpus. For example -", "# Raises a KeyError since none of the character ngrams of the word `axe` are present in the training data\nmodel['axe']", "The in operation works slightly differently from the original word2vec. It tests whether a vector for the given word exists or not, not whether the word is present in the word vocabulary. To test whether a word is present in the training word vocabulary -", "# Tests if word present in vocab\nprint(\"word\" in model.wv.vocab)\n# Tests if vector present for word\nprint(\"word\" in model)", "Similarity operations\nSimilarity operations work the same way as word2vec. 
Out-of-vocabulary words can also be used, provided they have atleast one character ngram present in the training data.", "print(\"nights\" in model.wv.vocab)\nprint(\"night\" in model.wv.vocab)\nmodel.similarity(\"night\", \"nights\")", "Syntactically similar words generally have high similarity in FastText models, since a large number of the component char-ngrams will be the same. As a result, FastText generally does better at syntactic tasks than Word2Vec. A detailed comparison is provided here.\nOther similarity operations -", "# The example training corpus is a toy corpus, results are not expected to be good, for proof-of-concept only\nmodel.most_similar(\"nights\")\n\nmodel.n_similarity(['sushi', 'shop'], ['japanese', 'restaurant'])\n\nmodel.doesnt_match(\"breakfast cereal dinner lunch\".split())\n\nmodel.most_similar(positive=['baghdad', 'england'], negative=['london'])\n\nmodel.accuracy(questions='questions-words.txt')\n\n# Word Movers distance\nsentence_obama = 'Obama speaks to the media in Illinois'.lower().split()\nsentence_president = 'The president greets the press in Chicago'.lower().split()\n\n# Remove their stopwords.\nfrom nltk.corpus import stopwords\nstopwords = stopwords.words('english')\nsentence_obama = [w for w in sentence_obama if w not in stopwords]\nsentence_president = [w for w in sentence_president if w not in stopwords]\n\n# Compute WMD.\ndistance = model.wmdistance(sentence_obama, sentence_president)\ndistance" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
david-hoffman/scripts
notebooks/CurveFitTest.ipynb
apache-2.0
[ "Curve fitting test", "#Pull in all the normal math functions and set up matplotlib as inline plotting (i.e. Mathematica like)\n%pylab inline\n#import the curve_fit function\nfrom scipy.optimize import curve_fit\nfrom peaks.gauss2d import Gauss2D\nimport scipy.optimize.minpack as mp\ncurve_fit = mp.curve_fit\n\nset_cmap('gnuplot2')", "Below we define a simple exponential decay function of the form\n$$y(x)=a e^{-b x}+c$$", "#Here's a simple test function of an exponential decay\ndef func(x, *params):\n #this is the important part, if you want to pass an unspecified number of parameters\n #you need to unpack the parameters list in the function definition and then you need\n #to specify initial guesses when using curve_fit\n return params[0] * exp(-params[1] * x) + params[2]\n\n#Here we generate some fake data\nxdata = linspace(0, 4, 100)\ny = func(xdata, 2.5, 1.3, 0.5)\n\n#add gaussian white noise\nydata = y + 0.1 * random.normal(size=len(xdata))\n\n#perform the curve_fit, NOTE: you have to give guesses so that curve fit can determine\n#the correct number of parameters for the function func\npopt, pcov = curve_fit(func, xdata, ydata,p0=ones(3))\n\n#generate a nice fit\nx_fit=linspace(0,4,1024)\nfit = func(x_fit,*popt) #you need to use the '*' operator to unpack the array specifically\nplot(xdata,ydata,'.',x_fit,fit,'--r')\nfigure()\nplot(xdata,ydata-func(xdata,*popt),'ro')\n\n#Let's try writing a function that fits lorentzian peaks\ndef lor(x, *p):\n toReturn = zeros(len(x)) #initialize our returned array\n toReturn += p[0] #first parameter is the offset\n if (len(p)-1)%3 != 0:\n #Here's where we should raise an error\n pass\n \n for i in range(1,len(p),3):\n toReturn += p[i]/(((x-p[i+1])/p[i+2])**2+1)\n \n return toReturn\n\n#Here we generate some fake data\nxdata = linspace(0, 100, 2048)\np = [1,1,24,5,2,56,10]\ny = lor(xdata, *p)\n\n#add gaussian white noise\nydata = y + 0.2 * random.normal(size=len(xdata))\n\n#perform the curve_fit, NOTE: you have to give 
guesses so that curve fit can determine\n#the correct number of parameters for the function func\npopt, pcov = curve_fit(lor, xdata, ydata,p0=p)\n\n#generate a nice fit\nfit = lor(xdata,*popt) #you need to use the '*' operator to unpack the array specifically\nplot(xdata,ydata,'.',xdata,fit,'--r')\nfigure()\nplot(xdata,ydata-lor(xdata,*popt),'ro')", "2D Fitting\nHere I've taken the code from here.\nThere's a better definition of a skewed gaussian available here.", "#define model function and pass independant variables x and y as a list\ndef twoD_Gaussian(xdata_tuple, amplitude, xo, yo, sigma_x, sigma_y, theta, offset):\n (x, y) = xdata_tuple \n xo = float(xo) \n yo = float(yo) \n a = (np.cos(theta)**2)/(2*sigma_x**2) + (np.sin(theta)**2)/(2*sigma_y**2) \n b = -(np.sin(2*theta))/(4*sigma_x**2) + (np.sin(2*theta))/(4*sigma_y**2) \n c = (np.sin(theta)**2)/(2*sigma_x**2) + (np.cos(theta)**2)/(2*sigma_y**2) \n g = offset + amplitude*np.exp( - (a*((x-xo)**2) + 2*b*(x-xo)*(y-yo) \n + c*((y-yo)**2))) \n return g.ravel()\n\n# Create x and y indices\nx = np.linspace(0, 200, 201)\ny = np.linspace(0, 200, 201)\nx, y = np.meshgrid(x, y)\n\n#create data\ndata = twoD_Gaussian((x, y), 3, 100, 100, 20, 40, pi/4, 10)\n\n# plot twoD_Gaussian data generated above\nplt.figure()\nplt.imshow(data.reshape(201, 201),origin='bottom')\nplt.colorbar()\n\n# add some noise to the data and try to fit the data generated beforehand\ninitial_guess = (3,100,100,20,40,0,10)\n\ndata_noisy = data + 0.2*np.random.normal(size=data.shape)\n\npopt, pcov = curve_fit(twoD_Gaussian, (x, y), data_noisy, p0=initial_guess)\n\n#And plot the results:\n\ndata_fitted = twoD_Gaussian((x, y), *popt)\n\nfig, ax = plt.subplots(1, 1)\nax.hold(True)\ndata_noisy.shape = (201, 201)\nax.imshow(data_noisy.reshape(201, 201), origin='bottom',\n extent=(x.min(), x.max(), y.min(), y.max()))\nax.contour(x, y, data_fitted.reshape(201, 201), 8, colors='w')\n\n# Create x and y indices\nx = arange(32)\ny = arange(32)\nx, y = 
np.meshgrid(x, y)\n\n#create data\ndata = twoD_Gaussian((x, y), 10, 14, 17, 5, 10, pi/12, 0)\n\n# plot twoD_Gaussian data generated above\nplt.figure()\nplt.matshow(data.reshape(32, 32),origin='bottom')\nplt.colorbar()\n\n# add some noise to the data and try to fit the data generated beforehand\ninitial_guess = (3,16,16,5,5,0,10)\n\ndata_noisy = data + np.random.poisson(4, size=data.shape)\n\npopt, pcov = curve_fit(twoD_Gaussian, (x, y), data_noisy, p0=initial_guess)\n\n#And plot the results:\n\ndata_fitted = twoD_Gaussian((x, y), *popt)\n\nfig, ax = plt.subplots(1, 1)\nax.hold(True)\ndata_noisy.shape = (32, 32)\ndata.shape = (32, 32)\nimg = ax.matshow(data_noisy.reshape(32, 32), origin='bottom',\n extent=(x.min(), x.max(), y.min(), y.max()))\nax.contour(x, y, data_fitted.reshape(32, 32), 8, colors='w')\ncolorbar(img)\n\ndef _general_function_mle(params, xdata, ydata, function):\n # calculate the function\n f = function(xdata, *params)\n # calculate the MLE version of chi2\n chi2 = 2*(f - ydata - ydata * np.log(f/ydata))\n # return the sqrt because the np.leastsq will square and sum the result\n if chi2.min() < 0:\n return nan_to_num(inf)*ones_like(chi2)\n else:\n return np.sqrt(chi2)\n\n \ndef _weighted_general_function_mle(params, xdata, ydata, function, weights):\n return weights * (_general_function_mle(params, xdata, ydata, function))\n\n\ndef _general_function_ls(params, xdata, ydata, function):\n return function(xdata, *params) - ydata\n\n\ndef _weighted_general_function_ls(params, xdata, ydata, function, weights):\n return weights * _general_function_ls(params, xdata, ydata, function)\n\n##Here's my modifcation to the above code\n\n#define model function and pass independant variables x and y as a list\ndef gaussian2D(xdata_tuple, amp, x0, y0, sigma_x, sigma_y, offset):\n (x, y) = xdata_tuple\n g = offset + amp*exp( -((x-x0)**2/(2*sigma_x**2)+(y-y0)**2/(2*sigma_y**2))) \n return g\n\n#create a wrapper function\ndef gaussian2D_fit(*args):\n return 
gaussian2D(*args).ravel()\n\ndef gaussian2D_sym(xdata_tuple, amp, x0, y0, sigma_x, offset):\n (x, y) = xdata_tuple\n g = offset + amp*exp( -((x-x0)**2+(y-y0)**2)/(2*sigma_x**2))\n return g\n\n#create a wrapper function\ndef gaussian2D_sym_fit(*args):\n return gaussian2D_sym(*args).ravel()\n\n# # Create x and y indices\n# x = arange(32)\n# y = arange(32)\n# x, y = np.meshgrid(x, y)\n\n# #create data\n# real_params = array([10, 14, 17, 2, 4, 0])\n# data = gaussian2D((x, y), *real_params)\n\n# plot twoD_Gaussian data generated above\nplt.figure()\nplt.matshow(data,origin='bottom')\nplt.colorbar()\n\n# add some noise to the data and try to fit the data generated beforehand\ninitial_guess = (10,12,15,5,7,8)\n\ndata_noisy = data + random.poisson(4, data.shape)\n\nmp._general_function = _general_function_mle\nmp._weighted_general_function = _weighted_general_function_mle\npopt_mle, pcov_mle = mp.curve_fit(gaussian2D_fit, (x, y), data_noisy.ravel(), p0=initial_guess)\n\nmp._general_function = _general_function_ls\nmp._weighted_general_function = _weighted_general_function_ls\npopt, pcov = mp.curve_fit(gaussian2D_fit, (x, y), data_noisy.ravel(), p0=initial_guess)\n\n#And plot the results:\n\ndata_fitted = gaussian2D((x, y), *popt)\n\nfig, ax = plt.subplots(1, 1)\nax.hold(True)\nax.matshow(data_noisy, origin='bottom', extent=(x.min(), x.max(), y.min(), y.max()))\nax.contour(x, y, data_fitted, 8, colors='w')\nprint(popt_mle)\nprint(popt)\n#[ 2.97066005 31.99547047 31.96779469 4.97361061 9.97955038 1.20776079]", "With my method of using a wrapper function to prepare the data for fitting I avoid the need to reshape the data for plotting and later analysis, but it means I need to ravel the y-data.", "# define jacobians for timing experiments.\ndef myDfun( params, xdata, ydata, f):\n x = xdata[0].ravel()\n y = xdata[1].ravel()\n amp, x0, y0, sigma_x, sigma_y, offset = params\n value = f(xdata, *params)-offset\n dydamp = value/amp\n dydx0 = value*(x-x0)/sigma_x**2\n dydsigmax = 
value*(x-x0)**2/sigma_x**3\n dydy0 = value*(y-y0)/sigma_y**2\n dydsigmay = value*(y-y0)**2/sigma_y**3\n return vstack((dydamp, dydx0, dydy0, dydsigmax, dydsigmay, ones_like(value)))\n\ndef myDfun_sym( params, xdata, ydata, f):\n x = xdata[0].ravel()\n y = xdata[1].ravel()\n amp, x0, y0, sigma_x, offset = params\n value = f(xdata, *params)-offset\n dydamp = value/amp\n dydx0 = value*(x-x0)/sigma_x**2\n dydsigmax = value*(x-x0)**2/sigma_x**3\n dydy0 = value*(y-y0)/sigma_x**2\n return vstack((dydamp, dydx0, dydy0, dydsigmax, ones_like(value)))\n\nmyDfun(popt, (x, y), data_noisy.ravel(), gaussian2D_fit)[0].shape", "Testing timing\nComparing using a Jacobian vs not using one", "# With MLE fitting\nmp._general_function = _general_function_mle\nmp._weighted_general_function = _weighted_general_function_mle\n%timeit popt, pcov = mp.curve_fit(gaussian2D_fit, (x, y), data_noisy.ravel(), p0=initial_guess, Dfun=myDfun, col_deriv=1)\n%timeit popt, pcov = mp.curve_fit(gaussian2D_fit, (x, y), data_noisy.ravel(), p0=initial_guess)\n\n# With least squares fitting for a non-symmetric model function\nmp._general_function = _general_function_ls\nmp._weighted_general_function = _weighted_general_function_ls\n%timeit popt, pcov = mp.curve_fit(gaussian2D_fit, (x, y), data_noisy.ravel(), p0=initial_guess, Dfun=myDfun, col_deriv=1)\n%timeit popt, pcov = mp.curve_fit(gaussian2D_fit, (x, y), data_noisy.ravel(), p0=initial_guess)\n\n# With least squares for a symmetric model function\nmp._general_function = _general_function_ls\nmp._weighted_general_function = _weighted_general_function_ls\ninitial_guess2 = (10,12,15,5,8)\n%timeit popt, pcov = mp.curve_fit(gaussian2D_sym_fit, (x, y), data_noisy.ravel(), p0=initial_guess2, Dfun=myDfun_sym, col_deriv=1)\n%timeit popt, pcov = mp.curve_fit(gaussian2D_sym_fit, (x, y), data_noisy.ravel(), p0=initial_guess2)\n\n# Testing my class, notice its slower, but there's a lot more\njunk_g = 
Gauss2D(data_noisy)\n\n\njunk_g.optimize_params(modeltype='full')\n%timeit junk_g.optimize_params(junk_g.guess_params)\nfig, ax = plt.subplots(1, 1)\nax.hold(True)\nax.matshow(junk_g.data, origin='bottom')\n(y, x) = indices(junk_g.data.shape)\nax.contour(x, y, junk_g.fit_model, 8, colors='w')\nax.contour(x, y, data, 8, colors='r')\n\njunk_g.optimize_params(modeltype='norot')\n%timeit junk_g.optimize_params(junk_g.guess_params)\nfig, ax = plt.subplots(1, 1)\nax.hold(True)\nax.matshow(junk_g.data, origin='bottom')\n(y, x) = indices(junk_g.data.shape)\nax.contour(x, y, junk_g.fit_model, 8, colors='w')\nax.contour(x, y, data, 8, colors='r')\n\njunk_g.optimize_params(modeltype='sym')\n%timeit junk_g.optimize_params(junk_g.guess_params)\nfig, ax = plt.subplots(1, 1)\nax.hold(True)\nax.matshow(junk_g.data, origin='bottom')\n(y, x) = indices(junk_g.data.shape)\nax.contour(x, y, junk_g.fit_model, 8, colors='w')\nax.contour(x, y, data, 8, colors='r')", "Lorentzians", "##Here's my modification to the above code\n\n#define model function and pass independent variables x and y as a list\ndef lor2D(xdata_tuple, amp, x0, y0, sigma_x, sigma_y, offset):\n (x, y) = xdata_tuple\n g = offset + amp/(1+((x-x0)/(sigma_x/2))**2)/(1+((y-y0)/(sigma_y/2))**2)\n return g\n\n#create a wrapper function\ndef lor2D_fit(xdata_tuple, amp, x0, y0, sigma_x, sigma_y, offset):\n return lor2D(xdata_tuple, amp, x0, y0, sigma_x, sigma_y, offset).ravel()\n\n# Create x and y indices\nx = arange(64)\ny = arange(64)\nx, y = np.meshgrid(x, y)\n\n#create data\ndata = lor2D((x, y), 3, 32, 32, 5, 10, 10)\n\n# plot twoD_Lorentzian data generated above\nplt.figure()\nplt.matshow(data,origin='bottom')\nplt.colorbar()\n\n# add some noise to the data and try to fit the data generated beforehand\ninitial_guess = (4,0,30,25,35,8)\n\ndata_noisy = data + 0.2*randn(*data.shape)\n\npopt, pcov = curve_fit(lor2D_fit, (x, y), data_noisy.ravel(), p0=initial_guess)\n\n#And plot the results:\n\ndata_fitted = lor2D((x, y), 
*popt)\n\nfig, ax = plt.subplots(1, 1)\nax.hold(True)\nax.matshow(data_noisy, origin='bottom', extent=(x.min(), x.max(), y.min(), y.max()))\nax.contour(x, y, data_fitted, 8, colors='w')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/niwa/cmip6/models/sandbox-2/atmoschem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: NIWA\nSource ID: SANDBOX-2\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:30\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'niwa', 'sandbox-2', 'atmoschem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n5. Key Properties --&gt; Tuning Applied\n6. Grid\n7. Grid --&gt; Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --&gt; Surface Emissions\n11. Emissions Concentrations --&gt; Atmospheric Emissions\n12. Emissions Concentrations --&gt; Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --&gt; Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric chemistry model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmospheric chemistry model code.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Chemistry Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "1.8. Coupling With Chemical Reactivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of atmospheric chemistry code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemical species advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. 
Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Split Operator Chemistry Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemistry (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Split Operator Alternate Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\n?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.6. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.7. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.2. Convection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.3. Precipitation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.4. Emissions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.5. Deposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.6. Gas Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. 
This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.9. Photo Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.10. Aerosols\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. 
Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the atmospheric chemistry grid", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n* Does the atmospheric chemistry grid match the atmosphere grid?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview of transport implementation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Use Atmospheric Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric chemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Transport Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric chemistry emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Emissions Concentrations --&gt; Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. 
Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Emissions Concentrations --&gt; Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. 
Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. 
Emissions Concentrations --&gt; Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview gas phase atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. 
Number Of Bimolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.4. Number Of Termolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.7. Number Of Advected Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.8. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.9. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.10. Wet Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.11. Wet Oxidation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet oxidation included? 
Oxidation describes the loss of electrons or an increase in oxidation state by a molecule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry stratospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview stratospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n", "14.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n", "14.4. 
Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.5. Sedimentation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sedimentation included in the stratospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the stratospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview tropospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n", "15.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.5. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the tropospheric heterogeneous chemistry scheme or not?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric photo chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Number Of Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17. Photo Chemistry --&gt; Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nPhotolysis scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n", "17.2. 
Environmental Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
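Every property cell in the ES-DOC notebook above follows the same two-step pattern: `DOC.set_id(...)` selects a CMIP6 property by its dotted path, and one or more `DOC.set_value(...)` calls fill it in. The sketch below only mimics that pattern with a hypothetical `MockDoc` class, to show how the id/value pairs accumulate into a document — it is not the real pyesdoc client API:

```python
# Hypothetical mock of the DOC.set_id / DOC.set_value pattern used in the
# ES-DOC notebook cells above (NOT the real pyesdoc API -- illustration only).
class MockDoc:
    def __init__(self):
        self._current_id = None
        self.properties = {}                 # property id -> list of values

    def set_id(self, property_id):
        # select the property that subsequent set_value calls will fill
        self._current_id = property_id
        self.properties.setdefault(property_id, [])

    def set_value(self, value):
        # append one value to the currently selected property
        self.properties[self._current_id].append(value)

DOC = MockDoc()
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
DOC.set_value("HOx")
DOC.set_value("NOy")
print(DOC.properties)
# {'cmip6.atmoschem.gas_phase_chemistry.species': ['HOx', 'NOy']}
```

For ENUM properties with cardinality 0.N, repeated `set_value` calls would correspond to picking several of the listed valid choices for the same property id.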
Danghor/Formal-Languages
ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open('../../style.css', 'r') as file:\n css = file.read()\nHTML(css)", "Implementing an SLR-Table-Generator\nA Grammar for Grammars\nAs the goal is to generate an SLR-table-generator, we first need to implement a parser for context free grammars.\nThe file Examples/c-grammar.g contains an example grammar.", "!cat Examples/c-grammar.g", "We use <span style=\"font-variant:small-caps;\">Antlr</span> to develop a parser for context free grammars. The pure grammar used to parse context free grammars is stored in the file Pure.g4. It is similar to the grammar that we have already used to implement Earley's algorithm, but additionally allows the use of the operator |, so that all grammar rules that define a variable can be combined in one rule.", "!cat Pure.g4", "The annotated grammar is stored in the file Grammar.g4.\nThe parser will return a list of grammar rules, where each rule of the form\n$$ a \rightarrow \beta $$\nis stored as the tuple (a,) + 𝛽.", "!cat -n Grammar.g4", "We start by generating both scanner and parser.", "!antlr4 -Dlanguage=Python3 Grammar.g4\n\nfrom GrammarLexer import GrammarLexer\nfrom GrammarParser import GrammarParser\nimport antlr4", "The Class GrammarRule\nThe class GrammarRule is used to store a single grammar rule. 
As we have to use objects of type GrammarRule as keys in a dictionary later, we have to provide the methods __eq__, __ne__, and __hash__.", "class GrammarRule:\n def __init__(self, variable, body):\n self.mVariable = variable\n self.mBody = body\n \n def __eq__(self, other):\n return isinstance(other, GrammarRule) and \\\n self.mVariable == other.mVariable and \\\n self.mBody == other.mBody\n \n def __ne__(self, other):\n return not self.__eq__(other)\n \n def __hash__(self):\n return hash(self.__repr__())\n \n def __repr__(self):\n return f'{self.mVariable} → {\" \".join(self.mBody)}'", "The function parse_grammar takes a string filename as its argument and returns the grammar that is stored in the specified file. The grammar is represented as a list of rules. Each rule is represented as a tuple. The example below will clarify this structure.", "def parse_grammar(filename):\n input_stream = antlr4.FileStream(filename, encoding=\"utf-8\")\n lexer = GrammarLexer(input_stream)\n token_stream = antlr4.CommonTokenStream(lexer)\n parser = GrammarParser(token_stream)\n grammar = parser.start()\n return [GrammarRule(head, tuple(body)) for head, *body in grammar.g]\n\ngrammar = parse_grammar('Examples/c-grammar.g')\ngrammar", "Given a string name, which is either a variable, a token, or a literal, the function is_var checks whether name is a variable. The function can distinguish variable names from tokens and literals because variable names consist only of lower case letters, while tokens are all uppercase and literals start with the character \"'\".", "def is_var(name):\n return name[0] != \"'\" and name.islower()", "Fun Fact: The invocation of \"'return'\".islower() returns True. 
This is the reason that we have to test that\nname does not start with a \"'\" character because otherwise keywords like 'return' or 'while' appearing in a grammar would be mistaken for variables.", "\"'return'\".islower()", "Given a list Rules of GrammarRules, the function collect_variables(Rules) returns the set of all variables occurring in Rules.", "def collect_variables(Rules):\n Variables = set()\n for rule in Rules:\n Variables.add(rule.mVariable)\n for item in rule.mBody:\n if is_var(item):\n Variables.add(item)\n return Variables", "Given a set Rules of GrammarRules, the function collect_tokens(Rules) returns the set of all tokens and literals occurring in Rules.", "def collect_tokens(Rules):\n Tokens = set()\n for rule in Rules:\n for item in rule.mBody:\n if not is_var(item):\n Tokens.add(item)\n return Tokens", "Marked Rules\nThe class MarkedRule stores a single marked rule of the form\n$$ v \rightarrow \alpha \bullet \beta $$\nwhere the variable $v$ is stored in the member variable mVariable, while $\alpha$ and $\beta$ are stored in the variables mAlpha and mBeta respectively. These variables are assumed to contain tuples of grammar symbols. A grammar symbol is either\n- a variable,\n- a token, or\n- a literal, i.e. a string enclosed in single quotes.\nLater, we need to maintain sets of marked rules to represent states. 
Therefore, we have to define the methods __eq__, __ne__, and __hash__.", "class MarkedRule():\n def __init__(self, variable, alpha, beta):\n self.mVariable = variable\n self.mAlpha = alpha\n self.mBeta = beta\n \n def __eq__(self, other):\n return isinstance(other, MarkedRule) and \\\n self.mVariable == other.mVariable and \\\n self.mAlpha == other.mAlpha and \\\n self.mBeta == other.mBeta\n \n def __ne__(self, other):\n return not self.__eq__(other)\n \n def __hash__(self):\n return hash(self.__repr__())\n \n def __repr__(self):\n alphaStr = ' '.join(self.mAlpha)\n betaStr = ' '.join(self.mBeta)\n return f'{self.mVariable} → {alphaStr} • {betaStr}'", "Given a marked rule self, the function is_complete checks whether the marked rule self has the form\n$$ c \rightarrow \alpha\; \bullet,$$\ni.e. it checks whether the $\bullet$ is at the end of the grammar rule.", "def is_complete(self):\n return len(self.mBeta) == 0\n\nMarkedRule.is_complete = is_complete\ndel is_complete", "Given a marked rule self of the form\n$$ c \rightarrow \alpha \bullet X\, \delta, $$\nthe function symbol_after_dot returns the symbol $X$. If there is no symbol after the $\bullet$, the method returns None.", "def symbol_after_dot(self):\n if len(self.mBeta) > 0:\n return self.mBeta[0]\n return None\n\nMarkedRule.symbol_after_dot = symbol_after_dot\ndel symbol_after_dot", "Given a marked rule, this function returns the variable following the dot. If there is no variable following the dot, the function returns None.", "def next_var(self):\n if len(self.mBeta) > 0:\n var = self.mBeta[0]\n if is_var(var):\n return var\n return None\n\nMarkedRule.next_var = next_var\ndel next_var", "The function move_dot(self) transforms a marked rule of the form \n$$ c \rightarrow \alpha \bullet X\, \beta $$\ninto a marked rule of the form\n$$ c \rightarrow \alpha\, X \bullet \beta, $$\ni.e. the $\bullet$ is moved over the next symbol. 
Invocation of this method assumes that there is a symbol\nfollowing the $\bullet$.", "def move_dot(self):\n return MarkedRule(self.mVariable, \n self.mAlpha + (self.mBeta[0],), \n self.mBeta[1:])\n\nMarkedRule.move_dot = move_dot\ndel move_dot", "The function to_rule(self) turns the marked rule self into a GrammarRule, i.e. the marked rule\n$$ c \rightarrow \alpha \bullet \beta $$\nis turned into the grammar rule\n$$ c \rightarrow \alpha\, \beta. $$", "def to_rule(self):\n return GrammarRule(self.mVariable, self.mAlpha + self.mBeta)\n\nMarkedRule.to_rule = to_rule\ndel to_rule", "SLR-Table-Generation\nThe class Grammar represents a context free grammar. It stores a list of the GrammarRules of the given grammar.\nEach grammar rule is of the form\n$$ a \rightarrow \beta $$\nwhere $\beta$ is a tuple of variables, tokens, and literals.\nThe start symbol is assumed to be the variable on the left hand side of the first rule. The grammar is augmented with the rule\n$$ \widehat{s} \rightarrow s\, \$. $$\nHere $s$ is the start variable of the given grammar and $\widehat{s}$ is a new variable that is the start variable of the augmented grammar. The symbol $ denotes the end of input. The non-obvious member variables of the class Grammar have the following interpretation:\n- mStates is the set of all states of the SLR-parser. These states are sets of marked rules.\n- mStateNames is a dictionary assigning names of the form s0, s1, $\cdots$, sn to the states stored in \n mStates. 
The functions action and goto will be defined for state names, not for states, because \n otherwise the table representing these functions would become both huge and unreadable.\n- mConflicts is a Boolean variable that will be set to true if the table generation discovers \n shift/reduce conflicts or reduce/reduce conflicts.", "class Grammar():\n def __init__(self, Rules):\n self.mRules = Rules\n self.mStart = Rules[0].mVariable\n self.mVariables = collect_variables(Rules)\n self.mTokens = collect_tokens(Rules)\n self.mStates = set()\n self.mStateNames = {}\n self.mConflicts = False\n self.mVariables.add('ŝ')\n self.mTokens.add('$')\n self.mRules.append(GrammarRule('ŝ', (self.mStart, '$'))) # augmenting\n self.compute_tables()", "Given a set of Variables, the function initialize_dictionary returns a dictionary that assigns the empty set to all variables.", "def initialize_dictionary(Variables):\n return { a: set() for a in Variables }", "Given a Grammar, the function compute_tables computes\n- the sets First(v) and Follow(v) for every variable v,\n- the set of all states of the SLR-Parser,\n- the action table, and\n- the goto table. \nGiven a grammar g,\n- the set g.mFirst is a dictionary such that g.mFirst[a] = First[a] and\n- the set g.mFollow is a dictionary such that g.mFollow[a] = Follow[a] for all variables a.", "def compute_tables(self):\n self.mFirst = initialize_dictionary(self.mVariables)\n self.mFollow = initialize_dictionary(self.mVariables)\n self.compute_first()\n self.compute_follow()\n self.compute_rule_names()\n self.all_states()\n self.compute_action_table()\n self.compute_goto_table()\n \nGrammar.compute_tables = compute_tables\ndel compute_tables", "The function compute_rule_names assigns a unique name to each rule of the grammar. 
These names are used later\nto represent reduce actions in the action table.", "def compute_rule_names(self):\n self.mRuleNames = {}\n counter = 0\n for rule in self.mRules:\n self.mRuleNames[rule] = 'r' + str(counter)\n counter += 1\n \nGrammar.compute_rule_names = compute_rule_names\ndel compute_rule_names", "The function compute_first(self) computes the sets $\texttt{First}(c)$ for all variables $c$ and stores them in the dictionary mFirst. Abstractly, given a variable $c$ the function $\texttt{First}(c)$ is the set of all tokens that can start a string that is derived from $c$:\n$$\texttt{First}(\texttt{c}) := \n \Bigl\{ t \in T \Bigm| \exists \gamma \in (V \cup T)^*: \texttt{c} \Rightarrow^* t\,\gamma \Bigr\}.\n$$\nThe definition of the function $\texttt{First}()$ is extended to strings from $(V \cup T)^*$ as follows:\n- $\texttt{FirstList}(\varepsilon) = \{\}$.\n- $\texttt{FirstList}(t \beta) = \{ t \}$ if $t \in T$.\n- $\texttt{FirstList}(\texttt{a} \beta) = \left\{\n \begin{array}[c]{ll}\n \texttt{First}(\texttt{a}) \cup \texttt{FirstList}(\beta) & \mbox{if $\texttt{a} \Rightarrow^* \varepsilon$;} \\\n \texttt{First}(\texttt{a}) & \mbox{otherwise.}\n \end{array}\n \right.\n $ \nIf $\texttt{a}$ is a variable of $G$ and the rules defining $\texttt{a}$ are given as \n$$\texttt{a} \rightarrow \alpha_1 \mid \cdots \mid \alpha_n, $$\nthen we have\n$$\texttt{First}(\texttt{a}) = \bigcup\limits_{i=1}^n \texttt{FirstList}(\alpha_i). 
$$\nThe dictionary mFirst that stores this function is computed via a fixed-point iteration.", "def compute_first(self):\n change = True\n while change:\n change = False\n for rule in self.mRules:\n a, body = rule.mVariable, rule.mBody\n first_body = self.first_list(body)\n if not (first_body <= self.mFirst[a]):\n change = True\n self.mFirst[a] |= first_body \n print('First sets:')\n for v in self.mVariables:\n print(f'First({v}) = {self.mFirst[v]}')\n \nGrammar.compute_first = compute_first\ndel compute_first", "Given a tuple of variables and tokens alpha, the function first_list(alpha) computes the function $\texttt{FirstList}(\alpha)$ that has been defined above. If alpha is nullable, then the result will contain the empty string $\varepsilon = \texttt{''}$.", "def first_list(self, alpha):\n if len(alpha) == 0:\n return { '' }\n elif is_var(alpha[0]): \n v, *r = alpha\n return eps_union(self.mFirst[v], self.first_list(r))\n else:\n t = alpha[0]\n return { t }\n \nGrammar.first_list = first_list\ndel first_list", "The arguments S and T of eps_union are sets that contain tokens and, additionally, they might contain the empty string.", "def eps_union(S, T):\n if '' in S: \n if '' in T: \n return S | T\n return (S - { '' }) | T\n return S", "Given an augmented grammar $G = \langle V,T,R\cup\{\widehat{s} \rightarrow s\,\$\}, \widehat{s}\rangle$ \nand a variable $a$, the set of tokens that might follow $a$ is defined as:\n$$\texttt{Follow}(a) := \n \bigl\{ t \in \widehat{T} \,\bigm|\, \exists \beta,\gamma \in (V \cup \widehat{T})^*: \n \widehat{s} \Rightarrow^* \beta \,a\, t\, \gamma \n \bigr\}.\n$$\nThe function compute_follow computes the sets $\texttt{Follow}(a)$ for all variables $a$ via a fixed-point iteration.", "def compute_follow(self):\n self.mFollow[self.mStart] = { '$' }\n change = True\n while change:\n change = False\n for rule in self.mRules:\n a, body = rule.mVariable, rule.mBody\n for i in range(len(body)):\n if 
is_var(body[i]):\n yi = body[i]\n Tail = self.first_list(body[i+1:])\n firstTail = eps_union(Tail, self.mFollow[a])\n if not (firstTail <= self.mFollow[yi]): \n change = True\n self.mFollow[yi] |= firstTail \n print('Follow sets (note that \"$\" denotes the end of file):');\n for v in self.mVariables:\n print(f'Follow({v}) = {self.mFollow[v]}')\n \nGrammar.compute_follow = compute_follow\ndel compute_follow", "If $\\mathcal{M}$ is a set of marked rules, then the closure of $\\mathcal{M}$ is the smallest set $\\mathcal{K}$ such that\nwe have the following:\n- $\\mathcal{M} \\subseteq \\mathcal{K}$,\n- If $a \\rightarrow \\beta \\bullet c\\, \\delta$ is a marked rule from \n $\\mathcal{K}$, and $c$ is a variable and if, furthermore,\n $c \\rightarrow \\gamma$ is a grammar rule,\n then the marked rule $c \\rightarrow \\bullet \\gamma$\n is an element of $\\mathcal{K}$:\n $$(a \\rightarrow \\beta \\bullet c\\, \\delta) \\in \\mathcal{K} \n \\;\\wedge\\; \n (c \\rightarrow \\gamma) \\in R\n \\;\\Rightarrow\\; (c \\rightarrow \\bullet \\gamma) \\in \\mathcal{K}\n $$\nWe define $\\texttt{closure}(\\mathcal{M}) := \\mathcal{K}$. 
The function cmp_closure computes this closure for a given set of marked rules via a fixed-point iteration.", "def cmp_closure(self, Marked_Rules):\n All_Rules = Marked_Rules\n New_Rules = Marked_Rules\n while True:\n More_Rules = set()\n for rule in New_Rules:\n c = rule.next_var()\n if c == None:\n continue\n for rule in self.mRules:\n head, alpha = rule.mVariable, rule.mBody\n if c == head:\n More_Rules |= { MarkedRule(head, (), alpha) }\n if More_Rules <= All_Rules:\n return frozenset(All_Rules)\n New_Rules = More_Rules - All_Rules\n All_Rules |= New_Rules\n\nGrammar.cmp_closure = cmp_closure\ndel cmp_closure", "Given a set of marked rules $\mathcal{M}$ and a grammar symbol $X$, the function $\texttt{goto}(\mathcal{M}, X)$ \nis defined as follows:\n$$\texttt{goto}(\mathcal{M}, X) := \texttt{closure}\Bigl( \bigl\{ \n a \rightarrow \beta\, X \bullet \delta \bigm| (a \rightarrow \beta \bullet X\, \delta) \in \mathcal{M} \n \bigr\} \Bigr).\n$$", "def goto(self, Marked_Rules, x):\n Result = set()\n for mr in Marked_Rules:\n if mr.symbol_after_dot() == x:\n Result.add(mr.move_dot())\n return self.cmp_closure(Result)\n\nGrammar.goto = goto\ndel goto", "The function all_states computes the set of all states of an SLR-parser. The function starts with the state\n$$ \texttt{closure}\bigl(\{ \widehat{s} \rightarrow \bullet s \, \$ \}\bigr) $$\nand then tries to compute new states by using the function goto. This computation proceeds via a \nfixed-point iteration. 
Once all states have been computed, the function assigns names to these states.\nThis association is stored in the dictionary mStateNames.", "def all_states(self): \n start_state = self.cmp_closure({ MarkedRule('ŝ', (), (self.mStart, '$')) })\n self.mStates = { start_state }\n New_States = self.mStates\n while True:\n More_States = set()\n for Rule_Set in New_States:\n for mr in Rule_Set: \n if not mr.is_complete():\n x = mr.symbol_after_dot()\n if x != '$':\n More_States |= { self.goto(Rule_Set, x) }\n if More_States <= self.mStates:\n break\n New_States = More_States - self.mStates\n self.mStates |= New_States\n print(\"All SLR-states:\")\n counter = 1\n self.mStateNames[start_state] = 's0'\n print(f's0 = {set(start_state)}')\n for state in self.mStates - { start_state }:\n self.mStateNames[state] = f's{counter}'\n print(f's{counter} = {set(state)}')\n counter += 1\n\nGrammar.all_states = all_states\ndel all_states", "The following function computes the action table. The function $\texttt{action}(\mathcal{M},t)$ is defined as follows:\n- If $\mathcal{M}$ contains a marked rule of the form $a \rightarrow \beta \bullet t\, \delta$\n then we have\n $$\texttt{action}(\mathcal{M},t) := \langle \texttt{shift}, \texttt{goto}(\mathcal{M},t) \rangle.$$\n- If $\mathcal{M}$ contains a marked rule of the form $a \rightarrow \beta\, \bullet$ and we have\n $t \in \texttt{Follow}(a)$, then we define\n $$\texttt{action}(\mathcal{M},t) := \langle \texttt{reduce}, a \rightarrow \beta \rangle$$\n- If $\mathcal{M}$ contains the marked rule $\widehat{s} \rightarrow s \bullet \$ $, then we define \n $$\texttt{action}(\mathcal{M},\$) := \texttt{accept}. $$\n- Otherwise, we have\n $$\texttt{action}(\mathcal{M},t) := \texttt{error}. 
$$", "def compute_action_table(self):\n self.mActionTable = {}\n print('\\nAction Table:')\n for state in self.mStates:\n stateName = self.mStateNames[state]\n actionTable = {}\n # compute shift actions\n for token in self.mTokens:\n if token != '$':\n newState = self.goto(state, token)\n if newState != set():\n newName = self.mStateNames[newState]\n actionTable[token] = ('shift', newName)\n self.mActionTable[stateName, token] = ('shift', newName)\n print(f'action(\"{stateName}\", {token}) = (\"shift\", {newName})')\n # compute reduce actions\n for mr in state:\n if mr.is_complete():\n for token in self.mFollow[mr.mVariable]:\n action1 = actionTable.get(token)\n action2 = ('reduce', mr.to_rule())\n if action1 == None:\n actionTable[token] = action2 \n r = self.mRuleNames[mr.to_rule()]\n self.mActionTable[stateName, token] = ('reduce', r)\n print(f'action(\"{stateName}\", {token}) = {action2}')\n elif action1 != action2: \n self.mConflicts = True\n print('')\n print(f'conflict in state {stateName}:')\n print(f'{stateName} = {state}')\n print(f'action(\"{stateName}\", {token}) = {action1}') \n print(f'action(\"{stateName}\", {token}) = {action2}')\n print('')\n for mr in state:\n if mr == MarkedRule('ŝ', (self.mStart,), ('$',)):\n actionTable['$'] = 'accept'\n self.mActionTable[stateName, '$'] = 'accept'\n print(f'action(\"{stateName}\", $) = accept')\n\nGrammar.compute_action_table = compute_action_table\ndel compute_action_table", "The function compute_goto_table computes the goto table.", "def compute_goto_table(self):\n self.mGotoTable = {}\n print('\\nGoto Table:')\n for state in self.mStates:\n for var in self.mVariables:\n newState = self.goto(state, var)\n if newState != set():\n stateName = self.mStateNames[state]\n newName = self.mStateNames[newState]\n self.mGotoTable[stateName, var] = newName\n print(f'goto({stateName}, {var}) = {newName}')\n\nGrammar.compute_goto_table = compute_goto_table\ndel compute_goto_table\n\n%%time\ng = Grammar(grammar)\n\ndef 
strip_quotes(t):\n if t[0] == \"'\" and t[-1] == \"'\":\n return t[1:-1]\n return t\n\ndef dump_parse_table(self, file):\n with open(file, 'w') as handle:\n handle.write('# Grammar rules:\\n')\n for rule in self.mRules:\n rule_name = self.mRuleNames[rule] \n handle.write(f'{rule_name} = (\"{rule.mVariable}\", {rule.mBody})\\n')\n handle.write('\\n# Action table:\\n')\n handle.write('actionTable = {}\\n')\n for s, t in self.mActionTable:\n action = self.mActionTable[s, t]\n t = strip_quotes(t)\n if action[0] == 'reduce':\n rule_name = action[1]\n handle.write(f\"actionTable['{s}', '{t}'] = ('reduce', {rule_name})\\n\")\n elif action == 'accept':\n handle.write(f\"actionTable['{s}', '{t}'] = 'accept'\\n\")\n else:\n handle.write(f\"actionTable['{s}', '{t}'] = {action}\\n\")\n handle.write('\\n# Goto table:\\n')\n handle.write('gotoTable = {}\\n')\n for s, v in self.mGotoTable:\n state = self.mGotoTable[s, v]\n handle.write(f\"gotoTable['{s}', '{v}'] = '{state}'\\n\")\n \nGrammar.dump_parse_table = dump_parse_table\ndel dump_parse_table\n\ng.dump_parse_table('parse-table.py')\n\n!cat parse-table.py\n\n!rm GrammarLexer.* GrammarParser.* Grammar.tokens GrammarListener.py Grammar.interp \n!rm -r __pycache__\n\n!ls" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
btel/2015_eitn_swc_pandas
01 - Introduction.ipynb
bsd-2-clause
[ "<h1>Pandas Tutorial</h1>\n<h3>Software Carpentry, EITN, Paris, November 20th, 2015</h3>\n<h2>Bartosz Teleńczuk</h2>\n\nforked from the tutorial at EuroScipy 2015 by Joris Van den Bossche (Ghent University, Belgium)\nLicensed under CC BY 4.0 Creative Commons\nContent of this talk\n\nWhy do you need pandas?\nBasic introduction to the data structures\nGuided tour through some of the pandas features with two case studies: movie database and a case study about air quality\n\nIf you want to follow along, this is a notebook that you can view or run yourself:\n\nAll materials (notebook, data): https://github.com/btel/2015_eitn_swc_pandas\nYou need pandas >= 0.15.2 (easy solution is using Anaconda)\n\nSome imports:", "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\npd.options.display.max_rows = 8", "Let's start with a showcase\nCase study: air quality in Europe\nAirBase (The European Air quality dataBase): hourly measurements of all air quality monitoring stations from Europe\nStarting from these hourly data for different stations:", "data = pd.read_csv('data/airbase_data.csv', index_col=0, parse_dates=True, na_values='-9999')\n\ndata", "to answering questions about this data in a few lines of code:\nDoes the air pollution show a decreasing trend over the years?", "data['1999':].resample('A').plot(ylim=[0,100])", "How many exceedances of the limit values?", "exceedances = data > 200\nexceedances = exceedances.groupby(exceedances.index.year).sum()\nax = exceedances.loc[2005:].plot(kind='bar')\nax.axhline(18, color='k', linestyle='--')", "What is the difference in diurnal profile between weekdays and weekend?", "data['weekday'] = data.index.weekday\ndata['weekend'] = data['weekday'].isin([5, 6])\ndata_weekend = data.groupby(['weekend', data.index.hour])['FR04012'].mean().unstack(level=0)\ndata_weekend.plot()", "We will come back to these example, and build them up step by step.\nWhy do you need pandas?\nWhy do you need 
pandas?\nWhen working with tabular or structured data (like R dataframe, SQL table, Excel spreadsheet, ...):\n\nImport data\nClean up messy data\nExplore data, gain insight into data\nProcess and prepare your data for analysis\nAnalyse your data (together with scikit-learn, statsmodels, ...)\n\nPandas: data analysis in python\nFor data-intensive work in Python the Pandas library has become essential.\nWhat is pandas?\n\nPandas can be thought of as NumPy arrays with labels for rows and columns, and better support for heterogeneous data types, but it's also much, much more than that.\nPandas can also be thought of as R's data.frame in Python.\n\nIts documentation: http://pandas.pydata.org/pandas-docs/stable/\nKey features\n\nFast, easy and flexible input/output for a lot of different data formats\nWorking with missing data (.dropna(), pd.isnull())\nMerging and joining (concat, join)\nGrouping: groupby functionality\nReshaping (stack, pivot)\nPowerful time series manipulation (resampling, timezones, ..)\nEasy plotting\n\nFurther reading\n\nthe documentation: http://pandas.pydata.org/pandas-docs/stable/\nWes McKinney's book \"Python for Data Analysis\"\nlots of tutorials on the internet, eg http://github.com/jvns/pandas-cookbook\n\nHow can you help?\nWe need you!\nContributions are very welcome and can be in different domains:\n\nreporting issues\nimproving the documentation\ntesting release candidates and provide feedback\ntriaging and fixing bugs\nimplementing new features\nspreading the word\n\n-> https://github.com/pydata/pandas" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
alexandrnikitin/algorithm-sandbox
courses/DAT256x/Module01/01-05-Polynomials.ipynb
mit
[ "Polynomials\nSome of the equations we've looked at so far include expressions that are actually polynomials; but what is a polynomial, and why should you care?\nA polynomial is an algebraic expression containing one or more terms that each meet some specific criteria. Specifically:\n- Each term can contain:\n - Numeric values that are coefficients or constants (for example 2, -5, <sup>1</sup>/<sub>7</sub>)\n - Variables (for example, x, y)\n - Non-negative integer exponents (for example <sup>2</sup>, <sup>64</sup>)\n- The terms can be combined using arithmetic operations - but not division by a variable.\nFor example, the following expression is a polynomial:\n\\begin{equation}12x^{3} + 2x - 16 \\end{equation}\nWhen identifying the terms in a polynomial, it's important to correctly interpret the arithmetic addition and subtraction operators as the sign for the term that follows. For example, the polynomial above contains the following three terms:\n- 12x<sup>3</sup>\n- 2x\n- -16\nThe terms themselves include:\n- Two coefficients (12 and 2) and a constant (-16)\n- A variable (x)\n- An exponent (<sup>3</sup>)\nA polynomial that contains three terms is also known as a trinomial. Similarly, a polynomial with two terms is known as a binomial and a polynomial with only one term is known as a monomial.\nSo why do we care? Well, polynomials have some useful properties that make them easy to work with. For example, if you multiply, add, or subtract a polynomial, the result is always another polynomial.\nStandard Form for Polynomials\nTechnically, you can write the terms of a polynomial in any order; but the standard form for a polynomial is to start with the highest degree first and constants last. 
The degree of a term is the highest order (exponent) in the term, and the highest order in a polynomial determines the degree of the polynomial itself.\nFor example, consider the following expression:\n\\begin{equation}3x + 4xy^{2} - 3 + x^{3} \\end{equation}\nTo express this as a polynomial in the standard form, we need to re-order the terms like this:\n\\begin{equation}x^{3} + 4xy^{2} + 3x - 3 \\end{equation}\nSimplifying Polynomials\nWe saw previously how you can simplify an equation by combining like terms. You can simplify polynomials in the same way.\nFor example, look at the following polynomial:\n\\begin{equation}x^{3} + 2x^{3} - 3x - x + 8 - 3 \\end{equation}\nIn this case, we can combine x<sup>3</sup> and 2x<sup>3</sup> by adding them to make 3x<sup>3</sup>. Then we can add -3x and -x (which is really just a shorthand way to say -1x) to get -4x, and then add 8 and -3 to get 5. Our simplified polynomial then looks like this:\n\\begin{equation}3x^{3} - 4x + 5 \\end{equation}\nWe can use Python to compare the original and simplified polynomials to check them - using an arbitrary random value for x:", "from random import randint\nx = randint(1,100)\n\n(x**3 + 2*x**3 - 3*x - x + 8 - 3) == (3*x**3 - 4*x + 5)", "Adding Polynomials\nWhen you add two polynomials, the result is a polynomial. Here's an example:\n\\begin{equation}(3x^{3} - 4x + 5) + (2x^{3} + 3x^{2} - 2x + 2) \\end{equation}\nBecause this is an addition operation, you can simply add all of the like terms from both polynomials. 
To make this clear, let's first put the like terms together:\n\\begin{equation}3x^{3} + 2x^{3} + 3x^{2} - 4x -2x + 5 + 2 \\end{equation}\nThis simplifies to:\n\\begin{equation}5x^{3} + 3x^{2} - 6x + 7 \\end{equation}\nWe can verify this with Python:", "from random import randint\nx = randint(1,100)\n\n\n(3*x**3 - 4*x + 5) + (2*x**3 + 3*x**2 - 2*x + 2) == 5*x**3 + 3*x**2 - 6*x + 7", "Subtracting Polynomials\nSubtracting polynomials is similar to adding them but you need to take into account that one of the polynomials is a negative.\nConsider this expression:\n\\begin{equation}(2x^{2} - 4x + 5) - (x^{2} - 2x + 2) \\end{equation}\nThe key to performing this calculation is to realize that the subtraction of the second polynomial is really an expression that adds -1(x<sup>2</sup> - 2x + 2); so you can use the distributive property to multiply each of the terms in the polynomial by -1 (which in effect simply reverses the sign for each term). So our expression becomes:\n\\begin{equation}(2x^{2} - 4x + 5) + (-x^{2} + 2x - 2) \\end{equation}\nWhich we can solve as an addition problem. First place the like terms together:\n\\begin{equation}2x^{2} + -x^{2} + -4x + 2x + 5 + -2 \\end{equation}\nWhich simplifies to:\n\\begin{equation}x^{2} - 2x + 3 \\end{equation}\nLet's check that with Python:", "from random import randint\nx = randint(1,100)\n\n(2*x**2 - 4*x + 5) - (x**2 - 2*x + 2) == x**2 - 2*x + 3", "Multiplying Polynomials\nTo multiply two polynomials, you need to perform the following two steps:\n1. Multiply each term in the first polynomial by each term in the second polynomial.\n2. Add the results of the multiplication operations, combining like terms where possible.\nFor example, consider this expression:\n\\begin{equation}(x^{4} + 2)(2x^{2} + 3x - 3) \\end{equation}\nLet's do the first step and multiply each term in the first polynomial by each term in the second polynomial. 
The first term in the first polynomial is x<sup>4</sup>, and the first term in the second polynomial is 2x<sup>2</sup>, so multiplying these gives us 2x<sup>6</sup>. Then we can multiply the first term in the first polynomial (x<sup>4</sup>) by the second term in the second polynomial (3x), which gives us 3x<sup>5</sup>, and so on until we've multiplied all of the terms in the first polynomial by all of the terms in the second polynomial, which results in this:\n\\begin{equation}2x^{6} + 3x^{5} - 3x^{4} + 4x^{2} + 6x - 6 \\end{equation}\nWe can verify a match between this result and the original expression with the following Python code:", "from random import randint\nx = randint(1,100)\n\n(x**4 + 2)*(2*x**2 + 3*x - 3) == 2*x**6 + 3*x**5 - 3*x**4 + 4*x**2 + 6*x - 6", "Dividing Polynomials\nWhen you need to divide one polynomial by another, there are two approaches you can take depending on the number of terms in the divisor (the expression you're dividing by).\nDividing Polynomials Using Simplification\nIn the simplest case, division of a polynomial by a monomial, the operation is really just simplification of a fraction.\nFor example, consider the following expression:\n\\begin{equation}(4x + 6x^{2}) \\div 2x \\end{equation}\nThis can also be written as:\n\\begin{equation}\\frac{4x + 6x^{2}}{2x} \\end{equation}\nOne approach to simplifying this fraction is to split it into a separate fraction for each term in the dividend (the expression we're dividing), like this:\n\\begin{equation}\\frac{4x}{2x} + \\frac{6x^{2}}{2x}\\end{equation}\nThen we can simplify each fraction and add the results. For the first fraction, 2x goes into 4x twice, so the fraction simplifies to 2; and for the second, 6x<sup>2</sup> is 2x multiplied by 3x. 
So our answer is 2 + 3x:\n\\begin{equation}2 + 3x\\end{equation}\nLet's use Python to compare the original fraction with the simplified result for an arbitrary value of x:", "from random import randint\nx = randint(1,100)\n\n(4*x + 6*x**2) / (2*x) == 2 + 3*x", "Dividing Polynomials Using Long Division\nThings get a little more complicated for divisors with more than one term.\nSuppose we have the following expression:\n\\begin{equation}(x^{2} + 2x - 3) \\div (x - 2) \\end{equation}\nAnother way of writing this is to use the long-division format, like this:\n\\begin{equation} x - 2 |\\overline{x^{2} + 2x - 3} \\end{equation}\nWe begin long-division by dividing the highest order divisor into the highest order dividend - so in this case we divide x into x<sup>2</sup>. Here, x goes into x<sup>2</sup> x times, so we put an x on top and then multiply it through the divisor:\n\\begin{equation} \\;\\;\\;\\;x \\end{equation}\n\\begin{equation}x - 2 |\\overline{x^{2} + 2x - 3} \\end{equation}\n\\begin{equation} \\;x^{2} -2x \\end{equation}\nNow we'll subtract the remaining dividend, and then carry down the -3 that we haven't used to see what's left:\n\\begin{equation} \\;\\;\\;\\;x \\end{equation}\n\\begin{equation}x - 2 |\\overline{x^{2} + 2x - 3} \\end{equation}\n\\begin{equation}- (x^{2} -2x) \\end{equation}\n\\begin{equation}\\;\\;\\;\\;\\;\\overline{\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;4x -3} \\end{equation}\nOK, now we'll divide our highest order divisor into the highest order of the remaining dividend. 
In this case, x goes into 4x four times, so we'll add a 4 to the top line, multiply it through the divisor, and subtract the remaining dividend:\n\\begin{equation} \\;\\;\\;\\;\\;\\;\\;\\;x + 4 \\end{equation}\n\\begin{equation}x - 2 |\\overline{x^{2} + 2x - 3} \\end{equation}\n\\begin{equation}- (x^{2} -2x) \\end{equation}\n\\begin{equation}\\;\\;\\;\\;\\;\\overline{\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;4x -3} \\end{equation}\n\\begin{equation}- (\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;4x -8) \\end{equation}\n\\begin{equation}\\;\\;\\;\\;\\;\\overline{\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;5} \\end{equation}\nWe're now left with just 5, which we can't divide further by x - 2; so that's our remainder, which we'll add as a fraction.\nThe solution to our division problem is:\n\\begin{equation}x + 4 + \\frac{5}{x-2} \\end{equation}\nOnce again, we can use Python to check our answer:", "from random import randint\nx = randint(3,100)\n\n(x**2 + 2*x -3)/(x-2) == x + 4 + (5/(x-2))\n " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Lstyle1/Deep_learning_projects
transfer-learning/Transfer_Learning.ipynb
mit
[ "Transfer Learning\nMost of the time you won't want to train a whole convolutional network yourself. Training modern ConvNets on huge datasets like ImageNet takes weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.\n<img src=\"assets/cnnarchitecture.jpg\" width=700px>\nVGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.\nYou can read more about transfer learning from the CS231n course notes.\nPretrained VGGNet\nWe'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.\ngit clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg\nThis is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. 
Then download the parameter file using the next cell.", "!pip install tqdm\n\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\nvgg_dir = 'tensorflow_vgg/'\n# Make sure vgg exists\nif not isdir(vgg_dir):\n raise Exception(\"VGG directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(vgg_dir + \"vgg16.npy\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',\n vgg_dir + 'vgg16.npy',\n pbar.hook)\nelse:\n print(\"Parameter file already exists!\")", "Flower power\nHere we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.", "import tarfile\n\ndataset_folder_path = 'flower_photos'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('flower_photos.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:\n urlretrieve(\n 'http://download.tensorflow.org/example_images/flower_photos.tgz',\n 'flower_photos.tar.gz',\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with tarfile.open('flower_photos.tar.gz') as tar:\n tar.extractall()\n tar.close()", "ConvNet Codes\nBelow, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. 
We can then write these to a file for later when we build our own classifier.\nHere we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \\times 224 \\times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):\n```\nself.conv1_1 = self.conv_layer(bgr, \"conv1_1\")\nself.conv1_2 = self.conv_layer(self.conv1_1, \"conv1_2\")\nself.pool1 = self.max_pool(self.conv1_2, 'pool1')\nself.conv2_1 = self.conv_layer(self.pool1, \"conv2_1\")\nself.conv2_2 = self.conv_layer(self.conv2_1, \"conv2_2\")\nself.pool2 = self.max_pool(self.conv2_2, 'pool2')\nself.conv3_1 = self.conv_layer(self.pool2, \"conv3_1\")\nself.conv3_2 = self.conv_layer(self.conv3_1, \"conv3_2\")\nself.conv3_3 = self.conv_layer(self.conv3_2, \"conv3_3\")\nself.pool3 = self.max_pool(self.conv3_3, 'pool3')\nself.conv4_1 = self.conv_layer(self.pool3, \"conv4_1\")\nself.conv4_2 = self.conv_layer(self.conv4_1, \"conv4_2\")\nself.conv4_3 = self.conv_layer(self.conv4_2, \"conv4_3\")\nself.pool4 = self.max_pool(self.conv4_3, 'pool4')\nself.conv5_1 = self.conv_layer(self.pool4, \"conv5_1\")\nself.conv5_2 = self.conv_layer(self.conv5_1, \"conv5_2\")\nself.conv5_3 = self.conv_layer(self.conv5_2, \"conv5_3\")\nself.pool5 = self.max_pool(self.conv5_3, 'pool5')\nself.fc6 = self.fc_layer(self.pool5, \"fc6\")\nself.relu6 = tf.nn.relu(self.fc6)\n```\nSo what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\nThis creates the vgg object, then builds the graph with vgg.build(input_). 
Then to get the values from the layer,\nfeed_dict = {input_: images}\ncodes = sess.run(vgg.relu6, feed_dict=feed_dict)", "!pip install scikit-image\n\nimport os\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_vgg import vgg16\nfrom tensorflow_vgg import utils\n\ndata_dir = 'flower_photos/'\ncontents = os.listdir(data_dir)\nclasses = [each for each in contents if os.path.isdir(data_dir + each)]", "Below I'm running images through the VGG network in batches.\n\nExercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).", "# Set the batch size higher if you can fit in in your GPU memory\nbatch_size = 10\ncodes_list = []\nlabels = []\nbatch = []\n\ncodes = None\n\nwith tf.Session() as sess:\n \n # TODO: Build the vgg network here\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\n\n for each in classes:\n print(\"Starting {} images\".format(each))\n class_path = data_dir + each\n files = os.listdir(class_path)\n for ii, file in enumerate(files, 1):\n # Add images to the current batch\n # utils.load_image crops the input images for us, from the center\n img = utils.load_image(os.path.join(class_path, file))\n batch.append(img.reshape((1, 224, 224, 3)))\n labels.append(each)\n \n # Running the batch through the network to get the codes\n if ii % batch_size == 0 or ii == len(files):\n \n # Image batch to pass to VGG network\n images = np.concatenate(batch)\n \n # TODO: Get the values from the relu6 layer of the VGG network\n feed_dict = {input_: images}\n codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)\n \n # Here I'm building an array of the codes\n if codes is None:\n codes = codes_batch\n else:\n codes = np.concatenate((codes, codes_batch))\n \n # Reset to start building the next batch\n batch = []\n print('{} images processed'.format(ii))\n\n# write codes to file\nwith 
open('codes', 'w') as f:\n codes.tofile(f)\n \n# write labels to file\nimport csv\nwith open('labels', 'w') as f:\n writer = csv.writer(f, delimiter='\\n')\n writer.writerow(labels)\n\n#Test\nprint(labels[:10])\nprint(codes[:10])\nprint(codes.shape)", "Building the Classifier\nNow that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.", "# read codes and labels from file\nimport csv\n\nwith open('labels') as f:\n reader = csv.reader(f, delimiter='\\n')\n labels = np.array([each for each in reader if len(each) > 0]).squeeze()\nwith open('codes') as f:\n codes = np.fromfile(f, dtype=np.float32)\n codes = codes.reshape((len(labels), -1))\n\n#Test\nprint(labels[:10])\nprint(codes[:10])\nprint(codes.shape)", "Data prep\nAs usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!\n\nExercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.", "from sklearn import preprocessing\nlabel_binarizer = preprocessing.LabelBinarizer()\nlabel_binarizer.fit(classes)\nlabels_vecs = label_binarizer.transform(labels) # Your one-hot encoded labels array here\n\n#Test\nlabel_binarizer.classes_\nprint(labels_vecs[:5])", "Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. 
The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.\nYou can create the splitter like so:\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nThen split the data with \nsplitter = ss.split(x, y)\nss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.\n\nExercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.", "from sklearn.model_selection import StratifiedShuffleSplit\n\n#shufflesplitter for train and test(valid)\nshuffle_train_test = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nshuffle_test_valid = StratifiedShuffleSplit(n_splits=1, test_size=0.5)\n\ntrain_index, test_valid_index = next(shuffle_train_test.split(codes, labels_vecs))\ntest_index, valid_index = next(shuffle_test_valid.split(codes[test_valid_index], labels_vecs[test_valid_index]))\n# the second split returns indices into the subset, so map them back to the full array\ntest_index, valid_index = test_valid_index[test_index], test_valid_index[valid_index]\n\ntrain_x, train_y = codes[train_index], labels_vecs[train_index]\nval_x, val_y = codes[valid_index], labels_vecs[valid_index]\ntest_x, test_y = codes[test_index], labels_vecs[test_index]\n\nprint(\"Train shapes (x, y):\", train_x.shape, train_y.shape)\nprint(\"Validation shapes (x, y):\", val_x.shape, val_y.shape)\nprint(\"Test shapes (x, y):\", test_x.shape, test_y.shape)", "If you did it right, you should see these sizes for the training sets:\nTrain shapes (x, y): (2936, 4096) (2936, 5)\nValidation shapes (x, y): (367, 4096) (367, 5)\nTest shapes (x, y): (367, 4096) (367, 5)\nClassifier layers\nOnce you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.\n\nExercise: With the codes and labels loaded, build the classifier. 
Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.", "inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])\nlabels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])\n\n# TODO: Classifier layers and operations\nfully_layer = tf.contrib.layers.fully_connected(inputs=inputs_,\\\n num_outputs=256,\\\n weights_initializer=tf.truncated_normal_initializer(stddev=0.1))\nlogits = tf.contrib.layers.fully_connected(inputs=fully_layer,\\\n num_outputs=len(classes),\\\n activation_fn=None,\\\n weights_initializer=tf.truncated_normal_initializer(stddev=0.1))\n # output layer logits\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)) # cross entropy loss\noptimizer = tf.train.AdamOptimizer().minimize(cost) # training optimizer\n\n# Operations for validation/test accuracy\npredicted = tf.nn.softmax(logits)\ncorrect_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Batches!\nHere is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.", "def get_batches(x, y, n_batches=10):\n \"\"\" Return a generator that yields batches from arrays x and y. 
\"\"\"\n batch_size = len(x)//n_batches\n \n for ii in range(0, n_batches*batch_size, batch_size):\n # If we're not on the last batch, grab data with size batch_size\n if ii != (n_batches-1)*batch_size:\n X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] \n # On the last batch, grab the rest of the data\n else:\n X, Y = x[ii:], y[ii:]\n # I love generators\n yield X, Y", "Training\nHere, we'll train the network.\n\nExercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!", "epochs = 5\n\nsaver = tf.train.Saver()\nwith tf.Session() as sess: \n sess.run(tf.global_variables_initializer())\n # TODO: Your training code here\n for epoch in range(epochs):\n for x, y in get_batches(train_x, train_y):\n loss, _ = sess.run([cost,optimizer], feed_dict={inputs_: x, labels_: y})\n #if epoch % 5 == 0:\n val_accuracy = sess.run(accuracy, feed_dict={inputs_: val_x, labels_: val_y})\n print(\"Epoch: {:>3}, Training Loss: {:.5f}, Validation Accuracy: {:.4f}\".format(epoch+1, loss, val_accuracy))\n \n saver.save(sess, \"checkpoints/flowers.ckpt\")", "Testing\nBelow you see the test accuracy. 
You can also see the predictions returned for images.", "with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: test_x,\n labels_: test_y}\n test_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Test accuracy: {:.4f}\".format(test_acc))\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom scipy.ndimage import imread", "Below, feel free to choose images and see how the trained classifier predicts the flowers in them.", "test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'\ntest_img = imread(test_img_path)\nplt.imshow(test_img)\n\n# Run this cell if you don't have a vgg graph built\nif 'vgg' in globals():\n print('\"vgg\" object already exists. Will not create again.')\nelse:\n #create vgg\n with tf.Session() as sess:\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n vgg = vgg16.Vgg16()\n vgg.build(input_)\n\nwith tf.Session() as sess:\n img = utils.load_image(test_img_path)\n img = img.reshape((1, 224, 224, 3))\n\n feed_dict = {input_: img}\n code = sess.run(vgg.relu6, feed_dict=feed_dict)\n \nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: code}\n prediction = sess.run(predicted, feed_dict=feed).squeeze()\n\nplt.imshow(test_img)\n\nplt.barh(np.arange(5), prediction)\n_ = plt.yticks(np.arange(5), label_binarizer.classes_)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ethen8181/machine-learning
model_selection/kl_divergence.ipynb
mit
[ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Kullback-Leibler-Divergence\" data-toc-modified-id=\"Kullback-Leibler-Divergence-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Kullback-Leibler Divergence</a></span></li><li><span><a href=\"#Reference\" data-toc-modified-id=\"Reference-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Reference</a></span></li></ul></div>", "# code for loading the format for the notebook\nimport os\n\n# path : store the current path to convert back to it later\npath = os.getcwd()\nos.chdir(os.path.join('..', 'notebook_format'))\n\nfrom formats import load_style\nload_style(css_style='custom2.css', plot_style=False)\n\nos.chdir(path)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.stats import binom\n\n# 1. magic for inline plot\n# 2. magic to print version\n# 3. magic so that the notebook will reload external python modules\n# 4. magic to enable retina (high resolution) plots\n# https://gist.github.com/minrk/3301035\n%matplotlib inline\n%load_ext watermark\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format='retina'\n%watermark -a 'Ethen' -d -t -v -p numpy,scipy,matplotlib", "Kullback-Leibler Divergence\nIn this post we're going to take a look at a way of comparing two probability distributions called Kullback-Leibler Divergence (a.k.a. KL divergence). Very often in machine learning, we'll replace observed data or a complex distribution with a simpler, approximating distribution. KL Divergence helps us to measure just how much information we lose when we choose an approximation, thus we can even use it as our objective function to pick which approximation would work best for the problem at hand.\nLet's look at an example: (The example here is borrowed from the following link. Blog: Kullback-Leibler Divergence Explained).\nSuppose we're a group of scientists visiting space and we discovered some space worms. 
These space worms have a varying number of teeth. After a decent amount of collecting, we have arrived at this empirical probability distribution of the number of teeth in each worm:", "# ensure the probability adds up to 1\ntrue_data = np.array([0.02, 0.03, 0.05, 0.14, 0.16, 0.15, 0.12, 0.08, 0.1, 0.08, 0.07])\nn = true_data.shape[0]\nindex = np.arange(n)\nassert sum(true_data) == 1.0\n\n# change default style figure and font size\nplt.rcParams['figure.figsize'] = 8, 6\nplt.rcParams['font.size'] = 12\n\nplt.bar(index, true_data)\nplt.xlabel('Teeth Number')\nplt.title('Probability Distribution of Space Worm Teeth')\nplt.ylabel('Probability')\nplt.xticks(index)\nplt.show()", "Now we need to send this information back to Earth. But the problem is that sending information from space to Earth is expensive. So we wish to summarize it with a minimal amount of information, perhaps just one or two parameters. One option to represent the distribution of teeth in worms is a uniform distribution.", "uniform_data = np.full(n, 1.0 / n)\n\n# we can plot our approximated distribution against the original distribution\nwidth = 0.3\nplt.bar(index, true_data, width=width, label='True')\nplt.bar(index + width, uniform_data, width=width, label='Uniform')\nplt.xlabel('Teeth Number')\nplt.title('Probability Distribution of Space Worm Teeth')\nplt.ylabel('Probability')\nplt.xticks(index)\nplt.legend()\nplt.show()", "Another option is to use a binomial distribution.", "# we estimate the parameter of the binomial distribution\np = true_data.dot(index) / n\nprint('p for binomial distribution:', p)\nbinom_data = binom.pmf(index, n, p)\nbinom_data\n\nwidth = 0.3\nplt.bar(index, true_data, width=width, label='True')\nplt.bar(index + width, binom_data, width=width, label='Binomial')\nplt.xlabel('Teeth Number')\nplt.title('Probability Distribution of Space Worm Teeth')\nplt.ylabel('Probability')\nplt.xticks(np.arange(n))\nplt.legend()\nplt.show()", "Comparing each of our models with our 
original data, we can see that neither one is a perfect match, but the question now becomes, which one is better?", "plt.bar(index - width, true_data, width=width, label='True')\nplt.bar(index, uniform_data, width=width, label='Uniform')\nplt.bar(index + width, binom_data, width=width, label='Binomial')\nplt.xlabel('Teeth Number')\nplt.title('Probability Distribution of Space Worm Teeth Number')\nplt.ylabel('Probability')\nplt.xticks(index)\nplt.legend()\nplt.show()", "Given these two distributions that we are using to approximate the original distribution, we need a quantitative way to measure which one does the job better. This is where Kullback-Leibler (KL) Divergence comes in.\nKL Divergence has its origins in information theory. The primary goal of information theory is to quantify how much information is in our data. To recap, one of the most important metrics in information theory is called Entropy, which we will denote as $H$. The entropy for a probability distribution is defined as:\n\\begin{align}\nH = -\\sum_{i=1}^N p(x_i) \\cdot \\log p(x_i)\n\\end{align}\nIf we use $\\log_2$ for our calculation, we can interpret entropy as the minimum number of bits it would take us to encode events drawn from distribution $p$. Knowing we have a way to quantify how much information is in our data, we now extend it to quantify how much information is lost when we substitute our observed distribution with a parameterized approximation.\nThe formula for Kullback-Leibler Divergence is a slight modification of entropy. 
Rather than just having our probability distribution $p$, we add in our approximating distribution $q$, then we look at the difference of the log values for each:\n\\begin{align}\nD_{KL}(p || q) = \\sum_{i=1}^{N} p(x_i)\\cdot (\\log p(x_i) - \\log q(x_i))\n\\end{align}\nEssentially, what we're looking at with KL divergence is the expectation of the log difference between the probability of data under the original distribution and under the approximating distribution. Because we're weighting the difference between the two distributions by $p(x_i)$, matching areas where the original distribution has a higher probability is more important than matching areas that have a lower probability. Again, if we think in terms of $\\log_2$, we can interpret this as how many extra bits of information we need to encode events drawn from the true distribution $p$ when using an optimal code for distribution $q$ rather than $p$.\nThe more common way to see KL divergence written is as follows:\n\\begin{align}\nD_{KL}(p || q) = \\sum_{i=1}^N p(x_i) \\cdot \\log \\frac{p(x_i)}{q(x_i)}\n\\end{align}\nsince $\\log a - \\log b = \\log\\frac{a}{b}$.\nIf the two distributions $p$ and $q$ match perfectly, $D_{KL}(p || q) = 0$; otherwise, the lower the KL divergence value, the better we have matched the true distribution with our approximation.\nSide Note: If you're interested in understanding the relationship between entropy, cross entropy and KL divergence, the following links are good places to start. Maybe they will clear up some of the hand-wavy explanation of these concepts ... 
Youtube: A Short Introduction to Entropy, Cross-Entropy and KL-Divergence and StackExchange: Why do we use Kullback-Leibler divergence rather than cross entropy in the t-SNE objective function?\nGiven this information, we can go ahead and calculate the KL divergence for our two approximating distributions.", "# both functions are equivalent ways of computing KL-divergence;\n# one uses a for loop and the other uses vectorization\ndef compute_kl_divergence_loop(p_probs, q_probs):\n    \"\"\"KL (p || q)\"\"\"\n    kl_div = 0.0\n    for p, q in zip(p_probs, q_probs):\n        kl_div += p * np.log(p / q)\n\n    return kl_div\n\n\ndef compute_kl_divergence(p_probs, q_probs):\n    \"\"\"KL (p || q)\"\"\"\n    kl_div = p_probs * np.log(p_probs / q_probs)\n    return np.sum(kl_div)\n\n\nprint('KL(True||Uniform): ', compute_kl_divergence(true_data, uniform_data))\nprint('KL(True||Binomial): ', compute_kl_divergence(true_data, binom_data))", "As we can see, the information lost by using the Binomial approximation is greater than that lost using the Uniform approximation. If we have to choose one to represent our observations, we're better off sticking with the Uniform approximation.\nTo close this discussion, we used KL-divergence to calculate which of our approximating distributions more closely reflects our true distribution. One caveat to note is that it may be tempting to think of KL-divergence as a way of measuring distance; however, we do not categorize it as a distance metric, because it is asymmetric. In other words, $D_{KL}(p || q) \\neq D_{KL}(q || p)$.\nReference\n\nBlog: Kullback-Leibler Divergence Explained\nYoutube: A Short Introduction to Entropy, Cross-Entropy and KL-Divergence\nStackExchange: Why do we use Kullback-Leibler divergence rather than cross entropy in the t-SNE objective function?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jaimefrio/pydatabcn2017
taking_numpy_in_stride/Taking NumPy In Stride - Student Version.ipynb
unlicense
[ "import numpy as np", "Array views and slicing\nA NumPy array is an object of numpy.ndarray type:", "a = np.arange(3)\ntype(a)", "All ndarrays have a .base attribute.\nIf this attribute is not None, then the array is a view of some other object's memory, typically another ndarray.\nThis is a very powerful tool, because allocating memory and copying memory contents are expensive operations, but updating metadata on how to interpret some already allocated memory is cheap!\nThe simplest way of creating an array's view is by slicing it:", "a = np.arange(3)\na.base is None\n\na[:].base is None", "Let's look more closely at what an array's metadata looks like. NumPy provides the np.info function, which can list for us some low level attributes of an array:", "np.info(a)", "By the end of the workshop you will understand what most of these mean.\nBut rather than listen through a lesson, you get to try and figure what they mean yourself.\nTo help you with that, here's a function that prints the information from two arrays side by side:", "def info_for_two(one_array, another_array):\n \"\"\"Prints side-by-side results of running np.info on its inputs.\"\"\"\n def info_as_ordered_dict(array):\n \"\"\"Converts return of np.infor into an ordered dict.\"\"\"\n import collections\n import io\n buffer = io.StringIO()\n np.info(array, output=buffer)\n data = (\n item.split(':') for item in buffer.getvalue().strip().split('\\n'))\n return collections.OrderedDict(\n ((key, value.strip()) for key, value in data))\n one_dict = info_as_ordered_dict(one_array)\n another_dict = info_as_ordered_dict(another_array)\n name_w = max(len(name) for name in one_dict.keys())\n one_w = max(len(name) for name in one_dict.values())\n another_w = max(len(name) for name in another_dict.values())\n output = (\n f'{name:<{name_w}} : {one:>{one_w}} : {another:>{another_w}}'\n for name, one, another in zip(\n one_dict.keys(), one_dict.values(), another_dict.values()))\n print('\\n'.join(output))", 
"Exercise 1.\n\nCreate a one dimensional NumPy array with a few items (consider using np.arange).\nCompare the printout of np.info on your array and on slices of it (use the [start:stop:step] indexing syntax, and make sure to try steps other than one).\nDo you see any patterns?", "# Your code goes here", "Exercise 1 debrief\nEvery array has an underlying block of memory assigned to it.\nWhen we slice an array, rather than making a copy of it, NumPy makes a view, reusing the memory block, but interpreting it differently.\nLets take a look at what NumPy did for us in the above examples, and make sense of some of the changes to info.\n\n\nshape: for a one dimensional array shape is a single item tuple, equal to the total number of items in the array. You can get the shape of an array as its .shape attribute.\nstrides: is also a single item tuple for one-dimensional arrays, its value being the number of bytes to skip in memory to get to the next item. And yes, strides can be negative. You can get this as the .strides attribute of any array.\ndata pointer: this is the address in memory of the first byte of the first item of the array. Note that this doesn't have to be the same as the first byte of the underlying memory block! You rarely need to know the exact address of the data pointer, but it's part of the string representation of the arrays .data attribute. \nitemsize: this isn't properly an attribute of the array, but of it's data type. It is the number of bytes that an array item takes up in memory. You can get this value from an array as the .itemsize attribute of its .dtype attribute, i.e. array.dtype.itemsize.\ntype: this lets us know how each array item should be interpreted e.g. for calculations. We'll talk more about this later, but you can get an array's type object through its .dtype attribute.\ncontiguous: this is one of several boolean flags of an array. 
Its meaning is a little more specific, but for now let's say it tells us whether the array items use the memory block efficiently, without leaving unused spaces between items. Its value can be checked as the .contiguous attribute of the array's .flags attribute.\n\nExercise 2\nTake a couple of minutes to familiarize yourself with the NumPy array's attributes discussed above:\n\nCreate a small one dimensional array of your choosing.\nLook at its .shape, .strides, .dtype, .flags and .data attributes.\nFor .dtype and .flags, store them into a separate variable, and use tab completion on those to explore their subattributes.", "# Your code goes here", "A look at data types\nSimilarly to how we can change the shape, strides and data pointer of an array through slicing, we can change how its items are interpreted by changing its data type.\nThis is done by calling the array's .view() method, and passing it the new data type.\nBut before we go there, let's look a little closer at dtypes. You are hopefully familiar with the basic NumPy numerical data types:\n| Type Family | NumPy Defined Types | Character Codes |\n| :---: | :---: | :---: |\n| boolean | np.bool | '?' |\n| unsigned integers | np.uint8 - np.uint64 | 'u1', 'u2', 'u4', 'u8' |\n| signed integers | np.int8 - np.int64 | 'i1', 'i2', 'i4', 'i8' |\n| floating point | np.float16 - np.float128 | 'f2', 'f4', 'f8', 'f16' |\n| complex | np.complex64, np.complex128 | 'c8', 'c16' |\nYou can create a new data type by calling its constructor, np.dtype(), with either a NumPy defined type, or the character code.\nCharacter codes can have '&lt;' or '&gt;' prepended, to indicate whether the type is little or big endian. If unspecified, native encoding is used, which for all practical purposes is going to be little endian.\nExercise 3\nLet's play a little with dtype views:\n\nCreate a simple array of a type you feel comfortable you understand, e.g. np.arange(4, dtype=np.uint16).\nTake a view of type np.uint8 of your array. 
This will give you the raw byte contents of your array. Is this what you were expecting?\nTake a few views of your array, with dtypes of larger itemsize, or changing the endianness of the data type. Try to predict what the output will be before running the examples.\nTake a look at the Wikipedia page on single precision floating point numbers, more specifically its examples of encodings. Create arrays of four np.uint8 values which, when viewed as np.float32, give the values 1, -2, and 1/3.", "# Your code goes here", "The Constructor They Don't Want You To Know About.\nYou typically construct your NumPy arrays using one of the many factory functions provided, np.array() being the most popular.\nBut it is also possible to call the np.ndarray object constructor directly.\nYou will typically not want to do this, because there are probably simpler alternatives.\nBut it is a great way of putting your understanding of views of arrays to the test!\nYou can check the full documentation, but the np.ndarray constructor takes the following arguments that we care about:\n\nshape: the shape of the returned array,\ndtype: the data type of the returned array,\nbuffer: an object to reuse the underlying memory from, e.g. an existing array or its .data attribute,\noffset: by how many bytes to move the starting data pointer of the returned array relative to the passed buffer,\nstrides: the strides of the returned array.\n\nExercise 4\nWrite a function, using the np.ndarray constructor, that takes a one dimensional array and returns a reversed view of it.", "# Your code goes here", "Reshaping Into Higher Dimensions\nSo far we have stuck to one dimensional arrays. Things get substantially more interesting when we move into higher dimensions.\nOne way of getting views with a different number of dimensions is by using the .reshape() method of NumPy arrays, or the equivalent np.reshape() function.\nThe first argument to any of the reshape functions is the new shape of the array. 
When providing it, keep in mind: \n\nthe total size of the array must stay unchanged, i.e. the product of the values of the new shape tuple must be equal to the product of the values of the old shape tuple.\nby entering -1 for one of the new dimensions, you can have NumPy compute its value for you, but the other dimensions must be compatible with the calculated one being an integer.\n\n.reshape() can also take an order= kwarg, which can be set to 'C' (as the programming language) or 'F' (for the Fortran programming language). These correspond to row and column major orders, respectively.\nExercise 5\nLet's look at how multidimensional arrays are represented in NumPy with an exercise.\n\nCreate a small linear array with a total length that is a multiple of two different small primes, e.g. 6 = 2 * 3.\nReshape the array into a two dimensional one, starting with the default order='C'. Try both possible combinations of rows and columns, e.g. (2, 3) and (3, 2). Look at the resulting arrays, and compare their metadata. Do you understand what's going on?\nTry the same reshaping with order='F'. Can you see what the differences are?\nIf you feel confident with these, give a higher dimensional array a try.", "# Your code goes here", "Exercise 5 debrief\nAs the examples show, an n-dimensional array will have an n item tuple .shape and .strides. The number of dimensions can be directly queried from the .ndim attribute.\nThe shape tells us how large the array is along each dimension; the strides tell us how many bytes to skip in memory to get to the next item along each dimension.\nWhen we reshape an array using C order, a.k.a. row major order, items along higher dimensions are closer in memory. When we use Fortran order, a.k.a. column major order, it is items along smaller dimensions that are closer.\n\nReshaping with a purpose\nOne typical use of reshaping is to apply some aggregation function to equal subdivisions of an array.\nSay you have, e.g. 
a 12 item 1D array, and would like to compute the sum of every three items. This is how it is typically accomplished:", "a = np.arange(12, dtype=float)\na\n\na.reshape(4, 3).sum(axis=-1)", "You can apply fancier functions than .sum(), e.g. let's compute the variance of each group:", "a.reshape(4, 3).var(axis=-1)", "Exercise 6\nYour turn to do a fancier reshaping: we will compute the average of a 2D array over non-overlapping rectangular patches:\n\nChoose two small numbers m and n, e.g. 3 and 4.\nCreate a 2D array, with number of rows a multiple of one of those numbers, and number of columns a multiple of the other, e.g. 15 x 24.\nReshape and aggregate to create a 2D array holding the sums over non overlapping m x n tiles, e.g. a 5 x 6 array.\nHint: .sum() can take a tuple of integers as axis=, so you can do the whole thing in a single reshape from 2D to 4D, then aggregate back to 2D. If you find this confusing, doing two aggregations will also work.", "# Your code goes here", "Rearranging dimensions\nOnce we have a multidimensional array, rearranging the order of its dimensions is as simple as rearranging its .shape and .strides attributes. You could do this with np.ndarray, but it would be a pain. NumPy has a bunch of functions for doing that, but they are all watered down versions of np.transpose, which takes a tuple with the desired permutation of the array dimensions.\nExercise 7\n\nWrite a function roll_axis_to_end that takes an array and an axis, and makes that axis the last dimension of the array.\nFor extra credit, rewrite your function using np.ndarray.", "# Your code goes here", "Playing with strides\nFor the rest of the workshop we are going to do some fancy tricks with strides, to create interesting views of an existing array.\nExercise 8\nCreate a function to extract the diagonal of a 2-D array, using the np.ndarray constructor.", "# Your code goes here", "Exercise 9\n\nSomething very interesting happens when we set a stride to zero. 
Give that idea some thought and then:\nCreate two functions, stacked_column_vector and stacked_row_vector, that take a 1D array (the vector) and an integer n, and create a 2D view of the array that stacks n copies of the vector, either as columns or rows of the view.\nUse these functions to create an outer_product function that takes two 1D vectors and computes their outer product.", "# Your code goes here", "Exercise 10\nIn the last exercise we used zero strides to reuse an item more than once in the resulting view. Let's try to build on that idea:\n\nWrite a function that takes a 1D array and a window integer value, and creates a 2D view of the array, each row a view through a sliding window of size window into the original array.\nHint: There are len(array) - window + 1 such \"views through a window\".\n\nAnother hint: Here's a small example expected run:\n&gt;&gt;&gt; sliding_window(np.arange(4), 2)\n[[0, 1],\n [1, 2],\n [2, 3]]", "# Your code goes here", "Parting pro tip\nNumPy's worst kept secret is the existence of a mostly undocumented, mostly hidden as_strided function, that makes creating views with funny strides much easier (and also much more dangerous!) than using np.ndarray. Here's the available documentation:", "from numpy.lib.stride_tricks import as_strided\n\nnp.info(as_strided)", "Note that this function will not protect you, the way np.ndarray does, from accessing memory that is not indexed by the array the view is taken from. You may want to do that, but be wary of the world of segmentation faults you are getting yourself into!" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
HKUST-SING/tensorflow
tensorflow/tools/docker/notebooks/3_mnist_from_scratch.ipynb
apache-2.0
[ "MNIST from scratch\nThis notebook walks through an example of training a TensorFlow model to do digit classification using the MNIST data set. MNIST is a labeled set of images of handwritten digits.\nAn example follows.", "from __future__ import print_function\n\nfrom IPython.display import Image\nimport base64\nImage(data=base64.decodestring(\"iVBORw0KGgoAAAANSUhEUgAAAMYAAABFCAYAAAARv5krAAAYl0lEQVR4Ae3dV4wc1bYG4D3YYJucc8455yCSSIYrBAi4EjriAZHECyAk3rAID1gCIXGRgIvASIQr8UTmgDA5imByPpicTcYGY+yrbx+tOUWpu2e6u7qnZ7qXVFPVVbv2Xutfce+q7hlasmTJktSAXrnn8vR/3/xXmnnadg1aTfxL3/7rwfSPmT+kf/7vf098YRtK+FnaZaf/SS++OjNNathufF9caiT2v/xxqbTGki/SXyM1nODXv/r8+7Tb+r+lnxZNcEFHEG/e3LnpoINXSh/PWzxCy/F9eWjOnDlLrr/++jR16tQakgylqdOWTZOGFqX5C/5IjXNLjdt7/NTvv/+eTjnllLT//vunr776Kl100UVpueWWq8n10lOmpSmTU5o/f0Fa3DDH1ry9p0/++eefaZ999slYYPS0005LK664Yk2eJ02ekqZNnZx+XzA/LfprYgGxePHitOqqq6YZM2akyfPmzUvXXXddHceoic2EOckxDj300CzPggUL0g033NC3OKy00krDer3pppv6FgcBIjvGUkv9u5paZZVVhoHpl4Mvv/wyhfxDQ0NZ7H7EQbacPHny39Tejzj88ccfacqUKRmHEecYf0Nr8GGAQJ8gMHCMPlH0QMzmEBg4RnN4DVr3CQIDx+gTRQ/EbA6BgWM0h9egdZ8g8PeliD4RutfF/Ouvfz9OtZy8aNGiNH/+/GGWl1122XzseYuVNKtqsaI23Ghw0DYCA8doG8JqO+AUG2+8cVq4cGHaY4890vLLL5/WXXfdfI6jvPDCC3lJ8amnnkoezP3000/pl19+GThHtWpIPekYomTxFS7HnkqKjMsss0yGgFE4r62tSBFVJ02aNPyconi9V4/JwzHwT9ZNNtkkeZ6w5ZZbph133DH99ttv6ccff8zXX3nllcRRnHNfv2cNGMQWGRaOrWbUrjsGBRLAA6U4Lhoqw9h2223ztRBq6aWXzsbgvueffz4Lu9NOO2UnYTgrr7xy7tO9nOH111/Pbb744ov0ww8/jAvngAdFMvQDDjggG/0GG2yQX1GZNm1aziCCwzrrrJPl3muvvXKwePnll9M333wzHDCKWPbLMbuAkfISjnvvvXcW/emnn85lqCBqa4a65hiYR/Gk2RNGRlwm3n7ggQfmdrKD9sqJtdZaKxvCnDlz8n3Tp09PXmPYeuutc0SVNQjvnmuvvTa3efzxx9N33303PGZ5rF75DBvvqq233nrp22+/TWeddVbyikpgxCE4vQDhlQUBRfDw2esbs2fPTquvvnqviNN1PuIdJ4GErVx44YUZowsuuCB9+umn6eeff84BspmsWqljhPFDxjGGYx/lDkN33udajCoVlAjRzl4U8LjefRwnPjsXG8OJqKBd8NB1LTU5IHyCd7LJGOYXNoGjFqaGIKtrERDIDKtukfGMH/zRZa1A101+YBF44KfMYzO8VOYYjDWiukiGqc022yyXOUqdzTffPJ/z1ialeqNVxA9gi0wzlOJ5juJlR8JeddVV+ZrIKTq4ZvJp/8EHH+SU+txzz+W2SqmxVFZRplrH5DTRXmGFFdKuu+6azjjjjOzosl5g6D54CQCI4mGjhNQO5occck
h2LvLTA6fqJOEnyhU6kNlkZmUuvrtNcFx77bUzhsZWXgoSsm6t4Dsa/tp2DErCmA04HAI4FLjaaqtlBhmnSKiNY4rDtHZFB6jFMMH0RVDH+nCPYxtDCFJnKkniRbDitWjTK3sykQUuMLPn3DZGX8SFnCG/fVyz5zCCBtIHTLshdzif8fERn8cKXxjCNOwCTu3Qf6yqhV4AQokiP489//zzM0DxnQYKwqAtIkko1kQzFFxvaNcJ6u3Pe+65J/cRRvDee+9lA2BInIyRff/997nNO++8k7t0vl2A6vHWynmyiPJ43WKLLbIijz/++LTddtvlTCdzwIWSg9yjxBJ0GN/DDz+c7zv77LOzbEceeWSekwVGgsOsWbNyNo0+qt7DfPvtt8/dmtvIGnPnzk3PPPPMsJ6rHrNef/BBeJA90RprrJEDcNhctMkXR/mnbccwuCjNGTbaaKMc8TBZprITxOdgOvbuKxqGz6LSJ598kseJ9Gi1CYmSv/76a3YyJZWMZJ6Ceskp8EMusihFEAyUmVaa8G2rxTNHIrd733///eH7YeaLNe5xrEzlWNF/HqQDf0Tm+GIbvYdD43MsKAIo/JDgE0G5aFfN8NaWYxiUshikqGYTTUSt0TCkjXsYNqJQQso+rgGa0vX58ccf56hQTtk+48F92rmvlnE1A0on2uKP0Yrw+Nxzzz0zn+ZhjKwRXq6vueaa2TmUiRQfS7SyNeMks9IV9vrvJOl/q622yo4Mfw5Pvm6TMclLdit6shh+YAMnq1E29tEsteUYBgMSgxa5MOAzJZcVXQs4bUR8XxhCHIwzMALCBuCcx5q0tF3u133l8XrRMchFiRYNyMxBKM/5IjZlWVzjULKwACISytIWFsi56aab5mvOKyEikmdAO/iHY+BDCRUZuoPD1e1akECyLseA7d13352DhdKak8Cmlt3U7TSl9p58FwejYK8ncAwKpDTnGDcARbWiAUjHiNEHsITSPlagpEZChcfrZzwSOfBOiQwXLuR3PjAhtwAD08iAMCO/a+5xPTIm3ALjwERf0V+c69QeT7ZujVdLDhgKBrANXAMreMESRkU7rdVPrXNtZ4xIpSLH1VdfnR3j4IMPzkbw2Wefpa+//jovo5188slZsZjArAcvFP3YY4+lSy+9NEdTdTTy0I5xHHfccfm1CH2LtuORKEqmkwVlVU+sBY+IdJRmE0zeeOONnEXuu+++7AhnnnlmWn/99XMJ5brtzTffzHMJx/o555xzkgdb0U8rRtAKrnTYqtG1Ml6teyxInHDCCdlGYByBmG2Z97ChVvFo2zEwbHCRTbqP7EDxPjN2pUBEe86AXAcsg+f10TYMSTvnRM1ulQe1wG/nHEXZZEJZUIYQ5cgWMsEgMgqclFdkdh+MbFFyuddnWMLNfTYkcuuXHlBkpFYNI3dS+mMMfCHHsZWadfUjmQVn8iLywscG21apMscQwR555JEM3KuvvpoZ5LHOmzgjAvBwzFt2/Oijj3Lm4Ayin/MU/eGHH+b2N998c/5MGSaZ44nw7OEd5Rx77LE5+1EehYXxkpes5li2K6+8Mhv8Lrvsko381ltvzcEBfvHQKh5auk9GPvHEE3NJAx+/eKL/HXbYIQcbK3nwN067xAk4s5VHdbvsx0nxrYQeKxJMZAfBA7GlRx99NC9EtCN7JY4RoPBeAHIAyrB3jpHYwqu1d02d7HpZcfqINo5dL7eJMXtxTzk2sgWFM/gcsnCakI2cFOk+523O+Qw7WaeYHYpYRp9xn4BkbPdWSfgJXYYM+ne+2xRj2sdx8EDu8rm4Ntp9pY4RSmb0CIPOAVNGoLA47yU4S2xen37ppZdy9CkLE/3lm8bJHzJbbiavt2Q9p7AkK7oyXAZOLk7gs9c4PJC0AOE8DDyrgJkaWgYQkSPYuAdpWySfteU8HhqKouYq+io6ZfGeZo7xpbT1+jt+jGULfprpq922ePHMBibwjWVq523KVrzBsIzTaMeu1DFi0HI0Yy
yYtAekY5MltbRyihFJiROBKIYTwMCTWJNubwdQFCXFapK9z96mtbjgs3thFKWnUgjBzNZIya5FOyUcPG36q4LwRgZ6Ix8HtBk3tirGGU0feAkslHfk5PzBh2cXSkvtWqWOOEaRGcoSHdXDMoYn1tK8yaON0ahbCWgFS/vxSnjn5F4ItLeiFAGAzCKc7MDA1OlIjc4pLFKE7FEyxb5ZPNTbtuiv2fvrtddfOFsYXcwj8d8qv/XGq3femLvvvnvOvrIYPPEjG+PDseDbDnXcMXiyiGiyyACOPvrovN95552zV3/++ef5zVveznlEo6CICvG5l/d4JSvHP+qoo7JjKDs4PkVSGPm9HSz9W5rlPEoCQYHjVFXyRGnBOcKA28VOP/qTBWX6YnS2IKB8qYL/enyGHPbKziOOOCLj6sGeslGW8L6Y4ANr2MY99fpsdL7jjmFwkSTSr6gDVCk+tmDQedcJ5LgdwaLPbu7xjJRRNlErSsiQhVHJlOEQoh182o1wRTnharwYs3itnWP9Rd/RD5mLW5yveh/YRhYMjItyBh/wjPat8tEVx6B00RKo5513XpIl7rzzzuwEourMmTOz95uIcyBfTSXYiy++mCOrSFS1klsFrNZ9eGPoJtmeyRx00EE5cpGbIi21XnbZZbkMee2117KMHIKMIVcotVb/vXoOz6I0+URoMlVFcBFE7L1+IjNYIo6v/fo+D3tC+FCR+FHuwNUCgfOtUlccI5hnJMoIBhN1sBICqMoNNaLP3pkiFGciIIBC4HaEbRWk0dyHb3Mp/EY0I6+NsytvyKxsKhpQr8ozGpm1IZ8IbV+PyllGuyh1YBXXOQEcy6R8M5eAHzuxxX3GRvbaCKJ4aRfXrjkG5jEbk00Prxi8SZTJKmc5/PDDc5v99tsvC+hBjWtqStmD0F4Ma1foMvDtfqZMUc3/lYjMSFFW3NS7JtyyoKzSiTocHoFJHMc+MlK7Mta7n9NbATJerbEYvQWIWCVitIyaXrV3nsG7H2Y2GVcbxyj6NX+waKEPmOvbfShwtjhQDDz5Ygt/uuoY+OPtnICDEMBTWsAQUu0NBBsDEgFEWOADAiDaVRERWsCq5i34IRN+TbTJgn8KwzOFuR4KDUXW7Kyik53Ep8w/+RkxWeO5S1EM5wVABguXMGp69dk1x87D0ObdL32GHI5tsDQGHtwbm/Hw4TpnKvNY5Ge0x113DEwT3tIsIdSnDIfxcxJAevCHfE9cXcmotHXfAw88kIFUdgFjLMn4HuZRuh9FExmjRCCnZxRqcPxz8ioUVk9eRhJkPAYHV8ZVFRkjjFSfAtw222yTy2OZ0iv15fHcQ4dKaMcwsBdEEL26RzaIh5+yK7LSBGPno8yOZX+vzRhfXzZ8cRrtyzzkzpr803XHwB8wTJYIRol+VY8zqMMBbP0f+cExE1qTdbU7x3jwwQdzVBYdesExKNiEWx2MfwoOAyCbJ9uRHZvUTcPmsENhGNE4HBKOHKNqZzQu3KNfX9H1nRABQZlbNkpt4SNo4DWIIesDj9qYnwki2giWqol3330348kZLPm7xvi1Pffcc7MzhA3gy/0oeIuxWtmPiWNgNCIFYwcCAa2FA1ikJZz1aeUVsBmge9TyoqGoIqKUFdEKCFXcU0/pHJizVMUnXBiBh6IicdTTzsEOnuZkDE/2rcJI4KMf/TF+0TucwDhkZ+DGL4/nGkPGV/AIC+2RvfP6ZPTI4gu5XNM/Um7RPzuIFyn1zW7wpQ9UHj+fbOHPmDlGCOGBGIeQQfwuq0jnISBQfOHft7JEHN94Q5xF6XLFFVfkyKIEGyuiGAo3r6BIx0imcM6k+6GHHspOEQbcDq+UTl4BwRu7PstUiPEJFsa9/PLL83nXg6d2xnUvoxS5L7744uGyh/wyRpRF9YwSHsHjE088kWWADQeRFThZkTgBstensZG5h4m56oEdcAp9CwTOVUlj6hgECcGBpA6XDazeiLKhVABQAhKB3cNxbEAL4KoEpp
m+gjf3OMafDf+UW7zeTL/ltqIiAxBMOIIxnLOHgbFsMGQ4InhE0nJfrXw2hnIRD3SFBKmYWDfqE49woFvOzZno3NxM0HDciMjBDsjEBgLTsJHYN+qjmWtj7hjBLKFFQgL7qRz14jHHHJPBcC2M3wRPVDT5ohzZRv0Z16O/sdozAKmdopUH5kftTrzJpl+lk29CcgpLw3BgpMbwwqF/S80pGJ6xO0WM+8Ybbxw2TuOEoTYakwyovB/JKdzDMVQOHvCRzXju890fL11aGhcMqqIxdwwCRkYQDZAaE7lWBhyosQEmQM439MgffDHm0Si8EcuBC0ezcQSZVKYktzFEW+3sfQ4natRvu9eMTS9F7IvHo+m/2fb6LNuCc0WsW+mzHq9j6hgE9YCHp5tkez2EAVjlMOmyUlU2Lis8ygVR0rykyoltPZCaOY9fr32Qp50X6xi7pWCGbsHBvwLgGIcddljGxvcsjOU1GseyiKjJQWydpiqNsBlei85BfhNxeJunVCl31x0jBOMAjJ9jRC3OEERDS7QMI0qQohIYgLSq7FJuMZbi9WZA7kRbvFAWx5Dyy449mjEDG/dyDPW4VSiy2iNvBcCSUdxyyy35OYHrqJUx843j8I/qQpA074BVVdR1x+AIHCIiIGewsqIuds41tSSlOxeOFHuOQ/E+2zPEuFYVKM32U3RMvGy44YbZMTg2B2+GOIXXJcjpR9lkUy/QyZ7GUU8zAD9RCiuR0oQYVv1IMAk7qFL+rjkGg7GZQPLufffdN69QKJtkCAKKjNGu1p7gMgWDYEDRpkpAmu0rnMLehie/RavcI49Sr1ZW0w6V91ac/IsxmdHPB0U5pQ+4+TExDudNUhPufnaKIn7N6m2k9h11jKLRqP+UQJb2eHh4uYjK0LW1D0MpCq0NR4g24RTR/0hCdvM6/m14FtljeTL4D/liedFeO7LYcyh7eMGDY8X16IM8Vp9kWjj2GwWG5IZb2FKVOHTMMTCvDKBgD2Z22223bNynnnpqVrZXBFxjQDZUFJiwIqKHN8qHO+64IxvN/fffn9vG/VWC0UpfeC5uZMEbg/ctM/8SzYOxZ599Nhs4ebSx0ECpcDFvMCdRggkesoQ+zaHU0N4EgAEnue2227JTON+LgaEVDFu5h+w2Wdl33GFkEUIQqYIqdYwwbJGO8q2xOydqUiTFWpJVPzsuUwhlzzFETxlGdFSCqaMB4XwvUzgKWU3AyW4uwFns4QMbilUyxbq8p/4cw3UEB8FDGQUDx/acqB8zRS2dw5qthe3VatPKucocg6JiYu3lP2nfawvekKVITzgJQLH24QTBtPZeE2D89957b27jwZ1IwIm8R2OMWHmJ+3pxTzaK8l+HyMrgTzrppMxqOIEsGoZvz0nsyWiliRMUl2G9aOk6POyLZVUvYtBpniL4wA1m9lVSW46BOQqKpTLK9FnUsxftvW4swssa4dkhCGFCMNfcp08lhM9KKc4h0obgsa8ShHb6Cv5DJnu8IwHB9TB852DkOlzIRV6kXbSVMfQj48BWdhE0TLr1Fe3zQR/+gRMK5yjuq4KjZccQ2SlYjexHmCnSkiLjtsesmlnpQ5naFo1A5GMAHoJxBI709ttv54ygntZWmWEcQMS9VQleRT9kNmfAG0P3HRPGbHnVudg4gEyJOAYiE0wikHAAcxHyxndO4KI/WHEK/Qzo7wjAXfaFNdurikaNtIERRTqmYIYdE2tGEs8hfJ8iFB/3xV67MCjG8NZbb6Unn3wyC+XfDxfnDxFp496qhK6qn5CDA5twK/fIRH5Gb0MMOhxCFgkKjOBoHqKEkmWvueaanG04iTHcP3CKQO0/e3ZhgceP2smqcKyKRuUYlEKhPDL+d5z1c4qVFTDnmBIZMwZ9DiKAzTmvCetPNFR7W7fXXt/KLddqTcyjr17bRybkEF5XiQhPHnMuDlF07MCB3I49l4EDxTrnfsFBJBxQbQSKeGoROqjdurWzIzoGJqRxS2KUf/rpp2
flcRDRjRKVCdpFhCwz7rOVKE5z++235/7uuuuuXDq5P5yKEY0np8B3TKb9K1/vLTF0/7MiJtyRPYrq4fx+7R2e7vFDDzDyfx1goPwcUGMEYG/rFI3oGAYW0UUyimQIcRwGzbgpVsZAUTYE065xCtc5GUeSHTyg4kzKs/FKoSBljyhvTz6y2gseZAwlwgI+cNBGtpV9ZRj4BobjFY9O8g0bQcXWaRpxBE5hHuFnJ0XB6dOn56ge2QGDlK2dFSSG4b8kxVzEdSWGVxgYQLzrxJkIGgbTaUE73b9MZ/KNfIMOJpdcckndYZWmFAwv+wgydW/o8wsCK3xnz56dFzx8oxPGtk7QiI5h0FBaeGzRKYIpjDN2ig6lB9OiprmI60qNieIMIXvsQy7yotjH9eI+2hbPDY4bI8D+2JdnWTYY+iwDs78qaUTHEM0sI1pClAVMnqX9ImGQszB6DHoNOLzZNZlGRlEq9JNB9JOsRXvoxDGnsDTudwFUHTNmzMjDqEaU9xYvGgWiZnka0TEo16CeNyCM1SLtwmt5cNEoCOUa5xjQAIFWEGBP5rbKdTRr1qwcfGUMthXVTCt917pnRMdwE6ZiQm0JckADBMYCgWLwtXjTSeq/d5Y7ieag7wmDwMAxJowqB4JUicDAMapEc9DXhEFgcjxcM7vvR4on7bHS1q84WNkpUr/iEL+aOLRw4cIlQCmuIhUBmsjHlpQ9c7EmzjEsN1vd6DeCg8UVT+qRd7b6EQey8wMT+6El8RSu36xhIO8AgQYI9F94bADG4NIAgUDg/wHX+3lgThDIegAAAABJRU5ErkJggg==\".encode('utf-8')), embed=True)", "We're going to be building a model that recognizes these digits as 5, 0, and 4.\nImports and input data\nWe'll proceed in steps, beginning with importing and inspecting the MNIST data. 
This doesn't have anything to do with TensorFlow in particular -- we're just downloading the data archive.", "import os\nfrom six.moves.urllib.request import urlretrieve\n\nSOURCE_URL = 'http://yann.lecun.com/exdb/mnist/'\nWORK_DIRECTORY = \"/tmp/mnist-data\"\n\ndef maybe_download(filename):\n    \"\"\"A helper to download the data files if not present.\"\"\"\n    if not os.path.exists(WORK_DIRECTORY):\n        os.mkdir(WORK_DIRECTORY)\n    filepath = os.path.join(WORK_DIRECTORY, filename)\n    if not os.path.exists(filepath):\n        filepath, _ = urlretrieve(SOURCE_URL + filename, filepath)\n        statinfo = os.stat(filepath)\n        print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')\n    else:\n        print('Already downloaded', filename)\n    return filepath\n\ntrain_data_filename = maybe_download('train-images-idx3-ubyte.gz')\ntrain_labels_filename = maybe_download('train-labels-idx1-ubyte.gz')\ntest_data_filename = maybe_download('t10k-images-idx3-ubyte.gz')\ntest_labels_filename = maybe_download('t10k-labels-idx1-ubyte.gz')", "Working with the images\nNow we have the files, but the format requires a bit of pre-processing before we can work with it. The data is gzipped, requiring us to decompress it. And each of the images is grayscale-encoded with values from [0, 255]; we'll normalize these to [-0.5, 0.5].\nLet's try to unpack the data using the documented format:\n[offset] [type]          [value]          [description] \n0000     32 bit integer  0x00000803(2051) magic number \n0004     32 bit integer  60000            number of images \n0008     32 bit integer  28               number of rows \n0012     32 bit integer  28               number of columns \n0016     unsigned byte   ??               pixel \n0017     unsigned byte   ??               pixel \n........ \nxxxx     unsigned byte   ??               pixel\n\nPixels are organized row-wise. Pixel values are 0 to 255. 
0 means background (white), 255 means foreground (black).\nWe'll start by reading the first image from the test data as a sanity check.", "import gzip, binascii, struct, numpy\nimport matplotlib.pyplot as plt\n\nwith gzip.open(test_data_filename) as f:\n    # Print the header fields.\n    for field in ['magic number', 'image count', 'rows', 'columns']:\n        # struct.unpack reads the binary data provided by f.read.\n        # The format string '>i' decodes a big-endian integer, which\n        # is the encoding of the data.\n        print(field, struct.unpack('>i', f.read(4))[0])\n\n    # Read the first 28x28 set of pixel values.\n    # Each pixel is one byte, [0, 255], a uint8.\n    buf = f.read(28 * 28)\n    image = numpy.frombuffer(buf, dtype=numpy.uint8)\n\n    # Print the first few values of image.\n    print('First 10 pixels:', image[:10])", "The first 10 pixels are all 0 values. Not very interesting, but also unsurprising. We'd expect most of the pixel values to be the background color, 0.\nWe could print all 28 * 28 values, but what we really need to do to make sure we're reading our data properly is look at an image.", "%matplotlib inline\n\n# We'll show the image and its pixel value histogram side-by-side.\n_, (ax1, ax2) = plt.subplots(1, 2)\n\n# To interpret the values as a 28x28 image, we need to reshape\n# the numpy array, which is one dimensional.\nax1.imshow(image.reshape(28, 28), cmap=plt.cm.Greys);\n\nax2.hist(image, bins=20, range=[0,255]);", "The large number of 0 values corresponds to the background of the image, another large mass of value 255 is black, and a mix of grayscale transition values lies in between.\nBoth the image and histogram look sensible. But it's good practice when training image models to normalize values to be centered around 0.\nWe'll do that next. The normalization code is fairly short, and it may be tempting to assume we haven't made mistakes, but we'll double-check by looking at the rendered input and histogram again. 
Malformed inputs are a surprisingly common source of errors when developing new models.", "# Let's convert the uint8 image to 32 bit floats and rescale \n# the values to be centered around 0, between [-0.5, 0.5]. \n# \n# We again plot the image and histogram to check that we \n# haven't mangled the data.\nscaled = image.astype(numpy.float32)\nscaled = (scaled - (255 / 2.0)) / 255\n_, (ax1, ax2) = plt.subplots(1, 2)\nax1.imshow(scaled.reshape(28, 28), cmap=plt.cm.Greys);\nax2.hist(scaled, bins=20, range=[-0.5, 0.5]);", "Great -- we've retained the correct image data while properly rescaling to the range [-0.5, 0.5].\nReading the labels\nLet's next unpack the test label data. The format here is similar: a magic number followed by a count followed by the labels as uint8 values. In more detail:\n[offset] [type] [value] [description] \n0000 32 bit integer 0x00000801(2049) magic number (MSB first) \n0004 32 bit integer 10000 number of items \n0008 unsigned byte ?? label \n0009 unsigned byte ?? label \n........ \nxxxx unsigned byte ?? label\n\nAs with the image data, let's read the first test set value to sanity check our input path. 
We'll expect a 7.", "with gzip.open(test_labels_filename) as f:\n # Print the header fields.\n for field in ['magic number', 'label count']:\n print(field, struct.unpack('>i', f.read(4))[0])\n\n print('First label:', struct.unpack('B', f.read(1))[0])", "Indeed, the first label of the test set is 7.\nForming the training, testing, and validation data sets\nNow that we understand how to read a single element, we can read a much larger set that we'll use for training, testing, and validation.\nImage data\nThe code below is a generalization of our prototyping above that reads the entire test and training data set.", "IMAGE_SIZE = 28\nPIXEL_DEPTH = 255\n\ndef extract_data(filename, num_images):\n \"\"\"Extract the images into a 4D tensor [image index, y, x, channels].\n \n For MNIST data, the number of channels is always 1.\n\n Values are rescaled from [0, 255] down to [-0.5, 0.5].\n \"\"\"\n print('Extracting', filename)\n with gzip.open(filename) as bytestream:\n # Skip the magic number and dimensions; we know these values.\n bytestream.read(16)\n\n buf = bytestream.read(IMAGE_SIZE * IMAGE_SIZE * num_images)\n data = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.float32)\n data = (data - (PIXEL_DEPTH / 2.0)) / PIXEL_DEPTH\n data = data.reshape(num_images, IMAGE_SIZE, IMAGE_SIZE, 1)\n return data\n\ntrain_data = extract_data(train_data_filename, 60000)\ntest_data = extract_data(test_data_filename, 10000)", "A crucial difference here is how we reshape the array of pixel values. Instead of one image that's 28x28, we now have a set of 60,000 images, each one being 28x28. We also include a number of channels, which for grayscale images as we have here is 1.\nLet's make sure we've got the reshaping parameters right by inspecting the dimensions and the first two images. 
(Again, mangled input is a very common source of errors.)", "print('Training data shape', train_data.shape)\n_, (ax1, ax2) = plt.subplots(1, 2)\nax1.imshow(train_data[0].reshape(28, 28), cmap=plt.cm.Greys);\nax2.imshow(train_data[1].reshape(28, 28), cmap=plt.cm.Greys);", "Looks good. Now we know how to index our full set of training and test images.\nLabel data\nLet's move on to loading the full set of labels. As is typical in classification problems, we'll convert our input labels into a 1-hot encoding over a length 10 vector corresponding to 10 digits. The vector [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], for example, would correspond to the digit 1.", "NUM_LABELS = 10\n\ndef extract_labels(filename, num_images):\n \"\"\"Extract the labels into a 1-hot matrix [image index, label index].\"\"\"\n print('Extracting', filename)\n with gzip.open(filename) as bytestream:\n # Skip the magic number and count; we know these values.\n bytestream.read(8)\n buf = bytestream.read(1 * num_images)\n labels = numpy.frombuffer(buf, dtype=numpy.uint8)\n # Convert to dense 1-hot representation.\n return (numpy.arange(NUM_LABELS) == labels[:, None]).astype(numpy.float32)\n\ntrain_labels = extract_labels(train_labels_filename, 60000)\ntest_labels = extract_labels(test_labels_filename, 10000)", "As with our image data, we'll double-check that our 1-hot encoding of the first few values matches our expectations.", "print('Training labels shape', train_labels.shape)\nprint('First label vector', train_labels[0])\nprint('Second label vector', train_labels[1])", "The 1-hot encoding looks reasonable.\nSegmenting data into training, test, and validation\nThe final step in preparing our data is to split it into three sets: training, test, and validation. 
This isn't the format of the original data set, so we'll take a small slice of the training data and treat that as our validation set.", "VALIDATION_SIZE = 5000\n\nvalidation_data = train_data[:VALIDATION_SIZE, :, :, :]\nvalidation_labels = train_labels[:VALIDATION_SIZE]\ntrain_data = train_data[VALIDATION_SIZE:, :, :, :]\ntrain_labels = train_labels[VALIDATION_SIZE:]\n\ntrain_size = train_labels.shape[0]\n\nprint('Validation shape', validation_data.shape)\nprint('Train size', train_size)", "Defining the model\nNow that we've prepared our data, we're ready to define our model.\nThe comments describe the architecture, which is fairly typical of models that process image data. The raw input passes through several convolution and max pooling layers with rectified linear activations before several fully connected layers and a softmax loss for predicting the output class. During training, we use dropout.\nWe'll separate our model definition into three steps:\n\nDefining the variables that will hold the trainable weights.\nDefining the basic model graph structure described above. 
And,\nStamping out several copies of the model graph for training, testing, and validation.\n\nWe'll start with the variables.", "import tensorflow as tf\n\n# We'll bundle groups of examples during training for efficiency.\n# This defines the size of the batch.\nBATCH_SIZE = 60\n# We have only one channel in our grayscale images.\nNUM_CHANNELS = 1\n# The random seed that defines initialization.\nSEED = 42\n\n# This is where training samples and labels are fed to the graph.\n# These placeholder nodes will be fed a batch of training data at each\n# training step, which we'll write once we define the graph structure.\ntrain_data_node = tf.placeholder(\n tf.float32,\n shape=(BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))\ntrain_labels_node = tf.placeholder(tf.float32,\n shape=(BATCH_SIZE, NUM_LABELS))\n\n# For the validation and test data, we'll just hold the entire dataset in\n# one constant node.\nvalidation_data_node = tf.constant(validation_data)\ntest_data_node = tf.constant(test_data)\n\n# The variables below hold all the trainable weights. 
For each, the\n# parameter defines how the variables will be initialized.\nconv1_weights = tf.Variable(\n tf.truncated_normal([5, 5, NUM_CHANNELS, 32], # 5x5 filter, depth 32.\n stddev=0.1,\n seed=SEED))\nconv1_biases = tf.Variable(tf.zeros([32]))\nconv2_weights = tf.Variable(\n tf.truncated_normal([5, 5, 32, 64],\n stddev=0.1,\n seed=SEED))\nconv2_biases = tf.Variable(tf.constant(0.1, shape=[64]))\nfc1_weights = tf.Variable( # fully connected, depth 512.\n tf.truncated_normal([IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64, 512],\n stddev=0.1,\n seed=SEED))\nfc1_biases = tf.Variable(tf.constant(0.1, shape=[512]))\nfc2_weights = tf.Variable(\n tf.truncated_normal([512, NUM_LABELS],\n stddev=0.1,\n seed=SEED))\nfc2_biases = tf.Variable(tf.constant(0.1, shape=[NUM_LABELS]))\n\nprint('Done')", "Now that we've defined the variables to be trained, we're ready to wire them together into a TensorFlow graph.\nWe'll define a helper to do this, model, which will return copies of the graph suitable for training and testing. Note the train argument, which controls whether or not dropout is used in the hidden layer. (We want to use dropout only during training.)", "def model(data, train=False):\n \"\"\"The Model definition.\"\"\"\n # 2D convolution, with 'SAME' padding (i.e. the output feature map has\n # the same size as the input). Note that {strides} is a 4D array whose\n # shape matches the data layout: [image index, y, x, depth].\n conv = tf.nn.conv2d(data,\n conv1_weights,\n strides=[1, 1, 1, 1],\n padding='SAME')\n\n # Bias and rectified linear non-linearity.\n relu = tf.nn.relu(tf.nn.bias_add(conv, conv1_biases))\n\n # Max pooling. The kernel size spec ksize also follows the layout of\n # the data. 
Here we have a pooling window of 2, and a stride of 2.\n pool = tf.nn.max_pool(relu,\n ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1],\n padding='SAME')\n conv = tf.nn.conv2d(pool,\n conv2_weights,\n strides=[1, 1, 1, 1],\n padding='SAME')\n relu = tf.nn.relu(tf.nn.bias_add(conv, conv2_biases))\n pool = tf.nn.max_pool(relu,\n ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1],\n padding='SAME')\n\n # Reshape the feature map cuboid into a 2D matrix to feed it to the\n # fully connected layers.\n pool_shape = pool.get_shape().as_list()\n reshape = tf.reshape(\n pool,\n [pool_shape[0], pool_shape[1] * pool_shape[2] * pool_shape[3]])\n \n # Fully connected layer. Note that the '+' operation automatically\n # broadcasts the biases.\n hidden = tf.nn.relu(tf.matmul(reshape, fc1_weights) + fc1_biases)\n\n # Add a 50% dropout during training only. Dropout also scales\n # activations such that no rescaling is needed at evaluation time.\n if train:\n hidden = tf.nn.dropout(hidden, 0.5, seed=SEED)\n return tf.matmul(hidden, fc2_weights) + fc2_biases\n\nprint('Done')", "Having defined the basic structure of the graph, we're ready to stamp out multiple copies for training, testing, and validation.\nHere, we'll do some customizations depending on which graph we're constructing. train_prediction holds the training graph, for which we use cross-entropy loss and weight regularization. 
We'll adjust the learning rate during training -- that's handled by the exponential_decay operation, which is itself an argument to the MomentumOptimizer that performs the actual training.\nThe validation and prediction graphs are much simpler to generate -- we need only create copies of the model with the validation and test inputs and a softmax classifier as the output.", "# Training computation: logits + cross-entropy loss.\nlogits = model(train_data_node, True)\nloss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(\n labels=train_labels_node, logits=logits))\n\n# L2 regularization for the fully connected parameters.\nregularizers = (tf.nn.l2_loss(fc1_weights) + tf.nn.l2_loss(fc1_biases) +\n tf.nn.l2_loss(fc2_weights) + tf.nn.l2_loss(fc2_biases))\n# Add the regularization term to the loss.\nloss += 5e-4 * regularizers\n\n# Optimizer: set up a variable that's incremented once per batch and\n# controls the learning rate decay.\nbatch = tf.Variable(0)\n# Decay once per epoch, using an exponential schedule starting at 0.01.\nlearning_rate = tf.train.exponential_decay(\n 0.01, # Base learning rate.\n batch * BATCH_SIZE, # Current index into the dataset.\n train_size, # Decay step.\n 0.95, # Decay rate.\n staircase=True)\n# Use simple momentum for the optimization.\noptimizer = tf.train.MomentumOptimizer(learning_rate,\n 0.9).minimize(loss,\n global_step=batch)\n\n# Predictions for the minibatch, validation set and test set.\ntrain_prediction = tf.nn.softmax(logits)\n# We'll compute them only once in a while by calling their {eval()} method.\nvalidation_prediction = tf.nn.softmax(model(validation_data_node))\ntest_prediction = tf.nn.softmax(model(test_data_node))\n\nprint('Done')", "Training and visualizing results\nNow that we have the training, test, and validation graphs, we're ready to actually go through the training loop and periodically evaluate loss and error.\nAll of these operations take place in the context of a session. 
In Python, we'd write something like:\nwith tf.Session() as s:\n ...training / test / evaluation loop...\n\nBut, here, we'll want to keep the session open so we can poke at values as we work out the details of training. The TensorFlow API includes a function for this, InteractiveSession.\nWe'll start by creating a session and initializing the variables we defined above.", "# Create a new interactive session that we'll use in\n# subsequent code cells.\ns = tf.InteractiveSession()\n\n# Use our newly created session as the default for \n# subsequent operations.\ns.as_default()\n\n# Initialize all the variables we defined above.\ntf.global_variables_initializer().run()", "Now we're ready to perform operations on the graph. Let's start with one round of training. We're going to organize our training steps into batches for efficiency; i.e., training using a small set of examples at each step rather than a single example.", "BATCH_SIZE = 60\n\n# Grab the first BATCH_SIZE examples and labels.\nbatch_data = train_data[:BATCH_SIZE, :, :, :]\nbatch_labels = train_labels[:BATCH_SIZE]\n\n# This dictionary maps the batch data (as a numpy array) to the\n# node in the graph it should be fed to.\nfeed_dict = {train_data_node: batch_data,\n train_labels_node: batch_labels}\n\n# Run the graph and fetch some of the nodes.\n_, l, lr, predictions = s.run(\n [optimizer, loss, learning_rate, train_prediction],\n feed_dict=feed_dict)\n\nprint('Done')", "Let's take a look at the predictions. How did we do? Recall that the output will be probabilities over the possible classes, so let's look at those probabilities.", "print(predictions[0])", "As expected without training, the predictions are all noise. Let's write a scoring function that picks the class with the maximum probability and compares with the example's label. 
We'll start by converting the probability vectors returned by the softmax into predictions we can match against the labels.", "# The highest probability in the first entry.\nprint('First prediction', numpy.argmax(predictions[0]))\n\n# But, predictions is actually a list of BATCH_SIZE probability vectors.\nprint(predictions.shape)\n\n# So, we'll take the highest probability for each vector.\nprint('All predictions', numpy.argmax(predictions, 1))", "Next, we can do the same thing for our labels -- using argmax to convert our 1-hot encoding into a digit class.", "print('Batch labels', numpy.argmax(batch_labels, 1))", "Now we can compare the predicted and label classes to compute the error rate and confusion matrix for this batch.", "correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(batch_labels, 1))\ntotal = predictions.shape[0]\n\nprint(float(correct) / float(total))\n\nconfusions = numpy.zeros([10, 10], numpy.float32)\nbundled = zip(numpy.argmax(predictions, 1), numpy.argmax(batch_labels, 1))\nfor predicted, actual in bundled:\n confusions[predicted, actual] += 1\n\nplt.grid(False)\nplt.xticks(numpy.arange(NUM_LABELS))\nplt.yticks(numpy.arange(NUM_LABELS))\nplt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest');", "Now let's wrap this up into our scoring function.", "def error_rate(predictions, labels):\n \"\"\"Return the error rate and confusions.\"\"\"\n correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(labels, 1))\n total = predictions.shape[0]\n\n error = 100.0 - (100 * float(correct) / float(total))\n\n confusions = numpy.zeros([10, 10], numpy.float32)\n bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(labels, 1))\n for predicted, actual in bundled:\n confusions[predicted, actual] += 1\n \n return error, confusions\n\nprint('Done')", "We'll need to train for some time to actually see useful predicted values. Let's define a loop that will go through our data. 
We'll print the loss and error periodically.\nHere, we want to iterate over the entire data set rather than just the first batch, so we'll need to slice the data to that end.\n(One pass through our training set will take some time on a CPU, so be patient if you are executing this notebook.)", "# Train over one full pass (epoch) of our training set.\nsteps = train_size // BATCH_SIZE\nfor step in range(steps):\n # Compute the offset of the current minibatch in the data.\n # Note that we could use better randomization across epochs.\n offset = (step * BATCH_SIZE) % (train_size - BATCH_SIZE)\n batch_data = train_data[offset:(offset + BATCH_SIZE), :, :, :]\n batch_labels = train_labels[offset:(offset + BATCH_SIZE)]\n # This dictionary maps the batch data (as a numpy array) to the\n # node in the graph it should be fed to.\n feed_dict = {train_data_node: batch_data,\n train_labels_node: batch_labels}\n # Run the graph and fetch some of the nodes.\n _, l, lr, predictions = s.run(\n [optimizer, loss, learning_rate, train_prediction],\n feed_dict=feed_dict)\n \n # Print out the loss periodically.\n if step % 100 == 0:\n error, _ = error_rate(predictions, batch_labels)\n print('Step %d of %d' % (step, steps))\n print('Mini-batch loss: %.5f Error: %.5f Learning rate: %.5f' % (l, error, lr))\n print('Validation error: %.1f%%' % error_rate(\n validation_prediction.eval(), validation_labels)[0])\n
Let's evaluate the results using the test set.\nTo help identify rare mispredictions, we'll include the raw count of each (prediction, label) pair in the confusion matrix.", "test_error, confusions = error_rate(test_prediction.eval(), test_labels)\nprint('Test error: %.1f%%' % test_error)\n\nplt.xlabel('Actual')\nplt.ylabel('Predicted')\nplt.grid(False)\nplt.xticks(numpy.arange(NUM_LABELS))\nplt.yticks(numpy.arange(NUM_LABELS))\nplt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest');\n\nfor i, cas in enumerate(confusions):\n for j, count in enumerate(cas):\n if count > 0:\n xoff = .07 * len(str(count))\n plt.text(j-xoff, i+.2, int(count), fontsize=9, color='white')", "We can see here that we're mostly accurate, with some errors you might expect, e.g., '9' is often confused as '4'.\nLet's do another sanity check to make sure this matches roughly the distribution of our test set, e.g., it seems like we have fewer '5' values.", "plt.xticks(numpy.arange(NUM_LABELS))\nplt.hist(numpy.argmax(test_labels, 1));", "Indeed, we appear to have fewer 5 labels in the test set. So, on the whole, it seems like our model is learning and our early results are sensible.\nBut, we've only done one round of training. We can greatly improve accuracy by training for longer. To try this out, just re-execute the training cell above." ]
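The staircase decay schedule wired into the optimizer above is easy to misread, so here is a plain-Python sketch of the learning rate it produces. The helper name `decayed_lr` is ours, and the `train_size` default of 55000 is the 60,000 training examples minus the 5,000-example validation slice taken earlier:

```python
# A plain-Python sketch of the staircase schedule configured above:
# lr = 0.01 * 0.95 ** floor(examples_seen / train_size).
def decayed_lr(step, batch_size=60, train_size=55000,
               base_lr=0.01, decay_rate=0.95):
    """Learning rate after `step` minibatches, decaying once per epoch."""
    epochs_completed = (step * batch_size) // train_size
    return base_lr * decay_rate ** epochs_completed

# Stays at the base rate through the first epoch, then drops 5% per epoch.
print(decayed_lr(0))
print(decayed_lr(916))   # still the first epoch: 916 * 60 < 55000
print(decayed_lr(1000))  # second epoch: 0.01 * 0.95
```

With `staircase=True` the exponent is floored, so the rate changes in discrete steps at each epoch boundary rather than shrinking continuously.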
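The 1-hot conversion in `extract_labels` relies on a NumPy broadcasting trick that is worth seeing in isolation. This sketch uses a made-up three-element label array:

```python
import numpy as np

# The broadcasting trick behind extract_labels: comparing a column of labels
# against the row [0..9] yields a boolean matrix, one 1-hot row per label.
labels = np.array([7, 2, 1])  # a made-up label array
one_hot = (np.arange(10) == labels[:, None]).astype(np.float32)
print(one_hot.shape)  # (3, 10)
print(one_hot)
```

`labels[:, None]` reshapes the labels into a column, so the `==` comparison broadcasts over a (3, 10) grid and each row has exactly one `1.0` at the label's index.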
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
planet-os/notebooks
aws/era5-s3-via-boto.ipynb
mit
[ "Accessing ERA5 Data on S3\nThis notebook explores how to access ERA5 data stored on a public S3 bucket as part of the AWS Public Dataset program. We'll examine how the data is organized in S3, download sample files in NetCDF format, and perform some simple analysis on the data.\nERA5 provides hourly estimates of a large number of atmospheric, land and oceanic climate variables. The data cover the Earth on a 30km grid and resolve the atmosphere using 137 levels from the surface up to a height of 80km.\nA first segment of the ERA5 dataset is now available for public use (2008 to within 3 months of real time). Subsequent releases of ERA5 will cover the earlier decades. The entire ERA5 dataset from 1950 to present is expected to be available for use by early 2019.\nThe ERA5 data available on S3 contains an initial subset of 15 near surface variables. If there are additional variables you would like to see on S3, please contact datahub@intertrust.com with your request. We'll be evaluating the feedback we receive and potentially adding more variables in the future.", "# Initialize notebook environment.\n%matplotlib inline\nimport boto3\nimport botocore\nimport datetime\nimport matplotlib.pyplot as plt\nimport os.path\nimport xarray as xr", "Setting Up S3 Access Using Boto\nWe'll use boto to access the S3 bucket. Below, we'll set the bucket ID and create a resource to access it.\nNote that although the bucket is public, boto requires the presence of an AWS access key and secret key to use an S3 resource. To request data anonymously, we'll use a low-level client instead.", "era5_bucket = 'era5-pds'\n\n# AWS access / secret keys required\n# s3 = boto3.resource('s3')\n# bucket = s3.Bucket(era5_bucket)\n\n# No AWS keys required\nclient = boto3.client('s3', config=botocore.client.Config(signature_version=botocore.UNSIGNED))", "ERA5 Data Structure on S3\nThe ERA5 data is chunked into distinct NetCDF files per variable, each containing a month of hourly data. 
These files are organized in the S3 bucket by year, month, and variable name.\nThe data is structured as follows:\n/{year}/{month}/main.nc\n /data/{var1}.nc\n /{var2}.nc\n /{....}.nc\n /{varN}.nc\n\nwhere year is expressed as four digits (e.g. YYYY) and month as two digits (e.g. MM). Individual data variables (var1 through varN) use names corresponding to CF standard names convention plus any applicable additional info, such as vertical coordinate.\nFor example, the full file path for air temperature for January 2008 is:\n/2008/01/data/air_temperature_at_2_metres.nc\n\nNote that due to the nature of the ERA5 forecast timing, which is run twice daily at 06:00 and 18:00 UTC, the monthly data file begins with data from 07:00 on the first of the month and continues through 06:00 of the following month. We'll see this in the coordinate values of a data file we download later in the notebook.\nGranule variable structure and metadata attributes are stored in main.nc. This file contains coordinate and auxiliary variable data. 
This file is also annotated using NetCDF CF metadata conventions.\nWe can use the paginate method to list the top level key prefixes in the bucket, which corresponds to the available years of ERA5 data.", "paginator = client.get_paginator('list_objects')\nresult = paginator.paginate(Bucket=era5_bucket, Delimiter='/')\nfor prefix in result.search('CommonPrefixes'):\n print(prefix.get('Prefix'))", "Let's take a look at the objects available for a specific month using boto's list_objects_v2 method.", "keys = []\ndate = datetime.date(2018,1,1) # update to desired date\nprefix = date.strftime('%Y/%m/')\n\nresponse = client.list_objects_v2(Bucket=era5_bucket, Prefix=prefix)\nresponse_meta = response.get('ResponseMetadata')\n\nif response_meta.get('HTTPStatusCode') == 200:\n contents = response.get('Contents')\n if contents == None:\n print(\"No objects are available for %s\" % date.strftime('%B, %Y'))\n else:\n for obj in contents:\n keys.append(obj.get('Key'))\n print(\"There are %s objects available for %s\\n--\" % (len(keys), date.strftime('%B, %Y')))\n for k in keys:\n print(k)\nelse:\n print(\"There was an error with your request.\")", "Downloading Files\nLet's download main.nc file for that month and use xarray to inspect the metadata relating to the data files.", "metadata_file = 'main.nc'\nmetadata_key = prefix + metadata_file\nclient.download_file(era5_bucket, metadata_key, metadata_file)\nds_meta = xr.open_dataset('main.nc', decode_times=False)\nds_meta.info()", "Now let's acquire data for a single variable over the course of a month. Let's download air temperature for August of 2017 and open the NetCDF file using xarray.\nNote that the cell below may take some time to execute, depending on your connection speed. 
Most of the variable files are roughly 1 GB in size.", "# select date and variable of interest\ndate = datetime.date(2017,8,1)\nvar = 'air_temperature_at_2_metres'\n\n# file path patterns for remote S3 objects and corresponding local file\ns3_data_ptrn = '{year}/{month}/data/{var}.nc'\ndata_file_ptrn = '{year}{month}_{var}.nc'\n\nyear = date.strftime('%Y')\nmonth = date.strftime('%m')\ns3_data_key = s3_data_ptrn.format(year=year, month=month, var=var)\ndata_file = data_file_ptrn.format(year=year, month=month, var=var)\n\nif not os.path.isfile(data_file): # check if file already exists\n print(\"Downloading %s from S3...\" % s3_data_key)\n client.download_file(era5_bucket, s3_data_key, data_file)\n\nds = xr.open_dataset(data_file)\nds.info", "The ds.info output above shows us that there are three dimensions to the data: lat, lon, and time0; and one data variable: air_temperature_at_2_metres. Let's inspect the coordinate values to see what they look like...", "ds.coords.values()", "In the coordinate values, we can see that longitude is expressed as degrees east, ranging from 0 to 359.718 degrees. Latitude is expressed as degrees north, ranging from -89.784874 to 89.784874. And finally the time0 coordinate, ranging from 2017-08-01T07:00:00Z to 2017-09-01T06:00:00Z.\nAs mentioned above, due to the forecast run timing the first forecast run of the month results in data beginning at 07:00, while the last produces data through September 1 at 06:00.\nTemperature at Specific Locations\nLet's create a list of various locations and plot their temperature values during the month. Note that the longitude values of the coordinates below are not given in degrees east, but rather as a mix of eastward and westward values. 
The data's longitude coordinate is degrees east, so we'll convert these location coordinates accordingly to match the data.", "# location coordinates\nlocs = [\n {'name': 'santa_monica', 'lon': -118.496245, 'lat': 34.010341},\n {'name': 'tallinn', 'lon': 24.753574, 'lat': 59.436962},\n {'name': 'honolulu', 'lon': -157.835938, 'lat': 21.290014},\n {'name': 'cape_town', 'lon': 18.423300, 'lat': -33.918861},\n {'name': 'dubai', 'lon': 55.316666, 'lat': 25.266666},\n]\n\n# convert westward longitudes to degrees east\nfor l in locs:\n if l['lon'] < 0:\n l['lon'] = 360 + l['lon']\nlocs\n\nds_locs = xr.Dataset()\n\n# iterate through the locations and create a dataset\n# containing the temperature values for each location\nfor l in locs:\n name = l['name']\n lon = l['lon']\n lat = l['lat']\n var_name = name\n\n ds2 = ds.sel(lon=lon, lat=lat, method='nearest')\n\n lon_attr = '%s_lon' % name\n lat_attr = '%s_lat' % name\n\n ds2.attrs[lon_attr] = ds2.lon.values.tolist()\n ds2.attrs[lat_attr] = ds2.lat.values.tolist()\n ds2 = ds2.rename({var : var_name}).drop(('lat', 'lon'))\n \n ds_locs = xr.merge([ds_locs, ds2])\n\nds_locs.data_vars", "Convert Units and Create a Dataframe\nTemperature data in the ERA5 dataset uses Kelvin. Let's convert it to something more meaningful. I've chosen to use Fahrenheit, because as a U.S. citizen (and stubborn metric holdout) Celsius still feels foreign to me ;-)\nWhile we're at it, let's also convert the dataset to a pandas dataframe and use the describe method to display some statistics about the data.", "def kelvin_to_celsius(t):\n return t - 273.15\n\ndef kelvin_to_fahrenheit(t):\n return t * 9/5 - 459.67\n\nds_locs_f = ds_locs.apply(kelvin_to_fahrenheit)\n\ndf_f = ds_locs_f.to_dataframe()\ndf_f.describe()", "Show Me Some Charts!\nFinally, let's plot the temperature data for each of the locations over the period. The first plot displays the hourly temperature for each location over the month.\nThe second plot is a box plot. 
A box plot is a method for graphically depicting groups of numerical data through their quartiles. The box extends from the Q1 to Q3 quartile values of the data, with a line at the median (Q2). The whiskers extend from the edges of the box to show the range of the data. The position of the whiskers is set by default to 1.5 * IQR (IQR = Q3 - Q1) from the edges of the box. Outlier points are those past the end of the whiskers.", "# readability please\nplt.rcParams.update({'font.size': 16})\n\nax = df_f.plot(figsize=(18, 10), title=\"ERA5 Air Temperature at 2 Meters\", grid=1)\nax.set(xlabel='Date', ylabel='Air Temperature (deg F)')\nplt.show()\n\nax = df_f.plot.box(figsize=(18, 10))\nax.set(xlabel='Location', ylabel='Air Temperature (deg F)')\nplt.show()", "Conclusions? Dubai was absolutely cooking in August, with a mean temperature of ~98° and a high of 117°! While Honolulu was a consistent 78° with a standard deviation of less than 1°!\nQuestions? Feedback? Email us at datahub@intertrust.com. We also provide an API for accessing ERA5 data, for more details visit the Planet OS Datahub or check out another notebook example using ERA5 data." ]
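The bucket layout described earlier (`/{year}/{month}/main.nc` and `/{year}/{month}/data/{var}.nc`) can be derived from a date and a variable name. `era5_keys` is a hypothetical helper of ours, not part of the bucket's tooling:

```python
import datetime

# Sketch of the ERA5 key layout: metadata and per-variable data files
# for a month are both derived from the same year/month prefix.
def era5_keys(date, var):
    prefix = date.strftime('%Y/%m/')
    return prefix + 'main.nc', prefix + 'data/{}.nc'.format(var)

meta_key, data_key = era5_keys(datetime.date(2017, 8, 1),
                               'air_temperature_at_2_metres')
print(meta_key)  # 2017/08/main.nc
print(data_key)  # 2017/08/data/air_temperature_at_2_metres.nc
```

These strings are exactly what gets passed as the `Key` argument to the S3 client's `download_file` call in the notebook.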
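The two conversions used in the analysis can be spot-checked in isolation. `lon_to_degrees_east` is a hypothetical helper mirroring the in-place loop over `locs`; `kelvin_to_fahrenheit` repeats the notebook's formula:

```python
# Spot-check of the two conversions used above: westward longitudes to
# degrees east, and Kelvin to Fahrenheit.
def lon_to_degrees_east(lon):
    """Map a longitude in [-180, 180) onto the data's [0, 360) convention."""
    return 360 + lon if lon < 0 else lon

def kelvin_to_fahrenheit(t):
    return t * 9 / 5 - 459.67

print(lon_to_degrees_east(-118.496245))  # Santa Monica -> ~241.5 degrees east
print(kelvin_to_fahrenheit(273.15))      # freezing point -> 32.0 F
```

Checking a known fixed point (273.15 K is 32 °F) is a cheap guard against sign or offset mistakes in unit conversions.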
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
citxx/sis-python
crash-course/strings.ipynb
mit
[ "<h1>Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Special-characters\" data-toc-modified-id=\"Special-characters-1\">Special characters</a></span></li><li><span><a href=\"#String-operations\" data-toc-modified-id=\"String-operations-2\">String operations</a></span><ul class=\"toc-item\"><li><span><a href=\"#Concatenation\" data-toc-modified-id=\"Concatenation-2.1\">Concatenation</a></span></li><li><span><a href=\"#Repetition\" data-toc-modified-id=\"Repetition-2.2\">Repetition</a></span></li><li><span><a href=\"#Indexing\" data-toc-modified-id=\"Indexing-2.3\">Indexing</a></span></li><li><span><a href=\"#String-length\" data-toc-modified-id=\"String-length-2.4\">String length</a></span></li><li><span><a href=\"#Substring-check\" data-toc-modified-id=\"Substring-check-2.5\">Substring check</a></span></li></ul></li><li><span><a href=\"#Character-encoding\" data-toc-modified-id=\"Character-encoding-3\">Character encoding</a></span><ul class=\"toc-item\"><li><span><a href=\"#Code-from-character\" data-toc-modified-id=\"Code-from-character-3.1\">Code from character</a></span></li><li><span><a href=\"#Character-from-code\" data-toc-modified-id=\"Character-from-code-3.2\">Character from code</a></span></li><li><span><a href=\"#ASCII\" data-toc-modified-id=\"ASCII-3.3\">ASCII</a></span></li></ul></li></ul></div>\n\nStrings\nPython uses the str type to represent strings (the analogue of std::string in C++ or String in Pascal).", "s1 = \"Strings can be written in double quotes\"\ns2 = 'Or in single quotes'\nprint(s1, type(s1))\nprint(s2, type(s2))", "Special characters\nTo put special characters (for example, newlines or tabs) into a string, Python uses escape sequences such as \\n for a newline or \\t for a tab character:", "s = \"This string\\nspans two lines\"\nprint(s)\n\ns = \"And this\\tstring\\nuses\\ttab characters\"\nprint(s)", "The same syntax is used to put quotes inside a string:", "s1 = \"This is a \\\"string\\\" with quotes.\"\ns2 = 'And \"this\" one too.'\ns3 = 'Single quotes \\'\\' work the same way.'\nprint(s1, s2, s3)\n\nprint(\"To write a backslash \\\\, just double it: '\\\\\\\\'\")", "String operations\nConcatenation\nStrings can be added together; the result simply appends one string to the other. Formally, this is called concatenation.", "greeting = \"Hello\"\nexclamation = \"!!!\"\nprint(greeting + exclamation)", "Repetition\nA string can be multiplied by an integer to repeat it that many times.", "print(\"I will write in Python with style!\\n\" * 10)\nprint(3 * \"Really\\n\")", "Indexing\nYou can get the character at a given position just as in C++ or Pascal. Indexing starts at 0.", "s = \"This is my string\"\nprint(s[0], s[1], s[2])", "But you cannot change an individual character. This restriction allows some Python features to be implemented more consistently and efficiently.", "s = \"You cannot change the characters of this string\"\ns[0] = \"T\"", "The error reads: TypeError: 'str' object does not support item assignment\n\nNegative indices are also allowed; in that case counting starts from the end.", "s = \"String\"\nprint(s[-1], \"=\", s[5])\nprint(s[-2], \"=\", s[4])\nprint(s[-3], \"=\", s[3])\nprint(s[-4], \"=\", s[2])\nprint(s[-5], \"=\", s[1])\nprint(s[-6], \"=\", s[0])", "String length", "s = \"The len function returns the length of a string\"\nprint(len(s))", "Substring check\nYou can test whether a string contains (or does not contain) a substring or character with the in and not in operations.", "vowels = \"aeiou\"\nc = \"e\"\nif c in vowels:\n print(c, \"is a vowel\")\nelse:\n print(c, \"is a consonant\")\n\ns = \"Python - the best of the unhurried languages :)\"\nprint(\"Python\" in s)\nprint(\"C++\" in s)", "Character encoding\nIn computer memory, every character is stored as a number. The mapping between characters and numbers is called an encoding.\nThe simplest encoding for latin letters, digits and commonly used symbols is ASCII. It assigns codes (numbers) to 128 characters and is used by Python to represent those characters.\nCode from character", "# The code of any character can be obtained with the ord function\nprint(ord(\"a\"))\n\n# The codes of digits, lowercase latin letters and uppercase latin letters each run consecutively.\nprint(\"Digits:\", ord(\"0\"), ord(\"1\"), ord(\"2\"), ord(\"3\"), \"...\", ord(\"8\"), ord(\"9\"))\nprint(\"Lowercase letters:\", ord(\"a\"), ord(\"b\"), ord(\"c\"), ord(\"d\"), \"...\", ord(\"y\"), ord(\"z\"))\nprint(\"Uppercase letters:\", ord(\"A\"), ord(\"B\"), ord(\"C\"), ord(\"D\"), \"...\", ord(\"Y\"), ord(\"Z\"))\n\n# For example, this computes the position of a letter in the alphabet\nc = \"g\"\nprint(ord(c) - ord('a'))", "Character from code", "# The chr function returns the character for a given code\nprint(chr(100))", "ASCII", "# This code prints the whole ASCII table\nfor code in range(128):\n print('chr(' + str(code) + ') =', repr(chr(code)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
seth2000/chinesepoem
PrepareData.ipynb
mit
[ "This is a test file to prove the idea.\nI first tried to build a JSON-formatted corpus, but that turned out to be hard, so I use word2vec instead to avoid that work.", "# -*- coding: utf-8 -*-\n\nimport os\nimport re\nimport time\nimport codecs\nimport argparse\n\nTIME_FORMAT = '%Y-%m-%d %H:%M:%S'\nBASE_FOLDER = os.getcwd() # os.path.abspath(os.path.dirname(__file__))\nDATA_FOLDER = os.path.join(BASE_FOLDER, 'data')\nDEFAULT_FIN = os.path.join(DATA_FOLDER, '唐诗语料库.txt')\nDEFAULT_FOUT = os.path.join(DATA_FOLDER, 'poem.txt')\nreg_noisy = re.compile('[^\\u3000-\\uffee]')\nreg_note = re.compile('((.*))') # Cannot deal with () in separate lines\n# 中文及全角标点符号(字符)是\\u3000-\\u301e\\ufe10-\\ufe19\\ufe30-\\ufe44\\ufe50-\\ufe6b\\uff01-\\uffee\n\n", "读取数据,去掉不用的数据", "if __name__ == '__main__':\n    # parser = set_arguments()\n    # cmd_args = parser.parse_args()\n\n    print('{} START'.format(time.strftime(TIME_FORMAT)))\n\n    fd = codecs.open(DEFAULT_FIN, 'r', 'utf-8')\n    fw = codecs.open(DEFAULT_FOUT, 'w', 'utf-8')\n    reg = re.compile('〖(.*)〗')\n    start_flag = False\n    for line in fd:\n        line = line.strip()\n        if not line or '《全唐诗》' in line or '<http' in line or '□' in line:\n            continue\n        elif '〖' in line and '〗' in line:\n            if start_flag:\n                fw.write('\\n')\n            start_flag = True\n            g = reg.search(line)\n            if g:\n                fw.write(g.group(1))\n                fw.write('\\n')\n            else:\n                # noisy data\n                print(line)\n        else:\n            line = reg_noisy.sub('', line)\n            line = reg_note.sub('', line)\n            line = line.replace(' .', '')\n            fw.write(line)\n\n    fd.close()\n    fw.close()\n\n    print('{} STOP'.format(time.strftime(TIME_FORMAT)))", "分词实验\nDEFAULT_FOUT = os.path.join(DATA_FOLDER, 'poem.txt')\nthu1 = thulac.thulac(seg_only=True) #只进行分词,不进行词性标注\ntext = thu1.cut(\"我爱北京天安门\", text=True) #进行一句话分词\nprint(text)\nthu1 = thulac.thulac(seg_only=True) #只进行分词,不进行词性标注\nthu1.cut_f(DEFAULT_FOUT, outp) #对input.txt文件内容进行分词,输出到output.txt", "print('{} START'.format(time.strftime(TIME_FORMAT)))\n\nimport thulac \nDEFAULT_Segment = os.path.join(DATA_FOLDER, 
'wordsegment.txt')\n\nfd = codecs.open(DEFAULT_FOUT, 'r', 'utf-8')\nfw = codecs.open(DEFAULT_Segment, 'w', 'utf-8')\n\nthu1 = thulac.thulac(seg_only=True) #只进行分词,不进行词性标注\n\n\nfor line in fd:\n    #print(line)\n    fw.write(thu1.cut(line, text=True))\n    fw.write('\\n')\n\nfd.close()\nfw.close()\n\nprint('{} STOP'.format(time.strftime(TIME_FORMAT)))\n\nprint('{} START'.format(time.strftime(TIME_FORMAT)))\nfrom gensim.models import word2vec\n\n\n#DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt')\nDEFAULT_Word2Vec = os.path.join(DATA_FOLDER, 'Word2Vec150.bin')\n\nsentences = word2vec.Text8Corpus(DEFAULT_Segment)\n\nmodel = word2vec.Word2Vec(sentences, size=150)\n\n#DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt')\nmodel.save(DEFAULT_Word2Vec)\n\nprint('{} STOP'.format(time.strftime(TIME_FORMAT)))\n\nmodel[u'男']\n\n\nDEFAULT_FIN = os.path.join(DATA_FOLDER, '唐诗语料库.txt')\nDEFAULT_FOUT = os.path.join(DATA_FOLDER, 'poem.txt')\nDEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt')\ndef GetFirstNline(filePath, linesNumber):\n    fd = codecs.open(filePath, 'r', 'utf-8')\n    for i in range(1, linesNumber):\n        print(fd.readline())\n    fd.close()\n\nGetFirstNline(DEFAULT_Segment, 3)\nGetFirstNline(DEFAULT_FOUT, 3)", "分词不是很成功,我们转向直接用汉字字符来代替分段,我们保留标点符号", "print('{} START'.format(time.strftime(TIME_FORMAT)))\n\nDEFAULT_FOUT = os.path.join(DATA_FOLDER, 'poem.txt')\nDEFAULT_charSegment = os.path.join(DATA_FOLDER, 'Charactersegment.txt')\n\nfd = codecs.open(DEFAULT_FOUT, 'r', 'utf-8')\nfw = codecs.open(DEFAULT_charSegment, 'w', 'utf-8')\n\nstart_flag = False\nfor line in fd:\n    if len(line) > 0:\n        for c in line:\n            if c != '\\n':\n                fw.write(c)\n                fw.write(' ')\n        fw.write('\\n')\n\nfd.close()\nfw.close()\n\nprint('{} STOP'.format(time.strftime(TIME_FORMAT)))\n\n\nGetFirstNline(DEFAULT_charSegment, 3)\n\nprint('{} START'.format(time.strftime(TIME_FORMAT)))\nfrom gensim.models import word2vec\n\n\n#DEFAULT_Segment = os.path.join(DATA_FOLDER, 
'wordsegment.txt')\nDEFAULT_Char2Vec = os.path.join(DATA_FOLDER, 'Char2Vec100.bin')\n\nfd = codecs.open(DEFAULT_charSegment, 'r', 'utf-8')\n\nsentences = fd.readlines()\n\nfd.close()\n\n\nmodel = word2vec.Word2Vec(sentences, size=100)\n\n#DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt')\nmodel.save(DEFAULT_Char2Vec)\n\nprint('{} STOP'.format(time.strftime(TIME_FORMAT)))\n\nmodel[u'男']\n\nprint('{} START'.format(time.strftime(TIME_FORMAT)))\nfrom gensim.models import word2vec\n\nDEFAULT_charSegment = os.path.join(DATA_FOLDER, 'Charactersegment.txt')\nDEFAULT_Char2Vec50 = os.path.join(DATA_FOLDER, 'Char2Vec50.bin')\n\nfd = codecs.open(DEFAULT_charSegment, 'r', 'utf-8')\n\nsentences = fd.readlines()\n\nfd.close()\n\nmodel = word2vec.Word2Vec(sentences, size=50)\n\n#DEFAULT_Segment = os.path.join(DATA_FOLDER, 'wordsegment.txt')\nmodel.save(DEFAULT_Char2Vec50)\n\nprint('{} STOP'.format(time.strftime(TIME_FORMAT)))\n\nmodel.wv.most_similar([u'好'])", "把汉字转成拼音", "from pypinyin import pinyin" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
teuben/astr288p
notebooks/04-plotting.ipynb
mit
[ "Plotting\nIn order to do inline plotting within a notebook, IPython needs a magic command; magic commands start with %.", "%matplotlib inline", "Import some modules (libraries) and give them short names such as np and plt. You will find that most users use these common names.", "import numpy as np\nimport matplotlib.pyplot as plt", "It might be tempting to import a module into a blank namespace, to make for \"more readable code\", like the following example:\nfrom math import *\n s2 = sqrt(2)\nbut the danger is that importing multiple modules into a blank namespace can make some names invisible (later imports shadow earlier ones), and it obscures which module a function came from. So it is safer to stick to a plain import, where you keep the module namespace (or a shorter alias):\nimport math\n s2 = math.sqrt(2)\nLine plot\nThe array $x$ will contain numbers from 0 to 9.5 in steps of 0.5. We then compute two arrays $y$ and $z$ as follows:\n$$\ny = {1\\over{10}}{x^2} \n$$\nand\n$$\nz = 3\\sqrt{x}\n$$", "x = 0.5*np.arange(20)\ny = x*x*0.1\nz = np.sqrt(x)*3\n\nplt.plot(x,y,'o-',label='y')\nplt.plot(x,z,'*--',label='z')\nplt.title(\"$x^2$ and $\\sqrt{x}$\")\n#plt.legend(loc='best')\nplt.legend()\nplt.xlabel('X axis')\nplt.ylabel('Y axis')\n#plt.xscale('log')\n#plt.yscale('log')\n#plt.savefig('sample1.png')\n\n", "Scatter plot", "plt.scatter(x,y,s=40.0,c='r',label='y')\nplt.scatter(x,z,s=20.0,c='g',label='z')\nplt.legend(loc='best')\nplt.show()", "Multi-panel plots", "fig = plt.figure()\nfig1 = fig.add_subplot(121)\nfig1.scatter(x,z,s=20.0,c='g',label='z')\nfig2 = fig.add_subplot(122)\nfig2.scatter(x,y,s=40.0,c='r',label='y');", "Histogram", "n = 100000\nmean = 4.0\ndisp = 2.0\nbins = 32\ng = np.random.normal(mean,disp,n)\np = np.random.poisson(mean,n)\n\ngh=plt.hist(g,bins)\n\nph=plt.hist(p,bins)\n\nplt.hist([g,p],bins)", "Notice that in this example the output from the plt.hist() command was not captured in a variable, but instead sent to the output. 
You can see it contains the values and edges of the bins it computed. Of course, this is also documented!\nhttp://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.hist\nLots of examples of plots and corresponding code are in matplotlib's gallery:\nhttp://matplotlib.org/gallery.html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
EricChiquitoG/Simulacion2017
Modulo1/.ipynb_checkpoints/Clase4_OsciladorArmonico-checkpoint.ipynb
mit
[ "¿Cómo se mueve un péndulo?\n\nSe dice que un sistema cualquiera, mecánico, eléctrico, neumático, etc., es un oscilador armónico si, cuando se deja en libertad fuera de su posición de equilibrio, vuelve hacia ella describiendo oscilaciones sinusoidales, o sinusoidales amortiguadas en torno a dicha posición estable.\n- https://es.wikipedia.org/wiki/Oscilador_armónico\n\nReferencias: \n - http://matplotlib.org\n - https://seaborn.pydata.org\n - http://www.numpy.org\n - http://ipywidgets.readthedocs.io/en/latest/index.html\nEn realidad esto es el estudio de oscilaciones. \n\n<div>\n<img style=\"float: left; margin: 0px 0px 15px 15px;\" src=\"http://images.iop.org/objects/ccr/cern/51/3/17/CCast2_03_11.jpg\" width=\"300px\" height=\"100px\" />\n\n<img style=\"float: right; margin: 0px 0px 15px 15px;\" src=\"https://qph.ec.quoracdn.net/main-qimg-f7a6d0342e57b06d46506e136fb7d437-c\" width=\"225px\" height=\"50px\" />\n\n </div>", "from IPython.display import YouTubeVideo\nYouTubeVideo('k5yTVHr6V14')", "Los sistemas mas sencillos a estudiar en oscilaciones son el sistema masa-resorte y el péndulo simple. \n<div>\n<img style=\"float: left; margin: 0px 0px 15px 15px;\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/76/Pendulum.jpg\" width=\"150px\" height=\"50px\" />\n<img style=\"float: right; margin: 15px 15px 15px 15px;\" src=\"https://upload.wikimedia.org/wikipedia/ko/9/9f/Mass_spring.png\" width=\"200px\" height=\"100px\" />\n</div>\n\n\\begin{align}\n\\frac{d^2 x}{dt^2} + \\omega_{0}^2 x &= 0, \\quad \\omega_{0} = \\sqrt{\\frac{k}{m}}\\notag\\\n\\frac{d^2 \\theta}{dt^2} + \\omega_{0}^{2}\\, \\theta &= 0, \\quad\\mbox{donde}\\quad \\omega_{0}^2 = \\frac{g}{l} \n\\end{align} \n\nSistema masa-resorte\nLa solución a este sistema masa-resorte se explica en términos de la segunda ley de Newton. Para este caso, si la masa permanece constante y solo consideramos la dirección en $x$. 
Entonces,\n\\begin{equation}\nF = m \\frac{d^2x}{dt^2}.\n\\end{equation}\n¿Cuál es la fuerza? Ley de Hooke! \n\\begin{equation}\nF = -k x, \\quad k > 0.\n\\end{equation}\nVemos que la fuerza se opone al desplazamiento y su intensidad es proporcional al mismo. Y $k$ es la constante elástica o recuperadora del resorte. \nEntonces, un modelo del sistema masa-resorte está descrito por la siguiente ecuación diferencial:\n\\begin{equation}\n\\frac{d^2x}{dt^2} + \\frac{k}{m}x = 0,\n\\end{equation}\ncuya solución se escribe como \n\\begin{equation}\nx(t) = A \\cos(\\omega_{o} t) + B \\sin(\\omega_{o} t)\n\\end{equation}\nY su primera derivada (velocidad) sería \n\\begin{equation}\n\\frac{dx(t)}{dt} = \\omega_{0}[- A \\sin(\\omega_{0} t) + B\\cos(\\omega_{0}t)]\n\\end{equation}\n<font color=red> Ver en el tablero que significa solución de la ecuación diferencial.</font>\n¿Cómo se ven las gráficas de $x$ vs $t$ y $\\frac{dx}{dt}$ vs $t$?\nEsta instrucción es para que las gráficas aparezcan dentro de este entorno.", "%matplotlib inline", "_Esta es la librería con todas las instrucciones para realizar gráficos. 
_", "import matplotlib.pyplot as plt \n\nimport matplotlib as mpl\nlabel_size = 14\nmpl.rcParams['xtick.labelsize'] = label_size \nmpl.rcParams['ytick.labelsize'] = label_size ", "Y esta es la librería con todas las funciones matemáticas necesarias.", "import numpy as np\n\n# Definición de funciones a graficar\nA, B, w0 = .5, .1, .5 # Parámetros\nt = np.linspace(0, 50, 100) # Creamos vector de tiempo de 0 a 50 con 100 puntos\nx = A*np.cos(w0*t)+B*np.sin(w0*t) # Función de posición\ndx = w0*(-A*np.sin(w0*t)+B*np.cos(w0*t)) # Función de velocidad\n\n# Gráfico\nplt.figure(figsize = (7, 4)) # Ventana de gráfica con tamaño\nplt.plot(t, x, '-', lw = 1, ms = 4, \n label = '$x(t)$') # Explicación\nplt.plot(t, dx, 'ro-', lw = 1, ms = 4,\n label = r'$\\dot{x(t)}$') \nplt.xlabel('$t$', fontsize = 20) # Etiqueta eje x\nplt.show()\n\n# Colores, etiquetas y otros formatos\n\nplt.figure(figsize = (7, 4))\nplt.scatter(t, x, lw = 0, c = 'red',\n label = '$x(t)$') # Gráfica con puntos\nplt.plot(t, x, 'r-', lw = 1) # Grafica normal\nplt.scatter(t, dx, lw = 0, c = 'b',\n label = r'$\\frac{dx}{dt}$') # Con la r, los backslash se tratan como un literal, no como un escape\nplt.plot(t, dx, 'b-', lw = 1)\nplt.xlabel('$t$', fontsize = 20)\nplt.legend(loc = 'best') # Leyenda con las etiquetas de las gráficas\nplt.show()", "Y si consideramos un conjunto de frecuencias de oscilación, entonces", "frecuencias = np.array([.1, .2 , .5, .6]) # Vector de diferentes frecuencias\nplt.figure(figsize = (7, 4)) # Ventana de gráfica con tamaño\n\n# Graficamos para cada frecuencia\nfor w0 in frecuencias:\n x = A*np.cos(w0*t)+B*np.sin(w0*t)\n plt.plot(t, x, '*-')\nplt.xlabel('$t$', fontsize = 16) # Etiqueta eje x\nplt.ylabel('$x(t)$', fontsize = 16) # Etiqueta eje y\nplt.title('Oscilaciones', fontsize = 16) # Título de la gráfica\nplt.show()", "Estos colores, son el default de matplotlib, sin embargo existe otra librería dedicada, entre otras cosas, a la presentación de gráficos.", "import seaborn as 
sns\nsns.set(style='ticks', palette='Set2')\n\nfrecuencias = np.array([.1, .2 , .5, .6])\nplt.figure(figsize = (7, 4))\nfor w0 in frecuencias:\n x = A*np.cos(w0*t)+B*np.sin(w0*t)\n plt.plot(t, x, 'o-', \n label = '$\\omega_0 = %s$'%w0) # Etiqueta cada gráfica con frecuencia correspondiente (conversion float a string)\nplt.xlabel('$t$', fontsize = 16)\nplt.ylabel('$x(t)$', fontsize = 16)\nplt.title('Oscilaciones', fontsize = 16)\nplt.legend(loc='center left', bbox_to_anchor=(1.05, 0.5), prop={'size': 14})\nplt.show()", "Si queremos tener manipular un poco mas las cosas, hacemos uso de lo siguiente:", "from ipywidgets import *\n\ndef masa_resorte(t = 0):\n A, B, w0 = .5, .1, .5 # Parámetros\n x = A*np.cos(w0*t)+B*np.sin(w0*t) # Función de posición\n \n fig = plt.figure()\n ax = fig.add_subplot(1, 1, 1)\n ax.plot(x, [0], 'ko', ms = 10)\n ax.set_xlim(xmin = -0.6, xmax = .6)\n ax.axvline(x=0, color = 'r')\n ax.axhline(y=0, color = 'grey', lw = 1)\n fig.canvas.draw()\n\ninteract(masa_resorte, t = (0, 50,.01));", "La opción de arriba generalmente será lenta, así que lo recomendable es usar interact_manual.", "def masa_resorte(t = 0):\n A, B, w0 = .5, .1, .5 # Parámetros\n x = A*np.cos(w0*t)+B*np.sin(w0*t) # Función de posición\n \n fig = plt.figure()\n ax = fig.add_subplot(1, 1, 1)\n ax.plot(x, [0], 'ko', ms = 10)\n ax.set_xlim(xmin = -0.6, xmax = .6)\n ax.axvline(x=0, color = 'r')\n ax.axhline(y=0, color = 'grey', lw = 1)\n fig.canvas.draw()\n \ninteract_manual(masa_resorte, t = (0, 50,.01));", "Péndulo simple\nAhora, si fijamos nuestra atención al movimiento de un péndulo simple (oscilaciones pequeñas), la ecuación diferencial a resolver tiene la misma forma:\n\\begin{equation}\n\\frac{d^2 \\theta}{dt^2} + \\omega_{0}^{2}\\, \\theta = 0, \\quad\\mbox{donde}\\quad \\omega_{0}^2 = \\frac{g}{l}.\n\\end{equation}\nLa diferencia más evidente es como hemos definido a $\\omega_{0}$. 
Esto quiere decir que, \n\\begin{equation}\n\\theta(t) = A\\cos(\\omega_{0} t) + B\\sin(\\omega_{0}t)\n\\end{equation}\nSi graficamos la ecuación de arriba vamos a encontrar un comportamiento muy similar al ya discutido anteriormente. Es por ello que ahora veremos el movimiento en el plano $xy$. Es decir, \n\\begin{align}\nx &= l \\sin(\\theta), \\quad\ny = l \\cos(\\theta) \n\\end{align}", "# Podemos definir una función que nos entregue theta dados los parámetros y el tiempo\ndef theta_t(a, b, g, l, t):\n omega_0 = np.sqrt(g/l)\n return a * np.cos(omega_0 * t) + b * np.sin(omega_0 * t) \n\n# Hacemos un gráfico interactivo del péndulo\n\ndef pendulo_simple(t = 0):\n fig = plt.figure(figsize = (5,5))\n ax = fig.add_subplot(1, 1, 1)\n x = 2 * np.sin(theta_t(.4, .6, 9.8, 2, t))\n y = - 2 * np.cos(theta_t(.4, .6, 9.8, 2, t))\n ax.plot(x, y, 'ko', ms = 10)\n ax.plot([0], [0], 'rD')\n ax.plot([0, x ], [0, y], 'k-', lw = 1)\n ax.set_xlim(xmin = -2.2, xmax = 2.2)\n ax.set_ylim(ymin = -2.2, ymax = .2)\n fig.canvas.draw()\n \ninteract_manual(pendulo_simple, t = (0, 10,.01));", "Condiciones iniciales\nRealmente lo que se tiene que resolver es, \n\\begin{equation}\n\\theta(t) = \\theta(0) \\cos(\\omega_{0} t) + \\frac{\\dot{\\theta}(0)}{\\omega_{0}} \\sin(\\omega_{0} t)\n\\end{equation}\n\nActividad. 
Modificar el programa anterior para incorporar las condiciones iniciales.", "# Solución: \ndef theta_t():\n \n return \n\ndef pendulo_simple(t = 0):\n fig = plt.figure(figsize = (5,5))\n ax = fig.add_subplot(1, 1, 1)\n x = 2 * np.sin(theta_t( , t))\n y = - 2 * np.cos(theta_t(, t))\n ax.plot(x, y, 'ko', ms = 10)\n ax.plot([0], [0], 'rD')\n ax.plot([0, x ], [0, y], 'k-', lw = 1)\n ax.set_xlim(xmin = -2.2, xmax = 2.2)\n ax.set_ylim(ymin = -2.2, ymax = .2)\n fig.canvas.draw()\ninteract_manual(pendulo_simple, t = (0, 10,.01));", "Plano fase $(x, \\frac{dx}{dt})$\nLa posición y velocidad para el sistema masa-resorte se escriben como: \n\\begin{align}\nx(t) &= x(0) \\cos(\\omega_{o} t) + \\frac{\\dot{x}(0)}{\\omega_{0}} \\sin(\\omega_{o} t)\\\n\\dot{x}(t) &= -\\omega_{0}x(0) \\sin(\\omega_{0} t) + \\dot{x}(0)\\cos(\\omega_{0}t)]\n\\end{align}", "k = 3 #constante elástica [N]/[m] \nm = 1 # [kg]\nomega_0 = np.sqrt(k/m)\nx_0 = .5\ndx_0 = .1 \n\nt = np.linspace(0, 50, 300)\n\nx_t = x_0 *np.cos(omega_0 *t) + (dx_0/omega_0) * np.sin(omega_0 *t)\ndx_t = -omega_0 * x_0 * np.sin(omega_0 * t) + dx_0 * np.cos(omega_0 * t)\n\nplt.figure(figsize = (7, 4))\nplt.plot(t, x_t, label = '$x(t)$', lw = 1)\nplt.plot(t, dx_t, label = '$\\dot{x}(t)$', lw = 1)\n#plt.plot(t, dx_t/omega_0, label = '$\\dot{x}(t)$', lw = 1) # Mostrar que al escalar, la amplitud queda igual\nplt.legend(loc='center left', bbox_to_anchor=(1.01, 0.5), prop={'size': 14})\nplt.xlabel('$t$', fontsize = 18)\nplt.show()\n\nplt.figure(figsize = (5, 5))\nplt.plot(x_t, dx_t/omega_0, 'ro', ms = 2)\nplt.xlabel('$x(t)$', fontsize = 18)\nplt.ylabel('$\\dot{x}(t)/\\omega_0$', fontsize = 18)\nplt.show()\n\nplt.figure(figsize = (5, 5))\nplt.scatter(x_t, dx_t/omega_0, cmap = 'viridis', c = dx_t, s = 8, lw = 0)\nplt.xlabel('$x(t)$', fontsize = 18)\nplt.ylabel('$\\dot{x}(t)/\\omega_0$', fontsize = 18)\nplt.show()", "Multiples condiciones iniciales", "k = 3 #constante elástica [N]/[m] \nm = 1 # [kg]\nomega_0 = np.sqrt(k/m)\n\nt = 
np.linspace(0, 50, 50)\n\nx_0s = np.array([.7, .5, .25, .1])\ndx_0s = np.array([.2, .1, .05, .01])\ncmaps = np.array(['viridis', 'inferno', 'magma', 'plasma'])\n\nplt.figure(figsize = (6, 6))\nfor indx, x_0 in enumerate(x_0s):\n x_t = x_0 *np.cos(omega_0 *t) + (dx_0s[indx]/omega_0) * np.sin(omega_0 *t)\n dx_t = -omega_0 * x_0 * np.sin(omega_0 * t) + dx_0s[indx] * np.cos(omega_0 * t)\n plt.scatter(x_t, dx_t/omega_0, cmap = cmaps[indx], \n c = dx_t, s = 10, \n lw = 0)\n plt.xlabel('$x(t)$', fontsize = 18)\n plt.ylabel('$\\dot{x}(t)/\\omega_0$', fontsize = 18)\n #plt.legend(loc='center left', bbox_to_anchor=(1.05, 0.5))", "Trayectorias del oscilador armónico simple en el espacio fase $(x,\\, \\dot{x}\\,/\\omega_0)$ para diferentes valores de la energía. \n<script>\n $(document).ready(function(){\n $('div.prompt').hide();\n $('div.back-to-top').hide();\n $('nav#menubar').hide();\n $('.breadcrumb').hide();\n $('.hidden-print').hide();\n });\n</script>\n\n<footer id=\"attribution\" style=\"float:right; color:#808080; background:#fff;\">\nCreated with Jupyter by Lázaro Alonso. Modified by Esteban Jiménez Rodríguez\n</footer>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
diegocavalca/Studies
deep-learnining-specialization/2. improving deep neural networks/week1/Regularization.ipynb
cc0-1.0
[ "Regularization\nWelcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that overfitting can be a serious problem, if the training dataset is not big enough. Sure it does well on the training set, but the learned network doesn't generalize to new examples that it has never seen!\nYou will learn to: Use regularization in your deep learning models.\nLet's first import the packages you are going to use.", "# import packages\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec\nfrom reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters\nimport sklearn\nimport sklearn.datasets\nimport scipy.io\nfrom testCases import *\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'", "Problem Statement: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head. 
\n<img src=\"images/field_kiank.png\" style=\"width:600px;height:350px;\">\n<caption><center> <u> Figure 1 </u>: Football field<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>\nThey give you the following 2D dataset from France's past 10 games.", "train_X, train_Y, test_X, test_Y = load_2D_dataset()", "Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.\n- If the dot is blue, it means the French player managed to hit the ball with his/her head\n- If the dot is red, it means the other team's player hit the ball with their head\nYour goal: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.\nAnalysis of the dataset: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well. \nYou will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem. \n1 - Non-regularized model\nYou will use the following neural network (already implemented for you below). This model can be used:\n- in regularization mode -- by setting the lambd input to a non-zero value. We use \"lambd\" instead of \"lambda\" because \"lambda\" is a reserved keyword in Python. \n- in dropout mode -- by setting the keep_prob to a value less than one\nYou will first try the model without any regularization. 
Then, you will implement:\n- L2 regularization -- functions: \"compute_cost_with_regularization()\" and \"backward_propagation_with_regularization()\"\n- Dropout -- functions: \"forward_propagation_with_dropout()\" and \"backward_propagation_with_dropout()\"\nIn each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.", "def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):\n \"\"\"\n Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.\n \n Arguments:\n X -- input data, of shape (input size, number of examples)\n Y -- true \"label\" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)\n learning_rate -- learning rate of the optimization\n num_iterations -- number of iterations of the optimization loop\n print_cost -- If True, print the cost every 10000 iterations\n lambd -- regularization hyperparameter, scalar\n keep_prob - probability of keeping a neuron active during drop-out, scalar.\n \n Returns:\n parameters -- parameters learned by the model. 
They can then be used to predict.\n \"\"\"\n \n grads = {}\n costs = [] # to keep track of the cost\n m = X.shape[1] # number of examples\n layers_dims = [X.shape[0], 20, 3, 1]\n \n # Initialize parameters dictionary.\n parameters = initialize_parameters(layers_dims)\n\n # Loop (gradient descent)\n\n for i in range(0, num_iterations):\n\n # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.\n if keep_prob == 1:\n a3, cache = forward_propagation(X, parameters)\n elif keep_prob < 1:\n a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)\n \n # Cost function\n if lambd == 0:\n cost = compute_cost(a3, Y)\n else:\n cost = compute_cost_with_regularization(a3, Y, parameters, lambd)\n \n # Backward propagation.\n assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout, \n # but this assignment will only explore one at a time\n if lambd == 0 and keep_prob == 1:\n grads = backward_propagation(X, Y, cache)\n elif lambd != 0:\n grads = backward_propagation_with_regularization(X, Y, cache, lambd)\n elif keep_prob < 1:\n grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)\n \n # Update parameters.\n parameters = update_parameters(parameters, grads, learning_rate)\n \n # Print the loss every 10000 iterations\n if print_cost and i % 10000 == 0:\n print(\"Cost after iteration {}: {}\".format(i, cost))\n if print_cost and i % 1000 == 0:\n costs.append(cost)\n \n # plot the cost\n plt.plot(costs)\n plt.ylabel('cost')\n plt.xlabel('iterations (x1,000)')\n plt.title(\"Learning rate =\" + str(learning_rate))\n plt.show()\n \n return parameters", "Let's train the model without any regularization, and observe the accuracy on the train/test sets.", "parameters = model(train_X, train_Y)\nprint (\"On the training set:\")\npredictions_train = predict(train_X, train_Y, parameters)\nprint (\"On the test set:\")\npredictions_test = predict(test_X, test_Y, parameters)", "The train accuracy is 94.8% while 
the test accuracy is 91.5%. This is the baseline model (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.", "plt.title(\"Model without regularization\")\naxes = plt.gca()\naxes.set_xlim([-0.75,0.40])\naxes.set_ylim([-0.75,0.65])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)", "The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting.\n2 - L2 Regularization\nThe standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your cost function, from:\n$$J = -\\frac{1}{m} \\sum\\limits_{i = 1}^{m} \\large{(}\\small y^{(i)}\\log\\left(a^{[L](i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[L](i)}\\right) \\large{)} \\tag{1}$$\nTo:\n$$J_{regularized} = \\small \\underbrace{-\\frac{1}{m} \\sum\\limits_{i = 1}^{m} \\large{(}\\small y^{(i)}\\log\\left(a^{[L](i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[L](i)}\\right) \\large{)} }_\\text{cross-entropy cost} + \\underbrace{\\frac{1}{m} \\frac{\\lambda}{2} \\sum\\limits_l\\sum\\limits_k\\sum\\limits_j W_{k,j}^{[l]2} }_\\text{L2 regularization cost} \\tag{2}$$\nLet's modify your cost and observe the consequences.\nExercise: Implement compute_cost_with_regularization() which computes the cost given by formula (2). To calculate $\\sum\\limits_k\\sum\\limits_j W_{k,j}^{[l]2}$, use:\npython\nnp.sum(np.square(Wl))\nNote that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \\frac{1}{m} \\frac{\\lambda}{2} $.", "# GRADED FUNCTION: compute_cost_with_regularization\n\ndef compute_cost_with_regularization(A3, Y, parameters, lambd):\n \"\"\"\n Implement the cost function with L2 regularization. 
See formula (2) above.\n \n Arguments:\n A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)\n Y -- \"true\" labels vector, of shape (output size, number of examples)\n parameters -- python dictionary containing parameters of the model\n \n Returns:\n cost - value of the regularized loss function (formula (2))\n \"\"\"\n m = Y.shape[1]\n W1 = parameters[\"W1\"]\n W2 = parameters[\"W2\"]\n W3 = parameters[\"W3\"]\n \n cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost\n \n ### START CODE HERE ### (approx. 1 line)\n L2_regularization_cost = (lambd/(2*m))*(np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))\n ### END CODE HERE ###\n \n cost = cross_entropy_cost + L2_regularization_cost\n \n return cost\n\nA3, Y_assess, parameters = compute_cost_with_regularization_test_case()\n\nprint(\"cost = \" + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))", "Expected Output: \n<table> \n <tr>\n <td>\n **cost**\n </td>\n <td>\n 1.78648594516\n </td>\n\n </tr>\n\n</table>\n\nOf course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost. \nExercise: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. 
For each, you have to add the regularization term's gradient ($\\frac{d}{dW} ( \\frac{1}{2}\\frac{\\lambda}{m} W^2) = \\frac{\\lambda}{m} W$).", "# GRADED FUNCTION: backward_propagation_with_regularization\n\ndef backward_propagation_with_regularization(X, Y, cache, lambd):\n \"\"\"\n Implements the backward propagation of our baseline model to which we added an L2 regularization.\n \n Arguments:\n X -- input dataset, of shape (input size, number of examples)\n Y -- \"true\" labels vector, of shape (output size, number of examples)\n cache -- cache output from forward_propagation()\n lambd -- regularization hyperparameter, scalar\n \n Returns:\n gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables\n \"\"\"\n \n m = X.shape[1]\n (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache\n \n dZ3 = A3 - Y\n \n ### START CODE HERE ### (approx. 1 line)\n dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd/m)*W3\n ### END CODE HERE ###\n db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)\n \n dA2 = np.dot(W3.T, dZ3)\n dZ2 = np.multiply(dA2, np.int64(A2 > 0))\n ### START CODE HERE ### (approx. 1 line)\n dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m)*W2\n ### END CODE HERE ###\n db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)\n \n dA1 = np.dot(W2.T, dZ2)\n dZ1 = np.multiply(dA1, np.int64(A1 > 0))\n ### START CODE HERE ### (approx. 
1 line)\n dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m)*W1\n ### END CODE HERE ###\n db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)\n \n gradients = {\"dZ3\": dZ3, \"dW3\": dW3, \"db3\": db3,\"dA2\": dA2,\n \"dZ2\": dZ2, \"dW2\": dW2, \"db2\": db2, \"dA1\": dA1, \n \"dZ1\": dZ1, \"dW1\": dW1, \"db1\": db1}\n \n return gradients\n\nX_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()\n\ngrads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)\nprint (\"dW1 = \"+ str(grads[\"dW1\"]))\nprint (\"dW2 = \"+ str(grads[\"dW2\"]))\nprint (\"dW3 = \"+ str(grads[\"dW3\"]))", "Expected Output:\n<table> \n <tr>\n <td>\n **dW1**\n </td>\n <td>\n [[-0.25604646 0.12298827 -0.28297129]\n [-0.17706303 0.34536094 -0.4410571 ]]\n </td>\n </tr>\n <tr>\n <td>\n **dW2**\n </td>\n <td>\n [[ 0.79276486 0.85133918]\n [-0.0957219 -0.01720463]\n [-0.13100772 -0.03750433]]\n </td>\n </tr>\n <tr>\n <td>\n **dW3**\n </td>\n <td>\n [[-1.77691347 -0.11832879 -0.09397446]]\n </td>\n </tr>\n</table>\n\nLet's now run the model with L2 regularization $(\\lambda = 0.7)$. The model() function will call: \n- compute_cost_with_regularization instead of compute_cost\n- backward_propagation_with_regularization instead of backward_propagation", "parameters = model(train_X, train_Y, lambd = 0.7)\nprint (\"On the train set:\")\npredictions_train = predict(train_X, train_Y, parameters)\nprint (\"On the test set:\")\npredictions_test = predict(test_X, test_Y, parameters)", "Congrats, the test set accuracy increased to 93%. You have saved the French football team!\nYou are not overfitting the training data anymore. 
Let's plot the decision boundary.", "plt.title(\"Model with L2-regularization\")\naxes = plt.gca()\naxes.set_xlim([-0.75,0.40])\naxes.set_ylim([-0.75,0.65])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)", "Observations:\n- The value of $\\lambda$ is a hyperparameter that you can tune using a dev set.\n- L2 regularization makes your decision boundary smoother. If $\\lambda$ is too large, it is also possible to \"oversmooth\", resulting in a model with high bias.\nWhat is L2-regularization actually doing?:\nL2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes. \n<font color='blue'>\nWhat you should remember -- the implications of L2-regularization on:\n- The cost computation:\n - A regularization term is added to the cost\n- The backpropagation function:\n - There are extra terms in the gradients with respect to weight matrices\n- Weights end up smaller (\"weight decay\"): \n - Weights are pushed to smaller values.\n3 - Dropout\nFinally, dropout is a widely used regularization technique that is specific to deep learning. \nIt randomly shuts down some neurons in each iteration. Watch these two videos to see what this means!\n<!--\nTo understand drop-out, consider this conversation with a friend:\n- Friend: \"Why do you need all these neurons to train your network and classify images?\". \n- You: \"Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more features my model learns!\"\n- Friend: \"I see, but are you sure that your neurons are learning different features and not all the same features?\"\n- You: \"Good point... 
Neurons in the same layer actually don't talk to each other. It should be definitely possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution.\"\n-->\n\n<center>\n<video width=\"620\" height=\"440\" src=\"images/dropout1_kiank.mp4\" type=\"video/mp4\" controls>\n</video>\n</center>\n<br>\n<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep_prob$ or keep it with probability $keep_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. </center></caption>\n<center>\n<video width=\"620\" height=\"440\" src=\"images/dropout2_kiank.mp4\" type=\"video/mp4\" controls>\n</video>\n</center>\n<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>\nWhen you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time. \n3.1 - Forward propagation with dropout\nExercise: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer. \nInstructions:\nYou would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:\n1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using np.random.rand() to randomly get numbers between 0 and 1. 
Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}]$ of the same dimension as $A^{[1]}$.\n2. Set each entry of $D^{[1]}$ to be 0 with probability (1-keep_prob) or 1 with probability (keep_prob), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 1 (if entry is less than 0.5) or 0 (if entry is more than 0.5) you would do: X = (X &lt; 0.5). Note that 0 and 1 are respectively equivalent to False and True.\n3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.\n4. Divide $A^{[1]}$ by keep_prob. By doing this you are ensuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)", "# GRADED FUNCTION: forward_propagation_with_dropout\n\ndef forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):\n \"\"\"\n Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.\n \n Arguments:\n X -- input dataset, of shape (2, number of examples)\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\":\n W1 -- weight matrix of shape (20, 2)\n b1 -- bias vector of shape (20, 1)\n W2 -- weight matrix of shape (3, 20)\n b2 -- bias vector of shape (3, 1)\n W3 -- weight matrix of shape (1, 3)\n b3 -- bias vector of shape (1, 1)\n keep_prob - probability of keeping a neuron active during drop-out, scalar\n \n Returns:\n A3 -- last activation value, output of the forward propagation, of shape (1,1)\n cache -- tuple, information stored for computing the backward propagation\n \"\"\"\n np.random.seed(1)\n \n # retrieve parameters\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n W3 = 
parameters[\"W3\"]\n b3 = parameters[\"b3\"]\n \n # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID\n Z1 = np.dot(W1, X) + b1\n A1 = relu(Z1)\n ### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above. \n D1 = np.random.rand(A1.shape[0], A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)\n D1 = D1 < keep_prob # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)\n A1 = A1 * D1 # Step 3: shut down some neurons of A1\n A1 = A1 / keep_prob # Step 4: scale the value of neurons that haven't been shut down\n ### END CODE HERE ###\n Z2 = np.dot(W2, A1) + b2\n A2 = relu(Z2)\n ### START CODE HERE ### (approx. 4 lines)\n D2 = np.random.rand(A2.shape[0], A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)\n D2 = D2 < keep_prob # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)\n A2 = A2 * D2 # Step 3: shut down some neurons of A2\n A2 = A2 / keep_prob # Step 4: scale the value of neurons that haven't been shut down\n ### END CODE HERE ###\n Z3 = np.dot(W3, A2) + b3\n A3 = sigmoid(Z3)\n \n cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)\n \n return A3, cache\n\nX_assess, parameters = forward_propagation_with_dropout_test_case()\n\nA3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)\nprint (\"A3 = \" + str(A3))", "Expected Output: \n<table> \n <tr>\n <td>\n **A3**\n </td>\n <td>\n [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]\n </td>\n\n </tr>\n\n</table>\n\n3.2 - Backward propagation with dropout\nExercise: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache. \nInstruction:\nBackpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:\n1. 
You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to A1. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to dA1. \n2. During forward propagation, you had divided A1 by keep_prob. In backpropagation, you'll therefore have to divide dA1 by keep_prob again (the calculus interpretation is that if $A^{[1]}$ is scaled by keep_prob, then its derivative $dA^{[1]}$ is also scaled by the same keep_prob).", "# GRADED FUNCTION: backward_propagation_with_dropout\n\ndef backward_propagation_with_dropout(X, Y, cache, keep_prob):\n \"\"\"\n Implements the backward propagation of our baseline model to which we added dropout.\n \n Arguments:\n X -- input dataset, of shape (2, number of examples)\n Y -- \"true\" labels vector, of shape (output size, number of examples)\n cache -- cache output from forward_propagation_with_dropout()\n keep_prob - probability of keeping a neuron active during drop-out, scalar\n \n Returns:\n gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables\n \"\"\"\n \n m = X.shape[1]\n (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache\n \n dZ3 = A3 - Y\n dW3 = 1./m * np.dot(dZ3, A2.T)\n db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)\n dA2 = np.dot(W3.T, dZ3)\n ### START CODE HERE ### (≈ 2 lines of code)\n dA2 = dA2 * D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation\n dA2 = dA2 / keep_prob # Step 2: Scale the value of neurons that haven't been shut down\n ### END CODE HERE ###\n dZ2 = np.multiply(dA2, np.int64(A2 > 0))\n dW2 = 1./m * np.dot(dZ2, A1.T)\n db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)\n \n dA1 = np.dot(W2.T, dZ2)\n ### START CODE HERE ### (≈ 2 lines of code)\n dA1 = dA1 * D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation\n dA1 = dA1 / keep_prob # Step 2: Scale the value of neurons 
that haven't been shut down\n ### END CODE HERE ###\n dZ1 = np.multiply(dA1, np.int64(A1 > 0))\n dW1 = 1./m * np.dot(dZ1, X.T)\n db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)\n \n gradients = {\"dZ3\": dZ3, \"dW3\": dW3, \"db3\": db3,\"dA2\": dA2,\n \"dZ2\": dZ2, \"dW2\": dW2, \"db2\": db2, \"dA1\": dA1, \n \"dZ1\": dZ1, \"dW1\": dW1, \"db1\": db1}\n \n return gradients\n\nX_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()\n\ngradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)\n\nprint (\"dA1 = \" + str(gradients[\"dA1\"]))\nprint (\"dA2 = \" + str(gradients[\"dA2\"]))", "Expected Output: \n<table> \n <tr>\n <td>\n **dA1**\n </td>\n <td>\n [[ 0.36544439 0. -0.00188233 0. -0.17408748]\n [ 0.65515713 0. -0.00337459 0. -0. ]]\n </td>\n\n </tr>\n <tr>\n <td>\n **dA2**\n </td>\n <td>\n [[ 0.58180856 0. -0.00299679 0. -0.27715731]\n [ 0. 0.53159854 -0. 0.53159854 -0.34089673]\n [ 0. 0. -0.00292733 0. -0. ]]\n </td>\n\n </tr>\n</table>\n\nLet's now run the model with dropout (keep_prob = 0.86). It means at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function model() will now call:\n- forward_propagation_with_dropout instead of forward_propagation.\n- backward_propagation_with_dropout instead of backward_propagation.", "parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)\n\nprint (\"On the train set:\")\npredictions_train = predict(train_X, train_Y, parameters)\nprint (\"On the test set:\")\npredictions_test = predict(test_X, test_Y, parameters)", "Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you! 
\nRun the code below to plot the decision boundary.", "plt.title(\"Model with dropout\")\naxes = plt.gca()\naxes.set_xlim([-0.75,0.40])\naxes.set_ylim([-0.75,0.65])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)", "Note:\n- A common mistake when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training. \n- Deep learning frameworks like tensorflow, PaddlePaddle, keras or caffe come with a dropout layer implementation. Don't stress - you will soon learn some of these frameworks.\n<font color='blue'>\nWhat you should remember about dropout:\n- Dropout is a regularization technique.\n- You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.\n- Apply dropout both during forward and backward propagation.\n- During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even when keep_prob is other values than 0.5. \n4 - Conclusions\nHere are the results of our three models: \n<table> \n <tr>\n <td>\n **model**\n </td>\n <td>\n **train accuracy**\n </td>\n <td>\n **test accuracy**\n </td>\n\n </tr>\n <td>\n 3-layer NN without regularization\n </td>\n <td>\n 95%\n </td>\n <td>\n 91.5%\n </td>\n <tr>\n <td>\n 3-layer NN with L2-regularization\n </td>\n <td>\n 94%\n </td>\n <td>\n 93%\n </td>\n </tr>\n <tr>\n <td>\n 3-layer NN with dropout\n </td>\n <td>\n 93%\n </td>\n <td>\n 95%\n </td>\n </tr>\n</table>\n\nNote that regularization hurts training set performance! This is because it limits the ability of the network to overfit to the training set. 
But since it ultimately gives better test accuracy, it is helping your system. \nCongratulations for finishing this assignment! And also for revolutionizing French football. :-) \n<font color='blue'>\nWhat we want you to remember from this notebook:\n- Regularization will help you reduce overfitting.\n- Regularization will drive your weights to lower values.\n- L2 regularization and Dropout are two very effective regularization techniques." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
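The inverted-dropout recipe in the notebook above (random mask, threshold against keep_prob, apply the mask, rescale by 1/keep_prob, and reuse the same mask in backprop) can be sketched as a standalone NumPy snippet. The layer shape, seed, and keep_prob below are arbitrary illustration values, not the graded test case.

```python
import numpy as np

def dropout_forward(A, keep_prob, rng):
    # Steps 1-4 of inverted dropout: random matrix, threshold,
    # apply the mask, then rescale to preserve the expected value.
    D = rng.random(A.shape) < keep_prob   # entry kept with probability keep_prob
    A_drop = (A * D) / keep_prob
    return A_drop, D

def dropout_backward(dA, D, keep_prob):
    # Backprop reapplies the same mask and the same 1/keep_prob scaling.
    return (dA * D) / keep_prob

rng = np.random.default_rng(0)
A1 = np.ones((3, 5))
A1_drop, D1 = dropout_forward(A1, keep_prob=0.8, rng=rng)
dA1 = dropout_backward(np.ones((3, 5)), D1, keep_prob=0.8)
```

Surviving entries of `A1_drop` equal 1/0.8 = 1.25, so the sum over many units keeps roughly the same expected value as without dropout — which is exactly why the notebook divides by keep_prob in both passes.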
setiQuest/ML4SETI
tutorials/General_move_data_to_from_Nimbix_Cloud.ipynb
apache-2.0
[ "How to move data to/from your Nimbix Cloud machine.\nThis tutorial shows you how to use the pysftp client to move data to/from your Nimbix cloud machine. \nThis will be especially useful for moving data between your IBM Apache Spark service and your IBM PowerAI system available during the Hackathon. \nhttps://pysftp.readthedocs.io/en/release_0.2.9/#\nImportant for hackathon participants using the PowerAI systems. When those machines are shut down, all data in your local user space will be lost. So, be sure to save your work!\nBUG: It was recently found that installing pysftp breaks the python-swiftclient, which is used to transfer data to Object Storage. If you install pysftp and then wish to resume using python-swiftclient you'll need to:\n\n!pip uninstall -y pysftp\n!pip uninstall -y paramiko\n!pip uninstall -y pyasn1\n!pip uninstall -y cryptography", "#!pip install --user pysftp\n#restart your kernel\n\nimport os\nimport pysftp", "Create Local File Space", "mydatafolder = os.environ['PWD'] + '/' + 'my_team_name_data_folder'\n\n# THIS DISABLES HOST KEY CHECKING! Should be okay for our temporary running machines though.\ncnopts = pysftp.CnOpts()\ncnopts.hostkeys = None\n\n#Get this from your Nimbix machine (or other cloud service provider!)\nhostname='NAE-xxxx.jarvice.com'\nusername='nimbix'\npassword='xx'", "PUT a file\nIf you follow the Step 3 tutorial, you will have created some zip files containing the PNGs. 
These will be located in your my_team_name_data_folder/zipfiles/ directory.", "with pysftp.Connection(hostname, username=username, password=password, cnopts=cnopts) as sftp:\n sftp.put(mydatafolder + '/zipfiles/classification_6_noise.zip') # upload file to remote", "GET a file\nFirst, I define a separate location to hold files I get from remote.", "fromnimbixfolder = mydatafolder + '/fromnimbix'\nif os.path.exists(fromnimbixfolder) is False:\n os.makedirs(fromnimbixfolder)\n\nwith pysftp.Connection(hostname, username=username, password=password, cnopts=cnopts) as sftp:\n with pysftp.cd(fromnimbixfolder):\n sftp.get('test.csv') #data in local HOME space\n sftp.get('/data/my_team_name_data_folder/our_results.csv') #data in persistent Nimbix Cloud storage" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
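The PUT example in this notebook uploads a zip of PNG images; the archive itself can be built with Python's standard zipfile module, as in this sketch. The temporary folder and file names here are made-up illustration values standing in for the actual my_team_name_data_folder contents.

```python
import os
import tempfile
import zipfile

# Hypothetical stand-in for my_team_name_data_folder/zipfiles/ -- a temp dir
workdir = tempfile.mkdtemp()
png_names = ["sample_0.png", "sample_1.png"]  # made-up file names
for name in png_names:
    with open(os.path.join(workdir, name), "wb") as f:
        f.write(b"\x89PNG placeholder bytes")  # not a real image, just content

# Build the archive that sftp.put() would then upload
zip_path = os.path.join(workdir, "classification_images.zip")
with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
    for name in png_names:
        # arcname strips the directory prefix so the zip holds bare file names
        zf.write(os.path.join(workdir, name), arcname=name)
```

Passing the resulting `zip_path` to `sftp.put(...)` then mirrors the upload cell above.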
mromanello/SunoikisisDC_NER
participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb
gpl-3.0
[ "Plan of the lecture\n\nIntroduction: Information Extraction and Named Entity Recognition (NER)\nNER: definitions and tasks (extraction, classification, disambiguation)\nbasic programming concepts in Python\nDoing NER with existing libraries:\nNER from Latin texts with CLTK\nNER from journal articles with NLTK\n\n\n\nPython: basic concepts\nPython is a very flexible and very powerful programming language that can help you work with texts and corpora. Python's philosophy emphasizes code readability and features a simple and very expressive syntax. It is actually easy to master the basic aspects of Python's syntax: it is amazing how much you can do even with just the most basic concepts... The aim of these two lectures is to introduce to you some of these basic operations, let you see some code in action and also give you some exercises where you can apply what you've seen.\nIt is also amazing how many things you can accomplish with some well written lines of Python! By the end of this class, we'd like to show you how you use Python to perform (some) Natural Language Processing. But of course, you can even just use Python to do something as easy as...", "2 + 3", "Variables and data types\nHere we go! We've written our first line of code... But I guess we want to do something a little more interesting, right? Well, for a start, we might want to use Python to execute some operation (say: sum two numbers like 2 and 3) and process the result to print it on the screen, process it, and reuse it as many times as we want...\nVariables are what we use to store values. Think of a variable as a shoebox where you place your content; next time you need that content (i.e. the result of a previous operation, or for example some input you've read from a file) you simply call the shoebox name...", "result = 2 + 3\n\n#now we print the result\nprint(result)\n\n# by the way, I'm a comment. I'm not executed\n# every line of code following the sign # is ignored:\n# print(\"I'm line n. 
3: do you see me?\")\n# see? You don't see me...\nprint(\"I'm line nr. 5 and you DO see me!\")", "That's it! As easy as that (yes, in some programming languages you have to create or declare the variable first and then use it to fill the shoebox; in Python, you go ahead and simply use it!)\nNow, what do you think we will get when we execute the following code?", "result + 5", "What types of values can we put into a variable? What goes into the shoebox? We can start with the members of this list:\n\nIntegers (-1,0,1,2,3,4...)\nStrings (\"Hello\", \"s\", \"Wolfgang Amadeus Mozart\", \"I am the α and the ω!\"...)\nfloats (3.14159; 2.71828...)\nBooleans (True, False)\n\nIf you're not sure what type of value you're dealing with, you can use the function type(). Yes, it works with variables too...!", "type(\"I am the α and the ω!\")\n\ntype(2.7182818284590452353602874713527)\n\ntype(True)\n\nresult = \"hello\"\n\ntype(result)", "You declare strings with single ('') or double (\"\") quotes: it makes no difference! But now two questions:\n1. what happens if you forget the quotes?\n2. what happens if you put quotes around a number?", "hello = \"goodbye\"\nprint(hello)\n\nprint(\"hello\")\n\ntype(\"2\")", "String, integer, float... Why is that so important? 
Well, try to sum two strings and see what happens...", "\"2\" + \"3\"\n\n#probably you wanted this...\nint(\"2\") + int(\"3\")", "But if we are working with strings, then the \"+\" sign is used to concatenate the strings:", "a = \"interesting!\"\nprint(\"not very \" + a)", "Lists and dictionaries\nLists and dictionaries are two very useful types to store whole collections of data", "beatles = [\"John\", \"Paul\", \"George\", \"Ringo\"]\ntype(beatles)\n\n# dictionaries are collections of key : value pairs\nbeatles_dictionary = { \"john\" : \"John Lennon\" ,\n \"paul\" : \"Paul McCartney\",\n \"george\" : \"George Harrison\",\n \"ringo\" : \"Ringo Starr\"}\ntype(beatles_dictionary)", "(there are also other types of collection, like Tuples and Sets, but we won't talk about them now; read the links if you're interested!)\nItems in a list are accessible using their index. Do remember that indexing starts from 0!", "print(beatles[0])\n\n#indexes can be negative!\nbeatles[-1]", "Dictionaries are collections of key : value pairs. You access the value using the key as index", "beatles_dictionary[\"john\"]\n\nbeatles_dictionary[0]", "There are a bunch of methods that you can apply to lists to work with them.\nYou can append items at the end of a list", "beatles.append(\"Billy Preston\")\nbeatles", "You can learn the index of an item", "beatles.index(\"George\")", "You can insert elements at a predefined index:", "beatles.insert(0, \"Pete Best\")\nprint(beatles.index(\"George\"))\nbeatles", "But most importantly, you can slice lists, producing sub-lists by specifying the range of indexes you want:", "beatles[1:5]", "Do you notice something strange? Yes, the limit index is not inclusive (i.e. 
item beatles[5] is not included)", "beatles[5]", "What happens if you specify an index that is too high?", "beatles[7]", "How can you know how long a list is?", "len(beatles)", "Do remember that indexing starts at 0, so don't make the mistake of thinking that len(yourlist) will give you the last item of your list!", "beatles[len(beatles)]", "This will work!", "beatles[len(beatles) -1]", "If-statements\nMost of the time, what you want to do when you program is to check a value and execute some operation depending on whether the value matches some condition. That's where if statements help!\nIn its easiest form, an if statement is a syntactic construction that checks whether a condition is met; if it is, some part of the code is executed", "bassist = \"Paul McCartney\"\n\nif bassist == \"Paul McCartney\":\n print(\"Paul played bass with the Beatles!\")", "Mind the indentation! This is the essential element in the syntax of the statement", "bassist = \"Bill Wyman\"\n\nif bassist == \"Paul McCartney\":\n print(\"I'm part of the if statement...\")\n print(\"Paul played bass in the Beatles!\")", "What happens if the condition is not met? Nothing! The indented code is not executed, because the condition is not met, so lines 4 and 5 are simply skipped.\nBut what happens if we de-indent line 5? 
Can you guess why this is what happens?\nMost of the time, we need to specify what happens if the conditions are not met", "bassist = \"\"\n\nif bassist == \"Paul McCartney\":\n print(\"Paul played bass in the Beatles!\")\nelse:\n print(\"This guy did not play for the Beatles...\")", "This is the flow:\n* the condition in line 3 is checked\n* is it met?\n * yes: then line 4 is executed\n * no: then line 6 is executed\nOr we can specify many different conditions...", "bassist = \"Bill\"\n\nif bassist == \"Paul McCartney\":\n print(\"Paul played bass in the Beatles!\")\nelif bassist == \"Bill Wyman\":\n print(\"Bill Wyman played for the Rolling Stones!\")\nelse:\n print(\"I don't know what band this guy played for...\")", "For loops\nThe greatest thing about lists is that they are iterable, that is, you can loop through them. What do we do if we want to apply some line of code to each element in a list? Try with a for loop!\nA for loop can be paraphrased as: \"for each element named x in an iterable (e.g. a list): do some code (e.g. print the value of x)\"", "for b in beatles:\n print(b + \" was one of the Beatles\")", "Let's break the code down to its parts:\n* b: an arbitrary name that we give to the variable holding every value in the loop (it could have been any name; b is just very convenient in this case!)\n* beatles: the list we're iterating through\n* : as in the if-statements: don't forget the colon!\n* indent: also, don't forget to indent this code! 
it's the only thing that is telling Python that line 2 is part of the for loop!\n* line 2: the function that we want to execute for each item in the iterables\nNow, let's join if statements and for loops to do something nice...", "beatles = [\"John\", \"Paul\", \"George\", \"Ringo\"]\nfor b in beatles:\n if b == \"Paul\":\n instrument = \"bass\"\n elif b == \"John\":\n instrument = \"rhythm guitar\"\n elif b == \"George\":\n instrument = \"lead guitar\"\n elif b == \"Ringo\":\n instrument = \"drum\"\n print(b + \" played \" + instrument + \" with the Beatles\")", "Input and Output\nOne of the most frequent tasks that programmers do is reading data from files and writing some of the output of their programs to a file. \nIn Python (as in many languages), we first need to open a file-handler with the appropriate mode in order to process it. Files can be opened in:\n* read mode (\"r\")\n* write mode (\"w\")\n* append mode\nLet's try to read the content of one of the txt files of our Sunoikisis directory\nFirst, we open the file handler in read mode:", "#see? 
we assign the file-handler to a variable, or we wouldn't be able\n#to do anything with that!\nf = open(\"NOTES.md\", \"r\")", "note that \"r\" is optional: read is the default mode!\nNow there are a bunch of things we can do:\n* read the full content in one variable with this code:\ncontent = f.read()\n\nread the lines in a list of lines:\n\nlines = f.readlines()\n\nor, which is the easiest, simply read the content one line at a time with a for loop; the f object is iterable, so this is as easy as:", "for l in f:\n print(l)", "Once you're done, don't forget to close the handle:", "f.close()\n\n#all together\nf = open(\"NOTES.md\")\nfor l in f:\n print(l)\nf.close()", "Now, there's a shortcut statement, which you'll often see and is very convenient, because it takes care of opening, closing and cleaning up the mess, in case there's some error:", "with open(\"NOTES.md\") as f:\n #mind the indent!\n for l in f:\n #double indent, of course!\n print(l)", "Now, how about writing to a file? Let's try to write a simple message to a file; first, we open the handler in write mode", "out = open(\"test.txt\", \"w\")\n\n#the file is now open; let's write something in it\nout.write(\"This is a test!\\nThis is a second line (separated with a new-line feed)\")", "The file has been created! Let's check this out", "#don't worry if you don't understand this code!\n#We're simply listing the content of the current directory...\nimport os\nos.listdir()", "But before we can do anything (e.g. open it with your favorite text editor) you have to close the file-handler!", "out.close()", "Let's look at its content", "with open(\"test.txt\") as f:\n print(f.read())", "Again, also for writing we can use a with statement, which is very handy.\nBut let's have a look at what happens here, so we understand a bit better why \"write mode\" must be used carefully!", "with open(\"test.txt\", \"w\") as out:\n out.write(\"Oooops! 
new content\")", "Let's have a look at the content of \"test.txt\" now", "with open(\"test.txt\") as f:\n print(f.read())", "See? After we opened the file in \"write mode\" for the second time, all content of the file was erased and replaced with the new content that we wrote!!!\nSo keep in mind: when you open a file in \"w\" mode:\n\nif it doesn't exist, a new file with that name is created\nif it does exist, it is completely overwritten and all previous content is lost\n\nIf you want to write content to an existing file without losing its previous content, you have to open the file with the \"a\" mode:", "with open(\"test.txt\", \"a\") as out:\n out.write('''\\nAnd this is some additional content.\nThe new content is appended at the bottom of the existing file''')\n\nwith open(\"test.txt\") as f:\n print(f.read())", "Functions\nAbove, we have opened a file several times to inspect its content. Each time, we had to type the same code over and over. This is the typical case where you would like to save some typing (and write code that is much easier to maintain!) by defining a function.\nA function is a block of reusable code that can be invoked to perform a definite task. Most often (but not necessarily), it accepts one or more arguments and returns a certain value.\nWe have already seen one of the built-in functions of Python: print(\"some str\")\nBut it's actually very easy to define your own. Let's define the function to print out the file content, as we said before. 
Note that this function takes one argument (the file name) and prints out some text, but doesn't return any value.", "def printFileContent(file_name):\n #the function takes one argument: file_name\n with open(file_name) as f:\n print(f.read())", "As usual, mind the indent!\nfile_name (line 1) is the placeholder that we use in the function for any argument that we want to pass to the function in our real-life reuse of the code.\nNow, if we want to use our function we simply call it with the file name that we want to print out", "printFileContent(\"README.md\")", "Now, let's see an example of a function that returns some value to the user. Those functions typically take some arguments, process them and yield back the result of this processing.\nHere's the easiest example possible: a function that takes two numbers as arguments, sums them and returns the result.", "def sumTwoNumbers(first_int, second_int):\n s = first_int + second_int\n return s\n\n#could be even shorter:\ndef sumTwoNumbers(first_int, second_int):\n return first_int + second_int\n\nsumTwoNumbers(5, 6)", "Most often, you want to assign the result returned to a variable, so that you can go on working with the results...", "s = sumTwoNumbers(5,6)\ns * 2", "Error and exceptions\nThings can go wrong, especially when you're a beginner. But don't panic! Errors and exceptions are actually a good thing! Python gives you detailed reports about what is wrong, so read them carefully and try to figure out what is not right.\nOnce you're getting better, you'll actually learn that you can do something good with the exceptions: you'll learn how to handle them, and to anticipate some of the most common problems that dirty data can face you with...\nNow, what happens if you forget the all-important syntactic constraint of the code indent?", "if 1 > 0:\n print(\"Well, we know that 1 is bigger than 0!\")", "Pretty clear, isn't it? What you get is an error: a construct that is not grammatical in Python's syntax. 
Note that you're also told where (at what line, and at what point of the code) your error is occurring. That is not always perfect (there are cases where the problem is actually occurring before what Python thinks), but in this case it's pretty OK.\nWhat if you forget to define a variable (or you misspell the name of a variable)?", "var = \"bla bla\"\nif var1:\n print(\"If you see me, then I was defined...\")", "You get an exception! The syntax of your code is right, but the execution met with a problem that caused the program to stop.\nNow, in your program, you can handle selected exceptions: this means that you can write your code in a way that the program would still be executed even if a certain exception is raised.\nLet's see what happens if we use our function to try to print the content of a file that doesn't exist:", "printFileContent(\"file_that_is_not_there.txt\")", "We get a FileNotFoundError! Now, let's re-write the function so that this event (somebody uses the function with a wrong file name) is taken care of...", "def printFileContent(file_name):\n #the function takes one argument: file_name\n try:\n with open(file_name) as f:\n print(f.read())\n except FileNotFoundError:\n print(\"The file does not exist.\\nNevertheless, I do like you, and I will print something to you anyway...\")\n\nprintFileContent(\"file_that_doesnt_exist.txt\")", "Appendix: useful links\nPython: how to install\nIf you're using Mac OSX or Linux, you already have (at least one version of) Python installed. Anyway, it's very easy to install Python or upgrade your version. See:\nhttps://wiki.python.org/moin/BeginnersGuide/Download\nJupyter: how to install\nhttp://jupyter.org/install.html\nPython and Jupyter also come in a pre-packaged environment (which is designed especially for data science) called Anaconda. You might be interested to look at that.\nPython 2 or Python 3?\nPython 3 is the latest version of Python (currently, 3.6.1). 
It's a major upgrade from Python 2, but the language changed substantially in the move from 2 to 3, and there are some backward-compatibility problems. Some versions of Linux or Mac OSX still come with Python 2.7 (the final version of Python 2).\nAnyway, Python 3 is currently in active development: it's where the cutting-edge improvements and new stuff are being developed (especially for NLP and the NLTK library). In this code, we assume Python 3!\nhttps://wiki.python.org/moin/Python2orPython3\nNLTK: Book\nWould you like a book that is a great introduction to Python for absolute beginners, is a wonderful resource to learn the basics of Natural Language Processing, and gives you a thorough introduction to the NLTK library to do NLP in Python? Oh yeah, I was forgetting: it can be read for free on the internet? Yes, it's Christmas time!\nhttp://www.nltk.org/book/" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
peterwittek/qml-rg
Archiv_Session_Spring_2017/Exercises/05_aps_capcha.ipynb
gpl-3.0
[ "Finding the right captcha with Keras", "import os\nimport numpy as np\nimport tools as im\nfrom matplotlib import pyplot as plt\nfrom skimage.transform import resize\n%matplotlib inline\n\npath=os.getcwd()+'/' # finds the path of the folder in which the notebook is\npath_train=path+'images/train/'\npath_test=path+'images/test/'\npath_real=path+'images/real_world/'", "We first define a function to prepare the data in the format Keras (Theano) expects. The function also reduces the size of the images from 100x100 to 32x32.", "def prep_datas(xset,xlabels):\n X=list(xset)\n for i in range(len(X)):\n X[i]=resize(X[i],(32,32,1)) #reduce the size of the image from 100X100 to 32X32. Also flattens the color levels\n \n X=np.reshape(X,(len(X),1,32,32)) # reshape the list to have the form required by keras (theano), ie (1,32,32)\n X=np.array(X) #transforms it into an array\n Y = np.eye(2, dtype='uint8')[xlabels] # generates vectors, here of two elements as required by keras (number of classes) \n return X,Y", "We then load the training set and the test set and prepare them with the function prep_datas.", "training_set, training_labels = im.load_images(path_train)\ntest_set, test_labels = im.load_images(path_test)\nX_train,Y_train=prep_datas(training_set,training_labels)\nX_test,Y_test=prep_datas(test_set,test_labels)", "Image before/after compression", "i=11\nplt.subplot(1,2,1)\nplt.imshow(training_set[i],cmap='gray')\nplt.subplot(1,2,2)\nplt.imshow(X_train[i][0],cmap='gray')", "LeNet neural network", "# import the necessary packages\nfrom keras.models import Sequential\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.layers.convolutional import MaxPooling2D\nfrom keras.layers.core import Activation\nfrom keras.layers.core import Flatten\nfrom keras.layers.core import Dense\nfrom keras.optimizers import SGD\n\n# this code comes from http://www.pyimagesearch.com/2016/08/01/lenet-convolutional-neural-network-in-python/\n\nclass LeNet:\n\t@staticmethod\n\tdef 
build(width, height, depth, classes, weightsPath=None):\n\t\t# initialize the model\n\t\tmodel = Sequential()\n\n\t\t# first set of CONV => RELU => POOL\n\t\tmodel.add(Convolution2D(20, 5, 5, border_mode=\"same\",input_shape=(depth, height, width)))\n\t\tmodel.add(Activation(\"relu\"))\n\t\tmodel.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))\n\n\t\t# second set of CONV => RELU => POOL\n\t\tmodel.add(Convolution2D(50, 5, 5, border_mode=\"same\"))\n\t\tmodel.add(Activation(\"relu\"))\n\t\tmodel.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))\n\n\t\t# set of FC => RELU layers\n\t\tmodel.add(Flatten())\n\t\tmodel.add(Dense(500))\n\t\tmodel.add(Activation(\"relu\"))\n\n\t\t# softmax classifier\n\t\tmodel.add(Dense(classes))\n\t\tmodel.add(Activation(\"softmax\"))\n\n\t\t# return the constructed network architecture\n\t\treturn model", "We build the neural network and fit it on the training set", "model = LeNet.build(width=32, height=32, depth=1, classes=2)\nopt = SGD(lr=0.01)#Stochastic gradient descent with learning rate 0.01\nmodel.compile(loss=\"categorical_crossentropy\", optimizer=opt,metrics=[\"accuracy\"])\nmodel.fit(X_train, Y_train, batch_size=10, nb_epoch=300,verbose=1)\n\ny_pred = model.predict_classes(X_test)\nprint(y_pred)\nprint(test_labels)", "We now compare with the real-world images (using the deshear method)", "real_world_set=[]\nfor i in np.arange(1,73):\n filename=path+'images/real_world/'+str(i)+'.png'\n real_world_set.append(im.deshear(filename))\nfake_label=np.ones(len(real_world_set),dtype='int32')\nX_real,Y_real=prep_datas(real_world_set,fake_label)\ny_pred = model.predict_classes(X_real)", "and check against Peter's labels", "f=open(path+'images/real_world/labels.txt',\"r\")\nlines=f.readlines()\nresult=[]\nfor x in lines:\n result.append((x.split('\t')[1]).replace('\\n',''))\nf.close()\n\nresult=np.array([int(x) for x in result])\nresult[result>1]=1\nplt.plot(y_pred,'o')\nplt.plot(2*result,'o')\nplt.ylim(-0.5,2.5);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
myuuuuun/various
応用統計/HW1/HW1.ipynb
mit
[ "Applied Statistics HW1\nDetails: http://www.stat.t.u-tokyo.ac.jp/~takemura/ouyoutoukei/", "#-*- encoding: utf-8 -*-\n'''\nOuyoutoukei HW1\n'''\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nimport statsmodels.api as sm\nnp.set_printoptions(precision=3)\npd.set_option('display.precision', 4)", "Preparation\nImport the data and display summary statistics.", "# import the csv\ndf = pd.read_csv( 'odakyu-mansion.csv' )\n# display summary statistics\nprint(df.describe())", "The house orientation is decomposed into dummy variables for east, west, south, and north (0 or 1; for southeast, both south and east are set to 1):", "# sample size\ndata_len = df.shape[0]\n\n# house orientation as dummy variables\ndf['d_N'] = np.zeros(data_len, dtype=float)\ndf['d_E'] = np.zeros(data_len, dtype=float)\ndf['d_W'] = np.zeros(data_len, dtype=float)\ndf['d_S'] = np.zeros(data_len, dtype=float)\nfor i, row in df.iterrows():\n for direction in [\"N\", \"W\", \"S\", \"E\"]:\n if direction in str(row.muki):\n df.loc[i, 'd_{0}'.format(direction)] = 1\n\n# show the first 10 rows\nprint(df.head(10))", "Missing values are replaced with the mean.", "df = df.fillna(df.mean())", "Ordinary least squares\nWe remove, one by one, the explanatory variables with little influence on the dependent variable; concretely, we drop variables with p > 0.05.\nWe also account for outliers along the way.\nOLS, round 1\nRunning OLS with all 13 explanatory variables gives:", "# also add a constant term\nX = sm.add_constant(df[['time', 'bus', 'walk', 'area',\n 'bal', 'kosuu', 'floor', 'tf', 'd_N', 'd_E', 'd_S', 'd_W', 'year']])\n\n# ordinary least squares\nmodel = sm.OLS(df.price, X)\nresults = model.fit()\n\n# show the results\nprint(results.summary())", "Looking at the p-values of the output above, kosuu and floor appear almost unrelated to the price.\nkosuu has one outlier (kosuu=2080), so let's remove it.\nOLS, round 2\nRemoving the outlier,", "print(df.loc[161])\ndf = df.drop(161)", "and running OLS again gives:", "X = sm.add_constant(df[['time', 'bus', 'walk', 'area', 'bal',\n 'kosuu', 'floor', 'tf', 'd_N', 'd_E', 'd_S', 'd_W', 'year']])\nmodel = sm.OLS(df.price, X)\nresults = model.fit()\nprint(results.summary())", "The p-values of kosuu and floor are still large, so we remove them from the explanatory variables:\nOLS, round 3", "X = sm.add_constant(df[['time', 'bus', 'walk', 'area',\n 'bal', 'tf', 'd_N', 'd_E', 'd_S', 'd_W', 'year']])\nmodel = sm.OLS(df.price, X)\nresults = model.fit()\nprint(results.summary())", 
"Next, we further remove bal and the orientation dummies other than south from the explanatory variables.\nOLS, round 4", "X = sm.add_constant(df[['time', 'bus', 'walk', 'area', 'tf', 'year', 'd_S']])\nmodel = sm.OLS(df.price, X)\nresults = model.fit()\nprint(results.summary())", "We also remove the south-orientation dummy d_S and the building age year, whose p-values are large.\nOLS, round 5", "X = sm.add_constant(df[['time', 'bus', 'walk', 'area', 'tf']])\nmodel = sm.OLS(df.price, X)\nresults = model.fit()\nprint(results.summary())", "We remove tf, whose p-value is large.\nOLS, round 6", "X = sm.add_constant(df[['time', 'bus', 'walk', 'area']])\nmodel = sm.OLS(df.price, X)\nresults = model.fit()\nprint(results.summary())", "Looking at the adjusted R^2 = 0.783 and the F-statistic p-value of 3.96e-59, the four variables — ride time from Shinjuku station (time), bus ride time (bus), walking time (walk), and room area (area) — appear to explain housing prices well enough.\nCompared with OLS rounds 1-6, the AIC and BIC are almost unchanged or improved. \nWhat remains is to examine the residuals and check whether the assumptions on the error term are satisfied.\nResidual analysis\nThe assumptions on the residuals were: \n\nthe error terms have mean 0\nthe error terms have constant variance\nthe error terms are mutually independent\nthe error terms follow (at least approximately) a normal distribution\nthe correlation between the error term and each explanatory variable is 0\n\nNote: I do not know how to check all of these rigorously, so I only verify the ones I can.\nFirst, we plot the predicted values (prices) on the horizontal axis against the residuals on the vertical axis.", "# extract only the variables used in the regression\nnew_df = df.loc[:, ['price', 'time', 'bus', 'walk', 'area']]\n# explanatory variable matrix\nexp_matrix = new_df.loc[:, ['time', 'bus', 'walk', 'area']]\n# regression coefficient vector\ncoefs = results.params\n# predicted price vector\npredicted = exp_matrix.dot(coefs[1:]) + coefs[0]\n# residual vector\nresiduals = new_df.price - predicted\n\n# plot the residuals\nfig, ax = plt.subplots(figsize=(12, 8))\nplt.plot(predicted, residuals, 'o', color='b', linewidth=1, label=\"residuals distribution\")\nplt.xlabel(\"predicted values\")\nplt.ylabel(\"residuals\")\nplt.show()\n\n# residual mean\nprint(\"residuals mean:\", residuals.mean())", "The mean is almost 0, and the plot shows the points concentrated near 0: assumption 1 is satisfied. \nHowever, several outliers are visible on the right. Removing the single point at the upper right, we run the regression again.\nOLS, round 7", "print(new_df.loc[12] )\nnew_df = new_df.drop(12)\n\nX = sm.add_constant(new_df[['time', 'bus', 'walk', 'area']])\nmodel = sm.OLS(new_df.price, X)\nresults = model.fit()\nprint(results.summary())\n\n# explanatory variable matrix\nexp_matrix = new_df.loc[:, ['time', 'bus', 'walk', 'area']]\n# regression coefficient vector\ncoefs = results.params\n# predicted price vector\npredicted = exp_matrix.dot(coefs[1:]) + coefs[0]\n# 
residual vector\nresiduals = new_df.price - predicted\n\n# plot the residuals\nfig, ax = plt.subplots(figsize=(12, 8))\nplt.plot(predicted, residuals, 'o', color='b', linewidth=1, label=\"residuals distribution\")\nplt.xlabel(\"predicted values\")\nplt.ylabel(\"residuals\")\nplt.show()\n\n# residual mean\nprint(\"residuals mean:\", residuals.mean())", "Compared with the result of OLS round 6, the spread of the residuals is now more even.\nNext, we put the residuals on the vertical axis and the observed values of each explanatory variable on the horizontal axis, and inspect the spread of the residuals.", "# plot the residuals\nfig = plt.figure(figsize=(18, 10)) \nax1 = plt.subplot(2, 2, 1)\nplt.plot(exp_matrix['time'], residuals, 'o', color='b', linewidth=1, label=\"residuals - time\")\nplt.xlabel(\"time\")\nplt.ylabel(\"residuals\")\nplt.legend()\n\nax2 = plt.subplot(2, 2, 2, sharey=ax1)\nplt.plot(exp_matrix['bus'], residuals, 'o', color='b', linewidth=1, label=\"residuals - bus\")\nplt.xlabel(\"bus\")\nplt.ylabel(\"residuals\")\nplt.legend()\n\nax3 = plt.subplot(2, 2, 3, sharey=ax1)\nplt.plot(exp_matrix['walk'], residuals, 'o', color='b', linewidth=1, label=\"residuals - walk\")\nplt.xlabel(\"walk\")\nplt.ylabel(\"residuals\")\nplt.legend()\n\nax4 = plt.subplot(2, 2, 4, sharey=ax1)\nplt.plot(exp_matrix['area'], residuals, 'o', color='b', linewidth=1, label=\"residuals - area\")\nplt.xlabel(\"area\")\nplt.ylabel(\"residuals\")\nplt.legend()\n\nplt.show()", "No characteristic correlation is seen between any explanatory variable and the residuals: assumption 5 is satisfied.\nOnly the area variable shows a different pattern of residual spread, so some remedy may be advisable (sorry, I am not sure which).\nSummary\nWe obtained a regression model with a good fit, but questions remain about the properties of the residuals, in particular whether the homoscedasticity assumption is justified. If homoscedasticity is in doubt, weighted least squares should be used instead, so this needs careful consideration." ]
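The summary above mentions weighted least squares as the remedy when homoscedasticity fails. As a hedged illustration (not part of the original homework — the toy data and weights below are invented, and in practice the weights would come from the estimated error variances, e.g. 1/variance), a single-predictor WLS fit can be written directly from the weighted normal equations:

```python
# Minimal sketch of weighted least squares for one predictor plus an
# intercept: minimize sum_i w_i * (y_i - b0 - b1*x_i)^2.
def wls_fit(x, y, w):
    sw = sum(w)
    # weighted means
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    # weighted slope and intercept
    b1 = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y)) \
         / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    b0 = ybar - b1 * xbar
    return b0, b1

# The points lie exactly on y = 1 + 2x, so any positive weights must
# recover intercept 1 and slope 2.
x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]
w = [1.0, 0.5, 2.0, 0.25]
b0, b1 = wls_fit(x, y, w)
print(b0, b1)
```

With real data one would instead reach for a library routine such as statsmodels' WLS, but the closed form above shows exactly how the weights enter the fit.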
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jamesfolberth/NGC_STEM_camp_AWS
notebooks/machineLearning_notebooks/01_Naive_Bayes/Distributions.ipynb
bsd-3-clause
[ "Probability Distributions\nNext we are going to learn about probability distributions. Central to the fields of probability and statistics are random variables. Random variables (RVs) are used to model random experiments, such as the roll of a die. \nFor each random variable, there is a probability distribution function (or probability mass function for discrete RVs) that gives the probability of each outcome in the experiment. \nWe will learn about three important distributions today:\n- the binomial distribution\n- the normal (or \"Gaussian\") distribution\n- and the t distribution. \nAs you may remember, random variables (and their outcomes) can be either discrete or continuous. The binomial distribution is a discrete distribution, while the normal and t distributions are continuous distributions. \nThe Binomial Distribution\nA Binomial experiment:\n* The process consists of a sequence of $n$ trials.\n * $n=1$ is a common case, and this is known as the Bernoulli distribution\n* Only two exclusive outcomes are possible for each trial (a success and a failure)\n* If the probability of a success is '$p$' then the probability of failure is $q=1-p$\n* The trials are independent.\n* The random variable $X$ is the number of successes (after these $n$ trials)\nThe formula for a Binomial Distribution Probability Mass Function turns out to be:\n$$Pr(X=k)={n \\choose k} p^k (1-p)^{n-k}$$\nwhere n = number of trials, $k$ = number of successes, $p$ = probability of success, $1-p$ = probability of failure (often written as $q=1-p$).\nThis means that to get exactly '$k$' successes in '$n$' trials, we want exactly '$k$' successes: $$p^k$$ and we want '$n-k$' failures:$$(1-p)^{n-k}$$ Then finally, there are ${n \\choose k}$ ways of putting '$k$' successes in '$n$' trials. 
So we multiply all these together to get the probability of exactly that many successes and failures in those $n$ trials!\nQuick note, ${n \choose k}$ refers to the number of possible combinations of $n$ things taken $k$ at a time.\nThis is also equal to: $${n \choose k} = \frac{n!}{k!(n-k)!}$$\nQuick example to get you thinking. Let's say I'm a basketball player and I shoot 200 shots a day with 50% accuracy. On any given day, this is a random experiment with a binomial distribution and 200 trials.", "from scipy.stats import binom\n\nn = 200 #number of trials\np = 0.5 #probability of success\n\n# We can get stats: Mean('m'), variance('v'), skew('s'), and/or kurtosis('k')\nmn,vr= binom.stats(n,p)\n\nprint(mn)\nprint(vr**0.5)", "Now let's investigate the mean and standard deviation for the binomial distribution further.\nThe mean of a binomial distribution is simply: $$\mu=np$$\nThis intuitively makes sense: the average number of successes should be the total trials multiplied by your average success rate.\nSimilarly we can see that the standard deviation of a binomial is: $$\sigma=\sqrt{npq}$$\nLet's try another example to see the full PMF (Probability Mass Function) plot.\nImagine you flip a fair coin. 
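As a quick sanity check of the formulas above — a pure-Python sketch using only the standard library, independent of scipy — we can compute the pmf from its definition and confirm that it sums to 1 and reproduces the mean $np$ and variance $npq$:

```python
from math import comb  # Python 3.8+

def binom_pmf(k, n, p):
    # Pr(X = k) = C(n, k) * p^k * (1-p)^(n-k)
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 10, 0.5
pmf = [binom_pmf(k, n, p) for k in range(n + 1)]

# the pmf sums to 1, with mean n*p = 5 and variance n*p*(1-p) = 2.5
total = sum(pmf)
mean = sum(k * v for k, v in enumerate(pmf))
var = sum((k - mean) ** 2 * v for k, v in enumerate(pmf))
print(total, mean, var)  # approximately 1.0, 5.0, 2.5
```

The same values come out of `binom.stats(10, 0.5)`, which is how the notebook computes them below.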
Your probability of getting heads is p=0.5 (success in this example).\nSo what does your probability mass function look like for 10 coin flips?", "import numpy as np\n\n# Set up a new example, let's say n= 10 coin flips and p=0.5 for a fair coin.\nn=10\np=0.5\n\n# Set up n successes, remember indexing starts at 0, so use n+1\nx = range(n+1)\n\n# Now create the probability mass function\nY = binom.pmf(x,n,p)\n\n#Show\nY\n\n# Next we'll visualize the pmf by plotting it.", "Finally we will plot the binomial distribution.", "import matplotlib.pyplot as plt\n%matplotlib inline\n\n# For simple plots, matplotlib is fine, seaborn is unnecessary.\n\n# Now simply use plot\nplt.plot(x,Y,'o')\n\n#Title (use y=1.08 to raise the long title a little more above the plot)\nplt.title('Binomial Distribution PMF: 10 coin Flips, Odds of Success for Heads is p=0.5',y=1.08)\n\n#Axis Titles\nplt.xlabel('Number of Heads')\nplt.ylabel('Probability')", "Looks awfully bell shaped...\nGoing further\nSuppose you play Blackjack, and have a 50\% chance of winning. You start with a 1 dollar bet, and if you lose, you double the amount you bet on the next play. So if you play three rounds, and Lose, Lose, Win, then you lost 1 dollar on the first round, 2 dollars on the second round, but gained 4 dollars on the third round, for a net profit of 1 dollar.\nWhat is the expected value of your winnings (assuming you play as many rounds as it takes until you win)?\nIn fact, are you guaranteed to make money?\nWhy might this be a bad idea in practice?\nNote: this doubling strategy is the famous martingale, closely related to the Gambler's Ruin problem\nThe Normal Distribution\nNext we will talk about the normal distribution. This is the most important continuous distribution. It is also called the Gaussian distribution, or the bell curve. 
While the binomial distribution is often considered the most basic discrete distribution, the normal is the most fundamental of all continuous distributions.", "from IPython.display import Image\nImage(url='https://static.squarespace.com/static/549dcda5e4b0a47d0ae1db1e/54a06d6ee4b0d158ed95f696/54a06d70e4b0d158ed960413/1412514819046/1000w/Gauss_banknote.png')", "Now we define the normal pdf. The first equation below is the pdf for the normal distribution with mean $\mu$ and variance $\sigma^2$. The second equation is the pdf of the standard normal, with mean $0$ and variance $1$. We can always transform our random variable $X \sim {\mathcal {N}}(\mu ,\,\sigma ^{2})$ to the standard normal $Z \sim {\mathcal {N}}(0 , \, 1)$ by using the change of variables formula in the third equation.\n$$ f(x\;|\;\mu ,\sigma ^{2})={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}\;e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}} $$\n$$ \phi(z) = \frac{1}{\sqrt{2\pi}}e^{-\frac{z^2}{2}} $$\n$$ z=\frac{(X-\mu)}{\sigma} $$\nThe plot of the pdf may be familiar to some of you by now.", "#Import\nimport matplotlib as mpl\n%matplotlib inline\n\n#Import the stats library\nfrom scipy import stats\n\n# Set the mean\nmn = 0\n\n#Set the standard deviation\nstd_dev = 1\n\n# Create a range\nX = np.arange(-4,4,0.001)\n\n#Create the normal distribution for the range\nY = stats.norm.pdf(X,mn,std_dev)\n\n#\nplt.plot(X,Y)\n\nfrom IPython.display import Image\nImage(url='http://upload.wikimedia.org/wikipedia/commons/thumb/2/25/The_Normal_Distribution.svg/725px-The_Normal_Distribution.svg.png')", "The bell curve, centered at the mean, also shows the variance with how wide the bell is. The x-axis gives the realized values of the random variable, and the height of the curve gives the probability density of those values (for a continuous variable, probabilities come from areas under the curve, not from the curve's height itself). 
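The change-of-variables relation above — the pdf of $X$ equals $\phi(z)/\sigma$ with $z=(x-\mu)/\sigma$ — can be verified numerically with a small pure-Python sketch (the particular values of $\mu$, $\sigma$, and $x$ below are arbitrary):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # f(x | mu, sigma^2) = 1/(sigma*sqrt(2*pi)) * exp(-(x-mu)^2 / (2*sigma^2))
    return math.exp(-(x - mu) ** 2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

mu, sigma, x = 3.0, 2.0, 4.5
z = (x - mu) / sigma

# change of variables: f(x | mu, sigma^2) == phi(z) / sigma
lhs = normal_pdf(x, mu, sigma)
rhs = normal_pdf(z) / sigma
print(lhs, rhs)  # the two values agree
```

The same identity is what lets `stats.norm.pdf(X, mn, std_dev)` below be expressed entirely in terms of the standard normal.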
This image does a great job of giving a good interpretation of what percent of the outcomes lie within $n$ standard deviations of the mean.\nUsing Python, we can draw samples from a normal distribution and plot these samples using a histogram. The idea is that if we take enough samples, this histogram should look like the bell curve. We first start with 30, then up it to 1000. The histogram gives you a good idea of what the pdf of these samples will look like, but just as a visual aid we also plot a pdf estimator in blue. This is called a kernel density estimator, but it is beyond the scope of this tutorial. We plot the normal distribution these samples came from in green.", "#Set the mean and the standard deviation\nmu,sigma = 0,1\n\n# Now grab 30 random numbers from the normal distribution\nnorm_set = np.random.normal(mu,sigma,30)\n#Now let's plot it using seaborn\n\nimport seaborn as sns\nimport sklearn as sk\nfrom scipy.stats import gaussian_kde\n\nresults, edges = np.histogram(norm_set, normed=True)\nbinWidth = edges[1] - edges[0]\nplt.bar(edges[:-1], results*binWidth, binWidth)\n\ndensity = gaussian_kde(norm_set)\ndensity.covariance_factor = lambda : .4 #this is the bandwidth in the kernel density estimator\ndensity._compute_covariance()\nplt.plot(X,density(X))\n\nplt.plot(X,Y)", "With enough samples, this should start to look normal.", "norm2 = np.random.normal(mu, sigma, 1000)\n\nresults, edges = np.histogram(norm2, normed=True)\nbinWidth = edges[1] - edges[0]\nplt.bar(edges[:-1], results*binWidth, binWidth)\n\ndensity = gaussian_kde(norm2)\ndensity.covariance_factor = lambda : .4\ndensity._compute_covariance()\nplt.plot(X,density(X))\n\nplt.plot(X,Y)", "Central Limit Theorem\nThe Central Limit Theorem is one of the most important theorems in statistical theory. It states that when independent random variables are added, their sum tends toward a normal distribution even if the original variables themselves are not normally distributed. 
\nThis may seem obscure, so we will explain it with the previous example. We saw that as we took more samples from the normal distribution, the total distribution looked more and more normal. Here we will take more and more samples from a normal, and see how the mean of the samples behaves.", "n10 = np.random.normal(mu, sigma, 10)\nn100 = np.random.normal(mu, sigma, 100)\nn1000 = np.random.normal(mu, sigma, 1000)\nn10000 = np.random.normal(mu, sigma, 10000)\n\nprint(n10.mean(), n100.mean(), n1000.mean(), n10000.mean() )", "We see that as we add more samples, the sample mean approaches 0, the true mean. Strictly speaking, this convergence of the sample mean to the true mean is the law of large numbers; the central limit theorem says more: the (suitably scaled) sum of many independent random variables tends towards a Gaussian distribution (under some conditions). The next image shows how adding more dice to a sum makes the sum's distribution look more and more Gaussian.", "Image(url='https://upload.wikimedia.org/wikipedia/commons/8/8c/Dice_sum_central_limit_theorem.svg')", "
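The CLT can also be demonstrated without scipy or numpy: summing a handful of uniform random numbers already produces something close to a normal distribution, exactly as in the dice image above. A minimal simulation (the choice of 12 uniforms per sum and the sample count are arbitrary):

```python
import random

random.seed(0)

# The sum of 12 Uniform(0,1) draws has mean 12*(1/2) = 6 and
# variance 12*(1/12) = 1; the CLT says it is approximately N(6, 1)
# even though each individual draw is uniform, not normal.
def uniform_sum(k=12):
    return sum(random.random() for _ in range(k))

samples = [uniform_sum() for _ in range(100000)]
sample_mean = sum(samples) / len(samples)
sample_var = sum((s - sample_mean) ** 2 for s in samples) / len(samples)

# For N(6, 1), roughly 68% of draws fall within one standard deviation
within_one_sd = sum(1 for s in samples if 5.0 <= s <= 7.0) / len(samples)
print(sample_mean, sample_var, within_one_sd)
```

The sample mean and variance land near 6 and 1, and about 68% of the sums fall within one standard deviation of the mean, matching the normal picture shown earlier.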
This is connected to the idea that we are estimating something of a larger population, in practice it gives a slightly larger margin of error in the estimate.\nLet's define a new variable called t, where : $$t=\\frac{\\overline{X}-\\mu}{s}\\sqrt{N-1}=\\frac{\\overline{X}-\\mu}{s/\\sqrt{N}}$$\nwhich is analogous to the z statistic given by $$z=\\frac{\\overline{X}-\\mu}{\\sigma/\\sqrt{N}}$$\nThe sampling distribution for t can be obtained:\n$$ f(t) = \\frac {\\varGamma(\\frac{v+1}{2})}{\\sqrt{v\\pi}\\varGamma(\\frac{v}{2})} (1+\\frac{t^2}{v})^{-\\frac{v+1}{2}}$$\nWhere the gamma function is: $$\\varGamma(n)=(n-1)!$$\nAnd v is the number of degrees of freedom, typically equal to N-1.\nPlease don't worry about these formulas. Literally no one memorizes this distribution. Just know the binomial and the normal, and the idea of what a t distribution is for (small sample sizes).\nThe t distribution is plotted in blue, and the normal distribution is in green as well to compare.", "#Import for plots\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n#Import the stats library\nfrom scipy.stats import t\n\n#import numpy\nimport numpy as np\n\n# Create x range\nx = np.linspace(-5,5,100)\n\n# Create the t distribution with scipy\nrv = t(3)\n\n# Plot the PDF versus the x range\nplt.plot(x, rv.pdf(x))\nplt.plot(X, Y)", "Notice that the t distribution in blue has fatter tails than the normal distribution. This means there is more probability of realizing an observation in that region. As a consequence, there is less area under the peak/it is less spikey. This is a reflection of the fact that we have less certainty in where the observations will land because we have a smaller sample size of evidence to support this estimation of the data's distribution.\nMultivariate Normal\nA multivariate normal with unequal variances in the x and y direction and principal axes also rotated from the origin. 
Notice how each shade or horizontal slice looks like an ellipse.\nWhat is multivariate data? It just means more than one dimension. \n* An example of a one-dimensional random variable is the height of a person randomly chosen from this class\n* An example of a multi-variate random variable is the (height, shoe-size) of a person randomly chosen from this class\nFor an example like (height, shoe-size), where x=height and y=shoe-size, we expect the two variables to be correlated. Other examples, like (height, first-letter-of-your-name) are not correlated. If the variables are correlated, statistical tests should know about it!", "import scipy.stats\nx, y = np.mgrid[-1:1:.01, -1:1:.01]\npos = np.empty(x.shape + (2,))\npos[:, :, 0] = x; pos[:, :, 1] = y\nrv = scipy.stats.multivariate_normal([0.5, -0.2], [[2.0, 0.3], [0.3, 0.5]])\nplt.contourf(x, y, rv.pdf(pos))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mari-linhares/tensorflow-workshop
code_samples/estimators-for-free/.ipynb_checkpoints/estimators_for_free-checkpoint.ipynb
apache-2.0
[ "Before you start: make sure you have deleted the output_dir folder from this path\nSome things we get for free by using Estimators\nEstimators are a high-level abstraction (interface) that supports all the basic operations you need to run an ML model on top of TensorFlow.\nEstimators:\n * provide a simple interface for users of canned model architectures: training, evaluation, prediction, export for serving.\n * provide a standard interface for model developers\n * drastically reduce the amount of user code required. This avoids bugs and speeds up development significantly.\n * enable building production services against a standard interface.\n * using the experiments abstraction gives you free data-parallelism (more here)\nThe Estimator's interface includes: training, evaluation, prediction, and export for serving.\nImage from Effective TensorFlow for Non-Experts (Google I/O '17)\n\nYou can use an already implemented estimator (canned estimator) or implement your own (custom estimator).\nThis tutorial is not focused on how to build your own estimator; we're using a custom estimator that implements a CNN classifier for the MNIST dataset, defined in the model.py file, but we're not going into details about how that's implemented.\nHere we're going to show how Estimators make your life easier: once you have an Estimator model, it is very simple to change your model and compare results.\nHaving a look at the code and running the experiment\nDependencies", "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\n# our model \nimport model as m\n\n# tensorflow\nimport tensorflow as tf \nprint(tf.__version__) #tested with tf v1.2\n\nfrom tensorflow.contrib import learn\nfrom tensorflow.contrib.learn.python.learn import learn_runner\nfrom tensorflow.python.estimator.inputs import numpy_io\n\n# MNIST data\nfrom tensorflow.examples.tutorials.mnist import input_data\n# Numpy\nimport numpy as np\n\n# Enable TensorFlow 
logs\ntf.logging.set_verbosity(tf.logging.INFO)", "Getting the data\nWe're not going into details here", "# Import the MNIST dataset\nmnist = input_data.read_data_sets(\"/tmp/MNIST/\", one_hot=True)\n\nx_train = np.reshape(mnist.train.images, (-1, 28, 28, 1))\ny_train = mnist.train.labels\nx_test = np.reshape(mnist.test.images, (-1, 28, 28, 1))\ny_test = mnist.test.labels", "Defining the input function\nIf we look at the image above, we can see that there are two main parts in the diagram: an input function interacting with data files, and the Estimator interacting with the input function and checkpoints.\nThis means that the Estimator doesn't know about data files; it knows about input functions. So if we want to interact with a dataset, we need to create an input function that interacts with it. In this example we create input functions for the train and test datasets.\nYou can learn more about input functions here", "BATCH_SIZE = 128\n\nx_train_dict = {'x': x_train }\ntrain_input_fn = numpy_io.numpy_input_fn(\n x_train_dict, y_train, batch_size=BATCH_SIZE, \n shuffle=True, num_epochs=None, \n queue_capacity=1000, num_threads=4)\n\nx_test_dict = {'x': x_test }\ntest_input_fn = numpy_io.numpy_input_fn(\n x_test_dict, y_test, batch_size=BATCH_SIZE, shuffle=False, num_epochs=1)\n", "Creating an experiment\nAfter an experiment is created (by passing an Estimator and inputs for training and evaluation), an Experiment instance knows how to invoke training and eval loops in a sensible fashion for distributed training. 
More about it here", "# parameters\nLEARNING_RATE = 0.01\nSTEPS = 1000\n\n# create experiment\ndef generate_experiment_fn():\n def _experiment_fn(run_config, hparams):\n del hparams # unused, required by signature.\n # create estimator\n model_params = {\"learning_rate\": LEARNING_RATE}\n estimator = tf.estimator.Estimator(model_fn=m.get_model(), \n params=model_params,\n config=run_config)\n\n train_input = train_input_fn\n test_input = test_input_fn\n \n return tf.contrib.learn.Experiment(\n estimator,\n train_input_fn=train_input,\n eval_input_fn=test_input,\n train_steps=STEPS\n )\n return _experiment_fn", "Run the experiment", "OUTPUT_DIR = 'output_dir/model1'\nlearn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir=OUTPUT_DIR))", "Running a second time\nOkay, the model is definitely not good... But check the OUTPUT_DIR path: you'll see that an output_dir folder was created, with a lot of files that TensorFlow created automatically! \nMost of these files are actually checkpoints; this means that if we run the experiment again with the same model_dir, it will just load the checkpoint and start from there instead of starting all over again!\nThis means that:\n\nIf we have a problem while training, we can just restore from where we stopped instead of starting all over again \nIf we didn't train enough, we can just continue training \nIf you have a big file, you can break it into small files, train for a while with each small file, and the model will continue from where it stopped each time :) \n\nThis is all true as long as you use the same model_dir!\nSo, let's run the experiment again for 1000 more steps to see if we can improve the accuracy. Notice that the first step in this run will actually be step 1001. 
So, we need to change the number of steps to 2000 (otherwise the experiment will find the checkpoint and think it already finished training)", "STEPS = STEPS + 1000\nlearn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir=OUTPUT_DIR))", "TensorBoard\nAnother thing we get for free is TensorBoard. \nIf you run: tensorboard --logdir=OUTPUT_DIR\nyou'll see that we get the graph and some scalars; also, if you use an embedding layer, you'll get an embedding visualization in TensorBoard as well!\nSo, we can make small changes and have an easy (and totally free) way to compare the models.\nLet's make these changes:\n1. change the learning rate to 0.05 \n2. change the OUTPUT_DIR to some path in output_dir/ \nChange 2 must be inside output_dir/ so that we can run: tensorboard --logdir=output_dir/ \nand get both models visualized at the same time in TensorBoard.\nYou'll notice that the model will start from step 1, because there's no existing checkpoint in this path.
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
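The checkpoint-and-resume behaviour described in the notebook above can be sketched without TensorFlow at all. In this sketch the "checkpoint" is just a dict recording the last global step, and `train()` is a hypothetical stand-in for how an Experiment with a fixed `train_steps` decides how much work is left — it is an illustration of the bookkeeping, not the real API:

```python
# Toy model of TensorFlow's checkpoint/train_steps bookkeeping.
# "checkpoint" stands in for the files under model_dir; train() stands in
# for running an Experiment whose train_steps is an absolute step count.

def train(checkpoint, train_steps):
    """Run until global_step reaches train_steps, resuming from checkpoint."""
    start = checkpoint.get("global_step", 0)
    steps_run = max(0, train_steps - start)          # nothing to do if already past
    checkpoint["global_step"] = max(start, train_steps)
    return steps_run

ckpt = {}
assert train(ckpt, 1000) == 1000   # first run: steps 1..1000
assert train(ckpt, 1000) == 0      # same train_steps: checkpoint says "done"
assert train(ckpt, 2000) == 1000   # bump STEPS to 2000: resumes at step 1001
```

This is why the notebook adds 1000 to `STEPS` before the second run: `train_steps` is an absolute target, not an increment.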
thewtex/SimpleITK-Notebooks
33_Segmentation_Thresholding_Edge_Detection.ipynb
apache-2.0
[ "<h1 align=\"center\">Segmentation: Thresholding and Edge Detection</h1>\n\nIn this notebook our goal is to estimate the radius of spherical markers from an image (Cone-Beam CT volume).\nWe will use two approaches:\n1. Segment the fiducial using a thresholding approach, derive the sphere's radius from the segmentation. This approach is solely based on SimpleITK.\n2. Localize the fiducial's edges using the Canny edge detector and then fit a sphere to these edges using a least squares approach. This approach is a combination of SimpleITK and scipy/numpy.\nIt should be noted that all of the operations, filtering and computations, are natively in 3D. This is the \"magic\" of ITK and SimpleITK at work.", "import SimpleITK as sitk\n\nfrom downloaddata import fetch_data as fdata\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport numpy as np\nfrom scipy import linalg\n\nfrom ipywidgets import interact, fixed", "Load the volume and look at the image (visualization requires window-leveling).", "spherical_fiducials_image = sitk.ReadImage(fdata(\"spherical_fiducials.mha\"))\nsitk.Show(spherical_fiducials_image, \"spheres\")", "After looking at the image you should have identified two spheres. Now select a Region Of Interest (ROI) around the sphere which you want to analyze.", "rois = {\"ROI1\":[range(280,320), range(65,90), range(8, 30)], \n \"ROI2\":[range(200,240), range(65,100), range(15, 40)]}\nmask_value = 255\n\ndef select_roi_dropdown_callback(roi_name, roi_dict):\n global mask, mask_ranges \n\n mask_ranges = roi_dict.get(roi_name)\n if mask_ranges:\n mask = sitk.Image(spherical_fiducials_image.GetSize(), sitk.sitkUInt8)\n mask.CopyInformation(spherical_fiducials_image)\n for x in mask_ranges[0]:\n for y in mask_ranges[1]: \n for z in mask_ranges[2]:\n mask[x,y,z] = mask_value\n # Use nice magic numbers for windowing the image. 
We need to do this as we are alpha blending the mask\n # with the original image.\n sitk.Show(sitk.LabelOverlay(sitk.Cast(sitk.IntensityWindowing(spherical_fiducials_image, windowMinimum=-32767, \n windowMaximum=-29611), sitk.sitkUInt8), \n mask, opacity=0.5))\n\n# list() is needed in Python 3, where keys() returns a view without insert()\nroi_list = list(rois.keys())\nroi_list.insert(0,'Select ROI')\ninteract(select_roi_dropdown_callback, roi_name=roi_list, roi_dict=fixed(rois)); ", "Thresholding based approach\nTo see whether this approach is appropriate we look at the histogram of intensity values inside the ROI. We know that the spheres have higher intensity values. Ideally we would have a bimodal distribution with clear separation between the sphere and background.", "intensity_values = sitk.GetArrayFromImage(spherical_fiducials_image)\nroi_intensity_values = intensity_values[mask_ranges[2][0]:mask_ranges[2][-1],\n mask_ranges[1][0]:mask_ranges[1][-1],\n mask_ranges[0][0]:mask_ranges[0][-1]].flatten()\nplt.hist(roi_intensity_values, bins=100)\nplt.title(\"Intensity Values in ROI\")\nplt.show() ", "Can you identify the region of the histogram associated with the sphere?\nIn our case it looks like we can automatically select a threshold separating the sphere from the background. We will use Otsu's method for threshold selection to segment the sphere and estimate its radius.", "# Set pixels that are in [min_intensity,otsu_threshold] to inside_value, values above otsu_threshold are\n# set to outside_value. 
The spheres have higher intensity values than the background, so they are outside.\n\ninside_value = 0\noutside_value = 255\nnumber_of_histogram_bins = 100\nmask_output = True\n\nlabeled_result = sitk.OtsuThreshold(spherical_fiducials_image, mask, inside_value, outside_value, \n number_of_histogram_bins, mask_output, mask_value)\n\n# Estimate the sphere radius from the segmented image using the LabelShapeStatisticsImageFilter.\nlabel_shape_analysis = sitk.LabelShapeStatisticsImageFilter()\nlabel_shape_analysis.SetBackgroundValue(inside_value)\nlabel_shape_analysis.Execute(labeled_result)\nprint(\"The sphere's radius is: {0:.2f}mm\".format(label_shape_analysis.GetEquivalentSphericalRadius(outside_value)))\n\n# Visually inspect the results of segmentation, just to make sure.\nsitk.Show(sitk.LabelOverlay(sitk.Cast(sitk.IntensityWindowing(spherical_fiducials_image, windowMinimum=-32767, windowMaximum=-29611),\n sitk.sitkUInt8), labeled_result, opacity=0.5))", "Based on your visual inspection, did the automatic threshold correctly segment the sphere or did it over/under segment it?\nIf automatic thresholding did not provide the desired result, you can correct it by allowing the user to modify the threshold under visual inspection. Implement this approach below.", "# Your code here:", "Edge detection based approach\nIn this approach we will localize the sphere's edges in 3D using SimpleITK. We then compute the least squares sphere that optimally fits the 3D points using scipy/numpy. 
The mathematical formulation for this solution is described in this Insight Journal paper.", "# Create a cropped version of the original image.\nsub_image = spherical_fiducials_image[mask_ranges[0][0]:mask_ranges[0][-1],\n mask_ranges[1][0]:mask_ranges[1][-1],\n mask_ranges[2][0]:mask_ranges[2][-1]]\n\n# Edge detection on the sub_image with appropriate thresholds and smoothing.\nedges = sitk.CannyEdgeDetection(sitk.Cast(sub_image, sitk.sitkFloat32), lowerThreshold=0.0, \n upperThreshold=200.0, variance = (5.0,5.0,5.0))", "Get the 3D location of the edge points and fit a sphere to them.", "edge_indexes = np.where(sitk.GetArrayFromImage(edges) == 1.0)\n\n# Note the reversed order of access between SimpleITK and numpy (z,y,x)\nphysical_points = [edges.TransformIndexToPhysicalPoint([int(x), int(y), int(z)]) \\\n for z,y,x in zip(edge_indexes[0], edge_indexes[1], edge_indexes[2])]\n\n# Setup and solve linear equation system.\nA = np.ones((len(physical_points),4))\nb = np.zeros(len(physical_points))\n\nfor row, point in enumerate(physical_points):\n A[row,0:3] = -2*np.array(point)\n b[row] = -linalg.norm(point)**2\n\nres,_,_,_ = linalg.lstsq(A,b)\n\nprint(\"The sphere's radius is: {0:.2f}mm\".format(np.sqrt(linalg.norm(res[0:3])**2 - res[3])))\n\n# Visually inspect the results of edge detection, just to make sure. 
Note that because SimpleITK is working in the\n# physical world (not pixels, but mm) we can easily transfer the edges localized in the cropped image to the original.\n\nedge_label = sitk.Image(spherical_fiducials_image.GetSize(), sitk.sitkUInt16)\nedge_label.CopyInformation(spherical_fiducials_image)\ne_label = 255\nfor point in physical_points:\n edge_label[edge_label.TransformPhysicalPointToIndex(point)] = e_label\n\nsitk.Show(sitk.LabelOverlay(sitk.Cast(sitk.IntensityWindowing(spherical_fiducials_image, windowMinimum=-32767, windowMaximum=-29611),\n sitk.sitkUInt8), edge_label, opacity=0.5))", "You've made it to the end of the notebook, you deserve to know the correct answer\nThe sphere's radius is 3mm." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
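The least-squares sphere fit used in the notebook above can be sanity-checked on synthetic data. The sketch below uses only numpy (no SimpleITK, and no scipy — `numpy.linalg.lstsq` plays the role of `scipy.linalg.lstsq`) and sets up the same linear system; the centre and radius are hypothetical values chosen for the test, which the fit then recovers:

```python
import numpy as np
from numpy import linalg

rng = np.random.RandomState(0)
center = np.array([10.0, -4.0, 2.5])   # hypothetical ground-truth sphere
radius = 3.0

# Random unit directions give noiseless points on the sphere's surface.
dirs = rng.normal(size=(500, 3))
dirs /= linalg.norm(dirs, axis=1, keepdims=True)
points = center + radius * dirs

# Same system as the notebook: ||p - c||^2 = r^2 rearranges, per point p, to
#   -2 p . c + (||c||^2 - r^2) = -||p||^2,
# which is linear in the unknowns x = (c, ||c||^2 - r^2).
A = np.ones((len(points), 4))
A[:, 0:3] = -2 * points
b = -np.sum(points**2, axis=1)

res, _, _, _ = linalg.lstsq(A, b, rcond=None)
fit_center = res[0:3]
fit_radius = np.sqrt(linalg.norm(res[0:3])**2 - res[3])

assert np.allclose(fit_center, center, atol=1e-6)
assert np.isclose(fit_radius, radius, atol=1e-6)
```

With exact points the solve is essentially exact; on real Canny edges the same system returns the least-squares compromise over the noisy edge locations.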
phoebe-project/phoebe2-docs
2.3/examples/animation_binary_complete.ipynb
gpl-3.0
[ "Complete Binary Animation\nNOTE: animating within Jupyter notebooks can be very resource intensive. This script will likely run much quicker as a Python script.\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).", "#!pip install -I \"phoebe>=2.3,<2.4\"", "As always, let's do imports and initialize a logger and a new bundle.", "import phoebe\nfrom phoebe import u # units\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()", "Adding Datasets", "times = np.linspace(0,1,21)\n\nb.add_dataset('lc', times=times, dataset='lc01')\n\nb.add_dataset('rv', times=times, dataset='rv01')\n\nb.add_dataset('mesh', times=times, columns=['visibilities', 'intensities@lc01', 'rvs@rv01'], dataset='mesh01')", "Running Compute", "b.run_compute(irrad_method='none')", "Plotting\nSee the Animations Tutorial for more examples and details.\nHere we'll create a figure with multiple subplots. The top row will be the light curve and RV curve. The bottom three subplots will be various representations of the mesh (intensities, rvs, and visibilities).\nWe'll do this by making separate calls to plot, passing the matplotlib subplot location for each of the axes we want to create. We can then call b.show(animate=True) or b.save('anim.gif', animate=True).", "b['lc01@model'].plot(axpos=221)\nb['rv01@model'].plot(c={'primary': 'blue', 'secondary': 'red'}, linestyle='solid', axpos=222)\nb['mesh@model'].plot(fc='intensities@lc01', ec='None', axpos=425)\nb['mesh@model'].plot(fc='rvs@rv01', ec='None', axpos=427)\nb['mesh@model'].plot(fc='visibilities', ec='None', y='ws', axpos=224)\n\nfig = plt.figure(figsize=(11,4))\nafig, mplanim = b.savefig('animation_binary_complete.gif', fig=fig, tight_layout=True, draw_sidebars=False, animate=True, save_kwargs={'writer': 'imagemagick'})", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
infilect/ml-course1
week2/vgg_transfer_imagenet_to_flower/transfer_learning_python.ipynb
mit
[ "Transfer Learning\nMost of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.\n<img src=\"assets/cnnarchitecture.jpg\" width=700px>\nVGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.\nYou can read more about transfer learning from the CS231n course notes.\nPretrained VGGNet\nWe'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.\ngit clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg\nThis is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. 
Then download the parameter file using the next cell.", "from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\nvgg_dir = 'tensorflow_vgg/'\n# Make sure vgg exists\nif not isdir(vgg_dir):\n raise Exception(\"VGG directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(vgg_dir + \"vgg16.npy\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',\n vgg_dir + 'vgg16.npy',\n pbar.hook)\nelse:\n print(\"Parameter file already exists!\")", "Flower power\nHere we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.", "import tarfile\n\ndataset_folder_path = 'flower_photos'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('flower_photos.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:\n urlretrieve(\n 'http://download.tensorflow.org/example_images/flower_photos.tgz',\n 'flower_photos.tar.gz',\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with tarfile.open('flower_photos.tar.gz') as tar:\n tar.extractall()\n tar.close()", "ConvNet Codes\nBelow, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. 
We can then write these to a file for later when we build our own classifier.\nHere we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \\times 224 \\times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):\n```\nself.conv1_1 = self.conv_layer(bgr, \"conv1_1\")\nself.conv1_2 = self.conv_layer(self.conv1_1, \"conv1_2\")\nself.pool1 = self.max_pool(self.conv1_2, 'pool1')\nself.conv2_1 = self.conv_layer(self.pool1, \"conv2_1\")\nself.conv2_2 = self.conv_layer(self.conv2_1, \"conv2_2\")\nself.pool2 = self.max_pool(self.conv2_2, 'pool2')\nself.conv3_1 = self.conv_layer(self.pool2, \"conv3_1\")\nself.conv3_2 = self.conv_layer(self.conv3_1, \"conv3_2\")\nself.conv3_3 = self.conv_layer(self.conv3_2, \"conv3_3\")\nself.pool3 = self.max_pool(self.conv3_3, 'pool3')\nself.conv4_1 = self.conv_layer(self.pool3, \"conv4_1\")\nself.conv4_2 = self.conv_layer(self.conv4_1, \"conv4_2\")\nself.conv4_3 = self.conv_layer(self.conv4_2, \"conv4_3\")\nself.pool4 = self.max_pool(self.conv4_3, 'pool4')\nself.conv5_1 = self.conv_layer(self.pool4, \"conv5_1\")\nself.conv5_2 = self.conv_layer(self.conv5_1, \"conv5_2\")\nself.conv5_3 = self.conv_layer(self.conv5_2, \"conv5_3\")\nself.pool5 = self.max_pool(self.conv5_3, 'pool5')\nself.fc6 = self.fc_layer(self.pool5, \"fc6\")\nself.relu6 = tf.nn.relu(self.fc6)\n```\nSo what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\nThis creates the vgg object, then builds the graph with vgg.build(input_). 
Then to get the values from the layer,\nfeed_dict = {input_: images}\ncodes = sess.run(vgg.relu6, feed_dict=feed_dict)", "import os\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_vgg import vgg16\nfrom tensorflow_vgg import utils\n\ndata_dir = 'flower_photos/'\ncontents = os.listdir(data_dir)\nclasses = [each for each in contents if os.path.isdir(data_dir + each)]", "Below I'm running images through the VGG network in batches.\n\nExercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).", "# Set the batch size higher if you can fit in in your GPU memory\nbatch_size = 10\ncodes_list = []\nlabels = []\nbatch = []\n\ncodes = None\n\nwith tf.Session() as sess:\n \n # TODO: Build the vgg network here\n\n for each in classes:\n print(\"Starting {} images\".format(each))\n class_path = data_dir + each\n files = os.listdir(class_path)\n for ii, file in enumerate(files, 1):\n # Add images to the current batch\n # utils.load_image crops the input images for us, from the center\n img = utils.load_image(os.path.join(class_path, file))\n batch.append(img.reshape((1, 224, 224, 3)))\n labels.append(each)\n \n # Running the batch through the network to get the codes\n if ii % batch_size == 0 or ii == len(files):\n \n # Image batch to pass to VGG network\n images = np.concatenate(batch)\n \n # TODO: Get the values from the relu6 layer of the VGG network\n codes_batch = \n \n # Here I'm building an array of the codes\n if codes is None:\n codes = codes_batch\n else:\n codes = np.concatenate((codes, codes_batch))\n \n # Reset to start building the next batch\n batch = []\n print('{} images processed'.format(ii))\n\n# write codes to file\nwith open('codes', 'w') as f:\n codes.tofile(f)\n \n# write labels to file\nimport csv\nwith open('labels', 'w') as f:\n writer = csv.writer(f, delimiter='\\n')\n writer.writerow(labels)", "Building the Classifier\nNow that we have codes for all the images, 
we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.", "# read codes and labels from file\nimport csv\n\nwith open('labels') as f:\n reader = csv.reader(f, delimiter='\\n')\n labels = np.array([each for each in reader if len(each) > 0]).squeeze()\nwith open('codes') as f:\n codes = np.fromfile(f, dtype=np.float32)\n codes = codes.reshape((len(labels), -1))", "Data prep\nAs usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!\n\nExercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.", "labels_vecs = # Your one-hot encoded labels array here", "Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.\nYou can create the splitter like so:\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nThen split the data with \nsplitter = ss.split(x, y)\nss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. 
Be sure to read the documentation and the user guide.\n\nExercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.", "train_x, train_y = \nval_x, val_y = \ntest_x, test_y = \n\nprint(\"Train shapes (x, y):\", train_x.shape, train_y.shape)\nprint(\"Validation shapes (x, y):\", val_x.shape, val_y.shape)\nprint(\"Test shapes (x, y):\", test_x.shape, test_y.shape)", "If you did it right, you should see these sizes for the training sets:\nTrain shapes (x, y): (2936, 4096) (2936, 5)\nValidation shapes (x, y): (367, 4096) (367, 5)\nTest shapes (x, y): (367, 4096) (367, 5)\nClassifier layers\nOnce you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.\n\nExercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them is a 4096D vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.", "inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])\nlabels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])\n\n# TODO: Classifier layers and operations\n\nlogits = # output layer logits\ncost = # cross entropy loss\n\noptimizer = # training optimizer\n\n# Operations for validation/test accuracy\npredicted = tf.nn.softmax(logits)\ncorrect_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Batches!\nHere is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. 
Here I just extend the last batch to include the remaining data.", "def get_batches(x, y, n_batches=10):\n \"\"\" Return a generator that yields batches from arrays x and y. \"\"\"\n batch_size = len(x)//n_batches\n \n for ii in range(0, n_batches*batch_size, batch_size):\n # If we're not on the last batch, grab data with size batch_size\n if ii != (n_batches-1)*batch_size:\n X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] \n # On the last batch, grab the rest of the data\n else:\n X, Y = x[ii:], y[ii:]\n # I love generators\n yield X, Y", "Training\nHere, we'll train the network.\n\nExercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!", "saver = tf.train.Saver()\nwith tf.Session() as sess:\n \n # TODO: Your training code here\n saver.save(sess, \"checkpoints/flowers.ckpt\")", "Testing\nBelow you see the test accuracy. You can also see the predictions returned for images.", "with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: test_x,\n labels_: test_y}\n test_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Test accuracy: {:.4f}\".format(test_acc))\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom scipy.ndimage import imread", "Below, feel free to choose images and see how the trained classifier predicts the flowers in them.", "test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'\ntest_img = imread(test_img_path)\nplt.imshow(test_img)\n\n# Run this cell if you don't have a vgg graph built\nif 'vgg' in globals():\n print('\"vgg\" object already exists. 
Will not create again.')\nelse:\n #create vgg\n with tf.Session() as sess:\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n vgg = vgg16.Vgg16()\n vgg.build(input_)\n\nwith tf.Session() as sess:\n img = utils.load_image(test_img_path)\n img = img.reshape((1, 224, 224, 3))\n\n feed_dict = {input_: img}\n code = sess.run(vgg.relu6, feed_dict=feed_dict)\n \nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: code}\n prediction = sess.run(predicted, feed_dict=feed).squeeze()\n\nplt.imshow(test_img)\n\nplt.barh(np.arange(5), prediction)\n_ = plt.yticks(np.arange(5), lb.classes_)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
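The batching helper described in the notebook above ("extend the last batch to include the remaining data") is easy to sanity-check in isolation. This sketch copies that generator's logic and runs it on plain Python lists — no TensorFlow or numpy involved — to confirm no samples are dropped:

```python
# Same logic as the notebook's get_batches: n_batches fixed-size batches,
# with the final batch absorbing any remainder.

def get_batches(x, y, n_batches=10):
    """Yield (X, Y) batches from x and y; the last batch takes the leftovers."""
    batch_size = len(x) // n_batches
    for ii in range(0, n_batches * batch_size, batch_size):
        if ii != (n_batches - 1) * batch_size:
            X, Y = x[ii: ii + batch_size], y[ii: ii + batch_size]
        else:
            X, Y = x[ii:], y[ii:]   # last batch: grab everything that's left
        yield X, Y

x = list(range(23))
y = list(range(23))
sizes = [len(X) for X, _ in get_batches(x, y, n_batches=4)]
assert sizes == [5, 5, 5, 8]    # 23 // 4 = 5; the last batch gets the extra 3
assert sum(sizes) == len(x)     # no data is thrown away
```

The trade-off is that the last batch can be up to almost twice the nominal size; dropping the remainder instead would keep batches uniform at the cost of some data.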
probml/pyprobml
deprecated/arhmm_example.ipynb
mit
[ "<a href=\"https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/arhmm_example.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nAutoregressive (AR) HMM Demo\nModified from\nhttps://github.com/lindermanlab/ssm-jax-refactor/blob/main/notebooks/arhmm-example.ipynb\nThis notebook illustrates the use of the auto_regression observation model.\nLet $x_t$ denote the observation at time $t$. Let $z_t$ denote the corresponding discrete latent state.\nThe autoregressive hidden Markov model has the following likelihood,\n$$\n\\begin{align}\nx_t \\mid x_{t-1}, z_t &\\sim\n\\mathcal{N}\\left(A_{z_t} x_{t-1} + b_{z_t}, Q_{z_t} \\right).\n\\end{align}\n$$\n(Technically, higher-order autoregressive processes with extra linear terms from inputs are also implemented.)", "!pip install git+git://github.com/lindermanlab/ssm-jax-refactor.git\n\nimport ssm\n\nimport copy\n\nimport jax.numpy as np\nimport jax.random as jr\n\nfrom tensorflow_probability.substrates import jax as tfp\n\nfrom ssm.distributions.linreg import GaussianLinearRegression\nfrom ssm.arhmm import GaussianARHMM\nfrom ssm.utils import find_permutation, random_rotation\nfrom ssm.plots import gradient_cmap # , white_to_color_cmap\n\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nimport seaborn as sns\n\nsns.set_style(\"white\")\nsns.set_context(\"talk\")\n\ncolor_names = [\"windows blue\", \"red\", \"amber\", \"faded green\", \"dusty purple\", \"orange\", \"brown\", \"pink\"]\n\n\ncolors = sns.xkcd_palette(color_names)\ncmap = gradient_cmap(colors)\n\n# Make a transition matrix\nnum_states = 5\ntransition_probs = (np.arange(num_states) ** 10).astype(float)\ntransition_probs /= transition_probs.sum()\ntransition_matrix = np.zeros((num_states, num_states))\nfor k, p in enumerate(transition_probs[::-1]):\n transition_matrix += np.roll(p * np.eye(num_states), k, axis=1)\n\nplt.imshow(transition_matrix, 
vmin=0, vmax=1, cmap=\"Greys\")\nplt.xlabel(\"next state\")\nplt.ylabel(\"current state\")\nplt.title(\"transition matrix\")\nplt.colorbar()\n\nplt.savefig(\"arhmm-transmat.pdf\")\n\n# Make observation distributions\ndata_dim = 2\nnum_lags = 1\n\nkeys = jr.split(jr.PRNGKey(0), num_states)\nangles = np.linspace(0, 2 * np.pi, num_states, endpoint=False)\ntheta = np.pi / 25 # rotational frequency\nweights = np.array([0.8 * random_rotation(key, data_dim, theta=theta) for key in keys])\nbiases = np.column_stack([np.cos(angles), np.sin(angles), np.zeros((num_states, data_dim - 2))])\ncovariances = np.tile(0.001 * np.eye(data_dim), (num_states, 1, 1))\n\n# Compute the stationary points\nstationary_points = np.linalg.solve(np.eye(data_dim) - weights, biases)\n\nprint(theta / (2 * np.pi) * 360)\nprint(360 / 5)", "Plot dynamics functions", "if data_dim == 2:\n lim = 5\n x = np.linspace(-lim, lim, 10)\n y = np.linspace(-lim, lim, 10)\n X, Y = np.meshgrid(x, y)\n xy = np.column_stack((X.ravel(), Y.ravel()))\n\n fig, axs = plt.subplots(1, num_states, figsize=(3 * num_states, 6))\n for k in range(num_states):\n A, b = weights[k], biases[k]\n dxydt_m = xy.dot(A.T) + b - xy\n axs[k].quiver(xy[:, 0], xy[:, 1], dxydt_m[:, 0], dxydt_m[:, 1], color=colors[k % len(colors)])\n\n axs[k].set_xlabel(\"$y_1$\")\n # axs[k].set_xticks([])\n if k == 0:\n axs[k].set_ylabel(\"$y_2$\")\n # axs[k].set_yticks([])\n axs[k].set_aspect(\"equal\")\n\n plt.tight_layout()\n\n plt.savefig(\"arhmm-flow-matrices.pdf\")\n\ncolors\n\nprint(stationary_points)", "Sample data from the ARHMM", "# Make an Autoregressive (AR) HMM\ntrue_initial_distribution = tfp.distributions.Categorical(logits=np.zeros(num_states))\ntrue_transition_distribution = tfp.distributions.Categorical(probs=transition_matrix)\n\ntrue_arhmm = GaussianARHMM(\n num_states,\n transition_matrix=transition_matrix,\n emission_weights=weights,\n emission_biases=biases,\n emission_covariances=covariances,\n)\ntime_bins = 10000\ntrue_states, data = 
true_arhmm.sample(jr.PRNGKey(0), time_bins)\n\nfig = plt.figure(figsize=(8, 8))\nfor k in range(num_states):\n plt.plot(*data[true_states == k].T, \"o\", color=colors[k], alpha=0.75, markersize=3)\n\nplt.plot(*data[:1000].T, \"-k\", lw=0.5, alpha=0.2)\nplt.xlabel(\"$y_1$\")\nplt.ylabel(\"$y_2$\")\n# plt.gca().set_aspect(\"equal\")\n\nplt.savefig(\"arhmm-samples-2d.pdf\")\n\nfig = plt.figure(figsize=(8, 8))\nfor k in range(num_states):\n ndx = true_states == k\n data_k = data[ndx]\n T = 12\n data_k = data_k[:T, :]\n plt.plot(data_k[:, 0], data_k[:, 1], \"o\", color=colors[k], alpha=0.75, markersize=3)\n for t in range(T):\n plt.text(data_k[t, 0], data_k[t, 1], t, color=colors[k], fontsize=12)\n\n# plt.plot(*data[:1000].T, '-k', lw=0.5, alpha=0.2)\nplt.xlabel(\"$y_1$\")\nplt.ylabel(\"$y_2$\")\n# plt.gca().set_aspect(\"equal\")\n\nplt.savefig(\"arhmm-samples-2d-temporal.pdf\")\n\nprint(biases)\n\nprint(stationary_points)\n\ncolors", "Below, we visualize each component of the observation variable as a time series. The colors correspond to the latent state. 
The dotted lines represent the stationary point of the corresponding AR state while the solid lines are the actual observations sampled from the HMM.", "lim\n\n# Plot the data and the smoothed data\nplot_slice = (0, 200)\nlim = 1.05 * abs(data).max()\nplt.figure(figsize=(8, 6))\nplt.imshow(\n true_states[None, :],\n aspect=\"auto\",\n cmap=cmap,\n vmin=0,\n vmax=len(colors) - 1,\n extent=(0, time_bins, -lim, (data_dim) * lim),\n)\n\n\nEy = np.array(stationary_points)[true_states]\nfor d in range(data_dim):\n plt.plot(data[:, d] + lim * d, \"-k\")\n plt.plot(Ey[:, d] + lim * d, \":k\")\n\nplt.xlim(plot_slice)\nplt.xlabel(\"time\")\n# plt.yticks(lim * np.arange(data_dim), [\"$y_{{{}}}$\".format(d+1) for d in range(data_dim)])\nplt.ylabel(\"observations\")\n\nplt.tight_layout()\n\nplt.savefig(\"arhmm-samples-1d.pdf\")\n\ndata.shape\n\ndata[:10, :]", "Fit an ARHMM", "# Now fit an HMM to the data\nkey1, key2 = jr.split(jr.PRNGKey(0), 2)\ntest_num_states = num_states\ninitial_distribution = tfp.distributions.Categorical(logits=np.zeros(test_num_states))\ntransition_distribution = tfp.distributions.Categorical(logits=np.zeros((test_num_states, test_num_states)))\nemission_distribution = GaussianLinearRegression(\n weights=np.tile(0.99 * np.eye(data_dim), (test_num_states, 1, 1)),\n bias=0.01 * jr.normal(key2, (test_num_states, data_dim)),\n scale_tril=np.tile(np.eye(data_dim), (test_num_states, 1, 1)),\n)\n\narhmm = GaussianARHMM(test_num_states, data_dim, num_lags, seed=jr.PRNGKey(0))\n\nlps, arhmm, posterior = arhmm.fit(data, method=\"em\")\n\n# Plot the log likelihoods against the true likelihood, for comparison\ntrue_lp = true_arhmm.marginal_likelihood(data)\nplt.plot(lps, label=\"EM\")\nplt.plot(true_lp * np.ones(len(lps)), \":k\", label=\"True\")\nplt.xlabel(\"EM Iteration\")\nplt.ylabel(\"Log Probability\")\nplt.legend(loc=\"lower right\")\nplt.show()\n\n# # Find a permutation of the states that best matches the true and inferred states\n# most_likely_states = 
posterior.most_likely_states()\n# arhmm.permute(find_permutation(true_states[num_lags:], most_likely_states))\n# posterior.update()\n# most_likely_states = posterior.most_likely_states()\n\nif data_dim == 2:\n lim = abs(data).max()\n x = np.linspace(-lim, lim, 10)\n y = np.linspace(-lim, lim, 10)\n X, Y = np.meshgrid(x, y)\n xy = np.column_stack((X.ravel(), Y.ravel()))\n\n fig, axs = plt.subplots(2, max(num_states, test_num_states), figsize=(3 * num_states, 6))\n for i, model in enumerate([true_arhmm, arhmm]):\n for j in range(model.num_states):\n dist = model._emissions._distribution[j]\n A, b = dist.weights, dist.bias\n dxydt_m = xy.dot(A.T) + b - xy\n axs[i, j].quiver(xy[:, 0], xy[:, 1], dxydt_m[:, 0], dxydt_m[:, 1], color=colors[j % len(colors)])\n\n axs[i, j].set_xlabel(\"$x_1$\")\n axs[i, j].set_xticks([])\n if j == 0:\n axs[i, j].set_ylabel(\"$x_2$\")\n axs[i, j].set_yticks([])\n axs[i, j].set_aspect(\"equal\")\n\n plt.tight_layout()\n\n plt.savefig(\"argmm-flow-matrices-true-and-estimated.pdf\")\n\nif data_dim == 2:\n lim = abs(data).max()\n x = np.linspace(-lim, lim, 10)\n y = np.linspace(-lim, lim, 10)\n X, Y = np.meshgrid(x, y)\n xy = np.column_stack((X.ravel(), Y.ravel()))\n\n fig, axs = plt.subplots(1, max(num_states, test_num_states), figsize=(3 * num_states, 6))\n for i, model in enumerate([arhmm]):\n for j in range(model.num_states):\n dist = model._emissions._distribution[j]\n A, b = dist.weights, dist.bias\n dxydt_m = xy.dot(A.T) + b - xy\n axs[j].quiver(xy[:, 0], xy[:, 1], dxydt_m[:, 0], dxydt_m[:, 1], color=colors[j % len(colors)])\n\n axs[j].set_xlabel(\"$y_1$\")\n axs[j].set_xticks([])\n if j == 0:\n axs[j].set_ylabel(\"$y_2$\")\n axs[j].set_yticks([])\n axs[j].set_aspect(\"equal\")\n\n plt.tight_layout()\n\n plt.savefig(\"arhmm-flow-matrices-estimated.pdf\")\n\n# Plot the true and inferred discrete states\nplot_slice = (0, 1000)\nplt.figure(figsize=(8, 4))\nplt.subplot(211)\nplt.imshow(true_states[None, num_lags:], aspect=\"auto\", 
interpolation=\"none\", cmap=cmap, vmin=0, vmax=len(colors) - 1)\nplt.xlim(plot_slice)\nplt.ylabel(\"$z_{\\\\mathrm{true}}$\")\nplt.yticks([])\n\nplt.subplot(212)\n# plt.imshow(most_likely_states[None,: :], aspect=\"auto\", cmap=cmap, vmin=0, vmax=len(colors)-1)\nplt.imshow(posterior.expected_states[0].T, aspect=\"auto\", interpolation=\"none\", cmap=\"Greys\", vmin=0, vmax=1)\nplt.xlim(plot_slice)\nplt.ylabel(\"$z_{\\\\mathrm{inferred}}$\")\nplt.yticks([])\nplt.xlabel(\"time\")\n\nplt.tight_layout()\n\nplt.savefig(\"arhmm-state-est.pdf\")\n\n# Sample the fitted model\nsampled_states, sampled_data = arhmm.sample(jr.PRNGKey(0), time_bins)\n\nfig = plt.figure(figsize=(8, 8))\nfor k in range(num_states):\n plt.plot(*sampled_data[sampled_states == k].T, \"o\", color=colors[k], alpha=0.75, markersize=3)\n\n# plt.plot(*sampled_data.T, '-k', lw=0.5, alpha=0.2)\nplt.plot(*sampled_data[:1000].T, \"-k\", lw=0.5, alpha=0.2)\nplt.xlabel(\"$x_1$\")\nplt.ylabel(\"$x_2$\")\n# plt.gca().set_aspect(\"equal\")\n\nplt.savefig(\"arhmm-samples-2d-estimated.pdf\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hvillanua/deep-learning
language-translation/dlnd_language_translation.ipynb
mit
[ "Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. 
This will help the neural network predict when the sentence should end.\nYou can get the &lt;EOS&gt; word id by doing:\npython\ntarget_vocab_to_int['&lt;EOS&gt;']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.", "def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n # TODO: Implement Function\n source_split, target_split = source_text.split('\\n'), target_text.split('\\n')\n source_to_int, target_to_int = [], []\n for source, target in zip(source_split, target_split):\n source_to_int.append([source_vocab_to_int[word] for word in source.split()])\n targets = [target_vocab_to_int[word] for word in target.split()]\n targets.append((target_vocab_to_int['<EOS>']))\n target_to_int.append(targets)\n \n #print(source_to_int, target_to_int)\n return source_to_int, target_to_int\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. 
The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()", "Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoder_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\nTarget sequence length placeholder named \"target_sequence_length\" with rank 1\nMax target sequence length tensor named \"max_target_len\" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.\nSource sequence length placeholder named \"source_sequence_length\" with rank 1\n\nReturn the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)", "def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.\n :return: Tuple (input, targets, learning rate, keep probability, target sequence length,\n max target sequence length, source sequence length)\n \"\"\"\n # TODO: Implement Function\n #max_tar_seq_len = np.max([len(sentence) for sentence in target_int_text])\n #max_sour_seq_len = np.max([len(sentence) for sentence in source_int_text])\n #max_source_len = np.max([max_tar_seq_len, max_sour_seq_len])\n inputs = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None])\n learning_rate = tf.placeholder(tf.float32)\n keep_probability = tf.placeholder(tf.float32, name='keep_prob')\n target_seq_len = tf.placeholder(tf.int32, [None], name='target_sequence_length')\n max_target_seq_len = tf.reduce_max(target_seq_len, name='max_target_len')\n source_seq_len = tf.placeholder(tf.int32, [None], name='source_sequence_length')\n return inputs, targets, learning_rate, keep_probability, target_seq_len, max_target_seq_len, source_seq_len\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)", "Process Decoder Input\nImplement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.", "def process_decoder_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for encoding\n :param target_data: Target Placeholder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n 
\"\"\"\n # TODO: Implement Function\n ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)\n return dec_input\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_encoding_input(process_decoder_input)", "Encoding\nImplement encoding_layer() to create a Encoder RNN layer:\n * Embed the encoder input using tf.contrib.layers.embed_sequence\n * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper\n * Pass cell and embedded input to tf.nn.dynamic_rnn()", "from imp import reload\nreload(tests)\n\ndef encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, \n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :param source_sequence_length: a list of the lengths of each sequence in the batch\n :param source_vocab_size: vocabulary size of source data\n :param encoding_embedding_size: embedding size of source data\n :return: tuple (RNN output, RNN state)\n \"\"\"\n # TODO: Implement Function\n embed_seq = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)\n def lstm_cell():\n return tf.contrib.rnn.LSTMCell(rnn_size)\n rnn = tf.contrib.rnn.MultiRNNCell([lstm_cell() for i in range(num_layers)])\n rnn = tf.contrib.rnn.DropoutWrapper(rnn, output_keep_prob=keep_prob)\n output, state = tf.nn.dynamic_rnn(rnn, embed_seq, dtype=tf.float32)\n return output, state\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)", "Decoding - Training\nCreate a training decoding layer:\n* Create a tf.contrib.seq2seq.TrainingHelper \n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder 
outputs from tf.contrib.seq2seq.dynamic_decode", "\ndef decoding_layer_train(encoder_state, dec_cell, dec_embed_input, \n target_sequence_length, max_summary_length, \n output_layer, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_summary_length: The length of the longest sequence in the batch\n :param output_layer: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing training logits and sample_id\n \"\"\"\n # TODO: Implement Function\n training_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)\n train_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer)\n output, _ = tf.contrib.seq2seq.dynamic_decode(train_decoder, impute_finished=False, maximum_iterations=max_summary_length)\n \n return output\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)", "Decoding - Inference\nCreate inference decoder:\n* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper\n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode", "def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,\n end_of_sequence_id, max_target_sequence_length,\n vocab_size, output_layer, batch_size, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS Id\n :param max_target_sequence_length: Maximum length of target sequences\n :param vocab_size: Size of decoder/target vocabulary\n 
:param output_layer: Function to apply the output layer\n :param batch_size: Batch size\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing inference logits and sample_id\n \"\"\"\n # TODO: Implement Function\n start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')\n inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)\n inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer)\n output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,\n impute_finished=True, maximum_iterations=max_target_sequence_length)\n return output\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)", "Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nEmbed the target sequences\nConstruct the decoder LSTM cell (just like you constructed the encoder cell above)\nCreate an output layer to map the outputs of the decoder to the elements of our vocabulary\nUse your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.", "def decoding_layer(dec_input, encoder_state,\n target_sequence_length, max_target_sequence_length,\n rnn_size,\n num_layers, target_vocab_to_int, target_vocab_size,\n batch_size, keep_prob, decoding_embedding_size):\n \"\"\"\n Create decoding layer\n 
:param dec_input: Decoder input\n :param encoder_state: Encoder state\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_target_sequence_length: Maximum length of target sequences\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param target_vocab_size: Size of target vocabulary\n :param batch_size: The size of the batch\n :param keep_prob: Dropout keep probability\n :param decoding_embedding_size: Decoding embedding size\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n # TODO: Implement Function\n #embed_seq = tf.contrib.layers.embed_sequence(dec_input, target_vocab_size, decoding_embedding_size)\n \n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n \n def lstm_cell():\n return tf.contrib.rnn.LSTMCell(rnn_size)\n rnn = tf.contrib.rnn.MultiRNNCell([lstm_cell() for i in range(num_layers)])\n rnn = tf.contrib.rnn.DropoutWrapper(rnn, output_keep_prob=keep_prob)\n output_layer = Dense(target_vocab_size,\n kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))\n \n with tf.variable_scope(\"decode\"):\n training_output = decoding_layer_train(encoder_state, rnn, dec_embed_input, \n target_sequence_length, max_target_sequence_length, output_layer, keep_prob)\n \n with tf.variable_scope(\"decode\", reuse=True):\n inference_output = decoding_layer_infer(encoder_state, rnn, dec_embeddings, target_vocab_to_int['<GO>'],\n target_vocab_to_int['<EOS>'], max_target_sequence_length, target_vocab_size,\n output_layer, batch_size, keep_prob)\n return training_output, inference_output\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)", "Build the Neural Network\nApply the functions you 
implemented above to:\n\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).\nProcess target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.\nDecode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.", "def seq2seq_model(input_data, target_data, keep_prob, batch_size,\n source_sequence_length, target_sequence_length,\n max_target_sentence_length,\n source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size,\n rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param source_sequence_length: Sequence Lengths of source sequences in the batch\n :param target_sequence_length: Sequence Lengths of target sequences in the batch\n :param max_target_sentence_length: Maximum target sequence length\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Encoder embedding size\n :param dec_embedding_size: Decoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n # TODO: Implement Function\n _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, \n source_sequence_length, source_vocab_size, \n enc_embedding_size)\n dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)\n training_output, 
inference_output = decoding_layer(dec_input, enc_state, target_sequence_length, \n max_target_sentence_length, rnn_size, num_layers, \n target_vocab_to_int, target_vocab_size, batch_size, \n keep_prob, dec_embedding_size)\n return training_output, inference_output\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability\nSet display_step to state how many steps between each debug output statement", "# Number of Epochs\nepochs = 10\n# Batch Size\nbatch_size = 128\n# RNN Size\nrnn_size = 254\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 200\ndecoding_embedding_size = 200\n# Learning Rate\nlearning_rate = 0.01\n# Dropout Keep Probability\nkeep_probability = 0.5\ndisplay_step = 10", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_target_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()\n\n #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n\n train_logits, inference_logits = 
seq2seq_model(tf.reverse(input_data, [-1]),\n targets,\n keep_prob,\n batch_size,\n source_sequence_length,\n target_sequence_length,\n max_target_sequence_length,\n len(source_vocab_to_int),\n len(target_vocab_to_int),\n encoding_embedding_size,\n decoding_embedding_size,\n rnn_size,\n num_layers,\n target_vocab_to_int)\n\n\n training_logits = tf.identity(train_logits.rnn_output, name='logits')\n inference_logits = tf.identity(inference_logits.sample_id, name='predictions')\n\n masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')\n\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n training_logits,\n targets,\n masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)\n", "Batch and pad the source and target sequences", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef pad_sentence_batch(sentence_batch, pad_int):\n \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same length\"\"\"\n max_sentence = max([len(sentence) for sentence in sentence_batch])\n return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n\n\ndef get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):\n \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n for batch_i in range(0, len(sources)//batch_size):\n start_i = batch_i * batch_size\n\n # Slice the right amount for the batch\n sources_batch = sources[start_i:start_i + batch_size]\n targets_batch = targets[start_i:start_i + batch_size]\n\n # Pad\n pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n pad_targets_batch = 
np.array(pad_sentence_batch(targets_batch, target_pad_int))\n\n # Need the lengths for the _lengths parameters\n pad_targets_lengths = []\n for target in pad_targets_batch:\n pad_targets_lengths.append(len(target))\n\n pad_source_lengths = []\n for source in pad_sources_batch:\n pad_source_lengths.append(len(source))\n\n yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths\n", "Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1])],\n 'constant')\n\n return np.mean(np.equal(target, logits))\n\n# Split data to training and validation sets\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\nvalid_source = source_int_text[:batch_size]\nvalid_target = target_int_text[:batch_size]\n(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,\n valid_target,\n batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])) \nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(\n get_batches(train_source, train_target, batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])):\n\n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n target_sequence_length: targets_lengths,\n 
source_sequence_length: sources_lengths,\n keep_prob: keep_probability})\n\n\n if batch_i % display_step == 0 and batch_i > 0:\n\n\n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch,\n source_sequence_length: sources_lengths,\n target_sequence_length: targets_lengths,\n keep_prob: 1.0})\n\n\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_sources_batch,\n source_sequence_length: valid_sources_lengths,\n target_sequence_length: valid_targets_lengths,\n keep_prob: 1.0})\n\n train_acc = get_accuracy(target_batch, batch_train_logits)\n\n valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)\n\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')", "Save Parameters\nSave the batch_size and save_path parameters for inference.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()", "Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. 
Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary, to the &lt;UNK&gt; word id.", "def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n # TODO: Implement Function\n sentence = sentence.lower()\n sentence_to_id = [vocab_to_int[word] if word in vocab_to_int.keys() else vocab_to_int['<UNK>'] for word in sentence.split(' ')]\n return sentence_to_id\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)", "Translate\nThis will translate translate_sentence from English to French.", "translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('predictions:0')\n target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,\n target_sequence_length: [len(translate_sentence)*2]*batch_size,\n source_sequence_length: [len(translate_sentence)]*batch_size,\n keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in 
translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in translate_logits]))\nprint(' French Words: {}'.format(\" \".join([target_int_to_vocab[i] for i in translate_logits])))\n", "Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words out of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has a larger vocabulary and is richer in the topics discussed. However, it will take you days to train, so make sure you have a GPU and the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Olsthoorn/IHE-python-course-2017
exercises/Mar07/readingText.ipynb
gpl-2.0
[ "<figure>\n <IMG SRC=\"../../logo/logo.png\" WIDTH=250 ALIGN=\"right\">\n</figure>\n\nIHE Python course, 2017\nReading text files\nT.N.Olsthoorn, Feb 27, 2017\nReading and writing files is one of the essential functions of computers, as this is the connection between volatile data in the computer's memory and permanent data on disk or in the cloud.\nThe three forms of reading text files are shown, and we'll illustrate handling their contents. We'll take the README.md file as an example as it is readily available on this site.\nFinding the file in the directory tree\nThe first step is finding the file in the directory structure. We'll start with an illustration of how this can be done using some functions in the os module, which deal with directories.", "import os\n\nos.listdir() # make a list of the files in the current directory, so that we may handle them.", "Turn the relative reference of the current directory, '.' into an absolute path.\nThen specify the file name that we're looking for.\nAnd in a while-loop look upward in an ever higher directory in the directory tree until we find the one that contains our file.\nWalking upward in the tree is done by cutting off the tail of the path in each step.", "apth = os.path.abspath('.')\n\nfname = 'README.md'\n\nprint(\"Starting in folder <{}>,\\n to look for file <{}> ...\".format(apth, fname))\nprint()\n\nprint(\"I'm searching: ...\")\nwhile not fname in os.listdir(apth):\n apth = apth.rpartition(os.sep)[0]\n print(apth)\n \nif fname in os.listdir(apth):\n print(\"... Yep, got'm!\")\nelse:\n print(\"... 
H'm, missed him\")\n \nprint(\"\\nOk, file <{}> is in folder: \".format(fname), apth)\nprint('\\nHere is the list of files in this folder:')\nos.listdir(apth)", "Reading the file (at once)\nWhen we know where the file is, we can open it for reading.\nWe have to open it, which yields a reader object, by which we can read the file.\nreader = open(path, 'r')\ns = reader.read()\nreader.close()\nThe problem with this is that, when we are exploring the reader, we may easily reach the end of file, after which nothing more is read and s is an empty string. Furthermore, when we experiment, we may easily open the same file many times and forget to close it.\nThe with statement is a solution to that, because it automatically closes the file when we finish its block.\nWith the with statement we may read the entire file into a string like so.", "with open(os.path.join(apth, fname), 'r') as reader:\n s = reader.read()", "It's the read that swallows the entire file at once and dumps its contents in the string s.\nCheck if the reader is, indeed, closed after we finished the with block:", "reader.closed", "Then show the content of the string s:", "print(s)", "Counting words and phrases\nNow that we have the entire file read into a single string, s, we can just as well analyze it a bit, by counting the number of words, letters, and the frequency of each letter.\nJust split the string into words based on whitespace and count their number.", "print(\"There are {} words in file {}\".format(len(s.split(sep=' ')), fname))", "We might estimate the number of sentences by counting the number of periods '.'\nOne way is to use the . 
as a separator:", "nPhrases = len(s.split(sep='.')) # also works without the keyword sep\n\nprint(\"We find {} phrases in file {}\".format(nPhrases, fname))", "We could just as well count the number of dots in s directly, using one of the string methods, in this case s.count()", "print(\"There are {} dots in file {}\".format(s.count('.'), fname))", "Counting the non-whitespace characters\nNow let's see how many non-whitespace characters there are in s.\nA coarse way to remove whitespace would be splitting s and rejoining the obtained list of words without any whitespace like so:", "s1 = \"\".join(s.split()).lower() # also make all letters lowercase\nprint(s1)", "All characters in a list\nIf we convert a string into a list, we get the list of its individual characters.", "list(s1)", "The unique characters in a set\nBy turning the string into a set, we get the set of its unique characters:", "set(s1)", "The number of occurrences of each non-whitespace character in the file\nTo count the frequency of each character we could use those from the set as keys in a dict. We can generate the dict with the frequency of each character in a dict comprehension that combines the unique letter as a key with the method count(key) applied on s1, the string without whitespace:", "from pprint import pprint\n\nccnt = {c : s1.count(c) for c in set(s1)}\npprint(ccnt)", "Let's order the letters after their frequency of occurrence in the file:\nWe can do so in one line, but this needs some explanation.\nFirst we generate a list from the dict in which each item is a list of 2 items, namely [char, number].\nSecond we apply sorted on that list to get a sorted list. But we don't want it to be sorted based on the character, but based on the number. Therefore, we use the key argument. It tells that each item has to be compared on the second value (lambda x: x[1]).\nFinally, this yields the list that we want, but with the largest frequency at the bottom. 
So we turn this list upside down by using the slice [::-1] at the end.\nHere it is:", "sorted([[k, ccnt[k]] for k in ccnt.keys()], key=lambda x: x[1])[::-1]", "Reading the file and returning a list of strings, one per line\nFor this we would read reader.readlines() instead of reader.read:", "with open(os.path.join(apth, fname), 'r') as reader:\n s = reader.readlines()\n\ntype(s)\n\npprint(s)", "From this point onward, you can analyse each line in sequence, pick out lines, etc.\nReading a single line and lines one by one\nOften you don't want to read the entire file into memory (into a single string) at once. It might blow up the computer's memory if the file size were gigabytes, as can easily be the case with the output of some models. And if it wouldn't crash the memory, your PC may still become very slow with large files. So a better and more generally applied way to read in a file is line by line, based on the newline characters that are embedded in it.\nIn that case you can read the file line by line, one at a time, not using reader.read() or reader.readlines() but reader.readline()", "with open(os.path.join(apth, fname), 'r') as reader:\n s = reader.readline()\n\ntype(s)\n\nprint(s)", "Which yields a string, the first line of the file in this case.\nThe problem is now that no more lines can be read from this file, because with the with statement, the file closes automatically as soon as Python reaches the end of its block:", "s = reader.readline()", "Therefore, we should not use the with statement and hand-close the file when we're done, or put anything that we do with the strings that we read inside the with block.\nWe may be tempted to put the reader in a while-loop like so\ns=[]\nwhile True:\n s.append(reader.readline())\nBut don't do that, because the while-loop will never end", "with open(os.path.join(apth, fname), 'r') as reader:\n lines = []\n while True:\n s = reader.readline()\n if s==\"\":\n break\n 
lines.append(s)\n\npprint(lines)\n\n\nreader.readline?", "Of course, there is much, much more, but this is probably the most important base knowledge about file reading. Writing text files is straightforward. You open a file with open(fname, 'w') for writing or open(fname, 'a') for appending, and you can start writing lines to it. Don't forget to close it when done. Still better, use the with statement to make sure that the file is automatically closed when its block is done." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
simulkade/peteng
python/test averaging methods.ipynb
mit
[ "Testing averaging methods\nFrom this post\nThe equation is: $$\\frac{\\partial\\phi}{\\partial t}+\\nabla . (-D(\\phi)\\nabla \\phi) =0$$", "from fipy import Grid2D, CellVariable, FaceVariable\nimport numpy as np\n\n\ndef upwindValues(mesh, field, velocity):\n \"\"\"Calculate the upwind face values for a field variable\n\n Note that the mesh.faceNormals point from `id1` to `id2` so if velocity is in the same\n direction as the `faceNormal`s then we take the value from `id1`s and visa-versa.\n\n Args:\n mesh: a fipy mesh\n field: a fipy cell variable or equivalent numpy array\n velocity: a fipy face variable (rank 1) or equivalent numpy array\n \n Returns:\n numpy array shaped as a fipy face variable\n \"\"\"\n # direction is over faces (rank 0)\n direction = np.sum(np.array(mesh.faceNormals * velocity), axis=0)\n # id1, id2 are shaped as faces but contains cell index values\n id1, id2 = mesh._adjacentCellIDs\n return np.where(direction >= 0, field[id1], field[id2])\n\nfrom fipy import *\nimport numpy as np", "$$\\frac{\\partial\\phi}{\\partial t}+\\nabla . 
\\left(-D\\left(\\phi_{0}\\right)\\nabla \\phi\\right)+\\nabla.\\left(-\\nabla \\phi_{0}\\left(\\frac{\\partial D}{\\partial \\phi}\\right)_{\\phi_{0,face}}\\phi\\right) =\\nabla.\\left(-\\nabla \\phi_{0}\\left(\\frac{\\partial D}{\\partial \\phi}\\right)_{\\phi_{0,face}}\\phi_{0,face}\\right)$$", "L= 1.0 # domain length\nNx= 100\ndx_min=L/Nx\nx=np.array([0.0, dx_min])\nwhile x[-1]<L:\n x=np.append(x, x[-1]+1.05*(x[-1]-x[-2]))\nx[-1]=L\n\ndx = np.diff(x) # cell widths of the stretched grid\nmesh = Grid1D(dx=dx)\n\nphi = CellVariable(mesh=mesh, name=\"phi\", hasOld=True, value = 0.0)\nphi.constrain(5.0, mesh.facesLeft)\nphi.constrain(0., mesh.facesRight)\n\n# D(phi)=D0*(1.0+phi**2)\n# dD(phi)=2.0*D0*phi\nD0 = 1.0\ndt= 0.01*L*L/D0 # a proper time step for diffusion process\n\neq = TransientTerm(var=phi) - DiffusionTerm(var=phi, coeff=D0*(1+phi.faceValue**2))\n\nfor i in range(4):\n for i in range(5):\n c_res = eq.sweep(dt = dt)\n phi.updateOld()\n\nViewer(vars = phi, datamax=5.0, datamin=0.0);\n# viewer.plot()", "$$\\frac{\\partial\\phi}{\\partial t}+\\nabla . 
\\left(-D\\left(\\phi_{0}\\right)\\nabla \\phi\\right)+\\nabla.\\left(-\\nabla \\phi_{0}\\left(\\frac{\\partial D}{\\partial \\phi}\\right)_{\\phi_{0,face}}\\phi\\right) =\\nabla.\\left(-\\nabla \\phi_{0}\\left(\\frac{\\partial D}{\\partial \\phi}\\right)_{\\phi_{0,face}}\\phi_{0,face}\\right)$$", "phi2 = CellVariable(mesh=mesh, name=\"phi\", hasOld=True, value = 0.0)\nphi2.constrain(5.0, mesh.facesLeft)\nphi2.constrain(0., mesh.facesRight)\n\n# D(phi)=D0*(1.0+phi.^2)\n# dD(phi)=2.0*D0*phi\nD0 = 1.0\ndt= 0.01*L*L/D0 # a proper time step for diffusion process\n\neq2 = TransientTerm(var=phi2)-DiffusionTerm(var=phi2, coeff=D0*(1+phi2.faceValue**2))+ \\\nUpwindConvectionTerm(var=phi2, coeff=-2*D0*phi2.faceValue*phi2.faceGrad)== \\\n(-2*D0*phi2.faceValue*phi2.faceGrad*phi2.faceValue).divergence\n\nfor i in range(4):\n for i in range(5):\n c_res = eq2.sweep(dt = dt)\n phi2.updateOld()\n\nviewer = Viewer(vars = [phi, phi2], datamax=5.0, datamin=0.0)", "The above figure shows how the upwind convection term is not consistent with the linear averaging.", "phi3 = CellVariable(mesh=mesh, name=\"phi\", hasOld=True, value = 0.0)\nphi3.constrain(5.0, mesh.facesLeft)\nphi3.constrain(0., mesh.facesRight)\n\n# D(phi)=D0*(1.0+phi.^2)\n# dD(phi)=2.0*D0*phi\nD0 = 1.0\ndt= 0.01*L*L/D0 # a proper time step for diffusion process\nu = -2*D0*phi3.faceValue*phi3.faceGrad\n\neq3 = TransientTerm(var=phi3)-DiffusionTerm(var=phi3, coeff=D0*(1+phi3.faceValue**2))+ \\\nUpwindConvectionTerm(var=phi3, coeff=-2*D0*phi3.faceValue*phi3.faceGrad)== \\\n(-2*D0*phi3.faceValue*phi3.faceGrad*phi3.faceValue).divergence\n\nfor i in range(4):\n for i in range(5):\n c_res = eq3.sweep(dt = dt)\n phi_face = FaceVariable(mesh, upwindValues(mesh, phi3, u))\n u = -2*D0*phi_face*phi3.faceGrad\n eq3 = TransientTerm(var=phi3)-DiffusionTerm(var=phi3, coeff=D0*(1+phi3.faceValue**2))+ \\\n UpwindConvectionTerm(var=phi3, coeff=u)== \\\n (u*phi_face).divergence\n phi3.updateOld()\n\nviewer = Viewer(vars = [phi, phi3], 
datamax=5.0, datamin=0.0)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
letsgoexploring/teaching
winter2017/econ129/python/Econ129_Class_13_Complete.ipynb
mit
[ "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\n%matplotlib inline", "Class 13: Introduction to Business Cycle Modeling\nEmpirical evidence of TFP fluctuation", "# Import actual and trend production data\ndata = pd.read_csv('http://www.briancjenkins.com/teaching/winter2017/econ129/data/Econ129_Rbc_Data.csv',index_col=0)\nprint(data.head())", "Recall:\n\\begin{align}\n\\frac{X_t - X_t^{trend}}{X_t^{trend}} & \\approx \\log\\left(X_t/X_t^{trend}\\right) = \\log X_t - \\log X_t^{trend}\n\\end{align}", "# Create new DataFrame of percent deviations from trend\ndata_cycles = pd.DataFrame({\n 'gdp':100*(np.log(data.gdp/data.gdp_trend)),\n 'consumption':100*(np.log(data.consumption/data.consumption_trend)),\n 'investment':100*(np.log(data.investment/data.investment_trend)),\n 'hours':100*(np.log(data.hours/data.hours_trend)),\n 'capital':100*(np.log(data.capital/data.capital_trend)),\n 'tfp':100*(np.log(data.tfp/data.tfp_trend)),\n})\n\n# Plot all percent deviations from trend\nfig = plt.figure(figsize=(12,8))\n\nax = fig.add_subplot(3,2,1)\nax.plot_date(data_cycles.index,data_cycles['gdp'],'-',lw=3,alpha=0.7)\nax.grid()\nax.set_title('GDP per capita')\nax.set_ylabel('% dev from trend')\n\nax = fig.add_subplot(3,2,2)\nax.plot_date(data_cycles.index,data_cycles['consumption'],'-',lw=3,alpha=0.7)\nax.plot_date(data_cycles.index,data_cycles['gdp'],'-k',lw=3,alpha=0.2)\nax.grid()\nax.set_title('Consumption per capita (GDP in light gray)')\n\n\nax = fig.add_subplot(3,2,3)\nax.plot_date(data_cycles.index,data_cycles['investment'],'-',lw=3,alpha=0.7)\nax.plot_date(data_cycles.index,data_cycles['gdp'],'-k',lw=3,alpha=0.2)\nax.grid()\nax.set_title('Investment per capita (GDP in light gray)')\nax.set_ylabel('% dev from trend')\n\nax = 
fig.add_subplot(3,2,4)\nax.plot_date(data_cycles.index,data_cycles['hours'],'-',lw=3,alpha=0.7)\nax.plot_date(data_cycles.index,data_cycles['gdp'],'-k',lw=3,alpha=0.2)\nax.grid()\nax.set_title('Hours per capita (GDP in light gray)')\n\nax = fig.add_subplot(3,2,5)\nax.plot_date(data_cycles.index,data_cycles['capital'],'-',lw=3,alpha=0.7)\nax.plot_date(data_cycles.index,data_cycles['gdp'],'-k',lw=3,alpha=0.2)\nax.grid()\nax.set_title('Capital per capita (GDP in light gray)')\nax.set_ylabel('% dev from trend')\n\nax = fig.add_subplot(3,2,6)\nax.plot_date(data_cycles.index,data_cycles['tfp'],'-',lw=3,alpha=0.7,label='TFP')\nax.plot_date(data_cycles.index,data_cycles['gdp'],'-k',lw=3,alpha=0.2,label='GDP')\nax.grid()\nax.set_title('TFP per capita (GDP in light gray)')\n\nplt.tight_layout()\n\n# Add a column of lagged tfp values\ndata_cycles['tfp_lag']= data_cycles['tfp'].shift()\ndata_cycles = data_cycles.dropna()\ndata_cycles.head()\n\nplt.scatter(data_cycles.tfp_lag,data_cycles.tfp,s=50,alpha = 0.7)\nplt.grid()\nplt.xlabel('TFP lagged one period (% dev from trend)')\nplt.ylabel('TFP (% dev from trend)')", "Since there appears to be a strong correlation between the lagged cyclical component of TFP and the current cyclical component of TFP, let's estimate the following AR(1) model using the statsmodels package.\n\\begin{align}\n\\hat{a}_t & = \\rho \\hat{a}_{t-1} + \\epsilon_t\n\\end{align}", "model = sm.OLS(data_cycles.tfp,data_cycles.tfp_lag)\nresults = model.fit()\nprint(results.summary())\n\n# Store the estimated autoregressive parameter\nrhoA = results.params['tfp_lag']\n\n# Compute the predicted values:\ntfp_pred = results.predict()\n\n# Compute the standard deviation of the residuals of the regression\nsigma = np.std(results.resid)\n\nprint('rho: ',np.round(rhoA,5))\nprint('sigma (in percent):',np.round(sigma,5))\n\n# Scatter plot of data with fitted regression line:\nplt.scatter(data_cycles.tfp_lag,data_cycles.tfp,s=50,alpha = 
0.7)\nplt.plot(data_cycles.tfp_lag,tfp_pred,'r')\nplt.grid()\nplt.xlabel('TFP lagged one period (% dev from trend)')\nplt.ylabel('TFP (% dev from trend)')", "A Baseline Real Business Cycle Model\nConsider the following business cycle model:\n \\begin{align}\n Y_t & = A_t K_t^{\\alpha} \\tag{1}\\\\\n C_t & = (1-s)Y_t \\tag{2}\\\\\n I_t & = K_{t+1} - (1-\\delta)K_t \\tag{3}\\\\\n Y_t & = C_t + I_t \\tag{4}\n \\end{align}\nwhere:\n \\begin{align}\n \\log A_{t+1} & = \\rho \\log A_t + \\epsilon_t, \\tag{5}\n \\end{align}\nreflects exogenous fluctuation in TFP. The endogenous variables in the model are $K_t$, $Y_t$, $C_t$, $I_t$, and $A_t$, and $\\epsilon_t$ is an exogenous white noise shock process with standard deviation $\\sigma$. $K_t$ and $A_t$ are called state variables because their values in period $t$ affect the equilibrium of the model in period $t+1$.\nNon-stochastic steady state\n\n\nThe non-stochastic steady state equilibrium for the model is an equilibrium in which the exogenous shock process $\\epsilon_t = 0$ for all $t$ and $K_{t+1} = K_t$ and $A_{t+1} = A_t$ for all $t$. Find the non-stochastic steady state of the model analytically. That is, use pencil and paper to find values for capital $\\bar{K}$, output $\\bar{Y}$, consumption $\\bar{C}$, and investment $\\bar{I}$ in terms of the model parameters $\\alpha$, $s$, and $\\delta$.\n\n\nSuppose that: $\\alpha = 0.35$, $\\delta = 0.025$, and $s = 0.1$. Use your answers to the previous exercise to compute numerical values for consumption, output, capital, and investment. 
Use the variable names kss, yss, css, and iss to store the computed steady state values.", "# Define parameters\ns = 0.1\ndelta = 0.025\nalpha = 0.35\n\n# Compute the steady state values of the endogenous variables\nkss = (s/delta)**(1/(1-alpha))\nyss = kss**alpha\ncss = (1-s)*yss\niss = yss - css\n\nprint('Steady states:\\n')\nprint('capital: ',round(kss,5))\nprint('output: ',round(yss,5))\nprint('consumption:',round(css,5))\nprint('investment: ',round(iss,5))", "Impulse responses\nIn this part, you will simulate the model directly in response to a 1 percent shock to aggregate technology. The simulation will run for $T+1$ periods from $t = 0,\\ldots, T$ and the shock arrives at $t = 1$. Suppose that $T = 12$.\n\n\nUse equations (1) through (4) to solve for $K_{t+1}$, $Y_t$, $C_t$, and $I_t$ in terms of only $K_t$, $a_t$, and the model parameters $\\alpha$, $\\delta$, and $s$.\n\n\nInitialize an array for $\\epsilon_t$ called eps_ir that is equal to a $T\\times 1$ array of zeros. Set the first element of this array equal to 0.01.\n\n\nInitialize an array for $\\log A_t$ called log_a_ir that is equal to a $(T+1)\\times 1$ array of zeros. Set $\\rho = 0.75$ and compute the impulse response of $\\log A_t$ to the shock. Use the simulated values for $\\log A_t$ to compute $A_t$ and save the values in a variable called a_ir (Note: $A_t = e^{\\log A_t}$). Plot $\\log A_t$ and $A_t$.\n\n\nInitialize an array for $K_t$ called k_ir that is a $(T+1)\\times 1$ array of zeros. Set the first value in the array equal to steady state capital. Then compute the subsequent values for $K_t$ using the computed values for $A_t$. Plot $K_t$.\n\n\nInitialize $(T+1)\\times 1$ arrays for $Y_t$, $C_t$, and $I_t$ called y_ir, c_ir, and i_ir. 
Use the computed values for $K_t$ to compute simulated values for $Y_t$, $C_t$, and $I_t$.\n\n\nConstruct a $2\\times2$ grid of subplots of the impulse responses of capital, output, consumption, and investment to a one percent shock to aggregate technology.\n\n\nCompute the percent deviation of each variable from its steady state \n\\begin{align}\n100*(\\log(X_t) - \\log(\\bar{X}))\n\\end{align}\n and store the results in variables called: k_ir_dev, y_ir_dev, c_ir_dev, and i_ir_dev. Construct a $2\\times2$ grid of subplots of the impulse responses of capital, output, consumption, and investment to the technology shock with each variable expressed as a percent deviation from steady state.", "# Set number of simulation periods (minus 1):\nT = 12\n\n# Initialize eps_ir as a T x 1 array of zeros and set first value to 0.01\neps_ir = np.zeros(T)\neps_ir[0] = 0.01\n\n# Set coefficient of autocorrelation for log A\nrho = 0.75\n\n# Initialize log_a_ir as a (T+1) x 1 array of zeros and compute.\nlog_a_ir = np.zeros(T+1)\nfor t in range(T):\n log_a_ir[t+1] = rho*log_a_ir[t] + eps_ir[t]\n\n# Plot log_a_ir\nplt.plot(log_a_ir,lw=3,alpha =0.7)\nplt.title('$\\log A_t$')\nplt.grid()\n\n# Computes a_ir.\na_ir = np.exp(log_a_ir)\n\n# Plot a_ir\nplt.plot(a_ir,lw=3,alpha =0.7)\nplt.title('$A_t$')\nplt.grid()\n\n# Initialize k_ir as a (T+1) x 1 array of zeros and compute\nk_ir = np.zeros(T+1)\nk_ir[0] = kss\nfor t in range(T):\n k_ir[t+1] = s*a_ir[t] *k_ir[t]**alpha+ (1-delta)*k_ir[t]\n \n# Plot k_ir\nplt.plot(k_ir,lw=3,alpha =0.7)\nplt.title('$K_t$')\nplt.grid()\n\n# Compute y_ir, c_ir, i_ir\n\ny_ir = a_ir*k_ir**alpha\nc_ir = (1-s)*y_ir\ni_ir = s*y_ir\n\n# Create a 2x2 plot of y_ir, c_ir, i_ir, and k_ir\nfig = plt.figure(figsize=(12,8))\n\nax = fig.add_subplot(2,2,1)\nax.plot(y_ir,lw=3,alpha = 0.7)\nax.set_title('$Y_t$')\nax.grid()\n\nax = fig.add_subplot(2,2,2)\nax.plot(c_ir,lw=3,alpha = 0.7)\nax.set_title('$C_t$')\nax.grid()\n\nax = fig.add_subplot(2,2,3)\nax.plot(i_ir,lw=3,alpha = 
0.7)\nax.set_title('$I_t$')\nax.grid()\n\nax = fig.add_subplot(2,2,4)\nax.plot(k_ir,lw=3,alpha = 0.7)\nax.set_title('$K_t$')\nax.grid()\n\nplt.tight_layout()\n\n# Compute y_ir_dev, c_ir_dev, i_ir_dev, and k_ir_dev to be the percent deviations from steady state of the \n# respective variables\n\ny_ir_dev = 100*(np.log(y_ir) - np.log(yss))\nc_ir_dev = 100*(np.log(c_ir) - np.log(css))\ni_ir_dev = 100*(np.log(i_ir) - np.log(iss))\nk_ir_dev = 100*(np.log(k_ir) - np.log(kss))\n\n# Create a 2x2 plot of y_ir_dev, c_ir_dev, i_ir_dev, and k_ir_dev\nfig = plt.figure(figsize=(12,8))\n\nax = fig.add_subplot(2,2,1)\nax.plot(y_ir_dev,lw=3,alpha = 0.7)\nax.set_title('$\\hat{y}_t$')\nax.grid()\nax.set_ylabel('% dev from steady state')\n\nax = fig.add_subplot(2,2,2)\nax.plot(c_ir_dev,lw=3,alpha = 0.7)\nax.set_title('$\\hat{c}_t$')\nax.grid()\n\nax = fig.add_subplot(2,2,3)\nax.plot(i_ir_dev,lw=3,alpha = 0.7)\nax.set_title('$\\hat{\\imath}_t$')\nax.grid()\nax.set_ylabel('% dev from steady state')\n\nax = fig.add_subplot(2,2,4)\nax.plot(k_ir_dev,lw=3,alpha = 0.7)\nax.set_title('$\\hat{k}_t$')\nax.grid()\n\nplt.tight_layout()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
deepfield/ibis
docs/source/notebooks/tutorial/6-Advanced-Topics-TopK-SelfJoins.ipynb
apache-2.0
[ "Advanced Topics: Top-K and Self Joins\nSetup", "import ibis\nimport os\nhdfs_port = os.environ.get('IBIS_WEBHDFS_PORT', 50070)\nhdfs = ibis.hdfs_connect(host='quickstart.cloudera', port=hdfs_port)\ncon = ibis.impala.connect(host='quickstart.cloudera', database='ibis_testing',\n hdfs_client=hdfs)\nibis.options.interactive = True", "\"Top-K\" Filtering\nA common analytical pattern involves subsetting based on some method of ranking. For example, \"the 5 most frequently occurring widgets in a dataset\". By choosing the right metric, you can obtain the most important or least important items from some dimension, for some definition of important.\nTo carry out the pattern by hand involves the following\n\nChoose a ranking metric\nAggregate, computing the ranking metric, by the target dimension\nOrder by the ranking metric and take the highest K values\nUse those values as a set filter (either with semi_join or isin) in your next query\n\nFor example, let's look at the TPC-H tables and find the 5 or 10 customers who placed the most orders over their lifetime:", "orders = con.table('tpch_orders')\n\ntop_orders = (orders\n .group_by('o_custkey')\n .size()\n .sort_by(('count', False))\n .limit(5))\ntop_orders", "Now, we could use these customer keys as a filter in some other analysis:", "# Among the top 5 most frequent customers, what's the histogram of their order statuses?\nanalysis = (orders[orders.o_custkey.isin(top_orders.o_custkey)]\n .group_by('o_orderstatus')\n .size())\nanalysis", "This is such a common pattern that Ibis supports a high level primitive topk operation, which can be used immediately as a filter:", "top_orders = orders.o_custkey.topk(5)\norders[top_orders].group_by('o_orderstatus').size()", "This goes a little further. 
Suppose now we want to rank customers by their total spending instead of the number of orders, perhaps a more meaningful metric:", "total_spend = orders.o_totalprice.sum().name('total')\ntop_spenders = (orders\n .group_by('o_custkey')\n .aggregate(total_spend)\n .sort_by(('total', False))\n .limit(5))\ntop_spenders", "To use another metric, just pass it to the by argument in topk:", "top_spenders = orders.o_custkey.topk(5, by=total_spend)\norders[top_spenders].group_by('o_orderstatus').size()", "Self joins\nIf you're a relational data guru, you may have wondered how it's possible to join tables with themselves, because join clauses involve column references back to the original table.\nConsider the SQL\nsql\n SELECT t1.key, sum(t1.value - t2.value) AS metric\n FROM my_table t1\n JOIN my_table t2\n ON t1.key = t2.subkey\n GROUP BY 1\nHere, we have an unambiguous way to refer to each of the tables through aliasing.\nLet's consider the TPC-H database, and suppose we want to compute year-over-year change in total order amounts by region using joins.", "region = con.table('tpch_region')\nnation = con.table('tpch_nation')\ncustomer = con.table('tpch_customer')\norders = con.table('tpch_orders')\n\norders.limit(5)", "First, let's join all the things and select the fields we care about:", "fields_of_interest = [region.r_name.name('region'), \n nation.n_name.name('nation'),\n orders.o_totalprice.name('amount'),\n orders.o_orderdate.cast('timestamp').name('odate') # these are strings\n ]\n\njoined_all = (region.join(nation, region.r_regionkey == nation.n_regionkey)\n .join(customer, customer.c_nationkey == nation.n_nationkey)\n .join(orders, orders.o_custkey == customer.c_custkey)\n [fields_of_interest])", "Okay, great, let's have a look:", "joined_all.limit(5)", "Sweet, now let's aggregate by year and region:", "year = joined_all.odate.year().name('year')\n\ntotal = joined_all.amount.sum().cast('double').name('total')\n\nannual_amounts = (joined_all\n .group_by(['region', 
year])\n .aggregate(total))\nannual_amounts", "Looking good so far. Now, we need to join this table on itself, by subtracting 1 from one of the year columns.\nWe do this by creating a \"joinable\" view of a table that is considered a distinct object within Ibis. To do this, use the view function:", "current = annual_amounts\nprior = annual_amounts.view()\n\nyoy_change = (current.total - prior.total).name('yoy_change')\n\nresults = (current.join(prior, ((current.region == prior.region) & \n (current.year == (prior.year - 1))))\n [current.region, current.year, yoy_change])\ndf = results.execute()\n\ndf['yoy_pretty'] = df.yoy_change.map(lambda x: '$%.2fmm' % (x / 1000000.))\ndf", "If you're being fastidious and want to consider the first year occurring in the dataset for each region to have 0 for the prior year, you will instead need to do an outer join and treat nulls in the prior side of the join as zero:", "yoy_change = (current.total - prior.total.zeroifnull()).name('yoy_change')\nresults = (current.outer_join(prior, ((current.region == prior.region) & \n (current.year == (prior.year - 1))))\n [current.region, current.year, current.total,\n prior.total.zeroifnull().name('prior_total'), \n yoy_change])\n\nresults.limit(10)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Kaggle/learntools
notebooks/data_cleaning/raw/ex1.ipynb
apache-2.0
[ "In this exercise, you'll apply what you learned in the Handling missing values tutorial.\nSetup\nThe questions below will give you feedback on your work. Run the following cell to set up the feedback system.", "from learntools.core import binder\nbinder.bind(globals())\nfrom learntools.data_cleaning.ex1 import *\nprint(\"Setup Complete\")", "1) Take a first look at the data\nRun the next code cell to load in the libraries and dataset you'll use to complete the exercise.", "# modules we'll use\nimport pandas as pd\nimport numpy as np\n\n# read in all our data\nsf_permits = pd.read_csv(\"../input/building-permit-applications-data/Building_Permits.csv\")\n\n# set seed for reproducibility\nnp.random.seed(0) ", "Use the code cell below to print the first five rows of the sf_permits DataFrame.", "# TODO: Your code here!\n\n\n#%%RM_IF(PROD)%%\nsf_permits.head()", "Does the dataset have any missing values? Once you have an answer, run the code cell below to get credit for your work.", "# Check your answer (Run this code cell to receive credit!)\nq1.check()\n\n# Line below will give you a hint\n#_COMMENT_IF(PROD)_\nq1.hint()", "2) How many missing data points do we have?\nWhat percentage of the values in the dataset are missing? Your answer should be a number between 0 and 100. 
(If 1/4 of the values in the dataset are missing, the answer is 25.)", "# TODO: Your code here!\npercent_missing = ____\n\n# Check your answer\nq2.check()\n\n# Lines below will give you a hint or solution code\n#_COMMENT_IF(PROD)_\nq2.hint()\n#_COMMENT_IF(PROD)_\nq2.solution()\n\n#%%RM_IF(PROD)%%\n# get the number of missing data points per column\npercent_missing = sf_permits.isnull().sum().sum()\nq2.assert_check_failed()\n\n#%%RM_IF(PROD)%%\n# get the number of missing data points per column\nmissing_values_count = sf_permits.isnull().sum()\n\n# how many total missing values do we have?\ntotal_cells = np.product(sf_permits.shape)\ntotal_missing = missing_values_count.sum()\n\n# percent of data that is missing\npercent_missing = (total_missing/total_cells) * 100\nq2.assert_check_passed()", "3) Figure out why the data is missing\nLook at the columns \"Street Number Suffix\" and \"Zipcode\" from the San Francisco Building Permits dataset. Both of these contain missing values. \n- Which, if either, are missing because they don't exist? \n- Which, if either, are missing because they weren't recorded? 
\nOnce you have an answer, run the code cell below.", "# Check your answer (Run this code cell to receive credit!)\nq3.check()\n\n# Line below will give you a hint\n#_COMMENT_IF(PROD)_\nq3.hint()", "4) Drop missing values: rows\nIf you removed all of the rows of sf_permits with missing values, how many rows are left?\nNote: Do not change the value of sf_permits when checking this.", "# TODO: Your code here!\n\n\n#%%RM_IF(PROD)%%\nsf_permits.dropna()", "Once you have an answer, run the code cell below.", "# Check your answer (Run this code cell to receive credit!)\nq4.check()\n\n# Line below will give you a hint\n#_COMMENT_IF(PROD)_\nq4.hint()", "5) Drop missing values: columns\nNow try removing all the columns with empty values.\n- Create a new DataFrame called sf_permits_with_na_dropped that has all of the columns with empty values removed.\n- How many columns were removed from the original sf_permits DataFrame? Use this number to set the value of the dropped_columns variable below.", "# TODO: Your code here\nsf_permits_with_na_dropped = ____\n\ndropped_columns = ____\n\n# Check your answer\nq5.check()\n\n#%%RM_IF(PROD)%%\n# remove all columns with at least one missing value\nsf_permits_with_na_dropped = sf_permits.dropna(axis=1)\n\n# calculate number of dropped columns\ncols_in_original_dataset = sf_permits.shape[1]\ncols_in_na_dropped = sf_permits_with_na_dropped.shape[1]\ndropped_columns = cols_in_original_dataset - cols_in_na_dropped \n\nq5.assert_check_passed()\n\n# Lines below will give you a hint or solution code\n#_COMMENT_IF(PROD)_\nq5.hint()\n#_COMMENT_IF(PROD)_\nq5.solution()", "6) Fill in missing values automatically\nTry replacing all the NaN's in the sf_permits data with the one that comes directly after it and then replacing any remaining NaN's with 0. 
Set the result to a new DataFrame sf_permits_with_na_imputed.", "# TODO: Your code here\nsf_permits_with_na_imputed = ____\n\n# Check your answer\nq6.check()\n\n#%%RM_IF(PROD)%%\nsf_permits_with_na_imputed = sf_permits_with_na_dropped.fillna(method='bfill', axis=0).fillna(0)\nq6.assert_check_failed()\n\n#%%RM_IF(PROD)%%\nsf_permits_with_na_imputed = sf_permits.fillna(method='bfill', axis=0).fillna(0)\nq6.assert_check_passed()\n\n# Lines below will give you a hint or solution code\n#_COMMENT_IF(PROD)_\nq6.hint()\n#_COMMENT_IF(PROD)_\nq6.solution()", "More practice\nIf you're looking for more practice handling missing values:\n\nCheck out this noteboook on handling missing values using scikit-learn's imputer. \nLook back at the \"Zipcode\" column in the sf_permits dataset, which has some missing values. How would you go about figuring out what the actual zipcode of each address should be? (You might try using another dataset. You can search for datasets about San Fransisco on the Datasets listing.) \n\nKeep going\nIn the next lesson, learn how to apply scaling and normalization to transform your data." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phoebe-project/phoebe2-docs
2.3/tutorials/l3.ipynb
gpl-3.0
[ "\"Third\" Light\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).", "#!pip install -I \"phoebe>=2.3,<2.4\"", "As always, let's do imports and initialize a logger and a new bundle.", "import phoebe\nfrom phoebe import u # units\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()", "Relevant Parameters\nAn l3_mode parameter exists for each LC dataset, which determines whether third light will be provided in flux units, or as a fraction of the total flux.\nSince this is passband dependent and only used for flux measurements - it does not yet exist for a new empty Bundle.", "b.filter(qualifier='l3_mode')", "So let's add an LC dataset", "b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')", "We now see that the LC dataset created an 'l3_mode' parameter, and since l3_mode is set to 'flux' the 'l3' parameter is also visible.", "print(b.filter(qualifier='l3*'))", "l3_mode = 'flux'\nWhen l3_mode is set to 'flux', the l3 parameter defines (in flux units) how much extraneous light is added to the light curve in that particular passband/dataset.", "print(b.filter(qualifier='l3*'))\n\nprint(b.get_parameter('l3'))", "To compute the fractional third light from the provided value in flux units, call b.compute_l3s. 
This assumes that the flux of the system is the sum of the extrinsic passband luminosities (see the pblum tutorial for more details on intrinsic vs extrinsic passband luminosities) divided by $4\\pi$ at t0@system, and according to the compute options.\nNote that calling compute_l3s is not necessary, as the backend will handle the conversion automatically.", "print(b.compute_l3s())", "l3_mode = 'fraction'\nWhen l3_mode is set to 'fraction', the l3 parameter is now replaced by an l3_frac parameter.", "b.set_value('l3_mode', 'fraction')\n\nprint(b.filter(qualifier='l3*'))\n\nprint(b.get_parameter('l3_frac'))", "Similarly to above, we can convert to actual flux units (under the same assumptions), by calling b.compute_l3s.\nNote that calling compute_l3s is not necessary, as the backend will handle the conversion automatically.", "print(b.compute_l3s())", "Influence on Light Curves (Fluxes)\n\"Third\" light is simply additional flux added to the light curve from some external source - whether it be crowding from a background object, light from the sky, or an extra component in the system that is unaccounted for in the system hierarchy.\nTo see this we'll compare a light curve with and without \"third\" light.", "b.run_compute(irrad_method='none', model='no_third_light')\n\nb.set_value('l3_mode', 'flux')\nb.set_value('l3', 5)\n\nb.run_compute(irrad_method='none', model='with_third_light')", "As expected, adding 5 W/m^2 of third light simply shifts the light curve up by that exact same amount.", "afig, mplfig = b['lc01'].plot(model='no_third_light')\nafig, mplfig = b['lc01'].plot(model='with_third_light', legend=True, show=True)", "Influence on Meshes (Intensities)\n\"Third\" light does not affect the intensities stored in the mesh (including those in relative units). In other words, like distance, \"third\" light only scales the fluxes.\nNOTE: this is different than pblums which DO affect the relative intensities. 
Again, see the pblum tutorial for more details.\nTo see this we can run both of our models again and look at the values of the intensities in the mesh.", "b.add_dataset('mesh', times=[0], dataset='mesh01', columns=['intensities@lc01', 'abs_intensities@lc01'])\n\nb.set_value('l3', 0.0)\n\nb.run_compute(irrad_method='none', model='no_third_light', overwrite=True)\n\nb.set_value('l3', 5)\n\nb.run_compute(irrad_method='none', model='with_third_light', overwrite=True)\n\nprint(\"no_third_light abs_intensities: \", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='no_third_light')))\nprint(\"with_third_light abs_intensities: \", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='with_third_light')))\n\nprint(\"no_third_light intensities: \", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='no_third_light')))\nprint(\"with_third_light intensities: \", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='with_third_light')))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ffmmjj/intro_to_data_science_workshop
solutions/.ipynb_checkpoints/Boston housing prices prediction-checkpoint.ipynb
apache-2.0
[ "Regression problems involve the prediction of a continuous, numeric value from a set of characteristics.\nIn this example, we'll build a model to predict house prices from characteristics like the number of rooms and the crime rate at the house location.\nReading data\nWe'll be using the pandas package to read data.\nPandas is an open source library that can be used to read formatted data files into tabular structures that can be processed by python scripts.", "# Make sure you have a working installation of pandas by executing this cell\nimport pandas as pd", "In this exercise, we'll use the Boston Housing dataset to predict house prices from characteristics like the number of rooms and distance to employment centers.", "boston_housing_data = pd.read_csv('../datasets/boston.csv')", "Pandas allows reading our data from different file formats and sources. See this link for a list of supported operations.", "boston_housing_data.head()\n\nboston_housing_data.info()\n\nboston_housing_data.describe()", "Visualizing data\nAfter reading our data into a pandas DataFrame and getting a broader view of the dataset, we can build charts to visualize the \"shape\" of the data.\nWe'll use python's Matplotlib library to create these charts.\nAn example\nSuppose you're given the following information about four datasets:", "datasets = pd.read_csv('../datasets/anscombe.csv')\n\nfor i in range(1, 5):\n dataset = datasets[datasets.Source == i]\n print('Dataset {} (X, Y) mean: {}'.format(i, (dataset.x.mean(), dataset.y.mean())))\n\nprint('\\n')\nfor i in range(1, 5):\n dataset = datasets[datasets.Source == i]\n print('Dataset {} (X, Y) std deviation: {}'.format(i, (dataset.x.std(), dataset.y.std())))\n\nprint('\\n')\nfor i in range(1, 5):\n dataset = datasets[datasets.Source == i]\n print('Dataset {} correlation between X and Y: {}'.format(i, dataset.x.corr(dataset.y)))", "They all have roughly the same mean, standard deviations and correlation. 
How similar are they?\n\nThis dataset is known as the Anscombe's Quartet and it's used to illustrate how tricky it can be to trust only summary statistics to characterize a dataset.", "import matplotlib.pyplot as plt\n# This line makes the graphs appear as cell outputs rather than in a separate window or file.\n%matplotlib inline\n\n# Extract the house prices and average number of rooms to two separate variables\nprices = boston_housing_data.medv\nrooms = boston_housing_data.rm\n\n# Create a scatterplot of these two properties using plt.scatter()\nplt.scatter(rooms, prices)\n# Specify labels for the X and Y axis\nplt.xlabel('Number of rooms')\nplt.ylabel('House price')\n# Show graph\nplt.show()\n\n# Extract the house prices and nitric oxide concentrations to two separate variables\nprices = boston_housing_data.medv\nnox = boston_housing_data.nox\n\n# Create a scatterplot of these two properties using plt.scatter()\nplt.scatter(nox, prices)\n# Specify labels for the X and Y axis\nplt.xlabel('Nitric oxide concentration')\nplt.ylabel('House price')\n# Show graph\nplt.show()", "Predicting house prices\nWe could see in the previous graphs that some features have a roughly linear relationship to the house prices. We'll use Scikit-Learn's LinearRegression to model this data and predict house prices from other information.\nThe example below builds a LinearRegression model using the average number of rooms to predict house prices:", "from sklearn.linear_model import LinearRegression\n\nx = boston_housing_data.rm.values.reshape(-1, 1)\ny = boston_housing_data.medv.values.reshape(-1, 1)\n\nlr = LinearRegression().fit(x, y)\n\nlr.predict([[6]])", "We'll now use all the features in the dataset to predict house prices.\nLet's start by splitting our data into a training set and a validation set. 
The training set will be used to train our linear model; the validation set, on the other hand, will be used to assess how accurate our model is.", "X = boston_housing_data.drop('medv', axis=1)\nt = boston_housing_data.medv.values.reshape(-1, 1)\n\n# Use sklearn's train_test_split() method to split our data into two sets.\n# See http://scikit-learn.org/0.17/modules/generated/sklearn.cross_validation.train_test_split.html#sklearn.cross_validation.train_test_split\nfrom sklearn.cross_validation import train_test_split\n\nXtr, Xts, ytr, yts = train_test_split(X, t)\n\n# Use the training set to build a LinearRegression model\nlr = LinearRegression().fit(Xtr, ytr)\n\n# Use the validation set to assess the model's performance.\n# See http://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html\nfrom sklearn.metrics import mean_squared_error\n\nmean_squared_error(yts, lr.predict(Xts))", "What kind of enhancements could be done to get better results?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jasontlam/snorkel
tutorials/cdr/CDR_Tutorial_3.ipynb
apache-2.0
[ "Chemical-Disease Relation (CDR) Tutorial\nIn this example, we'll be writing an application to extract mentions of chemical-induced-disease relationships from Pubmed abstracts, as per the BioCreative CDR Challenge. This tutorial will show off some of the more advanced features of Snorkel, so we'll assume you've followed the Intro tutorial.\nLet's start by reloading from the last notebook.", "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nimport numpy as np\nfrom snorkel import SnorkelSession\n\nsession = SnorkelSession()\n\nfrom snorkel.models import candidate_subclass\n\nChemicalDisease = candidate_subclass('ChemicalDisease', ['chemical', 'disease'])\n\ntrain = session.query(ChemicalDisease).filter(ChemicalDisease.split == 0).all()\ndev = session.query(ChemicalDisease).filter(ChemicalDisease.split == 1).all()\ntest = session.query(ChemicalDisease).filter(ChemicalDisease.split == 2).all()\n\nprint('Training set:\\t{0} candidates'.format(len(train)))\nprint('Dev set:\\t{0} candidates'.format(len(dev)))\nprint('Test set:\\t{0} candidates'.format(len(test)))", "Part V: Training an LSTM extraction model\nIn the intro tutorial, we automatically featurized the candidates and trained a linear model over these features. Here, we'll train a more complicated model for relation extraction: an LSTM network. You can read more about LSTMs here or here. An LSTM is a type of recurrent neural network and automatically generates a numerical representation for the candidate based on the sentence text, so no need for featurizing explicitly as in the intro tutorial. LSTMs take longer to train, and Snorkel doesn't currently support hyperparameter searches for them. We'll train a single model here, but feel free to try out other parameter sets. 
Just make sure to use the development set - and not the test set - for model selection.\nNote: Again, training for more epochs than below will greatly improve performance- try it out!", "from snorkel.annotations import load_marginals\ntrain_marginals = load_marginals(session, split=0)\n\nfrom snorkel.annotations import load_gold_labels\nL_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)\n\nfrom snorkel.learning import reRNN\n\ntrain_kwargs = {\n 'lr': 0.01,\n 'dim': 100,\n 'n_epochs': 20,\n 'dropout': 0.5,\n 'rebalance': 0.25,\n 'print_freq': 5\n}\n\nlstm = reRNN(seed=1701, n_threads=None)\nlstm.train(train, train_marginals, X_dev=dev, Y_dev=L_gold_dev, **train_kwargs)", "Scoring on the test set\nFinally, we'll evaluate our performance on the blind test set of 500 documents. We'll load labels similar to how we did for the development set, and use the score function of our extraction model to see how we did.", "from load_external_annotations import load_external_labels\nload_external_labels(session, ChemicalDisease, split=2, annotator='gold')\nL_gold_test = load_gold_labels(session, annotator_name='gold', split=2)\nL_gold_test\n\nlstm.score(test, L_gold_test)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
Neuroglycerin/neukrill-net-work
notebooks/model_modifications/Tuning Learning Rate.ipynb
mit
[ "In Yoshua Bengio's paper on practical recommendations for training gradient-based architectures he says about the learning rate:\n\nThis is often the single most important hyper-parameter and one should always make sure that it has been tuned (up to approximately a factor of 2). Typical values for a neural network with standardized inputs (or inputs mapped to the (0,1) interval) are less than 1 and greater than 10^-6 but these should not be taken as strict ranges and greatly depend on the parametrization of the model.\nA default value of 0.01 typically works for standard multi-layer neural networks but it would be foolish to rely exclusively on this default value. If there is only time to optimize one hyper-parameter and one uses stochastic gradient descent, then this is the hyper-parameter that is worth tuning.\n\nCurrently, the network that has been performing best for us (but not that well) is the alexnet_based architecture built by Matthew Graham for another project.\nPrevious results\nWe should look at the progress of previous results, see if we can see signs of overshoot (too high) or slow learning (too low):", "import pylearn2.utils\nimport pylearn2.config\nimport theano\nimport neukrill_net.dense_dataset\nimport neukrill_net.utils\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport holoviews as hl\n%load_ext holoviews.ipython\nimport sklearn.metrics\n\ncd ..\n\nsettings = neukrill_net.utils.Settings(\"settings.json\")\nrun_settings = neukrill_net.utils.load_run_settings(\n \"run_settings/replicate_8aug.json\", settings, force=True)\n\nmodel = pylearn2.utils.serial.load(run_settings['alt_picklepath'])\n\nc = 'train_objective'\nchannel = model.monitor.channels[c]\n\nplt.title(c)\nplt.plot(channel.example_record,channel.val_record)\n\nc = 'train_y_nll'\nchannel = model.monitor.channels[c]\nplt.title(c)\nplt.plot(channel.example_record,channel.val_record)\n\ndef plot_monitor(c = 'valid_y_nll'):\n channel = 
model.monitor.channels[c]\n plt.title(c)\n plt.plot(channel.example_record,channel.val_record)\n return None\nplot_monitor()\n\nplot_monitor(c=\"valid_objective\")", "Would actually like to know what kind of score this model gets on the check_test_score script.", "%run check_test_score.py run_settings/replicate_8aug.json", "So we can guess that the log loss score we're seeing is in fact correct. There are definitely some bugs in the ListDataset code.\nThe other model that we've run using it is the following:", "run_settings = neukrill_net.utils.load_run_settings(\n \"run_settings/online_manyaug.json\", settings, force=True)\n\nmodel = pylearn2.utils.serial.load(run_settings['alt_picklepath'])\n\nplot_monitor(c=\"valid_objective\")\n\nplot_monitor(c=\"train_objective\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
brujonildo/randomNonlinearDynamics
SIRmodel.ipynb
cc0-1.0
[ "Simple model of epidemic dynamics: SIR\nProf. Marco Arieli Herrera-Valdez,\nFacultad de Ciencias, Universidad Nacional Autónoma de México\nCreated March 7, 2016\nLet $x$, $y$, and $z$ represent the fraction of susceptibles, infected, and recovered individuals within a population. Assume homogeneous mixing with a probability of infection given a contact with an infected individual given by $\\alpha$ and an average removal time $\\beta^{-1}$ from the infected group, by recovery or death due to infection. The population dynamics are given by\n\\begin{eqnarray}\n\\partial_t x &=& -\\alpha xy\n\\\n\\partial_t y &=& \\left( \\alpha x - \\beta \\right) y\n\\\n\\partial_t z &=& \\beta y\n\\end{eqnarray}\nNotice that the population size does not matter because it is kept constant.", "#Import the necessary modules and perform the necessary tests\nimport scipy as sc\nimport pylab as gr\nsc.test(\"all\",verbose=0)\n%matplotlib inline", "Set up a python function that specifies the dynamics", "def SIR(U,t,p):\n x,y,z=U\n yNew= p[\"alpha\"] * y * x\n zNew= p[\"beta\"] * y \n dx = -yNew\n dy = yNew - zNew\n dz = zNew\n return dx, dy, dz", "The function SIR above takes three arguments, $U$, $t$, and $p$ that represent the states of the system, the time and the parameters, respectively. \nOutbreak condition\nThe condition \n\\begin{equation}\n\\frac{\\alpha}{\\beta}x(t)>1 , \\quad y>0\n\\end{equation}\ndefines a threshold for a full epidemic outbreak. An equivalent condition is \n\\begin{equation}\nx>\\frac{\\beta}{\\alpha }, \\quad y>0\n\\end{equation}\nTherefore, with the parameters $(\\alpha,\\beta)$=(0.5,0.1), there will be an outbreak if the initial condition for $x(t)>1/5$ with $y>0$. \nNotice that the initial value for $z$ can be interpreted as the initial proportion of immune individuals within the population. 
\nThe dynamics related to the outbreak condition can be studied by defining a variable $B(t) = x(t) \\alpha/\\beta$, called by some authors \"effective reproductive number\". If $x(t)\\approx 1$, the corresponding $B(t)$ is called \"basic reproductive number\", or $R_o$.\nLet's define a python dictionary containing parameters and initial conditions to perform simulations.", "p={\"alpha\": 0.15, \"beta\":0.1, \"timeStop\":300.0, \"timeStep\":0.01 }\np[\"Ro\"]=p[\"alpha\"]/p[\"beta\"]\np[\"sampTimes\"]= sc.arange(0,p[\"timeStop\"],p[\"timeStep\"])\nN= 1e4; i0= 1e1; r0=0; s0=N-i0-r0\nx0=s0/N; y0=i0/N; z0=r0/N;\np[\"ic\"]=[x0,y0,z0]\nprint(\"N=%g with initial conditions (S,I,R)=(%g,%g,%g)\"%(N,s0,i0,r0))\nprint(\"Initial conditions: \", p[\"ic\"])\nprint(\"B(0)=%g\"%(p[\"ic\"][0]*p[\"Ro\"]))", "Integrate numerically and plot the results", "# Numerical integration\nxyz= sc.integrate.odeint(SIR, p[\"ic\"], p[\"sampTimes\"], args=(p,)).transpose()\n# Calculate the outbreak indicator\nB= xyz[0]*p[\"alpha\"]/p[\"beta\"]\n\n# Figure\nfig=gr.figure(figsize=(11,5))\ngr.ioff()\nrows=1; cols=2\nax=list()\nfor n in sc.arange(rows*cols):\n ax.append(fig.add_subplot(rows,cols,n+1))\n\nax[0].plot(p[\"sampTimes\"], xyz[0], 'k', label=r\"$(t,x(t))$\")\nax[0].plot(p[\"sampTimes\"], xyz[1], 'g', lw=3, label=r\"$(t,y(t))$\")\nax[0].plot(p[\"sampTimes\"], xyz[2], 'b', label=r\"$(t,z(t))$\")\nax[0].plot(p[\"sampTimes\"], B, 'r', label=r\"$(t,B(t))$\")\nax[0].plot([0, p[\"timeStop\"]], [1,1], 'k--', alpha=0.4)\nax[1].plot(xyz[0], xyz[1], 'g', lw=3, label=r\"$(x(t),y(t))$\")\nax[1].plot(xyz[0], xyz[2], 'b', label=r\"$(x(t),z(t))$\")\nax[1].plot(xyz[0], B, 'r', label=r\"$(x(t),B(t))$\")\nax[1].plot([0, 1], [1,1], 'k--', alpha=0.4)\nax[0].legend(); ax[1].legend(loc=\"upper left\")\ngr.ion(); gr.draw()", "Notice that $y$ reaches its maximum when $B(t)$ crosses 1. That is, the epidemic starts to wind down when $B(t)<1$. 
\nExercises:\nSet up two simulations for which there is no outbreak, such that:\n(a) The initial density of \"immune\" individuals is large enough to prevent an epidemic.\n(b) The initial density of \"immune\" individuals is really small but there is no epidemic outbreak." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Hyperparticle/deep-learning-foundation
lessons/intro-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb
mit
[ "Sentiment analysis with TFLearn\nIn this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.\nWe'll start off by importing all the modules we'll need, then load and prepare the data.", "import pandas as pd\nimport numpy as np\nimport tensorflow as tf\nimport tflearn\nfrom tflearn.data_utils import to_categorical", "Preparing the data\nFollowing along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.\nRead the data\nUse the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.", "reviews = pd.read_csv('reviews.txt', header=None)\nlabels = pd.read_csv('labels.txt', header=None)", "Counting word frequency\nTo start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. 
You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.\n\nExercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.", "from collections import Counter\ntotal_counts = Counter()\nfor _, row in reviews.iterrows():\n total_counts.update(row[0].split(' '))\nprint(\"Total words in data set: \", len(total_counts))", "Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.", "vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]\nprint(vocab[:60])", "What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.", "print(vocab[-1], ': ', total_counts[vocab[-1]])", "The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.\nNote: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.\nNow for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.\n\nExercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. 
The first word in vocab has index 0, the second word has index 1, and so on.", "word2idx = {word: i for i, word in enumerate(vocab)}", "Text to vector function\nNow we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:\n\nInitialize the word vector with np.zeros, it should be the length of the vocabulary.\nSplit the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.\nFor each word in that list, increment the element in the index associated with that word, which you get from word2idx.\n\nNote: Since not all words are in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when the key is missing. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.", "def text_to_vector(text):\n word_vector = np.zeros(len(vocab), dtype=np.int_)\n for word in text.split(' '):\n idx = word2idx.get(word, None)\n if idx is None:\n continue\n else:\n word_vector[idx] += 1\n return np.array(word_vector)", "If you do this right, the following code should return\n```\ntext_to_vector('The tea is for a party to celebrate '\n 'the movie so she has no time for a cake')[:65]\narray([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,\n 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])\n```", "text_to_vector('The tea is for a party to celebrate '\n 'the movie so she has no time for a cake')[:65]", "Now, run through our entire review data set and convert each review to a word vector.", "word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)\nfor ii, (_, text) in 
enumerate(reviews.iterrows()):\n word_vectors[ii] = text_to_vector(text[0])\n\n# Printing out the first 5 word vectors\nword_vectors[:5, :23]", "Train, Validation, Test sets\nNow that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.", "Y = (labels=='positive').astype(np.int_)\nrecords = len(labels)\n\nshuffle = np.arange(records)\nnp.random.shuffle(shuffle)\ntest_fraction = 0.9\n\ntrain_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]\ntrainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split, 0], 2)\ntestX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split, 0], 2)\n\ntrainY", "Building the network\nTFLearn lets you build the network by defining the layers. \nInput layer\nFor the input layer, you just need to tell it how many units you have. For example, \nnet = tflearn.input_data([None, 100])\nwould create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.\nThe number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.\nAdding layers\nTo add new hidden layers, you use \nnet = tflearn.fully_connected(net, n_units, activation='ReLU')\nThis adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. 
The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).\nOutput layer\nThe last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.\nnet = tflearn.fully_connected(net, 2, activation='softmax')\nTraining\nTo set how you train the network, use \nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nAgain, this is passing in the network you've been building. The keywords: \n\noptimizer sets the training method, here stochastic gradient descent\nlearning_rate is the learning rate\nloss determines how the network error is calculated. In this example, with the categorical cross-entropy.\n\nFinally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like \nnet = tflearn.input_data([None, 10]) # Input\nnet = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden\nnet = tflearn.fully_connected(net, 2, activation='softmax') # Output\nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nmodel = tflearn.DNN(net)\n\nExercise: Below in the build_model() function, you'll put together the network using TFLearn. 
You get to choose how many layers to use, how many hidden units, etc.", "# Network building\ndef build_model():\n # This resets all parameters and variables, leave this here\n tf.reset_default_graph()\n \n # Inputs\n net = tflearn.input_data([None, 10000])\n\n # Hidden layer(s)\n net = tflearn.fully_connected(net, 200, activation='ReLU')\n net = tflearn.fully_connected(net, 25, activation='ReLU')\n\n # Output layer\n net = tflearn.fully_connected(net, 2, activation='softmax')\n net = tflearn.regression(net, optimizer='sgd', \n learning_rate=0.1, \n loss='categorical_crossentropy')\n \n model = tflearn.DNN(net)\n return model", "Initializing the model\nNext we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.\n\nNote: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.", "model = build_model()", "Training the network\nNow that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.\nYou can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. 
Only use the test set after you're completely done training the network.", "# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)", "Testing\nAfter you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.", "predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)\ntest_accuracy = np.mean(predictions == testY[:,0], axis=0)\nprint(\"Test accuracy: \", test_accuracy)", "Try out your own text!", "# Helper function that uses your model to predict sentiment\ndef test_sentence(sentence):\n positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]\n print('Sentence: {}'.format(sentence))\n print('P(positive) = {:.3f} :'.format(positive_prob), \n 'Positive' if positive_prob > 0.5 else 'Negative')\n\nsentence = \"Moonlight is by far the best movie of 2016.\"\ntest_sentence(sentence)\n\nsentence = \"It's amazing anyone could be talented enough to make something this spectacularly awful\"\ntest_sentence(sentence)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mndrake/PythonEuler
euler_031_040.ipynb
mit
[ "Coin sums\nProblem 31\nIn England the currency is made up of pound, £, and pence, p, and there are eight coins in general circulation: \n1p, 2p, 5p, 10p, 20p, 50p, £1 (100p) and £2 (200p).\nIt is possible to make £2 in the following way: \n1×£1 + 1×50p + 2×20p + 1×5p + 1×2p + 3×1p\nHow many different ways can £2 be made using any number of coins?", "from euler import timer\n\ndef p031():\n return len(\n [1 for p200 in range(0, 201, 200)\n for p100 in range(0,201 - p200, 100)\n for p50 in range(0,201 - p200 - p100, 50)\n for p20 in range(0,201 - p200 - p100 - p50, 20)\n for p10 in range(0,201 - p200 - p100 - p50 - p20, 10)\n for p5 in range(0,201 - p200 - p100 - p50 - p20 - p10, 5)\n for p2 in range(0,201 - p200 - p100 - p50 - p20 - p10 - p5, 2)])\n\ntimer(p031)", "Pandigital products\nProblem 32\nWe shall say that an n-digit number is pandigital if it makes use of all the digits 1 to n exactly once; for example, the 5-digit number, 15234, is 1 through 5 pandigital.\nThe product 7254 is unusual, as the identity, 39 × 186 = 7254, containing multiplicand, multiplier, and product is 1 through 9 pandigital.\nFind the sum of all products whose multiplicand/multiplier/product identity can be written as a 1 through 9 pandigital.\nHINT: Some products can be obtained in more than one way so be sure to only include it once in your sum.", "from math import sqrt\nfrom euler import Seq, timer\n\ndef isPandigital(n):\n return (range(2, int(sqrt(n)))\n >> Seq.filter(lambda x: n%x==0)\n >> Seq.map (lambda x: (str(x) + str(n/x) + str(n)) >> Seq.toSet) \n >> Seq.exists (lambda x: x == {'1','2','3','4','5','6','7','8','9'}))\n\ndef p032():\n return range(1000, 10000) >> Seq.filter(isPandigital) >> Seq.sum\n\ntimer(p032)", "Digit canceling fractions\nProblem 33\nThe fraction 49/98 is a curious fraction, as an inexperienced mathematician in attempting to simplify it may incorrectly believe that 49/98 = 4/8, which is correct, is obtained by cancelling the 9s.\nWe shall consider 
fractions like, 30/50 = 3/5, to be trivial examples.\nThere are exactly four non-trivial examples of this type of fraction, less than one in value, and containing two digits in the numerator and denominator.\nIf the product of these four fractions is given in its lowest common terms, find the value of the denominator.", "from euler import Seq, GCD, fst, snd, timer\n\ndef p033():\n\n def is_cancelling(a,b):\n a_str, b_str = str(a), str(b)\n for i in range(2):\n for j in range(2):\n if a_str[i] == b_str[j]:\n return float(a_str[not i]) / float(b_str[not j]) == float(a) / float(b)\n return False\n\n def numbers(n):\n return range(n,100) >> Seq.filter(lambda x: (x%10 != 0) & (x%10 != x/10))\n\n fraction = (numbers(10)\n >> Seq.collect(lambda x: numbers(x+1) >> Seq.map(lambda y: (x,y)))\n >> Seq.filter(lambda (x,y): is_cancelling(x,y))\n >> Seq.reduce(lambda x,y: (fst(x)*fst(y), snd(x)*snd(y))))\n\n # then define the denominator by the greatest common divisor \n return snd(fraction) / GCD(fst(fraction), snd(fraction))\n\ntimer(p033)", "Digit factorials\nProblem 34\n145 is a curious number, as 1! + 4! + 5! = 1 + 24 + 120 = 145.\nFind the sum of all numbers which are equal to the sum of the factorial of their digits.\nNote: as 1! = 1 and 2! 
= 2 are not sums they are not included.", "from math import factorial\nfrom euler import Seq, fst, timer\n\ndef p034():\n\n def factsum(n):\n acc = 0\n while n >= 1:\n acc += factorial(n%10)\n n /= 10\n return acc\n\n max_n = (fst(Seq.initInfinite(lambda x: (x, x * factorial(9)))\n >> Seq.find(lambda (a,b): (10 ** a - 1) > b)) - 1) * factorial(9)\n\n def nums():\n for i in range(3, max_n + 1):\n if i == factsum(i):\n yield i\n\n return nums() >> Seq.sum\n\ntimer(p034)", "Circular primes\nProblem 35\nThe number, 197, is called a circular prime because all rotations of the digits: 197, 971, and 719, are themselves prime.\nThere are thirteen such primes below 100: 2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, and 97.\nHow many circular primes are there below one million?", "from euler import Seq, primes, timer\n\ndef p035():\n\n def contains_even(n):\n return str(n) >> Seq.map(int) >> Seq.exists(lambda x: x%2==0)\n\n def shift(n):\n str_n = str(n)\n return int(str_n[1:] + str_n[0])\n\n def circle(n):\n yield n\n m = shift(n)\n while m != n:\n yield m\n m = shift(m)\n\n p = (primes() \n >> Seq.filter(lambda n: not(contains_even(n)))\n >> Seq.takeWhile(lambda x: x<1000000) \n >> Seq.toList)\n\n def next_p(n): return p >> Seq.find(lambda m: m > n)\n\n n = 2\n\n while n is not None:\n if not(all((i in p) for i in circle(n))):\n for i in circle(n):\n if i in p:\n p.remove(i)\n n = next_p(n)\n\n return (p >> Seq.length) + 1\n\ntimer(p035)", "Double-base palindromes\nProblem 36\nThe decimal number, $585 = 1001001001_2$ (binary), is palindromic in both bases.\nFind the sum of all numbers, less than one million, which are palindromic in base 10 and base 2.\n(Please note that the palindromic number, in either base, may not include leading zeros.)", "from euler import Seq, timer\n\ndef p036():\n\n def dec_is_palindrome(n):\n return str(n)[::-1] == str(n)\n\n def bin_is_palindrome(n):\n a = (Seq.unfold(lambda x: (x%2, x/2) if (x != 0) else None, n)\n >> Seq.toList)\n return a == 
list(reversed(a))\n\n return (\n range(1,1000001)\n >> Seq.filter(dec_is_palindrome)\n >> Seq.filter(bin_is_palindrome)\n >> Seq.sum)\n\ntimer(p036)", "Truncatable primes\nProblem 37\nThe number 3797 has an interesting property. Being prime itself, it is possible to continuously remove digits from left to right, and remain prime at each stage: 3797, 797, 97, and 7. Similarly we can work from right to left: 3797, 379, 37, and 3.\nFind the sum of the only eleven primes that are both truncatable from left to right and right to left.\nNOTE: 2, 3, 5, and 7 are not considered to be truncatable primes.", "from euler import Seq, primes, is_prime, timer\n\ndef p037():\n\n def is_truncatable_prime(n):\n x = str(n)\n for i in range(1,len(x)):\n if not(is_prime(int(x[i:])) & is_prime(int(x[:i]))):\n return False\n return True\n \n return (\n primes()\n >> Seq.skipWhile(lambda x: x <= 7)\n >> Seq.filter(is_truncatable_prime)\n >> Seq.take(11)\n >> Seq.sum)\n\ntimer(p037)", "Pandigital multiples\nProblem 38\nTake the number 192 and multiply it by each of 1, 2, and 3:\n192 × 1 = 192\n192 × 2 = 384\n192 × 3 = 576\nBy concatenating each product we get the 1 to 9 pandigital, 192384576. We will call 192384576 the concatenated product of 192 and (1,2,3)\nThe same can be achieved by starting with 9 and multiplying by 1, 2, 3, 4, and 5, giving the pandigital, 918273645, which is the concatenated product of 9 and (1,2,3,4,5).\nWhat is the largest 1 to 9 pandigital 9-digit number that can be formed as the concatenated product of an integer with (1,2, ... 
, n) where n > 1?", "from euler import Seq, timer\n\n# largest integer to test is 9876 (2*x concat x)\n\ndef p038():\n\n def get_pandigital(num):\n i = 0\n concat_num = ''\n while len(concat_num) < 9:\n i += 1\n concat_num += str(num * i)\n if (len(concat_num) == 9) and (sorted(map(int, concat_num)) == range(1,10)):\n return int(concat_num)\n else:\n return None\n\n return max(get_pandigital(n) for n in range(9876,0,-1))\n\ntimer(p038)", "Integer right triangles\nProblem 39\nIf $p$ is the perimeter of a right angle triangle with integral length sides, ${a,b,c}$, there are exactly three solutions for $p = 120$.\n${20,48,52}, {24,45,51}, {30,40,50}$\nFor which value of $p ≤ 1000$, is the number of solutions maximised?", "from euler import Seq, timer\n\ndef p039():\n\n def sols(p):\n return sum(1 for a in range(1,p-1)\n for b in range(a, p-a)\n if (p - a - b) ** 2 == a ** 2 + b ** 2)\n\n return range(3, 1001) >> Seq.maxBy(sols)\n\ntimer(p039)", "Champernowne's constant\nProblem 40\nAn irrational decimal fraction is created by concatenating the positive integers:\n0.123456789101112131415161718192021...\nIt can be seen that the 12th digit of the fractional part is 1.\nIf $d_n$ represents the nth digit of the fractional part, find the value of the following expression.\n$d_1 × d_{10} × d_{100} × d_{1000} × d_{10000} × d_{100000} × d_{1000000}$", "from euler import timer\n\ndef p040(): \n s = \"\".join(range(1,500001) >> Seq.map(str))\n\n return (\n Seq.init(7, lambda i: int(s[10 ** i - 1])) \n >> Seq.reduce(lambda x,y: x*y))\n \ntimer(p040)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
poppy-project/community-notebooks
tutorials-education/poppy-humanoid_balance_leg_math.ipynb
lgpl-3.0
[ "Mathematical method to keep Poppy vertical\nWhen you want to move the motors of the leg, you cannot do whatever you want, because Poppy can fall if it is not balanced.\nSo a very simple way to move the leg without any external perturbation (no wind, flat floor, no move of the upper body) is to keep the upper body vertically above the ankle.\nTo do that, a simple calculation of the angles of the ankle, knee and hip can do the job.\nBasically, the shin and the thigh form a triangle with a knee angle. So if you determine the angle of the knee, what you want is just to calculate the angles of the ankle and the hip to keep Poppy in the vertical position.\n\nTo calculate the missing angles, we can use the law of sines and the Al-Kashi theorem. \nMore information here.", "%pylab inline\nfrom math import *\n\nclass leg_angle:\n def __init__(self,knee=0):\n # different lengths of Poppy in cm\n self.upper_body = 40.0\n self.shin = 18.0\n self.thigh = 18.0\n # the angle of the knee\n self.knee = radians(knee)\n gamma = radians(180 - knee)\n # Al-Kashi theorem to calculate the c side and the missing angles\n c = sqrt(self.shin**2+self.thigh**2-2*self.shin*self.thigh*cos(gamma))\n self.c = c\n self.hip = -acos((self.thigh**2+c**2-self.shin**2)/(2*self.thigh*c))\n self.ankle = -acos((self.shin**2+c**2-self.thigh**2)/(2*self.shin*c))\n # The height of the leg and the foot gap\n self.high = c\n self.foot_gap = 0.0\n def update_knee(self,knee):\n self.knee = radians(knee)\n gamma = radians(180 - knee)\n # Al-Kashi theorem to calculate the c side\n c = sqrt(self.shin**2+self.thigh**2-2*self.shin*self.thigh*cos(gamma))\n self.c = c\n self.hip = -acos((self.thigh**2+c**2-self.shin**2)/(2*self.thigh*c))\n self.ankle = -acos((self.shin**2+c**2-self.thigh**2)/(2*self.shin*c))\n self.high = sqrt(c**2-self.foot_gap**2)\n def update_foot_gap(self,foot_gap):\n if foot_gap >= 0 :\n s = 1\n else :\n s=-1\n self.foot_gap = foot_gap\n # move the foot but keep the height constant\n c = sqrt(foot_gap**2+self.high**2)\n self.c 
= c\n alpha = acos((self.thigh**2+c**2-self.shin**2)/(2*self.thigh*c))\n beta = acos((self.shin**2+c**2-self.thigh**2)/(2*self.shin*c))\n gamma = acos((self.shin**2+self.thigh**2-self.c**2)/(2*self.shin*self.thigh))\n self.knee = pi - gamma\n self.hip = -(alpha + s*acos(self.high/c))\n self.ankle = -(beta - s*acos(self.high/c))\n def update_high(self,high):\n if self.foot_gap >= 0 :\n s = 1\n else :\n s=-1\n self.high = high\n c = sqrt(self.foot_gap**2+self.high**2)\n self.c = c\n alpha = acos((self.thigh**2+c**2-self.shin**2)/(2*self.thigh*c))\n beta = acos((self.shin**2+c**2-self.thigh**2)/(2*self.shin*c))\n gamma = acos((self.shin**2+self.thigh**2-self.c**2)/(2*self.shin*self.thigh))\n self.knee = pi - gamma\n self.hip = -(alpha + s*acos(self.high/c))\n self.ankle = -(beta - s*acos(self.high/c))\n def gravity_center_front(self,d_thigh):\n c = sqrt(self.foot_gap**2+self.high**2)\n self.c = c\n alpha = acos(((self.thigh+d_thigh)**2+c**2-self.shin**2)/(2*(self.thigh+d_thigh)*c))\n beta = acos((self.shin**2+c**2-(self.thigh+d_thigh)**2)/(2*self.shin*c))\n gamma = acos((self.shin**2+(self.thigh+d_thigh)**2-self.c**2)/(2*self.shin*(self.thigh+d_thigh)))\n self.knee = pi - gamma\n self.hip = -(alpha + acos(self.high/c))\n self.ankle = -(beta - acos(self.high/c))\n gamma = pi+self.hip\n self.hip = -(pi-gamma-asin(((d_thigh*sin(gamma)))/self.upper_body))\n ", "Now, you need the robot and the V-REP time.", "from poppy.creatures import PoppyHumanoid\n\npoppy = PoppyHumanoid(simulator='vrep')\n\n\n\n\nimport time as real_time\nclass time:\n def __init__(self,robot):\n self.robot=robot\n def time(self):\n t_simu = self.robot.current_simulation_time\n return t_simu\n def sleep(self,t):\n t0 = self.robot.current_simulation_time\n while (self.robot.current_simulation_time - t0) < t-0.01:\n real_time.sleep(0.001)\n\ntime = time(poppy)\nprint time.time()\ntime.sleep(0.025) #0.025 is the minimum step according to the V-REP defined dt \nprint time.time()", "It is now possible to 
define a mobility in percentage, according to the angle limit of the ankle.", "class leg_move(leg_angle):\n def __init__(self,motor_limit,knee=0):\n self.ankle_limit_front=radians(motor_limit.angle_limit[1])\n self.ankle_limit_back=radians(motor_limit.angle_limit[0])\n leg_angle.__init__(self,knee)\n \n def update_foot_gap_percent(self,foot_gap_percent):\n # calculation of foot_gap_max to convert foot_gap_percent into a value\n if foot_gap_percent>=0:# if the foot_gap is positive\n if acos(self.high/(self.shin+self.thigh)) > self.ankle_limit_front:\n # construction 1 knee!=0\n gap1 = sin(self.ankle_limit_front)*self.shin\n high1 = cos(self.ankle_limit_front)*self.shin\n high2 = self.high - high1\n gap2 = sqrt(self.thigh**2-high2**2)\n foot_gap_max = gap1 + gap2\n foot_gap = foot_gap_percent * foot_gap_max / 100\n self.update_foot_gap(foot_gap)\n else:\n #construction 2 knee=0\n foot_gap_max = sqrt((self.shin+self.thigh)**2-self.high**2)\n foot_gap = foot_gap_percent * foot_gap_max / 100\n self.update_foot_gap(foot_gap)\n if foot_gap_percent<0:\n if -acos((self.high-self.thigh)/self.shin )< self.ankle_limit_back:\n #construction 1 knee!=0\n print degrees(self.ankle_limit_back)\n print degrees(-acos((self.high-self.thigh)/self.shin ))\n gap1 = sin(self.ankle_limit_back)*self.shin\n high1 = cos(self.ankle_limit_back)*self.shin\n high2 = self.high - high1\n print gap1,high1,high2\n gap2 = sqrt(self.thigh**2-high2**2)\n print gap1,gap2,high1,high2\n foot_gap_max = gap1 + gap2\n foot_gap = -foot_gap_percent * foot_gap_max / 100\n self.update_foot_gap(foot_gap)\n else:\n #construction 2 knee=0\n foot_gap_max = sqrt((self.shin+self.thigh)**2-self.high**2)\n foot_gap = foot_gap_percent * foot_gap_max / 100\n self.update_foot_gap(foot_gap)\n \n def update_high_percent(self,high_percent,high_min,high_max):\n high_var = high_max-high_min\n high = (high_percent*high_var/100)+high_min\n self.update_high(high)\n \n def high_limit(self):\n high_max = 
sqrt((self.shin+self.thigh)**2-self.foot_gap**2)\n high1_min = cos(self.ankle_limit_back)*self.shin\n gap2 = self.foot_gap-sin(self.ankle_limit_back)*self.shin\n # if gap2 is greater than thigh, then it is no longer the ankle flexion that is the limiting factor\n # in that case we set the height to zero\n if gap2 <= self.thigh:\n high2_min = sqrt(self.thigh**2-gap2**2)\n high_min = high1_min + high2_min\n else:\n high_min = 0\n return [high_min,high_max]\n \n ", "Finally, a primitive can set the height and the foot gap of Poppy.", "from pypot.primitive import Primitive\n\nclass leg_primitive(Primitive):\n def __init__(self,robot,speed,knee=0):\n self.right = leg_move(robot.l_ankle_y,knee)# we should use r_ankle_y here, but its angle limits seem wrong; they are the opposite\n self.left = leg_move(robot.l_ankle_y,knee)\n self.robot = robot\n Primitive.__init__(self, robot)\n self.high_percent = 100\n self.r_foot_gap_percent = 0\n self.l_foot_gap_percent = 0\n self.speed = speed\n \n def run(self): \n if self.high_percent !=-1:\n high_limit=(max([self.right.high_limit()[0],self.left.high_limit()[0]]),min([self.right.high_limit()[1],self.left.high_limit()[1]]))\n self.right.update_high_percent(self.high_percent,high_limit[0],high_limit[1])\n self.left.update_high_percent(self.high_percent,high_limit[0],high_limit[1])\n \n if self.r_foot_gap_percent !=-1: \n self.right.update_foot_gap_percent(self.r_foot_gap_percent)\n \n if self.l_foot_gap_percent !=-1: \n self.left.update_foot_gap_percent(self.l_foot_gap_percent)\n \n print \"left - ankle\" ,degrees(self.left.ankle),'knee', degrees(self.left.knee),'hip', degrees(self.left.hip), 'high', self.left.high,'foot_gap',self.left.foot_gap\n print \"right - ankle\" ,degrees(self.right.ankle),'knee', degrees(self.right.knee),'hip', degrees(self.right.hip), 'high', self.right.high,'foot_gap',self.right.foot_gap\n \n \n self.robot.l_ankle_y.goto_position(degrees(self.left.ankle),self.speed)\n 
self.robot.r_ankle_y.goto_position(degrees(self.right.ankle),self.speed)\n\n self.robot.l_knee_y.goto_position(degrees(self.left.knee),self.speed)\n self.robot.r_knee_y.goto_position(degrees(self.right.knee),self.speed)\n\n self.robot.l_hip_y.goto_position(degrees(self.left.hip),self.speed)\n self.robot.r_hip_y.goto_position(degrees(self.right.hip),self.speed,wait=True)\n \n ", "It is now possible to set the high and the foot gap using the leg_primitive.", "leg=leg_primitive(poppy,speed=3)\n\nleg.start()\ntime.sleep(1)\n\ntime.sleep(1)\nleg.speed=3\nleg.high_percent=50\nleg.r_foot_gap_percent=20\nleg.l_foot_gap_percent=-20\nleg.start()\ntime.sleep(3)\nleg.high_percent=100\nleg.r_foot_gap_percent=-1\nleg.l_foot_gap_percent=-1\nleg.start()\ntime.sleep(3)\nleg.high_percent=0\nleg.start()\ntime.sleep(3)\nleg.high_percent=80\nleg.r_foot_gap_percent=-20\nleg.l_foot_gap_percent=20\nleg.start()\ntime.sleep(3)\nleg.r_foot_gap_percent=-1\nleg.l_foot_gap_percent=-1\nleg.high_percent=0\nleg.start()\ntime.sleep(3)\nleg.high_percent=100\nleg.r_foot_gap_percent=0\nleg.l_foot_gap_percent=0\nleg.start()\ntime.sleep(3)\n\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
netmanchris/PYHPEIMC
examples/Working with Network Assets.ipynb
apache-2.0
[ "Serial Numbers, How I love thee...\nNo one really likes serial numbers, but keeping track of them is one of the \"brushing your teeth\" activities that everyone needs to take care of. It's like eating your Brussels sprouts. Or listening to your mom. You're just better off if you do it quickly as it just gets more painful over time.\nNot only is it just good hygiene, but you may be subject to regulations, like eRate in the United States where you have to be able to report on the location of any device by serial number at any point in time. \n\nTrust me, having to play hide-and-go-seek with an SSH session is not something you want to do when government auditors are looking for answers.\n\nI'm sure you've already guessed what I'm about to say, but I'll say it anyway...\n\nThere's an API for that!!!\n\nThe HPE IMC base platform has a great network assets function that automatically gathers all the details of your various devices, assuming of course they support RFC 4133, otherwise known as the Entity MIB. On the bright side, most vendors have chosen to support this standards-based MIB, so chances are you're in good shape. \nAnd if they don't support it, they really should. You should ask them. Ok?\nSo without further ado, let's get started.\nImporting the required libraries\nI'm sure you're getting used to this part, but it's important to know where to look for these different functions. In this case, we're going to look at a new library that is specifically designed to deal with network assets, including serial numbers.", "from pyhpeimc.auth import *\nfrom pyhpeimc.plat.netassets import *\nimport csv\n\nauth = IMCAuth(\"http://\", \"10.101.0.203\", \"8080\", \"admin\", \"admin\")\n\nciscorouter = get_dev_asset_details('10.101.0.1', auth.creds, auth.url)", "How many assets in a Cisco Router?\nAs some of you may have heard, HPE IMC is a multi-vendor tool and offers support for many of the common devices you'll see in your daily travels. 
\nIn this example, we're going to use a Cisco 2811 router to showcase the basic function.\nRouters, like chassis switches, have multiple components. As anyone who's ever been the ~~victim~~ owner of a Smartnet contract, you'll know that you have individual components which have serial numbers as well and all of them have to be reported for them to be covered. So let's see if we managed to grab all of those by first checking out how many individual items we got back in the asset list for this Cisco router.", "len(ciscorouter)", "What's in the box???\nNow that we've got an idea of how many assets are in here, let's take a look to see exactly what's in one of the asset records to see if there's anything useful in here.", "ciscorouter[0]", "What can we do with this?\nWith some basic Python string manipulation we could easily print out some of the attributes that we want into what could easily turn into a nicely formatted report. \nAgain realise that the example below is just a subset of what's available in the JSON above. If you want more, just add it to the list.", "for i in ciscorouter:\n print (\"Device Name: \" + i['deviceName'] + \" Device Model: \" + i['model'] +\n \"\\nAsset Name is: \" + i['name'] + \" Asset Serial Number is: \" + \n i['serialNum']+ \"\\n\")", "Why not just write that to disk?\nAlthough we could go directly to the formatted report without a lot of extra work, we would be losing a lot of data which we may have use for later. Instead why don't we export all the available data from the JSON above into a CSV file which can be later opened in your favourite spreadsheet viewer and manipulated to your heart's content.\nPretty cool, no?", "keys = ciscorouter[0].keys()\nwith open('ciscorouter.csv', 'w') as file:\n dict_writer = csv.DictWriter(file, keys)\n dict_writer.writeheader()\n dict_writer.writerows(ciscorouter)", "Reading it back\nNow we'll read it back from disk to make sure it worked properly. 
When working with data like this, I find it useful to think about who's going to be consuming the data. For example, when looking at this, remember this is a CSV file which can be easily opened in Python, or something like Microsoft Excel to manipulate further. It's not really intended to be read by human beings in this particular format. You'll need another program to consume and munge the data first to turn it into something human-consumable.", "with open('ciscorouter.csv') as file:\n print (file.read())", "What about all my serial numbers at once?\nThat's a great question! I'm glad you asked. One of the most beautiful things about learning to automate things like asset gathering through an API is that it's often not much more work to do something 1000 times than it is to do it a single time. \nThis time, instead of using the get_dev_asset_details function that we used above, which gets us all the assets associated with a single device, let's grab ALL the devices at once.", "all_assets = get_dev_asset_details_all(auth.creds, auth.url)\n\nlen (all_assets)", "That's a lot of assets!\nExactly why we automate things. Now let's write the all_assets list to disk as well. \n**note: for reasons unknown to me at this time, although the majority of the assets have 27 different fields, a few of them actually have 28 different attributes. Something I'll have to dig into later.", "keys = all_assets[0].keys()\nwith open('all_assets.csv', 'w') as file:\n dict_writer = csv.DictWriter(file, keys)\n dict_writer.writeheader()\n dict_writer.writerows(all_assets)", "Well, that's not good....\nSo it looks like there are a few network assets that have a different number of attributes than the first one in the list. 
We'll write some quick code to figure out how big of a problem this is.", "print (\"The length of the first item's keys is \" + str(len(keys)))\nfor i in all_assets:\n if len(i) != len(all_assets[0].keys()):\n print (\"The length of index \" + str(all_assets.index(i)) + \" is \" + str(len(i.keys())))", "Well that's not so bad\nIt looks like the items which don't have exactly 27 attributes have exactly 28 attributes. So we'll just pick one of the longer ones to use as the headers for our CSV file and then run the script again.\nFor this one, I'm going to ask you to trust me that the file is on disk and save us all the trouble of having to print out 1013 separate assets into this blog post.", "keys = all_assets[879].keys()\nwith open ('all_assets.csv', 'w') as file:\n dict_writer = csv.DictWriter(file, keys)\n dict_writer.writeheader()\n dict_writer.writerows(all_assets)", "What's next?\nSo now that we've got all of our assets into a CSV file which is easily consumable by something like Excel, you can now choose what to do with the data.\nFor me it's interesting to see how vendors internally instrument their boxes. Some have serial numbers on power supplies and fans, some don't. Some use the standard way of doing things. Some don't. \nFrom an operations perspective, not all gear is created equal and it's nice to understand what's supported when trying to make a purchasing choice for something you're going to have to live with for the next few years. \nIf you're looking at your annual SMARTnet upgrade, at least you've now got a way to easily audit all of your discovered environment and figure out what line cards need to be tied to a particular contract.\nOr you could just look at another vendor who makes your life easier. Entirely your choice. \n@netmanchris" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
martinjrobins/hobo
examples/toy/model-logistic.ipynb
bsd-3-clause
[ "Logistic population growth model\nThis example shows how the Logistic model can be used.\nThis model describes the growth of a population, from an initial starting point up to a maximum carrying capacity $k$. The rate of growth is given by a second parameter, $r$.\n$$ y(t) = \\frac{k}{1 + (k / p_0 - 1) e^{-r t}} $$", "import pints\nimport pints.toy\nimport matplotlib.pyplot as plt\nimport numpy as np\n\np0 = 2 # initial population; initial value\nmodel = pints.toy.LogisticModel(p0)", "Parameters are given in the order (r, k).", "times = np.linspace(0, 100, 100)\nr = 0.1\nk = 50\nvalues = model.simulate((r, k), times)\n\nplt.figure(figsize=(15,2))\nplt.xlabel('t')\nplt.ylabel('y (Population)')\nplt.plot(times, values)\nplt.show()", "We can see that, starting from p0 = 2, the model quickly approaches the carrying capacity k = 50.\nWe can test that, if we wait long enough, we get very close to $k$:", "print(model.simulate((r, k), [40]))\nprint(model.simulate((r, k), [80]))\nprint(model.simulate((r, k), [120]))\nprint(model.simulate((r, k), [160]))\nprint(model.simulate((r, k), [200]))\nprint(model.simulate((r, k), [240]))\nprint(model.simulate((r, k), [280]))", "This model also provides sensitivities: derivatives $\\frac{\\partial y}{\\partial p}$ of the output $y$ with respect to the parameters $p$.", "values, sensitivities = model.simulateS1((r, k), times)", "We can plot these sensitivities, to see where the model is sensitive to each of the parameters:", "plt.figure(figsize=(15,7))\n\nplt.subplot(3, 1, 1)\nplt.ylabel('y (Population)')\nplt.plot(times, values)\n\nplt.subplot(3, 1, 2)\nplt.ylabel(r'$\\partial y/\\partial r$')\nplt.plot(times, sensitivities[:, 0])\n\nplt.subplot(3, 1, 3)\nplt.xlabel('t')\nplt.ylabel(r'$\\partial y/\\partial k$')\nplt.plot(times, sensitivities[:, 1])\n\nplt.show()", "For $\\frac{\\partial y}{\\partial r}$ (middle plot), we see that the model reacts most strongly to changes in $r$ at $t \\approx 35$, just when the population size is 
changing fastest. This makes a lot of sense: since $r$ determines the rate of growth it is most clearly visible during the period of greatest growth.\nFor $\\frac{\\partial y}{\\partial k}$ (bottom plot) the result is very similar to the graph of $y$ itself: The carrying capacity is hard to tell at the very beginning, as the population hasn't begun to grow much yet. As growth increases it becomes easier to predict what the final population size will be. Finally, when the carrying capacity has been reached the value of $y \\approx k$, so that the derivative is almost 1.\nFor parameter estimation, this tells us something interesting: Namely that we need to know about the final stages of the process to get an accurate estimate of $k$, while estimating $r$ requires the period of greatest growth to be present in the signal." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.17/_downloads/5c761b4eaf61d9e6642d568c8bc535a2/plot_source_power_spectrum.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute power spectrum densities of the sources with dSPM\nReturns an STC file containing the PSD (in dB) of each of the sources.", "# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\nfrom mne.minimum_norm import read_inverse_operator, compute_source_psd\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nfname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'\nfname_label = data_path + '/MEG/sample/labels/Aud-lh.label'\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname, verbose=False)\nevents = mne.find_events(raw, stim_channel='STI 014')\ninverse_operator = read_inverse_operator(fname_inv)\nraw.info['bads'] = ['MEG 2443', 'EEG 053']\n\n# picks MEG gradiometers\npicks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,\n stim=False, exclude='bads')\n\ntmin, tmax = 0, 120 # use the first 120s of data\nfmin, fmax = 4, 100 # look at frequencies between 4 and 100Hz\nn_fft = 2048 # the FFT size (n_fft). Ideally a power of 2\nlabel = mne.read_label(fname_label)\n\nstc = compute_source_psd(raw, inverse_operator, lambda2=1. / 9., method=\"dSPM\",\n tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax,\n pick_ori=\"normal\", n_fft=n_fft, label=label,\n dB=True)\n\nstc.save('psd_dSPM')", "View PSD of sources in label", "plt.plot(1e3 * stc.times, stc.data.T)\nplt.xlabel('Frequency (Hz)')\nplt.ylabel('PSD (dB)')\nplt.title('Source Power Spectrum (PSD)')\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
blua/deep-learning
language-translation/dlnd_language_translation_23.ipynb
mit
[ "Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of each sentence from target_text. 
This will help the neural network predict when the sentence should end.\nYou can get the &lt;EOS&gt; word id by doing:\npython\ntarget_vocab_to_int['&lt;EOS&gt;']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.", "def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n # TODO: Implement Function\n\n x = [[source_vocab_to_int.get(word, 0) for word in sentence.split()] \\\n for sentence in source_text.split('\\n')]\n y = [[target_vocab_to_int.get(word, 0) for word in sentence.split()] \\\n for sentence in target_text.split('\\n')]\n \n source_id_text = []\n target_id_text = []\n\n for i in range(len(x)):\n n1 = len(x[i])\n n2 = len(y[i])\n source_id_text.append(x[i])\n target_id_text.append(y[i] + [target_vocab_to_int['<EOS>']])\n\n return (source_id_text, target_id_text)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. 
The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()", "Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoding_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\n\nReturn the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability)", "def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, learning rate, and keep probability.\n :return: Tuple (input, targets, learning rate, keep probability)\n \"\"\"\n # TODO: Implement Function\n input_text = tf.placeholder(tf.int32,[None, None], name=\"input\")\n target_text = tf.placeholder(tf.int32,[None, None], name=\"targets\")\n learning_rate = tf.placeholder(tf.float32, name=\"learning_rate\")\n keep_prob = tf.placeholder(tf.float32, name=\"keep_prob\")\n\n return input_text, target_text, learning_rate, keep_prob\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)", "Process Decoding Input\nImplement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concatenate the GO ID to the beginning of each batch.", "def process_decoding_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for decoding\n :param target_data: Target Placeholder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n # TODO: Implement Function\n \n ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)\n\n return dec_input\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_decoding_input(process_decoding_input)", "Encoding\nImplement encoding_layer() to create an Encoder RNN 
layer using tf.nn.dynamic_rnn().", "def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :return: RNN state\n \"\"\"\n # TODO: Implement Function\n \n enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])\n enc_cell_drop = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=keep_prob)\n _, enc_state = tf.nn.dynamic_rnn(enc_cell_drop, rnn_inputs, dtype=tf.float32)\n \n return enc_state\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)", "Decoding - Training\nCreate training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.", "def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,\n output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param sequence_length: Sequence Length\n :param decoding_scope: TensorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Train Logits\n \"\"\"\n # TODO: Implement Function\n \n train_dec_fm = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)\n \n train_logits_drop, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, train_dec_fm, \\\n dec_embed_input, sequence_length, scope=decoding_scope)\n\n \n train_logits = output_fn(train_logits_drop)\n \n # Note: keep_prob is not used directly here; dropout is applied by wrapping dec_cell in a DropoutWrapper inside decoding_layer\n \n return train_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)", "Decoding - Inference\nCreate inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().", "def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,\n maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS Id\n :param maximum_length: Maximum length of the output sequence\n :param vocab_size: Size of vocabulary\n :param decoding_scope: TensorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Inference Logits\n \"\"\"\n # TODO: Implement Function\n infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(\n output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, \n maximum_length, vocab_size)\n inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)\n \n # keep_prob is again handled by the DropoutWrapper applied to dec_cell in decoding_layer\n\n return inference_logits\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)", "Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nCreate an RNN cell for decoding using rnn_size and num_layers.\nCreate the output function using a lambda to transform its input, logits, to class logits.\nUse your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training 
logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.", "def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,\n num_layers, target_vocab_to_int, keep_prob):\n \"\"\"\n Create decoding layer\n :param dec_embed_input: Decoder embedded input\n :param dec_embeddings: Decoder embeddings\n :param encoder_state: The encoded state\n :param vocab_size: Size of vocabulary\n :param sequence_length: Sequence Length\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param keep_prob: Dropout keep probability\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n # TODO: Implement Function\n \n dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])\n dec_cell_drop = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)\n \n # Output Layer\n output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size,\\\n None, scope=decoding_scope)\n \n with tf.variable_scope(\"decoding\") as decoding_scope:\n\n train_logits = decoding_layer_train(encoder_state, dec_cell_drop, dec_embed_input,\\\n sequence_length, decoding_scope, output_fn, keep_prob)\n\n with tf.variable_scope(\"decoding\", reuse=True) as decoding_scope:\n\n infer_logits = decoding_layer_infer(encoder_state, dec_cell_drop, dec_embeddings,\\\n target_vocab_to_int['<GO>'],target_vocab_to_int['<EOS>'], sequence_length,\\\n vocab_size, decoding_scope, output_fn, keep_prob)\n\n return train_logits, infer_logits\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)", "Build the Neural 
Network\nApply the functions you implemented above to:\n\nApply embedding to the input data for the encoder.\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).\nProcess target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.\nApply embedding to the target data for the decoder.\nDecode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).", "def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param sequence_length: Sequence Length\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Encoder embedding size\n :param dec_embedding_size: Decoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n # TODO: Implement Function\n \n embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)\n\n encoder_state = encoding_layer(embed_input, rnn_size, num_layers, keep_prob)\n \n processed_target_data = process_decoding_input(target_data, target_vocab_to_int, batch_size)\n \n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))\n \n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, processed_target_data)\n \n train_logits, infer_logits = 
decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size,\\\n sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)\n \n \n return train_logits, infer_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability", "# Number of Epochs\nepochs = 20\n# Batch Size\nbatch_size = 512\n# RNN Size\nrnn_size = 512\n# Number of Layers\nnum_layers = 1\n# Embedding Size\nencoding_embedding_size = 512\ndecoding_embedding_size = 512\n# Learning Rate\nlearning_rate = 0.001\n# Dropout Keep Probability\nkeep_probability = 0.6", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_target_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob = model_inputs()\n sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n \n train_logits, inference_logits = seq2seq_model(\n tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),\n encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)\n\n 
tf.identity(inference_logits, 'logits')\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n train_logits,\n targets,\n tf.ones([input_shape[0], sequence_length]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport time\n\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1]), (0,0)],\n 'constant')\n\n return np.mean(np.equal(target, np.argmax(logits, 2)))\n\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\n\nvalid_source = helper.pad_sentence_batch(source_int_text[:batch_size])\nvalid_target = helper.pad_sentence_batch(target_int_text[:batch_size])\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch) in enumerate(\n helper.batch_data(train_source, train_target, batch_size)):\n start_time = time.time()\n \n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n sequence_length: target_batch.shape[1],\n keep_prob: keep_probability})\n \n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch, keep_prob: 
1.0})\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_source, keep_prob: 1.0})\n \n train_acc = get_accuracy(target_batch, batch_train_logits)\n valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)\n end_time = time.time()\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')", "Save Parameters\nSave the batch_size and save_path parameters for inference.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()", "Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. 
Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary to the &lt;UNK&gt; word id.", "def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n # TODO: Implement Function\n words = sentence.split(\" \")\n \n word_ids = []\n \n for word in words:\n word = word.lower()\n if word in vocab_to_int:\n word_id = vocab_to_int[word]\n else:\n word_id = vocab_to_int['<UNK>']\n word_ids.append(word_id)\n\n return word_ids\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)", "Translate\nThis will translate translate_sentence from English to French.", "translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('logits:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))\nprint(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))", "Imperfect Translation\nYou might notice that some 
sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words out of the thousands that English uses, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has a larger vocabulary and is richer in the topics discussed. However, this will take days to train, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as an HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
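The decoder-input preprocessing described above (process_decoding_input: remove the last word id from each target row, then prepend the GO id) can be illustrated without TensorFlow. This is a minimal pure-Python sketch; the GO_ID value and the toy batch are made-up illustrations, not project data:

```python
GO_ID = 1  # hypothetical id for the <GO> token

def process_decoding_input_py(target_batch, go_id):
    """Pure-Python sketch of process_decoding_input: strip the last id, prepend <GO>."""
    # For each row, drop the final token (the decoder never consumes it as input)
    # and put the <GO> id in front so the decoder knows where to start.
    return [[go_id] + row[:-1] for row in target_batch]

batch = [[4, 5, 6, 3],   # 3 standing in for <EOS>
         [7, 8, 9, 3]]
print(process_decoding_input_py(batch, GO_ID))
# [[1, 4, 5, 6], [1, 7, 8, 9]]
```

The TensorFlow version does the same per-batch slicing with tf.strided_slice and tf.concat, just on tensors of shape [batch_size, time].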
thushear/MLInAction
kaggle/Feature_engineering_and_model_tuning/Feature-engineering_and_Parameter_Tuning_XGBoost/XGBoost models tuning.ipynb
apache-2.0
[ "XGBoost model tuning\nLoad the libraries we will use", "import pandas as pd\nimport numpy as np\nimport xgboost as xgb\nfrom xgboost.sklearn import XGBClassifier\nfrom sklearn import cross_validation, metrics\nfrom sklearn.grid_search import GridSearchCV\n\nimport matplotlib.pylab as plt\n%matplotlib inline\nfrom matplotlib.pylab import rcParams\nrcParams['figure.figsize'] = 12, 4", "Load the data\nThe previous IPython notebook already performed the following feature preprocessing:\n1. City dropped because it has too many categories\n2. DOB used to derive an Age field, then the original field dropped\n3. EMI_Loan_Submitted_Missing set to 1 (EMI_Loan_Submitted present) or 0 (EMI_Loan_Submitted missing); EMI_Loan_Submitted dropped\n4. EmployerName dropped\n5. Existing_EMI missing values filled with the mean\n6. Interest_Rate_Missing same as EMI_Loan_Submitted\n7. Lead_Creation_Date dropped\n8. Loan_Amount_Applied, Loan_Tenure_Applied filled with the mean\n9. Loan_Amount_Submitted_Missing same as EMI_Loan_Submitted \n10. Loan_Tenure_Submitted_Missing same as EMI_Loan_Submitted\n11. LoggedIn, Salary_Account dropped\n12. Processing_Fee_Missing same as EMI_Loan_Submitted\n13. Source - top 2 kept as is and all others combined into different category\n14. Numerical transformations and one-hot encoding", "train = pd.read_csv('train_modified.csv')\ntest = pd.read_csv('test_modified.csv')\n\ntrain.shape, test.shape\n\ntarget='Disbursed'\nIDcol = 'ID'\n\ntrain['Disbursed'].value_counts()", "Modeling and cross-validation\nWrite one large function that does the following:\n1. Fit the model\n2. Compute the training accuracy\n3. Compute the training-set AUC\n4. Update n_estimators based on xgboost cross-validation\n5. 
Plot the feature importances", "#test_results = pd.read_csv('test_results.csv')\ndef modelfit(alg, dtrain, dtest, predictors,useTrainCV=True, cv_folds=5, early_stopping_rounds=50):\n\n if useTrainCV:\n xgb_param = alg.get_xgb_params()\n xgtrain = xgb.DMatrix(dtrain[predictors].values, label=dtrain[target].values)\n xgtest = xgb.DMatrix(dtest[predictors].values)\n cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=alg.get_params()['n_estimators'], nfold=cv_folds,\n early_stopping_rounds=early_stopping_rounds, show_progress=False)\n alg.set_params(n_estimators=cvresult.shape[0])\n \n # Fit the model\n alg.fit(dtrain[predictors], dtrain['Disbursed'],eval_metric='auc')\n \n # Predict on the training set\n dtrain_predictions = alg.predict(dtrain[predictors])\n dtrain_predprob = alg.predict_proba(dtrain[predictors])[:,1]\n \n # Print some model results\n print \"\\nAbout the current model\"\n print \"Accuracy : %.4g\" % metrics.accuracy_score(dtrain['Disbursed'].values, dtrain_predictions)\n print \"AUC score (training set): %f\" % metrics.roc_auc_score(dtrain['Disbursed'], dtrain_predprob)\n \n feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False)\n feat_imp.plot(kind='bar', title='Feature Importances')\n plt.ylabel('Feature Importance Score')", "Step 1 - For a high learning rate, find the best number of estimators", "predictors = [x for x in train.columns if x not in [target, IDcol]]\nxgb1 = XGBClassifier(\n learning_rate =0.1,\n n_estimators=1000,\n max_depth=5,\n min_child_weight=1,\n gamma=0,\n subsample=0.8,\n colsample_bytree=0.8,\n objective= 'binary:logistic',\n nthread=4,\n scale_pos_weight=1,\n seed=27)\nmodelfit(xgb1, train, test, predictors)\n\n# Grid search for the best max_depth and min_child_weight\nparam_test1 = {\n 'max_depth':range(3,10,2),\n 'min_child_weight':range(1,6,2)\n}\ngsearch1 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=140, max_depth=5,\n min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8,\n objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27), \n param_grid = param_test1, 
scoring='roc_auc',n_jobs=4,iid=False, cv=5)\ngsearch1.fit(train[predictors],train[target])\n\ngsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_\n\n# Search for the best max_depth and min_child_weight on a finer grid\nparam_test2 = {\n 'max_depth':[4,5,6],\n 'min_child_weight':[4,5,6]\n}\ngsearch2 = GridSearchCV(estimator = XGBClassifier( learning_rate=0.1, n_estimators=140, max_depth=5,\n min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8,\n objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), \n param_grid = param_test2, scoring='roc_auc',n_jobs=4,iid=False, cv=5)\ngsearch2.fit(train[predictors],train[target])\n\ngsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_\n\n# Cross-validate to find the best min_child_weight\nparam_test2b = {\n 'min_child_weight':[6,8,10,12]\n}\ngsearch2b = GridSearchCV(estimator = XGBClassifier( learning_rate=0.1, n_estimators=140, max_depth=4,\n min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8,\n objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), \n param_grid = param_test2b, scoring='roc_auc',n_jobs=4,iid=False, cv=5)\ngsearch2b.fit(train[predictors],train[target])\n\ngsearch2b.grid_scores_, gsearch2b.best_params_, gsearch2b.best_score_\n\n# Grid search for a suitable gamma\nparam_test3 = {\n 'gamma':[i/10.0 for i in range(0,5)]\n}\ngsearch3 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=140, max_depth=4,\n min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,\n objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), \n param_grid = param_test3, scoring='roc_auc',n_jobs=4,iid=False, cv=5)\ngsearch3.fit(train[predictors],train[target])\n\ngsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_\n\npredictors = [x for x in train.columns if x not in [target, IDcol]]\nxgb2 = XGBClassifier(\n learning_rate =0.1,\n n_estimators=1000,\n max_depth=4,\n min_child_weight=6,\n gamma=0,\n subsample=0.8,\n colsample_bytree=0.8,\n objective= 
'binary:logistic',\n nthread=4,\n scale_pos_weight=1,\n seed=27)\nmodelfit(xgb2, train, test, predictors)", "Tune subsample and colsample_bytree", "# Grid search for the best subsample and colsample_bytree\nparam_test4 = {\n 'subsample':[i/10.0 for i in range(6,10)],\n 'colsample_bytree':[i/10.0 for i in range(6,10)]\n}\ngsearch4 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,\n min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,\n objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), \n param_grid = param_test4, scoring='roc_auc',n_jobs=4,iid=False, cv=5)\ngsearch4.fit(train[predictors],train[target])\n\ngsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_", "Tune subsample and colsample_bytree on a finer grid:", "# Same as above, with a finer grid\nparam_test5 = {\n 'subsample':[i/100.0 for i in range(75,90,5)],\n 'colsample_bytree':[i/100.0 for i in range(75,90,5)]\n}\ngsearch5 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,\n min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,\n objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), \n param_grid = param_test5, scoring='roc_auc',n_jobs=4,iid=False, cv=5)\ngsearch5.fit(train[predictors],train[target])\n\ngsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_", "Cross-validate the regularization parameters", "# Grid search for the best reg_alpha\nparam_test6 = {\n 'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100]\n}\ngsearch6 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,\n min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8,\n objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), \n param_grid = param_test6, scoring='roc_auc',n_jobs=4,iid=False, cv=5)\ngsearch6.fit(train[predictors],train[target])\n\ngsearch6.grid_scores_, gsearch6.best_params_, gsearch6.best_score_\n\n# Grid search reg_alpha again over a different set of values\nparam_test7 = {\n 'reg_alpha':[0, 0.001, 0.005, 0.01, 0.05]\n}\ngsearch7 = 
GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,\n min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8,\n objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27), \n param_grid = param_test7, scoring='roc_auc',n_jobs=4,iid=False, cv=5)\ngsearch7.fit(train[predictors],train[target])\n\ngsearch7.grid_scores_, gsearch7.best_params_, gsearch7.best_score_\n\nxgb3 = XGBClassifier(\n learning_rate =0.1,\n n_estimators=1000,\n max_depth=4,\n min_child_weight=6,\n gamma=0,\n subsample=0.8,\n colsample_bytree=0.8,\n reg_alpha=0.005,\n objective= 'binary:logistic',\n nthread=4,\n scale_pos_weight=1,\n seed=27)\nmodelfit(xgb3, train, test, predictors)\n\nxgb4 = XGBClassifier(\n learning_rate =0.01,\n n_estimators=5000,\n max_depth=4,\n min_child_weight=6,\n gamma=0,\n subsample=0.8,\n colsample_bytree=0.8,\n reg_alpha=0.005,\n objective= 'binary:logistic',\n nthread=4,\n scale_pos_weight=1,\n seed=27)\nmodelfit(xgb4, train, test, predictors)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
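The parameter search above proceeds coarse-to-fine: a wide grid first (e.g. max_depth over range(3,10,2)), then a narrower grid in steps of 1 around the winner. The pattern itself is independent of XGBoost; below is a dependency-free sketch where toy_cv_score is a made-up stand-in for the cross-validated AUC that GridSearchCV would compute:

```python
import itertools

def toy_cv_score(max_depth, min_child_weight):
    # Made-up stand-in for a cross-validated AUC; peaks at max_depth=4, min_child_weight=6
    return 1.0 - 0.01 * abs(max_depth - 4) - 0.01 * abs(min_child_weight - 6)

def grid_search(param_grid, score_fn):
    """Exhaustive search over the cartesian product of the grid, like GridSearchCV."""
    names = sorted(param_grid)
    best_params, best_score = None, float('-inf')
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Coarse pass over a wide grid, mirroring param_test1 above ...
coarse, _ = grid_search({'max_depth': list(range(3, 10, 2)),
                         'min_child_weight': list(range(1, 6, 2))}, toy_cv_score)
# ... then a fine pass in steps of 1 around the coarse winner, mirroring param_test2
fine, _ = grid_search({'max_depth': [coarse['max_depth'] - 1, coarse['max_depth'], coarse['max_depth'] + 1],
                       'min_child_weight': [coarse['min_child_weight'] - 1,
                                            coarse['min_child_weight'],
                                            coarse['min_child_weight'] + 1]}, toy_cv_score)
print(fine)  # {'max_depth': 4, 'min_child_weight': 6}
```

The two-stage search evaluates far fewer combinations than one dense grid over the full range, which is why the notebook narrows each parameter before moving on to the next pair.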
svebk/qpr-winter-2017
notebook/winter2017_004.images_data.ipynb
mit
[ "Image data\nThe goal of this notebook is to detail how to interact with, and compute statistics on the images associated to the set of ads provided for the CP1 during the MEMEX Winter QPR 2017.\nData to download\nData posted on HDFS, see Wiki page.\n\nTraining images\nTraining images info\nTest images\nTest images info\n\nPlus the data available on the Wiki\nInfo files\n\nadjusted_images.json\nimage_url_sha1.csv\nfaces.jl\nimages_faces_stats.jl", "import os\nimport csv\nimport json\n\n# set some parameters\ndata_dir = \"../data\"\nprefix = \"test\"\nif prefix==\"train\":\n input_file = \"train_adjusted.json\"\nelse:\n input_file = \"test_adjusted_unlabelled.json\"\n\nimages_dir = os.path.join(data_dir,prefix+\"_images\")\nurl_sha1_file = os.path.join(data_dir,prefix+\"_image_url_sha1.csv\")\nfaces_file = os.path.join(data_dir,prefix+\"_faces.jl\")\nstats_file = os.path.join(data_dir,prefix+\"_images_faces_stats.jl\")\nimages_file = os.path.join(data_dir,prefix+\"_adjusted_images.json\")\n\n# parse faces_file\ndef parse_faces(faces_file):\n faces_dict = {}\n with open(faces_file, \"rt\") as faces:\n for line in faces:\n one_face_dict = json.loads(line)\n img_sha1 = one_face_dict.keys()[0]\n nb_faces = len(one_face_dict[img_sha1].keys())\n #print nb_faces\n faces_dict[img_sha1] = dict()\n faces_dict[img_sha1]['count'] = nb_faces\n faces_dict[img_sha1]['detections'] = one_face_dict[img_sha1]\n return faces_dict\n\nfaces_dict = parse_faces(faces_file)\n\nprint len(faces_dict)\ni = 3\nprint faces_dict.keys()[i], faces_dict[faces_dict.keys()[i]]\n\n# parse images_file\ndef parse_images_file(images_file):\n ads_images_dict = {}\n with open(images_file, \"rt\") as images:\n for line in images:\n one_image_dict = json.loads(line)\n ad_id_list = one_image_dict['obj_parent']\n img_url = one_image_dict['obj_stored_url']\n if type(ad_id_list) is not list:\n ad_id_list = [ad_id_list]\n for ad_id in ad_id_list:\n if ad_id not in ads_images_dict:\n ads_images_dict[ad_id] = 
[img_url]\n else:\n ads_images_dict[ad_id].append(img_url)\n return ads_images_dict\n\nads_images_dict = parse_images_file(images_file)\n\nprint len(ads_images_dict)\nprint ads_images_dict.keys()[0],ads_images_dict[ads_images_dict.keys()[0]]\n\n# parse image_url_sha1_file\ndef parse_url_sha1_file(url_sha1_file):\n url_sha1_dict = {}\n with open(url_sha1_file,\"rt\") as img_url_sha1:\n for line in img_url_sha1:\n url, sha1 = line.split(',')\n url_sha1_dict[url] = sha1\n return url_sha1_dict\n\nurl_sha1_dict = parse_url_sha1_file(url_sha1_file)\n\nprint len(url_sha1_dict)\nprint url_sha1_dict.keys()[0],url_sha1_dict[url_sha1_dict.keys()[0]]", "Analyze image stats", "import matplotlib\nimport numpy as np\nfrom numpy.random import randn\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import FuncFormatter\n%matplotlib inline\n\ndef to_percent(y, position):\n # Ignore the passed in position. This has the effect of scaling the default\n # tick locations.\n s = str(100 * y)\n\n # The percent symbol needs escaping in latex\n if matplotlib.rcParams['text.usetex'] is True:\n return s + r'$\\%$'\n else:\n return s + '%'
http://stackoverflow.com/questions/5498008/pylab-histdata-normed-1-normalization-seems-to-work-incorrect\n weights = np.ones_like(np_img_count)/float(len(np_img_count))\n res = plt.hist(np_img_count, bins=100, weights=weights)\n print np.sum(res[0])\n # Create the formatter using the function to_percent. This multiplies all the\n # default labels by 100, making them all percentages\n formatter = FuncFormatter(to_percent)\n\n # Set the formatter\n plt.gca().yaxis.set_major_formatter(formatter)\n\n plt.show()\n\nprint_stats(np.asarray(images_count))", "Faces distribution", "def get_faces_images(images_sha1s, faces_dict):\n faces_out = {}\n for sha1 in images_sha1s:\n img_notfound = False\n try:\n tmp_faces = faces_dict[sha1]\n except:\n img_notfound = True\n if img_notfound or tmp_faces['count']==0:\n faces_out[sha1] = []\n continue\n bboxes = []\n for face in tmp_faces['detections']:\n bbox = [float(x) for x in tmp_faces['detections'][face]['bbox'].split(',')]\n bbox.append(float(tmp_faces['detections'][face]['score']))\n bboxes.append(bbox)\n #print bboxes\n faces_out[sha1] = bboxes\n return faces_out\n\ndef show_faces(faces, images_dir):\n from matplotlib.pyplot import imshow\n from IPython.display import display\n import numpy as np\n %matplotlib inline\n\n imgs = []\n for face in faces:\n if faces[face]:\n img = open_image(face, images_dir)\n draw_face_bbox(img, faces[face])\n imgs.append(img)\n if not imgs:\n print 'No face images'\n display(*imgs)\n\n# get all faces ads from each ad\nfaces_in_images_percent = []\nfor ad_id in ads_images_dict:\n images_sha1s = get_ad_images(ad_id, ads_images_dict, url_sha1_dict)\n faces_images = get_faces_images(images_sha1s, faces_dict)\n if len(faces_images)==0:\n continue\n nb_faces = 0\n for face in faces_images:\n if faces_images[face]:\n nb_faces += 1\n faces_in_images_percent.append(float(nb_faces)/len(faces_images))\n\nnp_faces_in_images_percent = 
np.asarray(faces_in_images_percent)\nprint_stats(np_faces_in_images_percent)\n\nno_faces = np.where(np_faces_in_images_percent==0.0)\nprint no_faces[0].shape\nprint np_faces_in_images_percent.shape\npercent_noface = float(no_faces[0].shape[0])/np_faces_in_images_percent.shape[0]\nprint 1-percent_noface\n\n# get all faces scores from each ad\nfaces_scores = []\nall_faces = []\nfor ad_id in ads_images_dict:\n images_sha1s = get_ad_images(ad_id, ads_images_dict, url_sha1_dict)\n faces_images = get_faces_images(images_sha1s, faces_dict)\n if len(faces_images)==0:\n continue\n nb_faces = 0\n for face in faces_images:\n if faces_images[face]:\n for one_face in faces_images[face]:\n all_faces.append([face, one_face])\n faces_scores.append(float(one_face[4]))\n\nnp_faces_scores = np.asarray(faces_scores)\nprint_stats(faces_scores)\n\nlow_scores_faces = np.where(np_faces_scores<0.90)[0]\nprint float(len(low_scores_faces))/len(np_faces_scores)\nvery_low_scores_faces = np.where(np_faces_scores<0.80)[0]\nprint float(len(very_low_scores_faces))/len(np_faces_scores)\n#all_faces\nprint len(np_faces_scores)\n\nnb_faces_to_show = 10\nnp.random.shuffle(very_low_scores_faces)\nfaces_to_show = [all_faces[x] for x in very_low_scores_faces[:nb_faces_to_show]]\nprint faces_to_show\nfor face_id, face in faces_to_show:\n print face_id, face\n face_dict = {}\n face_dict[face_id] = [face]\n show_faces(face_dict, images_dir)", "Show images and faces of one ad", "def get_fnt(img, txt):\n from PIL import ImageFont\n # portion of image width you want text width to be\n img_fraction = 0.20\n fontsize = 2\n font = ImageFont.truetype(\"arial.ttf\", fontsize)\n while font.getsize(txt)[0] < img_fraction*img.size[0]:\n # iterate until the text size is just larger than the criteria\n fontsize += 1\n font = ImageFont.truetype(\"arial.ttf\", fontsize)\n return font, font.getsize(txt)[0]\n\ndef draw_face_bbox(img, bboxes, width=4):\n from PIL import ImageDraw\n import numpy as np\n draw = 
ImageDraw.Draw(img)\n for bbox in bboxes:\n for i in range(width):\n rect_start = (int(np.round(bbox[0] + width/2 - i)), int(np.round(bbox[1] + width/2 - i)))\n rect_end = (int(np.round(bbox[2] - width/2 + i)), int(np.round(bbox[3] - width/2 + i)))\n draw.rectangle((rect_start, rect_end), outline=(0, 255, 0))\n # print score?\n if len(bbox)==5:\n score = str(bbox[4])\n fnt, text_size = get_fnt(img, score[:5])\n draw.text((np.round((bbox[0]+bbox[2])/2-text_size/2),np.round(bbox[1])), score[:5], font=fnt, fill=(255,255,255,64))\n\ndef open_image(sha1, images_dir):\n from PIL import Image\n img = Image.open(os.path.join(images_dir, sha1[:3], sha1))\n return img\n\n#face images of ad '84FC37A4E38F7DE2B9FCAAB902332ED60A344B8DF90893A5A8BE3FC1139FCD5A' are blurred but detected\n# image '20893a926fbf50d1a5994f70ec64dbf33dd67e2a' highly pixelated\n# male strippers '20E4597A6DA11BC07BB7578FFFCE07027F885AF02265FD663C0911D2699E0A79'\n\nall_ads_id = range(len(ads_images_dict.keys()))\nimport numpy as np\nnp.random.shuffle(all_ads_id)\nad_id = ads_images_dict.keys()[all_ads_id[0]]\nprint ad_id\nimages_sha1s = get_ad_images(ad_id, ads_images_dict, url_sha1_dict)\nprint images_sha1s\nfaces = get_faces_images(images_sha1s, faces_dict)\nprint faces\nshow_faces(faces, images_dir)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
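A note on the normalized-histogram workaround used in `print_stats` above: weighting every sample by `1/N` makes the bin heights sum to 1, sidestepping the broken `normed=1` behaviour the Stack Overflow link describes. The same trick can be checked without matplotlib using `np.histogram`; this is a minimal sketch under that assumption, not code from the notebook:

```python
import numpy as np

# Synthetic per-ad image counts, standing in for the notebook's images_count list
counts = np.array([1, 2, 2, 3, 3, 3, 4, 5, 5, 10])

# The workaround: weight each sample by 1/N so histogram heights are
# fractions of the dataset instead of raw counts
weights = np.ones_like(counts, dtype=float) / len(counts)
heights, edges = np.histogram(counts, bins=5, weights=weights)

print(round(heights.sum(), 6))  # → 1.0
```

Because every sample carries weight `1/N` and all samples fall inside the bin range, the heights are exactly the per-bin fractions.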
GuillaumeDec/machine-learning
deep-lstm-rnn-anomaly-detector/deep-lstm-time-series.ipynb
gpl-3.0
[ "Deep LSTM RNNs", "from __future__ import print_function\nimport mxnet as mx\nfrom mxnet import nd, autograd\nimport numpy as np\nfrom collections import defaultdict\nmx.random.seed(1)\n# ctx = mx.gpu(0)\nctx = mx.cpu(0)\n\n\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\nfrom datetime import datetime\n# import mpld3\nsns.set_style('whitegrid')\n#sns.set_context('notebook')\nsns.set_context('poster')\n# Make inline plots vector graphics instead of raster graphics\nfrom IPython.display import set_matplotlib_formats\n#set_matplotlib_formats('pdf', 'svg')\nset_matplotlib_formats('pdf', 'png')\n\nSEQ_LENGTH = 100 + 1 # needs to be at least the seq_length for training + 1 because of the time shift between inputs and labels\nNUM_SAMPLES_TRAINING = 5000 + 1\nNUM_SAMPLES_TESTING = 100 + 1\nCREATE_DATA_SETS = False # True if you don't have the data files or re-create them", "Dataset: \"Some time-series\"", "def gimme_one_random_number():\n return nd.random_uniform(low=0, high=1, shape=(1,1)).asnumpy()[0][0]\n\ndef create_one_time_series(seq_length=10):\n freq = (gimme_one_random_number()*0.5) + 0.1 # 0.1 to 0.6\n ampl = gimme_one_random_number() + 0.5 # 0.5 to 1.5\n x = np.sin(np.arange(0, seq_length) * freq) * ampl\n return x\n\ndef create_batch_time_series(seq_length=10, num_samples=4):\n column_labels = ['t'+str(i) for i in range(0, seq_length)]\n df = pd.DataFrame(create_one_time_series(seq_length=seq_length)).transpose()\n df.columns = column_labels\n df.index = ['s'+str(0)]\n for i in range(1, num_samples):\n more_df = pd.DataFrame(create_one_time_series(seq_length=seq_length)).transpose()\n more_df.columns = column_labels\n more_df.index = ['s'+str(i)]\n df = pd.concat([df, more_df], axis=0)\n return df # returns a dataframe of shape (num_samples, seq_length)\n\n# Create some time-series\n# uncomment below to force predictible random numbers\n# mx.random.seed(1)\nif CREATE_DATA_SETS:\n data_train 
= create_batch_time_series(seq_length=SEQ_LENGTH, num_samples=NUM_SAMPLES_TRAINING) \n data_test = create_batch_time_series(seq_length=SEQ_LENGTH, num_samples=NUM_SAMPLES_TESTING)\n # Write data to csv\n data_train.to_csv(\"../data/timeseries/train.csv\")\n data_test.to_csv(\"../data/timeseries/test.csv\")\nelse: \n data_train = pd.read_csv(\"../data/timeseries/train.csv\", index_col=0)\n data_test = pd.read_csv(\"../data/timeseries/test.csv\", index_col=0)", "Check the data real quick", "# num_sampling_points = min(SEQ_LENGTH, 400)\n# (data_train.sample(4).transpose().iloc[range(0, SEQ_LENGTH, SEQ_LENGTH//num_sampling_points)]).plot()", "Preparing the data for training", "# print(data_train.loc[:,data_train.columns[:-1]]) # inputs\n# print(data_train.loc[:,data_train.columns[1:]]) # outputs (i.e. inputs shift by +1)\n\nbatch_size = 64\nbatch_size_test = 1\nseq_length = 16\n\nnum_batches_train = data_train.shape[0] // batch_size\nnum_batches_test = data_test.shape[0] // batch_size_test\n\nnum_features = 1 # we do 1D time series for now, this is like vocab_size = 1 for characters\n\n# inputs are from t0 to t_seq_length - 1. 
because the last point is kept for the output (\"label\") of the penultimate point \ndata_train_inputs = data_train.loc[:,data_train.columns[:-1]]\ndata_train_labels = data_train.loc[:,data_train.columns[1:]]\ndata_test_inputs = data_test.loc[:,data_test.columns[:-1]]\ndata_test_labels = data_test.loc[:,data_test.columns[1:]]\n\ntrain_data_inputs = nd.array(data_train_inputs.values).reshape((num_batches_train, batch_size, seq_length, num_features))\ntrain_data_labels = nd.array(data_train_labels.values).reshape((num_batches_train, batch_size, seq_length, num_features))\ntest_data_inputs = nd.array(data_test_inputs.values).reshape((num_batches_test, batch_size_test, seq_length, num_features))\ntest_data_labels = nd.array(data_test_labels.values).reshape((num_batches_test, batch_size_test, seq_length, num_features))\n\ntrain_data_inputs = nd.swapaxes(train_data_inputs, 1, 2)\ntrain_data_labels = nd.swapaxes(train_data_labels, 1, 2)\ntest_data_inputs = nd.swapaxes(test_data_inputs, 1, 2)\ntest_data_labels = nd.swapaxes(test_data_labels, 1, 2)\n\n\nprint('num_samples_training={0} | num_batches_train={1} | batch_size={2} | seq_length={3}'.format(NUM_SAMPLES_TRAINING, num_batches_train, batch_size, seq_length))\nprint('train_data_inputs shape: ', train_data_inputs.shape)\nprint('train_data_labels shape: ', train_data_labels.shape)\n# print(data_train_inputs.values)\n# print(train_data_inputs[0]) # see what one batch looks like\n", "Long short-term memory (LSTM) RNNs\nAn LSTM block has mechanisms to enable \"memorizing\" information for an extended number of time steps. 
We use the LSTM block with the following transformations that map inputs to outputs across blocks at consecutive layers and consecutive time steps: $\\newcommand{\\xb}{\\mathbf{x}} \\newcommand{\\RR}{\\mathbb{R}}$\n$$g_t = \\text{tanh}(X_t W_{xg} + h_{t-1} W_{hg} + b_g),$$\n$$i_t = \\sigma(X_t W_{xi} + h_{t-1} W_{hi} + b_i),$$\n$$f_t = \\sigma(X_t W_{xf} + h_{t-1} W_{hf} + b_f),$$\n$$o_t = \\sigma(X_t W_{xo} + h_{t-1} W_{ho} + b_o),$$\n$$c_t = f_t \\odot c_{t-1} + i_t \\odot g_t,$$\n$$h_t = o_t \\odot \\text{tanh}(c_t),$$\nwhere $\\odot$ is an element-wise multiplication operator, and\nfor all $\\xb = [x_1, x_2, \\ldots, x_k]^\\top \\in \\RR^k$ the two activation functions:\n$$\\sigma(\\xb) = \\left[\\frac{1}{1+\\exp(-x_1)}, \\ldots, \\frac{1}{1+\\exp(-x_k)}]\\right]^\\top,$$\n$$\\text{tanh}(\\xb) = \\left[\\frac{1-\\exp(-2x_1)}{1+\\exp(-2x_1)}, \\ldots, \\frac{1-\\exp(-2x_k)}{1+\\exp(-2x_k)}\\right]^\\top.$$\nIn the transformations above, the memory cell $c_t$ stores the \"long-term\" memory in the vector form.\nIn other words, the information accumulatively captured and encoded until time step $t$ is stored in $c_t$ and is only passed along the same layer over different time steps.\nGiven the inputs $c_t$ and $h_t$, the input gate $i_t$ and forget gate $f_t$ will help the memory cell to decide how to overwrite or keep the memory information. The output gate $o_t$ further lets the LSTM block decide how to retrieve the memory information to generate the current state $h_t$ that is passed to both the next layer of the current time step and the next time step of the current layer. 
Such decisions are made using the hidden-layer parameters $W$ and $b$ with different subscripts: these parameters will be inferred during the training phase by gluon.\nAllocate parameters", "num_inputs = num_features # for a 1D time series, this is just a scalar equal to 1.0\nnum_outputs = num_features # same comment\nnum_hidden_units = [8, 8] # num of hidden units in each hidden LSTM layer\nnum_hidden_layers = len(num_hidden_units) # num of hidden LSTM layers\nnum_units_layers = [num_features] + num_hidden_units\n\n########################\n# Weights connecting the inputs to the hidden layer\n########################\nWxg, Wxi, Wxf, Wxo, Whg, Whi, Whf, Who, bg, bi, bf, bo = {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {} \nfor i_layer in range(1, num_hidden_layers+1):\n num_inputs = num_units_layers[i_layer-1]\n num_hidden_units = num_units_layers[i_layer]\n Wxg[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01\n Wxi[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01\n Wxf[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01\n Wxo[i_layer] = nd.random_normal(shape=(num_inputs,num_hidden_units), ctx=ctx) * .01\n\n ########################\n # Recurrent weights connecting the hidden layer across time steps\n ########################\n Whg[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01\n Whi[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01\n Whf[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01\n Who[i_layer] = nd.random_normal(shape=(num_hidden_units, num_hidden_units), ctx=ctx) * .01\n\n ########################\n # Bias vector for hidden layer\n ########################\n bg[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * .01\n bi[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * .01\n bf[i_layer] = nd.random_normal(shape=num_hidden_units, 
ctx=ctx) * .01\n bo[i_layer] = nd.random_normal(shape=num_hidden_units, ctx=ctx) * .01\n\n########################\n# Weights to the output nodes\n########################\nWhy = nd.random_normal(shape=(num_units_layers[-1], num_outputs), ctx=ctx) * .01\nby = nd.random_normal(shape=num_outputs, ctx=ctx) * .01", "Attach the gradients", "params = []\nfor i_layer in range(1, num_hidden_layers+1):\n params += [Wxg[i_layer], Wxi[i_layer], Wxf[i_layer], Wxo[i_layer], Whg[i_layer], Whi[i_layer], Whf[i_layer], Who[i_layer], bg[i_layer], bi[i_layer], bf[i_layer], bo[i_layer]]\n\nparams += [Why, by] # add the output layer\n\nfor param in params:\n param.attach_grad()", "Softmax Activation", "def softmax(y_linear, temperature=1.0):\n lin = (y_linear-nd.max(y_linear)) / temperature\n exp = nd.exp(lin)\n partition = nd.sum(exp, axis=0, exclude=True).reshape((-1,1))\n return exp / partition", "Cross-entropy loss function", "def cross_entropy(yhat, y):\n return - nd.mean(nd.sum(y * nd.log(yhat), axis=0, exclude=True))\n\ndef rmse(yhat, y):\n return nd.mean(nd.sqrt(nd.sum(nd.power(y - yhat, 2), axis=0, exclude=True)))", "Averaging the loss over the sequence", "def average_ce_loss(outputs, labels):\n assert(len(outputs) == len(labels))\n total_loss = 0.\n for (output, label) in zip(outputs,labels):\n total_loss = total_loss + cross_entropy(output, label)\n return total_loss / len(outputs)\n\ndef average_rmse_loss(outputs, labels):\n assert(len(outputs) == len(labels))\n total_loss = 0.\n for (output, label) in zip(outputs,labels):\n total_loss = total_loss + rmse(output, label)\n return total_loss / len(outputs)", "Optimizer", "def SGD(params, learning_rate):\n for param in params:\n# print('grrrrr: ', param.grad)\n param[:] = param - learning_rate * param.grad\n\ndef adam(params, learning_rate, M , R, index_adam_call, beta1, beta2, eps):\n k = -1\n for param in params:\n k += 1\n M[k] = beta1 * M[k] + (1. - beta1) * param.grad\n R[k] = beta2 * R[k] + (1. 
- beta2) * (param.grad)**2\n # bias correction since we initilized M & R to zeros, they're biased toward zero on the first few iterations\n m_k_hat = M[k] / (1. - beta1**(index_adam_call))\n r_k_hat = R[k] / (1. - beta2**(index_adam_call))\n if((np.isnan(M[k].asnumpy())).any() or (np.isnan(R[k].asnumpy())).any()):\n# print('GRRRRRR ', M, K)\n stop()\n# print('grrrrr: ', param.grad)\n param[:] = param - learning_rate * m_k_hat / (nd.sqrt(r_k_hat) + eps)\n# print('m_k_hat r_k_hat', m_k_hat, r_k_hat)\n return params, M, R\n\n# def adam(params, learning_rate, M, R, index_iteration, beta1=0.9, beta2=0.999, eps=1e-8):\n# for k, param in enumerate(params):\n# if k==0:\n# print('batch_iteration {}: {}'.format(index_iteration, param))\n# M[k] = beta1 * M[k] + (1. - beta1) * param.grad\n# R[k] = beta2 * R[k] + (1. - beta2) * (param.grad)**2\n\n# m_k_hat = M[k] / (1. - beta1**(index_iteration))\n# r_k_hat = R[k] / (1. - beta2**(index_iteration))\n\n# param[:] = param - learning_rate * m_k_hat / (nd.sqrt(r_k_hat) + eps)\n# # print(beta1, beta2, M, R)\n# if k==0:\n# print('batch_iteration {}: {}'.format(index_iteration, param.grad))\n \n# for k, param in enumerate(params):\n# print('batch_iteration {}: {}'.format(index_iteration, param))\n\n# return M, R\n", "Define the model", "def single_lstm_unit_calcs(X, c, Wxg, h, Whg, bg, Wxi, Whi, bi, Wxf, Whf, bf, Wxo, Who, bo):\n g = nd.tanh(nd.dot(X, Wxg) + nd.dot(h, Whg) + bg)\n i = nd.sigmoid(nd.dot(X, Wxi) + nd.dot(h, Whi) + bi)\n f = nd.sigmoid(nd.dot(X, Wxf) + nd.dot(h, Whf) + bf)\n o = nd.sigmoid(nd.dot(X, Wxo) + nd.dot(h, Who) + bo)\n #######################\n c = f * c + i * g\n h = o * nd.tanh(c)\n return c, h\n\ndef deep_lstm_rnn(inputs, h, c, temperature=1.0):\n \"\"\"\n h: dict of nd.arrays, each key is the index of a hidden layer (from 1 to whatever). 
\n Index 0, if any, is the input layer\n \"\"\"\n outputs = []\n # inputs is one BATCH of sequences so its shape is number_of_seq, seq_length, features_dim \n # (latter is 1 for a time series, vocab_size for a character, n for a n different times series)\n for X in inputs:\n # X is batch of one time stamp. E.g. if each batch has 37 sequences, then the first value of X will be a set of the 37 first values of each of the 37 sequences \n # that means each iteration on X corresponds to one time stamp, but it is done in batches of different sequences\n h[0] = X # the first hidden layer takes the input X as input \n for i_layer in range(1, num_hidden_layers+1):\n # lstm units now have the 2 following inputs: \n # i) h_t from the previous layer (equivalent to the input X for a non-deep lstm net), \n # ii) h_t-1 from the current layer (same as for non-deep lstm nets)\n c[i_layer], h[i_layer] = single_lstm_unit_calcs(h[i_layer-1], c[i_layer], Wxg[i_layer], h[i_layer], Whg[i_layer], bg[i_layer], Wxi[i_layer], Whi[i_layer], bi[i_layer], Wxf[i_layer], Whf[i_layer], bf[i_layer], Wxo[i_layer], Who[i_layer], bo[i_layer])\n yhat_linear = nd.dot(h[num_hidden_layers], Why) + by\n # yhat is a batch of several values of the same time stamp\n # this is basically the prediction of the sequence, which overlaps most of the input sequence, plus one point (character or value)\n# yhat = softmax(yhat_linear, temperature=temperature)\n# yhat = nd.sigmoid(yhat_linear)\n# yhat = nd.tanh(yhat_linear)\n yhat = yhat_linear # we cant use a 1.0-bounded activation function since amplitudes can be greater than 1.0\n outputs.append(yhat) # outputs has same shape as inputs, i.e. a list of batches of data points.\n# print('some shapes... 
yhat outputs', yhat.shape, len(outputs) )\n return (outputs, h, c)", "Test and visualize predictions", "def test_prediction(one_input_seq, one_label_seq, temperature=1.0):\n #####################################\n # Set the initial state of the hidden representation ($h_0$) to the zero vector\n ##################################### # some better initialization needed??\n h, c = {}, {}\n for i_layer in range(1, num_hidden_layers+1):\n h[i_layer] = nd.zeros(shape=(batch_size_test, num_units_layers[i_layer]), ctx=ctx)\n c[i_layer] = nd.zeros(shape=(batch_size_test, num_units_layers[i_layer]), ctx=ctx)\n \n outputs, h, c = deep_lstm_rnn(one_input_seq, h, c, temperature=temperature)\n loss = rmse(outputs[-1][0], one_label_seq)\n return outputs[-1][0].asnumpy()[-1], one_label_seq.asnumpy()[-1], loss.asnumpy()[-1], outputs, one_label_seq\n\ndef check_prediction(index):\n o, label, loss, outputs, labels = test_prediction(test_data_inputs[index], test_data_labels[index], temperature=1.0)\n prediction = round(o, 3)\n true_label = round(label, 3)\n outputs = [float(i.asnumpy().flatten()) for i in outputs]\n true_labels = list(test_data_labels[index].asnumpy().flatten())\n # print(outputs, '\\n----\\n', true_labels)\n df = pd.DataFrame([outputs, true_labels]).transpose()\n df.columns = ['predicted', 'true']\n # print(df)\n rel_error = round(100. * (prediction / true_label - 1.0), 2)\n# print('\\nprediction = {0} | actual_value = {1} | rel_error = {2}'.format(prediction, true_label, rel_error))\n return df\n\nepochs = 48 # at some point, some nans appear in M, R matrices of Adam. TODO investigate why\nmoving_loss = 0.\nlearning_rate = 0.001 # 0.1 works for a [8, 8] after about 70 epochs of 32-sized batches\n\n# Adam Optimizer stuff\nbeta1 = .9\nbeta2 = .999\nindex_adam_call = 0\n# M & R arrays to keep track of momenta in adam optimizer. 
params is a list that contains all ndarrays of parameters\nM = {k: nd.zeros_like(v) for k, v in enumerate(params)}\nR = {k: nd.zeros_like(v) for k, v in enumerate(params)}\n\ndf_moving_loss = pd.DataFrame(columns=['Loss', 'Error'])\ndf_moving_loss.index.name = 'Epoch'\n\n# needed to update plots on the fly\n%matplotlib notebook\nfig, axes_fig1 = plt.subplots(1,1, figsize=(6,3))\nfig2, axes_fig2 = plt.subplots(1,1, figsize=(6,3))\n\nfor e in range(epochs):\n ############################\n # Attenuate the learning rate by a factor of 2 every 100 epochs\n ############################\n if ((e+1) % 80 == 0):\n learning_rate = learning_rate / 2.0 # TODO check if its ok to adjust learning_rate when using Adam Optimizer\n h, c = {}, {}\n for i_layer in range(1, num_hidden_layers+1):\n h[i_layer] = nd.zeros(shape=(batch_size, num_units_layers[i_layer]), ctx=ctx)\n c[i_layer] = nd.zeros(shape=(batch_size, num_units_layers[i_layer]), ctx=ctx)\n\n for i in range(num_batches_train):\n data_one_hot = train_data_inputs[i]\n label_one_hot = train_data_labels[i]\n with autograd.record():\n outputs, h, c = deep_lstm_rnn(data_one_hot, h, c)\n loss = average_rmse_loss(outputs, label_one_hot)\n loss.backward()\n# SGD(params, learning_rate)\n index_adam_call += 1 # needed for bias correction in Adam optimizer\n params, M, R = adam(params, learning_rate, M, R, index_adam_call, beta1, beta2, 1e-8)\n \n ##########################\n # Keep a moving average of the losses\n ##########################\n if (i == 0) and (e == 0):\n moving_loss = nd.mean(loss).asscalar()\n else:\n moving_loss = .99 * moving_loss + .01 * nd.mean(loss).asscalar()\n df_moving_loss.loc[e] = round(moving_loss, 4)\n\n ############################\n # Predictions and plots\n ############################\n data_prediction_df = check_prediction(index=e)\n axes_fig1.clear()\n data_prediction_df.plot(ax=axes_fig1)\n fig.canvas.draw()\n prediction = round(data_prediction_df.tail(1)['predicted'].values.flatten()[-1], 3)\n 
true_label = round(data_prediction_df.tail(1)['true'].values.flatten()[-1], 3)\n rel_error = round(100. * np.abs(prediction / true_label - 1.0), 2)\n print(\"Epoch = {0} | Loss = {1} | Prediction = {2} True = {3} Error = {4}\".format(e, moving_loss, prediction, true_label, rel_error ))\n axes_fig2.clear()\n if e == 0:\n moving_rel_error = rel_error\n else:\n moving_rel_error = .9 * moving_rel_error + .1 * rel_error\n\n df_moving_loss.loc[e, ['Error']] = moving_rel_error\n axes_loss_plot = df_moving_loss.plot(ax=axes_fig2, secondary_y='Loss', color=['r','b'])\n axes_loss_plot.right_ax.grid(False)\n# axes_loss_plot.right_ax.set_yscale('log')\n fig2.canvas.draw()\n \n%matplotlib inline", "Conclusions" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
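To make the six LSTM equations above concrete, here is a minimal NumPy sketch of a single time step for one hidden layer, using small random parameters as in the notebook's initialization. All names and sizes here are chosen for this illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
batch, n_in, n_h = 4, 1, 8  # mirrors one 8-unit hidden layer on a 1D series

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init(shape):
    # small random parameters, as in the notebook's * .01 initialization
    return rng.normal(size=shape) * 0.01

Wx = {k: init((n_in, n_h)) for k in "gifo"}  # input-to-hidden weights
Wh = {k: init((n_h, n_h)) for k in "gifo"}   # recurrent weights
b = {k: init((n_h,)) for k in "gifo"}        # biases

def lstm_step(X, h, c):
    g = np.tanh(X @ Wx["g"] + h @ Wh["g"] + b["g"])      # candidate memory
    i = sigmoid(X @ Wx["i"] + h @ Wh["i"] + b["i"])      # input gate
    f = sigmoid(X @ Wx["f"] + h @ Wh["f"] + b["f"])      # forget gate
    o = sigmoid(X @ Wx["o"] + h @ Wh["o"] + b["o"])      # output gate
    c = f * c + i * g           # long-term memory update
    h = o * np.tanh(c)          # hidden state passed up and forward
    return h, c

X = rng.normal(size=(batch, n_in))
h = np.zeros((batch, n_h))
c = np.zeros((batch, n_h))
h, c = lstm_step(X, h, c)
print(h.shape)  # → (4, 8)
```

Since `o` lies in (0, 1) and `tanh(c)` in (-1, 1), every component of the new hidden state is bounded by 1 in magnitude, which is why the notebook removes the bounded activation only on the final output layer.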
pauliacomi/pyGAPS
docs/examples/initial_enthalpy.ipynb
mit
[ "Initial enthalpy calculations and enthalpy modelling\nExperimentally, the enthalpy of adsorption can be obtained either indirectly,\nthrough the isosteric enthalpy method, or directly, using adsorption\nmicrocalorimetry. Once an enthalpy curve is calculated, a useful performance\nindicator is the enthalpy of adsorption at zero loading, corresponding to the\ninitial interactions of the probe with the surface. pyGAPS contains two methods\nto determine the initial enthalpy of adsorption starting from an enthalpy curve.\nFirst, make sure the data is imported by running the import notebook.", "# import isotherms\n%run import.ipynb\n\n# import the characterisation module\nimport pygaps.characterisation as pgc", "Initial point method\nThe point method of determining enthalpy of adsorption is the simplest method.\nIt just returns the first measured point in the enthalpy curve.\nDepending on the data, the first point method may or may not be representative\nof the actual value.", "import matplotlib.pyplot as plt\n\n# Initial point method\nisotherm = next(i for i in isotherms_calorimetry if i.material=='HKUST-1(Cu)')\nres = pgc.initial_enthalpy_point(isotherm, 'enthalpy', verbose=True)\nplt.show()\n\nisotherm = next(i for i in isotherms_calorimetry if i.material=='Takeda 5A')\nres = pgc.initial_enthalpy_point(isotherm, 'enthalpy', verbose=True)\nplt.show()\n", "Compound model method\nThis method attempts to model the enthalpy curve by the superposition of several\ncontributions. It is slower, as it runs a constrained minimisation algorithm\nwith several initial starting guesses, then selects the optimal one.", "# Modelling method\nisotherm = next(i for i in isotherms_calorimetry if i.material=='HKUST-1(Cu)')\nres = pgc.initial_enthalpy_comp(isotherm, 'enthalpy', verbose=True)\nplt.show()\n\nisotherm = next(i for i in isotherms_calorimetry if i.material=='Takeda 5A')\nres = pgc.initial_enthalpy_comp(isotherm, 'enthalpy', verbose=True)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
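As a standalone illustration of the "first measured point" idea described above — independent of pyGAPS's actual API — the point method simply reads the enthalpy at the lowest measured loading. The arrays below are hypothetical calorimetry data, not values from the example isotherms:

```python
import numpy as np

# Hypothetical calorimetry data: loading (mmol/g) vs. enthalpy (kJ/mol)
loading = np.array([0.8, 0.1, 0.4, 2.5, 1.6])
enthalpy = np.array([24.0, 29.5, 26.1, 20.3, 22.0])

def initial_enthalpy_point(loading, enthalpy):
    """Return the enthalpy at the lowest measured loading."""
    return enthalpy[np.argmin(loading)]

print(initial_enthalpy_point(loading, enthalpy))  # → 29.5
```

This makes clear why the method can be unrepresentative: the result depends entirely on how noisy the single lowest-loading measurement happens to be, which is what the compound-model method tries to remedy.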
jomavera/Work
Center of Gravity with JAX.ipynb
mit
[ "Center of Gravity Method with Automatic Differentiation for Facility Location\n\n\nMinimize total cost due to transport of units\nCost Function $= \\sum_i distance_i \\times (\\frac{cost}{distance \\cdot unit})_i \\times units_i$ where $distance_i$ is the distance between location $i$ and distribution center at iteration $t$\nLets define problem data", "from jax import grad, jit, vmap\nimport jax.numpy as np\nimport numpy as np2\n\n#2D coordinates of locations of demand or supply\nx_n = np.array([[-0.97,-80.7],\n [-1.05, -80.45],\n [-2.15, -79.92],\n [-1.81, -79.53],\n [-1.03, -79.47]])\n\n#Quantities of demand/supply for each location\nquantities = np.array([[250],[200],[700],[150],[300]]) \n\n#Cost/(unit x distance) from each location to depot/distribution center\ncosts = np.array([[1],[1],[1],[1],[1]])", "Now we mus define the function which we want to minimize", "def distances(x,y):\n '''Function that return distance between two locations\n Input: Two 2D numpy arrays\n Output: Distance between locations'''\n\n x_rp = np.repeat(x,x_n.shape[0],0).reshape(-1,1)\n y_rp = np.repeat(y,x_n.shape[0],0).reshape(-1,1)\n dist_x = (x_rp - x_n[:,:1])**2\n dist_y = (y_rp - x_n[:,1:2])**2\n return np.sqrt(dist_x+dist_y).reshape((-1,1))\n\ndef cost_function(x_0):\n '''Function that calculate total cost due to transport for a depot/distribution center location x_0\n Input: 2D numpy array\n Output: Total cost'''\n \n x = np.array([[x_0[0,0]]])\n y = np.array([[x_0[0,1]]])\n\n dist = distances(x,y)\n dist_costo = quantities*costs*dist\n\n return np.sum(dist_costo)", "With the defined function we can calculate the gradient with JAX", "gradient_funcion = jit(grad(cost_function)) #jit (just in time) compile makes faster the evaluation of the gradient.", "Now lets define the procedure to apply gradient descent or newton nethod", "def optimize(funtion_opt, grad_fun, x_0, method, n_iter):\n '''Input:\n funtion_opt: Function to minimize\n grad_fun: gradient of the function to 
minimize\n x_0: initial 2D coordiantes of depot/distribution center\n method: method to use for minimize\n n_iter: Number of iterations of the method\n --------------\n Output:\n xs: List of x coordiantes for each iteration\n ys: List of y coordiantes for each iteration\n fs: List of costs for each iteration'''\n\n #Create empty lists to fill with iteration values\n xs = [] \n ys = []\n fs = []\n\n #Add the initial location\n xs.append(x_0[0,0])\n ys.append(x_0[0,1])\n fs.append(cost_function(x_0))\n\n for i in range(n_iter):\n \n if method == 'newton':\n loss_val = funtion_opt(x_0)\n loss_vec = np.array([[loss_val, loss_val]])\n x_0 -= 0.005*loss_vec/grad_fun(x_0)\n elif method == 'grad_desc':\n step = 0.0001*grad_fun(x_0)\n x_0 -= step\n \n xs.append(x_0[0,0])\n ys.append(x_0[0,1])\n fs.append(cost_function(x_0))\n return xs, ys, fs", "Lets minimize with gradient descent", "#Initial locationl of depots/distribution centers\nx0=np.array([[4.0,-84.0]])\n\nprint(\"Initial Cost: {:0.2f}\".format(cost_function(x0 ) ))\n\nxs, ys, fs = optimize(cost_function, gradient_funcion, x0, 'grad_desc', 100)\n\nprint(\"Final Cost: {:0.2f}\".format(fs[-1]))", "Now lets plot the trayectory of the optimization procedure.", "from mpl_toolkits import mplot3d\nimport matplotlib.pyplot as plt\n\n#We must modified how we feed the input to the cost function to plot values of x and y coordinates\ndef cost_function_2(x,y):\n dist = distances(x,y)\n dist_costo = quantities*costs*dist\n return np.sum(dist_costo)\n\nFIGSIZE = (9, 7)\n\nxs = np.array(xs).reshape(-1,)\nys = np.array(ys).reshape(-1,)\nfs = np.array(fs)\n\n\nX, Y = np2.meshgrid(np2.linspace(-5., 5., 50), np2.linspace(-84., -74., 50))\nfunc_vec = np2.vectorize(cost_function_2)\nf = func_vec(X,Y)\nindices = (slice(None, None, 4), slice(None, None, 4))\n\nfig = plt.figure(figsize=FIGSIZE)\nax = plt.axes(projection='3d', azim=10,elev=10)\nax.plot_surface(X, Y, f, shade=True, linewidth=2, antialiased=True,alpha=0.5)\nax.plot3D(xs, ys, 
fs, color='black', lw=4)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
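For readers without JAX installed, the same descent can be sketched with an analytic gradient in plain NumPy — the gradient of $\sum_i w_i \lVert x - p_i \rVert$ is $\sum_i w_i (x - p_i)/\lVert x - p_i \rVert$. This is an illustrative stand-in for the autodiff version, using a hypothetical subset of the data above:

```python
import numpy as np

# Three demand points and weights (quantity * cost per unit-distance)
pts = np.array([[-0.97, -80.70], [-2.15, -79.92], [-1.03, -79.47]])
w = np.array([250.0, 700.0, 300.0])

def cost(x):
    # total transport cost for a depot at x
    return np.sum(w * np.linalg.norm(pts - x, axis=1))

def grad_cost(x):
    # analytic gradient: sum of unit vectors away from each point, scaled by w
    d = np.linalg.norm(pts - x, axis=1)
    return np.sum((w / d)[:, None] * (x - pts), axis=0)

x0 = np.array([4.0, -84.0])
x = x0.copy()
for _ in range(200):
    x -= 1e-4 * grad_cost(x)  # same step size as the notebook's gradient descent

print(cost(x) < cost(x0))  # descent lowers the total transport cost
```

Because the second point's weight (700) exceeds the combined weight of the others (550), the weighted geometric median sits exactly at that point, so the iterate converges into its neighbourhood.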
climberwb/pycon-pandas-tutorial
Exercises-4.ipynb
mit
[ "%matplotlib inline\nimport pandas as pd\n\nfrom IPython.core.display import HTML\ncss = open('style-table.css').read() + open('style-notebook.css').read()\nHTML('<style>{}</style>'.format(css))\n\ntitles = pd.DataFrame.from_csv('data/titles.csv', index_col=None)\ntitles.head()\n\ncast = pd.DataFrame.from_csv('data/cast.csv', index_col=None)\ncast.head()", "Define a \"Superman year\" as a year whose films feature more Superman characters than Batman characters. How many years in film history have been Superman years?", "c = cast\nc = c[(c.character == 'Superman') | (c.character == 'Batman')]\nc = c.groupby(['year', 'character']).size()\nc = c.unstack()\nc = c.fillna(0)\nc.head()\n\nd = c.Superman - c.Batman\nprint('Superman years:')\nprint(len(d[d > 0.0]))", "How many years have been \"Batman years\", with more Batman characters than Superman characters?", "d = c.Superman - c.Batman\nprint('Batman years:')\nprint(len(d[d < 0.0]))", "Plot the number of actor roles each year and the number of actress roles each year over the history of film.", "c = cast\n#c = c[(c.character == 'Superman') | (c.character == 'Batman')]\nc = c.groupby(['year', 'type']).size()\nc = c.unstack()\nc = c.fillna(0)\nc.plot()", "Plot the number of actor roles each year and the number of actress roles each year, but this time as a kind='area' plot.", "c.plot(kind='area')", "Plot the difference between the number of actor roles each year and the number of actress roles each year over the history of film.", "c = cast\nc = c.groupby(['year', 'type']).size()\nc = c.unstack('type')\n(c.actor - c.actress).plot()", "Plot the fraction of roles that have been 'actor' roles each year in the history of film.", "(c.actor/ (c.actor + c.actress)).plot(ylim=[0,1])", "Plot the fraction of supporting (n=2) roles that have been 'actor' roles each year in the history of film.", "c = cast[(cast[\"n\"] == 2) ]\nc = c.groupby(['year','type']).size()\nc = c.unstack('type')\n(c.actor/ (c.actor + c.actress)).plot(ylim=[0,1])",
"Build a plot with a line for each rank n=1 through n=3, where the line shows what fraction of that rank's roles were 'actor' roles for each year in the history of film.", "\nc = cast\nc = c[c.n <= 3]\nc = c.groupby(['year', 'type', 'n']).size()\nc = c.unstack('type')\nr = c.actor / (c.actor + c.actress)\nr = r.unstack('n')\nr.plot(ylim=[0,1])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
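The groupby → size → unstack → ratio pattern used throughout these exercises can be emulated in plain Python, which makes explicit what pandas computes under the hood. The records below are toy data, not the actual cast table:

```python
from collections import Counter

# Toy cast records as (year, type) pairs, standing in for rows of the cast table
rows = [(2000, "actor"), (2000, "actor"), (2000, "actress"),
        (2001, "actor"), (2001, "actress"), (2001, "actress"), (2001, "actress")]

# groupby(['year', 'type']).size(): counts per (year, type) pair
counts = Counter(rows)

# unstack + ratio: fraction of each year's roles that are 'actor' roles
years = sorted({y for y, _ in rows})
frac_actor = {y: counts[(y, "actor")] / (counts[(y, "actor")] + counts[(y, "actress")])
              for y in years}
print(frac_actor[2001])  # → 0.25
```

Here 2000 has two actor roles out of three (2/3) and 2001 one out of four (0.25) — exactly the per-year fractions the notebook plots with `(c.actor / (c.actor + c.actress))`.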
daviddesancho/BestMSM
example/fourstate/fourstate_tpt.ipynb
gpl-2.0
[ "Transition path theory tests\nIn what follows we are going to look at a simple four state model to better understand some fundamental results of transition path theory. The essential reference for this work is a paper by Berezhkovskii, Hummer and Szabo (J. Chem. Phys., 2009). However, there are other important references for alternative formulations of the same results, by Vanden Eijnden et al (J. Stat. Phys., 2006, Multiscale Model. Simul.,2009 and Proc. Natl. Acad. Sci. U.S.A., 2009).\nModel system\nHere we focus in a simple four state model described in the Berezhkovskii-Hummer-Szabo (BHS) paper. It consists of two end states (folded, $F$, and unfolded, $U$), connected by two intermediates $I_1$ and $I_2$. In particular, we define an instance of this simple model in order to get some numerical results. Below we show a graph representation of the model taken directly from the BHS paper (Figure 2).\n<img src=\"files/fourstate.png\">\nThe model is itself described in the Fourstate class of the fourstate module. So first of all we create an instance of that class:", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport fourstate\nimport itertools\nimport networkx as nx\nimport numpy as np\nimport operator\nbhs = fourstate.FourState()", "By doing this we get a few variables initialized. First, a symmetric transition count matrix, $\\mathbf{N}$, where we see that the most frequent transitions are those within metastable states (corresponding to the terms in the diagonal $N_{ii}$). Non-diagonal transitions are much less frequent (i.e. $N_{ij}<<N_{ii}$ for all $i\\neq j$). \nThen we get the transition matrix $\\mathbf{T}$, whose diagonal elements are close to 1, as corresponds to a system with high metastability (i.e. high probability of the system remaining where it was). We can also construct a rate matrix, $\\mathbf{K}$. From it we obtain eigenvalues ($\\lambda_i$) and corresponding eigenvectors ($\\Psi_i$). 
The latter allow for estimating equilibrium probabilities (note that $U$ and $F$ have the largest populations).\nThe eigenvalues are sorted by value, with the first eigenvalue ($\\lambda_0$) being zero, as expected for a system with a unique stationary distribution. All other eigenvalues are negative, and they are characteristic of a two-state-like system, since there is considerable time-scale separation between the slowest mode ($\\lambda_1$, corresponding to a relaxation time of $\\tau_1=-1/\\lambda_1$) and the other two ($\\lambda_2$ and $\\lambda_3$), as shown below.", "fig, ax = plt.subplots()\nax.bar([0.5,1.5,2.5], -1./bhs.evals[1:], width=1)\nax.set_xlabel(r'Eigenvalue', fontsize=16)\nax.set_ylabel(r'$\\tau_i$', fontsize=18)\nax.set_xlim([0,4])\nplt.show()", "Committors and fluxes\nNext we calculate the committors and fluxes for this four state model. For this we define two end states, so that we estimate the flux between folded ($F$) and unfolded ($U$). The values of the committor or $p_{fold}$ are defined to be 0 and 1 for $U$ and $F$, respectively, and using the BHS method we calculate the committors for the rest of the states.", "bhs.run_commit()", "We also obtain the flux matrix, $\\mathbf{J}$, containing local fluxes ($J_{ji}=J_{i\\rightarrow j}$) for the different edges in the network. The signs represent the direction of the transition: positive for those fluxes going from low to high $p_{fold}$ and negative for those going from high to low $p_{fold}$. For example, for intermediate $I_1$ (second column) we see that the transitions to $I_2$ and $F$ have a positive flux (i.e. the flux goes from low to high $p_{fold}$).\nA property of flux conservation that must be fulfilled is that the flux into one state is the same as the flux out of that state, $J_j=\\sum_{p_{fold}(i)<p_{fold}(j)}J_{i\\rightarrow j}=\\sum_{p_{fold}(i)>p_{fold}(j)}J_{j\\rightarrow i}$. 
We check this property for states $I_1$ and $I_2$.", "print \" j J_j(<-) J_j(->)\"\nprint \" - -------- --------\"\nfor i in [1,2]:\n print \"%2i %10.4e %10.4e\"%(i, np.sum([bhs.J[i,x] for x in range(4) if bhs.pfold[x] < bhs.pfold[i]]),\\\n np.sum([bhs.J[x,i] for x in range(4) if bhs.pfold[x] > bhs.pfold[i]]))", "Paths through the network\nAnother important element of transition path theory is the possibility of identifying paths through the network. The advantage of a simple case like the one we are looking at is that we can enumerate all those paths and check how much flux each of them carries. For example, the contribution of a given path $U\\rightarrow I_1\\rightarrow I_2\\rightarrow F$ to the total flux is given by $J_{U\\rightarrow I_1\\rightarrow I_2\\rightarrow F}=J_{U \\rightarrow I_1}(J_{I_1 \\rightarrow I_2}/J_{I_1})(J_{I_2 \\rightarrow F}/J_{I_2})$.\nIn the BHS paper, simple rules are defined for calculating the length of a given edge in the network. These rules are implemented in the gen_path_lengths function.", "import tpt_functions\nJnode, Jpath = tpt_functions.gen_path_lengths(range(4), bhs.J, bhs.pfold, \\\n bhs.sum_flux, [3], [0])\nJpathG = nx.DiGraph(Jpath.transpose())\n\nprint Jnode\nprint Jpath", "We can exhaustively enumerate the paths and check whether the fluxes add up to the total flux.", "tot_flux = 0\npaths = {}\nk = 0\nfor path in nx.all_simple_paths(JpathG, 0, 3):\n paths[k] = {}\n paths[k]['path'] = path\n f = bhs.J[path[1],path[0]]\n print \"%2i -> %2i: %10.4e \"%(path[0], path[1], \\\n bhs.J[path[1],path[0]])\n for i in range(2, len(path)):\n print \"%2i -> %2i: %10.4e %10.4e\"%(path[i-1], path[i], \\\n bhs.J[path[i],path[i-1]], Jnode[path[i-1]])\n f *= bhs.J[path[i],path[i-1]]/Jnode[path[i-1]]\n tot_flux += f\n paths[k]['flux'] = f\n print \" J(path) = %10.4e\"%f\n print\n k += 1\nprint \" Cumulative flux: %10.4e\"%tot_flux\n", "So indeed the cumulative flux is equal to the total flux we estimated before.\nBelow we print the sorted 
paths, from the highest to the lowest flux.", "sorted_paths = sorted(paths.items(), key=lambda kv: kv[1]['flux'])\nsorted_paths.reverse()\nk = 1\nfor path in sorted_paths:\n print k, ':', path[1]['path'], ':', 'flux = %g'%path[1]['flux']\n k += 1", "Highest flux paths\nOne of the great things about using TPT is that it allows for visualizing the highest flux paths. In general we cannot just enumerate all the paths, so we resort to Dijkstra's algorithm to find the highest flux path. The problem with this is that the algorithm does not find the second highest flux path. So once the highest flux path is identified, we must remove its flux, so that the next highest flux path can be found by the algorithm. An algorithm for doing this was elegantly proposed by Metzner, Schütte and Vanden Eijnden. Now we implement it for the model system.", "while True:\n Jnode, Jpath = tpt_functions.gen_path_lengths(range(4), bhs.J, bhs.pfold, \\\n bhs.sum_flux, [3], [0])\n # generate nx graph from matrix\n JpathG = nx.DiGraph(Jpath.transpose())\n # find shortest path\n try:\n path = nx.dijkstra_path(JpathG, 0, 3)\n pathlength = nx.dijkstra_path_length(JpathG, 0, 3)\n print \" shortest path:\", path, pathlength\n except nx.NetworkXNoPath:\n print \" No path for %g -> %g\\n Stopping here\"%(0, 3)\n break\n \n # calculate contribution to flux\n f = bhs.J[path[1],path[0]]\n print \"%2i -> %2i: %10.4e \"%(path[0], path[1], bhs.J[path[1],path[0]])\n path_fluxes = [f]\n for j in range(2, len(path)):\n i = j - 1\n print \"%2i -> %2i: %10.4e %10.4e\"%(path[i], path[j], \\\n bhs.J[path[j],path[i]], \\\n bhs.J[path[j],path[i]]/Jnode[path[i]])\n f *= bhs.J[path[j],path[i]]/Jnode[path[i]]\n path_fluxes.append(bhs.J[path[j],path[i]])\n\n # find bottleneck\n ib = np.argmin(path_fluxes)\n print \"bottleneck: %2i -> %2i\"%(path[ib],path[ib+1])\n \n # remove flux from edges\n for j in range(1,len(path)):\n i = j - 1\n bhs.J[path[j],path[i]] -= f\n \n # numerically there may be some leftover flux in bottleneck\n bhs.J[path[ib+1],path[ib]] = 0.\n \n 
bhs.sum_flux -= f\n print ' flux from path ', path, ': %10.4e'%f\n print ' fluxes', path_fluxes\n print ' leftover flux: %10.4e\\n'%bhs.sum_flux", "So the algorithm works: we have been able to sort the paths based on the amount of flux going through them, which allows us to illustrate the maximum flux paths." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
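The committor calculation that `bhs.run_commit()` performs in the notebook above can be sketched as a standalone snippet. Everything below is illustrative rather than the BestMSM implementation: the 4x4 transition matrix, the state ordering, and the solver choice are hypothetical assumptions made so the code runs on its own. For a discrete-time chain with the source pinned at $p_{fold}=0$ and the target at $p_{fold}=1$, the committor of the intermediate states satisfies a small linear system.

```python
import numpy as np

# Hypothetical row-stochastic transition matrix for a 4-state chain,
# ordered U, I1, I2, F (toy numbers, NOT the BHS parameterization).
T = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.20, 0.60, 0.15, 0.05],
              [0.00, 0.15, 0.65, 0.20],
              [0.00, 0.00, 0.10, 0.90]])

source, target = 0, 3   # U and F
inter = [1, 2]          # intermediate states I1, I2

# Boundary conditions: q[source] = 0, q[target] = 1. For the interior
# states, q_i = sum_j T_ij q_j, which rearranges to
# (I - T_MM) q_M = T_{M,target}.
A = np.eye(len(inter)) - T[np.ix_(inter, inter)]
b = T[inter, target]
q_inter = np.linalg.solve(A, b)

q = np.zeros(T.shape[0])
q[target] = 1.0
q[inter] = q_inter
print(q)  # committor increases monotonically from U towards F
```

For this toy matrix the committor grows from 0 at $U$ to 1 at $F$, with the intermediates ordered along the pathway; swapping source and target yields the complementary committor $1-q$.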
madHatter106/DataScienceCorner
posts/a-bayesian-tutorial-in-python-part-II.ipynb
mit
[ "<center><u><u>Bayesian Modeling for the Busy and the Confused - Part II</u></u></center>\n<center><i>Markov Chain Monte-Carlo</i></center>\nCurrently, the capacity to gather data is far ahead of the ability to generate meaningful insight using conventional approaches. Hopes of alleviating this bottleneck have come through the application of machine learning tools. Among these tools, one that is increasingly gaining traction is probabilistic programming, particularly Bayesian modeling. In this paradigm, variables that are used to define models carry a probabilistic distribution rather than a scalar value. \"Fitting\" a model to data can then, simplistically, be construed as finding the appropriate parameterization for these distributions, given the model structure and the data. This offers a number of advantages over other methods, not the least of which is the estimation of uncertainty around model results. This in turn can better inform subsequent processes, such as decision-making and/or scientific discovery.\n<br><br>\nThe present is the second of a two-notebook series, the subject of which is a brief, basic, but hands-on programmatic introduction to Bayesian modeling. The first notebook covered a few key probability principles relevant to Bayesian inference and illustrated how to put these into practice. In particular, it explained one of the more intuitive approaches to Bayesian computation: Grid Approximation (GA). With that framework, it showed how to create simple models that can be used to interpret and predict real world data. <br>\nGA is computationally intensive and runs into problems quickly when the data set is large and/or the model increases in complexity. One of the more popular solutions to this problem is the use of the Markov Chain Monte-Carlo (MCMC) algorithm. 
The implementation of MCMC in Bayesian models is the subject of this second notebook.\n<br>\nAs of this writing the most popular programming language in machine learning is Python. Python is an easy language to pick up: pedagogical resources abound. Python is free, open source, and a large number of very useful libraries have been written over the years that have propelled it to its current place of prominence in a number of fields, in addition to machine learning.\n<br><br>\nI use Python (3.6+) code to illustrate the mechanics of Bayesian inference in lieu of lengthy explanations. I also use a number of dedicated Python libraries that shorten the code considerably. A solid understanding of Bayesian modeling cannot be spoon-fed and can only come from getting one's hands dirty. Emphasis is therefore on readable, reproducible code. This should ease the work the interested reader has to do to get some practice re-running the notebook and experimenting with some of the coding and Bayesian modeling patterns presented. Some know-how is required regarding installing and running a Python distribution, the required libraries, and Jupyter notebooks; this is easily gleaned from the internet. 
A popular option in the machine learning community is Anaconda.\n<a id='TOP'></a>\nNotebook Contents\n\nBasics: Joint probability, Inverse probability and Bayes' Theorem\nExample: Inferring the Statistical Distribution of Chlorophyll from Data\nGrid Approximation\nImpact of priors\nImpact of data set size\n\n\nMCMC\nPyMC3\n\n\nRegression\nData Preparation\nRegression in PyMC3\nChecking Priors\nModel Fitting\nFlavors of Uncertainty\n\n\n[Final Comments](#Conclusion)", "import pickle\nimport warnings\nimport sys\nfrom IPython.display import Image, HTML\n\nimport pandas as pd\nimport numpy as np\nfrom scipy.stats import norm, uniform\nimport pymc3 as pm\nfrom theano import shared\n\nimport seaborn as sb\nimport matplotlib.pyplot as pl\nfrom matplotlib import rcParams\nfrom matplotlib import ticker as mtick\nimport arviz as ar\n\nprint('Versions:')\nprint('---------')\nprint(f'python: {sys.version.split(\"|\")[0]}')\nprint(f'numpy: {np.__version__}')\nprint(f'pandas: {pd.__version__}')\nprint(f'seaborn: {sb.__version__}')\nprint(f'pymc3: {pm.__version__}')\nprint(f'arviz: {ar.__version__}')\n\n%matplotlib inline\nwarnings.filterwarnings('ignore', category=FutureWarning)", "Under the hood: Inferring chlorophyll distribution\n\n~~Grid approximation: computing probability everywhere~~\n<font color='red'>Magical MCMC: Dealing with computational complexity</font>\nProbabilistic Programming with PyMC3: Industrial grade MCMC\n\nBack to Contents\n<a id=\"MCMC\"></a>\nMagical MCMC: Dealing with computational complexity\n\n\nGrid approximation:\n\nuseful for understanding mechanics of Bayesian computation\ncomputationally intensive\nimpractical and often intractable for large data sets or high-dimensional models\n\n\n\nMCMC allows sampling <u>where it probabilistically matters</u>:\n\ncompute current probability given location in parameter space\npropose jump to new location in parameter space\ncompute new probability at proposed location\njump to new location if 
$\\frac{new\\ probability}{current\\ probability}>1$ \nelse draw $\\gamma \\sim U(0, 1)$ and jump to new location if $\\frac{new\\ probability}{current\\ probability}>\\gamma$\notherwise stay in current location", "def mcmc(data, μ_0=0.5, n_samples=1000):\n print(f'{data.size} data points')\n data = data.reshape(1, -1)\n # set priors\n σ = 0.75 # keep σ fixed for simplicity\n trace_μ = np.nan * np.ones(n_samples) # trace: where the sampler has been\n trace_μ[0] = μ_0 # start with a first guess\n for i in range(1, n_samples):\n proposed_μ = norm.rvs(loc=trace_μ[i-1], scale=0.1, size=1)\n prop_par_dict = dict(μ=proposed_μ, σ=σ)\n curr_par_dict = dict(μ=trace_μ[i-1], σ=σ)\n log_prob_prop = get_log_lik(data, prop_par_dict\n ) + get_log_prior(prop_par_dict)\n log_prob_curr = get_log_lik(data, curr_par_dict\n ) + get_log_prior(curr_par_dict)\n ratio = np.exp(log_prob_prop - log_prob_curr)\n if ratio > 1:\n # accept proposal\n trace_μ[i] = proposed_μ\n else:\n # evaluate low probability proposal\n if uniform.rvs(size=1, loc=0, scale=1) > ratio:\n # reject proposal\n trace_μ[i] = trace_μ[i-1]\n else:\n # accept proposal\n trace_μ[i] = proposed_μ\n return trace_μ\n\ndef get_log_lik(data, param_dict):\n return np.sum(norm.logpdf(data, loc=param_dict['μ'],\n scale=param_dict['σ']\n ),\n axis=1)\n\ndef get_log_prior(par_dict, loc=1, scale=1):\n return norm.logpdf(par_dict['μ'], loc=loc, scale=scale)", "Timing MCMC", "%%time\nmcmc_n_samples = 2000\ntrace1 = mcmc(data=df_data_s.chl_l.values, n_samples=mcmc_n_samples)\n\nf, ax = pl.subplots(nrows=2, figsize=(8, 8))\nax[0].plot(np.arange(mcmc_n_samples), trace1, marker='.',\n ls=':', color='k')\nax[0].set_title('trace of μ, 500 data points')\nax[1].set_title('μ marginal posterior')\npm.plots.kdeplot(trace1, ax=ax[1], label='mcmc',\n color='orange', lw=2, zorder=1)\nax[1].legend(loc='upper left')\nax[1].set_ylim(bottom=0)\ndf_μ = df_grid_3.groupby(['μ']).sum().drop('σ',\n axis=1)[['post_prob']\n ].reset_index()\nax2 = ax[1].twinx()\ndf_μ.plot(x='μ', 
y='post_prob', ax=ax2, color='k',\n label='grid',)\nax2.set_ylim(bottom=0);\nax2.legend(loc='upper right')\nf.tight_layout()\n\nf.savefig('./figJar/Presentation/mcmc_1.svg')", "<img src='./resources/mcmc_1.svg?modified=\"1\"'>", "%%time\nsamples = 2000\ntrace2 = mcmc(data=df_data.chl_l.values, n_samples=samples)\n\nf, ax = pl.subplots(nrows=2, figsize=(8, 8))\nax[0].plot(np.arange(samples), trace2, marker='.',\n ls=':', color='k')\nax[0].set_title(f'trace of μ, {df_data.chl_l.size} data points')\nax[1].set_title('μ marginal posterior')\npm.plots.kdeplot(trace2, ax=ax[1], label='mcmc',\n color='orange', lw=2, zorder=1)\nax[1].legend(loc='upper left')\nax[1].set_ylim(bottom=0)\nf.tight_layout()\nf.savefig('./figJar/Presentation/mcmc_2.svg')", "<img src='./figJar/Presentation/mcmc_2.svg?modified=2'>", "f, ax = pl.subplots(ncols=2, figsize=(12, 5))\nax[0].stem(pm.autocorr(trace1[1500:]))\nax[1].stem(pm.autocorr(trace2[1500:]))\nax[0].set_title(f'{df_data_s.chl_l.size} data points')\nax[1].set_title(f'{df_data.chl_l.size} data points')\nf.suptitle('trace autocorrelation', fontsize=19)\nf.savefig('./figJar/Presentation/grid8.svg')\n\nf, ax = pl.subplots(nrows=2, figsize=(8, 8))\nthinned_trace = np.random.choice(trace2[100:], size=200, replace=False)\nax[0].plot(np.arange(200), thinned_trace, marker='.',\n ls=':', color='k')\nax[0].set_title('thinned trace of μ')\nax[1].set_title('μ marginal posterior')\npm.plots.kdeplot(thinned_trace, ax=ax[1], label='mcmc',\n color='orange', lw=2, zorder=1)\nax[1].legend(loc='upper left')\nax[1].set_ylim(bottom=0)\nf.tight_layout()\nf.savefig('./figJar/Presentation/grid9.svg')\n\nf, ax = pl.subplots()\nax.stem(pm.autocorr(thinned_trace[:20]));\nf.savefig('./figJar/Presentation/stem2.svg', dpi=300, format='svg');", "What's going on?\nHighly autocorrelated trace: <br>\n$\\rightarrow$ inadequate parameter space exploration<br>\n$\\rightarrow$ poor convergence...\nMetropolis MCMC<br>\n $\\rightarrow$ easy to implement + memory 
efficient<br>\n $\\rightarrow$ inefficient parameter space exploration<br>\n $\\rightarrow$ better MCMC sampler?\n\nHamiltonian Monte Carlo (HMC)\nGreatly improved convergence\nWell mixed traces are a signature and an easy diagnostic\nHMC does require a lot of tuning:\n\nNot practical for the inexperienced applied statistician or scientist\n\n\nNo-U-Turn Sampler (NUTS), HMC that automates most tuning steps\n\nNUTS scales well to complex problems with many parameters (1000's)\nImplemented in popular libraries\n\nProbabilistic modeling for the beginner\n\n<font color='red'>Under the hood: Inferring chlorophyll distribution</font>\n~~Grid approximation: computing probability everywhere~~\n~~MCMC: how it works~~\n<font color='red'>Probabilistic Programming with PyMC3: Industrial grade MCMC </font>\n\n\n\nBack to Contents\n<a id='PyMC3'></a> \n<u>Probabilistic Programming with PyMC3</u>\n\nrelatively simple syntax\neasily used in conjunction with mainstream python scientific data structures<br>\n $\\rightarrow$numpy arrays <br>\n $\\rightarrow$pandas dataframes\nmodels of reasonable complexity span ~10-20 lines.", "with pm.Model() as m1:\n μ_ = pm.Normal('μ', mu=1, sd=1)\n σ = pm.Uniform('σ', lower=0, upper=2)\n lkl = pm.Normal('likelihood', mu=μ_, sd=σ,\n observed=df_data.chl_l.dropna())\n\ngraph_m1 = pm.model_to_graphviz(m1)\ngraph_m1.format = 'svg'\ngraph_m1.render('./figJar/Presentation/graph_m1');", "<center>\n<img src=\"./resources/graph_m1.svg\"/>\n</center>", "with m1:\n trace_m1 = pm.sample(2000, tune=1000, chains=4)\n\npm.traceplot(trace_m1);\n\nar.plot_posterior(trace_m1, kind='hist', round_to=2);", "Back to Contents\n<a id='Reg'></a>\n<u><font color='purple'>Tutorial Overview:</font></u>\n\nProbabilistic modeling for the beginner<br>\n $\\rightarrow$~~The basics~~<br>\n $\\rightarrow$~~Starting easy: inferring chlorophyll~~<br>\n <font color='red'>$\\rightarrow$Regression: adding a predictor to estimate chlorophyll</font>\n\nBack to Contents\n<a 
id='DataPrep'></a>\nRegression: Adding a predictor to estimate chlorophyll\n\n<font color=red>Data preparation</font>\nWriting a regression model in PyMC3\nAre my priors making sense?\nModel fitting\nFlavors of uncertainty\n\nLinear regression takes the form\n$$ y = \\alpha + \\beta x $$\nwhere \n $$\\ \\ \\ \\ \\ y = log_{10}(chl)$$ and $$x = log_{10}\\left(\\frac{Gr}{MxBl}\\right)$$", "df_data.head().T\n\ndf_data['Gr-MxBl'] = -1 * df_data['MxBl-Gr']", "Regression coefficients are easier to interpret with a centered predictor:<br><br>\n$$x_c = x - \\bar{x}$$", "df_data['Gr-MxBl_c'] = df_data['Gr-MxBl'] - df_data['Gr-MxBl'].mean()\n\ndf_data[['Gr-MxBl_c', 'chl_l']].info()\n\nx_c = df_data.dropna()['Gr-MxBl_c'].values\ny = df_data.dropna().chl_l.values", "$$ y = \\alpha + \\beta x_c$$<br>\n$\\rightarrow \\alpha=y$ when $x=\\bar{x}$<br>\n$\\rightarrow \\beta=\\Delta y$ when $x$ increases by one unit", "g3 = sb.PairGrid(df_data.loc[:, ['Gr-MxBl_c', 'chl_l']], height=3,\n diag_sharey=False,)\ng3.map_diag(sb.kdeplot, color='k')\ng3.map_offdiag(sb.scatterplot, color='k');\nmake_lower_triangle(g3)\nf = pl.gcf()\naxs = f.get_axes()\nxlabel = r'$log_{10}\\left(\\frac{Rrs_{green}}{max(Rrs_{blue})}\\right), centered$'\nylabel = r'$log_{10}(chl)$'\naxs[0].set_xlabel(xlabel)\naxs[2].set_xlabel(xlabel)\naxs[2].set_ylabel(ylabel)\naxs[3].set_xlabel(ylabel)\nf.tight_layout()\nf.savefig('./figJar/Presentation/pairwise_1.png')", "Back to Contents\n<a id='RegPyMC3'></a>\nRegression: Adding a predictor to estimate chlorophyll\n\n~~Data preparation~~\n<font color=red>Writing a regression model in PyMC3</font>\nAre my priors making sense?\nModel fitting\nFlavors of uncertainty", "with pm.Model() as m_vague_prior:\n # priors\n σ = pm.Uniform('σ', lower=0, upper=2)\n α = pm.Normal('α', mu=0, sd=1)\n β = pm.Normal('β', mu=0, sd=1)\n # deterministic model\n μ = α + β * x_c\n # likelihood\n chl_i = pm.Normal('chl_i', mu=μ, sd=σ, observed=y)", "<center>\n<img 
src=\"./resources/m_vague_graph.svg\"/>\n</center>\nBack to Contents\n<a id='PriorCheck'></a>\nRegression: Adding a predictor to estimate chlorophyll\n\n~~Data preparation~~\n~~Writing a regression model in PyMC3~~\n<font color=red>Are my priors making sense?</font>\nModel fitting \nFlavors of uncertainty", "vague_priors = pm.sample_prior_predictive(samples=500, model=m_vague_prior, vars=['α', 'β',])\n\nx_dummy = np.linspace(-1.5, 1.5, num=50).reshape(-1, 1)\n\nα_prior_vague = vague_priors['α'].reshape(1, -1)\nβ_prior_vague = vague_priors['β'].reshape(1, -1)\nchl_l_prior_μ_vague = α_prior_vague + β_prior_vague * x_dummy\n\nf, ax = pl.subplots(figsize=(6, 5))\nax.plot(x_dummy, chl_l_prior_μ_vague, color='k', alpha=0.1,);\nax.set_xlabel(r'$log_{10}\\left(\\frac{green}{max(blue)}\\right)$, centered')\nax.set_ylabel('$log_{10}(chl)$')\nax.set_title('Vague priors')\nax.set_ylim(-3.5, 3.5)\nf.tight_layout(pad=1)\nf.savefig('./figJar/Presentation/prior_checks_1.png')", "<center>\n<img src='./figJar/Presentation/prior_checks_1.png?modified=3' width=65%>\n</center>", "with pm.Model() as m_informative_prior:\n α = pm.Normal('α', mu=0, sd=0.2)\n β = pm.Normal('β', mu=0, sd=0.5)\n σ = pm.Uniform('σ', lower=0, upper=2)\n μ = α + β * x_c\n chl_i = pm.Normal('chl_i', mu=μ, sd=σ, observed=y)\n\nprior_info = pm.sample_prior_predictive(model=m_informative_prior, vars=['α', 'β'])\n\nα_prior_info = prior_info['α'].reshape(1, -1)\nβ_prior_info = prior_info['β'].reshape(1, -1)\nchl_l_prior_info = α_prior_info + β_prior_info * x_dummy\n\nf, ax = pl.subplots(figsize=(6, 5))\nax.plot(x_dummy, chl_l_prior_info, color='k', alpha=0.1,);\nax.set_xlabel(r'$log_{10}\\left(\\frac{green}{max(blue)}\\right)$, centered')\nax.set_ylabel('$log_{10}(chl)$')\nax.set_title('Weakly informative priors')\nax.set_ylim(-3.5, 3.5)\nf.tight_layout(pad=1)\nf.savefig('./figJar/Presentation/prior_checks_2.png')", "<table>\n <tr>\n <td>\n <img src='./resources/prior_checks_1.png?modif=1' />\n </td>\n <td>\n <img 
src='./resources/prior_checks_2.png?modif=2' />\n </td>\n </tr>\n</table>\n\nBack to Contents\n<a id='Mining'></a>\nRegression: Adding a predictor to estimate chlorophyll\n\n~~Data preparation~~\n~~Writing a regression model in PyMC3~~\n~~Are my priors making sense?~~\n<font color=red>Model fitting</font>\nFlavors of uncertainty", "with m_vague_prior:\n trace_vague = pm.sample(2000, tune=1000, chains=4)\n\nwith m_informative_prior:\n trace_inf = pm.sample(2000, tune=1000, chains=4)\n\nf, axs = pl.subplots(ncols=2, nrows=2, figsize=(12, 7))\nar.plot_posterior(trace_vague, var_names=['α', 'β'], round_to=2, ax=axs[0,:], kind='hist');\nar.plot_posterior(trace_inf, var_names=['α', 'β'], round_to=2, ax=axs[1, :], kind='hist',\n color='brown');\naxs[0,0].tick_params(rotation=20)\naxs[0,0].text(-0.137, 430, 'vague priors',\n fontdict={'fontsize': 15})\naxs[1,0].tick_params(rotation=20)\naxs[1,0].text(-0.137, 430, 'informative priors',\n fontdict={'fontsize': 15})\nf.tight_layout()\nf.savefig('./figJar/Presentation/reg_posteriors.svg')", "<center>\n<img src='./resources/reg_posteriors.svg'/>\n</center>\nBack to Contents\n<a id='UNC'></a>\nRegression: Adding a predictor to estimate chlorophyll\n\n~~Data preparation~~\n~~Writing a regression model in PyMC3~~\n~~Are my priors making sense?~~\n~~Data review and model fitting~~\n<font color=red>Flavors of uncertainty</font>\n\nTwo types of uncertainties:\n1. model uncertainty\n2. 
prediction uncertainty", "α_posterior = trace_inf.get_values('α').reshape(1, -1)\nβ_posterior = trace_inf.get_values('β').reshape(1, -1)\nσ_posterior = trace_inf.get_values('σ').reshape(1, -1)", "model uncertainty: uncertainty around the model mean", "μ_posterior = α_posterior + β_posterior * x_dummy\n\npl.plot(x_dummy, μ_posterior[:, ::16], color='k', alpha=0.1);\npl.plot(x_dummy, μ_posterior[:, 1], color='k', label='model mean')\n\npl.scatter(x_c, y, color='orange', edgecolor='k', alpha=0.5, label='obs'); pl.legend();\npl.ylim(-2.5, 2.5); pl.xlim(-1, 1);\npl.xlabel(r'$log_{10}\\left(\\frac{Gr}{max(Blue)}\\right)$')\npl.ylabel(r'$log_{10}(chlorophyll)$')\nf = pl.gcf()\nf.savefig('./figJar/Presentation/mu_posterior.svg')", "<center>\n <img src='./resources/mu_posterior.svg/'>\n</center> \n\nprediction uncertainty: posterior predictive checks", "ppc = norm.rvs(loc=μ_posterior, scale=σ_posterior);\nci_94_perc = pm.hpd(ppc.T, alpha=0.06);\n\npl.scatter(x_c, y, color='orange', edgecolor='k', alpha=0.5, label='obs'); pl.legend();\npl.plot(x_dummy, ppc.mean(axis=1), color='k', label='mean prediction');\npl.fill_between(x_dummy.flatten(), ci_94_perc[:, 0], ci_94_perc[:, 1], alpha=0.5, color='k',\n label='94% credibility interval:\\n94% chance that prediction\\nwill be in here!');\npl.xlim(-1, 1); pl.ylim(-2.5, 2.5)\npl.legend(fontsize=12, loc='upper left')\nf = pl.gcf()\nf.savefig('./figJar/Presentation/ppc.svg')", "<center>\n <img src='./resources/ppc.svg/' width=\"70%\"/>\n</center> \nBack to Contents\n<a id=\"Conclusion\"></a>\nIn Conclusion Probabilistic Programming provides:\n\nTransparent modeling:\nExplicit assumptions\nEasy to debate/criticize\nEasy to communicate/reproduce/improve upon\n\n\nPosterior distribution much richer construct than point estimates\nPrincipled estimation of model and prediction uncertainty\nAccessibility\nConstantly improving algorithms\nEasy-to-use software\nFlexible framework, largely problem-agnostic\n\n\n\n<table><tr>\n <td><img 
src='./resources/krusche_diagrams_hs_reg.png?modif=2'/></td>\n <td><img src='./resources/krusche_diagrams_BNN.png?modif=1'/></td>\n </tr>\n </table>\n\nBack to Contents\n<a id=\"Next\"></a>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
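To close the loop on the hand-rolled sampler shown in the notebook above, the random-walk Metropolis update can be written as a fully self-contained sketch and checked against the closed-form Normal-Normal conjugate posterior. Everything here is illustrative: the data are synthetic stand-ins for the log-chlorophyll values, and the prior, seed, and step size are assumptions rather than the notebook's settings.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Synthetic stand-in for log10(chl): n draws from N(true_mu, sigma)
sigma, true_mu, n = 0.75, 0.3, 500
data = rng.normal(true_mu, sigma, size=n)

# Prior on mu: N(m0, s0); sigma held fixed, as in the notebook's toy sampler
m0, s0 = 1.0, 1.0

def log_post(mu):
    """Unnormalized log posterior of mu."""
    return norm.logpdf(mu, m0, s0) + norm.logpdf(data, mu, sigma).sum()

# Random-walk Metropolis, caching the current log posterior so each
# iteration evaluates log_post only once
n_samples, step = 5000, 0.1
trace = np.empty(n_samples)
trace[0] = 0.5
lp_curr = log_post(trace[0])
for i in range(1, n_samples):
    prop = trace[i - 1] + step * rng.standard_normal()
    lp_prop = log_post(prop)
    # accept with probability min(1, exp(lp_prop - lp_curr))
    if np.log(rng.random()) < lp_prop - lp_curr:
        trace[i], lp_curr = prop, lp_prop
    else:
        trace[i] = trace[i - 1]

# Conjugate Normal-Normal posterior mean, for comparison
post_mean = (m0 / s0**2 + data.sum() / sigma**2) / (1 / s0**2 + n / sigma**2)
print(trace[1000:].mean(), post_mean)
```

Discarding the first 1000 draws as burn-in, the trace mean should land close to the analytic posterior mean; with 500 observations the data overwhelm the N(1, 1) prior, mirroring the behavior seen in the notebook's second run.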