the-deep-learners/TensorFlow-LiveLessons
notebooks/live_training/deep_net_in_tensorflow_LT.ipynb
mit
[ "Deep Neural Network in TensorFlow\nIn this notebook, we convert our intermediate-depth MNIST-classifying neural network from Keras to TensorFlow (compare them side by side) following Aymeric Damien's Multi-Layer Perceptron Notebook style. \nSubsequently, we add a layer to make it deep! \nLoad dependencies", "import numpy as np\nnp.random.seed(42)\nimport tensorflow as tf\ntf.set_random_seed(42)", "Load data", "from tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)", "Set neural network hyperparameters (tidier at top of file!)", "lr = 0.1\nepochs = 1\nbatch_size = 128\nweight_initializer = tf.contrib.layers.xavier_initializer()", "Set number of neurons for each layer", "n_input = 784\nn_dense_1 = 64\nn_dense_2 = 64\nn_classes = 10", "Define placeholders Tensors for inputs and labels", "x = tf.placeholder(tf.float32, [None, n_input])\ny = tf.placeholder(tf.float32, [None, n_classes])", "Define types of layers", "# dense layer with ReLU activation:\ndef dense(x, W, b):\n # DEFINE\n # DEFINE\n return a", "Define dictionaries for storing weights and biases for each layer -- and initialize", "bias_dict = {\n 'b1': tf.Variable(tf.zeros([n_dense_1])), \n 'b2': tf.Variable(tf.zeros([n_dense_2])),\n 'b_out': tf.Variable(tf.zeros([n_classes]))\n}\n\nweight_dict = {\n 'W1': tf.get_variable('W1', [n_input, n_dense_1], initializer=weight_initializer),\n 'W2': tf.get_variable('W2', [n_dense_1, n_dense_2], initializer=weight_initializer),\n 'W_out': tf.get_variable('W_out', [n_dense_2, n_classes], initializer=weight_initializer)\n}", "Design neural network architecture", "def network(x, weights, biases):\n \n # DEFINE\n \n return out_layer_z", "Build model", "predictions = network(x, weights=weight_dict, biases=bias_dict)", "Define model's loss and its optimizer", "cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=predictions, labels=y))\noptimizer = 
tf.train.GradientDescentOptimizer(learning_rate=lr).minimize(cost)", "Define evaluation metrics", "# calculate accuracy by identifying test cases where the model's highest-probability class matches the true y label: \ncorrect_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(y, 1))\naccuracy_pct = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) * 100", "Create op for variable initialization", "initializer_op = tf.global_variables_initializer()", "Train the network in a session", "with tf.Session() as session:\n session.run(initializer_op)\n \n print(\"Training for\", epochs, \"epochs.\")\n \n # loop over epochs: \n for epoch in range(epochs):\n \n avg_cost = 0.0 # track cost to monitor performance during training\n avg_accuracy_pct = 0.0\n \n # loop over all batches of the epoch:\n n_batches = int(mnist.train.num_examples / batch_size)\n for i in range(n_batches):\n \n batch_x, batch_y = mnist.train.next_batch(batch_size)\n \n # feed batch data to run optimization and fetching cost and accuracy: \n _, batch_cost, batch_acc = session.run([optimizer, cost, accuracy_pct], \n feed_dict={x: batch_x, y: batch_y})\n \n # accumulate mean loss and accuracy over epoch: \n avg_cost += batch_cost / n_batches\n avg_accuracy_pct += batch_acc / n_batches\n \n # output logs at end of each epoch of training:\n print(\"Epoch \", '%03d' % (epoch+1), \n \": cost = \", '{:.3f}'.format(avg_cost), \n \", accuracy = \", '{:.2f}'.format(avg_accuracy_pct), \"%\", \n sep='')\n \n print(\"Training Complete. Testing Model.\\n\")\n \n test_cost = cost.eval({x: mnist.test.images, y: mnist.test.labels})\n test_accuracy_pct = accuracy_pct.eval({x: mnist.test.images, y: mnist.test.labels})\n \n print(\"Test Cost:\", '{:.3f}'.format(test_cost))\n print(\"Test Accuracy: \", '{:.2f}'.format(test_accuracy_pct), \"%\", sep='')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
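The `dense()` and `network()` cells above are deliberately left as `# DEFINE` blanks for the live training. The math they are meant to implement — a fully connected layer `a = relu(xW + b)`, stacked twice, followed by a linear output layer producing logits — can be sketched in plain NumPy so it runs without TensorFlow 1.x. This is a minimal sketch of one plausible completion, not the notebook's official solution; the random-weight setup below is illustrative only:

```python
import numpy as np

def dense(x, W, b):
    # z = xW + b, then ReLU activation (the work the notebook's # DEFINE blanks leave out)
    z = x @ W + b
    return np.maximum(z, 0.0)

def network(x, weights, biases):
    # two hidden dense layers, then a linear output layer (logits, no softmax)
    dense_1 = dense(x, weights['W1'], biases['b1'])
    dense_2 = dense(dense_1, weights['W2'], biases['b2'])
    out_layer_z = dense_2 @ weights['W_out'] + biases['b_out']
    return out_layer_z

# Illustrative weight/bias dictionaries matching the notebook's layer sizes
rng = np.random.default_rng(42)
n_input, n_dense_1, n_dense_2, n_classes = 784, 64, 64, 10
weights = {'W1': rng.normal(size=(n_input, n_dense_1)) * 0.05,
           'W2': rng.normal(size=(n_dense_1, n_dense_2)) * 0.05,
           'W_out': rng.normal(size=(n_dense_2, n_classes)) * 0.05}
biases = {'b1': np.zeros(n_dense_1), 'b2': np.zeros(n_dense_2),
          'b_out': np.zeros(n_classes)}

logits = network(rng.normal(size=(128, n_input)), weights, biases)
print(logits.shape)  # one row of 10 logits per input image: (128, 10)
```

In the notebook itself the same forward pass would use `tf.matmul`/`tf.add` and `tf.nn.relu` on the placeholder tensors.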
charlesreid1/empirical-model-building
ipython/Factorial - Two-Level Three-Factor Design.ipynb
mit
[ "A Two-Level, Three-Factor Full Factorial Design\n<br />\n<br />\n<br />\nTable of Contents\n\nIntroduction\nFactorial Experimental Design:\nTwo-Level Three-Factor Full Factorial Design\nDesign of the Experiment\nInputs and Responses\nEffects and Interactions:\nComputing Main Effects\nAnalyzing Main Effects\nTwo Way Interactions\nAnalyzing Two Way Interactions\nThree Way Interactions\nAnalyzing Three Way Interactions\nFitting a Polynomial Response Surface\nUncertainty:\nThe Impact of Uncertainty\nUncertainty Quantification: A Factory Example\nUncertainty Numbers\nUncertainty Measurements\nAccounting for Uncertainty in the Model\nDiscussion\nConclusion\n\n<br />\n<br />\n<br />\n<a name=\"intro\"></a>\nIntroduction\nAs with other notebooks in this repository, this notebook follows, more or less closely, content from Box and Draper's Empirical Model-Building and Response Surfaces (Wiley, 1984). This content is covered by Chapter 4 of Box and Draper.\nIn this notebook, we'll carry out an analysis of a full factorial design, and show how we can obtain information about a system and its responses, and a quantifiable range of certainty about those values. This is the fundamental idea behind empirical model-building and allows us to construct cheap and simple models to represent complex, nonlinear systems.\nOnce we've nailed this down for simple models and small numbers of inputs and responses, we can expand on it, use more complex models, and link this material with machine learning algorithms.\nWe'll start by importing numpy for numerical analysis, and pandas for convenient data containers.", "import pandas as pd\nimport numpy as np\nfrom numpy.random import rand", "Box and Draper cover different experimental design methods in the book, but begin with the simplest type of factorial design in Chapter 4: a full factorial design with two levels. 
A factorial experimental design is appropriate for exploratory stages, when the effects of variables or their interactions on a system response are poorly understood or not quantifiable.\n<a name=\"twolevelfactorial\"></a>\nTwo-Level Three-Factor Full Factorial Design\nThe analysis begins with a two-level, three-variable experimental design - also written $2^3$, with $n=2$ levels for each factor, $k=3$ different factors. We start by encoding each of the three variables to something generic: $(x_1,x_2,x_3)$. A dataframe with input variable values is then populated.", "inputs_labels = {'x1' : 'Length of specimen (mm)',\n 'x2' : 'Amplitude of load cycle (mm)',\n 'x3' : 'Load (g)'}\n\ndat = [('x1',250,350),\n ('x2',8,10),\n ('x3',40,50)]\n\ninputs_df = pd.DataFrame(dat,columns=['index','low','high'])\ninputs_df = inputs_df.set_index(['index'])\ninputs_df['label'] = inputs_df.index.map( lambda z : inputs_labels[z] )\n\ninputs_df", "Next, we encode the variable values. For an arbitrary variable value $\\phi_i$, the value of the variable can be coded to be between -1 and 1 according to the formula:\n$$\nx_i = \\dfrac{ \\phi_i - \\mbox{avg }(\\phi) }{ \\mbox{span }(\\phi) }\n$$\nwhere the average and the span of the variable $\\phi_i$ are defined as:\n$$\n\\mbox{avg }(\\phi) = \\left( \\dfrac{ \\phi_{\\text{high}} + \\phi_{\\text{low}} }{2} \\right)\n$$\n$$\n\\mbox{span }(\\phi) = \\left( \\dfrac{ \\phi_{\\text{high}} - \\phi_{\\text{low}} }{2} \\right)\n$$", "inputs_df['average'] = inputs_df.apply( lambda z : ( z['high'] + z['low'])/2 , axis=1)\ninputs_df['span'] = inputs_df.apply( lambda z : ( z['high'] - z['low'])/2 , axis=1)\n\ninputs_df['encoded_low'] = inputs_df.apply( lambda z : ( z['low'] - z['average'] )/( z['span'] ), axis=1)\ninputs_df['encoded_high'] = inputs_df.apply( lambda z : ( z['high'] - z['average'] )/( z['span'] ), axis=1)\n\ninputs_df = inputs_df.drop(['average','span'],axis=1)\n\ninputs_df", "<a name=\"designexperiment\"></a>\nDesign of the Experiment\nWhile 
everything preceding this point is important to state, to make sure we're being consistent and clear about our problem statement and assumptions, nothing preceding this point is particularly important to understanding how experimental design works. This is simply illustrating the process of transforming one's problem from a problem-specific problem space to a more general problem space.\n<a name=\"inputs_responses\"></a>\nInputs and Responses\nBox and Draper present the results (observed outcomes) of a $2^3$ factorial experiment. The $2^3$ comes from the fact that there are 2 levels for each variable (-1 and 1) and three variables (x1, x2, and x3). The observed, or output, variable is the number of cycles to failure for a particular piece of machinery; this variable is more conveniently cast as a logarithm, as it can be a very large number.\nEach observation data point consists of three input variable values and an output variable value, $(x_1, x_2, x_3, y)$, and can be thought of as a point in 3D space $(x_1,x_2,x_3)$ with an associated point value of $y$. Alternatively, this might be thought of as a point in 4D space (the first three dimensions are the location in 3D space where the point will appear, and the $y$ value is when it will actually appear). 
\nThe input variable values consist of all possible input value combinations, which we can produce using the itertools module:", "import itertools\nencoded_inputs = list( itertools.product([-1,1],[-1,1],[-1,1]) )\nencoded_inputs", "Now we implement the observed outcomes; as we mentioned, these numbers are large (hundreds or thousands of cycles), and are more conveniently scaled by taking $\\log_{10}()$ (which will rescale them to values between 1 and 4).", "results = [(-1, -1, -1, 674),\n ( 1, -1, -1, 3636),\n (-1, 1, -1, 170),\n ( 1, 1, -1, 1140),\n (-1, -1, 1, 292),\n ( 1, -1, 1, 2000),\n (-1, 1, 1, 90),\n ( 1, 1, 1, 360)]\n\nresults_df = pd.DataFrame(results,columns=['x1','x2','x3','y'])\nresults_df['logy'] = results_df['y'].map( lambda z : np.log10(z) )\nresults_df", "The variable inputs_df contains all input variables for the experiment design, and results_df contains the inputs and responses for the experiment design; these variables are the encoded levels. To obtain the original, unscaled values, which allows us to check what experiments must be run, we can always convert the dataframe back to its originals by defining a function to un-apply the scaling equation. This is as simple as mapping each encoded level (-1 or 1) back to the corresponding low or high value:", "real_experiment = results_df\n\nvar_labels = []\nfor var in ['x1','x2','x3']:\n var_label = inputs_df.loc[var]['label']\n var_labels.append(var_label)\n real_experiment[var_label] = results_df.apply(\n lambda z : inputs_df.loc[var]['low'] if z[var]<0 else inputs_df.loc[var]['high'] , \n axis=1)\n\nprint(\"The values of each real variable in the experiment:\")\nreal_experiment[var_labels]\n", "<a name=\"computing_main_effects\"></a>\nComputing Main Effects\nNow we compute the main effects of each variable using the results of the experimental design. 
We'll use some shorthand Pandas functions to compute these averages: the groupby function, which groups rows of a dataframe according to some condition (in this case, the value of our variable of interest $x_i$).", "# Compute the mean effect of the factor on the response,\n# conditioned on each variable\nlabels = ['x1','x2','x3']\n\nmain_effects = {}\nfor key in labels:\n \n effects = results_df.groupby(key)['logy'].mean()\n\n main_effects[key] = sum( [i*effects[i] for i in [-1,1]] )\n\nmain_effects", "<a name=\"analyzing_main_effects\"></a>\nAnalyzing Main Effects\nThe main effect of a given variable (as defined by Yates 1937) is the average difference in the level of response as the input variable moves from the low to the high level. If there are other variables, the change in the level of response is averaged over all combinations of the other variables.\nNow that we've computed the main effects, we can analyze the results to glean some meaningful information about our system. The first variable x1 has a positive effect of 0.74 - this indicates that when x1 goes from its low level to its high level, it increases the value of the response (the lifetime of the equipment). This means x1 should be increased, if we want to make our equipment last longer. Furthermore, this effect was the largest, meaning it's the variable we should consider changing first. \nThis might be the case if, for example, changing the value of the input variables were capital-intensive. A company might decide that they can only afford to change one variable, x1, x2, or x3. If this were the case, increasing x1 would be the way to go.\nIn contrast, increasing the variables x2 and x3 will result in a decrease in the lifespan of our equipment (makes the response smaller), since these have a negative main effect. 
These variables should be kept at their lower levels, or decreased, to increase the lifespan of the equipment.\n<a name=\"twowayinteractions\"></a>\nTwo-Way Interactions\nIn addition to main effects, a factorial design will also reveal interaction effects between variables - both two-way interactions and three-way interactions. We can use the itertools library to compute the interaction effects using the results from the factorial design.\nWe'll use the Pandas groupby function again, grouping by two variables this time.", "import itertools\n\ntwoway_labels = list(itertools.combinations(labels, 2))\n\ntwoway_effects = {}\nfor key in twoway_labels:\n \n effects = results_df.groupby(key)['logy'].mean()\n \n twoway_effects[key] = sum([ i*j*effects[i][j]/2 for i in [-1,1] for j in [-1,1] ])\n\n # This somewhat hairy one-liner takes the mean of a set of sum-differences\n #twoway_effects[key] = mean([ sum([ i*effects[i][j] for i in [-1,1] ]) for j in [-1,1] ])\n\ntwoway_effects", "This one-liner is a bit hairy:\ntwoway_effects[key] = sum([ i*j*effects[i][j]/2 for i in [-1,1] for j in [-1,1] ])\nWhat this does is, computes the two-way variable effect with a multi-step calculation, but does it with a list comprehension. First, let's just look at this part:\ni*j*effects[i][j]/2 for i in [-1,1] for j in [-1,1]\nThis computes the prefix i*j, which determines if the interaction effect effects[i][j] is positive or negative. We're also looping over one additional dimension; we multiply by 1/2 for each additional dimension we loop over. 
These are all summed up to yield the final interaction effect for every combination of the input variables.\nIf we were computing three-way interaction effects, we would have a similar-looking one-liner, but with i, j, and k:\ni*j*k*effects[i][j][k]/4 for i in [-1,1] for j in [-1,1] for k in [-1,1]\n<a name=\"analyzing_twowayinteractions\"></a>\nAnalyzing Two-Way Interactions\nAs with main effects, we can analyze the results of the interaction effects analysis to come to some useful conclusions about our physical system. A two-way interaction is a measure of how the main effect of one variable changes as the level of another variable changes. A negative two-way interaction between $x_2$ and $x_3$ means that if we increase $x_3$, the main effect of $x_2$ will be to decrease the response; or, alternatively, if we increase $x_2$, the main effect of $x_3$ will be to decrease the response.\nIn this case, we see that the $x_2-x_3$ interaction effect is the largest, and it is negative. This means that if we decrease both $x_2$ and $x_3$, it will increase our response - make the equipment last longer. In fact, all of the variable interactions have the same result - increasing both variables will decrease the lifetime of the equipment - which indicates that any gains in equipment lifetime accomplished by increasing $x_1$ will be nullified by increases to $x_2$ or $x_3$, since these variables will interact.\nOnce again, if we are limited in the changes that we can actually make to the equipment and input levels, we would want to keep $x_2$ and $x_3$ both at their low levels to keep the response variable value as high as possible.\n<a name=\"threewayinteractions\"></a>\nThree-Way Interactions\nNow let's compute the three-way effects (in this case, we can only have one three-way effect, since we only have three variables). We'll start by using the itertools library again, to create a tuple listing the three variables whose interactions we're computing. 
Then we'll use the Pandas groupby() feature to partition each output according to its inputs, and use it to compute the three-way effects.", "import itertools\n\nthreeway_labels = list(itertools.combinations(labels, 3))\n\nthreeway_effects = {}\nfor key in threeway_labels:\n \n effects = results_df.groupby(key)['logy'].mean()\n \n threeway_effects[key] = sum([ i*j*k*effects[i][j][k]/4 for i in [-1,1] for j in [-1,1] for k in [-1,1] ])\n\nthreeway_effects", "<a name=\"analyzing_threewayinteractions\"></a>\nAnalysis of Three-Way Effects\nWhile three-way interactions are relatively rare, typically smaller, and harder to interpret, a negative three-way interaction essentially means that increasing these variables, all together, will lead to interactions which lower the response (the lifespan of the equipment) by 0.082 in $\\log_{10}$ cycles - a factor of roughly $10^{-0.082} \\approx 0.83$ in cycles to failure. However, this effect is very weak compared to the main effects and two-way interactions.\n<a name=\"fitting_responsesurface\"></a>\nFitting a Polynomial Response Surface\nWhile identifying general trends and the effects of different input variables on a system response is useful, it's more useful to have a mathematical model for the system. The factorial design we used is designed to get us coefficients for a linear model $\\hat{y}$ that is a linear function of input variables $x_i$, and that predicts the actual system response $y$:\n$$\n\\hat{y} = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 + a_{12} x_1 x_2 + a_{13} x_1 x_3 + a_{23} x_2 x_3 + a_{123} x_1 x_2 x_3 \n$$\nTo determine these coefficients, we can use the effects we computed above. When we computed effects, we defined them as measuring the difference in the system response that changing a variable from -1 to +1 would have. 
Because this quantifies the change per two units of x, and the coefficients of a polynomial quantify the change per one unit of x, the effect must be divided by two.", "s = \"yhat = \"\n\ns += \"%0.3f \"%(results_df['logy'].mean())\n\n# loop over the factors in a fixed order (x1, x2, x3), since dict key order is not guaranteed\nfor i,k in enumerate(labels):\n if(main_effects[k]<0):\n s += \"%0.3f x%d \"%( main_effects[k]/2.0, i+1 )\n else:\n s += \"+ %0.3f x%d \"%( main_effects[k]/2.0, i+1 )\n\nprint(s)\n", "Thus, the final result of the experimental design matrix and the 8 experiments that were run is the following polynomial for $\\hat{y}$, which is a model for $y$, the system response:\n$$\n\\hat{y} = 2.744 + 0.375 x_1 - 0.295 x_2 - 0.175 x_3\n$$\n<a name=\"uncertainty\"></a>\nThe Impact of Uncertainty\nThe main and interaction effects give us a more quantitative idea of what variables are important, yes. They can also be important for identifying where a model can be improved (if an input is linked strongly to a system response, more effort should be spent understanding the nature of the relationship). \nBut there are still some practical considerations missing from the implementation above. Specifically, in the real world it is impossible to know the system response, $y$, perfectly. Rather, we may measure the response with an instrument whose uncertainty has been quantified, or we may measure a quantity multiple times (or both). How do we determine the impact of that uncertainty on the model?\nUltimately, factorial designs are based on the underlying assumption that the response $y$ is a linear function of the inputs $x_i$. Thus, for the three-factor full factorial experiment design, we are collecting data and running experiments in such a way that we obtain a model $\\hat{y}$ for our system response $y$, and $\\hat{y}$ is a linear function of each factor:\n$$\n\\hat{y} = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3\n$$\nThe experiment design allows us to obtain a value for each coefficient $a_0$, $a_1$, etc. 
that will fit $\\hat{y}$ to $y$ to the best of its abilities. \nThus, uncertainty in the measured responses $y$ propagates into the linear model in the form of uncertainty in the coefficients $a_0$, $a_1$, etc.\n<a name=\"uncertainty_example\"></a>\nUncertainty Quantification: Factory Example\nFor example, suppose that we're dealing with a machine on a factory floor, and we're measuring the system response $y$, which is a machine failure. Now, how do we know if a machine has failed? Perhaps we can't see its internals, and it still makes noise. We might find out that a machine has failed by seeing it emit smoke. But sometimes, machines will emit smoke before they fail, while other times, machines will only smoke after they've failed. We don't know exactly how many life cycles the machines went through, but we can quantify what we know. We can measure the mean $\\overline{y}$ and variance $\\sigma^2$ in a controlled setting, so that when a machine starts smoking, we have a probability distribution assigning probabilities to different times of failure (i.e., there is a 5% chance it failed more than 1 hour ago).\nOnce we obtain the variance, or $\\sigma^2$, we can obtain the value of $\\sigma$, which represents the distribution of uncertainty. Assuming 2 sigma is acceptable (covers 95% of cases), we can add or subtract $2\\sigma$ from the estimate of parameters.\n<a name=\"uncertainty_numbers\"></a>\nUncertainty Numbers\nTo obtain an estimate of the uncertainty, the experimentalist will typically make several measurements at the center point, that is, where all parameter levels are 0. The more samples are taken at this condition, the better characterized the distribution of uncertainty becomes. These center point samples can be used to construct a Gaussian probability distribution function, which yields a variance, $\\sigma^2$ (or, to be proper, an estimate $s^2$ of the real variance $\\sigma^2$). 
This parameter is key for quantifying uncertainty.\n<a name=\"uncertainty_measurements\"></a>\nUsing Uncertainty Measurements\nSuppose we measure $s^2 = 0.0050$. Now what?\nNow we can obtain the variance of all measurements, and the variance in the effects that we computed above. These are computed via:\n$$\nVar_{mean} = V(\\overline{y}) = \\dfrac{\\sigma^2}{2^k} \\\nVar_{effect} = \\dfrac{4 \\sigma^2}{2^k}\n$$", "sigmasquared = 0.0050\nk = len(inputs_df.index)\nVmean = (sigmasquared)/(2**k)\nVeffect = (4*sigmasquared)/(2**k)\nprint(\"Variance in mean: %0.6f\"%(Vmean))\nprint(\"Variance in effects: %0.6f\"%(Veffect))", "Alternatively, if the responses $y$ are actually averages of a given number $r$ of $y$-observations, $\\overline{y}$, then the variance will shrink:\n$$\nVar_{mean} = \\dfrac{\\sigma^2}{r 2^k} \\\nVar_{effect} = \\dfrac{4 \\sigma^2}{r 2^k}\n$$\nThe variance gives us an estimate of sigma squared, and if we have sigma squared we can obtain sigma. Sigma represents the spread of probable response values: a range of $\\hat{y} \\pm \\sigma$ captures 1 sigma, or about 68%, of the probable values of $y$. \nWidening the range to $\\hat{y} \\pm 2\\sigma$ captures about 95% of the probable values of $y$.\nTaking the square root of the variance gives $\\sigma$:", "print(np.sqrt(Vmean))\nprint(np.sqrt(Veffect))", "<a name=\"uncertainty_accounting\"></a>\nAccounting for Uncertainty in the Model\nNow we can convert the values of the effects, and the values of $\\sigma$, to values for the final linear model:\n$$\n\\hat{y} = a_0 + a_1 x_1 + a_2 x_2 + a_3 x_3 + a_{12} x_1 x_2 + a_{13} x_1 x_3 + a_{23} x_2 x_3 + a_{123} x_1 x_2 x_3 \n$$\nWe begin with the case where each variable value is at its middle point (all non-constant terms are 0), and \n$$\n\\hat{y} = a_0\n$$\nIn this case, the standard error is $\\pm \\sigma$ as computed for the mean (or overall) system response,\n$$\n\\hat{y} = a_0 \\pm \\sigma_{mean}\n$$\nwhere $\\sigma_{mean} = \\sqrt{Var(mean)}$.", "unc_a_0 = np.sqrt(Vmean)\nprint(unc_a_0)", "The final polynomial model for our system response prediction $\\hat{y}$ therefore becomes:\n$$\n\\hat{y} = ( 2.744 \\pm 0.025 ) + ( 0.375 \\pm 0.025 ) x_1 - (0.295 \\pm 0.025) x_2 - (0.175 \\pm 0.025) x_3\n$$\n<a name=\"discussion\"></a>\nDiscussion\nAt this point, we would usually dive deeper into the details of the actual problem of interest. By tying the empirical model to the system, we can draw conclusions about the physical system - for example, if we were analyzing a chemically reacting process, and we found the response to be particularly sensitive to temperature, it would indicate that the chemical reaction is sensitive to temperature, and that the reaction should be studied more deeply (in isolation from the more complicated system) to better understand the impact of temperature on the response.\nIt's also valuable to explore the linear model that we obtained more deeply, by looking at contours of the response surface, taking first derivatives, and optimizing the input variable values to maximize or minimize the response value. 
We'll leave those tasks for later, and illustrate them in later notebooks.\nAt this point we have accomplished the goal of illustrating the design, execution, and analysis of a two-level, three-factor full factorial experimental design, so we'll leave things at that.\n<a name=\"conclusion\"></a>\nConclusion\nIn this notebook, we've covered a 2-level, three-factor factorial design from start to finish, including incorporation of uncertainty information. The design of the experiment was made simple by using the itertools and pandas libraries, and we showed how to transform variables to have low and high levels, as well as demonstrating a system response transformation. The results were analyzed to obtain a linear polynomial model.\nHowever, this process was a bit cumbersome. What we'll see in later notebooks is that we can use Python modules designed for statistical modeling to fit linear models to data using least squares and regression, and carry the analysis further." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
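The main-effect computation in the notebook above can be checked by hand from the tabulated results: for each factor, average $\log_{10} y$ at the high level, average it at the low level, and take the difference. Here is a standalone sketch using only the data printed in the notebook (plain Python, no pandas; the helper name `main_effect` is illustrative):

```python
import math

# (x1, x2, x3, cycles-to-failure) rows from the notebook's results table
results = [(-1, -1, -1, 674), (1, -1, -1, 3636), (-1, 1, -1, 170),
           (1, 1, -1, 1140), (-1, -1, 1, 292), (1, -1, 1, 2000),
           (-1, 1, 1, 90), (1, 1, 1, 360)]

def main_effect(var):
    # mean(log10 y | x_var = +1) minus mean(log10 y | x_var = -1)
    hi = [math.log10(r[3]) for r in results if r[var] == 1]
    lo = [math.log10(r[3]) for r in results if r[var] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {'x1': main_effect(0), 'x2': main_effect(1), 'x3': main_effect(2)}
print({k: round(v, 3) for k, v in effects.items()})
# x1 should come out near +0.75, with x2 and x3 negative, matching the analysis
```

Dividing each of these effects by two gives the coefficients of the fitted linear model.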
thomasantony/CarND-Projects
Exercises/Term1/TensorFlow-L2/solutions.ipynb
mit
[ "Solutions\nProblem 1\nImplement the Min-Max scaling function ($X'=a+{\\frac {\\left(X-X_{\\min }\\right)\\left(b-a\\right)}{X_{\\max }-X_{\\min }}}$) with the parameters:\n$X_{\\min }=0$\n$X_{\\max }=255$\n$a=0.1$\n$b=0.9$", "# Problem 1 - Implement Min-Max scaling for greyscale image data\ndef normalize_greyscale(image_data):\n \"\"\"\n Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]\n :param image_data: The image data to be normalized\n :return: Normalized image data\n \"\"\"\n a = 0.1\n b = 0.9\n greyscale_min = 0\n greyscale_max = 255\n return a + ( ( (image_data - greyscale_min)*(b - a) )/( greyscale_max - greyscale_min ) )", "Problem 2\n\nUse tf.placeholder() for features and labels since they are the inputs to the model.\nAny math operations must have the same type on both sides of the operator. The weights are float32, so the features and labels must also be float32.\nUse tf.Variable() to allow weights and biases to be modified.\nThe weights must be the dimensions of features by labels. The number of features is the size of the image, 28*28=784. The size of labels is 10.\nThe biases must be the dimensions of the labels, which is 10.", "features_count = 784\nlabels_count = 10\n\n# Problem 2 - Set the features and labels tensors\nfeatures = tf.placeholder(tf.float32)\nlabels = tf.placeholder(tf.float32)\n\n# Problem 2 - Set the weights and biases tensors\nweights = tf.Variable(tf.truncated_normal((features_count, labels_count)))\nbiases = tf.Variable(tf.zeros(labels_count))", "Problem 3\nConfiguration 1\n* Epochs: 1\n* Batch Size: 50\n* Learning Rate: 0.01\nConfiguration 2\n* Epochs: 1\n* Batch Size: 100\n* Learning Rate: 0.1\nConfiguration 3\n* Epochs: 4 or 5\n* Batch Size: 100\n* Learning Rate: 0.2" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
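The Min-Max scaling formula in Problem 1 is easy to sanity-check numerically: the endpoints of the input range should map exactly to $a$ and $b$, and the midpoint to their average. A small standalone check (NumPy stands in for the image data; the keyword defaults here are illustrative, not part of the original solution signature):

```python
import numpy as np

def normalize_greyscale(image_data, a=0.1, b=0.9, x_min=0, x_max=255):
    # X' = a + (X - X_min)(b - a) / (X_max - X_min)
    return a + (image_data - x_min) * (b - a) / (x_max - x_min)

pixels = np.array([0.0, 127.5, 255.0])
print(normalize_greyscale(pixels))  # endpoints map to 0.1 and 0.9, midpoint to 0.5
```

Checks like this catch the classic mistake of swapping $a$ and $b$ or forgetting the offset term.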
EmissionsIndex/Emissions-Index
Code/EPA Emissions data.ipynb
gpl-3.0
[ "Download hourly EPA emissions", "import io, time, json\nimport requests\nfrom bs4 import BeautifulSoup\nimport pandas as pd\nimport urllib, urllib2\nimport re\nimport os\nimport numpy as np\nimport ftplib\nfrom ftplib import FTP\nimport timeit", "Final script\nThis downloads hourly data from the ftp server over a range of years, and saves all of the file names/last update times in a list. The downloads can take some time depending on how much data is being retrieved.\nSome of the code below assumes that we only need to retrieve new or modified files. If you are retrieving this data for the first time, create an empty dataframe named already_downloaded with column names file name and last updated.", "# Replace the filename with whatever csv stores already downloaded file info\npath = os.path.join('EPA downloads', 'name_time 2015-2016.csv')\nalready_downloaded = pd.read_csv(path, parse_dates=['last updated'])\n\n# Uncomment the line below to create an empty dataframe\n# already_downloaded = pd.DataFrame(columns=['file name', 'last updated'])\n\nalready_downloaded.head()\n\n# Timestamp\nstart_time = timeit.default_timer()\nname_time_list = []\n# Open ftp connection and navigate to the correct folder\nprint 'Opening ftp connection'\nftp = FTP('ftp.epa.gov')\nftp.login()\nftp.cwd('/dmdnload/emissions/hourly/monthly')\n\nfor year in [2015, 2016, 2017]:\n print year\n\n year_str = str(year)\n print 'Change directory to', year_str\n try:\n ftp.cwd(year_str)\n except ftplib.all_errors as e:\n print e\n break\n \n # Use ftplib to get the list of filenames\n print 'Fetch filenames'\n fnames = ftp.nlst()\n \n # Create new directory path if it doesn't exist\n new_path = os.path.join('EPA downloads', year_str)\n try:\n os.mkdir(new_path)\n except:\n pass\n \n \n # Look for files without _HLD in the name\n name_list = []\n time_list = []\n print 'Find filenames without _HLD and time last updated'\n for name in fnames:\n if '_HLD' not in name:\n try:\n # The ftp command 
\"MDTM\" asks what time a file was last modified\n # It returns a code and the date/time\n # If the file name isn't already downloaded, or the time isn't the same\n tm = pd.to_datetime(ftp.sendcmd('MDTM '+ name).split()[-1])\n if name not in already_downloaded['file name'].values:\n \n time_list.append(tm)\n name_list.append(name)\n elif already_downloaded.loc[already_downloaded['file name']==name, 'last updated'].values[0] != tm:\n tm = ftp.sendcmd('MDTM '+ name)\n time_list.append(pd.to_datetime(tm.split()[-1]))\n name_list.append(name)\n except ftplib.all_errors as e:\n print e\n # If ftp.sendcmd didn't work, assume the connection was lost\n ftp = FTP('ftp.epa.gov')\n ftp.login()\n ftp.cwd('/dmdnload/emissions/hourly/monthly')\n ftp.cwd(year_str)\n tm = ftp.sendcmd('MDTM '+ name)\n time_list.append(pd.to_datetime(tm.split()[-1]))\n name_list.append(name)\n \n \n # Store all filenames and update times\n print 'Store names and update times'\n name_time_list.extend(zip(name_list, time_list))\n \n # Download and store data\n print 'Downloading data'\n for name in name_list:\n try:\n with open(os.path.join('EPA downloads', year_str, name), 'wb') as f:\n ftp.retrbinary('RETR %s' % name, f.write)\n except ftplib.all_errors as e:\n print e\n try:\n ftp.quit()\n except ftplib.all_errors as e:\n print e\n pass\n ftp = FTP('ftp.epa.gov')\n ftp.login()\n ftp.cwd('/dmdnload/emissions/hourly/monthly')\n ftp.cwd(year_str)\n with open(os.path.join('EPA downloads', year_str, name), 'wb') as f:\n ftp.retrbinary('RETR %s' % name, f.write)\n\n print 'Download finished'\n print round((timeit.default_timer() - start_time)/60.0,2), 'min so far'\n \n # Go back up a level on the ftp server\n ftp.cwd('..')\n \n# Timestamp\nelapsed = round((timeit.default_timer() - start_time)/60.0,2)\n\nprint 'Data download completed in %s mins' %(elapsed)", "Export file names and update timestamp", "name_time_df = pd.DataFrame(name_time_list, columns=['file name', 'last 
updated'])\n\nname_time_df.head()\n\nlen(name_time_df)\n\npath = os.path.join('EPA downloads', 'name_time 2015-2016.csv')\nname_time_df.to_csv(path, index=False)", "Check number of columns and column names", "import csv\nimport zipfile\nimport StringIO\nfrom collections import Counter", "This takes ~100 ms per file.", "base_path = 'EPA downloads'\nnum_cols = {}\ncol_names = {}\nfor year in range(2001, 2017):\n\n n_cols_list = []\n col_name_list = []\n path = os.path.join(base_path, str(year))\n fnames = os.listdir(path)\n \n for name in fnames:\n csv_name = name.split('.')[0] + '.csv'\n fullpath = os.path.join(path, name)\n filehandle = open(fullpath, 'rb')\n zfile = zipfile.ZipFile(filehandle)\n data = StringIO.StringIO(zfile.read(csv_name)) #don't forget this line!\n reader = csv.reader(data)\n columns = reader.next()\n \n # Add the column names to the large list\n col_name_list.extend(columns)\n # Add the number of columns to the list\n n_cols_list.append(len(columns))\n \n col_names[year] = Counter(col_name_list)\n num_cols[year] = Counter(n_cols_list)\n ", "From the table below, recent years always have the units after an emission name. Before 2009 some files have the units and some don't. UNITID is consistent through all years, but UNIT_ID was added in after 2008 (not the same thing).", "pd.DataFrame(col_names)\n\npd.DataFrame(col_names).index\n\npd.DataFrame(num_cols)", "Correct column names and export all files (1 file per year)\nAlso convert the OP_DATE column to a datetime object\nUsing joblib for this.\nJoblib on windows requires the if name == 'main': statement. And in a Jupyter notebook the function needs to be imported from an external script. I probably should have done the parallel part at a higher level - the longest part is saving the csv files. 
Could use this method - disable a check - to speed up the process.\nJoblib has to be at least version 10.0, which is only available through pip - got some errors when using the version installed by conda.\nCreate a dictionary mapping column names. Any values on the left (keys) should be replaced by values on the right (values).", "col_name_map = {'CO2_MASS' : 'CO2_MASS (tons)',\n 'CO2_RATE' : 'CO2_RATE (tons/mmBtu)',\n 'GLOAD' : 'GLOAD (MW)',\n 'HEAT_INPUT' : 'HEAT_INPUT (mmBtu)',\n 'NOX_MASS' : 'NOX_MASS (lbs)',\n 'NOX_RATE' : 'NOX_RATE (lbs/mmBtu)',\n 'SLOAD' : 'SLOAD (1000lb/hr)',\n 'SLOAD (1000 lbs)' : 'SLOAD (1000lb/hr)',\n 'SO2_MASS' : 'SO2_MASS (lbs)',\n 'SO2_RATE' : 'SO2_RATE (lbs/mmBtu)'\n }\n\nfrom joblib import Parallel, delayed\nfrom scripts import import_clean_epa\n\nif __name__ == '__main__':\n start_time = timeit.default_timer()\n base_path = 'EPA downloads'\n for year in range(2015, 2017):\n print 'Starting', str(year)\n df_list = []\n path = os.path.join(base_path, str(year))\n fnames = os.listdir(path)\n\n df_list = Parallel(n_jobs=-1)(delayed(import_clean_epa)(path, name, col_name_map) for name in fnames)\n\n print 'Combining data'\n df = pd.concat(df_list)\n print 'Saving file'\n path_out = os.path.join('Clean data', 'EPA emissions', 'EPA emissions ' + str(year) + '.csv')\n df.to_csv(path_out, index=False)\n print round((timeit.default_timer() - start_time)/60.0,2), 'min so far'\n \n # Timestamp\n elapsed = round((timeit.default_timer() - start_time)/60.0,2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
quests/data-science-on-gcp-edition1_tf2/09_cloudml/flights_caip.ipynb
apache-2.0
[ "Developing, Training, and Deploying a TensorFlow model on Google Cloud Platform (using CloudShell and Cloud AI Platform)\nIn this notebook, we will develop a Keras model to predict flight delays using TensorFlow 2.0 as the backend. Unlike flights_model_tf2.ipynb, we will use bash commands so that these can be run from CloudShell. We will also deploy to Cloud AI Platform so that the model can be executed in a serverless way.\nInstall necessary Python package", "%pip install --upgrade --quiet cloudml-hypertune", "Restart the kernel after doing the pip install\nSet environment variables\nI'm doing this for the notebook. In CloudShell, you'd do:\n<pre>\nexport PROJECT=cloud-training-demos\n</pre>\netc.", "# change these to try this notebook out\nBUCKET = 'cloud-training-demos-ml'\nPROJECT = 'cloud-training-demos'\nREGION = 'us-central1'\n\nimport os\nos.environ['BUCKET'] = BUCKET\nos.environ['PROJECT'] = PROJECT\nos.environ['REGION'] = REGION\n\n%%bash\ngcloud config set project $PROJECT\ngcloud config set compute/region $REGION", "Try out different functions in the model\nReading lines", "%%bash\nexport PYTHONPATH=\"$PWD/flights\"\npython3 -m trainer.task --bucket $BUCKET --train_batch_size=3 --func=read_lines", "Finding average label", "%%bash\nexport PYTHONPATH=\"$PWD/flights\"\npython3 -m trainer.task --bucket $BUCKET --num_examples=100 --func=find_average_label", "Linear model", "%%bash\nexport PYTHONPATH=\"$PWD/flights\"\ngsutil -m rm -rf gs://$BUCKET/flights/trained_model\npython3 -m trainer.task --bucket $BUCKET --num_examples=1000 --func=linear\n\ngsutil ls gs://$BUCKET/flights/trained_model", "Wide-and-deep model", "%%bash\nexport PYTHONPATH=\"$PWD/flights\"\ngsutil -m rm -rf gs://$BUCKET/flights/trained_model\npython3 -m trainer.task --bucket $BUCKET --num_examples=1000 --func=wide_deep\n\ngsutil ls gs://$BUCKET/flights/trained_model\n\n%%bash\nmodel_dir=$(gsutil ls gs://$BUCKET/flights/trained_model/export | tail -1)\necho $model_dir\nsaved_model_cli 
show --tag_set serve --signature_def serving_default --dir $model_dir", "Run on full dataset\nSubmit the Python package to Cloud AI Platform Training and have it process full dataset.\nThis is serverless, so you can do the equivalent by running ./train_model.sh from CloudShell.\nYou don't need a Notebook environment to do this.", "%%bash\nJOBID=flights_$(date +%Y%m%d_%H%M%S)\ngsutil -m rm -rf gs://$BUCKET/flights/trained_model\n\ngcloud beta ai-platform jobs submit training $JOBID \\\n --staging-bucket=gs://$BUCKET --region=$REGION \\\n --module-name=trainer.task \\\n --python-version=3.7 --runtime-version=2.1 \\\n --package-path=${PWD}/flights/trainer \\\n --master-machine-type=n1-standard-4 --scale-tier=CUSTOM \\\n -- \\\n --bucket=$BUCKET --num_examples=1000000 --func=wide_deep", "The final validation RMSE was 0.214", "%%bash\nmodel_dir=$(gsutil ls gs://$BUCKET/flights/trained_model/export | tail -1)\necho $model_dir\nsaved_model_cli show --tag_set serve --signature_def serving_default --dir $model_dir", "Option 1: Single model training on Cloud AI Platform\nThis will take 15-20 minutes", "%%bash\n./train_model.sh $BUCKET wide_deep 500000", "Option 2: Hyperparameter tuning on Cloud AI Platform\nThis will take 2-3 hours", "%%bash\n\nComment this line. 
Make sure to wait for hyperparam job to finish, find best model and change BEST_MODEL in following cell!\n\nJOBID=flights_$(date +%Y%m%d_%H%M%S)\ngsutil -m rm -rf gs://$BUCKET/flights/trained_model\n\ngcloud ai-platform jobs submit training $JOBID \\\n --staging-bucket=gs://$BUCKET --region=$REGION \\\n --module-name=trainer.task \\\n --python-version=3.7 --runtime-version=2.1 \\\n --package-path=${PWD}/flights/trainer \\\n --config=hyperparam.yaml \\\n -- \\\n --bucket=$BUCKET --num_examples=100000 --func=wide_deep", "Deploy model", "%%bash\nMODEL_NAME=flights\nVERSION_NAME=tf2\nBEST_MODEL=\"\" # Option 1: No hyperparameter tuning\n#BEST_MODEL=\"15/\" # Option 2: CHANGE AS NEEDED \nEXPORT_PATH=$(gsutil ls gs://$BUCKET/flights/trained_model/${BEST_MODEL}export | tail -1)\necho $EXPORT_PATH\n\nif [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then\n echo \"$MODEL_NAME already exists\"\nelse\n # create model\n echo \"Creating $MODEL_NAME\"\n gcloud ai-platform models create --regions=$REGION $MODEL_NAME\nfi\n\nif [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then\n echo \"Deleting already existing $MODEL_NAME:$VERSION_NAME ... \"\n gcloud ai-platform versions delete --quiet --model=$MODEL_NAME $VERSION_NAME\n echo \"Please run this cell again if you don't see a Creating message ... 
\"\n sleep 10\nfi\n\n# create model\necho \"Creating $MODEL_NAME:$VERSION_NAME\"\ngcloud ai-platform versions create --model=$MODEL_NAME $VERSION_NAME --async \\\n --framework=tensorflow --python-version=3.7 --runtime-version=2.1 \\\n --origin=$EXPORT_PATH --staging-bucket=gs://$BUCKET\n\n%%writefile example_input.json\n{\"dep_delay\": 14.0, \"taxiout\": 13.0, \"distance\": 319.0, \"avg_dep_delay\": 25.863039, \"avg_arr_delay\": 27.0, \"carrier\": \"WN\", \"dep_lat\": 32.84722, \"dep_lon\": -96.85167, \"arr_lat\": 31.9425, \"arr_lon\": -102.20194, \"origin\": \"DAL\", \"dest\": \"MAF\"}\n{\"dep_delay\": -9.0, \"taxiout\": 21.0, \"distance\": 301.0, \"avg_dep_delay\": 41.050808, \"avg_arr_delay\": -7.0, \"carrier\": \"EV\", \"dep_lat\": 29.984444, \"dep_lon\": -95.34139, \"arr_lat\": 27.544167, \"arr_lon\": -99.46167, \"origin\": \"IAH\", \"dest\": \"LRD\"}\n\n!gcloud ai-platform predict --model=flights --version=tf2 --json-instances=example_input.json\n\n%%bash\n./call_predict.py --project=$PROJECT\necho\necho \"With reasoning\"\n./call_predict_reason.py --project=$PROJECT", "Copyright 2016-2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ssanderson/notebooks
quanto/Quantopian_Meetup_Pandas.ipynb
apache-2.0
[ "<img src=http://pandas.pydata.org/_static/pandas_logo.png></img>\nFrom Rob Pike's\nNotes on Programming in C:\n\nRule 5. Data dominates. If you've chosen the right data structures and\n organized things well, the algorithms will almost always be\n self-evident. Data structures, not algorithms, are central to programming.\n\nPandas is built on a hierarchy of a few powerful data structures. Each of these\nstructures is composed of, and designed to interoperate with, the simpler\nstructures.\n\nIndex (1-Dimensional immutable ordered hash table)\nSeries (1-Dimensional Labelled Array)\nDataFrame (2-Dimensional Labelled Array)\nPanel (3-Dimensional Labelled Array)", "# Tell IPython to display matplotlib plots inline.\n%matplotlib inline\n\n# Set default font attributes.\nimport matplotlib\nfont = {'family' : 'normal',\n 'weight' : 'bold',\n 'size' : 13}\nmatplotlib.rc('font', **font)\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nrandn = np.random.randn\n\npd.set_option('display.mpl_style', 'default')\npd.set_option('display.max_rows', 15)\n\n# Make a default figure size for later use.\nDEFAULT_FIGSIZE = (12, 6)", "Series\nBasics", "s = pd.Series([3,5,7,2])\ns\n\n# An important concept to understand when working with a `Series` is that it's\n# actually composed of two pieces: an index array, and a data array.\n\nprint \"The index is {0}.\".format(s.index)\nprint \"The values are {0}.\".format(s.values)\n\n# You can explicitly pass your own labels to use as an index. If you don't,\n# Pandas will construct a default index with integer labels.\npd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd'])\n\n# You can also construct a Series from a dictionary.\n# The keys are used as the index, and the values are used as the Series' values\npd.Series(\n {\n 'a': 1, \n 'b': 2,\n 'c': 3,\n }\n)\n\n# You get performance (and code clarity!) 
benefits if your Series'\n# labels/values are homogenously-typed, but mixed-type arrays are supported.\npd.Series(\n [1, 2.6, 'a', {'a': 'b'}], \n index=[1, 'a', 2, 2.5],\n)", "Slicing Series with __getitem__ (aka [])\nPandas objects support a wide range of selection and filtering methods. An important idea to keep in mind is the following:\nIf you have an N-dimensional object: \n- Indexing with a scalar returns a value of dimension N-1.\n- Indexing with a slice filters the object, but maintains the original dimension.", "s = pd.Series(range(10), index=list('ABCDEFGHIJ'))\ns\n\n\n# Lookups by key work as you'd expect.\ns['E']\n\n# We can look up multiple values at a time by passing a list of keys.\n# The resulting value is a new `Series`.\ns[['E', 'I', 'B']]\n\n# Because the Index is ordered, we can use Python's slicing syntax.\ns['E':]\n\n # Label-based slicing is inclusive of both endpoints.\ns[:'I']\n\ns['E':'I']\n\n# Step arguments work just like Python lists.\ns['E':'I':2]\n\n# If you don't know the label you want, but you do know the position, you can\n# use `iloc`.\nprint \"The first entry is: %d\" % s.iloc[0]\nprint \"The last entry is: %d\" % s.iloc[-1]\n\n# Slicing works with `iloc` as well.\n\n# Note that, unlike with label-based slicing, integer-based slices are\n# right-open intervals, i.e. doing s.iloc[X:Y] gives you elements with indices\n# in [X, Y). 
This is the same as the semantics for list slicing.\ns.iloc[5:]\n\nprint s.iloc[:5]\n\ns.iloc[-3:]", "Numerical Operations", "# Create two Series objects containing 100 samples each of sine and cosine.\nsine = pd.Series(np.sin(np.linspace(0, 3.14 * 2, 100)), name='sine')\ncosine = pd.Series(np.cos(np.linspace(0, 3.14 * 2, 100)), name='cosine')\n\nsine\n\ncosine\n\n# Multiplying two Series objects produces a new Series by multiplying values that have the same keys.\nproduct = cosine * sine\nproduct\n\n# Adding or multiplying a Series by a scalar applies that operation to each value in the Series.\ncosine_plus_one = cosine + 1\ncosine_plus_one\n\n# Other binary operators work as you'd expect. \n\n# Note how much cleaner and clearer this is\n# compared to looping over two containers and \n# performing multiple operations on elements \n# from each.\nidentity = (sine ** 2) + (cosine ** 2)\nidentity", "All of the pandas data structures have plot methods that provide a user-friendly interface to matplotlib.", "# Plot our sines values.\ntrigplot = sine.plot(\n ylim=(-1.2, 1.2),\n legend=True,\n figsize=DEFAULT_FIGSIZE,\n linewidth=3,\n label='sine',\n)\n# Add our other Series' to the same plot.\ncosine.plot(ax=trigplot, legend=True, linewidth=3)\nproduct.plot(ax=trigplot, legend=True, linewidth=3, label='product')\nidentity.plot(ax=trigplot, legend=True, linewidth=3, label='identity')", "We can map more complicated functions over a Series using the apply method.", "def tenths_place(N):\n s = str(N)\n return s[s.find('.') + 1]\nproduct.apply(tenths_place)", "Handling missing data.", "# A major problem when working with real world data is handling missing entries.\n# Pandas handles missing data by taking \ns1 = pd.Series({'a': 1, 'b': 2, 'c': 3})\n# s2 is missing an entry for 'b'\ns2 = pd.Series({'a': 4, 'c': 5})\ns1 + s2", "Boolean Operations on Series", "s1 = pd.Series(\n {\n 'A': 1,\n 'B': 2,\n 'C': 3,\n 'D': 4,\n 'E': 3,\n 'F': 2,\n 'G': 1,\n }\n)\n# You can create a 
constant Series by passing a scalar value and an index.\ns2 = pd.Series(2, index=s1.index)\n\ngreater = s1 > s2\ngreater\n\nless = s1 < s2\nless\n\nequal = s1 == s2\nequal\n\n# Comparisons against scalars also work.\ns1_equal_to_3 = s1 == 3\ns1_equal_to_3\n\n#TODO: Move this down?\n\npd.DataFrame({\n 's1': s1,\n 's2': s2,\n 's1 > s2': greater,\n 's1 == s2': equal,\n 's1 < s2': less,\n 's1 == 3': s1_equal_to_3,\n}, columns=['s1','s2', 's1 > s2', 's1 == s2', 's1 < s2', 's1 == 3'])", "Boolean-valued Series can be used for slicing. You can think of this as\nmarking particular index values as \"keep\" (True) or \"drop\" (False).", "# Indexing into a series with a boolean Series masks away the values which were\n# false in the passed Series.\ns1[s1 > s2]\n\n# We can combine these operators to concisely express complex\n# computations/filters.\ns1[(s1 > 1) & ~(s1 > s2)]", "Working with Time Series Data", "# Pandas has a special index class, `DatetimeIndex`, for representing\n# TimeSeries data.\nstart = pd.Timestamp('2014-01-01', tz='UTC')\nend = pd.Timestamp('2014-01-09', tz='UTC')\n\n# date_range is an easy way to construct a DatetimeIndex\ndaily_index = pd.date_range(start, end)\ndaily_index\n\n# DatetimeIndex has a notion of its Frequency.\nfrom pandas.tseries.offsets import Day, Hour, BDay, Minute\n\nhourly_index = pd.date_range(\n pd.Timestamp('2014-01-01', tz='UTC'),\n pd.Timestamp('2014-01-9', tz='UTC'),\n freq=Hour(),\n)\nhourly_index\n\nbihourly_index = pd.date_range(\n pd.Timestamp('2014-01-01', tz='UTC'),\n pd.Timestamp('2014-01-09', tz='UTC'),\n freq=Hour(2),\n)\nbihourly_index\n\nweekday_index = pd.date_range(\n pd.Timestamp('2014-01-01', tz='UTC'),\n pd.Timestamp('2014-01-09', tz='UTC'),\n freq=BDay(),\n)\nprint weekday_index\n[i for i in weekday_index]", "If your Series has a DatetimeIndex, then you immediately get access to\nsophisticated resampling tools.", "ts = pd.Series(\n np.arange(30) ** 2,\n pd.date_range(\n start=pd.Timestamp('2014-01-01', 
tz='UTC'),\n freq='1D',\n periods=30,\n )\n)\nts.plot()\n\n# By default, resampling to a lower frequency takes the mean of the entries\n# that were downsampled.\nresampled = ts.resample('5D')\nresampled\n\n# We can customize this behavior though.\nresampled_first = ts.resample('5D', how='first')\nresampled_first\n\nresampled_last = ts.resample('5D', how='last')\nresampled_last\n\n# We can even define our own custom sampling methods.\ndef geometric_mean(subseries):\n return np.product(subseries.values) ** (1.0 / len(subseries))\nresampled_geometric = ts.resample('5D', how=geometric_mean)\nprint resampled_geometric\n\npd.DataFrame(\n {\n \"resampled\": resampled,\n \"resampled_first\": resampled_first,\n \"resampled_last\": resampled_last,\n \"resampled_geometric\": resampled_geometric,\n }\n).plot(linewidth=2, figsize=DEFAULT_FIGSIZE)\n\n# Upsampling creates missing data, which is represented by numpy.nan.\nts.resample('6H')", "We can handle missing data in a variety of ways.", "# We can fill empty values with fillna.\nzero_filled = ts.resample('6H').fillna(0)\nprint zero_filled\n\n# We can forward-fill with the last known prior value.\nffilled = ts.resample('6H').ffill()\nprint ffilled\n\n# We can backfill with earliest known next value.\nbfilled = ts.resample('6H').bfill()\nprint bfilled\n\n# We can interpolate between known values.\n\n# Note: `interpolate` is new as of pandas 0.14.0\n# Quantopian is currently on pandas 0.12.0 due to breaking changes in the\n# pandas API in 0.13.0.\nlinear_interpolated = ts.resample('6H').interpolate()\nlinear_interpolated\n\nquadratic_interpolated = ts.resample('6H').interpolate('polynomial', order=2)\nquadratic_interpolated\n# Note: `interpolate` is new as of pandas 0.14.0\n# Quantopian is currently on pandas 0.12.0 due to breaking changes in the\n# pandas API in 0.13.0.\n\npd.DataFrame(\n {\n \"linear_interpolated\": linear_interpolated,\n \"quadratic_interpolated\": quadratic_interpolated,\n \"bfilled\": bfilled,\n 
\"ffilled\": ffilled,\n \"zero_filled\": zero_filled,\n }\n).plot(linewidth=2, figsize=DEFAULT_FIGSIZE)", "DataFrame - 2D Tables of Interwoven Series", "# Oftentimes we have more than one axis on which we want to store data.\nfrom pandas.io.data import get_data_yahoo\nspy = get_data_yahoo(\n symbols='SPY',\n start=pd.Timestamp('2011-01-01'),\n end=pd.Timestamp('2014-01-01'),\n adjust_price=True,\n)\nspy\n\n# Just plotting this DataFrame with the default arguments isn't very useful,\n# because the scale of volume is so much greater than all the other columns.\nspy.plot(figsize=DEFAULT_FIGSIZE)\n\n# Let's make a more interesting plot.\n\n# Create a figure\nfig = plt.figure()\n\n# Add a subplot for price.\nprice_subplot = fig.add_subplot('311', xlabel='Date', ylabel='Price')\nspy['Close'].plot(ax=price_subplot, lw=2) # lw means \"line width\"\n\n# Add another subplot for each day's spread.\nspread_subplot = fig.add_subplot('312', xlabel='Date', ylabel='Spread')\nspread = spy['High'] - spy['Low']\nspread.plot(ax=spread_subplot, lw=2, color='r')\n\n# And add a third plot for volume.\nvolume_subplot = fig.add_subplot('313', xlabel='Date', ylabel='Volume')\nspy['Volume'].plot(ax=volume_subplot, lw=2)\n\n# matplotlib.pyplot.gcf is short for \"Get Current Figure\". 
It provides an easy\n# way to modify the last drawn plot.\nplt.gcf().set_size_inches(*DEFAULT_FIGSIZE)\n\n# Unsurprisingly, spread is strongly correlated with daily volume\nspread.corr(spy['Volume'])", "Selecting Data with DataFrames\nA DataFrame has two indices, representing row labels and column labels.\nSince these are Index objects, we can use all the same slicing tools we used\nwhen working with Series.", "# Default slicing acts on column labels.\n# Passing a scalar value drops the dimension by one.\nspy['Close'] # Returns a Series\n\n# Passing a list filters the columns down to the supplied values.\nspy[['Close', 'Volume']]\n\n# Using .loc with one argument takes a slice of rows based on label.\nspy.loc[pd.Timestamp('2013-02-01'):pd.Timestamp('2013-02-28')]\n\n# Using .loc with two arguments takes a slice of rows based on label, then a\n# slice of columns based on name.\n\n# Note the comma between the first slice and the second slice!\nspy.loc[pd.Timestamp('2013-02-01'):pd.Timestamp('2013-02-28'), 'Open':'Low']\n\n# We can use iloc when we want lookups by position.\nspy.iloc[-20:-10, [0,2]]", "Boolean Series slicing is very useful with DataFrame.", "# Get the days on which SPY closed higher than it opened.\nup_days = spy['Close'] > spy['Open']\nup_days\n\nspy[up_days]\n\n# We can use .ix when we want mixed lookups.\nspy.ix[-20:-10, 'Open':'High']", "Example: Investigating a Momentum Strategy", "five_day_returns = spy['Close'].pct_change(5)\nfive_day_returns\n\n# Checking for equality of floating point numbers is a bad idea because of\n# roundoff error. 
`numpy.allclose` does an appropriate epsilon test.\ntest_return = (spy['Close'].iloc[5] - spy['Close'].iloc[0]) / spy['Close'].iloc[0]\nnp.allclose(five_day_returns.iloc[5], test_return)\n\nthirty_day_forward_returns = (spy['Close'].shift(-30) - spy['Close']) / spy['Close']\ntest_return = (spy['Close'].iloc[30] - spy['Close'].iloc[0]) / spy['Close'].iloc[0]\nnp.allclose(thirty_day_forward_returns.iloc[0], test_return)\n\nreturns = pd.DataFrame(\n {\n 'forward_30Day': thirty_day_forward_returns,\n 'backA_2Day': spy['Close'].pct_change(2),\n 'backB_5Day': spy['Close'].pct_change(5),\n 'backD_50Day': spy['Close'].pct_change(50),\n 'backE_100Day': spy['Close'].pct_change(100),\n 'backF_200Day': spy['Close'].pct_change(200),\n 'backG_300Day': spy['Close'].pct_change(300),\n }\n).dropna(how='any')\nreturns.plot(figsize=DEFAULT_FIGSIZE)\n\n# Pairwise correlation of forward and backward returns.\ncorr = returns.corr()\ncorr\n\ncorr.ix['forward_30Day',:-1].plot(kind='bar', position=.5, xlim=(-1, 6))\nplt.gcf().set_size_inches(9, 6)", "Example: Investigating a Pair Trade for Pepsi and Coca-Cola\nHere we show how to load data for two securities, graph the data, and compute correlation of returns and volatility for each security over the specified period.", "# Load data for Pepsi and Coca-Cola from Yahoo.\nsymbols = [\n 'PEP',\n 'KO',\n]\ncola_data = get_data_yahoo(['PEP', 'KO'], adjust_price=True)\ncola_data\n\n# Compute the 1-day log returns for both securities' close prices.\ncloses = cola_data['Close']\nyesterday_closes = cola_data['Close'].shift(1)\ncola_log_returns = (closes / yesterday_closes).apply(np.log)\ncola_raw_returns = closes.pct_change(1)\n\n# Look at the data we just calculated by throwing it into a Panel and\n# pulling out just the DataFrame for KO.\npd.Panel({\n 'closes' : closes,\n 'prev_closes': yesterday_closes,\n 'log_returns': cola_log_returns,\n 'raw_returns': cola_raw_returns,\n}).loc[:,:,'KO']\n\n# Pull the standard returns and the log 
returns into a single DataFrame using DataFrame.join.\ncloses.join(cola_log_returns, rsuffix='_lr')\\\n .join(cola_raw_returns, rsuffix='_rr')\\\n .dropna(how='any')\n\n# Create a figure with three 'slots' for subplots.\nfig = plt.figure()\n# 311 here means \"Put the subplot in the 1st slot of a 3 x 1 grid.\n# 312 and 313 tell matplotlib to place the subsequent plots in the 2nd and 3rd slot\nprice_subplot = fig.add_subplot('311', xlabel='Date', ylabel='Price')\nreturn_subplot_pep = fig.add_subplot('312', xlabel='Date', ylabel='PEP Log Returns')\nreturn_subplot_ko = fig.add_subplot('313', xlabel='Date', ylabel='KO Log Returns')\n\ncola_data['Close'].plot(ax=price_subplot, color=['purple', 'red'])\ncola_log_returns['PEP'].plot(ax=return_subplot_pep, color='red')\ncola_log_returns['KO'].plot(ax=return_subplot_ko, color='purple')\n\n# Set the size of the whole plot array. gcf stands for `get_current_figure`.\nplt.gcf().set_size_inches(14, 10)\n\n# Compute the correlation of our log returns\ncorrelation = (cola_log_returns['PEP']).corr(cola_log_returns['KO'])\ncorrelation\n\n# Compute column-wise standard deviation of daily returns and divide by\n# 1 / sqrt(252) to get annualized volatility.\nvolatility = cola_log_returns.std() * np.sqrt(252)\nvolatility", "More Advanced Topics:\n\nHierarchical Indexing\nGroupby Operations\nSQL-Style Joins\nIO Tools" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/bcc/cmip6/models/bcc-esm1/toplevel.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: BCC\nSource ID: BCC-ESM1\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:39\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'bcc', 'bcc-esm1', 'toplevel')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Flux Correction\n3. Key Properties --&gt; Genealogy\n4. Key Properties --&gt; Software Properties\n5. Key Properties --&gt; Coupling\n6. Key Properties --&gt; Tuning Applied\n7. Key Properties --&gt; Conservation --&gt; Heat\n8. Key Properties --&gt; Conservation --&gt; Fresh Water\n9. Key Properties --&gt; Conservation --&gt; Salt\n10. Key Properties --&gt; Conservation --&gt; Momentum\n11. Radiative Forcings\n12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\n13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\n14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\n15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\n16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\n17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\n18. Radiative Forcings --&gt; Aerosols --&gt; SO4\n19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\n20. 
Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\n21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\n22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\n23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\n24. Radiative Forcings --&gt; Aerosols --&gt; Dust\n25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\n26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\n27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\n28. Radiative Forcings --&gt; Other --&gt; Land Use\n29. Radiative Forcings --&gt; Other --&gt; Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop level overview of coupled model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of coupled model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Genealogy\nGenealogy and history of the model\n3.1. 
Year Released\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nYear the model was released", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. CMIP3 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP3 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. CMIP5 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP5 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Previous Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPreviously known as", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.4. Components Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.5. Coupler\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nOverarching coupling framework for model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Coupling\n**\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of coupling in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. 
Atmosphere Double Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhere are the air-sea fluxes calculated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.4. Atmosphere Relative Winds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process-oriented metrics/diagnostics, and the possible conflicts with parameterization-level tuning. 
In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g. THC, AABW, regional means, etc.) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.5. Energy Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.6. Fresh Water Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Conservation --&gt; Heat\nGlobal heat conservation properties of the model\n7.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. 
Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.6. Land Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation --&gt; Fresh Water\nGlobal fresh water conservation properties of the model\n8.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Runoff\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how runoff is distributed and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. 
Iceberg Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Endoreic Basins\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Snow Accumulation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Key Properties --&gt; Conservation --&gt; Salt\nGlobal salt conservation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Key Properties --&gt; Conservation --&gt; Momentum\nGlobal momentum conservation properties of the model\n10.1. 
Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how momentum is conserved in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\nTropospheric ozone forcing\n15.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Equivalence Concentration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of any equivalence concentrations used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Radiative Forcings --&gt; Aerosols --&gt; SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. 
via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre radiative effects of aerosols on ice clouds represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. 
Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre radiative effects of aerosols on ice clouds represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.3. RFaci From Sulfate Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs radiative forcing from aerosol-cloud interactions from sulfate aerosol only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "24. Radiative Forcings --&gt; Aerosols --&gt; Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Radiative Forcings --&gt; Other --&gt; Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28.2. Crop Change Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLand use change represented via crop change only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Radiative Forcings --&gt; Other --&gt; Solar\nSolar forcing\n29.1. 
Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow solar forcing is provided", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
moonbury/pythonanywhere
learn_scipy/7702OS_Chap_03_rev20141229.ipynb
gpl-3.0
[ "<center><font color=red>Learning SciPy for Numerical and Scientific Computing</font></center>\n\nContent under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 Sergio Rojas (srojas@usb.ve) and Erik A Christensen (erikcny@aol.com).\n\n<b><font color='red'>\n NOTE: This IPython notebook should be read alongside the corresponding chapter in the book, where each piece of code is fully explained.\n </font></b>\n<br>\n<center>Chapter 3. SciPy for Linear Algebra</center>\nSummary\nThis chapter explores the treatment of matrices (whether normal or sparse) with the modules on linear algebra – linalg and sparse.linalg, which expand and improve the NumPy module with the same name.\nReferences\n\nLinear Algebra (scipy.linalg)<br>\nhttp://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html\n<br>\n<br>\nSparse Eigenvalue Problems with ARPACK<br>\nhttp://docs.scipy.org/doc/scipy/reference/tutorial/arpack.html\n<br>\n<br>\n\nVector creation", "import numpy \nvectorA = numpy.array([1,2,3,4,5,6,7]) \nvectorA\n\nvectorB = vectorA[::-1].copy()\nvectorB \n\nvectorB[0]=123 \nvectorB \n\nvectorA \n\nvectorB = vectorA[::-1].copy() \nvectorB ", "Vector Operations\nAddition/subtraction", "vectorC = vectorA + vectorB\nvectorC\n\nvectorD = vectorB - vectorA \nvectorD", "Scalar/Dot product", "dotProduct1 = numpy.dot(vectorA,vectorB)\ndotProduct1\n\ndotProduct2 = (vectorA*vectorB).sum()\ndotProduct2", "Cross/vectorial product (on 3 dimensional space vectors)", "vectorA = numpy.array([5, 6, 7])\nvectorA\n\nvectorB = numpy.array([7, 6, 5])\nvectorB\n\ncrossProduct = numpy.cross(vectorA,vectorB) \ncrossProduct\n\ncrossProduct = numpy.cross(vectorB,vectorA)\ncrossProduct", "Matrix creation", "import numpy\n\nA=numpy.matrix(\"1,2,3;4,5,6\")\nprint(A)\n\n\n\nA=numpy.matrix([[1,2,3],[4,5,6]])\nprint(A)", "<font color=red><b> Please refer to the corresponding section of the book to read about the meaning of the following matrix </b></font>\n$$ \\boxed{ 
\\begin{pmatrix} 0 & 10 & 0 & 0 & 0 \\ 0 & 0 & 20 & 0 & 0 \\ 0 & 0 & 0 & 30 & 0 \\ 0 & 0 & 0 & 0 & 40 \\ 0 & 0 & 0 & 0 & 0 \\end{pmatrix} }$$", "A=numpy.matrix([ [0,10,0,0,0], [0,0,20,0,0], [0,0,0,30,0],\n [0,0,0,0,40], [0,0,0,0,0] ]) \nA\n\nA[0,1],A[1,2],A[2,3],A[3,4]\n\nrows=numpy.array([0,1,2,3])\ncols=numpy.array([1,2,3,4])\nvals=numpy.array([10,20,30,40]) \n\nimport scipy.sparse\n\nA=scipy.sparse.coo_matrix( (vals,(rows,cols)) )\nprint(A)\n\nprint(A.todense())\n\nscipy.sparse.isspmatrix_coo(A)\n\nB=numpy.mat(numpy.ones((3,3)))\nW=numpy.mat(numpy.zeros((3,3)))\nprint(numpy.bmat('B,W;W,B'))\n\na=numpy.array([[1,2],[3,4]])\na\n\na*a", "<font color=red><b> Please refer to the corresponding section of the book to read about the meaning of the following matrix product </b></font>\n$$ \\boxed{ \\begin{align} \\begin{pmatrix} 1 & 2 \\ 3 & 4 \\end{pmatrix} & \\begin{pmatrix} 1 & 2 \\ 3 & 4 \\end{pmatrix} = \\begin{pmatrix} 7 & 10 \\ 15 & 22 \\end{pmatrix} \\end{align} }$$", "A=numpy.mat(a)\nA\n\nA*A\n\nnumpy.dot(A,A) \n\nb=numpy.array([[1,2,3],[3,4,5]]) \nnumpy.dot(a,b) \n\nnumpy.multiply(A,A)\n\na=numpy.arange(5); A=numpy.mat(a)\na.shape, A.shape, a.transpose().shape, A.transpose().shape\n\nimport scipy.linalg\n\nA=scipy.linalg.hadamard(8)\nzero_sum_rows = (numpy.sum(A,0)==0)\nB=A[zero_sum_rows,:]\nprint(B[0:3,:])", "Matrix methods", "import numpy\nA = numpy.matrix(\"1+1j, 2-1j; 3-1j, 4+1j\")\nprint (A)\n\nprint (A.T)\n\nprint (A.H)", "Operations between matrices\n<font color=red><b> Please refer to the corresponding section of the book to read about the meaning \nof the following basis vectors </b></font>\n$$ \\boxed{ \\begin{align} v_{1} & = \\frac{1}{\\sqrt{2}}\\begin{pmatrix} 1,0,1 \\end{pmatrix} \\ v_{2} & = \\begin{pmatrix} 0,1,0 \\end{pmatrix} \\ v_{3} & = \\frac{1}{\\sqrt{2}}\\begin{pmatrix} 1,0,-1 \\end{pmatrix} \\end{align} }$$", "mu=1/numpy.sqrt(2)\nA=numpy.matrix([[mu,0,mu],[0,1,0],[mu,0,-mu]])\nB=scipy.linalg.kron(A,A)\n\n\nprint 
(B[:,0:-1:2])", "Functions on matrices", "A=numpy.matrix(\"1,1j;21,3\")\nprint (A**2); \n\nprint (numpy.asarray(A)**2)\n\na=numpy.arange(0,2*numpy.pi,1.6)\nA = scipy.linalg.toeplitz(a)\nprint (A)\n\nprint (numpy.exp(A))\n\nprint (scipy.linalg.expm(A))\n\nx=10**100; y=9; v=numpy.matrix([x,y])\nscipy.linalg.norm(v,2) # the right method", "<font color=red>As mentioned in the book, the following command will generate an error from the python computational engine </font>", "numpy.sqrt(x*x+y*y) # the wrong method", "Eigenvalue problems and matrix decompositions\nThis section refers the reader to the SciPy documentation related to eigenvalue problems and matrix decomposition\nhttp://docs.scipy.org/doc/scipy/reference/linalg.html\nhttp://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigvals.html\nImage compression via the singular value decomposition\n<font color=red><b> Please refer to the corresponding section of the book to read about the meaning of the following two equations </b></font>\n$$ \\boxed{ \\begin{align} A = U \\cdot S \\cdot V^{*}, & \\quad U = \\begin{pmatrix} u_{1} \\ \\vdots \\ u_{n} \\end{pmatrix}, & S = \\begin{pmatrix} s_{1} & & \\ & \\ddots & \\ & & s_{n} \\end{pmatrix}, & \\quad V^{*} = \\begin{pmatrix} v_{1} \\quad \\cdots \\quad v_{n} \\end{pmatrix} \\end{align} }$$\n$$ \\begin{equation} \\boxed{ \\sum_{j=1}^{k} s_{j}(u_{j} \\cdot v_{j}) } \\end{equation} $$", "%matplotlib inline\n\nimport numpy\nimport scipy.misc\nfrom scipy.linalg import svd\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = (12.0, 8.0)\nimg=scipy.misc.lena()\nU,s,Vh=svd(img) # Singular Value Decomposition\nA = numpy.dot( U[:,0:32], # use only 32 singular values\n numpy.dot( numpy.diag(s[0:32]),\n Vh[0:32,:]))\nplt.subplot(121,aspect='equal'); \nplt.gray()\nplt.imshow(img)\nplt.subplot(122,aspect='equal'); \nplt.imshow(A)\nplt.show()", "Solvers\n<font color=red><b> Please refer to the corresponding section of the book to read about the meaning of 
the following equation </b></font>\n$$ \\boxed{ \\begin{align} \\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\end{pmatrix} & \\begin{pmatrix} x \\ y \\ z \\end{pmatrix} = \\begin{pmatrix} 1 \\ 2\\ 3 \\end{pmatrix} \\end{align} }$$", "A=numpy.mat(numpy.eye(3,k=1))\nprint(A)\n\nb=numpy.mat(numpy.arange(3) + 1).T\nprint(b)\n\nxinfo=scipy.linalg.lstsq(A,b)\nprint (xinfo[0].T) # output the solution\n", "<center> This is the end of the working codes shown and thoroughly discussed in Chapter 3 of the book <font color=red>Learning SciPy for Numerical and Scientific Computing</font> </center>\n\nContent under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 Sergio Rojas (srojas@usb.ve) and Erik A Christensen (erikcny@aol.com)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/csir-csiro/cmip6/models/sandbox-3/land.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: CSIR-CSIRO\nSource ID: SANDBOX-3\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:54\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-3', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Conservation Properties\n3. Key Properties --&gt; Timestepping Framework\n4. Key Properties --&gt; Software Properties\n5. Grid\n6. Grid --&gt; Horizontal\n7. Grid --&gt; Vertical\n8. Soil\n9. Soil --&gt; Soil Map\n10. Soil --&gt; Snow Free Albedo\n11. Soil --&gt; Hydrology\n12. Soil --&gt; Hydrology --&gt; Freezing\n13. Soil --&gt; Hydrology --&gt; Drainage\n14. Soil --&gt; Heat Treatment\n15. Snow\n16. Snow --&gt; Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --&gt; Vegetation\n21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\n22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\n23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\n24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\n25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\n26. Carbon Cycle --&gt; Litter\n27. 
Carbon Cycle --&gt; Soil\n28. Carbon Cycle --&gt; Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --&gt; Oceanic Discharge\n32. Lakes\n33. Lakes --&gt; Method\n34. Lakes --&gt; Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nFluxes exchanged with the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. 
Atmospheric Coupling Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Conservation Properties\nTODO\n2.1. 
Energy\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Total Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --&gt; Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Organic Matter\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. Continuously Varying Soil Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDo the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. 
Soil Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --&gt; Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, describe the dependencies on snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. 
Number Of Wavelength Bands\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --&gt; Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. 
Number Of Ground Water Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --&gt; Hydrology --&gt; Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. Ice Storage Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --&gt; Hydrology --&gt; Drainage\nTODO\n13.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --&gt; Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. Heat Storage\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Water Equivalent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --&gt; Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\n*If prognostic, describe the dependencies of the snow albedo calculations*", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"open shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow the vegetation fractions in each tile vary with time", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. Vegetation Map\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. 
Leaf Area Index\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Treatment of vegetation biomass*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDescribe the treatment of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --&gt; Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. Forest Stand Dynamics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of forest stand dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\nTODO\n22.1. 
Maintenance Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for maintenance respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\nTODO\n23.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. 
Allocation Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\nTODO\n24.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\nTODO\n25.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --&gt; Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. 
Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --&gt; Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --&gt; Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the nitrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the river routing tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. 
Water Re Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --&gt; Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. 
Vertical Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --&gt; Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. 
Dynamic Lake Extent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --&gt; Wetlands\nTODO\n34.1. Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
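Every property above follows the same fill-in pattern: `DOC.set_id(...)` selects the property and `DOC.set_value(...)` records an answer (called repeatedly for ENUMs with cardinality 1.N or 0.N). The real `DOC` object is supplied by the ES-DOC notebook environment; the `StubDoc` class below is a hypothetical stand-in, sketched only to show what a completed cell would look like.

```python
# Hypothetical stand-in for the ES-DOC `DOC` helper (the real object is
# provided by the ES-DOC notebook environment, not by this code).
class StubDoc:
    def __init__(self):
        self._current_id = None
        self.values = {}

    def set_id(self, property_id):
        # Select the property that subsequent set_value() calls apply to.
        self._current_id = property_id

    def set_value(self, value):
        # Properties with cardinality 1.N / 0.N may receive several values.
        self.values.setdefault(self._current_id, []).append(value)

DOC = StubDoc()

# Filling in property 33.2 (Albedo) with one of its listed valid choices:
DOC.set_id('cmip6.land.lakes.method.albedo')
DOC.set_value("prognostic")

# Filling in the 1.N property 33.3 (Dynamics) with two choices:
DOC.set_id('cmip6.land.lakes.method.dynamics')
DOC.set_value("vertical")
DOC.set_value("horizontal")

print(DOC.values)
```

Only the quoted valid choices listed for each ENUM property should be passed to `set_value`; free-text STRING properties accept any description.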
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", 
"code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phoebe-project/phoebe2-docs
2.1/tutorials/20_21_nparray.ipynb
gpl-3.0
[ "2.0 - 2.1 Migration: nparray\nLet's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).", "!pip install -I \"phoebe>=2.1,<2.2\"", "Although not well-documented, PHOEBE 2.0 included the ability to directly set linspace or arange to an array while only storing the properties (start, stop, step, etc). If for some reason you managed to find and use the capability, the behavior has changed slightly and is included in a separate package called nparray, which is included and built within PHOEBE 2.1 as phoebe.dependencies.nparray.\nBesides having much more flexibility, the only user-interface changes are that you cannot directly set the attributes for the properties from the Parameter.\nIn PHOEBE 2.0.x:\nb.get_parameter('times').stop = 1\nIn PHOEBE 2.1.x:\nb.get_parameter('times').set_property(stop=1)\nIntroduction to nparray\nIf you didn't happen to stumble on this in PHOEBE 2.0, you may find it useful. The nparray functionality allows you to store the start, stop, step values (in the case of linspace) instead of the entire array in memory. This is significantly cheaper to store when saving to json, for example, and allows for easily editing your step size without having to change the entire array. If you are writing your PHOEBE code within a script, this may seem like no use to you, but if you are in an interactive python console, then this can be quite handy.\nThe most useful of these \"array creation functions\" are also copied into the top-level of the phoebe package. 
These include: linspace, arange, logspace, and geomspace.", "import phoebe\nphoebe.linspace(0, 1, 11)", "By setting a PHOEBE parameter to this value, you can then later edit any of the properties.", "b = phoebe.default_binary()\nb.add_dataset('lc', times=phoebe.linspace(0, 1, 11))\n\nprint b.get_parameter('times')", "You can see here that the value is being stored not as an array, but as an nparray object with the properties to create that array. Once you (or PHOEBE) call get_value (or get_quantity), the array is temporarily created on-the-fly and returned.", "print b.get_value('times')", "You can now \"edit\" the properties of this array without re-building it.", "b.get_parameter('times').set_property(stop=2)\n\nprint b.get_parameter('times')\n\nprint b.get_value('times')" ]
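The space saving that nparray provides can be illustrated with a plain-Python sketch. The `LazyLinspace` class below is a made-up, minimal analogue of the idea behind `phoebe.dependencies.nparray` (not its actual implementation): only the defining properties are stored, and the full array is rebuilt on demand after any `set_property` edit.

```python
# Hypothetical, minimal analogue of an nparray-style linspace object:
# store only (start, stop, num) and materialize the array on request.
class LazyLinspace:
    def __init__(self, start, stop, num):
        self.start, self.stop, self.num = start, stop, num

    def set_property(self, **kwargs):
        # Edit the stored properties without ever touching a stored array.
        for name, value in kwargs.items():
            if name not in ('start', 'stop', 'num'):
                raise KeyError("no property named {}".format(name))
            setattr(self, name, value)

    def to_list(self):
        # Build the array on-the-fly, analogous to Parameter.get_value().
        step = (self.stop - self.start) / float(self.num - 1)
        return [self.start + i * step for i in range(self.num)]

times = LazyLinspace(0, 1, 11)
times.set_property(stop=2)       # cheap edit: only one float changes
print(len(times.to_list()))
```

In PHOEBE itself you would use `phoebe.linspace(...)` and `b.get_parameter('times').set_property(...)` as shown above; the sketch only illustrates why such edits are cheap and why the serialized form stays small.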
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/en-snapshot/io/tutorials/kafka.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow IO Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Robust machine learning on streaming data using Kafka and Tensorflow-IO\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/io/tutorials/kafka\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/io/blob/master/docs/tutorials/kafka.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/io/blob/master/docs/tutorials/kafka.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/kafka.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nOverview\nThis tutorial focuses on streaming data from a Kafka cluster into a tf.data.Dataset which is then used in conjunction with tf.keras for training and inference.\nKafka is primarily a distributed event-streaming platform which provides scalable and fault-tolerant streaming data across data pipelines. 
It is an essential technical component of a plethora of major enterprises where mission-critical data delivery is a primary requirement.\nNOTE: A basic understanding of the kafka components will help you in following the tutorial with ease.\nNOTE: A Java runtime environment is required to run this tutorial.\nSetup\nInstall the required tensorflow-io and kafka packages", "!pip install tensorflow-io\n!pip install kafka-python", "Import packages", "import os\nfrom datetime import datetime\nimport time\nimport threading\nimport json\nfrom kafka import KafkaProducer\nfrom kafka.errors import KafkaError\nfrom sklearn.model_selection import train_test_split\nimport pandas as pd\nimport tensorflow as tf\nimport tensorflow_io as tfio", "Validate tf and tfio imports", "print(\"tensorflow-io version: {}\".format(tfio.__version__))\nprint(\"tensorflow version: {}\".format(tf.__version__))", "Download and setup Kafka and Zookeeper instances\nFor demo purposes, the following instances are setup locally:\n\nKafka (Brokers: 127.0.0.1:9092)\nZookeeper (Node: 127.0.0.1:2181)", "!curl -sSOL https://dlcdn.apache.org/kafka/3.1.0/kafka_2.13-3.1.0.tgz\n!tar -xzf kafka_2.13-3.1.0.tgz", "Using the default configurations (provided by Apache Kafka) for spinning up the instances.", "!./kafka_2.13-3.1.0/bin/zookeeper-server-start.sh -daemon ./kafka_2.13-3.1.0/config/zookeeper.properties\n!./kafka_2.13-3.1.0/bin/kafka-server-start.sh -daemon ./kafka_2.13-3.1.0/config/server.properties\n!echo \"Waiting for 10 secs until kafka and zookeeper services are up and running\"\n!sleep 10", "Once the instances are started as daemon processes, grep for kafka in the processes list. 
The two java processes correspond to zookeeper and the kafka instances.", "!ps -ef | grep kafka", "Create the kafka topics with the following specs:\n\nsusy-train: partitions=1, replication-factor=1 \nsusy-test: partitions=2, replication-factor=1", "!./kafka_2.13-3.1.0/bin/kafka-topics.sh --create --bootstrap-server 127.0.0.1:9092 --replication-factor 1 --partitions 1 --topic susy-train\n!./kafka_2.13-3.1.0/bin/kafka-topics.sh --create --bootstrap-server 127.0.0.1:9092 --replication-factor 1 --partitions 2 --topic susy-test\n", "Describe the topic for details on the configuration", "!./kafka_2.13-3.1.0/bin/kafka-topics.sh --describe --bootstrap-server 127.0.0.1:9092 --topic susy-train\n!./kafka_2.13-3.1.0/bin/kafka-topics.sh --describe --bootstrap-server 127.0.0.1:9092 --topic susy-test\n", "The replication factor 1 indicates that the data is not being replicated. This is due to the presence of a single broker in our kafka setup.\nIn production systems, the number of bootstrap servers can be in the range of 100's of nodes. That is where the fault-tolerance using replication comes into the picture.\nPlease refer to the docs for more details.\nSUSY Dataset\nKafka, being an event-streaming platform, enables data from various sources to be written into it. For instance:\n\nWeb traffic logs\nAstronomical measurements\nIoT sensor data\nProduct reviews and many more.\n\nFor the purpose of this tutorial, let's download the SUSY dataset and feed the data into kafka manually. 
The goal of this classification problem is to distinguish between a signal process which produces supersymmetric particles and a background process which does not.", "!curl -sSOL https://archive.ics.uci.edu/ml/machine-learning-databases/00279/SUSY.csv.gz", "Explore the dataset\nThe first column is the class label (1 for signal, 0 for background), followed by the 18 features (8 low-level features then 10 high-level features).\nThe first 8 features are kinematic properties measured by the particle detectors in the accelerator. The last 10 features are functions of the first 8 features. These are high-level features derived by physicists to help discriminate between the two classes.", "COLUMNS = [\n # labels\n 'class',\n # low-level features\n 'lepton_1_pT',\n 'lepton_1_eta',\n 'lepton_1_phi',\n 'lepton_2_pT',\n 'lepton_2_eta',\n 'lepton_2_phi',\n 'missing_energy_magnitude',\n 'missing_energy_phi',\n # high-level derived features\n 'MET_rel',\n 'axial_MET',\n 'M_R',\n 'M_TR_2',\n 'R',\n 'MT2',\n 'S_R',\n 'M_Delta_R',\n 'dPhi_r_b',\n 'cos(theta_r1)'\n ]", "The entire dataset consists of 5 million rows. 
However, for the purpose of this tutorial, let's consider only a fraction of the dataset (100,000 rows) so that less time is spent on moving the data and more time on understanding the functionality of the API.", "susy_iterator = pd.read_csv('SUSY.csv.gz', header=None, names=COLUMNS, chunksize=100000)\nsusy_df = next(susy_iterator)\nsusy_df.head()\n\n# Number of datapoints and columns\nlen(susy_df), len(susy_df.columns)\n\n# Number of datapoints belonging to each class (0: background noise, 1: signal)\nlen(susy_df[susy_df[\"class\"]==0]), len(susy_df[susy_df[\"class\"]==1])", "Split the dataset", "train_df, test_df = train_test_split(susy_df, test_size=0.4, shuffle=True)\nprint(\"Number of training samples: \",len(train_df))\nprint(\"Number of testing samples: \",len(test_df))\n\nx_train_df = train_df.drop([\"class\"], axis=1)\ny_train_df = train_df[\"class\"]\n\nx_test_df = test_df.drop([\"class\"], axis=1)\ny_test_df = test_df[\"class\"]\n\n# The labels are set as the kafka message keys so as to store data\n# in multiple partitions. 
This enables efficient data retrieval\n# using the consumer groups.\nx_train = list(filter(None, x_train_df.to_csv(index=False).split(\"\\n\")[1:]))\ny_train = list(filter(None, y_train_df.to_csv(index=False).split(\"\\n\")[1:]))\n\nx_test = list(filter(None, x_test_df.to_csv(index=False).split(\"\\n\")[1:]))\ny_test = list(filter(None, y_test_df.to_csv(index=False).split(\"\\n\")[1:]))\n\n\nNUM_COLUMNS = len(x_train_df.columns)\nlen(x_train), len(y_train), len(x_test), len(y_test)", "Store the train and test data in kafka\nStoring the data in kafka simulates an environment for continuous remote data retrieval for training and inference purposes.", "def error_callback(exc):\n raise Exception('Error while sending data to kafka: {0}'.format(str(exc)))\n\ndef write_to_kafka(topic_name, items):\n count=0\n producer = KafkaProducer(bootstrap_servers=['127.0.0.1:9092'])\n for message, key in items:\n producer.send(topic_name, key=key.encode('utf-8'), value=message.encode('utf-8')).add_errback(error_callback)\n count+=1\n producer.flush()\n print(\"Wrote {0} messages into topic: {1}\".format(count, topic_name))\n\nwrite_to_kafka(\"susy-train\", zip(x_train, y_train))\nwrite_to_kafka(\"susy-test\", zip(x_test, y_test))\n", "Define the tfio train dataset\nThe IODataset class is utilized for streaming data from kafka into tensorflow. 
The class inherits from tf.data.Dataset and thus has all the useful functionalities of tf.data.Dataset out of the box.", "def decode_kafka_item(item):\n message = tf.io.decode_csv(item.message, [[0.0] for i in range(NUM_COLUMNS)])\n key = tf.strings.to_number(item.key)\n return (message, key)\n\nBATCH_SIZE=64\nSHUFFLE_BUFFER_SIZE=64\ntrain_ds = tfio.IODataset.from_kafka('susy-train', partition=0, offset=0)\ntrain_ds = train_ds.shuffle(buffer_size=SHUFFLE_BUFFER_SIZE)\ntrain_ds = train_ds.map(decode_kafka_item)\ntrain_ds = train_ds.batch(BATCH_SIZE)", "Build and train the model", "# Set the parameters\n\nOPTIMIZER=\"adam\"\nLOSS=tf.keras.losses.BinaryCrossentropy(from_logits=True)\nMETRICS=['accuracy']\nEPOCHS=10\n\n\n# design/build the model\nmodel = tf.keras.Sequential([\n tf.keras.layers.Input(shape=(NUM_COLUMNS,)),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(256, activation='relu'),\n tf.keras.layers.Dropout(0.4),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dropout(0.4),\n tf.keras.layers.Dense(1, activation='sigmoid')\n])\n\nprint(model.summary())\n\n# compile the model\nmodel.compile(optimizer=OPTIMIZER, loss=LOSS, metrics=METRICS)\n\n# fit the model\nmodel.fit(train_ds, epochs=EPOCHS)", "Note: Please do not confuse the training step with online training. It's an entirely different paradigm which will be covered in a later section.\nSince only a fraction of the dataset is being utilized, our accuracy is limited to ~78% during the training phase. However, please feel free to store additional data in kafka for a better model performance. Also, since the goal was to just demonstrate the functionality of the tfio kafka datasets, a smaller and less-complicated neural network was used. However, one can increase the complexity of the model, modify the learning strategy, tune hyper-parameters etc for exploration purposes. 
For a baseline approach, please refer to this article.\nInfer on the test data\nTo infer on the test data by adhering to the 'exactly-once' semantics along with fault-tolerance, the streaming.KafkaGroupIODataset can be utilized. \nDefine the tfio test dataset\nThe stream_timeout parameter blocks for the given duration for new data points to be streamed into the topic. This removes the need for creating new datasets if the data is being streamed into the topic in an intermittent fashion.", "test_ds = tfio.experimental.streaming.KafkaGroupIODataset(\n topics=[\"susy-test\"],\n group_id=\"testcg\",\n servers=\"127.0.0.1:9092\",\n stream_timeout=10000,\n configuration=[\n \"session.timeout.ms=7000\",\n \"max.poll.interval.ms=8000\",\n \"auto.offset.reset=earliest\"\n ],\n)\n\ndef decode_kafka_test_item(raw_message, raw_key):\n message = tf.io.decode_csv(raw_message, [[0.0] for i in range(NUM_COLUMNS)])\n key = tf.strings.to_number(raw_key)\n return (message, key)\n\ntest_ds = test_ds.map(decode_kafka_test_item)\ntest_ds = test_ds.batch(BATCH_SIZE)", "Though this class can be used for training purposes, there are caveats which need to be addressed. Once all the messages are read from kafka and the latest offsets are committed using the streaming.KafkaGroupIODataset, the consumer doesn't restart reading the messages from the beginning. Thus, while training, it is possible only to train for a single epoch with the data continuously flowing in. This kind of a functionality has limited use cases during the training phase wherein, once a datapoint has been consumed by the model it is no longer required and can be discarded.\nHowever, this functionality shines when it comes to robust inference with exactly-once semantics.\nevaluate the performance on the test data", "res = model.evaluate(test_ds)\nprint(\"test loss, test acc:\", res)\n", "Since the inference is based on 'exactly-once' semantics, the evaluation on the test set can be run only once. 
In order to run the inference again on the test data, a new consumer group should be used.\nTrack the offset lag of the testcg consumer group", "!./kafka_2.13-3.1.0/bin/kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 --describe --group testcg\n", "Once the current-offset matches the log-end-offset for all the partitions, it indicates that the consumer(s) have completed fetching all the messages from the kafka topic.\nOnline learning\nThe online machine learning paradigm is a bit different from the traditional/conventional way of training machine learning models. In the former case, the model continues to incrementally learn/update its parameters as soon as the new data points are available and this process is expected to continue indefinitely. This is unlike the latter approaches where the dataset is fixed and the model iterates over it n number of times. In online learning, the data once consumed by the model may not be available for training again.\nBy utilizing the streaming.KafkaBatchIODataset, it is now possible to train the models in this fashion. Let's continue to use our SUSY dataset for demonstrating this functionality.\nThe tfio training dataset for online learning\nThe streaming.KafkaBatchIODataset is similar to the streaming.KafkaGroupIODataset in its API. Additionally, it is recommended to utilize the stream_timeout parameter to configure the duration for which the dataset will block for new messages before timing out. In the instance below, the dataset is configured with a stream_timeout of 10000 milliseconds. This implies that, after all the messages from the topic have been consumed, the dataset will wait for an additional 10 seconds before timing out and disconnecting from the kafka cluster. If new messages are streamed into the topic before timing out, the data consumption and model training resumes for those newly consumed data points. 
To block indefinitely, set it to -1.", "online_train_ds = tfio.experimental.streaming.KafkaBatchIODataset(\n topics=[\"susy-train\"],\n group_id=\"cgonline\",\n servers=\"127.0.0.1:9092\",\n stream_timeout=10000, # in milliseconds, to block indefinitely, set it to -1.\n configuration=[\n \"session.timeout.ms=7000\",\n \"max.poll.interval.ms=8000\",\n \"auto.offset.reset=earliest\"\n ],\n)", "Every item that the online_train_ds generates is a tf.data.Dataset in itself. Thus, all the standard transformations can be applied as usual.", "def decode_kafka_online_item(raw_message, raw_key):\n message = tf.io.decode_csv(raw_message, [[0.0] for i in range(NUM_COLUMNS)])\n key = tf.strings.to_number(raw_key)\n return (message, key)\n \nfor mini_ds in online_train_ds:\n mini_ds = mini_ds.shuffle(buffer_size=32)\n mini_ds = mini_ds.map(decode_kafka_online_item)\n mini_ds = mini_ds.batch(32)\n if len(mini_ds) > 0:\n model.fit(mini_ds, epochs=3)", "The incrementally trained model can be saved in a periodic fashion (based on use-cases) and can be utilized to infer on the test data in either online or offline modes.\nNote: The streaming.KafkaBatchIODataset and streaming.KafkaGroupIODataset are still in experimental phase and have scope for improvements based on user-feedback.\nReferences:\n\n\nBaldi, P., P. Sadowski, and D. Whiteson. “Searching for Exotic Particles in High-energy Physics with Deep Learning.” Nature Communications 5 (July 2, 2014)\n\n\nSUSY Dataset: https://archive.ics.uci.edu/ml/datasets/SUSY#" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kingb12/languagemodelRNN
report_notebooks/encdec_noing10_bow_200_512_04drbef.ipynb
mit
[ "Encoder-Decoder Analysis\nModel Architecture", "report_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_bow_200_512_04drbef/encdec_noing10_bow_200_512_04drbef.json'\nlog_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_bow_200_512_04drbef/encdec_noing10_bow_200_512_04drbef_logs.json'\n\nimport json\nimport matplotlib.pyplot as plt\nwith open(report_file) as f:\n report = json.loads(f.read())\nwith open(log_file) as f:\n logs = json.loads(f.read())\nprint'Encoder: \\n\\n', report['architecture']['encoder']\nprint'Decoder: \\n\\n', report['architecture']['decoder']", "Perplexity on Each Dataset", "print('Train Perplexity: ', report['train_perplexity'])\nprint('Valid Perplexity: ', report['valid_perplexity'])\nprint('Test Perplexity: ', report['test_perplexity'])", "Loss vs. Epoch", "%matplotlib inline\nfor k in logs.keys():\n plt.plot(logs[k][0], logs[k][1], label=str(k) + ' (train)')\n plt.plot(logs[k][0], logs[k][2], label=str(k) + ' (valid)')\nplt.title('Loss v. Epoch')\nplt.xlabel('Epoch')\nplt.ylabel('Loss')\nplt.legend()\nplt.show()", "Perplexity vs. Epoch", "%matplotlib inline\nfor k in logs.keys():\n plt.plot(logs[k][0], logs[k][3], label=str(k) + ' (train)')\n plt.plot(logs[k][0], logs[k][4], label=str(k) + ' (valid)')\nplt.title('Perplexity v. 
Epoch')\nplt.xlabel('Epoch')\nplt.ylabel('Perplexity')\nplt.legend()\nplt.show()", "Generations", "def print_sample(sample, best_bleu=None):\n enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])\n gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])\n print('Input: '+ enc_input + '\\n')\n print('Gend: ' + sample['generated'] + '\\n')\n print('True: ' + gold + '\\n')\n if best_bleu is not None:\n cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])\n print('Closest BLEU Match: ' + cbm + '\\n')\n print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\\n')\n print('\\n')\n \n\nfor i, sample in enumerate(report['train_samples']):\n print_sample(sample, report['best_bleu_matches_train'][i] if 'best_bleu_matches_train' in report else None)\n\nfor i, sample in enumerate(report['valid_samples']):\n print_sample(sample, report['best_bleu_matches_valid'][i] if 'best_bleu_matches_valid' in report else None)\n\nfor i, sample in enumerate(report['test_samples']):\n print_sample(sample, report['best_bleu_matches_test'][i] if 'best_bleu_matches_test' in report else None)", "BLEU Analysis", "def print_bleu(blue_struct):\n print 'Overall Score: ', blue_struct['score'], '\\n'\n print '1-gram Score: ', blue_struct['components']['1']\n print '2-gram Score: ', blue_struct['components']['2']\n print '3-gram Score: ', blue_struct['components']['3']\n print '4-gram Score: ', blue_struct['components']['4']\n\n# Training Set BLEU Scores\nprint_bleu(report['train_bleu'])\n\n# Validation Set BLEU Scores\nprint_bleu(report['valid_bleu'])\n\n# Test Set BLEU Scores\nprint_bleu(report['test_bleu'])\n\n# All Data BLEU Scores\nprint_bleu(report['combined_bleu'])", "N-pairs BLEU Analysis\nThis analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. 
We can expect very low scores in the ground truth and high scores can expose hyper-common generations", "# Training Set BLEU n-pairs Scores\nprint_bleu(report['n_pairs_bleu_train'])\n\n# Validation Set n-pairs BLEU Scores\nprint_bleu(report['n_pairs_bleu_valid'])\n\n# Test Set n-pairs BLEU Scores\nprint_bleu(report['n_pairs_bleu_test'])\n\n# Combined n-pairs BLEU Scores\nprint_bleu(report['n_pairs_bleu_all'])\n\n# Ground Truth n-pairs BLEU Scores\nprint_bleu(report['n_pairs_bleu_gold'])", "Alignment Analysis\nThis analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores in the ground truth and hyper-common generations to raise the scores", "print 'Average (Train) Generated Score: ', report['average_alignment_train']\nprint 'Average (Valid) Generated Score: ', report['average_alignment_valid']\nprint 'Average (Test) Generated Score: ', report['average_alignment_test']\nprint 'Average (All) Generated Score: ', report['average_alignment_all']\nprint 'Average Gold Score: ', report['average_alignment_gold']" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dineshpackt/Fast-Data-Processing-with-Spark-2
extras/050-Recommendation.ipynb
mit
[ "Recommendation\n1. Read Movielens 1 Million Data (Medium)\n2. Partition data into Train, Validation & Test Datasets (60-20-20)\n3. Train ALS Recommender\n4. Measure Model Performance\n5. Optimize Model parameters viz. Rank, Lambda & Number Of Iterations based on the Validation Dataset\n6. Predict optimized Model performance (RMSE) on the Test Data", "from pyspark.context import SparkContext\nprint \"Running Spark Version %s\" % (sc.version)\n\nfrom pyspark.conf import SparkConf\nconf = SparkConf()\nprint conf.toDebugString()\n\nmovies_file = sc.textFile(\"movielens/medium/movies.dat\")\nmovies_rdd = movies_file.map(lambda line: line.split('::'))\nmovies_rdd.count()\n\nmovies_rdd.first()\n\nratings_file = sc.textFile(\"movielens/medium/ratings.dat\")\nratings_rdd = ratings_file.map(lambda line: line.split('::'))\nratings_rdd.count()\n\nratings_rdd.first()\n\ndef parse_ratings(x):\n user_id = int(x[0])\n movie_id = int(x[1])\n rating = float(x[2])\n timestamp = int(x[3])/10\n return [user_id,movie_id,rating,timestamp]\n\nratings_rdd_01 = ratings_rdd.map(lambda x: parse_ratings(x))\nratings_rdd_01.count()\n\nratings_rdd_01.first()\n\nnumRatings = ratings_rdd_01.count()\nnumUsers = ratings_rdd_01.map(lambda r: r[0]).distinct().count()\nnumMovies = ratings_rdd_01.map(lambda r: r[1]).distinct().count()\n\nprint \"Got %d ratings from %d users on %d movies.\" % (numRatings, numUsers, numMovies)", "A quick scheme to partition the training, validation & test datasets\nTimestamp ending with [6,8) = Validation\nTimestamp ending with [8,9] = Test (ie >= 8)\nRest = Train\nApprox: Training = 60%, Validation = 20%, Test = 20%\nCoding Exercise\nPartition Data", "import time\nstart_time = time.time()\ntraining = ratings_rdd_01.filter(lambda x: (x[3] % 10) < 6)\nvalidation = ratings_rdd_01.filter(lambda x: (x[3] % 10) >= 6 and (x[3] % 10) < 8)\ntest = ratings_rdd_01.filter(lambda x: (x[3] % 10) >= 8)\nnumTraining = training.count()\nnumValidation = validation.count()\nnumTest = 
test.count()\nprint \"Training: %d, validation: %d, test: %d\" % (numTraining, numValidation, numTest)\nprint \"Elapsed : %f\" % (time.time() - start_time)\n\nfrom pyspark.mllib.recommendation import ALS\nrank = 10\nnumIterations = 20\ntrain_data = training.map(lambda p: (p[0], p[1], p[2]))\nstart_time = time.time()\nmodel = ALS.train(train_data, rank, numIterations)\nprint \"Elapsed : %f\" % (time.time() - start_time)\nprint model", "In order to calculate model performance we need a keypair with key=(userID, movieID), value=(pred,actual)\nThen we can do calculations on the Predicted vs Actual values", "# Evaluate the model on validation data\nvalidation_data = validation.map(lambda p: (p[0], p[1]))\npredictions = model.predictAll(validation_data).map(lambda r: ((r[0], r[1]), r[2]))\npredictions.count()\n\npredictions.first()", "Now let us turn the Validation data to KV pair", "validation_data.first()\n\nvalidation.first()\n\nvalidation_key_rdd = validation.map(lambda r: ((r[0], r[1]), r[2]))\nprint validation_key_rdd.count()\nvalidation_key_rdd.first()\n\n#ratesAndPreds = validation.map(lambda r: ((r[0], r[1]), r[2])).join(predictions)\nratesAndPreds = validation_key_rdd.join(predictions)\nratesAndPreds.count()\n\nratesAndPreds.first()", "Now we have the values where we want them !", "MSE = ratesAndPreds.map(lambda r: (r[1][0] - r[1][1])**2).reduce(lambda x, y: x + y)/ratesAndPreds.count()\nprint(\"Mean Squared Error = \" + str(MSE))\n\n# 1.4.0 Mean Squared Error = 0.876346112824\n# 1.3.0 Mean Squared Error = 0.871456869392\n# 1.2.1 Mean Squared Error = 0.877305629074", "Advanced - to try later *** system will hang if it has less memory\nValidation Run\nLet us use the Validation Data to optimize Rank, Lambda & Number Of Iterations\nAnd Predict the model performance using our test data", "def computeRmse(model, data, n):\n \"\"\"\n Compute RMSE (Root Mean Squared Error).\n \"\"\"\n predictions = model.predictAll(data.map(lambda x: (x[0], x[1])))\n 
predictionsAndRatings = predictions.map(lambda x: ((x[0], x[1]), x[2])) \\\n .join(data.map(lambda x: ((x[0], x[1]), x[2]))) \\\n .values()\n return sqrt(predictionsAndRatings.map(lambda x: (x[0] - x[1]) ** 2).reduce(add) / float(n))\n\nimport itertools\nfrom math import sqrt\nfrom operator import add\nranks = [8, 12]\nlambdas = [0.1, 1.0, 10.0]\nnumIters = [10, 20]\nbestModel = None\nbestValidationRmse = float(\"inf\")\nbestRank = 0\nbestLambda = -1.0\nbestNumIter = -1\nstart_time = time.time()\nfor rank, lmbda, numIter in itertools.product(ranks, lambdas, numIters):\n model = ALS.train(train_data, rank, numIter, lmbda)\n validationRmse = computeRmse(model, validation, numValidation)\n print \"RMSE (validation) = %f for the model trained with \" % validationRmse + \\\n \"rank = %d, lambda = %.1f, and numIter = %d.\" % (rank, lmbda, numIter)\n if (validationRmse < bestValidationRmse):\n bestModel = model\n bestValidationRmse = validationRmse\n bestRank = rank\n bestLambda = lmbda\n bestNumIter = numIter\n\ntestRmse = computeRmse(bestModel, test, numTest)\n\n# evaluate the best model on the test set\nprint \"Best model was trained with rank = %d and lambda = %.1f, \" % (bestRank, bestLambda) \\\n + \"and numIter = %d, and its RMSE on the test set is %f.\" % (bestNumIter, testRmse)\nprint \"Elapsed : %f\" % (time.time() - start_time)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
01-pandas_introduction.ipynb
bsd-2-clause
[ "<!--<img width=700px; src=\"../img/logoUPSayPlusCDS_990.png\"> -->\n\n<p style=\"margin-top: 3em; margin-bottom: 2em;\"><b><big><big><big><big>Introduction to Pandas</big></big></big></big></b></p>", "%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\npd.options.display.max_rows = 8", "1. Let's start with a showcase\nCase 1: titanic survival data", "df = pd.read_csv(\"data/titanic.csv\")\n\ndf.head()", "Starting from reading this dataset, to answering questions about this data in a few lines of code:\nWhat is the age distribution of the passengers?", "df['Age'].hist()", "How does the survival rate of the passengers differ between sexes?", "df.groupby('Sex')[['Survived']].aggregate(lambda x: x.sum() / len(x))", "Or how does it differ between the different classes?", "df.groupby('Pclass')['Survived'].aggregate(lambda x: x.sum() / len(x)).plot(kind='bar')", "All the needed functionality for the above examples will be explained throughout this tutorial.\nCase 2: air quality measurement timeseries\nAirBase (The European Air quality dataBase): hourly measurements of all air quality monitoring stations from Europe\nStarting from these hourly data for different stations:", "data = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)\n\ndata.head()", "to answering questions about this data in a few lines of code:\nDoes the air pollution show a decreasing trend over the years?", "data['1999':].resample('M').mean().plot(ylim=[0,120])\n\ndata['1999':].resample('A').mean().plot(ylim=[0,100])", "What is the difference in diurnal profile between weekdays and weekend?", "data['weekday'] = data.index.weekday\ndata['weekend'] = data['weekday'].isin([5, 6])\ndata_weekend = data.groupby(['weekend', data.index.hour])['BASCH'].mean().unstack(level=0)\ndata_weekend.plot()", "We will come back to these examples, and build them up step by step.\n2. 
Pandas: data analysis in python\nFor data-intensive work in Python the Pandas library has become essential.\nWhat is pandas?\n\nPandas can be thought of as NumPy arrays with labels for rows and columns, and better support for heterogeneous data types, but it's also much, much more than that.\nPandas can also be thought of as R's data.frame in Python.\nPowerful for working with missing data, working with time series data, for reading and writing your data, for reshaping, grouping, merging your data, ...\n\nIts documentation: http://pandas.pydata.org/pandas-docs/stable/\n When do you need pandas? \nWhen working with tabular or structured data (like R dataframe, SQL table, Excel spreadsheet, ...):\n\nImport data\nClean up messy data\nExplore data, gain insight into data\nProcess and prepare your data for analysis\nAnalyse your data (together with scikit-learn, statsmodels, ...)\n\n<div class=\"alert alert-warning\">\n<b>ATTENTION!</b>: <br><br>\n\nPandas is great for working with heterogeneous and tabular 1D/2D data, but not all types of data fit in such structures!\n<ul>\n<li>When working with array data (e.g. images, numerical algorithms): just stick with numpy</li>\n<li>When working with multidimensional labeled data (e.g. climate data): have a look at [xarray](http://xarray.pydata.org/en/stable/)</li>\n</ul>\n</div>\n\n2. The pandas data structures: DataFrame and Series\nA DataFrame is a tabular data structure (multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. 
You can think of it as multiple Series objects which share the same index.\n<img align=\"left\" width=50% src=\"img/schema-dataframe.svg\">", "df", "Attributes of the DataFrame\nA DataFrame has, besides an index attribute, also a columns attribute:", "df.index\n\ndf.columns", "To check the data types of the different columns:", "df.dtypes", "An overview of that information can be given with the info() method:", "df.info()", "Also a DataFrame has a values attribute, but attention: when you have heterogeneous data, all values will be upcasted:", "df.values", "Apart from importing your data from an external source (text file, excel, database, ..), one of the most common ways of creating a dataframe is from a dictionary of arrays or lists.\nNote that in the IPython notebook, the dataframe will display in a rich HTML view:", "data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],\n 'population': [11.3, 64.3, 81.3, 16.9, 64.9],\n 'area': [30510, 671308, 357050, 41526, 244820],\n 'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}\ndf_countries = pd.DataFrame(data)\ndf_countries", "One-dimensional data: Series (a column of a DataFrame)\nA Series is a basic holder for one-dimensional labeled data.", "df['Age']\n\nage = df['Age']", "Attributes of a Series: index and values\nThe Series has also an index and values attribute, but no columns", "age.index", "You can access the underlying numpy array representation with the .values attribute:", "age.values[:10]", "We can access series values via the index, just like for NumPy arrays:", "age[0]", "Unlike the NumPy array, though, this index can be something other than integers:", "df = df.set_index('Name')\ndf\n\nage = df['Age']\nage\n\nage['Dooley, Mr. Patrick']", "but with the power of numpy arrays. 
Many things you can do with numpy arrays can also be applied on DataFrames / Series.\nEg element-wise operations:", "age * 1000", "A range of methods:", "age.mean()", "Fancy indexing, like indexing with a list or boolean indexing:", "age[age > 70]", "But also a lot of pandas specific methods, e.g.", "df['Embarked'].value_counts()", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>What is the maximum Fare that was paid? And the median?</li>\n</ul>\n</div>", "# %load snippets/01-pandas_introduction31.py\n\n# %load snippets/01-pandas_introduction32.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Calculate the average survival ratio for all passengers (note: the 'Survived' column indicates whether someone survived (1) or not (0)).</li>\n</ul>\n</div>", "# %load snippets/01-pandas_introduction33.py", "3. Data import and export\nA wide range of input/output formats are natively supported by pandas:\n\nCSV, text\nSQL database\nExcel\nHDF5\njson\nhtml\npickle\nsas, stata\n(parquet)\n...", "#pd.read\n\n#df.to", "Very powerful csv reader:", "pd.read_csv?", "Luckily, if we have a well formed csv file, we don't need many of those arguments:", "df = pd.read_csv(\"data/titanic.csv\")\n\ndf.head()", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>: Read the `data/20000101_20161231-NO2.csv` file into a DataFrame `no2`\n<br><br>\nSome aspects about the file:\n <ul>\n <li>Which separator is used in the file?</li>\n <li>The second row includes unit information and should be skipped (check `skiprows` keyword)</li>\n <li>For missing values, it uses the `'n/d'` notation (check `na_values` keyword)</li>\n <li>We want to parse the 'timestamp' column as datetimes (check the `parse_dates` keyword)</li>\n</ul>\n</div>", "# %load snippets/01-pandas_introduction39.py\n\nno2", "4. 
Exploration\nSome useful methods:\nhead and tail", "no2.head(3)\n\nno2.tail()", "info()", "no2.info()", "Getting some basic summary statistics about the data with describe:", "no2.describe()", "Quickly visualizing the data", "no2.plot(kind='box', ylim=[0,250])\n\nno2['BASCH'].plot(kind='hist', bins=50)", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>: \n\n <ul>\n <li>Plot the age distribution of the titanic passengers</li>\n</ul>\n</div>", "# %load snippets/01-pandas_introduction47.py", "The default plot (when not specifying kind) is a line plot of all columns:", "no2.plot(figsize=(12,6))", "This does not say too much ..\nWe can select part of the data (eg the latest 500 data points):", "no2[-500:].plot(figsize=(12,6))", "Or we can use some more advanced time series features -> see further in this notebook!\n5. Selecting and filtering data\n<div class=\"alert alert-warning\">\n<b>ATTENTION!</b>: <br><br>\n\nOne of pandas' basic features is the labeling of rows and columns, but this makes indexing also a bit more complex compared to numpy. <br><br> We now have to distinguish between:\n\n <ul>\n <li>selection by **label**</li>\n <li>selection by **position**</li>\n</ul>\n</div>", "df = pd.read_csv(\"data/titanic.csv\")", "df[] provides some convenience shortcuts\nFor a DataFrame, basic indexing selects the columns.\nSelecting a single column:", "df['Age']", "or multiple columns:", "df[['Age', 'Fare']]", "But, slicing accesses the rows:", "df[10:15]", "Systematic indexing with loc and iloc\nWhen using [] like above, you can only select from one axis at once (rows or columns, not both). For more advanced indexing, you have some extra attributes:\n\nloc: selection by label\niloc: selection by position", "df = df.set_index('Name')\n\ndf.loc['Bonnell, Miss. Elizabeth', 'Fare']\n\ndf.loc['Bonnell, Miss. Elizabeth':'Andersson, Mr. 
Anders Johan', :]", "Selecting by position with iloc works similarly to indexing numpy arrays:", "df.iloc[0:2,1:3]", "The different indexing methods can also be used to assign data:", "df.loc['Braund, Mr. Owen Harris', 'Survived'] = 100\n\ndf", "Boolean indexing (filtering)\nOften, you want to select rows based on a certain condition. This can be done with 'boolean indexing' (like a where clause in SQL) and comparable to numpy. \nThe indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed.", "df['Fare'] > 50\n\ndf[df['Fare'] > 50]", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Based on the titanic data set, select all rows for male passengers and calculate the mean age of those passengers. Do the same for the female passengers</li>\n</ul>\n</div>", "df = pd.read_csv(\"data/titanic.csv\")\n\n# %load snippets/01-pandas_introduction63.py\n\n# %load snippets/01-pandas_introduction64.py\n\n# %load snippets/01-pandas_introduction65.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Based on the titanic data set, how many passengers older than 70 were on the Titanic?</li>\n</ul>\n</div>", "# %load snippets/01-pandas_introduction66.py\n\n# %load snippets/01-pandas_introduction67.py", "6. The group-by operation\nSome 'theory': the groupby operation (split-apply-combine)", "df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],\n 'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})\ndf", "Recap: aggregating functions\nWhen analyzing data, you often calculate summary statistics (aggregations like the mean, max, ...). As we have seen before, we can easily calculate such a statistic for a Series or column using one of the many available methods. 
For example:", "df['data'].sum()", "However, in many cases your data has certain groups in it, and in that case, you may want to calculate this statistic for each of the groups.\nFor example, in the above dataframe df, there is a column 'key' which has three possible values: 'A', 'B' and 'C'. When we want to calculate the sum for each of those groups, we could do the following:", "for key in ['A', 'B', 'C']:\n print(key, df[df['key'] == key]['data'].sum())", "This becomes very verbose when having multiple groups. You could make the above a bit easier by looping over the different values, but still, it is not very convenient to work with.\nWhat we did above, applying a function on different groups, is a \"groupby operation\", and pandas provides some convenient functionality for this.\nGroupby: applying functions per group\nThe \"group by\" concept: we want to apply the same function on subsets of your dataframe, based on some key to split the dataframe in subsets\nThis operation is also referred to as the \"split-apply-combine\" operation, involving the following steps:\n\nSplitting the data into groups based on some criteria\nApplying a function to each group independently\nCombining the results into a data structure\n\n<img src=\"img/splitApplyCombine.png\">\nSimilar to SQL GROUP BY\nInstead of doing the manual filtering as above\ndf[df['key'] == \"A\"].sum()\ndf[df['key'] == \"B\"].sum()\n...\n\npandas provides the groupby method to do exactly this:", "df.groupby('key').sum()\n\ndf.groupby('key').aggregate(np.sum) # 'sum'", "And many more methods are available.", "df.groupby('key')['data'].sum()", "Application of the groupby concept on the titanic data\nWe go back to the titanic passengers survival data:", "df = pd.read_csv(\"data/titanic.csv\")\n\ndf.head()", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Calculate the average age for each sex again, but now using groupby.</li>\n</ul>\n</div>", "# %load 
snippets/01-pandas_introduction76.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Calculate the average survival ratio for all passengers.</li>\n</ul>\n</div>", "# %load snippets/01-pandas_introduction77.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Calculate this survival ratio for all passengers younger than 25 (remember: filtering/boolean indexing).</li>\n</ul>\n</div>", "# %load snippets/01-pandas_introduction78.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>What is the difference in the survival ratio between the sexes?</li>\n</ul>\n</div>", "# %load snippets/01-pandas_introduction79.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Or how does it differ between the different classes? Make a bar plot visualizing the survival ratio for the 3 classes.</li>\n</ul>\n</div>", "# %load snippets/01-pandas_introduction80.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Make a bar plot to visualize the average Fare paid by people depending on their age. The age column is divided into separate classes using the `pd.cut` function as provided below.</li>\n</ul>\n</div>", "df['AgeClass'] = pd.cut(df['Age'], bins=np.arange(0,90,10))\n\n# %load snippets/01-pandas_introduction82.py", "7. Working with time series data", "no2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)", "When we ensure the DataFrame has a DatetimeIndex, time-series related functionality becomes available:", "no2.index", "Indexing a time series works with strings:", "no2[\"2010-01-01 09:00\": \"2010-01-01 12:00\"]", "A nice feature is \"partial string\" indexing, so you don't need to provide the full datetime string.\nE.g. 
all data of January up to March 2012:", "no2['2012-01':'2012-03']", "Time and date components can be accessed from the index:", "no2.index.hour\n\nno2.index.year", "Converting your time series with resample\nA very powerful method is resample: converting the frequency of the time series (e.g. from hourly to daily data).\nRemember the air quality data:", "no2.plot()", "The time series has a frequency of 1 hour. I want to change this to daily:", "no2.head()\n\nno2.resample('D').mean().head()", "Above I take the mean, but as with groupby I can also specify other methods:", "no2.resample('D').max().head()", "The string to specify the new time frequency: http://pandas.pydata.org/pandas-docs/dev/timeseries.html#offset-aliases\nThese strings can also be combined with numbers, eg '10D'.\nFurther exploring the data:", "no2.resample('M').mean().plot() # 'A'\n\n# no2['2012'].resample('D').plot()\n\n# %load snippets/01-pandas_introduction95.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>: The evolution of the yearly averages, and the overall mean of all stations\n\n <ul>\n <li>Use `resample` and `plot` to plot the yearly averages for the different stations.</li>\n <li>The overall mean of all stations can be calculated by taking the mean of the different columns (`.mean(axis=1)`).</li>\n</ul>\n</div>", "# %load snippets/01-pandas_introduction96.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>: what does the *typical monthly profile* look like for the different stations?\n\n <ul>\n <li>Add a 'month' column to the dataframe.</li>\n <li>Group by the month to obtain the typical monthly averages over the different years.</li>\n</ul>\n</div>\n\nFirst, we add a column to the dataframe that indicates the month (integer value of 1 to 12):", "# %load snippets/01-pandas_introduction97.py", "Now, we can calculate the mean of each month over the different years:", "# %load snippets/01-pandas_introduction98.py\n\n# %load snippets/01-pandas_introduction99.py", 
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>: The typical diurnal profile for the different stations\n\n <ul>\n <li>As for the month, you can now group by the hour of the day.</li>\n</ul>\n</div>", "# %load snippets/01-pandas_introduction100.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>: What is the difference in the typical diurnal profile between week and weekend days for the 'BASCH' station?\n\n <ul>\n <li>Add a column 'weekday' defining the different days in the week.</li>\n <li>Add a column 'weekend' defining if a day is in the weekend (i.e. days 5 and 6) or not (True/False).</li>\n <li>You can groupby on multiple items at the same time. In this case you would need to group by both weekend/weekday and hour of the day.</li>\n</ul>\n</div>\n\nAdd a column indicating the weekday:", "no2.index.weekday?\n\n# %load snippets/01-pandas_introduction102.py", "Add a column indicating week/weekend", "# %load snippets/01-pandas_introduction103.py", "Now we can groupby the hour of the day and the weekend (or use pivot_table):", "# %load snippets/01-pandas_introduction104.py\n\n# %load snippets/01-pandas_introduction105.py\n\n# %load snippets/01-pandas_introduction106.py\n\n# %load snippets/01-pandas_introduction107.py", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>: What is the number of exceedances of hourly values above the European limit 200 µg/m3?\n\nCount the number of exceedances of hourly values above the European limit 200 µg/m3 for each year and station after 2005. Make a barplot of the counts. Add a horizontal line indicating the maximum number of exceedances (which is 18) allowed per year.\n<br><br>\n\nHints:\n\n <ul>\n <li>Create a new DataFrame called `exceedances` (with boolean values) indicating if the threshold is exceeded or not</li>\n <li>Remember that the sum of True values can be used to count elements.
Do this using groupby for each year.</li>\n <li>Adding a horizontal line can be done with the matplotlib function `ax.axhline`.</li>\n</ul>\n</div>", "# re-reading the data to have a clean version\nno2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)\n\n# %load snippets/01-pandas_introduction109.py\n\n# %load snippets/01-pandas_introduction110.py\n\n# %load snippets/01-pandas_introduction111.py", "9. What I didn't talk about\n\nConcatenating data: pd.concat\nMerging and joining data: pd.merge\nReshaping data: pivot_table, melt, stack, unstack\nWorking with missing data: isnull, dropna, interpolate, ...\n...\n\nFurther reading\n\n\nPandas documentation: http://pandas.pydata.org/pandas-docs/stable/\n\n\nBooks\n\n\"Python for Data Analysis\" by Wes McKinney\n\"Python Data Science Handbook\" by Jake VanderPlas\n\n\n\nTutorials (many good online tutorials!)\n\n\nhttps://github.com/jorisvandenbossche/pandas-tutorial\n\n\nhttps://github.com/brandon-rhodes/pycon-pandas-tutorial\n\n\nTom Augspurger's blog\n\n\nhttps://tomaugspurger.github.io/modern-1.html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
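The resample workflow the notebook above walks through (hourly NO2 readings downsampled to daily statistics) can be sketched offline; the real `no2` CSV is replaced here by a made-up hourly series, so the values are assumptions:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the hourly NO2 data loaded in the notebook
idx = pd.date_range("2012-01-01", periods=48, freq="h")
no2 = pd.Series(np.arange(48, dtype=float), index=idx, name="no2")

# Partial string indexing works on the DatetimeIndex
jan_first = no2["2012-01-01"]

# Downsample hourly -> daily; as with groupby, pick the aggregation explicitly
daily_mean = no2.resample("D").mean()
daily_max = no2.resample("D").max()
print(daily_mean)
```

The same `.resample("M")` / `.resample("A")` calls from the notebook give monthly and yearly averages on this series too.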
materialsproject/mapidoc
example_notebooks/Using the Materials API with Python.ipynb
bsd-3-clause
[ "Introduction\nThis notebook demonstrates the use of the Materials API using Python. We will do so with Python Materials Genomics (pymatgen)'s high level tools as well as using the requests package.\nUsing pymatgen's MPRester (Recommended)", "# We start by importing MPRester, which is available from the root import of pymatgen.\nfrom pymatgen.ext.matproj import MPRester\nfrom pprint import pprint\n\n# Initializing MPRester. MPRester looks for the API key in two places: \n# - Supplying it directly as an __init__ arg.\n# - Setting the \"MAPI_KEY\" environment variable.\n# Please obtain your API key at https://www.materialsproject.org/dashboard\n\nm = MPRester()", "Doing simple queries using the high-level methods.\nMany methods in MPRester support the extremely simple yet powerful query syntax for materials. There are three kinds of queries:\n\nFormulae, e.g., \"Li2O\", \"Fe2O3\", \"*TiO3\"\nChemical systems, e.g., \"Li-Fe-O\", \"*-Fe-O\"\nMaterials ids, e.g., \"mp-1234\"\n\nThe MPRester automatically detects what kind of query is being made. Also, for formulas and chemical systems, wildcards are supported with a *. That means *2O will yield a list of the following formula results:\nB2O, Xe2O, Li2O ...", "# The following query returns all structures in the Materials Project with formula \"Li2O\"\npprint(m.get_data(\"Li2O\", prop=\"structure\"))\n\n# This query returns the chemical formula and material id of all materials with formula of form \"*3O4\". \n# The material_id is always returned with any use of get_data.\npprint(m.get_data(\"*3O4\", prop=\"pretty_formula\"))\n\n# Getting a DOS object and plotting it.
Bandstructures are similar.\n\ndos = m.get_dos_by_material_id(\"mp-19017\")\nbs = m.get_bandstructure_by_material_id(\"mp-19017\")\n\nfrom pymatgen.electronic_structure.plotter import DosPlotter, BSPlotter\n%matplotlib inline\n\ndos_plotter = DosPlotter()\ndos_plotter.add_dos_dict(dos.get_spd_dos())\ndos_plotter.show()\n\nbs_plotter = BSPlotter(bs)\nbs_plotter.show()", "More sophisticated queries using MPRester's very powerful query method.\nThe query() method works essentially like a raw MongoDB query on the Materials Project database. With it, you can perform extremely sophisticated queries to obtain large and customized quantities of materials data easily. The way to use query is\npython\nquery(criteria, properties)\nThe criteria argument can either be a simple string similar to the powerful wildcard based formula and chemical system search described above, or a full MongoDB query dict with all the features of the Mongo query syntax.", "# Get material ids for everything in the Materials Project database\n\ndata = m.query(criteria={}, properties=[\"task_id\"])\n\n# Get the energy for materials with material_ids \"mp-1234\" and \"mp-1\".\ndata = m.query(criteria={\"task_id\": {\"$in\": [\"mp-1234\", \"mp-1\"]}}, properties=[\"final_energy\"])\nprint(data)\n\n# Get the spacegroup symbol for all materials with formula Li2O.\ndata = m.query(criteria={\"pretty_formula\": \"Li2O\"}, properties=[\"spacegroup.symbol\"])\nprint(data)\n\n# Get the ICSD ids of all compounds containing either K, Li or Na with O.\ndata = m.query(criteria={\"elements\": {\"$in\": [\"K\", \"Li\", \"Na\"], \"$all\": [\"O\"]}, \"nelements\": 2}, \n properties=[\"icsd_id\", \"pretty_formula\", \"spacegroup.symbol\"])\npprint(data)", "Using requests (or urllib)\nIf you decide not to install pymatgen, you can still make use of the Materials API by calling the relevant URLs directly.
Here, we will demonstrate how you can do so using the requests library, though any http library should work similarly. All the queries demonstrated here are similar to the above queries.", "import requests\nimport os\nimport json\n\nr = requests.get(\"https://www.materialsproject.org/rest/v2/materials/Li2O/vasp/final_structure\",\n headers={\"X-API-KEY\": os.environ[\"PMG_MAPI_KEY\"]})\ncontent = r.json() # a dict\n\nr = requests.get(\"https://www.materialsproject.org/rest/v2/materials/*3O4/vasp/pretty_formula\",\n headers={\"X-API-KEY\": os.environ[\"PMG_MAPI_KEY\"]})\ncontent = r.json() # a dict\npprint(content[\"response\"])", "Note that we cannot demonstrate DOS and Bandstructure plotting here, since those rely on pymatgen's high level plotting utilities for these objects. But you can of course query for the DOS and Bandstructure data and implement your own customized plotting in your favorite graphing utility.", "data = {\n \"criteria\": {\n \"elements\": {\"$in\": [\"Li\", \"Na\", \"K\"], \"$all\": [\"O\"]},\n \"nelements\": 2,\n },\n \"properties\": [\n \"icsd_id\",\n \"pretty_formula\",\n \"spacegroup.symbol\"\n ]\n}\nr = requests.post('https://materialsproject.org/rest/v2/query',\n headers={'X-API-KEY': os.environ[\"MAPI_KEY\"]},\n data={k: json.dumps(v) for k,v in data.items()})\ncontent = r.json() # a dict\npprint(content[\"response\"])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
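The raw-requests cell above builds its POST payload with `dict.iteritems()`, which only exists on Python 2. A minimal offline sketch of assembling that same payload on Python 3 (no network call, endpoint and key handling left as in the notebook):

```python
import json

# MongoDB-style criteria, mirroring the notebook's last query
data = {
    "criteria": {
        "elements": {"$in": ["Li", "Na", "K"], "$all": ["O"]},
        "nelements": 2,
    },
    "properties": ["icsd_id", "pretty_formula", "spacegroup.symbol"],
}

# Each value is JSON-encoded separately before being form-posted;
# on Python 3 use items(), not the Python-2-only iteritems().
payload = {k: json.dumps(v) for k, v in data.items()}
print(payload["criteria"])
```

This `payload` dict is what would be handed to `requests.post(..., data=payload)`.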
ijstokes/bokeh-blaze-tutorial
1.1 Charts - Timeseries.ipynb
mit
[ "<img src=images/continuum_analytics_b&w.png align=\"left\" width=\"15%\" style=\"margin-right:15%\">\n<h1 align='center'>Bokeh Tutorial</h1>\n\n1.1 Charts - Timeseries\nExercise: Visualize the evolution of the temperature anomaly monthly average over time with a timeseries chart\n\nData: 'data/Land_Ocean_Monthly_Anomaly_Average.csv'\n\nTips:\nimport pandas as pd\npd.read_csv()\npd.to_datetime()", "import pandas as pd\nfrom bokeh.charts import TimeSeries, output_notebook, show\n\noutput_notebook()\n\n# Get data\nbe = pd.read_csv('data/Land_Ocean_Monthly_Anomaly_Average.csv',\n parse_dates=[0])\n\nbe.head()\n\nbe.datetime[:10]\n\n# Process data\nbe.datetime = pd.to_datetime(be.datetime)\nbe = be[['anomaly','datetime']]\n\n# Output option\noutput_notebook()\n\n# Create timeseries chart\nt = TimeSeries(be, x='datetime', y='anomaly')\n\n# Show chart\nshow(t)", "Exercise: Style your plot\nIdeas:\n\nAdd a title\nAdd axis labels\nChange width and height\nDeactivate toolbox or customize available tools\nChange line color\n\nCharts arguments can be found: http://bokeh.pydata.org/en/latest/docs/user_guide/charts.html#generic-arguments", "# Style your timeseries chart \n\n# Show new chart", "Exercise: Add the moving annual average to your chart\nTips:\npd.rolling_mean()", "# Compute moving average\n\n# Create chart with moving average\n\n# Show chart with moving average" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
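The last exercise in the Bokeh notebook above hints at `pd.rolling_mean()`, which has since been removed from pandas. A hedged sketch of the moving annual average on a made-up anomaly series (the real data comes from the CSV the notebook loads):

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the monthly anomaly data loaded above
idx = pd.date_range("2000-01-01", periods=36, freq="MS")
be = pd.DataFrame({"datetime": idx, "anomaly": np.ones(36)})

# pd.rolling_mean(be.anomaly, 12) is the old spelling; modern pandas:
be["annual_avg"] = be["anomaly"].rolling(window=12).mean()

# The first 11 months have no full year behind them, so they are NaN
print(be["annual_avg"].head(12))
```

The `annual_avg` column can then be plotted alongside `anomaly` on the same chart.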
harpolea/pyro2
mesh/mesh-examples.ipynb
bsd-3-clause
[ "Mesh examples\nthis notebook illustrates the basic ways of interacting with the pyro2 mesh module. We create some data that lives on a grid and show how to fill the ghost cells. The pretty_print() function shows us that they work as expected.", "from __future__ import print_function\nimport numpy as np\nimport mesh.boundary as bnd\nimport mesh.patch as patch\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# for unit testing, we want to ensure the same random numbers\nnp.random.seed(100)", "Setup a Grid with Variables\nThere are a few core classes that we deal with when creating a grid with associated variables:\n\n\nGrid2d : this holds the size of the grid (in zones) and the physical coordinate information, including coordinates of cell edges and centers\n\n\nBC : this is a container class that simply holds the type of boundary condition on each domain edge.\n\n\nArrayIndexer : this is an array of data along with methods that know how to access it with different offsets into the data that usually arise in stencils (like {i+1, j})\n\n\nCellCenterData2d : this holds the data that lives on a grid. Each variable that is part of this class has its own boundary condition type.\n\n\nWe start by creating a Grid2d object with 4 x 6 cells and 2 ghost cells", "g = patch.Grid2d(4, 6, ng=2)\nprint(g)\n\nhelp(g)", "Then create a dataset that lives on this grid and add a variable name. For each variable that lives on the grid, we need to define the boundary conditions -- this is done through the BC object.", "bc = bnd.BC(xlb=\"periodic\", xrb=\"periodic\", ylb=\"reflect\", yrb=\"outflow\")\nprint(bc)\n\nd = patch.CellCenterData2d(g)\nd.register_var(\"a\", bc)\nd.create()\nprint(d)", "Working with the data\nNow we fill the grid with random data. get_var() returns an ArrayIndexer object that has methods for accessing views into the data. Here we use a.v() to get the \"valid\" region, i.e. 
excluding ghost cells.", "a = d.get_var(\"a\")\na.v()[:,:] = np.random.rand(g.nx, g.ny)", "When we pretty_print() the variable, we see the ghost cells colored red. Note that we just filled the interior above.", "a.pretty_print()", "pretty_print() can also take an argument, specifying the format string to be used for the output.", "a.pretty_print(fmt=\"%7.3g\")", "Now fill the ghost cells -- notice that the left and right are periodic, the upper is outflow, and the lower is reflect, as specified when we registered the data above.", "d.fill_BC(\"a\")\na.pretty_print()", "We can find the L2 norm of the data easily", "a.norm()", "and the min and max", "print(a.min(), a.max())", "ArrayIndexer\nWhen we access the data, an ArrayIndexer object is returned. The ArrayIndexer sub-classes the NumPy ndarray, so it can do all of the methods that a NumPy array can, but in addition, we can use the ip(), jp(), or ipjp() methods of the ArrayIndexer object to shift our view in the x, y, or x & y directions.\nTo make this clearer, we'll change our data set to be nicely ordered numbers. We index the ArrayIndexer the same way we would a NumPy array. The index space includes ghost cells, so the ilo and ihi attributes from the grid object are useful to index just the valid region. The .v() method is a shortcut that also gives a view into just the valid data.\nNote: when we use one of the ip(), jp(), ipjp(), or v() methods, the result is a regular NumPy ndarray, not an ArrayIndexer object. This is because it only spans part of the domain (e.g., no ghost cells), and therefore cannot be associated with the Grid2d object that the ArrayIndexer is built from.", "type(a)\n\ntype(a.v())\n\na[:,:] = np.arange(g.qx*g.qy).reshape(g.qx, g.qy)\n\na.pretty_print()", "We index our arrays as {i,j}, so x (indexed by i) is the row and y (indexed by j) is the column in the NumPy array. Note that Python arrays are stored in row-major order, which means that all of the entries in the same row are adjacent in memory.
This means that when we simply print out the ndarray, we see constant-x horizontally, which is the transpose of what we are used to.", "a.v()", "We can offset our view into the array by one in x -- this would be like {i+1, j} when we loop over data. The ip() method is used here, and takes an argument which is the (positive) shift in the x (i) direction. So here's a shift by 1", "a.ip(-1, buf=1)", "A shifted view is necessarily smaller than the original array, and relies on ghost cells to bring new data into view. Because of this, the underlying data is no longer the same size as the original data, so we return it as an ndarray (which is actually just a view into the data in the ArrayIndexer object, so no copy is made).\nTo see that it is simply a view, let's shift and edit the data", "d = a.ip(1)\nd[1,1] = 0.0\na.pretty_print()", "Here, since d was really a view into $a_{i+1,j}$, and we accessed element (1,1) into that view (with 0,0 as the origin), we were really accessing the element (2,1) in the valid region.\nDifferencing\nArrayIndexer objects are easy to use to construct differences, like those that appear in a stencil for a finite-difference, without having to explicitly loop over the elements of the array.\nHere we'll create a new dataset that is initialized with a sine function", "g = patch.Grid2d(8, 8, ng=2)\nd = patch.CellCenterData2d(g)\nbc = bnd.BC(xlb=\"periodic\", xrb=\"periodic\", ylb=\"periodic\", yrb=\"periodic\")\nd.register_var(\"a\", bc)\nd.create()\n\na = d.get_var(\"a\")\na[:,:] = np.sin(2.0*np.pi*a.g.x2d)\nd.fill_BC(\"a\")", "Our grid object can provide us with a scratch array (an ArrayIndexer object) defined on the same grid", "b = g.scratch_array()\ntype(b)", "We can then fill the data in this array with differenced data from our original array -- since b has a separate data region in memory, its elements are independent of a. We do need to make sure that we have the same number of elements on the left and right of the =.
Since by default, ip() will return a view with the same size as the valid region, we can use .v() on the left to accept the differences.\nHere we compute a centered-difference approximation to the first derivative", "b.v()[:,:] = (a.ip(1) - a.ip(-1))/(2.0*a.g.dx)\n# normalization was 2.0*pi\nb[:,:] /= 2.0*np.pi\n\nplt.plot(g.x[g.ilo:g.ihi+1], a[g.ilo:g.ihi+1,a.g.jc])\nplt.plot(g.x[g.ilo:g.ihi+1], b[g.ilo:g.ihi+1,b.g.jc])\nprint (a.g.dx)", "Coarsening and prolonging\nWe can get a new ArrayIndexer object on a coarser grid for one of our variables", "c = d.restrict(\"a\")\n\nc.pretty_print()", "or a finer grid", "f = d.prolong(\"a\")\n\nf.pretty_print(fmt=\"%6.2g\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
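The shifted-view differencing that `ip()` provides in the mesh notebook above reduces to plain NumPy slicing on an array padded with ghost cells. A self-contained 1-D sketch of the centered derivative computed there (grid sizes mirror the notebook; the slicing is an assumption about what `ip()` does under the hood, not pyro's actual implementation):

```python
import numpy as np

nx, ng = 8, 2                      # zones and ghost cells, as in the notebook
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx     # cell-center coordinates on [0, 1]
a = np.zeros(nx + 2 * ng)
a[ng:ng + nx] = np.sin(2.0 * np.pi * x)

# periodic ghost-cell fill (what fill_BC("a") does for periodic BCs)
a[:ng] = a[nx:nx + ng]
a[nx + ng:] = a[ng:2 * ng]

# centered difference (a.ip(1) - a.ip(-1)) / (2 dx) over the valid region
b = (a[ng + 1:ng + nx + 1] - a[ng - 1:ng + nx - 1]) / (2.0 * dx)
b /= 2.0 * np.pi                   # normalize by the 2*pi from d/dx sin(2*pi*x)
```

The discrete derivative comes out as cos(2πx) scaled by sin(2πΔx)/(2πΔx), the usual second-order damping of a centered stencil.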
marcelomiky/PythonCodes
Coursera/CICCP2/Curso Introdução à Ciência da Computação com Python - Parte 2.ipynb
mit
[ "Semana 1", "def cria_matriz(tot_lin, tot_col, valor):\n matriz = [] #lista vazia\n for i in range(tot_lin):\n linha = []\n for j in range(tot_col):\n linha.append(valor)\n matriz.append(linha)\n return matriz\n\nx = cria_matriz(2, 3, 99)\nx\n\ndef cria_matriz(tot_lin, tot_col, valor):\n matriz = [] #lista vazia\n for i in range(tot_lin):\n linha = []\n for j in range(tot_col):\n linha.append(valor)\n matriz.append(linha)\n return matriz\n\nx = cria_matriz(2, 3, 99)\nx", "Este código faz com que primeiramente toda a primeira linha seja preenchida, em seguida a segunda e assim sucessivamente. Se nós quiséssemos que a primeira coluna fosse preenchida e em seguida a segunda coluna e assim por diante, como ficaria o código?\nUm exemplo: se o usuário digitasse o seguinte comando “x = cria_matriz(2,3)” e em seguida informasse os seis números para serem armazenados na matriz, na seguinte ordem: 1, 2, 3, 4, 5, 6; o x teria ao final da função a seguinte matriz: [[1, 3, 5], [2, 4, 6]].", "def cria_matriz(num_linhas, num_colunas):\n matriz = [] #lista vazia\n for i in range(num_linhas):\n linha = []\n for j in range(num_colunas):\n linha.append(0)\n matriz.append(linha)\n\n for i in range(num_colunas):\n for j in range(num_linhas):\n matriz[j][i] = int(input(\"Digite o elemento [\" + str(j) + \"][\" + str(i) + \"]: \"))\n\n return matriz\n\nx = cria_matriz(2, 3)\n\nx\n\ndef tarefa(mat):\n dim = len(mat)\n for i in range(dim):\n print(mat[i][dim-1-i], end=\" \")\n\nmat = [[1,2,3],[4,5,6],[7,8,9]]\ntarefa(mat)\n\n# Observação: o trecho do print (end = \" \") irá mudar a finalização padrão do print\n# que é pular para a próxima linha. 
Com esta mudança, o cursor permanecerá na mesma \n# linha aguardando a impressão seguinte.", "Exercício 1: Tamanho da matriz\nEscreva uma função dimensoes(matriz) que recebe uma matriz como parâmetro e imprime as dimensões da matriz recebida, no formato iXj.\nExemplos:\nminha_matriz = [[1], [2], [3]]\ndimensoes(minha_matriz)\n3X1\nminha_matriz = [[1, 2, 3], [4, 5, 6]]\ndimensoes(minha_matriz)\n2X3", "def dimensoes(A):\n \n '''Função que recebe uma matriz como parâmetro e imprime as dimensões da matriz recebida, no formato iXj.\n \n Obs: i = colunas, j = linhas\n \n Exemplo: \n >>> minha_matriz = [[1], \n [2], \n [3]\n ]\n >>> dimensoes(minha_matriz)\n >>> 3X1\n '''\n \n lin = len(A)\n col = len(A[0])\n \n return print(\"%dX%d\" % (lin, col))\n\nmatriz1 = [[1], [2], [3]]\ndimensoes(matriz1)\n\nmatriz2 = [[1, 2, 3], [4, 5, 6]]\ndimensoes(matriz2)", "Exercício 2: Soma de matrizes\nEscreva a função soma_matrizes(m1, m2) que recebe 2 matrizes e devolve uma matriz que represente sua soma caso as matrizes tenham dimensões iguais. 
Caso contrário, a função deve devolver False.\nExemplos:\nm1 = [[1, 2, 3], [4, 5, 6]]\nm2 = [[2, 3, 4], [5, 6, 7]]\nsoma_matrizes(m1, m2) => [[3, 5, 7], [9, 11, 13]]\nm1 = [[1], [2], [3]]\nm2 = [[2, 3, 4], [5, 6, 7]]\nsoma_matrizes(m1, m2) => False", "def soma_matrizes(m1, m2):\n \n def dimensoes(A):\n lin = len(A)\n col = len(A[0])\n \n return ((lin, col))\n \n if dimensoes(m1) != dimensoes(m2):\n return False\n else:\n matriz = []\n for i in range(len(m1)):\n linha = []\n for j in range(len(m1[0])):\n linha.append(m1[i][j] + m2[i][j])\n matriz.append(linha)\n return matriz\n\nm1 = [[1, 2, 3], [4, 5, 6]]\nm2 = [[2, 3, 4], [5, 6, 7]]\nsoma_matrizes(m1, m2)\n\nm1 = [[1], [2], [3]]\nm2 = [[2, 3, 4], [5, 6, 7]]\nsoma_matrizes(m1, m2)", "Praticar tarefa de programação: Exercícios adicionais (opcionais)\nExercício 1: Imprimindo matrizes\nComo proposto na primeira vídeo-aula da semana, escreva uma função imprime_matriz(matriz), que recebe uma matriz como parâmetro e imprime a matriz, linha por linha. Note que NÃO se deve imprimir espaços após o último elemento de cada linha!\nExemplos:\nminha_matriz = [[1], [2], [3]]\nimprime_matriz(minha_matriz)\n1\n2\n3\nminha_matriz = [[1, 2, 3], [4, 5, 6]]\nimprime_matriz(minha_matriz)\n1 2 3\n4 5 6", "def imprime_matriz(A):\n \n for i in range(len(A)):\n for j in range(len(A[i])):\n print(A[i][j])\n\nminha_matriz = [[1], [2], [3]]\nimprime_matriz(minha_matriz)\n\nminha_matriz = [[1, 2, 3], [4, 5, 6]]\nimprime_matriz(minha_matriz)", "Exercício 2: Matrizes multiplicáveis\nDuas matrizes são multiplicáveis se o número de colunas da primeira é igual ao número de linhas da segunda. 
Escreva a função sao_multiplicaveis(m1, m2) que recebe duas matrizes como parâmetro e devolve True se as matrizes forem multiplicavéis (na ordem dada) e False caso contrário.\nExemplos:\nm1 = [[1, 2, 3], [4, 5, 6]]\nm2 = [[2, 3, 4], [5, 6, 7]]\nsao_multiplicaveis(m1, m2) => False\nm1 = [[1], [2], [3]]\nm2 = [[1, 2, 3]]\nsao_multiplicaveis(m1, m2) => True", "def sao_multiplicaveis(m1, m2):\n \n '''Recebe duas matrizes como parâmetros e devolve True se as matrizes forem multiplicáveis (número de colunas \n da primeira é igual ao número de linhs da segunda). False se não forem\n '''\n \n if len(m1) == len(m2[0]):\n return True\n else:\n return False\n\nm1 = [[1, 2, 3], [4, 5, 6]]\nm2 = [[2, 3, 4], [5, 6, 7]]\nsao_multiplicaveis(m1, m2)\n\nm1 = [[1], [2], [3]]\nm2 = [[1, 2, 3]]\nsao_multiplicaveis(m1, m2)", "Semana 2", "\"áurea gosta de coentro\".capitalize()\n\n\"AQUI\".capitalize()\n\n# função para remover espaços em branco\n\n\" email@company.com \".strip()\n\n\"o abecedário da Xuxa é didático\".count(\"a\")\n\n\"o abecedário da Xuxa é didático\".count(\"á\")\n\n\"o abecedário da Xuxa é didático\".count(\"X\")\n\n\"o abecedário da Xuxa é didático\".count(\"x\")\n\n\"o abecedário da Xuxa é didático\".count(\"z\")\n\n\"A vida como ela seje\".replace(\"seje\", \"é\")\n\n\"áurea gosta de coentro\".capitalize().center(80) #80 caracteres de largura, no centro apareça este texto\n\ntexto = \"Ao que se percebe, só há o agora\"\ntexto\n\ntexto.find(\"q\")\n\ntexto.find('se')\n\ntexto[7] + texto[8]\n\ntexto.find('w')\n\nfruta = 'amora'\n\nfruta[:4] # desde o começo até a posição TRÊS!\n\nfruta[1:] # desde a posição 1 (começa no zero) até o final\n\nfruta[2:4] # desde a posição 2 até a posição 3", "Exercício\nEscrever uma função que recebe uma lista de Strings contendo nomes de pessoas como parâmetro e devolve o nome mais curto. 
A função deve ignorar espaços antes e depois do nome e deve devolver o nome com a primeira letra maiúscula.", "def mais_curto(lista_de_nomes):\n \n menor = lista_de_nomes[0] # considerando que o menor nome está no primeiro lugar\n \n for i in lista_de_nomes:\n if len(i) < len(menor):\n menor = i\n \n return menor.capitalize()\n\nlista = ['carlos', 'césar', 'ana', 'vicente', 'maicon', 'washington']\n\nmais_curto(lista)\n\nord('a')\n\nord('A')\n\nord('b')\n\nord('m')\n\nord('M')\n\nord('AA')\n\n'maçã' > 'banana'\n\n'Maçã' > 'banana'\n\n'Maçã'.lower() > 'banana'.lower()\n\ntxt = 'José'\ntxt = txt.lower()\ntxt\n\nlista = ['ana', 'maria', 'José', 'Valdemar']\nlen(lista)\n\nlista[3].lower()\n\nlista[2]\n\nlista[2] = lista[2].lower()\n\nlista\n\nfor i in lista:\n print(i)\n\nlista[0][0]", "Exercício\nEscreva uma função que recebe um array de strings como parâmetro e devolve o primeiro string na ordem lexicográfica, ignorando-se maiúsculas e minúsculas", "def menor_string(array_string):\n \n for i in range(len(array_string)):\n array_string[i] = array_string[i].lower()\n \n menor = array_string[0] # considera o primeiro como o menor\n \n for i in array_string:\n if ord(i[0][0]) < ord(menor[0]):\n menor = i\n \n return menor\n\nlista = ['maria', 'José', 'Valdemar']\n\nmenor_string(lista)\n\n# Código para inverter string e deixa maiúsculo\n\ndef fazAlgo(string):\n pos = len(string)-1\n string = string.upper()\n while pos >= 0:\n print(string[pos],end = \"\")\n pos = pos - 1\n\nfazAlgo(\"paralelepipedo\")\n\n# Código que deixa maiúsculo as letras de ordem ímpar: \n\ndef fazAlgo(string):\n pos = 0\n string1 = \"\"\n string = string.lower()\n stringMa = string.upper()\n while pos < len(string):\n if pos % 2 == 0:\n string1 = string1 + stringMa[pos]\n else:\n string1 = string1 + string[pos]\n pos = pos + 1\n return string1 \n\nprint(fazAlgo(\"paralelepipedo\"))\n\n# Código que tira os espaços em branco\n\ndef fazAlgo(string):\n pos = 0\n string1 = \"\"\n while pos < 
len(string):\n if string[pos] != \" \":\n string1 = string1 + string[pos]\n pos = pos + 1\n return string1 \n\nprint(fazAlgo(\"ISTO É UM TESTE\"))\n\n# e para retornar \"Istoéumteste\", ou seja, só deixar a primeira letra maiúscula...\n\ndef fazAlgo(string):\n pos = 0\n string1 = \"\" \n while pos < len(string):\n if string[pos] != \" \":\n string1 = string1 + string[pos]\n pos = pos + 1\n string1 = string1.capitalize()\n return string1\n\nprint(fazAlgo(\"ISTO É UM TESTE\"))\n\nx, y = 10, 20\n\nx, y\n\nx\n\ny\n\ndef peso_altura():\n\n return 77, 1.83\n\npeso_altura()\n\npeso, altura = peso_altura()\n\npeso\n\naltura\n\n# Atribuição múltipla em C (vacas magras...)\n'''\nint a, b, temp\n\na = 10\nb = 20\n\ntemp = a\na = b\nb = temp\n'''\n\na, b = 10, 20\na, b = b, a\na, b\n\n# Atribuição aumentada\n\nx = 10\n\nx = x + 10\n\nx\n\nx = 10\n\nx += 10\n\nx\n\nx = 3\n\nx *= 2\n\nx\n\nx = 2\nx **= 10\nx\n\nx = 100\nx /= 3\nx\n\ndef pagamento_semanal(valor_por_hora, num_horas = 40):\n return valor_por_hora * num_horas\n\npagamento_semanal(10)\n\npagamento_semanal(10, 20) # aceita, mesmo assim, o segundo parâmetro.\n\n# Asserção de Invariantes\n\ndef pagamento_semanal(valor_por_hora, num_horas = 40):\n assert valor_por_hora >= 0 and num_horas > 0 \n return valor_por_hora * num_horas\n\npagamento_semanal(30, 10)\n\npagamento_semanal(10, -10)\n\nx, y = 10, 12\nx, y = y, x\nprint(\"x = \",x,\"e y = \",y)\n\nx = 10\nx += 10\nx /= 2\nx //= 3\nx %= 2\nx *= 9\nprint(x)\n\ndef calculo(x, y = 10, z = 5):\n return x + y * z;\n\ncalculo(1, 2, 3)\n\ncalculo(1, 2) # 2 entra em y.\n\ndef calculo(x, y = 10, z = 5):\n return x + y * z;\n\nprint(calculo(1, 2, 3))\n\ncalculo()\n\nprint(calculo( ,12, 10))\n\ndef horario_em_segundos(h, m, s):\n assert h >= 0 and m >= 0 and s >= 0\n return h * 3600 + m * 60 + s\n\nprint(horario_em_segundos (3,0,50))\n\nprint(horario_em_segundos(1,2,3))\n\nprint(horario_em_segundos (-1,20,30))\n\n# Módulos em Python\n\ndef fib(n): # escreve a série de Fibonacci 
até n\n a, b = 0, 1\n while b < n:\n print(b, end = ' ')\n a, b = b, a + b\n print()\n\ndef fib2(n):\n result = []\n a, b = 0, 1\n while b < n:\n result.append(b)\n a, b = b, a + b\n return result\n\n'''\nE no shell do Python (chamado na pasta que contém o arquivo fibo.py)\n>>> import fibo\n>>> fibo.fib(100)\n1 1 2 3 5 8 13 21 34 55 89 \n>>> fibo.fib2(100)\n[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]\n>>> fibo.fib2(1000)\n[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987]\n>>> meuFib = fibo.fib\n>>> meuFib(20)\n1 1 2 3 5 8 13 \n\n'''\n\n", "Incluindo <pre>print(__name__)</pre> na última linha de fibo.py, ao fazer a importação import fibo no shell do Python, imprime 'fibo', que é o nome do programa.\nAo incluir \n<pre>\nif __name__ == \"__main__\": \n import sys\n fib(int(sys.argv[1]))\n</pre>\n\npodemos ver se está sendo executado como script (com o if do jeito que está) ou como módulo dentro de outro código (se o nome não for main, está sendo importado pra usar alguma função lá dentro).", "def fazAlgo(string): # inverte a string e deixa as vogais maiúsculas\n pos = len(string)-1 # define a variável posição do array\n stringMi = string.lower() # aqui estão todas minúsculas\n string = string.upper() # aqui estão todas maiúsculas\n stringRe = \"\" # string de retorno\n \n while pos >= 0:\n if string[pos] == 'A' or string[pos] == 'E' or string[pos] == 'I' or string[pos] == 'O' or string[pos] == 'U':\n stringRe = stringRe + string[pos]\n else:\n stringRe = stringRe + stringMi[pos]\n pos = pos - 1\n return stringRe\n\nif __name__ == \"__main__\":\n print(fazAlgo(\"teste\"))\n print(fazAlgo(\"o ovo do avestruz\"))\n print(fazAlgo(\"A CASA MUITO ENGRAÇADA\"))\n print(fazAlgo(\"A TELEvisão queBROU\"))\n print(fazAlgo(\"A Vaca Amarela\"))", "Exercício 1: Letras maiúsculas\nEscreva a função maiusculas(frase) que recebe uma frase (uma string) como parâmetro e devolve uma string com as letras maiúsculas que existem nesta frase, na ordem em que elas aparecem.\nPara 
resolver este exercício, pode ser útil verificar uma tabela ASCII, que contém os valores de cada caractere. Ver http://equipe.nce.ufrj.br/adriano/c/apostila/tabascii.htm\nNote que para simplificar a solução do exercício, as frases passadas para a sua função não possuirão caracteres que não estejam presentes na tabela ASCII apresentada, como ç, á, É, ã, etc.\nDica: Os valores apresentados na tabela são os mesmos devolvidos pela função ord apresentada nas aulas.\nExemplos:", "maiusculas('Programamos em python 2?')\n# deve devolver 'P'\n\nmaiusculas('Programamos em Python 3.')\n# deve devolver 'PP'\n\nmaiusculas('PrOgRaMaMoS em python!')\n# deve devolver 'PORMMS'\n\ndef maiusculas(frase):\n \n listRe = [] # lista de retorno vazia\n stringRe = '' # string de retorno vazia\n \n for ch in frase:\n if ord(ch) >=65 and ord(ch) <= 91:\n listRe.append(ch) \n \n # retornando a lista para string\n stringRe = ''.join(listRe)\n \n return stringRe\n\nmaiusculas('Programamos em python 2?')\n\nmaiusculas('Programamos em Python 3.')\n\nmaiusculas('PrOgRaMaMoS em python!')\n\nx = ord('A')\ny = ord('a')\nx, y\n\nord('B')\n\nord('Z')", "Exercício 2: Menor nome\nComo pedido no primeiro vídeo desta semana, escreva uma função menor_nome(nomes) que recebe uma lista de strings com nome de pessoas como parâmetro e devolve o nome mais curto presente na lista.\nA função deve ignorar espaços antes e depois do nome e deve devolver o menor nome presente na lista. 
Este nome deve ser devolvido com a primeira letra maiúscula e seus demais caracteres minúsculos, independente de como tenha sido apresentado na lista passada para a função.\nQuando houver mais de um nome com o menor comprimento dentre os nomes na lista, a função deve devolver o primeiro nome com o menor comprimento presente na lista.\nExemplos:", "menor_nome(['maria', 'josé', 'PAULO', 'Catarina'])\n# deve devolver 'José'\n\nmenor_nome(['maria', ' josé ', ' PAULO', 'Catarina '])\n# deve devolver 'José'\n\nmenor_nome(['Bárbara', 'JOSÉ ', 'Bill'])\n# deve devolver José\n\ndef menor_nome(nomes):\n \n tamanho = len(nomes) # pega a quantidade de nomes na lista\n menor = '' # variável para escolher o menor nome\n lista_limpa = [] # lista de nomes sem os espaços em branco\n \n # ignora espaços em branco\n for str in nomes:\n lista_limpa.append(str.strip())\n \n # verifica o menor nome\n menor = lista_limpa[0] # considera o primeiro como menor\n for str in lista_limpa:\n if len(str) < len(menor): # não deixei <= senão pegará um segundo menor de mesmo tamanho\n menor = str\n \n return menor.capitalize() # deixa a primeira letra maiúscula\n\nmenor_nome(['maria', 'josé', 'PAULO', 'Catarina'])\n# deve devolver 'José'\n\nmenor_nome(['maria', ' josé ', ' PAULO', 'Catarina '])\n# deve devolver 'José'\n\nmenor_nome(['Bárbara', 'JOSÉ ', 'Bill'])\n# deve devolver José\n\nmenor_nome(['Bárbara', 'JOSÉ ', 'Bill', ' aDa '])", "Exercícios adicionais\nExercício 1: Contando vogais ou consoantes\nEscreva a função conta_letras(frase, contar=\"vogais\"), que recebe como primeiro parâmetro uma string contendo uma frase e como segundo parâmetro uma outra string. Este segundo parâmetro deve ser opcional.\nQuando o segundo parâmetro for definido como \"vogais\", a função deve devolver o numero de vogais presentes na frase. Quando ele for definido como \"consoantes\", a função deve devolver o número de consoantes presentes na frase. 
Se este parâmetro não for passado para a função, deve-se assumir o valor \"vogais\" para o parâmetro.\nExemplos:\nconta_letras('programamos em python')\n6\nconta_letras('programamos em python', 'vogais')\n6\nconta_letras('programamos em python', 'consoantes')\n13", "def conta_letras(frase, contar = 'vogais'):\n \n pos = len(frase) - 1 # atribui na variável pos (posição) a posição do array\n count = 0 # define o contador de vogais\n \n while pos >= 0: # conta as vogais\n if frase[pos] == 'a' or frase[pos] == 'e' or frase[pos] == 'i' or frase[pos] == 'o' or frase[pos] == 'u':\n count += 1\n pos = pos - 1\n \n if contar == 'consoantes':\n frase = frase.replace(' ', '') # retira espaços em branco\n return len(frase) - count # subtrai do total as vogais\n else:\n return count\n\nconta_letras('programamos em python')\n\nconta_letras('programamos em python', 'vogais')\n\nconta_letras('programamos em python', 'consoantes')\n\nconta_letras('bcdfghjklmnpqrstvxywz', 'consoantes')\n\nlen('programamos em python')\n\nfrase = 'programamos em python'\nfrase.replace(' ', '')\nfrase", "Exercício 2: Ordem lexicográfica\nComo pedido no segundo vídeo da semana, escreva a função primeiro_lex(lista) que recebe uma lista de strings como parâmetro e devolve o primeiro string na ordem lexicográfica. 
Neste exercício, considere letras maiúsculas e minúsculas.\nDica: revise a segunda vídeo-aula desta semana.\nExemplos:\nprimeiro_lex(['oĺá', 'A', 'a', 'casa'])\n'A'\nprimeiro_lex(['AAAAAA', 'b'])\n'AAAAAA'", "def primeiro_lex(lista):\n    \n    resposta = lista[0] # define o primeiro item da lista como a resposta...mas verifica depois.\n    \n    for palavra in lista: # 'palavra' em vez de 'str', para não esconder o nome embutido str\n        if palavra < resposta: # comparação lexicográfica direta entre as strings inteiras\n            resposta = palavra\n    \n    return resposta\n\n# atenção: 'assert f(x), valor' usa o valor apenas como mensagem de erro; a comparação exige ==\nassert primeiro_lex(['oĺá', 'A', 'a', 'casa']) == 'A'\n\nassert primeiro_lex(['AAAAAA', 'b']) == 'AAAAAA'\n\nprimeiro_lex(['casa', 'a', 'Z', 'A'])\n\nprimeiro_lex(['AAAAAA', 'b'])", "Semana 3 - POO – Programação Orientada a Objetos", "def cria_matriz(tot_lin, tot_col, valor):\n\n    matriz = [] #lista vazia\n\n    for i in range(tot_lin):\n        linha = []\n        for j in range(tot_col):\n            linha.append(valor)\n        matriz.append(linha)\n    \n    return matriz\n\n# import matriz # descomentar apenas no arquivo .py\n\ndef soma_matrizes(A, B):\n    \n    num_lin = len(A)\n    num_col = len(A[0])\n    \n    C = cria_matriz(num_lin, num_col, 0) # matriz com zeros\n    \n    for lin in range(num_lin): # percorre as linhas da matriz\n        for col in range(num_col): # percorre as colunas da matriz\n            C[lin][col] = A[lin][col] + B[lin][col]\n    \n    return C\n\nif __name__ == '__main__':\n    A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]\n    B = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]\n    print(soma_matrizes(A, B))\n\n# No arquivo matriz.py\n\ndef cria_matriz(tot_lin, tot_col, valor):\n\n    matriz = [] #lista vazia\n\n    for i in range(tot_lin):\n        linha = []\n        for j in range(tot_col):\n            linha.append(valor)\n        matriz.append(linha)\n    \n    return matriz\n\n# E no arquivo soma_matrizes.py\n\nimport matriz\n\ndef soma_matrizes(A, B):\n    \n    num_lin = len(A)\n    num_col = len(A[0])\n    \n    C = matriz.cria_matriz(num_lin, num_col, 0) # matriz com zeros\n    \n    for lin in range(num_lin): # percorre as linhas da matriz\n        for col in range(num_col): # percorre as colunas da matriz\n            C[lin][col] = A[lin][col] + B[lin][col]\n    \n    return C\n\nif 
__name__ == '__main__':\n    A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]\n    B = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]\n    print(soma_matrizes(A, B))\n    \n\n'''\nMultiplicação de matrizes:\n\n1 2 3     1 2     22 28\n4 5 6  *  3 4  =  49 64\n          5 6\n\n1*1 + 2*3 + 3*5 = 22\n1*2 + 2*4 + 3*6 = 28\n4*1 + 5*3 + 6*5 = 49\n4*2 + 5*4 + 6*6 = 64\n\nc11 = a11*b11 + a12*b21 + a13*b31\nc12 = a11*b12 + a12*b22 + a13*b32\nc21 = a21*b11 + a22*b21 + a23*b31\nc22 = a21*b12 + a22*b22 + a23*b32\n'''\n\ndef multiplica_matrizes(A, B):\n    \n    num_linA, num_colA = len(A), len(A[0])\n    num_linB, num_colB = len(B), len(B[0])\n    \n    assert num_colA == num_linB\n    \n    C = []\n    \n    for lin in range(num_linA): # percorre as linhas da matriz A\n        # começando uma nova linha\n        C.append([])\n        for col in range(num_colB): # percorre as colunas da matriz B\n            # Adicionando uma nova coluna na linha\n            C[lin].append(0)\n            for k in range(num_colA):\n                C[lin][col] += A[lin][k] * B[k][col] \n\n    return C\n\nif __name__ == '__main__':\n    A = [[1, 2, 3], [4, 5, 6]]\n    B = [[1, 2], [3, 4], [5, 6]]\n    print(multiplica_matrizes(A, B))\n    ", "POO", "class Carro:\n    pass\n\nmeu_carro = Carro()\nmeu_carro\n\ncarro_do_trabalho = Carro()\ncarro_do_trabalho\n\nmeu_carro.ano = 1968\nmeu_carro.modelo = 'Fusca'\nmeu_carro.cor = 'azul'\n\nmeu_carro.ano\n\nmeu_carro.cor\n\ncarro_do_trabalho.ano = 1981\ncarro_do_trabalho.modelo = 'Brasília'\ncarro_do_trabalho.cor = 'amarela'\n\ncarro_do_trabalho.ano\n\nnovo_fusca = meu_carro # duas variáveis apontando para o mesmo objeto\n\nnovo_fusca #repare que é o mesmo end. 
de memória\n\nnovo_fusca.ano += 10\n\nnovo_fusca.ano\n\nnovo_fusca", "Testes para praticar", "class Pato:\n pass\n\npato = Pato()\npatinho = Pato()\nif pato == patinho:\n print(\"Estamos no mesmo endereço!\")\nelse:\n print(\"Estamos em endereços diferentes!\")\n\nclass Carro:\n def __init__(self, modelo, ano, cor): # init é o Construtor da classe\n self.modelo = modelo\n self.ano = ano\n self.cor = cor\n\ncarro_do_meu_avo = Carro('Ferrari', 1980, 'vermelha')\ncarro_do_meu_avo\n\ncarro_do_meu_avo.cor", "POO – Programação Orientada a Objetos – Parte 2", "def main():\n carro1 = Carro('Brasília', 1968, 'amarela', 80)\n carro2 = Carro('Fuscão', 1981, 'preto', 95)\n \n carro1.acelere(40)\n carro2.acelere(50)\n carro1.acelere(80)\n carro1.pare()\n carro2.acelere(100)\n \nclass Carro:\n \n def __init__(self, modelo, ano, cor, vel_max):\n self.modelo = modelo\n self.ano = ano\n self.cor = cor\n self.vel = 0\n self.maxV = vel_max # velocidade máxima\n \n def imprima(self):\n if self.vel == 0: # parado dá para ver o ano\n print('%s %s %d' % (self.modelo, self.cor, self.ano))\n elif self.vel < self.maxV:\n print('%s %s indo a %d km/h' % (self.modelo, self.cor, self.vel))\n else:\n print('%s %s indo muito rapido!' 
% (self.modelo, self.cor))\n        \n    def acelere(self, velocidade):\n        self.vel = velocidade\n        if self.vel > self.maxV:\n            self.vel = self.maxV\n        self.imprima()\n        \n    def pare(self):\n        self.vel = 0\n        self.imprima()\n        \nmain()\n    ", "TESTE PARA PRATICAR POO – Programação Orientada a Objetos – Parte 2", "class Cafeteira:\n    def __init__(self, marca, tipo, tamanho, cor):\n        self.marca = marca\n        self.tipo = tipo\n        self.tamanho = tamanho\n        self.cor = cor\n\nclass Cachorro:\n    def __init__(self, raça, idade, nome, cor):\n        self.raça = raça\n        self.idade = idade\n        self.nome = nome\n        self.cor = cor\n    \nrex = Cachorro('vira-lata', 2, 'Bobby', 'marrom')\n\n'vira-lata' == rex.raça\n\nrex.idade > 2\n\nrex.idade == '2'\n\nrex.nome == 'rex'\n\nBobby.cor == 'marrom' # NameError: o objeto chama-se rex; 'Bobby' é apenas o valor do atributo nome\n\nrex.cor == 'marrom'\n\nclass Lista:\n    def append(self, elemento):\n        return \"Oops! Este objeto não é uma lista\"\n    \nlista = []\n\na = Lista()\nb = a.append(7)\n\nlista.append(b)\n\na\n\nb\n\nlista", "Códigos Testáveis", "import math\n\nclass Bhaskara:\n\n    def delta(self, a, b, c):\n        return b ** 2 - 4 * a * c\n\n    def main(self):\n        a_digitado = float(input(\"Digite o valor de a:\"))\n        b_digitado = float(input(\"Digite o valor de b:\"))\n        c_digitado = float(input(\"Digite o valor de c:\"))\n        print(self.calcula_raizes(a_digitado, b_digitado, c_digitado))\n\n    def calcula_raizes(self, a, b, c):\n        d = self.delta(a, b, c) # delta é método: self já é passado automaticamente\n        if d == 0:\n            raiz1 = (-b + math.sqrt(d)) / (2 * a)\n            return 1, raiz1 # indica que tem uma raiz e o valor dela\n        else:\n            if d < 0:\n                return 0\n            else:\n                raiz1 = (-b + math.sqrt(d)) / (2 * a)\n                raiz2 = (-b - math.sqrt(d)) / (2 * a)\n                return 2, raiz1, raiz2\n\nBhaskara().main()\n\nBhaskara().main()\n\nimport Bhaskara\n\nclass TestBhaskara:\n\n    def testa_uma_raiz(self):\n        b = Bhaskara.Bhaskara()\n        assert b.calcula_raizes(1, 0, 0) == (1, 0)\n\n    def testa_duas_raizes(self):\n        b = Bhaskara.Bhaskara()\n        assert b.calcula_raizes(1, -5, 6) == (2, 3, 2)\n\n    def testa_zero_raizes(self):\n        b = Bhaskara.Bhaskara()\n        assert b.calcula_raizes(10, 10, 
10) == 0\n \n def testa_raiz_negativa(self):\n b = Bhaskara.Bhaskara()\n assert b.calcula_raizes(10, 20, 10) == (1, -1)", "Fixture: valor fixo para um conjunto de testes\n@pytest.fixture", "# Nos estudos ficou pytest_bhaskara.py\n\nimport Bhaskara\nimport pytest\n\nclass TestBhaskara:\n\n @pytest.fixture\n def b(self):\n return Bhaskara.Bhaskara()\n \n def testa_uma_raiz(self, b):\n assert b.calcula_raizes(1, 0, 0) == (1, 0)\n\n def testa_duas_raizes(self, b):\n assert b.calcula_raizes(1, -5, 6) == (2, 3, 2)\n\n def testa_zero_raizes(self, b):\n assert b.calcula_raizes(10, 10, 10) == 0\n \n def testa_raiz_negativa(self, b):\n assert b.calcula_raizes(10, 20, 10) == (1, -1)", "Parametrização", "def fatorial(n):\n if n < 0:\n return 0\n i = fat = 1\n while i <= n:\n fat = fat * i\n i += 1\n return fat\n\nimport pytest\n\n@pytest.mark.parametrize(\"entrada, esperado\", [\n (0, 1),\n (1, 1), \n (-10, 0), \n (4, 24),\n (5, 120)\n ])\n\ndef testa_fatorial(entrada, esperado):\n assert fatorial(entrada) == esperado", "Exercícios\n\n\nEscreva uma versão do TestaBhaskara usando @pytest.mark.parametrize\n\n\nEscreva uma bateria de testes para o seu código preferido\n\n\nTarefa de programação: Lista de exercícios - 3\nExercício 1: Uma classe para triângulos\nDefina a classe Triangulo cujo construtor recebe 3 valores inteiros correspondentes aos lados a, b e c de um triângulo.\nA classe triângulo também deve possuir um método perimetro, que não recebe parâmetros e devolve um valor inteiro correspondente ao perímetro do triângulo.\nt = Triangulo(1, 1, 1)\ndeve atribuir uma referência para um triângulo de lados 1, 1 e 1 à variável t\nUm objeto desta classe deve responder às seguintes chamadas:\nt.a\ndeve devolver o valor do lado a do triângulo\nt. 
b\ndeve devolver o valor do lado b do triângulo\nt.c\ndeve devolver o valor do lado c do triângulo\nt.perimetro()\ndeve devolver um inteiro correspondente ao valor do perímetro do triângulo.", "class Triangulo:\n \n def __init__(self, a, b, c):\n self.a = a\n self.b = b\n self.c = c\n \n def perimetro(self):\n return self.a + self.b + self.c\n\nt = Triangulo(1, 1, 1)\n\nt.a\n\nt.b\n\nt.c\n\nt.perimetro()", "Exercício 2: Tipos de triângulos\nNa classe triângulo, definida na Questão 1, escreva o metodo tipo_lado() que devolve uma string dizendo se o triângulo é:\nisóceles (dois lados iguais)\nequilátero (todos os lados iguais)\nescaleno (todos os lados diferentes)\nNote que se o triângulo for equilátero, a função não deve devolver isóceles.\nExemplos:\nt = Triangulo(4, 4, 4)\nt.tipo_lado()\ndeve devolver 'equilátero'\nu = Triangulo(3, 4, 5)\n.tipo_lado()\ndeve devolver 'escaleno'", "class Triangulo:\n \n def __init__(self, a, b, c):\n self.a = a\n self.b = b\n self.c = c\n \n def tipo_lado(self):\n if self.a == self.b and self.a == self.c:\n return 'equilátero'\n elif self.a != self.b and self.a != self.c and self.b != self.c:\n return 'escaleno'\n else:\n return 'isósceles'\n\nt = Triangulo(4, 4, 4)\nt.tipo_lado()\n\nu = Triangulo(3, 4, 5)\nu.tipo_lado()\n\nv = Triangulo(1, 3, 3)\nv.tipo_lado()\n\nt = Triangulo(5, 8, 5)\nt.tipo_lado()\n\nt = Triangulo(5, 5, 6)\nt.tipo_lado()\n\n'''\nExercício 1: Triângulos retângulos\n\nEscreva, na classe Triangulo, o método retangulo() que devolve \nTrue se o triângulo for retângulo, e False caso contrário.\n\nExemplos:\n\nt = Triangulo(1, 3, 5)\nt.retangulo()\n# deve devolver False\n\nu = Triangulo(3, 4, 5)\nu.retangulo()\n# deve devolver True\n'''\n\nclass Triangulo:\n \n def __init__(self, a, b, c):\n self.a = a\n self.b = b\n self.c = c\n \n def retangulo(self):\n if self.a > self.b and self.a > self.c:\n if self.a ** 2 == self.b ** 2 + self.c ** 2:\n return True\n else:\n return False\n elif self.b > self.a and self.b > 
self.c:\n            if self.b ** 2 == self.c ** 2 + self.a ** 2:\n                return True\n            else:\n                return False\n        else:\n            if self.c ** 2 == self.a ** 2 + self.b ** 2:\n                return True\n            else:\n                return False\n\nt = Triangulo(1, 3, 5)\nt.retangulo()\n\nt = Triangulo(3, 1, 5)\nt.retangulo()\n\nt = Triangulo(5, 1, 3)\nt.retangulo()\n\nu = Triangulo(3, 4, 5)\nu.retangulo()\n\nu = Triangulo(4, 5, 3)\nu.retangulo()\n\nu = Triangulo(5, 3, 4)\nu.retangulo()", "Exercício 2: Triângulos semelhantes\nAinda na classe Triangulo, escreva um método semelhantes(triangulo) que recebe um objeto do tipo Triangulo como parâmetro e verifica se o triângulo atual é semelhante ao triângulo passado como parâmetro. Caso positivo, o método deve devolver True. Caso negativo, deve devolver False.\nVerifique a semelhança dos triângulos através do comprimento dos lados.\nDica: você pode colocar os lados de cada um dos triângulos em uma lista diferente e ordená-las.\nExemplo:\nt1 = Triangulo(2, 2, 2)\nt2 = Triangulo(4, 4, 4)\nt1.semelhantes(t2)\ndeve devolver True", "class Triangulo:\n\n    def __init__(self, a, b, c):\n        self.a = a\n        self.b = b\n        self.c = c\n\n    def semelhantes(self, outro):\n        # coloca os lados de cada triângulo em uma lista e ordena (dica do enunciado)\n        lados1 = sorted([self.a, self.b, self.c])\n        lados2 = sorted([outro.a, outro.b, outro.c])\n        # são semelhantes se os lados correspondentes forem proporcionais (mesma razão)\n        razao = lados2[0] / lados1[0]\n        return lados2[1] / lados1[1] == razao and lados2[2] / lados1[2] == razao\n\nt1 = Triangulo(2, 2, 2)\nt2 = Triangulo(4, 4, 4)\nt1.semelhantes(t2)", "Week 4\nBusca Sequencial", "def busca_sequencial(seq, x):\n    '''(list, elemento) -> bool'''\n    for i in range(len(seq)):\n        if seq[i] == x:\n            return True\n    return False\n\n# código com cara de C =\\\n\nlist = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\n\nbusca_sequencial(list, 3)\n\nlist = ['casa', 'texto', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\n\nbusca_sequencial(list, 'texto')\n\nclass Musica:\n    \n    def __init__(self, titulo, interprete, compositor, ano):\n        self.titulo = titulo\n        self.interprete = interprete\n        self.compositor = compositor\n        self.ano = ano\n\nclass Buscador:\n    \n    def busca_por_titulo(self, playlist, titulo):\n        for i in range(len(playlist)):\n            if playlist[i].titulo == titulo:\n                return i\n        return -1\n\n    def vamos_buscar(self):\n        playlist = [Musica(\"Ponta de Areia\", \"Milton Nascimento\", \"Milton Nascimento\", 1975),\n                    Musica(\"Podres Poderes\", \"Caetano Veloso\", \"Caetano Veloso\", 1984), \n                    Musica(\"Baby\", \"Gal Costa\", \"Caetano Veloso\", 1969)]\n        \n        onde_achou = self.busca_por_titulo(playlist, \"Baby\")\n        \n        if onde_achou == -1:\n            print(\"A música buscada não está na playlist\")\n        else:\n            preferida = playlist[onde_achou]\n            print(preferida.titulo, preferida.interprete, preferida.compositor, preferida.ano, sep = ', ')\n\nb = Buscador()\n\nb.vamos_buscar()", "Complexidade Computacional\n\n\nAnálise matemática do desempenho de um algoritmo\n\n\nEstudo analítico de:\n\nQuantas operações um algoritmo requer para que ele seja executado\nQuanto tempo ele vai demorar para ser executado\nQuanto de memória ele vai ocupar\n\n\n\nAnálise da Busca Sequencial\nExemplo: \nLista telefônica de São Paulo, supondo 2 milhões de telefones fixos.\nSupondo que cada iteração do for (uma comparação de strings) dure 1 milissegundo. 
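Essas estimativas podem ser conferidas com um cálculo rápido (esboço em Python; os valores usados são os do exemplo acima):

```python
# Esboço: conferindo as estimativas de tempo da busca sequencial do exemplo acima
n = 2_000_000        # telefones fixos na lista
t_iteracao = 0.001   # 1 milissegundo por comparação, em segundos

pior_caso = n * t_iteracao          # percorre a lista inteira
caso_medio = (n / 2) * t_iteracao   # em média, percorre metade da lista

print(pior_caso / 60, "minutos no pior caso")    # ≈ 33,3 minutos
print(caso_medio / 60, "minutos no caso médio")  # ≈ 16,6 minutos
```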
\nPior caso: 2000s = 33,3 minutos\nCaso médio (1 milhão): 1000s = 16,6 minutos\nComplexidade Computacional da Busca Sequencial\n\n\nDada uma lista de tamanho n\n\n\nA complexidade computacional da busca sequencial é: \n\nn, no pior caso\nn/2, no caso médio\n\n\n\nConclusão\n\nBusca sequencial é boa pois é bem simples\n\nFunciona bem quando a busca é feita num volume pequeno de dados\n\n\nSua Complexidade Computacional é muito alta\n\nÉ muito lenta quando o volume de dados é grande\nPortanto, dizemos que é um algoritmo ineficiente\n\nAlgoritmo de Ordenação Seleção Direta\nSeleção Direta\nA cada passo, busca pelo menor elemento do pedaço ainda não ordenado da lista e o coloca no início da lista\nNo 1º passo, busca o menor elemento de todos e coloca na posição inicial da lista.\nNo 2º passo, busca o 2º menor elemento da lista e coloca na 2ª posição da lista.\nNo 3º passo, busca o 3º menor elemento da lista e coloca na 3ª posição da lista.\nRepete até terminar a lista", "class Ordenador:\n \n def selecao_direta(self, lista):\n \n fim = len(lista)\n \n for i in range(fim - 1):\n # Inicialmente o menor elemento já visto é o i-ésimo\n posicao_do_minimo = i\n \n for j in range(i + 1, fim):\n if lista[j] < lista[posicao_do_minimo]: # encontrou um elemento menor...\n posicao_do_minimo = j # ...substitui.\n \n # Coloca o menor elemento encontrado no início da sub-lista\n # Para isso, troca de lugar os elementos nas posições i e posicao_do_minimo\n \n lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i]\n\nlista = [10, 3, 8, -10, 200, 17, 32]\no = Ordenador()\n\no.selecao_direta(lista)\n\nlista\n\nlista_nomes = ['maria', 'carlos', 'wilson', 'ana']\no.selecao_direta(lista_nomes)\nlista_nomes\n\nimport random\n\nprint(random.randint(1, 10))\n\nfrom random import shuffle\nx = [i for i in range(100)]\nshuffle(x)\nx\n\no.selecao_direta(x)\nx\n\ndef comprova_ordem(list):\n \n flag = True\n \n for i in range(len(list) - 1):\n if list[i] > list[i + 1]:\n flag = 
False\n \n return flag\n\ncomprova_ordem(x)\n\nlist = [1, 2, 3, 4, 5]\nlist2 = [1, 3, 2, 4, 5]\n\ncomprova_ordem(list)\n\ncomprova_ordem(list2)\n\ndef busca_sequencial(seq, x):\n for i in range(len(seq)):\n if seq[i] == x:\n return True\n return False\n\ndef selecao_direta(lista):\n fim = len(lista)\n for i in range(fim-1):\n pos_menor = i\n for j in range(i+1,fim):\n if lista[j] < lista[pos_menor]:\n pos_menor = j\n lista[i],lista[pos_menor] = lista[pos_menor],lista[i]\n return lista\n\nnumeros = [55,33,0,900,-432,10,77,2,11]", "Tarefa de programação: Lista de exercícios - 4\nExercício 1: Lista ordenada\nEscreva a função ordenada(lista), que recebe uma lista com números inteiros como parâmetro e devolve o booleano True se a lista estiver ordenada e False se a lista não estiver ordenada.", "def ordenada(list):\n \n flag = True\n \n for i in range(len(list) - 1):\n if list[i] > list[i + 1]:\n flag = False\n \n return flag", "Exercício 2: Busca sequencial\nImplemente a função busca(lista, elemento), que busca um determinado elemento em uma lista e devolve o índice correspondente à posição do elemento encontrado. Utilize o algoritmo de busca sequencial. 
Nos casos em que o elemento buscado não existir na lista, a função deve devolver o booleano False.\nbusca(['a', 'e', 'i'], 'e')\ndeve devolver => 1\nbusca([12, 13, 14], 15)\ndeve devolver => False", "def busca(lista, elemento):\n\n for i in range(len(lista)):\n if lista[i] == elemento:\n return i\n return False\n\nbusca(['a', 'e', 'i'], 'e')\n\nbusca([12, 13, 14], 15)", "Praticar tarefa de programação: Exercícios adicionais (opcionais)\nExercício 1: Gerando listas grandes\nEscreva a função lista_grande(n), que recebe como parâmetro um número inteiro n e devolve uma lista contendo n números inteiros aleatórios.", "def lista_grande(n):\n \n import random\n return random.sample(range(1, 1000), n)\n\nlista_grande(10)", "Exercício 2: Ordenação com selection sort\nImplemente a função ordena(lista), que recebe uma lista com números inteiros como parâmetro e devolve esta lista ordenada. Utilize o algoritmo selection sort.", "def ordena(lista):\n \n fim = len(lista)\n \n for i in range(fim - 1):\n min = i\n \n for j in range(i + 1, fim):\n if lista[j] < lista[min]:\n min = j\n \n lista[i], lista[min] = lista[min], lista[i]\n \n return lista\n\nlista = [10, 3, 8, -10, 200, 17, 32]\nordena(lista)\nlista", "Week 5 - Algoritmo de Ordenação da Bolha - Bubblesort\nLista como um tubo de ensaio vertical, os elementos mais leves sobem à superfície como uma bolha, os mais pesados afundam. 
\nPercorre a lista múltiplas vezes; a cada passagem, compara todos os elementos adjacentes e troca de lugar os que estiverem fora de ordem", "class Ordenador:\n \n def selecao_direta(self, lista):\n \n fim = len(lista)\n \n for i in range(fim - 1):\n # Inicialmente o menor elemento já visto é o i-ésimo\n posicao_do_minimo = i\n \n for j in range(i + 1, fim):\n if lista[j] < lista[posicao_do_minimo]: # encontrou um elemento menor...\n posicao_do_minimo = j # ...substitui.\n \n # Coloca o menor elemento encontrado no início da sub-lista\n # Para isso, troca de lugar os elementos nas posições i e posicao_do_minimo\n \n lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i]\n \n def bolha(self, lista):\n \n fim = len(lista)\n \n for i in range(fim - 1, 0, -1):\n for j in range(i):\n if lista[j] > lista[j + 1]:\n lista[j], lista[j + 1] = lista[j + 1], lista[j] ", "Exemplo do algoritmo bubblesort em ação: \nInicial: \n5 1 7 3 2\n1 5 7 3 2\n1 5 3 7 2\n1 5 3 2 7 (fim da primeira iteração)\n1 3 5 2 7\n1 3 2 5 7 (fim da segunda iteração)\n1 2 3 5 7", "lista = [10, 3, 8, -10, 200, 17, 32]\n\no = Ordenador()\no.bolha(lista)\nlista", "Comparação de Desempenho\nMódulo time:\n\nfunção time()\ndevolve o tempo decorrido (em segundos) desde 1/1/1970 (no Unix)\n\nPara medir um intervalo de tempo\nimport time\nantes = time.time()\nalgoritmo_a_ser_cronometrado()\ndepois = time.time()\nprint(\"A execução do algoritmo demorou \", depois - antes, \"segundos\")", "class Ordenador:\n \n def selecao_direta(self, lista):\n \n fim = len(lista)\n \n for i in range(fim - 1):\n # Inicialmente o menor elemento já visto é o i-ésimo\n posicao_do_minimo = i\n \n for j in range(i + 1, fim):\n if lista[j] < lista[posicao_do_minimo]: # encontrou um elemento menor...\n posicao_do_minimo = j # ...substitui.\n \n # Coloca o menor elemento encontrado no início da sub-lista\n # Para isso, troca de lugar os elementos nas posições i e posicao_do_minimo\n \n lista[i], lista[posicao_do_minimo] = 
lista[posicao_do_minimo], lista[i]\n    \n    def bolha(self, lista):\n        \n        fim = len(lista)\n        \n        for i in range(fim - 1, 0, -1):\n            for j in range(i):\n                if lista[j] > lista[j + 1]:\n                    lista[j], lista[j + 1] = lista[j + 1], lista[j] \n\nimport random\nimport time\n\nclass ContaTempos:\n    \n    def lista_aleatoria(self, n): # n = número de elementos da lista\n        lista = [0 for x in range(n)] # lista com n elementos, todos sendo zero\n        for i in range(n):\n            lista[i] = random.randrange(1000) # inteiros entre 0 e 999\n        return lista\n    \n    def compara(self, n):\n        \n        lista1 = self.lista_aleatoria(n)\n        lista2 = lista1[:] # cópia! com lista2 = lista1, a seleção direta receberia a lista já ordenada pela bolha\n        \n        o = Ordenador()\n        \n        antes = time.time()\n        o.bolha(lista1)\n        depois = time.time()\n        print(\"Bolha demorou\", depois - antes, \"segundos\")\n        \n        antes = time.time()\n        o.selecao_direta(lista2)\n        depois = time.time()\n        print(\"Seleção direta demorou\", depois - antes, \"segundos\")\n\nc = ContaTempos()\nc.compara(1000)\n\nprint(\"Diferença de\", 0.16308164596557617 - 0.05245494842529297)\n\nc.compara(5000)", "Melhoria no Algoritmo de Ordenação da Bolha\nPercorre a lista múltiplas vezes; a cada passagem, compara todos os elementos adjacentes e troca de lugar os que estiverem fora de ordem.\nMelhoria: se em uma das iterações nenhuma troca é realizada, isso significa que a lista já está ordenada e podemos finalizar o algoritmo.", "class Ordenador:\n    \n    def selecao_direta(self, lista):\n        \n        fim = len(lista)\n        \n        for i in range(fim - 1):\n            # Inicialmente o menor elemento já visto é o i-ésimo\n            posicao_do_minimo = i\n            \n            for j in range(i + 1, fim):\n                if lista[j] < lista[posicao_do_minimo]: # encontrou um elemento menor...\n                    posicao_do_minimo = j # ...substitui.\n            \n            # Coloca o menor elemento encontrado no início da sub-lista\n            # Para isso, troca de lugar os elementos nas posições i e posicao_do_minimo\n            \n            lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i]\n    \n    def bolha(self, lista):\n        \n        fim = len(lista)\n        \n        for i in 
range(fim - 1, 0, -1):\n            for j in range(i):\n                if lista[j] > lista[j + 1]:\n                    lista[j], lista[j + 1] = lista[j + 1], lista[j] \n        \n    def bolha_curta(self, lista):\n        \n        fim = len(lista)\n        \n        for i in range(fim - 1, 0, -1):\n            trocou = False\n            for j in range(i):\n                if lista[j] > lista[j + 1]:\n                    lista[j], lista[j + 1] = lista[j + 1], lista[j] \n                    trocou = True\n            if not trocou: # que é igual a if trocou == False\n                return \n\nimport random\nimport time\n\nclass ContaTempos:\n    \n    def lista_aleatoria(self, n): # n = número de elementos da lista\n        lista = [random.randrange(1000) for x in range(n)] # lista com n elementos aleatórios de 0 a 999\n        return lista\n    \n    def lista_quase_ordenada(self, n):\n        lista = [x for x in range(n)] # lista ordenada\n        lista[n//10] = -500 # colocou o -500 no primeiro décimo da lista\n        return lista \n    \n    def compara(self, n):\n        \n        lista1 = self.lista_aleatoria(n)\n        lista2 = lista1[:] # cópias independentes: cada algoritmo deve ordenar a mesma sequência original\n        lista3 = lista1[:]\n        \n        o = Ordenador()\n        \n        print(\"Comparando listas aleatórias\")\n        \n        antes = time.time()\n        o.bolha(lista1)\n        depois = time.time()\n        print(\"Bolha demorou\", depois - antes, \"segundos\")\n        \n        antes = time.time()\n        o.selecao_direta(lista2)\n        depois = time.time()\n        print(\"Seleção direta demorou\", depois - antes, \"segundos\")\n        \n        antes = time.time()\n        o.bolha_curta(lista3)\n        depois = time.time()\n        print(\"Bolha otimizada\", depois - antes, \"segundos\")\n\n        print(\"\\nComparando listas quase ordenadas\")\n        \n        lista1 = self.lista_quase_ordenada(n)\n        lista2 = lista1[:] # cópias independentes, pelo mesmo motivo acima\n        lista3 = lista1[:]\n        \n        antes = time.time()\n        o.bolha(lista1)\n        depois = time.time()\n        print(\"Bolha demorou\", depois - antes, \"segundos\")\n        \n        antes = time.time()\n        o.selecao_direta(lista2)\n        depois = time.time()\n        print(\"Seleção direta demorou\", depois - antes, \"segundos\")\n        \n        antes = time.time()\n        o.bolha_curta(lista3)\n        depois = time.time()\n        print(\"Bolha otimizada\", depois - antes, \"segundos\")\n\n\nc = 
ContaTempos()\nc.compara(1000)\n\nc.compara(5000)", "Site com algoritmos de ordenação http://nicholasandre.com.br/sorting/\nTestes automatizados dos algoritmos de ordenação", "class Ordenador:\n \n def selecao_direta(self, lista):\n \n fim = len(lista)\n \n for i in range(fim - 1):\n posicao_do_minimo = i\n \n for j in range(i + 1, fim):\n if lista[j] < lista[posicao_do_minimo]: \n posicao_do_minimo = j \n lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i]\n \n def bolha(self, lista):\n \n fim = len(lista)\n \n for i in range(fim - 1, 0, -1):\n for j in range(i):\n if lista[j] > lista[j + 1]:\n lista[j], lista[j + 1] = lista[j + 1], lista[j] \n \n def bolha_curta(self, lista):\n \n fim = len(lista)\n \n for i in range(fim - 1, 0, -1):\n trocou = False\n for j in range(i):\n if lista[j] > lista[j + 1]:\n lista[j], lista[j + 1] = lista[j + 1], lista[j] \n trocou = True\n if not trocou:\n return \n\nimport random\nimport time\n\nclass ContaTempos:\n \n def lista_aleatoria(self, n): \n from random import randrange\n lista = [random.randrange(1000) for x in range(n)] \n return lista\n \n def lista_quase_ordenada(self, n):\n lista = [x for x in range(n)] \n lista[n//10] = -500 \n return lista \n \nimport pytest\n\nclass TestaOrdenador:\n \n @pytest.fixture\n def o(self):\n return Ordenador()\n \n @pytest.fixture\n def l_quase(self):\n c = ContaTempos()\n return c.lista_quase_ordenada(100)\n \n @pytest.fixture\n def l_aleatoria(self):\n c = ContaTempos()\n return c.lista_aleatoria(100)\n \n def esta_ordenada(self, l):\n for i in range(len(l) - 1):\n if l[i] > l[i+1]:\n return False\n return True \n \n def test_bolha_curta_aleatoria(self, o, l_aleatoria):\n o.bolha_curta(l_aleatoria)\n assert self.esta_ordenada(l_aleatoria)\n\n def test_selecao_direta_aleatoria(self, o, l_aleatoria):\n o.selecao_direta(l_aleatoria)\n assert self.esta_ordenada(l_aleatoria) \n \n def test_bolha_curta_quase(self, o, l_quase):\n o.bolha_curta(l_quase)\n assert 
self.esta_ordenada(l_quase)\n\n def test_selecao_direta_quase(self, o, l_quase):\n o.selecao_direta(l_quase)\n assert self.esta_ordenada(l_quase) \n\n[5, 2, 1, 3, 4]\n\n2 5 1 3 4\n2 1 5 3 4\n2 1 3 5 4\n2 1 3 4 5\n\n[2, 3, 4, 5, 1]\n\n2 3 4 1 5\n2 3 1 4 5\n2 1 3 4 5\n1 2 3 4 5\n", "Busca Binária\nObjetivo: localizar o elemento x em uma lista\n\nConsidere o elemento m do meio da lista\nse x == m ==> encontrou!\nse x < m ==> procure apenas na 1ª metade (da esquerda)\nse x > m ==> procure apenas na 2ª metade (da direita),\nrepetir o processo até que o x seja encontrado ou que a sub-lista em questão esteja vazia", "class Buscador:\n \n def busca_por_titulo(self, playlist, titulo):\n for i in range(len(playlist)):\n if playlist[i].titulo == titulo:\n return i\n return -1\n \n def busca_binaria(self, lista, x):\n primeiro = 0\n ultimo = len(lista) - 1\n \n while primeiro <= ultimo:\n meio = (primeiro + ultimo) // 2\n if lista[meio] == x:\n return meio\n else:\n if x < lista[meio]: # busca na primeira metade da lista\n ultimo = meio - 1 # já foi visto que não está no elemento meio, então vai um a menos\n else: \n primeiro = meio + 1\n return -1\n\nlista = [-100, 0, 20, 30, 50, 100, 3000, 5000]\nb = Buscador()\nb.busca_binaria(lista, 30)", "Complexidade da Busca Binária\nDado uma lista de n elementos\nNo pior caso, teremos que efetuar: \n$$log_2n$$ comparações\nNo exemplo da lista telefônica (com 2 milhões de números):\n$$log_2(2 milhões) = 20,9$$\nPortanto: resposta em menos de 21 milissegundos!\nConclusão\n\n\nBusca Binária é um algoritmo bastante eficiente\n\n\nAo estudar a eficiência de um algoritmo é interessante: \n\nAnalisar a complexidade computacional\nRealizar experimentos medindo o desempenho\n\n\n\nTarefa de programação: Lista de exercícios - 5\nExercício 1: Busca binária\nImplemente a função busca(lista, elemento), que busca um determinado elemento em uma lista e devolve o índice correspondente à posição do elemento encontrado. 
Utilize o algoritmo de busca binária. Nos casos em que o elemento buscado não existir na lista, a função deve devolver o booleano False.\nAlém de devolver o índice correspondente à posição do elemento encontrado, sua função deve imprimir cada um dos índices testados pelo algoritmo.\nExemplo:\nbusca(['a', 'e', 'i'], 'e')\n1\ndeve devolver => 1\nbusca([1, 2, 3, 4, 5], 6)\n2\n3\n4\ndeve devolver => False\nbusca([1, 2, 3, 4, 5, 6], 4)\n2\n4\n3\ndeve devolver => 3", "def busca(lista, elemento):\n \n primeiro = 0\n ultimo = len(lista) - 1\n\n while primeiro <= ultimo:\n meio = (primeiro + ultimo) // 2\n if lista[meio] == elemento:\n print(meio)\n return meio\n else:\n if elemento < lista[meio]: # busca na primeira metade da lista\n ultimo = meio - 1 # já foi visto que não está no elemento meio, então vai um a menos\n print(meio) # função deve imprimir cada um dos índices testados pelo algoritmo.\n else: \n primeiro = meio + 1\n print(meio)\n return False\n\nbusca(['a', 'e', 'i'], 'e')\n\nbusca([1, 2, 3, 4, 5], 6)\n\nbusca([1, 2, 3, 4, 5, 6], 4)", "Exercício 2: Ordenação com bubble sort\nImplemente a função bubble_sort(lista), que recebe uma lista com números inteiros como parâmetro e devolve esta lista ordenada. Utilize o algoritmo bubble sort.\nAlém de devolver uma lista ordenada, sua função deve imprimir os resultados parciais da ordenação ao fim de cada iteração do algoritmo ao longo da lista. Observe que, como a última iteração do algoritmo apenas verifica que a lista está ordenada, o último resultado deve ser impresso duas vezes. 
Portanto, se seu algoritmo precisa de duas passagens para ordenar a lista, e uma terceira para verificar que a lista está ordenada, 3 resultados parciais devem ser impressos.\nbubble_sort([5, 1, 4, 2, 8])\n[1, 4, 2, 5, 8]\n[1, 2, 4, 5, 8]\n[1, 2, 4, 5, 8]\ndeve devolver [1, 2, 4, 5, 8]", "def bubble_sort(lista):\n    \n    fim = len(lista)\n    trocou = True\n\n    while trocou: # repete até uma passagem inteira sem trocas (a passagem que apenas verifica)\n        trocou = False\n        for j in range(fim - 1):\n            if lista[j] > lista[j + 1]:\n                lista[j], lista[j + 1] = lista[j + 1], lista[j]\n                trocou = True\n        print(lista) # resultado parcial ao fim de cada passagem\n    \n    return lista\n\nbubble_sort([5, 1, 4, 2, 8])\n\n#[1, 4, 2, 5, 8]\n#[1, 2, 4, 5, 8]\n#[1, 2, 4, 5, 8]\n#deve devolver [1, 2, 4, 5, 8]\n\nbubble_sort([1, 3, 4, 2, 0, 5])\n\n#Esperado:\n\n#[1, 3, 2, 0, 4, 5]\n#[1, 2, 0, 3, 4, 5]\n#[1, 0, 2, 3, 4, 5]\n#[0, 1, 2, 3, 4, 5]\n#[0, 1, 2, 3, 4, 5]", "Praticar tarefa de programação: Exercício adicional (opcional)\nExercício 1: Ordenação com insertion sort\nImplemente a função insertion_sort(lista), que recebe uma lista com números inteiros como parâmetro e devolve esta lista ordenada. Utilize o algoritmo insertion sort.", "def insertion_sort(lista):\n\n    for i in range(1, len(lista)):\n        valor = lista[i] # elemento a inserir na parte já ordenada lista[:i]\n        j = i - 1\n        while j >= 0 and lista[j] > valor:\n            lista[j + 1] = lista[j] # desloca para a direita os elementos maiores que valor\n            j -= 1\n        lista[j + 1] = valor\n    \n    return lista", "Week 6\nRecursão (Definição. Como resolver um problema recursivo. Exemplos. 
Implementations.)", "def fatorial(n):\n if n <= 1: # base case of the recursion\n return 1\n else:\n return n * fatorial(n - 1) # recursive call\n\nimport pytest\n\n@pytest.mark.parametrize(\"entrada, esperado\", [\n (0, 1),\n (1, 1), \n (2, 2), \n (3, 6),\n (4, 24), \n (5, 120) \n])\n\ndef testa_fatorial(entrada, esperado):\n assert fatorial(entrada) == esperado\n\n# fibonacci.py\n\n# Fn = 0 if n = 0\n# Fn = 1 if n = 1\n# Fn = Fn-1 + Fn-2 if n > 1\n\ndef fibonacci(n):\n if n < 2: \n return n\n else:\n return fibonacci(n - 1) + fibonacci(n - 2)\n\nimport pytest\n\n@pytest.mark.parametrize(\"entrada, esperado\", [\n (0, 0),\n (1, 1), \n (2, 1), \n (3, 2),\n (4, 3), \n (5, 5),\n (6, 8),\n (7, 13)\n])\n\ndef testa_fibonacci(entrada, esperado):\n assert fibonacci(entrada) == esperado\n\n# binary search\n\ndef busca_binaria(lista, elemento, min = 0, max = None):\n \n if max == None: # if nothing is passed in, the maximum defaults to the last index of the list\n max = len(lista) - 1\n \n if max < min: # the element was not found\n return False\n else:\n meio = min + (max - min) // 2\n \n if lista[meio] > elemento:\n return busca_binaria(lista, elemento, min, meio - 1)\n \n elif lista[meio] < elemento:\n return busca_binaria(lista, elemento, meio + 1, max)\n \n else:\n return meio \n\na = [10, 20, 30, 40, 50, 60]\n\nimport pytest\n\n@pytest.mark.parametrize(\"lista, valor, esperado\", [\n (a, 10, 0),\n (a, 20, 1),\n (a, 30, 2),\n (a, 40, 3),\n (a, 50, 4),\n (a, 60, 5),\n (a, 70, False),\n (a, 15, False),\n (a, -10, False)\n])\n\ndef testa_busca_binaria(lista, valor, esperado):\n assert busca_binaria(lista, valor) == 
esperado", "Mergesort\nSorting by merging:\n\n\nRecursively split the list in half, until each sublist contains only 1 element (and is therefore already sorted).\n\n\nRepeatedly merge the sublists to produce new sorted lists.\n\n\nRepeat until only 1 list is left at the end (which will be sorted).\n\n\nEx: \n6 5 3 1 8 7 2 4\n5 6&nbsp;&nbsp;&nbsp;&nbsp;1 3&nbsp;&nbsp;&nbsp;&nbsp;7 8&nbsp;&nbsp;&nbsp;&nbsp;2 4 \n1 3 5 6&nbsp;&nbsp;&nbsp;&nbsp;2 4 7 8\n1 2 3 4 5 6 7 8", "def merge_sort(lista):\n if len(lista) <= 1:\n return lista\n \n meio = len(lista) // 2\n \n lado_esquerdo = merge_sort(lista[:meio])\n lado_direito = merge_sort(lista[meio:])\n \n return merge(lado_esquerdo, lado_direito) # merge the two halves\n\ndef merge(lado_esquerdo, lado_direito):\n if not lado_esquerdo: # if the left side is an empty list...\n return lado_direito\n \n if not lado_direito: # if the right side is an empty list...\n return lado_esquerdo\n \n if lado_esquerdo[0] < lado_direito[0]: # compare the first element of the left side with the first element of the right side\n return [lado_esquerdo[0]] + merge(lado_esquerdo[1:], lado_direito) # merge(lado_esquerdo[1:]) ==> takes the left side minus its first element\n \n return [lado_direito[0]] + merge(lado_esquerdo, lado_direito[1:])", "The base case of the recursion is the condition that makes the problem definitively solved. If that condition, the base case, is not satisfied, the problem keeps being reduced to smaller instances until the condition is satisfied.\nThe recursive call is the line where the function makes a call to itself.\nA recursive function is a function that calls itself.\n\n\n\nLine 2 has the condition that is the base case of the recursion\nLine 5 has the recursive call\nFor the algorithm to work correctly, line 3 must be replaced by “return 1”\n\n\n\nif (n < 2):\nif (n <= 1):\n\n\n\nIn <espaço A> and in <espaço C>\n\n\ninfinite loop\n\n\n\nResult: 6. 
Recursive calls: none.\n\n\n\nResult: 20. Recursive calls: 24\n\n\n\n1", "def x(n):\n if n == 0:\n #<espaço A>\n print(n)\n else:\n #<espaço B>\n x(n-1)\n print(n)\n #<espaço C>\n #<espaço D>\n#<espaço E>\n\nx(10)\n\ndef x(n):\n if n >= 0 or n <= 2:\n print(n)\n # return n\n else:\n print(n-1)\n print(n-2)\n print(n-3)\n #return x(n-1) + x(n-2) + x(n-3)\n\nprint(x(6))\n\ndef busca_binaria(lista, elemento, min=0, max=None):\n if max == None:\n max = len(lista)-1\n\n if max < min:\n return False\n else:\n meio = min + (max-min)//2\n print(lista[meio])\n \n if lista[meio] > elemento:\n return busca_binaria(lista, elemento, min, meio - 1)\n elif lista[meio] < elemento:\n return busca_binaria(lista, elemento, meio + 1, max)\n else:\n return meio\n\na = [-10, -2, 0, 5, 66, 77, 99, 102, 239, 567, 875, 934]\n\na\n\nbusca_binaria(a, 99)", "Programming assignment: Exercise list 6\nExercise 1: Sum of the elements of a list\nImplement the function soma_lista(lista), which receives as a parameter a list of integers and returns an integer corresponding to the sum of the elements of this list.\nYour solution must be implemented using recursion.", "def soma_lista_tradicional_way(lista):\n \n soma = 0\n \n for i in range(len(lista)):\n soma += lista[i]\n \n return soma\n\na = [-10, -2, 0, 5, 66, 77, 99, 102, 239, 567, 875, 934]\nsoma_lista_tradicional_way(a)\n\nb = [-10, -2, 0, 5]\nsoma_lista_tradicional_way(b)\n\ndef soma_lista(lista):\n \n if len(lista) == 0: # base case; also makes the empty list return 0 instead of raising IndexError\n return 0\n else:\n return lista[0] + soma_lista(lista[1:])\n\na = [-10, -2, 0, 5, 66, 77, 99, 102, 239, 567, 875, 934]\nsoma_lista(a) # returns 2952\n\nb = [-10, -2, 0, 5]\nsoma_lista(b)", "Exercise 2: Finding odd numbers in a list\nImplement the function encontra_impares(lista), which receives as a parameter a list of integers and returns another list containing only the odd numbers from the given list.\nYour solution must be implemented using recursion.\nHint: you will need 
the extend() method that lists provide.", "def encontra_impares_tradicional_way(lista):\n \n lista_impares = []\n \n for i in lista:\n if i % 2 != 0: # it is odd!\n lista_impares.append(i)\n\n return lista_impares \n\na = [5, 66, 77, 99, 102, 239, 567, 875, 934]\nencontra_impares_tradicional_way(a)\n\nb = [2, 5, 34, 66, 100, 102, 999]\nencontra_impares_tradicional_way(b)\n\nstack = ['a','b']\nstack.extend(['g','h'])\nstack\n\ndef encontra_impares(lista):\n \n if len(lista) == 0:\n return []\n if lista[0] % 2 != 0: # if the element is odd\n return [lista[0]] + encontra_impares(lista[1:])\n else:\n return encontra_impares(lista[1:])\n\na = [5, 66, 77, 99, 102, 239, 567, 875, 934]\nencontra_impares(a)\n\nencontra_impares([5])\n\nencontra_impares([1, 2, 3])\n\nencontra_impares([2, 4, 6, 8])\n\nencontra_impares([9])\n\nencontra_impares([4, 11])\n\nencontra_impares([2, 10, 20, 7, 30, 12, 6, 6])\n\nencontra_impares([])\n\nencontra_impares([4, 331, 1001, 4])", "Exercise 3: Elephants\nThis exercise has two parts:\nImplement the function incomodam(n), which returns a string containing \"incomodam \" (the word followed by a space) n times. If n is not a strictly positive integer, the function must return an empty string. This function must be implemented using recursion.\nUsing the function above, implement the function elefantes(n), which returns a string containing the lyrics of \"Um elefante incomoda muita gente...\" from 1 up to n elephants. If n is not greater than 1, the function must return an empty string. This function must also be implemented using recursion.\nNote that, for one elephant, you must write it out in full and in the singular (\"Um elefante...\"); for the others, use digits and the plural (\"2 elefantes...\").\nHint: remember that it is possible to join strings with the \"+\" operator. 
Remember as well that it is possible to turn numbers into strings with the str() function.\nHint: could the base case of the recursion be different from n==1 in this case?\nFor example, a call to elefantes(4) must return a string containing:\nUm elefante incomoda muita gente\n2 elefantes incomodam incomodam muito mais\n2 elefantes incomodam incomodam muita gente\n3 elefantes incomodam incomodam incomodam muito mais\n3 elefantes incomodam incomodam incomodam muita gente\n4 elefantes incomodam incomodam incomodam incomodam muito mais", "def incomodam(n):\n \n if type(n) != int or n <= 0:\n return ''\n else:\n s1 = 'incomodam '\n return s1 + incomodam(n - 1)\n\nincomodam('-1')\n\nincomodam(2)\n\nincomodam(3)\n\nincomodam(8)\n\nincomodam(-3)\n\nincomodam(1)\n\nincomodam(7)\n\ndef incomodam(n):\n \n if type(n) != int or n <= 0:\n return ''\n else:\n s1 = 'incomodam '\n return s1 + incomodam(n - 1)\n\ndef elefantes(n):\n \n # base case: per the statement, n not greater than 1 returns an empty string,\n # so the recursion bottoms out at n == 2, which carries the first two verses\n if type(n) != int or n <= 1:\n return ''\n if n == 2:\n return 'Um elefante incomoda muita gente\\n2 elefantes ' + incomodam(2) + 'muito mais\\n'\n else:\n return elefantes(n - 1) + str(n - 1) + ' elefantes ' + incomodam(n - 1) + 'muita gente\\n' + str(n) + ' elefantes ' + incomodam(n) + 'muito mais\\n'\n\nelefantes(1)\n\nprint(elefantes(3))\n\nelefantes(2)\n\nelefantes(3)\n\nprint(elefantes(4))\n\ntype(str(3))", "Practice programming assignment: Additional exercises (optional)\nExercise 1: Fibonacci\nImplement the function fibonacci(n), which receives an integer as a parameter and returns an integer corresponding to the n-th element of the 
Fibonacci sequence. Your solution must be implemented using recursion.\nExample:\nfibonacci(4)\nshould return => 3\nfibonacci(2)\nshould return => 1", "def fib(n): # prints the Fibonacci series up to n\n a, b = 0, 1\n while b < n:\n print(b, end = ' ')\n a, b = b, a + b\n print('\\n\\nThe last term is:', a)\n\nfib(10)\n\ndef fib2(n):\n result = []\n a, b = 0, 1\n while b < n:\n result.append(b)\n a, b = b, a + b\n return result\n\nfib2(60)\n\n## Example 2: Using recursion\ndef fibR(n):\n if n==1 or n==2:\n return 1\n return fibR(n-1)+fibR(n-2)\n\nprint (fibR(4))\n\ndef F(n):\n if n == 0: \n return 0\n elif n == 1: \n return 1\n else: \n return F(n-1)+F(n-2)\n\nF(2)\n\ndef fibonacci(n):\n if n == 0: \n return 0\n elif n == 1: \n return 1\n else: \n return fibonacci(n - 1) + fibonacci(n - 2) \n\nfibonacci(4)\n\nfibonacci(2)", "Exercise 2: Factorial\nImplement the function fatorial(x), which receives an integer as a parameter and returns an integer corresponding to the factorial of x.\nYour solution must be implemented using recursion.", "def fatorial(x):\n \n if x == 0 or x == 1:\n return 1\n else:\n return x * fatorial(x - 1) \n\nfatorial(4)\n\nfatorial(5)\n\nfatorial(3)", "Scrapy - Automating extraction of information from the Web" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
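A side note on the recursion exercises in the notebook above (this is an editorial addition, not part of the original assignment): the naive recursive `fibonacci` recomputes the same subproblems exponentially many times. A minimal memoized sketch — the name `fibonacci_memo` and the cache argument are illustrative, not from the course — keeps the recursive shape while making large inputs feasible:

```python
def fibonacci_memo(n, cache=None):
    # cache maps already-computed n to its Fibonacci value
    if cache is None:
        cache = {}
    if n < 2:
        return n
    if n not in cache:
        cache[n] = fibonacci_memo(n - 1, cache) + fibonacci_memo(n - 2, cache)
    return cache[n]

print(fibonacci_memo(7))   # 13, same answer as the naive version
print(fibonacci_memo(50))  # 12586269025, which naive recursion could not reach quickly
```

Each subproblem is now solved once, so the runtime drops from exponential to linear in n.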
kit-cel/wt
mloc/ch4_Autoencoders/Autoencoder_Compression_Binarizer_Sweep.ipynb
gpl-2.0
[ "Image Compression using Autoencoders with BPSK\nThis code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>\nThis code illustrates\n* joint compression and error protection of images by auto-encoders\n* generation of BPSK symbols using stochastic quantizers\n* transmission over a binary symmetric channel (BSC)", "import torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport torchvision\nimport numpy as np\nfrom matplotlib import pyplot as plt\n\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\nprint(\"We are using the following device for learning:\",device)", "Import and load MNIST dataset (Preprocessing)\nDataloaders are powerful tools that help you prepare your data: e.g., you can shuffle your data, transform it (standardize/normalize), divide it into batches, ... For more information see https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader <br>\nIn our case, we just use the dataloader to download the dataset and preprocess the data on our own.", "batch_size_train = 60000 # Samples per Training Batch\nbatch_size_test = 10000 # just create one large test dataset (the MNIST test dataset has 10,000 samples)\n\n# Get Training and Test Dataset with a Dataloader\ntrain_loader = torch.utils.data.DataLoader(\n torchvision.datasets.MNIST('./files/', train=True, download=True,\n transform=torchvision.transforms.Compose([torchvision.transforms.ToTensor()])),\n batch_size=batch_size_train, shuffle=True)\n\ntest_loader = torch.utils.data.DataLoader(\n torchvision.datasets.MNIST('./files/', train=False, download=True,\n transform=torchvision.transforms.Compose([torchvision.transforms.ToTensor()])),\n batch_size=batch_size_test, shuffle=True)\n\n# We are only interested in the data and not in the targets\nfor idx, (data, targets) in enumerate(train_loader):\n x_train = data[:,0,:,:]\n\nfor idx, (data, targets) in enumerate(test_loader):\n x_test = 
data[:,0,:,:]\n\nimage_size = x_train.shape[1]\nx_test_flat = torch.reshape(x_test, (x_test.shape[0], image_size*image_size))", "Plot 8 random images", "plt.figure(figsize=(16,2))\nfor k in range(8):\n plt.subplot(1,8,k+1)\n plt.imshow(x_train[np.random.randint(x_train.shape[0])], interpolation='nearest', cmap='binary')\n plt.xticks(())\n plt.yticks(())", "Specify Autoencoder\nAs explained in the lecture, we are using Stochastic Quantization. This means that for the training process (def forward), we employ stochastic quantization in the forward path but, during back-propagation, we consider the quantization device as being non-existent (.detach()). While validating and testing, we use deterministic quantization (def test) <br>\nNote: .detach() removes the tensor from the computation graph", "hidden_encoder_1 = 500\nhidden_encoder_2 = 250\nhidden_encoder_3 = 100\nhidden_encoder = [hidden_encoder_1, hidden_encoder_2, hidden_encoder_3]\n\nhidden_decoder_1 = 100\nhidden_decoder_2 = 250\nhidden_decoder_3 = 500\nhidden_decoder = [hidden_decoder_1, hidden_decoder_2, hidden_decoder_3]\n\n\nclass Autoencoder(nn.Module):\n def __init__(self, hidden_encoder, hidden_decoder, image_size, bit_per_image):\n super(Autoencoder, self).__init__()\n \n # Encoder layers: map the flattened image down to bit_per_image soft bits\n self.We1 = nn.Linear(image_size*image_size, hidden_encoder[0]) \n self.We2 = nn.Linear(hidden_encoder[0], hidden_encoder[1]) \n self.We3 = nn.Linear(hidden_encoder[1], hidden_encoder[2]) \n self.We4 = nn.Linear(hidden_encoder[2], bit_per_image) \n \n # Decoder layers: map the received bits back to a flattened image\n self.Wd1 = nn.Linear(bit_per_image,hidden_decoder[0]) \n self.Wd2 = nn.Linear(hidden_decoder[0], hidden_decoder[1]) \n self.Wd3 = nn.Linear(hidden_decoder[1], hidden_decoder[2]) \n self.Wd4 = nn.Linear(hidden_decoder[2], image_size*image_size) \n\n # Non-linearity (used in
encoder and decoder)\n self.activation_function = nn.ELU() \n self.sigmoid = nn.Sigmoid()\n self.softsign = nn.Softsign()\n\n def forward(self, training_data, Pe):\n encoded = self.encoder(training_data)\n # straight-through estimator: stochastic binarization in the forward pass, identity in the backward pass\n ti = encoded.clone()\n compressed = ti + (self.binarizer(ti) - ti).detach()\n # add error pattern (flip the bit or not)\n error_tensor = torch.distributions.Bernoulli(Pe * torch.ones_like(compressed)).sample() \n received = torch.mul( compressed, 1 - 2*error_tensor)\n \n reconstructed = self.decoder(received)\n return reconstructed\n \n def test(self, valid_data, Pe):\n encoded_test = self.encoder(valid_data)\n compressed_test = self.binarizer_deterministic(encoded_test)\n error_tensor_test = torch.distributions.Bernoulli(Pe * torch.ones_like(compressed_test)).sample()\n received_test = torch.mul( compressed_test, 1 - 2*error_tensor_test )\n reconstructed_test = self.decoder(received_test)\n loss_test = torch.mean(torch.square(valid_data - reconstructed_test))\n\n reconstructed_test_noerror = self.decoder(compressed_test)\n return reconstructed_test\n \n def encoder(self, batch):\n temp = self.activation_function(self.We1(batch))\n temp = self.activation_function(self.We2(temp))\n temp = self.activation_function(self.We3(temp))\n output = self.softsign(self.We4(temp))\n return output\n \n def decoder(self, batch):\n temp = self.activation_function(self.Wd1(batch))\n temp = self.activation_function(self.Wd2(temp))\n temp = self.activation_function(self.Wd3(temp))\n output = self.sigmoid(self.Wd4(temp))\n return output\n \n def binarizer(self, input):\n # This is the stochastic quantizer which we use for the training\n prob = torch.div(torch.add(input, 1.0), 2.0)\n bernoulli = torch.distributions.Bernoulli(prob) # torch.distributions.bernoulli.\n # bernoulli = tf.distributions.Bernoulli(probs=prob, dtype=tf.float32)\n return 2*bernoulli.sample() - 1\n\n def binarizer_deterministic(self, input):\n # This is the deterministic quantizer, which we use for validation and testing
\n return torch.sign(input)", "Helper function to get a random mini-batch of images", "def get_batch(x, batch_size):\n idxs = np.random.randint(0, x.shape[0], (batch_size))\n return torch.stack([torch.reshape(x[k], (-1,)) for k in idxs])", "Perform the training", "batch_size = 250\nPe_range = np.array([0, 0.01, 0.1, 0.2])\nbit_range = np.array([5, 10, 20, 30, 40, 50, 60, 70, 80, 100])\n\nSNR_result = np.zeros( (len(Pe_range), len(bit_range)) )\n\n# Mean Squared Error loss\nloss_fn = nn.MSELoss()\n\n\n\nfor i in range(len(Pe_range)):\n for j in range(len(bit_range)):\n best_SNR = -9999\n print('Initializing ....')\n \n model = Autoencoder(hidden_encoder, hidden_decoder, image_size, bit_range[j])\n model.to(device)\n\n\n # Adam Optimizer\n optimizer = optim.Adam(model.parameters()) \n \n print('Start Training') # Training loop\n\n for it in range(100000): # Original paper does 50k iterations \n mini_batch = torch.Tensor(get_batch(x_train, batch_size)).to(device)\n # Propagate (training) data through the net\n reconstructed = model(mini_batch, Pe_range[i])\n \n # compute loss\n loss = loss_fn(mini_batch, reconstructed)\n\n # compute gradients\n loss.backward()\n\n # Adapt weights\n optimizer.step()\n\n # reset gradients\n optimizer.zero_grad()\n \n # Evaluation with the test data\n if it % 500 == 0:\n reconstructed_test = model.test(x_test_flat.to(device), Pe_range[i])\n noise = torch.mean(torch.square(x_test_flat.to(device) - reconstructed_test))\n SNR = 10.0 * (torch.log(torch.mean(torch.square(x_test_flat.to(device)))) - torch.log(noise)) / np.log(10.0) \n cur_SNR = SNR.detach().cpu().numpy().squeeze()\n if cur_SNR > best_SNR:\n best_SNR = cur_SNR\n \n if it % 10000 == 0: \n print('Pe = %1.2f, bits = %d, It %d: (best SNR: %1.4f dB)' % (Pe_range[i], bit_range[j], it, best_SNR))\n \n SNR_result[i,j] = best_SNR\n print('Finished learning for e = %1.2f, bits = %d.
Best SNR: %1.4f' % (Pe_range[i], bit_range[j], best_SNR))\n \nprint('Training finished')\nnp.savetxt('SNR_result.txt', SNR_result, delimiter=',')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
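A standalone footnote to the autoencoder notebook above (an editorial addition, independent of the PyTorch code): its stochastic binarizer maps a soft value x in [-1, 1] to ±1 with probability (x + 1)/2 of drawing +1, so the quantizer output equals x in expectation and the endpoints ±1 never flip. A minimal NumPy-only sketch of just that quantizer:

```python
import numpy as np

def stochastic_binarize(x, rng):
    # P(+1) = (x + 1) / 2, so x = 1 always yields +1 and x = -1 always yields -1
    prob = (x + 1.0) / 2.0
    return np.where(rng.random(x.shape) < prob, 1.0, -1.0)

rng = np.random.default_rng(0)
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
print(stochastic_binarize(x, rng))  # every entry is +1 or -1; the endpoints are deterministic
```

In the notebook this sampling is done with `torch.distributions.Bernoulli`; the NumPy version only illustrates the probability mapping, not the straight-through gradient trick.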
whitead/numerical_stats
unit_4/hw_2018/Homework_4_Key.ipynb
gpl-3.0
[ "Homework 4\nCHE 116: Numerical Methods and Statistics\n2/8/2018\n\n1. String Formatting (4 Points)\nAnswer these in Python\n\n[1 point] Print $\\pi$ to 4 digits of precision. Use pi from the math module.\n[2 points] Print the square root of the first 5 integers starting from 1 with 4 digits of precision. Print the terms with one per line and have them all take up exactly 5 spaces. 2 bonus points if you use a for loop.\n[1 point] Show how to print the same variable twice using the variable index part of the : operator in a string.", "#1.1\nimport math\nprint('{:.4}'.format(math.pi))\n\n#1.2 - no for loop\nprint('{:5.4}'.format(math.sqrt(1)))\nprint('{:5.4}'.format(math.sqrt(2)))\nprint('{:5.4}'.format(math.sqrt(3)))\nprint('{:5.4}'.format(math.sqrt(4)))\nprint('{:5.4}'.format(math.sqrt(5)))\n\n#1.2 - for loop\nfor i in range(1, 6):\n print('{:5.4}'.format(math.sqrt(i)))\n\n#1.3\nprint('{0:} {0:}'.format('test'))", "2. Representing Numbers (5 Points)\nAnswer these symbolically\n\n[1 point] What is the mantissa and what is the exponent of $0.3422 \\times 10^{-4}$?\n[1 point] What is the largest representable number with 10 bits?\n[1 point] In a number system of base 8, how many numbers can be represented with 4 digit places?\n[2 points] How do we do boolean expressions with floating numbers? Is it valid to compare floating numbers at all?\n\n2.1\nMantissa: 0.3422; exponent: $-4$\n2.2\n$$\n2^{10} - 1 = 1023\n$$\n2.3\n$$\n8^4 = 4096\n$$\n2.4\nFloating point comparisons are fine with inequalities (&lt;, &gt;), but not equalities (==)\n3. 
Booleans (20 Points)\n\n[4 points] Write Python code that will print 'greater than 10' when the variable z is greater than 10.\n[4 points] Write Python code that will print 'special' if the variable z is less than -25 but greater than -35.\n[2 points] Use the % operator in Python, which computes the remainder of the left number after being divided by the right number, to compute the remainder of 12 divided by 5.\n[4 points] Use the % operator in a boolean expression to print 'even' if the variable z is even.\n[6 points] Write code that prints out if the variable z is positive or negative, even or odd, and greater in magnitude than 10.", "#3.1\nz = -27\nif z > 10:\n print('greater than 10')\n\n#3.2\nif z < -25 and z > -35:\n print('special')\n\n#3.3\n12 % 5\n\n#3.4\nif z % 2 == 0:\n print('even')\n\n#3.5\nif z % 2 == 0:\n print('even')\nelse:\n print('odd')\n\nif z > 0:\n print('positive')\nelif z < 0:\n print('negative')\n\nif abs(z) > 10:\n print('greater in magnitude than 10')\n", "4. Lists and Slicing (11 Points)\nAnswer these questions in Python. Print your result.\n\n[2 points] Create a list of all positive even integers less than 20 using the range function.\n[1 point] Using the list created above and a slice, print out the third-largest even integer less than 20.\n[2 points] Print out the first 3 integers from your list in question 4.1\n[2 points] Create a list of the integers 1 to 10 and then, using the assignment operator (=), override the last element to be 100. \n[4 points] Create a list of the integers 1 to 4 and then use a for loop to print out $2^i$, where $i$ is each element of your list.", "#4.1\nx = list(range(2,20,2))\nx\n\n#4.2\nx[-3]\n\n#4.3\nx[:3]\n\n#4.4\nx = list(range(1,11))\nx[-1] = 100\nx\n\n#4.5\nx = list(range(1,5))\nfor thing in x:\n print(2**thing)", "5. Numpy (12 Points)\nAnswer these questions in Python\nWhen specifying an interval, parentheses indicate an exclusive end-point and brackets indicate an inclusive end-point. 
So $\\left(0,1\\right)$ means from 0 to 1, not including 0 or 1. $\\left[0, 1\\right)$ means from 0 to 1 including 0 but not including 1. \n\n[2 points] Create an array of numbers that goes from $\\left[1, 10\\right)$ in increments of 0.25 using the arange function.\n[2 points] Create an array of numbers that goes from $\\left[1, 10\\right)$ in increments of 0.25 using the linspace function.\n[2 points] Create an array of 21 numbers that go from $\\left[0,1\\right]$ using the arange function.\n[2 points] Create an array of 21 numbers that go from $\\left[0,1\\right]$ using the linspace function.\n[4 points] Using an array, print out the first 8 powers of 3 (e.g., $3^0$, $3^1$, $3^2$, etc)", "#5.1\nimport numpy as np\nnp.arange(1, 10, 0.25)\n\n#5.2\nnp.linspace(1,9.75,36)\n\n#5.3\nnp.arange(0,1.05, 0.05)\n\n#5.4\nnp.linspace(0,1,21)\n\n#5.5\nx = np.arange(0,8)\nprint(3**x)", "6. Plotting (16 Points)\n*Answer these questions in Python. Label your axes as x-axis and y-axis. *\n\n\n[4 points] Make a plot of the cosine from $-\\pi$ to $0$.\n\n\n[4 points] We will see that the equation $e^{-x^2}$ is important in a few weeks. Make a plot of it from -2 to 2. \n\n\n[8 points] Plot $x$, $x^2$, and $x^3$ from -1 to 1. Label your lines with a legend, add a title, use the seaborn-darkgrid style, and set the figure size as $4\\times3$.", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\n#6.1\nx = np.linspace(-np.pi, 0, 100)\ny = np.cos(x)\nplt.plot(x,y)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.show()\n\n#6.2\nx = np.linspace(-2, 2, 100)\ny = np.exp(-x**2)\nplt.plot(x,y)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.show()\n\n#6.3\nplt.style.use('seaborn-darkgrid')\nplt.figure(figsize=(4,3))\nx = np.linspace(-1, 1, 100)\ny1 = x\ny2 = x**2\ny3 = x**3\n\nplt.plot(x,y1, label='x')\nplt.plot(x,y2, label='x^2')\nplt.plot(x,y3, label='x^3')\nplt.xlabel('x')\nplt.ylabel('y')\nplt.legend()\nplt.title('Nice Title')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
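Tying back to answer 2.4 in the homework above (float comparisons are fine with inequalities but not with ==), a short editorial demonstration of why equality fails and what the standard library offers instead:

```python
import math

a = 0.1 + 0.2
print(a == 0.3)              # False: 0.1 and 0.2 have no exact binary representation
print(math.isclose(a, 0.3))  # True: comparison within a relative tolerance
print(abs(a - 0.3) < 1e-9)   # manual absolute-tolerance check, same idea
```

`math.isclose` also accepts `rel_tol` and `abs_tol` arguments when the default tolerance is not appropriate.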
jaehyuk/maths-with-python
05-classes-oop.ipynb
mit
[ "Classes and Object Oriented Programming\nWe have looked at functions which take input and return output (or do things to the input). However, sometimes it is useful to think about objects first rather than the actions applied to them.\nThink about a polynomial, such as the cubic\n\\begin{equation}\n p(x) = 12 - 14 x + 2 x^3.\n\\end{equation}\nThis is one of the standard forms that we would expect to see for a polynomial. We could imagine representing this in Python using a container containing the coefficients, such as:", "p_normal = (12, -14, 0, 2)", "The order of the polynomial is given by the number of coefficients (minus one), which is given by len(p_normal)-1.\nHowever, there are many other ways it could be written, which are useful in different contexts. For example, we are often interested in the roots of the polynomial, so would want to express it in the form\n\\begin{equation}\n p(x) = 2 (x - 1)(x - 2)(x + 3).\n\\end{equation}\nThis allows us to read off the roots directly. We could imagine representing this in Python using a container containing the roots, such as:", "p_roots = (1, 2, -3)", "combined with a single variable containing the leading term,", "p_leading_term = 2", "We see that the order of the polynomial is given by the number of roots (and hence by len(p_roots)). This form represents the same polynomial but requires two pieces of information (the roots and the leading coefficient).\nThe different forms are useful for different things. For example, if we want to add two polynomials the standard form makes it straightforward, but the factored form does not. Conversely, multiplying polynomials in the factored form is easy, whilst in the standard form it is not.\nBut the key point is that the object - the polynomial - is the same: the representation may appear different, but it's the object itself that we really care about. 
So we want to represent the object in code, and work with that object.\nClasses\nPython, and other languages that include object oriented concepts (which is most modern languages) allow you to define and manipulate your own objects. Here we will define a polynomial object step by step.", "class Polynomial(object):\n explanation = \"I am a polynomial\"\n \n def explain(self):\n print(self.explanation)", "We have defined a class, which is a single object that will represent a polynomial. We use the keyword class in the same way that we use the keyword def when defining a function. The definition line ends with a colon, and all the code defining the object is indented by four spaces.\nThe name of the object - the general class, or type, of the thing that we're defining - is Polynomial. The convention is that class names start with capital letters, but this convention is frequently ignored.\nThe type of object that we are building on appears in brackets after the name of the object. The most basic thing, which is used most often, is the object type as here.\nClass variables are defined in the usual way, but are only visible inside the class. Variables that are set outside of functions, such as explanation above, will be common to all class variables.\nFunctions are defined inside classes in the usual way (using the def keyword, indented by four additional spaces). They work in a special way: they are not called directly, but only when you have a member of the class. This is what the self keyword does: it takes the specific instance of the class and uses its data. Class functions are often called methods.\nLet's see how this works on a specific example:", "p = Polynomial()\nprint(p.explanation)\np.explain()\np.explanation = \"I change the string\"\np.explain()", "The first line, p = Polynomial(), creates an instance of the class. That is, it creates a specific Polynomial. It is assigned to the variable named p. 
We can access class variables using the \"dot\" notation, so the string can be printed via p.explanation. The method that prints the class variable also uses the \"dot\" notation, hence p.explain(). The self variable in the definition of the function is the instance itself, p. This is passed through automatically thanks to the dot notation.\nNote that we can change class variables in specific instances in the usual way (p.explanation = ... above). This only changes the variable for that instance. To check that, let us define two polynomials:", "p = Polynomial()\np.explanation = \"Changed the string again\"\nq = Polynomial()\np.explanation = \"Changed the string a third time\"\np.explain()\nq.explain()", "We can of course make the methods take additional variables. We modify the class (note that we have to completely re-define it each time):", "class Polynomial(object):\n explanation = \"I am a polynomial\"\n \n def explain_to(self, caller):\n print(\"Hello, {}. {}.\".format(caller,self.explanation))", "We then use this, remembering that the self variable is passed through automatically:", "r = Polynomial()\nr.explain_to(\"Alice\")", "At the moment the class is not doing anything interesting. To do something interesting we need to store (and manipulate) relevant variables. The first thing to do is to add those variables when the instance is actually created. We do this by adding a special function (method) which changes how the variables of type Polynomial are created:", "class Polynomial(object):\n \"\"\"Representing a polynomial.\"\"\"\n explanation = \"I am a polynomial\"\n \n def __init__(self, roots, leading_term):\n self.roots = roots\n self.leading_term = leading_term\n self.order = len(roots)\n \n def explain_to(self, caller):\n print(\"Hello, {}. {}.\".format(caller,self.explanation))\n print(\"My roots are {}.\".format(self.roots))", "This __init__ function is called when a variable is created. 
There are a number of special class functions, each of which has two underscores before and after the name. So now we can create a variable that represents a specific polynomial by storing its roots and the leading term:", "p = Polynomial(p_roots, p_leading_term)\np.explain_to(\"Alice\")\nq = Polynomial((1,1,0,-2), -1)\nq.explain_to(\"Bob\")", "Another special function that is very useful is __repr__. This gives a representation of the class. In essence, if you ask Python to print a variable, it will print the string returned by the __repr__ function. We can use this to create a simple string representation of the polynomial:", "class Polynomial(object):\n \"\"\"Representing a polynomial.\"\"\"\n explanation = \"I am a polynomial\"\n \n def __init__(self, roots, leading_term):\n self.roots = roots\n self.leading_term = leading_term\n self.order = len(roots)\n \n def __repr__(self):\n string = str(self.leading_term)\n for root in self.roots:\n if root == 0:\n string = string + \"x\"\n elif root > 0:\n string = string + \"(x - {})\".format(root)\n else:\n string = string + \"(x + {})\".format(-root)\n return string\n \n def explain_to(self, caller):\n print(\"Hello, {}. {}.\".format(caller,self.explanation))\n print(\"My roots are {}.\".format(self.roots))\n\np = Polynomial(p_roots, p_leading_term)\nprint(p)\nq = Polynomial((1,1,0,-2), -1)\nprint(q)", "The final special function we'll look at (although there are many more, many of which may be useful) is __mul__. This allows Python to multiply two variables together. 
With this we can take the product of two polynomials:", "class Polynomial(object):\n \"\"\"Representing a polynomial.\"\"\"\n explanation = \"I am a polynomial\"\n \n def __init__(self, roots, leading_term):\n self.roots = roots\n self.leading_term = leading_term\n self.order = len(roots)\n \n def __repr__(self):\n string = str(self.leading_term)\n for root in self.roots:\n if root == 0:\n string = string + \"x\"\n elif root > 0:\n string = string + \"(x - {})\".format(root)\n else:\n string = string + \"(x + {})\".format(-root)\n return string\n \n def __mul__(self, other):\n roots = self.roots + other.roots\n leading_term = self.leading_term * other.leading_term\n return Polynomial(roots, leading_term)\n \n def explain_to(self, caller):\n print(\"Hello, {}. {}.\".format(caller,self.explanation))\n print(\"My roots are {}.\".format(self.roots))\n\np = Polynomial(p_roots, p_leading_term)\nq = Polynomial((1,1,0,-2), -1)\nr = p*q\nprint(r)", "We now have a simple class that can represent polynomials and multiply them together, whilst printing out a simple string form representing itself. This can obviously be extended to be much more useful.\nInheritance\nAs we can see above, building a complete class from scratch can be lengthy and tedious. If there is another class that does much of what we want, we can build on top of that. This is the idea behind inheritance. \nIn the case of the Polynomial we declared that it started from the object class in the first line defining the class: class Polynomial(object). But we can build on any class, by replacing object with something else. Here we will build on the Polynomial class that we've started with.\nA monomial is a polynomial whose leading term is simply 1. A monomial is a polynomial, and could be represented as such. 
However, we could build a class that knows that the leading term is always 1: there may be cases where we can take advantage of this additional simplicity.\nWe build a new monomial class as follows:", "class Monomial(Polynomial):\n \"\"\"Representing a monomial, which is a polynomial with leading term 1.\"\"\"\n \n def __init__(self, roots):\n self.roots = roots\n self.leading_term = 1\n self.order = len(roots)", "Variables of the Monomial class are also variables of the Polynomial class, so can use all the methods and functions from the Polynomial class automatically:", "m = Monomial((-1, 4, 9))\nm.explain_to(\"Caroline\")\nprint(m)", "We note that these functions, methods and variables may not be exactly right, as they are given for the general Polynomial class, not by the specific Monomial class. If we redefine these functions and variables inside the Monomial class, they will override those defined in the Polynomial class. We do not have to override all the functions and variables, just the parts we want to change:", "class Monomial(Polynomial):\n \"\"\"Representing a monomial, which is a polynomial with leading term 1.\"\"\"\n explanation = \"I am a monomial\"\n \n def __init__(self, roots):\n self.roots = roots\n self.leading_term = 1\n self.order = len(roots)\n \n def __repr__(self):\n string = \"\"\n for root in self.roots:\n if root == 0:\n string = string + \"x\"\n elif root > 0:\n string = string + \"(x - {})\".format(root)\n else:\n string = string + \"(x + {})\".format(-root)\n return string\n\nm = Monomial((-1, 4, 9))\nm.explain_to(\"Caroline\")\nprint(m)", "This has had no effect on the original Polynomial class and variables, which can be used as before:", "s = Polynomial((2, 3), 4)\ns.explain_to(\"David\")\nprint(s)", "And, as Monomial variables are Polynomials, we can multiply them together to get a Polynomial:", "t = m*s\nt.explain_to(\"Erik\")\nprint(t)", "In fact, we can be a bit smarter than this. 
Note that the __init__ function of the Monomial class is identical to that of the Polynomial class, just with the leading_term set explicitly to 1. Rather than duplicating the code and modifying a single value, we can call the __init__ function of the Polynomial class directly. This is because the Monomial class is built on the Polynomial class, so knows about it. We regenerate the class, but only change the __init__ function:", "class Monomial(Polynomial):\n \"\"\"Representing a monomial, which is a polynomial with leading term 1.\"\"\"\n explanation = \"I am a monomial\"\n \n def __init__(self, roots):\n Polynomial.__init__(self, roots, 1)\n \n def __repr__(self):\n string = \"\"\n for root in self.roots:\n if root == 0:\n string = string + \"x\"\n elif root > 0:\n string = string + \"(x - {})\".format(root)\n else:\n string = string + \"(x + {})\".format(-root)\n return string\n\nv = Monomial((2, -3))\nv.explain_to(\"Fred\")\nprint(v)", "We are now being very explicit in saying that a Monomial really is a Polynomial with leading_term being 1. Note, that in this case we are calling the __init__ function directly, so have to explicitly include the self argument.\nBy building on top of classes in this fashion, we can build classes that transparently represent the objects that we are interested in. \nMost modern programming languages include some object oriented features. Many (including Python) will have more complex features than are introduced above. However, the key points where\n\na single variable representing an object can be defined,\nmethods that are specific to those objects can be defined,\nnew classes of object that inherit from and extend other classes can be defined,\n\nare the essential steps that are common across nearly all.\nExercise: Equivalence classes\nAn equivalence class is a relation that groups objects in a set into related subsets. 
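The explicit Polynomial.__init__(self, roots, 1) call shown above is the classic way to delegate to a parent class; in Python 3 the built-in super() does the parent-class lookup for you and passes self through automatically. A minimal, self-contained sketch (the classes here are cut-down stand-ins for the notebook's fuller versions):

```python
class Polynomial(object):
    """Cut-down stand-in for the notebook's Polynomial class."""
    def __init__(self, roots, leading_term):
        self.roots = roots
        self.leading_term = leading_term
        self.order = len(roots)


class Monomial(Polynomial):
    """A monomial is a polynomial whose leading term is 1."""
    def __init__(self, roots):
        # super() locates the parent class (Polynomial) for us,
        # and self is passed through automatically.
        super().__init__(roots, 1)


v = Monomial((2, -3))
print(v.leading_term, v.order)  # 1 2
```

Either spelling is correct; super() simply avoids repeating the parent's name if the class hierarchy later changes.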
For example, if we think of the integers modulo $7$, then $1$ is in the same equivalence class as $8$ (and $15$, and $22$, and so on), and $3$ is in the same equivalence class as $10$. We use the tilde notation, $3 \\sim 10$, to denote two objects within the same equivalence class.\nHere, we are going to define the positive integers programmatically from equivalent sequences.\nExercise 1\nDefine a Python class Eqint. This should be\n\nInitialized by a sequence;\nStore the sequence;\nDefine its representation (via the __repr__ function) to be the integer length of the sequence;\nRedefine equality (via the __eq__ function) so that two Eqints are equal if their sequences have the same length.\n\nExercise 2\nDefine a zero object from the empty list, and three one objects, from a single object list, tuple, and string. For example\npython\none_list = Eqint([1])\none_tuple = Eqint((1,))\none_string = Eqint('1')\nCheck that none of the one objects equal the zero object, but all equal the other one objects. Print each object to check that the representation gives the integer length.\nExercise 3\nRedefine the class by including an __add__ method that combines the two sequences. That is, if a and b are Eqints then a+b should return an Eqint defined from combining a and b's sequences.\nNote\nAdding two different types of sequences (eg, a list to a tuple) does not work, so it is better to either iterate over the sequences, or to convert to a uniform type before adding.\nExercise 4\nCheck your addition function by adding together all your previous Eqint objects (which will need re-defining, as the class has been redefined). Print the resulting object to check you get 3, and also print its internal sequence.\nExercise 5\nWe will sketch a construction of the positive integers from nothing.\n\nDefine an empty list positive_integers.\nDefine an Eqint called zero from the empty list. 
Append it to positive_integers.\nDefine an Eqint called next_integer from the Eqint defined by a copy of positive_integers (ie, use Eqint(list(positive_integers))). Append it to positive_integers.\nRepeat step 3 as often as needed.\n\nUse this procedure to define the Eqint equivalent to $10$. Print it, and its internal sequence, to check.\nExercise: Rational numbers\nInstead of working with floating point numbers, which are not \"exact\", we could work with the rational numbers $\\mathbb{Q}$. A rational number $q \\in \\mathbb{Q}$ is defined by the numerator $n$ and denominator $d$ as $q = \\frac{n}{d}$, where $n$ and $d$ are coprime (ie, have no common divisor other than $1$).\nExercise 1\nFind a Python function that finds the greatest common divisor (gcd) of two numbers. Use this to write a function normal_form that takes a numerator and denominator and returns the coprime $n$ and $d$. Test this function on $q = \\frac{3}{2}$, $q = \\frac{15}{3}$, and $q = \\frac{20}{42}$.\nExercise 2\nDefine a class Rational that uses the normal_form function to store the rational number in the appropriate form. Define a __repr__ function that prints a string that looks like $\\frac{n}{d}$ (hint: use len(str(number)) to find the number of digits of an integer). Test it on the cases above.\nExercise 3\nOverload the __add__ function so that you can add two rational numbers. Test it on $\\frac{1}{2} + \\frac{1}{3} + \\frac{1}{6} = 1$.\nExercise 4\nOverload the __mul__ function so that you can multiply two rational numbers. Test it on $\\frac{1}{3} \\times \\frac{15}{2} \\times \\frac{2}{5} = 1$.\nExercise 5\nOverload the __rmul__ function so that you can multiply a rational by an integer. Check that $\\frac{1}{2} \\times 2 = 1$ and $\\frac{1}{2} + (-1) \\times \\frac{1}{2} = 0$. Also overload the __sub__ function (using previous functions!) 
so that you can subtract rational numbers and check that $\\frac{1}{2} - \\frac{1}{2} = 0$.\nExercise 6\nOverload the __float__ function so that float(q) returns the floating point approximation to the rational number q. Test this on $\\frac{1}{2}, \\frac{1}{3}$, and $\\frac{1}{11}$.\nExercise 7\nOverload the __lt__ function to compare two rational numbers. Create a list of rational numbers where the denominator is $n = 2, \\dots, 11$ and the numerator is the floored integer $n/2$, ie n//2. Use the sorted function on that list (which relies on the __lt__ function).\nExercise 8\nThe Wallis formula for $\\pi$ is\n\\begin{equation}\n \\pi = 2 \\prod_{n=1}^{\\infty} \\frac{ (2 n)^2 }{(2 n - 1) (2 n + 1)}.\n\\end{equation}\nWe can define a partial product $\\pi_N$ as\n\\begin{equation}\n \\pi_N = 2 \\prod_{n=1}^{N} \\frac{ (2 n)^2 }{(2 n - 1) (2 n + 1)},\n\\end{equation}\neach of which are rational numbers.\nConstruct a list of the first 20 rational number approximations to $\\pi$ and print them out. Print the sorted list to show that the approximations are always increasing. Then convert them to floating point numbers, construct a numpy array, and subtract this array from $\\pi$ to see how accurate they are." ]
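Before building the Rational class asked for in the exercises, it can help to see exact rational arithmetic in action using the standard library's fractions.Fraction. The sketch below computes the Wallis partial products $\pi_N$ from Exercise 8 with Fraction as a stand-in for the class you will write — it is a sanity check, not the exercise solution:

```python
from fractions import Fraction

def wallis_partial(N):
    """Exact partial product pi_N = 2 * prod_{n=1}^{N} (2n)^2 / ((2n-1)(2n+1))."""
    product = Fraction(2, 1)
    for n in range(1, N + 1):
        product *= Fraction((2 * n) ** 2, (2 * n - 1) * (2 * n + 1))
    return product

approximations = [wallis_partial(N) for N in range(1, 21)]
print(approximations[0])          # 8/3
print(float(approximations[-1]))  # about 3.10; the product approaches pi slowly
```

Each factor $4n^2 / (4n^2 - 1)$ is greater than $1$, which is why the partial products increase monotonically towards $\pi$, exactly as Exercise 8 asks you to verify.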
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jaeoh2/self-driving-car-nd
CarND-LaneLines-P1/P1.ipynb
mit
[ "Self-Driving Car Engineer Nanodegree\nProject: Finding Lane Lines on the Road\n\nIn this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip \"raw-lines-example.mp4\" (also contained in this repository) to see what the output should look like after using the helper functions below. \nOnce you have a result that looks roughly like \"raw-lines-example.mp4\", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.\nIn addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the rubric points for this project.\n\nLet's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the \"play\" button above) to display the image.\nNote: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the \"Kernel\" menu above and selecting \"Restart & Clear Output\".\n\nThe tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. 
Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.\n\n<figure>\n <img src=\"examples/line-segments-example.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> \n </figcaption>\n</figure>\n<p></p>\n<figure>\n <img src=\"examples/laneLines_thirdPass.jpg\" width=\"380\" alt=\"Combined Image\" />\n <figcaption>\n <p></p> \n <p style=\"text-align: center;\"> Your goal is to connect/average/extrapolate line segments to get output like this</p> \n </figcaption>\n</figure>\n\nRun the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see this forum post for more troubleshooting tips. 
\nImport Packages", "#importing some useful packages\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2\n%matplotlib inline", "Read in an Image", "#reading in an image\nimage = mpimg.imread('test_images/solidWhiteRight.jpg')\n\n#printing out some stats and plotting\nprint('This image is:', type(image), 'with dimensions:', image.shape)\nplt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')", "Ideas for Lane Detection Pipeline\nSome OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:\ncv2.inRange() for color selection\ncv2.fillPoly() for regions selection\ncv2.line() to draw lines on an image given endpoints\ncv2.addWeighted() to coadd / overlay two images\ncv2.cvtColor() to grayscale or change color\ncv2.imwrite() to output images to file\ncv2.bitwise_and() to apply a mask to an image\nCheck out the OpenCV documentation to learn about these and discover even more awesome functionality!\nHelper Functions\nBelow are some helper functions to help get you started. 
They should look familiar from the lesson!", "import math\n\ndef grayscale(img):\n \"\"\"Applies the Grayscale transform\n This will return an image with only one color channel\n but NOTE: to see the returned image as grayscale\n (assuming your grayscaled image is called 'gray')\n you should call plt.imshow(gray, cmap='gray')\"\"\"\n return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Or use BGR2GRAY if you read an image with cv2.imread()\n # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n \ndef canny(img, low_threshold, high_threshold):\n \"\"\"Applies the Canny transform\"\"\"\n return cv2.Canny(img, low_threshold, high_threshold)\n\ndef gaussian_blur(img, kernel_size):\n \"\"\"Applies a Gaussian Noise kernel\"\"\"\n return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)\n\ndef region_of_interest(img, vertices):\n \"\"\"\n Applies an image mask.\n \n Only keeps the region of the image defined by the polygon\n formed from `vertices`. The rest of the image is set to black.\n \"\"\"\n #defining a blank mask to start with\n mask = np.zeros_like(img) \n \n #defining a 3 channel or 1 channel color to fill the mask with depending on the input image\n if len(img.shape) > 2:\n channel_count = img.shape[2] # i.e. 3 or 4 depending on your image\n ignore_mask_color = (255,) * channel_count\n else:\n ignore_mask_color = 255\n \n #filling pixels inside the polygon defined by \"vertices\" with the fill color \n cv2.fillPoly(mask, vertices, ignore_mask_color)\n \n #returning the image only where mask pixels are nonzero\n masked_image = cv2.bitwise_and(img, mask)\n return masked_image\n\ndef x_result(m,y,b):\n return int((y-b)/m)\n\ndef draw_lines(img, lines, color=[255, 0, 0], thickness=10):\n \"\"\"\n NOTE: this is the function you might want to use as a starting point once you want to \n average/extrapolate the line segments you detect to map out the full\n extent of the lane (going from the result shown in raw-lines-example.mp4\n to that shown in P1_example.mp4). 
\n \n Think about things like separating line segments by their \n slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left\n line vs. the right line. Then, you can average the position of each of \n the lines and extrapolate to the top and bottom of the lane.\n \n This function draws `lines` with `color` and `thickness`. \n Lines are drawn on the image inplace (mutates the image).\n If you want to make the lines semi-transparent, think about combining\n this function with the weighted_img() function below\n \"\"\"\n# for line in lines:\n# for x1,y1,x2,y2 in line:\n# cv2.line(img, (x1, y1), (x2, y2), color, thickness)\n \n m_list = []\n m_left = []\n m_right = []\n b_left = []\n b_right = []\n for line in lines:\n for x1,y1,x2,y2 in line:\n if x2 == x1:\n continue # skip vertical segments: slope is undefined\n m = ((y2-y1)/(x2-x1))\n b = y1 - m*x1\n m_list.append([m,b])\n\n for m,b in m_list:\n if m<0:\n m_left.append(m)\n b_left.append(b)\n else:\n m_right.append(m)\n b_right.append(b)\n\n ml = np.mean(m_left)\n mr = np.mean(m_right)\n bl = np.mean(b_left)\n br = np.mean(b_right)\n \n # use the height of the image being drawn on as the bottom y-coordinate,\n # rather than a global (roi_images) defined in another cell\n y_bottom = img.shape[0]\n cv2.line(img, (x_result(ml,y_bottom,bl), y_bottom), \n (x_result(ml,330,bl),330), [255,0,0], 10)\n cv2.line(img, (x_result(mr,y_bottom,br), y_bottom), \n (x_result(mr,330,br),330), [255,0,0], 10)\n\n\ndef hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):\n \"\"\"\n `img` should be the output of a Canny transform.\n \n Returns an image with hough lines drawn.\n \"\"\"\n lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)\n line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)\n draw_lines(line_img, lines)\n return line_img\n\n# Python 3 has support for cool math symbols.\n\ndef weighted_img(img, initial_img, α=0.8, β=1., λ=0.):\n \"\"\"\n `img` is the output of hough_lines(), an image with lines drawn on it.\n Should be a blank image (all black) with lines drawn on it.\n \n 
`initial_img` should be the image before any processing.\n \n The result image is computed as follows:\n \n initial_img * α + img * β + λ\n NOTE: initial_img and img must be the same shape!\n \"\"\"\n return cv2.addWeighted(initial_img, α, img, β, λ)", "Test Images\nBuild your pipeline to work on the images in the directory \"test_images\"\nYou should make sure your pipeline works well on these images before you try the videos.", "import os\nos.listdir(\"test_images/\")", "Build a Lane Finding Pipeline\nBuild the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.\nTry tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.", "# TODO: Build your pipeline that will draw lane lines on the test_images\n# then save them to the test_images directory.\n\n## Import Images\nimage_list = os.listdir(\"test_images/\")\nimages = []\nfor i in range(len(image_list)):\n images.append(mpimg.imread(os.path.join('test_images/',image_list[i])))\n \nimages[0].shape\n\n## Pre-Process Images(gray,blur,cany)\npp_images = []\nfor i in range(len(images)):\n pp_images.append(canny(gaussian_blur(grayscale(images[i]),kernel_size=5),low_threshold=50, high_threshold=150))\n\nplt.imshow(pp_images[0],cmap='gray')\n\n## Set ROI\nimshape = pp_images[0].shape\nvertices = np.array([[(100,imshape[0]),(420, 330), (550, 330), (imshape[1]-50,imshape[0])]], dtype=np.int32)\nroi_images = []\nfor i in range(len(pp_images)):\n roi_images.append(region_of_interest(pp_images[i], vertices))\n \nplt.imshow(roi_images[5])\n\n## Draw line\ndraw_images = []\nfor i in range(len(roi_images)):\n# draw_images.append(hough_lines(roi_images[i], rho=1, theta=np.pi/180, threshold=15, min_line_len=40, max_line_gap=20))\n draw_images.append(hough_lines(roi_images[i], rho=1, theta=np.pi/180, threshold=40, min_line_len=40, 
max_line_gap=20))\n\nplt.subplot(1,2,1),plt.imshow(images[5])\nplt.subplot(1,2,2),plt.imshow(draw_images[5])\n\n## Overlay images\nresult = []\ninitial_img = images\nfor i in range(len(draw_images)):\n result.append(weighted_img(draw_images[i], initial_img[i], α=0.8, β=1., λ=0.))\n \nplt.imshow(result[5])", "Test on Videos\nYou know what's cooler than drawing lanes over images? Drawing lanes over video!\nWe can test our solution on two provided videos:\nsolidWhiteRight.mp4\nsolidYellowLeft.mp4\nNote: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, check out this forum post for more troubleshooting tips.\nIf you get an error that looks like this:\nNeedDownloadError: Need ffmpeg exe. \nYou can download it by calling: \nimageio.plugins.ffmpeg.download()\nFollow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.", "# Import everything needed to edit/save/watch video clips\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML\n\ndef process_image(image):\n # NOTE: The output you return should be a color image (3 channel) for processing video below\n # TODO: put your pipeline here,\n # you should return the final output (image where lines are drawn on lanes)\n\n pp_img = canny(gaussian_blur(grayscale(image),kernel_size=5),low_threshold=50, high_threshold=150)\n \n imshape = pp_img.shape\n vertices = np.array([[(100,imshape[0]),(420, 330), (550, 330), (imshape[1]-50,imshape[0])]], dtype=np.int32)\n roi_img = region_of_interest(pp_img, vertices)\n \n line = hough_lines(roi_img, rho=1, theta=np.pi/180, threshold=40, min_line_len=40, max_line_gap=200)\n result = weighted_img(line, image, α=0.8, β=1., λ=0.)\n return result", "Let's try the one with the solid white lane on the right first ...", "white_output = 
'test_videos_output/solidWhiteRight.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\").subclip(0,5)\nclip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\")\nwhite_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!\n%time white_clip.write_videofile(white_output, audio=False)", "Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.", "HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(white_output))", "Improve the draw_lines() function\nAt this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video \"P1_example.mp4\".\nGo back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. 
The lines should start from the bottom of the image and extend out to the top of the region of interest.\nNow for the one with the solid yellow lane on the left. This one's more tricky!", "yellow_output = 'test_videos_output/solidYellowLeft.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)\nclip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')\nyellow_clip = clip2.fl_image(process_image)\n%time yellow_clip.write_videofile(yellow_output, audio=False)\n\nHTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(yellow_output))", "Writeup and Submission\nIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.\nOptional Challenge\nTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? 
If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!", "challenge_output = 'test_videos_output/challenge.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)\nclip3 = VideoFileClip('test_videos/challenge.mp4')\nchallenge_clip = clip3.fl_image(process_image)\n%time challenge_clip.write_videofile(challenge_output, audio=False)\n\nHTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(challenge_output))" ]
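The slope and intercept averaging inside draw_lines above can be exercised without OpenCV, which makes the logic easier to reason about and test. A small pure-Python sketch with hypothetical segment endpoints (the coordinates below are made up, not taken from the project videos); note that image y grows downward, so the left lane line has negative slope:

```python
def average_lane_lines(segments):
    """Split (x1, y1, x2, y2) segments into left (negative slope) and
    right (positive slope) groups, then average each group's slope m
    and intercept b. Returns ((m_left, b_left), (m_right, b_right))."""
    left, right = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue  # vertical segment: slope is undefined
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        (left if m < 0 else right).append((m, b))

    def mean(values):
        return sum(values) / len(values)

    left_fit = (mean([m for m, _ in left]), mean([b for _, b in left]))
    right_fit = (mean([m for m, _ in right]), mean([b for _, b in right]))
    return left_fit, right_fit


# Hypothetical Hough output: two left-line pieces, two right-line pieces.
segments = [(100, 540, 300, 340), (120, 530, 320, 330),
            (700, 340, 900, 540), (680, 330, 880, 530)]
left, right = average_lane_lines(segments)
print(left, right)  # (-1.0, 645.0) (1.0, -355.0)
```

The notebook's x_result helper then converts an averaged (m, b) pair back into the x pixel at a chosen y, which is how the single extrapolated line per side is drawn.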
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
d00d/quantNotebooks
Notebooks/quantopian_research_public/notebooks/lectures/Plotting_Data/notebook.ipynb
unlicense
[ "Graphical Representations of Data\nBy Evgenia \"Jenny\" Nitishinskaya, Maxwell Margenot, and Delaney Granizo-Mackenzie.\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\ngithub.com/quantopian/research_public\n\nNotebook released under the Creative Commons Attribution 4.0 License.\nRepresenting data graphically can be incredibly useful for learning how the data behaves and seeing potential structure or flaws. Care should be taken, as humans are incredibly good at seeing only evidence that confirms our beliefs, and visual data lends itself well to that. Plots are good to use when formulating a hypothesis, but should not be used to test a hypothesis.\nWe will go over some common plots here.", "# Import our libraries\n\n# This is for numerical processing\nimport numpy as np\n# This is the library most commonly used for plotting in Python.\n# Notice how we import it 'as' plt, this enables us to type plt\n# rather than the full string every time.\nimport matplotlib.pyplot as plt", "Getting Some Data\nIf we're going to plot data we need some data to plot. We'll get the pricing data of Apple (AAPL) and Microsoft (MSFT) to use in our examples.\nData Structure\nKnowing the structure of your data is very important. Normally you'll have to do a ton of work molding your data into the form you need for testing. Quantopian has done a lot of cleaning on the data, but you still need to put it into the right shapes and formats for your purposes.\nIn this case the data will be returned as a pandas dataframe object. The rows are timestamps, and the columns are the two assets, AAPL and MSFT.", "start = '2014-01-01'\nend = '2015-01-01'\ndata = get_pricing(['AAPL', 'MSFT'], fields='price', start_date=start, end_date=end)\ndata.head()", "Indexing into the data with data['AAPL'] will yield an error because the columns are equity objects and not simple strings. Let's change that using this little piece of Python code. 
Don't worry about understanding it right now, unless you do, in which case congratulations.", "data.columns = [e.symbol for e in data.columns]\ndata.head()", "Much nicer, now we can index. Indexing into the 2D dataframe will give us a 1D series object. The index of the series is timestamps, and the value at each index is a price. It is similar to an array, except that instead of integer indices it is indexed by times.", "data['MSFT'].head()", "Histogram\nA histogram is a visualization of how frequent different values of data are. By displaying a frequency distribution using bars, it lets us quickly see where most of the observations are clustered. The height of each bar represents the number of observations that lie in each interval. You can think of a histogram as an empirical and discrete Probability Density Function (PDF).", "# Plot a histogram using 20 bins\nplt.hist(data['MSFT'], bins=20)\nplt.xlabel('Price')\nplt.ylabel('Number of Days Observed')\nplt.title('Frequency Distribution of MSFT Prices, 2014');", "Returns Histogram\nIn finance rarely will we look at the distribution of prices. The reason for this is that prices are non-stationary and move around a lot. For more info on non-stationarity please see this lecture. Instead we will use daily returns. Let's try that now.", "# Remove the first element because percent change from nothing to something is NaN\nR = data['MSFT'].pct_change()[1:]\n\n# Plot a histogram using 20 bins\nplt.hist(R, bins=20)\nplt.xlabel('Return')\nplt.ylabel('Number of Days Observed')\nplt.title('Frequency Distribution of MSFT Returns, 2014');", "The graph above shows, for example, that the daily returns of MSFT were above 0.03 on fewer than 5 days in 2014. Note that we are completely discarding the dates corresponding to these returns. 
\nIMPORTANT: Note also that this does not imply that future returns will have the same distribution.\nCumulative Histogram (Discrete Estimated CDF)\nAn alternative way to display the data would be using a cumulative distribution function, in which the height of a bar represents the number of observations that lie in that bin or in one of the previous ones. This graph is always nondecreasing since you cannot have a negative number of observations. The choice of graph depends on the information you are interested in.", "# Remove the first element because percent change from nothing to something is NaN\nR = data['MSFT'].pct_change()[1:]\n\n# Plot a histogram using 20 bins\nplt.hist(R, bins=20, cumulative=True)\nplt.xlabel('Return')\nplt.ylabel('Number of Days Observed')\nplt.title('Cumulative Distribution of MSFT Returns, 2014');", "Scatter plot\nA scatter plot is useful for visualizing the relationship between two data sets. We use two data sets which have some sort of correspondence, such as the date on which the measurement was taken. Each point represents two corresponding values from the two data sets. However, we don't plot the date that the measurements were taken on.", "plt.scatter(data['MSFT'], data['AAPL'])\nplt.xlabel('MSFT')\nplt.ylabel('AAPL')\nplt.title('Daily Prices in 2014');\n\nR_msft = data['MSFT'].pct_change()[1:]\nR_aapl = data['AAPL'].pct_change()[1:]\n\nplt.scatter(R_msft, R_aapl)\nplt.xlabel('MSFT')\nplt.ylabel('AAPL')\nplt.title('Daily Returns in 2014');", "Line graph\nA line graph can be used when we want to track the development of the y value as the x value changes. For instance, when we are plotting the price of a stock, showing it as a line graph instead of just plotting the data points makes it easier to follow the price over time. 
This necessarily involves \"connecting the dots\" between the data points, which can mask out changes that happened between the time we took measurements.", "plt.plot(data['MSFT'])\nplt.plot(data['AAPL'])\nplt.ylabel('Price')\nplt.legend(['MSFT', 'AAPL']);\n\n# Remove the first element because percent change from nothing to something is NaN\nR = data['MSFT'].pct_change()[1:]\n\nplt.plot(R)\nplt.ylabel('Return')\nplt.title('MSFT Returns');", "Never Assume Conditions Hold\nAgain, whenever using plots to visualize data, do not assume you can test a hypothesis by looking at a graph. Also do not assume that because a distribution or trend used to be true, it is still true. In general much more sophisticated and careful validation is required to test whether models hold, plots are mainly useful when initially deciding how your models should work.\nThis presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. (\"Quantopian\"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
dev/_downloads/a9e07affc8c71aa96bb4ffe855ff552c/morph_surface_stc.ipynb
bsd-3-clause
[ "%matplotlib inline", "Morph surface source estimate\nThis example demonstrates how to morph an individual subject's\n:class:mne.SourceEstimate to a common reference space. We achieve this using\n:class:mne.SourceMorph. Pre-computed data will be morphed based on\na spherical representation of the cortex computed using the spherical\nregistration of FreeSurfer &lt;tut-freesurfer-mne&gt;\n(https://surfer.nmr.mgh.harvard.edu/fswiki/SurfaceRegAndTemplates)\n:footcite:GreveEtAl2013. This\ntransform will be used to morph the surface vertices of the subject towards the\nreference vertices. Here we will use 'fsaverage' as a reference space (see\nhttps://surfer.nmr.mgh.harvard.edu/fswiki/FsAverage).\nThe transformation will be applied to the surface source estimate. A plot\ndepicting the successful morph will be created for the spherical and inflated\nsurface representation of 'fsaverage', overlaid with the morphed surface\nsource estimate.\n<div class=\"alert alert-info\"><h4>Note</h4><p>For background information about morphing see `ch_morph`.</p></div>", "# Author: Tommy Clausner <tommy.clausner@gmail.com>\n#\n# License: BSD-3-Clause\n\nimport os\nimport os.path as op\n\nimport mne\nfrom mne.datasets import sample\n\nprint(__doc__)", "Setup paths", "data_path = sample.data_path()\nsample_dir = op.join(data_path, 'MEG', 'sample')\nsubjects_dir = op.join(data_path, 'subjects')\nfname_src = op.join(subjects_dir, 'sample', 'bem', 'sample-oct-6-src.fif')\nfname_fwd = op.join(sample_dir, 'sample_audvis-meg-oct-6-fwd.fif')\nfname_fsaverage_src = os.path.join(subjects_dir, 'fsaverage', 'bem',\n 'fsaverage-ico-5-src.fif')\n\nfname_stc = os.path.join(sample_dir, 'sample_audvis-meg')", "Load example data", "# Read stc from file\nstc = mne.read_source_estimate(fname_stc, subject='sample')", "Setting up SourceMorph for SourceEstimate\nIn MNE, surface source estimates represent the source space simply as\nlists of vertices (see tut-source-estimate-class).\nThis list can either be 
obtained from :class:mne.SourceSpaces (src) or from\nthe stc itself. If you use the source space, be sure to use the\nsource space from the forward or inverse operator, because vertices\ncan be excluded during forward computation due to proximity to the BEM\ninner skull surface:", "src_orig = mne.read_source_spaces(fname_src)\nprint(src_orig) # n_used=4098, 4098\nfwd = mne.read_forward_solution(fname_fwd)\nprint(fwd['src']) # n_used=3732, 3766\nprint([len(v) for v in stc.vertices])", "We also need to specify the set of vertices to morph to. This can be done\nusing the spacing parameter, but for consistency it's better to pass the\nsrc_to parameter.\n<div class=\"alert alert-info\"><h4>Note</h4><p>Since the default values of :func:`mne.compute_source_morph` are\n ``spacing=5, subject_to='fsaverage'``, in this example\n we could actually omit the ``src_to`` and ``subject_to`` arguments\n below. The ico-5 ``fsaverage`` source space contains the\n special values ``[np.arange(10242)] * 2``, but in general this will\n not be true for other spacings or other subjects. 
Thus it is recommended\n to always pass the destination ``src`` for consistency.</p></div>\n\nInitialize SourceMorph for SourceEstimate", "src_to = mne.read_source_spaces(fname_fsaverage_src)\nprint(src_to[0]['vertno']) # special, np.arange(10242)\nmorph = mne.compute_source_morph(stc, subject_from='sample',\n subject_to='fsaverage', src_to=src_to,\n subjects_dir=subjects_dir)", "Apply morph to (Vector) SourceEstimate\nThe morph will be applied to the source estimate data, by giving it as the\nfirst argument to the morph we computed above.", "stc_fsaverage = morph.apply(stc)", "Plot results", "# Define plotting parameters\nsurfer_kwargs = dict(\n hemi='lh', subjects_dir=subjects_dir,\n clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',\n initial_time=0.09, time_unit='s', size=(800, 800),\n smoothing_steps=5)\n\n# As spherical surface\nbrain = stc_fsaverage.plot(surface='sphere', **surfer_kwargs)\n\n# Add title\nbrain.add_text(0.1, 0.9, 'Morphed to fsaverage (spherical)', 'title',\n font_size=16)", "As inflated surface", "brain_inf = stc_fsaverage.plot(surface='inflated', **surfer_kwargs)\n\n# Add title\nbrain_inf.add_text(0.1, 0.9, 'Morphed to fsaverage (inflated)', 'title',\n font_size=16)", "Reading and writing SourceMorph from and to disk\nAn instance of SourceMorph can be saved, by calling\n:meth:morph.save &lt;mne.SourceMorph.save&gt;.\nThis method allows for specification of a filename under which the morph\nwill be saved in \".h5\" format. If no file extension is provided, \"-morph.h5\"\nwill be appended to the respective defined filename::\n&gt;&gt;&gt; morph.save('my-file-name')\n\nReading a saved source morph can be achieved by using\n:func:mne.read_source_morph::\n&gt;&gt;&gt; morph = mne.read_source_morph('my-file-name-morph.h5')\n\nOnce the environment is set up correctly, no information such as\nsubject_from or subjects_dir must be provided, since it can be\ninferred from the data and use morph to 'fsaverage' by default.
SourceMorph\ncan further be used without creating an instance and assigning it to a\nvariable. Instead :func:mne.compute_source_morph and\n:meth:mne.SourceMorph.apply can be\neasily chained into a handy one-liner. Taking this together the shortest\npossible way to morph data directly would be:", "stc_fsaverage = mne.compute_source_morph(stc,\n subjects_dir=subjects_dir).apply(stc)", "For more examples, check out examples using SourceMorph.apply\n&lt;sphx_glr_backreferences_mne.SourceMorph.apply&gt;.\nReferences\n.. footbibliography::" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
wtgme/labeldoc2vec
docs/notebooks/distance_metrics.ipynb
lgpl-2.1
[ "New Distance Metrics for Probability Distribution and Bag of Words\nA small tutorial to illustrate the new distance functions.\nWe would need this mostly when comparing how similar two probability distributions are, and in the case of gensim, usually for LSI or LDA topic distributions after we have an LDA model.\nGensim already has functionalities for this, in the sense of getting most similar documents - this, this and this are such examples of documentation and tutorials.\nWhat this tutorial shows is a building block of these larger methods, which are a small suite of distance metrics.\nWe'll start by setting up a small corpus and showing off the methods.", "from gensim.corpora import Dictionary\nfrom gensim.models import ldamodel\nfrom gensim.matutils import kullback_leibler, jaccard, hellinger, sparse2full\nimport numpy\n\n# you can use any corpus, this is just illustrative\n\ntexts = [['bank','river','shore','water'],\n ['river','water','flow','fast','tree'],\n ['bank','water','fall','flow'],\n ['bank','bank','water','rain','river'],\n ['river','water','mud','tree'],\n ['money','transaction','bank','finance'],\n ['bank','borrow','money'], \n ['bank','finance'],\n ['finance','money','sell','bank'],\n ['borrow','sell'],\n ['bank','loan','sell']]\n\ndictionary = Dictionary(texts)\ncorpus = [dictionary.doc2bow(text) for text in texts]\n\nnumpy.random.seed(1) # setting random seed to get the same results each time.\nmodel = ldamodel.LdaModel(corpus, id2word=dictionary, num_topics=2)\n\nmodel.show_topics()", "Let's take a few sample documents and get them ready to test Similarity. Let's call the 1st topic the water topic and the second topic the finance topic.\nNote: these are all distance metrics.
This means that a value between 0 and 1 is returned, where values closer to 0 indicate a smaller 'distance' and therefore a larger similarity.", "doc_water = ['river', 'water', 'shore']\ndoc_finance = ['finance', 'money', 'sell']\ndoc_bank = ['finance', 'bank', 'tree', 'water']\n\n# now let's make these into a bag of words format\n\nbow_water = model.id2word.doc2bow(doc_water) \nbow_finance = model.id2word.doc2bow(doc_finance) \nbow_bank = model.id2word.doc2bow(doc_bank) \n\n# we can now get the LDA topic distributions for these\nlda_bow_water = model[bow_water]\nlda_bow_finance = model[bow_finance]\nlda_bow_bank = model[bow_bank]", "Hellinger and Kullback–Leibler\nWe're now ready to apply our distance metrics.\nLet's start with the popular Hellinger distance. \nThe Hellinger distance metric gives an output in the range [0,1] for two probability distributions, with values closer to 0 meaning they are more similar.", "hellinger(lda_bow_water, lda_bow_finance)\n\nhellinger(lda_bow_finance, lda_bow_bank)", "Makes sense, right? In the first example, Document 1 and Document 2 are hardly similar, so we get a value of roughly 0.5. \nIn the second case, the documents are a lot more similar, semantically. Trained with the model, they give a much smaller distance value.\nLet's run similar examples down with Kullback Leibler.
What does this mean?\nThe bank document is a combination of both water and finance related terms - but as bank in this context is likely to belong to the finance topic, the distance values are less between the finance and bank bows.", "# just to confirm our suspicion that the bank bow is more to do with finance:\n\nmodel.get_document_topics(bow_bank)", "It's evident that while it isn't too skewed, it is more towards the finance topic.\nDistance metrics (also referred to as similarity metrics), as suggested in the examples above, are mainly for probability distributions, but the methods can accept a bunch of formats for input. You can do some further reading on Kullback Leibler and Hellinger to figure out what suits your needs.\nJaccard\nLet us now look at the Jaccard Distance metric for similarity between bags of words (i.e., documents)", "jaccard(bow_water, bow_bank)\n\njaccard(doc_water, doc_bank)\n\njaccard(['word'], ['word'])", "The three examples above feature 2 different input methods. \nIn the first case, we present to jaccard document vectors already in bag of words format. The distance can be defined as 1 minus the size of the intersection upon the size of the union of the vectors. \nWe can see (on manual inspection as well), that the distance is likely to be high - and it is. \nThe last two examples illustrate the ability for jaccard to accept even lists (i.e., documents) as inputs.\nIn the last case, because they are the same vectors, the value returned is 0 - this means the distance is 0 and they are very similar. \nDistance Metrics for Topic Distributions\nWhile there are already standard methods to identify similarity of documents, our distance metrics have one more interesting use-case: topic distributions.
\nLet's say we want to find out how similar our two topics are, water and finance.", "topic_water, topic_finance = model.show_topics()\n\n# some pre processing to get the topics in a format acceptable to our distance metrics\n\ndef make_topics_bow(topic):\n # takes the string returned by model.show_topics()\n # split on strings to get topics and the probabilities\n topic = topic.split('+')\n # list to store topic bows\n topic_bow = []\n for word in topic:\n # split probability and word\n prob, word = word.split('*')\n # get rid of spaces\n word = word.replace(\" \",\"\")\n # convert to word_type\n word = model.id2word.doc2bow([word])[0][0]\n topic_bow.append((word, float(prob)))\n return topic_bow\n\nfinance_distribution = make_topics_bow(topic_finance[1])\nwater_distribution = make_topics_bow(topic_water[1])\n\n# the finance topic in bag of words format looks like this:\nfinance_distribution", "Now that we've got our topics in a format more acceptable by our functions, let's use a Distance metric to see how similar the word distributions in the topics are.", "hellinger(water_distribution, finance_distribution)", "Our value of roughly 0.36 means that the topics are not TOO distant with respect to their word distributions.\nThis makes sense again, because of overlapping words like bank and a small size dictionary.\nSome things to take care of\nIn our previous example we didn't use Kullback Leibler to test for similarity for a reason - KL is not a Distance 'Metric' in the technical sense (you can see what a metric is here. The nature of it, mathematically also means we must be a little careful before using it, because since it involves the log function, a zero can mess things up. For example:", "# 16 here is the number of features the probability distribution draws from\nkullback_leibler(water_distribution, finance_distribution, 16) ", "That wasn't very helpful, right? This just means that we have to be a bit careful about our inputs. 
Our old example didn't work out because there were some missing values for some words (because show_topics() only returned the top 10 words for each topic). \nThis can be remedied, though.", "# return ALL the words in the dictionary for the topic-word distribution.\ntopic_water, topic_finance = model.show_topics(num_words=len(model.id2word))\n\n# do our bag of words transformation again\nfinance_distribution = make_topics_bow(topic_finance[1])\nwater_distribution = make_topics_bow(topic_water[1])\n\n# and voila!\nkullback_leibler(water_distribution, finance_distribution)", "You may notice that the distance for this is quite small, indicating a high similarity. This may be a bit off because of the small size of the corpus, where all topics are likely to contain a decent overlap of word probabilities. You will likely get a better value for a bigger corpus.\nSo, just remember, if you intend to use KL as a metric to measure similarity or distance between two distributions, avoid zeros by returning the ENTIRE distribution. Since it's unlikely any probability distribution will ever have absolute zeros for any feature/word, returning all the values like we did will make you good to go.\nSo - what exactly are Distance Metrics?\nHaving seen the practical usages of these measures (i.e., to find similarity), let's learn a little about what exactly Distance Measures and Metrics are. \nI mentioned in the previous section that KL was not a distance metric. There are 4 conditions for a distance measure to be a metric:\n\nd(x,y) >= 0\nd(x,y) = 0 <=> x = y\nd(x,y) = d(y,x)\nd(x,z) <= d(x,y) + d(y,z)\n\nThat is: it must be non-negative; if x and y are the same, distance must be zero; it must be symmetric; and it must obey the triangle inequality law. \nSimple enough, right? \nLet's test these out for our measures.
It is indeed symmetric!\nhellinger(finance_distribution, water_distribution)\n\n# if we pass the same values, it is zero.\nhellinger(water_distribution, water_distribution)\n\n# for triangle inequality let's use LDA document distributions\nhellinger(lda_bow_finance, lda_bow_bank)\n\n# Triangle inequality works too!\nhellinger(lda_bow_finance, lda_bow_water) + hellinger(lda_bow_water, lda_bow_bank)", "So Hellinger is indeed a metric. Let's check out KL.", "kullback_leibler(finance_distribution, water_distribution)\n\nkullback_leibler(water_distribution, finance_distribution)", "We immediately notice that when we swap the values they aren't equal! One of the four conditions not fitting is enough for it to not be a metric. \nHowever, just because it is not a metric, (strictly in the mathematical sense) does not mean that it is not useful to figure out the distance between two probability distributions. KL Divergence is widely used for this purpose, and is probably the most 'famous' distance measure in fields like Information Theory.\nFor a nice review of the mathematical differences between Hellinger and KL, this link does a very good job. \nConclusion\nThat brings us to the end of this small tutorial.\nThe scope for adding new similarity metrics is large, as there exist an even larger suite of metrics and methods to add to the matutils.py file. (This is one paper which talks about some of them)\nLooking forward to more PRs towards this functionality in Gensim! :)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tpin3694/tpin3694.github.io
machine-learning/naive_bayes_classifier_from_scratch.ipynb
mit
[ "Title: Naive Bayes Classifier From Scratch \nSlug: naive_bayes_classifier_from_scratch \nSummary: How to build a naive bayes classifier from scratch in Python. \nDate: 2016-12-12 12:00 \nCategory: Machine Learning \nTags: Naive Bayes \nAuthors: Chris Albon \nNaive bayes is a simple classifier known for doing well when only a small number of observations is available. In this tutorial we will create a gaussian naive bayes classifier from scratch and use it to predict the class of a previously unseen data point. This tutorial is based on an example on Wikipedia's naive bayes classifier page; I have implemented it in Python and tweaked some notation to improve explanation. \nPreliminaries", "import pandas as pd\nimport numpy as np", "Create Data\nOur dataset contains data on eight individuals. We will use the dataset to construct a classifier that takes in the height, weight, and foot size of an individual and outputs a prediction for their gender.", "# Create an empty dataframe\ndata = pd.DataFrame()\n\n# Create our target variable\ndata['Gender'] = ['male','male','male','male','female','female','female','female']\n\n# Create our feature variables\ndata['Height'] = [6,5.92,5.58,5.92,5,5.5,5.42,5.75]\ndata['Weight'] = [180,190,170,165,100,150,130,150]\ndata['Foot_Size'] = [12,11,12,10,6,8,7,9]\n\n# View the data\ndata", "The dataset above is used to construct our classifier. Below we will create a new person for whom we know their feature values but not their gender. Our goal is to predict their gender.", "# Create an empty dataframe\nperson = pd.DataFrame()\n\n# Create some feature values for this single row\nperson['Height'] = [6]\nperson['Weight'] = [130]\nperson['Foot_Size'] = [8]\n\n# View the data \nperson", "Bayes Theorem\nBayes theorem is a famous equation that allows us to make predictions based on data.
Here is the classic version of the Bayes theorem:\n$$\\displaystyle P(A\\mid B)={\\frac {P(B\\mid A)\\,P(A)}{P(B)}}$$\nThis might be too abstract, so let us replace some of the variables to make it more concrete. In a bayes classifier, we are interested in finding out the class (e.g. male or female, spam or ham) of an observation given the data:\n$$p(\\text{class} \\mid \\mathbf {\\text{data}} )={\\frac {p(\\mathbf {\\text{data}} \\mid \\text{class}) * p(\\text{class})}{p(\\mathbf {\\text{data}} )}}$$\nwhere: \n\n$\\text{class}$ is a particular class (e.g. male)\n$\\mathbf {\\text{data}}$ is an observation's data\n$p(\\text{class} \\mid \\mathbf {\\text{data}} )$ is called the posterior\n$p(\\text{data|class})$ is called the likelihood\n$p(\\text{class})$ is called the prior\n$p(\\mathbf {\\text{data}} )$ is called the marginal probability\n\nIn a bayes classifier, we calculate the posterior (technically we only calculate the numerator of the posterior, but ignore that for now) for every class for each observation. Then, classify the observation based on the class with the largest posterior value. In our example, we have one observation to predict and two possible classes (e.g. male and female), therefore we will calculate two posteriors: one for male and one for female.\n$$p(\\text{person is male} \\mid \\mathbf {\\text{person's data}} )={\\frac {p(\\mathbf {\\text{person's data}} \\mid \\text{person is male}) * p(\\text{person is male})}{p(\\mathbf {\\text{person's data}} )}}$$\n$$p(\\text{person is female} \\mid \\mathbf {\\text{person's data}} )={\\frac {p(\\mathbf {\\text{person's data}} \\mid \\text{person is female}) * p(\\text{person is female})}{p(\\mathbf {\\text{person's data}} )}}$$\nGaussian Naive Bayes Classifier\nA gaussian naive bayes is probably the most popular type of bayes classifier. 
To explain what the name means, let us look at what the bayes equation looks like when we apply our two classes (male and female) and three feature variables (height, weight, and footsize):\n$${\\displaystyle {\\text{posterior (male)}}={\\frac {P({\\text{male}})\\,p({\\text{height}}\\mid{\\text{male}})\\,p({\\text{weight}}\\mid{\\text{male}})\\,p({\\text{foot size}}\\mid{\\text{male}})}{\\text{marginal probability}}}}$$\n$${\\displaystyle {\\text{posterior (female)}}={\\frac {P({\\text{female}})\\,p({\\text{height}}\\mid{\\text{female}})\\,p({\\text{weight}}\\mid{\\text{female}})\\,p({\\text{foot size}}\\mid{\\text{female}})}{\\text{marginal probability}}}}$$\nNow let us unpack the top equation a bit:\n\n$P({\\text{male}})$ is the prior probability. It is, as you can see, simply the probability an observation is male. This is just the number of males in the dataset divided by the total number of people in the dataset.\n$p({\\text{height}}\\mid{\\text{female}})\\,p({\\text{weight}}\\mid{\\text{female}})\\,p({\\text{foot size}}\\mid{\\text{female}})$ is the likelihood. Notice that we have unpacked $\\mathbf {\\text{person's data}}$ so it is now every feature in the dataset. The \"gaussian\" and \"naive\" come from two assumptions present in this likelihood:\nIf you look at each term in the likelihood you will notice that we assume each feature is uncorrelated with each other. That is, foot size is independent of weight or height etc. This is obviously not true, and is a \"naive\" assumption - hence the name \"naive bayes.\"\nSecond, we assume that the values of the features (e.g. the height of women, the weight of women) are normally (gaussian) distributed.
This means that $p(\\text{height}\\mid\\text{female})$ is calculated by inputting the required parameters into the probability density function of the normal distribution: \n\n\n\n$$ \np(\\text{height}\\mid\\text{female})=\\frac{1}{\\sqrt{2\\pi\\text{variance of female height in the data}}}\\,e^{ -\\frac{(\\text{observation's height}-\\text{average height of females in the data})^2}{2\\text{variance of female height in the data}} }\n$$\n\n$\\text{marginal probability}$ is probably one of the most confusing parts of bayesian approaches. In toy examples (including ours) it is completely possible to calculate the marginal probability. However, in many real-world cases, it is either extremely difficult or impossible to find the value of the marginal probability (explaining why is beyond the scope of this tutorial). This is not as much of a problem for our classifier as you might think. Why? Because we don't care what the true posterior value is, we only care which class has the highest posterior value. And because the marginal probability is the same for all classes 1) we can ignore the denominator, 2) calculate only the posterior's numerator for each class, and 3) pick the largest numerator. That is, we can ignore the posterior's denominator and make a prediction solely on the relative values of the posterior's numerator.\n\nOkay! Theory over. Now let us start calculating all the different parts of the bayes equations.\nCalculate Priors\nPriors can be either constants or probability distributions. In our example it is simply the probability of being a gender.
Calculating this is simple:", "# Number of males\nn_male = data['Gender'][data['Gender'] == 'male'].count()\n\n# Number of females\nn_female = data['Gender'][data['Gender'] == 'female'].count()\n\n# Total rows\ntotal_ppl = data['Gender'].count()\n\n# Number of males divided by the total rows\nP_male = n_male/total_ppl\n\n# Number of females divided by the total rows\nP_female = n_female/total_ppl", "Calculate Likelihood\nRemember that each term (e.g. $p(\\text{height}\\mid\\text{female})$) in our likelihood is assumed to be a normal pdf. For example:\n$$ \np(\\text{height}\\mid\\text{female})=\\frac{1}{\\sqrt{2\\pi\\text{variance of female height in the data}}}\\,e^{ -\\frac{(\\text{observation's height}-\\text{average height of females in the data})^2}{2\\text{variance of female height in the data}} }\n$$\nThis means that for each class (e.g. female) and feature (e.g. height) combination we need to calculate the variance and mean value from the data. Pandas makes this easy:", "# Group the data by gender and calculate the means of each feature\ndata_means = data.groupby('Gender').mean()\n\n# View the values\ndata_means\n\n# Group the data by gender and calculate the variance of each feature\ndata_variance = data.groupby('Gender').var()\n\n# View the values\ndata_variance", "Now we can create all the variables we need.
The code below might look complex but all we are doing is creating a variable out of each cell in both of the tables above.", "# Means for male\nmale_height_mean = data_means['Height'][data_variance.index == 'male'].values[0]\nmale_weight_mean = data_means['Weight'][data_variance.index == 'male'].values[0]\nmale_footsize_mean = data_means['Foot_Size'][data_variance.index == 'male'].values[0]\n\n# Variance for male\nmale_height_variance = data_variance['Height'][data_variance.index == 'male'].values[0]\nmale_weight_variance = data_variance['Weight'][data_variance.index == 'male'].values[0]\nmale_footsize_variance = data_variance['Foot_Size'][data_variance.index == 'male'].values[0]\n\n# Means for female\nfemale_height_mean = data_means['Height'][data_variance.index == 'female'].values[0]\nfemale_weight_mean = data_means['Weight'][data_variance.index == 'female'].values[0]\nfemale_footsize_mean = data_means['Foot_Size'][data_variance.index == 'female'].values[0]\n\n# Variance for female\nfemale_height_variance = data_variance['Height'][data_variance.index == 'female'].values[0]\nfemale_weight_variance = data_variance['Weight'][data_variance.index == 'female'].values[0]\nfemale_footsize_variance = data_variance['Foot_Size'][data_variance.index == 'female'].values[0]", "Finally, we need to create a function to calculate the probability density of each of the terms of the likelihood (e.g. $p(\\text{height}\\mid\\text{female})$).", "# Create a function that calculates p(x | y):\ndef p_x_given_y(x, mean_y, variance_y):\n\n # Input the arguments into a probability density function\n p = 1/(np.sqrt(2*np.pi*variance_y)) * np.exp((-(x-mean_y)**2)/(2*variance_y))\n \n # return p\n return p", "Apply Bayes Classifier To New Data Point\nAlright! Our bayes classifier is ready. 
Remember that since we can ignore the marginal probability (the denominator), what we are actually calculating is this:\n$${\\displaystyle {\\text{numerator of the posterior}}={P({\\text{female}})\\,p({\\text{height}}\\mid{\\text{female}})\\,p({\\text{weight}}\\mid{\\text{female}})\\,p({\\text{foot size}}\\mid{\\text{female}})}{}}$$\nTo do this, we just need to plug in the values of the unclassified person (height = 6), the variables of the dataset (e.g. mean of female height), and the function (p_x_given_y) we made above:", "# Numerator of the posterior if the unclassified observation is a male\nP_male * \\\np_x_given_y(person['Height'][0], male_height_mean, male_height_variance) * \\\np_x_given_y(person['Weight'][0], male_weight_mean, male_weight_variance) * \\\np_x_given_y(person['Foot_Size'][0], male_footsize_mean, male_footsize_variance)\n\n# Numerator of the posterior if the unclassified observation is a female\nP_female * \\\np_x_given_y(person['Height'][0], female_height_mean, female_height_variance) * \\\np_x_given_y(person['Weight'][0], female_weight_mean, female_weight_variance) * \\\np_x_given_y(person['Foot_Size'][0], female_footsize_mean, female_footsize_variance)", "Because the numerator of the posterior for female is greater than that for male, we predict that the person is female." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Kaggle/learntools
notebooks/machine_learning/raw/ex6.ipynb
apache-2.0
[ "Recap\nHere's the code you've written so far.", "# Code you have previously used to load data\nimport pandas as pd\nfrom sklearn.metrics import mean_absolute_error\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.tree import DecisionTreeRegressor\n\n\n# Path of the file to read\niowa_file_path = '../input/home-data-for-ml-course/train.csv'\n\nhome_data = pd.read_csv(iowa_file_path)\n# Create target object and call it y\ny = home_data.SalePrice\n# Create X\nfeatures = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']\nX = home_data[features]\n\n# Split into validation and training data\ntrain_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)\n\n# Specify Model\niowa_model = DecisionTreeRegressor(random_state=1)\n# Fit Model\niowa_model.fit(train_X, train_y)\n\n# Make validation predictions and calculate mean absolute error\nval_predictions = iowa_model.predict(val_X)\nval_mae = mean_absolute_error(val_predictions, val_y)\nprint(\"Validation MAE when not specifying max_leaf_nodes: {:,.0f}\".format(val_mae))\n\n# Using best value for max_leaf_nodes\niowa_model = DecisionTreeRegressor(max_leaf_nodes=100, random_state=1)\niowa_model.fit(train_X, train_y)\nval_predictions = iowa_model.predict(val_X)\nval_mae = mean_absolute_error(val_predictions, val_y)\nprint(\"Validation MAE for best value of max_leaf_nodes: {:,.0f}\".format(val_mae))\n\n\n# Set up code checking\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.machine_learning.ex6 import *\nprint(\"\\nSetup complete\")", "Exercises\nData science isn't always this easy. But replacing the decision tree with a Random Forest is going to be an easy win.\nStep 1: Use a Random Forest", "from sklearn.ensemble import RandomForestRegressor\n\n# Define the model. 
Set random_state to 1\nrf_model = ____\n\n# fit your model\n____\n\n# Calculate the mean absolute error of your Random Forest model on the validation data\nrf_val_mae = ____\n\nprint(\"Validation MAE for Random Forest Model: {}\".format(rf_val_mae))\n\n# Check your answer\nstep_1.check()\n\n# The lines below will show you a hint or the solution.\n# step_1.hint() \n# step_1.solution()\n\n\n#%%RM_IF(PROD)%%\n\nfrom sklearn.ensemble import RandomForestRegressor\n\n# Define the model. Set random_state to 1\nrf_model = RandomForestRegressor(random_state=1)\n\n# fit your model\nrf_model.fit(train_X, train_y)\n\n# Calculate the mean absolute error of your Random Forest model on the validation data\nrf_val_predictions = rf_model.predict(val_X)\nrf_val_mae = mean_absolute_error(rf_val_predictions, val_y)\n\nprint(\"Validation MAE for Random Forest Model: {:,.0f}\".format(rf_val_mae))\n\nstep_1.assert_check_passed()", "So far, you have followed specific instructions at each step of your project. This helped you learn key ideas and build your first model, but now you know enough to try things on your own. \nMachine Learning competitions are a great way to try your own ideas and learn more as you independently navigate a machine learning project. \n$KEEP_GOING$" ]
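The exercise record above scores each model with `mean_absolute_error`; the metric itself reduces to a one-line average of absolute differences. A minimal pure-Python sketch for intuition — the toy prices and predictions here are made up for illustration, not taken from the Iowa dataset:

```python
def mean_absolute_error(y_true, y_pred):
    """Average of |true - predicted| over all samples."""
    assert len(y_true) == len(y_pred)
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy house prices (dollars) and model predictions, made up for illustration.
val_y = [200_000, 150_000, 300_000]
val_predictions = [210_000, 140_000, 290_000]
print(mean_absolute_error(val_y, val_predictions))  # 10000.0
```

The sklearn function used in the exercise takes the same two arguments (true values, predictions) and returns the same quantity.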
[ "markdown", "code", "markdown", "code", "markdown" ]
turbomanage/training-data-analyst
courses/machine_learning/deepdive/05_review/serving/python_streaming.ipynb
apache-2.0
[ "(Bonus) Streaming data prediction using Cloud ML Engine\nThis notebook illustrates:\n\nCreate a PubSub Topic and Subscription.\nCreate a Dataflow Streaming pipeline to consume messages.\nUse the deployed Cloud ML Engine API to make predictions.\nStore the data and the predictions in BigQuery.\nRun a stream data simulator.", "TIME_FORMAT = '%Y-%m-%d %H:%M:%S'\n\nDATASET = 'playground_ds'\nTABLE = 'babyweight_estimates'\n\nPROJECT = 'cloud-training-demos'\nSTG_BUCKET = 'cloud-training-demos-ml'\nREGION = 'us-central1'\n\nTOPIC = 'babyweights'\nSUBSCRIPTION='babyweights-sub'\n\nMODEL_NAME='babyweight_estimator'\nVERSION='v1'\n\n%%bash\n\npip install google-cloud-dataflow\npip install apache_beam==2.3\npip install six==1.10\n\nimport time\nimport datetime\nfrom google.cloud import pubsub\nimport json\nimport apache_beam as beam\nimport os\nprint beam.__version__", "Create PubSub Topic and Subscription", "client = pubsub.Client()\ntopic = client.topic(TOPIC)\n\nif not topic.exists():\n    print('Creating pub/sub topic {}...'.format(TOPIC))\n    topic.create()\n\nprint('Pub/sub topic {} is up and running'.format(TOPIC))\nprint(\"\")", "Submit Dataflow Stream Processing Job\nData source (PubSub topic) and sink (BigQuery table)", "pubsub_topic = \"projects/{}/topics/{}\".format(PROJECT, TOPIC)\n\nschema_definition = {\n    'source_id':'INTEGER',\n    'source_timestamp':'TIMESTAMP',\n    'estimated_weight_kg':'FLOAT',\n    'is_male': 'STRING',\n    'mother_age': 'FLOAT',\n    'mother_race': 'STRING',\n    'plurality': 'FLOAT',\n    'gestation_weeks': 'INTEGER',\n    'mother_married': 'BOOLEAN',\n    'cigarette_use': 'BOOLEAN',\n    'alcohol_use': 'BOOLEAN'\n}\n\nschema = str(schema_definition).replace('{','').replace('}','').replace(\"'\",'').replace(' ','')\n\nprint('Pub/Sub Topic URL: {}'.format(pubsub_topic))\nprint('')\nprint('BigQuery Dataset: {}'.format(DATASET))\nprint('BigQuery Table: {}'.format(TABLE))\nprint('')\nprint('BigQuery Table Schema: {}'.format(schema))", "Cloud ML Engine prediction function", 
"def estimate_weight(json_message):\n    \n    import json\n    from googleapiclient import discovery\n    from oauth2client.client import GoogleCredentials\n    \n    global cmle_api\n    \n    # only build the API client once per worker, not every time the function is called\n    # (the name check avoids a NameError on the first call, since cmle_api is never\n    # initialised at module level)\n    if 'cmle_api' not in globals() or cmle_api is None:\n        credentials = GoogleCredentials.get_application_default()\n        cmle_api = discovery.build('ml', 'v1', credentials=credentials,\n                                   discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json',\n                                   cache_discovery=False)\n\n    instance = json.loads(json_message)\n    source_id = instance.pop('source_id')\n    source_timestamp = instance.pop('source_timestamp')\n    \n    request_data = {'instances': [instance]}\n\n    model_url = 'projects/{}/models/{}/versions/{}'.format(PROJECT, MODEL_NAME, VERSION)\n    response = cmle_api.projects().predict(body=request_data, name=model_url).execute()\n\n    estimates = list(map(lambda item: round(item[\"scores\"], 2),\n                         response[\"predictions\"]))\n    \n    estimated_weight_kg = round(int(estimates[0]) * 0.453592, 2)\n    \n    instance['estimated_weight_kg'] = estimated_weight_kg\n    instance['source_id'] = source_id\n    instance['source_timestamp'] = source_timestamp\n\n    return instance", "Beam streaming pipeline", "def run_babyweight_estimates_streaming_pipeline():\n    \n    job_name = 'ingest-babyweight-estimates-{}'.format(datetime.datetime.now().strftime('%y%m%d-%H%M%S'))\n    print 'Launching Dataflow job {}'.format(job_name)\n    print 'Check the Dataflow jobs on Google Cloud Console...'\n\n    STG_DIR = 'gs://{}/babyweight'.format(STG_BUCKET)\n\n    options = {\n        'region': REGION,\n        'staging_location': os.path.join(STG_DIR, 'tmp', 'staging'),\n        'temp_location': os.path.join(STG_DIR, 'tmp'),\n        'job_name': job_name,\n        'project': PROJECT,\n        'streaming': True,\n        'teardown_policy': 'TEARDOWN_ALWAYS',\n        'no_save_main_session': True\n    }\n\n    opts = beam.pipeline.PipelineOptions(flags=[], **options)\n    \n    pipeline = beam.Pipeline(runner=\"Dataflow\", options=opts)\n    \n    (\n        pipeline | 'Read data from PubSub' 
>> beam.io.ReadStringsFromPubSub(topic=pubsub_topic) \n | 'Process message' >> beam.Map(estimate_weight)\n | 'Write to BigQuery' >> beam.io.WriteToBigQuery(project=PROJECT, dataset=DATASET, table=TABLE, \n schema=schema,\n create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED\n )\n )\n\n pipeline.run()", "Run Pipeline on Dataflow", "run_babyweight_estimates_streaming_pipeline()", "Prepare Sample Data Points", "instances = [\n {\n 'is_male': 'True',\n 'mother_age': 26.0,\n 'mother_race': 'Asian Indian',\n 'plurality': 1.0,\n 'gestation_weeks': 39,\n 'mother_married': 'True',\n 'cigarette_use': 'False',\n 'alcohol_use': 'False'\n },\n {\n 'is_male': 'False',\n 'mother_age': 29.0,\n 'mother_race': 'Asian Indian',\n 'plurality': 1.0,\n 'gestation_weeks': 38,\n 'mother_married': 'True',\n 'cigarette_use': 'False',\n 'alcohol_use': 'False'\n },\n {\n 'is_male': 'True',\n 'mother_age': 26.0,\n 'mother_race': 'White',\n 'plurality': 1.0,\n 'gestation_weeks': 39,\n 'mother_married': 'True',\n 'cigarette_use': 'False',\n 'alcohol_use': 'False'\n },\n {\n 'is_male': 'True',\n 'mother_age': 26.0,\n 'mother_race': 'White',\n 'plurality': 2.0,\n 'gestation_weeks': 37,\n 'mother_married': 'True',\n 'cigarette_use': 'False',\n 'alcohol_use': 'True'\n }\n ]", "Send Data Points to PubSub", "from random import shuffle\n\niterations = 10000\nsleep_time = 1\n\nfor i in range(iterations):\n \n shuffle(instances)\n \n for data_point in instances:\n \n source_timestamp = datetime.datetime.now().strftime(TIME_FORMAT)\n source_id = str(abs(hash(str(data_point)+str(source_timestamp))) % (10 ** 10))\n data_point['source_id'] = source_id\n data_point['source_timestamp'] = source_timestamp\n \n message = json.dumps(data_point)\n topic.publish(message=message, source_id = source_id, source_timestamp=source_timestamp)\n\n print(\"Batch {} was sent to {}. 
\\n\\r Last Message was: {}\".format(i, topic.full_name, message))\n print(\"\")\n\n time.sleep(sleep_time)\n\nprint(\"Done!\")", "Consume PubSub Topic", "client = pubsub.Client()\ntopic = client.topic(TOPIC)\n\nsubscription = topic.subscription(name=SUBSCRIPTION)\nif not subscription.exists():\n print('Creating pub/sub subscription {}...'.format(SUBSCRIPTION))\n subscription.create(client=client)\n\nprint ('Pub/sub subscription {} is up and running'.format(SUBSCRIPTION))\nprint(\"\")\n\nmessage = subscription.pull()\n\nprint(\"source_id\", message[0][1].attributes[\"source_id\"])\nprint(\"source_timestamp:\", message[0][1].attributes[\"source_timestamp\"])\nprint(\"\")\nprint(message[0][1].data)", "Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
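The simulator loop in the record above stamps each data point with a `source_id` and `source_timestamp` before serializing it to JSON and publishing. A minimal standard-library-only sketch of that message-building step — the `build_message` helper name and the sample instance are illustrative, not part of the original notebook:

```python
import datetime
import json

def build_message(data_point, time_format='%Y-%m-%d %H:%M:%S'):
    """Stamp a data point with an id and timestamp, then serialize it to JSON."""
    source_timestamp = datetime.datetime.now().strftime(time_format)
    # Id derived from the payload plus timestamp, as in the simulator loop;
    # the modulo keeps it to at most ten digits.
    source_id = str(abs(hash(str(data_point) + source_timestamp)) % (10 ** 10))
    data_point['source_id'] = source_id
    data_point['source_timestamp'] = source_timestamp
    return json.dumps(data_point)

message = build_message({'is_male': 'True', 'mother_age': 26.0})
print(json.loads(message)['source_timestamp'])
```

In the notebook itself the resulting string is handed to `topic.publish(...)` together with the same two fields as message attributes, which is what the subscription-pull cell later reads back.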
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
chunweixu/Deep-Learning
language-translation/dlnd_language_translation.ipynb
mit
[ "Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. 
This will help the neural network predict when the sentence should end.\nYou can get the &lt;EOS&gt; word id by doing:\npython\ntarget_vocab_to_int['&lt;EOS&gt;']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.", "def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n eos = target_vocab_to_int['<EOS>']\n \n source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_text.split('\\n')]\n target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] + [eos] for sentence in target_text.split('\\n')]\n \n return source_id_text, target_id_text\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. 
The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()", "Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoder_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\nTarget sequence length placeholder named \"target_sequence_length\" with rank 1\nMax target sequence length tensor named \"max_target_len\" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. 
Rank 0.\nSource sequence length placeholder named \"source_sequence_length\" with rank 1\n\nReturn the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)", "def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.\n :return: Tuple (input, targets, learning rate, keep probability, target sequence length,\n max target sequence length, source sequence length)\n \"\"\"\n inputs = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None], name='targets')\n learning_rate = tf.placeholder(tf.float32, name='learning_rate')\n probs = tf.placeholder(tf.float32, name='keep_prob')\n target_seq_len = tf.placeholder(tf.int32, [None], name='target_sequence_length')\n max_target_len = tf.reduce_max(target_seq_len, name='max_target_len')\n source_seq_len = tf.placeholder(tf.int32, [None], name='source_sequence_length')\n return inputs, targets, learning_rate, probs, target_seq_len, max_target_len, source_seq_len\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)", "Process Decoder Input\nImplement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.", "def process_decoder_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for encoding\n :param target_data: Target Placehoder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)\n return dec_input\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW 
THIS LINE\n\"\"\"\ntests.test_process_encoding_input(process_decoder_input)", "Encoding\nImplement encoding_layer() to create a Encoder RNN layer:\n * Embed the encoder input using tf.contrib.layers.embed_sequence\n * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper\n * Pass cell and embedded input to tf.nn.dynamic_rnn()", "from imp import reload\nreload(tests)\n\ndef encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, \n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :param source_sequence_length: a list of the lengths of each sequence in the batch\n :param source_vocab_size: vocabulary size of source data\n :param encoding_embedding_size: embedding size of source data\n :return: tuple (RNN output, RNN state)\n \"\"\"\n enc_embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)\n\n # RNN cell\n def make_cell(rnn_size):\n enc_cell = tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.contrib.layers.xavier_initializer(seed=1)),output_keep_prob=keep_prob)\n return enc_cell\n\n enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32)\n \n return enc_output, enc_state\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)", "Decoding - Training\nCreate a training decoding layer:\n* Create a tf.contrib.seq2seq.TrainingHelper \n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode", "\ndef decoding_layer_train(encoder_state, dec_cell, dec_embed_input, \n 
target_sequence_length, max_summary_length, \n output_layer, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_summary_length: The length of the longest sequence in the batch\n :param output_layer: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: BasicDecoderOutput containing training logits and sample_id\n \"\"\"\n # Helper for the training process. Used by BasicDecoder to read inputs.\n training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,\n sequence_length=target_sequence_length,\n time_major=False)\n \n dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)\n \n # Basic decoder\n training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n training_helper,\n encoder_state,\n output_layer) \n \n # Perform dynamic decoding using the decoder\n training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,\n impute_finished=True,\n maximum_iterations=max_summary_length) \n return training_decoder_output\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)", "Decoding - Inference\nCreate inference decoder:\n* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper\n* Create a tf.contrib.seq2seq.BasicDecoder\n* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode", "def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,\n end_of_sequence_id, max_target_sequence_length,\n vocab_size, output_layer, batch_size, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: 
GO ID\n    :param end_of_sequence_id: EOS Id\n    :param max_target_sequence_length: Maximum length of target sequences\n    :param vocab_size: Size of decoder/target vocabulary\n    :param output_layer: Function to apply the output layer\n    :param batch_size: Batch size\n    :param keep_prob: Dropout keep probability\n    :return: BasicDecoderOutput containing inference logits and sample_id\n    \"\"\"\n    \n    start_tokens = tf.tile(tf.constant([start_of_sequence_id['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')\n    # Helper for the inference process.\n    inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,\n                                                                start_tokens,\n                                                                end_of_sequence_id['<EOS>'])\n    \n    dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)\n\n    # Basic decoder\n    inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n                                                        inference_helper,\n                                                        encoder_state,\n                                                        output_layer)\n    \n    # Perform dynamic decoding using the decoder\n    inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,\n                                                                    impute_finished=True,\n                                                                    maximum_iterations=max_target_sequence_length)\n    return inference_decoder_output\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)", "Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nEmbed the target sequences\nConstruct the decoder LSTM cell (just like you constructed the encoder cell above)\nCreate an output layer to map the outputs of the decoder to the elements of our vocabulary\nUse your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, 
keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.", "def decoding_layer(dec_input, encoder_state,\n target_sequence_length, max_target_sequence_length,\n rnn_size,\n num_layers, target_vocab_to_int, target_vocab_size,\n batch_size, keep_prob, decoding_embedding_size):\n \"\"\"\n Create decoding layer\n :param dec_input: Decoder input\n :param encoder_state: Encoder state\n :param target_sequence_length: The lengths of each sequence in the target batch\n :param max_target_sequence_length: Maximum length of target sequences\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param target_vocab_size: Size of target vocabulary\n :param batch_size: The size of the batch\n :param keep_prob: Dropout keep probability\n :param decoding_embedding_size: Decoding embedding size\n :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)", "Build the Neural Network\nApply the functions you implemented above to:\n\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).\nProcess target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.\nDecode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.", "def seq2seq_model(input_data, target_data, keep_prob, batch_size,\n source_sequence_length, target_sequence_length,\n max_target_sentence_length,\n source_vocab_size, 
target_vocab_size,\n                  enc_embedding_size, dec_embedding_size,\n                  rnn_size, num_layers, target_vocab_to_int):\n    \"\"\"\n    Build the Sequence-to-Sequence part of the neural network\n    :param input_data: Input placeholder\n    :param target_data: Target placeholder\n    :param keep_prob: Dropout keep probability placeholder\n    :param batch_size: Batch Size\n    :param source_sequence_length: Sequence Lengths of source sequences in the batch\n    :param target_sequence_length: Sequence Lengths of target sequences in the batch\n    :param source_vocab_size: Source vocabulary size\n    :param target_vocab_size: Target vocabulary size\n    :param enc_embedding_size: Encoder embedding size\n    :param dec_embedding_size: Decoder embedding size\n    :param rnn_size: RNN Size\n    :param num_layers: Number of layers\n    :param target_vocab_to_int: Dictionary to go from the target words to an id\n    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)\n    \"\"\"\n    # TODO: Implement Function\n    return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability\nSet display_step to the number of steps between each debug output statement", "# Number of Epochs\nepochs = None\n# Batch Size\nbatch_size = None\n# RNN Size\nrnn_size = None\n# Number of Layers\nnum_layers = None\n# Embedding Size\nencoding_embedding_size = None\ndecoding_embedding_size = None\n# Learning Rate\nlearning_rate = None\n# Dropout Keep Probability\nkeep_probability = None\ndisplay_step 
= None", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_target_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()\n\n #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n\n train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),\n targets,\n keep_prob,\n batch_size,\n source_sequence_length,\n target_sequence_length,\n max_target_sequence_length,\n len(source_vocab_to_int),\n len(target_vocab_to_int),\n encoding_embedding_size,\n decoding_embedding_size,\n rnn_size,\n num_layers,\n target_vocab_to_int)\n\n\n training_logits = tf.identity(train_logits.rnn_output, name='logits')\n inference_logits = tf.identity(inference_logits.sample_id, name='predictions')\n\n masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')\n\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n training_logits,\n targets,\n masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)\n", "Batch and pad the source and target sequences", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef pad_sentence_batch(sentence_batch, pad_int):\n \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same 
length\"\"\"\n max_sentence = max([len(sentence) for sentence in sentence_batch])\n return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n\n\ndef get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):\n \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n for batch_i in range(0, len(sources)//batch_size):\n start_i = batch_i * batch_size\n\n # Slice the right amount for the batch\n sources_batch = sources[start_i:start_i + batch_size]\n targets_batch = targets[start_i:start_i + batch_size]\n\n # Pad\n pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))\n\n # Need the lengths for the _lengths parameters\n pad_targets_lengths = []\n for target in pad_targets_batch:\n pad_targets_lengths.append(len(target))\n\n pad_source_lengths = []\n for source in pad_sources_batch:\n pad_source_lengths.append(len(source))\n\n yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths\n", "Train\nTrain the neural network on the preprocessed data. 
If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1])],\n 'constant')\n\n return np.mean(np.equal(target, logits))\n\n# Split data to training and validation sets\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\nvalid_source = source_int_text[:batch_size]\nvalid_target = target_int_text[:batch_size]\n(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,\n valid_target,\n batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])) \nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(\n get_batches(train_source, train_target, batch_size,\n source_vocab_to_int['<PAD>'],\n target_vocab_to_int['<PAD>'])):\n\n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n target_sequence_length: targets_lengths,\n source_sequence_length: sources_lengths,\n keep_prob: keep_probability})\n\n\n if batch_i % display_step == 0 and batch_i > 0:\n\n\n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch,\n source_sequence_length: sources_lengths,\n target_sequence_length: targets_lengths,\n keep_prob: 1.0})\n\n\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_sources_batch,\n source_sequence_length: valid_sources_lengths,\n target_sequence_length: 
valid_targets_lengths,\n keep_prob: 1.0})\n\n train_acc = get_accuracy(target_batch, batch_train_logits)\n\n valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)\n\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')", "Save Parameters\nSave the batch_size and save_path parameters for inference.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()", "Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. 
Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary, to the &lt;UNK&gt; word id.", "def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)", "Translate\nThis will translate translate_sentence from English to French.", "translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('predictions:0')\n target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,\n target_sequence_length: [len(translate_sentence)*2]*batch_size,\n source_sequence_length: [len(translate_sentence)]*batch_size,\n keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in translate_logits]))\nprint(' French Words: {}'.format(\" \".join([target_int_to_vocab[i] for i in 
translate_logits])))\n", "Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has a larger vocabulary and is richer in the topics discussed. However, this will take you days to train, so make sure you have a GPU and the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as an HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
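The sentence_to_seq() preprocessing left as a TODO in the translation notebook above (lowercase the sentence, look up each word's id, fall back to the <UNK> id for unknown words) can be sketched in plain Python. The tiny vocab_to_int dictionary below is made up purely for illustration and is not the project's real vocabulary:

```python
# Hypothetical sketch of the sentence_to_seq() preprocessing described
# above: lowercase the sentence, look up each word's id, and fall back
# to the <UNK> id for out-of-vocabulary words.
def sentence_to_seq(sentence, vocab_to_int):
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split()]

# Made-up toy vocabulary, for illustration only.
vocab_to_int = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4}
print(sentence_to_seq('He saw a YELLOW truck', vocab_to_int))  # [1, 2, 3, 0, 4]
```

Note how the out-of-vocabulary word maps to the <UNK> id, which is exactly the behavior the unit test for this function checks.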
quantopian/research_public
drafts/Price Momentum Factor Notebook.ipynb
apache-2.0
[ "Price Momentum Factor Algorithm\nBy Gil Wassermann\nStrategy taken from \"130/30: The New Long-Only\" by Andrew Lo and Pankaj Patel\nPart of the Quantopian Lecture Series:\n* www.quantopian.com/lectures\n* github.com/quantopian/research_public\nNotebook released under the Creative Commons Attribution 4.0 License. Please do not remove this attribution.\nLet us imagine that we are traders at a large bank, watching our screens as stock prices fluctuate up and down. Suddenly, everyone around us is buying one particular security. Demand has increased so the stock price increases. We panic. Is there some information that we missed out on - are we out of the loop? In our panic, we blindly decide to buy some shares so we do not miss the boat on the next big thing. Demand further increases as a result of the hype surrounding the stock, driving up the price even more. \nNow let us take a step back. From the observational perspective of a quant, the price of the security is increasing because of the animal spirits of investors. In essence, the price is going up because the price is going up. 
As quants, if we can identify these irrational market forces, we can profit from them.\nIn this notebook we will go step-by-step through the construction of an algorithm to find and trade equities experiencing momentum in price.\nFirst, let us import all the necessary libraries for our algorithm.", "import numpy as np\nimport pandas as pd\nfrom scipy.signal import argrelmin, argrelmax\nimport statsmodels.api as sm\nimport talib\nimport matplotlib.pyplot as plt\nfrom quantopian.pipeline.classifiers.morningstar import Sector\nfrom quantopian.pipeline import Pipeline\nfrom quantopian.pipeline.factors import Latest\nfrom quantopian.pipeline.data.builtin import USEquityPricing\nfrom quantopian.research import run_pipeline\nfrom quantopian.pipeline.data import morningstar\nfrom quantopian.pipeline.factors import CustomFactor", "Price Momentum\nIn this notebook, we will use indicators outlined in \"130/30: The New Long Only\" by Andrew Lo and Pankaj Patel and combine them to create a single factor. It should be clarified that we are looking for long-term momentum as opposed to intra-day momentum. These indicators are:\n\nSlope of the 52-Week Trendline (20-day Lag)\nPercent Above 260-Day Low (20-day Lag)\n4/52-Week Oscillator (20-day Lag)\n39-Week Return (20-day Lag)\n\nLag\nOne thing that all of the indicators have in common is that they are calculated using a 20-day lag. This lag is a way of smoothing out the stock signal so that we can filter out noise and focus on concrete, underlying trends. To calculate lag, we will take our desired data series and calculate its 20-day simple moving average, which is the arithmetic mean of a window of the series' last 20 entries. \nLet's see an example of this for the closing price of Apple (AAPL) stock from August 2014 to August 2015. 
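The 20-day lag described above is nothing more than a trailing mean. A minimal plain-Python sketch of a simple moving average (a window of 3 keeps the example short; TA-Lib's SMA computes the same trailing means, with a window of 20 in our case):

```python
# Plain-Python sketch of a trailing simple moving average, the operation
# performed by talib.SMA in the notebook. A window of 3 keeps the
# example short; the notebook uses a window of 20.
def simple_moving_average(values, window):
    return [sum(values[i - window + 1:i + 1]) / float(window)
            for i in range(window - 1, len(values))]

print(simple_moving_average([1.0, 2.0, 3.0, 4.0, 5.0], 3))  # [2.0, 3.0, 4.0]
```

Note that the output is shorter than the input by window - 1 entries, which mirrors the notebook dropping the first 20 undefined values of the lagged series.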
We will abstract out the lag calculation into a helper function because we will be needing it so often in the algorithm.\nNB: we remove the first 20 entries of the results as these will always be undefined (here, NaN) because there is not a 20-day window with which to calculate the lag. We also have a check to determine if the entire row of data is NaN, as this can cause issues with the TA-Lib library.", "# check if entire column is NaN. If yes, return True\ndef nan_check(col):\n if np.isnan(np.sum(col)):\n return True\n else: \n return False\n\n# helper to calculate lag\ndef lag_helper(col):\n \n # TA-Lib raises an error if whole column is NaN,\n # so we check if this is true and, if so, skip\n # the lag calculation\n if nan_check(col):\n return np.nan\n # 20-day simple moving average\n else:\n return talib.SMA(col, 20)[20:]\n\nAAPL_frame = get_pricing('AAPL', start_date='2014-08-08', end_date='2015-08-08', fields='close_price')\n\n# convert to np.array for helper function and save index of timeseries\nAAPL_index = AAPL_frame.index\nAAPL_frame = AAPL_frame.as_matrix()\n\n# calculate lag\nAAPL_frame_lagged = lag_helper(AAPL_frame)\n\nplt.plot(AAPL_index, AAPL_frame, label='Close')\nplt.plot(AAPL_index[20:], AAPL_frame_lagged, label='Lagged Close')\nplt.legend(loc=2)\nplt.xlabel('Date')\nplt.title('Close Prices vs Close Prices (20-Day Lag)')\nplt.ylabel('AAPL Price');", "As you can see from the graph, the lagged closing prices generally follow the same pattern as the unlagged prices, but do not experience as extreme peaks and troughs. For the rest of the notebook we will use lagged prices as we are interested in long-term trends.\nSlope of 52-Week Trendline\nOne of the oldest indicators of price momentum is the trendline. The basic idea is to create a bounding line around stock prices that predicts when a price should pivot. A trendline that predicts a ceiling is called a resistance trendline, and one that predicts a floor is a support trendline. 
\nTo calculate a support trendline here, we take a lagged series, and find its pronounced local minima (here, a local minimum is defined as a data point lower than the five previous and five following points). We then connect the first local minimum and the last local minimum by a straight line. For a resistance trendline, the process is the same, except it uses local maxima. This is just one of many methodologies for calculating trendlines.\nLet us code up a function to return the gradient of the trendline. We will include a boolean variable support that, when set to True gives a support trendline and when set to False gives a resistance trendline. Let us have a look at the same dataset of AAPL stock and plot its trendlines. \nNB: The y-intercepts used here are purely aesthetic and have no meaning as the indicator itself is only based on the slope of the trendline", "# Custom Factor 1 : Slope of 52-Week trendline\ndef trendline_function(col, support):\n \n # NaN check for speed\n if nan_check(col):\n return np.nan \n \n # lag transformation\n col = lag_helper(col)\n \n # support trendline\n if support:\n \n # get local minima\n minima_index = argrelmin(col, order=5)[0]\n \n # make sure line can be drawn\n if len(minima_index) < 2:\n return np.nan\n else:\n # return gradient\n return (col[minima_index[-1]] - col[minima_index[0]]) / (minima_index[-1] - minima_index[0])\n \n # resistance trendline\n else:\n \n # get local maxima\n maxima_index = argrelmax(col, order=5)[0]\n if len(maxima_index) < 2:\n return np.nan\n else:\n return (col[maxima_index[-1]] - col[maxima_index[0]]) / (maxima_index[-1] - maxima_index[0])\n\n# make the lagged frame the default \nAAPL_frame = AAPL_frame_lagged\n\n# use day count rather than dates to ensure straight lines\ndays = list(range(0,len(AAPL_frame),1))\n\n# get points to plot\npoints_low = [(101.5 + (trendline_function(AAPL_frame, True)*day)) for day in days]\npoints_high = [94 + (trendline_function(AAPL_frame, False)*day) for day 
in days]\n\n# create graph\nplt.plot(days, points_low, label='Support')\nplt.plot(days, points_high, label='Resistance')\nplt.plot(days, AAPL_frame, label='Lagged Closes')\nplt.xlim([0, max(days)])\nplt.xlabel('Days Elapsed')\nplt.ylabel('AAPL Price')\nplt.legend(loc=2);", "As you can see, at the beginning of the time frame these lines seem to describe the pivot points of the curve well. Therefore it appears that betting against the stock when its price nears the resistance line and betting on the stock when its price nears the support line is a decent strategy. One issue with this is that these trendlines change over time. Even at the end of the above graph, it appears that the lines need to be redrawn in order to accommodate new prevailing price trends.\nNow let us create our factor. In order to maintain flexibility between the types of trendlines, we need a way to pass the variable support into our Pipeline calculation. To do this we create a function that returns a CustomFactor class that can take a variable that is in scope of our indicator.\nAlso, we have abstracted out the trendline calculation so that we can use the built-in NumPy function apply_along_axis instead of creating and appending the results of the trendline calculation for each column to a list, which is a slower process.", "def create_trendline_factor(support):\n \n class Trendline(CustomFactor):\n\n # 52 week + 20d lag\n window_length = 272\n inputs=[USEquityPricing.close]\n\n def compute(self, today, assets, out, close): \n out[:] = np.apply_along_axis(trendline_function, 0, close, support)\n return Trendline\n \ntemp_pipe_1 = Pipeline()\ntrendline = create_trendline_factor(support=True)\ntemp_pipe_1.add(trendline(), 'Trendline')\nresults_1 = run_pipeline(temp_pipe_1, '2015-08-08', '2015-08-08')\nresults_1.head(20)", "Percent Above 260-Day Low\nThis indicator is relatively self-explanatory. 
Whereas the trendline metric gives a more in-depth picture of price momentum (as the line itself shows how this momentum has evolved over time), this metric is fairly blunt. It is calculated as the price of a stock today less the minimum price in a retrospective 260-day window, all divided by that minimum price.\nLet us have a look at a visualization of this metric for the same window of AAPL stock.", "# Custom Factor 2 : % above 260 day low\ndef percent_helper(col):\n if nan_check(col):\n return np.nan \n else:\n col = lag_helper(col)\n return (col[-1] - min(col)) / min(col)\n\nprint 'Percent above 260-day Low: %f%%' % (percent_helper(AAPL_frame) * 100)\n\n# create the graph\nplt.plot(days, AAPL_frame)\nplt.axhline(min(AAPL_frame), color='r', label='260-Day Low')\nplt.axhline(AAPL_frame[-1], color='y', label='Latest Price')\nplt.fill_between(days, AAPL_frame)\nplt.xlabel('Days Elapsed')\nplt.ylabel('AAPL Price')\nplt.xlim([0, max(days)])\nplt.title('Percent Above 260-Day Low')\nplt.legend();", "Now we will create the CustomFactor for this metric. We will use the same abstraction process as above for run-time efficiency.", "class Percent_Above_Low(CustomFactor):\n \n # 260 days + 20 lag\n window_length = 280\n inputs=[USEquityPricing.close]\n \n def compute(self, today, assets, out, close):\n out[:] = np.apply_along_axis(percent_helper, 0, close)\n\ntemp_pipe_2 = Pipeline()\ntemp_pipe_2.add(Percent_Above_Low(), 'Percent Above Low')\nresults_2 = run_pipeline(temp_pipe_2, '2015-08-08', '2015-08-08')\nresults_2.head(20)", "NB: There are a lot of 0's here for this output. Although this might seem odd at first, it makes sense when we consider that there are many securities on a downwards trend. 
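The arithmetic behind this metric is easy to check in plain Python; the sketch below uses made-up prices and shows why a security trading at its window low scores exactly 0:

```python
# Plain-Python sketch of the percent-above-low metric: the latest price
# less the window minimum, all divided by the window minimum. Prices
# are made up.
def percent_above_low(prices):
    low = min(prices)
    return (prices[-1] - low) / low

print(percent_above_low([100.0, 80.0, 90.0, 120.0]))  # 0.5, i.e. 50% above the low
print(percent_above_low([3.0, 2.0, 1.0]))             # 0.0 for a falling series
```

A steadily falling series ends at its own minimum, so the numerator is zero and the metric is exactly 0.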
These stocks would be prime candidates to give a value of 0 as their current price is as low as it has ever been in this lookback window.\n4/52-Week Price Oscillator\nThis is calculated as the average close price over the last 4 weeks divided by the average close price over the last 52 weeks, minus 1. To understand what this value measures, let us consider what happens to the oscillator in different scenarios. This particular oscillator gives a sense of relative performance between the previous four weeks and the previous year. A value given by this oscillator could be \"0.05\", which would indicate that the stock's recent closes are outperforming its previous year's performance by 5%. A positive value is an indicator of momentum as more recent performance is stronger than normal and the larger the number, the more momentum.\nAs close prices cannot be negative, this oscillator is bounded by -1 and positive infinity. Let us create a graph to show how, given a particular 52-week average, the value of the oscillator is affected by its four-week average.", "# set 52-week average\nav_52w = 100.\n\n# create list of possible last four-week averages\nav_4w = xrange(0,200)\n\n# create list of oscillator values\nosc = [(x / av_52w) - 1 for x in av_4w]\n\n# draw graph\nplt.plot(av_4w, osc)\nplt.axvline(100, color='r', label='52-Week Average')\nplt.xlabel('Four-Week Average')\nplt.ylabel('4/52 Oscillator')\nplt.legend();", "Now let us create a Pipeline factor and observe some values.", "# Custom Factor 3: 4/52 Price Oscillator\ndef oscillator_helper(col):\n if nan_check(col):\n return np.nan \n else:\n col = lag_helper(col)\n return np.nanmean(col[-20:]) / np.nanmean(col) - 1\n\nclass Price_Oscillator(CustomFactor):\n inputs = [USEquityPricing.close]\n window_length = 272\n \n def compute(self, today, assets, out, close):\n out[:] = np.apply_along_axis(oscillator_helper, 0, close)\n \ntemp_pipe_3 = Pipeline()\ntemp_pipe_3.add(Price_Oscillator(), 'Price Oscillator')\nresults_3 = 
run_pipeline(temp_pipe_3, '2015-08-08', '2015-08-08')\nresults_3.head(20)", "Once again, let us use AAPL stock as an example.", "# get two averages\nav_4w = np.nanmean(AAPL_frame[-20:])\nav_52w = np.nanmean(AAPL_frame)\n\n# create the graph\nplt.plot(days, AAPL_frame)\nplt.fill_between(days[-20:], AAPL_frame[-20:])\nplt.axhline(av_4w, color='y', label='Four-week Average' )\nplt.axhline(av_52w, color='r', label='Year-long Average')\nplt.ylim([80,140])\nplt.xlabel('Days Elapsed')\nplt.ylabel('AAPL Price')\nplt.title('4/52 Week Oscillator')\nplt.legend();\n", "The section shaded blue under the graph represents the last four weeks of close prices. The fact that this average (shown by the yellow line) is greater than the year-long average (shown by the red line), means that the 4/52 week oscillator for this date will be positive. This fact is backed by our pipeline output, which gives the value of the metric to be 9.4%. \n39-Week Return\nThis is calculated as the difference between the price today and the price 39 weeks prior, all over the price 39 weeks prior.\nAlthough returns as a metric might seem too ubiquitous to be useful or special, the important thing to highlight here is the window length chosen. By choosing a larger window length (here, 39 weeks) as opposed to daily returns, we see larger fluctuations in value. This is because a larger time window exposes the metric to larger trends and higher volatility. \nIn the graph below, we illustrate this point by plotting returns calculated over different time windows. To do this we will look at AAPL close prices between 2002 and 2016. We will also mark important dates in the history of Apple in order to highlight this metric's descriptive power for larger trends.\nNB: 39-week return is not a metric that is event driven. 
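The n-period return arithmetic underlying this comparison can be sketched in plain Python; the 2-period window and prices below are made up and simply stand in for the longer shifts used in the plotting code:

```python
# Plain-Python sketch of an n-period return: (p[t] - p[t-n]) / p[t-n].
# The 2-period window and prices below are made up; the notebook's
# 39-week return uses the same arithmetic with a 215-day shift.
def n_period_returns(prices, n):
    return [(prices[i] - prices[i - n]) / prices[i - n]
            for i in range(n, len(prices))]

print(n_period_returns([100.0, 110.0, 121.0, 133.1], 2))
```

Each entry here is roughly 0.21: a 10% rise compounded over two periods. The longer the window, the more the metric reflects the compounded trend rather than day-to-day noise.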
The inclusion of these dates is illustrative as opposed to predictive.", "# create a new longer frame of AAPL close prices\nAAPL_frame = get_pricing('AAPL', start_date='2002-08-08', end_date='2016-01-01', fields='close_price')\n\n# use dates as index\nAAPL_index = AAPL_frame.index[20:]\nAAPL_frame = lag_helper(AAPL_frame.as_matrix())\n\n# 1d returns\nAAPL_1d_returns = ((AAPL_frame - np.roll(AAPL_frame, 1))/ np.roll(AAPL_frame,1))[1:]\n\n# 1w returns\nAAPL_1w_returns = ((AAPL_frame - np.roll(AAPL_frame, 5))/ np.roll(AAPL_frame, 5))[5:]\n\n# 1m returns\nAAPL_1m_returns = ((AAPL_frame - np.roll(AAPL_frame, 30))/ np.roll(AAPL_frame, 30))[30:]\n\n# 39w returns\nAAPL_39w_returns = ((AAPL_frame - np.roll(AAPL_frame, 215))/ np.roll(AAPL_frame, 215))[215:]\n\n# plot close prices\nplt.plot(AAPL_index[1:], AAPL_1d_returns, label='1-day Returns')\nplt.plot(AAPL_index[5:], AAPL_1w_returns, label='1-week Returns')\nplt.plot(AAPL_index[30:], AAPL_1m_returns, label='1-month Returns')\nplt.plot(AAPL_index[215:], AAPL_39w_returns, label='39-week Returns')\n\n# show events\n# iPhone release\nplt.axvline('2007-07-29')\n# iPod mini 2nd gen. release\nplt.axvline('2005-02-23')\n# iPad release\nplt.axvline('2010-04-03')\n# iPhone 5 release\nplt.axvline('2012-09-21')\n# Apple Watch\nplt.axvline('2015-04-24')\n\n# labels\nplt.xlabel('Days')\nplt.ylabel('Returns')\nplt.title('Returns')\nplt.legend();", "There are a few important characteristics to note on the graph above.\nFirstly, as we expected, the amplitude of the signal of returns with a longer window length is larger.\nSecondly, these new releases, many of which were announced several months before, all lie in or adjacent to a peak in the 39-week return price. 
Therefore, it would seem that this window length is a useful tool for capturing information on larger trends.\nNow let us create the custom factor and run the Pipeline.", "# Custom Factor 4: 39-week Returns\ndef return_helper(col):\n if nan_check(col):\n return np.nan \n else:\n col = lag_helper(col)\n return (col[-1] - col[-215]) / col[-215]\n\nclass Return_39_Week(CustomFactor):\n inputs = [USEquityPricing.close]\n window_length = 235\n \n def compute(self, today, assets, out, close):\n out[:] = np.apply_along_axis(return_helper, 0, close)\n \ntemp_pipe_4 = Pipeline()\ntemp_pipe_4.add(Return_39_Week(), '39 Week Return')\nresults_4 = run_pipeline(temp_pipe_4, '2015-08-08','2015-08-08')\nresults_4.head(20)", "Aggregation\nLet us create the full Pipeline. Once again we will need a proxy for the S&P500 for the ordering logic. Also, given the large window lengths needed for the algorithm, we will employ the trick of multiple outputs per factor. This is explained in detail here (https://www.quantopian.com/posts/new-feature-multiple-output-pipeline-custom-factors). Instead of having to process several data frames, we only need to deal with one large one and then apply our helper functions. 
This will speed up our computation considerably in the backtester.", "# This factor creates the synthetic S&P500\nclass SPY_proxy(CustomFactor):\n inputs = [morningstar.valuation.market_cap]\n window_length = 1\n \n def compute(self, today, assets, out, mc):\n out[:] = mc[-1]\n\n# using helpers to boost speed\nclass Pricing_Pipe(CustomFactor):\n \n inputs = [USEquityPricing.close]\n outputs = ['trendline', 'percent', 'oscillator', 'returns']\n window_length=280\n \n def compute(self, today, assets, out, close):\n out.trendline[:] = np.apply_along_axis(trendline_function, 0, close[-272:], True)\n out.percent[:] = np.apply_along_axis(percent_helper, 0, close)\n out.oscillator[:] = np.apply_along_axis(oscillator_helper, 0, close[-272:])\n out.returns[:] = np.apply_along_axis(return_helper, 0, close[-235:])\n \ndef Data_Pull():\n \n # create the pipeline for the data pull\n Data_Pipe = Pipeline()\n \n # create SPY proxy\n Data_Pipe.add(SPY_proxy(), 'SPY Proxy')\n\n # run all on same dataset for speed\n trendline, percent, oscillator, returns = Pricing_Pipe()\n \n # add the calculated values\n Data_Pipe.add(trendline, 'Trendline')\n Data_Pipe.add(percent, 'Percent')\n Data_Pipe.add(oscillator, 'Oscillator')\n Data_Pipe.add(returns, 'Returns')\n \n return Data_Pipe\n\nresults = run_pipeline(Data_Pull(), '2015-08-08', '2015-08-08')\nresults.head(20)", "We will now use the Lo/Patel ranking logic described in the Traditional Value notebook (https://www.quantopian.com/posts/quantopian-lecture-series-long-slash-short-traditional-value-case-study) in order to combine these descriptive metrics into a single factor.\nNB: standard_frame_compute and composite_score have been combined into a single function called aggregate_data.", "# limit effect of outliers\ndef filter_fn(x):\n if x <= -10:\n x = -10.0\n elif x >= 10:\n x = 10.0\n return x \n\n# combine data\ndef aggregate_data(df):\n\n # basic clean of dataset to remove infinite values\n df = df.replace([np.inf, -np.inf], 
np.nan)\n df = df.dropna()\n\n # need standardization params from synthetic S&P500\n df_SPY = df.sort(columns='SPY Proxy', ascending=False)\n\n # create separate dataframe for SPY\n # to store standardization values\n df_SPY = df_SPY.head(500)\n\n # get dataframes into numpy array\n df_SPY = df_SPY.as_matrix()\n\n # store index values\n index = df.index.values\n\n # get data intp a numpy array for speed\n df = df.as_matrix()\n\n # get one empty row on which to build standardized array\n df_standard = np.empty(df.shape[0])\n\n for col_SPY, col_full in zip(df_SPY.T, df.T):\n\n # summary stats for S&P500\n mu = np.mean(col_SPY)\n sigma = np.std(col_SPY)\n col_standard = np.array(((col_full - mu) / sigma))\n\n # create vectorized function (lambda equivalent)\n fltr = np.vectorize(filter_fn)\n col_standard = (fltr(col_standard))\n\n # make range between -10 and 10\n col_standard = (col_standard / df.shape[1])\n\n # attach calculated values as new row in df_standard\n df_standard = np.vstack((df_standard, col_standard))\n\n # get rid of first entry (empty scores)\n df_standard = np.delete(df_standard, 0, 0)\n\n # sum up transformed data\n df_composite = df_standard.sum(axis=0)\n\n # put into a pandas dataframe and connect numbers\n # to equities via reindexing\n df_composite = pd.Series(data=df_composite, index=index)\n\n # sort descending\n df_composite.sort(ascending=False)\n\n return df_composite\n\nranked_scores = aggregate_data(results)\nranked_scores", "Stock Choice\nNow that we have our ranking system, let us have a look at the histogram of the ranked scores. This will allow us to see general trends in the metric and diagnose any issues with our ranking system as a factor. 
The red lines give our cut-off points for our trading baskets", "# histogram\nranked_scores.hist()\n\n# baskets\nplt.axvline(ranked_scores[26], color='r')\nplt.axvline(ranked_scores[-6], color='r')\nplt.xlabel('Ranked Scores')\nplt.ylabel('Frequency')\nplt.title('Histogram of Ranked Scores of Stock Universe');", "Although there does appear to be some positive skew, this looks to be a robust metric as the tails of this distribution are very thin. A thinner tail means that our ranking system has identified special characteristics about our stock universe possessed by only a few equities. More thorough statistical analysis would have to be conducted in order to see if this strategy could generate good alpha returns. This robust factor analysis will be covered in a later notebook.\nPlease see the attached algorithm for a full implementation!\nThe material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory or other services by Quantopian.\nIn addition, the content of the website neither constitutes investment advice nor offers any opinion with respect to the suitability of any security or any specific investment. Quantopian makes no guarantees as to accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances." ]
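The per-column standardization inside aggregate_data (z-score each metric against the synthetic S&P500's mean and standard deviation, then clip to [-10, 10] via filter_fn) can be sketched in plain Python; this simplified version omits the later rescaling by the number of columns, and the values are made up:

```python
# Plain-Python sketch of the per-column standardization used in
# aggregate_data: z-score each value against reference (S&P500-proxy)
# summary statistics, then clip to [-10, 10] to limit outliers.
# This simplified version omits the later division by the column count.
def standardize(values, mu, sigma):
    out = []
    for v in values:
        z = (v - mu) / sigma
        out.append(max(-10.0, min(10.0, z)))
    return out

print(standardize([1.0, 2.0, 100.0], mu=2.0, sigma=0.5))  # [-2.0, 0.0, 10.0]
```

Clipping keeps a single extreme metric from dominating the composite score when the standardized columns are summed.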
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
maxis42/ML-DA-Coursera-Yandex-MIPT
5 Data analysis applications/Lectures notebooks/1 wine sales time series/wine.ipynb
mit
[ "Australian Wine Sales\nMonthly sales of Australian wine in thousands of litres are known from January 1980 to July 1995; we need to build a forecast for the next three years.", "%pylab inline\nimport pandas as pd\nfrom scipy import stats\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nimport warnings\nfrom itertools import product\n\ndef invboxcox(y,lmbda):\n if lmbda == 0:\n return(np.exp(y))\n else:\n return(np.exp(np.log(lmbda*y+1)/lmbda))\n\nwine = pd.read_csv('monthly-australian-wine-sales.csv',',', index_col=['month'], parse_dates=['month'], dayfirst=True)\nwine.sales = wine.sales * 1000\nplt.figure(figsize(15,7))\nwine.sales.plot()\nplt.ylabel('Wine sales')\npylab.show()", "Checking stationarity and STL decomposition of the series:", "plt.figure(figsize(15,10))\nsm.tsa.seasonal_decompose(wine.sales).plot()\nprint(\"Dickey-Fuller test: p=%f\" % sm.tsa.stattools.adfuller(wine.sales)[1])", "Variance stabilization\nLet us apply the Box-Cox transformation to stabilize the variance:", "wine['sales_box'], lmbda = stats.boxcox(wine.sales)\nplt.figure(figsize(15,7))\nwine.sales_box.plot()\nplt.ylabel(u'Transformed wine sales')\nprint(\"Optimal Box-Cox transformation parameter: %f\" % lmbda)\nprint(\"Dickey-Fuller test: p=%f\" % sm.tsa.stattools.adfuller(wine.sales_box)[1])", "Stationarity\nThe Dickey-Fuller test rejects the hypothesis of non-stationarity, but a trend is visible in the data. Let us try seasonal differencing; we will run an STL decomposition on the differenced series and check stationarity again:", "wine['sales_box_diff'] = wine.sales_box - wine.sales_box.shift(12)\nplt.figure(figsize(15,10))\nsm.tsa.seasonal_decompose(wine.sales_box_diff[12:]).plot()\nprint(\"Dickey-Fuller test: p=%f\" % sm.tsa.stattools.adfuller(wine.sales_box_diff[12:])[1])", "The Dickey-Fuller test does not reject the hypothesis of non-stationarity, and we could not completely remove the trend. 
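The invboxcox helper defined at the top of this notebook inverts scipy's Box-Cox transform. A quick plain-math round-trip check (the forward transform below re-states the Box-Cox formula; the values of x and lmbda are made up):

```python
import math

# Round-trip check of the Box-Cox transform and its inverse, re-stating
# the invboxcox helper defined at the top of the notebook.
def boxcox(x, lmbda):
    if lmbda == 0:
        return math.log(x)
    return (x ** lmbda - 1.0) / lmbda

def invboxcox(y, lmbda):
    if lmbda == 0:
        return math.exp(y)
    return math.exp(math.log(lmbda * y + 1.0) / lmbda)

x, lmbda = 5.0, 0.25
print(invboxcox(boxcox(x, lmbda), lmbda))  # recovers 5.0 (up to float error)
```

Applying invboxcox to fitted values and forecasts, as done later in the notebook, maps the model's output back to the original sales scale.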
Let us try adding ordinary differencing as well:", "wine['sales_box_diff2'] = wine.sales_box_diff - wine.sales_box_diff.shift(1)\nplt.figure(figsize(15,10))\nsm.tsa.seasonal_decompose(wine.sales_box_diff2[13:]).plot() \nprint(\"Dickey-Fuller test: p=%f\" % sm.tsa.stattools.adfuller(wine.sales_box_diff2[13:])[1])", "The hypothesis of non-stationarity is rejected, and the series looks better visually: the trend is gone. \nModel Selection\nLet us look at the ACF and PACF of the resulting series:", "plt.figure(figsize(15,8))\nax = plt.subplot(211)\nsm.graphics.tsa.plot_acf(wine.sales_box_diff2[13:].values.squeeze(), lags=48, ax=ax)\npylab.show()\nax = plt.subplot(212)\nsm.graphics.tsa.plot_pacf(wine.sales_box_diff2[13:].values.squeeze(), lags=48, ax=ax)\npylab.show()", "Initial approximations: Q=1, q=2, P=1, p=4", "ps = range(0, 5)\nd=1\nqs = range(0, 3)\nPs = range(0, 2)\nD=1\nQs = range(0, 2)\n\nparameters = product(ps, qs, Ps, Qs)\nparameters_list = list(parameters)\nlen(parameters_list)\n\n%%time\nresults = []\nbest_aic = float(\"inf\")\nwarnings.filterwarnings('ignore')\n\nfor param in parameters_list:\n #try/except is needed because the model fails to fit on some parameter sets\n try:\n model=sm.tsa.statespace.SARIMAX(wine.sales_box, order=(param[0], d, param[1]), \n seasonal_order=(param[2], D, param[3], 12)).fit(disp=-1)\n #print the parameters on which the model fails to fit and move on to the next set\n except ValueError:\n print('wrong parameters:', param)\n continue\n aic = model.aic\n #save the best model, its aic and parameters\n if aic < best_aic:\n best_model = model\n best_aic = aic\n best_param = param\n results.append([param, model.aic])\n \nwarnings.filterwarnings('default')", "If an error occurs in the previous cell, make sure you have updated statsmodels to at least version 0.8.0rc1.", "result_table = pd.DataFrame(results)\nresult_table.columns = ['parameters', 'aic']\nprint(result_table.sort_values(by = 'aic', ascending=True).head())", "The best model:", 
"print(best_model.summary())", "Its residuals:", "plt.figure(figsize(15,8))\nplt.subplot(211)\nbest_model.resid[13:].plot()\nplt.ylabel(u'Residuals')\n\nax = plt.subplot(212)\nsm.graphics.tsa.plot_acf(best_model.resid[13:].values.squeeze(), lags=48, ax=ax)\n\nprint(\"Student's t-test: p=%f\" % stats.ttest_1samp(best_model.resid[13:], 0)[1])\nprint(\"Dickey-Fuller test: p=%f\" % sm.tsa.stattools.adfuller(best_model.resid[13:])[1])", "The residuals are unbiased (confirmed by Student's t-test), stationary (confirmed by the Dickey-Fuller test and visually), and non-autocorrelated (confirmed by the Ljung-Box test and the correlogram).\nLet us see how well the model describes the data:", "wine['model'] = invboxcox(best_model.fittedvalues, lmbda)\nplt.figure(figsize(15,7))\nwine.sales.plot()\nwine.model[13:].plot(color='r')\nplt.ylabel('Wine sales')\npylab.show()", "Forecast", "wine2 = wine[['sales']]\ndate_list = [datetime.datetime.strptime(\"1994-09-01\", \"%Y-%m-%d\") + relativedelta(months=x) for x in range(0,36)]\nfuture = pd.DataFrame(index=date_list, columns= wine2.columns)\nwine2 = pd.concat([wine2, future])\nwine2['forecast'] = invboxcox(best_model.predict(start=176, end=211), lmbda)\n\nplt.figure(figsize(15,7))\nwine2.sales.plot()\nwine2.forecast.plot(color='r')\nplt.ylabel('Wine sales')\npylab.show()" ]
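The differencing applied earlier in this notebook (shift(12) for the seasonal component, then shift(1) for the remaining trend) amounts to simple subtraction of lagged values. A plain-Python sketch on a made-up series with period-4 seasonality and a linear trend:

```python
# Plain-Python sketch of the differencing used above: a seasonal
# difference with lag s (shift(12) in the notebook), then a first
# difference (shift(1)). The toy series below has period-4 seasonality
# plus a linear trend, both of which the two differences remove.
def diff(series, lag):
    return [series[i] - series[i - lag] for i in range(lag, len(series))]

series = [float(i % 4) + 0.1 * i for i in range(12)]
seasonal = diff(series, 4)      # removes the period-4 seasonality
stationary = diff(seasonal, 1)  # removes the remaining linear trend
print(stationary)               # values are all (numerically) zero
```

Each difference also shortens the series by its lag, which is why the notebook drops the first 12 and then 13 observations before decomposing and testing.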
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ioos/notebooks_demos
notebooks/2016-11-15-glider_data_example.ipynb
mit
[ "Plotting Glider data with Python tools\nIn this notebook we demonstrate how to obtain and plot glider data using iris and cartopy. We will explore data from the Rutgers University RU29 Challenger glider that was launched from Ubatuba, Brazil on June 23, 2015 to travel across the Atlantic Ocean. After 282 days at sea, the Challenger was picked up off the coast of South Africa on March 31, 2016. For more information on this groundbreaking excursion see: https://marine.rutgers.edu/main/announcements/the-challenger-glider-mission-south-atlantic-mission-complete\nData collected from this glider mission are available on the IOOS Glider DAC THREDDS via OPeNDAP.", "# See https://github.com/Unidata/netcdf-c/issues/1299 for the explanation of `#fillmismatch`.\n\nurl = (\n    \"https://data.ioos.us/thredds/dodsC/deployments/rutgers/\"\n    \"ru29-20150623T1046/ru29-20150623T1046.nc3.nc#fillmismatch\"\n)\n\nimport iris\n\niris.FUTURE.netcdf_promote = True\n\nglider = iris.load(url)\n\nprint(glider)", "Iris requires the data to adhere strictly to the CF-1.6 data model.\nThat is why we see all those warnings about Missing CF-netCDF ancillary data variable.\nNote that if the data is not CF at all, iris will refuse to load it!\nOn the other hand, the advantage of following the CF-1.6 conventions\nis that the iris cube has the proper metadata attached to it.\nWe do not need to extract the coordinates or any other information separately.\nAll we need to do is to request the phenomena we want, in this case sea_water_density, sea_water_temperature and sea_water_salinity.", "temp = glider.extract_strict(\"sea_water_temperature\")\nsalt = glider.extract_strict(\"sea_water_salinity\")\ndens = glider.extract_strict(\"sea_water_density\")\n\nprint(temp)", "Glider data is not something trivial to visualize. 
The very first thing to do is to plot the glider track to check its path.", "import numpy.ma as ma\n\nT = temp.data.squeeze()\nS = salt.data.squeeze()\nD = dens.data.squeeze()\n\nx = temp.coord(axis=\"X\").points.squeeze()\ny = temp.coord(axis=\"Y\").points.squeeze()\nz = temp.coord(axis=\"Z\")\nt = temp.coord(axis=\"T\")\n\nvmin, vmax = z.attributes[\"actual_range\"]\n\nz = ma.masked_outside(z.points.squeeze(), vmin, vmax)\nt = t.units.num2date(t.points.squeeze())\n\nlocation = y.mean(), x.mean() # Track center.\nlocations = list(zip(y, x)) # Track points.\n\nimport folium\n\ntiles = (\n    \"http://services.arcgisonline.com/arcgis/rest/services/\"\n    \"World_Topo_Map/MapServer/MapServer/tile/{z}/{y}/{x}\"\n)\n\nm = folium.Map(location, tiles=tiles, attr=\"ESRI\", zoom_start=4)\n\nfolium.CircleMarker(locations[0], fill_color=\"green\", radius=10).add_to(m)\nfolium.CircleMarker(locations[-1], fill_color=\"red\", radius=10).add_to(m)\n\nline = folium.PolyLine(\n    locations=locations,\n    color=\"orange\",\n    weight=8,\n    opacity=0.6,\n    popup=\"Slocum Glider ru29 Deployed on 2015-06-23\",\n).add_to(m)\n\nm", "One might be interested in the individual profiles of each dive. 
Let's extract the deepest dive and plot it.", "import numpy as np\n\n# Find the deepest profile.\nidx = np.nonzero(~T[:, -1].mask)[0][0]\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nncols = 3\nfig, (ax0, ax1, ax2) = plt.subplots(\n    sharey=True, sharex=False, ncols=ncols, figsize=(3.25 * ncols, 5)\n)\n\nkw = dict(linewidth=2, color=\"cornflowerblue\", marker=\".\")\nax0.plot(T[idx], z[idx], **kw)\nax1.plot(S[idx], z[idx], **kw)\nax2.plot(D[idx] - 1000, z[idx], **kw)\n\n\ndef spines(ax):\n    ax.spines[\"right\"].set_color(\"none\")\n    ax.spines[\"bottom\"].set_color(\"none\")\n    ax.xaxis.set_ticks_position(\"top\")\n    ax.yaxis.set_ticks_position(\"left\")\n\n\n[spines(ax) for ax in (ax0, ax1, ax2)]\n\nax0.set_ylabel(\"Depth (m)\")\nax0.set_xlabel(\"Temperature ({})\".format(temp.units))\nax0.xaxis.set_label_position(\"top\")\n\nax1.set_xlabel(\"Salinity ({})\".format(salt.units))\nax1.xaxis.set_label_position(\"top\")\n\nax2.set_xlabel(\"Density ({})\".format(dens.units))\nax2.xaxis.set_label_position(\"top\")\n\nax0.invert_yaxis()", "We can also visualize the whole track as a cross-section.", "import numpy as np\nimport seawater as sw\nfrom mpl_toolkits.axes_grid1.inset_locator import inset_axes\n\n\ndef distance(x, y, units=\"km\"):\n    dist, pha = sw.dist(x, y, units=units)\n    return np.r_[0, np.cumsum(dist)]\n\n\ndef plot_glider(\n    x, y, z, t, data, cmap=plt.cm.viridis, figsize=(9, 3.75), track_inset=False\n):\n\n    fig, ax = plt.subplots(figsize=figsize)\n    dist = distance(x, y, units=\"km\")\n    z = np.abs(z)\n    dist, z = np.broadcast_arrays(dist[..., np.newaxis], z)\n    cs = ax.pcolor(dist, z, data, cmap=cmap, snap=True)\n    kw = dict(orientation=\"vertical\", extend=\"both\", shrink=0.65)\n    cbar = fig.colorbar(cs, **kw)\n\n    if track_inset:\n        axin = inset_axes(ax, width=\"25%\", height=\"30%\", loc=4)\n        axin.plot(x, y, \"k.\")\n        start, end = (x[0], y[0]), (x[-1], y[-1])\n        kw = dict(marker=\"o\", linestyle=\"none\")\n        axin.plot(*start, color=\"g\", **kw)\n        
axin.plot(*end, color=\"r\", **kw)\n        axin.axis(\"off\")\n\n    ax.invert_yaxis()\n    ax.set_xlabel(\"Distance (km)\")\n    ax.set_ylabel(\"Depth (m)\")\n    return fig, ax, cbar\n\nfrom palettable import cmocean\n\nhaline = cmocean.sequential.Haline_20.mpl_colormap\nthermal = cmocean.sequential.Thermal_20.mpl_colormap\ndense = cmocean.sequential.Dense_20.mpl_colormap\n\n\nfig, ax, cbar = plot_glider(x, y, z, t, S, cmap=haline, track_inset=False)\ncbar.ax.set_xlabel(\"(g kg$^{-1}$)\")\ncbar.ax.xaxis.set_label_position(\"top\")\nax.set_title(\"Salinity\")\n\nfig, ax, cbar = plot_glider(x, y, z, t, T, cmap=thermal, track_inset=False)\ncbar.ax.set_xlabel(r\"($^\\circ$C)\")\ncbar.ax.xaxis.set_label_position(\"top\")\nax.set_title(\"Temperature\")\n\nfig, ax, cbar = plot_glider(x, y, z, t, D - 1000, cmap=dense, track_inset=False)\ncbar.ax.set_xlabel(r\"(kg m$^{-3}$)\")\ncbar.ax.xaxis.set_label_position(\"top\")\nax.set_title(\"Density\")\n\nprint(\"Data collected from {} to {}\".format(t[0], t[-1]))", "Glider cross-sections can also be very useful, but we need to be careful when interpreting them due to the many turns the glider took\nand the time it took to complete the track.\nNote that the x-axis can be either time or distance, and that this particular track took ~281 days to complete!\nFor those interested in fancier ways to plot glider data, check @lukecampbell's profile_plots.py script." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
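The glider notebook's `distance` helper above delegates to `seawater.dist` to turn longitude/latitude pairs into a cumulative along-track distance. As a rough numpy-only stand-in (the haversine formula and the 6371 km mean Earth radius are assumptions here, not part of the notebook), the same quantity can be sketched as:

```python
import numpy as np

def track_distance_km(lon, lat):
    # Cumulative great-circle distance along a track, in km,
    # via the haversine formula on a sphere of radius 6371 km.
    lon, lat = np.radians(lon), np.radians(lat)
    dlon, dlat = np.diff(lon), np.diff(lat)
    a = (np.sin(dlat / 2) ** 2
         + np.cos(lat[:-1]) * np.cos(lat[1:]) * np.sin(dlon / 2) ** 2)
    seg = 2 * 6371.0 * np.arcsin(np.sqrt(a))
    # Prepend 0 so the result aligns with the track points,
    # matching the np.r_[0, np.cumsum(dist)] pattern in the notebook.
    return np.r_[0, np.cumsum(seg)]
```

One degree of latitude should come out near 111.2 km, which makes for an easy sanity check.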
gwu-libraries/notebooks
20161110-twitter-interaction/interactions.ipynb
mit
[ "On retweets, replies, quotes & favorites: A guide for researchers\nThis notebook explores the affordances of the Twitter API for retweets, replies, quotes, and favorites. It is motivated by questions from several George Washington University researchers who are interested in using Social Feed Manager to collect datasets for studying dialogues and interaction on Twitter.\nWe will not discuss affordances of the Twitter API that are perspectival, that is, depend on the Twitter account that is used to access the API. So, for example, we will not consider GET statuses/retweets_of_me.\nSetup\nBefore proceeding, we will install Twarc. Twarc is a Twitter client. It is generally used from the commandline,\nbut we will use it as a library.\nThis assumes that you have run Twarc locally and already have credentials stored in ~/.twarc.\nAs you are reading this, feel free to skip any of the sections of code.", "# This installs Twarc\n# !pip install twarc\n# This is temporary until https://github.com/DocNow/twarc/pull/118 is merged.\n!pip install git+https://github.com/justinlittman/twarc.git@retweets#egg=twarc\n# This imports some classes and functions that will be used later in this notebook.\nfrom twarc import Twarc, load_config, default_config_filename\nimport json\nimport codecs\n\n# This creates an instance of Twarc.\ncredentials = load_config(default_config_filename(), 'main')\nt = Twarc(consumer_key=credentials['consumer_key'],\n consumer_secret=credentials['consumer_secret'],\n access_token=credentials['access_token'],\n access_token_secret=credentials['access_token_secret'])\n\n# Create a summary of a tweet, only showing relevant fields.\ndef summarize(tweet, extra_fields = None):\n new_tweet = {}\n for field, value in tweet.items():\n if field in [\"text\", \"id_str\", \"screen_name\", \"retweet_count\", \"favorite_count\", \"in_reply_to_status_id_str\", \"in_reply_to_screen_name\", \"in_reply_to_user_id_str\"] and value is not None:\n new_tweet[field] = value\n 
elif extra_fields and field in extra_fields:\n            new_tweet[field] = value\n        elif field in [\"retweeted_status\", \"quoted_status\", \"user\"]:\n            new_tweet[field] = summarize(value)\n    return new_tweet\n\n# Print out a tweet, with optional colorizing of selected fields.\ndef dump(tweet, colorize_fields=None, summarize_tweet=True):\n    for line in json.dumps(summarize(tweet) if summarize_tweet else tweet, indent=4, sort_keys=True).splitlines():\n        for colorize_field in colorize_fields or []:\n            if \"\\\"{}\\\":\".format(colorize_field) in line:\n                print \"\\x1b[31m\" + line + \"\\x1b[0m\"\n                break\n        else:\n            print line", "Tweet types\nA tweet\nBefore examining the specific types of tweets that we're interested in, we're going to look at a plain-old tweet.\nHere's my first tweet.", "%%html\n<!-- This embeds a tweet in the notebook. -->\n<blockquote class=\"twitter-tweet\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">First day at Gelman Library. First tweet. <a href=\"http://t.co/Gz5ybAD6os\">pic.twitter.com/Gz5ybAD6os</a></p>&mdash; Justin Littman (@justin_littman) <a href=\"https://twitter.com/justin_littman/status/503873833213104128\">August 25, 2014</a></blockquote>\n<script async src=\"//platform.twitter.com/widgets.js\" charset=\"utf-8\"></script>", "Tweets retrieved from the Twitter API are in JSON, a simple structured text format. Below I will provide the entire tweet; in the rest of this notebook I will only provide a subset of the tweet containing the relevant fields. Twitter provides documentation on the complete set of fields in a tweet.", "# Retrieve a single tweet from the Twitter API\ntweet = list(t.hydrate(['503873833213104128']))[0]\n# Pretty-print the tweet\ndump(tweet, summarize_tweet=False)", "Here's the summary of that same tweet:", "dump(tweet)", "A tweet that has been retweeted\nFirst, we want to look at a tweet that has been retweeted. 
I'll choose this one from my user timeline:", "%%html\n<blockquote class=\"twitter-tweet\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">.<a href=\"https://twitter.com/DameWendyDBE\">@DameWendyDBE</a>: Invest in data science training for librarians. In future, libraries will be data warehouses. <a href=\"https://twitter.com/hashtag/SaveTheWeb?src=hash\">#SaveTheWeb</a></p>&mdash; Justin Littman (@justin_littman) <a href=\"https://twitter.com/justin_littman/status/743520583518920704\">June 16, 2016</a></blockquote>\n<script async src=\"//platform.twitter.com/widgets.js\" charset=\"utf-8\"></script>", "Let's retrieve the JSON for this tweet from the Twitter API.", "tweet = list(t.hydrate(['743520583518920704']))[0]\ndump(tweet, colorize_fields=['retweet_count'])", "The relevant field is retweet_count. This field provides the number of times this tweet was retweeted. Note that this number may vary over time, as additional people retweet the tweet.\nA tweet that is a retweet\nSecond, we want to look at a tweet that is a retweet. I'll also choose this one from my user timeline:", "%%html\n<blockquote class=\"twitter-tweet\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">Reproducible Research: Citing your execution env using <a href=\"https://twitter.com/docker\">@Docker</a> and a DOI: <a href=\"https://t.co/S4DChzE9Au\">https://t.co/S4DChzE9Au</a> via <a href=\"https://twitter.com/SoftwareSaved\">@SoftwareSaved</a> <a href=\"https://t.co/SPMcKa35J4\">pic.twitter.com/SPMcKa35J4</a></p>&mdash; Docker (@docker) <a href=\"https://twitter.com/docker/status/720856949407940608\">April 15, 2016</a></blockquote>\n<script async src=\"//platform.twitter.com/widgets.js\" charset=\"utf-8\"></script>", "Here is the JSON from the Twitter API.", "tweet = list(t.hydrate(['724575206937899008']))[0]\ndump(tweet, colorize_fields=['retweeted_status', 'retweet_count'])", "Two fields are significant. First, the retweeted_status contains the source tweet (i.e., the tweet that was retweeted). 
The presence or absence of this field can be used to identify tweets that are retweets. Second, the retweet_count is the count of the retweets of the source tweet, not this tweet.\nA tweet that is a retweet of a retweet\nAs a corollary to our look at a retweet, let's look at a tweet that is a retweet of a retweet. (I'll refer to this as a second-order retweet.) Here's a tweet that I retweeted from my @jlittman_dev account that was a retweet from my @justin_littman account of a source tweet from @SocialFeedMgr.", "tweet = list(t.hydrate(['794490469627686913']))[0]\ndump(tweet, colorize_fields=['retweet_count', 'retweeted_status'])", "The second-order tweet is treated as if it is a retweet of the source tweet. The retweet_count of the source tweet is incremented and the retweeted_status that appears in the second-order tweet is the source tweet. There is no indication that this is a retweet of a retweet. Thus, in reconstructing interaction, you can't determine from whom a user discovered a tweet that she later retweeted.\nA tweet that has been quoted\nThird, we want to consider a tweet that has been quoted. A quote tweet is a retweet that contains some additional text.\nTo test this, I quoted my first tweet from a different Twitter account (@jlittman_dev).", "tweet = list(t.hydrate(['503873833213104128']))[0]\ndump(summarize(tweet))", "There is nothing in the tweet to indicate that it has been quoted. 
This is similar to what you find on the Twitter website: if you look at the full rendering of this tweet, there is no indication that it was quoted.\nA quote doesn't count as a retweet, as the retweet_count on the source tweet is 0.\nA tweet that is a quote\nFourth, we want to look at a tweet that is a quote.", "%%html\n<blockquote class=\"twitter-tweet\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">Let us know what can add to <a href=\"https://twitter.com/SocialFeedMgr\">@SocialFeedMgr</a> docs to take the &quot;crash&quot; out of &quot;crash course&quot; <a href=\"https://twitter.com/ianmilligan1\">@ianmilligan1</a>. And all other feedback welcome. <a href=\"https://t.co/BbjOLSvdCm\">https://t.co/BbjOLSvdCm</a></p>&mdash; Justin Littman (@justin_littman) <a href=\"https://twitter.com/justin_littman/status/794162076717613056\">November 3, 2016</a></blockquote>\n<script async src=\"//platform.twitter.com/widgets.js\" charset=\"utf-8\"></script>\n\ntweet = list(t.hydrate(['794162076717613056']))[0]\ndump(summarize(tweet, extra_fields=['quoted_status_id', 'quoted_status_id_str']), colorize_fields=['quoted_status', 'quoted_status_id', 'quoted_status_id_str'], summarize_tweet=False)", "The relevant field in this quote tweet is quoted_status, which contains the source tweet. quoted_status_id and quoted_status_id_str are the tweet id of the source tweet, which is redundant with the tweet id contained in quoted_status.\nA tweet that has been replied to\nFifth, we want to look at a tweet to which another user has replied. 
Here's a tweet that I posted, to which @jefferson_bail replied:", "%%html\n<blockquote class=\"twitter-tweet\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">Yesterday I learned about the <a href=\"https://twitter.com/jefferson_bail\">@jefferson_bail</a> test for projects: Is it sufficiently &quot;do-goody and feel-goody&quot;?</p>&mdash; Justin Littman (@justin_littman) <a href=\"https://twitter.com/justin_littman/status/789411809807572992\">October 21, 2016</a></blockquote>\n<script async src=\"//platform.twitter.com/widgets.js\" charset=\"utf-8\"></script>", "There is nothing to indicate that this tweet has a reply.\nA tweet that is a reply\nSixth, we want to look at a tweet that is a reply to another tweet. Here's @jefferson_bail's response to my tweet:", "%%html\n<blockquote class=\"twitter-tweet\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\"><a href=\"https://twitter.com/justin_littman\">@justin_littman</a> Ha! I don&#39;t even remember what I was talking about. I believe that was my fifth meeting in a row starting at 7am, so...</p>&mdash; Jefferson Bailey (@jefferson_bail) <a href=\"https://twitter.com/jefferson_bail/status/789486128189444096\">October 21, 2016</a></blockquote>\n<script async src=\"//platform.twitter.com/widgets.js\" charset=\"utf-8\"></script>\n\ntweet = list(t.hydrate(['789486128189444096']))[0]\ndump(summarize(tweet, extra_fields=['in_reply_to_status_id_str', 'in_reply_to_user_id']), colorize_fields=['in_reply_to_status_id', 'in_reply_to_status_id_str', 'in_reply_to_screen_name', 'in_reply_to_user_id', 'in_reply_to_user_id_str'], summarize_tweet=False)", "The relevant fields in a reply tweet are in_reply_to_status_id, in_reply_to_status_id_str, in_reply_to_screen_name, in_reply_to_user_id, in_reply_to_user_id_str. The names of each of these fields reasonably describe their contents. 
The most significant of these is in_reply_to_status_id, which supports finding the tweet to which the reply tweet is a reply.\nThus, based on the metadata that is provided for a tweet, a chain of replies can be followed backwards from the reply tweet to the replied to tweet, but not vice versa, i.e., from the replied to tweet to the reply tweet.\nA tweet that is favorited\nHere is the most favorited tweet from my user timeline:", "%%html\n<blockquote class=\"twitter-tweet\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">Slides from my <a href=\"https://twitter.com/hashtag/iipcWAC16?src=hash\">#iipcWAC16</a> presentation on aligning social media archiving and web archiving: <a href=\"https://t.co/Rj8LEbBOp8\">https://t.co/Rj8LEbBOp8</a></p>&mdash; Justin Littman (@justin_littman) <a href=\"https://twitter.com/justin_littman/status/720621197550071808\">April 14, 2016</a></blockquote>\n<script async src=\"//platform.twitter.com/widgets.js\" charset=\"utf-8\"></script>\n\ntweet = list(t.hydrate(['720621197550071808']))[0]\ndump(tweet, colorize_fields=['favorite_count'])", "The favorite_count provides the number of times the tweet has been favorited.\nIn the case of a retweet, favorite_count is the favorite count of the source tweet. (This is similar to retweet_count.)\nThe Twitter API\nIn this section, we look at the various methods in Twitter's APIs that are relevant to retweets, replies, quotes, and favorites.\nGET statuses/show/:id and GET statuses/lookup\nGET statuses/show/:id is used to retrieve a single tweet by tweet id. GET statuses/lookup is used to retrieve multiple tweets by tweet ids.\nIn the above examples, GET statuses/lookup using only a single tweet id was used to retrieve the tweets.\nGET statuses/user_timeline\nGET statuses/user_timeline retrieves a user timeline given a screen name or user id. 
This is one of the primary methods for collecting social media data.\nWhile GET statuses/user_timeline supports getting tweets from the past, it is limited to the last 3,200 tweets.\nTo test this, we will retrieve the user timeline of @jlittman_dev and look for retweets, quotes, replies, favorited tweets, and retweeted tweets.", "found_retweet = False\nfound_quote = False\nfound_reply = False\nfound_favorited = False\nfound_retweeted = False\nfor tweet in t.timeline(screen_name='jlittman_dev'):\n    if 'retweeted_status' in tweet:\n        print \"{} is a retweet.\".format(tweet['id_str'])\n        found_retweet = True\n    if 'quoted_status' in tweet:\n        print \"{} is a quote.\".format(tweet['id_str'])\n        found_quote = True\n    if tweet['in_reply_to_status_id']:\n        print \"{} is a reply to {} by {}\".format(tweet['id_str'], tweet['in_reply_to_status_id_str'], tweet['in_reply_to_screen_name'])\n        found_reply = True\n    if tweet['retweet_count'] > 0:\n        print \"{} has been retweeted {} times.\".format(tweet['id_str'], tweet['retweet_count'])\n        found_retweeted = True\n    if tweet['favorite_count'] > 0:\n        print \"{} has been favorited {} times.\".format(tweet['id_str'], tweet['favorite_count'])\n        found_favorited = True\nprint \"Found retweet: {}\".format(found_retweet)\nprint \"Found quote: {}\".format(found_quote)\nprint \"Found reply: {}\".format(found_reply)\nprint \"Found favorited: {}\".format(found_favorited)\nprint \"Found retweeted: {}\".format(found_retweeted)", "This demonstrates that the following are available from the user timeline:\n* retweets by the user\n* quotes by the user\n* replies by the user\n* favorited tweets\n* retweeted tweets\nOther than the counts for favorited tweets and retweeted tweets, it does not include the tweets of other users such as quotes of this user or replies to tweets of this user.\nGET statuses/retweets/:id\nGET statuses/retweets/:id returns the most recent retweets for a tweet. 
Only the most recent 100 retweets are available.\nTo test this, let's compare the retweet_count against the number of tweets returned by GET statuses/retweets/:id for that tweet.", "tweet = list(t.hydrate(['743520583518920704']))[0]\nprint \"The retweet count is {}\".format(tweet['retweet_count'])\nretweets = t.retweets('743520583518920704')\nprint \"Retrieved {} retweets\".format(len(list(retweets)))", "GET statuses/retweeters/ids\nGET statuses/retweeters/ids retrieves the user ids that retweeted a tweet.\nGET search/tweets\nGET search/tweets (also known as the Twitter Search API) allows searching \"against a sampling of recent Tweets published in the past 7 days.\"\nSome of the query parameters that are relevant to retweets, quotes, and replies are:\n\nfrom for tweets posted by a user, e.g., from:justin_littman\nto for tweets that are a reply to the user, e.g., to:justin_littman\n@ for tweets mentioning that screen name, e.g., @justin_littman\n\nBecause the Search API is time-limited and returns a sample of unknown size, it will not be further explored in this notebook.\nPOST statuses/filter\nPOST statuses/filter allows filtering of the stream of tweets on the Twitter platform by keywords (track), users (follow), and geolocation (location).\nPOST statuses/filter only allows collecting tweets moving forward; it cannot be used to retrieve past tweets.\nFollow parameter\nFor this test, I will use the follow parameter to determine what is captured when following a user. Note that the follow parameter takes a list of user ids. User ids do not change (unlike screen names).\nBecause this test requires creating tweets from multiple accounts and recording the filter stream, it will not be performed live in this notebook. 
Rather, I used Twarc to record the filter stream of @jlittman_dev (user id 2875189485):\ntwarc.py --follow 2875189485 > follow.json\n\nI then performed the following actions on the Twitter website:\n\n@jlittman_dev: Posted a tweet.\n@jlittman_dev2: Retweeted @jlittman_dev's tweet from step 1.\n@jlittman_dev2: Posted a tweet.\n@jlittman_dev: Quoted @jlittman_dev2's tweet from step 3.\n@jlittman_dev2: Quoted a tweet by @jlittman_dev.\n@jlittman_dev2: Replied to a tweet by @jlittman_dev.\n@jlittman_dev: Replied to the reply of @jlittman_dev2 from step 6.\n@jlittman_dev: Replied to a tweet by @jlittman_dev2.\n\nWe will now look at the tweets that were captured by the filter stream.\nThe first tweet is the tweet posted by @jlittman_dev in step 1. Thus, tweets by the followed user are captured.", "# Load the tweets\nwith codecs.open('./follow.json', 'r') as f:\n    lines = f.readlines()\n# Print the number of tweets\nprint len(lines)\n# Print the first tweet\ntweet1 = json.loads(lines[0])\ndump(tweet1)", "The second tweet is @jlittman_dev2's retweet of @jlittman_dev's tweet. This is step 2, showing that retweets by other users of tweets by the followed user are captured.", "# Print the second tweet\ntweet2 = json.loads(lines[1])\ndump(tweet2)", "The third tweet is the quote by @jlittman_dev of @jlittman_dev2's tweet. This is step 4, showing that quote tweets posted by the followed user are captured.\nNote that the quoted tweet (step 3) is not captured because @jlittman_dev2 isn't being followed; however, it is available as the quoted_status of the quote tweet.", "# Print the third tweet\ntweet3 = json.loads(lines[2])\ndump(tweet3)", "The fourth tweet is a reply by @jlittman_dev2 to a tweet by @jlittman_dev. This is step 6. Thus, replies to the followed user are captured.\nNote that the tweet from step 5 (@jlittman_dev2's quote tweet of @jlittman_dev's tweet) was not captured. 
Thus, quote tweets in which the followed user is quoted are not captured.", "# Print the fourth tweet\ntweet4 = json.loads(lines[3])\ndump(tweet4)", "The fifth tweet is from step 7, @jlittman_dev's reply to @jlittman_dev2's reply. Thus, replies by the followed user to replies are captured.", "# Print the fifth tweet\ntweet5 = json.loads(lines[4])\ndump(tweet5)", "The final tweet is a reply by @jlittman_dev to a tweet by @jlittman_dev2. Thus, replies by the followed user are captured.", "# Print the sixth tweet\ntweet6 = json.loads(lines[5])\ndump(tweet6)", "The only tweet in our test of the follow parameter of the Twitter filter stream that wasn't captured was the quote of a followed user's tweet by another user.\nTrack parameter\nLet's see if we can capture that with the track parameter, by using the user's screen name as the keyword.\nNote that a user can change her screen name, so that will need to be monitored if using this approach.\nAgain, I used Twarc to record the filter stream, this time tracking @jlittman_dev (as a keyword):\ntwarc.py --track @jlittman_dev > track.json\n\nI then performed the following actions on the Twitter website:\n\n@jlittman_dev2: Posted a tweet mentioning @jlittman_dev.\n@jlittman_dev2: Quoted a tweet by @jlittman_dev.\n\nOnly a single tweet is captured.", "# Load the tweets\nwith codecs.open('./track.json', 'r') as f:\n    lines = f.readlines()\n# Print the number of tweets\nprint len(lines)\n# Print the first tweet\ntweet1 = json.loads(lines[0])\ndump(tweet1)", "This is the tweet that resulted from the mention of @jlittman_dev (step 1). Again, the tweet quoting the followed user wasn't captured.\nPOST statuses/filter summary\nThus, to summarize for a given user, the following can be captured using the filter stream and the follow parameter:\n\nTweets, quotes, and replies by that user.\nRetweets of tweets by that user.\nReplies to that user by another user.\n\nbut not quotes of that user's tweets by another user. 
The track parameter does not help with catching quotes of that user's tweets.\nSummary\nThe Twitter API provides extensive support for retrieving data for studying dialogues and interaction on Twitter.\nThe following table summarizes what is available in a tweet for retweets, replies, quotes, and favorites.\nFor a tweet that is ... | Available\n------------ | -------------\nRetweeted | Count of retweets\nA retweet | Source tweet\nQuoted | No\nA quote | Quoted tweet\nFavorited | Count of favorites\nReplied to | No\nA reply | Replied to tweet\nThe two most helpful API methods for retweets, replies, quotes, and favorites are GET statuses/user_timeline and POST statuses/filter. The following table summarizes the affordances of these methods:\nTweet type | GET statuses/user_timeline | POST statuses/filter\n------------ | ------------- | -------------\nTweets by the user | Yes | Yes\nRetweets by the user | Yes | Yes\nRetweets by other users of tweet by the user | No | Yes\nQuotes by the user | Yes | Yes\nQuotes by other users of tweet by the user | No | No\nReplies by user | Yes | Yes\nReplies by other users to tweet by the user | No | Yes\nNote that Social Feed Manager supports collecting using both of these methods." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
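The field checks spread across the notebook above (retweeted_status, quoted_status, in_reply_to_status_id) can be folded into one small classifier. This is an illustrative sketch, not code from the notebook; the function name `tweet_type` is invented here, while the field names are the ones the notebook documents.

```python
def tweet_type(tweet):
    # Classify a tweet dict as returned by the Twitter REST API.
    # Order matters: a retweet of a quote carries both markers,
    # and the API treats it as a retweet of the source tweet.
    if "retweeted_status" in tweet:
        return "retweet"
    if "quoted_status" in tweet or tweet.get("quoted_status_id_str"):
        return "quote"
    if tweet.get("in_reply_to_status_id_str"):
        return "reply"
    return "original"
```

This mirrors the checks used in the filter-stream analysis cells, where `'retweeted_status' in tweet` and friends are tested one by one.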
tsob/cnn-music-structure
src/simple_cnn.ipynb
mit
[ "Simple CNN\nSimple implementation of a CNN on our data.", "# A bit of setup\n\n# Usual imports\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Notebook plotting magic\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# For auto-reloading external modules\n%load_ext autoreload\n%autoreload 2\n\n# Deep learning related\nimport theano\nimport theano.tensor as T\nimport lasagne\n\n# My modules\nimport generate_data as d\n\ndef rel_error(x, y):\n    \"\"\" Returns relative error \"\"\"\n    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))", "Function to load data", "def load_dataset(num=5):\n    \"\"\"\n    Load a bit of data from SALAMI.\n    Argument: num (number of songs to load. Default=5)\n    Returns: X_train, y_train, X_val, y_val, X_test, y_test\n    \"\"\"\n    X, y = d.get_data(num)\n\n    # Keep last 6000 data points for test\n    X_test, y_test = X[-6000:], y[-6000:]\n    X_train, y_train = X[:-6000], y[:-6000]\n\n    # We reserve the last 10000 training examples for validation.\n    X_train, X_val = X_train[:-10000], X_train[-10000:]\n    y_train, y_val = y_train[:-10000], y_train[-10000:]\n\n    # Make column vectors\n    y_train = y_train[:,np.newaxis]\n    y_val = y_val[:,np.newaxis]\n    y_test = y_test[:,np.newaxis]\n\n    return X_train, y_train, X_val, y_val, X_test, y_test", "Function to build network", "def build_cnn(input_var=None):\n    \"\"\"\n    Build the CNN architecture.\n    \"\"\"\n\n    # Make an input layer\n    network = lasagne.layers.InputLayer(\n        shape=(\n            None,\n            1,\n            20,\n            515\n        ),\n        input_var=input_var\n    )\n\n    # Add a conv layer\n    network = lasagne.layers.Conv2DLayer(\n        network, # Incoming\n        num_filters=32, # Number of convolution filters to use\n        filter_size=(5, 5),\n        stride=(1, 1), # Stride of (1,1)\n        pad='same', # Keep output size same as input\n        nonlinearity=lasagne.nonlinearities.rectify, # ReLU\n        
W=lasagne.init.GlorotUniform() # W initialization\n    )\n\n    # Apply max-pooling of factor 2 in second dimension\n    network = lasagne.layers.MaxPool2DLayer(\n        network, pool_size=(1, 2)\n    )\n    # Then a fully-connected layer of 256 units with 50% dropout on its inputs\n    network = lasagne.layers.DenseLayer(\n        lasagne.layers.dropout(network, p=.5),\n        num_units=256,\n        nonlinearity=lasagne.nonlinearities.rectify\n    )\n    # Finally add a 1-unit sigmoid output layer\n    # (softmax over a single unit is constant, so sigmoid is used instead)\n    network = lasagne.layers.DenseLayer(\n        network,\n        num_units=1,\n        nonlinearity=lasagne.nonlinearities.sigmoid\n    )\n\n    return network", "Dataset iteration\nThis is just a simple helper function iterating over training data in\nmini-batches of a particular size, optionally in random order. It assumes\ndata is available as numpy arrays. For big datasets, you could load numpy\narrays as memory-mapped files (np.load(..., mmap_mode='r')), or write your\nown custom data iteration function. For small datasets, you can also copy\nthem to GPU at once for slightly improved performance. 
This would involve\nseveral changes in the main program, though, and is not demonstrated here.", "def iterate_minibatches(inputs, targets, batchsize, shuffle=False):\n \"\"\"\n Generate a minibatch.\n Arguments: inputs (numpy array)\n targets (numpy array)\n batchsize (int)\n shuffle (bool, default=False) \n Returns: inputs[excerpt], targets[excerpt]\n \"\"\"\n assert len(inputs) == len(targets)\n \n if shuffle:\n indices = np.arange(len(inputs))\n np.random.shuffle(indices)\n \n for start_idx in range(0, len(inputs) - batchsize + 1, batchsize):\n if shuffle:\n excerpt = indices[start_idx:start_idx + batchsize]\n else:\n excerpt = slice(start_idx, start_idx + batchsize)\n yield inputs[excerpt], targets[excerpt]", "Main function", "# Theano config\ntheano.config.floatX = 'float32'\n\n# Load the dataset\nprint(\"Loading data...\")\nX_train, y_train, X_val, y_val, X_test, y_test = load_dataset(3)\n\n# Print the dimensions\nfor datapt in [X_train, y_train, X_val, y_val, X_test, y_test]:\n print datapt.shape\n\n# Parse dimensions\nn_train = y_train.shape[0]\nn_val = y_val.shape[0]\nn_test = y_test.shape[0]\nn_chan = X_train.shape[1]\nn_feats = X_train.shape[2]\nn_frames = X_train.shape[3]\n\nprint \"n_train = {0}\".format(n_train)\nprint \"n_val = {0}\".format(n_val)\nprint \"n_test = {0}\".format(n_test)\nprint \"n_chan = {0}\".format(n_chan)\nprint \"n_feats = {0}\".format(n_feats)\nprint \"n_frames = {0}\".format(n_frames)\n\n# Prepare Theano variables for inputs and targets\ninput_var = T.tensor4( name='inputs' )\ntarget_var = T.fcol( name='targets' )\n\n# Create neural network model (depending on first command line parameter)\nprint(\"Building model and compiling functions...\"),\nnetwork = build_cnn(input_var)\nprint(\"Done.\")\n\n# Create a loss expression for training, i.e., a scalar objective we want to minimize\nprediction = lasagne.layers.get_output(network)\nloss = lasagne.objectives.squared_error(prediction, target_var)\nloss = loss.mean()\n\n# Create 
update expressions for training\n# Here, we'll use adam\nparams = lasagne.layers.get_all_params(\n network,\n trainable=True\n)\nupdates = lasagne.updates.adam(\n loss,\n params\n)\n\n# Create a loss expression for validation/testing.\n# The crucial difference here is that we do a deterministic forward pass\n# through the network, disabling dropout layers.\ntest_prediction = lasagne.layers.get_output(network, deterministic=True)\n\ntest_loss = lasagne.objectives.squared_error(test_prediction,\n target_var)\ntest_loss = test_loss.mean()\n\n# As a bonus, also create an expression for the classification accuracy:\ntest_acc = T.mean(T.eq(T.argmax(test_prediction, axis=1), target_var),\n dtype=theano.config.floatX)\n\n# Compile a function performing a training step on a mini-batch (by giving\n# the updates dictionary) and returning the corresponding training loss:\ntrain_fn = theano.function(\n [input_var, target_var],\n loss,\n updates=updates,\n allow_input_downcast=True\n)\n\n# Compile a second function computing the validation loss and accuracy:\nval_fn = theano.function(\n [input_var, target_var],\n [test_loss, test_acc],\n allow_input_downcast=True\n)\n\nnum_epochs = 1\n\n# Finally, launch the training loop.\nprint(\"Starting training...\")\n\n# We iterate over epochs:\nfor epoch in range(num_epochs):\n\n # In each epoch, we do a full pass over the training data:\n train_err = 0\n train_batches = 0\n start_time = time.time()\n \n for batch in iterate_minibatches(X_train, y_train, 500, shuffle=True):\n inputs, targets = batch\n train_err += train_fn(inputs, targets)\n train_batches += 1\n\n # And a full pass over the validation data:\n val_err = 0\n val_acc = 0\n val_batches = 0\n for batch in iterate_minibatches(X_val, y_val, 500, shuffle=False):\n inputs, targets = batch\n err, acc = val_fn(inputs, targets)\n val_err += err\n val_acc += acc\n val_batches += 1\n\n # Then we print the results for this epoch:\n print(\"Epoch {} of {} took {:.3f}s\".format(\n epoch + 
1, num_epochs, time.time() - start_time))\n print(\" training loss:\\t\\t{:.6f}\".format(train_err / train_batches))\n print(\" validation loss:\\t\\t{:.6f}\".format(val_err / val_batches))\n print(\" validation accuracy:\\t\\t{:.2f} %\".format(\n val_acc / val_batches * 100))\nprint(\"Done training.\") \n\n# After training, we compute and print the test error:\ntest_err = 0\ntest_acc = 0\ntest_batches = 0\nfor batch in iterate_minibatches(X_test, y_test, 500, shuffle=False):\n inputs, targets = batch\n err, acc = val_fn(inputs, targets)\n test_err += err\n test_acc += acc\n test_batches += 1\nprint(\"Final results:\")\nprint(\" test loss:\\t\\t\\t{:.6f}\".format(test_err / test_batches))\nprint(\" test accuracy:\\t\\t{:.2f} %\".format(\n test_acc / test_batches * 100))\n\n# Optionally, you could now dump the network weights to a file like this:\n# np.savez('model.npz', *lasagne.layers.get_all_param_values(network))\n#\n# And load them again later on like this:\n# with np.load('model.npz') as f:\n# param_values = [f['arr_%d' % i] for i in range(len(f.files))]\n# lasagne.layers.set_all_param_values(network, param_values)\n\ntrained_params = lasagne.layers.get_all_param_values(network)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
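The `iterate_minibatches` helper above suggests memory-mapping large datasets with `np.load(..., mmap_mode='r')` so that only the rows touched by each minibatch are read from disk. A minimal self-contained sketch of that idea (the array shapes and file names here are made up for illustration):

```python
import os
import tempfile
import numpy as np

def iterate_minibatches(inputs, targets, batchsize, shuffle=False):
    # Same contract as the helper above: yields (inputs, targets) batches.
    assert len(inputs) == len(targets)
    indices = np.arange(len(inputs))
    if shuffle:
        np.random.shuffle(indices)
    for start in range(0, len(inputs) - batchsize + 1, batchsize):
        excerpt = indices[start:start + batchsize]
        yield inputs[excerpt], targets[excerpt]

# Write a toy dataset to disk, then reopen it memory-mapped so that each
# minibatch only pulls the slices it needs into RAM.
tmpdir = tempfile.mkdtemp()
x_path = os.path.join(tmpdir, "X_train.npy")
y_path = os.path.join(tmpdir, "y_train.npy")
np.save(x_path, np.random.rand(1000, 1, 8, 16).astype("float32"))
np.save(y_path, np.random.randint(0, 2, size=(1000, 1)).astype("float32"))

X_train = np.load(x_path, mmap_mode="r")   # memory-mapped, read-only
y_train = np.load(y_path, mmap_mode="r")

n_batches = 0
for xb, yb in iterate_minibatches(X_train, y_train, 100, shuffle=True):
    n_batches += 1
print(n_batches)  # -> 10
```

Fancy indexing on a memmap returns an ordinary in-memory `ndarray`, so the batches handed to `train_fn` behave exactly like the all-in-RAM version.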
UoS-SNe/LSST_tools
opsimout/notebooks/OpSim_basics_notebook.ipynb
gpl-3.0
[ "OpSim Basics Notebook\n\nopsim : https://www.lsst.org/scientists/simulations/opsim\n\nThe Operations Simulator (OpSim) is an application that simulates the field selection and image acquisition process of the LSST over the 10-year life of the planned survey. Each visit or image of a field in a particular filter is selected by combining science program requirements, the mechanics of the telescope design, and the modelled environmental conditions. The output of the simulator is a detailed record of the telescope movements and a complete description of the observing conditions as well as the characteristics of each image. OpSim is capable of balancing cadence goals from multiple science programs, and attempts to minimize time spent slewing as it carries out these goals. LSST operations can be simulated using realistic seeing distributions, historical weather data, scheduled engineering downtime and current telescope and camera parameters. \nThe Simulator has a sophisticated model of the telescope and dome to properly constrain potential observing cadences. This model has also proven useful for investigating various engineering issues ranging from sizing of slew motors, to design of cryogen lines to the camera. The LSST Project developed the Operations Simulator to verify that the LSST Science Requirements could be met with the telescope design. It was used to demonstrate the capability of the LSST to deliver a 26,000 square degree survey probing the time domain and with 18,000 square degrees for the Wide-Fast-Deep survey to the design specifications of the Science Requirements Document, while effectively surveying for NEOs over the same area. Currently, the Operations Simulation Team is investigating how to optimally observe the sky to obtain a single 10-year dataset that can be used to accomplish multiple science goals. 
\n\n\nOutputs\n|Column Name | Type | Units | Description |\n|------------|-------|------- |------------------------------------------------------------|\n|obsHistID |integer|- |Unique visit identifier (same as ObsHistory.obsHistID). |\n|sessionID |integer|- |Session identifier which is unique for simulated surveys created on a particular machine or hostname. Simulated surveys are uniquely named using the form hostname_sessionID.|\n|propID |integer|- |Unique (on each machine) identifier for every proposal (observing mode) specified in a simulated survey. Note that a single visit can satisfy multiple proposals, and so duplicate rows (except for the propID) can exist in the Summary table (same as Proposal.propID).|\n|fieldID |integer|- |Unique field (or target on the sky) identifier (same as Field.fieldID). OpSim uses a set of 5292 fields (targets) obtained from a fixed tessellation of the sky.|\n|fieldRA |real |radians |Right Ascension (J2000) of the field center for this visit (same as Field.fieldRA).|\n|fieldDec |real |radians |Declination (J2000) of the field center for this visit (same as Field.fieldDec).|\n|filter |text |- |Filter used during the visit; one of u, g, r, i, z, or y.|\n|expDate |integer|seconds |Time of the visit relative to 0 sec at the start of a simulated survey.\n|expMJD |real |days |Modified Julian Date at the start of a visit.|\n|night |integer|none |The integer number of nights since the start (expDate = 0 sec) of the survey. The first night is night = 0.\n|visitTime |real |seconds |Currently, a visit comprises two 15-second exposures and each exposure needs 1 sec for the shutter action and 2 sec for the CCD readout. 
The second readout is assumed to occur while moving to the next field (see slewTime), so the length of each visit for the WFD observing mode is 34 sec.\n|visitExpTime|real |seconds |Total integration time on the sky during a visit, which for current observing modes is 30 sec (see visitTime).|\n|finRank |real |- |Target rank among all proposals including all priorities and penalties (generally used for diagnostic purposes).|\n|FWHMgeom |real |arcseconds |\"Geometrical\" full-width at half maximum. The actual width at half the maximum brightness. Use FWHMgeom to represent the FWHM of a double-gaussian representing the physical width of a PSF.|\n|FWHMeff |real |arcseconds |\"Effective\" full-width at half maximum, typically ~15% larger than FWHMgeom. Use FWHMeff to calculate SNR for point sources, using FWHMeff as the FWHM of a single gaussian describing the PSF.|\n|transparency|real |- |The value (in 8ths) from the Cloud table closest in time to this visit.|\n|airmass |real |- |Airmass at the field center of the visit.\n|vSkyBright |real |mag/arcsec2|The sky brightness in the Johnson V band calculated from a Krisciunas and Schaefer model with a few modifications. This model uses the Moon phase, angular distance between the field and the Moon and the field’s airmass to calculate added brightness to the zero-Moon, zenith sky brightness (e.g. Krisciunas 1997, PASP, 109, 1181; Krisciunas and Schaefer 1991, PASP, 103, 1033; Benn and Ellison 1998, La Palma Technical Note 115).|\n|filtSkyBrightness|real|mag/arcsec2|Measurements of the color of the sky as a function of lunar phase are used to correct vSkyBright to the sky brightness in the filter used during this visit.|\n|rotSkyPos|real|radians|The orientation of the sky in the focal plane measured as the angle between North on the sky and the \"up\" direction in the focal plane.\n|rotTelPos|real|radians|The physical angle of the rotator with respect to the mount. 
rotSkyPos = rotTelPos - ParallacticAngle|\n|lst|real|radians|Local Sidereal Time at the start of the visit.|\n|altitude|real|radians|Altitude of the field center at the start of the visit.|\n|azimuth|real|radians|Azimuth of the field center at the start of the visit.|\n|dist2Moon|real|radians|Distance from the field center to the moon's center on the sky.|\n|solarElong|real|degrees|Solar elongation or the angular distance between the field center and the sun (0 - 180 deg).|\n|moonRA|real|radians|Right Ascension of the Moon.|\n|moonDec|real|radians|Declination of the Moon.|\n|moonAlt|real|radians|Altitude of the Moon taking into account the elevation of the site.|\n|moonAZ|real|radians|Azimuth of the Moon|\n|moonPhase|real|%|Percent illumination of the Moon (0=new, 100=full)|\n|sunAlt|real|radians|Altitude of the Sun taking into account the elevation of the site, but with no correction for atmospheric refraction.|\n|sunAz|real|radians|Azimuth of the Sun with no correction for atmospheric refraction.|\n|phaseAngle|real|-|Intermediate values in the calculation of vSkyBright using the Krisciunas and Schaefer models.|\n|rScatter|real|-|\" \"|\n|mieScatter|real|-|\" \"|\n|moonBright|real|-|\" \"|\n|darkBright|real|-|\" \"|\n|rawSeeing|real|arcseconds|The seeing as taken from the Seeing table which is an ideal seeing at zenith and at 500 nm.|\n|wind|real|-|A placeholder for real telemetry.|\n|humidity|real|-|A placeholder for real telemetry.|\n|slewDist|real|radians|Distance on the sky between the target field center and the field center of the previous visit.|\n|slewTime|real|seconds|The time between the end of the second exposure in the previous visit and the beginning of the first exposure in the current visit.|\n|fiveSigmaDepth|real|magnitudes|The magnitude of a point source that would be a 5-sigma detection (see Z. 
Ivezic et al, http://arxiv.org/pdf/0805.2366.pdf).|\n|ditheredRA|real|radians|The offset from the Right Ascension of the field center representing a \"hex-dithered\" pattern.|\n|ditheredDec|real|radians|The offset from the Declination of the field center representing a \"hex-dithered\" pattern.|", "from __future__ import print_function ## Force python3-like printing\n%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport os\nimport time\n\nimport sqlite3\nfrom sqlalchemy import create_engine\n\nopsimdbpath = os.environ.get('OPSIMDBPATH')\n\nprint(opsimdbpath)\n\nengine = create_engine('sqlite:///' + opsimdbpath)\n\nconn = sqlite3.connect(opsimdbpath)\n\ncursor = conn.cursor()\n\nquery = 'SELECT COUNT(*) FROM Summary'\ncursor.execute(query)\n\ncursor.fetchall()\n\n# opsimdf = pd.read_sql_query('SELECT * FROM Summary WHERE night < 1000', engine)\n## Look at the first year - to get a feel\n# opsimdf = pd.read_sql_query('SELECT * FROM Summary WHERE night < 366', engine)\n## Just one night\nthen = time.time()\nopsimdf = pd.read_sql_query('SELECT * FROM Summary WHERE night = 1000', engine)\nnow = time.time()\nprint(now - then)\n\nopsimdf.head()\n\n# Definitions of these columns are given in the table above\nopsimdf[['obsHistID', 'filter', 'night', 'expMJD',\n 'fieldID', 'fieldRA', 'fieldDec', 'ditheredRA', 'ditheredDec',\n 'propID', 'fiveSigmaDepth']].head()\n\nopsimdf.propID.unique()\n\nddf = opsimdf.query('propID == 56') ## 56 is DDF\nprint(len(ddf))\n\nfilters = np.unique(ddf[\"filter\"])\nprint(filters)", "Looking at outputs\n\nThe Raw Seeing is the ideal seeing at zenith", "fig = plt.figure(figsize=[8, 4])\nfig.subplots_adjust(left = 0.09, bottom = 0.13, top = 0.99,\n right = 0.99, hspace=0, wspace = 0)\n\nax1 = fig.add_subplot(111)\n\nhiststruct = {}\n\nbins = np.arange(0, 2.0, 0.1)\nfor i, f in enumerate(filters):\n seeing_dist = ddf.query(\"filter == u'\" + f + \"'\")[\"rawSeeing\"]\n histstruct[f] = ax1.hist(seeing_dist, color = 
rfc.hex[f], histtype = \"step\",\n lw = 2, bins = bins)\n", "A better value to use is FWHMeff", "fig = plt.figure(figsize=[8, 4])\nfig.subplots_adjust(left = 0.09, bottom = 0.13, top = 0.99,\n right = 0.99, hspace=0, wspace = 0)\n\nax1 = fig.add_subplot(111)\n\nhiststruct = {}\n\nbins = np.arange(0, 2.0, 0.1)\nfor i, f in enumerate(filters):\n seeing_dist = ddf.query(\"filter == u'\" + f + \"'\")[\"FWHMeff\"]\n histstruct[f] = ax1.hist(seeing_dist, color = rfc.hex[f], histtype = \"step\",\n lw = 2, bins = bins)\n\nfig = plt.figure(figsize=[8, 4])\nfig.subplots_adjust(left = 0.09, bottom = 0.13, top = 0.99,\n right = 0.99, hspace=0, wspace = 0)\n\nax1 = fig.add_subplot(111)\n\nhiststruct = {}\nbins = np.arange(20, 26, 0.1)\n\nfor i, f in enumerate(filters):\n depth_dist = ddf.query(\"filter == u'\" + f + \"'\")[\"fiveSigmaDepth\"]\n histstruct[f] = ax1.hist(depth_dist, color = rfc.hex[f], histtype = \"step\",\n lw = 2, bins = bins)\n\nfor i, f in enumerate(filters):\n print(i, f, len(ddf.query(\"filter == u'\" + f + \"'\")))\n\nxx = opsimdf.query('fieldID == 316')\n\nxx.head()", "Some unexpected issues", "xx.query('propID == 54')", "How to read the table:\n\nobsHistID indexes a pointing ('fieldRA', 'fieldDec', 'ditheredRA', 'ditheredDec')\nAdditionally a pointing may be assigned a propID to describe what a pointing achieves\nThe meaning of the propID is given in the Proposal Table. For minion_1016_sqlite.db, the WFD is 54, and the DDF is 56, but this coding might change.\nIf a pointing satisfies two different proposals, this is represented by having two records with the same pointing and different propID", "test = opsimdf.drop_duplicates()\n\nall(test == opsimdf)\n\ntest = opsimdf.drop_duplicates(subset='obsHistID')\n\nlen(test) == len(opsimdf)\n\nopsimdf.obsHistID.size\n\nopsimdf.obsHistID.unique().size\n\ntest.obsHistID.size" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
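The OpSim notebook above explains that the Summary table holds one row per (visit, proposal) pair, so a visit that satisfies both the WFD and DDF proposals appears twice, and deduplicating on `obsHistID` recovers the unique pointings. A toy illustration of that behaviour (the values below are invented, not real OpSim output):

```python
import pandas as pd

# Two physical visits; visit 101 satisfies both the WFD (propID 54) and
# DDF (propID 56) proposals, so it appears twice with different propID.
summary = pd.DataFrame({
    'obsHistID': [100, 101, 101],
    'fieldRA':   [1.10, 2.20, 2.20],
    'fieldDec':  [-0.5, -0.7, -0.7],
    'propID':    [54, 54, 56],
})

# drop_duplicates() alone keeps all three rows, because the propID column
# makes the two records for visit 101 distinct ...
assert len(summary.drop_duplicates()) == 3

# ... while deduplicating on the visit identifier keeps one row per pointing.
unique_visits = summary.drop_duplicates(subset='obsHistID')
print(len(unique_visits))  # -> 2
```

This mirrors the `opsimdf.drop_duplicates(subset='obsHistID')` check in the notebook.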
minimav/misc-scripts
mcmc.ipynb
mit
[ "Some examples for learning the Pymc library.", "import pymc\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn\n% matplotlib inline\nseaborn.set_style('darkgrid')", "Quarry disaster example:\nA model for the disasters data with a changepoint\n\nchangepoint ~ Unif(0, 110)\nearly_mean ~ Exp(1.)\nlate_mean ~ Exp(1.)\ndisasters[t] ~ Pois(early_mean if t <= switchpoint, late_mean otherwise)", "from pymc import DiscreteUniform, Exponential, deterministic, Poisson, Uniform, MCMC\n\ndisasters_array = np.array([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,\n 3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,\n 2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0,\n 1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,\n 0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,\n 3, 3, 1, 1, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,\n 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])\n\n# Define data and stochastics\n\nswitchpoint = DiscreteUniform(\n 'switchpoint',\n lower=0,\n upper=110,\n doc='Switchpoint[year]')\nearly_mean = Exponential('early_mean', beta=1.)\nlate_mean = Exponential('late_mean', beta=1.)\n\n\n@deterministic(plot=False)\ndef rate(s=switchpoint, e=early_mean, l=late_mean):\n ''' Concatenate Poisson means '''\n out = np.empty(len(disasters_array))\n out[:s] = e\n out[s:] = l\n return out\n\ndisasters = Poisson('disasters', mu=rate, value=disasters_array, observed=True)\n\n#run the model\ndisaster_model = [disasters, rate, switchpoint, early_mean, late_mean]\nM = MCMC(disaster_model)\nM.sample(iter=50000, burn=10000, thin=10)", "Rate is a vector, so can't be plotted in the same manner as the other variables.", "fig, ax = plt.subplots(2,2, figsize=(10,10))\nfor idx, rv in enumerate(M.stats().iterkeys()):\n ax[idx % 2, idx // 2].set_title(rv)\n if rv != 'rate':\n ax[idx % 2, idx // 2].plot(M.trace(rv)[:])\n else:\n ax[idx % 2, idx // 2].plot(M.trace(rv)[-1,:])", "Can get such a plot automatically via Matplot submodule:", "from pymc.Matplot import plot\nplot(M)", 
"Compare the data with the (mean) model:", "#plt.xlim(0, len(disasters_array))\n#plt.scatter(*zip(*enumerate(disasters_array)))\n\nplt.xlim(1851, 1962)\nplt.xlabel('year')\nplt.ylabel('count')\nyears = xrange(1851, 1962)\nplt.scatter(years, disasters_array)\n\n#mean of poisson is parameter itself\nplt.plot(years, M.trace('rate')[-1,:])\n\nchange = 1851 + M.trace('switchpoint')[-1]\nplt.title('change in legislation {}?'.format(change))", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
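The `@deterministic` rate node in the changepoint model above simply splices the two Poisson means together at the switchpoint. The same logic in plain numpy, outside of pymc (the parameter values here are arbitrary):

```python
import numpy as np

def rate(s, e, l, n):
    # Mean disaster count per year: `e` before the switchpoint year `s`,
    # `l` from the switchpoint onwards, over `n` years of data.
    out = np.empty(n)
    out[:s] = e
    out[s:] = l
    return out

r = rate(s=40, e=3.0, l=1.0, n=111)
print(r[0], r[39], r[40], r[-1])  # -> 3.0 3.0 1.0 1.0
```

MCMC then samples `s`, `e`, and `l`; the rate vector itself is fully determined by those three values, which is exactly what pymc's `deterministic` decorator expresses.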
massimo-nocentini/simulation-methods
notes/set-based-type-system/set-based-type-system.ipynb
mit
[ "<p>\n<img src=\"http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg\" \n alt=\"UniFI logo\" style=\"float: left; width: 20%; height: 20%;\">\n<div align=\"right\">\nMassimo Nocentini<br>\n</div>\n</p>\n<br>\n<div align=\"center\">\n<b>Abstract</b><br>\nIn this document we collect a naive <i>type system</i> based on sets.\n</div>", "from itertools import repeat\nfrom sympy import *\n#from type_system import *\n\n%run ../../src/commons.py\n\n%run ./type-system.py", "", "init_printing()\n\nx,y,m,n,t,z = symbols('x y m n t z', commutative=True)\nalpha, beta, gamma, eta = symbols(r'\\alpha \\beta \\gamma \\eta', commutative=True)\nf,g = Function('f'), Function('g')", "Non-commutative symbols", "((1/(1-w[0]*z))*(1/(1-w[1]*z))).diff(z).series(z, n=6)\n\ndefine(f(z), z/((1-z)**2),ctor=FEq).series(z,n=10)\n\ndefine(f(z), 1/(1-alpha*z), ctor=FEq).series(z,n=10)\n\ndefine(f(z), 1/(1-(u[0]+u[1])*z), ctor=FEq).series(z,n=4)\n\ndefine(f(z), 1/(1-(o[0]+o[1])*z), ctor=FEq).series(z,n=4)", "Exponential gf recap", "define(f(z), z*(1/(1-z))*(1/(1-z)), ctor=FEq).series(z,n=10)\n\ndefine(f(z), z**3,ctor=FEq).series(z, n=10, kernel='exponential')\n\ndefine(f(z), exp(z),ctor=FEq).series(z, n=10, kernel='exponential')\n\ndefine(f(z), z*exp(z), ctor=FEq).series(z, n=10, kernel='exponential')\n\ndefine(f(z), z**2*exp(z)/factorial(2,evaluate=False), \n ctor=FEq).series(z, n=10, kernel='exponential')\n\ndefine(f(z), z**3*exp(z)/factorial(3, evaluate=False), \n ctor=FEq).series(z, n=10, kernel='exponential')\n\ndefine(f(z), (exp(z)+exp(-z))/2, ctor=FEq).series(z, n=20, kernel='exponential')\n\ndefine(f(z), exp(m*z), ctor=FEq).series(z, n=10, kernel='exponential')\n\ndefine(f(z), (exp(z)-1)/z, ctor=FEq).series(z, n=10, kernel='exponential')\n\ndefine(f(z), 1/(1-z), ctor=FEq).series(z, n=10, kernel='exponential')\n\ndefine(f(z), (1/(1-z))*(1/(1-z)), ctor=FEq).series(z, n=10, kernel='exponential')\n\ndefine(f(z), exp(z)**2, ctor=FEq).series(z, n=10, kernel='exponential')", 
"Linear types", "tyvar(x).gf()\n\n(tyvar(u[0]) * tyvar(u[1]) * tyvar(u[2])).gf()\n\n(tyvar(o[0]) * tyvar(o[1]) * tyvar(o[2])).gf()\n\n(tyvar(u[0]) | tyvar(u[1]) | tyvar(u[2])).gf()\n\n(tyvar(o[0]) | tyvar(o[1]) | tyvar(o[2])).gf()\n\ntruth.gf() + falsehood.gf()\n\nboolean.gf()\n\nmaybe(tyvar(alpha)[z]).gf()", "occupancies", "nel = 4\nsyms=[u[i] for i in range(nel)]\nocc_prb, = cp(maybe(tyvar(u[i]*z)) for i in range(nel)).gf() # here we can use the `[z]` notation too.\nocc_prb\n\noccupancy(occ_prb, syms, objects='unlike', boxes='unlike').series(z)\n\noccupancy(occ_prb, syms, objects='unlike', boxes='like').series(z)\n\noccupancy(occ_prb, syms, objects='like', boxes='unlike').series(z)\n\noccupancy(occ_prb, syms, objects='like', boxes='like').series(z)", "", "u_hat = symbols(r'␣_0:10')\nnel = 3\nocc_prb, = cp(tyvar(z*(sum(u[j] for j in range(nel) if j != i))) | tyvar(u_hat[i]) \n for i in range(nel)).gf()\nocc_prb\n\nsyms=[u[i] for i in range(nel)]+[u_hat[i] for i in range(nel)]\noccupancy(occ_prb, syms, objects='unlike', boxes='unlike').series(z)\n\noccupancy(occ_prb, syms, objects='unlike', boxes='like').series(z)\n\noccupancy(occ_prb, syms, objects='like', boxes='unlike').series(z)\n\noccupancy(occ_prb, syms, objects='like', boxes='like').series(z)", "", "occupancy_problem, = cp(maybe(du(tyvar((u[i]*z)**(j+1)) for j in range(i+1))) \n for i in range(3)).gf()\noccupancy_problem\n\noccupancy(occupancy_problem, syms=[u[i] for i in range(3)], objects='unlike', boxes='unlike').series(z)\n\noccupancy(occupancy_problem, syms=[u[i] for i in range(3)], objects='unlike', boxes='like').series(z)\n\noccupancy(occupancy_problem, syms=[u[i] for i in range(3)], objects='like', boxes='unlike').series(z)\n\noccupancy(occupancy_problem, syms=[u[i] for i in range(3)], objects='like', boxes='like').series(z)\n\n((1+t)*(1+t+t**2)*(1+t+t**2+t**3)).series(t,n=10) # just for checking", "", "def sums_of_powers(boxes, base):\n p = IndexedBase('\\space')\n return cp(cp() | 
tyvar(p[j]*z**(base**i)) \n for i in range(0,boxes) \n for j in [Pow(base,i,evaluate=False)] # implicit let\n ).gf()\n\noccupancy, = sums_of_powers(boxes=4, base=2)\noccupancy.series(z, n=32)\n\noccupancy, = sums_of_powers(boxes=4, base=3)\noccupancy.series(z, n=100)\n\noccupancy, = sums_of_powers(boxes=4, base=5)\noccupancy.series(z, n=200)\n\noccupancy, = sums_of_powers(boxes=4, base=7)\noccupancy.series(z, n=500)\n\nassert 393 == 7**0 + 7**2 + 7**3 # _.rhs.rhs.coeff(z, 393)", "Differences", "difference = (cp() | tyvar(-gamma*z))\nones = nats * difference\nones_gf, = ones.gf()\nones_gf\n\nones_gf(z,1,1,1).series(z, n=10) # check!\n\none_gf, = (ones * difference).gf()\none_gf.series(z, n=10).rhs.rhs.subs({w[0]:1, w[1]:1, gamma:1})", "", "l = IndexedBase(r'\\circ')\ndef linear_comb_of_powers(boxes, base):\n return cp(lst(tyvar(Mul(l[j], z**(base**i), evaluate=False)))\n for i in range(boxes) \n for j in [Pow(base,i,evaluate=False)]).gf()\n\noccupancy, = linear_comb_of_powers(boxes=4, base=Integer(2))\noccupancy.series(z, n=8)\n\noccupancy, = linear_comb_of_powers(boxes=4, base=3)\noccupancy.series(z, n=9)\n\noccupancy, = linear_comb_of_powers(boxes=4, base=5)\noccupancy.series(z, n=10)\n\ndef uniform_rv(n):\n return tyvar(S(1)/nel) * lst(tyvar(x))\noccupancy, = uniform_rv(n=10).gf()\noccupancy.series(x,n=10)\n\nclass lst_structure_w(rec):\n \n def definition(self, alpha):\n me = self.me()\n return alpha | lst(me)\n \n def label(self):\n return r'\\mathcal{L}_{w}' # `_s` stands for \"structure\"\n\nlst_structure_w(tyvar(alpha)).gf()\n\n[gf.series(alpha) for gf in _]\n\nclass lst_structure(rec):\n \n def definition(self, alpha):\n me = self.me()\n return alpha | (lst(me) * me * me)\n \n def label(self):\n return r'\\mathcal{L}_{s}' # `_s` stands for \"structure\"\n\nlst_structure(tyvar(alpha)).gf()\n\n_[0].series(alpha, n=10)\n\nclass structure(rec):\n \n def definition(self, alpha):\n me = self.me()\n return alpha | (bin_tree(me) * me * me)\n \n def label(self):\n 
return r'\\mathcal{S}'\n\nstructure(tyvar(alpha)).gf()\n\ngf = _[0]\n\ngf.simplify()\n\nnel = 7\ns = gf.simplify().series(alpha, n=nel).rhs.rhs\n[s.coeff(alpha, n=i).subs({pow(-1,S(1)/3):-1}).radsimp().powsimp() for i in range(nel)]\n\nclass structure(rec):\n \n def definition(self, alpha):\n me = self.me()\n return alpha | (nnbin_tree(me) * me)\n \n def label(self):\n return r'\\mathcal{S}'\n\nstructure(tyvar(alpha)).gf()\n\ngf = _[0]\n\ngf.simplify()\n\nnel = 20\ns = gf.simplify().series(alpha, n=nel).rhs.rhs\n[s.coeff(alpha, n=i).subs({pow(-1,S(1)/3):-1}).radsimp().powsimp() for i in range(nel)]\n\nclass nn_structure(rec):\n \n def definition(self, alpha):\n me = self.me()\n return alpha * bin_tree(nnbin_tree(me))\n \n def label(self):\n return r'\\mathcal{L}_{s}^{+}' # `_s` stands for \"structure\"\n\nnn_structure(tyvar(alpha)).gf()\n\n_[0].series(alpha, n=10)\n\nclass nnlst_structure(rec):\n \n def definition(self, alpha):\n me = self.me()\n return alpha * lst(nnlst(me))\n \n def label(self):\n return r'\\mathcal{L}_{s}^{+}' # `_s` stands for \"structure\"\n\nnnlst_structure(tyvar(alpha)).gf()\n\n_[0].series(alpha, n=10)\n\nclass tree(rec):\n \n def definition(self, alpha):\n return alpha * lst(self.me())\n \n def label(self):\n return r'\\mathcal{T}'\n\ntree(tyvar(alpha)).gf()\n\n_[0].series(alpha, n=10)\n\nclass combination(rec):\n \n def definition(self, alpha):\n me = self.me()\n return alpha | (me * me)\n \n def label(self):\n return r'\\mathcal{C}'\n\ncombination(tyvar(alpha)).gf()\n\n_[0].series(alpha, n=10)\n\nclass ab_tree(rec):\n \n def definition(self, alpha, beta):\n me = self.me()\n return beta | (alpha * me * me)\n \n def label(self):\n return r'\\mathcal{T}_{a,b}'\n\nab_tree_gfs = ab_tree(tyvar(alpha), tyvar(beta)).gf()\nab_tree_gfs\n\nab_tree_gf = ab_tree_gfs[0]\n\nfab_eq = FEq(ab_tree_gf.lhs, ab_tree_gf.rhs.series(beta, n=20).removeO(), evaluate=False)\nfab_eq\n\nfab_eq(x,x)\n\n(_*alpha).expand()\n\n#with lift_to_Lambda(fab_eq) as F:\nB = 
fab_eq(x,1)\nA = fab_eq(1,x)\nA,B,\n\n(A+B).expand()\n\n((1+x)*A).expand()\n\nclass dyck(rec):\n \n def definition(self, alpha, beta):\n me = self.me()\n return cp() | (alpha * me * beta * me)\n \n def label(self):\n return r'\\mathcal{D}'\n\ndyck_gfs = dyck(tyvar(alpha*x), tyvar(beta*x)).gf()\ndyck_gfs\n\ndyck_gf = dyck_gfs[0]\n\ndyck_gf.series(x,n=10)\n\nclass motzkin(rec):\n \n def definition(self, alpha, beta, gamma):\n me = self.me()\n return cp() | (alpha * me * beta * me) | (gamma * me)\n \n def label(self):\n return r'\\mathcal{M}'\n\nmotzkin_gfs = motzkin(tyvar(alpha*x), tyvar(beta*x), tyvar(gamma*x),).gf()\nmotzkin_gfs\n\nmotzkin_gf = motzkin_gfs[0]\n\nmotzkin_gf.series(x,n=10)\n\nmotzkin_gf(x,1,1,1).series(x,n=10)\n\nclass motzkin_p(rec):\n \n def definition(self, alpha, beta, gamma, eta):\n me = self.me()\n return cp() | (alpha * me * beta * me) | (gamma * me) | (eta * me)\n \n def label(self):\n return r'\\mathcal{M}^{+}'\n\nmotzkinp_gfs = motzkin_p(tyvar(alpha*x), tyvar(beta*x), tyvar(gamma*x), tyvar(eta*x),).gf()\nmotzkinp_gfs\n\nmotzkinp_gf = motzkinp_gfs[0]\n\nmotzkinp_gf.series(x,n=6)\n\nmotzkinp_gf(x,1,1,1,1).series(x,n=10)\n\nclass fibo(rec):\n \n def definition(self, alpha, beta):\n me = self.me()\n return cp() | alpha | ((beta | (alpha * beta)) * me)\n \n def label(self):\n return r'\\mathcal{F}'\n\nfibo_gf, = fibo(tyvar(alpha*x), tyvar(beta*x),).gf()\nfibo_gf\n\nfibo_gf.series(x,n=10)\n\nfibo_gf(1,x,1).series(x,n=10)\n\nlst_of_truth_gf, = lst(tyvar(x)).gf()\nlst_of_truth_gf.series(x, n=10, is_exp=True)\n\nlst_of_boolean_gf, = lst(boolean).gf()\nlst_of_boolean_gf.series(x,n=10,is_exp=True)\n\n_.rhs.rhs.subs({w[0]:1,w[1]:1})\n\nsum((_.rhs.rhs.coeff(x,i)/factorial(i))*x**i for i in range(1,10))\n\nclass powerset(ty):\n \n def gf_rhs(self, ty):\n return [exp(self.mulfactor() * gf.rhs) for gf in ty.gf()]\n \n def mulfactor(self):\n return 1\n \n def label(self):\n return r'\\mathcal{P}'\n\npowerset_of_tyvar_gf, = 
(2**(nnlst(tyvar(alpha)))).gf()\npowerset_of_tyvar_gf\n\npowerset_of_tyvar_gf.series(alpha, n=10, is_exp=True)\n\npowerset_of_tyvar_gf, = (2**(nnlst(boolean))).gf()\npowerset_of_tyvar_gf\n\npowerset_of_tyvar_gf.series(x, n=5, is_exp=True)\n\n_.rhs.rhs.subs({w[0]:1,w[1]:1})\n\npowerset_of_tyvar_gf, _ = (2**(bin_tree(tyvar(alpha)))).gf()\npowerset_of_tyvar_gf\n\npowerset_of_tyvar_gf.series(alpha, n=10, is_exp=True)\n\nl, = (2**(2**(nnlst(tyvar(alpha))))).gf()\ndefine(l.lhs, l.rhs.ratsimp(), ctor=FEq).series(alpha,n=8,is_exp=True)\n\nclass cycle(ty):\n \n def gf_rhs(self, ty):\n return [log(gf.rhs) for gf in ty.gf()]\n \n def label(self):\n return r'\\mathcal{C}'\n\ncycle_of_tyvar_gf, = (~(lst(tyvar(alpha)))).gf()\ncycle_of_tyvar_gf\n\ncycle_of_tyvar_gf.series(alpha, n=10, is_exp=True)\n\ncycle_of_tyvar_gf, = (~(lst(boolean))).gf()\ncycle_of_tyvar_gf\n\ncycle_of_tyvar_gf.series(x, n=8, is_exp=True)\n\n_.rhs.rhs.subs({w[0]:1,w[1]:1})\n\nPstar_gf, = (2**(~(lst(tyvar(alpha))))).gf()\nPstar_gf.series(alpha, n=10, is_exp=True)\n\nclass ipowerset(powerset):\n \n def mulfactor(self):\n return -1\n\nderangements_gf, = ((-2)**tyvar(alpha)).gf()\nderangements_gf.series(alpha, n=10, is_exp=True)\n\nderangements_gf, = ((-2)**nnlst(tyvar(alpha))).gf()\nderangements_gf.series(alpha, n=10, is_exp=True)\n\n[1,2][1:]\n\ndef foldr(f, l, i):\n if not l:\n return i\n else:\n car, *cdr = l\n return f(car, foldr(f, cdr, i))\n \nclass arrow(ty):\n \n def label(self):\n return r'\\rightarrow'\n \n def gf_rhs(self, alpha, beta):\n v = Dummy()\n return [foldr(lambda gf, acc: Lambda([x], acc(gf.rhs)), \n gfs[:-1], \n Lambda([x], gfs[-1].rhs))(x)\n for gfs in self.gfs_space()]\n return [foldr(lambda gf, acc: acc**gf.rhs, gfs[:-1], gfs[-1].rhs)\n for gfs in self.gfs_space()]\n\narr, = arrow(boolean, boolean).gf()\narr\n\narr.series(x,n=5,is_exp=False)\n\n_.rhs.rhs.removeO().subs({w[0]:1,w[1]:1})\n\narr, = arrow(lst(boolean), 
lst(boolean)).gf()\narr\n\narr.series(x,n=5,is_exp=False)\n\n_.rhs.rhs.removeO().subs({w[0]:1,w[1]:1})", "", "lamda_gf = lamda(tyvar(x)).gf_rhs(tyvar(x))\nlamda_gf\n\nlamda_gf.rhs.series(x,n=10)", "<a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-sa/4.0/\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png\" /></a><br />This work is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-sa/4.0/\">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
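The type-system notebook above repeatedly expands generating functions like `lst(tyvar(x)).gf()` into power series. The underlying arithmetic can be sketched without the notebook's `%run` helpers: if the element type has gf A(z) with A(0) = 0, then the list type has gf 1/(1 - A(z)), whose coefficients satisfy the convolution recurrence c_0 = 1, c_m = Σ_{j≥1} a_j c_{m−j}. A plain-Python illustration (the function name is mine, not part of the notebook's library):

```python
def lst_coeffs(a, n):
    """Power-series coefficients of 1/(1 - A(z)) up to order n-1.

    `a` lists the coefficients of A(z), the gf of the element type;
    a[0] must be 0 (an element always has positive size).
    """
    assert a[0] == 0
    c = [1] + [0] * (n - 1)          # c_0 = 1: exactly one empty list
    for m in range(1, n):
        c[m] = sum(a[j] * c[m - j] for j in range(1, min(m, len(a) - 1) + 1))
    return c

# One atom of size 1 (A(z) = z): one list of each length, gf 1/(1-z).
print(lst_coeffs([0, 1], 6))   # -> [1, 1, 1, 1, 1, 1]
# Two atoms of size 1 (A(z) = 2z), e.g. booleans: 2^n lists of length n.
print(lst_coeffs([0, 2], 6))   # -> [1, 2, 4, 8, 16, 32]
```

This is the same counting that `lst(boolean).gf()` expresses symbolically as 1/(1 - 2z) in the notebook.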
alexandrnikitin/algorithm-sandbox
courses/DAT256x/Module01/01-07-Quadratic Equations.ipynb
mit
[ "Quadratic Equations\nConsider the following equation:\n\\begin{equation}y = 2(x - 1)(x + 2)\\end{equation}\nIf you multiply out the factored x expressions, this equates to:\n\\begin{equation}y = 2x^{2} + 2x - 4\\end{equation}\nNote that the highest ordered term includes a squared variable (x<sup>2</sup>).\nLet's graph this equation for a range of x values:", "import pandas as pd\n\n# Create a dataframe with an x column containing values to plot\ndf = pd.DataFrame ({'x': range(-9, 9)})\n\n# Add a y column by applying the quadratic equation to x\ndf['y'] = 2*df['x']**2 + 2 *df['x'] - 4\n\n# Plot the line\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\nplt.show()", "Note that the graph shows a parabola, which is an arc-shaped line that reflects the x and y values calculated for the equation.\nNow let's look at another equation that includes an x<sup>2</sup> term:\n\\begin{equation}y = -2x^{2} + 6x + 7\\end{equation}\nWhat does that look like as a graph?:", "import pandas as pd\n\n# Create a dataframe with an x column containing values to plot\ndf = pd.DataFrame ({'x': range(-8, 12)})\n\n# Add a y column by applying the quadratic equation to x\ndf['y'] = -2*df['x']**2 + 6*df['x'] + 7\n\n# Plot the line\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\nplt.show()", "Again, the graph shows a parabola, but this time instead of being open at the top, the parabola is open at the bottom.\nEquations that assign a value to y based on an expression that includes a squared value for x create parabolas. 
If the relationship between y and x is such that y is a positive multiple of the x<sup>2</sup> term, the parabola will be open at the top; when y is a negative multiple of the x<sup>2</sup> term, then the parabola will be open at the bottom.\nThese kinds of equations are known as quadratic equations, and they have some interesting characteristics. There are several ways quadratic equations can be written, but the standard form for quadratic equation is:\n\\begin{equation}y = ax^{2} + bx + c\\end{equation}\nWhere a, b, and c are numeric coefficients or constants.\nLet's start by examining the parabolas generated by quadratic equations in more detail.\nParabola Vertex and Line of Symmetry\nParabolas are symmetrical, with x and y values converging exponentially towards the highest point (in the case of a downward opening parabola) or lowest point (in the case of an upward opening parabola). The point where the parabola meets the line of symmetry is known as the vertex.\nRun the following cell to see the line of symmetry and vertex for the two parabolas described previously (don't worry about the calculations used to find the line of symmetry and vertex - we'll explore that later):", "%matplotlib inline\n\ndef plot_parabola(a, b, c):\n import pandas as pd\n import numpy as np\n from matplotlib import pyplot as plt\n \n # get the x value for the line of symmetry\n vx = (-1*b)/(2*a)\n \n # get the y value when x is at the line of symmetry\n vy = a*vx**2 + b*vx + c\n\n # Create a dataframe with an x column containing values from x-10 to x+10\n minx = int(vx - 10)\n maxx = int(vx + 11)\n df = pd.DataFrame ({'x': range(minx, maxx)})\n\n # Add a y column by applying the quadratic equation to x\n df['y'] = a*df['x']**2 + b *df['x'] + c\n\n # get min and max y values\n miny = df.y.min()\n maxy = df.y.max()\n\n # Plot the line\n plt.plot(df.x, df.y, color=\"grey\")\n plt.xlabel('x')\n plt.ylabel('y')\n plt.grid()\n plt.axhline()\n plt.axvline()\n\n # plot the line of symmetry\n 
sx = [vx, vx]\n    sy = [miny, maxy]\n    plt.plot(sx,sy, color='magenta')\n\n    # Annotate the vertex\n    plt.scatter(vx,vy, color=\"red\")\n    plt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy + 5)* np.sign(a)))\n\n    plt.show()\n\n\nplot_parabola(2, 2, -4) \n\nplot_parabola(-2, 3, 5) ", "Parabola Intercepts\nRecall that linear equations create lines that intersect the x and y axis of a graph, and we call the points where these intersections occur intercepts. Now look at the graphs of the parabolas we've worked with so far. Note that these parabolas both have a y-intercept; a point where the line intersects the y axis of the graph (in other words, when x is 0). However, note that the parabolas have two x-intercepts; in other words there are two points at which the line crosses the x axis (and y is 0). Additionally, imagine a downward opening parabola with its vertex at -1, -1. This is perfectly possible, and the line would never have a y value greater than -1, so it would have no x-intercepts.\nRegardless of whether the parabola crosses the x axis or not, other than the vertex, for every y point in the parabola, there are two x points; one on the right (or positive) side of the axis of symmetry, and one on the left (or negative) side. The implications of this are what make quadratic equations so interesting. When we solve the equation for x, there are two correct answers.\nLet's take a look at an example to demonstrate this. Let's return to the first of our quadratic equations, and we'll look at it in its factored form:\n\\begin{equation}y = 2(x - 1)(x + 2)\\end{equation}\nNow, let's solve this equation for a y value of 0. We can restate the equation like this:\n\\begin{equation}2(x - 1)(x + 2) = 0\\end{equation}\nThe equation is the product of the two expressions 2(x - 1) and (x + 2). 
In this case, we know that the product of these expressions is 0, so logically one or both of the expressions must return 0.\nLet's try the first one:\n\\begin{equation}2(x - 1) = 0\\end{equation}\nIf we distribute this, we get:\n\\begin{equation}2x - 2 = 0\\end{equation}\nThis simplifies to:\n\\begin{equation}2x = 2\\end{equation}\nWhich gives us a value for x of 1.\nNow let's try the other expression:\n\\begin{equation}x + 2 = 0\\end{equation}\nThis gives us a value for x of -2.\nSo, when y is 0, x is -2 or 1. Let's plot these points on our parabola:", "import pandas as pd\n\n# Assign the calculated x values\nx1 = -2\nx2 = 1\n\n# Create a dataframe with an x column containing some values to plot\ndf = pd.DataFrame ({'x': range(x1-5, x2+6)})\n\n# Add a y column by applying the quadratic equation to x\ndf['y'] = 2*(df['x'] - 1) * (df['x'] + 2)\n\n# Get x at the line of symmetry (halfway between x1 and x2)\nvx = (x1 + x2) / 2\n\n# Get y when x is at the line of symmetry\nvy = 2*(vx -1)*(vx + 2)\n\n# get min and max y values\nminy = df.y.min()\nmaxy = df.y.max()\n\n# Plot the line\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\n\n# Plot calculated x values for y = 0\nplt.scatter([x1,x2],[0,0], color=\"green\")\nplt.annotate('x1',(x1, 0))\nplt.annotate('x2',(x2, 0))\n\n# plot the line of symmetry\nsx = [vx, vx]\nsy = [miny, maxy]\nplt.plot(sx,sy, color='magenta')\n\n# Annotate the vertex\nplt.scatter(vx,vy, color=\"red\")\nplt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy - 5)))\n\nplt.show()", "So from the plot, we can see that both of the values we calculated for x align with the parabola when y is 0. 
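As a quick sanity check (a sketch assuming only the factored equation above), we can confirm in plain Python that both calculated roots make y zero:

```python
# Hypothetical check of the zero-product solution for y = 2(x - 1)(x + 2)
f = lambda x: 2 * (x - 1) * (x + 2)

for root in (-2, 1):
    print(root, f(root))  # y is 0 at both roots
```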
Additionally, because the parabola is symmetrical, we know that every pair of x values for each y value will be equidistant from the line of symmetry, so we can calculate the x value for the line of symmetry as the average of the x values for any value of y. This in turn means that we know the x coordinate for the vertex (it's on the line of symmetry), and we can use the quadratic equation to calculate y for this point.\nSolving Quadratics Using the Square Root Method\nThe technique we just looked at makes it easy to calculate the two possible values for x when y is 0 if the equation is presented as the product of two expressions. If the equation is in standard form, and it can be factored, you could do the necessary manipulation to restate it as the product of two expressions. Otherwise, you can calculate the possible values for x by applying a different method that takes advantage of the relationship between squared values and the square root.\nLet's consider this equation:\n\\begin{equation}y = 3x^{2} - 12\\end{equation}\nNote that this is in the standard quadratic form, but there is no b term; in other words, there's no term that contains a coefficient for x to the first power. This type of equation can be easily solved using the square root method. Let's restate it so we're solving for x when y is 0:\n\\begin{equation}3x^{2} - 12 = 0\\end{equation}\nThe first thing we need to do is to isolate the x<sup>2</sup> term, so we'll remove the constant on the left by adding 12 to both sides:\n\\begin{equation}3x^{2} = 12\\end{equation}\nThen we'll divide both sides by 3 to isolate x<sup>2</sup>:\n\\begin{equation}x^{2} = 4\\end{equation}\nNow we can isolate x by taking the square root of both sides. However, there's an additional consideration because this is a quadratic equation. 
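The square root method can be sketched as a small helper (illustrative only; solve_no_b_term is a hypothetical name, assuming an equation of the form ax² + c = 0 where -c/a is non-negative):

```python
import math

def solve_no_b_term(a, c):
    """Solve a*x**2 + c = 0 by isolating x**2 and taking both square roots."""
    x_squared = -c / a            # isolate the x^2 term
    r = math.sqrt(x_squared)      # principal square root
    return (-r, r)                # the negative and positive roots

print(solve_no_b_term(3, -12))  # (-2.0, 2.0)
```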
The x variable can have two possible values, so we must calculate the principal and negative square roots of the expression on the right:\n\\begin{equation}x = \\pm\\sqrt{4}\\end{equation}\nThe principal square root of 4 is 2 (because 2<sup>2</sup> is 4), and the corresponding negative root is -2 (because (-2)<sup>2</sup> is also 4); so x is 2 or -2.\nLet's see this in Python, and use the results to calculate and plot the parabola with its line of symmetry and vertex:", "import pandas as pd\nimport math\n\ny = 0\nx1 = int(-math.sqrt((y + 12) / 3))\nx2 = int(math.sqrt((y + 12) / 3))\n\n# Create a dataframe with an x column containing some values to plot\ndf = pd.DataFrame ({'x': range(x1-10, x2+11)})\n\n# Add a y column by applying the quadratic equation to x\ndf['y'] = 3*df['x']**2 - 12\n\n# Get x at the line of symmetry (halfway between x1 and x2)\nvx = (x1 + x2) / 2\n\n# Get y when x is at the line of symmetry\nvy = 3*vx**2 - 12\n\n# get min and max y values\nminy = df.y.min()\nmaxy = df.y.max()\n\n# Plot the line\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\n\n# Plot calculated x values for y = 0\nplt.scatter([x1,x2],[0,0], color=\"green\")\nplt.annotate('x1',(x1, 0))\nplt.annotate('x2',(x2, 0))\n\n# plot the line of symmetry\nsx = [vx, vx]\nsy = [miny, maxy]\nplt.plot(sx,sy, color='magenta')\n\n# Annotate the vertex\nplt.scatter(vx,vy, color=\"red\")\nplt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy - 20)))\n\nplt.show()", "Solving Quadratics Using the Completing the Square Method\nIn quadratic equations where there is a b term; that is, a term containing x to the first power, it is impossible to directly calculate the square root. 
However, with some algebraic manipulation, you can take advantage of the ability to factor a polynomial expression in the form a<sup>2</sup> + 2ab + b<sup>2</sup> as a binomial perfect square expression in the form (a + b)<sup>2</sup>.\nAt first this might seem like some sort of mathematical sleight of hand, but follow through the steps carefully and you'll see that there's nothing up my sleeve!\nThe underlying basis of this approach is that a trinomial expression like this:\n\\begin{equation}x^{2} + 24x + 12^{2}\\end{equation}\nCan be factored to this:\n\\begin{equation}(x + 12)^{2}\\end{equation}\nOK, so how does this help us solve a quadratic equation? Well, let's look at an example:\n\\begin{equation}y = x^{2} + 6x - 7\\end{equation}\nLet's start as we've always done so far by restating the equation to solve x for a y value of 0:\n\\begin{equation}x^{2} + 6x - 7 = 0\\end{equation}\nNow we can move the constant term to the right by adding 7 to both sides:\n\\begin{equation}x^{2} + 6x = 7\\end{equation}\nOK, now let's look at the expression on the left: x<sup>2</sup> + 6x. We can't take the square root of this, but we can turn it into a trinomial that will factor into a perfect square by adding a squared constant. The question is, what should that constant be? Well, we know that we're looking for an expression like x<sup>2</sup> + 2cx + c<sup>2</sup>, so our constant c is half of the coefficient we currently have for x. This is 6, making our constant 3, which when squared is 9. So we can create a trinomial expression that will easily factor to a perfect square by adding 9; giving us the expression x<sup>2</sup> + 6x + 9.\nHowever, we can't just add something to one side without also adding it to the other, so our equation becomes:\n\\begin{equation}x^{2} + 6x + 9 = 16\\end{equation}\nSo, how does that help? 
Well, we can now factor the trinomial expression as a perfect square binomial expression:\n\\begin{equation}(x + 3)^{2} = 16\\end{equation}\nAnd now, we can use the square root method to find x + 3:\n\\begin{equation}x + 3 =\\pm\\sqrt{16}\\end{equation}\nSo, x + 3 is -4 or 4. We isolate x by subtracting 3 from both sides, so x is -7 or 1:\n\\begin{equation}x = -7, 1\\end{equation}\nLet's see what the parabola for this equation looks like in Python:", "import pandas as pd\nimport math\n\nx1 = int(- math.sqrt(16) - 3)\nx2 = int(math.sqrt(16) - 3)\n\n# Create a dataframe with an x column containing some values to plot\ndf = pd.DataFrame ({'x': range(x1-10, x2+11)})\n\n# Add a y column by applying the quadratic equation to x\ndf['y'] = ((df['x'] + 3)**2) - 16\n\n# Get x at the line of symmetry (halfway between x1 and x2)\nvx = (x1 + x2) / 2\n\n# Get y when x is at the line of symmetry\nvy = ((vx + 3)**2) - 16\n\n# get min and max y values\nminy = df.y.min()\nmaxy = df.y.max()\n\n# Plot the line\n%matplotlib inline\nfrom matplotlib import pyplot as plt\n\nplt.plot(df.x, df.y, color=\"grey\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.grid()\nplt.axhline()\nplt.axvline()\n\n# Plot calculated x values for y = 0\nplt.scatter([x1,x2],[0,0], color=\"green\")\nplt.annotate('x1',(x1, 0))\nplt.annotate('x2',(x2, 0))\n\n# plot the line of symmetry\nsx = [vx, vx]\nsy = [miny, maxy]\nplt.plot(sx,sy, color='magenta')\n\n# Annotate the vertex\nplt.scatter(vx,vy, color=\"red\")\nplt.annotate('vertex',(vx, vy), xytext=(vx - 1, (vy - 10)))\n\nplt.show()", "Vertex Form\nLet's look at another example of a quadratic equation in standard form:\n\\begin{equation}y = 2x^{2} - 16x + 2\\end{equation}\nWe can start to solve this by subtracting 2 from both sides to move the constant term from the right to the left:\n\\begin{equation}y - 2 = 2x^{2} - 16x\\end{equation}\nNow we can factor out the coefficient for x<sup>2</sup>, which is 2. 
2x<sup>2</sup> is 2 &bull; x<sup>2</sup>, and -16x is 2 &bull; -8x:\n\\begin{equation}y - 2 = 2(x^{2} - 8x)\\end{equation}\nNow we're ready to complete the square, so we add the square of half of the -8x coefficient on the right side inside the parentheses. Half of -8 is -4, and (-4)<sup>2</sup> is 16, so the right side of the equation becomes 2(x<sup>2</sup> - 8x + 16). Of course, we can't add something to one side of the equation without also adding it to the other side, and we've just added 2 &bull; 16 (which is 32) to the right, so we must also add that to the left.\n\\begin{equation}y - 2 + 32 = 2(x^{2} - 8x + 16)\\end{equation}\nNow we can simplify the left and factor out a perfect square binomial expression on the right:\n\\begin{equation}y + 30 = 2(x - 4)^{2}\\end{equation}\nWe now have a squared term for x, so we could use the square root method to solve the equation. However, we can also isolate y by subtracting 30 from both sides. So we end up restating the original equation as:\n\\begin{equation}y = 2(x - 4)^{2} - 30\\end{equation}\nLet's just quickly check our math with Python:", "from random import randint\nx = randint(1,100)\n\n2*x**2 - 16*x + 2 == 2*(x - 4)**2 - 30", "So we've managed to take the expression 2x<sup>2</sup> - 16x + 2 and change it to 2(x - 4)<sup>2</sup> - 30. How does that help?\nWell, when a quadratic equation is stated this way, it's in vertex form, which is generically described as:\n\\begin{equation}y = a(x - h)^{2} + k\\end{equation}\nThe neat thing about this form of the equation is that it tells us the coordinates of the vertex - it's at h,k.\nSo in this case, we know that the vertex of our equation is 4, -30. Moreover, we know that the line of symmetry is at x = 4.\nWe can then just use the equation to calculate two more points, and the three points will be enough for us to determine the shape of the parabola. We can simply choose any x value we like and substitute it into the equation to calculate the corresponding y value. 
For example, let's calculate y when x is 0:\n\\begin{equation}y = 2(0 - 4)^{2} - 30\\end{equation}\nWhen we work through the equation, it gives us the answer 2, so we know that the point 0, 2 is in our parabola.\nSo, we know that the line of symmetry is at x = h (which is 4), and we now know that the y value when x is 0 (h - h) is 2. The y value at the same distance from the line of symmetry in the negative direction will be the same as the value in the positive direction, so when x is h + h, the y value will also be 2.\nThe following Python code encapsulates all of this in a function that draws and annotates a parabola using only the a, h, and k values from a quadratic equation in vertex form:", "def plot_parabola_from_vertex_form(a, h, k):\n    import pandas as pd\n    import math\n\n    # Create a dataframe with an x column containing a range of x values to plot\n    df = pd.DataFrame ({'x': range(h-10, h+11)})\n\n    # Add a y column by applying the quadratic equation to x\n    df['y'] = (a*(df['x'] - h)**2) + k\n\n    # get min and max y values\n    miny = df.y.min()\n    maxy = df.y.max()\n\n    # calculate y when x is 0 (h+-h)\n    y = a*(0 - h)**2 + k\n\n    # Plot the line\n    %matplotlib inline\n    from matplotlib import pyplot as plt\n\n    plt.plot(df.x, df.y, color=\"grey\")\n    plt.xlabel('x')\n    plt.ylabel('y')\n    plt.grid()\n    plt.axhline()\n    plt.axvline()\n\n    # Plot calculated y values for x = 0 (h-h and h+h)\n    plt.scatter([h-h, h+h],[y,y], color=\"green\")\n    plt.annotate(str(h-h) + ',' + str(y),(h-h, y))\n    plt.annotate(str(h+h) + ',' + str(y),(h+h, y))\n\n    # plot the line of symmetry (x = h)\n    sx = [h, h]\n    sy = [miny, maxy]\n    plt.plot(sx,sy, color='magenta')\n\n    # Annotate the vertex (h,k)\n    plt.scatter(h,k, color=\"red\")\n    plt.annotate('v=' + str(h) + ',' + str(k),(h, k), xytext=(h - 1, (k - 10)))\n\n    plt.show()\n\n    \n# Call the function for the example discussed above\nplot_parabola_from_vertex_form(2, 4, -30)", "It's important to note that the vertex form specifically requires a subtraction operation 
in the factored perfect square term. For example, consider the following equation in the standard form:\n\\begin{equation}y = 3x^{2} + 6x + 2\\end{equation}\nThe steps to solve this are:\n1. Move the constant to the left side:\n\\begin{equation}y - 2 = 3x^{2} + 6x\\end{equation}\n2. Factor the x expressions on the right:\n\\begin{equation}y - 2 = 3(x^{2} + 2x)\\end{equation}\n3. Add the square of half the x coefficient to the right, and the corresponding multiple on the left:\n\\begin{equation}y - 2 + 3 = 3(x^{2} + 2x + 1)\\end{equation}\n4. Factor out a perfect square binomial:\n\\begin{equation}y + 1 = 3(x + 1)^{2}\\end{equation}\n5. Move the constant back to the right side:\n\\begin{equation}y = 3(x + 1)^{2} - 1\\end{equation}\nTo express this in vertex form, we need to convert the addition in the parenthesis to a subtraction:\n\\begin{equation}y = 3(x - -1)^{2} - 1\\end{equation}\nNow, we can use the a, h, and k values to define a parabola:", "plot_parabola_from_vertex_form(3, -1, -1)", "Shortcuts for Solving Quadratic Equations\nWe've spent some time in this notebook discussing how to solve quadratic equations to determine the vertex of a parabola and the x values in relation to y. 
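The five steps above generalize to any quadratic in standard form. As a sketch (standard_to_vertex is a hypothetical helper name, not from the original notebook), completing the square can be captured like this:

```python
def standard_to_vertex(a, b, c):
    """Convert y = a*x**2 + b*x + c to vertex form y = a*(x - h)**2 + k."""
    h = -b / (2 * a)        # half the x coefficient (negated) after factoring out a
    k = c - a * h ** 2      # the constant remaining after completing the square
    return a, h, k

print(standard_to_vertex(3, 6, 2))    # (3, -1.0, -1.0)
print(standard_to_vertex(2, -16, 2))  # (2, 4.0, -30.0)
```

Both results match the vertex forms derived by hand above: y = 3(x + 1)² − 1 and y = 2(x − 4)² − 30.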
It's important to understand the techniques we've used, which include:\n- Factoring\n- Calculating the Square Root\n- Completing the Square\n- Using the vertex form of the equation\nThe underlying algebra for all of these techniques is the same, and this consistent algebra results in some shortcuts that you can memorize to make it easier to solve quadratic equations without going through all of the steps:\nCalculating the Vertex from Standard Form\nYou've already seen that converting a quadratic equation to the vertex form makes it easy to identify the vertex coordinates, as they're encoded as h and k in the equation itself - like this:\n\\begin{equation}y = a(x - \\textbf{h})^{2} + \\textbf{k}\\end{equation}\nHowever, what if you have an equation in standard form?:\n\\begin{equation}y = ax^{2} + bx + c\\end{equation}\nThere's a quick and easy technique you can apply to get the vertex coordinates. \n\nTo find h (which is the x-coordinate of the vertex), apply the following formula:\n\\begin{equation}h = \\frac{-b}{2a}\\end{equation}\nAfter you've found h, use it in the quadratic equation to solve for k:\n\\begin{equation}\\textbf{k} = a\\textbf{h}^{2} + b\\textbf{h} + c\\end{equation}\n\nFor example, here's the quadratic equation in standard form that we previously converted to the vertex form:\n\\begin{equation}y = 2x^{2} - 16x + 2\\end{equation}\nTo find h, we perform the following calculation:\n\\begin{equation}h = \\frac{-b}{2a}\\;\\;\\;\\;=\\;\\;\\;\\;\\frac{-(-16)}{2\\cdot2}\\;\\;\\;\\;=\\;\\;\\;\\;\\frac{16}{4}\\;\\;\\;\\;=\\;\\;\\;\\;4\\end{equation}\nThen we simply plug the value we've obtained for h into the quadratic equation in order to find k:\n\\begin{equation}k = 2\\cdot(4^{2}) - 16\\cdot4 + 2\\;\\;\\;\\;=\\;\\;\\;\\;32 - 64 + 2\\;\\;\\;\\;=\\;\\;\\;\\;-30\\end{equation}\nNote that a vertex at 4,-30 is also what we previously calculated for the vertex form of the same equation:\n\\begin{equation}y = 2(x - 4)^{2} - 30\\end{equation}\nThe Quadratic 
Formula\nAnother useful formula to remember is the quadratic formula, which makes it easy to calculate values for x when y is 0; or in other words:\n\\begin{equation}ax^{2} + bx + c = 0\\end{equation}\nHere's the formula:\n\\begin{equation}x = \\frac{-b \\pm \\sqrt{b^{2} - 4ac}}{2a}\\end{equation}\nLet's apply that formula to our equation, which you may remember looks like this:\n\\begin{equation}y = 2x^{2} - 16x + 2\\end{equation}\nOK, let's plug the a, b, and c variables from our equation into the quadratic formula:\n\\begin{equation}x = \\frac{-(-16) \\pm \\sqrt{(-16)^{2} - 4\\cdot2\\cdot2}}{2\\cdot2}\\end{equation}\nThis simplifies to:\n\\begin{equation}x = \\frac{16 \\pm \\sqrt{256 - 16}}{4}\\end{equation}\nThis in turn (with the help of a calculator) simplifies to:\n\\begin{equation}x = \\frac{16 \\pm 15.491933384829668}{4}\\end{equation}\nSo the value of x using the positive square root is:\n\\begin{equation}x = \\frac{16 + 15.491933384829668}{4}\\;\\;\\;\\;=7.872983346207417\\end{equation}\nAnd the value using the negative square root is:\n\\begin{equation}x = \\frac{16 - 15.491933384829668}{4}\\;\\;\\;\\;=0.12701665379258298\\end{equation}\nThe following Python code uses the vertex formula and the quadratic formula to calculate the vertex and the -x and +x for y = 0, and then plots the resulting parabola:", "def plot_parabola_from_formula (a, b, c):\n    import math\n\n    # Get vertex\n    print('CALCULATING THE VERTEX')\n    print('vx = -b / 2a')\n\n    nb = -b\n    a2 = 2*a\n    print('vx = ' + str(nb) + ' / ' + str(a2))\n\n    vx = -b/(2*a)\n    print('vx = ' + str(vx))\n\n    print('\\nvy = ax^2 + bx + c')\n    print('vy =' + str(a) + '(' + str(vx) + '^2) + ' + str(b) + '(' + str(vx) + ') + ' + str(c))\n\n    avx2 = a*vx**2\n    bvx = b*vx\n    print('vy =' + str(avx2) + ' + ' + str(bvx) + ' + ' + str(c))\n\n    vy = avx2 + bvx + c\n    print('vy = ' + str(vy))\n\n    print ('\\nv = ' + str(vx) + ',' + str(vy))\n\n    # Get +x and -x (showing intermediate calculations)\n    print('\\nCALCULATING -x AND +x FOR y=0')\n    print('x = -b +- sqrt(b^2 - 
4ac) / 2a')\n\n\n b2 = b**2\n ac4 = 4*a*c\n print('x = ' + str(nb) + '+-sqrt(' + str(b2) + ' - ' + str(ac4) + ')/' + str(a2))\n\n sr = math.sqrt(b2 - ac4)\n print('x = ' + str(nb) + ' +- ' + str(sr) + ' / ' + str(a2))\n print('-x = ' + str(nb) + ' - ' + str(sr) + ' / ' + str(a2))\n print('+x = ' + str(nb) + ' + ' + str(sr) + ' / ' + str(a2))\n\n posx = (nb + sr) / a2\n negx = (nb - sr) / a2\n print('-x = ' + str(negx))\n print('+x = ' + str(posx))\n\n\n print('\\nPLOTTING THE PARABOLA')\n import pandas as pd\n\n # Create a dataframe with an x column a range of x values to plot\n df = pd.DataFrame ({'x': range(round(vx)-10, round(vx)+11)})\n\n # Add a y column by applying the quadratic equation to x\n df['y'] = a*df['x']**2 + b*df['x'] + c\n\n # get min and max y values\n miny = df.y.min()\n maxy = df.y.max()\n\n # Plot the line\n %matplotlib inline\n from matplotlib import pyplot as plt\n\n plt.plot(df.x, df.y, color=\"grey\")\n plt.xlabel('x')\n plt.ylabel('y')\n plt.grid()\n plt.axhline()\n plt.axvline()\n\n # Plot calculated x values for y = 0\n plt.scatter([negx, posx],[0,0], color=\"green\")\n plt.annotate('-x=' + str(negx) + ',' + str(0),(negx, 0), xytext=(negx - 3, 5))\n plt.annotate('+x=' + str(posx) + ',' + str(0),(posx, 0), xytext=(posx - 3, -10))\n\n # plot the line of symmetry\n sx = [vx, vx]\n sy = [miny, maxy]\n plt.plot(sx,sy, color='magenta')\n\n # Annotate the vertex\n plt.scatter(vx,vy, color=\"red\")\n plt.annotate('v=' + str(vx) + ',' + str(vy),(vx, vy), xytext=(vx - 1, vy - 10))\n\n plt.show()\n \n\nplot_parabola_from_formula (2, -16, 2)" ]
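The quadratic formula walked through above condenses into a reusable helper (a sketch; solve_quadratic is a hypothetical name, assuming real roots, i.e. b² - 4ac ≥ 0):

```python
import math

def solve_quadratic(a, b, c):
    """Return (-x, +x) solving a*x**2 + b*x + c = 0 via the quadratic formula."""
    root = math.sqrt(b ** 2 - 4 * a * c)
    return ((-b - root) / (2 * a), (-b + root) / (2 * a))

neg_x, pos_x = solve_quadratic(2, -16, 2)
print(neg_x, pos_x)  # ~0.1270 and ~7.8730, matching the values computed above
```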
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bayespy/bayespy
doc/source/examples/regression.ipynb
mit
[ "This example is a Jupyter notebook. You can download it or run it interactively on mybinder.org.\nLinear regression\nData\nThe true parameters of the linear regression:", "import numpy as np\nk = 2 # slope\nc = 5 # bias\ns = 2 # noise standard deviation\n\n# This cell content is hidden from Sphinx-generated documentation\n%matplotlib inline\nnp.random.seed(42)", "Generate data:", "x = np.arange(10)\ny = k*x + c + s*np.random.randn(10)", "Model\nThe regressors, that is, the input data:", "X = np.vstack([x, np.ones(len(x))]).T", "Note that we added a column of ones to the regressor matrix for the bias term. We model the slope and the bias term in the same node so we do not factorize between them:", "from bayespy.nodes import GaussianARD\nB = GaussianARD(0, 1e-6, shape=(2,))", "The first element is the slope which multiplies x and the second element is the bias term which multiplies the constant ones. Now we compute the dot product of X and B:", "from bayespy.nodes import SumMultiply\nF = SumMultiply('i,i', B, X)", "The noise parameter:", "from bayespy.nodes import Gamma\ntau = Gamma(1e-3, 1e-3)", "The noisy observations:", "Y = GaussianARD(F, tau)", "Inference\nObserve the data:", "Y.observe(y)", "Construct the variational Bayesian (VB) inference engine by giving all stochastic nodes:", "from bayespy.inference import VB\nQ = VB(Y, B, tau)", "Iterate until convergence:", "Q.update(repeat=1000)", "Results\nCreate a simple predictive model for new inputs:", "xh = np.linspace(-5, 15, 100)\nXh = np.vstack([xh, np.ones(len(xh))]).T\nFh = SumMultiply('i,i', B, Xh)", "Note that we use the learned node B but create a new regressor array for predictions. 
Plot the predictive distribution of noiseless function values:", "import bayespy.plot as bpplt\nbpplt.pyplot.figure()\nbpplt.plot(Fh, x=xh, scale=2)\nbpplt.plot(y, x=x, color='r', marker='x', linestyle='None')\nbpplt.plot(k*xh+c, x=xh, color='r');", "Note that the above plot shows two standard deviations of the posterior of the noiseless function, thus the data points may lie well outside this range. The red line shows the true linear function. Next, plot the distribution of the noise parameter and the true value, $2^{-2} = 0.25$:", "bpplt.pyplot.figure()\nbpplt.pdf(tau, np.linspace(1e-6,1,100), color='k')\nbpplt.pyplot.axvline(s**(-2), color='r');", "The noise level is captured quite well, although the posterior has more mass on larger noise levels (smaller precision parameter values). Finally, plot the distribution of the regression parameters and mark the true value:", "bpplt.pyplot.figure();\nbpplt.contour(B, np.linspace(1,3,1000), np.linspace(1,9,1000),\n              n=10, colors='k');\nbpplt.plot(c, x=k, color='r', marker='x', linestyle='None',\n           markersize=10, markeredgewidth=2)\nbpplt.pyplot.xlabel(r'$k$');\nbpplt.pyplot.ylabel(r'$c$');", "In this case, the true parameters are captured well by the posterior distribution.\nImproving accuracy\nThe model can be improved by not factorizing between B and tau but learning their joint posterior distribution. This requires a slight modification to the model by using the GaussianGamma node:", "from bayespy.nodes import GaussianGamma\nB_tau = GaussianGamma(np.zeros(2), 1e-6*np.identity(2), 1e-3, 1e-3)", "This node contains both the regression parameter vector and the noise parameter. We compute the dot product similarly as before:", "F_tau = SumMultiply('i,i', B_tau, X)", "However, Y is constructed as follows:", "Y = GaussianARD(F_tau, 1)", "Because the noise parameter is already in F_tau we can give a constant one as the second argument. The total noise parameter for Y is the product of the noise parameter in F_tau and one. 
Now, inference is run similarly as before:", "Y.observe(y)\nQ = VB(Y, B_tau)\nQ.update(repeat=1000)", "Note that the method converges immediately. This happens because there is only one unobserved stochastic node so there is no need for iteration and the result is actually the exact true posterior distribution, not an approximation. Currently, the main drawback of using this approach is that BayesPy does not yet contain any plotting utilities for nodes that contain both Gaussian and gamma variables jointly.\nFurther extensions\nThe approach discussed in this example can easily be extended to non-linear regression and multivariate regression. For non-linear regression, the inputs are first transformed by some known non-linear functions and then linear regression is applied to this transformed data. For multivariate regression, X and B are concatenated appropriately: If there are more regressors, add more columns to both X and B. If there are more output dimensions, add plates to B." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tyarkoni/pliers
examples/Quickstart.ipynb
bsd-3-clause
[ "# Example-specific imports are in individual cells below; here we\n# just import stuff we reuse repeatedly.\nfrom pliers.extractors import merge_results\nfrom pliers.tests.utils import get_test_data_path\nfrom os.path import join\nfrom matplotlib import pyplot as plt\n\n%matplotlib inline", "Pliers Quickstart\nThis notebook contains a few examples that demonstrate how to extract various kinds of features with pliers. We start with very simple examples, and gradually scale up in complexity.\nFace detection\nThis first example uses the face_recognition package's location extraction method to detect the location of Barack Obama's face within a single image. The tools used to do this are completely local (i.e., the image isn't sent to an external API).\nWe output the result as a pandas DataFrame; the 'face_locations' column contains the coordinates of the bounding box in CSS format (i.e., top, right, bottom, and left edges).", "from pliers.extractors import FaceRecognitionFaceLocationsExtractor\n\n# A picture of Barack Obama\nimage = join(get_test_data_path(), 'image', 'obama.jpg')\n\n# Initialize Extractor\next = FaceRecognitionFaceLocationsExtractor()\n\n# Apply Extractor to image\nresult = ext.transform(image)\n\nresult.to_df()", "Face detection with multiple inputs\nWhat if we want to run the face detector on multiple images? Naively, we could of course just loop over input images and apply the Extractor to each one. But pliers makes this even easier for us, by natively accepting iterables as inputs. The following code is almost identical to the above snippet. 
The only notable difference is that, because the result we get back is now also a list (because the features extracted from each image are stored separately), we need to explicitly combine the results using the merge_results utility.", "from pliers.extractors import FaceRecognitionFaceLocationsExtractor, merge_results\n\nimages = ['apple.jpg', 'obama.jpg', 'thai_people.jpg']\nimages = [join(get_test_data_path(), 'image', img) for img in images]\n\next = FaceRecognitionFaceLocationsExtractor()\nresults = ext.transform(images)\ndf = merge_results(results)\ndf", "Note how the merged pandas DataFrame contains 5 rows, even though there were only 3 input images. The reason is that there are 5 detected faces across the inputs (0 in the first image, 1 in the second, and 4 in the third). You can discern the original sources from the stim_name and source_file columns.\nFace detection using a remote API\nThe above examples use an entirely local package (face_recognition) for feature extraction. In this next example, we use the Google Cloud Vision API to extract various face-related attributes from an image of Barack Obama. The syntax is identical to the first example, save for the use of the GoogleVisionAPIFaceExtractor instead of the FaceRecognitionFaceLocationsExtractor. Note, however, that successful execution of this code requires you to have a GOOGLE_APPLICATION_CREDENTIALS environment variable pointing to your Google credentials JSON file. See the documentation for more details.", "from pliers.extractors import GoogleVisionAPIFaceExtractor\n\next = GoogleVisionAPIFaceExtractor()\nimage = join(get_test_data_path(), 'image', 'obama.jpg')\nresult = ext.transform(image)\n\nresult.to_df(format='long', timing=False, object_id=False)", "Notice that the output in this case contains many more features. That's because the Google face recognition service gives us back a lot more information than just the location of the face within the image. 
Also, the example illustrates our ability to control the format of the output, by returning the data in \"long\" format, and suppressing output of columns that are uninformative in this context.\nSentiment analysis on text\nHere we use the VADER sentiment analyzer (Hutto & Gilbert, 2014) implemented in the nltk package to extract sentiment for (a) a coherent block of text, and (b) each word in the text separately. This example also introduces the Stim hierarchy of objects explicitly, whereas the initialization of Stim objects was implicit in the previous examples.\nTreat text as a single block", "from pliers.stimuli import TextStim, ComplexTextStim\nfrom pliers.extractors import VADERSentimentExtractor, merge_results\n\nraw = \"\"\"We're not claiming that VADER is a very good sentiment analysis tool.\nSentiment analysis is a really, really difficult problem. But just to make a\npoint, here are some clearly valenced words: disgusting, wonderful, poop,\nsunshine, smile.\"\"\"\n\n# First example: we treat all text as part of a single token\ntext = TextStim(text=raw)\n\next = VADERSentimentExtractor()\nresults = ext.transform(text)\nresults.to_df()", "Analyze each word individually", "# Second example: we construct a ComplexTextStim, which will\n# cause each word to be represented as a separate TextStim.\ntext = ComplexTextStim(text=raw)\n\next = VADERSentimentExtractor()\nresults = ext.transform(text)\n\n# Because results is a list of ExtractorResult objects\n# (one per word), we need to merge the results explicitly.\ndf = merge_results(results, object_id=False)\ndf.head(10)", "Extract chromagram from an audio clip\nWe have an audio clip, and we'd like to compute its chromagram (i.e., to extract the normalized energy in each of the 12 pitch classes). 
This is trivial thanks to pliers' support for the librosa package, which contains all kinds of useful functions for spectral feature extraction.", "from pliers.extractors import ChromaSTFTExtractor\n\naudio = join(get_test_data_path(), 'audio', 'barber.wav')\n# Audio is sampled at 11KHz; let's compute power in 1 sec bins\next = ChromaSTFTExtractor(hop_length=11025)\nresult = ext.transform(audio).to_df()\nresult.head(10)\n\n# And a plot of the chromagram...\nplt.imshow(result.iloc[:, 4:].values.T, aspect='auto')", "Sentiment analysis on speech transcribed from audio\nSo far all of our examples involve the application of a feature extractor to an input of the expected modality (e.g., a text sentiment analyzer applied to text, a face recognizer applied to an image, etc.). But we often want to extract features that require us to first convert our input to a different modality. Let's see how pliers handles this kind of situation.\nSay we have an audio clip. We want to run sentiment analysis on the audio. This requires us to first transcribe any speech contained in the audio. As it turns out, we don't have to do anything special here; we can just feed an audio clip directly to an Extractor class that expects a text input (e.g., the VADER sentiment analyzer we used earlier). How? Magic! Pliers is smart enough to implicitly convert the audio clip to a ComplexTextStim internally. By default, it does this using IBM's Watson speech transcription API. Which means you'll need to make sure your API key is set up properly in order for the code below to work. 
(But if you'd rather use, say, Google's Cloud Speech API, you could easily configure pliers to make that the default for audio-to-text conversion.)", "audio = join(get_test_data_path(), 'audio', 'homer.wav')\next = VADERSentimentExtractor()\nresult = ext.transform(audio)\ndf = merge_results(result, object_id=False)\ndf", "Object recognition on selectively sampled video frames\nA common scenario when analyzing video is to want to apply some kind of feature extraction tool to individual video frames (i.e., still images). Often, there's little to be gained by analyzing every single frame, so we want to sample frames with some specified frequency. The following example illustrates how easily this can be accomplished in pliers. It also demonstrates the concept of chaining multiple Transformer objects. We first convert a video to a series of images, and then apply an object-detection Extractor to each image.\nNote, as with other examples above, that the ClarifaiAPIImageExtractor wraps the Clarifai object recognition API, so you'll need to have an API key set up appropriately (if you don't have an API key, and don't want to set one up, you can replace ClarifaiAPIExtractor with TensorFlowInceptionV3Extractor to get similar, though not quite as accurate, results).", "from pliers.filters import FrameSamplingFilter\nfrom pliers.extractors import ClarifaiAPIImageExtractor, merge_results\n\nvideo = join(get_test_data_path(), 'video', 'small.mp4')\n\n# Sample 2 frames per second\nsampler = FrameSamplingFilter(hertz=2)\nframes = sampler.transform(video)\n\next = ClarifaiAPIImageExtractor()\nresults = ext.transform(frames)\ndf = merge_results(results)\ndf", "The resulting data frame has 41 columns (!), most of which are individual object labels like 'lego', 'toy', etc., selected for us by the Clarifai API on the basis of the content detected in the video (we could have also forced the API to return values for specific labels).\nMultiple extractors\nSo far we've only used a 
single Extractor at a time to extract information from our inputs. Now we'll start to get a little more ambitious. Let's say we have a video that we want to extract lots of different features from--in multiple modalities. Specifically, we want to extract all of the following:\n\nObject recognition and face detection applied to every 10th frame of the video;\nA second-by-second estimate of spectral power in the speech frequency band;\nA word-by-word speech transcript;\nEstimates of several lexical properties (e.g., word length, written word frequency, etc.) for every word in the transcript;\nSentiment analysis applied to the entire transcript.\n\nWe've already seen some of these features extracted individually, but now we're going to extract all of them at once. As it turns out, the code looks almost exactly like a concatenated version of several of our examples above.", "from pliers.tests.utils import get_test_data_path\nfrom os.path import join\nfrom pliers.filters import FrameSamplingFilter\nfrom pliers.converters import GoogleSpeechAPIConverter\nfrom pliers.extractors import (ClarifaiAPIImageExtractor, GoogleVisionAPIFaceExtractor,\n ComplexTextExtractor, PredefinedDictionaryExtractor,\n STFTAudioExtractor, VADERSentimentExtractor,\n merge_results)\n\nvideo = join(get_test_data_path(), 'video', 'obama_speech.mp4')\n\n# Store all the returned features in a single list (nested lists\n# are fine, the merge_results function will flatten everything)\nfeatures = []\n\n# Sample video frames and apply the image-based extractors\nsampler = FrameSamplingFilter(every=10)\nframes = sampler.transform(video)\n\nobj_ext = ClarifaiAPIImageExtractor()\nobj_features = obj_ext.transform(frames)\nfeatures.append(obj_features)\n\nface_ext = GoogleVisionAPIFaceExtractor()\nface_features = face_ext.transform(frames)\nfeatures.append(face_features)\n\n# Power in speech frequencies\nstft_ext = STFTAudioExtractor(freq_bins=[(100, 300)])\nspeech_features = 
stft_ext.transform(video)\nfeatures.append(speech_features)\n\n# Explicitly transcribe the video--we could also skip this step\n# and it would be done implicitly, but this way we can specify\n# that we want to use the Google Cloud Speech API rather than\n# the package default (IBM Watson)\ntext_conv = GoogleSpeechAPIConverter()\ntext = text_conv.transform(video)\n \n# Text-based features\ntext_ext = ComplexTextExtractor()\ntext_features = text_ext.transform(text)\nfeatures.append(text_features)\n\ndict_ext = PredefinedDictionaryExtractor(\n    variables=['affect/V.Mean.Sum', 'subtlexusfrequency/Lg10WF'])\nnorm_features = dict_ext.transform(text)\nfeatures.append(norm_features)\n\nsent_ext = VADERSentimentExtractor()\nsent_features = sent_ext.transform(text)\nfeatures.append(sent_features)\n\n# Ask for data in 'long' format, and code extractor name as a separate\n# column instead of prepending it to feature names.\ndf = merge_results(features, format='long', extractor_names='column')\n\n# Output rows in a sensible order\ndf.sort_values(['extractor', 'feature', 'onset', 'duration', 'order']).head(10)", "The resulting pandas DataFrame is quite large; even for our 9-second video, we get back over 3,000 rows! Importantly, though, the DataFrame contains all kinds of metadata that makes it easy to filter and sort the results in whatever way we might want to (e.g., we can filter on the extractor, stim class, onset or duration, etc.).\nMultiple extractors with a Graph\nThe above code listing is already pretty terse, and has the advantage of being explicit about every step. But if it's brevity we're after, pliers is happy to oblige us. The package includes a Graph abstraction that allows us to load an arbitrary number of Transformer objects into a graph, and execute them all in one shot. The code below is functionally identical to the last example, but only about a third of the length. 
It also requires fewer imports, since Transformer objects that we don't need to initialize with custom arguments can be passed to the Graph as strings. \nThe upshot of all this is that, in just a few lines of Python code, we're able to extract a broad range of multimodal features from video, image, audio or text inputs, using state-of-the-art tools and services!", "from pliers.tests.utils import get_test_data_path\nfrom os.path import join\nfrom pliers.graph import Graph\nfrom pliers.filters import FrameSamplingFilter\nfrom pliers.extractors import (PredefinedDictionaryExtractor, STFTAudioExtractor,\n                               merge_results)\n\n\nvideo = join(get_test_data_path(), 'video', 'obama_speech.mp4')\n\n# Define nodes\nnodes = [\n    (FrameSamplingFilter(every=10),\n     ['ClarifaiAPIImageExtractor', 'GoogleVisionAPIFaceExtractor']),\n    (STFTAudioExtractor(freq_bins=[(100, 300)])),\n    ('GoogleSpeechAPIConverter',\n     ['ComplexTextExtractor',\n      PredefinedDictionaryExtractor(['affect/V.Mean.Sum',\n                                     'subtlexusfrequency/Lg10WF']),\n      'VADERSentimentExtractor'])\n]\n\n# Initialize and execute Graph\ng = Graph(nodes)\n\n# Arguments to merge_results can be passed in here\ndf = g.transform(video, format='long', extractor_names='column')\n\n# Output rows in a sensible order\ndf.sort_values(['extractor', 'feature', 'onset', 'duration', 'order']).head(10)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cloudera/ibis
docs/source/user_guide/geospatial_analysis.ipynb
apache-2.0
[ "Ibis and Geospatial Operations\nOne of the most popular extensions to PostgreSQL is PostGIS,\nwhich adds support for storing geospatial geometries,\nas well as functionality for reasoning about and performing operations on those geometries.\nThis is a demo showing how to assemble ibis expressions for a PostGIS-enabled database.\nWe will be using a database that has been loaded with an Open Street Map\nextract for Southern California.\nThis extract can be found here,\nand loaded into PostGIS using a tool like ogr2ogr.\nPreparation\nWe first need to set up a demonstration database and load it with the sample data.\nIf you have Docker installed, you can download and launch a PostGIS database with the following:", "# Launch the postgis container.\n# This may take a bit of time if it needs to download the image.\n!docker run -d -p 5432:5432 --name postgis-db -e POSTGRES_PASSWORD=supersecret mdillon/postgis:9.6-alpine", "Next, we download our OSM extract (about 400 MB):", "!wget https://download.geofabrik.de/north-america/us/california/socal-latest.osm.pbf", "Finally, we load it into the database using ogr2ogr (this may take some time):", "!ogr2ogr -f PostgreSQL PG:\"dbname='postgres' user='postgres' password='supersecret' port=5432 host='localhost'\" -lco OVERWRITE=yes --config PG_USE_COPY YES socal-latest.osm.pbf", "Connecting to the database\nWe first make the relevant imports, and connect to the PostGIS database:", "import os\nimport geopandas\nimport ibis\n\n%matplotlib inline\n\nclient = ibis.postgres.connect(\n url='postgres://postgres:supersecret@localhost:5432/postgres'\n)", "Let's look at the tables available in the database:", "client.list_tables()", "As you can see, this Open Street Map extract stores its data according to the geometry type.\nLet's grab references to the polygon and line tables:", "polygons = client.table('multipolygons')\nlines = client.table('lines')", "Querying the data\nWe query the polygons table for shapes with an administrative 
level of 8,\nwhich corresponds to municipalities.\nWe also reproject some of the column names so we don't have a name collision later.", "cities = polygons[polygons.admin_level == '8']\n\ncities = cities[\n    cities.name.name('city_name'),\n    cities.wkb_geometry.name('city_geometry')\n]", "We can assemble a specific query for the city of Los Angeles,\nand execute it to get the geometry of the city.\nThis will be useful later when reasoning about other geospatial relationships in the LA area:", "los_angeles = cities[cities.city_name == 'Los Angeles']\nla_city = los_angeles.execute()\nla_city_geom = la_city.iloc[0].city_geometry\nla_city_geom", "Let's also extract freeways from the lines table,\nwhich are indicated by the value 'motorway' in the highway column:", "highways = lines[(lines.highway == 'motorway')]\nhighways = highways[\n    highways.name.name('highway_name'),\n    highways.wkb_geometry.name('highway_geometry')\n]", "Making a spatial join\nLet's test a spatial join by selecting all the highways that intersect the city of Los Angeles,\nor one of its neighbors.\nWe begin by assembling an expression for Los Angeles and its neighbors.\nWe consider a city to be a neighbor if it has any point of intersection\n(by this criterion we also get Los Angeles itself).\nWe can pass in the city geometry that we selected above when making our query by marking it as a literal value in ibis:", "la_neighbors_expr = cities[\n    cities.city_geometry.intersects(\n        ibis.literal(la_city_geom, type='multipolygon;4326:geometry')\n    )\n]\n\nla_neighbors = la_neighbors_expr.execute().dropna()\nla_neighbors", "Now we join the neighbors expression with the freeways expression,\non the condition that the highways intersect any of the city geometries:", "la_highways_expr = highways.inner_join(\n    la_neighbors_expr,\n    highways.highway_geometry.intersects(la_neighbors_expr.city_geometry)\n).materialize()\n\nla_highways = la_highways_expr.execute()\nla_highways.plot()", "Combining the results\nNow that 
we have made a number of queries and joins, let's combine them into a single plot.\nTo make the plot a bit nicer, we can also load some shapefiles for the coast and land:", "ocean = geopandas.read_file('https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/physical/ne_10m_ocean.zip')\nland = geopandas.read_file('https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/physical/ne_10m_land.zip')\n\nax = la_neighbors.dropna().plot(figsize=(16,16), cmap='rainbow', alpha=0.9)\nax.set_autoscale_on(False)\nax.set_axis_off()\nland.plot(ax=ax, color='tan', alpha=0.4)\n\nax = ocean.plot(ax=ax, color='navy')\nla_highways.plot(ax=ax, color='maroon')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
keras-team/autokeras
docs/ipynb/structured_data_regression.ipynb
apache-2.0
[ "!pip install autokeras\n\n\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\nfrom sklearn.datasets import fetch_california_housing\n\nimport autokeras as ak\n", "A Simple Example\nThe first step is to prepare your data. Here we use the California housing\ndataset\nas an example.", "\nhouse_dataset = fetch_california_housing()\ndf = pd.DataFrame(\n np.concatenate(\n (house_dataset.data, house_dataset.target.reshape(-1, 1)), axis=1\n ),\n columns=house_dataset.feature_names + [\"Price\"],\n)\ntrain_size = int(df.shape[0] * 0.9)\ndf[:train_size].to_csv(\"train.csv\", index=False)\ndf[train_size:].to_csv(\"eval.csv\", index=False)\ntrain_file_path = \"train.csv\"\ntest_file_path = \"eval.csv\"\n", "The second step is to run the\nStructuredDataRegressor.\nAs a quick demo, we set epochs to 10.\nYou can also leave the epochs unspecified for an adaptive number of epochs.", "# Initialize the structured data regressor.\nreg = ak.StructuredDataRegressor(\n overwrite=True, max_trials=3\n) # It tries 3 different models.\n# Feed the structured data regressor with training data.\nreg.fit(\n # The path to the train.csv file.\n train_file_path,\n # The name of the label column.\n \"Price\",\n epochs=10,\n)\n# Predict with the best model.\npredicted_y = reg.predict(test_file_path)\n# Evaluate the best model with testing data.\nprint(reg.evaluate(test_file_path, \"Price\"))\n", "Data Format\nThe AutoKeras StructuredDataRegressor is quite flexible for the data format.\nThe example above shows how to use the CSV files directly. Besides CSV files,\nit also supports numpy.ndarray, pandas.DataFrame or tf.data.Dataset. 
The\ndata should be two-dimensional with numerical or categorical values.\nFor the regression targets, it should be a vector of numerical values.\nAutoKeras accepts numpy.ndarray, pandas.DataFrame, or pandas.Series.\nThe following examples show how the data can be prepared with numpy.ndarray,\npandas.DataFrame, and tensorflow.data.Dataset.", "\n# x_train as pandas.DataFrame, y_train as pandas.Series\nx_train = pd.read_csv(train_file_path)\nprint(type(x_train)) # pandas.DataFrame\ny_train = x_train.pop(\"Price\")\nprint(type(y_train)) # pandas.Series\n\n# You can also use pandas.DataFrame for y_train.\ny_train = pd.DataFrame(y_train)\nprint(type(y_train)) # pandas.DataFrame\n\n# You can also use numpy.ndarray for x_train and y_train.\nx_train = x_train.to_numpy()\ny_train = y_train.to_numpy()\nprint(type(x_train)) # numpy.ndarray\nprint(type(y_train)) # numpy.ndarray\n\n# Preparing testing data.\nx_test = pd.read_csv(test_file_path)\ny_test = x_test.pop(\"Price\")\n\n# It tries 3 different models.\nreg = ak.StructuredDataRegressor(max_trials=3, overwrite=True)\n# Feed the structured data regressor with training data.\nreg.fit(x_train, y_train, epochs=10)\n# Predict with the best model.\npredicted_y = reg.predict(x_test)\n# Evaluate the best model with testing data.\nprint(reg.evaluate(x_test, y_test))\n", "The following code shows how to convert numpy.ndarray to tf.data.Dataset.", "train_set = tf.data.Dataset.from_tensor_slices((x_train, y_train))\ntest_set = tf.data.Dataset.from_tensor_slices((x_test, y_test))\n\nreg = ak.StructuredDataRegressor(max_trials=3, overwrite=True)\n# Feed the tensorflow Dataset to the regressor.\nreg.fit(train_set, epochs=10)\n# Predict with the best model.\npredicted_y = reg.predict(test_set)\n# Evaluate the best model with testing data.\nprint(reg.evaluate(test_set))\n", "You can also specify the column names and types for the data as follows. The\ncolumn_names is optional if the training data already have the column names,\ne.g. 
pandas.DataFrame, CSV file. Any column whose type is not specified will\nbe inferred from the training data.", "# Initialize the structured data regressor.\nreg = ak.StructuredDataRegressor(\n    column_names=[\n        \"MedInc\",\n        \"HouseAge\",\n        \"AveRooms\",\n        \"AveBedrms\",\n        \"Population\",\n        \"AveOccup\",\n        \"Latitude\",\n        \"Longitude\",\n    ],\n    column_types={\"MedInc\": \"numerical\", \"Latitude\": \"numerical\"},\n    max_trials=10, # It tries 10 different models.\n    overwrite=True,\n)\n\n", "Validation Data\nBy default, AutoKeras uses the last 20% of training data as validation data. As\nshown in the example below, you can use validation_split to specify the\npercentage.", "reg.fit(\n    x_train,\n    y_train,\n    # Split the training data and use the last 15% as validation data.\n    validation_split=0.15,\n    epochs=10,\n)\n", "You can also use your own validation set\ninstead of splitting it from the training data with validation_data.", "split = 500\nx_val = x_train[split:]\ny_val = y_train[split:]\nx_train = x_train[:split]\ny_train = y_train[:split]\nreg.fit(\n    x_train,\n    y_train,\n    # Use your own validation set.\n    validation_data=(x_val, y_val),\n    epochs=10,\n)\n", "Customized Search Space\nFor advanced users, you may customize your search space by using\nAutoModel instead of\nStructuredDataRegressor. You can configure the\nStructuredDataBlock for some high-level\nconfigurations, e.g., categorical_encoding for whether to use the\nCategoricalToNumerical. You can also leave\nthese arguments unspecified, so that the different choices are\ntuned automatically. 
See the following example for detail.", "\ninput_node = ak.StructuredDataInput()\noutput_node = ak.StructuredDataBlock(categorical_encoding=True)(input_node)\noutput_node = ak.RegressionHead()(output_node)\nreg = ak.AutoModel(\n    inputs=input_node, outputs=output_node, overwrite=True, max_trials=3\n)\nreg.fit(x_train, y_train, epochs=10)\n", "The usage of AutoModel is similar to the\nfunctional API of Keras.\nBasically, you are building a graph, whose edges are blocks and the nodes are\nintermediate outputs of blocks. To add an edge from input_node to\noutput_node, write output_node = ak.[some_block]([block_args])(input_node).\nYou can also use even more fine-grained blocks to customize the search space\nfurther. See the following example.", "\ninput_node = ak.StructuredDataInput()\noutput_node = ak.CategoricalToNumerical()(input_node)\noutput_node = ak.DenseBlock()(output_node)\noutput_node = ak.RegressionHead()(output_node)\nreg = ak.AutoModel(\n    inputs=input_node, outputs=output_node, max_trials=3, overwrite=True\n)\nreg.fit(x_train, y_train, epochs=10)\n", "You can also export the best model found by AutoKeras as a Keras Model.", "model = reg.export_model()\nmodel.summary()\n# numpy array in object (mixed type) is not supported.\n# you need to convert it to unicode or float first.\nmodel.predict(x_train)\n\n", "Reference\nStructuredDataRegressor,\nAutoModel,\nStructuredDataBlock,\nDenseBlock,\nStructuredDataInput,\nRegressionHead,\nCategoricalToNumerical." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
NewKnowledge/punk
examples/Feature Selection.ipynb
mit
[ "The goal of punk is to make available some wrappers for a variety of machine learning pipelines.\nThe pipelines are termed primitives and each primitive is designed with a functional programming approach in mind.\nAt the time of this writing, punk is being periodically updated. Any new primitives will be released as a pip-installable python package every Friday along with their corresponding annotation files for the broader D3M community.\nHere we will briefly show how the primitives in the punk package can be utilized.", "import punk\nhelp(punk)\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n\nfrom sklearn import datasets\nfrom sklearn.preprocessing import StandardScaler \nfrom sklearn.model_selection import train_test_split\n\nfrom punk import feature_selection", "Feature Selection\nFeature Selection for Classification Problems\nThe rfclassifier_feature_selection primitive takes in a dataset (training data along with labels) to output a ranking of features as shown below:", "# Wine dataset\ndf_wine = pd.read_csv('https://raw.githubusercontent.com/rasbt/'\n                      'python-machine-learning-book/master/code/datasets/wine/wine.data', \n                      header=None) \ncolumns = np.array(['Alcohol', 'Malic acid', 'Ash',\n                    'Alcalinity of ash', 'Magnesium', 'Total phenols',\n                    'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',\n                    'Color intensity', 'Hue', 'OD280/OD315 of diluted wines',\n                    'Proline'])\n# Split dataset \nX, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values \nX, _, y, _ = train_test_split(X, y, test_size=0.3, random_state=0)\n\n%%time\n\n# Run primitive\nrfc = feature_selection.RFFeatures(problem_type=\"classification\", \n                                   cv=3, scoring=\"accuracy\", verbose=0, n_jobs=1)\nrfc.fit((\"matrix\", \"matrix\"), (X, y))\nindices = rfc.transform()\n\nfeature_importances = rfc.feature_importances\n#feature_indices = rfc.indices\n\nfor i in range(len(columns)):\n    print(\"{:>2}) {:^30} {:.5f}\".format(i+1, \n
columns[indices[i]],\n        feature_importances[indices[i]]\n        ))\n    \nplt.figure(figsize=(9, 5))\nplt.title('Feature Importances')\nplt.bar(range(len(columns)), feature_importances[indices], color='lightblue', align='center')\n\nplt.xticks(range(len(columns)), columns[indices], rotation=90, fontsize=14)\nplt.xlim([-1, len(columns)])\nplt.tight_layout()\nplt.savefig('./random_forest.png', dpi=300)\nplt.show()", "Feature Selection for Regression Problems\nSimilarly, rfregressor_feature_selection can be used for regression type problems:", "# Get Boston dataset\nboston = datasets.load_boston()\nX, y = boston.data, boston.target\n\n%%time\n\n# Run primitive\nrfr = feature_selection.RFFeatures(problem_type=\"regression\", \n                                   cv=3, scoring=\"r2\", verbose=0, n_jobs=1)\nrfr.fit((\"matrix\", \"matrix\"), (X, y))\nindices = rfr.transform()\n\nfeature_importances = rfr.feature_importances\n#feature_indices = rfr.indices\n\ncolumns = boston.feature_names\nfor i in range(len(columns)):\n    print(\"{:>2}) {:^15} {:.5f}\".format(i+1, \n        columns[indices[i]],\n        feature_importances[indices[i]]\n        ))\n    \nplt.figure(figsize=(9, 5))\nplt.title('Feature Importances')\nplt.bar(range(len(columns)), feature_importances[indices], color='lightblue', align='center')\n\nplt.xticks(range(len(columns)), columns[indices], rotation=90, fontsize=14)\nplt.xlim([-1, len(columns)])\nplt.tight_layout()\nplt.savefig('./random_forest.png', dpi=300)\nplt.show()", "To provide some context, below we show the correlation coefficients between some of the features in the Boston dataset.\nNotice how the two features that were ranked the most important by our primitive are also the two features with the highest correlation coefficient (in absolute value) with the dependent variable MEDV.\nThis figure was taken from the Python Machine Learning book.", "import matplotlib.image as mpimg\n\nimg=mpimg.imread(\"heatmap.png\")\nplt.figure(figsize=(10, 10))\nplt.axis(\"off\")\nplt.imshow(img);", "Ranking Features by their 
Contributions to Principal Components\npca_feature_selection ranks features by the contributions each feature makes to each of the principal components, and in particular by their contributions to the first principal component.", "# Get iris dataset\niris = datasets.load_iris()\nsc = StandardScaler()\nX = sc.fit_transform(iris.data)\n\n# run primitive\niris_ranking = feature_selection.PCAFeatures()\niris_ranking.fit([\"matrix\"], X)\nimportances = iris_ranking.transform()\n\nfeature_names = np.array(iris.feature_names)\n\nprint(feature_names, '\\n')\nfor i in range(importances[\"importance_onallpcs\"].shape[0]):\n    print(\"{:>2}) {:^19}\".format(i+1, feature_names[iris_ranking.importance_onallpcs[i]]))\n\nplt.figure(figsize=(9, 5))\nplt.bar(range(1, 5), iris_ranking.explained_variance_ratio_, alpha=0.5, align='center')\nplt.step(range(1, 5), np.cumsum(iris_ranking.explained_variance_ratio_), where='mid')\nplt.ylabel('Explained variance ratio')\nplt.xlabel('Principal components')\nplt.xticks([1, 2, 3, 4])\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
QuantEcon/QuantEcon.notebooks
UN_demography.ipynb
bsd-3-clause
[ "Data Bootcamp: Demography\nDavid Backus\nWe love demography, specifically the dynamics of population growth and decline. You can drill down seemingly without end, as this terrific graphic about causes of death suggests. \nWe take a look here at the UN's population data: the age distribution of the population, life expectancy, fertility (the word we use for births), and mortality (deaths). Explore the website, it's filled with interesting data. There are other sources that cover longer time periods, and for some countries you can get detailed data on specific things (causes of death, for example). \nWe use a number of countries as examples, but Japan and China are the most striking. The code is written so that the country is easily changed. \nThis IPython notebook was created by Dave Backus, Chase Coleman, and Spencer Lyon for the NYU Stern course Data Bootcamp. \nPreliminaries\nImport statements and a date check for future reference.", "# import packages \nimport pandas as pd # data management\nimport matplotlib.pyplot as plt # graphics \nimport matplotlib as mpl # graphics parameters\nimport numpy as np # numerical calculations \n\n# IPython command, puts plots in notebook \n%matplotlib inline\n\n# check Python version \nimport datetime as dt \nimport sys\nprint('Today is', dt.date.today())\nprint('What version of Python are we running? \\n', sys.version, sep='') ", "Population by age\nWe have both \"estimates\" of the past (1950-2015) and \"projections\" of the future (out to 2100). Here we focus on the latter, specifically what the UN refers to as the medium variant: their middle of the road projection. It gives us a sense of how Japan's population might change over the next century. \nIt takes a few seconds to read the data. \nWhat are the numbers? 
Thousands of people in various 5-year age categories.", "url1 = 'http://esa.un.org/unpd/wpp/DVD/Files/'\nurl2 = '1_Indicators%20(Standard)/EXCEL_FILES/1_Population/'\nurl3 = 'WPP2015_POP_F07_1_POPULATION_BY_AGE_BOTH_SEXES.XLS'\nurl = url1 + url2 + url3 \n\ncols = [2, 5] + list(range(6,28))\n#est = pd.read_excel(url, sheetname=0, skiprows=16, parse_cols=cols, na_values=['…'])\nprj = pd.read_excel(url, sheetname=1, skiprows=16, parse_cols=cols, na_values=['…'])\n\nprj.head(3)[list(range(6))]\n\n# rename some variables \npop = prj \nnames = list(pop) \npop = pop.rename(columns={names[0]: 'Country', \n                          names[1]: 'Year'}) \n# select country and years \ncountry = ['Japan']\nyears = [2015, 2055, 2095]\npop = pop[pop['Country'].isin(country) & pop['Year'].isin(years)]\npop = pop.drop(['Country'], axis=1)\n\n# set index = Year \n# divide by 1000 to convert numbers from thousands to millions\npop = pop.set_index('Year')/1000\n\npop.head()[list(range(8))]\n\n# transpose (T) so that index = age \npop = pop.T\npop.head(3)\n\nax = pop.plot(kind='bar', \n              color='blue', \n              alpha=0.5, subplots=True, sharey=True, figsize=(8,6))\n\nfor axnum in range(len(ax)): \n    ax[axnum].set_title('')\n    ax[axnum].set_ylabel('Millions')\n    \nax[0].set_title('Population by age', fontsize=14, loc='left') ", "Exercise. What do you see here? What else would you like to know? \nExercise. Adapt the preceding code to do the same thing for China. Or some other country that sparks your interest. \nFertility: aka birth rates\nWe might wonder, why is the population falling in Japan? Other countries? Well, one reason is that birth rates are falling. Demographers call this fertility. Here we look at fertility using the same UN source as the previous example. We look at two variables: total fertility and fertility by age of mother. 
In both cases we explore the numbers to date, but the same files contain projections of future fertility.", "# fertility overall \nuft = 'http://esa.un.org/unpd/wpp/DVD/Files/'\nuft += '1_Indicators%20(Standard)/EXCEL_FILES/'\nuft += '2_Fertility/WPP2015_FERT_F04_TOTAL_FERTILITY.XLS'\n\ncols = [2] + list(range(5,18))\nftot = pd.read_excel(uft, sheetname=0, skiprows=16, parse_cols=cols, na_values=['…'])\n\nftot.head(3)[list(range(6))]\n\n# rename some variables \nnames = list(ftot)\nf = ftot.rename(columns={names[0]: 'Country'}) \n\n# select countries \ncountries = ['China', 'Japan', 'Germany', 'United States of America']\nf = f[f['Country'].isin(countries)]\n\n# shape\nf = f.set_index('Country').T \nf = f.rename(columns={'United States of America': 'United States'})\nf.tail(3)\n\nfig, ax = plt.subplots()\nf.plot(ax=ax, kind='line', alpha=0.5, lw=3, figsize=(6.5, 4))\nax.set_title('Fertility (births per woman, lifetime)', fontsize=14, loc='left')\nax.legend(loc='best', fontsize=10, handlelength=2, labelspacing=0.15)\nax.set_ylim(ymin=0)\nax.hlines(2.1, -1, 13, linestyles='dashed')\nax.text(8.5, 2.4, 'Replacement = 2.1')", "Exercise. What do you see here? What else would you like to know? \nExercise. Add Canada to the figure. How does it compare to the others? What other countries would you be interested in?\nLife expectancy\nOne of the bottom line summary numbers for mortality is life expectancy: if mortality rates fall, people live longer, on average. Here we look at life expectancy at birth. 
There are also numbers for life expectancy given that you live to some specific age; for example, life expectancy given that you survive to age 60.", "# life expectancy at birth, both sexes \nule = 'http://esa.un.org/unpd/wpp/DVD/Files/1_Indicators%20(Standard)/EXCEL_FILES/3_Mortality/'\nule += 'WPP2015_MORT_F07_1_LIFE_EXPECTANCY_0_BOTH_SEXES.XLS'\n\ncols = [2] + list(range(5,34))\nle = pd.read_excel(ule, sheetname=0, skiprows=16, parse_cols=cols, na_values=['…'])\n\nle.head(3)[list(range(10))]\n\n# rename some variables \noldname = list(le)[0]\nl = le.rename(columns={oldname: 'Country'}) \nl.head(3)[list(range(8))]\n\n# select countries \ncountries = ['China', 'Japan', 'Germany', 'United States of America']\nl = l[l['Country'].isin(countries)]\n\n# shape\nl = l.set_index('Country').T \nl = l.rename(columns={'United States of America': 'United States'})\nl.tail()\n\nfig, ax = plt.subplots()\nl.plot(ax=ax, kind='line', alpha=0.5, lw=3, figsize=(6, 8), grid=True)\nax.set_title('Life expectancy at birth', fontsize=14, loc='left')\nax.set_ylabel('Life expectancy in years')\nax.legend(loc='best', fontsize=10, handlelength=2, labelspacing=0.15)\nax.set_ylim(ymin=0)", "Exercise. What other countries would you like to see? Can you add them? The code below generates a list.", "countries = le.rename(columns={oldname: 'Country'})['Country']", "Exercise. Why do you think the US is falling behind? What would you look at to verify your conjecture?\nMortality: aka death rates\nAnother thing that affects the age distribution of the population is the mortality rate: if mortality rates fall, people live longer, on average. Here we look at how mortality rates have changed over the past 60+ years. Roughly speaking, people live an extra five years every generation. Which is a lot. Some of you will live to be a hundred. (Look at the 100+ age category over time for Japan.) \nThe experts look at mortality rates by age. The UN has a whole page devoted to mortality numbers. 
We take 5-year mortality rates from the Abridged Life Table. \nThe numbers are fractions of people in a given age group who die over a 5-year period. A value of 0.1 means that 90 percent of an age group is still alive in five years.", "# mortality overall \nurl = 'http://esa.un.org/unpd/wpp/DVD/Files/'\nurl += '1_Indicators%20(Standard)/EXCEL_FILES/3_Mortality/'\nurl += 'WPP2015_MORT_F17_1_ABRIDGED_LIFE_TABLE_BOTH_SEXES.XLS'\n\ncols = [2, 5, 6, 7, 9]\nmort = pd.read_excel(url, sheetname=0, skiprows=16, parse_cols=cols, na_values=['…'])\nmort.tail(3)\n\n# change names \nnames = list(mort)\nm = mort.rename(columns={names[0]: 'Country', names[2]: 'Age', names[3]: 'Interval', names[4]: 'Mortality'})\nm.head(3)", "Comment. At this point, we need to pivot the data. That's not something we've done before, so take it as simply something we can do easily if we have to. We're going to do this twice to produce different graphs: \n\nCompare countries for the same period. \nCompare different periods for the same country.", "# compare countries for most recent period\ncountries = ['China', 'Japan', 'Germany', 'United States of America']\nmt = m[m['Country'].isin(countries) & m['Interval'].isin([5]) & m['Period'].isin(['2010-2015'])] \nprint('Dimensions:', mt.shape) \n\nmp = mt.pivot(index='Age', columns='Country', values='Mortality') \nmp.head(3)\n\nfig, ax = plt.subplots()\nmp.plot(ax=ax, kind='line', alpha=0.5, linewidth=3, \n# logy=True, \n figsize=(6, 4))\nax.set_title('Mortality by age', fontsize=14, loc='left')\nax.set_ylabel('Mortality Rate (log scale)')\nax.legend(loc='best', fontsize=10, handlelength=2, labelspacing=0.15)", "Exercises.\n\nWhich country's old people have the lowest mortality?\nWhat do you see here for the US? Why is our life expectancy shorter?\nWhat other countries would you like to see? Can you adapt the code to show them? 
\nAnything else cross your mind?", "# compare periods for one country -- countries[0] is China \nmt = m[m['Country'].isin([countries[0]]) & m['Interval'].isin([5])] \nprint('Dimensions:', mt.shape) \n\nmp = mt.pivot(index='Age', columns='Period', values='Mortality') \nmp = mp[[0, 6, 12]]\nmp.head(3)\n\nfig, ax = plt.subplots()\nmp.plot(ax=ax, kind='line', alpha=0.5, linewidth=3, \n# logy=True, \n figsize=(6, 4))\nax.set_title('Mortality over time', fontsize=14, loc='left')\nax.set_ylabel('Mortality Rate (log scale)')\nax.legend(loc='best', fontsize=10, handlelength=2, labelspacing=0.15)", "Exercise. What do you see? What else would you like to know? \nExercise. Repeat this graph for the United States. How does it compare?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
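The demography notebook above leans on `DataFrame.pivot` to turn a long-format table into one column per period. As a standalone sketch of that step — the table below is a tiny made-up stand-in that reuses the notebook's column names ('Country', 'Period', 'Age', 'Mortality') but not its actual values:

```python
import pandas as pd

# hypothetical long-format mortality table, mirroring the notebook's columns
m = pd.DataFrame({
    'Country': ['China'] * 4,
    'Period': ['1950-1955', '1950-1955', '2010-2015', '2010-2015'],
    'Age': [0, 5, 0, 5],
    'Mortality': [0.225, 0.042, 0.011, 0.001],
})

# pivot: one row per Age, one column per Period, cells hold Mortality
mp = m.pivot(index='Age', columns='Period', values='Mortality')
print(mp.shape)  # → (2, 2)
```

`pivot` requires each (index, columns) pair to occur at most once, which is why the notebook first filters to a single period or a single country before pivoting.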
kit-cel/wt
nt1/vorlesung/3_mod_demod/qpsk_oqpsk.ipynb
gpl-2.0
[ "Content and Objectives\n\nShow that QPSK suffers from zero crossings and that these are avoided when using OQPSK\nRandom QPSK symbols are generated, upsampled and pulse-shaped, and the resulting signals are depicted as trajectories in the complex plane\n\nImport", "# importing\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport matplotlib\n\n# showing figures inline\n%matplotlib inline\n\n# plotting options \nfont = {'size' : 20}\nplt.rc('font', **font)\nplt.rc('text', usetex=matplotlib.checkdep_usetex(True))\n\nmatplotlib.rc('figure', figsize=(18, 6) )", "Function for determining the impulse response of an RC filter", "########################\n# find impulse response of an RC filter\n########################\ndef get_rc_ir(K, n_up, t_symbol, r):\n \n ''' \n Determines coefficients of an RC filter \n \n Formula out of: K.-D. Kammeyer, Nachrichtenübertragung\n At poles, l'Hospital was used \n \n NOTE: Length of the IR has to be an odd number\n \n IN: length of IR, upsampling factor, symbol time, roll-off factor\n OUT: filter coefficients\n '''\n\n # check that IR length is odd\n assert K % 2 == 1, 'Length of the impulse response should be an odd number'\n \n # map zero r to close-to-zero\n if r == 0:\n r = 1e-32\n\n\n # initialize output length and sample time\n rc = np.zeros( K )\n t_sample = t_symbol / n_up\n \n \n # time indices and sampled time\n k_steps = np.arange( -(K-1) / 2.0, (K-1) / 2.0 + 1 ) \n t_steps = k_steps * t_sample\n \n for k in k_steps.astype(int):\n \n if t_steps[k] == 0:\n rc[ k ] = 1. 
/ t_symbol\n \n elif np.abs( t_steps[k] ) == t_symbol / ( 2.0 * r ):\n rc[ k ] = r / ( 2.0 * t_symbol ) * np.sin( np.pi / ( 2.0 * r ) )\n \n else:\n rc[ k ] = np.sin( np.pi * t_steps[k] / t_symbol ) / np.pi / t_steps[k] \\\n * np.cos( r * np.pi * t_steps[k] / t_symbol ) \\\n / ( 1.0 - ( 2.0 * r * t_steps[k] / t_symbol )**2 )\n \n return rc", "Parameters", "# constellation points of modulation\nM = 4\nconstellation_points = [ np.exp( 1j * 2 * np.pi * m / M + 1j * np.pi / M ) for m in range( M ) ]\n\n\n# symbol time and number of symbols \nt_symb = 1.0 \nn_symb = 100", "Get QPSK and OQPSK signal", "# get filter impulse response\nr = 0.33\nn_up = 16 # samples per symbol\nsyms_per_filt = 4 # symbols per filter (plus-minus in both directions) \nK_filt = 2 * syms_per_filt * n_up + 1 # length of the fir filter\n\n\n# generate random vector and modulate the specified modulation scheme\ndata = np.random.randint( M, size = n_symb )\ns = [ constellation_points[ d ] for d in data ]\n\n\n# prepare sequence to be filtered\ns_up = np.zeros( n_symb * n_up, dtype=complex ) \ns_up[ : : n_up ] = s\n\n\n# get RC pulse \nrc = get_rc_ir( n_up * syms_per_filt * 2 + 1, n_up, t_symb, r )\n\n\n# pulse-shaping\ns_rc = np.convolve( rc, s_up )\n \n# extracting real and imaginary part \ns_rc_I = np.real( s_rc )\ns_rc_Q = np.imag( s_rc )\n\n\n# generating OQPSK by relatively shifting I and Q component\ns_oqpsk = s_rc_I[ : - n_up//2 ] + 1j * s_rc_Q[ n_up//2 : ] ", "Plotting", "# plotting \nplt.subplot(121)\nplt.plot( np.real( s_rc[syms_per_filt*n_up:-syms_per_filt*n_up] ), np.imag( s_rc[syms_per_filt*n_up:-syms_per_filt*n_up] ), linewidth=2.0, c=(0,0.59,0.51) ) \n\nplt.grid( True )\nplt.xlabel( '$\\mathrm{Re}\\\\{s(t)\\\\}$' ) \nplt.ylabel(' $\\mathrm{Im}\\\\{s(t)\\\\}$' ) \nplt.gca().set_aspect('equal', adjustable='box')\nplt.title( 'QPSK signal' )\n\nplt.subplot(122)\nplt.plot( np.real( s_oqpsk[syms_per_filt*n_up:-syms_per_filt*n_up] ), np.imag( s_oqpsk[syms_per_filt*n_up:-syms_per_filt*n_up] 
), linewidth=2.0, c=(0,0.59,0.51) ) \n\nplt.grid( True )\nplt.xlabel( '$\\mathrm{Re}\\\\{s(t)\\\\}$' ) \nplt.ylabel(' $\\mathrm{Im}\\\\{s(t)\\\\}$' ) \nplt.gca().set_aspect('equal', adjustable='box')\nplt.title( 'OQPSK signal' )\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
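A quick sanity check on the raised-cosine formula used in `get_rc_ir` above is the Nyquist ISI criterion: sampled at symbol-spaced instants, the pulse must equal 1/T at t = 0 and vanish at every other multiple of the symbol time, so neighboring symbols do not interfere. A minimal sketch, reimplementing the same closed form for a single time instant (the roll-off value here is just an example):

```python
import numpy as np

def rc_pulse(t, T=1.0, r=0.33):
    # closed-form raised-cosine impulse response, same formula as get_rc_ir
    if t == 0.0:
        return 1.0 / T
    if abs(t) == T / (2.0 * r):
        # pole of the closed form, resolved via l'Hospital
        return r / (2.0 * T) * np.sin(np.pi / (2.0 * r))
    return (np.sin(np.pi * t / T) / (np.pi * t)
            * np.cos(r * np.pi * t / T)
            / (1.0 - (2.0 * r * t / T) ** 2))

# symbol-spaced samples: 1/T at t = 0, (numerically) zero elsewhere
samples = [rc_pulse(float(n)) for n in range(-4, 5)]
print([round(s, 9) for s in samples])
```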
InsightLab/data-science-cookbook
2019/12-spark/12-spark-intro/Transformations.ipynb
mit
[ "map(func)\nReturns a new RDD formed by passing each element of the source RDD through the function func.\nExample:", "data = sc.parallelize(range(1, 11))\n\ndef duplicar(x): return x*x\n\n# data is an RDD\nres = data.map( duplicar )\n\nprint (res.collect())\n", "filter(func)\nReturns a new RDD formed by selecting those elements of the source RDD for which func, when applied to them, returns true.\nExample:", "data = sc.parallelize(range(1, 11))\n\nres = data.filter(lambda x: x%2 ==1)\n\nprint(res.collect())", "flatMap(func)\nSimilar to map, but each input item can be mapped to 0 or more output items (so func should return a list rather than a single item).\nExample:", "data = sc.parallelize([\"Linha 1\", \"Linha 2\"])\n\ndef partir(l): return l.split(\" \")\n\nprint ('map:', data.map(partir).collect())\n\nprint ('flatMap:', data.flatMap(partir).collect())", "intersection(otherRDD)\nReturns a new RDD that contains the intersection of the elements in the source RDD and the other RDD (the argument).\nExample:", "two_multiples = sc.parallelize(range(0, 20, 2))\n\nthree_multiples = sc.parallelize(range(0, 20, 3))\n\nprint (two_multiples.intersection(three_multiples).collect())", "groupByKey()\nWhen called on an RDD of (K, V) pairs, returns a dataset of (K, Iterable&lt;V&gt;) pairs.\nExample:", "data = sc.parallelize([ ('a', 1), ('b', 2), ('c', 3) , ('a', 2), ('b', 5), ('a', 3)])\n\nfor pair in data.groupByKey().collect():\n print (pair[0], list(pair[1]))\n", "reduceByKey(func)\nWhen called on an RDD of (K, V) pairs, returns an RDD of (K, V) pairs where the values for each key are aggregated using the reduce function func, which must be of type (V, V): V (it takes 2 values and returns a new value).\nExample:", "data = sc.parallelize([ ('a', 1), ('b', 2), ('c', 3) , ('a', 2), ('b', 5), ('a', 3)])\n\nres = data.reduceByKey( lambda x,y: x+y )\n\nprint (res.collect())\n", "sortByKey([ascending])\nWhen called on an RDD of (K, V) pairs where K is orderable, returns an RDD of (K, V) pairs sorted by key in ascending or descending order, as specified in the ascending argument.\nExample:", "data = sc.parallelize([ ('a', 1), ('b', 2), ('c', 3) , ('a', 2), ('b', 5), ('a', 3)])\n\nprint(data.sortByKey(ascending=False).collect())", "Links:\n* A more complete list of common transformations: http://spark.apache.org/docs/1.6.3/programming-guide.html#transformations\n* Spark RDD API documentation: http://spark.apache.org/docs/1.6.3/api/python/pyspark.html#pyspark.RDD" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
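The Spark transformations above need a running SparkContext (`sc`). For readers without one, the contract of `reduceByKey` can be mimicked in plain Python — note that in Spark the function must be associative and commutative, because partial results may be merged per partition in any order. A hypothetical stand-in:

```python
def reduce_by_key(pairs, func):
    # same contract as Spark's reduceByKey: merge the values of each key with func
    acc = {}
    for k, v in pairs:
        acc[k] = func(acc[k], v) if k in acc else v
    return sorted(acc.items())

# the same example data as in the notebook above
data = [('a', 1), ('b', 2), ('c', 3), ('a', 2), ('b', 5), ('a', 3)]
print(reduce_by_key(data, lambda x, y: x + y))  # → [('a', 6), ('b', 7), ('c', 3)]
```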
albahnsen/PracticalMachineLearningClass
exercises/E20-NeuralNetworksKeras.ipynb
mit
[ "E20- Neural Networks in Keras\nUse the Keras framework to solve the exercises below.", "import numpy as np\nimport keras \nimport pandas as pd\nimport matplotlib.pyplot as plt", "20.1 Predicting Student Admissions with Neural Networks\nIn this notebook, we predict student admissions to graduate schools based on the following data:\n\nGRE Scores (Test)\nTOEFL Scores (Test)\nUniversity Ranking (1-5)\nStatement of Purpose (SOP) and Letter of Recommendation Strength (out of 5)\nUndergraduate GPA Scores (Grades)\nResearch Experience (either 0 or 1)\n\nExercise: Design and train a shallow neural network to predict the chance of Admission for each entry. Choose the number of hidden layers and neurons that minimizes the error.", "# Import dataset\n\ndata = pd.read_csv('https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/master/datasets/universityGraduateAdmissions.csv', index_col=0)\ndata.head()\n\ndata.columns\n\nX = data.drop(data.columns[-1], axis=1)\nY = data[data.columns[-1]]\n\nfrom sklearn.model_selection import train_test_split\n\nxTrain, xTest, yTrain, yTest = train_test_split(X,Y,test_size=0.3, random_state=22)\n\nfrom keras import initializers\nfrom keras import optimizers\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import Dropout", "20.2 Decision Boundary -- Moons Dataset\nExercise: Use the Keras framework to find a decision boundary for the points in the make_moons dataset.", "# Create moons dataset.\n\nfrom sklearn.datasets.samples_generator import make_moons\n\nx_train, y_train = make_moons(n_samples=1000, noise= 0.2, random_state=3)\nplt.figure(figsize=(12, 8))\nplt.scatter(x_train[:, 0], x_train[:,1], c=y_train, s=40, cmap=plt.cm.Spectral);", "Hint: Use the next function to plot the decision boundary.", "model = 'Sequential neural network in keras'\n\ndef plot_decision_region(model, X, pred_fun):\n min_x = np.min(X[:, 0])\n max_x = np.max(X[:, 0])\n min_y = np.min(X[:, 1])\n max_y = np.max(X[:, 1])\n min_x 
= min_x - (max_x - min_x) * 0.05\n max_x = max_x + (max_x - min_x) * 0.05\n min_y = min_y - (max_y - min_y) * 0.05\n max_y = max_y + (max_y - min_y) * 0.05\n x_vals = np.linspace(min_x, max_x, 30)\n y_vals = np.linspace(min_y, max_y, 30)\n XX, YY = np.meshgrid(x_vals, y_vals)\n grid_r, grid_c = XX.shape\n ZZ = np.zeros((grid_r, grid_c))\n for i in range(grid_r):\n for j in range(grid_c):\n '''\n Here 'model' is the neural network you previously trained.\n '''\n ZZ[i, j] = pred_fun(model, XX[i, j], YY[i, j])\n plt.contourf(XX, YY, ZZ, 30, cmap = plt.cm.coolwarm, vmin= 0, vmax=1)\n plt.colorbar()\n plt.xlabel(\"x\")\n plt.ylabel(\"y\")\n \ndef pred_fun(model,x1, x2):\n '''\n Here 'model' is the neural network you previously trained.\n '''\n xval = np.array([[x1, x2]])\n return model.predict(xval)[0, 0]\n\nplt.figure(figsize = (8,16/3)) \n'''\nHere 'model' is the neural network you previously trained.\n'''\nplot_decision_region(model, x_train, pred_fun)\nplt.scatter(x_train[:, 0], x_train[:, 1], c=y_train, s=40, cmap=plt.cm.Spectral)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
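The grid evaluation inside `plot_decision_region` above does not depend on Keras at all, so it can be exercised with any callable that maps a point to a score. A sketch of the same grid construction, using an illustrative linear classifier as a stand-in for `model.predict`:

```python
import numpy as np

def decision_grid(pred_fun, X, n=30, pad=0.05):
    # build the same padded evaluation grid as the notebook's plot_decision_region
    min_x, min_y = X.min(axis=0)
    max_x, max_y = X.max(axis=0)
    x_vals = np.linspace(min_x - (max_x - min_x) * pad,
                         max_x + (max_x - min_x) * pad, n)
    y_vals = np.linspace(min_y - (max_y - min_y) * pad,
                         max_y + (max_y - min_y) * pad, n)
    XX, YY = np.meshgrid(x_vals, y_vals)
    # evaluate the predictor cell by cell, as the notebook's nested loops do
    ZZ = np.array([[pred_fun(x, y) for x, y in zip(rx, ry)]
                   for rx, ry in zip(XX, YY)])
    return XX, YY, ZZ

# illustrative stand-in for a trained model: a fixed linear decision rule
X = np.array([[0.0, 0.0], [1.0, 2.0], [-1.0, 1.0], [2.0, -1.0]])
XX, YY, ZZ = decision_grid(lambda x, y: float(x + y > 1.0), X)
print(ZZ.shape)  # → (30, 30)
```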
wangyu16/Introduction-to-Polymer-Science
.ipynb_checkpoints/RDRP_Kinetic_Simulator_Moments-checkpoint.ipynb
cc0-1.0
[ "<center>RDRP Kinetic Simulator - by the Method of Moments </center>\n<center>Version 1.0</center>\nAbout this program\nThis is a reversible-deactivation radical polymerization (RDRP) kinetic simulator based on the method of moments.[1][2] The types of polymerizations supported are conventional radical polymerization, normal atom transfer radical polymerization (ATRP), activators generated by electron transfer (AGET) ATRP, activators regenerated by electron transfer (ARGET) ATRP, supplemental activator and reducing agent (SARA) ATRP, electrochemically mediated ATRP (eATRP), ATRP by continuous feeding of activators (CFA), initiators for continuous activator regeneration (ICAR) ATRP, nitroxide mediated polymerization (NMP), and reversible addition-fragmentation chain transfer (RAFT). The input includes the reaction time, the initial concentrations of reagents, and the rate coefficients of all reactions involved. The results provide the concentration changes of all species vs. time, the monomer conversion vs. time, the number average molecular weight vs. time, molecular weight distribution vs. time, and the mole percent of end group loss vs. time. All results can be exported to a CSV file. \nFor more information, please visit https://wangyu16.github.io/macroarchilab/simulation/RDRP-kinetic-simulator/. \nTo download this program, please visit https://github.com/wangyu16/PolymerScienceEducation.\n[1]: Shiping Zhu, Modeling of molecular weight development in atom transfer radical polymerization, Macromol. Theory Simul. 1999, 8, 29–37.\n[2]: Erlita Mastan and Shiping Zhu, Method of moments: A versatile tool for deterministic modeling of polymerization kinetics, European Polymer Journal 2015, 68, 139–160.\nSystem requirement\nThis program is written in Python and runs in Jupyter Notebook. The easiest way to set up the Jupyter Notebook is to install Anaconda which includes Python, Jupyter Notebook and many Python packages for scientific programming. 
Make sure you choose Python 3.6 or above because it is required for this program. After installing Anaconda, you also need to install a Python package named 'chempy'. \nNot sure what you can change in the code?\nNo worries! If you are not familiar with programming, everything that you may change is located in the section \"Reaction conditions\". For example, in the following code block, you can change the red colored word 'normal' after \"Poly_type = \", and the green colored number 90000 after \"react_time = \". \n```python\n'sara' for SARA ATRP;\n'cfa' for ATRP by continuous feeding of activators;\n'icar' for ICAR ATRP.\nPoly_type = 'normal' \n######################\n2. Set the reaction time\n######################\nThis is a required section.\nSet reaction time limit in seconds.\nreact_time = 90000 \n```\nSimilarly, in the following code, only change the green colored numbers. \n```python\nSet the rate coefficients for the addition of the first monomer to the primary radical;\nthe termination between primary radicals; the termination between a primary radical and a propagating radical.\nk_p_R = 1.3e3 \nk_t_R = 1e9 \nk_t_R_Pn = 1e9\n```\nImport packages", "from chempy import ReactionSystem, Substance\nfrom chempy.kinetics.ode import get_odesys\nfrom collections import defaultdict\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom ipywidgets import interact\nimport datetime\nimport csv", "Reaction conditions", "###################################\n# 1. Select the type of reaction # \n###################################\n# This is a required section. \n# Choose the type of polymerization you want to simulate\n# by setting the value of Poly_type as:\n# 'conven' for conventional polymerization;\n# 'normal' for normal ATRP;\n# 'arget' for ARGET and AGET ATRP;\n# 'eatrp' for electrochemical ATRP;\n# 'sara' for SARA ATRP;\n# 'cfa' for ATRP by continuous feeding of activators;\n# 'icar' for ICAR ATRP;\n# 'nmp' for NMP;\n# 'raft' for RAFT. 
\nPoly_type = 'conven'\n\n############################ \n# 2. Set the reaction time #\n############################ \n# This is a required section.\n# Set reaction time limit in seconds. \n# For a 10 hour reaction, you can set the time as 36000 or 10*3600.\nreact_time = 15*3600 \n\n###########################################################\n# 3. Set the initial concentrations and rate coefficients #\n###########################################################\n \n##############################\n# 3.1. Monomer concentration #\n##############################\n# This is a required section.\n# Set the initial concentration of monomer. All the concentrations are in M/L unless otherwise specified.\nc0_M = 5 \n# Set the molecular weight of the monomer. \nMM = 100.12\n\n####################################\n# 3.2. Propagation and termination #\n####################################\n# This is a required section.\n# Set the rate coefficients for propagation and termination (by coupling or by disproportionation).\n# Chain transfer is neglected. \nk_p = 1.6e3 \nk_tc = 2e8 \nk_td = 1e8\n\n# Set the rate coefficients for the addition of the first monomer to the primary radical;\n# the termination between primary radicals; the termination between a primary radical and a propagating radical. \nk_p_R = 1.6e3\nk_t_R = 3e8\nk_t_R_Pn = 3e8 \n\n####################################################################\n# 3.3. For conventional radical polymerization, ICAR ATRP and RAFT #\n####################################################################\n# If thermal initiator (TI), e.g., AIBN, is used in your reaction,\n# set the initial concentration of the initiator.\nc0_TI = 0.1\n\n# Set the decomposition rate coefficient and the initiation efficiency for the initiator.\nk_d_TI = 1e-5\nf_TI = 0.6\n\n##############################\n# 3.4. For all kinds of RDRP #\n##############################\n# Set the initial concentration of RX. 
\nc0_RX = 0.05 \n\n##############################\n# 3.5. For all kinds of ATRP #\n##############################\n# Set the initial concentration of Cu(I) and Cu(II).\n# Assume there is sufficient amount of ligand to coordinate with all Cu(I) and Cu(II),\n# and the concentration of free ligand does not affect the reaction kinetics. \nc0_CuI = 0 \nc0_CuII = 5e-4 \n\n# Set the rate coefficients for activation of RX; deactivation of radical R; \n# activation of a polymer with a halogen chain end; deactivation of a propagating radical. \nk_a_0_atrp = 14\nk_d_0_atrp = 4.7e7 \nk_a_atrp = 140 \nk_d_atrp = 4.7e7\n\n################################## \n# 3.5.1. For AGET and ARGET ATRP #\n##################################\n# Set the initial concentration of the reducing agent. \nc0_Reduc = 2.5e-3 \n\n# Set the reduction rate coefficient\nk_reduc = 1e-1 \n\n####################\n# 3.5.2. For eATRP #\n####################\n# set the initial concentration of the electrons as a large number, e.g. > 200 times of RX, \n# which will remain nearly constant.\nc0_elec = 10 \n\n# Set the rate coefficient of electronic reduction. This will mimic an eATRP process with constant current. \nk_e_reduc = 1e-7 \n\n###################\n# 3.5.3. For SARA #\n###################\n# set the initial concentration of Cu(0) in cm^2/mL. \nc0_Cu0 = 2 \n\n# Set the rate coefficients of Cu(0) activation and comproportionation.\n# Deactivation by Cu(I) and disproportionation are neglected. \nk_comp = 1e-4 \nk_a_Cu0 = 1e-4 \n\n#######################################################\n# 3.5.4. For ATRP by continuous feeding of Activators #\n#######################################################\n# Set the initial concentration of Cu(I) source as any large number, e.g. > 200 times of RX, which will remain nearly constant. \nc0_CuIsour = 10 \n\n# Set the rate coefficient to minic different feeding rate. \nk_cfa = 1.4e-8 \n\n################\n# 3.6. 
For NMP #\n################\n# Set the rate coefficients of activation (i.e., dissociation) and deactivation (i.e., coupling). \nk_a_0_nmp = 1\nk_d_0_nmp = 1e8\nk_a_nmp = 1\nk_d_nmp = 1e8\n\n#################\n# 3.7. For RAFT #\n#################\n# Set the rate coefficients of activation (i.e., fragmentation) and deactivation (i.e., addition).\nk_a_0_raft = 1e6\nk_d_0_raft = 1e8\nk_a_raft = 1e6\nk_d_raft = 1e8", "Construct the reaction system", "# Initiate the reaction system with null value. \n# The rsys_orig is the system of the actual reactions.\n# The rsys_pseudo includes the pseudo reactions with pseudo species which are used to introduce 1st and 2nd order moments\n# and to adjust the reactions to take into account the initiation efficiency, etc. \nrsys_orig = ReactionSystem.from_string(\"\"\"\n \"\"\", substance_factory=Substance)\nrsys_pseudo = ReactionSystem.from_string(\"\"\"\n \"\"\", substance_factory=Substance)\n\n# Initial concentrations of monomer, dead chains, radicals and pseudo species of moments.\nc0 = defaultdict(float, {'M': c0_M, 'D': 0, 'PnD': 0, 'PnDPn': 0, 'R': 0, 'Pn': 0, 'M1_Pn': 0, \\\n 'M1_PnX': 0, 'M1_PnD': 0, 'M1_PnDPn': 0, 'M2_total': 0}) \n\n# Add propagation and termination reactions to the reaction system. 
\nrsys_orig += ReactionSystem.from_string(f\"\"\"\n R + M -> Pn; {k_p_R}\n Pn + M -> Pn; {k_p} \n R + R -> D + D; {0.5*k_t_R}\n Pn + R -> PnD; {0.5*k_t_R_Pn}\n Pn + Pn -> PnD + PnD; {0.5*k_td}\n Pn + Pn -> PnDPn; {0.5*k_tc}\n \"\"\", substance_factory=Substance)\nrsys_pseudo += ReactionSystem.from_string(f\"\"\"\n R + M -> R + M + M1_Pn + M2_total; {k_p}\n Pn + M -> Pn + M + M1_Pn + M2_total; {k_p}\n M1_Pn + M -> M1_Pn + M + M2_total; {2*k_p}\n M1_Pn + R -> M1_PnD + R; {0.5*k_t_R_Pn}\n M1_Pn + Pn -> M1_PnD + Pn; {k_td}\n M1_Pn + Pn -> M1_PnDPn + Pn; {k_tc}\n M1_Pn + M1_Pn -> M1_Pn + M1_Pn + M2_total; {k_tc}\n \"\"\", substance_factory=Substance)\n\n# For all kinds of RDRP\nif Poly_type != 'conven':\n c0.update({'RX': c0_RX, 'PnX':0})\n \n# For all kinds of ATRP \nif Poly_type != 'conven' and Poly_type != 'nmp' and Poly_type != 'raft':\n c0.update({'CuI': c0_CuI, 'CuII': c0_CuII})\n rsys_orig += ReactionSystem.from_string(f\"\"\"\n CuI + RX -> CuII + R; {k_a_0_atrp} \n CuII + R -> CuI + RX; {k_d_0_atrp}\n CuI + PnX -> CuII + Pn; {k_a_atrp}\n CuII + Pn -> CuI + PnX; {k_d_atrp} \n \"\"\", substance_factory=Substance) \n rsys_pseudo += ReactionSystem.from_string(f\"\"\"\n M1_PnX + CuI -> M1_Pn + CuI; {k_a_atrp}\n M1_Pn + CuII -> M1_PnX + CuII; {k_d_atrp}\n \"\"\", substance_factory=Substance) \n \n# For AGET and ARGET ATRP\nif Poly_type == 'arget':\n c0.update({'Reduc': c0_Reduc, 'ReducX':0})\n\n rsys_orig += ReactionSystem.from_string(f\"\"\"\n Reduc + CuII -> ReducX + CuI; {k_reduc}\n \"\"\", substance_factory=Substance) \n \n# For eATRP \nif Poly_type == 'eatrp':\n c0.update({'elec': c0_elec}) \n\n rsys_orig += ReactionSystem.from_string(f\"\"\"\n elec + CuII -> CuI; {k_e_reduc}\n \"\"\", substance_factory=Substance)\n\n# For SARA ATRP\nif Poly_type == 'sara':\n c0.update({'Cu0': c0_Cu0}) \n\n rsys_orig += ReactionSystem.from_string(f\"\"\"\n Cu0 + CuII -> CuI + CuI; {k_comp}\n Cu0 + RX -> CuI + R; {k_a_Cu0}\n \"\"\", substance_factory=Substance)\n\n# For ATRP by 
continuous feeding of activators\nif Poly_type == 'cfa':\n c0.update({'CuIsour': c0_CuIsour}) \n\n rsys_orig += ReactionSystem.from_string(f\"\"\"\n CuIsour -> CuI; {k_cfa}\n \"\"\", substance_factory=Substance) \n\n# For conventional radical polymerization ICAR ATRP and raft \nif Poly_type == 'conven' or Poly_type == 'icar' or Poly_type == 'raft':\n c0.update({'TI': c0_TI, 'PR': 0}) \n rsys_orig += ReactionSystem.from_string(f\"\"\"\n TI -> R + R; {k_d_TI}\n \"\"\", substance_factory=Substance)\n rsys_pseudo += ReactionSystem.from_string(f\"\"\"\n TI -> R + R; {-k_d_TI}\n TI -> PR + PR; {k_d_TI} \n TI -> TI + R + R; {f_TI*k_d_TI}\n \"\"\", substance_factory=Substance) \n \n# For NMP\nif Poly_type == 'nmp':\n c0.update({'X': 0})\n rsys_orig += ReactionSystem.from_string(f\"\"\"\n RX -> X + R; {k_a_0_nmp} \n X + R -> RX; {k_d_0_nmp}\n PnX -> X + Pn; {k_a_nmp}\n X + Pn -> PnX; {k_d_nmp} \n \"\"\", substance_factory=Substance) \n rsys_pseudo += ReactionSystem.from_string(f\"\"\"\n M1_PnX -> M1_Pn; {k_a_nmp}\n M1_Pn + X -> M1_PnX + X; {k_d_nmp}\n \"\"\", substance_factory=Substance) \n\n# For RAFT (need to be revised)\nif Poly_type == 'raft':\n c0.update({'RXR': 0, 'RXPn': 0, 'PnXPn': 0, 'RXM1_Pn': 0, 'M1_PnXM1_Pn': 0})\n rsys_orig += ReactionSystem.from_string(f\"\"\"\n RXR -> RX + R; {k_a_0_raft} \n RX + R -> RXR; {k_d_0_raft}\n R + PnX -> RXPn; {k_d_0_raft}\n RX + Pn -> RXPn; {k_d_raft}\n RXPn -> RX + Pn; {k_a_raft}\n RXPn -> PnX + R; {k_a_0_raft}\n PnXPn -> PnX + Pn; {k_a_raft}\n PnX + Pn -> PnXPn; {k_d_raft} \n \"\"\", substance_factory=Substance) \n rsys_pseudo += ReactionSystem.from_string(f\"\"\"\n RX + M1_Pn -> RXM1_Pn + RX; {k_d_0_raft}\n R + M1_PnX -> RXM1_Pn + R; {k_d_raft}\n RXM1_Pn -> M1_Pn; {k_a_raft}\n RXM1_Pn -> M1_PnX; {k_a_0_raft} \n M1_PnXM1_Pn -> M1_PnX + M1_Pn; {k_a_raft}\n M1_PnX + M1_Pn -> M1_PnXM1_Pn; {k_d_raft}\n \"\"\", substance_factory=Substance) \n\n# Show the reactions and the rate coefficients in the system\nrsys_orig\n\n# List the 
initial concentrations of reagents\nfor key in c0:\n if c0[key] != 0:\n print(key, ': ', c0[key])", "Simulation", "# Combine the actual and the pseudo reaction systems\nrsys = rsys_orig + rsys_pseudo\n\n# Get the differential equation system from the reactions.\nodesys, extra = get_odesys(rsys)\n\n# List the differential equations\nfor index, exp in enumerate(odesys.exprs):\n print(odesys.names[index], ': ', f'dy_{index}/dt', '= ', exp)", "The differential equation system includes not only the real species, i.e., those appear in the actual reactions, but also the pseudo species, e.g., the first and second order moments. In case of conventional radical polymerization, ICAR ATRP and RAFT, the thermal initiator decomposites with a rate coefficient k_d_TI, but the primary radicals used to initiate polymerization are produced with a rate coefficient f*k_d_TI. To take into account this initiation efficiency, a pseudo species, PR as the cumulative amount of primary radicals generated from the thermal initiator, is introduced to the differential equation system. If it is confusing, just ignore it.", "# Integration\ntout = sorted(np.concatenate((np.linspace(0, react_time), np.logspace(0, np.floor(np.log10(react_time))))))\nresult = odesys.integrate(tout, c0, integrator='scipy', method='BDF', atol=1e-11, rtol=1e-6)", "About the integrator and the integration method\nIf you are not familiar with the ode integrators and the numerical methods, just leave the default setting without any change. \nFor advanced users, the following information could be helpful. \nBy default, the program uses scipy.integrate.ode as the ODE integrator. The method “BDF” is used by default, which is an implicit multi-step variable-order (1 to 5) method based on a backward differentiation formula for the derivative approximation. The implicit method is suitable for stiff problems. Other available methods can be found from the official website of scipy. 
For more options, please visit the website of pyodesys. You can install packages and use integrators other than scipy. \nResults", "# Plot the concentrations of species in the reaction system vs time \nfig, axes = plt.subplots(1, 2, figsize=(12, 5))\nfor ax in axes:\n _ = result.plot(names=[k for k in rsys_orig.substances if k != 'CuIsour' \\\n and k != 'M' and k!= 'elec' and k != 'Cu0'], ax=ax) \n _ = ax.legend(loc='best', prop={'size': 9})\n _ = ax.set_xlabel('Time (s)')\n _ = ax.set_ylabel('Concentration')\n_ = axes[1].set_ylim([1e-10, 1e1])\n_ = axes[1].set_xscale('log')\n_ = axes[1].set_yscale('log')\n_ = fig.tight_layout()", "Meanings of the species produced during the polymerization\nR: primary radical either from RX or from thermal initiator\nD: termination product from primary radicals\nPn: propagating polymer chain with a chain end radical\nPnD: dead polymer chain produced by termination through disproportionation\nPnDPn: dead polymer chain produced by termination through coupling\nPnX: dormant polymer chain with an active chain end\nRXR, RXPn and PnXPn: the intermediate addition products in RAFT polymerization, i.e., the intermediate radicals.", "# Get concentrations and calculate conversion, Mn and Mw/Mn.\nConcM = result[1][:,result.odesys.names.index('M')]\nConcD = result[1][:,result.odesys.names.index('D')]\nConcPnD = result[1][:,result.odesys.names.index('PnD')]\nConcPnDPn = result[1][:,result.odesys.names.index('PnDPn')]\nConcPn = result[1][:,result.odesys.names.index('Pn')]\nConcM2_total = result[1][:,result.odesys.names.index('M2_total')]\n\nif Poly_type != 'conven':\n ConcPnX = result[1][:,result.odesys.names.index('PnX')]\nelse:\n ConcPnX = np.zeros(len(result[0]))\n \nif Poly_type == 'raft':\n ConcPnXPn = result[1][:,result.odesys.names.index('PnXPn')]\n ConcRXPn = result[1][:,result.odesys.names.index('RXPn')]\nelse:\n ConcPnXPn = np.zeros(len(result[0]))\n ConcRXPn = np.zeros(len(result[0]))\n\nConvM =(ConcM[0]-ConcM)/ConcM[0]\nLnM0_M = 
np.log(ConcM[0]/ConcM)\n\nMn = np.zeros(len(result[0]))\nMn_th = np.zeros(len(result[0]))\nMw = np.zeros(len(result[0]))\nMw_Mn = np.ones(len(result[0]))\n\nMn[1:] = (ConcM[0]-ConcM[1:])/(ConcPnX[1:] + ConcPnD[1:] + ConcPnDPn[1:] + \\\n ConcPn[1:] + ConcPnXPn[1:] + ConcRXPn[1:])*MM\nMn_th[1:] = (ConcM[0]-ConcM[1:])/c0_RX*MM\nMw[1:] = ConcM2_total[1:]/(ConcM[0]-ConcM[1:])*MM \nMw_Mn[1:] = Mw[1:]/Mn[1:]\n\n# Get mole percent of end group loss, i.e., Tmol%.\nif Poly_type != 'conven':\n x=result.odesys.names.index('RX')\n Tmol = 100*(ConcD + ConcPnD + 2*ConcPnDPn)/result[1][0,x]\nelse:\n Tmol = 100*np.ones(len(result[0]))\n \nresult_cal = [result[0],ConcM,ConvM,LnM0_M,Mn,Mw_Mn,Tmol]\n\n# Monomer conversion vs. time and first order kinetic plots. \nfig, axes = plt.subplots(1, 2, figsize=(10, 5))\ni=2\nfor ax in axes:\n _ = ax.plot(result_cal[0], result_cal[i])\n _ = ax.grid()\n i += 1\n_ = axes[0].set(xlabel = 'time (s)', ylabel='Conversion')\n_ = axes[1].set(xlabel = 'time (s)', ylabel='Ln([M]0/[M])')\n_ = fig.tight_layout()\n\n# Plot the Mn, Mw/Mn and Tmol% vs. conversion. \nfig, axes = plt.subplots(1, 3, figsize=(15, 5))\ni=4\nfor ax in axes:\n _ = ax.plot(result_cal[2][1:], result_cal[i][1:])\n _ = ax.grid()\n i += 1\n_ = axes[0].set(xlabel = 'conversion', ylabel='Mn')\n_ = axes[1].set(xlabel = 'conversion', ylabel='Mw/Mn')\n_ = axes[2].set(xlabel = 'conversion', ylabel='Tmol% (%)')\n_ = fig.tight_layout()\n\nif Poly_type == 'conven':\n print('Tmol% does not apply to conventional radical polymerization.')", "Export the results", "# Export the result to a CSV file.\n# The CSV file is saved in the same folder as this ipynb file. 
\n\nnow = datetime.datetime.now()\nfilename = str(now.strftime(\"%Y-%m-%d-%Hh%Mm%Ss\")) + '-RDRP-Simulation-' + str(Poly_type) + '.csv'\n\nwith open(filename, 'w', newline='') as f:\n thewriter = csv.writer(f)\n for rxn in rsys_orig.rxns:\n thewriter.writerow([rxn])\n if Poly_type == 'conven':\n thewriter.writerow(['Tmol% does not apply to conventional radical polymerization.'])\n thewriter.writerow(['time (s)']+[k for k in rsys_orig.substances]+['conversion']+['ln([M]0/[M])']\\\n +['Mn']+['Mw/Mn']+['Tmol% (%)'])\n i=0\n for concen in result[1]:\n thewriter.writerow([result_cal[0][i]]+[concen[result.odesys.names.index(k)] for k in rsys_orig.substances]\\\n +[ConvM[i]]+[LnM0_M[i]]+[Mn[i]]+[Mw_Mn[i]]+[Tmol[i]])\n i+=1" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AllenDowney/DataExploration
distribution.ipynb
mit
[ "Visualizing distributions\nCopyright 2015 Allen Downey\nLicense: Creative Commons Attribution 4.0 International", "from __future__ import print_function, division\n\nimport numpy as np\nimport thinkstats2\n\nimport nsfg\n\nimport thinkplot\n\n%matplotlib inline", "Let's load up the NSFG pregnancy data.", "preg = nsfg.ReadFemPreg()\npreg.shape", "And select the rows corresponding to live births.", "live = preg[preg.outcome == 1]\nlive.shape", "We can use describe to generate summary statistics.", "live.prglngth.describe()", "But there is no substitute for looking at the whole distribution, not just a summary.\nOne way to represent a distribution is a Probability Mass Function (PMF).\nthinkstats2 provides a class named Pmf that represents a PMF.\nA Pmf object contains a Python dictionary that maps from each possible value to its probability (that is, how often it appears in the dataset).\nItems returns a sorted list of values and their probabilities:", "pmf = thinkstats2.Pmf(live.prglngth)\nfor val, prob in pmf.Items():\n print(val, prob)", "There are some values here that are certainly errors, and some that are suspect. 
For now we'll take them at face value.\nThere are several ways to visualize Pmfs.\nthinkplot provides functions to plot Pmfs and other types from thinkstats2.\nthinkplot.Hist renders a Pmf as a histogram (bar chart).", "thinkplot.PrePlot(1)\nthinkplot.Hist(pmf)\nthinkplot.Config(xlabel='Pregnancy length (weeks)',\n ylabel='PMF', \n xlim=[0, 50],\n legend=False)", "Pmf renders the outline of the histogram.", "thinkplot.PrePlot(1)\nthinkplot.Pmf(pmf)\nthinkplot.Config(xlabel='Pregnancy length (weeks)',\n ylabel='PMF', \n xlim=[0, 50])", "Pdf tries to render the Pmf with a smooth curve.", "thinkplot.PrePlot(1)\nthinkplot.Pdf(pmf)\nthinkplot.Config(xlabel='Pregnancy length (weeks)',\n ylabel='PMF', \n xlim=[0, 50])", "I started with PMFs and histograms because they are familiar, but I think they are bad for exploration.\nFor one thing, they don't hold up well when the number of values increases.", "pmf_weight = thinkstats2.Pmf(live.totalwgt_lb)\nthinkplot.PrePlot(1)\nthinkplot.Hist(pmf_weight)\nthinkplot.Config(xlabel='Birth weight (lbs)',\n ylabel='PMF')\n\npmf_weight = thinkstats2.Pmf(live.totalwgt_lb)\nthinkplot.PrePlot(1)\nthinkplot.Pmf(pmf_weight)\nthinkplot.Config(xlabel='Birth weight (lbs)',\n ylabel='PMF')\n\npmf_weight = thinkstats2.Pmf(live.totalwgt_lb)\nthinkplot.PrePlot(1)\nthinkplot.Pdf(pmf_weight)\nthinkplot.Config(xlabel='Birth weight (lbs)',\n ylabel='PMF')", "Sometimes you can make the visualization better by binning the data:", "def bin_and_pmf(weights, num_bins):\n bins = np.linspace(0, 15.5, num_bins)\n indices = np.digitize(weights, bins)\n values = bins[indices]\n pmf_weight = thinkstats2.Pmf(values)\n\n thinkplot.PrePlot(1)\n thinkplot.Pdf(pmf_weight)\n thinkplot.Config(xlabel='Birth weight (lbs)',\n ylabel='PMF')\n \nbin_and_pmf(live.totalwgt_lb.dropna(), 50)", "Binning is simple enough, but it is still a nuisance.\nAnd it is fragile. If you have too many bins, the result is noisy. 
Too few, you obliterate features that might be important.\nAnd if the bin boundaries don't align well with data boundaries, you can create artifacts.", "bin_and_pmf(live.totalwgt_lb.dropna(), 51)", "There must be a better way!\nIndeed there is. In my opinion, cumulative distribution functions (CDFs) are a better choice for data exploration.\nYou don't have to bin the data or make any other transformation.\nthinkstats2 provides a function that makes CDFs, and thinkplot provides a function for plotting them.", "data = [1, 2, 2, 5]\npmf = thinkstats2.Pmf(data)\npmf\n\ncdf = thinkstats2.Cdf(data)\ncdf\n\nthinkplot.PrePlot(1)\nthinkplot.Cdf(cdf)\nthinkplot.Config(ylabel='CDF', \n xlim=[0.5, 5.5])", "Let's see what that looks like for real data.", "cdf_weight = thinkstats2.Cdf(live.totalwgt_lb)\nthinkplot.PrePlot(1)\nthinkplot.Cdf(cdf_weight)\nthinkplot.Config(xlabel='Birth weight (lbs)',\n ylabel='CDF')", "A CDF is a map from each value to its cumulative probability.\nYou can use it to compute percentiles:", "cdf_weight.Percentile(50)", "Or if you are given a value, you can compute its percentile rank.", "cdf_weight.PercentileRank(8.3)", "Looking at the CDF, it is easy to see the range of values, the central tendency and spread, as well as the overall shape of the distribution.\nIf there are particular values that appear often, they are visible as vertical lines. If there are ranges where no values appear, they are visible as horizontal lines.\nAnd one of the best things about CDFs is that you can plot several of them on the same axes for comparison. 
For example, let's see if first babies are lighter than others.", "firsts = live[live.birthord == 1]\nothers = live[live.birthord != 1]\nlen(firsts), len(others)\n\ncdf_firsts = thinkstats2.Cdf(firsts.totalwgt_lb, label='firsts')\ncdf_others = thinkstats2.Cdf(others.totalwgt_lb, label='others')\n\nthinkplot.PrePlot(2)\nthinkplot.Cdfs([cdf_firsts, cdf_others])\nthinkplot.Config(xlabel='Birth weight (lbs)',\n ylabel='CDF',\n legend=True)", "Plotting the two distributions on the same axes, we can see that the distribution for others is shifted to the right; that is, toward higher values. And we can see that the shift is close to the same over the whole distribution.\nLet's see how well we can make this comparison with PMFs:", "pmf_firsts = thinkstats2.Pmf(firsts.totalwgt_lb, label='firsts')\npmf_others = thinkstats2.Pmf(others.totalwgt_lb, label='others')\n\nthinkplot.PrePlot(2)\nthinkplot.Pdfs([pmf_firsts, pmf_others])\nthinkplot.Config(xlabel='Birth weight (lbs)',\n ylabel='PMF')", "With PMFs it is hard to compare distributions. And if you plot more than two PMFs on the same axes, it is likely to be a mess.\nReading CDFs takes some getting used to, but it is worth it! For data exploration and visualization, CDFs are better than PMFs in almost every way.\nBut if you really have to generate a PMF, a good option is to estimate a smoothed PDF using Kernel Density Estimation (KDE).", "pdf_firsts = thinkstats2.EstimatedPdf(firsts.totalwgt_lb.dropna(), label='firsts')\npdf_others = thinkstats2.EstimatedPdf(others.totalwgt_lb.dropna(), label='others')\n\nthinkplot.PrePlot(2)\nthinkplot.Pdfs([pdf_firsts, pdf_others])\nthinkplot.Config(xlabel='Birth weight (lbs)',\n ylabel='PDF')", "Like binning, KDE involves smoothing the data, so you lose some information.\nAnd you might have to tune the \"bandwidth\" parameter to get the right amount of smoothing." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fajifr/recontent
gensim_trial.ipynb
mit
[ "import logging\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)\nfrom gensim import corpora\nimport urllib\nfrom pprint import pprint\nfrom nltk.corpus import stopwords\nfrom collections import defaultdict", "First start with a collection of 5 abstracts, manually scraped. \nClean out the stop words and words that only appear once. \nConstruct a dictionary and it has 75 words. \nThe output is the words in dictionary and their corresponding integer IDs.", "doc1=\"Electron acceleration in a post-flare decimetric continuum source Prasad Subramanian, S. M. White, M. Karlický, R. Sych, H. S. Sawant, S. Ananthakrishnan(Submitted on 23 Mar 2007)Aims: To calculate the power budget for electron acceleration and the efficiency of the plasma emission mechanism in a post-flare decimetric continuum source. Methods: We have imaged a high brightness temperature (∼109K) post-flare source at 1060 MHz with the Giant Metrewave Radio Telescope (GMRT). We use information from these images and the dynamic spectrum from the Hiraiso spectrograph together with the theoretical method described in Subramanian & Becker (2006) to calculate the power input to the electron acceleration process. The method assumes that the electrons are accelerated via a second-order Fermi acceleration mechanism. Results: We find that the power input to the nonthermal electrons is in the range 3×1025--1026 erg/s. The efficiency of the overall plasma emission process starting from electron acceleration and culminating in the observed emission could range from 2.87×10−9 to 2.38×10−8.\"\n\ndoc2=\"Local (shearing box) simulations of the nonlinear evolution of the magnetorotational instability in a collisionless plasma show that angular momentum transport by pressure anisotropy (p⊥≠p∥, where the directions are defined with respect to the local magnetic field) is comparable to that due to the Maxwell and Reynolds stresses. 
Pressure anisotropy, which is effectively a large-scale viscosity, arises because of adiabatic invariants related to p⊥ and p∥ in a fluctuating magnetic field. In a collisionless plasma, the magnitude of the pressure anisotropy, and thus the viscosity, is determined by kinetic instabilities at the cyclotron frequency. Our simulations show that ∼50 % of the gravitational potential energy is directly converted into heat at large scales by the viscous stress (the remaining energy is lost to grid-scale numerical dissipation of kinetic and magnetic energy). We show that electrons receive a significant fraction (∼[Te/Ti]1/2) of this dissipated energy. Employing this heating by an anisotropic viscous stress in one dimensional models of radiatively inefficient accretion flows, we find that the radiative efficiency of the flow is greater than 0.5% for M˙≳10−4M˙Edd. Thus a low accretion rate, rather than just a low radiative efficiency, is necessary to explain the low luminosity of many accreting black holes. For Sgr A* in the Galactic Center, our predicted radiative efficiencies imply an accretion rate of ≈3×10−8M⊙yr−1 and an electron temperature of ≈3×1010 K at ≈10 Schwarzschild radii; the latter is consistent with the brightness temperature inferred from VLBI observations.\"\n\ndoc3=\"We review the theory of electron-conduction opacity, a fundamental ingredient in the computation of low-mass stellar models; shortcomings and limitations of the existing calculations used in stellar evolution are discussed. We then present new determinations of the electron-conduction opacity in stellar conditions for an arbitrary chemical composition, that improve over previous works and, most importantly, cover the whole parameter space relevant to stellar evolution models (i.e., both the regime of partial and high electron degeneracy). A detailed comparison with the currently used tabulations is also performed. 
The impact of our new opacities on the evolution of low-mass stars is assessed by computing stellar models along both the H- and He-burning evolutionary phases, as well as Main Sequence models of very low-mass stars and white dwarf cooling tracks.\"\n\ndoc4=\"The best measurement of the cosmic ray positron flux available today was performed by the HEAT balloon experiment more than 10 years ago. Given the limitations in weight and power consumption for balloon experiments, a novel approach was needed to design a detector which could increase the existing data by more than a factor of 100. Using silicon photomultipliers for the readout of a scintillating fiber tracker and of an imaging electromagnetic calorimeter, the PEBS detector features a large geometrical acceptance of 2500 cm^2 sr for positrons, a total weight of 1500 kg and a power consumption of 600 W. The experiment is intended to measure cosmic ray particle spectra for a period of up to 20 days at an altitude of 40 km circulating the North or South Pole. A full Geant 4 simulation of the detector concept has been developed and key elements have been verified in a testbeam in October 2006 at CERN.\"\n\ndoc5=\"The fluorescence detection of ultra high energy (> 10^18 eV) cosmic rays requires a detailed knowledge of the fluorescence light emission from nitrogen molecules, which are excited by the cosmic ray shower particles along their path in the atmosphere. We have made a precise measurement of the fluorescence light spectrum excited by MeV electrons in dry air. We measured the relative intensities of 34 fluorescence bands in the wavelength range from 284 to 429 nm with a high resolution spectrograph. The pressure dependence of the fluorescence spectrum was also measured from a few hPa up to atmospheric pressure. Relative intensities and collisional quenching reference pressures for bands due to transitions from a common upper level were found in agreement with theoretical expectations. 
The presence of argon in air was found to have a negligible effect on the fluorescence yield. We estimated that the systematic uncertainty on the cosmic ray shower energy due to the pressure dependence of the fluorescence spectrum is reduced to a level of 1% by the AIRFLY results presented in this paper.\"\n\ndocuments=[doc1,doc2,doc3,doc4,doc5]\n\n# remove words from the stopwords and tokenize\n#stoplist = set('for a of the and to in'.split())\ntexts = [[word for word in document.lower().split() if word not in stopwords.words('english')] for document in documents]\n# remove words that appear only once\nfrequency = defaultdict(int)\nfor text in texts:\n for token in text:\n frequency[token] += 1\n#texts contain all the key words\ntexts = [[token for token in text if frequency[token] > 1] for text in texts]\n#save texts as a dictionary\ndictionary = corpora.Dictionary(texts)\ndictionary.save('firstdic.dict') # store the dictionary, for future reference\n#print(dictionary)\n#look at the unique integer IDs for the 75 words\nprint(dictionary.token2id)", "Now test our small dictionary on a new abstract. It returns with a vector that represents [[word ID, frequency]]", "#test a new document\ndoc6=\"This paper presents the effects of electron-positron pair production on the linear growth of the resistive hose instability of a filamentary beam that could lead to snake-like distortion. For both the rectangular radial density profile and the diffuse profile reflecting the Bennett-type equilibrium for a self-collimating flow, the modified eigenvalue equations are derived from a Vlasov-Maxwell equation. While for the simple rectangular profile, current perturbation is localized at the sharp radial edge, for the realistic Bennett profile with an obscure edge, it is non-locally distributed over the entire beam, removing catastrophic wave-particle resonance. 
The pair production effects likely decrease the betatron frequency, and expand the beam radius to increase the resistive decay time of the perturbed current; these also lead to a reduction of the growth rate. It is shown that, for the Bennett profile case, the characteristic growth distance for a preferential mode can exceed the observational length-scale of astrophysical jets. This might provide the key to the problem of the stabilized transport of the astrophysical jets including extragalactic jets up to Mpc (∼3×1024 cm) scales.\"\nnew_vec = dictionary.doc2bow(doc6.lower().split())\nprint(new_vec)", "Now use the arxiv API instead of the manual scraping; try it on one abstract and see how it works:", "#this is how to grab the summary from each api link\nurl = 'http://export.arxiv.org/api/query?search_query=all:electron&start=0&max_results=1'\ndata=urllib.request.urlopen(url).read()\n#datastring=str(data,'utf-8')\ndatasummary=str(data,'utf-8').split(\"<summary>\",1)[1].split('</summary',1)[0]\n#convert the bytes to string, split out the summary\nprint(datasummary)", "Now try our small dictionary on 10 abstracts:", "for articleN in range(0,10):\n url = 'http://export.arxiv.org/api/query?search_query=all:electron&start='+str(articleN)+'&max_results=1'\n print(url)\n data=urllib.request.urlopen(url).read()\n datasummary=str(data,'utf-8').split(\"<summary>\",1)[1].split('</summary',1)[0]\n vec=dictionary.doc2bow(datasummary.lower().split())\n print(vec)", "Now build a class to stream the corpus:", "class MyCorpus(object):\n def __iter__(self):\n for articleN in range(0,5):\n url='http://export.arxiv.org/api/query?search_query=all:electron&start='+str(articleN)+'&max_results=1'\n data=urllib.request.urlopen(url).read()\n datasummary=str(data,'utf-8').split(\"<summary>\",1)[1].split('</summary',1)[0]\n yield dictionary.doc2bow(datasummary.lower().split())\n\ncorpus_memory_friendly=MyCorpus()\nprint(corpus_memory_friendly)\n\nfor vector in corpus_memory_friendly:\n
print(vector)\n\nclass MyCorpus(object):\n def __iter__(self):\n for line in open('mycorpus.txt'): \n #assume there's one document per line, tokens separated by whitespace\n yield dictionary.doc2bow(line.lower().split())" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
agaveapi/SC17-container-tutorial
content/notebooks/Visualization.ipynb
bsd-3-clause
[ "%cd ~/agave", "<h2>Fun With Visualization</h2>\n\nThe examples in this section are designed for use with Funwave's output, which is nothing more than an ascii array of floating point values separated by whitespace. This happens to be the ideal format for matplotlib's genfromtxt() function to consume.", "%cd ~/agave\n\n# IMPORT SOME USEFUL PACKAGES\n\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\nfrom matplotlib.ticker import LinearLocator, FormatStrFormatter\nimport numpy as np\nfrom matplotlib import animation, rc\nfrom IPython.display import HTML\nrc('animation', html='html5')\n\n# SLURP IN THE DATA\n\nframes = []\nfor i in range(1,11):\n frames += [np.genfromtxt(\"output/eta_%05d\" % i)]\n\n# Your basic surface plot of the last frame \n\nf = frames[9]\nxv = np.linspace(0,f.shape[1],f.shape[1])\nyv = np.linspace(0,f.shape[0],f.shape[0])\nx2,y2 = np.meshgrid(xv,yv)\nfig = plt.figure(figsize=(12,10))\nax = fig.gca(projection='3d')\nax.clear()\n# This is the viewing angle, theta and phi\nax.view_init(20,60)\n# For more colormaps, see https://matplotlib.org/examples/color/colormaps_reference.html\n# The strides make the image really sharp. 
They slow down the rendering, however.\nsurf = ax.plot_surface(x2, y2, f, cmap=cm.coolwarm,\n linewidth=0, antialiased=False, rstride=1, cstride=1)\nfig.colorbar(surf)\nplt.show()\n\n# Your basic animation of the color plot\n\nfig2, ax = plt.subplots(figsize=(12,12))\n\ndef animate(i):\n ax.clear()\n pltres = plt.imshow(frames[i])\n return pltres,\n\nanim = animation.FuncAnimation(fig2, animate, frames=10, interval=200, repeat=True)\nHTML(anim.to_html5_video())\n\n# A rotating plot that cycles through the frames\n\nfig = plt.figure(figsize=(12,10))\nax = fig.gca(projection='3d')\nzmin = np.min(frames[0])\nzmax = np.max(frames[0])\nfor i in range(1,10):\n zmin = min(zmin,np.min(frames[i]))\n zmax = max(zmax,np.max(frames[i]))\n\ndef animate(i):\n global ax\n ax.clear()\n # Change the viewing angle\n ax.view_init(20,i*6)\n ax.set_zlim(top=zmax,bottom=zmin)\n # Cycle through the frames\n f = frames[i % 10]\n # vmax and vmin control the color normalization\n surf = ax.plot_surface(x2, y2, f, cmap=cm.coolwarm,\n linewidth=0, antialiased=False, vmax=zmax, vmin=zmin)\n return surf,\n\nanim = animation.FuncAnimation(fig, animate, frames=36, interval=200, repeat=True)\nHTML(anim.to_html5_video())\n\n# Other things you might want to set: https://matplotlib.org/mpl_toolkits/mplot3d/api.html", "<h3>A final word about movies.... </h3>\nThere's a download button on the bottom right of the animations that will give you an mp4 file. You can use the ImageMagick conversion utility to turn that into an animated gif, suitable for pasting on your website.\nHowever, surface plots aren't the only interesting kind of data to explore in funwave. Funwave also provides the wave direction vectors, u (x-direction) and v (y-direction). 
You can view the vector fields (with color-coded direction) using quiver.", "import matplotlib.pyplot as plt\nimport numpy as np\nfrom numpy import ma\n\nU = np.genfromtxt(\"output/u_00010\")\nV = np.genfromtxt(\"output/v_00010\")\nC = np.abs(np.arctan2(U,V))\n\nplt.figure(figsize=(12,12))\nplt.title('Colors and Vectors')\nQ = plt.quiver(x2, y2, U, V, C, cmap=cm.inferno, units='width')\nplt.show()", "Because it's hard to see the vectors on the plot above, we zoom in on one section of the grid.", "# Adjust these numbers to zoom in on a different section of the grid.\n# If it doesn't work, reloading fixes.\nlox = 0\nhix = 20\nloy = 0\nhiy = 20\n\nplt.figure(figsize=(15,15))\nplt.title('Colors and Vectors: Closeup')\nQ = plt.quiver(x2[lox:hix,loy:hiy],\n y2[lox:hix,loy:hiy], \n U[lox:hix,loy:hiy],\n V[lox:hix,loy:hiy],\n C[lox:hix,loy:hiy], cmap=cm.inferno, units='width')\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kkwteh/cal_hacks_flight_delays
flight_delays.ipynb
mit
[ "import pandas as pd\n\ndf = pd.read_csv('unzipped_data/On_Time_On_Time_Performance_2016_8.csv')\n\nlist(df.columns)", "What questions would you have about this data?", "df.shape\n\nlen(set(df.Origin))\n\ndf.FlightDate.min()\n\ndf.FlightDate.max()\n\ndf.DepTime.count()\ndf.DepTime.dropna().describe()\n\nneeded_columns = ['Year',\n 'Quarter',\n 'Month',\n 'DayofMonth',\n 'DayOfWeek',\n 'FlightDate',\n 'UniqueCarrier',\n 'Origin',\n 'OriginCityName',\n 'Dest',\n 'DestCityName',\n 'CRSDepTime',\n 'DepTime',\n 'DepDelay',\n 'DepDelayMinutes',\n 'DepDel15',\n 'DepartureDelayGroups',\n 'DepTimeBlk',\n 'CRSArrTime',\n 'ArrTime',\n 'ArrDelay',\n 'ArrDelayMinutes',\n 'ArrDel15',\n 'ArrTimeBlk',\n 'Cancelled',\n 'Diverted',\n 'CRSElapsedTime',\n 'ActualElapsedTime',\n 'Distance',\n 'DistanceGroup',\n]\n\n# Percentage of flights. 1=Monday, 2=Tuesday, etc.\ndf[['DayOfWeek', 'DepDel15']].groupby('DayOfWeek').mean()\n\ndf[['UniqueCarrier', 'DepDelayMinutes']].groupby('UniqueCarrier').count()\n\n# Mean minutes of delay by carrier\ndf[['UniqueCarrier', 'DepDelayMinutes']].groupby('UniqueCarrier').mean()\n\nCARRIERS = {\n 'AA': 'American',\n 'AS': 'Alaska',\n 'B6': 'Jet Blue',\n 'DL': 'Delta',\n 'EV': 'Express Jet',\n 'F9': 'Frontier',\n 'HA': 'Hawaiian',\n 'NK': 'Spirit',\n 'OO': 'SkyWest',\n 'UA': 'United',\n 'VX': 'Virgin',\n 'WN': 'Southwest'\n}\n\ndf[['Origin', 'DepDel15']].groupby('Origin').mean()", "How would you encode categorical data such a carrier, day of week and origin airport as numerical features?", "# Percent of flights arriving within 15 minute of time by origin\nmean_dep_delay15 = df[['Origin', 'DepDel15']].groupby('Origin').mean()\n\nlen(set(df.Origin))\n\nfor x in df.Origin:\n break\n\nimport numpy as np", "Features:\n\nOrigin group\nUniqueCarrier one hot encoding\nDay of week one hot encoding\nTime of day bucket\n\n\n\nObjective function:\n\nDepDelay15 (Whether or not the flight will be delayed by 15 minutes or more\n\n\n\nOrigin Groups", "match = [x==y for 
(x,y) in zip((df.DepDelayMinutes >= 15), df.DepDel15)]\n\nquantiles = [0] + list(np.percentile(mean_dep_delay15, [20,40,60,80])) + [1.1]\n\nquantiles\n\norigin_groups = []\nfor (low, high) in list(zip(quantiles, quantiles[1:])):\n origin_groups.append(\n set(mean_dep_delay15[(mean_dep_delay15 >= low) & (mean_dep_delay15 < high)].dropna().index)\n )\n\n[len(x) for x in origin_groups]\n\nfor i, group in enumerate(origin_groups):\n df['OriginGroup%s' % i] = [int(o in group) for o in df.Origin]", "Unique Carrier One-Hot Feature", "unique_carriers = list(set(df.UniqueCarrier))\nfor carrier in unique_carriers:\n df['Carrier%s' % carrier] = [int(x == carrier) for x in df.UniqueCarrier]", "Day Of Week One-Hot Feature", "days_of_week = sorted(list(set(df.DayOfWeek)))\nfor dow in days_of_week:\n df['DayOfWeek%s' % dow] = [int(x == dow) for x in df.DayOfWeek]", "Time of Day Bucket Feature:", "import matplotlib.pyplot as plt\n\nclean_df = df[['DepTime', 'DepDelayMinutes']].dropna()\n\nplt.scatter(x=clean_df.DepTime.iloc[:5000], y = clean_df.DepDelayMinutes.iloc[:5000])\n\nthresholds = [-1, 400, 800, 1200, 1600, 2000, 2401]\n\nbuckets = []\nfor i, (min_time, max_time) in enumerate(list(zip(thresholds, thresholds[1:]))):\n df[\"DepTimeBucket%s\" % i] = ((df.DepTime >= min_time) & (df.DepTime < max_time)).astype(int)\n\nfeatures = (['OriginGroup%s' % i for i in range(5)] +\n ['Carrier%s' % carrier for carrier in unique_carriers] +\n ['DayOfWeek%s' % dow for dow in days_of_week] + \n ['DepTimeBucket%s' % i for i in range(6)]\n )\n\nfrom sklearn.linear_model import LogisticRegression, LinearRegression\n\nmodel = LogisticRegression()\n\nclean_df = df[features + ['DepDelayMinutes']].dropna()\n\nclean_df['Delayed'] = [int(x >= 15) for x in clean_df.DepDelayMinutes]\n\ntrain_size = int(len(clean_df) * 0.7)\n\nmodel.fit(clean_df[features].iloc[:train_size], clean_df.Delayed.iloc[:train_size])\n\npredictions = model.predict(clean_df[features].iloc[train_size:])\n\nactuals = 
clean_df.Delayed.iloc[train_size:]\n\norigin_groups\n\nfrom sklearn.metrics import roc_curve, auc\n\npredict_probs = [tpl[1] for tpl in model.predict_proba(clean_df[features].iloc[train_size:])]\n\n# Compute micro-average ROC curve and ROC area\nfpr, tpr, _ = roc_curve(actuals, predict_probs)\n\n\nimport matplotlib.pyplot as plt\n\n#Plot of a ROC curve for a specific class\nplt.figure()\nlw = 2\nplt.plot(fpr, tpr, color='darkorange',\n lw=lw)\nplt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.05])\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('Receiver operating characteristic example')\nplt.legend(loc=\"lower right\")\nplt.show()\n\nauc(fpr, tpr)\n\nmodel.intercept_\n\ncoefs = dict(list(zip(features, model.coef_[0])))\n\ncoefs" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ptitjano/bokeh
examples/howto/charts/donut.ipynb
bsd-3-clause
[ "from bokeh.charts import Donut, show, output_notebook, vplot\nfrom bokeh.charts.utils import df_from_json\nfrom bokeh.sampledata.olympics2014 import data\nfrom bokeh.sampledata.autompg import autompg\n\noutput_notebook()\n\nimport pandas as pd", "Generic Examples\nValues with implied index", "d = Donut([2, 4, 5, 2, 8])\nshow(d)", "Values with Explicit Index", "d = Donut(pd.Series([2, 4, 5, 2, 8], index=['a', 'b', 'c', 'd', 'e']))\nshow(d)", "Autompg Data\nTake a look at the data", "autompg.head()", "Simple example implies count when object or categorical", "d = Donut(autompg.cyl.astype(str))\nshow(d)", "Equivalent with columns specified", "d = Donut(autompg, label='cyl', agg='count')\nshow(d)", "Given an indexed series of data pre-aggregated", "d = Donut(autompg.groupby('cyl').displ.mean())\nshow(d)", "Equivalent with columns specified", "d = Donut(autompg, label='cyl',\n values='displ', agg='mean')\nshow(d)", "Given a multi-indexed series of data pre-aggregated\nSince the aggregation type isn't specified, we must provide it to the chart for use in the tooltip, otherwise it will just say \"value\".", "d = Donut(autompg.groupby(['cyl', 'origin']).displ.mean(), hover_text='mean')\nshow(d)", "Column Labels Produces Slightly Different Result\nIn the previous series input example we do not have the original values so we cannot size the wedges based on the mean of displacement for Cyl, then size the wedges proportionally inside of the Cyl wedge. 
This column labeled example can perform the right sizing, so would be preferred for any aggregated values.", "d = Donut(autompg, label=['cyl', 'origin'],\n values='displ', agg='mean')\nshow(d)", "The spacing between each donut level can be altered\nBy default, this is applied to only the levels other than the first.", "d = Donut(autompg, label=['cyl', 'origin'],\n values='displ', agg='mean', level_spacing=0.15)\nshow(d)", "Can specify the spacing for each level\nThis is applied to each level individually, including the first.", "d = Donut(autompg, label=['cyl', 'origin'],\n values='displ', agg='mean', level_spacing=[0.8, 0.3])\nshow(d)", "Olympics Example\nTake a look at source data", "print(data.keys())\ndata['data'][0]", "Look at table formatted data", "# utilize utility to make it easy to get json/dict data converted to a dataframe\ndf = df_from_json(data)\ndf.head()", "Prepare the data\nThis data is in a \"pivoted\" format, and since the charts interface is built around referencing columns, it is more convenient to de-pivot the data.\n\nWe will sort the data by total medals and select the top rows by the total medals.\nUse pandas.melt to de-pivot the data.", "# filter by countries with at least one medal and sort by total medals\ndf = df[df['total'] > 8]\ndf = df.sort(\"total\", ascending=False)\nolympics = pd.melt(df, id_vars=['abbr'],\n value_vars=['bronze', 'silver', 'gold'],\n value_name='medal_count', var_name='medal')\nolympics.head()\n\n# original example\nd0 = Donut(olympics, label=['abbr', 'medal'], values='medal_count',\n text_font_size='8pt', hover_text='medal_count')\nshow(d0)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mohanprasath/Course-Work
coursera/python_for_data_science/4.2Writing_and_Saving_Files.ipynb
gpl-3.0
[ "<a href=\"http://cocl.us/topNotebooksPython101Coursera\"><img src = \"https://ibm.box.com/shared/static/yfe6h4az47ktg2mm9h05wby2n7e8kei3.png\" width = 750, align = \"center\"></a>\n<a href=\"https://www.bigdatauniversity.com\"><img src = \"https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png\" width = 300, align = \"center\"></a>\n<h1 align=center><font size = 5> Writing and Saving Files in PYTHON</font></h1>\n\n<br>\nThis notebook will provide information regarding writing and saving data into .txt files. \nTable of Contents\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n\n<li><a href=\"#refw\">Writing files Text Files</a></li>\n\n<br>\n<p></p>\nEstimated Time Needed: <strong>15 min</strong>\n</div>\n\n<hr>\n\n<a id=\"ref3\"></a>\n<h2 align=center>Writing Files</h2>\n\nWe can open a file object using the method write() to save the text file to a list. To write the mode, argument must be set to write w. Let’s write a file Example2.txt with the line: “This is line A”", "with open('/resources/data/Example2.txt','w') as writefile:\n writefile.write(\"This is line A\")", "We can read the file to see if it worked:", "with open('/resources/data/Example2.txt','r') as testwritefile:\n print(testwritefile.read())", "We can write multiple lines:", "with open('/resources/data/Example2.txt','w') as writefile:\n writefile.write(\"This is line A\\n\")\n writefile.write(\"This is line B\\n\")", "The method .write() works similar to the method .readline(), except instead of reading a new line it writes a new line. 
The process is illustrated in the figure; the different colour coding of the grid represents a new line added to the file after each method call.\n<a ><img src = \"https://ibm.box.com/shared/static/4d86eysjv7fiy5nocgvpbddyj2uckw6z.png\" width = 500, align = \"center\"></a>\n\n<h4 align=center> \n An example of “.write()”, the different colour coding of the grid represents a new line added after each method call.\n\n\n </h4>\n\nYou can check the file to see if your results are correct", "with open('/resources/data/Example2.txt','r') as testwritefile:\n print(testwritefile.read())", "By setting the mode argument to append a you can append a new line as follows:", "with open('/resources/data/Example2.txt','a') as testwritefile:\n testwritefile.write(\"This is line C\\n\")", "You can verify the file has changed by running the following cell:", "with open('/resources/data/Example2.txt','r') as testwritefile:\n print(testwritefile.read())", "We write a list to a .txt file as follows:", "Lines=[\"This is line A\\n\",\"This is line B\\n\",\"This is line C\\n\"]\nLines\n\nwith open('Example2.txt','w') as writefile:\n for line in Lines:\n print(line)\n writefile.write(line)", "We can verify the file is written by reading it and printing out the values:", "with open('Example2.txt','r') as testwritefile:\n print(testwritefile.read())", "We can again append to the file by changing the second parameter to a. 
This adds the code:", "with open('Example2.txt','a') as testwritefile:\n testwritefile.write(\"This is line D\\n\")", "We can see the results of appending the file:", "with open('Example2.txt','r') as testwritefile:\n print(testwritefile.read())", "Copy a file\nLet's copy the file Example2.txt to the file Example3.txt:", "with open('Example2.txt','r') as readfile:\n with open('Example3.txt','w') as writefile:\n for line in readfile:\n writefile.write(line)", "We can read the file to see if everything works:", "with open('Example3.txt','r') as testwritefile:\n print(testwritefile.read())", "After reading files, we can also write data into files and save them in different file formats like .txt, .csv, .xls (for excel files) etc. Let's take a look at an example.", "# Write CSV file example\n\nstudent_list = [{\"Student ID\": 1, \"Gender\": \"F\", \"Name\": \"Emma\"}, \n {\"Student ID\": 2, \"Gender\": \"M\", \"Name\": \"John\"}, \n {\"Student ID\": 3, \"Gender\": \"F\", \"Name\": \"Linda\"}]\n\n# Write csv file\nwith open('Example_csv.csv','w') as writefile:\n \n # Set header for each column\n for col_header in list(student_list[0].keys()):\n writefile.write(str(col_header) + \", \")\n writefile.write(\"\\n\")\n \n # Set value for each column\n for student in student_list:\n for col_ele in list(student.values()):\n writefile.write(str(col_ele) + \", \")\n writefile.write(\"\\n\") \n\n# Print out the result csv\nwith open('Example_csv.csv','r') as testwritefile:\n print(testwritefile.read())", "Now go to the directory to ensure the .txt file exists and contains the summary data that we wrote.\n<a href=\"http://cocl.us/bottemNotebooksPython101Coursera\"><img src = \"https://ibm.box.com/shared/static/irypdxea2q4th88zu1o1tsd06dya10go.png\" width = 750, align = \"center\"></a>\n<hr>\nAbout the Author:\nJoseph Santarcangelo has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos 
impact human cognition. Joseph has been working for IBM since he completed his PhD.\n<hr>\nCopyright &copy; 2017 cognitiveclass.ai. This notebook and its source code are released under the terms of the MIT License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
MegaShow/college-programming
Homework/Principles of Artificial Neural Networks/Week 5 CNN 1/Week5.ipynb
mit
[ "Week 5: CNN-1\n实验准备\n\n熟悉python语言的使用和numpy,torch的基本用法\n熟悉神经网络的训练过程与优化方法\n结合理论课的内容,了解卷积与卷积神经网络(CNN)的内容和原理\n了解常用的CNN模型的基本结构,如AlexNet,Vgg,ResNet\n\n实验过程\n1. 卷积与卷积层\n\nnumpy实现卷积\npytorch中的卷积层和池化层\n\n2. CNN\n\n实现并训练一个基本的CNN网络\nResNet\nVGG\n\n卷积\n\n在实验课上我们已经了解过卷积运算的操作当我们对一张二维的图像做卷积时,将卷积核沿着图像进行滑动乘加即可(如上图所示).\n下面的conv函数实现了对二维单通道图像的卷积.考虑输入的卷积核kernel的长宽相同,padding为对图像的四个边缘补0,stride为卷积核窗口滑动的步长.", "import numpy as np\n\ndef convolution(img, kernel, padding=1, stride=1):\n \"\"\"\n img: input image with one channel\n kernel: convolution kernel\n \"\"\"\n \n h, w = img.shape\n kernel_size = kernel.shape[0]\n \n # height and width of image with padding \n ph, pw = h + 2 * padding, w + 2 * padding\n padding_img = np.zeros((ph, pw))\n padding_img[padding:h + padding, padding:w + padding] = img\n \n # height and width of output image\n result_h = (h + 2 * padding - kernel_size) // stride + 1\n result_w = (w + 2 * padding - kernel_size) // stride + 1\n \n result = np.zeros((result_h, result_w))\n \n # convolution\n x, y = 0, 0\n for i in range(0, ph - kernel_size + 1, stride):\n for j in range(0, pw - kernel_size + 1, stride):\n roi = padding_img[i:i+kernel_size, j:j+kernel_size]\n result[x, y] = np.sum(roi * kernel)\n y += 1\n y = 0\n x += 1\n return result", "下面在图像上简单一下测试我们的conv函数,这里使用3*3的高斯核对下面的图像进行滤波.", "from PIL import Image\nimport matplotlib.pyplot as plt\nimg = Image.open('pics/lena.jpg').convert('L')\nplt.imshow(img, cmap='gray')\n\n# a Laplace kernel\nlaplace_kernel = np.array([[-1, -1, -1],\n [-1, 8, -1],\n [-1, -1, -1]])\n\n# Gauss kernel with kernel_size=3\ngauss_kernel3 = (1/ 16) * np.array([[1, 2, 1], \n [2, 4, 2], \n [1, 2, 1]])\n\n# Gauss kernel with kernel_size=5\ngauss_kernel5 = (1/ 84) * np.array([[1, 2, 3, 2, 1],\n [2, 5, 6, 5, 2], \n [3, 6, 8, 6, 3],\n [2, 5, 6, 5, 2],\n [1, 2, 3, 2, 1]])\n\nfig, ax = plt.subplots(1, 3, figsize=(12, 8))\n\nlaplace_img = convolution(np.array(img), laplace_kernel, padding=1, stride=1)\nax[0].imshow(Image.fromarray(laplace_img), 
cmap='gray')\nax[0].set_title('laplace')\n\ngauss3_img = convolution(np.array(img), gauss_kernel3, padding=1, stride=1)\nax[1].imshow(Image.fromarray(gauss3_img), cmap='gray')\nax[1].set_title('gauss kernel_size=3')\n\ngauss5_img = convolution(np.array(img), gauss_kernel5, padding=2, stride=1)\nax[2].imshow(Image.fromarray(gauss5_img), cmap='gray')\nax[2].set_title('gauss kernel_size=5')", "上面我们实现了实现了对单通道输入单通道输出的卷积.在CNN中,一般使用到的都是多通道输入多通道输出的卷积,要实现多通道的卷积, 我们只需要对循环调用上面的conv函数即可.", "def myconv2d(features, weights, padding=0, stride=1):\n \"\"\"\n features: input, in_channel * h * w\n weights: kernel, out_channel * in_channel * kernel_size * kernel_size\n return output with out_channel\n \"\"\"\n in_channel, h, w = features.shape\n out_channel, _, kernel_size, _ = weights.shape\n \n # height and width of output image\n output_h = (h + 2 * padding - kernel_size) // stride + 1\n output_w = (w + 2 * padding - kernel_size) // stride + 1\n output = np.zeros((out_channel, output_h, output_w))\n \n # call convolution out_channel * in_channel times\n for i in range(out_channel):\n weight = weights[i]\n for j in range(in_channel):\n feature_map = features[j]\n kernel = weight[j]\n output[i] += convolution(feature_map, kernel, padding, stride)\n return output", "接下来, 让我们测试我们写好的myconv2d函数.", "input_data=[\n [[0,0,2,2,0,1],\n [0,2,2,0,0,2],\n [1,1,0,2,0,0],\n [2,2,1,1,0,0],\n [2,0,1,2,0,1],\n [2,0,2,1,0,1]],\n\n [[2,0,2,1,1,1],\n [0,1,0,0,2,2],\n [1,0,0,2,1,0],\n [1,1,1,1,1,1],\n [1,0,1,1,1,2],\n [2,1,2,1,0,2]]\n ]\nweights_data=[[ \n [[ 0, 1, 0],\n [ 1, 1, 1],\n [ 0, 1, 0]],\n \n [[-1, -1, -1],\n [ -1, 8, -1],\n [ -1, -1, -1]] \n ]]\n\n# numpy array\ninput_data = np.array(input_data)\nweights_data = np.array(weights_data)\n\n# show the result\nprint(myconv2d(input_data, weights_data, padding=3, stride=3))", "在Pytorch中,已经为我们提供了卷积和卷积层的实现.使用同样的input和weights,以及stride,padding,pytorch的卷积的结果应该和我们的一样.可以在下面的代码中进行验证.", "import torch\nimport torch.nn.functional as F\ninput_tensor = 
torch.tensor(input_data).unsqueeze(0).float()\n\nF.conv2d(input_tensor, weight=torch.tensor(weights_data).float(), bias=None, stride=3, padding=3)\n", "作业:\n上述代码中convolution的实现只考虑卷积核以及padding和stride长宽一致的情况,若输入的卷积核可能长宽不一致,padding与stride的输入可能为两个元素的元祖(代表两个维度上的padding与stride)并使用下面test input对你的convolutionV2进行测试.", "def convolutionV2(img, kernel, padding=(0,0), stride=(1,1)):\n h, w = img.shape\n kh, kw = kernel.shape\n\n # height and width of image with padding \n ph, pw = h + 2 * padding[0], w + 2 * padding[1]\n padding_img = np.zeros((ph, pw))\n padding_img[padding[0]:h + padding[0], padding[1]:w + padding[1]] = img\n \n # height and width of output image\n result_h = (h + 2 * padding[0] - kh) // stride[0] + 1\n result_w = (w + 2 * padding[1] - kw) // stride[1] + 1\n \n result = np.zeros((result_h, result_w))\n \n # convolution\n x, y = 0, 0\n for i in range(0, ph - kh + 1, stride[0]):\n for j in range(0, pw - kw + 1, stride[1]):\n roi = padding_img[i:i+kh, j:j+kw]\n result[x, y] = np.sum(roi * kernel)\n y += 1\n y = 0\n x += 1\n return result\n\n# test input\ntest_input = np.array([[1, 1, 2, 1],\n [0, 1, 0, 2],\n [2, 2, 0, 2],\n [2, 2, 2, 1],\n [2, 3, 2, 3]])\n\ntest_kernel = np.array([[1, 0], [0, 1], [0, 0]])\n\n# output\nprint(convolutionV2(test_input, test_kernel, padding=(1, 0), stride=(1, 1)))\n\nprint(convolutionV2(test_input, test_kernel, padding=(2, 1), stride=(1, 2)))", "卷积层\nPytorch提供了卷积层和池化层供我们使用.\n卷积层与上面相似, 而池化层与卷积层相似,Pooling layer的主要目的是缩小features的size.常用的有MaxPool(滑动窗口取最大值)与AvgPool(滑动窗口取均值)", "import torch\nimport torch.nn as nn\n\n\nx = torch.randn(1, 1, 32, 32)\n\nconv_layer = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=3, stride=1, padding=0)\ny = conv_layer(x)\nprint(x.shape)\nprint(y.shape)", "请问:\n1. 输入与输出的tensor的size分别是多少?该卷积层的参数量是多少?\n2. 若kernel_size=5,stride=2,padding=2, 输出的tensor的size是多少?在上述代码中改变参数后试验后并回答.\n3. 若输入的tensor size为N*C*H*W,若第5行中卷积层的参数为in_channels=C,out_channels=Cout,kernel_size=k,stride=s,padding=p,那么输出的tensor size是多少? 
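Before checking the answers, the size formula can be verified numerically with a small helper (a sketch; `conv_out_size` is a hypothetical helper written for this exercise, not part of torch):

```python
def conv_out_size(h, w, k, s, p):
    # Conv2d output spatial size: (dim + 2*padding - kernel) // stride + 1
    return ((h + 2 * p - k) // s + 1, (w + 2 * p - k) // s + 1)

print(conv_out_size(32, 32, 3, 1, 0))  # question 1: (30, 30)
print(conv_out_size(32, 32, 5, 2, 2))  # question 2: (16, 16)
```
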
\n答: \n1. 输入的tensor的大小为$1 * 1 * 32 * 32$,输出的tensor的大小为$1 * 3 * 30 * 30$,这说明卷积核是$1 * 1 * 3 * 3$的规模,一共有3个卷积核。\n2. 输出的tensor的大小为$1 * 3 * 16 * 16$,代码验证如下。\n3. 输出的tensor大小为$N * C_{out} * ((H+2p-k)//s+1) * ((W+2p-k)//s+1)$。", "x = torch.randn(1, 1, 32, 32)\n\nconv_layer = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=5, stride=2, padding=2)\ny = conv_layer(x)\nprint(x.shape)\nprint(y.shape)\n\n# input N * C * H * W\nx = torch.randn(1, 1, 4, 4)\n\n# maxpool\nmaxpool = nn.MaxPool2d(kernel_size=2, stride=2)\ny = maxpool(x)\n\n# avgpool\navgpool = nn.AvgPool2d(kernel_size=2, stride=2)\nz = avgpool(x)\n\n#avgpool\nprint(x)\nprint(y)\nprint(z)", "GPU\n我们可以选择在cpu或gpu上来训练我们的模型. \n实验室提供了4卡的gpu服务器,要查看各个gpu设备的使用情况,可以在服务器上的jupyter主页点击new->terminal,在terminal中输入nvidia-smi即可查看每张卡的使用情况.如下图.\n\n上图左边一栏显示了他们的设备id(0,1,2,3),风扇转速,温度,性能状态,能耗等信息,中间一栏显示他们的bus-id和显存使用量,右边一栏是GPU使用率等信息.注意到中间一栏的显存使用量,在训练模型前我们可以根据空余的显存来选择我们使用的gpu设备. \n在本次实验中我们将代码中的torch.device('cuda:0')的0更换成所需的设备id即可选择在相应的gpu设备上运行程序.\nCNN(卷积神经网络)\n一个简单的CNN\n接下来,让我们建立一个简单的CNN分类器.\n这个CNN的整体流程是 \n卷积(Conv2d) -> BN(batch normalization) -> 激励函数(ReLU) -> 池化(MaxPooling) -> \n卷积(Conv2d) -> BN(batch normalization) -> 激励函数(ReLU) -> 池化(MaxPooling) -> \n全连接层(Linear) -> 输出.", "import torch\nimport torch.nn as nn\nimport torch.utils.data as Data\nimport torchvision\n\n\nclass MyCNN(nn.Module):\n \n def __init__(self, image_size, num_classes):\n super(MyCNN, self).__init__()\n # conv1: Conv2d -> BN -> ReLU -> MaxPool\n self.conv1 = nn.Sequential(\n nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1),\n nn.BatchNorm2d(16),\n nn.ReLU(), \n nn.MaxPool2d(kernel_size=2, stride=2),\n )\n # conv2: Conv2d -> BN -> ReLU -> MaxPool\n self.conv2 = nn.Sequential(\n nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, stride=1, padding=1),\n nn.BatchNorm2d(32),\n nn.ReLU(),\n nn.MaxPool2d(kernel_size=2, stride=2),\n )\n # fully connected layer\n self.fc = nn.Linear(32 * (image_size // 4) * (image_size // 4), num_classes)\n 
\n\n def forward(self, x):\n \"\"\"\n input: N * 3 * image_size * image_size\n output: N * num_classes\n \"\"\"\n x = self.conv1(x)\n x = self.conv2(x)\n # view(x.size(0), -1): change tensor size from (N ,H , W) to (N, H*W)\n x = x.view(x.size(0), -1)\n output = self.fc(x)\n return output", "这样,一个简单的CNN模型就写好了.与前面的课堂内容相似,我们需要对完成网络进行训练与评估的代码.", "def train(model, train_loader, loss_func, optimizer, device):\n \"\"\"\n train model using loss_fn and optimizer in an epoch.\n model: CNN networks\n train_loader: a Dataloader object with training data\n loss_func: loss function\n device: train on cpu or gpu device\n \"\"\"\n total_loss = 0\n # train the model using minibatch\n for i, (images, targets) in enumerate(train_loader):\n images = images.to(device)\n targets = targets.to(device)\n\n # forward\n outputs = model(images)\n loss = loss_func(outputs, targets)\n\n # backward and optimize\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n \n total_loss += loss.item()\n \n # every 100 iteration, print loss\n if (i + 1) % 100 == 0:\n print (\"Step [{}/{}] Train Loss: {:.4f}\"\n .format(i+1, len(train_loader), loss.item()))\n return total_loss / len(train_loader)\n\ndef evaluate(model, val_loader, device):\n \"\"\"\n model: CNN networks\n val_loader: a Dataloader object with validation data\n device: evaluate on cpu or gpu device\n return classification accuracy of the model on val dataset\n \"\"\"\n # evaluate the model\n model.eval()\n # context-manager that disabled gradient computation\n with torch.no_grad():\n correct = 0\n total = 0\n \n for i, (images, targets) in enumerate(val_loader):\n # device: cpu or gpu\n images = images.to(device)\n targets = targets.to(device)\n \n \n outputs = model(images)\n \n # return the maximum value of each row of the input tensor in the \n # given dimension dim, the second return vale is the index location\n # of each maxium value found(argmax)\n _, predicted = torch.max(outputs.data, dim=1)\n \n \n correct += (predicted == 
targets).sum().item()\n \n total += targets.size(0)\n \n accuracy = correct / total\n print('Accuracy on Test Set: {:.4f} %'.format(100 * accuracy))\n return accuracy\n\ndef save_model(model, save_path):\n # save model\n torch.save(model.state_dict(), save_path)\n\nimport matplotlib.pyplot as plt\ndef show_curve(ys, title):\n \"\"\"\n plot curlve for Loss and Accuacy\n Args:\n ys: loss or acc list\n title: loss or accuracy\n \"\"\"\n x = np.array(range(len(ys)))\n y = np.array(ys)\n plt.plot(x, y, c='b')\n plt.axis()\n plt.title('{} curve'.format(title))\n plt.xlabel('epoch')\n plt.ylabel('{}'.format(title))\n plt.show()", "准备数据与训练模型\n接下来,我们使用CIFAR10数据集来对我们的CNN模型进行训练.\nCIFAR-10:该数据集共有60000张彩色图像,这些图像是32*32,分为10个类,每类6000张图.这里面有50000张用于训练,构成了5个训练批,每一批10000张图;另外10000用于测试,单独构成一批.在本次实验中,使用CIFAR-10数据集来训练我们的模型.我们可以用torchvision.datasets.CIFAR10来直接使用CIFAR10数据集.", "import torch\nimport torch.nn as nn\nimport torchvision\nimport torchvision.transforms as transforms\n\n# mean and std of cifar10 in 3 channels \ncifar10_mean = (0.49, 0.48, 0.45)\ncifar10_std = (0.25, 0.24, 0.26)\n\n# define transform operations of train dataset \ntrain_transform = transforms.Compose([\n # data augmentation\n transforms.Pad(4),\n transforms.RandomHorizontalFlip(),\n transforms.RandomCrop(32),\n\n transforms.ToTensor(),\n transforms.Normalize(cifar10_mean, cifar10_std)])\n\ntest_transform = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize(cifar10_mean, cifar10_std)])\n\n# torchvision.datasets provide CIFAR-10 dataset for classification\ntrain_dataset = torchvision.datasets.CIFAR10(root='./data/',\n train=True, \n transform=train_transform,\n download=True)\n\ntest_dataset = torchvision.datasets.CIFAR10(root='./data/',\n train=False, \n transform=test_transform)\n\n# Data loader: provides single- or multi-process iterators over the dataset.\ntrain_loader = torch.utils.data.DataLoader(dataset=train_dataset,\n batch_size=100, \n shuffle=True)\n\ntest_loader = 
torch.utils.data.DataLoader(dataset=test_dataset,\n batch_size=100, \n shuffle=False)\n\n", "训练过程中使用交叉熵(cross-entropy)损失函数与Adam优化器来训练我们的分类器网络.\n阅读下面的代码并在To-Do处,根据之前所学的知识,补充前向传播和反向传播的代码来实现分类网络的训练.", "def fit(model, num_epochs, optimizer, device):\n \"\"\"\n train and evaluate an classifier num_epochs times.\n We use optimizer and cross entropy loss to train the model. \n Args: \n model: CNN network\n num_epochs: the number of training epochs\n optimizer: optimize the loss function\n \"\"\"\n \n # loss and optimizer\n loss_func = nn.CrossEntropyLoss()\n \n model.to(device)\n loss_func.to(device)\n \n # log train loss and test accuracy\n losses = []\n accs = []\n \n for epoch in range(num_epochs):\n \n print('Epoch {}/{}:'.format(epoch + 1, num_epochs))\n # train step\n loss = train(model, train_loader, loss_func, optimizer, device)\n losses.append(loss)\n \n # evaluate step\n accuracy = evaluate(model, test_loader, device)\n accs.append(accuracy)\n \n \n # show curve\n show_curve(losses, \"train loss\")\n show_curve(accs, \"test accuracy\")\n\n# hyper parameters\nnum_epochs = 10\nlr = 0.01\nimage_size = 32\nnum_classes = 10\n\n# declare and define an objet of MyCNN\nmycnn = MyCNN(image_size, num_classes)\nprint(mycnn)\n\n# Device configuration, cpu, cuda:0/1/2/3 available\ndevice = torch.device('cuda:0')\n\noptimizer = torch.optim.Adam(mycnn.parameters(), lr=lr)\n\n# start training on cifar10 dataset\nfit(mycnn, num_epochs, optimizer, device)", "ResNet\n接下来,让我们完成更复杂的CNN的实现.\nResNet又叫做残差网络.在ResNet网络结构中会用到两种残差模块,一种是以两个3*3的卷积网络串接在一起作为一个残差模块,另外一种是1*1、3*3、1*1的3个卷积网络串接在一起作为一个残差模块。他们如下图所示。\n\n我们以左边的模块为例实现一个ResidualBlock.注意到由于我们在两次卷积中可能会使输入的tensor的size与输出的tensor的size不相等,为了使它们能够相加,所以输出的tensor与输入的tensor size不同时,我们使用downsample(由外部传入)来使保持size相同\n现在,试在To-Do补充代码完成下面的forward函数来完成ResidualBlock的实现,并运行它.", "# 3x3 convolution\ndef conv3x3(in_channels, out_channels, stride=1):\n return nn.Conv2d(in_channels, out_channels, kernel_size=3, \n stride=stride, padding=1, bias=False)\n\n# 
Residual block\nclass ResidualBlock(nn.Module):\n def __init__(self, in_channels, out_channels, stride=1, downsample=None):\n super(ResidualBlock, self).__init__()\n self.conv1 = conv3x3(in_channels, out_channels, stride)\n self.bn1 = nn.BatchNorm2d(out_channels)\n self.relu = nn.ReLU(inplace=True)\n self.conv2 = conv3x3(out_channels, out_channels)\n self.bn2 = nn.BatchNorm2d(out_channels)\n self.downsample = downsample\n \n def forward(self, x):\n \"\"\"\n Defines the computation performed at every call.\n x: N * C * H * W\n \"\"\"\n residual = x\n # if the size of input x changes, using downsample to change the size of residual\n if self.downsample:\n residual = self.downsample(x)\n out = self.conv1(x)\n out = self.bn1(out)\n \n out = self.relu(out)\n out = self.conv2(out)\n out = self.bn2(out)\n \n out += residual\n out = self.relu(out)\n return out", "下面是一份针对cifar10数据集的ResNet的实现. \n它先通过一个conv3x3,然后经过3个包含多个残差模块的layer(一个layer可能包括多个ResidualBlock, 由传入的layers列表中的数字决定), 然后经过一个全局平均池化层,最后通过一个线性层.", "class ResNet(nn.Module):\n def __init__(self, block, layers, num_classes=10):\n \"\"\"\n block: ResidualBlock or other block\n layers: a list with 3 positive num.\n \"\"\"\n super(ResNet, self).__init__()\n self.in_channels = 16\n self.conv = conv3x3(3, 16)\n self.bn = nn.BatchNorm2d(16)\n self.relu = nn.ReLU(inplace=True)\n # layer1: image size 32\n self.layer1 = self.make_layer(block, 16, num_blocks=layers[0])\n # layer2: image size 32 -> 16\n self.layer2 = self.make_layer(block, 32, num_blocks=layers[1], stride=2)\n # layer1: image size 16 -> 8\n self.layer3 = self.make_layer(block, 64, num_blocks=layers[2], stride=2)\n # global avg pool: image size 8 -> 1\n self.avg_pool = nn.AvgPool2d(8)\n \n self.fc = nn.Linear(64, num_classes)\n \n def make_layer(self, block, out_channels, num_blocks, stride=1):\n \"\"\"\n make a layer with num_blocks blocks.\n \"\"\"\n \n downsample = None\n if (stride != 1) or (self.in_channels != out_channels):\n # use Conv2d with stride to 
downsample\n downsample = nn.Sequential(\n conv3x3(self.in_channels, out_channels, stride=stride),\n nn.BatchNorm2d(out_channels))\n \n # first block with downsample\n layers = []\n layers.append(block(self.in_channels, out_channels, stride, downsample))\n \n self.in_channels = out_channels\n # add num_blocks - 1 blocks\n for i in range(1, num_blocks):\n layers.append(block(out_channels, out_channels))\n \n # return a layer containing layers\n return nn.Sequential(*layers)\n \n def forward(self, x):\n out = self.conv(x)\n out = self.bn(out)\n out = self.relu(out)\n out = self.layer1(out)\n out = self.layer2(out)\n out = self.layer3(out)\n out = self.avg_pool(out)\n # view: here change output size from 4 dimensions to 2 dimensions\n out = out.view(out.size(0), -1)\n out = self.fc(out)\n return out\n\nresnet = ResNet(ResidualBlock, [2, 2, 2])\nprint(resnet)", "使用fit函数训练实现的ResNet,观察结果变化.", "# Hyper-parameters\nnum_epochs = 10\nlr = 0.001\n# Device configuration\ndevice = torch.device('cuda:0')\n# optimizer\noptimizer = torch.optim.Adam(resnet.parameters(), lr=lr)\nfit(resnet, num_epochs, optimizer, device)", "作业\n尝试改变学习率lr,使用SGD或Adam优化器,训练10个epoch,提高ResNet在测试集上的accuracy.", "resnet = ResNet(ResidualBlock, [2, 2, 2])\nnum_epochs = 10\nlr = 0.0009\ndevice = torch.device('cuda:0')\noptimizer = torch.optim.Adam(resnet.parameters(), lr=lr)\nfit(resnet, num_epochs, optimizer, device)", "作业\n下图表示将SE模块嵌入到ResNet的残差模块.\n\n其中,global pooling表示全局池化层(将输入的size池化为1*1), 将c*h*w的输入变为c*1*1的输出.FC表示全连接层(线性层),两层FC之间使用ReLU作为激活函数.通过两层FC后使用sigmoid激活函数激活.最后将得到的c个值与原输入c*h*w按channel相乘,得到c*h*w的输出.\n补充下方的代码完成SE-Resnet block的实现.", "from torch import nn\n\n\nclass SELayer(nn.Module):\n def __init__(self, channel, reduction=16):\n super(SELayer, self).__init__()\n # The output of AdaptiveAvgPool2d is of size H x W, for any input size.\n self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))\n self.fc1 = nn.Linear(channel, channel // reduction)\n self.fc2 = nn.Linear(channel // reduction, channel)\n self.sigmoid 
= nn.Sigmoid()\n\n def forward(self, x):\n b, c, _, _ = x.shape\n out = self.avg_pool(x).view(b, c)\n out = self.fc1(out)\n out = self.fc2(out)\n out = self.sigmoid(out).view(b, c, 1, 1)\n return out * x\n\nclass SEResidualBlock(nn.Module):\n def __init__(self, in_channels, out_channels, stride=1, downsample=None, reduction=16):\n super(SEResidualBlock, self).__init__()\n self.conv1 = conv3x3(in_channels, out_channels, stride)\n self.bn1 = nn.BatchNorm2d(out_channels)\n self.relu = nn.ReLU(inplace=True)\n self.conv2 = conv3x3(out_channels, out_channels)\n self.bn2 = nn.BatchNorm2d(out_channels)\n self.se = SELayer(out_channels, reduction)\n self.downsample = downsample\n \n def forward(self, x):\n residual = x\n if self.downsample:\n residual = self.downsample(x)\n out = self.conv1(x)\n out = self.bn1(out)\n out = self.relu(out)\n out = self.conv2(out)\n out = self.bn2(out)\n out = self.se(out)\n out += residual\n out = self.relu(out)\n return out\n\nse_resnet = ResNet(SEResidualBlock, [2, 2, 2])\nprint(se_resnet)\n\n# Hyper-parameters\nnum_epochs = 10\nlr = 0.001\n# Device configuration\ndevice = torch.device('cuda:0')\n# optimizer\noptimizer = torch.optim.Adam(se_resnet.parameters(), lr=lr)\n\nfit(se_resnet, num_epochs, optimizer, device) ", "Vgg\n接下来让我们阅读vgg网络的实现代码.VGGNet全部使用3*3的卷积核和2*2的池化核,通过不断加深网络结构来提升性能。Vgg表明了卷积神经网络的深度增加和小卷积核的使用对网络的最终分类识别效果有很大的作用.\n\n下面是一份用于训练cifar10的简化版的vgg代码. 
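As a quick sanity check of why the classifier head is nn.Linear(512, 10): each 'M' entry in the cfg is a stride-2 MaxPool that halves the spatial size, while the conv entries (kernel 3, padding 1) keep it unchanged, so the 32x32 CIFAR-10 input shrinks to 1x1 after five pools (a pure-Python sketch):

```python
# 'M' halves the spatial size; conv entries (kernel 3, padding 1) preserve it.
cfg_vgg11 = [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M']
size = 32  # CIFAR-10 input is 32x32
for x in cfg_vgg11:
    if x == 'M':
        size //= 2
print(size)  # 1, so the features flatten to 512*1*1, matching nn.Linear(512, 10)
```
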
\n有时间的同学可以阅读并训练它.", "import math\n\nclass VGG(nn.Module):\n def __init__(self, cfg):\n super(VGG, self).__init__()\n self.features = self._make_layers(cfg)\n # linear layer\n self.classifier = nn.Linear(512, 10)\n\n def forward(self, x):\n out = self.features(x)\n out = out.view(out.size(0), -1)\n out = self.classifier(out)\n return out\n\n def _make_layers(self, cfg):\n \"\"\"\n cfg: a list define layers this layer contains\n 'M': MaxPool, number: Conv2d(out_channels=number) -> BN -> ReLU\n \"\"\"\n layers = []\n in_channels = 3\n for x in cfg:\n if x == 'M':\n layers += [nn.MaxPool2d(kernel_size=2, stride=2)]\n else:\n layers += [nn.Conv2d(in_channels, x, kernel_size=3, padding=1),\n nn.BatchNorm2d(x),\n nn.ReLU(inplace=True)]\n in_channels = x\n layers += [nn.AvgPool2d(kernel_size=1, stride=1)]\n return nn.Sequential(*layers)\n\ncfg = {\n 'VGG11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],\n 'VGG13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],\n 'VGG16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],\n 'VGG19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],\n}\nvggnet = VGG(cfg['VGG11'])\nprint(vggnet)\n\n# Hyper-parameters\nnum_epochs = 10\nlr = 1e-3\n# Device configuration\ndevice = torch.device('cuda:0')\n\n# optimizer\noptimizer = torch.optim.Adam(vggnet.parameters(), lr=lr)\n\nfit(vggnet, num_epochs, optimizer, device)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
avallarino-ar/MCDatos
Notas/Notas-Python/.ipynb_checkpoints/01_estructuras-checkpoint.ipynb
mit
[ "Summary of Python data structures\nTuples, lists, sets, dictionaries, list comprehensions, functions, classes\nLet's look at examples of data structures in Python\nTuples\nTuples are the simplest kind of structure: a single variable can hold more than one type of data.", "x = (1,2,3,0,2,1)\nx\n\nx = (0, 'Hola', (1,2))\n\nx[1]", "The downside of tuples is that they are immutable", "id(x)\n\nx = (0, 'Cambio', (1,2))\nid(x)\n\nx", "Lists\nLists are mutable sequences of elements", "x = [1,2,3]\nx.append('Nuevo valor')\nx\n\nx.insert(2, 'Valor Intermedio')\nx", "Which is faster: tuples or lists?", "import timeit\ntimeit.timeit('x = (1,2,3,4,5,6)')\n\ntimeit.timeit('x = [1,2,3,4,5,6]')", "A note for R users: reference or assignment?", "x = [1,2,3] # assignment\ny = [0, x] # reference\ny\n\nx[0] = -1 # mutate an element of x\ny # changing the value in x also changes y (y points to x)", "Dictionaries\nIn a great many problems, you want to store keys and assign a value to each key.\nA better name for a dictionary would be a \"phone directory\"", "dir_tel = {'juan':5512345, 'pedro':5554321, 'itam':'is fun'}\ndir_tel['juan']\n\ndir_tel.keys()\n\ndir_tel.values()", "Sets\nSets are mathematical sets", "A = set([1,2,3])\nB = set([2,3,4])\n\nA | B # union\n\nA & B # intersection\n\nA - B # set difference\n\nA ^ B # symmetric difference", "Conditionals and loops: for, while, if, elif\nThe trick for writing loops in Python is the range function", "range(1000)\n\nfor i in range(5):\n print(i)\n\nfor i in range(10):\n if i % 2 == 0:\n print(str(i) + ' Par')\n else:\n print(str(i) + ' Impar')\n\ni = 0\nwhile i < 10:\n print(i)\n i = i + 1", "Classes", "class Person:\n def __init__(self, first, last):\n self.first = first\n self.last = last\n \n def greet(self, add_msg = ''):\n print('Hello ' + self.first + ' ' + add_msg)\n\njuan = Person('juan', 'dominguez')\njuan.first\n\njuan.greet()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yunfeiz/py_learnt
sample_code/tushare_pyecahrts_v0.1.ipynb
apache-2.0
[ "1、top 10 share holder", "import tushare as ts\nimport pandas as pd\nfrom IPython.display import HTML\nstock_selected='600487'\n#历年前十大股东持股情况\n#df1为季度统计摘要,data1为前十大持股明细统计\ndf1, data1 = ts.top10_holders(code=stock_selected, gdtype='0') #gdtype等于1时表示流通股,默认为0 \n#df1, data1 = ts.top10_holders(code='002281', year=2015, quarter=1, gdtype='1')\n\ndf1 = df1.sort_values('quarter', ascending=False)\n\ndf1.head(10)\n\n#qts = list(df1['quarter'])\n#data = list(df1['props'])\n#name = ts.get_realtime_quotes(stock_selected)['name'][0]", "2、Top 10 share holder", "import tushare as ts\nimport pandas as pd\nfrom IPython.display import HTML\n#浦发银行2016三季度前十大流通股东情况\ndf2, data2 = ts.top10_holders(code=stock_selected, year=2016, quarter=3, gdtype='1')\n\n#取前十大流通股东名称\ntop10name = str(list(data2['name']))\nprint(top10name)", "获取沪深上市公司基本情况。属性包括:\ncode,代码\nname,名称\nindustry,所属行业\narea,地区\npe,市盈率\noutstanding,流通股本(亿)\ntotals,总股本(亿)\ntotalAssets,总资产(万)\nliquidAssets,流动资产\nfixedAssets,固定资产\nreserved,公积金\nreservedPerShare,每股公积金\nesp,每股收益\nbvps,每股净资\npb,市净率\ntimeToMarket,上市日期\nundp,未分利润\nperundp, 每股未分配\nrev,收入同比(%)\nprofit,利润同比(%)\ngpr,毛利率(%)\nnpr,净利润率(%)\nholders,股东人数\n调用方法:", "import tushare as ts\ndf=ts.get_stock_basics()\ndf.head(5)\natt=df.columns.values.tolist()\nprint(att)\n\n#df.ix['002281']\n#df.ix['002281']\n#df.info()\ndf[df.name == u'四维图新']['esp']\n\n\nfrom xpinyin import Pinyin\npin=Pinyin()\n\npin.get_initials(u'四维图新', u'')\n\ndf.name\n\ndf['UP'] = None\nfor index, row in df.iterrows():\n name_str = df.name[index]\n #print(name_str)\n up_letter = pin.get_initials(name_str,u'')\n #print(up_letter)\n df.at[index,['UP']]=up_letter\ndf[df['UP']=='HTGD']\n\ndf_out=df[(df.profit>20) & \n (df.gpr > 25) &\n (df.pe <120) &\n (df.pe >0) &\n (df.rev >0)][['name','industry','pe','profit','esp','rev','holders','gpr','npr']]\ndf_out.sort_values(by='npr',ascending=False, inplace = True)\ndf_out.rename(columns={'name':u'股票','industry':u'行业','pe':u'市盈率', \n 
'profit':u'利润同比','esp':u'每股收益','rev':u'收入同比',\n 'holders':u'股东人数','gpr':u'毛利率','npr':u'净利率'})[:50]", "业绩报告(主表)\n按年度、季度获取业绩报表数据。数据获取需要一定的时间,网速取决于您的网速,请耐心等待。结果返回的数据属性说明如下:\ncode,代码\nname,名称\nesp,每股收益\neps_yoy,每股收益同比(%)\nbvps,每股净资产\nroe,净资产收益率(%)\nepcf,每股现金流量(元)\nnet_profits,净利润(万元)\nprofits_yoy,净利润同比(%)\ndistrib,分配方案\nreport_date,发布日期\n调用方法:\n获取2014年第3季度的业绩报表数据\nts.get_report_data(2014,3)\n结果返回:\n code name esp eps_yoy bvps roe epcf net_profits", "import tushare as ts\ndf=ts.get_report_data(2016,4)\n\n#df[df.code=='002405']\ndf", "盈利能力\n按年度、季度获取盈利能力数据,结果返回的数据属性说明如下:\ncode,代码\nname,名称\nroe,净资产收益率(%)\nnet_profit_ratio,净利率(%)\ngross_profit_rate,毛利率(%)\nnet_profits,净利润(万元)\nesp,每股收益\nbusiness_income,营业收入(百万元)\nbips,每股主营业务收入(元)\n调用方法:\n获取2014年第3季度的盈利能力数据\nts.get_profit_data(2014,3)\n结果返回:", "import tushare as ts\ndf_profit = ts.get_profit_data(2017,1)\n\n#df_profit.info()\n#df_profit[df_profit.code == '002405']\ndf_out=df_profit[(df_profit.roe>10) & (df_profit.gross_profit_rate > 25) & (df_profit.net_profits >0)]\ndf_out.sort_values(by='roe',ascending=False, inplace = True)\ndf_out[:50]", "营运能力\n按年度、季度获取营运能力数据,结果返回的数据属性说明如下:\ncode,代码\nname,名称\narturnover,应收账款周转率(次)\narturndays,应收账款周转天数(天)\ninventory_turnover,存货周转率(次)\ninventory_days,存货周转天数(天)\ncurrentasset_turnover,流动资产周转率(次)\ncurrentasset_days,流动资产周转天数(天)\n调用方法:\n获取2014年第3季度的营运能力数据\nts.get_operation_data(2014,3)\n结果返回:\n code name arturnover arturndays inventory_turnover inventory_days \\", "import tushare as ts\ndf_operation = ts.get_operation_data(2017,1)\n\ndf_out=df_operation[df_operation.currentasset_days<120]\ndf_out.sort_values(by='currentasset_days',ascending=False, inplace = True)\ndf_out[:50]", "成长能力\n按年度、季度获取成长能力数据,结果返回的数据属性说明如下:\ncode,代码\nname,名称\nmbrg,主营业务收入增长率(%)\nnprg,净利润增长率(%)\nnav,净资产增长率\ntarg,总资产增长率\nepsg,每股收益增长率\nseg,股东权益增长率\n调用方法:\n获取2014年第3季度的成长能力数据\nts.get_growth_data(2014,3)\n结果返回:", "# -*- coding: UTF-8 -*-\nimport tushare as ts\ndf_growth = ts.get_growth_data(2017,1)\n\nimport numpy as np\ndf_out 
= df_growth[(df_growth.nprg >20) &\n (df_growth.mbrg >20)]\ndf_out.sort_values(by= 'nprg', ascending = True, inplace=True)\n#df_out.to_csv(\".\\growth.csv\",encoding=\"utf_8_sig\",dtype={'code':np.string})\ndf_out[:50]", "偿债能力\n按年度、季度获取偿债能力数据,结果返回的数据属性说明如下:\ncode,代码\nname,名称\ncurrentratio,流动比率\nquickratio,速动比率\ncashratio,现金比率\nicratio,利息支付倍数\nsheqratio,股东权益比率\nadratio,股东权益增长率\n调用方法:\n获取2014年第3季度的偿债能力数据\nts.get_debtpaying_data(2014,3)\n结果返回:\n code name currentratio quickratio cashratio icratio \\\n\n现金流量\n按年度、季度获取现金流量数据,结果返回的数据属性说明如下:\ncode,代码\nname,名称\ncf_sales,经营现金净流量对销售收入比率\nrateofreturn,资产的经营现金流量回报率\ncf_nm,经营现金净流量与净利润的比率\ncf_liabilities,经营现金净流量对负债比率\ncashflowratio,现金流量比率\n调用方法:\n获取2014年第3季度的现金流量数据\nts.get_cashflow_data(2014,3)\n结果返回:\n code name cf_sales rateofreturn cf_nm cf_liabilities \\\n\n'''", "import tushare as ts\ndf_cash = ts.get_cashflow_data(2016,4)\n\ndf_out = df_cash[(df_cash.cf_sales > 0)]\ndf_out.sort_values(by = 'cf_sales', ascending = True, inplace = True)\ndf_out[:50]", "3、CandleStick", "import tushare as ts\nimport pandas as pd\nfrom IPython.display import HTML\n#中国联通前复权数据\n#df = ts.get_k_data(stock_selected, start='2016-01-01', end='2016-12-02')\ndf = ts.get_k_data(stock_selected, start='2016-01-01')\n\n\ndatastr = ''\nfor idx in df.index:\n rowstr = '[\\'%s\\',%s,%s,%s,%s]' % (df.ix[idx]['date'], df.ix[idx]['open'], \n df.ix[idx]['close'], df.ix[idx]['low'], \n df.ix[idx]['high'])\n datastr += rowstr + ','\ndatastr = datastr[:-1]\n#取股票名称\nname = ts.get_realtime_quotes(stock_selected)['name'][0]\n\ndatahead = \"\"\"\n<div id=\"chart\" style=\"width:800px; height:600px;\"></div> \n<script> \nrequire.config({ paths:{ echarts: '//cdn.bootcss.com/echarts/3.2.3/echarts.min', } });\nrequire(['echarts'],function(ec){\nvar myChart = ec.init(document.getElementById('chart'));\n\"\"\"\ndatavar = 'var data0 = splitData([%s]);' % datastr\nfuncstr = \"\"\"\nfunction splitData(rawData) {\n var categoryData = [];\n var values = []\n for (var i = 0; i < 
rawData.length; i++) {\n categoryData.push(rawData[i].splice(0, 1)[0]);\n values.push(rawData[i])\n }\n return {\n categoryData: categoryData,\n values: values\n };\n}\n\nfunction calculateMA(dayCount) {\n var result = [];\n for (var i = 0, len = data0.values.length; i < len; i++) {\n if (i < dayCount) {\n result.push('-');\n continue;\n }\n var sum = 0;\n for (var j = 0; j < dayCount; j++) {\n sum += data0.values[i - j][1];\n }\n result.push((sum / dayCount).toFixed(2));\n }\n return result;\n}\n\noption = {\n title: {\n\"\"\"\n\nnamestr = 'text: \\'%s\\',' %name\n\nfunctail = \"\"\"\n left: 0\n },\n tooltip: {\n trigger: 'axis',\n axisPointer: {\n type: 'line'\n }\n },\n legend: {\n data: ['日K', 'MA5', 'MA10', 'MA20', 'MA30']\n },\n grid: {\n left: '10%',\n right: '10%',\n bottom: '15%'\n },\n xAxis: {\n type: 'category',\n data: data0.categoryData,\n scale: true,\n boundaryGap : false,\n axisLine: {onZero: false},\n splitLine: {show: false},\n splitNumber: 20,\n min: 'dataMin',\n max: 'dataMax'\n },\n yAxis: {\n scale: true,\n splitArea: {\n show: true\n }\n },\n dataZoom: [\n {\n type: 'inside',\n start: 50,\n end: 100\n },\n {\n show: true,\n type: 'slider',\n y: '90%',\n start: 50,\n end: 100\n }\n ],\n series: [\n {\n name: '日K',\n type: 'candlestick',\n data: data0.values,\n markPoint: {\n label: {\n normal: {\n formatter: function (param) {\n return param != null ? 
Math.round(param.value) : '';\n }\n }\n },\n data: [\n {\n name: '标点',\n coord: ['2013/5/31', 2300],\n value: 2300,\n itemStyle: {\n normal: {color: 'rgb(41,60,85)'}\n }\n },\n {\n name: 'highest value',\n type: 'max',\n valueDim: 'highest'\n },\n {\n name: 'lowest value',\n type: 'min',\n valueDim: 'lowest'\n },\n {\n name: 'average value on close',\n type: 'average',\n valueDim: 'close'\n }\n ],\n tooltip: {\n formatter: function (param) {\n return param.name + '<br>' + (param.data.coord || '');\n }\n }\n },\n markLine: {\n symbol: ['none', 'none'],\n data: [\n [\n {\n name: 'from lowest to highest',\n type: 'min',\n valueDim: 'lowest',\n symbol: 'circle',\n symbolSize: 10,\n label: {\n normal: {show: false},\n emphasis: {show: false}\n }\n },\n {\n type: 'max',\n valueDim: 'highest',\n symbol: 'circle',\n symbolSize: 10,\n label: {\n normal: {show: false},\n emphasis: {show: false}\n }\n }\n ],\n {\n name: 'min line on close',\n type: 'min',\n valueDim: 'close'\n },\n {\n name: 'max line on close',\n type: 'max',\n valueDim: 'close'\n }\n ]\n }\n },\n {\n name: 'MA5',\n type: 'line',\n data: calculateMA(5),\n smooth: true,\n lineStyle: {\n normal: {opacity: 0.5}\n }\n },\n {\n name: 'MA10',\n type: 'line',\n data: calculateMA(10),\n smooth: true,\n lineStyle: {\n normal: {opacity: 0.5}\n }\n },\n {\n name: 'MA20',\n type: 'line',\n data: calculateMA(20),\n smooth: true,\n lineStyle: {\n normal: {opacity: 0.5}\n }\n },\n {\n name: 'MA30',\n type: 'line',\n data: calculateMA(30),\n smooth: true,\n lineStyle: {\n normal: {opacity: 0.5}\n }\n },\n\n ]\n};\nmyChart.setOption(option);\n });\n</script>\n\"\"\"\n\nHTML(datahead + datavar + funcstr + namestr + functail)\n\nimport tushare as ts\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nstock_selected='002281'\ndf = ts.get_k_data(stock_selected, start='2016-01-01')\ndf.info()\n\n#df['close'].plot(grid=True)\n\n#df['42d']= np.round(pd.rolling_mean(df['close'],window=42),2)\n#df['252d']= 
np.round(pd.rolling_mean(df['close'],window=252),2)\ndf['42d']= np.round(pd.Series.rolling(df['close'],window=42).mean(),2)\ndf['252d']= np.round(pd.Series.rolling(df['close'],window=252).mean(),2)\n\n#df[['close','42d','252d']].tail(10)\n\ndf[['close','42d','252d']].plot(grid=True)\n\ndf['42-252']=df['42d']-df['252d']\n#df['42-252'].tail(10)\n\nSD=1\ndf['regime'] = np.where(df['42-252']>SD,1,0)\ndf['regime'] = np.where(df['42-252'] < -SD,-1,df['regime'])\n#df['regime'].head(10)\n\ndf['regime'].tail(10)\n\n#df['regime'].plot(lw=1.5)\n#plt.ylim(-1.1, 1.1)\n\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Chipe1/aima-python
probability4e.ipynb
mit
[ "Probability and Bayesian Networks\nProbability theory allows us to compute the likelihood of certain events, given assumptions about the components of the event. A Bayesian network, or Bayes net for short, is a data structure to represent a joint probability distribution over several random variables, and do inference on it. \nAs an example, here is a network with five random variables, each with its conditional probability table, and with arrows from parent to child variables. The story, from Judea Pearl, is that there is a house burglar alarm, which can be triggered by either a burglary or an earthquake. If the alarm sounds, one or both of the neighbors, John and Mary, might call the owner to say the alarm is sounding.\n<p><img src=\"http://norvig.com/ipython/burglary2.jpg\">\n\nWe implement this with the help of seven Python classes:\n\n\n## `BayesNet()`\n\nA `BayesNet` is a graph (as in the diagram above) where each node represents a random variable, and the edges are parent&rarr;child links. You can construct an empty graph with `BayesNet()`, then add variables one at a time with the method call `.add(`*variable_name, parent_names, cpt*`)`, where the names are strings, and each of the `parent_names` must already have been `.add`ed.\n\n## `Variable(`*name, cpt, parents*`)`\n\nA random variable; the ovals in the diagram above. The value of a variable depends on the value of the parents, in a probabilistic way specified by the variable's conditional probability table (CPT). Given the parents, the variable is independent of all the other variables. For example, if I know whether *Alarm* is true or false, then I know the probability of *JohnCalls*, and evidence about the other variables won't give me any more information about *JohnCalls*. Each row of the CPT uses the same order of variables as the list of parents.\nWe will only allow variables with a finite discrete domain; not continuous values. 
\n\n## `ProbDist(`*mapping*`)`<br>`Factor(`*mapping*`)`\n\nA probability distribution is a mapping of `{outcome: probability}` for every outcome of a random variable. \nYou can give `ProbDist` the same arguments that you would give to the `dict` initializer, for example\n`ProbDist(sun=0.6, rain=0.1, cloudy=0.3)`.\nAs a shortcut for Boolean Variables, you can say `ProbDist(0.95)` instead of `ProbDist({T: 0.95, F: 0.05})`. \nIn a probability distribution, every value is between 0 and 1, and the values sum to 1.\nA `Factor` is similar to a probability distribution, except that the values need not sum to 1. Factors\nare used in the variable elimination inference method.\n\n## `Evidence(`*mapping*`)`\n\nA mapping of `{Variable: value, ...}` pairs, describing the exact values for a set of variables&mdash;the things we know for sure.\n\n## `CPTable(`*rows, parents*`)`\n\nA conditional probability table (or *CPT*) describes the probability of each possible outcome value of a random variable, given the values of the parent variables. A `CPTable` is a mapping, `{tuple: probdist, ...}`, where each tuple lists the values of each of the parent variables, in order, and each probability distribution says what the possible outcomes are, given those values of the parents. The `CPTable` for *Alarm* in the diagram above would be represented as follows:\n\n CPTable({(T, T): .95,\n (T, F): .94,\n (F, T): .29,\n (F, F): .001},\n [Burglary, Earthquake])\n\nHow do you read this? Take the second row, \"`(T, F): .94`\". This means that when the first parent (`Burglary`) is true, and the second parent (`Earthquake`) is false, then the probability of `Alarm` being true is .94. 
Note that the .94 is an abbreviation for `ProbDist({T: .94, F: .06})`.\n\n## `T = Bool(True); F = Bool(False)`\n\nWhen I used `bool` values (`True` and `False`), it became hard to read rows in CPTables, because the columns didn't line up:\n\n (True, True, False, False, False)\n (False, False, False, False, True)\n (True, False, False, True, True)\n\nTherefore, I created the `Bool` class, with constants `T` and `F` such that `T == True` and `F == False`, and now rows are easier to read:\n\n (T, T, F, F, F)\n (F, F, F, F, T)\n (T, F, F, T, T)\n\nHere is the code for these classes:", "from collections import defaultdict, Counter\nimport itertools\nimport math\nimport random\n\nclass BayesNet(object):\n \"Bayesian network: a graph of variables connected by parent links.\"\n \n def __init__(self): \n self.variables = [] # List of variables, in parent-first topological sort order\n self.lookup = {} # Mapping of {variable_name: variable} pairs\n \n def add(self, name, parentnames, cpt):\n \"Add a new Variable to the BayesNet. Parentnames must have been added previously.\"\n parents = [self.lookup[name] for name in parentnames]\n var = Variable(name, cpt, parents)\n self.variables.append(var)\n self.lookup[name] = var\n return self\n \nclass Variable(object):\n \"A discrete random variable; conditional on zero or more parent Variables.\"\n \n def __init__(self, name, cpt, parents=()):\n \"A variable has a name, list of parent variables, and a Conditional Probability Table.\"\n self.__name__ = name\n self.parents = parents\n self.cpt = CPTable(cpt, parents)\n self.domain = set(itertools.chain(*self.cpt.values())) # All the outcomes in the CPT\n \n def __repr__(self): return self.__name__\n \nclass Factor(dict): \"An {outcome: frequency} mapping.\"\n\nclass ProbDist(Factor):\n \"\"\"A Probability Distribution is an {outcome: probability} mapping. 
\n The values are normalized to sum to 1.\n ProbDist(0.75) is an abbreviation for ProbDist({T: 0.75, F: 0.25}).\"\"\"\n def __init__(self, mapping=(), **kwargs):\n if isinstance(mapping, float):\n mapping = {T: mapping, F: 1 - mapping}\n self.update(mapping, **kwargs)\n normalize(self)\n \nclass Evidence(dict): \n \"A {variable: value} mapping, describing what we know for sure.\"\n \nclass CPTable(dict):\n \"A mapping of {row: ProbDist, ...} where each row is a tuple of values of the parent variables.\"\n \n def __init__(self, mapping, parents=()):\n \"\"\"Provides two shortcuts for writing a Conditional Probability Table. \n With no parents, CPTable(dist) means CPTable({(): dist}).\n With one parent, CPTable({val: dist,...}) means CPTable({(val,): dist,...}).\"\"\"\n if len(parents) == 0 and not (isinstance(mapping, dict) and set(mapping.keys()) == {()}):\n mapping = {(): mapping}\n for (row, dist) in mapping.items():\n if len(parents) == 1 and not isinstance(row, tuple): \n row = (row,)\n self[row] = ProbDist(dist)\n\nclass Bool(int):\n \"Just like `bool`, except values display as 'T' and 'F' instead of 'True' and 'False'\"\n __str__ = __repr__ = lambda self: 'T' if self else 'F'\n \nT = Bool(True)\nF = Bool(False)", "And here are some associated functions:", "def P(var, evidence={}):\n \"The probability distribution for P(variable | evidence), when all parent variables are known (in evidence).\"\n row = tuple(evidence[parent] for parent in var.parents)\n return var.cpt[row]\n\ndef normalize(dist):\n \"Normalize a {key: value} distribution so values sum to 1.0. 
Mutates dist and returns it.\"\n total = sum(dist.values())\n for key in dist:\n dist[key] = dist[key] / total\n assert 0 <= dist[key] <= 1, \"Probabilities must be between 0 and 1.\"\n return dist\n\ndef sample(probdist):\n \"Randomly sample an outcome from a probability distribution.\"\n r = random.random() # r is a random point in the probability distribution\n c = 0.0 # c is the cumulative probability of outcomes seen so far\n for outcome in probdist:\n c += probdist[outcome]\n if r <= c:\n return outcome\n \ndef globalize(mapping):\n \"Given a {name: value} mapping, export all the names to the `globals()` namespace.\"\n globals().update(mapping)", "Sample Usage\nHere are some examples of using the classes:", "# Example random variable: Earthquake:\n# An earthquake occurs on 0.002 of days, independent of any other variables.\nEarthquake = Variable('Earthquake', 0.002)\n\n# The probability distribution for Earthquake\nP(Earthquake)\n\n# Get the probability of a specific outcome by subscripting the probability distribution\nP(Earthquake)[T]\n\n# Randomly sample from the distribution:\nsample(P(Earthquake))\n\n# Randomly sample 100,000 times, and count up the results:\nCounter(sample(P(Earthquake)) for i in range(100000))\n\n# Two equivalent ways of specifying the same Boolean probability distribution:\nassert ProbDist(0.75) == ProbDist({T: 0.75, F: 0.25})\n\n# Two equivalent ways of specifying the same non-Boolean probability distribution:\nassert ProbDist(win=15, lose=3, tie=2) == ProbDist({'win': 15, 'lose': 3, 'tie': 2})\nProbDist(win=15, lose=3, tie=2)\n\n# The difference between a Factor and a ProbDist--the ProbDist is normalized:\nFactor(a=1, b=2, c=3, d=4)\n\nProbDist(a=1, b=2, c=3, d=4)", "Example: Alarm Bayes Net\nHere is how we define the Bayes net from the diagram above:", "alarm_net = (BayesNet()\n .add('Burglary', [], 0.001)\n .add('Earthquake', [], 0.002)\n .add('Alarm', ['Burglary', 'Earthquake'], {(T, T): 0.95, (T, F): 0.94, (F, T): 0.29, (F, F): 
0.001})\n .add('JohnCalls', ['Alarm'], {T: 0.90, F: 0.05})\n .add('MaryCalls', ['Alarm'], {T: 0.70, F: 0.01})) \n\n# Make Burglary, Earthquake, etc. be global variables\nglobalize(alarm_net.lookup) \nalarm_net.variables\n\n# Probability distribution of a Burglary\nP(Burglary)\n\n# Probability of Alarm going off, given a Burglary and not an Earthquake:\nP(Alarm, {Burglary: T, Earthquake: F})\n\n# Where that came from: the (T, F) row of Alarm's CPT:\nAlarm.cpt", "Bayes Nets as Joint Probability Distributions\nA Bayes net is a compact way of specifying a full joint distribution over all the variables in the network. Given a set of variables {X<sub>1</sub>, ..., X<sub>n</sub>}, the full joint distribution is:\nP(X<sub>1</sub>=x<sub>1</sub>, ..., X<sub>n</sub>=x<sub>n</sub>) = <font size=large>&Pi;</font><sub>i</sub> P(X<sub>i</sub> = x<sub>i</sub> | parents(X<sub>i</sub>))\nFor a network with n variables, each of which has b values, there are b<sup>n</sup> rows in the joint distribution (for example, a billion rows for 30 Boolean variables), making it impractical to explicitly create the joint distribution for large networks. But for small networks, the function joint_distribution creates the distribution, which can be instructive to look at, and can be used to do inference.", "def joint_distribution(net):\n \"Given a Bayes net, create the joint distribution over all variables.\"\n return ProbDist({row: prod(P_xi_given_parents(var, row, net)\n for var in net.variables)\n for row in all_rows(net)})\n\ndef all_rows(net): return itertools.product(*[var.domain for var in net.variables])\n\ndef P_xi_given_parents(var, row, net):\n \"The probability that var = xi, given the values in this row.\"\n dist = P(var, Evidence(zip(net.variables, row)))\n xi = row[net.variables.index(var)]\n return dist[xi]\n\ndef prod(numbers):\n \"The product of numbers: prod([2, 3, 5]) == 30. 
Analogous to `sum([2, 3, 5]) == 10`.\"\n result = 1\n for x in numbers:\n result *= x\n return result\n\n# All rows in the joint distribution (2**5 == 32 rows)\nset(all_rows(alarm_net))\n\n# Let's work through just one row of the table:\nrow = (F, F, F, F, F)\n\n# This is the probability distribution for Alarm\nP(Alarm, {Burglary: F, Earthquake: F})\n\n# Here's the probability that Alarm is false, given the parent values in this row:\nP_xi_given_parents(Alarm, row, alarm_net)\n\n# The full joint distribution:\njoint_distribution(alarm_net)\n\n# Probability that \"the alarm has sounded, but neither a burglary nor an earthquake has occurred, \n# and both John and Mary call\" (page 514 says it should be 0.000628)\n\nprint(alarm_net.variables)\njoint_distribution(alarm_net)[F, F, T, T, T]", "Inference by Querying the Joint Distribution\nWe can use P(variable, evidence) to get the probability of a variable, if we know the values of all the parent variables. But what if we don't know? Bayes nets allow us to calculate the probability, but the calculation is not just a lookup in the CPT; it is a global calculation across the whole net. 
One inefficient but straightforward way of doing the calculation is to create the joint probability distribution, then pick out just the rows that\nmatch the evidence variables, and for each row check what the value of the query variable is, and increment the probability for that value accordingly:", "def enumeration_ask(X, evidence, net):\n \"The probability distribution for query variable X in a belief net, given evidence.\"\n i = net.variables.index(X) # The index of the query variable X in the row\n dist = defaultdict(float) # The resulting probability distribution over X\n for (row, p) in joint_distribution(net).items():\n if matches_evidence(row, evidence, net):\n dist[row[i]] += p\n return ProbDist(dist)\n\ndef matches_evidence(row, evidence, net):\n \"Does the tuple of values for this row agree with the evidence?\"\n return all(evidence[v] == row[net.variables.index(v)]\n for v in evidence)\n\n# The probability of a Burglary, given that Mary calls but John does not: \nenumeration_ask(Burglary, {JohnCalls: F, MaryCalls: T}, alarm_net)\n\n# The probability of an Alarm, given that there is an Earthquake and Mary calls:\nenumeration_ask(Alarm, {MaryCalls: T, Earthquake: T}, alarm_net)", "Variable Elimination\nThe enumeration_ask algorithm takes time and space that is exponential in the number of variables. That is, first it creates the joint distribution, of size b<sup>n</sup>, and then it sums out the values for the rows that match the evidence. We can do better than that if we interleave the joining of variables with the summing out of values.\nThis approach is called variable elimination. 
The key insight is that\nwhen we compute\nP(X<sub>1</sub>=x<sub>1</sub>, ..., X<sub>n</sub>=x<sub>n</sub>) = <font size=large>&Pi;</font><sub>i</sub> P(X<sub>i</sub> = x<sub>i</sub> | parents(X<sub>i</sub>))\nwe are repeating the calculation of, say, P(X<sub>3</sub> = x<sub>3</sub> | parents(X<sub>3</sub>))\nmultiple times, across multiple rows of the joint distribution.", "# TODO: Copy over and update Variable Elimination algorithm. Also, sampling algorithms.", "Example: Flu Net\nIn this net, whether a patient gets the flu is dependent on whether they were vaccinated, and having the flu influences whether they get a fever or headache. Here Fever is a non-Boolean variable, with three values, no, mild, and high.", "flu_net = (BayesNet()\n .add('Vaccinated', [], 0.60)\n .add('Flu', ['Vaccinated'], {T: 0.002, F: 0.02})\n .add('Fever', ['Flu'], {T: ProbDist(no=25, mild=25, high=50),\n F: ProbDist(no=97, mild=2, high=1)})\n .add('Headache', ['Flu'], {T: 0.5, F: 0.03}))\n\nglobalize(flu_net.lookup)\n\n# If you just have a headache, you probably don't have the Flu.\nenumeration_ask(Flu, {Headache: T, Fever: 'no'}, flu_net)\n\n# Even more so if you were vaccinated.\nenumeration_ask(Flu, {Headache: T, Fever: 'no', Vaccinated: T}, flu_net)\n\n# But if you were not vaccinated, there is a higher chance you have the flu.\nenumeration_ask(Flu, {Headache: T, Fever: 'no', Vaccinated: F}, flu_net)\n\n# And if you have both headache and fever, and were not vaccinated, \n# then the flu is very likely, especially if it is a high fever.\nenumeration_ask(Flu, {Headache: T, Fever: 'mild', Vaccinated: F}, flu_net)\n\nenumeration_ask(Flu, {Headache: T, Fever: 'high', Vaccinated: F}, flu_net)", "Entropy\nWe can compute the entropy of a probability distribution:", "def entropy(probdist):\n \"The entropy of a probability distribution.\"\n return - sum(p * math.log(p, 2)\n for p in probdist.values())\n\nentropy(ProbDist(heads=0.5, tails=0.5))\n\nentropy(ProbDist(yes=1000, 
no=1))\n\nentropy(P(Alarm, {Earthquake: T, Burglary: F}))\n\nentropy(P(Alarm, {Earthquake: F, Burglary: F}))", "For non-Boolean variables, the entropy can be greater than 1 bit:", "entropy(P(Fever, {Flu: T}))", "Unknown Outcomes: Smoothing\nSo far we have dealt with discrete distributions where we know all the possible outcomes in advance. For Boolean variables, the only outcomes are T and F. For Fever, we modeled exactly three outcomes. However, in some applications we will encounter new, previously unknown outcomes over time. For example, we could train a model on the distribution of words in English, and then somebody could coin a brand new word. To deal with this, we introduce\nthe DefaultProbDist distribution, which uses the key None to stand as a placeholder for any unknown outcome(s).", "class DefaultProbDist(ProbDist):\n \"\"\"A Probability Distribution that supports smoothing for unknown outcomes (keys).\n The default_value represents the probability of an unknown (previously unseen) key. \n The key `None` stands for unknown outcomes.\"\"\"\n def __init__(self, default_value, mapping=(), **kwargs):\n self[None] = default_value\n self.update(mapping, **kwargs)\n normalize(self)\n \n def __missing__(self, key): return self[None] \n\nimport re\n\ndef words(text): return re.findall(r'\\w+', text.lower())\n\nenglish = words('''This is a sample corpus of English prose. To get a better model, we would train on much\nmore text. But this should give you an idea of the process. So far we have dealt with discrete \ndistributions where we know all the possible outcomes in advance. For Boolean variables, the only \noutcomes are T and F. For Fever, we modeled exactly three outcomes. However, in some applications we \nwill encounter new, previously unknown outcomes over time. For example, when we could train a model on the \nwords in this text, we get a distribution, but somebody could coin a brand new word. 
To deal with this, \nwe introduce the DefaultProbDist distribution, which uses the key `None` to stand as a placeholder for any \nunknown outcomes. Probability theory allows us to compute the likelihood of certain events, given \nassumptions about the components of the event. A Bayesian network, or Bayes net for short, is a data \nstructure to represent a joint probability distribution over several random variables, and do inference on it.''')\n\nE = DefaultProbDist(0.1, Counter(english))\n\n# 'the' is a common word:\nE['the']\n\n# 'possible' is a less-common word:\nE['possible']\n\n# 'impossible' was not seen in the training data, but still gets a non-zero probability ...\nE['impossible']\n\n# ... as do other rare, previously unseen words:\nE['llanfairpwllgwyngyll']", "Note that this does not mean that 'impossible' and 'llanfairpwllgwyngyll' and all the other unknown words\neach have probability 0.004.\nRather, it means that together, all the unknown words total probability 0.004. With that\ninterpretation, the sum of all the probabilities is still 1, as it should be. In the DefaultProbDist, the\nunknown words are all represented by the key None:", "E[None]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
aje/POT
notebooks/plot_optim_OTreg.ipynb
mit
[ "%matplotlib inline", "Regularized OT with generic solver\nIllustrates the use of the generic solver for regularized OT with\nuser-designed regularization term. It uses Conditional gradient as in [6] and\ngeneralized Conditional Gradient as proposed in [5][7].\n[5] N. Courty; R. Flamary; D. Tuia; A. Rakotomamonjy, Optimal Transport for\nDomain Adaptation, in IEEE Transactions on Pattern Analysis and Machine\nIntelligence , vol.PP, no.99, pp.1-1.\n[6] Ferradans, S., Papadakis, N., Peyré, G., & Aujol, J. F. (2014).\nRegularized discrete optimal transport. SIAM Journal on Imaging Sciences,\n7(3), 1853-1882.\n[7] Rakotomamonjy, A., Flamary, R., & Courty, N. (2015). Generalized\nconditional gradient: analysis of convergence and applications.\narXiv preprint arXiv:1510.06567.", "import numpy as np\nimport matplotlib.pylab as pl\nimport ot\nimport ot.plot", "Generate data", "#%% parameters\n\nn = 100 # nb bins\n\n# bin positions\nx = np.arange(n, dtype=np.float64)\n\n# Gaussian distributions\na = ot.datasets.get_1D_gauss(n, m=20, s=5) # m= mean, s= std\nb = ot.datasets.get_1D_gauss(n, m=60, s=10)\n\n# loss matrix\nM = ot.dist(x.reshape((n, 1)), x.reshape((n, 1)))\nM /= M.max()", "Solve EMD", "#%% EMD\n\nG0 = ot.emd(a, b, M)\n\npl.figure(3, figsize=(5, 5))\not.plot.plot1D_mat(a, b, G0, 'OT matrix G0')", "Solve EMD with Frobenius norm regularization", "#%% Example with Frobenius norm regularization\n\n\ndef f(G):\n return 0.5 * np.sum(G**2)\n\n\ndef df(G):\n return G\n\n\nreg = 1e-1\n\nGl2 = ot.optim.cg(a, b, M, reg, f, df, verbose=True)\n\npl.figure(3)\not.plot.plot1D_mat(a, b, Gl2, 'OT matrix Frob. reg')", "Solve EMD with entropic regularization", "#%% Example with entropic regularization\n\n\ndef f(G):\n return np.sum(G * np.log(G))\n\n\ndef df(G):\n return np.log(G) + 1.\n\n\nreg = 1e-3\n\nGe = ot.optim.cg(a, b, M, reg, f, df, verbose=True)\n\npl.figure(4, figsize=(5, 5))\not.plot.plot1D_mat(a, b, Ge, 'OT matrix Entrop. 
reg')", "Solve EMD with Frobenius norm + entropic regularization", "#%% Example with Frobenius norm + entropic regularization with gcg\n\n\ndef f(G):\n return 0.5 * np.sum(G**2)\n\n\ndef df(G):\n return G\n\n\nreg1 = 1e-3\nreg2 = 1e-1\n\nGel2 = ot.optim.gcg(a, b, M, reg1, reg2, f, df, verbose=True)\n\npl.figure(5, figsize=(5, 5))\not.plot.plot1D_mat(a, b, Gel2, 'OT entropic + matrix Frob. reg')\npl.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gonzmg88/cnn_basic_course
FC_and_CNN.ipynb
gpl-3.0
[ "Hands-on DL course\nInstallation\nFrom base anaconda:\npip install keras\nThis will install keras and theano.\nOn Windows we have to configure theano to use the Windows compiler:\nconda install mingw libpython\nData exploration", "import numpy as np\nimport dogs_vs_cats as dvc\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nall_files = dvc.image_files()\n\nn_images_train=5000\nn_images_val=500\nn_images_test=500\ninput_image_shape = (50,50,3) \ntrain_val_features, train_val_labels,train_val_files, \\\ntest_features, test_labels, test_files = dvc.training_test_datasets(all_files,\n n_images_train+n_images_val,n_images_test,\n input_image_shape)\n\n# split train and val\nindex_files_selected = np.random.permutation(n_images_train+n_images_val)\n\ntrain_files = train_val_files[index_files_selected[:n_images_train]]\nval_files = train_val_files[index_files_selected[n_images_train:]]\ntrain_features = train_val_features[index_files_selected[:n_images_train]]\nval_features = train_val_features[index_files_selected[n_images_train:]]\ntrain_labels = train_val_labels[index_files_selected[:n_images_train]]\nval_labels = train_val_labels[index_files_selected[n_images_train:]]\n\ntrain_features.shape, train_labels.shape,train_files[:10],train_labels[:10]\n\n\nindex_example = 31\nfig,ax = plt.subplots(1,2,figsize=(14,8))\nax[0].imshow(train_features[index_example,]/255)\nax[1].set_title(train_files[index_example])\n_ = ax[1].hist(train_features[index_example,].ravel(),bins=40)\n\n\nfrom IPython.display import Image,display\n\ndisplay(Image(train_files[index_example]))", "Preprocessing", "#media = np.mean(train_features,axis=(0,2,3),keepdims=True)\n#print(media.shape,media.ravel())\n\n#media = media[:,np.newaxis,np.newaxis]\n#train_features-=media\n#val_features-=media\n#test_features-=media\n\nfrom keras.applications.imagenet_utils import preprocess_input\n\npreprocesado = False\n\nif not preprocesado:\n train_features = preprocess_input(train_features)\n val_features = 
preprocess_input(val_features)\n test_features = preprocess_input(test_features)\n preprocesado = True\n", "FC network\nkeras notation:\n* epoch: Each epoch is a full loop over all the training data.\n* nb_epoch: Number of epochs to train the model. \n* batch_size: Number of samples to use in each stochastic gradient update.\nFor example, 80 epochs consist on 80 loops over all the training examples. If there are 1000 examples and batch size is set to 32 there will be 1000/32 * 80 gradient updates. Each gradient is estimated using 32 samples.", "input_shape_flat = np.prod(input_image_shape)\ntrain_features_flat = train_features.reshape((train_features.shape[0],input_shape_flat))\nval_features_flat = val_features.reshape((val_features.shape[0],input_shape_flat))\ntest_features_flat = test_features.reshape((test_features.shape[0],input_shape_flat))\nprint(train_features_flat.shape,val_features_flat.shape,test_features_flat.shape)\n\nimport keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Activation\n\ntrain_fc_model = True\n\nif train_fc_model:\n fc_model = Sequential([\n Dense(1024, input_dim=input_shape_flat),\n Activation('sigmoid'),\n Dense(512),\n Activation('sigmoid'),\n Dense(256),\n Activation('sigmoid'),\n Dense(1),\n Activation('sigmoid')\n ])\n\n fc_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\n nb_epoch=20\n hist=fc_model.fit(train_features_flat,\n train_labels, \n epochs=nb_epoch,validation_data=(val_features_flat,\n val_labels),\n batch_size=32,verbose=2)\n\n fc_model.save(\"fc_model.h5\")\nelse:\n fc_model = keras.models.load_model(\"fc_model_trained.h5\")\n \n\nfc_model.summary()\n\nif train_fc_model:\n hist.history\n\nif train_fc_model:\n plt.xlabel('Epochs')\n plt.ylabel('Loss')\n plt.title('FCNN Loss Trend')\n plt.plot(hist.history[\"loss\"], 'blue', label='Training Loss')\n plt.plot(hist.history[\"val_loss\"], 'green', label='Validation Loss')\n #plt.xticks(range(0,nb_epoch,2))\n 
plt.legend()\n\nif train_fc_model:\n plt.xlabel('Epochs')\n plt.ylabel('Accuracy')\n plt.title('FCNN Accuracy Trend')\n plt.plot(hist.history[\"acc\"], 'blue', label='Training Accuracy')\n plt.plot(hist.history[\"val_acc\"], 'green', label='Validation Accuracy')\n plt.legend()\n\nresults = fc_model.evaluate(test_features_flat,test_labels)\nprint(\"\")\nprint(\" \".join([\"%s: %.4f\"%(metric_name,valor) for metric_name,valor in zip(fc_model.metrics_names,results)]))\n\npreds = fc_model.predict(test_features_flat)\ndvc.plotROC(test_labels,preds)", "CNN", "from keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Activation, Flatten\nfrom keras.layers import Conv2D, MaxPooling2D\nfrom keras.optimizers import Adam\nfrom keras.layers.normalization import BatchNormalization\nfrom keras.callbacks import EarlyStopping\n\ntrain_cnn_model = True\ndef catdog_cnn(input_image_shape):\n model = Sequential()\n model.add(Conv2D(32, (3, 3),input_shape=input_image_shape))\n model.add(BatchNormalization())\n model.add(Activation('relu'))\n model.add(Conv2D(32, (3, 3)))\n model.add(BatchNormalization())\n model.add(Activation('relu'))\n model.add(MaxPooling2D(pool_size=(2, 2)))\n \n model.add(Conv2D(64, (3, 3)))\n model.add(BatchNormalization())\n model.add(Activation('relu'))\n model.add(Conv2D(64, (3, 3)))\n model.add(BatchNormalization())\n model.add(Activation('relu'))\n model.add(MaxPooling2D(pool_size=(2, 2)))\n model.add(Dropout(0.25))\n \n model.add(Conv2D(128, (3, 3)))\n model.add(BatchNormalization())\n model.add(Activation('relu'))\n model.add(Conv2D(128, (3, 3)))\n model.add(BatchNormalization())\n model.add(Activation('relu'))\n model.add(MaxPooling2D(pool_size=(2, 2)))\n model.add(Dropout(0.25))\n \n model.add(Flatten())\n model.add(Dense(256, activation='relu'))\n model.add(Dropout(0.5))\n model.add(Dense(1))\n \n model.add(Activation('sigmoid'))\n optimizer = Adam()\n objective = 'binary_crossentropy'\n model.compile(loss=objective, 
optimizer=optimizer, metrics=['accuracy'])\n return model\n\nif train_cnn_model:\n cnn_model = catdog_cnn(input_image_shape)\nelse:\n cnn_model = keras.models.load_model(\"cnn_model_trained.h5\")\ncnn_model.summary()\n\nif train_cnn_model:\n nb_epoch=80\n print(\"Model compiled, start training\")\n early_stopping_callback = EarlyStopping(monitor='val_loss', min_delta=0, patience=8, \n verbose=1, mode='auto')\n history = cnn_model.fit(train_features, train_labels,validation_data=(val_features,val_labels),\n batch_size=32, epochs=nb_epoch,verbose=2,callbacks=[early_stopping_callback])\n cnn_model.save(\"cnn_model.h5\")\n\n\nif train_cnn_model:\n plt.xlabel('Epochs')\n plt.ylabel('Loss')\n plt.title('CNN Loss Trend')\n plt.plot(history.history[\"loss\"], 'blue', label='Training Loss')\n plt.plot(history.history[\"val_loss\"], 'green', label='Validation Loss')\n #plt.xticks(range(0,nb_epoch,2))\n plt.legend(loc=\"best\")\n\nif train_cnn_model:\n plt.xlabel('Epochs')\n plt.ylabel('Accuracy')\n plt.title('CNN Accuracy Trend')\n plt.plot(history.history[\"acc\"], 'blue', label='Training Accuracy')\n plt.plot(history.history[\"val_acc\"], 'green', label='Validation Accuracy')\n plt.legend(loc=\"best\")\n\n# evaluate the model\nresults = cnn_model.evaluate(test_features,test_labels)\nprint(\"\")\nprint(\" \".join([\"%s: %.4f\"%(metric_name,valor) for metric_name,valor in zip(cnn_model.metrics_names,results)]))\n\npreds = cnn_model.predict(test_features)\ndvc.plotROC(test_labels,preds)", "Things to explore:\n* Use more data.\n* Data augmentation. ImageDataGenerator\n* Icrease regularization\n* Save best model (instead of geting the latest) ModelCheckPoint\n* Test differences running in gpu vs cpu." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/model-card-toolkit
model_card_toolkit/documentation/examples/Scikit_Learn_Model_Card_Toolkit_Demo.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Scikit-Learn Model Card Toolkit Demo\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/responsible_ai/model_card_toolkit/examples/Scikit_Learn_Model_Card_Toolkit_Demo\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/model-card-toolkit/blob/master/model_card_toolkit/documentation/examples/Scikit_Learn_Model_Card_Toolkit_Demo.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/model-card-toolkit/blob/master/model_card_toolkit/documentation/examples/Scikit_Learn_Model_Card_Toolkit_Demo.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/model-card-toolkit/model_card_toolkit/documentation/examples/Scikit_Learn_Model_Card_Toolkit_Demo.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nBackground\nThis notebook demonstrates how to generate a model card using the Model Card Toolkit with a scikit-learn model in a Jupyter/Colab environment. 
You can learn more about model cards at https://modelcards.withgoogle.com/about.\nSetup\nWe first need to install and import the necessary packages.\nUpgrade to Pip 20.2 and Install Packages", "!pip install --upgrade pip==21.3\n!pip install -U seaborn scikit-learn model-card-toolkit", "Did you restart the runtime?\nIf you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...).\nImport packages\nWe import necessary packages, including scikit-learn.", "from datetime import date\nfrom io import BytesIO\nfrom IPython import display\nimport model_card_toolkit as mctlib\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.ensemble import GradientBoostingClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import plot_roc_curve, plot_confusion_matrix\n\nimport base64\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\nimport uuid", "Load data\nThis example uses the Breast Cancer Wisconsin Diagnostic dataset that scikit-learn can load using the load_breast_cancer() function.", "cancer = load_breast_cancer()\n\nX = pd.DataFrame(cancer.data, columns=cancer.feature_names)\ny = pd.Series(cancer.target)\n\nX_train, X_test, y_train, y_test = train_test_split(X, y)\n\nX_train.head()\n\ny_train.head()", "Plot data\nWe will create several plots from the data that we will include in the model card.", "# Utility function that will export a plot to a base-64 encoded string that the model card will accept.\n\ndef plot_to_str():\n img = BytesIO()\n plt.savefig(img, format='png')\n return base64.encodebytes(img.getvalue()).decode('utf-8')\n\n# Plot the mean radius feature for both the train and test sets\n\nsns.displot(x=X_train['mean radius'], hue=y_train)\nmean_radius_train = plot_to_str()\n\nsns.displot(x=X_test['mean radius'], hue=y_test)\nmean_radius_test = plot_to_str()\n\n# Plot the mean texture feature for both the train and test 
sets\n\nsns.displot(x=X_train['mean texture'], hue=y_train)\nmean_texture_train = plot_to_str()\n\nsns.displot(x=X_test['mean texture'], hue=y_test)\nmean_texture_test = plot_to_str()", "Train model", "# Create a classifier and fit the training data\n\nclf = GradientBoostingClassifier().fit(X_train, y_train)", "Evaluate model", "# Plot a ROC curve\n\nplot_roc_curve(clf, X_test, y_test)\nroc_curve = plot_to_str()\n\n# Plot a confusion matrix\n\nplot_confusion_matrix(clf, X_test, y_test)\nconfusion_matrix = plot_to_str()", "Create a model card\nInitialize toolkit and model card", "mct = mctlib.ModelCardToolkit()\n\nmodel_card = mct.scaffold_assets()", "Annotate information into model card", "model_card.model_details.name = 'Breast Cancer Wisconsin (Diagnostic) Dataset'\nmodel_card.model_details.overview = (\n 'This model predicts whether breast cancer is benign or malignant based on '\n 'image measurements.')\nmodel_card.model_details.owners = [\n mctlib.Owner(name= 'Model Cards Team', contact='model-cards@google.com')\n]\nmodel_card.model_details.references = [\n mctlib.Reference(reference='https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)'),\n mctlib.Reference(reference='https://minds.wisconsin.edu/bitstream/handle/1793/59692/TR1131.pdf')\n]\nmodel_card.model_details.version.name = str(uuid.uuid4())\nmodel_card.model_details.version.date = str(date.today())\n\nmodel_card.considerations.ethical_considerations = [mctlib.Risk(\n name=('Manual selection of image sections to digitize could create '\n 'selection bias'),\n mitigation_strategy='Automate the selection process'\n)]\nmodel_card.considerations.limitations = [mctlib.Limitation(description='Breast cancer diagnosis')]\nmodel_card.considerations.use_cases = [mctlib.UseCase(description='Breast cancer diagnosis')]\nmodel_card.considerations.users = [mctlib.User(description='Medical professionals'), mctlib.User(description='ML 
researchers')]\n\nmodel_card.model_parameters.data.append(mctlib.Dataset())\nmodel_card.model_parameters.data[0].graphics.description = (\n f'{len(X_train)} rows with {len(X_train.columns)} features')\nmodel_card.model_parameters.data[0].graphics.collection = [\n mctlib.Graphic(image=mean_radius_train),\n mctlib.Graphic(image=mean_texture_train)\n]\nmodel_card.model_parameters.data.append(mctlib.Dataset())\nmodel_card.model_parameters.data[1].graphics.description = (\n f'{len(X_test)} rows with {len(X_test.columns)} features')\nmodel_card.model_parameters.data[1].graphics.collection = [\n mctlib.Graphic(image=mean_radius_test),\n mctlib.Graphic(image=mean_texture_test)\n]\nmodel_card.quantitative_analysis.graphics.description = (\n 'ROC curve and confusion matrix')\nmodel_card.quantitative_analysis.graphics.collection = [\n mctlib.Graphic(image=roc_curve),\n mctlib.Graphic(image=confusion_matrix)\n]\n\nmct.update_model_card(model_card)", "Generate model card", "# Return the model card document as an HTML page\n\nhtml = mct.export_format()\n\ndisplay.display(display.HTML(html))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
alfkjartan/control-computarizado
discrete-time-systems/notebooks/Zero-order-hold sampling.ipynb
mit
[ "import numpy as np\nimport sympy as sy\nimport control.matlab as cm\nsy.init_printing()", "Zero order hold sampling of a first order system\n\\begin{equation}\nG(s) = \\frac{1}{s-\\lambda}\n\\end{equation}", "h, lam = sy.symbols('h, lambda', real=True, positive=True)\ns, z = sy.symbols('s, z', real=False)\n\nG = 1/(s-lam)\nY = G/s\nYp = sy.apart(Y, s)\nYp\n\nfrom sympy.integrals.transforms import inverse_laplace_transform\nfrom sympy.abc import t\n\ninverse_laplace_transform(Yp, s, t)", "Sampling and taking the z-transform of the step-response\n\\begin{equation}\nY(z) = \\frac{1}{\\lambda} \\left( \\frac{z}{z-\\mathrm{e}^{\\lambda h}} - \\frac{z}{z-1} \\right).\n\\end{equation}\nDividing by the z-transform of the input signal\n\\begin{equation}\nH(z) = \\frac{z-1}{z}Y(z) = \\frac{1}{\\lambda} \\left( \\frac{ \\mathrm{e}^{\\lambda h} - 1 }{ z - \\mathrm{e}^{\\lambda h} } \\right)\n\\end{equation}\nVerifying for a specific value of lambda", "lam = -0.5\nh = 0.1\nG = cm.tf([1], [1, -lam])\nGd = cm.c2d(G, h)\nHd = 1/lam * cm.tf([np.exp(lam*h)-1],[1, -np.exp(lam*h)])\n\nprint(Gd)\nprint(Hd)" ]
[ "code", "markdown", "code", "markdown", "code" ]
abmantz/lrgs
notebooks/example_python.ipynb
mit
[ "Example in Python\nThis is a fairly minimal example, demonstrating the slightly different calling convention in the Python version of LRGS, compared with the R version.\nOne notable and practical difference is that the Python version is significantly slower than the R version, despite using vectorization more efficiently in places. In the benchmarks I've done so far, it takes between 1.3 and 3 times longer to produce the same number of samples, depending on the model being fit.\nWe'll just look at univariate regression, with p(x) a single Gaussian.", "import lrgs\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "One immediate difference is that we need to pass x and y as Numpy matrices, with each row corresponding to a data point. (So, in the univariate case, column vectors.)", "## Use the same setup as the simple example in the R documentation\n## ... but with a bigger scatter and fewer data points, just because.\nx = np.matrix(np.random.normal(0.0, 5.0, size=100)).T\ny = np.matrix(np.random.normal(np.pi*x, np.exp(1.0)))\n\nplt.plot(x, y, 'o');", "If we were going to pass measurement errors, it would be as a list of matrices, e.g.", "# M = [np.asmatrix(np.eye(2)*1e-4) for i in range(x.shape[0])]", "Instead of doing everything in a single function call, like in R, we instead have one call to set up the model and a second to run the sampler. This does the first step (the first argument is the size of the Gaussian mixture):", "par = lrgs.Parameters_GaussMix(1, x, y)", "This object holds the current values of all the model parameters (currently rough default guesses) and has methods to update individual parameters or all parameters. We could call these directly to sample the parameters \"in place\", but of course we actually want to store the chain. For that, we use a second declaration. 
Here the second argument is the length of the chain we want to run, and the third encodes the parameters that we want to store (in the usual LRGS way):", "chain = lrgs.Chain(par, 100, 'bsmt')", "In the \"run\" call, we specify any parameters to be fixed (x and y in this case):", "chain.run(fix='xy')", "The chain can now be converted to a dict or recarray (and further converted, e.g., to a pandas data frame). Note that the \"proper names\" of the parameters to include must be specified here, and that the resulting keys differ slightly from the column naming scheme used in the R function Gibbs.post2dataframe. (In particular, indices are offset by underscores and start from zero, consistent with Python indexing.)", "dchain = chain.to_dict([\"B\", \"Sigma\", \"mu\", \"Tau\"])", "For example, dchain[\"B_1_0\"] is the slope; in R it would have been called B21.\nPlot things. Red lines show the input values.", "keep = range(10, chain.nmc) # throw out a little burn-in\n\nplt.plot(dchain[\"B_0_0\"][keep], 'o');\nplt.axhline(y=0.0, color='r');\n\nplt.plot(dchain[\"B_1_0\"][keep], 'o');\nplt.axhline(y=np.pi, color='r');\n\nplt.plot(dchain[\"B_0_0\"][keep], dchain[\"B_1_0\"][keep], 'o');\nplt.plot(0.0, np.pi, 'or');\n\nplt.plot(dchain[\"Sigma_0_0\"][keep], 'o');\nplt.axhline(y=np.exp(2.0), color='r');\n\nplt.plot(dchain[\"mu_0_0\"][keep], 'o');\nplt.axhline(y=0.0, color='r');\n\nplt.plot(dchain[\"Tau_0_0_0\"][keep], 'o');\nplt.axhline(y=5.0**2, color='r');" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
DrkSephy/bokeh-tutorial
bokeh-tutorial.ipynb
mit
[ "Bokeh Tutorial\nAuthor: David Leonard (DrkSephy1025@gmail.com)\nThis notebook may be found in its entirety at: https://github.com/DrkSephy/bokeh-tutorial\nIn this tutorial, we will explore the basics of Bokeh - ranging from simple plots to more complex figures. We'll be using Pandas to load and operate on a dataset containing Poker Hands and their distributions.", "# We use pandas to parse our CSV file\nimport pandas\n\n# Display Bokeh graphs inline\nfrom bokeh.io import output_notebook\n\n# render inline\noutput_notebook()", "The original dataset consists of a CSV (Comma-Separated Values) with various attributes pertaining to each of the five cards in the hand, as well as their suits and the hand strength. Unfortunately, the dataset does not have column names, so we begin by adding these in programmatically.", "# Column names from our CSV\ncolNames = ['S1', 'C1', 'S2', 'C2', 'S3', 'C3', 'S4', 'C4', 'S5', 'C5', 'CLASS']\n\n# Read CSV file using pandas\ndata = pandas.read_csv('data.csv', names=colNames)\n\n# Extract all data from the CLASS column\nhands = data.CLASS.tolist()\n\n# Remove the first element \nhands.pop(0)", "For our first visualization, we would like to show the distributions of winning poker hands across the dataset (consisting of 1,000,000 entries). To extract this data, we parse the last column called CLASS, which is the strength of the corresponding Poker Hand (annotated below).", "# Count occurrences of each class\nclassZero = hands.count('0') # Nothing in hand\nclassOne = hands.count('1') # One pair\nclassTwo = hands.count('2') # Two pair\nclassThree = hands.count('3') # Three of a kind\nclassFour = hands.count('4') # Straight\nclassFive = hands.count('5') # Flush\nclassSix = hands.count('6') # Full House\nclassSeven = hands.count('7') # Four of a kind\nclassEight = hands.count('8') # Straight Flush\nclassNine = hands.count('9') # Royal Flush", "Next, we assemble an array of the occurrences of each poker hand. 
This will be used to generate the x-axis points for our first visualization.", "# Bundle the dataset - all of the counts of each class\ndataset = [classZero, classOne, classTwo, classThree, classFour, classFive, classSix, classSeven, classEight, classNine]\n\n# Import functions for creating figures and showing them inline\nfrom bokeh.plotting import figure, show\n\n# Ranges function is used for generating the y-axis\nfrom bokeh.models.ranges import Range1d\n\n# Used for converting arrays to numpy arrays\nimport numpy", "Similar to Matplotlib, Bokeh supports a generic Figure class which allows us to build a figure from the ground up by specifying pieces through renderers and glyphs, to name a few. We'll be constructing a combination of a bar chart and line plot using the Figure class.", "# Create a figure object\np = figure(plot_width=1000, plot_height=400)\n\n# Set the bar values - the heights are the sums of occurrences of each winning poker hand\nh = numpy.array(dataset)\n\n# Correcting the bottom position of the bars to be on the 0 line\nadj_h = h/2\n\n# add bar renderer\np.rect(x=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9], y=adj_h, width=0.4, height=h, color=\"#CAB2D6\")\n\n# Add a line renderer\np.line([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dataset, line_width=2)\n\n# Add circles to our points on the line\np.circle([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dataset, fill_color=\"white\", size=8)\n\n# Setting the y axis range \np.y_range = Range1d(0, max(dataset))\n\n# Set the title of the graph\np.title = \"Distribution of Poker Hands\"\n\n# Set the x-axis label\np.xaxis.axis_label = 'Winning Poker Hand Class'\n\n# Set the y-axis label\np.yaxis.axis_label = 'Frequency'\n\n# Show our new graph - a combination of both line and bar charts\nshow(p)", "Neat! We've built our first visualization showing the frequencies of winning Poker hands. 
As we can see, class zero (empty hands) occurs more than 1/3 of the time (~35000 empty hands over 1,000,000 poker hands), while class one (one pair) occurs very frequently. \nNext, we'll explore one of Bokeh's best features - high-level chart APIs. Our goal is to create a Heatmap of the distribution of cards (suit and rank) in all winning hands containing one pair (class one).", "# Get all winning one-pair hands\nonePair = data.loc[data['CLASS'] == '1']\n\n# Get all the cards in these winning hands\ncard1 = onePair.C1.tolist()\ncard2 = onePair.C2.tolist()\ncard3 = onePair.C3.tolist()\ncard4 = onePair.C4.tolist()\ncard5 = onePair.C5.tolist()\n\n# Get all the suits in these winning hands\nsuit1 = onePair.S1.tolist()\nsuit2 = onePair.S2.tolist()\nsuit3 = onePair.S3.tolist()\nsuit4 = onePair.S4.tolist()\nsuit5 = onePair.S5.tolist()", "In order to preserve the ordered pairs of our cards, we make sure to append them into the corresponding arrays for both card rank and card suit. We also replace the names of all cards with a value greater than 10 for clarity.", "# Bundle all cards, preserving order\nx = []\nfor num in range(0, len(onePair)):\n x.append(card1[num])\n x.append(card2[num])\n x.append(card3[num])\n x.append(card4[num])\n x.append(card5[num])\n\n# Replace > 10 values with letters\nfor num in range(0, len(x)):\n if x[num] == '11':\n x[num] = 'J'\n \n if x[num] == '12':\n x[num] = 'Q'\n \n if x[num] == '13':\n x[num] = 'K'\n \n if x[num] == '1':\n x[num] = 'A'\n \n# Bundle all suits\ny = []\nfor num in range(0, len(onePair)):\n y.append(suit1[num])\n y.append(suit2[num])\n y.append(suit3[num])\n y.append(suit4[num])\n y.append(suit5[num])\n \n# Replace all values with suits\nfor num in range(0, len(y)):\n if y[num] == '1':\n y[num] = 'Hearts'\n \n if y[num] == '2':\n y[num] = 'Spades'\n \n if y[num] == '3':\n y[num] = 'Diamonds'\n \n if y[num] == '4':\n y[num] = 'Clubs'", "Using Bokeh's high-level chart API, we can create a Heatmap by creating a Pandas dataframe 
containing our x-axis (card rank) and y-axis (card suit).", "# One of the best features in Bokeh - high-level chart APIs\nfrom bokeh.charts import HeatMap\n\n# Create a dataframe consisting of a dictionary of the x and y axis\ndf = pandas.DataFrame(\n dict(\n cards=x,\n suits=y\n )\n )\n\n# Create a heatmap using the high-level heatmap function\np = HeatMap(df, title='Distribution', width=1000, tools='hover')\n\nshow(p)", "Voilà! We've successfully built a heatmap showing the distribution of cards in all of the winning one-pair Poker hands. From this visualization, we can see that the King of Diamonds occurs mostly in winning hands. Bokeh has various other high-level charts, which can be explored here." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/asl-ml-immersion
notebooks/tfx_pipelines/pipeline/solutions/tfx_pipeline_vertex.ipynb
apache-2.0
[ "Continuous training with TFX and Vertex\nLearning Objectives\n\nContainerize your TFX code into a pipeline package using Cloud Build.\nUse the TFX CLI to compile a TFX pipeline.\nDeploy a TFX pipeline version to run on Vertex Pipelines using the Vertex Python SDK.\n\nSetup", "from google.cloud import aiplatform as vertex_ai", "Validate lab package version installation", "!python -c \"import tensorflow as tf; print(f'TF version: {tf.__version__}')\"\n!python -c \"import tfx; print(f'TFX version: {tfx.__version__}')\"\n!python -c \"import kfp; print(f'KFP version: {kfp.__version__}')\"\nprint(f\"vertex_ai: {vertex_ai.__version__}\")", "Note: this lab was built and tested with the following package versions:\nTF version: 2.6.2\nTFX version: 1.4.0 \nKFP version: 1.8.1\naiplatform: 1.7.1\nReview: example TFX pipeline design pattern for Vertex\nThe pipeline source code can be found in the pipeline_vertex folder.", "%cd pipeline_vertex\n\n!ls -la", "The config.py module configures the default values for the environment specific settings and the default values for the pipeline runtime parameters. \nThe default values can be overwritten at compile time by providing the updated values in a set of environment variables. You will set custom environment variables later on this lab.\nThe pipeline.py module contains the TFX DSL defining the workflow implemented by the pipeline.\nThe preprocessing.py module implements the data preprocessing logic the Transform component.\nThe model.py module implements the TensorFlow model code and training logic for the Trainer component.\nThe runner.py module configures and executes KubeflowV2DagRunner. 
At compile time, the KubeflowV2DagRunner.run() method compiles the TFX DSL into a pipeline package in JSON format for execution on Vertex.\nThe features.py module contains feature definitions common across preprocessing.py and model.py.\nExercise: build your pipeline with the TFX CLI\nYou will use TFX CLI to compile and deploy the pipeline. As explained in the previous section, the environment specific settings can be provided through a set of environment variables and embedded into the pipeline package at compile time.\nConfigure your environment resource settings\nUpdate the below constants with the settings reflecting your lab environment. \n\nREGION - the compute region for AI Platform Training, Vizier, and Prediction.\nARTIFACT_STORE - An existing GCS bucket. You can use any bucket, but here we will use the bucket with the same name as the project.", "# TODO: Set your environment resource settings here for GCP_REGION, ARTIFACT_STORE_URI, ENDPOINT, and CUSTOM_SERVICE_ACCOUNT.\nREGION = \"us-central1\"\nPROJECT_ID = !(gcloud config get-value core/project)\nPROJECT_ID = PROJECT_ID[0]\nARTIFACT_STORE = f\"gs://{PROJECT_ID}\"\n\n# Set your resource settings as environment variables. These override the default values in pipeline/config.py.\n%env REGION={REGION}\n%env ARTIFACT_STORE={ARTIFACT_STORE}\n%env PROJECT_ID={PROJECT_ID}\n\n!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}", "Set the compile time settings to first create a pipeline version without hyperparameter tuning\nDefault pipeline runtime environment values are configured in the pipeline folder config.py. 
You will set their values directly below:\n\n\nPIPELINE_NAME - the pipeline's globally unique name.\n\n\nDATA_ROOT_URI - the URI for the raw lab dataset gs://{PROJECT_ID}/data/tfxcovertype.\n\n\nTFX_IMAGE_URI - the image name of your pipeline container that will be used to execute each of your tfx components", "PIPELINE_NAME = \"tfxcovertype\"\nDATA_ROOT_URI = f\"gs://{PROJECT_ID}/data/tfxcovertype\"\nTFX_IMAGE_URI = f\"gcr.io/{PROJECT_ID}/{PIPELINE_NAME}\"\nPIPELINE_JSON = f\"{PIPELINE_NAME}.json\"\n\nTRAIN_STEPS = 10\nEVAL_STEPS = 5\n\n%env PIPELINE_NAME={PIPELINE_NAME}\n%env DATA_ROOT_URI={DATA_ROOT_URI}\n%env TFX_IMAGE_URI={TFX_IMAGE_URI}\n%env PIPELINE_JSON={PIPELINE_JSON}\n%env TRAIN_STEPS={TRAIN_STEPS}\n%env EVAL_STEPS={EVAL_STEPS}", "Let us populate the data bucket at DATA_ROOT_URI:", "!gsutil cp ../../../data/* $DATA_ROOT_URI/dataset.csv\n!gsutil ls $DATA_ROOT_URI/*", "Let us build and push the TFX container image described in the Dockerfile:", "!gcloud builds submit --timeout 15m --tag $TFX_IMAGE_URI .", "Compile your pipeline code\nThe following command will execute the KubeflowV2DagRunner that compiles the pipeline described in pipeline.py into a JSON representation consumable by Vertex:", "!tfx pipeline compile --engine vertex --pipeline_path runner.py", "Note: you should see a {PIPELINE_NAME}.json file appear in your current pipeline directory.\nExercise: deploy your pipeline on Vertex using the Vertex SDK\nOnce you have the {PIPELINE_NAME}.json available, you can run the tfx pipeline on Vertex by launching a pipeline job using the aiplatform handle:", "vertex_ai.init(project=PROJECT_ID, location=REGION)\n\npipeline = vertex_ai.PipelineJob(\n display_name=\"tfxcovertype4\",\n template_path=PIPELINE_JSON,\n enable_caching=False,\n)\n\npipeline.run()", "Next Steps\nIn this lab, you learned how to build and deploy a TFX pipeline with the TFX CLI and then update, build and deploy a new pipeline with automatic hyperparameter tuning. 
You practiced triggering continuous pipeline runs using the TFX CLI as well as the Kubeflow Pipelines UI.\nIn the next lab, you will construct a Cloud Build CI/CD workflow that further automates the building and deployment of the pipeline.\nLicense\nCopyright 2021 Google Inc. All Rights Reserved.\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
QuantScientist/Deep-Learning-Boot-Camp
day02-PyTORCH-and-PyCUDA/PyTorch/18-PyTorch-NUMER.AI-Binary-Classification-BCELoss-0.691839667509 .ipynb
mit
[ "Deep Learning Bootcamp November 2017, GPU Computing for Data Scientists\n<img src=\"../images/bcamp.png\" align=\"center\">\n18 PyTorch NUMER.AI Deep Learning Binary Classification using BCELoss\nWeb: https://www.meetup.com/Tel-Aviv-Deep-Learning-Bootcamp/events/241762893/\nNotebooks: <a href=\"https://github.com/QuantScientist/Data-Science-PyCUDA-GPU\"> On GitHub</a>\nShlomo Kashani\n<img src=\"../images/pt.jpg\" width=\"35%\" align=\"center\">\nWhat does a Numerai competition consist of?\n\n\nNumerai provides payments based on the number of correctly predicted labels (LOG_LOSS) in a data-set which changes every week.\n\n\nTwo data-sets are provided: numerai_training_data.csv and numerai_tournament_data.csv\n\n\nCriteria\n\nOn top of LOG_LOSS, they also measure:\nConsistency\nOriginality \nConcordance \n\nPyTorch and Numerai\n\n\nThis tutorial was written in order to demonstrate a fully working example of a PyTorch NN on a real world use case, namely a Binary Classification problem on the NumerAI data set. If you are interested in the sk-learn version of this problem please refer to: https://github.com/QuantScientist/deep-ml-meetups/tree/master/hacking-kaggle/python/numer-ai \n\n\nFor the scientific foundation behind Binary Classification and Logistic Regression, refer to: https://github.com/QuantScientist/Deep-Learning-Boot-Camp/tree/master/Data-Science-Interviews-Book\n\n\nEvery step, from reading the CSV into numpy arrays, converting to GPU based tensors, training and validation, is meant to aid newcomers in their first steps in PyTorch. \n\n\nAdditionally, commonly used Kaggle metrics such as ROC_AUC and LOG_LOSS are logged and plotted both for the training set as well as for the validation set. \n\n\nThus, the NN architecture is naive and by no means optimized. Hopefully, I will improve it over time and I am working on a second CNN based version of the same problem. 
\n\n\nData\n\nDownload from https://numer.ai/leaderboard\n\n<img src=\"../images/numerai-logo.png\" width=\"35%\" align=\"center\">\nPyTorch Imports", "import torch\nimport sys\nimport torch\nfrom torch.utils.data.dataset import Dataset\nfrom torch.utils.data import DataLoader\nfrom torchvision import transforms\nfrom torch import nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torch.autograd import Variable\n\nfrom sklearn import cross_validation\nfrom sklearn import metrics\nfrom sklearn.metrics import roc_auc_score, log_loss, roc_auc_score, roc_curve, auc\nfrom sklearn.cross_validation import StratifiedKFold, ShuffleSplit, cross_val_score, train_test_split\n\nprint('__Python VERSION:', sys.version)\nprint('__pyTorch VERSION:', torch.__version__)\nprint('__CUDA VERSION')\nfrom subprocess import call\n# call([\"nvcc\", \"--version\"]) does not work\n! nvcc --version\nprint('__CUDNN VERSION:', torch.backends.cudnn.version())\nprint('__Number CUDA Devices:', torch.cuda.device_count())\nprint('__Devices')\n# call([\"nvidia-smi\", \"--format=csv\", \"--query-gpu=index,name,driver_version,memory.total,memory.used,memory.free\"])\nprint('Active CUDA Device: GPU', torch.cuda.current_device())\n\nprint ('Available devices ', torch.cuda.device_count())\nprint ('Current cuda device ', torch.cuda.current_device())\n\nimport numpy\nimport numpy as np\n\nuse_cuda = torch.cuda.is_available()\nFloatTensor = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor\nLongTensor = torch.cuda.LongTensor if use_cuda else torch.LongTensor\nTensor = FloatTensor\n\nimport pandas\nimport pandas as pd\n\nimport logging\nhandler=logging.basicConfig(level=logging.INFO)\nlgr = logging.getLogger(__name__)\n%matplotlib inline\n\n# !pip install psutil\nimport psutil\nimport os\ndef cpuStats():\n print(sys.version)\n print(psutil.cpu_percent())\n print(psutil.virtual_memory()) # physical memory usage\n pid = os.getpid()\n py = psutil.Process(pid)\n memoryUse = 
py.memory_info()[0] / 2. ** 30 # memory use in GB...I think\n print('memory GB:', memoryUse)\n\ncpuStats()", "CUDA", "# %%timeit\nuse_cuda = torch.cuda.is_available()\n# use_cuda = False\n\nFloatTensor = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor\nLongTensor = torch.cuda.LongTensor if use_cuda else torch.LongTensor\nTensor = FloatTensor\n\nlgr.info(\"USE CUDA=\" + str (use_cuda))\n\ntorch.backends.cudnn.benchmark = True\n\n# ! watch -n 0.1 'ps f -o user,pgrp,pid,pcpu,pmem,start,time,command -p `lsof -n -w -t /dev/nvidia*`'\n# sudo apt-get install dstat #install dstat\n# sudo pip install nvidia-ml-py #install Python NVIDIA Management Library\n# wget https://raw.githubusercontent.com/datumbox/dstat/master/plugins/dstat_nvidia_gpu.py\n# sudo mv dstat_nvidia_gpu.py /usr/share/dstat/ #move file to the plugins directory of dstat", "Global params", "# Data params\nTARGET_VAR= 'target'\nTOURNAMENT_DATA_CSV = 'numerai_tournament_data.csv'\nTRAINING_DATA_CSV = 'numerai_training_data.csv'\nBASE_FOLDER = 'numerai/'\n\n# fix seed\nseed=17*19\nnp.random.seed(seed)\ntorch.manual_seed(seed)\nif use_cuda:\n torch.cuda.manual_seed(seed)", "Load a CSV file for Binary classification (numpy)\nAs mentioned, NumerAI provided numerai_training_data.csv and numerai_tournament_data.csv.\n\nTraining_data.csv is labeled\nNumerai_tournament_data.csv has labels for the validation set and no labels for the test set. 
See below how I separate them.", "# %%timeit\ndf_train = pd.read_csv(BASE_FOLDER + TRAINING_DATA_CSV)\ndf_train.head(5)", "Feature enrichment\n\nThis would usually not be required when using NNs; it is here for demonstration purposes.", "from sklearn.preprocessing import LabelEncoder\nfrom sklearn.pipeline import Pipeline\nfrom collections import defaultdict\n\n# def genBasicFeatures(inDF):\n# print('Generating basic features ...')\n# df_copy=inDF.copy(deep=True)\n# magicNumber=21\n# feature_cols = list(inDF.columns)\n\n# inDF['x_mean'] = np.mean(df_copy.ix[:, 0:magicNumber], axis=1)\n# inDF['x_median'] = np.median(df_copy.ix[:, 0:magicNumber], axis=1)\n# inDF['x_std'] = np.std(df_copy.ix[:, 0:magicNumber], axis=1)\n# inDF['x_skew'] = scipy.stats.skew(df_copy.ix[:, 0:magicNumber], axis=1)\n# inDF['x_kurt'] = scipy.stats.kurtosis(df_copy.ix[:, 0:magicNumber], axis=1)\n# inDF['x_var'] = np.var(df_copy.ix[:, 0:magicNumber], axis=1)\n# inDF['x_max'] = np.max(df_copy.ix[:, 0:magicNumber], axis=1)\n# inDF['x_min'] = np.min(df_copy.ix[:, 0:magicNumber], axis=1) \n\n# return inDF\n\ndef addPolyFeatures(inDF, deg=2):\n print('Generating poly features ...')\n df_copy=inDF.copy(deep=True)\n poly=PolynomialFeatures(degree=deg)\n p_testX = poly.fit(df_copy)\n # AttributeError: 'PolynomialFeatures' object has no attribute 'get_feature_names'\n target_feature_names = ['x'.join(['{}^{}'.format(pair[0],pair[1]) for pair in tuple if pair[1]!=0]) for tuple in [zip(df_copy.columns,p) for p in poly.powers_]]\n df_copy = pd.DataFrame(p_testX.transform(df_copy),columns=target_feature_names)\n \n return df_copy\n\ndef oneHOT(inDF):\n d = defaultdict(LabelEncoder)\n X_df=inDF.copy(deep=True)\n # Encoding the variable\n X_df = X_df.apply(lambda x: d['era'].fit_transform(x))\n \n return X_df\n ", "Train / Validation / Test Split\n\nNumerai provides a data set that is already split into train, validation and test sets.", "from sklearn import preprocessing\n\n# Train, Validation, Test 
Split\ndef loadDataSplit():\n df_train = pd.read_csv(BASE_FOLDER + TRAINING_DATA_CSV)\n # TOURNAMENT_DATA_CSV has both validation and test data provided by NumerAI\n df_test_valid = pd.read_csv(BASE_FOLDER + TOURNAMENT_DATA_CSV)\n\n answers_1_SINGLE = df_train[TARGET_VAR]\n df_train.drop(TARGET_VAR, axis=1,inplace=True)\n df_train.drop('id', axis=1,inplace=True)\n df_train.drop('era', axis=1,inplace=True)\n df_train.drop('data_type', axis=1,inplace=True) \n \n# df_train=oneHOT(df_train)\n\n df_train.to_csv(BASE_FOLDER + TRAINING_DATA_CSV + 'clean.csv', header=False, index = False) \n df_train= pd.read_csv(BASE_FOLDER + TRAINING_DATA_CSV + 'clean.csv', header=None, dtype=np.float32) \n df_train = pd.concat([df_train, answers_1_SINGLE], axis=1)\n feature_cols = list(df_train.columns[:-1])\n# print (feature_cols)\n target_col = df_train.columns[-1]\n trainX, trainY = df_train[feature_cols], df_train[target_col]\n \n \n # TOURNAMENT_DATA_CSV has both validation and test data provided by NumerAI\n # Validation set\n df_validation_set=df_test_valid.loc[df_test_valid['data_type'] == 'validation'] \n df_validation_set=df_validation_set.copy(deep=True)\n answers_1_SINGLE_validation = df_validation_set[TARGET_VAR]\n df_validation_set.drop(TARGET_VAR, axis=1,inplace=True) \n df_validation_set.drop('id', axis=1,inplace=True)\n df_validation_set.drop('era', axis=1,inplace=True)\n df_validation_set.drop('data_type', axis=1,inplace=True)\n \n# df_validation_set=oneHOT(df_validation_set)\n \n df_validation_set.to_csv(BASE_FOLDER + TRAINING_DATA_CSV + '-validation-clean.csv', header=False, index = False) \n df_validation_set= pd.read_csv(BASE_FOLDER + TRAINING_DATA_CSV + '-validation-clean.csv', header=None, dtype=np.float32) \n df_validation_set = pd.concat([df_validation_set, answers_1_SINGLE_validation], axis=1)\n feature_cols = list(df_validation_set.columns[:-1])\n\n target_col = df_validation_set.columns[-1]\n valX, valY = df_validation_set[feature_cols], 
df_validation_set[target_col]\n \n # Test set for submission (not labeled) \n df_test_set = pd.read_csv(BASE_FOLDER + TOURNAMENT_DATA_CSV)\n# df_test_set=df_test_set.loc[df_test_valid['data_type'] == 'live'] \n df_test_set=df_test_set.copy(deep=True)\n df_test_set.drop(TARGET_VAR, axis=1,inplace=True)\n tid_1_SINGLE = df_test_set['id']\n df_test_set.drop('id', axis=1,inplace=True)\n df_test_set.drop('era', axis=1,inplace=True)\n df_test_set.drop('data_type', axis=1,inplace=True) \n \n# df_test_set=oneHOT(df_validation_set)\n \n feature_cols = list(df_test_set.columns) # must be run here, we don't want the ID \n# print (feature_cols)\n df_test_set = pd.concat([tid_1_SINGLE, df_test_set], axis=1) \n testX = df_test_set[feature_cols].values\n \n return trainX, trainY, valX, valY, testX, df_test_set\n\n# %%timeit\ntrainX, trainY, valX, valY, testX, df_test_set = loadDataSplit()\n\nmin_max_scaler = preprocessing.MinMaxScaler()\n \n# # Number of features for the input layer\nN_FEATURES=trainX.shape[1]\nprint (trainX.shape)\nprint (trainY.shape)\nprint (valX.shape)\nprint (valY.shape)\nprint (testX.shape)\nprint (df_test_set.shape)\n\n# print (trainX)", "Correlated columns\n\nCorrelation plot\nScatter plots", "# separate out the Categorical and Numerical features\nimport seaborn as sns\n\nnumerical_feature=trainX.dtypes[trainX.dtypes!= 'object'].index\ncategorical_feature=trainX.dtypes[trainX.dtypes== 'object'].index\n\nprint ("There are {} numeric and {} categorical columns in train data".format(numerical_feature.shape[0],categorical_feature.shape[0]))\n\ncorr=trainX[numerical_feature].corr()\nsns.heatmap(corr)\n\n\nfrom pandas import *\nimport numpy as np\nfrom scipy.stats.stats import pearsonr\nimport itertools\n\n# from https://stackoverflow.com/questions/17778394/list-highest-correlation-pairs-from-a-large-correlation-matrix-in-pandas\ndef get_redundant_pairs(df):\n '''Get diagonal and lower triangular pairs of correlation matrix'''\n pairs_to_drop = set()\n cols = 
df.columns\n for i in range(0, df.shape[1]):\n for j in range(0, i+1):\n pairs_to_drop.add((cols[i], cols[j]))\n return pairs_to_drop\n\ndef get_top_abs_correlations(df, n=5):\n au_corr = df.corr().abs().unstack()\n labels_to_drop = get_redundant_pairs(df)\n au_corr = au_corr.drop(labels=labels_to_drop).sort_values(ascending=False)\n return au_corr[0:n]\n\nprint("Top Absolute Correlations")\nprint(get_top_abs_correlations(trainX, 5))", "Create PyTorch GPU tensors from numpy arrays\n\nNote how we transform the np arrays", "# Convert the np arrays into the correct dimension and type\n# Note that BCELoss requires Float in X as well as in y\ndef XnumpyToTensor(x_data_np):\n x_data_np = np.array(x_data_np, dtype=np.float32) \n print(x_data_np.shape)\n print(type(x_data_np))\n\n if use_cuda:\n lgr.info ("Using the GPU") \n X_tensor = Variable(torch.from_numpy(x_data_np).cuda()) # Note the conversion for pytorch \n else:\n lgr.info ("Using the CPU")\n X_tensor = Variable(torch.from_numpy(x_data_np)) # Note the conversion for pytorch\n \n print(type(X_tensor.data)) # should be 'torch.cuda.FloatTensor'\n print(x_data_np.shape)\n print(type(x_data_np)) \n return X_tensor\n\n\n# Convert the np arrays into the correct dimension and type\n# Note that BCELoss requires Float in X as well as in y\ndef YnumpyToTensor(y_data_np): \n y_data_np=y_data_np.reshape((y_data_np.shape[0],1)) # Must be reshaped for PyTorch!\n print(y_data_np.shape)\n print(type(y_data_np))\n\n if use_cuda:\n lgr.info ("Using the GPU") \n # Y = Variable(torch.from_numpy(y_data_np).type(torch.LongTensor).cuda())\n Y_tensor = Variable(torch.from_numpy(y_data_np)).type(torch.FloatTensor).cuda() # BCELoss requires Float \n else:\n lgr.info ("Using the CPU") \n # Y = Variable(torch.squeeze (torch.from_numpy(y_data_np).type(torch.LongTensor))) # \n Y_tensor = Variable(torch.from_numpy(y_data_np)).type(torch.FloatTensor) # BCELoss requires Float \n\n print(type(Y_tensor.data)) # should be 
'torch.cuda.FloatTensor'\n print(y_data_np.shape)\n print(type(y_data_np)) \n return Y_tensor", "The NN model\nMLP model\n\n\nA multilayer perceptron is a logistic regressor where, instead of feeding the input to the logistic regression directly, you insert an intermediate layer, called the hidden layer, that has a nonlinear activation function (usually tanh or sigmoid). One can use many such hidden layers, making the architecture deep.\n\n\nHere we define a simple MLP structure. We map the input feature vector to a higher-dimensional space, then gradually decrease the dimension, ending in a 1-dimensional space. Because we are calculating the probability of the positive class, after the final layer we need to use a sigmoid layer. \n\n\nInitial weights selection\n\n\nThere are many ways to select the initial weights for a neural network architecture. A common initialization scheme is random initialization, which sets the biases and weights of all the nodes in each hidden layer randomly.\n\n\nBefore starting the training process, an initial value is assigned to each variable. This is done by pure randomness, using for example a uniform or Gaussian distribution. But if we start with weights that are too small, the signal could decrease so much that it is too small to be useful. On the other hand, when the parameters are initialized with high values, the signal can end up exploding as it propagates through the network.\n\n\nIn consequence, a good initialization can have a radical effect on how fast the network will learn useful patterns. For this purpose, some best practices have been developed. One famous example used is Xavier initialization. 
Its formulation is based on the number of input and output neurons and uses sampling from a uniform distribution with zero mean and all biases set to zero.\n\n\nIn effect (according to theory), this initializes the weights of the network to values that are closer to the optimum, and therefore requires fewer epochs to train.\n\n\nReferences:\n\nnninit.xavier_uniform(tensor, gain=1) - Fills tensor with values according to the method described in "Understanding the difficulty of training deep feedforward neural networks" - Glorot, X. and Bengio, Y., using a uniform distribution.\nnninit.xavier_normal(tensor, gain=1) - Fills tensor with values according to the method described in "Understanding the difficulty of training deep feedforward neural networks" - Glorot, X. and Bengio, Y., using a normal distribution.\nnninit.kaiming_uniform(tensor, gain=1) - Fills tensor with values according to the method described in "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification" - He, K. et al. using a uniform distribution.\nnninit.kaiming_normal(tensor, gain=1) - Fills tensor with values according to the method described in ["Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification" - He, K. 
et al.]", "# p is the probability of being dropped in PyTorch\n# NN params\nDROPOUT_PROB = 0.65\n\nLR = 0.005\nMOMENTUM= 0.9\ndropout = torch.nn.Dropout(p=1 - (DROPOUT_PROB))\nsigmoid = torch.nn.Sigmoid()\ntanh=torch.nn.Tanh()\nrelu=torch.nn.LeakyReLU()\n\nlgr.info(dropout)\n\nhiddenLayer1Size=256\nhiddenLayer2Size=int(hiddenLayer1Size/2)\nhiddenLayer3Size=int(hiddenLayer1Size/2)\nhiddenLayer4Size=int(hiddenLayer1Size/2)\n\nlinear1=torch.nn.Linear(N_FEATURES, hiddenLayer1Size, bias=True) \ntorch.nn.init.xavier_uniform(linear1.weight)\n\nlinear2=torch.nn.Linear(hiddenLayer1Size, hiddenLayer2Size)\ntorch.nn.init.xavier_uniform(linear2.weight)\n\nlinear3=torch.nn.Linear(hiddenLayer2Size,1)\ntorch.nn.init.xavier_uniform(linear3.weight)\n\nnet = torch.nn.Sequential(linear1,nn.BatchNorm1d(hiddenLayer1Size),dropout,relu,\n linear2,nn.BatchNorm1d(hiddenLayer2Size),dropout,relu,\n linear3,dropout,sigmoid, \n )\nlgr.info(net) # net architecture", "The cross-entropy loss function\nA binary cross-entropy Criterion (which expects 0 or 1 valued targets) :\npython\ncriterion = torch.nn.BCELoss() \nThe BCE loss is defined as :\n<img src=\"../images/bce2.png\" align=\"center\">", "# ! 
pip install sympy\nimport sympy as sp\nsp.interactive.printing.init_printing(use_latex=True)\nfrom IPython.display import display, Math, Latex\nmaths = lambda s: display(Math(s))\nlatex = lambda s: display(Latex(s))\n\n# the loss function is as follows:\nmaths("\\mathbf{Loss Function:} J(x, z) = -\\sum_k^d[x_k \\log z_k + (1-x_k)\\log(1-z_k)]")\n\n# optimizer = torch.optim.SGD(net.parameters(), lr=0.02)\n# optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)\n# optimizer = optim.SGD(net.parameters(), lr=LR, momentum=MOMENTUM, weight_decay=5e-3)\n# L2 regularization can easily be added to the entire model via the optimizer\noptimizer = torch.optim.Adam(net.parameters(), lr=LR,weight_decay=5e-2) # L2 regularization\n\nloss_func=torch.nn.BCELoss() # Binary cross entropy: http://pytorch.org/docs/nn.html#bceloss\n# http://andersonjo.github.io/artificial-intelligence/2017/01/07/Cost-Functions/\n\nif use_cuda:\n lgr.info ("Using the GPU") \n net.cuda()\n loss_func.cuda()\n# cudnn.benchmark = True\n\nlgr.info (optimizer)\nlgr.info (loss_func)", "Training in batches + Measuring the performance of the deep learning model", "import time\nstart_time = time.time() \nepochs=160 # change to 400 for better results\ndiv_factor=20\nall_losses = []\nloss_arr =[]\n\nX_tensor_train= XnumpyToTensor(trainX)\nY_tensor_train= YnumpyToTensor(trainY)\nprint(type(X_tensor_train.data), type(Y_tensor_train.data)) # should be 'torch.cuda.FloatTensor'\n\n# CUDNN_STATUS_NOT_SUPPORTED. 
This error may appear if you passed in a non-contiguous input.\n# X_tensor_train=X_tensor_train.contiguous()\n# Y_tensor_train=Y_tensor_train.contiguous()\n\n\n# dataset = TensorDataset(data_tensor = X_tensor_train,target_tensor = Y_tensor_train)\n# loader = DataLoader(dataset=dataset, batch_size=batch_size, shuffle=True)\n \n \n# From here onwards, we must only use PyTorch Tensors\nfor step in range(epochs): \n out = net(X_tensor_train) # input x and predict based on x\n cost = loss_func(out, Y_tensor_train) # must be (1. nn output, 2. target), the target label is NOT one-hotted\n\n optimizer.zero_grad() # clear gradients for next train\n cost.backward() # backpropagation, compute gradients\n optimizer.step() # apply gradients\n \n \n if step % div_factor == 0: \n loss = cost.data[0]\n all_losses.append(loss)\n print(step, cost.data.cpu().numpy())\n # RuntimeError: can't convert CUDA tensor to numpy (it doesn't support GPU arrays). \n # Use .cpu() to move the tensor to host memory first. 
\n prediction = (net(X_tensor_train).data).float() # probabilities \n# prediction = (net(X_tensor).data > 0.5).float() # zero or one\n# print (\"Pred:\" + str (prediction)) # Pred:Variable containing: 0 or 1\n# pred_y = prediction.data.numpy().squeeze() \n pred_y = prediction.cpu().numpy().squeeze()\n target_y = Y_tensor_train.cpu().data.numpy()\n \n tu = (log_loss(target_y, pred_y),roc_auc_score(target_y,pred_y ))\n print ('LOG_LOSS={}, ROC_AUC={} '.format(*tu)) \n \n loss_arr.append(cost.cpu().data.numpy()[0])\n \nend_time = time.time()\nprint ('{} {:6.3f} seconds'.format('GPU:', end_time-start_time))\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.plot(all_losses)\nplt.show()\n\nfalse_positive_rate, true_positive_rate, thresholds = roc_curve(target_y,pred_y)\nroc_auc = auc(false_positive_rate, true_positive_rate)\n\nplt.title('LOG_LOSS=' + str(log_loss(target_y, pred_y)))\nplt.plot(false_positive_rate, true_positive_rate, 'b', label='AUC = %0.6f' % roc_auc)\nplt.legend(loc='lower right')\nplt.plot([0, 1], [0, 1], 'r--')\nplt.xlim([-0.1, 1.2])\nplt.ylim([-0.1, 1.2])\nplt.ylabel('True Positive Rate')\nplt.xlabel('False Positive Rate')\nplt.show()", "Visualize Loss Graph using Visdom\n\nMake sure you have Visdom installed and running\npip install visdom\npython -m visdom.server &", "# ! 
pip install visdom\n\nfrom visdom import Visdom\nviz = Visdom()\n\nnum_epoch=int(epochs/div_factor)\n\nx = np.reshape([i for i in range(num_epoch)],newshape=[num_epoch,1])\nloss_data = np.reshape(loss_arr,newshape=[num_epoch,1])\n\nwin3=viz.line(\n X = x,\n Y = loss_data,\n opts=dict(\n xtickmin=0,\n xtickmax=num_epoch,\n xtickstep=1,\n ytickmin=0,\n ytickmax=20,\n ytickstep=1,\n markercolor=np.random.randint(0, 255, num_epoch),\n ),\n)", "Performance of the deep learning model on the Validation set", "net.eval()\n# Validation data\nprint (valX.shape)\nprint (valY.shape)\n\nX_tensor_val= XnumpyToTensor(valX)\nY_tensor_val= YnumpyToTensor(valY)\n\n\nprint(type(X_tensor_val.data), type(Y_tensor_val.data)) # should be 'torch.cuda.FloatTensor'\n\npredicted_val = (net(X_tensor_val).data).float() # probabilities \n# predicted_val = (net(X_tensor_val).data > 0.5).float() # zero or one\npred_y = predicted_val.cpu().numpy()\ntarget_y = Y_tensor_val.cpu().data.numpy() \n\nprint (type(pred_y))\nprint (type(target_y))\n\ntu = (log_loss(target_y, pred_y),roc_auc_score(target_y,pred_y ))\nprint ('\\n')\nprint ('log_loss={} roc_auc={} '.format(*tu))\n\nfalse_positive_rate, true_positive_rate, thresholds = roc_curve(target_y,pred_y)\nroc_auc = auc(false_positive_rate, true_positive_rate)\n\nplt.title('LOG_LOSS=' + str(log_loss(target_y, pred_y)))\nplt.plot(false_positive_rate, true_positive_rate, 'b', label='AUC = %0.6f' % roc_auc)\nplt.legend(loc='lower right')\nplt.plot([0, 1], [0, 1], 'r--')\nplt.xlim([-0.1, 1.2])\nplt.ylim([-0.1, 1.2])\nplt.ylabel('True Positive Rate')\nplt.xlabel('False Positive Rate')\nplt.show()\n\n# print (pred_y)", "Submission on Test set", "# testX, df_test_set\n# df[df.columns.difference(['b'])]\n# trainX, trainY, valX, valY, testX, df_test_set = loadDataSplit()\n \nprint (df_test_set.shape)\ncolumns = ['id', 'probability']\ndf_pred=pd.DataFrame(data=np.zeros((0,len(columns))), columns=columns)\n# df_pred.id.astype(int)\n\nfor index, row in 
df_test_set.iterrows():\n row_no_id=row.drop('id') \n# print (row_no_id.values) \n x_data_np = np.array(row_no_id.values, dtype=np.float32) \n if use_cuda:\n X_tensor_test = Variable(torch.from_numpy(x_data_np).cuda()) # Note the conversion for pytorch \n else:\n X_tensor_test = Variable(torch.from_numpy(x_data_np)) # Note the conversion for pytorch\n \n X_tensor_test=X_tensor_test.view(1, trainX.shape[1]) # does not work with 1d tensors \n predicted_val = (net(X_tensor_test).data).float() # probabilities \n p_test = predicted_val.cpu().numpy().item() # otherwise we get an array, we need a single float\n \n df_pred = df_pred.append({'id':row['id'], 'probability':p_test},ignore_index=True)\n# df_pred = df_pred.append({'id':row['id'].astype(int), 'probability':p_test},ignore_index=True)\n\ndf_pred.head(5)", "Create a CSV with the IDs and the corresponding probabilities.", "# df_pred.id=df_pred.id.astype(int)\n\ndef savePred(df_pred, loss):\n# csv_path = 'pred/p_{}_{}_{}.csv'.format(loss, name, (str(time.time())))\n csv_path = 'pred/pred_{}_{}.csv'.format(loss, (str(time.time())))\n df_pred.to_csv(csv_path, columns=('id', 'probability'), index=None)\n print (csv_path)\n \nsavePred (df_pred, log_loss(target_y, pred_y))", "Actual score on Numer.ai - screenshot of the leaderboard\n<img src=\"../images/numerai-score.jpg\" width=\"35%\" align=\"center\">" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
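The Xavier-uniform scheme the notebook above relies on (`torch.nn.init.xavier_uniform`) can be sketched in a few lines of numpy. This is a minimal illustration of the Glorot & Bengio formula under stated assumptions, not the library's implementation; `xavier_uniform` here is a name introduced for the example:

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, rng=None):
    # Glorot & Bengio: zero-mean uniform with limit sqrt(6 / (fan_in + fan_out)),
    # chosen so activation variance stays roughly constant across layers.
    if rng is None:
        rng = np.random.default_rng(0)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = xavier_uniform(256, 128)
print(W.shape)                              # (256, 128)
print(bool(np.abs(W).max() <= np.sqrt(6.0 / 384)))  # True
```

Every sample is bounded by the limit and the empirical mean is close to zero, which is all the scheme promises; the gain factor that PyTorch's initializer also supports is omitted here for brevity.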
kerimlcr/ab2017-dpyo
ornek/osmnx/osmnx-0.3/examples/02-example-osm-to-shapefile.ipynb
gpl-3.0
[ "Get shapefiles from OpenStreetMap with OSMnx\n\nOverview of OSMnx\nGitHub repo\nExamples, demos, tutorials", "import osmnx as ox\n%matplotlib inline\nox.config(log_file=True, log_console=True, use_cache=True)", "Get the shapefile for one city, project it, display it, and save it", "# from some place name, create a GeoDataFrame containing the geometry of the place\ncity = ox.gdf_from_place('Walnut Creek, California, USA')\ncity\n\n# save the retrieved data as a shapefile\nox.save_gdf_shapefile(city)\n\n# project the geometry to the appropriate UTM zone (calculated automatically) then plot it\ncity = ox.project_gdf(city)\nfig, ax = ox.plot_shape(city)", "Create a shapefile for multiple cities, project it, display it, and save it", "# define a list of place names\nplace_names = ['Berkeley, California, USA', \n 'Oakland, California, USA',\n 'Piedmont, California, USA',\n 'Emeryville, California, USA',\n 'Alameda, Alameda County, CA, USA']\n\n# create a GeoDataFrame with rows for each place in the list\neast_bay = ox.gdf_from_places(place_names, gdf_name='east_bay_cities')\neast_bay\n\n# project the geometry to the appropriate UTM zone then plot it\neast_bay = ox.project_gdf(east_bay)\nfig, ax = ox.plot_shape(east_bay)\n\n# save the retrieved and projected data as a shapefile\nox.save_gdf_shapefile(east_bay)", "You can also construct buffered spatial geometries", "# pass in buffer_dist in meters\ncity_buffered = ox.gdf_from_place('Walnut Creek, California, USA', buffer_dist=250)\nfig, ax = ox.plot_shape(city_buffered)\n\n# you can buffer multiple places in a single query\neast_bay_buffered = ox.gdf_from_places(place_names, gdf_name='east_bay_cities', buffer_dist=250)\nfig, ax = ox.plot_shape(east_bay_buffered, alpha=0.7)", "You can download boroughs, counties, states, or countries too\nNotice the polygon geometries represent political boundaries, not physical/land boundaries.", "gdf = ox.gdf_from_place('Manhattan, New York, New York, USA')\ngdf = 
ox.project_gdf(gdf)\nfig, ax = ox.plot_shape(gdf)\n\ngdf = ox.gdf_from_place('Cook County, Illinois, United States')\ngdf = ox.project_gdf(gdf)\nfig, ax = ox.plot_shape(gdf)\n\ngdf = ox.gdf_from_place('Iowa')\ngdf = ox.project_gdf(gdf)\nfig, ax = ox.plot_shape(gdf)\n\ngdf = ox.gdf_from_places(['United Kingdom', 'Ireland'])\ngdf = ox.project_gdf(gdf)\nfig, ax = ox.plot_shape(gdf)", "Be careful to pass the right place name that OSM needs\nBe specific and explicit, and sanity-check the results. The function logs a warning if you get a point returned instead of a polygon. In the first example below, OSM resolves 'Melbourne, Victoria, Australia' to a single point at the center of the city. In the second example below, OSM correctly resolves 'City of Melbourne, Victoria, Australia' to the entire city and returns its polygon geometry.", "melbourne = ox.gdf_from_place('Melbourne, Victoria, Australia')\nmelbourne = ox.project_gdf(melbourne)\ntype(melbourne['geometry'].iloc[0])\n\nmelbourne = ox.gdf_from_place('City of Melbourne, Victoria, Australia')\nmelbourne = ox.project_gdf(melbourne)\nfig, ax = ox.plot_shape(melbourne)", "Specify that you want a country if the name resolves to a city of the same name\nOSM resolves 'Mexico' to Mexico City and returns a single point at the center of the city. Instead, we have a couple of options:\n\nWe can pass a dict containing a structured query to specify that we want Mexico the country instead of Mexico the city.\nWe can also get multiple countries by passing a list of queries. 
These can be a mixture of strings and dicts.", "mexico = ox.gdf_from_place('Mexico')\nmexico = ox.project_gdf(mexico)\ntype(mexico['geometry'].iloc[0])\n\n# instead of a string, you can pass a dict containing a structured query\nmexico = ox.gdf_from_place({'country':'Mexico'})\nmexico = ox.project_gdf(mexico)\nfig, ax = ox.plot_shape(mexico)\n\n# you can pass multiple queries with mixed types (dicts and strings)\nmx_gt_tx = ox.gdf_from_places(queries=[{'country':'Mexico'}, 'Guatemala', {'state':'Texas'}])\nmx_gt_tx = ox.project_gdf(mx_gt_tx)\nfig, ax = ox.plot_shape(mx_gt_tx)", "You can request a specific result number\nBy default, we only request 1 result from OSM. But, we can pass an optional which_result parameter to query OSM for n results and then process/return the nth. If you query 'France', OSM returns the country with all its overseas territories as result #1 and European France alone as result #2. Querying for 'France' returns just the first result (and thus all of France's overseas territories), but passing which_result=2 instead retrieves the top 2 results from OSM and processes/returns the 2nd one (which is European France). You could have also done this to retrieve Mexico the country instead of Mexico City above.", "france = ox.gdf_from_place('France')\nfig, ax = ox.plot_shape(france)\n\nfrance = ox.gdf_from_place('France', which_result=2)\nfrance = ox.project_gdf(france)\nfig, ax = ox.plot_shape(france)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
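`project_gdf` in the notebook above picks a UTM zone automatically from the geometry's location. Under the standard 6-degree zone convention, that choice reduces to a one-liner — a hedged sketch in plain Python (`utm_zone_from_lon` is a name made up for this example, not an OSMnx function, and it ignores the Norway/Svalbard zone exceptions):

```python
import math

def utm_zone_from_lon(lon):
    # Zones are 6 degrees wide, numbered 1..60 eastward starting at 180W.
    # Assumes lon in [-180, 180); the lon == 180 edge case is ignored.
    return int(math.floor((lon + 180.0) / 6.0)) + 1

print(utm_zone_from_lon(-122.06))  # Walnut Creek, CA -> zone 10
print(utm_zone_from_lon(144.96))   # Melbourne -> zone 55
```

This matches the zones the projected plots above end up in for the California and Melbourne examples.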
JustinShenk/genre-melodies
create_dataset.ipynb
mit
[ "Generate genre-based melodies using Magenta\nDownload the Lakh MIDI Dataset (http://hog.ee.columbia.edu/craffel/lmd/)", "import os\nimport shutil\nimport spotipy\nimport pickle\nimport pandas as pd\nimport numpy as np\n%matplotlib notebook\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom sklearn.manifold import TSNE\nfrom sklearn.decomposition import PCA\nfrom collections import Counter\n\n\nif not os.path.exists('clean_midi'):\n # Download the 'Clean MIDI' dataset from http://colinraffel.com/projects/lmd/\n from six.moves import urllib\n import StringIO\n import gzip\n import tarfile\n FILE_URL = 'http://hog.ee.columbia.edu/craffel/lmd/clean_midi.tar.gz'\n response = urllib.request.urlopen(FILE_URL)\n print(\"INFO: Downloaded {}\".format(FILE_URL))\n compressedFile = StringIO.StringIO()\n compressedFile.write(response.read())\n compressedFile.seek(0)\n decompressedFile = gzip.GzipFile(fileobj=compressedFile, mode='rb')\n OUTFILE_PATH = 'clean_midi.tar'\n with open(OUTFILE_PATH, 'wb') as outfile:\n outfile.write(decompressedFile.read())\n tar = tarfile.open(OUTFILE_PATH)\n tar.extractall()\n tar.close()\n print(\"INFO: Extracted data\")\nelse:\n print(\"INFO: Found `clean_midi` directory\")", "Preprocessing\nCreate author-genre dictionary for preprocessing and analysis", "if not os.path.exists(\"genres.p\"):\n # Use Spotify's API to genre lookup. 
Login first and get your OAuth token: \n # https://developer.spotify.com/web-api/search-item/\n # NOTE: Replace `AUTH` value with your token.\n AUTH = \"ENTER-MY-AUTH-KEY\"\n\n # Get artists from folder names\n artists = [item for item in os.listdir('clean_midi') if not item.startswith('.')]\n\n sp = spotipy.Spotify(auth=AUTH)\n genres = {}\n for i,artist in enumerate(artists):\n try:\n results = sp.search(q=artist, type='artist',limit=1)\n items = results['artists']['items']\n genre_list = items[0]['genres'] if len(items) else items['genres']\n genres[artist] = genre_list\n if i < 5:\n print(\"INFO: Preview {}/5\".format(i + 1),\n artist, genre_list[:5])\n except Exception as e:\n print(\"INFO: \", artist, \"not included: \", e)\n\n # Save to pickle file\n pickle.dump(genres,open(\"genres.p\",\"wb\"))\nelse:\n # Load genres meta-data\n genres = pickle.load(open(\"genres.p\",\"rb\"))", "Examine distribution of genres in dataset", "# Get the most common genres\nflattened_list = [item for sublist in list(genres.values()) for item in sublist]\nc = Counter(flattened_list)\nc.most_common()[:20]", "Load author-genre metadata into dataframe", "# Convert labels to vectors\ncategories = set(sorted(list(flattened_list)))\ndf = pd.DataFrame(columns=categories)\n\nfor author,genre_list in genres.items():\n row = pd.Series(np.zeros(len(categories)),name=author)\n for ind,genre in enumerate(categories):\n if genre in genre_list:\n row[ind] = 1\n d = pd.DataFrame(row).T\n d.columns = categories\n df = pd.concat([df,d])\ndf = df.reindex_axis(sorted(df.columns), axis=1)\n\n# Assign label for each author corresponding with meta-genre (eg, rock, classical)\ndef getStyle(genre_substring):\n \"\"\"Get data where features contain `genre_substring`.\"\"\"\n style_index = np.asarray([genre_substring in x for x in df.columns])\n style_array = df.iloc[:,style_index].any(axis=1)\n return style_array\n\n# Create array of color/labels\ncolor_array = np.zeros(df.shape[0])\ngenre_labels = 
['other','rock','metal','pop','mellow','country', 'rap','classical']\nfor i,g in enumerate(genre_labels):\n if g == 'other':\n pass\n else:\n color_array[np.where(getStyle(g))] = i", "Visualize author classification by genre using TSNE and PCA", "# 2-dimensions\nmodel = TSNE(random_state=0)\nnp.set_printoptions(suppress=True)\nX_tsne = model.fit_transform(df.values)\nX_pca = PCA().fit_transform(df.values)\n\n# 3-dimensions\nmodel3d = TSNE(n_components=3, random_state=0)\nnp.set_printoptions(suppress=True)\nX_tsne3d = model3d.fit_transform(df.values)\nX_pca3d = PCA(n_components=3).fit_transform(df.values)\n\n%pylab inline\nCMAP_NAME = 'Set1'\ncmap = matplotlib.cm.get_cmap(CMAP_NAME,lut=max(color_array)+1)\nfigure(figsize=(10, 5))\nsuptitle('Artist-genre visualization (2-D)')\naxes_tsne = []\naxes_pca = []\n\n# TSNE\nsubplot(121)\ntitle('TSNE')\nfor l in set(color_array):\n ax = plt.scatter(X_tsne[color_array==l][:, 0], X_tsne[color_array==l][:, 1],c=cmap(l/max(color_array)), s=5)\n axes_tsne.append(ax)\nlegend(handles=axes_tsne, labels=genre_labels,frameon=True,markerscale=2)\n\n# PCA\nsubplot(122)\ntitle('PCA')\nfor l in set(color_array):\n ax = plt.scatter(X_pca[color_array==l][:, 0], X_pca[color_array==l][:, 1],c=cmap(l/max(color_array)), s=5)\n axes_pca.append(ax)\n\nlegend(handles=axes_pca, labels=genre_labels,frameon=True,markerscale=2)\nplt.show()\n\nfrom mpl_toolkits.mplot3d import Axes3D\nfig = plt.figure(figsize=(10, 5))\nsuptitle('Artist-genre visualization (3-D)')\nax = fig.add_subplot(121, projection='3d')\ntitle('TSNE')\naxes_tsne = []\naxes_pca = []\nscatter_proxies = []\nfor l in set(color_array):\n ax.scatter(X_tsne3d[color_array==l][:, 0], X_tsne3d[color_array==l][:, 1],X_tsne3d[color_array==l][:, 2],c=cmap(l/max(color_array)), s=5,zdir='x')\n axes_tsne.append(fig.gca())\n proxy = matplotlib.lines.Line2D([0],[0], linestyle=\"none\", c=cmap(l/max(color_array)), marker = 'o')\n scatter_proxies.append(proxy) \n\nlegend(handles=scatter_proxies, 
labels=genre_labels,numpoints=1,frameon=True,markerscale=0.8)\nax2 = fig.add_subplot(122, projection='3d')\ntitle('PCA')\nfor l in set(color_array): \n ax2.scatter(X_pca3d[color_array==l][:, 0], X_pca3d[color_array==l][:, 1], X_pca3d[color_array==l][:, 2],c=cmap(l/max(color_array)),s=5)\n axes_pca.append(fig.gca())\n \nlegend(handles=scatter_proxies, labels=genre_labels,numpoints=1,frameon=True,markerscale=0.8)\nplt.show()", "Choose 2 genres with many artists and with unlikely overlap\nMetal and classical are two candidates.", "MIDI_DIR = os.path.join(os.getcwd(),'clean_midi')\n\ndef get_artists(genre):\n \"\"\"Get artists with label `genre`.\"\"\"\n artists = [artist for artist, gs in genres.items() if genre in gs]\n return artists\n\n# Get artist with genres 'soft rock' and 'disco'\ngenre_data = {}\nmetal = get_artists('metal')\nclassical = get_artists('classical')\n\ngenre_data['metal'] = metal\ngenre_data['classical'] = classical\n\n# Copy artists to a genre-specific folder\nfor genre, artists in genre_data.items():\n try:\n for artist in artists:\n _genre = genre.replace(' ','_').replace('&','n')\n shutil.copytree(os.path.join(MIDI_DIR,artist),os.path.join(os.getcwd(),'subsets',_genre,artist))\n except Exception as e:\n print(e)", "Deep learning genre melodies\nConvert MIDIs to NoteSequence using Magenta's script\nRun ./train_model.sh in project directory, or enter the following commands in the subset folder and create corresponding .tfrecord files:\nsh\nfor genre in */\n do\n if [[ $genre == *examples* ]]\n then continue\n fi\n convert_dir_to_note_sequences \\\n --input_dir=$genre \\\n --output_file=/tmp/${genre%/}_notesequences.tfrecord \\\n --recursive &amp;&amp; echo \"INFO: ${genre%/} converted to NoteSequences\"\n done\nThe subsets folder content should now contain .tfrecord files:\nbash\nclassical/ metal/\nclassical_notesequences.tfrecord metal_notesequences.tfrecord\nCreate the melody datasets for each genre:\nbash\nfor genre in */\n do\n if [[ $genre 
== *examples* ]]\n then continue\n fi\n melody_rnn_create_dataset \\\n --config=attention_rnn \\\n --input=/tmp/${genre%/}_notesequences.tfrecord \\\n --output_dir=sequence_examples/${genre} \\\n --eval_ratio=0.10 &amp;&amp; echo \"INFO: ${genre%/} database created.\"\n done\nNow train the model:\nbash\n for genre in */\n do\n if [[ $genre == *examples* ]]\n then continue\n fi\n melody_rnn_train \\\n --config=attention_rnn \\\n --run_dir=/tmp/melody_rnn/logdir/run1/${genre} \\\n --sequence_example_file=$(pwd)/sequence_examples/${genre%/}/training_melodies.tfrecord \\\n --hparams=\"batch_size=64,rnn_layer_sizes=[64,64]\" \\\n --num_training_steps=2000 &amp;&amp; echo \"INFO: ${genre%/} model trained.\"\n done\nGenerate melodies:\nbash\nfor genre in */ ;\n do\n if [[ $genre == *examples* ]];\n then continue\n fi\n melody_rnn_generate \\\n --config=attention_rnn \\\n --run_dir=/tmp/melody_rnn/logdir/run1/${genre} \\\n --output_dir=/tmp/melody_rnn/generated/${genre} \\\n --num_outputs=10 \\\n --num_steps=128 \\\n --hparams=\"batch_size=64,rnn_layer_sizes=[64,64]\" \\\n --primer_melody=\"[60]\" &amp;&amp; echo \"INFO: ${genre%/} melodies generated.\"\n done\nYour melodies can be found in /tmp/melody_rnn/generated/[genre]. Convert to mp3's to play on modern browser or OS." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
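The row-by-row DataFrame construction in the notebook above (building the artist-by-genre indicator matrix) can be sketched more directly with plain Python and numpy. The toy `genres` dict below is an illustrative stand-in for the Spotify lookup results, not the real data:

```python
import numpy as np

genres = {  # toy stand-in for the artist -> genre-list dict built from Spotify
    'Metallica': ['metal', 'rock'],
    'Chopin': ['classical'],
    'ABBA': ['pop', 'rock'],
}

# sorted genre vocabulary and a column index for each genre
categories = sorted({g for gl in genres.values() for g in gl})
col = {g: i for i, g in enumerate(categories)}

# binary indicator matrix: one row per artist, one column per genre
artists = sorted(genres)
X = np.zeros((len(artists), len(categories)))
for r, artist in enumerate(artists):
    for g in genres[artist]:
        X[r, col[g]] = 1

print(categories)  # ['classical', 'metal', 'pop', 'rock']
print(X)
```

The same matrix is what the TSNE/PCA cells consume; `sklearn.preprocessing.MultiLabelBinarizer` would be another way to build it.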
aselle/tensorflow
tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\").\nText Generation using a RNN\n<table class=\"tfo-notebook-buttons\" align=\"left\"><td>\n<a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a> \n</td><td>\n<a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on Github</a></td></table>\n\nThis notebook demonstrates how to generate text using an RNN using tf.keras and eager execution. If you like, you can write a similar model using less code. Here, we show a lower-level impementation that's useful to understand as prework before diving in to deeper examples in a similar, like Neural Machine Translation with Attention.\nThis notebook is an end-to-end example. When you run it, it will download a dataset of Shakespeare's writing. We'll use a collection of plays, borrowed from Andrej Karpathy's excellent The Unreasonable Effectiveness of Recurrent Neural Networks. 
The notebook will train a model, and use it to generate sample output.\nHere is the output (with start string='w') after training a single-layer GRU for 30 epochs with the default settings below:\n```\nwere to the death of him\nAnd nothing of the field in the view of hell,\nWhen I said, banish him, I will not burn thee that would live.\nHENRY BOLINGBROKE:\nMy gracious uncle--\nDUKE OF YORK:\nAs much disgraced to the court, the gods them speak,\nAnd now in peace himself excuse thee in the world.\nHORTENSIO:\nMadam, 'tis not the cause of the counterfeit of the earth,\nAnd leave me to the sun that set them on the earth\nAnd leave the world and are revenged for thee.\nGLOUCESTER:\nI would they were talking with the very name of means\nTo make a puppet of a guest, and therefore, good Grumio,\nNor arm'd to prison, o' the clouds, of the whole field,\nWith the admire\nWith the feeding of thy chair, and we have heard it so,\nI thank you, sir, he is a visor friendship with your silly your bed.\nSAMPSON:\nI do desire to live, I pray: some stand of the minds, make thee remedies\nWith the enemies of my soul.\nMENENIUS:\nI'll keep the cause of my mistress.\nPOLIXENES:\nMy brother Marcius!\nSecond Servant:\nWill't ple\n```\nOf course, while some of the sentences are grammatical, most do not make sense. But, consider:\n\n\nOur model is character based (when we began training, it did not yet know how to spell a valid English word, or that words were even a unit of text).\n\n\nThe structure of the output resembles a play (blocks begin with a speaker name, in all caps similar to the original text). Sentences generally end with a period. If you look at the text from a distance (or don't read the individual words too closely), it appears as if it's an excerpt from a play.\n\n\nAs a next step, you can experiment with training the model on a different dataset - any large text file (ASCII) will do, and you can modify a single line of code below to make that change. 
Have fun!\nInstall unidecode library\nA helpful library to convert Unicode to ASCII.", "!pip install unidecode", "Import TensorFlow and enable eager execution.", "# Import TensorFlow >= 1.9 and enable eager execution\nimport tensorflow as tf\n\n# Note: Once you enable eager execution, it cannot be disabled. \ntf.enable_eager_execution()\n\nimport numpy as np\nimport re\nimport random\nimport unidecode\nimport time", "Download the dataset\nIn this example, we will use the Shakespeare dataset. You can use any other dataset that you like.", "path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')", "Read the dataset", "text = unidecode.unidecode(open(path_to_file).read())\n# length of text is the number of characters in it\nprint (len(text))", "Creating dictionaries to map from characters to their indices and vice-versa, which will be used to vectorize the inputs", "# unique contains all the unique characters in the file\nunique = sorted(set(text))\n\n# creating a mapping from unique characters to indices\nchar2idx = {u:i for i, u in enumerate(unique)}\nidx2char = {i:u for i, u in enumerate(unique)}\n\n# setting the maximum length sentence we want for a single input in characters\nmax_length = 100\n\n# length of the vocabulary in chars\nvocab_size = len(unique)\n\n# the embedding dimension \nembedding_dim = 256\n\n# number of RNN (here GRU) units\nunits = 1024\n\n# batch size \nBATCH_SIZE = 64\n\n# buffer size to shuffle our dataset\nBUFFER_SIZE = 10000", "Creating the input and output tensors\nVectorizing the input and the target text because our model cannot understand strings, only numbers.\nBut first, we need to create the input and output vectors.\nRemember the max_length we set above; we will use it here. 
We are creating max_length chunks of input, where each input vector is all the characters in that chunk except the last and the target vector is all the characters in that chunk except the first.\nFor example, consider that the string = 'tensorflow' and the max_length is 9\nSo, the input = 'tensorflo' and output = 'ensorflow'\nAfter creating the vectors, we convert each character into numbers using the char2idx dictionary we created above.", "input_text = []\ntarget_text = []\n\nfor f in range(0, len(text)-max_length, max_length):\n inps = text[f:f+max_length]\n targ = text[f+1:f+1+max_length]\n\n input_text.append([char2idx[i] for i in inps])\n target_text.append([char2idx[t] for t in targ])\n \nprint (np.array(input_text).shape)\nprint (np.array(target_text).shape)", "Creating batches and shuffling them using tf.data", "dataset = tf.data.Dataset.from_tensor_slices((input_text, target_text)).shuffle(BUFFER_SIZE)\ndataset = dataset.apply(tf.contrib.data.batch_and_drop_remainder(BATCH_SIZE))", "Creating the model\nWe use the Model Subclassing API which gives us full flexibility to create the model and change it however we like. 
We use 3 layers to define our model.\n\nEmbedding layer\nGRU layer (you can use an LSTM layer here)\nFully connected layer", "class Model(tf.keras.Model):\n def __init__(self, vocab_size, embedding_dim, units, batch_size):\n super(Model, self).__init__()\n self.units = units\n self.batch_sz = batch_size\n\n self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)\n\n if tf.test.is_gpu_available():\n self.gru = tf.keras.layers.CuDNNGRU(self.units, \n return_sequences=True, \n return_state=True, \n recurrent_initializer='glorot_uniform')\n else:\n self.gru = tf.keras.layers.GRU(self.units, \n return_sequences=True, \n return_state=True, \n recurrent_activation='sigmoid', \n recurrent_initializer='glorot_uniform')\n\n self.fc = tf.keras.layers.Dense(vocab_size)\n \n def call(self, x, hidden):\n x = self.embedding(x)\n\n # output shape == (batch_size, max_length, hidden_size) \n # states shape == (batch_size, hidden_size)\n\n # states variable to preserve the state of the model\n # this will be used to pass at every step to the model while training\n output, states = self.gru(x, initial_state=hidden)\n\n\n # reshaping the output so that we can pass it to the Dense layer\n # after reshaping the shape is (batch_size * max_length, hidden_size)\n output = tf.reshape(output, (-1, output.shape[2]))\n\n # The dense layer will output predictions for every time_steps(max_length)\n # output shape after the dense layer == (max_length * batch_size, vocab_size)\n x = self.fc(output)\n\n return x, states", "Call the model and set the optimizer and the loss function", "model = Model(vocab_size, embedding_dim, units, BATCH_SIZE)\n\noptimizer = tf.train.AdamOptimizer()\n\n# using sparse_softmax_cross_entropy so that we don't have to create one-hot vectors\ndef loss_function(real, preds):\n return tf.losses.sparse_softmax_cross_entropy(labels=real, logits=preds)", "Train the model\nHere we will use a custom training loop with the help of GradientTape()\n\n\nWe initialize 
the hidden state of the model with zeros and shape == (batch_size, number of rnn units). We do this by calling the function defined while creating the model.\n\n\nNext, we iterate over the dataset (batch by batch) and calculate the predictions and the hidden states associated with that input.\n\n\nThere are a lot of interesting things happening here.\n\nThe model gets a hidden state (initialized with 0), let's call that H0, and the first batch of input, let's call that I0.\nThe model then returns the predictions P1 and H1.\nFor the next batch of input, the model receives I1 and H1.\nThe interesting thing here is that we pass H1 to the model with I1, which is how the model learns. The context learned from batch to batch is contained in the hidden state.\n\nWe continue doing this until the dataset is exhausted, and then we start a new epoch and repeat this.\n\n\nAfter calculating the predictions, we calculate the loss using the loss function defined above. Then we calculate the gradients of the loss with respect to the model variables.\n\n\nFinally, we take a step in that direction with the help of the optimizer using the apply_gradients function.\n\n\nNote: if you are running this notebook in Colab, which has a Tesla K80 GPU, it takes about 23 seconds per epoch.", "# Training step\n\nEPOCHS = 30\n\nfor epoch in range(EPOCHS):\n    start = time.time()\n    \n    # initializing the hidden state at the start of every epoch\n    hidden = model.reset_states()\n    \n    for (batch, (inp, target)) in enumerate(dataset):\n        with tf.GradientTape() as tape:\n            # feeding the hidden state back into the model\n            # This is the interesting step\n            predictions, hidden = model(inp, hidden)\n            \n            # reshaping the target because that's how the \n            # loss function expects it\n            target = tf.reshape(target, (-1,))\n            loss = loss_function(target, predictions)\n            \n        grads = tape.gradient(loss, model.variables)\n        optimizer.apply_gradients(zip(grads, model.variables), global_step=tf.train.get_or_create_global_step())\n\n        if batch % 100 == 0:\n            print ('Epoch {} Batch {} Loss {:.4f}'.format(epoch+1,\n                                                          batch,\n                                                          loss))\n    \n    print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss))\n    print('Time taken for 1 epoch {} sec\\n'.format(time.time() - start))", "Predicting using our trained model\nThe code block below is used to generate the text\n\n\nWe start by choosing a start string, initializing the hidden state and setting the number of characters we want to generate.\n\n\nWe get predictions using the start_string and the hidden state\n\n\nThen we use a multinomial distribution to calculate the index of the predicted character. We use this predicted character as our next input to the model\n\n\nThe hidden state returned by the model is fed back into the model so that it now has more context rather than just one character. After we predict the next character, the modified hidden states are again fed back into the model, which is how it builds up context from the previously predicted characters.\n\n\nIf you look at the predictions, the model knows when to capitalize and make paragraphs, and the text follows a Shakespearean style of writing, which is pretty awesome!", "# Evaluation step (generating text using the model learned)\n\n# number of characters to generate\nnum_generate = 1000\n\n# You can change the start string to experiment\nstart_string = 'Q'\n# converting our start string to numbers (vectorizing!) 
\ninput_eval = [char2idx[s] for s in start_string]\ninput_eval = tf.expand_dims(input_eval, 0)\n\n# empty string to store our results\ntext_generated = ''\n\n# low temperatures result in more predictable text.\n# higher temperatures result in more surprising text.\n# experiment to find the best setting\ntemperature = 1.0\n\n# hidden state shape == (batch_size, number of rnn units); here batch size == 1\nhidden = [tf.zeros((1, units))]\nfor i in range(num_generate):\n    predictions, hidden = model(input_eval, hidden)\n\n    # using a multinomial distribution to predict the character returned by the model\n    predictions = predictions / temperature\n    predicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].numpy()\n    \n    # We pass the predicted character as the next input to the model\n    # along with the previous hidden state\n    input_eval = tf.expand_dims([predicted_id], 0)\n    \n    text_generated += idx2char[predicted_id]\n\nprint (start_string + text_generated)", "Next steps\n\nChange the start string to a different character, or the start of a sentence.\nExperiment with training on a different dataset, or with different parameters. Project Gutenberg, for example, contains a large collection of books.\nExperiment with the temperature parameter.\nAdd another RNN layer." ]
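The effect of the temperature parameter in the generation loop above can be sketched with plain NumPy, independent of the notebook's model. The logits here are made-up values for a hypothetical 4-symbol vocabulary; the sampling mirrors dividing logits by temperature before drawing from the resulting distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, 0.1])  # hypothetical logits for a 4-symbol vocab

def sample(logits, temperature):
    # scale logits, then turn them into a probability distribution (softmax)
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# low temperature -> sampling is nearly deterministic (almost always the argmax);
# high temperature -> the distribution flattens toward uniform
low = [sample(logits, 0.1) for _ in range(1000)]
high = [sample(logits, 10.0) for _ in range(1000)]
print(sum(i == 0 for i in low) / 1000, sum(i == 0 for i in high) / 1000)
```

With temperature 0.1 the top logit dominates almost every draw, while at temperature 10 the four symbols are drawn at close to equal rates, matching the "predictable vs surprising" comments in the notebook.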
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
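The 'tensorflow'/'tensorflo'/'ensorflow' chunking scheme described in the text-generation notebook above can be checked with a few lines of plain Python. This is a standalone sketch (the toy `text` and `max_length` values are chosen for illustration; the variable names mirror the notebook's):

```python
text = "tensorflow is fun"
max_length = 9

# character <-> index lookups, built the same way as in the notebook
unique = sorted(set(text))
char2idx = {u: i for i, u in enumerate(unique)}
idx2char = {i: u for i, u in enumerate(unique)}

input_text, target_text = [], []
for f in range(0, len(text) - max_length, max_length):
    inps = text[f:f + max_length]          # all chars of the chunk except the last shift point
    targ = text[f + 1:f + 1 + max_length]  # same chunk shifted right by one character
    input_text.append([char2idx[c] for c in inps])
    target_text.append([char2idx[c] for c in targ])

# decode the first pair back to strings to verify the one-character shift
decoded_in = ''.join(idx2char[i] for i in input_text[0])
decoded_out = ''.join(idx2char[i] for i in target_text[0])
print(decoded_in, decoded_out)  # -> tensorflo ensorflow
```

Each input/target pair is the same window of text offset by one position, which is exactly what lets the model learn to predict the next character at every timestep.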
ES-DOC/esdoc-jupyterhub
notebooks/inm/cmip6/models/sandbox-3/ocnbgchem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: INM\nSource ID: SANDBOX-3\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:05\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'inm', 'sandbox-3', 'ocnbgchem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\n3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\n4. Key Properties --&gt; Transport Scheme\n5. Key Properties --&gt; Boundary Forcing\n6. Key Properties --&gt; Gas Exchange\n7. Key Properties --&gt; Carbon Chemistry\n8. Tracers\n9. Tracers --&gt; Ecosystem\n10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\n11. Tracers --&gt; Ecosystem --&gt; Zooplankton\n12. Tracers --&gt; Disolved Organic Matter\n13. Tracers --&gt; Particules\n14. Tracers --&gt; Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Elemental Stoichiometry\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n", "1.5. Elemental Stoichiometry Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. 
Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.7. Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all diagnostic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Damping\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for passive tracers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "2.2. 
Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for passive tracers (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for biology sources and sinks", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "3.2. Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transport scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n", "4.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTransport scheme used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4.3. Use Different Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe transport scheme if different from that of the ocean model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how atmospheric deposition is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n", "5.2. River Input\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river input is modeled", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n", "5.3. Sediments From Boundary Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are specified from boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Sediments From Explicit Model\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are specified from explicit sediment model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry*\n6.1. CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.2. CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe CO2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.3. O2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs O2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.4. O2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe O2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. DMS Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs DMS gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.6. DMS Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify DMS gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.7. 
N2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.8. N2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.9. N2O Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2O gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.10. N2O Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2O gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.11. CFC11 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC11 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.12. 
CFC11 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.13. CFC12 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC12 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.14. CFC12 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.15. SF6 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs SF6 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.16. SF6 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify SF6 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.17. 
13CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.18. 13CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.19. 14CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.20. 14CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.21. Other Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any other gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. 
Key Properties --&gt; Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how carbon chemistry is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n", "7.2. PH Scale\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.3. Constants If Not OMIP\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Sulfur Cycle Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sulfur cycle modeled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Nutrients Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Nitrous Species If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous species.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.5. Nitrous Processes If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous processes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Tracers --&gt; Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Upper Trophic Levels Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefine how upper trophic level are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of phytoplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n", "10.2. Pft\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Tracers --&gt; Ecosystem --&gt; Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of zooplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nZooplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Tracers --&gt; Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there bacteria representation ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. 
Lability\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Tracers --&gt; Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Types If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Size If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n", "13.4. Size If Discrete\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.5. Sinking Speed If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Tracers --&gt; Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n", "14.2. Abiotic Carbon\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs abiotic carbon modelled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.3. Alkalinity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is alkalinity modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google-research/tapas
notebooks/tabfact_predictions.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/google-research/tapas/blob/master/notebooks/tabfact_predictions.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2020 The Google AI Language Team Authors\nLicensed under the Apache License, Version 2.0 (the \"License\");", "# Copyright 2019 The Google AI Language Team Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Running a Tapas fine-tuned checkpoint\nThis notebook shows how to load and make predictions with TAPAS model, which was introduced in the paper: TAPAS: Weakly Supervised Table Parsing via Pre-training\nClone and install the repository\nFirst, let's install the code.", "! pip install tapas-table-parsing", "Fetch models fom Google Storage\nNext we can get pretrained checkpoint from Google Storage. For the sake of speed, this is a medium sized model trained on TABFACT. Note that best results in the paper were obtained with a large model.", "! gsutil cp \"gs://tapas_models/2020_10_07/tapas_tabfact_inter_masklm_medium_reset.zip\" \"tapas_model.zip\" && unzip tapas_model.zip\n! 
mv tapas_tabfact_inter_masklm_medium_reset tapas_model", "Imports", "import tensorflow.compat.v1 as tf\nimport os \nimport shutil\nimport csv\nimport pandas as pd\nimport IPython\n\ntf.get_logger().setLevel('ERROR')\n\nfrom tapas.utils import tf_example_utils\nfrom tapas.protos import interaction_pb2\nfrom tapas.utils import number_annotation_utils\nimport math\n", "Load checkpoint for prediction\nHere's the prediction code, which will create and interaction_pb2.Interaction protobuf object, which is the datastructure we use to store examples, and then call the prediction script.", "os.makedirs('results/tabfact/tf_examples', exist_ok=True)\nos.makedirs('results/tabfact/model', exist_ok=True)\nwith open('results/tabfact/model/checkpoint', 'w') as f:\n f.write('model_checkpoint_path: \"model.ckpt-0\"')\nfor suffix in ['.data-00000-of-00001', '.index', '.meta']:\n shutil.copyfile(f'tapas_model/model.ckpt{suffix}', f'results/tabfact/model/model.ckpt-0{suffix}')\n\nmax_seq_length = 512\nvocab_file = \"tapas_model/vocab.txt\"\nconfig = tf_example_utils.ClassifierConversionConfig(\n vocab_file=vocab_file,\n max_seq_length=max_seq_length,\n max_column_id=max_seq_length,\n max_row_id=max_seq_length,\n strip_column_names=False,\n add_aggregation_candidates=False,\n)\nconverter = tf_example_utils.ToClassifierTensorflowExample(config)\n\ndef convert_interactions_to_examples(tables_and_queries):\n \"\"\"Calls Tapas converter to convert interaction to example.\"\"\"\n for idx, (table, queries) in enumerate(tables_and_queries):\n interaction = interaction_pb2.Interaction()\n for position, query in enumerate(queries):\n question = interaction.questions.add()\n question.original_text = query\n question.id = f\"{idx}-0_{position}\"\n for header in table[0]:\n interaction.table.columns.add().text = header\n for line in table[1:]:\n row = interaction.table.rows.add()\n for cell in line:\n row.cells.add().text = cell\n number_annotation_utils.add_numeric_values(interaction)\n for i in 
range(len(interaction.questions)):\n try:\n yield converter.convert(interaction, i)\n except ValueError as e:\n print(f\"Can't convert interaction: {interaction.id} error: {e}\")\n \ndef write_tf_example(filename, examples):\n with tf.io.TFRecordWriter(filename) as writer:\n for example in examples:\n writer.write(example.SerializeToString())\n\ndef predict(table_data, queries):\n table = [list(map(lambda s: s.strip(), row.split(\"|\"))) \n for row in table_data.split(\"\\n\") if row.strip()]\n examples = convert_interactions_to_examples([(table, queries)])\n write_tf_example(\"results/tabfact/tf_examples/test.tfrecord\", examples)\n write_tf_example(\"results/tabfact/tf_examples/dev.tfrecord\", [])\n \n ! python -m tapas.run_task_main \\\n --task=\"TABFACT\" \\\n --output_dir=\"results\" \\\n --noloop_predict \\\n --test_batch_size={len(queries)} \\\n --tapas_verbosity=\"ERROR\" \\\n --compression_type= \\\n --reset_position_index_per_cell \\\n --init_checkpoint=\"tapas_model/model.ckpt\" \\\n --bert_config_file=\"tapas_model/bert_config.json\" \\\n --mode=\"predict\" 2> error\n\n\n results_path = \"results/tabfact/model/test.tsv\"\n all_results = []\n df = pd.DataFrame(table[1:], columns=table[0])\n display(IPython.display.HTML(df.to_html(index=False)))\n print()\n with open(results_path) as csvfile:\n reader = csv.DictReader(csvfile, delimiter='\\t')\n for row in reader:\n supported = int(row[\"pred_cls\"])\n all_results.append(supported)\n score = float(row[\"logits_cls\"])\n position = int(row['position'])\n if supported:\n print(\"> SUPPORTS:\", queries[position])\n else:\n print(\"> REFUTES:\", queries[position])\n return all_results", "Predict", "# Based on TabFact table 2-1610384-4.html.csv\nresult = predict(\"\"\"\ntournament | wins | top - 10 | top - 25 | events | cuts made\nmasters tournament | 0 | 0 | 1 | 3 | 2 \nus open | 0 | 0 | 0 | 4 | 3 \nthe open championship | 0 | 0 | 0 | 2 | 1 \npga championship | 0 | 1 | 1 | 4 | 2 \ntotals | 0 | 1 | 2 | 13 | 8 
\n\"\"\", [\"The most frequently occurring number of events is 4\", \"The most frequently occurring number of events is 3\"])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ptpro3/ptpro3.github.io
Projects/AptListingsAnalysis.ipynb
mit
[ "Rental Listings Analysis\nBy Prashant Tatineni\nProject Overview\nIn this project, I attempt to predict the popularity (target variable: interest_level) of apartment rental listings in New York City based on listing characteristics. The data itself comes from a Kaggle Competition hosted in conjunction with renthop.com. \nThe dataset was provided as a single file train.json (49,352 rows). \nAn additional file, test.json (74,659 rows) contains the same columns as train.json, except that the target variable, interest_level, is missing. Predictions of the target variable are to be made on the test.json file and submitted to Kaggle.\nSummary of Solution Steps\n\nLoad data from JSON\nBuild initial predictor variables, with interest_level as the target.\nInitial run of classification models.\nAdd category indicators and aggregated features based on manager_id.\nRun new Random Forest model.\nAn attempt to use the images for classification.\nFurther opportunities with this dataset.", "# imports\n\nimport pandas as pd\nimport dateutil.parser\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.naive_bayes import BernoulliNB\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.ensemble import RandomForestClassifier\n\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.metrics import precision_recall_fscore_support\nfrom sklearn.metrics import log_loss\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom wordcloud import WordCloud\n\n%matplotlib inline", "Step 1: Load Data", "# Load the training dataset from Kaggle.\ndf = pd.read_json('train.json')\nprint df.shape\n\ndf.head(2)", "Total number of columns is 14 + 1 target:\n- 1 target variable (interest_level), with classes low, medium, high\n- 1 list of 
photo links\n- lat/long, street address, display address\n- listing_id, building_id, manager_id\n- numerical (price, bathrooms, bedrooms)\n- created date\n- text (description, features)\nFeatures for modeling:\n- bathrooms\n- bedrooms\n- created date (calculate age of posting in days)\n- description (number of words in description)\n- features (number of features)\n- photos (number of photos)\n- price\n- features (split into category indicators)\n- manager_id (with manager skill level)\nFurther opportunities for modeling:\n- description (with NLP)\n- building_id (with a building popularity level)\n- photos (quality, discussed in Step 6)", "# Distribution of target value: interest_level\n\ns = df.groupby('interest_level')['listing_id'].count()\ns.plot.bar();\n\ndf_high = df.loc[df['interest_level'] == 'high']\ndf_medium = df.loc[df['interest_level'] == 'medium']\ndf_low = df.loc[df['interest_level'] == 'low']\n\nplt.figure(figsize=(6,10))\nplt.scatter(df_low.longitude, df_low.latitude, color='yellow', alpha=0.2, marker='.', label='Low')\nplt.scatter(df_medium.longitude, df_medium.latitude, color='green', alpha=0.2, marker='.', label='Medium')\nplt.scatter(df_high.longitude, df_high.latitude, color='purple', alpha=0.2, marker='.', label='High')\n\nplt.xlim(-74.04,-73.80)\nplt.ylim(40.6,40.9)\nplt.title('Map of the listings in NYC')\nplt.ylabel('N Lat.')\nplt.xlabel('W Long.')\nplt.legend(loc=2);", "Step 2: Initial Features", "(pd.to_datetime(df['created'])).sort_values(ascending=False).head()\n\n# The most recent records are 6/29/2016. 
Computing days old from 6/30/2016.\ndf['days_old'] = (dateutil.parser.parse('2016-06-30') - pd.to_datetime(df['created'])).apply(lambda x: x.days)\n\n# Add other \"count\" features\ndf['num_words'] = df['description'].apply(lambda x: len(x.split()))\ndf['num_features'] = df['features'].apply(len)\ndf['num_photos'] = df['photos'].apply(len)", "Step 3: Modeling, First Pass", "X = df[['bathrooms','bedrooms','price','latitude','longitude','days_old','num_words','num_features','num_photos']]\ny = df['interest_level']\n\n# Scaling is necessary for Logistic Regression and KNN\nX_scaled = pd.DataFrame(preprocessing.scale(X))\nX_scaled.columns = X.columns\n\nX_train, X_test, y_train, y_test = train_test_split(X_scaled, y, random_state=42)", "Logistic Regression", "lr = LogisticRegression()\nlr.fit(X_train, y_train)\n\ny_test_predicted_proba = lr.predict_proba(X_test)\nlog_loss(y_test, y_test_predicted_proba)\n\nlr = LogisticRegression(solver='newton-cg', multi_class='multinomial')\nlr.fit(X_train, y_train)\n\ny_test_predicted_proba = lr.predict_proba(X_test)\nlog_loss(y_test, y_test_predicted_proba)", "KNN", "for i in [95,100,105]:\n knn = KNeighborsClassifier(n_neighbors=i)\n knn.fit(X_train, y_train)\n y_test_predicted_proba = knn.predict_proba(X_test)\n print log_loss(y_test, y_test_predicted_proba)", "Random Forest", "rf = RandomForestClassifier(n_estimators=500, n_jobs=-1)\nrf.fit(X_train, y_train)\n\ny_test_predicted_proba = rf.predict_proba(X_test)\nlog_loss(y_test, y_test_predicted_proba)", "Random Forest performs the best with respect to Log Loss.", "y_test_predicted = rf.predict(X_test)\naccuracy_score(y_test, y_test_predicted)\n\nprecision_recall_fscore_support(y_test, y_test_predicted)\n\nrf.classes_\n\nplt.figure(figsize=(5,5))\npd.Series(index = X_train.columns, data = rf.feature_importances_).sort_values().plot(kind= 'bar');", "The above bar plot shows feature importance for the Random Forest classifier. 
\"Price\" is the most informative feature related to the target variable \"interest_level\".\nNaive Bayes", "bnb = BernoulliNB()\nbnb.fit(X_train, y_train)\n\ny_test_predicted_proba = bnb.predict_proba(X_test)\nlog_loss(y_test, y_test_predicted_proba)\n\ngnb = GaussianNB()\ngnb.fit(X_train, y_train)\n\ny_test_predicted_proba = gnb.predict_proba(X_test)\nlog_loss(y_test, y_test_predicted_proba)", "Neural Network", "clf = MLPClassifier(hidden_layer_sizes=(100,50,10))\nclf.fit(X_train, y_train)\n\ny_test_predicted_proba = clf.predict_proba(X_test)\nlog_loss(y_test, y_test_predicted_proba)", "Step 4: More Complex Features\nSplitting out categories into 0/1 dummy variables", "# Reduce 1556 unique category text values into 34 main categories\n\ndef reduce_categories(full_list):\n reduced_list = []\n for i in full_list:\n item = i.lower()\n if 'cats allowed' in item:\n reduced_list.append('cats')\n if 'dogs allowed' in item:\n reduced_list.append('dogs')\n if 'elevator' in item:\n reduced_list.append('elevator')\n if 'hardwood' in item:\n reduced_list.append('elevator')\n if 'doorman' in item or 'concierge' in item:\n reduced_list.append('doorman')\n if 'dishwasher' in item:\n reduced_list.append('dishwasher')\n if 'laundry' in item or 'dryer' in item:\n if 'unit' in item:\n reduced_list.append('laundry_in_unit')\n else:\n reduced_list.append('laundry')\n if 'no fee' in item:\n reduced_list.append('no_fee')\n if 'reduced fee' in item:\n reduced_list.append('reduced_fee')\n if 'fitness' in item or 'gym' in item:\n reduced_list.append('gym')\n if 'prewar' in item or 'pre-war' in item:\n reduced_list.append('prewar')\n if 'dining room' in item:\n reduced_list.append('dining')\n if 'pool' in item:\n reduced_list.append('pool')\n if 'internet' in item:\n reduced_list.append('internet')\n if 'new construction' in item:\n reduced_list.append('new_construction')\n if 'wheelchair' in item:\n reduced_list.append('wheelchair')\n if 'exclusive' in item:\n 
reduced_list.append('exclusive')\n if 'loft' in item:\n reduced_list.append('loft')\n if 'simplex' in item:\n reduced_list.append('simplex')\n if 'fire' in item:\n reduced_list.append('fireplace')\n if 'lowrise' in item or 'low-rise' in item:\n reduced_list.append('lowrise')\n if 'midrise' in item or 'mid-rise' in item:\n reduced_list.append('midrise')\n if 'highrise' in item or 'high-rise' in item:\n reduced_list.append('highrise')\n if 'ceiling' in item:\n reduced_list.append('high_ceiling')\n if 'garage' in item or 'parking' in item:\n reduced_list.append('parking')\n if 'furnished' in item:\n reduced_list.append('furnished')\n if 'multi-level' in item:\n reduced_list.append('multilevel')\n if 'renovated' in item:\n reduced_list.append('renovated')\n if 'super' in item:\n reduced_list.append('live_in_super')\n if 'green building' in item:\n reduced_list.append('green_building')\n if 'appliances' in item:\n reduced_list.append('new_appliances')\n if 'luxury' in item:\n reduced_list.append('luxury')\n if 'penthouse' in item:\n reduced_list.append('penthouse')\n if 'deck' in item or 'terrace' in item or 'balcony' in item or 'outdoor' in item or 'roof' in item or 'garden' in item or 'patio' in item:\n reduced_list.append('outdoor_space')\n return list(set(reduced_list))\n\ndf['categories'] = df['features'].apply(reduce_categories)\n\ntext = ''\nfor index, row in df.iterrows():\n for i in row.categories:\n text = text + i + ' '\n\nplt.figure(figsize=(12,6))\nwc = WordCloud(background_color='white', width=1200, height=600).generate(text)\nplt.title('Reduced Categories', fontsize=30)\nplt.axis(\"off\")\nwc.recolor(random_state=0)\nplt.imshow(wc);\n\n# Create indicators\nX_dummies = pd.get_dummies(df['categories'].apply(pd.Series).stack()).sum(level=0)", "Aggregate manager_id to get features representing manager performance\nNote: Need to aggregate manager performance ONLY over a training subset in order to validate 
against test subset. Otherwise, for any given manager, a portion of their calculated skill level might have been due to listings from the test set. So the train-test split is being performed in this step before creating the columns for manager performance.", "# Choose features for modeling (and sorting)\ndf = df.sort_values('listing_id')\nX = df[['bathrooms','bedrooms','price','latitude','longitude','days_old','num_words','num_features','num_photos','listing_id','manager_id']]\ny = df['interest_level']\n\n# Merge indicators to X dataframe and sort again to match sorting of y\nX = X.merge(X_dummies, how='outer', left_index=True, right_index=True).fillna(0)\nX = X.sort_values('listing_id')\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\n\n# compute ratios and count for each manager\nmgr_perf = pd.concat([X_train.manager_id,pd.get_dummies(y_train)], axis=1).groupby('manager_id').mean()\n\nmgr_perf.head(2)\n\n# Apply weighting for each manager's listings: +1 for High, 0 for Medium, -1 for Low.\nmgr_perf['manager_count'] = X_train.groupby('manager_id').count().iloc[:,1]\nmgr_perf['manager_skill'] = mgr_perf['high']*1 + mgr_perf['medium']*0 + mgr_perf['low']*-1\n\n# for training set\nX_train = X_train.merge(mgr_perf.reset_index(), how='left', left_on='manager_id', right_on='manager_id')\n\n# for test set\nX_test = X_test.merge(mgr_perf.reset_index(), how='left', left_on='manager_id', right_on='manager_id')\n\n# Fill na's with mean skill and median count\nX_test['manager_skill'] = X_test.manager_skill.fillna(X_test.manager_skill.mean())\nX_test['manager_count'] = X_test.manager_count.fillna(X_test.manager_count.median())\n\n# Delete unnecessary columns before modeling\ndel X_train['listing_id']\ndel X_train['manager_id']\ndel X_test['listing_id']\ndel X_test['manager_id']\ndel X_train['high']\ndel X_train['medium']\ndel X_train['low']\ndel X_test['high']\ndel X_test['medium']\ndel X_test['low']", "Step 5: Modeling, second pass with Random 
Forest", "rf = RandomForestClassifier(n_estimators=500, n_jobs=-1)\nrf.fit(X_train, y_train)\n\ny_test_predicted_proba = rf.predict_proba(X_test)\nlog_loss(y_test, y_test_predicted_proba)\n\ny_test_predicted = rf.predict(X_test)\naccuracy_score(y_test, y_test_predicted)\n\nprecision_recall_fscore_support(y_test, y_test_predicted)\n\nrf.classes_\n\nplt.figure(figsize=(15,5))\npd.Series(index = X_train.columns, data = rf.feature_importances_).sort_values().plot(kind = 'bar');", "As seen here, introducing feature categories and manager performance has improved the model. In particular, manager_skill shows up as the dominant feature in terms of importance in this Random Forest model.\nStep 6: Image Classification Attempt\nI did not use the actual listing images in my model. Here, I outline an attempt at classifying the listing based on image quality using a \"blurry image\" detector that I created with a Convolutional Neural Network. For more details see my discussion of that project.", "import numpy as np\nfrom keras.models import Sequential\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.layers.pooling import MaxPooling2D\nfrom keras.layers.core import Flatten, Dense, Activation, Dropout\nfrom keras.preprocessing import image\n\n# My neural network layer sequence is based on the original LeNet architecture\nmodel = Sequential()\n\n# Layer 1\nmodel.add(Convolution2D(32, 5, 5, input_shape=(192, 192, 3)))\nmodel.add(Activation(\"relu\"))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\n\n# Layer 2\nmodel.add(Convolution2D(64, 5, 5))\nmodel.add(Activation(\"relu\"))\nmodel.add(MaxPooling2D(pool_size=(2, 2))) \n\nmodel.add(Flatten())\n\n# Layer 3\nmodel.add(Dense(1024))\nmodel.add(Activation(\"relu\"))\nmodel.add(Dropout(0.5))\n\n# Layer 4\nmodel.add(Dense(512))\nmodel.add(Activation(\"relu\"))\nmodel.add(Dropout(0.5))\n\n# Layer 5\nmodel.add(Dense(2))\nmodel.add(Activation(\"softmax\"))\n\n# \"lenet_weights.h5\" is the file containing weights from my trained 
neural network.\nmodel.load_weights('lenet_weights.h5')\n\n# Loading three images from the dataset.\n# img_1 & img_2 are from the same High popularity listing\n# img_3 is from a Low popularity listing.\npics = []\n\nimg_1 = image.load_img('6811966_1.jpg', target_size=(192,192))\npics.append(np.asarray(img_1))\nimg_2 = image.load_img('6811966_2.jpg', target_size=(192,192))\npics.append(np.asarray(img_2))\nimg_3 = image.load_img('6812150_1.jpg', target_size=(192,192))\npics.append(np.asarray(img_3))\npics_array = np.stack(pics)/255.\nplt.figure(figsize=(12,12))\nplt.subplot(131),plt.imshow(img_1),plt.title('6811966, interest_level: High')\nplt.xticks([]), plt.yticks([])\nplt.subplot(132),plt.imshow(img_2),plt.title('6811966, interest_level: High')\nplt.xticks([]), plt.yticks([])\nplt.subplot(133),plt.imshow(img_3),plt.title('6812150, interest_level: Low')\nplt.xticks([]), plt.yticks([])\nplt.show()\n\nmodel.predict_classes(pics_array)", "My model classified the first image for listing 6811966 as 0 = clear, but the second as 1 = blurry. This is likely due to the larger prevalence of a white wash-out effect in the second image from sunlight.\nThe third image was classified correctly as 1 = blurry; it is indeed a blurry image. However, this alone is probably not enough to decide listing popularity.\nAs seen below, the typical listing has 5 photos attached, while the High popularity listing we are discussing here has a total of 7 photos. The Low popularity listing meanwhile has only this 1 photo. 
So the number of photos is just as likely as blurriness to affect listing popularity in this case.", "df['num_photos'].mode()\n\ndf['num_photos'].median()\n\ndf[df.listing_id == 6811966][['listing_id','description','interest_level','num_photos']]\n\ndf[df.listing_id == 6812150][['listing_id','description','interest_level','num_photos']]", "Step 7: Prediction and Further Opportunities\nTo make a prediction for submission to Kaggle, this notebook can be recreated with the test.json dataset. The submission requires the predicted high, medium, and low probabilities for each listing_id. Kaggle rankings are based on the Log Loss value on the test set.\nFurther opportunities to improve prediction on this dataset may lie in NLP of the text descriptions, which were not used thus far except as a numerical \"length\" value. Building popularity could also be assessed via the building_id variable, similar to the aggregation of the manager_id variable." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
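The multiclass log loss reported for the Random Forest above can be reproduced by hand. A minimal numpy sketch (the function name and toy numbers here are illustrative, not from the notebook):

```python
import numpy as np

def multiclass_log_loss(y_true, probas, eps=1e-15):
    """Mean negative log-likelihood of the true class -- what sklearn's log_loss reports."""
    probas = np.clip(probas, eps, 1 - eps)  # guard against log(0)
    rows = np.arange(len(y_true))
    return -np.mean(np.log(probas[rows, y_true]))

y_true = np.array([0, 2, 1])             # true class index per sample
probas = np.array([[0.7, 0.2, 0.1],      # predicted class probabilities per sample
                   [0.1, 0.1, 0.8],
                   [0.2, 0.6, 0.2]])
print(round(multiclass_log_loss(y_true, probas), 4))  # 0.3635
```

A lower value means the predicted probabilities concentrate on the correct classes, which is why features that sharpen the model's probabilities (like manager_skill) also lower this score.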
tpin3694/tpin3694.github.io
python/pandas_create_column_with_loop.ipynb
mit
[ "Title: Create A Pandas Column With A For Loop\nSlug: pandas_create_column_with_loop\nSummary: Create A Pandas Column With A For Loop\nDate: 2016-05-01 12:00\nCategory: Python\nTags: Data Wrangling\nAuthors: Chris Albon \nPreliminaries", "import pandas as pd\nimport numpy as np", "Create an example dataframe", "raw_data = {'student_name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze', 'Jacon', 'Ryaner', 'Sone', 'Sloan', 'Piger', 'Riani', 'Ali'], \n 'test_score': [76, 88, 84, 67, 53, 96, 64, 91, 77, 73, 52, np.NaN]}\ndf = pd.DataFrame(raw_data, columns = ['student_name', 'test_score'])", "Create a function to assign letter grades", "# Create a list to store the data\ngrades = []\n\n# For each row in the column,\nfor row in df['test_score']:\n # if more than a value,\n if row > 95:\n # Append a letter grade\n grades.append('A')\n # else, if more than a value,\n elif row > 90:\n # Append a letter grade\n grades.append('A-')\n # else, if more than a value,\n elif row > 85:\n # Append a letter grade\n grades.append('B')\n # else, if more than a value,\n elif row > 80:\n # Append a letter grade\n grades.append('B-')\n # else, if more than a value,\n elif row > 75:\n # Append a letter grade\n grades.append('C')\n # else, if more than a value,\n elif row > 70:\n # Append a letter grade\n grades.append('C-')\n # else, if more than a value,\n elif row > 65:\n # Append a letter grade\n grades.append('D')\n # else, if more than a value,\n elif row > 60:\n # Append a letter grade\n grades.append('D-')\n # otherwise,\n else:\n # Append a failing grade\n grades.append('Failed')\n \n# Create a column from the list\ndf['grades'] = grades\n\n# View the new dataframe\ndf" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
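For comparison with the loop recipe above, the same first-match grading logic can be vectorized; a sketch with numpy's `select` (variable names are mine; note that NaN ends up as 'Failed' in both versions, because every comparison against NaN is False):

```python
import numpy as np

scores = np.array([76, 88, 84, 67, 53, 96, 64, 91, 77, 73, 52, np.nan])

# Conditions are checked in order and np.select keeps the first one that is
# True, mirroring the if/elif chain. NaN fails every comparison, so it falls
# through to the default, exactly like the else branch in the loop.
conditions = [scores > 95, scores > 90, scores > 85, scores > 80,
              scores > 75, scores > 70, scores > 65, scores > 60]
letters = ['A', 'A-', 'B', 'B-', 'C', 'C-', 'D', 'D-']

grades = np.select(conditions, letters, default='Failed')
print(grades[:3])  # ['C' 'B' 'B-']
```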
tdamsma/sudoku
0_Introductie.ipynb
mit
[ "Introduction\nPreparation\nTo run a jupyter notebook yourself, do the following:\n\nInstall Anaconda\n\nstart the python environment from the command line with the command\njupyter notebook\n\n\nopen the notebook environment in your browser: http://localhost:8888\n\n\nAnother option is to open a temporary notebook at http://try.jupyter.org/, but unfortunately this does not work with the latest chrome; it does work with firefox. This notebook is temporary, so when you close it everything is gone again.\nThe last option is that you open a notebook from my computer, but that only works for the duration of the workshop.\nSudoku\nA sudoku is a puzzle you often see, for example in the newspaper. Below is an example:\n<img src=./examples/hard.gif width=240px></img>\nWith a great deal of patience and puzzling you can solve this sudoku. That is of course a bit boring... Another option is to let the computer solve it!\nThe assignment\nTo keep things from getting too complicated right away, we are going to try to solve this sudoku\n<img src=./examples/2x2b.png width = 120px></img>\nThis is a 2x2 sudoku. Of course it is so easy that you could probably solve it in your head, but today is about learning how to write a program that solves it for you. Once you can do that, you can also use it to solve the (very hard) sudoku at the top. And who knows, you may find other puzzles where you can apply this.\nNotebooks\nA notebook makes it possible to run code in your browser. Below are some examples. The code is in the grey block; when you press ctrl+enter it is executed. Below the grey block you see the output. This is the value of the last line.", "a=2\nb=3\na+b\n\na", "You can also use the print command to show more output:", "# two lines will now appear\nprint('abc')\nprint('a+b =',a+b)", "You can change the input and recompute it piece by piece. Do this by clicking on a block, changing something, and running it again with ctrl+enter\nBasic python\nIn this workshop we are going to use a few basic (python) concepts, so it is handy if you already know how they work\nList", "lijstje = ['nulste','eerste',2,'derde']\nprint('lijstje =',lijstje)\n\nprint('plek 1 =',lijstje[1])\nprint('plek 0 =',lijstje[0])\nprint('plek 1 tot en met 3 =',lijstje[1:3])", "Loop", "for ding in lijstje:\n    print('ding =',ding)", "That we call the elements of lijstje ding here is a choice; we could just as well call them something else", "for iets in lijstje:\n    print('iets =',iets)", "There is also another way to write this, which we call a list comprehension. This has the advantage that the result of the iteration immediately ends up in a new list", "nieuwe_lijst = [print(ding) for ding in lijstje]\nprint('nieuwe_lijst = ', nieuwe_lijst)\n\nnieuwe_lijst = [ding+ding for ding in lijstje]\nprint('nieuwe_lijst = ', nieuwe_lijst)", "Dict\nA dict is, just like a list, an object you can store multiple values in. With a list it is about the order of your elements, with a dict it is about the key. With the key you can look up values.", "mijn_dict = {\n    'lijstje': [1,2,3],\n    'nog een lijstje': [4,5,6],\n    }\nprint('mijn dict =',mijn_dict)\nprint('nog een lijstje =',mijn_dict['nog een lijstje'])", "Set\nVery handy for this workshop is a set. A set is a collection of unique objects. These can be compared quickly.", "set1 = set([1,2,3,4])\nset2 = set([3,4,5,6])\nprint('alles in set1 behalve als het in set2 is =',set1-set2)\nprint('alles wat zowel in set1 en set2 is=',set1 & set2)\nprint('alles wat in set1 en/of set2 is=',set1 | set2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
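As a small preview of why sets matter for the sudoku workshop above (this example is mine, not from the notebook): the candidates for an empty cell are all digits minus those already used in its row, column and block.

```python
# For the 2x2 sudoku the digits are 1..4.
digits = set(range(1, 5))

# Digits already placed in the empty cell's row, column and 2x2 block
# (made-up values for illustration).
row = {1, 3}
column = {3, 4}
block = {1}

# Set difference and union do all the work: remove every used digit at once.
candidates = digits - (row | column | block)
print(candidates)  # {2} -- only one candidate left, so this cell is solved
```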
kubeflow/examples
jpx-tokyo-stock-exchange-kaggle-competition/jpx-tokyo-stock-exchange-prediction-kale.ipynb
apache-2.0
[ "JPX Tokyo Stock Exchange Kale Pipeline\nIn this Kaggle competition \n\nJapan Exchange Group, Inc. (JPX) is a holding company operating one of the largest stock exchanges in the world, Tokyo Stock Exchange (TSE), and derivatives exchanges Osaka Exchange (OSE) and Tokyo Commodity Exchange (TOCOM). JPX is hosting this competition and is supported by AI technology company AlpacaJapan Co.,Ltd.\nIn this competition, you will model real future returns of around 2,000 stocks. The competition will involve building portfolios from the stocks eligible for predictions. The stocks are ranked from highest to lowest expected returns and they are evaluated on the difference in returns between the top and bottom 200 stocks.", "!pip install -r requirements.txt --user --quiet", "Imports\nIn this section we import the packages we need for this example. Make it a habit to gather your imports in a single place. It will make your life easier if you are going to transform this notebook into a Kubeflow pipeline using Kale.", "import sys, os, subprocess\nfrom tqdm import tqdm\nimport numpy as np\nimport pandas as pd\nfrom scipy import stats\nimport matplotlib.pyplot as plt\nimport zipfile\nimport joblib\n\nfrom lightgbm import LGBMRegressor\nfrom sklearn.metrics import mean_squared_error\npd.set_option('display.max_columns', 500)", "Project hyper-parameters\nIn this cell, we define the different hyper-parameters. Defining them in one place makes it easier to experiment with their values and also facilitates the execution of HP Tuning experiments using Kale and Katib.", "# Hyper-parameters\nLR = 0.379687157316759\nN_EST = 100", "Set random seed for reproducibility and ignore warning messages.", "np.random.seed(2022)", "Download and load the dataset\nIn this section, we download the data from kaggle to get it in a ready-to-use form by the model. \nFirst, let us load and analyze the data.\nThe data are in csv format, thus, we use the handy read_csv pandas method. 
There is one train data set and two test sets (one public and one private).", "# setup kaggle environment for data download\n# set kaggle.json path\nos.environ['KAGGLE_CONFIG_DIR'] = \"/home/jovyan/examples/jpx-tokyo-stock-exchange-kaggle-competition\"\n\n# grant owner read/write permission on .kaggle/kaggle.json\nsubprocess.run([\"chmod\",\"600\", f\"{os.environ['KAGGLE_CONFIG_DIR']}/kaggle.json\"])\n\n# download kaggle's jpx-tokyo-stock-exchange-prediction data\nsubprocess.run([\"kaggle\",\"competitions\", \"download\", \"-c\", \"jpx-tokyo-stock-exchange-prediction\"])\n\n# path to download to\ndata_path = 'data'\n\n# extract jpx-tokyo-stock-exchange-prediction.zip to load_data_path\nwith zipfile.ZipFile(\"jpx-tokyo-stock-exchange-prediction.zip\",\"r\") as zip_ref:\n    zip_ref.extractall(data_path)\n\n# read train_files/stock_prices.csv\ndf_prices = pd.read_csv(f\"{data_path}/train_files/stock_prices.csv\", parse_dates=['Date'])\n\ndf_prices['Date'].max()\n\ndf_prices.tail(3)\n\n# let's check data dimensions\ndf_prices.shape\n\ndf_prices.info()\n\n# check total nan values per column\ndf_prices.isna().sum()", "Transform Data", "# sort data by 'Date' and 'SecuritiesCode'\ndf_prices.sort_values(by=['Date','SecuritiesCode'], inplace=True)\n\n# count total trading stocks per day \nidcount = df_prices.groupby(\"Date\")[\"SecuritiesCode\"].count().reset_index()\nidcount\n\nplt.figure(figsize=(10, 5))\nplt.plot(idcount[\"Date\"],idcount[\"SecuritiesCode\"])\nplt.axvline(x=['2021-01-01'], color='blue', label='2021-01-01')\nplt.axvline(x=['2020-06-01'], color='red', label='2020-06-01')\nplt.legend()\nplt.show()\n\nidcount[idcount['SecuritiesCode'] >= 2000]\n\nidcount[idcount['SecuritiesCode'] >= 2000]['SecuritiesCode'].sum()\n\n# filter out data with less than 2000 stock counts in a day\n# dates before '2020-12-23' all have stock counts less than 2000\n# This is done to work 
with consistent data \ndf_prices = df_prices[(df_prices[\"Date\"]>=\"2020-12-23\")]\n\ndf_prices = df_prices.reset_index(drop=True)\n\ndf_prices.head()\n\ndf_prices.columns\n\n# calculate z-scores of `df_prices`\nz_scores = stats.zscore(df_prices[['Open', 'High', 'Low', 'Close','Volume']], nan_policy='omit')\nabs_z_scores = np.abs(z_scores)\nfiltered_entries = (abs_z_scores < 3).all(axis=1)\ndf_zscore = df_prices[filtered_entries]\ndf_zscore = df_zscore.reset_index(drop=True)", "<h1>Feature Engineering", "def feat_eng(df, features):\n\n    for i in tqdm(range(1, 4)):\n        # creating lag features\n        tmp = df[features].shift(i)\n        tmp.columns = [c + f'_next_shift_{i}' for c in tmp.columns]\n        df = pd.concat([df, tmp], sort=False, axis=1)\n\n    for i in tqdm(range(1, 4)):\n        df[f'weighted_vol_price_{i}'] = np.log(df[f'Volume_next_shift_{i}'] * df[[col for col in df if col.endswith(f'next_shift_{i}')][:-1]].apply(np.mean, axis=1))\n    \n    # feature engineering\n    df['weighted_vol_price'] = np.log(df['Volume'] * (np.mean(df[features[:-1]], axis=1)))\n    df['BOP'] = (df['Open']-df['Close'])/(df['High']-df['Low'])\n    df['HL'] = df['High'] - df['Low']\n    df['OC'] = df['Close'] - df['Open']\n    df['OHLCstd'] = df[['Open','Close','High','Low']].std(axis=1)\n    \n    feats = df.select_dtypes(include=float).columns\n    df[feats] = df[feats].apply(np.log)\n    \n    # replace inf with nan\n    df.replace([np.inf, -np.inf], np.nan, inplace=True)\n    \n    # datetime features\n    df['Date'] = pd.to_datetime(df['Date'])\n    df['Day'] = df['Date'].dt.weekday.astype(np.int32)\n    df[\"dayofyear\"] = df['Date'].dt.dayofyear\n    df[\"is_weekend\"] = df['Day'].isin([5, 6])\n    df[\"weekofyear\"] = df['Date'].dt.weekofyear\n    df[\"month\"] = df['Date'].dt.month\n    df[\"season\"] = (df[\"month\"]%12 + 3)//3\n    \n    # fill nan values\n    df = df.fillna(0)\n    return df\n\nnew_feats = feat_eng(df_zscore, ['High', 'Low', 'Open', 'Close', 'Volume'])\n\nnew_feats.shape\n\nnew_feats['Target'] = 
df_zscore['Target']\n\nnew_feats.head(7)\n\nnew_feats.columns", "Modelling", "# columns to be used for modelling.\nfeats = ['Date','SecuritiesCode', 'Open', 'High', 'Low', 'Close', 'Volume',\n 'weighted_vol_price_1', 'weighted_vol_price_2', 'weighted_vol_price_3', \n 'weighted_vol_price', 'BOP', 'HL', 'OC', 'OHLCstd', 'Day', 'dayofyear',\n 'is_weekend', 'weekofyear', 'month', 'season']\n\n# transform date to int\nnew_feats['Date'] = new_feats['Date'].dt.strftime(\"%Y%m%d\").astype(int)\n\n# split data into valid for validation and train for model training\nvalid = new_feats[(new_feats['Date'] >= 20211111)].copy()\ntrain = new_feats[(new_feats['Date'] < 20211111)].copy()\n\ntrain.shape, valid.shape\n\n# model parameter\nparams = {\n 'n_estimators': int(N_EST),\n 'learning_rate': float(LR),\n 'random_state': 2022,\n 'verbose' : 2}\n\n# model initialization\nmodel = LGBMRegressor(**params)\n\n\nX = train[feats]\ny = train[\"Target\"]\n\nX_test = valid[feats]\ny_test = valid[\"Target\"]\n\n# fitting\nmodel.fit(X, y, verbose=False, eval_set=(X_test, y_test))", "<h1> Evaluation and Prediction", "# model prediction\npreds = model.predict(X_test)\n\n# model evaluation\nrmse = np.round(mean_squared_error(preds, y_test)**0.5, 5)\n\nprint(rmse)", "Make submission", "sys.path.insert(0, 'helper-files')\nfrom local_api import local_api\n\nmyapi = local_api('data/supplemental_files')\nenv = myapi.make_env()\n\niter_test = env.iter_test()\nfor (prices, options, financials, trades, secondary_prices, sample_prediction) in iter_test:\n prices = feat_eng(prices, ['High', 'Low', 'Open', 'Close', 'Volume'])\n prices['Date'] = prices['Date'].dt.strftime(\"%Y%m%d\").astype(int)\n prices[\"Target\"] = model.predict(prices[feats])\n if prices[\"Volume\"].min()==0:\n sample_prediction[\"Prediction\"] = 0\n else:\n sample_prediction[\"Prediction\"] = prices[\"Target\"]/prices[\"Volume\"]\n sample_prediction[\"Prediction\"] = prices[\"Target\"]\n 
sample_prediction.sort_values(by=\"Prediction\", ascending=False, inplace=True)\n sample_prediction['Rank'] = np.arange(0,2000)\n sample_prediction.sort_values(by = \"SecuritiesCode\", ascending=True, inplace=True)\n submission = sample_prediction[[\"Date\",\"SecuritiesCode\",\"Rank\"]]\n env.predict(submission)\nprint(env.score())\nsubmission.head()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
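The |z| < 3 outlier filter used in the Transform Data step above can be illustrated on a toy array; a numpy-only sketch that reimplements the `stats.zscore` rule by hand (the toy prices are made up):

```python
import numpy as np

# Eleven ordinary prices and one wild outlier
prices = np.array([100.0] * 11 + [500.0])

# z-score: distance from the mean in units of the population standard
# deviation -- the same convention scipy.stats.zscore uses (ddof=0)
z = (prices - prices.mean()) / prices.std()

keep = np.abs(z) < 3
print(prices[keep])  # the 500.0 is dropped; all eleven 100.0 values survive
```

Note that with the population standard deviation a single extreme point can reach at most |z| = sqrt(n-1), so a 3-sigma cut only starts rejecting anything once there are 11 or more samples; applying it to the full price history, as the notebook does, avoids that small-sample blind spot.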
RogueAstro/RV_PS2017
notebooks/the_basics.ipynb
mit
[ "The basics of radial velocities\nWe can almost completely characterize the orbits of massive bodies around a star using a set of five orbital parameters for each body. Different parametrization schemes use a different set of five parameters, depending on what kind of optimization is needed, but for this tutorial we will use the one from Murray & Correia (2010):\n\n$K$: radial velocity semi-amplitude\n$T$: orbital period\n$e$: eccentricity of the orbit\n$\\omega$: argument of periapse\n$t_0$: time of periapse passage\n\nThese parameters exist for each body orbiting a star.\nIn this notebook, we will play around with the package radial to simulate radial velocity curves for a star orbited by bodies with different orbital parameters.", "from radial import body\nimport astropy.units as u\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Let's start with a very familiar object: the Sun.\nNote: if you are too lazy to convert units like me, I recommend using the astropy.units module.", "sun = body.MainStar(mass=1 * u.solMass, name='Sun')", "Now let's set up a couple of companions for the Sun. How about Earth and Jupiter?\nTheir radial velocity semi-amplitudes are not easily accessible from online databases, but we can compute them using the following equation:\n$$K = \\frac{m \\sin{I}}{m + M} \\frac{2\\pi}{T} \\frac{a}{\\sqrt{1-e^2}} \\mathrm{,}$$\nwhere $m$ is the companion mass, $M$ is the mass of the main star, $I$ is the inclination angle between the reference plane and the axis of the orbit (let's consider $I = \\pi / 2$ in this example) and $a$ is the semi-major axis of the orbit. 
All these parameters are easily accessible to us.", "def compute_k(mass, period, semia, ecc, i=np.pi / 2 * u.rad):\n    # The inclination enters through sin(i); with the default i = pi/2, sin(i) = 1\n    return mass * np.sin(i) / (mass + 1 * u.solMass) * 2 * np.pi / period * semia / (1 - ecc ** 2) ** 0.5\n\n# Computing K for the Earth\nmass_e = 1 * u.earthMass\nsemia_e = 1.00000011 * u.AU\nperiod_e = 1 * u.yr\necc_e = 0.01671022\nk_e = compute_k(mass_e, period_e, semia_e, ecc_e)\n\n# Computing K for Jupiter\nmass_j = 1 * u.jupiterMass\nsemia_j = 5.2026 * u.AU\nperiod_j = 11.8618 * u.yr\necc_j = 0.048498\nk_j = compute_k(mass_j, period_j, semia_j, ecc_j)\n\n# Setting up the companions\nearth = body.Companion(main_star=sun,\n                       k = k_e,\n                       period_orb=period_e,\n                       t_0=2457758.01181 * u.d, # Time of periastron passage, Julian Date\n                       omega=114.207 * u.deg, # Argument of periapsis/perihelion\n                       ecc=ecc_e)\n\njupiter = body.Companion(main_star=sun,\n                         k=k_j,\n                         period_orb=period_j,\n                         t_0=2455636.95833 * u.d,\n                         omega=273.867 * u.deg,\n                         ecc=ecc_j)", "The next step is to set up the Solar System with the Sun and its planetary companions. We need to state the time window that we want to simulate in Julian Dates.", "time = np.linspace(2453375, 2457758, 1000) * u.d # ~12 years\n\n# The Solar System\nsys = body.System(main_star=sun,\n                  companion=[earth, jupiter],\n                  time=time)", "Now to compute the radial velocities of the Sun due to Earth and Jupiter:", "sys.compute_rv()", "And the total RVs are shown below:", "sys.plot_rv(1, 'RVs due to Jupiter')\nsys.plot_rv(plot_title='Total RVs')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
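The $K$ equation in the radial velocity notebook above can be sanity-checked with plain SI arithmetic; a sketch using rounded constants of my own (the notebook itself carries units with astropy, so here everything must already be in kilograms, seconds and metres):

```python
import math

def rv_semi_amplitude(m, M, T, a, e, inc=math.pi / 2):
    """K = m*sin(I)/(m + M) * (2*pi/T) * a / sqrt(1 - e^2), all inputs in SI units."""
    return m * math.sin(inc) / (m + M) * (2 * math.pi / T) * a / math.sqrt(1 - e**2)

# Rounded constants, approximate values of my own, not from the notebook
M_sun = 1.989e30   # kg
m_jup = 1.898e27   # kg
year = 3.156e7     # s
au = 1.496e11      # m

k_jupiter = rv_semi_amplitude(m_jup, M_sun, 11.862 * year, 5.2026 * au, 0.0485)
print(round(k_jupiter, 1))  # 12.5 -- Jupiter's well-known ~12.5 m/s reflex signal on the Sun
```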
BojanPLOJ/Bipropagation
Bipropagation_fashionMNIST.ipynb
gpl-3.0
[ "<a href=\"https://colab.research.google.com/github/BojanPLOJ/Bipropagation/blob/master/Bipropagation_fashionMNIST.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2018 Bojan Ploj.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Train your first neural network: basic classification\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/basic_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nThis guide trains a neural network model to classify images of clothing, like sneakers and shirts. It's okay if you don't understand all the details, this is a fast-paced overview of a complete TensorFlow program with the details explained as we go.\nThis guide uses tf.keras, a high-level API to build and train models in TensorFlow.", "# TensorFlow and tf.keras\nimport tensorflow as tf\nfrom tensorflow import keras\n\n# Helper libraries\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nprint(tf.__version__)", "Import the Fashion MNIST dataset\nThis guide uses the Fashion MNIST dataset which contains 70,000 grayscale images in 10 categories. 
The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here:\n<table>\n <tr><td>\n <img src=\"https://tensorflow.org/images/fashion-mnist-sprite.png\"\n alt=\"Fashion MNIST sprite\" width=\"600\">\n </td></tr>\n <tr><td align=\"center\">\n <b>Figure 1.</b> <a href=\"https://github.com/zalandoresearch/fashion-mnist\">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>&nbsp;\n </td></tr>\n</table>\n\nFashion MNIST is intended as a drop-in replacement for the classic MNIST dataset—often used as the \"Hello, World\" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc) in an identical format to the articles of clothing we'll use here.\nThis guide uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code. \nWe will use 60,000 images to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow, just import and load the data:", "fashion_mnist = keras.datasets.fashion_mnist\n\n(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()", "Loading the dataset returns four NumPy arrays:\n\nThe train_images and train_labels arrays are the training set—the data the model uses to learn.\nThe model is tested against the test set, the test_images, and test_labels arrays.\n\nThe images are 28x28 NumPy arrays, with pixel values ranging between 0 and 255. The labels are an array of integers, ranging from 0 to 9. 
These correspond to the class of clothing the image represents:\n<table>\n  <tr>\n    <th>Label</th>\n    <th>Class</th> \n  </tr>\n  <tr>\n    <td>0</td>\n    <td>T-shirt/top</td> \n  </tr>\n  <tr>\n    <td>1</td>\n    <td>Trouser</td> \n  </tr>\n  <tr>\n    <td>2</td>\n    <td>Pullover</td> \n  </tr>\n  <tr>\n    <td>3</td>\n    <td>Dress</td> \n  </tr>\n  <tr>\n    <td>4</td>\n    <td>Coat</td> \n  </tr>\n  <tr>\n    <td>5</td>\n    <td>Sandal</td> \n  </tr>\n  <tr>\n    <td>6</td>\n    <td>Shirt</td> \n  </tr>\n  <tr>\n    <td>7</td>\n    <td>Sneaker</td> \n  </tr>\n  <tr>\n    <td>8</td>\n    <td>Bag</td> \n  </tr>\n  <tr>\n    <td>9</td>\n    <td>Ankle boot</td> \n  </tr>\n</table>\n\nEach image is mapped to a single label. Since the class names are not included with the dataset, store them here to use later when plotting the images:", "class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', \n               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']", "Explore the data\nLet's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, with each image represented as 28 x 28 pixels:", "train_images.shape", "Likewise, there are 60,000 labels in the training set:", "len(train_labels)", "Each label is an integer between 0 and 9:", "train_labels", "There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels:", "test_images.shape", "And the test set contains 10,000 image labels:", "len(test_labels)", "Preprocess the data\nThe data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall in the range of 0 to 255:", "plt.figure()\nplt.imshow(train_images[0])\nplt.colorbar()\nplt.grid(False)", "We scale these values to a range of 0 to 1 before feeding to the neural network model. For this, cast the datatype of the image components from an integer to a float, and divide by 255. 
Here's how to preprocess the images.\nIt's important that the training set and the testing set are preprocessed in the same way:", "train_images = train_images / 255.0\n\ntest_images = test_images / 255.0", "Display the first 25 images from the training set and display the class name below each image. Verify that the data is in the correct format and we're ready to build and train the network.", "plt.figure(figsize=(10,10))\nfor i in range(25):\n    plt.subplot(5,5,i+1)\n    plt.xticks([])\n    plt.yticks([])\n    plt.grid(False)\n    plt.imshow(train_images[i], cmap=plt.cm.binary)\n    plt.xlabel(class_names[train_labels[i]])", "Build the model\nBuilding the neural network requires configuring the layers of the model, then compiling the model.\nSet up the layers\nThe basic building block of a neural network is the layer. Layers extract representations from the data fed into them. And, hopefully, these representations are more meaningful for the problem at hand.\nMost of deep learning consists of chaining together simple layers. Most layers, like tf.keras.layers.Dense, have parameters that are learned during training.", "model = keras.Sequential([\n    keras.layers.Flatten(input_shape=(28, 28)),\n    keras.layers.Dense(128, activation=tf.nn.relu),\n    keras.layers.Dense(10, activation=tf.nn.softmax)\n])", "The first layer in this network, tf.keras.layers.Flatten, transforms the format of the images from a 2d-array (of 28 by 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.\nAfter the pixels are flattened, the network consists of a sequence of two tf.keras.layers.Dense layers. These are densely-connected, or fully-connected, neural layers. The first Dense layer has 128 nodes (or neurons). The second (and last) layer is a 10-node softmax layer—this returns an array of 10 probability scores that sum to 1. 
Each node contains a score that indicates the probability that the current image belongs to one of the 10 classes.\nCompile the model\nBefore the model is ready for training, it needs a few more settings. These are added during the model's compile step:\n\nLoss function —This measures how accurate the model is during training. We want to minimize this function to \"steer\" the model in the right direction.\nOptimizer —This is how the model is updated based on the data it sees and its loss function.\nMetrics —Used to monitor the training and testing steps. The following example uses accuracy, the fraction of the images that are correctly classified.", "model.compile(optimizer=tf.train.AdamOptimizer(), \n              loss='sparse_categorical_crossentropy',\n              metrics=['accuracy'])", "Train the model\nTraining the neural network model requires the following steps:\n\nFeed the training data to the model—in this example, the train_images and train_labels arrays.\nThe model learns to associate images and labels.\nWe ask the model to make predictions about a test set—in this example, the test_images array. We verify that the predictions match the labels from the test_labels array. \n\nTo start training, call the model.fit method—the model is \"fit\" to the training data:", "model.fit(train_images, train_labels, epochs=5)", "As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.88 (or 88%) on the training data.\nEvaluate accuracy\nNext, compare how the model performs on the test dataset:", "test_loss, test_acc = model.evaluate(test_images, test_labels)\n\nprint('Test accuracy:', test_acc)", "It turns out, the accuracy on the test dataset is a little less than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of overfitting. Overfitting is when a machine learning model performs worse on new data than on its training data. 
\nMake predictions\nWith the model trained, we can use it to make predictions about some images.", "predictions = model.predict(test_images)", "Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction:", "predictions[0]", "A prediction is an array of 10 numbers. These describe the \"confidence\" of the model that the image corresponds to each of the 10 different articles of clothing. We can see which label has the highest confidence value:", "np.argmax(predictions[0])", "So the model is most confident that this image is an ankle boot, or class_names[9]. And we can check the test label to see this is correct:", "test_labels[0]", "We can graph this to look at the full set of 10 channels", "def plot_image(i, predictions_array, true_label, img):\n predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n \n plt.imshow(img, cmap=plt.cm.binary)\n\n predicted_label = np.argmax(predictions_array)\n if predicted_label == true_label:\n color = 'blue'\n else:\n color = 'red'\n \n plt.xlabel(\"{} {:2.0f}% ({})\".format(class_names[predicted_label],\n 100*np.max(predictions_array),\n class_names[true_label]),\n color=color)\n\ndef plot_value_array(i, predictions_array, true_label):\n predictions_array, true_label = predictions_array[i], true_label[i]\n plt.grid(False)\n plt.xticks([])\n plt.yticks([])\n thisplot = plt.bar(range(10), predictions_array, color=\"#777777\")\n plt.ylim([0, 1]) \n predicted_label = np.argmax(predictions_array)\n \n thisplot[predicted_label].set_color('red')\n thisplot[true_label].set_color('blue')", "Let's look at the 0th image, predictions, and prediction array.", "i = 0\nplt.figure(figsize=(6,3))\nplt.subplot(1,2,1)\nplot_image(i, predictions, test_labels, test_images)\nplt.subplot(1,2,2)\nplot_value_array(i, predictions, test_labels)\n\ni = 12\nplt.figure(figsize=(6,3))\nplt.subplot(1,2,1)\nplot_image(i, 
predictions, test_labels, test_images)\nplt.subplot(1,2,2)\nplot_value_array(i, predictions, test_labels)", "Let's plot several images with their predictions. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent (out of 100) for the predicted label. Note that it can be wrong even when very confident.", "# Plot the first X test images, their predicted label, and the true label\n# Color correct predictions in blue, incorrect predictions in red\nnum_rows = 5\nnum_cols = 3\nnum_images = num_rows*num_cols\nplt.figure(figsize=(2*2*num_cols, 2*num_rows))\nfor i in range(num_images):\n plt.subplot(num_rows, 2*num_cols, 2*i+1)\n plot_image(i, predictions, test_labels, test_images)\n plt.subplot(num_rows, 2*num_cols, 2*i+2)\n plot_value_array(i, predictions, test_labels)\n", "Finally, use the trained model to make a prediction about a single image.", "# Grab an image from the test dataset\nimg = test_images[0]\n\nprint(img.shape)", "tf.keras models are optimized to make predictions on a batch, or collection, of examples at once. So even though we're using a single image, we need to add it to a list:", "# Add the image to a batch where it's the only member.\nimg = (np.expand_dims(img,0))\n\nprint(img.shape)", "Now predict the image:", "predictions_single = model.predict(img)\n\nprint(predictions_single)\n\nplot_value_array(0, predictions_single, test_labels)\n_ = plt.xticks(range(10), class_names, rotation=45)", "model.predict returns a list of lists, one for each image in the batch of data. Grab the predictions for our (only) image in the batch:", "np.argmax(predictions_single[0])", "And, as before, the model predicts a label of 9." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kscottz/PyBay2017
MovieTime.ipynb
apache-2.0
[ "# See requirements.txt to set up your dev environment.\nimport os\nimport cv2\nimport sys\nimport json\nimport scipy\nimport urllib\nimport datetime \nimport urllib3\nimport rasterio\nimport subprocess\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom osgeo import gdal, ogr, osr\nfrom planet import api\nfrom planet.api import filters\nfrom traitlets import link\nfrom shapely.geometry import mapping, shape\nfrom IPython.display import display, Image, HTML\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nurllib3.disable_warnings()\nfrom ipyleaflet import (\n Map,\n Marker,\n TileLayer, ImageOverlay,\n Polyline, Polygon, Rectangle, Circle, CircleMarker,\n GeoJSON,\n DrawControl\n)\n\n%matplotlib inline\n# will pick up api_key via environment variable PL_API_KEY\n# but can be specified using `api_key` named argument\napi_keys = json.load(open(\"apikeys.json\",'r'))\nclient = api.ClientV1(api_key=api_keys[\"PLANET_API_KEY\"])", "Let's pull it all together to do something cool.\n\nLet's reuse a lot of our code to make a movie of our travel around San Francisco.\nWe'll first select a bunch of recent scenes, activate, and download them.\nAfter that we'll create a mosaic, a path, and trace the path through the mosaic. 
\nWe'll use the path to crop subregions, save them as images, and create a video.\nFirst step is to trace our AOI and a path through it.", "# Basemap Mosaic (v1 API)\nmosaicsSeries = 'global_quarterly_2017q1_mosaic'\n# Planet tile server base URL (Planet Explorer Mosaics Tiles)\nmosaicsTilesURL_base = 'https://tiles0.planet.com/experimental/mosaics/planet-tiles/' + mosaicsSeries + '/gmap/{z}/{x}/{y}.png'\n# Planet tile server url\nmosaicsTilesURL = mosaicsTilesURL_base + '?api_key=' + api_keys[\"PLANET_API_KEY\"]\n# Map Settings \n# Define colors\ncolors = {'blue': \"#009da5\"}\n# Define initial map center lat/long\ncenter = [37.774929,-122.419416]\n# Define initial map zoom level\nzoom = 11\n# Set Map Tiles URL\nplanetMapTiles = TileLayer(url= mosaicsTilesURL)\n# Create the map\nm = Map(\n center=center, \n zoom=zoom,\n default_tiles = planetMapTiles # Uncomment to use Planet.com basemap\n)\n# Define the draw tool type options\npolygon = {'shapeOptions': {'color': colors['blue']}}\nrectangle = {'shapeOptions': {'color': colors['blue']}} \n\n# Create the draw controls\n# @see https://github.com/ellisonbg/ipyleaflet/blob/master/ipyleaflet/leaflet.py#L293\ndc = DrawControl(\n polygon = polygon,\n rectangle = rectangle\n)\n# Initialize an action counter variable\nactionCount = 0\nAOIs = {}\n\n# Register the draw controls handler\ndef handle_draw(self, action, geo_json):\n # Increment the action counter\n global actionCount\n actionCount += 1\n # Remove the `style` property from the GeoJSON\n geo_json['properties'] = {}\n # Convert geo_json output to a string and prettify (indent & replace ' with \")\n geojsonStr = json.dumps(geo_json, indent=2).replace(\"'\", '\"')\n AOIs[actionCount] = json.loads(geojsonStr)\n \n# Attach the draw handler to the draw controls `on_draw` event\ndc.on_draw(handle_draw)\nm.add_control(dc)\nm", "Query the API\n\nNow we'll save the geometry for our AOI and the path.\nWe'll also filter and cleanup our data just like before.", "print 
AOIs\nareaAOI = AOIs[1][\"geometry\"]\npathAOI = AOIs[2][\"geometry\"]\n\naoi_file =\"san_francisco.geojson\" \nwith open(aoi_file,\"w\") as f:\n f.write(json.dumps(areaAOI))\n# build a query using the AOI and\n# a cloud_cover filter that excludes 'cloud free' scenes\n\nold = datetime.datetime(year=2017,month=1,day=1)\nnew = datetime.datetime(year=2017,month=8,day=10)\n\nquery = filters.and_filter(\n filters.geom_filter(areaAOI),\n filters.range_filter('cloud_cover', lt=0.05),\n filters.date_range('acquired', gt=old),\n filters.date_range('acquired', lt=new)\n\n)\n# build a request for only PlanetScope imagery\nrequest = filters.build_search_request(\n query, item_types=['PSScene3Band']\n)\n\n# if you don't have an API key configured, this will raise an exception\nresult = client.quick_search(request)\nscenes = []\nplanet_map = {}\nfor item in result.items_iter(limit=500):\n planet_map[item['id']]=item\n props = item['properties']\n props[\"id\"] = item['id']\n props[\"geometry\"] = item[\"geometry\"]\n props[\"thumbnail\"] = item[\"_links\"][\"thumbnail\"]\n scenes.append(props)\nscenes = pd.DataFrame(data=scenes)\ndisplay(scenes)\nprint len(scenes)", "Just like before we clean up our data and distill it down to just the scenes we want.", "# now let's clean up the datetime stuff\n# make a shapely shape from our aoi\nsanfran = shape(areaAOI)\nfootprints = []\noverlaps = []\n# go through the geometry from our api call, convert to a shape and calculate overlap area.\n# also save the shape for safe keeping\nfor footprint in scenes[\"geometry\"].tolist():\n s = shape(footprint)\n footprints.append(s)\n overlap = 100.0*(sanfran.intersection(s).area / sanfran.area)\n overlaps.append(overlap)\n# take our lists and add them back to our dataframe\nscenes['overlap'] = pd.Series(overlaps, index=scenes.index)\nscenes['footprint'] = pd.Series(footprints, index=scenes.index)\n# now make sure pandas knows about our date/time columns.\nscenes[\"acquired\"] = 
pd.to_datetime(scenes[\"acquired\"])\nscenes[\"published\"] = pd.to_datetime(scenes[\"published\"])\nscenes[\"updated\"] = pd.to_datetime(scenes[\"updated\"])\nscenes.head()\n\n# Now let's get it down to just good, recent, clear scenes\nclear = scenes['cloud_cover']<0.1\ngood = scenes['quality_category']==\"standard\"\nrecent = scenes[\"acquired\"] > datetime.date(year=2017,month=5,day=1)\npartial_coverage = scenes[\"overlap\"] > 60\ngood_scenes = scenes[(good&clear&recent&partial_coverage)]\nprint good_scenes", "To make sure we are good we'll visually inspect the scenes in our slippy map.", "# first create a list of colors\ncolors = [\"#ff0000\",\"#00ff00\",\"#0000ff\",\"#ffff00\",\"#ff00ff\",\"#00ffff\",\"#ff0000\",\"#00ff00\",\"#0000ff\",\"#ffff00\",\"#ff00ff\",\"#00ffff\"]\n# grab our scenes from the geometry/footprint geojson\n# Change this number as needed\nfootprints = good_scenes[0:10][\"geometry\"].tolist()\n# for each footprint/color combo\nfor footprint,color in zip(footprints,colors):\n # create the leaflet object\n feat = {'geometry':footprint,\"properties\":{\n 'style':{'color': color,'fillColor': color,'fillOpacity': 0.2,'weight': 1}},\n 'type':u\"Feature\"}\n # convert to geojson\n gjson = GeoJSON(data=feat)\n # add it to our map\n m.add_layer(gjson)\n# now we will draw our original AOI on top \nfeat = {'geometry':areaAOI,\"properties\":{\n 'style':{'color': \"#FFFFFF\",'fillColor': \"#FFFFFF\",'fillOpacity': 0.5,'weight': 1}},\n 'type':u\"Feature\"}\ngjson = GeoJSON(data=feat)\nm.add_layer(gjson) \nm ", "This is from the previous notebook. We are just activating and downloading scenes.", "def get_products(client, scene_id, asset_type='PSScene3Band'): \n \"\"\"\n Ask the client to return the available products for a \n given scene and asset type. 
Returns a list of product \n strings\n \"\"\"\n out = client.get_assets_by_id(asset_type,scene_id)\n temp = out.get()\n return temp.keys()\n\ndef activate_product(client, scene_id, asset_type=\"PSScene3Band\",product=\"analytic\"):\n \"\"\"\n Activate a product given a scene, an asset type, and a product.\n \n On success return the return value of the API call and an activation object\n \"\"\"\n temp = client.get_assets_by_id(asset_type,scene_id) \n products = temp.get()\n if( product in products.keys() ):\n return client.activate(products[product]),products[product]\n else:\n return None \n\ndef download_and_save(client,product):\n \"\"\"\n Given a client and a product activation object download the asset. \n This will save the tiff file in the local directory and return its \n file name. \n \"\"\"\n out = client.download(product)\n fp = out.get_body()\n fp.write()\n return fp.name\n\ndef scenes_are_active(scene_list):\n \"\"\"\n Check if all of the resources in a given list of\n scene activation objects are ready for downloading.\n \"\"\"\n for scene in scene_list:\n if scene[\"status\"] != \"active\":\n print \"{} is not ready.\".format(scene)\n return False\n return True\ndef load_image4(filename):\n \"\"\"Return a 4D (r, g, b, nir) numpy array with the data in the specified TIFF filename.\"\"\"\n path = os.path.abspath(os.path.join('./', filename))\n if os.path.exists(path):\n with rasterio.open(path) as src:\n b, g, r, nir = src.read()\n return np.dstack([r, g, b, nir])\n \ndef load_image3(filename):\n \"\"\"Return a 3D (r, g, b) numpy array with the data in the specified TIFF filename.\"\"\"\n path = os.path.abspath(os.path.join('./', filename))\n if os.path.exists(path):\n with rasterio.open(path) as src:\n b,g,r,mask = src.read()\n return np.dstack([b, g, r])\n \ndef get_mask(filename):\n \"\"\"Return a 1D mask numpy array with the data in the specified TIFF filename.\"\"\"\n path = os.path.abspath(os.path.join('./', 
filename))\n if os.path.exists(path):\n with rasterio.open(path) as src:\n b,g,r,mask = src.read()\n return np.dstack([mask])\n\ndef rgbir_to_rgb(img_4band):\n \"\"\"Convert an RGBIR image to RGB\"\"\"\n return img_4band[:,:,:3]", "Perform the actual activation ... go get coffee", "to_get = good_scenes[\"id\"][0:10].tolist()\nto_get = sorted(to_get)\nactivated = []\n# for each scene to get\nfor scene in to_get:\n # get the product \n product_types = get_products(client,scene)\n for p in product_types:\n # if there is a visual product\n if p == \"visual\": # p == \"basic_analytic_dn\"\n print \"Activating {0} for scene {1}\".format(p,scene)\n # activate the product\n _,product = activate_product(client,scene,product=p)\n activated.append(product)", "Download the scenes", "tiff_files = []\nasset_type = \"_3B_Visual\"\n# check if our scenes have been activated\nif scenes_are_active(activated):\n for to_download,name in zip(activated,to_get):\n # create the product name\n name = name + asset_type + \".tif\"\n # if the product exists locally\n if( os.path.isfile(name) ):\n # do nothing \n print \"We have scene {0} already, skipping...\".format(name)\n tiff_files.append(name)\n elif to_download[\"status\"] == \"active\":\n # otherwise download the product\n print \"Downloading {0}....\".format(name)\n fname = download_and_save(client,to_download)\n tiff_files.append(fname)\n print \"Download done.\"\n else:\n print \"Could not download, still activating\"\nelse:\n print \"Scenes aren't ready yet\"\n\nprint tiff_files ", "Now, just like before, we will mosaic those scenes.\n\nIt is easier to call out using subprocess and use the command line util.\nJust iterate through the files and drop them into a single file sf_mosaic.tif", "subprocess.call([\"rm\",\"sf_mosaic.tif\"])\ncommands = [\"gdalwarp\",\n \"-t_srs\",\"EPSG:3857\",\n \"-cutline\",aoi_file,\n \"-crop_to_cutline\",\n \"-tap\",\n \"-tr\", \"3\", \"3\",\n \"-overwrite\"]\noutput_mosaic = \"sf_mosaic.tif\"\nfor tiff in tiff_files:\n commands.append(tiff)\ncommands.append(output_mosaic)\nprint \" \".join(commands)\nsubprocess.call(commands)", "Let's take a look at what we got", "merged = load_image3(output_mosaic)\nplt.figure(0,figsize=(18,18))\nplt.imshow(merged)\nplt.title(\"merged\")", "Now we are going to write a quick crop function.\n\nThis function takes in a scene, a center position, and the width and height of a window.\nWe'll use numpy slice notation to make the crop.\nLet's pick a spot and see what we get.", "def crop_to_area(scene,x_c,y_c,w,h):\n tlx = x_c-(w/2)\n tly = y_c-(h/2)\n brx = x_c+(w/2)\n bry = y_c+(h/2)\n return scene[tly:bry,tlx:brx,:]\n\nplt.figure(0,figsize=(3,4))\nplt.imshow(crop_to_area(merged,3000,3000,640,480))\nplt.title(\"merged\")", "Now to figure out how our lat/long values map to pixels.\n\nThe next thing we need is a way to map from a lat and long in our slippy map to the pixel position in our image. \nWe'll use what we know about the lat/long of the corners of our image to do that. \nWe'll ask GDAL to tell us the extents of our scene and the geotransform.\nWe'll then apply the GeoTransform from GDAL to the coordinates that are the extents of our scene. 
\nNow we have the corners of our scene in Lat/Long", "# Liberally borrowed from this example\n# https://gis.stackexchange.com/questions/57834/how-to-get-raster-corner-coordinates-using-python-gdal-bindings\ndef GetExtent(gt,cols,rows):\n \"\"\"\n Get the list of corners in our output image in the format \n [[x,y],[x,y],[x,y]]\n \"\"\"\n ext=[]\n # for the corners of the image\n xarr=[0,cols]\n yarr=[0,rows]\n\n for px in xarr:\n for py in yarr:\n # apply the geo coordinate transform \n # using the affine transform we got from GDAL\n x=gt[0]+(px*gt[1])+(py*gt[2])\n y=gt[3]+(px*gt[4])+(py*gt[5])\n ext.append([x,y])\n yarr.reverse()\n return ext\n\ndef ReprojectCoords(coords,src_srs,tgt_srs):\n trans_coords=[]\n # create a transform object from the source and target ref system\n transform = osr.CoordinateTransformation( src_srs, tgt_srs)\n for x,y in coords:\n # transform the points\n x,y,z = transform.TransformPoint(x,y)\n # add it to the list. \n trans_coords.append([x,y])\n return trans_coords", "Here we'll call the functions we wrote.\n\nFirst we open the scene and get the width and height.\nThen from the geotransform we'll reproject those points to lat and long.", "# TLDR: pixels => UTM coordinates => Lat Long \nraster=output_mosaic\n# Load the GDAL File\nds=gdal.Open(raster)\n# get the geotransform\ngt=ds.GetGeoTransform()\n# get the width and height of our image\ncols = ds.RasterXSize\nrows = ds.RasterYSize\n# Generate the coordinates of our image in utm\next=GetExtent(gt,cols,rows)\n# get the spatial reference object \nsrc_srs=osr.SpatialReference()\n# get the data that will allow us to move from UTM to Lat Lon. \nsrc_srs.ImportFromWkt(ds.GetProjection())\ntgt_srs = src_srs.CloneGeogCS()\nextents = ReprojectCoords(ext,src_srs,tgt_srs)\nprint extents", "Now we'll do a bit of a hack.\n\nThat bit above is precise but complex, so we are going to make everything easier to think about. 
\nWe are going to linearize our scene, which isn't perfect, but good enough for our application.\nWhat this function does is take in a given lat,long, the size of the image, and the extents as lat,lon coordinates.\nFor a given lat,long pair we map its values linearly between the scene's extents and return the corresponding pixel position.\nNow we can ask, for a given lat,long pair what is the corresponding pixel.", "def poor_mans_lat_lon_2_pix(lon,lat,w,h,extents):\n # split up our lat and longs \n lats = [e[1] for e in extents]\n lons = [e[0] for e in extents]\n # calculate our scene extents max and min\n lat_max = np.max(lats)\n lat_min = np.min(lats) \n lon_max = np.max(lons)\n lon_min = np.min(lons) \n # calculate the difference between our start point\n # and our minimum\n lat_diff = lat-lat_min\n lon_diff = lon-lon_min\n # create the linearization\n lat_r = float(h)/(lat_max-lat_min)\n lon_r = float(w)/(lon_max-lon_min) \n # generate the results. \n return int(lat_r*lat_diff),int(lon_r*lon_diff)", "Let's check our work\n\nFirst we'll create a draw point function that just puts a red dot at a given pixel.\nWe'll get our scene, and map all of the lat/long points in our path to pixel values.\nFinally we'll load our image, plot the points and show our results", "def draw_point(x,y,img,t=40):\n h,w,d = img.shape\n y = h-y\n img[(y-t):(y+t),(x-t):(x+t),:] = [255,0,0]\nh,w,c = merged.shape\nwaypoints = [poor_mans_lat_lon_2_pix(point[0],point[1],w,h,extents) for point in pathAOI[\"coordinates\"]]\nprint waypoints\nmerged = load_image3(output_mosaic)\n[draw_point(pt[1],pt[0],merged) for pt in waypoints]\nplt.figure(0,figsize=(18,18))\nplt.imshow(merged)\nplt.title(\"merged\")", "Now things get interesting....\n\nOur path is just a few waypoints but to make a video we need just about every point between our waypoints.\nTo get all of the points between our waypoints we'll have to write a little interpolation script. 
\nInterpolation is just a fancy word for nicely spaced points between our waypoints; we'll call the spacing between each point our \"velocity.\"\nIf we were really slick we could define a heading vector and build a spline so the camera faces the direction of heading. Our approach is fine as the top of the frame is always North, which makes reckoning easy.\nOnce we have our interpolation function all we need to do is to crop our large mosaic at each point in our interpolation point list and save it in a sequential file.", "def interpolate_waypoints(waypoints,velocity=10.0):\n retVal = []\n last_pt = waypoints[0]\n # for each point in our waypoints except the first\n for next_pt in waypoints[1:]:\n # calculate distance between the points\n distance = np.sqrt((last_pt[0]-next_pt[0])**2+(last_pt[1]-next_pt[1])**2)\n # use our velocity to calculate the number of steps.\n steps = np.ceil(distance/velocity)\n # linearly space points between the two points on our line\n xs = np.array(np.linspace(last_pt[0],next_pt[0],steps),dtype='int64')\n ys = np.array(np.linspace(last_pt[1],next_pt[1],steps),dtype='int64')\n # zip the points together\n retVal += zip(xs,ys)\n # move to the next point\n last_pt = next_pt\n return retVal\n\ndef build_scenes(src,waypoints,window=[640,480],path=\"./movie/\"):\n count = 0 \n # Use opencv to change the color space of our image.\n src = cv2.cvtColor(src, cv2.COLOR_BGR2RGB)\n # define half our sampling window. \n w2 = window[0]/2\n h2 = window[1]/2\n # for our source image get the width and height\n h,w,d = src.shape\n for pt in waypoints:\n # for each point crop the area out.\n # the y value of our scene is upside down. \n temp = crop_to_area(src,pt[1],h-pt[0],window[0],window[1])\n # If we happen to hit the border of the scene, just skip\n if temp.shape[0]*temp.shape[1]== 0:\n # if we have an issue, just keep plugging along\n continue\n # Resample the image a bit, this just makes things look nice. 
\n temp = cv2.resize(temp, (int(window[0]*0.75), int(window[1]*.75))) \n # create a file name\n fname = os.path.abspath(path+\"img{num:06d}.png\".format(num=count))\n # Save it\n cv2.imwrite(fname,temp)\n count += 1", "Before we generate our video frames, let's check our work\n\nWe'll load our image. \nBuild the interpolated waypoints list.\nDraw the points on the image using our draw_point method.\nPlot the results", "# load the image\nmerged = load_image3(output_mosaic)\n# interpolate the waypoints\ninterp = interpolate_waypoints(waypoints, velocity=5)\n# draw them on our scene\n[draw_point(pt[1],pt[0],merged) for pt in interp]\n# display the scene\nplt.figure(0,figsize=(18,18))\nplt.imshow(merged)\nplt.title(\"merged\")", "Now let's re-load the image and run the scene maker.", "os.system(\"rm ./movie/*.png\")\nmerged = load_image3(output_mosaic)\nbuild_scenes(merged,interp,window=(640,480))", "Finally, let's make a movie.\n\nOur friend AVConv, which is like ffmpeg, is a handy command line util for transcoding video.\nAVConv can also convert a series of images into a video and vice versa.\nWe'll set up our command and use os.system to make the call.", "# avconv -framerate 30 -f image2 -i ./movie/img%06d.png -b 65536k out.mpg;\n#os.system(\"rm ./movie/*.png\")\nframerate = 30\noutput = \"out.mpg\"\ncommand = [\"avconv\",\"-framerate\", str(framerate), \"-f\", \"image2\", \"-i\", \"./movie/img%06d.png\", \"-b\", \"65536k\", output]\nos.system(\" \".join(command))" ]
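The waypoint interpolation described above can be exercised in isolation. Here is a dependency-free sketch of the same idea (pure Python in place of `numpy.linspace`, with integer pixel coordinates; the function name and the endpoint-handling details are this sketch's own, not the notebook's):

```python
import math

def interpolate(waypoints, velocity=10.0):
    """Return evenly spaced integer points along the polyline through waypoints."""
    points = []
    last = waypoints[0]
    for nxt in waypoints[1:]:
        # distance between consecutive waypoints
        dist = math.hypot(nxt[0] - last[0], nxt[1] - last[1])
        # number of samples, so neighbors are roughly `velocity` pixels apart
        steps = max(int(math.ceil(dist / velocity)), 2)
        for i in range(steps):
            t = i / float(steps - 1)  # fraction of the way from last to nxt
            x = int(round(last[0] + t * (nxt[0] - last[0])))
            y = int(round(last[1] + t * (nxt[1] - last[1])))
            points.append((x, y))
        last = nxt
    return points

path = interpolate([(0, 0), (0, 100)], velocity=10.0)
print(path[0], path[-1])  # (0, 0) (0, 100)
```

Each returned point would then become one cropped frame, so the `velocity` argument directly controls how fast the camera appears to move.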
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mperignon/CSDMS-lessons
python/notebooks/02-functions.ipynb
mit
[ "We wrote a script for importing streamgage data through the USGS web services, cleaning up the formatting, plotting the discharge over time, and saving the figure into a file. We would like to turn this script into a tool that we can reuse for different stations and date ranges without having to rewrite the code. Let's look at the code again (I deleted the rows that were commented out):", "import pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nnew_column_names = ['Agency', 'Station', 'OldDateTime', 'Timezone', 'Discharge_cfs', 'Discharge_stat', 'Stage_ft', 'Stage_stat']\n\nurl = 'http://waterservices.usgs.gov/nwis/iv/?format=rdb&sites=09380000&startDT=2016-01-01&endDT=2016-01-10&parameterCd=00060,00065'\ndata = pd.read_csv(url, header=1, sep='\\t', comment='#', names = new_column_names)\n\ndata['DateTime'] = pd.to_datetime(data['OldDateTime'])\n\nnew_station_name = \"0\" + str(data['Station'].unique()[0])\ndata['Station'] = new_station_name\n\ndata.plot(x='DateTime', y='Discharge_cfs', title='Station ' + new_station_name)\nplt.xlabel('Time')\nplt.ylabel('Discharge (cfs)')\nplt.savefig('data/discharge_' + new_station_name + '.png')\nplt.show()", "The station number and date range we are interested in are part of the URL that we use to communicate with the web services. The specific file we receive when the read_csv command runs doesn't exist -- when our script requests the data, the server reads the URL to see what we want, pulls data from a database, packages it, and passes it on to us. The API (the protocol that governs the communication between machines) establishes the \"formula\" for writing the URL. As long as we follow that formula (and request data that exists), the server will provide it for us.\nLet's decompose the URL into its parts and combine them back into a single string:", "url_root = 'http://waterservices.usgs.gov/nwis/iv/?' 
# root of URL\n\nurl_1 = 'format=' + 'rdb' # file format\n\nurl_2 = 'sites=' + '09380000' # station number\n\nurl_3 = 'startDT=' + '2016-01-01' # start date\n\nurl_4 = 'endDT=' + '2016-01-10' # end date\n\nurl_5 = 'parameterCd=' + '00060,00065' # data fields\n\n\nurl = url_root + url_1 + '&' + url_2 + '&' + url_3 + '&' + url_4 + '&' + url_5\nprint url", "Python dictionaries to URLs {.callout}\nAnother useful data type built into Python is the dictionary. While lists and other sequences are indexed by a range of numbers, dictionaries are indexed by keys. A dictionary is an unordered collection of key:value pairs. Keys must be unique (within any one dictionary) and can be strings or numbers. Values in a dictionary can be of any type, and different pairs in one dictionary can have different types of values.\nWe can store the parameters of our URL in a dictionary. Here's one of several ways to add entries to a dictionary:", "url_dict = {} # create an empty dictionary\n\nurl_dict['format'] = 'rdb'\nurl_dict['sites'] = '09380000'\nurl_dict['startDT'] = '2016-01-01'\nurl_dict['endDT'] = '2016-01-10'\nurl_dict['parameterCd'] = ['00060','00065']\n\nprint url_dict", "Just like there is the Numpy library for matrices and Pandas for tabular data, there is a Python library that provides a simple interface for accessing resources through URLs (take a look at the most popular package repository: https://pypi.python.org/). Many of the most popular and useful libraries for scientific computing come pre-installed with the Anaconda distribution.\nWe can use the urllib package to convert the dictionary into a URL following the standard format used by web services. 
The order of the parameters doesn't matter to the server!", "import urllib\n\n# need to set the parameter doseq to 1 to handle the list in url_dict['parameterCd']\nurl_parameters = urllib.urlencode(url_dict, doseq=1)\n\nprint url_root + url_parameters", "This is not the most elegant way to write the URL but it accomplishes the job! To clean things up a bit, we can replace the values we want to be able to change with variables:", "this_station = '09380000'\nstartDate = '2016-01-01'\nendDate = '2016-01-10'\n\n\nurl_root = 'http://waterservices.usgs.gov/nwis/iv/?'\nurl_1 = 'format=' + 'rdb'\nurl_2 = 'sites=' + this_station\nurl_3 = 'startDT=' + startDate\nurl_4 = 'endDT=' + endDate\nurl_5 = 'parameterCd=' + '00060,00065'\n\nurl = url_root + url_1 + '&' + url_2 + '&' + url_3 + '&' + url_4 + '&' + url_5\nprint url", "We can now combine it with the rest of our code:", "import pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n########## change these values ###########\nthis_station = '09380000'\nstartDate = '2016-01-01'\nendDate = '2016-01-10'\n##########################################\n\n# create the URL\nurl_root = 'http://waterservices.usgs.gov/nwis/iv/?'\nurl_1 = 'format=' + 'rdb'\nurl_2 = 'sites=' + this_station\nurl_3 = 'startDT=' + startDate\nurl_4 = 'endDT=' + endDate\nurl_5 = 'parameterCd=' + '00060,00065'\n\nurl = url_root + url_1 + '&' + url_2 + '&' + url_3 + '&' + url_4 + '&' + url_5\n\n# import the data\nnew_column_names = ['Agency', 'Station', 'OldDateTime', 'Timezone', 'Discharge_cfs', 'Discharge_stat', 'Stage_ft', 'Stage_stat']\n\ndata = pd.read_csv(url, header=1, sep='\\t', comment='#', names = new_column_names)\n\n# fix formatting\ndata['DateTime'] = pd.to_datetime(data['OldDateTime'])\nnew_station_name = \"0\" + str(data['Station'].unique()[0])\ndata['Station'] = new_station_name\n\n# plot and save figure\ndata.plot(x='DateTime', y='Discharge_cfs', title='Station ' + new_station_name)\nplt.xlabel('Time')\nplt.ylabel('Discharge 
(cfs)')\nplt.savefig('data/discharge_' + new_station_name + '.png')\nplt.show()", "Creating Functions\nIf we wanted to import data from a different station or for a different date range, we would manually change the first three variables and run the code again. It would be a lot less work than having to download the file and plot it by hand, but it could still be very tedious! At this point, our code is also getting long and complicated; what if we had thousands of datasets but didn't want to generate a figure for every single one? Commenting out the figure-drawing code is a nuisance. Also, what if we want to use that code again, on a different dataset or at a different point in our program? Cutting and pasting it is going to make our code get very long and very repetitive, very quickly. We’d like a way to package our code so that it is easier to reuse, and Python provides for this by letting us define things called functions - a shorthand way of re-executing longer pieces of code.\nLet's start by defining a function fahr_to_kelvin that converts temperatures from Fahrenheit to Kelvin:", "def fahr_to_kelvin(temp):\n return ((temp - 32) * (5/9)) + 273.15", "The function definition opens with the word def, which is followed by the name of the function and a parenthesized list of parameter names. The body of the function — the statements that are executed when it runs — is indented below the definition line, typically by four spaces.\nWhen we call the function, the values we pass to it are assigned to those variables so that we can use them inside the function. Inside the function, we use a return statement to send a result back to whoever asked for it.\nNotice that nothing happened when we ran the cell that contains the function. Python became aware of the function and what it is supposed to do, but until we call it, there is nothing for the function to do. 
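To see the two steps separately - a def cell that produces no output, and a call that does - here is a minimal example (the greet function is a hypothetical illustration, not part of the lesson's code):

```python
def greet(name):
    # Build and return a greeting; nothing runs until the function is called.
    return 'Hello, ' + name + '!'

# Defining greet above printed nothing; only this call produces output.
print(greet('world'))  # Hello, world!
```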
Calling our own function is no different from calling any other function (see the resemblance with the help file for read_csv?):", "print 'freezing point of water:', fahr_to_kelvin(32)\nprint 'boiling point of water:', fahr_to_kelvin(212)", "The boiling point of water in Kelvin should be 373.15 K, not 273.15 K!\nFunctions make code easier to debug by isolating each possible source of error. In this case, the first term of the equation, ((temp - 32) * (5/9)), is returning 0 (instead of 100) when the temperature is 212 F. If we look at each part of that expression, we find:", "5/9", "5 divided by 9 should be 0.5556, but when we ask Python 2 to divide two integers, it returns an integer! If we want to keep the fractional part of the division, we need to convert one or the other number to floating point:", "print 'two integers:', 5/9\nprint '5.0/9:', 5.0/9\nprint '5/9.0:', 5/9.0", "You can also turn an integer into a float by casting:", "float(5)/9", "Casting {.challenge}\nWhat happens when you type float(5/9)?\nInteger division in Python 3 {.callout}\nThe problem of integer division does not exist in Python 3, where division always returns a floating point number. We use Python 2.7 because it is much more commonly used in our community, but always keep integer division in mind as it will be a common source of bugs in your code. And as annoying as it may seem, there are memory benefits to integer division!\nLet's rewrite our function with the fixed bug:", "def fahr_to_kelvin(temp):\n return ((temp - 32) * (5./9)) + 273.15\n\nprint 'freezing point of water:', fahr_to_kelvin(32)\nprint 'boiling point of water:', fahr_to_kelvin(212)", "Composing Functions\nNow that we’ve seen how to turn Fahrenheit into Kelvin, it’s easy to turn Kelvin into Celsius:", "def kelvin_to_celsius(temp_k):\n return temp_k - 273.15\n\nprint 'absolute zero in Celsius:', kelvin_to_celsius(0.0) ", "What about converting Fahrenheit to Celsius? 
We could write out the formula, but we don’t need to. Instead, we can compose the two functions we have already created:", "def fahr_to_celsius(temp_f):\n temp_k = fahr_to_kelvin(temp_f)\n temp_c = kelvin_to_celsius(temp_k)\n return temp_c\n\nprint 'freezing point of water in Celsius:', fahr_to_celsius(32.0) ", "This is our first taste of how larger programs are built: we define basic operations, then combine them in ever-larger chunks to get the effect we want. Real-life functions will usually be larger than the ones shown here — typically half a dozen to a few dozen lines — but they shouldn’t ever be much longer than that, or the next person who reads it won’t be able to understand what’s going on.\nTidying up\nNow that we know how to wrap bits of code in functions, we can make our streamgage data plotting code easier to read and easier to reuse. First, let's make an import_streamgage_data function to pull the data file from the server and fix the formatting:", "def import_streamgage_data(url):\n \n new_column_names = ['Agency', 'Station', 'OldDateTime', 'Timezone', 'Discharge_cfs', 'Discharge_stat', 'Stage_ft', 'Stage_stat']\n\n data = pd.read_csv(url, header=1, sep='\\t', comment='#', names = new_column_names)\n\n # fix formatting\n data['DateTime'] = pd.to_datetime(data['OldDateTime'])\n new_station_name = \"0\" + str(data['Station'].unique()[0])\n data['Station'] = new_station_name\n \n return data", "We can make another function plot_discharge to plot and save the figures:", "def plot_discharge(data):\n \n station_name = data['Station'].unique()[0]\n data.plot(x='DateTime', y='Discharge_cfs', title='Station ' + station_name)\n plt.xlabel('Time')\n plt.ylabel('Discharge (cfs)')\n plt.savefig('data/discharge_' + station_name + '.png')\n plt.show()", "The function plot_discharge produces output that is visible to us but has no return statement because it doesn't need to give anything back when it is called.\nWe can also wrap up the script for composing URLs into a function called 
generate_URL:", "def generate_URL(station, startDT, endDT):\n\n url_root = 'http://waterservices.usgs.gov/nwis/iv/?'\n url_1 = 'format=' + 'rdb'\n url_2 = 'sites=' + station\n url_3 = 'startDT=' + startDT\n url_4 = 'endDT=' + endDT\n url_5 = 'parameterCd=' + '00060,00065'\n\n url = url_root + url_1 + '&' + url_2 + '&' + url_3 + '&' + url_4 + '&' + url_5\n \n return url", "Now that these three functions exist, we can rewrite our previous code in a much simpler script:", "########## change these values ###########\nthis_station = '09380000'\nstartDate = '2016-01-01'\nendDate = '2016-01-10'\n##########################################\n\nurl = generate_URL(this_station, startDate, endDate)\ndata = import_streamgage_data(url)\nplot_discharge(data)\n", "Testing and Documenting\nIt doesn't take long to forget what code we wrote in the past was supposed to do. We should always write some documentation for our functions to remind ourselves later what they are for and how they are supposed to be used.\nThe usual way to put documentation in software is to add comments:", "# plot_discharge(data): take a DataFrame containing streamgage data, plot the discharge and save a figure to file.\ndef plot_discharge(data):\n \n station_name = data['Station'].unique()[0]\n data.plot(x='DateTime', y='Discharge_cfs', title='Station ' + station_name)\n plt.xlabel('Time')\n plt.ylabel('Discharge (cfs)')\n plt.savefig('data/discharge_' + station_name + '.png')\n plt.show()", "There’s a better way, though. If the first thing in a function is a string that isn’t assigned to a variable, that string is attached to the function as its documentation. 
A string like this is called a docstring (by convention written with triple quotes so that the documentation can span multiple lines):", "def plot_discharge(data):\n '''\n Take a DataFrame containing streamgage data,\n plot the discharge and save a figure to file.\n '''\n \n new_station_name = data['Station'].unique()[0]\n data.plot(x='DateTime', y='Discharge_cfs', title='Station ' + new_station_name)\n plt.xlabel('Time')\n plt.ylabel('Discharge (cfs)')\n plt.savefig('data/discharge_' + new_station_name + '.png')\n plt.show()", "This is better because we can now ask Python’s built-in help system to show us the documentation for the function:", "help(plot_discharge)", "Defining Defaults:\nWhen we use the read_csv method, we pass parameters in two ways: directly, as in pd.read_csv(url), and by name, as we did for the parameter sep in pd.read_csv(url, sep = '\\t').\nIf we look at the documentation for read_csv, all parameters but the first (filepath_or_buffer) have a default value in the function definition (sep=','). The function will not run if the parameters without default values are not provided, but all parameters with defaults are optional. This is handy: if we usually want a function to work one way but occasionally need it to do something else, we can allow people to pass a parameter when they need to but provide a default to make the normal case easier.\nThe example below shows how Python matches values to parameters:", "def display(a=1, b=2, c=3):\n print 'a:', a, 'b:', b, 'c:', c \n\nprint 'no parameters:' \ndisplay()\n\nprint 'one parameter:'\ndisplay(55)\n\nprint 'two parameters:'\ndisplay(55, 66)", "As this example shows, parameters are matched up from left to right, and any that haven’t been given a value explicitly get their default value.
We can override this behavior by naming the value as we pass it in:", "print 'only setting the value of c'\ndisplay(c=77)", "Combining strings {.challenge}\n\"Adding\" two strings produces their concatenation:\n'a' + 'b' is 'ab'.\nWrite a function called fence that takes two parameters called original and wrapper\nand returns a new string that has the wrapper character at the beginning and end of the original.\nA call to your function should look like this:\n~~~ {.python}\nprint fence('name', '*')\n~~~\n~~~ {.output}\n*name*\n~~~\nSelecting characters from strings {.challenge}\nIf the variable s refers to a string,\nthen s[0] is the string's first character\nand s[-1] is its last.\nWrite a function called outer\nthat returns a string made up of just the first and last characters of its input.\nA call to your function should look like this:\n~~~ {.python}\nprint outer('helium')\n~~~\n~~~ {.output}\nhm\n~~~\nRescaling an array {.challenge}\nWrite a function rescale that takes an array as input\nand returns a corresponding array of values scaled to lie in the range 0.0 to 1.0.\n(Hint: If $L$ and $H$ are the lowest and highest values in the original array,\nthen the replacement for a value $v$ should be $(v-L) / (H-L)$.)\nTesting and documenting your function {.challenge}\nRun the commands help(numpy.arange) and help(numpy.linspace)\nto see how to use these functions to generate regularly-spaced values,\nthen use those values to test your rescale function.\nOnce you've successfully tested your function,\nadd a docstring that explains what it does.\nDefining defaults {.challenge}\nRewrite the rescale function so that it scales data to lie between 0.0 and 1.0 by default,\nbut will allow the caller to specify lower and upper bounds if they want.\nCompare your implementation to your neighbor's:\ndo the two functions always behave the same way?\nVariables inside and outside functions {.challenge}\nWhat does the following piece of code display when run - and why?\n~~~
{.python}\nf = 0\nk = 0\ndef f2k(f):\n k = ((f-32)*(5.0/9.0)) + 273.15\n return k\nf2k(8)\nf2k(41)\nf2k(32)\nprint k\n~~~" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
espressomd/espresso
doc/tutorials/charged_system/charged_system.ipynb
gpl-3.0
[ "A Charged System: Counterion Condensation\nTable of contents\n\nIntroduction\nSystem setup\nFirst run and observable setup\nProduction run and analysis\nOvercharging by added salt\n\nIntroduction\nIn this tutorial, we simulate a charged system consisting of a fixed charged rod with ions around it. This setup represents a simplified model for polyelectrolyte gels. We will investigate the condensation of ions onto the oppositely charged rod and compare the results to a meanfield analytical solution obtained from Poisson−Boltzmann (PB) theory.\nFinally we will go beyond the expected applicability of PB and add concentrated additional salt ions to observe an overcharging effect.\nThe tutorial follows \"Deserno, Markus, Christian Holm, and Sylvio May. \"Fraction of condensed counterions around a charged rod: Comparison of Poisson−Boltzmann theory and computer simulations. Macromolecules 33.1 (2000): 199-206, 10.1021/ma990897o\". We refer to that publication for further reading.", "import espressomd\nimport espressomd.electrostatics\nimport espressomd.observables\nimport espressomd.accumulators\nimport espressomd.math\n\nespressomd.assert_features(['ELECTROSTATICS', 'P3M', 'WCA'])\n\nimport tqdm\nimport numpy as np\nimport scipy.optimize\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nnp.random.seed(41)\nplt.rcParams.update({'font.size': 18})", "System setup\nAfter importing the necessary ESPResSo features and external modules, we define a cubic system geometry and some physical parameters (which define our unit system).", "# system parameters\nROD_LENGTH = 50\nBJERRUM_LENGTH = 1.0\n\n# we assume a unit system where the elementary charge and the thermal energy are both 1\nsystem = espressomd.System(box_l=3 * [ROD_LENGTH])\nKT = 1.\nQ_E = 1.\n\nsystem.time_step = 0.01\nsystem.cell_system.skin = 0.4", "We will build the charged rod from individual particles that are fixed in space. With this, we can use the particle-based electrostatics methods of ESPResSo. 
For analysis, we give the rod particles a different type than the counterions.", "# interaction parameters\nWCA_EPSILON = 1.0\nION_DIAMETER = 1.0\nROD_RADIUS = 1.0\n# particle types\nROD_TYPE = 1\nCOUNTERION_TYPE = 2", "Exercise:\n\nSet up the purely repulsive Weeks-Chandler-Andersen (WCA) interaction (Non-bonded Interactions) between the ions and between the ions and the rod particles. Use the parameters introduced in the cell above.\n\nHints:\n* The WCA potential uses the same parameters as the Lennard-Jones potential, but the cutoff and shift are calculated automatically\n* Use the Lorentz combining rule (arithmetic mean) to determine the sigma parameter of the interaction between the rod particles and the ions\n```python\n# ion-ion interaction\nsystem.non_bonded_inter[COUNTERION_TYPE, COUNTERION_TYPE].wca.set_params(\n epsilon=WCA_EPSILON, sigma=ION_DIAMETER)\n# ion-rod interaction\nsystem.non_bonded_inter[COUNTERION_TYPE, ROD_TYPE].wca.set_params(\n epsilon=WCA_EPSILON, sigma=ION_DIAMETER / 2.
+ ROD_RADIUS)\n```\nNow we need to place the particles in the box\nExercise:\n* Implement a function to place the rod particles along the $x_3$ axis in the middle of the simulation box and the ions randomly distributed \n* Use the signature setup_rod_and_counterions(system, ion_valency, counterion_type, rod_charge_dens, N_rod_beads, rod_type)\n* Determine the number of counterions from the condition of neutrality for the whole system (the rod should be positive, the counterions negative)\n* Assign the rod particles and counterions their correct type\n* Give the counterions a charge q according to their ion_valency\n* Give the rod particles a charge such that the rod_charge_dens is uniformly distributed along the N_rod_beads individual particles\n* Fix the rod particles in space so they do not get moved if forces act upon them\n* Return the newly created counterion particles\nHints:\n* Look into espresso particle properties to find the keywords to set charges and to fix particles\n* use np.random.random() to generate the counterion positions\n```python\ndef setup_rod_and_counterions(system, ion_valency, counterion_type,\n rod_charge_dens, N_rod_beads, rod_type):\n# calculate charge of the single rod beads\nrod_length = system.box_l[2]\ntotal_rod_charge = rod_charge_dens * rod_length\nrod_charge_per_bead = total_rod_charge / N_rod_beads\n\n# number of counterions\nN_ions = int(total_rod_charge / ion_valency)\n\nrod_zs = np.linspace(0, rod_length, num=N_rod_beads, endpoint=False)\nrod_positions = np.column_stack(([system.box_l[0] / 2.] * N_rod_beads,\n [system.box_l[1] / 2.] 
* N_rod_beads,\n rod_zs))\n\nsystem.part.add(pos=rod_positions, type=[rod_type] * N_rod_beads,\n q=[rod_charge_per_bead] * N_rod_beads,\n fix=[3 * [True]] * N_rod_beads)\n\nion_positions = np.random.random((N_ions, 3)) * system.box_l\n\ncounter_ions = system.part.add(pos=ion_positions, type=[\n counterion_type] * N_ions, q=[-ion_valency] * N_ions)\n\nreturn counter_ions\n\n```", "COUNTERION_VALENCY = 1\nROD_CHARGE_DENS = 2\n\n# number of beads that make up the rod\nN_rod_beads = int(ROD_LENGTH / ROD_RADIUS)\n\nsetup_rod_and_counterions(system, COUNTERION_VALENCY, COUNTERION_TYPE,\n ROD_CHARGE_DENS, N_rod_beads, ROD_TYPE)\n\n# check that the particle setup was done correctly\nassert abs(sum(system.part.all().q)) < 1e-10\nassert np.all(system.part.select(type=ROD_TYPE).fix)", "Now we set up the electrostatics method to calculate the forces and energies from the longrange coulomb interaction. ESPResSo uses so-called <tt>actors</tt> for electrostatics, magnetostatics and hydrodynamics. This ensures that unphysical combinations of algorithms are avoided, for example simultaneous usage of two electrostatic interactions. Adding an actor to the system also activates the method and calls necessary initialization routines. Here, we define a P$^3$M object using the Bjerrum length and rms force error. This automatically starts a tuning function which tries to find optimal parameters for P$^3$M and prints them to the screen. For more details, see the Espresso documentation.", "p3m_params = {'prefactor': KT * BJERRUM_LENGTH * Q_E**2,\n 'accuracy': 1e-3}", "For the accuracy, ESPResSo estimates the relative error in the force calculation introduced by the approximations of $P^3M$. We choose a relatively poor accuracy (large value) for this tutorial to make it run faster. 
For your own production simulations you should reduce that number.\nExercise:\n* Set up a p3m instance and add it to the actors of the system\npython\np3m = espressomd.electrostatics.P3M(**p3m_params)\nsystem.actors.add(p3m)\nBefore we can start the simulation, we need to remove the overlap between particles to avoid large forces which would crash the simulation. For this, we use the steepest descent integrator with a relative convergence criterion for forces and energies.", "def remove_overlap(system, sd_params):\n # Removes overlap by steepest descent until forces or energies converge\n # Set up steepest descent integration\n system.integrator.set_steepest_descent(f_max=0,\n gamma=sd_params['damping'],\n max_displacement=sd_params['max_displacement'])\n\n # Initialize integrator to obtain initial forces\n system.integrator.run(0)\n maxforce = np.max(np.linalg.norm(system.part.all().f, axis=1))\n energy = system.analysis.energy()['total']\n\n i = 0\n while i < sd_params['max_steps'] // sd_params['emstep']:\n prev_maxforce = maxforce\n prev_energy = energy\n system.integrator.run(sd_params['emstep'])\n maxforce = np.max(np.linalg.norm(system.part.all().f, axis=1))\n relforce = np.abs((maxforce - prev_maxforce) / prev_maxforce)\n energy = system.analysis.energy()['total']\n relener = np.abs((energy - prev_energy) / prev_energy)\n if i > 1 and (i + 1) % 4 == 0:\n print(f\"minimization step: {(i+1)*sd_params['emstep']:4.0f}\"\n f\" max. rel. force change:{relforce:+3.3e}\"\n f\" rel. 
energy change:{relener:+3.3e}\")\n if relforce < sd_params['f_tol'] or relener < sd_params['e_tol']:\n break\n i += 1\n\n system.integrator.set_vv()\n\nSTEEPEST_DESCENT_PARAMS = {'f_tol': 1e-2,\n 'e_tol': 1e-5,\n 'damping': 30,\n 'max_steps': 10000,\n 'max_displacement': 0.01,\n 'emstep': 10}\n\nremove_overlap(system, STEEPEST_DESCENT_PARAMS)", "After the overlap is removed, we activate a thermostat to simulate the system at a given temperature.", "LANGEVIN_PARAMS = {'kT': KT,\n 'gamma': 0.5,\n 'seed': 42}\nsystem.thermostat.set_langevin(**LANGEVIN_PARAMS)", "First run and observable setup\nBefore running the simulations to obtain the histograms, we need to decide how long we need to equilibrate the system. For this we plot the total energy vs the time steps.", "energies = []\nSTEPS_PER_SAMPLE_FIRST_RUN = 10\nN_SAMPLES_FIRST_RUN = 1000\nfor i in range(N_SAMPLES_FIRST_RUN):\n system.integrator.run(STEPS_PER_SAMPLE_FIRST_RUN)\n energies.append(system.analysis.energy()['total'])\n\n# plot time in time_steps so we can judge the number of warmup steps\nts = np.arange(0, N_SAMPLES_FIRST_RUN) * STEPS_PER_SAMPLE_FIRST_RUN\nplt.figure(figsize=(10, 7))\nplt.plot(ts, energies)\nplt.xlabel('time steps')\nplt.ylabel('system total energy')\nplt.show()\n\nWARMUP_STEPS = 5000\nSTEPS_PER_SAMPLE = 100", "Now we are ready to implement the observable calculation. As we are interested in the condensation of counterions on the rod, the physical quantity of interest is the density of charges $\\rho(r)$ around the rod, where $r$ is the distance from the rod. We need many samples to calculate the density from histograms.\nFrom the last tutorial you should already be familiar with the concepts of observables and accumulators in ESPResSo. 
We will use the CylindricalDensityProfile observable and the MeanVarianceCalculator accumulator\nExercise:\n\nWrite a function setup_profile_calculation(system, delta_N, ion_types, r_min, n_radial_bins) to create observables for $\\rho(r)$\ndelta_N is the number of integration steps between observable calculation\nion_types is a list of types for which the radial distances should be calculated. For the moment we only have counterions, but later we will also add additional salt ions for which we would also like to calculate the density\nreturn a a dictionary of the accumulators radial_distances[counterion_type] = &lt;accumulator&gt; and the edges of the bins\n\nHints:\n* Use system.part.select(type=...) to get only the particles of a specific type\n* The azimuthal angle and the $x_3$ position are irrelevant, so you need only one big bin for these coordinates\n```python\ndef setup_profile_calculation(system, delta_N, ion_types, r_min, n_radial_bins):\n radial_profile_accumulators = {}\n ctp = espressomd.math.CylindricalTransformationParameters(center = np.array(system.box_l) / 2.,\n axis = [0, 0, 1],\n orientation = [1, 0, 0])\n for ion_type in ion_types:\n ion_ids = system.part.select(type=ion_type).id\n radial_profile_obs = espressomd.observables.CylindricalDensityProfile(\n ids=ion_ids,\n transform_params = ctp,\n n_r_bins=n_radial_bins,\n min_r=r_min,\n min_z=-system.box_l[2] / 2.,\n max_r=system.box_l[0] / 2.,\n max_z=system.box_l[2] / 2.)\n bin_edges = radial_profile_obs.bin_edges()\n\n radial_profile_acc = espressomd.accumulators.MeanVarianceCalculator(\n obs=radial_profile_obs, delta_N=delta_N)\n system.auto_update_accumulators.add(radial_profile_acc)\n\n radial_profile_accumulators[ion_type] = radial_profile_acc\n\nreturn radial_profile_accumulators, bin_edges\n\n```", "r_min = ROD_RADIUS + ION_DIAMETER / 2.\nr_max = system.box_l[0] / 2.\nN_RADIAL_BINS = 200\nradial_profile_accs, bin_edges = setup_profile_calculation(\n system, STEPS_PER_SAMPLE, 
[COUNTERION_TYPE], r_min, N_RADIAL_BINS)\nassert isinstance(\n radial_profile_accs[COUNTERION_TYPE], espressomd.accumulators.MeanVarianceCalculator)\nassert len(bin_edges) == N_RADIAL_BINS + 1", "To run the simulation with different parameters, we need a way to reset the system and return it to an empty state before setting it up again.\nExercise:\n* Write a function clear_system(system) that\n * turns off the thermostat\n * removes all particles\n * removes all actors\n * removes all accumulators added to the auto-update-list\n * resets the system clock\nHints:\n* The relevant parts of the documentation can be found here:\nThermostats,\nParticleList,\nElectrostatics,\nAutoUpdateAccumulators,\nSystem properties\npython\ndef clear_system(system):\n system.thermostat.turn_off()\n system.part.clear()\n system.actors.clear()\n system.auto_update_accumulators.clear()\n system.time = 0.", "clear_system(system)", "Production run and analysis\nNow we are finally ready to run the simulations and produce the data we can compare to the Poisson-Boltzmann predictions. First we define the parameters and then loop over them.", "runs = [{'params': {'counterion_valency': 2, 'rod_charge_dens': 1},\n 'histogram': None},\n {'params': {'counterion_valency': 1, 'rod_charge_dens': 2},\n 'histogram': None}\n ]\nN_SAMPLES = 1500", "For longer simulation runs it will be convenient to have a progress bar", "def integrate_system(system, n_steps):\n for i in tqdm.trange(100):\n system.integrator.run(n_steps // 100)\n", "Exercise:\n* Run the simulation for the parameters given above and save the histograms in the corresponding dictionary for analysis\nHints:\n* Don't forget to clear the system before setting up the system with a new set of parameters\n* Don't forget to add a new p3m instance after each change of parameters. If we reuse the p3m that was tuned before, likely the desired accuracy will not be achieved. 
\n* Extract the radial density profile from the accumulator via .mean()\n```python\nfor run in runs:\n clear_system(system)\n setup_rod_and_counterions(\n system, run['params']['counterion_valency'], COUNTERION_TYPE,\n run['params']['rod_charge_dens'], N_rod_beads, ROD_TYPE)\n p3m = espressomd.electrostatics.P3M(**p3m_params)\n system.actors.add(p3m)\n remove_overlap(system, STEEPEST_DESCENT_PARAMS)\n system.thermostat.set_langevin(**LANGEVIN_PARAMS)\n print('', end='', flush=True)\n integrate_system(system, WARMUP_STEPS)\n radial_profile_accs, bin_edges = setup_profile_calculation(\n system, STEPS_PER_SAMPLE, [COUNTERION_TYPE], r_min, N_RADIAL_BINS)\n integrate_system(system, N_SAMPLES * STEPS_PER_SAMPLE)\n run['histogram'] = radial_profile_accs[COUNTERION_TYPE].mean()\n print(f'simulation for parameters {run["params"]} done\\n')\n\n```\nQuestion\n* Why does the second simulation take much longer than the first one?\nThe rod charge density is doubled, so the total charge of the counterions needs to be doubled, too. Since their valency is only half of the one in the first run, there will be four times more counterions in the second run.\nWe plot the density of counterions around the rod as the normalized integrated radial counterion charge distribution function $P(r)$, meaning the integrated probability to find an amount of charge within the radius $r$.
We express the rod charge density $\\lambda$ in terms of the dimensionless Manning parameter $\\xi = \\lambda l_B / e$ where $l_B$ is the Bjerrum length and $e$ the elementary charge", "# With the notion of P(r) the probability to find the charge up to r,\n# we only use the right side of the bin edges for plotting\nrs = bin_edges[1:, 0, 0, 0]\n\nfig, ax = plt.subplots(figsize=(10, 7))\nfor run in runs:\n hist = np.array(run['histogram'][:, 0, 0])\n # The CylindricalDensityProfile normalizes the bin values by the bin size.\n # We want the 'raw' distribution (number of ions within a radius)\n # so we need to multiply by the radii\n hist = hist * rs\n cum_hist = np.cumsum(hist)\n cum_hist /= cum_hist[-1]\n manning_xi = run['params']['rod_charge_dens'] * BJERRUM_LENGTH / Q_E\n ax.plot(rs, cum_hist, label=rf'$\\xi ={manning_xi}, \\nu = {run[\"params\"][\"counterion_valency\"]}$')\nax.set_xscale('log')\nax.legend()\nplt.xlabel('r')\nplt.ylabel('P(r)')\nplt.show()", "In the semilogarithmic plot we see an inflection point of the cumulative charge distribution which is the indicator for ion condensation. To compare to the meanfield approach of PB, we calculate the solution of the analytical expressions given in 10.1021/ma990897o", "def eq_to_solve_for_gamma(gamma, manning_parameter, rod_radius, max_radius):\n # eq 7 - eq 6 from 10.1021/ma990897o\n return gamma * np.log(max_radius / rod_radius) - np.arctan(1 / gamma) + np.arctan((1 - manning_parameter) / gamma)\n\n\ndef calc_manning_radius(gamma, max_radius):\n # eq 7 from 10.1021/ma990897o\n return max_radius * np.exp(-np.arctan(1. / gamma) / gamma)\n\n\ndef calc_PB_probability(r, manning_parameter, gamma, manning_radius):\n # eq 8 and 9 from 10.1021/ma990897o\n return 1. / manning_parameter + gamma / manning_parameter * np.tan(gamma * np.log(r / manning_radius))", "For multivalent counterions, the manning parameter $\\xi$ has to be multiplied by the valency $\\nu$. 
The result depends only on the product of rod_charge_dens and ion_valency, so we only need one curve.", "rod_charge_density = runs[0]['params']['rod_charge_dens']\nion_valency = runs[0]['params']['counterion_valency']\nmanning_parameter_times_valency = BJERRUM_LENGTH * rod_charge_density * ion_valency\n\ngamma = scipy.optimize.fsolve(eq_to_solve_for_gamma, 1, args=(\n manning_parameter_times_valency, r_min, r_max))\nmanning_radius = calc_manning_radius(gamma, r_max)\n\nPB_probability = calc_PB_probability(\n rs, manning_parameter_times_valency, gamma, manning_radius)\n\nax.plot(rs, PB_probability, label=rf'PB $\\xi \\cdot \\nu$ = {manning_parameter_times_valency}')\nax.legend()\nax.set_xscale('log')\nfig", "We see that the overall agreement is quite good, but the deviations from the PB solution get stronger the more charged the ions are.\nPoisson-Boltzmann makes two simplifying assumptions: particles are points, and there are no correlations between the particles. Neither assumption holds in the simulation. Excluded volume effects can only lower the density, but we see in the figure that the simulated density is always larger than the calculated one. This means that correlation effects cause the discrepancy.\nOvercharging by added salt\nThe simulations above were performed for a system where all ions come from dissociation off the polyelectrolyte.
We can also investigate systems where there are additional salt ions present.", "def add_salt(system, anion_params, cation_params):\n\n N_anions = anion_params['number']\n N_cations = cation_params['number']\n\n anion_positions = np.random.random((N_anions, 3)) * system.box_l\n cation_positions = np.random.random((N_cations, 3)) * system.box_l\n\n anions = system.part.add(pos=anion_positions, type=[anion_params['type']] * N_anions,\n q=[-anion_params['valency']] * N_anions)\n cations = system.part.add(pos=cation_positions, type=[cation_params['type']] * N_cations,\n q=[cation_params['valency']] * N_cations)\n\n return anions, cations\n\nANION_PARAMS = {'type': 3,\n 'valency': 2,\n 'number': 150}\nCATION_PARAMS = {'type': 4,\n 'valency': 2,\n 'number': 150}\nROD_LENGTH = 10\nN_rod_beads = int(ROD_LENGTH / ROD_RADIUS)\nROD_CHARGE_DENS = 1\nCOUNTERION_VALENCY = 1\n\nSTEPS_PER_SAMPLE_SALT = 20\nN_SAMPLES_SALT = 1500\nN_RADIAL_BINS = 100\n\nall_ion_types = [COUNTERION_TYPE, ANION_PARAMS['type'], CATION_PARAMS['type']]\n\n# set interactions of salt with the rod and all ions\nfor salt_type in [ANION_PARAMS['type'], CATION_PARAMS['type']]:\n system.non_bonded_inter[salt_type, ROD_TYPE].wca.set_params(\n epsilon=WCA_EPSILON, sigma=ION_DIAMETER / 2. 
+ ROD_RADIUS)\n for ion_type in all_ion_types:\n system.non_bonded_inter[salt_type, ion_type].wca.set_params(\n epsilon=WCA_EPSILON, sigma=ION_DIAMETER)\n\nclear_system(system)\nsystem.box_l = 3 * [ROD_LENGTH]\ncounterions = setup_rod_and_counterions(\n system, COUNTERION_VALENCY, COUNTERION_TYPE,\n ROD_CHARGE_DENS, N_rod_beads, ROD_TYPE)\nanions, cations = add_salt(system, ANION_PARAMS, CATION_PARAMS)\nassert abs(sum(anions.q) + sum(cations.q)) < 1e-10\n\np3m = espressomd.electrostatics.P3M(**p3m_params)\nsystem.actors.add(p3m)\nremove_overlap(system, STEEPEST_DESCENT_PARAMS)\nsystem.thermostat.set_langevin(**LANGEVIN_PARAMS)\nprint('', end='', flush=True)\nintegrate_system(system, WARMUP_STEPS)\nradial_profile_accs, bin_edges = setup_profile_calculation(\n system, STEPS_PER_SAMPLE_SALT, all_ion_types, r_min, N_RADIAL_BINS)\nintegrate_system(system, N_SAMPLES_SALT * STEPS_PER_SAMPLE_SALT)\n\nrs = bin_edges[1:, 0, 0, 0]\ncum_hists = {}\nfor ion_type in all_ion_types:\n hist = radial_profile_accs[ion_type].mean()\n hist = hist[:, 0, 0] * rs\n cum_hist = np.cumsum(hist)\n cum_hist /= cum_hist[-1]\n cum_hists[ion_type] = cum_hist", "Exercise:\n* Use the cumulative histograms from the cell above to create the cumulative charge histogram of the total ion charge\nHints\n* You need to account for the fact that the cumulative histograms are all normalized, but the total charge of each ion type is different\npython\ncounterion_charge = sum(counterions.q)\nanion_charge = sum(anions.q)\ncation_charge = sum(cations.q)\ncharge_hist = counterion_charge * cum_hists[COUNTERION_TYPE] + \\\n anion_charge * cum_hists[ANION_PARAMS['type']] + \\\n cation_charge * cum_hists[CATION_PARAMS['type']]", "charge_hist /= charge_hist[-1]\nfig2, ax2 = plt.subplots(figsize=(10, 7))\nax2.plot(rs, charge_hist)\nax2.set_xscale('linear')\nplt.xlabel('r')\nplt.ylabel('P(r)')\nplt.show()", "You should observe a strong overcharging effect, where ions accumulate close to the rod." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
vzg100/Post-Translational-Modification-Prediction
old/Phosphorylation Sequence Tests -MLP -dbptm+ELM-VectorAvr.-phos_stripped.ipynb
mit
[ "Template for test", "from pred import Predictor\nfrom pred import sequence_vector\nfrom pred import chemical_vector", "Controlling for Random Negative vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.\nN Phosphorylation is included as well; however, no benchmarks are available for it yet. \nTraining data is from phospho.elm and benchmarks are from dbptm.", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nfor i in par:\n print(\"y\", i)\n y = Predictor()\n y.load_data(file=\"Data/Training/clean_s_filtered.csv\")\n y.process_data(vector_function=\"sequence\", amino_acid=\"S\", imbalance_function=i, random_data=0)\n y.supervised_training(\"mlp_adam\")\n y.benchmark(\"Data/Benchmarks/phos_stripped.csv\", \"S\")\n del y\n print(\"x\", i)\n x = Predictor()\n x.load_data(file=\"Data/Training/clean_s_filtered.csv\")\n x.process_data(vector_function=\"sequence\", amino_acid=\"S\", imbalance_function=i, random_data=1)\n x.supervised_training(\"mlp_adam\")\n x.benchmark(\"Data/Benchmarks/phos_stripped.csv\", \"S\")\n del x\n", "Y Phosphorylation", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nfor i in par:\n print(\"y\", i)\n y = Predictor()\n y.load_data(file=\"Data/Training/clean_Y_filtered.csv\")\n y.process_data(vector_function=\"sequence\", amino_acid=\"Y\", imbalance_function=i, random_data=0)\n y.supervised_training(\"mlp_adam\")\n y.benchmark(\"Data/Benchmarks/phos_stripped.csv\", \"Y\")\n del y\n print(\"x\", i)\n x = Predictor()\n x.load_data(file=\"Data/Training/clean_Y_filtered.csv\")\n x.process_data(vector_function=\"sequence\", amino_acid=\"Y\", imbalance_function=i, random_data=1)\n x.supervised_training(\"mlp_adam\")\n x.benchmark(\"Data/Benchmarks/phos_stripped.csv\", \"Y\")\n del x\n", "T Phosphorylation", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nfor i in par:\n print(\"y\", i)\n y = Predictor()\n y.load_data(file=\"Data/Training/clean_t_filtered.csv\")\n y.process_data(vector_function=\"sequence\", amino_acid=\"T\", imbalance_function=i, random_data=0)\n y.supervised_training(\"mlp_adam\")\n y.benchmark(\"Data/Benchmarks/phos_stripped.csv\", \"T\")\n del y\n print(\"x\", i)\n x = Predictor()\n x.load_data(file=\"Data/Training/clean_t_filtered.csv\")\n x.process_data(vector_function=\"sequence\", amino_acid=\"T\", imbalance_function=i, random_data=1)\n x.supervised_training(\"mlp_adam\")\n x.benchmark(\"Data/Benchmarks/phos_stripped.csv\", \"T\")\n del x\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
empet/PSCourse
BivariateNormal.ipynb
bsd-3-clause
[ "Bivariate normal distribution\nIn this notebook we present several tools for visualizing data that have a bivariate normal distribution or are observations of a Gaussian mixture. \nTo be able to run the cells in this notebook you have to apply the updates presented in the previous notebook.", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.stats import multivariate_normal as Nd", "In scipy.stats an object V can be declared as a d-dimensional normally distributed random vector, \nwith mean $m=(m[0], m[1], \\ldots m[d-1])^T$ and covariance matrix $\\Sigma\\in\\mathbb{R}^{d\\times d}$, as follows:\nV=Nd(mean=m, cov=Sigma)\nWe give an example for 2D random vectors:", "m=[2, 3.5]# means of X and Y\ns=[1.8, 2.3]# standard deviations of X and Y, respectively\nrho=-0.7# correlation coefficient of X and Y\ncovar=rho*s[0]*s[1]# covariance cov(X,Y)=rho*sigma_X*sigma_Y\nSig=np.array([[s[0]**2, covar],[covar, s[1]**2]])# the covariance matrix\n\nV=Nd(mean=m, cov=Sig)# V is a 2D random vector with mean m and covariance matrix Sig", "Visualizing the probability density via a contour plot and the empirical distribution via a heatmap\nLet us draw the graph of the probability density of the random vector $V=(X,Y)$. Since it is a surface,\nwe additionally import the 3D graphics tools:", "from mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm# the matplotlib.cm module contains the colormaps \n\nplt.rcParams['figure.figsize'] = (10.0, 6.0)\nk=3\nx=np.linspace(m[0]-k*s[0], m[0]+k*s[0], 100)\ny=np.linspace(m[1]-k*s[1], m[1]+k*s[1], 100)\nx,y=np.meshgrid(x,y)\npos=np.empty(x.shape + (2,))\npos[:, :, 0] = x; pos[:, :, 1] = y\nz=V.pdf(pos)\n\nfig1 = plt.figure(1)\nax = fig1.add_subplot(111, projection='3d')\nax.set_xlabel('x')\nax.set_ylabel('y')\nsurf = ax.plot_surface(x, y, z, rstride=1, cstride=1, cmap=cm.Blues,\n linewidth=0, antialiased=True)\n", "The color intensity of the plot is proportional to the height.
Intersecting the bell surface with\nplanes parallel to the xOy plane and projecting the intersection curves onto xOy, we obtain the contour plot\nof the probability density.", "plt.rcParams['figure.figsize'] = (6.0, 4.0)\n\nfig2 = plt.figure(2)\nz=V.pdf(pos)\nplt.contourf(x, y, z, cmap=cm.Blues)\nplt.colorbar()", "We notice that in the contour plot, too, \nthe zones corresponding to the intersections with planes $z=h$ of higher height $h$ are colored more intensely, and are colored \nin lighter and lighter shades as the height $h$ decreases.\nWe now simulate the random vector V and will see that the generated points are spread over an elliptical disc with the same orientation\nand approximately the same coloring as this contour plot:", "pts=V.rvs(size=5000)# generate 5000 points as observations of the random vector\n\nprint type(pts)\n\nfig3=plt.figure(3)\nH, xedges, yedges = np.histogram2d(pts[:,1], pts[:,0], bins=[50,65], normed=True)\nHmasked = np.ma.masked_where(H==0,H) # mask the pixels whose value is 0\nextent = [ xedges[0], xedges[-1], yedges[0], yedges[-1]]\nheatmap=plt.imshow(Hmasked, cmap='Blues', origin='lower',interpolation='nearest', extent=extent)\nplt.colorbar()", "The function np.histogram2d counts how many points fall in each sub-rectangle obtained by dividing\na rectangle that contains the generated points into 50 subdivision intervals horizontally and 65 vertically (bins=[50, 65]). \nSetting\n normed=True computes the probability that a generated point falls in such a sub-rectangle, as the number of points generated in that sub-rectangle divided by the total number of points. The sub-rectangles are then colored by mapping the interval $[0,prob_{max}]$ of the recorded probabilities onto the shades of the colormap\ncalled Blues. The more intensely colored squares have a higher empirical visit probability, while those colored light blue have a lower one.
\nImaginea distributiei de probabilitate astfel generata se numeste heatmap.\nNotam ca,\npentru a vizualiza un punct de coordonate carteziene (x,y), in apelul functiei np.histogram2d, se includ ca argumente\npunctele de coordonate pixel $(y,x)$. \nIn cazul nostru am scris argumentele in aceasta ordine: pts[:,1], pts[:,0]. \nA se vedea in cursul de Algebra-Geometrie legatura dintre coordonatele pixel si coordonatele carteziene ale unui punct dintr-o imagine\nSa vizualizam contourplot-ul si heatmap-ul cu o paleta (colormap) ce nu consta ca si Blues, folosit mai sus, doar din nuante ale aceleiasi culori. \nLista colormap-urilor din matplotlib este afisata aici:", "plt.rcParams['figure.figsize'] = (6.0, 4.0)\n\nfig4 = plt.figure(4)\nplt.contourf(x, y, V.pdf(pos), cmap=cm.PiYG)\nplt.colorbar()\n\nfig5 = plt.figure(5)\nheatmap1=plt.imshow(Hmasked, cmap='PiYG', origin='lower',interpolation='nearest', extent=extent)\nplt.colorbar()", "Mixturi Gaussiene 2D\nAvand k distributii de probabilitate Gauss, 2D, de densitati de probabilitate\n $f_1$, $f_2, \\ldots, f_k$, atunci o combinatie convexa a lor cu probabilitatile: $p_1, p_2,\\ldots, p_k$, $\\sum_{j=1}^kp_k=1$:\n$$f=p_1f_1+p_2f_2+\\cdots+p_kf_k$$\neste o densitate de probabilitate numita mixtura Gaussiana 2D.\nSa ilustram o mixtura Gaussiana ce este combinatia a 4 densitati de probabilitate Gaussiene 2D.\nMediile si abaterile standard ale coordonatelor vectorilor aleatori Gaussieni $(X_i, Y_i)$, $i=0,1,2,3$, le dam in cate un array de 4 linii si 2 coloane, fiecare linie reprezentand vectorul mediilor, respectiv al abaterilor standard ale distributiilor Gauss 2D, corespunzatoare.\nCoeficientii de corelatie $\\rho(X_i, Y_i)$, $i=0,1,2,3$, ii dam intr-un vector de 4 coordonate:", "pr=[0.2, 0.4, 0.15, 0.25]# lista probabilitatilor din definitia mixturii\nmed=np.array([[0.5,0.5], [0.7,4.3], [2.9, 3.4],[2.5,2]])# mediile\nsig=np.array([[0.2, 0.25], [0.5,0.8], [0.2375, 0.4125 ],[0.35,0.53]]) # abaterile 
standard\nrho=np.array([0.0, -0.67,-0.76,0.5])# coeficientii de corelatie\n\ndef Covariance(m, s, r):# functie ce genereaza matricea de cov a unui vector (X,Y)~N(m,Sigma)\n covar= r*s[0]*s[1]\n return np.array([[s[0]**2, covar], [covar, s[1]**2]]) \n ", "Intr-un array 3D stocam cele 4 matrici de covarianta:", "Sigma=np.zeros((2,2,4), float)# 4 matrici 2D;\n #Sigma[:,:, i] este matricea de covarianta a vectorului (X_i, Y_i)\n\nfor i in range(4):\n Sigma[:,:,i]=Covariance(med[i, :], sig[i,:], rho[i])\n ", "Definim functia care din vectorul probabilitatilor pr, matricea mediilor si array-ul 3D al matricilor de covarianta\nevalueaza densitatea de probabilitate a mixturii in punctele unei grile din plan:", "def mixtureDensity(x,y, pr, med, Sigma):# x este array-ul absciselor nodurilor grilei,\n #iar y al ordonatelor\n pos=np.empty(x.shape + (2,))# daca x.shape este (m,n) atunci pos.shape este (m,n,2)\n pos[:, :, 0] = x; pos[:, :, 1] = y \n z=np.zeros(x.shape)\n for i in range(4) :\n z=z+pr[i]*Nd.pdf(pos, mean=med[i,:], cov=Sigma[:,:, i])\n return z ", "S-a definit array-ul 3D pos, pentru a corespunde modalitatii de implementare a densitatii\nunei distributii normale multivariate. \nPentru a alege un dreptunghi de definitie pentru mixtura, exploatam inegalitatea lui Cebasev conform careia:\n $P(m-k\\sigma<X<m+k\\sigma)>1/k^2$. 
Si anume identificam variabila $X_{jx}$ ce are media minima, respectiv variabila $X_{Jx}$ de medie maxima, intre toate variabilele $X_0, X_1, X_2, X_3$\n si procedam analog si pentru variabilele $Y_0, Y_1, Y_2, Y_3$:", "mmx=np.min(med[:,0] )\nmMx=np.max(med[:,0])#cea mai mica si cea mai mare medie pt X_0, X_1, X_2, X_3\nmmy=np.min(med[:,1] )\nmMy=np.max(med[:,1])\njx=np.argmin(med[:,0])\nJx=np.argmin(-1.0*med[:,0])\njy=np.argmin(med[:,1]) \nJy=np.argmin(-1.0*med[:,1])\n\nplt.rcParams['figure.figsize'] = (10.0, 6.0)\nk=3.5\nx=np.linspace(mmx-k*sig[jx][0], mMx+k*sig[Jx][0], 100)\nprint 'x initial', x.shape\ny=np.linspace(mmy-k*sig[jy][1], mMx+k*sig[Jy][1], 100)\nx,y=np.meshgrid(x,y)# din punctele de divizune pe Ox si Oy se constituie grila 2D\nprint 'x din grila', x.shape#x este acumarray 2D. Elementele sale sunt abscisele nodurilor grilei\nz=mixtureDensity(x,y,pr, med, Sigma)\nfig6=plt.figure(6)\nax = fig6.add_subplot(111, projection='3d')\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_title('Densitatea de probabilitate a mixturii Gaussiene')\nax.plot_surface(x, y, z, rstride=1, cstride=1, cmap=cm.Blues,\n linewidth=0, antialiased=False)", "Folosind un alt colormap graficul densitatii mixturii arata astfel:", "fig7=plt.figure(7)\nax = fig7.add_subplot(111, projection='3d')\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_title('Densitatea de probabilitate a mixturii Gaussiene')\nax.plot_surface(x, y, z, rstride=1, cstride=1, cmap=cm.PiYG,\n linewidth=0, antialiased=False)\n", "Sa generam contourplot-ul mixturii:", "plt.rcParams['figure.figsize'] = (6.0, 4.0)\nfig8=plt.figure(8)\nplt.contourf(x, y, z, cmap=cm.Blues)\nplt.colorbar()", "Sa simulam acum aceasta mixtura:", "def simDiscrete(pr):\n k=0\n F=pr[0]\n u=np.random.random()\n while(u>F):\n k+=1\n F=F+pr[k]\n return k\n\ndef GaussianMixture2D(N, pr, med,Sigma):\n dis=[simDiscrete(pr) for i in xrange(N)]#genereaza N observatii asupra distr discrete\n\n pts=np.empty((N,2), dtype=float)# pts contine punctele 
generate simuland mixtura\n n=len(pr)\n for k in range(n):\n \n I=[j for j in range(N) if dis[j] == k]# lista indicilor elementelor listei dis, egale \n #cu k (k=0, 1, ..., n-1) \n s=len(I)\n ptsk=Nd.rvs(size=s, mean=med[k], cov=Sigma[:,:,k])#generam atatea valori din ia distr. f_k\n #cat este lungimea lui I\n pts[I,:]=ptsk # in pozitiile I copiem valorile generate\n return pts\n\nN=10000\npts=GaussianMixture2D(N,pr, med, Sigma)\n\nplt.rcParams['figure.figsize'] = (6.0, 4.0)\nfig9=plt.figure(9)\nH, xedges, yedges = np.histogram2d(pts[:,1], pts[:,0], bins=[75,75], normed=True)\nHmasked = np.ma.masked_where(H==0,H) \nextent = [ xedges[0], xedges[-1], yedges[0], yedges[-1]]\nheatmap=plt.imshow(Hmasked, cmap='Blues', origin='lower',interpolation='nearest', extent=extent)\nplt.colorbar()\n", "Contourplots si heatmaps folosind seaborn", "import seaborn as sb\nimport scipy.stats as st", "seaborn ofera posibiliatea de a identifica dintr-un sir de observatii asupra a doua variabile aleatoare, $X$, $Y$, normal normal distribuite, $X\\sim N(m_0,\\sigma_0)$, $Y\\sim N(m_1, \\sigma_1)$,\ncoeficientul lor de corelatie. Functia sb.jointplot vizualizeaza norul de puncte $(x_i, y_i)$ ce sunt obsevatii asupra vectorului $(X,Y)$ si traseaza histogramele distributiilor marginale, ale lui $X$, respectiv $Y$.", "mX=0; sigX=1.2\nmY=1.7; sigY=0.93\nxvals=st.norm.rvs(size=1000, loc=mX, scale=sigX)\nyvals=st.norm.rvs(size=1000, loc=mY, scale=sigY)\nsb.jointplot(xvals, yvals,size=6, kind=\"reg\");# alte optiuni pt kind: `kde`", "Coeficientul Pearson, r, este un estimator al coeficientului de corelatie $\\rho(X,Y)$. Fiind foarte apropiat de 0, concluzionam ca cele doua variabile aleatoare sunt\nnecorelate si fiind normal distribuite sunt independente (in general 2 variabile necorelate nu sunt independente, dar in cazul normal ele sunt!!!!)\nSa verificam cat de bine estimeaza seaborn.jointplot coeficientul de corelatie. 
In acest scop generam observatii\nasupra unui vector aleator normal distribuit $(X,Y)$ cu $\\rho(X,Y)=0.67$ si apoi comparam coeficientul Pearson\ncalculat cu acest $\\rho$:", "m=[1, -2]# mediile pentru X si Y\nrho=0.67\ns=[1.2, 0.9]#abaterile standard pentru X, respectiv Y\ncovar=rho*np.sqrt(s[0]*s[1])\nSig=[[s[0]**2, covar],[covar, s[1]**2]]\n\nV=Nd(mean=m, cov=Sig)# \n\npts=V.rvs(size=2000)\nsb.jointplot(pts[:,0], pts[:,1], size=6, kind=\"reg\");", "Observam ca estimatorul lui $\\rho$ este suficient de bun. Axa mare a norului eliptic este\nde panta pozitiva, pentru ca $\\rho(X,Y)>0$.\nApeland acum functia sb.joinplot, cu cuvantul cheie kind setat pe kde, este afisata o aproximatie a contourplot-ului\ndensitatii de probabilitate a vectorului aleator normal distribuit, $V=(X,Y)$, iar sus si lateral dreapta, aproximatii ale densitatii lui $X$, respectiv $Y$:", "sb.jointplot(pts[:,0], pts[:,1], size=6, kind=\"kde\");", "kde inseamna kernel density estimation. Exista cativa algoritmi folositi in statistica si machine learning \ncare estimeaza densitatea de probabilitate a unei variabile aleatoare (vector aleator) din observatii asupra acestora.\nExemple de aplicatii ale heatmap-urilor\nNu doar distributiile normale bivariate se vizualizeaza prin heatmap-uri, ci orice alta distributie 2D.\nDe exemplu dupa campionatul mondial din 2010 pe site-ul FIFA s-au afisat heatmap-urile color ale pozitiei jucatorilor in teren in timpul fiecarui meci.\nIlustram mai jos heatmap-ul asociat jucatorului Piqu&eacute;:", "from IPython.display import Image\n\nImage(filename='Imags/heatmap_pique.jpg')#imaginea a fost postata pana de curand\n#aici: \n#http://www.fifa.com/worldcup/archive/southafrica2010/statistics/players/player=216973/\n#heatmap.html", "Evident, distributia de probabilitate a pozitiei lui Piqu&eacute; in teren este o mixtura de mai multe distributii.\nZona mai intens colorata in rosu este cea in care jucatorul a revenit mai des.\nIn acest document se pot vedea 
heatmap-urile tuturor jucatorilor din meciul Argentina-Germania.\nPentru optimizarea paginilor WEB se genereaza heatmap-ul click-urilor vizitatorilor\nIn recunoasterea formelor se pune problema inversa: dintr-un nor de puncte, ce se interpreteaza a fi valori de observatie asupra unei mixturi \n(Gaussiene) de K distributii, se estimeaza conform algoritmului EM (Expectation Maximization) \nprobabilitatile din definitia mixturii,\npunctele ce sunt centrii norilor (adica vectorii medii pentru fiecare distributie bivariata componenta) si elementele matricilor de covarianta. Prezentarea algoritmului aici.", "from IPython.core.display import HTML\ndef css_styling():\n styles = open(\"./custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NII-cloud-operation/Jupyter-LC_wrapper
examples/Summarizing and Logging.ipynb
bsd-3-clause
[ "Summarizing and Logging\nAn example of the Summarizing and Logging mode.\nEnabling the Summarizing and Logging mode\nTo enable the Summarizing and Logging mode, add !! at the beginning of the code cell.", "!!from time import sleep\n\nfor i in range(0, 100):\n print(i)\n sleep(0.1)", "You can configure the summarization settings via the environment variable lc_wrapper.", "%env lc_wrapper=4:4:4:4\n\n!!from time import sleep\n\nfor i in range(0, 100):\n print(i)\n sleep(0.1)", "The .log directory is created and the whole output is recorded in a log file in this directory.\nThe filename is shown in the output area, as above.", "!cat /notebooks/.log/20170704/20170704-071348-0190.log", "Various Types of Execution Results\nLC_wrapper records not only stream output but also execution results.\nPlain Text in Execution Result\nAn execution result is recorded together with the stream outputs.", "def do_something():\n return \"output something\"\n\ndo_something()\n\n!!from time import sleep\n\nfor i in range(0, 100):\n print(i)\n sleep(0.1)\n\ndo_something()\n\n!cat /notebooks/.log/20170704/20170704-071448-0119.log", "HTML in Execution Result\nAn execution result can also contain HTML code...", "!!from time import sleep\nfrom datetime import datetime\nimport pandas as pd\n\nitems = []\nfor i in range(0, 100):\n print(i)\n sleep(0.1)\n items.append((i, datetime.now()))\n\npd.DataFrame(items, columns=['Index', 'Datetime'])\n\n!cat /notebooks/.log/20170704/20170704-071539-0790.log", "Image in Execution Result", "%matplotlib inline\n\n!!from time import sleep\nfrom datetime import datetime\nimport pandas as pd\n\nitems = []\nfor i in range(0, 100):\n print(i)\n sleep(0.1)\n items.append((datetime.now(), i))\n\npd.DataFrame(items, columns=['Datetime', 'Index']).set_index('Datetime').plot()\n\n!cat /notebooks/.log/20170704/20170704-071619-0567.log", "Errors\nLC_wrapper can handle errors properly.", "!!from time import sleep\n\nfor i in range(0, 100):\n print(i)\n sleep(0.1)\n\n# Always raises AssertionError\nassert False\n\n!cat /notebooks/.log/20170704/20170704-071647-0970.log" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
drrelyea/SPGL1_python_port
examples/Official_demo.ipynb
lgpl-2.1
[ "SPGL1 demo\nThis notebook contains a Python implementation of the original examples from SPGL1 MATLAB solver", "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom scipy.sparse import spdiags\nfrom scipy.sparse.linalg import lsqr as splsqr\nfrom spgl1.lsqr import lsqr\nfrom spgl1 import spgl1, spg_lasso, spg_bp, spg_bpdn, spg_mmv\nfrom spgl1.spgl1 import norm_l1nn_primal, norm_l1nn_dual, norm_l1nn_project\nfrom spgl1.spgl1 import norm_l12nn_primal, norm_l12nn_dual, norm_l12nn_project\n\n# Initialize random number generators\nnp.random.seed(43273289)", "Lasso", "# Create random m-by-n encoding matrix and sparse vector\nm = 50\nn = 128\nk = 14\n[A,Rtmp] = np.linalg.qr(np.random.randn(n,m),'reduced')\nA = A.T\np = np.random.permutation(n)\np = p[0:k]\nx0 = np.zeros(n)\nx0[p] = np.random.randn(k)", "Solve the underdetermined LASSO problem for $||x||_1 <= \\pi$:\n$$min.||Ax-b||_2 \\quad subject \\quad to \\quad ||x||_1 <= \\pi$$", "b = A.dot(x0)\ntau = np.pi\nx,resid,grad,info = spg_lasso(A, b, tau, verbosity=1)\n\nprint()\nprint('%s%s%s' % ('-'*35,' Solution ','-'*35))\nprint('nonzeros(x) = %i, ||x||_1 = %12.6e, ||x||_1 - pi = %13.6e' % \\\n (np.sum(abs(x)>1e-5), np.linalg.norm(x,1), np.linalg.norm(x,1)-np.pi))\nprint('%s' % ('-'*80))", "BP\nSolve the basis pursuit (BP) problem:\n$$min. 
||x||_1 \\quad subject \\quad to \\quad Ax = b$$", "b = A.dot(x0) # signal\nx,resid,grad,info = spg_bp(A, b, verbosity=2)\n\nplt.figure()\nplt.plot(x,'b')\nplt.plot(x0,'ro')\nplt.legend(('Recovered coefficients','Original coefficients'))\nplt.title('Basis Pursuit');\n\nplt.figure()\nplt.plot(info['xnorm1'], info['rnorm2'], '.-k')\nplt.xlabel(r'$||x||_1$')\nplt.ylabel(r'$||r||_2$')\nplt.title('Sampled Pareto curve')\n\nplt.figure()\nplt.plot(np.arange(info['niters']), info['rnorm2']/max(info['rnorm2']), '.-k')\nplt.plot(np.arange(info['niters']), info['xnorm1']/max(info['xnorm1']), '.-r')\nplt.xlabel(r'#iter')\nplt.ylabel(r'$||r||_2 & ||x||_1$');\nplt.title('Cost functions');", "BPDN\nSolve the basis pursuit denoise (BPDN) problem:\n$$min. ||x||_1 \\quad subject \\quad to \\quad ||Ax - b||_2 <= 0.1$$", "b = A.dot(x0) + np.random.randn(m) * 0.075\nsigma = 0.10 # % Desired ||Ax - b||_2\nx,resid,grad,info = spg_bpdn(A, b, sigma, iter_lim=10, verbosity=2)\n\nplt.figure()\nplt.plot(x,'b')\nplt.plot(x0,'ro')\nplt.legend(('Recovered coefficients','Original coefficients'))\nplt.title('Basis Pursuit Denoise');", "BPDN with non-negative solution\nWe repeat the same procedure but we have only positive elements in the x vector. We compare spgl1 with L1 norms and with L1NN norms", "x0 = np.zeros(n)\nx0[p] = np.abs(np.random.randn(k))\nb = A.dot(x0) # signal\n\nx,resid,grad,info = spg_bp(A, b, iter_lim=20, verbosity=1)\n\n\nxnn,residnn,gradnn,infonn = spg_bp(A, b, iter_lim=20, verbosity=1, \n project=norm_l1nn_project, \n primal_norm=norm_l1nn_primal,\n dual_norm=norm_l1nn_dual)\n\nplt.figure()\nplt.plot(x,'b')\nplt.plot(xnn,'--g')\nplt.plot(x0,'ro')\nplt.legend(('Recovered coefficients', 'Recovered coefficients NNnorms','Original coefficients'))\nplt.title('Basis Pursuit');", "BP with complex numbers\nSolve the basis pursuit (BP) problem in COMPLEX variables:\n$$min. 
||z||_1 \\quad subject \\quad to \\quad Az = b$$", "from scipy.sparse.linalg import LinearOperator\n\nclass partialFourier(LinearOperator):\n def __init__(self, idx, n):\n self.idx = idx\n self.n = n\n self.shape = (len(idx), n)\n self.dtype = np.complex128\n def _matvec(self, x): \n # % y = P(idx) * FFT(x)\n z = np.fft.fft(x) / np.sqrt(n)\n return z[idx]\n def _rmatvec(self, x): \n z = np.zeros(n,dtype=complex)\n z[idx] = x\n return np.fft.ifft(z) * np.sqrt(n)\n\n \n# % Create partial Fourier operator with rows idx\nidx = np.random.permutation(n)\nidx = idx[0:m]\nopA = partialFourier(idx, n)\n\n# % Create sparse coefficients and b = 'A' * z0;\nz0 = np.zeros(n,dtype=complex)\nz0[p] = np.random.randn(k) + 1j * np.random.randn(k)\nb = opA.matvec(z0)\n\nz,resid,grad,info = spg_bp(opA,b, verbosity=2)\n\nplt.figure()\nplt.plot(z.real,'b+',markersize=15.0)\nplt.plot(z0.real,'bo')\nplt.plot(z.imag,'r+',markersize=15.0)\nplt.plot(z0.imag,'ro')\nplt.legend(('Recovered (real)', 'Original (real)', 'Recovered (imag)', 'Original (imag)'))\nplt.title('Complex Basis Pursuit');", "Pareto Frontier\nSample the Pareto frontier at 100 points:\n$$phi(tau) = min. ||Ax-b||_2 \\quad subject \\quad to \\quad ||x|| <= \\tau$$", "b = A.dot(x0)\nx = np.zeros(n)\ntau = np.linspace(0, 1.05 * np.linalg.norm(x0, 1), 100)\ntau[0] = 1e-10\nphi = np.zeros(tau.size)\n\nfor i in range(tau.size):\n x,r,grad,info = spgl1(A, b, tau[i], 0, x, iter_lim=1000)\n phi[i] = np.linalg.norm(r)\n\nplt.figure()\nplt.plot(tau,phi, '.')\nplt.title('Pareto frontier')\nplt.xlabel('||x||_1')\nplt.ylabel('||Ax-b||_2');", "Weighted BP\nSolve\n$$min. ||y||_1 \\quad subject \\quad to \\quad AW^{-1}y = b$$\nand the weighted basis pursuit (BP) problem:\n$$min. 
||Wx||_1 \\quad subject \\quad to \\quad Ax = b$$\nfollowed by setting $y = Wx$.", "# Sparsify vector x0 a bit more to get exact recovery\nk = 9\nx0 = np.zeros(n)\nx0[p[0:k]] = np.random.randn(k)\n\n# Set up weights w and vector b\nw = np.random.rand(n) + 0.1 # Weights\nb = A.dot(x0/w) # Signal\n\n# Solution\nx,resid,grad,info = spg_bp(A, b, **dict(iter_lim=1000, weights=w))\n\n# Reconstructed solution, with weighting\nx1 = x * w\n\nplt.figure()\nplt.plot(x1,'b')\nplt.plot(x0,'ro')\nplt.legend(('Coefficients','Original coefficients'))\nplt.title('Weighted Basis Pursuit');", "MMV\nSolve the multiple measurement vector (MMV) problem\n$$(1) \\quad min. ||Y||_{1,2} \\quad subject \\quad to \\quad AW^{-1}Y = B$$\nand the weighted MMV problem (weights on the rows of X):\n$$(2) \\quad min. ||WX||_{1,2} \\quad subject \\quad to \\quad AX = B$$\nfollowed by setting $Y = WX$.", "# Create problem\nm = 100\nn = 150\nk = 12\nl = 6;\nA = np.random.randn(m, n)\np = np.random.permutation(n)[:k]\nX0 = np.zeros((n, l))\nX0[p, :] = np.random.randn(k, l)\n\nweights = 3 * np.random.rand(n) + 0.1\nW = 1/weights * np.eye(n)\n\nB = A.dot(W).dot(X0)\n\n# Solve unweighted version\nx_uw, _, _, _ = spg_mmv(A.dot(W), B, 0, **dict(verbosity=1))\n\n# Solve weighted version\nx_w, _, _, _ = spg_mmv(A, B, 0, **dict(verbosity=2, weights=weights))\nx_w = spdiags(weights, 0, n, n).dot(x_w)\n\n# Plot results\nplt.figure()\nplt.plot(x_uw[:, 0], 'b-', label='Coefficients (1)')\nplt.plot(x_w[:, 0], 'g--', label='Coefficients (2)')\nplt.plot(X0[:, 0], 'ro', label='Original coefficients')\nplt.legend()\nplt.title('Weighted Basis Pursuit with Multiple Measurement Vectors');\n\nplt.figure()\nplt.plot(x_uw[:, 1], 'b', label='Coefficients (1)')\nplt.plot(x_w[:, 1], 'g--', label='Coefficients (2)')\nplt.plot(X0[:, 1], 'ro', label='Original coefficients')\nplt.legend()\nplt.title('Weighted Basis Pursuit with Multiple Measurement Vectors');", "MMV with non-negative solution", "# Create problem\nm = 100\nn = 150\nk 
= 12\nl = 6;\nA = np.random.randn(m, n)\np = np.random.permutation(n)[:k]\nX0 = np.zeros((n, l))\nX0[p, :] = np.abs(np.random.randn(k, l))\n\nB = A.dot(X0)\n\nX, _, _, _ = spg_mmv(A, B, 0, iter_lim=10, verbosity=1)\nXNN, _, _, _ = spg_mmv(A, B, 0, iter_lim=10, verbosity=1,\n project=norm_l12nn_project, \n primal_norm=norm_l12nn_primal, \n dual_norm=norm_l12nn_dual)\nprint('Negative X:', np.any(X))\nprint('Negative XNN:', np.any(XNN))\n\n\n# Plot results\nplt.figure()\nplt.plot(X[:, 0], 'b-', label='Coefficients')\nplt.plot(XNN[:, 0], 'g--', label='Coefficients NN')\nplt.plot(X0[:, 0], 'ro', label='Original coefficients')\nplt.legend()\nplt.title('Weighted Basis Pursuit with Multiple Measurement Vectors');\n\nplt.figure()\nplt.plot(X[:, 1], 'b', label='Coefficients')\nplt.plot(XNN[:, 1], 'g--', label='Coefficients NN')\nplt.plot(X0[:, 1], 'ro', label='Original coefficients')\nplt.legend()\nplt.title('Weighted Basis Pursuit with Multiple Measurement Vectors');", "LSQR\nLet's finally try to compare the internal lsqr with scipy lsqr and perform sgpl1 with subspace minimization", "def Aprodfun(A, x, mode):\n if mode == 1:\n y = np.dot(A,x)\n else:\n return np.dot(np.conj(A.T), x)\n return y\n\nn = 10\nm = 20\nA = np.random.normal(0, 1, (m, n))\nAprod = lambda x, mode: Aprodfun(A, x, mode)\nx = np.ones(n)\ny = A.dot(x)\n\ndamp = 1e-5\naTol = 1e-5\nbTol = 1e-5\nconLim = 1e12\nitnMaxLSQR = 100\nshowLSQR = 2\n\nxinv, istop, itn, r1norm, r2norm, anorm, acond, arnorm, xnorm, var = \\\n lsqr(m, n, Aprod, y, damp, aTol, bTol, conLim, itnMaxLSQR, showLSQR)\n \nxinv_sp, istop_sp, itn_sp, r1norm_sp, r2norm_sp, anorm_sp, acond_sp, arnorm_sp, xnorm_sp, var = \\\n splsqr(A, y, damp, aTol, bTol, conLim, itnMaxLSQR, showLSQR)\n\nprint('istop=%d, itn=%d, r1norm=%.2f, '\n 'r2norm=%.2f, anorm=%.2f, acond=%.2f, arnorm=%.2f, xnorm=%.2f' \\\n %(istop, itn, r1norm, r2norm, anorm, acond, arnorm, xnorm))\n\nprint('istop=%d, itn=%d, r1norm=%.2f, '\n 'r2norm=%.2f, anorm=%.2f, acond=%.2f, 
arnorm=%.2f, xnorm=%.2f' \\\n %(istop_sp, itn_sp, r1norm_sp, r2norm_sp, anorm_sp, acond_sp, arnorm_sp, xnorm_sp))\n\nplt.plot(x, lw=8)\nplt.plot(xinv, '--g', lw=4)\nplt.plot(xinv_sp, '--r')\nplt.ylim(0, 2);", "Subspace minimization in SPGL1\nAnd use subspace minimization in SPGL1", "# Create random m-by-n encoding matrix and sparse vector\nnp.random.seed(0)\n\nm = 50\nn = 128\nk = 14\n[A, Rtmp] = np.linalg.qr(np.random.randn(n,m),'reduced')\nA = A.T\np = np.random.permutation(n)\np = p[0:k]\nx0 = np.zeros(n)\nx0[p] = np.random.randn(k)\n\n# Basis pursuit with subspace minimization\nb = A.dot(x0) # signal\nx,resid,grad,info = spg_bp(A, b, subspace_min=False, verbosity=2)\nx,resid,grad,info_sub = spg_bp(A, b, subspace_min=True, verbosity=2)\n\nplt.figure()\nplt.plot(np.arange(info['niters']), info['rnorm2']/max(info['rnorm2']), '.-k', \n label='without subspace min')\nplt.plot(np.arange(info_sub['niters']), info_sub['rnorm2']/max(info_sub['rnorm2']), '.-r', \n label='with subspace min')\nplt.xlabel(r'#iter')\nplt.ylabel(r'$||r||_2$')\nplt.legend();" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
science-of-imagination/nengo-buffer
Project/trained_mental_scaling_ens.ipynb
gpl-3.0
[ "Using the trained weights in an ensemble of neurons\n\nOn the function points branch of nengo\nOn the vision branch of nengo_extras", "import nengo\nimport numpy as np\nimport cPickle\nfrom nengo_extras.data import load_mnist\nfrom nengo_extras.vision import Gabor, Mask\nfrom matplotlib import pylab\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation", "Load the MNIST database", "# --- load the data\nimg_rows, img_cols = 28, 28\n\n(X_train, y_train), (X_test, y_test) = load_mnist()\n\nX_train = 2 * X_train - 1 # normalize to -1 to 1\nX_test = 2 * X_test - 1 # normalize to -1 to 1\n", "Each digit is represented by a one-hot vector where the index of the 1 represents the number", "temp = np.diag([1]*10)\n\nZERO = temp[0]\nONE = temp[1]\nTWO = temp[2]\nTHREE= temp[3]\nFOUR = temp[4]\nFIVE = temp[5]\nSIX = temp[6]\nSEVEN =temp[7]\nEIGHT= temp[8]\nNINE = temp[9]\n\nlabels =[ZERO,ONE,TWO,THREE,FOUR,FIVE,SIX,SEVEN,EIGHT,NINE]\n\ndim =28", "Load the saved weight matrices that were created by training the model", "label_weights = cPickle.load(open(\"label_weights1000.p\", \"rb\"))\nactivity_to_img_weights = cPickle.load(open(\"activity_to_img_weights_scale1000.p\", \"rb\"))\nscale_up_after_encoder_weights = cPickle.load(open(\"scale_up_after_encoder_weights1000.p\", \"rb\"))\nscale_down_after_encoder_weights = cPickle.load(open(\"scale_down_after_encoder_weights1000.p\", \"rb\"))\n\nscale_up_weights = cPickle.load(open(\"scale_up_weights1000.p\",\"rb\"))\nscale_down_weights = cPickle.load(open(\"scale_down_weights1000.p\",\"rb\"))\n", "The network where the mental imagery and scaling occur\n\nThe state, seed and ensemble parameters (including encoders) must all be the same for the saved weight matrices to work\nThe number of neurons (n_hid) must be the same as was used for training\nThe input must be shown only for a short period of time to be able to view the scaling\nThe recurrent connection must be from the neurons because the weight matrices were trained on the neuron activities", "rng = np.random.RandomState(9)\nn_hid = 1000\nmodel = nengo.Network(seed=3)\nwith model:\n #Stimulus only shows for a brief period of time\n stim = nengo.Node(lambda t: ZERO if t < 0.1 else 0) #nengo.processes.PresentInput(labels,1))#\n \n ens_params = dict(\n eval_points=X_train,\n neuron_type=nengo.LIF(),\n intercepts=nengo.dists.Choice([-0.5]),\n max_rates=nengo.dists.Choice([100]),\n )\n \n \n # linear filters used for edge detection as encoders, more plausible for the human visual system\n encoders = Gabor().generate(n_hid, (11, 11), rng=rng)\n encoders = Mask((28, 28)).populate(encoders, rng=rng, flatten=True)\n\n\n ens = nengo.Ensemble(n_hid, dim**2, seed=3, encoders=encoders, **ens_params)\n \n #Recurrent connection on the neurons of the ensemble to perform the scaling\n nengo.Connection(ens.neurons, ens.neurons, transform = scale_down_after_encoder_weights.T, synapse=0.1) \n\n #Connect stimulus to ensemble, transform using learned weight matrices\n nengo.Connection(stim, ens, transform = np.dot(label_weights,activity_to_img_weights).T, synapse=0.1)\n \n #Collect output, use synapse for smoothing\n probe = nengo.Probe(ens.neurons,synapse=0.1)\n \n\nsim = nengo.Simulator(model)\n\nsim.run(5)", "The following is not part of the brain model; it is used to view the output of the ensemble.\nSince it probes the neurons themselves, the output must be transformed from neuron activity to a visual image.", "'''Animation for Probe output'''\nfig = plt.figure()\n\noutput_acts = []\nfor act in sim.data[probe]:\n output_acts.append(np.dot(act,activity_to_img_weights))\n\ndef updatefig(i):\n im = pylab.imshow(np.reshape(output_acts[i],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'),animated=True)\n \n return im,\n\nani = animation.FuncAnimation(fig, updatefig, interval=0.1, blit=True)\nplt.show()", "Pickle the probe's output if it takes a long time to run", "#The filename includes the number of neurons and which digit is being scaled\nfilename = \"mental_scaling_output_ZERO_\" + str(n_hid) + \".p\"\ncPickle.dump(sim.data[probe], open( filename , \"wb\" ) )", "Testing", "testing = np.dot(ZERO,np.dot(label_weights,activity_to_img_weights))\nplt.subplot(121)\npylab.imshow(np.reshape(testing,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\n#Get image\ntesting = np.dot(ZERO,np.dot(label_weights,activity_to_img_weights))\n\n\n#Get activity of image\n_, testing_act = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=testing)\n\n#Get scaled encoder outputs\ntesting_scale = np.dot(testing_act,scale_down_after_encoder_weights)\n\n#Get activities\ntesting_scale = ens.neuron_type.rates(testing_scale, sim.data[ens].gain, sim.data[ens].bias)\n\nfor i in range(2):\n testing_scale = np.dot(testing_scale,scale_down_after_encoder_weights)\n testing_scale = ens.neuron_type.rates(testing_scale, sim.data[ens].gain, sim.data[ens].bias)\n\ntesting_scale = np.dot(testing_scale,activity_to_img_weights)\n\nplt.subplot(122)\npylab.imshow(np.reshape(testing_scale,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
face_generation/dlnd_face_generation.ipynb
mit
[ "Face Generation\nIn this project, you'll use generative adversarial networks to generate new images of faces.\nGet the Data\nYou'll be using two datasets in this project:\n- MNIST\n- CelebA\nSince the celebA dataset is complex and you're doing GANs in a project for the first time, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will allow you to see how well your model trains sooner.\nIf you're using FloydHub, set data_dir to \"/input\" and use the FloydHub data ID \"R5KrjnANiKVhLWAkpXhNBe\".", "data_dir = './data'\n\n# FloydHub - Use with data ID \"R5KrjnANiKVhLWAkpXhNBe\"\n#data_dir = '/input'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\nhelper.download_extract('mnist', data_dir)\nhelper.download_extract('celeba', data_dir)", "Explore the Data\nMNIST\nAs you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.", "show_n_images = 25\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\nimport os\nfrom glob import glob\nfrom matplotlib import pyplot\n\nmnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')\npyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')", "CelebA\nThe CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.", "show_n_images = 25\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nmnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')\npyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))", "Preprocess the Data\nSince the project's main focus is on building the GANs, we'll preprocess the data for you. 
The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.\nThe MNIST images are black and white images with a single [color channel](https://en.wikipedia.org/wiki/Channel_(digital_image%29) while the CelebA images have [3 color channels (RGB color channel)](https://en.wikipedia.org/wiki/Channel_(digital_image%29#RGB_Images).\nBuild the Neural Network\nYou'll build the components necessary to build a GANs by implementing the following functions below:\n- model_inputs\n- discriminator\n- generator\n- model_loss\n- model_opt\n- train\nCheck the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU", "\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Input\nImplement the model_inputs function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:\n- Real input images placeholder with rank 4 using image_width, image_height, and image_channels.\n- Z input placeholder with rank 2 using z_dim.\n- Learning rate placeholder with rank 0.\nReturn the placeholders in the following tuple (tensor of real input images, tensor of z data, learning rate)", "import problem_unittests as tests\n\ndef model_inputs(image_width, image_height, image_channels, z_dim):\n    \"\"\"\n    Create the model inputs\n    :param image_width: The input image width\n    :param image_height: The input image height\n    :param image_channels: The number of image channels\n    :param z_dim: The dimension of Z\n    :return: Tuple of (tensor of real input images, tensor of z data, learning rate)\n    \"\"\"\n    inputs_real = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real') \n    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')\n    # shape () gives the rank-0 placeholder the spec asks for; (None) is just None in Python\n    learning_rate = tf.placeholder(tf.float32, (), name='learning_rate')\n    return inputs_real, inputs_z, learning_rate\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)", "Discriminator\nImplement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of \"discriminator\" to allow the variables to be reused. 
The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).", "def discriminator(images, reuse=False):\n \"\"\"\n Create the discriminator network\n :param images: Tensor of input image(s)\n :param reuse: Boolean if the weights should be reused\n :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)\n \"\"\"\n alpha=0.2\n x = images\n with tf.variable_scope('discriminator', reuse=reuse):\n x = tf.layers.conv2d(x, 64, 4, strides=2, padding=\"same\")\n x = tf.layers.batch_normalization(x, training=True)\n x = tf.maximum(alpha * x, x)\n #x = tf.layers.dropout(x, 0.5)\n\n x = tf.layers.conv2d(x, 128, 4, strides=2, padding=\"same\")\n x = tf.layers.batch_normalization(x, training=True)\n x = tf.maximum(alpha * x, x)\n #x = tf.layers.dropout(x, 0.5)\n\n x = tf.layers.conv2d(x, 256, 4, strides=2, padding=\"same\")\n x = tf.layers.batch_normalization(x, training=True)\n x = tf.maximum(alpha * x, x)\n #x = tf.layers.dropout(x, 0.5)\n\n x = tf.reshape(x, (-1, 4 * 4 * 256))\n logits = tf.layers.dense(x, 1)\n out = tf.sigmoid(logits)\n\n return out, logits\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_discriminator(discriminator, tf)", "Generator\nImplement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of \"generator\" to allow the variables to be reused. 
The function should return the generated 28 x 28 x out_channel_dim images.", "def generator(z, out_channel_dim, is_train=True):\n \"\"\"\n Create the generator network\n :param z: Input z\n :param out_channel_dim: The number of channels in the output image\n :param is_train: Boolean if generator is being used for training\n :return: The tensor output of the generator\n \"\"\"\n reuse = not is_train\n alpha= 0.2\n with tf.variable_scope('generator', reuse=reuse):\n x = tf.layers.dense(z, 4 * 4 * 512)\n \n x = tf.reshape(x, (-1, 4, 4, 512))\n x = tf.layers.batch_normalization(x, training=is_train)\n #x = tf.layers.dropout(x, 0.5)\n x = tf.maximum(alpha * x, x)\n #print(x.shape)\n x = tf.layers.conv2d_transpose(x, 256, 4, strides=1, padding=\"valid\")\n x = tf.layers.batch_normalization(x,training=is_train)\n x = tf.maximum(alpha * x, x)\n #print(x.shape)\n x = tf.layers.conv2d_transpose(x, 128, 4, strides=2, padding=\"same\")\n x = tf.layers.batch_normalization(x,training=is_train)\n x = tf.maximum(alpha * x, x)\n #print(x.shape)\n x = tf.layers.conv2d_transpose(x, out_channel_dim, 4, strides=2, padding=\"same\")\n #x = tf.maximum(alpha * x, x)\n\n logits = x\n out = tf.tanh(logits)\n\n return out\n\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_generator(generator, tf)", "Loss\nImplement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). 
Use the following functions you implemented:\n- discriminator(images, reuse=False)\n- generator(z, out_channel_dim, is_train=True)", "def model_loss(input_real, input_z, out_channel_dim):\n    \"\"\"\n    Get the loss for the discriminator and generator\n    :param input_real: Images from the real dataset\n    :param input_z: Z input\n    :param out_channel_dim: The number of channels in the output image\n    :return: A tuple of (discriminator loss, generator loss)\n    \"\"\"\n    smooth = 0.1\n    _, d_logits_real = discriminator(input_real, reuse=False)\n    fake = generator(input_z, out_channel_dim, is_train=True)\n    # discriminator returns (output, logits); keep only the logits for the loss\n    _, d_logits_fake = discriminator(fake, reuse=True)\n    # Calculate losses\n    d_loss_real = tf.reduce_mean(\n        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, \n                                                labels=tf.ones_like(d_logits_real) * (1 - smooth)))\n    d_loss_fake = tf.reduce_mean(\n        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, \n                                                labels=tf.zeros_like(d_logits_fake)))\n    d_loss = d_loss_real + d_loss_fake\n\n    g_loss = tf.reduce_mean(\n        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,\n                                                labels=tf.ones_like(d_logits_fake)))\n    return d_loss, g_loss\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_loss(model_loss)", "Optimization\nImplement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. 
The function should return a tuple of (discriminator training operation, generator training operation).", "def model_opt(d_loss, g_loss, learning_rate, beta1):\n \"\"\"\n Get optimization operations\n :param d_loss: Discriminator loss Tensor\n :param g_loss: Generator loss Tensor\n :param learning_rate: Learning Rate Placeholder\n :param beta1: The exponential decay rate for the 1st moment in the optimizer\n :return: A tuple of (discriminator training operation, generator training operation)\n \"\"\"\n t_vars = tf.trainable_variables()\n g_vars = [var for var in t_vars if var.name.startswith('generator')]\n d_vars = [var for var in t_vars if var.name.startswith('discriminator')]\n all_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)\n \n g_update_ops = [var for var in all_update_ops if var.name.startswith('generator')]\n d_update_ops = [var for var in all_update_ops if var.name.startswith('discriminator')]\n\n with tf.control_dependencies(d_update_ops):\n d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)\n with tf.control_dependencies(g_update_ops):\n g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)\n return d_train_opt, g_train_opt\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_opt(model_opt, tf)", "Neural Network Training\nShow Output\nUse this function to show the current output of the generator during training. 
It will help you determine how well the GANs is training.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\ndef show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):\n \"\"\"\n Show example output for the generator\n :param sess: TensorFlow session\n :param n_images: Number of Images to display\n :param input_z: Input Z Tensor\n :param out_channel_dim: The number of channels in the output image\n :param image_mode: The mode to use for images (\"RGB\" or \"L\")\n \"\"\"\n cmap = None if image_mode == 'RGB' else 'gray'\n z_dim = input_z.get_shape().as_list()[-1]\n example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])\n\n samples = sess.run(\n generator(input_z, out_channel_dim, False),\n feed_dict={input_z: example_z})\n\n images_grid = helper.images_square_grid(samples, image_mode)\n pyplot.imshow(images_grid, cmap=cmap)\n pyplot.show()", "Train\nImplement train to build and train the GANs. Use the following functions you implemented:\n- model_inputs(image_width, image_height, image_channels, z_dim)\n- model_loss(input_real, input_z, out_channel_dim)\n- model_opt(d_loss, g_loss, learning_rate, beta1)\nUse the show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and increase the size of the notebook. 
It's recommended to print the generator output every 100 batches.", "def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):\n    \"\"\"\n    Train the GAN\n    :param epoch_count: Number of epochs\n    :param batch_size: Batch Size\n    :param z_dim: Z dimension\n    :param learning_rate: Learning Rate\n    :param beta1: The exponential decay rate for the 1st moment in the optimizer\n    :param get_batches: Function to get batches\n    :param data_shape: Shape of the data\n    :param data_image_mode: The image mode to use for images (\"RGB\" or \"L\")\n    \"\"\"\n    inputs_real, inputs_z, lr = model_inputs(data_shape[1], data_shape[2], data_shape[3], z_dim)\n    d_loss, g_loss = model_loss(inputs_real, inputs_z, data_shape[-1])\n    # build the optimizers on the learning-rate placeholder so the value fed at run time is used\n    d_train_opt, g_train_opt = model_opt(d_loss, g_loss, lr, beta1)\n    batch_num = 0 \n    \n    with tf.Session() as sess:\n        sess.run(tf.global_variables_initializer())\n        for epoch_i in range(epoch_count):\n            for batch_images in get_batches(batch_size):\n                batch_num = batch_num+1\n                batch_images = batch_images * 2  # rescale from [-0.5, 0.5] to [-1, 1] to match the generator's tanh output\n                batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))\n                _ = sess.run(d_train_opt, feed_dict={inputs_real: batch_images, inputs_z: batch_z, lr:learning_rate})\n                _ = sess.run(g_train_opt, feed_dict={inputs_z: batch_z, lr:learning_rate})\n                \n                if batch_num % 100 == 0:\n                    train_loss_d = d_loss.eval({inputs_z:batch_z, inputs_real: batch_images})\n                    train_loss_g = g_loss.eval({inputs_z:batch_z})\n                    print(\"Epoch {}/{} batch {}...\".format(epoch_i+1, epoch_count, batch_num),\n                          \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n                          \"Generator Loss: {:.4f}\".format(train_loss_g)) ", "MNIST\nTest your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. 
Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.", "batch_size = 64\nz_dim = 100\nlearning_rate = 0.001\nbeta1 = 0.6\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nepochs = 2\n\nmnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))\nwith tf.Graph().as_default():\n train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,\n mnist_dataset.shape, mnist_dataset.image_mode)", "CelebA\nRun your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.", "batch_size = 64\nz_dim = 100\nlearning_rate = 0.001\nbeta1 = 0.6\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nepochs = 1\n\nceleba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))\nwith tf.Graph().as_default():\n train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,\n celeba_dataset.shape, celeba_dataset.image_mode)", "Submitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_face_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
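The discriminator and generator above both build their activations by hand as `tf.maximum(alpha * x, x)`. As an illustration of what that expression computes, here is the same leaky-ReLU rule as a plain NumPy sketch (the function name and sample inputs are ours; `alpha=0.2` mirrors the notebook):

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    # elementwise max(alpha * x, x): positives pass through unchanged,
    # negatives are scaled by alpha instead of being zeroed out
    return np.maximum(alpha * x, x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(leaky_relu(x))  # negatives scaled by 0.2, positives unchanged
```

Keeping a small gradient on the negative side helps the discriminator pass useful gradients back to the generator, which is one reason DCGAN-style architectures favor leaky ReLU over plain ReLU.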
fastai/fastai
nbs/quick_start.ipynb
apache-2.0
[ "#|hide\n#|skip\n! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab\n\n#|all_slow\n\nfrom fastai.vision.all import *\nfrom fastai.text.all import *\nfrom fastai.collab import *\nfrom fastai.tabular.all import *", "fastai applications - quick start\nfastai's applications all use the same basic steps and code:\n\nCreate appropriate DataLoaders\nCreate a Learner\nCall a fit method\nMake predictions or view results.\n\nIn this quick start, we'll show these steps for a wide range of different applications and datasets. As you'll see, the code in each case is extremely similar, despite the very different models and data being used.\nComputer vision classification\nThe code below does the following things:\n\nA dataset called the Oxford-IIIT Pet Dataset that contains 7,349 images of cats and dogs from 37 different breeds will be downloaded from the fast.ai datasets collection to the GPU server you are using, and will then be extracted.\nA pretrained model that has already been trained on 1.3 million images, using a competition-winning model will be downloaded from the internet.\nThe pretrained model will be fine-tuned using the latest advances in transfer learning, to create a model that is specially customized for recognizing dogs and cats.\n\nThe first two steps only need to be run once. 
If you run it again, it will use the dataset and model that have already been downloaded, rather than downloading them again.", "path = untar_data(URLs.PETS)/'images'\n\ndef is_cat(x): return x[0].isupper()\ndls = ImageDataLoaders.from_name_func(\n path, get_image_files(path), valid_pct=0.2, seed=42,\n label_func=is_cat, item_tfms=Resize(224))\n\nlearn = vision_learner(dls, resnet34, metrics=error_rate)\nlearn.fine_tune(1)", "You can do inference with your model with the predict method:", "img = PILImage.create('images/cat.jpg')\nimg\n\nis_cat,_,probs = learn.predict(img)\nprint(f\"Is this a cat?: {is_cat}.\")\nprint(f\"Probability it's a cat: {probs[1].item():.6f}\")", "Computer vision segmentation\nHere is how we can train a segmentation model with fastai, using a subset of the Camvid dataset:", "path = untar_data(URLs.CAMVID_TINY)\ndls = SegmentationDataLoaders.from_label_func(\n path, bs=8, fnames = get_image_files(path/\"images\"),\n label_func = lambda o: path/'labels'/f'{o.stem}_P{o.suffix}',\n codes = np.loadtxt(path/'codes.txt', dtype=str)\n)\n\nlearn = unet_learner(dls, resnet34)\nlearn.fine_tune(8)", "We can visualize how well it achieved its task, by asking the model to color-code each pixel of an image.", "learn.show_results(max_n=6, figsize=(7,8))", "Or we can plot the k instances that contributed the most to the validation loss by using the SegmentationInterpretation class.", "interp = SegmentationInterpretation.from_learner(learn)\ninterp.plot_top_losses(k=2)", "Natural language processing\nHere is all of the code necessary to train a model that can classify the sentiment of a movie review better than anything that existed in the world just five years ago:", "dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')\nlearn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)\nlearn.fine_tune(2, 1e-2)", "Predictions are done with predict, as for computer vision:", "learn.predict(\"I really liked that movie!\")", 
"Tabular\nBuilding models from plain tabular data is done using the same basic steps as the previous models. Here is the code necessary to train a model that will predict whether a person is a high-income earner, based on their socioeconomic background:", "path = untar_data(URLs.ADULT_SAMPLE)\n\ndls = TabularDataLoaders.from_csv(path/'adult.csv', path=path, y_names=\"salary\",\n cat_names = ['workclass', 'education', 'marital-status', 'occupation',\n 'relationship', 'race'],\n cont_names = ['age', 'fnlwgt', 'education-num'],\n procs = [Categorify, FillMissing, Normalize])\n\nlearn = tabular_learner(dls, metrics=accuracy)\nlearn.fit_one_cycle(2)", "Recommendation systems\nRecommendation systems are very important, particularly in e-commerce. Companies like Amazon and Netflix try hard to recommend products or movies that users might like. Here's how to train a model that will predict movies people might like, based on their previous viewing habits, using the MovieLens dataset:", "path = untar_data(URLs.ML_SAMPLE)\ndls = CollabDataLoaders.from_csv(path/'ratings.csv')\nlearn = collab_learner(dls, y_range=(0.5,5.5))\nlearn.fine_tune(6)", "We can use the same show_results call we saw earlier to view a few examples of user and movie IDs, actual ratings, and predictions:", "learn.show_results()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
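The `is_cat` labeling function in the pets example above leans entirely on a quirk of the Oxford-IIIT filenames: cat breeds are capitalized, dog breeds are not. A standalone sketch of that rule (the sample filenames are illustrative, not taken from the dataset):

```python
def is_cat(name):
    # Oxford-IIIT Pet convention: cat-breed filenames start with an uppercase letter
    return name[0].isupper()

for fname in ["Bengal_12.jpg", "beagle_3.jpg", "Siamese_7.jpg", "pug_44.jpg"]:
    print(f"{fname}: {'cat' if is_cat(fname) else 'dog'}")
```

`ImageDataLoaders.from_name_func` simply calls this function on each filename to produce the label, so any deterministic rule over filenames works the same way.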
pdh21/XID_plus
docs/notebooks/examples/XID+_example_pyvo_prior.ipynb
mit
[ "from astropy.io import ascii, fits\nimport astropy\nimport pylab as plt\n%matplotlib inline\nfrom astropy import wcs\nfrom astropy.table import Table,Column,join,hstack\nfrom astropy.coordinates import SkyCoord\nfrom astropy import units as u\nimport pymoc\nimport glob\nfrom time import sleep\nimport os\n\n\nimport numpy as np\nimport xidplus\nfrom xidplus import moc_routines\nimport pickle\nimport xidplus.catalogue as cat\n\nimport sys\nfrom herschelhelp_internal.utils import inMoc,flux_to_mag\nfrom xidplus.stan_fit import SPIRE\n\nimport aplpy\nimport seaborn as sns\n#sns.set(color_codes=True)\nimport pandas as pd\n#sns.set_style(\"white\")\nimport xidplus.posterior_maps as postmaps\nimport pyvo as vo\n\n", "First we select the field that the sources we are considering are in. If the sources span multiple fields that each field will need to be run individually as the FIR maps from seperate fields cannot be easily combined.", "fields = ['AKARI-NEP',\n 'AKARI-SEP',\n 'Bootes',\n 'CDFS-SWIRE',\n 'COSMOS',\n 'EGS',\n 'ELAIS-N1',\n 'ELAIS-N2',\n 'ELAIS-S1',\n 'GAMA-09',\n 'GAMA-12',\n 'GAMA-15',\n 'HDF-N',\n 'Herschel-Stripe-82',\n 'Lockman-SWIRE',\n 'NGP',\n 'SA13',\n 'SGP',\n 'SPIRE-NEP',\n 'SSDF',\n 'XMM-13hr',\n 'XMM-LSS',\n 'xFLS']\n\nfield_use = fields[6]\nprint(field_use)", "Here you provide the coordinate of the objects you are planning to run XID+ on and their ID's if any \nIf no ids are provided then they will be numbered 1-N)", "ras = [242,243]#enter your ra here as a list of numpy array\ndecs = [55,55] #enter your dec here as a list or numpy array\nobject_coords = SkyCoord(ra=ras*u.degree,dec=decs*u.degree)\n\nids = [] #add your ids here as a list or numpy array\nif len(ids)==0:\n ids = np.arange(0,len(ras),1)", "Run the pyvo query to create a table of all help sources within the desired radius of your objects", "#setup a connection to the HELP VO server at Sussex\nsearch_radius = 60/3600 #distance away from object that the VO query will look for 
galaxies in degrees\n#for SPIRE AND PACS we recommend 60\" and for MIPS we recommend 30\"\n\n\nservice = vo.dal.TAPService(\"https://herschel-vos.phys.sussex.ac.uk/__system__/tap/run/tap\")\n\nfor n,coords in enumerate(object_coords):\n    ra = coords.ra\n    dec = coords.dec\n    query_spire_pacs = \"\"\"\n    SELECT ra, dec, help_id, flag_optnir_det, f_mips_24\n    FROM herschelhelp.main\n    WHERE (\n        herschelhelp.main.field = '{}' AND\n        herschelhelp.main.flag_optnir_det>=5 AND\n        herschelhelp.main.f_mips_24>20\n    ) AND\n    CONTAINS(POINT('ICRS',ra, dec), CIRCLE('ICRS',{},{},{}))=1\n    \"\"\".format(field_use,ra,dec,search_radius)\n    \n    query_mips = \"\"\"\n    SELECT ra, dec, help_id, flag_optnir_det, f_irac_i1, f_irac_i2, f_irac_i3, f_irac_i4\n    FROM herschelhelp.main\n    WHERE (\n        herschelhelp.main.field = '{}' AND\n        herschelhelp.main.flag_optnir_det>=5\n    ) AND\n    CONTAINS(POINT('ICRS',ra, dec), CIRCLE('ICRS',{},{},{}))=1\n    \"\"\".format(field_use,ra,dec,search_radius)\n    \n    # pick the query that matches the maps you will fit (use query_mips for a MIPS run)\n    query = query_spire_pacs\n    \n    try:\n        job = service.submit_job(query)\n        job.run()\n\n        while job.phase == \"EXECUTING\":\n            print(\"Job running\")\n            sleep(5)\n        print('Job finished') \n\n        if n==0:\n            prior_help = job.fetch_result().to_table()\n            print('table created with {} rows'.format(len(prior_help)))\n        else:\n            result = job.fetch_result().to_table()\n            prior_help = astropy.table.vstack([result,prior_help],join_type='outer')\n            print('table edited, added {} rows'.format(len(result)))\n    except:\n        print('VO call failed')\n        job.delete()\n\nprint(len(prior_help))\nprior_help[:5]", "Run the below cell if you are running XID+ on SPIRE or PACS maps", "cra = Column(ras,name='ra')\ncdec = Column(decs,name='dec')\ncids = Column(ids,name='help_id')\ncdet = Column(np.zeros(len(ras))-99,name='flag_optnir_det')\ncmips = Column(np.zeros(len(ras))*np.nan,name='f_mips_24')\nprior_new = Table()\nprior_new.add_columns([cra,cdec,cids,cdet,cmips])\n\n\nprior_cat = astropy.table.vstack([prior_help,prior_new])\nlen(prior_cat)\nprior_cat[:5]", "Run the 
below cells if you are running XID+ on MIPS maps", "#provides limits on the flat prior used in XID based on the galaxies IRAC fluxes\nMIPS_lower=np.full(len(prior_help),0.0)\nMIPS_upper=np.full(len(prior_help),1E5)\nfor i in range(len(prior_help)):\n    if np.isnan(prior_help['f_irac_i4'][i])==False:\n        MIPS_lower[i]=prior_help['f_irac_i4'][i]/500.0\n        MIPS_upper[i]=prior_help['f_irac_i4'][i]*500.0\n    elif np.isnan(prior_help['f_irac_i3'][i])==False:\n        MIPS_lower[i]=prior_help['f_irac_i3'][i]/500.0\n        MIPS_upper[i]=prior_help['f_irac_i3'][i]*500.0\n    elif np.isnan(prior_help['f_irac_i2'][i])==False:\n        MIPS_lower[i]=prior_help['f_irac_i2'][i]/500.0\n        MIPS_upper[i]=prior_help['f_irac_i2'][i]*500.0\n    elif np.isnan(prior_help['f_irac_i1'][i])==False:\n        MIPS_lower[i]=prior_help['f_irac_i1'][i]/500.0\n        MIPS_upper[i]=prior_help['f_irac_i1'][i]*500.0\n    \nmips_lower_col = Column(MIPS_lower,name='MIPS_lower')\nmips_upper_col = Column(MIPS_upper,name='MIPS_upper')\nprior_help.add_columns([mips_lower_col,mips_upper_col])\n\n#add your IRAC fluxes here, if your objects don't have IRAC fluxes then they will be set to nan\ni1_f = np.zeros(len(ras))*np.nan\ni2_f = np.zeros(len(ras))*np.nan\ni3_f = np.zeros(len(ras))*np.nan\ni4_f = np.zeros(len(ras))*np.nan\n\ncra = Column(ras,name='ra')\ncdec = Column(decs,name='dec')\ncids = Column(ids,name='help_id')\ncdet = Column(np.zeros(len(ras))-99,name='flag_optnir_det')\nci1 = Column(i1_f,name='f_irac_i1')\nci2 = Column(i2_f,name='f_irac_i2')\nci3 = Column(i3_f,name='f_irac_i3')\nci4 = Column(i4_f,name='f_irac_i4')\n\n\n#same flat-prior limits for the new objects, using their IRAC fluxes where available\nMIPS_lower=np.full(len(ras),0.0)\nMIPS_upper=np.full(len(ras),1E5)\nfor i in range(len(ras)):\n    if np.isnan(i4_f[i])==False:\n        MIPS_lower[i]=i4_f[i]/500.0\n        MIPS_upper[i]=i4_f[i]*500.0\n    elif np.isnan(i3_f[i])==False:\n        MIPS_lower[i]=i3_f[i]/500.0\n        MIPS_upper[i]=i3_f[i]*500.0\n    elif np.isnan(i2_f[i])==False:\n        MIPS_lower[i]=i2_f[i]/500.0\n        MIPS_upper[i]=i2_f[i]*500.0\n    elif np.isnan(i1_f[i])==False:\n        MIPS_lower[i]=i1_f[i]/500.0\n        MIPS_upper[i]=i1_f[i]*500.0\n    \nmips_lower_col = Column(MIPS_lower,name='MIPS_lower')\nmips_upper_col = Column(MIPS_upper,name='MIPS_upper')\nprior_new = Table()\nprior_new.add_columns([cra,cdec,cids,cdet,ci1,ci2,ci3,ci4,mips_lower_col,mips_upper_col])\n\n\nprior_cat = astropy.table.vstack([prior_help,prior_new])\nlen(prior_cat)", "Now that we have created the prior we can run XID+\nLoad in the FIR maps\nhere we load in the SPIRE maps but you can substitute this with PACS and MIPS yourself", "#Read in the herschel images\nimfolder='../../../../../HELP/dmu_products/dmu19/dmu19_HELP-SPIRE-maps/data/'\n\npswfits=imfolder+'ELAIS-N1_SPIRE250_v1.0.fits'#SPIRE 250 map\npmwfits=imfolder+'ELAIS-N1_SPIRE350_v1.0.fits'#SPIRE 350 map\nplwfits=imfolder+'ELAIS-N1_SPIRE500_v1.0.fits'#SPIRE 500 map\n\n#-----250-------------\nhdulist = fits.open(pswfits)\nim250phdu=hdulist[0].header\nim250hdu=hdulist['image'].header\n\nim250=hdulist['image'].data*1.0E3 #convert to mJy\nnim250=hdulist['error'].data*1.0E3 #convert to mJy\nw_250 = wcs.WCS(hdulist['image'].header)\npixsize250=3600.0*w_250.wcs.cd[1,1] #pixel size (in arcseconds)\nhdulist.close()\n#-----350-------------\nhdulist = fits.open(pmwfits)\nim350phdu=hdulist[0].header\nim350hdu=hdulist['image'].header\n\nim350=hdulist['image'].data*1.0E3 #convert to mJy\nnim350=hdulist['error'].data*1.0E3 #convert to mJy\nw_350 = wcs.WCS(hdulist['image'].header)\npixsize350=3600.0*w_350.wcs.cd[1,1] #pixel size (in arcseconds)\nhdulist.close()\n#-----500-------------\nhdulist = fits.open(plwfits)\nim500phdu=hdulist[0].header\nim500hdu=hdulist['image'].header \nim500=hdulist['image'].data*1.0E3 #convert to mJy\nnim500=hdulist['error'].data*1.0E3 #convert to mJy\nw_500 = 
wcs.WCS(hdulist['image'].header)\npixsize500=3600.0*w_500.wcs.cd[1,1] #pixel size (in arcseconds)\nhdulist.close()", "Create a moc around each of your objects that will be used to cut doen the SPIRE image", "moc=pymoc.util.catalog.catalog_to_moc(object_coords,search_radius,15)", "finish initalising the prior", "#---prior250--------\nprior250=xidplus.prior(im250,nim250,im250phdu,im250hdu, moc=moc)#Initialise with map, uncertianty map, wcs info and primary header\nprior250.prior_cat(prior_cat['ra'],prior_cat['dec'],'prior_cat',ID=prior_cat['help_id'])#Set input catalogue\nprior250.prior_bkg(-5.0,5)#Set prior on background (assumes Gaussian pdf with mu and sigma)\n#---prior350--------\nprior350=xidplus.prior(im350,nim350,im350phdu,im350hdu, moc=moc)\nprior350.prior_cat(prior_cat['ra'],prior_cat['dec'],'prior_cat',ID=prior_cat['help_id'])\nprior350.prior_bkg(-5.0,5)\n\n#---prior500--------\nprior500=xidplus.prior(im500,nim500,im500phdu,im500hdu, moc=moc)\nprior500.prior_cat(prior_cat['ra'],prior_cat['dec'],'prior_cat',ID=prior_cat['help_id'])\nprior500.prior_bkg(-5.0,5)\n\n#pixsize array (size of pixels in arcseconds)\npixsize=np.array([pixsize250,pixsize350,pixsize500])\n#point response function for the three bands\nprfsize=np.array([18.15,25.15,36.3])\n#use Gaussian2DKernel to create prf (requires stddev rather than fwhm hence pfwhm/2.355)\nfrom astropy.convolution import Gaussian2DKernel\n\n##---------fit using Gaussian beam-----------------------\nprf250=Gaussian2DKernel(prfsize[0]/2.355,x_size=101,y_size=101)\nprf250.normalize(mode='peak')\nprf350=Gaussian2DKernel(prfsize[1]/2.355,x_size=101,y_size=101)\nprf350.normalize(mode='peak')\nprf500=Gaussian2DKernel(prfsize[2]/2.355,x_size=101,y_size=101)\nprf500.normalize(mode='peak')\n\npind250=np.arange(0,101,1)*1.0/pixsize[0] #get 250 scale in terms of pixel scale of map\npind350=np.arange(0,101,1)*1.0/pixsize[1] #get 350 scale in terms of pixel scale of map\npind500=np.arange(0,101,1)*1.0/pixsize[2] #get 500 scale in 
terms of pixel scale of map\n\nprior250.set_prf(prf250.array,pind250,pind250)#requires PRF as 2d grid, and x and y bins for grid (in pixel scale)\nprior350.set_prf(prf350.array,pind350,pind350)\nprior500.set_prf(prf500.array,pind500,pind500)\n\nprint('fitting '+ str(prior250.nsrc)+' sources \\n')\nprint('using ' + str(prior250.snpix)+', '+ str(prior350.snpix)+' and '+ str(prior500.snpix)+' pixels')\n\nprior250.get_pointing_matrix()\nprior350.get_pointing_matrix()\nprior500.get_pointing_matrix()\n\nprior250.upper_lim_map()\nprior350.upper_lim_map()\nprior500.upper_lim_map()", "run XID+ and save the output", "from xidplus.stan_fit import SPIRE\nfit=SPIRE.all_bands(prior250,prior350,prior500,iter=1000)\n\nposterior=xidplus.posterior_stan(fit,[prior250,prior350,prior500])\nxidplus.save([prior250,prior350,prior500],posterior,'YOUR_FILE_NAME_HERE')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
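The PRF set-up above divides each beam FWHM by 2.355 before handing it to `Gaussian2DKernel`, because the kernel wants a standard deviation rather than a FWHM. 2.355 is a rounded form of 2·√(2·ln 2); a quick numeric check of that factor (the FWHM values repeat the SPIRE beam sizes used above):

```python
import numpy as np

prfsize = np.array([18.15, 25.15, 36.3])   # SPIRE 250/350/500 beam FWHM in arcsec
factor = 2.0 * np.sqrt(2.0 * np.log(2.0))  # exact FWHM -> sigma conversion for a Gaussian
sigma = prfsize / factor

print(round(factor, 4))  # 2.3548, the constant approximated by 2.355 in the notebook
print(sigma)             # standard deviations to feed to the kernel
```

Using the exact factor rather than 2.355 changes the kernel widths by well under 0.1%, so the rounded constant in the notebook is harmless.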
ioos/notebooks_demos
notebooks/2020-12-08-DataToDwC.ipynb
mit
[ "Aligning Data to Darwin Core\nCreating event core with an occurrence and extended measurement or fact extension using Python\nCaution: This notebook was created for the IOOS DMAC Code Sprint Biological Data Session.\nThe data in this notebook were created specifically as an example and meant solely to be\nillustrative of the process for aligning data to the biological data standard - Darwin Core.\nThese data should not be considered actual occurrences of species and any measurements\nare also contrived. This notebook is meant to provide a step by step process for taking\noriginal data and aligning it to Darwin Core. It has been adapted from the R markdown notebook created by Abby Benson IOOS_DMAC_DataToDWC_Notebook_event.md.\nFirst let's bring in the appropriate libraries to work with the tabular data files and generate the appropriate content for the Darwin Core requirements.", "import csv\nimport numpy as np\nimport pandas as pd\nimport pprint\nimport pyworms\nimport uuid", "Now we need to read in the raw data file using pandas.read_csv(). Here we display the first ten rows of data to give the user an idea of what observations are contained in the raw file.", "file = 'data/dwc/raw/MadeUpDataForBiologicalDataTraining.csv'\ndf = pd.read_csv(file, header=[0])\ndf.head()", "First we need to to decide if we will build an occurrence only version of the data or an event core with an occurrence and extended measurement or facts extension (eMoF) version of the data. \n\nOccurrence only: \nEasier to create. \nIt's only one file to produce. \n\nHowever, several pieces of information will be left out if we choose that option. 
\n\n\nsampling event with occurrence and extended measurement or fact (eMoF): \n\nMore difficult to create.\nComposed of several files.\nCan capture all of the data in the file creating a lossless version.\n\nHere we decide to use the second option, extended measurement or fact (eMoF), to include as much information as we can.\nFirst let's create the eventID and occurrenceID in the original file so that information can be reused for all necessary files down the line.", "df['eventID'] = df[['region', 'station', 'transect']].apply(lambda x: '_'.join(x.astype(str)), axis=1)\n# generate one unique identifier per row; a single uuid.uuid4() call would give every occurrence the same ID\ndf['occurrenceID'] = [uuid.uuid4() for _ in range(len(df))]", "We will need to create three separate files to comply with the sampling event format.\nWe'll start with the event file but we only need to include the columns that are relevant\nto the event file.\nEvent file\nMore information on the event category in Darwin Core can be found at https://dwc.tdwg.org/terms/#event.\nLet's first make a copy of the DataFrame we pulled in, using only the data fields of interest for the event file.", "event = df[['date', 'lat', 'lon', 'region', 'station', 'transect', 'depth', 'bottom type', 'eventID']].copy()", "Next we need to rename any columns of data to match directly to Darwin Core.", "event['decimalLatitude'] = event['lat']\nevent['decimalLongitude'] = event['lon']\nevent['minimumDepthInMeters'] = event['depth']\nevent['maximumDepthInMeters'] = event['depth']\nevent['habitat'] = event['bottom type']\nevent['island'] = event['region']", "We need to appropriately read in the date field, so we can export it to ISO format. 
Also add any missing, required, fields.", "event['eventDate'] = pd.to_datetime(event['date'],format='%m/%d/%Y')\nevent['basisOfRecord'] = 'HumanObservation'\nevent['geodeticDatum'] = 'EPSG:4326 WGS84'", "Then we'll remove any fields that we no longer need to clean things up a bit.", "event.drop(\n columns=['date', 'lat', 'lon', 'region', 'station', 'transect', 'depth', 'bottom type'],\n inplace=True)", "We have too many repeating rows of information. We can pare this down using eventID which\nis a unique identifier for each sampling event in the data.", "event.drop_duplicates(subset='eventID',inplace=True)", "Finally, we write out the event file, specifying the ISO date format. We've printed ten random rows of the DataFrame to give an example of what the resultant file will look like.", "event.to_csv(\n 'data/dwc/processed/MadeUpData_event.csv',\n header=True,\n index=False,\n date_format='%Y-%m-%d')\n\nevent.sample(n=5).sort_index()", "Occurrence file\nMore information on the occurrence category in Darwin Core can be found at https://dwc.tdwg.org/terms/#occurrence.\nFor creating the occurrence file, we start by creating the DataFrame and renaming the fields that align directly with Darwin Core. Then, we'll add the required information that is missing.", "occurrence = df[['scientific name', 'eventID', 'occurrenceID', 'percent cover']].copy()\noccurrence['scientificName'] = occurrence['scientific name']\noccurrence['occurrenceStatus'] = np.where(occurrence['percent cover'] == 0, 'absent', 'present')", "Taxonomic Name Matching\nA requirement for OBIS is that all scientific names match to the World Register of\nMarine Species (WoRMS) and a scientificNameID is included. A scientificNameID looks\nlike this urn:lsid:marinespecies.org:taxname:275730 with the last digits after\nthe colon being the WoRMS aphia ID. We'll need to go out to WoRMS to grab this\ninformation. 
So, we create a lookup table of the unique scientific names found in the occurrence data we created above.", "lut_worms = pd.DataFrame(\n    columns=['scientificName'],\n    data=occurrence['scientificName'].unique())", "Next, we add the known columns for which we can grab information from WoRMS, including the required scientificNameID, and populate the lookup table with empty values for those fields (to initialize the DataFrame for population later).", "headers = ['acceptedname', 'acceptedID', 'scientificNameID', 'kingdom', 'phylum',\n           'class', 'order', 'family', 'genus', 'scientificNameAuthorship', 'taxonRank']\n\nfor head in headers:\n    lut_worms[head] = ''", "Next, we perform a taxonomic lookup using the library pyworms, using the function pyworms.aphiaRecordsByMatchNames() to collect the information and populate the lookup table.\nHere we print the scientific name of the species we are looking up and the matching response from WoRMS with the detailed species information.", "for index, row in lut_worms.iterrows():\n    print('\\n**Searching for scientific name = %s**' % row['scientificName'])\n    resp = pyworms.aphiaRecordsByMatchNames(row['scientificName'])[0][0]\n    pprint.pprint(resp)\n    lut_worms.loc[index, 'acceptedname'] = resp['valid_name']\n    lut_worms.loc[index, 'acceptedID'] = resp['valid_AphiaID']\n    lut_worms.loc[index, 'scientificNameID'] = resp['lsid']\n    lut_worms.loc[index, 'kingdom'] = resp['kingdom']\n    lut_worms.loc[index, 'phylum'] = resp['phylum']\n    lut_worms.loc[index, 'class'] = resp['class']\n    lut_worms.loc[index, 'order'] = resp['order']\n    lut_worms.loc[index, 'family'] = resp['family']\n    lut_worms.loc[index, 'genus'] = resp['genus']\n    lut_worms.loc[index, 'scientificNameAuthorship'] = resp['authority']\n    lut_worms.loc[index, 'taxonRank'] = resp['rank']", "We then merge the lookup table of unique scientific names back into the occurrence data, matching on the field scientificName. 
Then, we remove any unnecessary columns to clean up the DataFrame for writing.", "occurrence = pd.merge(occurrence, lut_worms, how='left', on='scientificName')\n\noccurrence.drop(\n    columns=['scientific name', 'percent cover'],\n    inplace=True)", "Finally, we write out the occurrence file. We've printed ten random rows of the DataFrame to give an example of what the resultant file will look like.", "# sort the rows by scientificName\noccurrence.sort_values('scientificName', inplace=True)\n\n# reorganize column order to be consistent with R example:\ncolumns = [\"scientificName\",\"eventID\",\"occurrenceID\",\"occurrenceStatus\",\"acceptedname\",\"acceptedID\",\n           \"scientificNameID\",\"kingdom\",\"phylum\",\"class\",\"order\",\"family\",\"genus\",\"scientificNameAuthorship\",\n           \"taxonRank\"]\n\noccurrence.to_csv(\n    \"data/dwc/processed/MadeUpData_Occurrence.csv\",\n    header=True,\n    index=False,\n    quoting=csv.QUOTE_ALL,\n    columns=columns)\n\noccurrence.sample(n=10).sort_index()", "Extended Measurement Or Fact (eMoF)\nThe last file we need to create is the extended measurement or fact (eMoF) file. The measurement or fact includes measurements/facts about the event (temp, salinity, etc) as well as about the occurrence (percent cover, abundance, weight, length, etc). They are linked to the events using eventID and to the occurrences using occurrenceID. Extended Measurements Or Facts are any other generic observations that are associated with resources that are described using Darwin Core (e.g. water temperature observations). See the DwC implementation guide for more information.\nFor the various TypeID fields (e.g. measurementTypeID) include URIs from the BODC NERC vocabulary or other nearly permanent source, where possible. For example, for water temperature in the BODC NERC vocabulary the URI is http://vocab.nerc.ac.uk/collection/P25/current/WTEMP/.\nWe then populate the appropriate fields with the information we have available. 
The measurementValue field is populated with the observed values of the measurement described in the measurementType and measurementUnit field. \nFor measurement or facts of the occurrence (eg. percent cover, length, density, biomass, etc), we want to be sure to include the occurrenceID from the occurrence record as those observations are measurements of/from the organism. Other observations are tied to the event via the eventID (eg. water temperature, rugosity, etc).\nBelow we walk through creating three independent DataFrames for temperature, rugosity, and percent cover. Populating each DataFrame with all of the information we have available and removing duplicative fields. We finally concatenate all the extended measurements or facts together into one DataFrame.", "temperature = df[['eventID', 'temperature', 'date']].copy()\ntemperature['occurrenceID'] = ''\ntemperature['measurementType'] = 'temperature'\ntemperature['measurementTypeID'] = 'http://vocab.nerc.ac.uk/collection/P25/current/WTEMP/'\ntemperature['measurementValue'] = temperature['temperature']\ntemperature['measurementUnit'] = 'Celsius'\ntemperature['measurementUnitID'] = 'http://vocab.nerc.ac.uk/collection/P06/current/UPAA/'\ntemperature['measurementAccuracy'] = 3\ntemperature['measurementDeterminedDate'] = pd.to_datetime(temperature['date'],format='%m/%d/%Y')\ntemperature['measurementMethod'] = ''\ntemperature.drop(columns=['temperature', 'date'],inplace=True)\n\nrugosity = df[['eventID', 'rugosity', 'date']].copy()\nrugosity['occurrenceID'] = ''\nrugosity['measurementType'] = 'rugosity'\nrugosity['measurementTypeID'] = ''\nrugosity['measurementValue'] = rugosity['rugosity'].map('{:,.6f}'.format)\nrugosity['measurementUnit'] = ''\nrugosity['measurementUnitID'] = ''\nrugosity['measurementAccuracy'] = ''\nrugosity['measurementDeterminedDate'] = pd.to_datetime(rugosity['date'],format='%m/%d/%Y')\nrugosity['measurementMethod'] = ''\nrugosity.drop(columns=['rugosity', 
'date'],inplace=True)\n\npercent_cover = df[['eventID', 'occurrenceID', 'percent cover', 'date']].copy()\npercent_cover['measurementType'] = 'Percent Cover'\npercent_cover['measurementTypeID'] = 'http://vocab.nerc.ac.uk/collection/P01/current/SDBIOL10/'\npercent_cover['measurementValue'] = percent_cover['percent cover']\npercent_cover['measurementUnit'] = 'Percent/100m^2'\npercent_cover['measurementUnitID'] = ''\npercent_cover['measurementAccuracy'] = 5\npercent_cover['measurementDeterminedDate'] = pd.to_datetime(percent_cover['date'],format='%m/%d/%Y')\npercent_cover['measurementMethod'] = ''\npercent_cover.drop(columns=['percent cover', 'date'],inplace=True)\n\nmeasurementorfact = pd.concat([temperature, rugosity, percent_cover])", "Finally, we write the measurement or fact file, again specifying the ISO date format. We've printed ten random rows of the DataFrame to give an example of what the resultant file will look like.", "measurementorfact.to_csv('data/dwc/processed/MadeUpData_mof.csv',\n index=False,\n header=True,\n date_format='%Y-%m-%d')\nmeasurementorfact.sample(n=10)", "Author: Mathew Biddle" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
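A quick aside on the eventID/occurrenceID cell in the notebook above: pandas broadcasts a bare scalar assignment, so `uuid.uuid4()` assigned directly to a column would leave every row with the same identifier, while occurrence identifiers must be unique per record. A minimal, self-contained sketch with a hypothetical three-row survey table (the column names mirror the notebook, but the data are made up):

```python
import uuid

import pandas as pd

# Hypothetical miniature of the survey DataFrame used in the notebook.
df = pd.DataFrame({
    "region": ["North", "North", "South"],
    "station": [1, 1, 2],
    "transect": ["A", "B", "A"],
})

# eventID: join the identifying columns, as in the notebook.
df["eventID"] = df[["region", "station", "transect"]].apply(
    lambda row: "_".join(row.astype(str)), axis=1)

# occurrenceID: one fresh UUID *per row* -- a plain uuid.uuid4() would be
# broadcast, giving every occurrence an identical identifier.
df["occurrenceID"] = [str(uuid.uuid4()) for _ in range(len(df))]

print(df[["eventID", "occurrenceID"]])
```

Because identifiers must be unique per record, `df['occurrenceID'].nunique()` should equal `len(df)`.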
splicemachine/splice-community-sample-code
twimlcon-workshop-materials/3 - Model Training.ipynb
apache-2.0
[ "<img src=\"Images/Splice_logo.jpeg\" width=\"250\" height=\"200\" align=\"left\" >\nTrain machine learning models using the Feature Store\n\nHow do you find features values at the correct point in time?\n\n\nFeatures are updated at different times\n\n\nHow would you join across asynchronous timestamps?\n\n\n<img src=\"Images/point_in_time_problem.png\" width=\"900\" align=\"left\" >\nThis can be done without a Feature Store once or twice, but for 5 or 50 models?\n\nEasily build point in time consistent training sets with our Feature Store\n<img src=\"Images/training_set.png\" width=\"1000\" align=\"left\" >\nStructure of Feature Set Tables\n<img src=\"Images/FS_tables.png\" width=\"800\" height=\"400\" align=\"left\" >\n\nFeature Store for Model Training", "#Begin spark session \nfrom pyspark.sql import SparkSession\nspark = SparkSession.builder.getOrCreate()\n\n#Create pysplice context. Allows you to create a Spark dataframe using our Native Spark DataSource \nfrom splicemachine.spark import PySpliceContext\nsplice = PySpliceContext(spark)\n\n#Initialize our Feature Store API\nfrom splicemachine.features import FeatureStore\nfrom splicemachine.features.constants import FeatureType\nfs = FeatureStore(splice)\n\n#Initialize MLFlow\nfrom splicemachine.mlflow_support import *\nmlflow.register_feature_store(fs)\nmlflow.register_splice_context(splice)", "Write any SQL to get your label. 
The label doesn't have to be a part of the Feature Store", "%%sql\nSELECT ltv.CUSTOMERID, \n ((w.WEEK_END_DATE - ltv.CUSTOMER_START_DATE)/ 7) CUSTOMERWEEK,\n CAST(w.WEEK_END_DATE as TIMESTAMP) CUSTOMER_TS, \n ltv.CUSTOMER_LIFETIME_VALUE as CUSTOMER_LTV\nFROM retail_rfm.weeks w --splice-properties useSpark=True\nINNER JOIN \n twimlcon_fs.customer_lifetime ltv \n ON w.WEEK_END_DATE >= ltv.CUSTOMER_START_DATE AND w.WEEK_END_DATE <= ltv.CUSTOMER_START_DATE + 28 --only first 4 weeks\nORDER BY 1,2\n\n{limit 8}\n;", "Create a Training View\nBy specifying the join key and timestamp, you can automatically get all of the relevant features you need", "sql = \"\"\"\nSELECT ltv.CUSTOMERID, \n ((w.WEEK_END_DATE - ltv.CUSTOMER_START_DATE)/ 7) CUSTOMERWEEK,\n CAST(w.WEEK_END_DATE as TIMESTAMP) CUSTOMER_TS, \n ltv.CUSTOMER_LIFETIME_VALUE as CUSTOMER_LTV\nFROM retail_rfm.weeks w --splice-properties useSpark=True\nINNER JOIN \n twimlcon_fs.customer_lifetime ltv \n ON w.WEEK_END_DATE > ltv.CUSTOMER_START_DATE AND w.WEEK_END_DATE <= ltv.CUSTOMER_START_DATE + 28 --only first 4 weeks\n\"\"\"\n\npks = ['CUSTOMERID','CUSTOMERWEEK'] # Each unique training row is identified by the customer and their week of spending activity\njoin_keys = ['CUSTOMERID'] # This is the primary key of the Feature Sets that we want to join to\n\nfs.create_training_view(\n 'twimlcon_customer_lifetime_value',\n sql=sql, \n primary_keys=pks, \n join_keys=join_keys,\n ts_col = 'CUSTOMER_TS', # How we join each unique row with our eventual Features\n label_col='CUSTOMER_LTV', # The thing we want to predict\n desc = 'The current (as of queried) lifetime value of each customer per week of being a customer'\n)", "Easily extract all features\nEvery time this code is re-run, you have access to the most up-to-date features", "#Spark Dataframe\nall_features = fs.get_training_set_from_view('twimlcon_customer_lifetime_value')\nall_features.limit(8).toPandas()\n\n#SQL used to generate the Dataframe\nsql = 
fs.get_training_set_from_view('twimlcon_customer_lifetime_value',return_sql=True)\nprint(sql)", "Automatic Feature Selection\nAs simple as using the get_training_view function", "import re\n\n# get training set as a SQL statement\nfeats = fs.get_training_view_features('twimlcon_customer_lifetime_value')\n# Grab only up to 4 weeks of RFM values\ndesired_features = ['CUSTOMER_LIFETIME_DAYS'] + [f.name for f in feats if re.search('_[0-4]W',f.name)]\n\n\n\nall_features = fs.get_training_set_from_view('twimlcon_customer_lifetime_value', features = desired_features).dropna() \n\n\ntop_features, feature_importances = fs.run_feature_elimination(\n all_features,\n features=desired_features,\n label = 'CUSTOMER_LTV',\n n = 10,\n verbose=2,\n step=30,\n model_type='regression',\n log_mlflow=True,\n mlflow_run_name='Feature_Elimination_LTV',\n return_importances=True\n)\n\nmodel_training_df = fs.get_training_set_from_view('twimlcon_customer_lifetime_value', features = top_features).dropna() ", "Train a Machine Learning Model\nSplice Machine's model training is built around an integrated and enhanced version of MLFlow", "from splicemachine.notebook import get_mlflow_ui\nget_mlflow_ui()\n\n###############\n# SparkML Model\n###############\nfrom pyspark.ml.regression import LinearRegression, RandomForestRegressor\nfrom pyspark.ml.feature import VectorAssembler,StandardScaler\nfrom pyspark.ml import Pipeline\nfrom pyspark.ml.evaluation import RegressionEvaluator\n\n\nmlflow.set_experiment('Predict Lifetime Value from Initial Customer Activity')\nrun_tags={'project': 'TWIMLcon Demo',\n 'team': 'INSERT YOUR NAME HERE'\n }\n\nfeatures_list = [f.name for f in top_features]\nfeatures_str = ','.join(features_list) \n\nva = VectorAssembler(inputCols=features_list, outputCol='features_raw')\nscaler = StandardScaler(inputCol=\"features_raw\", outputCol=\"features\")\n\n\nwith mlflow.start_run(run_name = f\"Regression LTV\", tags = run_tags):\n\n\n lr = LinearRegression(featuresCol = 
'features', labelCol = 'CUSTOMER_LTV', maxIter=10, regParam=0.3, elasticNetParam=0.8)\n #lr = RandomForestRegressor(featuresCol = 'features', labelCol = 'CUSTOMER_LTV')\n \n pipeline = Pipeline( stages=[va, scaler, lr])\n\n # log everything\n mlflow.log_feature_transformations(pipeline)\n mlflow.log_pipeline_stages(pipeline)\n\n #train\n train,test = model_training_df.randomSplit([0.80,0.20])\n model = pipeline.fit(train)\n predictions = model.transform(test)\n\n lr_model = model.stages[-1]\n print(\"Coefficients: \" + str(lr_model.coefficients))\n print(\"Intercept: \" + str(lr_model.intercept))\n \n # log metric\n pred_evaluator = RegressionEvaluator(predictionCol=\"prediction\", labelCol=\"CUSTOMER_LTV\",metricName=\"r2\")\n r2 = pred_evaluator.evaluate(predictions)\n print(\"R Squared (R2) on test data = %g\" % r2)\n mlflow.log_metric('r2',r2)\n\n mlflow.log_model(model)\n run_id = mlflow.current_run_id()\n\nfrom splicemachine.notebook import get_mlflow_ui\nget_mlflow_ui()", "Store most important features for use in the next jupyter notebook", "%store features_list\n%store features_str\n\nspark.stop()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
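The point-in-time problem this notebook opens with — features updated asynchronously, joined to labels as of each label's timestamp — can be sketched outside the Splice Machine Feature Store with `pandas.merge_asof`, which picks, per label row, the most recent feature row at or before that timestamp. This is an illustrative sketch only, with invented table contents; it is not the Feature Store's implementation:

```python
import pandas as pd

# Hypothetical label rows: one customer-week each, with an observation timestamp.
labels = pd.DataFrame({
    "customerid": [1, 1, 2],
    "ts": pd.to_datetime(["2021-01-08", "2021-01-15", "2021-01-08"]),
    "ltv": [10.0, 25.0, 5.0],
})

# Hypothetical feature rows, refreshed on their own schedule.
features = pd.DataFrame({
    "customerid": [1, 1, 2],
    "ts": pd.to_datetime(["2021-01-05", "2021-01-12", "2021-01-01"]),
    "spend_4w": [100.0, 180.0, 40.0],
})

# For each label row, take the latest feature value at or before the label
# timestamp (direction="backward"), per customer -- point-in-time consistency.
training_set = pd.merge_asof(
    labels.sort_values("ts"),
    features.sort_values("ts"),
    on="ts", by="customerid", direction="backward")

print(training_set)
```

Here the customer-1 label at 2021-01-15 picks up the 2021-01-12 feature value, never a value recorded after the label timestamp — the guarantee a training view provides declaratively.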
santipuch590/deeplearning-tf
tensorflow_2_tutorials/text/text_generation_with_an_rnn.ipynb
mit
[ "Text generation with an RNN Tutorial in TensorFlow 2.0\nThis tutorial demonstrates how to generate text using a character-based RNN. We will work with a dataset of Shakespeare's writing from Andrej Karpathy's The Unreasonable Effectiveness of Recurrent Neural Networks. Given a sequence of characters from this data (\"Shakespear\"), train a model to predict the next character in the sequence (\"e\"). Longer sequences of text can be generated by calling the model repeatedly.\nSetup", "import os\nimport pathlib\nfrom pprint import pprint\nimport shutil\nimport time\n\nimport tensorflow as tf\nimport numpy as np\n\ncwd = pathlib.Path(\".\").resolve()\nprint(cwd)", "Download and read the data", "# Download the Shakespeare dataset\n\npath_to_file = pathlib.Path(tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt'))\nshakespeare_dataset_path = cwd / path_to_file.name\nshutil.copy(str(path_to_file), str(shakespeare_dataset_path))\nassert shakespeare_dataset_path.is_file()\nprint(shakespeare_dataset_path)\n\nwith open(shakespeare_dataset_path, \"r\") as fd:\n    text = fd.read()\nprint(f\"Length of text: {len(text)}\")\n\n# Take a look at the first 500 characters in text\nprint(text[:500])\n\n# Get the unique set of characters in the file\nvocabulary = sorted(set(text))\nprint(f\"Vocabulary size: {len(vocabulary)}\")\nprint()\nprint(vocabulary)", "Process the text\nVectorize the text\nBefore training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters.", "char2idx = {c: i for i, c in enumerate(vocabulary)}\nidx2char = np.array(vocabulary)\n\npprint(char2idx)\n\ntext_as_int = np.array([char2idx[c] for c in text])\nprint(f\"'{text[:13]}' mapped to {text_as_int[:13]}\")", "The prediction task\nGiven a character, or a sequence of characters, what is the most probable next character? 
This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.\nSince RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character?\nCreate training examples and targets\nNext divide the text into example sequences. Each input sequence will contain seq_length characters from the text.\nFor each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.\nSo break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is \"Hello\". The input sequence would be \"Hell\", and the target sequence \"ello\".\nTo do this first use the tf.data.Dataset.from_tensor_slices function to convert the text vector into a stream of character indices.", "seq_length = 100\n\nexamples_per_epoch = len(text) // (seq_length + 1)\nprint(f\"Examples per epoch: {examples_per_epoch}\")\n\n# Create training examples / targets\nchar_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)\n\nprint(char_dataset)\n\n# Let's inspect how this dataset is formed\nfor i in char_dataset.take(5):\n print(idx2char[i.numpy()])\n\n# Let's create sequences of the desired length by means of batching\nsequences = char_dataset.batch(seq_length + 1, drop_remainder=True)\n\nfor item in sequences.take(5):\n print(''.join(idx2char[item.numpy()]))", "Now for each sequence we will duplicate and shift it to form the input and target text by using the map method to apply a simple function to each batch:", "def split_input_target(chunk):\n input_text = chunk[:-1]\n target_text = chunk[1:]\n return input_text, target_text\n\ndataset = sequences.map(split_input_target)\n\nprint(dataset)\n\nfor input_seq, target_seq in dataset.take(2):\n print(f\"Input sequence: 
{repr(''.join(idx2char[input_seq.numpy()]))}\")\n    print(f\"Target sequence: {repr(''.join(idx2char[target_seq.numpy()]))}\")\n\nfor i, (input_idx, target_idx) in enumerate(zip(input_seq[:5], target_seq[:5])):\n    print(f\"Step {i:4d}\")\n    print(f\"    input: {input_idx} ({repr(idx2char[input_idx])})\")\n    print(f\"    expected output: {target_idx} ({repr(idx2char[target_idx])})\")", "Create training batches\nWe used tf.data to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.", "BATCH_SIZE = 64\nBUFFER_SIZE = 10000\n\ndataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)\nprint(dataset)", "Build the model\nUse tf.keras.Sequential to define the model. For this simple example three layers are used to define our model:\n* tf.keras.layers.Embedding: The input layer. A trainable lookup table that will map the numbers of each character to a vector with embedding_dim dimensions;\n* tf.keras.layers.GRU: A type of RNN with size units=rnn_units (You can also use an LSTM layer here.)\n* tf.keras.layers.Dense: The output layer, with vocab_size outputs.", "vocab_size = len(vocabulary)\nembedding_dim = 256\nrnn_units = 1024\n\ndef build_model(vocab_size, embedding_dim, rnn_units, batch_size):\n    return tf.keras.Sequential([\n        tf.keras.layers.Embedding(vocab_size, embedding_dim, batch_input_shape=[batch_size, None]),\n        tf.keras.layers.GRU(rnn_units, return_sequences=True, stateful=True, recurrent_initializer=\"glorot_uniform\"),\n        tf.keras.layers.Dense(vocab_size),\n    ])\n\nmodel = build_model(vocab_size, embedding_dim, rnn_units, batch_size=BATCH_SIZE)\nmodel.summary()", "For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:\n\nTry the model\nNow run the model to see that it behaves as expected.\nFirst check the shape of the 
output:", "for input_example_batch, target_example_batch in dataset.take(1):\n example_batch_predictions = model(input_example_batch)\n print(example_batch_predictions.shape, \"# (batch_size, sequence_length, vocab_size)\")", "To get actual predictions from the model we need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.\nNote: it is important to sample from this distribution as taking the argmax of the distribution can easily get the model stuck in a loop", "sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)\nsampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()", "This gives us, at each timestep, a prediction of the next character index:", "print(sampled_indices)\n\nprint(\"Input: \\n\", repr(''.join(idx2char[input_example_batch[0]])))\nprint()\nprint(\"Next char predictions\\n\", repr(''.join(idx2char[sampled_indices])))", "Train the model\nAt this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character.\nOptimizer and loss function", "def loss(labels, logits):\n return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)\n\nexample_batch_loss = loss(target_example_batch, example_batch_predictions)\n\nprint(\"Prediction shape: \", example_batch_predictions.shape)\nprint(\"Scalar loss: \", example_batch_loss.numpy().mean())", "Configure the training procedure using the tf.keras.Model.compile method. 
We'll use tf.keras.optimizers.Adam with default arguments and the loss function.", "model.compile(optimizer='adam', loss=loss)", "Configure checkpoints", "checkpoint_dir = pathlib.Path('./training_checkpoints')\ncheckpoint_prefix = checkpoint_dir / \"ckp_{epoch}\"\ncheckpoint_callback = tf.keras.callbacks.ModelCheckpoint(filepath=str(checkpoint_prefix), save_weights_only=True)", "Execute the training", "EPOCHS = 40\n\nhistory = model.fit(\n    dataset,\n    epochs=EPOCHS,\n    callbacks=[checkpoint_callback],\n)", "Generate text\nRestore the latest checkpoint\nTo keep this prediction step simple, use a batch size of 1. Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built. To run the model with a different batch size, we need to rebuild the model and restore the weights from the checkpoint.", "tf.train.latest_checkpoint(str(checkpoint_dir))\n\nmodel = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)\nmodel.load_weights(tf.train.latest_checkpoint(str(checkpoint_dir)))\nmodel.build(tf.TensorShape([1, None]))\n\nmodel.summary()", "Prediction loop\nThe following code block generates the text:\n\nIt starts by choosing a start string, initializing the RNN state and setting the number of characters to generate.\nGet the prediction distribution of the next character using the start string and the RNN state.\nThen, use a categorical distribution to calculate the index of the predicted character. Use this predicted character as our next input to the model.\nThe RNN state returned by the model is fed back into the model so that it now has more context, instead of only one character. 
After predicting the next character, the modified RNN states are again fed back into the model, which is how it learns as it gets more context from the previously predicted characters.\n\n\nLooking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.", "def generate_text(model, start_string):\n # Number of characters to generate\n num_generate = 1000\n\n # converting our start string to numbers (vectorizing)\n input_eval = [char2idx[s] for s in start_string]\n input_eval = tf.expand_dims(input_eval, 0)\n\n # Empty string to store our results\n text_generated = []\n\n # Low temperatures results in more predictable text. Higher temperatures results in more surprising text.\n # Experiment to find the best setting\n temperature = 1.0\n\n # Here batch_size == 1\n for i in range(num_generate):\n predictions = model(input_eval)\n\n # Remove the batch dimension\n predictions = tf.squeeze(predictions, 0)\n\n # Use a categorical distribution to predict the character returned by the model\n predictions = predictions / temperature\n predicted_id = tf.random.categorical(predictions, num_samples=1)[-1, 0].numpy()\n\n # We pass the predicted character as the next input to the model along with the previous hidden state\n input_eval = tf.expand_dims([predicted_id], 0)\n text_generated.append(idx2char[predicted_id])\n\n return (start_string + ''.join(text_generated))\n\nprint(generate_text(model, start_string=\"ROMEO: \"))", "Advanced: Customized Training\nThe above training procedure is simple, but does not give you much control.\nSo now that you've seen how to run the model manually let's unpack the training loop, and implement it ourselves. 
This gives a starting point if, for example, you want to implement curriculum learning to help stabilize the model's open-loop output.\nWe will use tf.GradientTape to track the gradients. You can learn more about this approach by reading the eager execution guide.\nThe procedure works as follows:\n\nFirst, initialize the RNN state. We do this by calling the tf.keras.Model.reset_states method.\nNext, iterate over the dataset (batch by batch) and calculate the predictions associated with each.\nOpen a tf.GradientTape, and calculate the predictions and loss in that context.\nCalculate the gradients of the loss with respect to the model variables using the tf.GradientTape.gradient method.\nFinally, take a step downwards by using the optimizer's tf.train.Optimizer.apply_gradients method.", "model = build_model(\n    vocab_size = vocab_size,\n    embedding_dim=embedding_dim,\n    rnn_units=rnn_units,\n    batch_size=BATCH_SIZE\n)\n\noptimizer = tf.keras.optimizers.Adam()\n\n@tf.function\ndef train_step(inp, target):\n    with tf.GradientTape() as tape:\n        predictions = model(inp)\n        loss = tf.reduce_mean(tf.keras.losses.sparse_categorical_crossentropy(target, predictions, from_logits=True))\n    grads = tape.gradient(loss, model.trainable_variables)\n    optimizer.apply_gradients(zip(grads, model.trainable_variables))\n\n    return loss\n\n# Training step\nEPOCHS = 20\n\nfor epoch in range(EPOCHS):\n    start = time.time()\n\n    # initializing the hidden state at the start of every epoch\n    # initially hidden is None\n    hidden = model.reset_states()\n\n    for (batch_n, (inp, target)) in enumerate(dataset):\n        loss = train_step(inp, target)\n\n        if batch_n % 100 == 0:\n            template = 'Epoch {} Batch {} Loss {}'\n            print(template.format(epoch+1, batch_n, loss))\n\n    # saving (checkpoint) the model every 5 epochs\n    if (epoch + 1) % 5 == 0:\n        model.save_weights(str(checkpoint_prefix).format(epoch=epoch))\n\n    print('Epoch {} Loss {:.4f}'.format(epoch+1, loss))\n    print('Time taken for 1 epoch {} sec\\n'.format(time.time() - 
start))\n\nmodel.save_weights(str(checkpoint_prefix).format(epoch=epoch))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
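The temperature division inside generate_text above is the knob that trades predictability for surprise. A NumPy-only sketch of that sampling step (illustrative: the notebook itself uses tf.random.categorical on TensorFlow logits, and the helper name here is ours, not the tutorial's):

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Sample one class index from temperature-scaled logits."""
    scaled = np.asarray(logits, dtype=float) / temperature
    # Softmax, with the max subtracted for numerical stability.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.1]

# At a very low temperature the distribution collapses onto the argmax,
# so repeated draws become effectively deterministic.
picks = [sample_with_temperature(logits, 0.05, rng) for _ in range(20)]
print(picks)
```

Raising the temperature flattens `probs` toward uniform, which is why high temperatures yield more surprising generated text.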
CUBoulder-ASTR2600/lectures
lecture_15_ndarraysII.ipynb
isc
[ "Some comments from homework 4\n\nDon't make lines too long; it's considered bad style, as horizontal\nscrolling is awkward. Most projects demand lines < 80 or, more rarely, < 100 chars.\nThis also helps the case when you want to compare two pieces of code next to\neach other.\nInclude a space between arguments for increased readability:\nGood: myfunc(a, b, c, d, e)\nBad: myfunc(a,b,c,d,e)\nOnly use elif if there's another differing case to check. Otherwise, just use else.\nHelp yourself by doing unit conversions before the actual equation.\nOtherwise, already awkward-looking equations become even harder to read.\nimports at top of module, not inside functions!\nMakes the reader immediately understand the dependencies of your code.\nThis paradigm is being softened for parallel processing, where it becomes easier to send a logically complete function (with imports at the beginning of the function) to the different processors.\n\nComments on HW 5\npython\nNarr = [N(i) for i in xArr] # list comprehension, **NOT** vectorized\nis not the same as\npython\nNarr = N(xArr) # optimal\nis not the same as\npython\nNarr = np.exp(xArr**2/[....]) # kinda cheating...\nAnd when the instructions say, call f() on each element of a vector, it means that.\nSo:\npython\nxList = [f(i) for i in vector]\nQ. 
Review: what is the rank of $A_{i,j,k,l}$?", "%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as pl", "little matplotlib config trick\nAs the colormap default is still the awful 'jet' colormap (creates artificial visual boundaries that don't exist in the data -> it fools you.), I want to switch the default to 'viridis'.\n(exercise to the reader: this also can be done in a config file that is being read everytime matplotlib is being loaded!)", "from matplotlib import rcParams", "Now, this config dictionary is huge:", "rcParams.keys()\n\n[key for key in rcParams.keys() if 'map' in key]\n\nrcParams['image.cmap']\n\nrcParams['image.cmap'] = 'viridis'\n\nrcParams['image.interpolation'] = 'none'", "Visualizing Multi-Dimensional Arrays", "x = np.array([[1,2,3], [4,5,6], [7,8,9], [10,11,12]])\nx", "Q. What is the rank of x?\nQ. What is the shape of x?", "x.shape\n\nx.ndim\n\nprint(x) # for reference\npl.imshow(x)\npl.colorbar();", "Notice that the first row of the array was plotted \nat the top of the image.\nThis may be counterintuitive if when you think of \nrow #0 you think of y=0, which in a normal x-y coordinate \nsystem is on the bottom.\nThis be changed using the \"origin\" keyword argument.\nThe reason for this is that this command was made for displaying \nCCD image data, and often the pixel (0,0) was considered to be the\none in the upper left.\nBut it also matches the standard print-out of arrays, so that's good as well.", "print(x) # for reference\npl.imshow(x, origin='lower')\npl.colorbar();\n\n# Interpolation (by default) makes an image look \n# smoother.\n\n# Instead:\npl.imshow(x, origin='lower', interpolation='bilinear')\npl.colorbar()", "To look up other interpolations, just use the help feature.\nAnd by the way, there shouldn't be any space after the question mark!", "pl.imshow?\n\nx # for reference\n\nprint(x)\nprint()\nprint(x.T)\n\nxT = x.T\n\npl.imshow(xT)\npl.colorbar()", "Q. 
And what should this yield?", "xT.shape", "Arrays can be indexed in one of two ways:", "xT # Reminder", "Q. What should this be?", "xT[2][1]\n\nxT[2,1]", "Can access x and y index information using numpy.indices:", "xT\n\nprint(np.indices(xT.shape))\n\nprint(\"-\" * 50)\n\nfor i in range(xT.shape[0]):\n for j in range(len(xT[0])):\n print(i, j)\n\ni, j = np.indices(xT.shape)\n\ni\n\nj", "Q. How to isolate the element in xT corresponding to i = 1 and j = 2?", "xT\n\nxT[1,2]\n\nprint(xT[np.logical_and(i == 1, j == 2)])\n\n# Q. How did this work?\nprint(np.logical_and(i == 1, j == 2))\n\ni == 1", "Q. How about the indices of all even elements in xT?", "xT # for reference\n\nnp.argwhere(xT % 2 == 0)", "Note you only need this if you want to use these indices somewhere else, e.g. in another array of same shape.\nBecause if you just wanted the values, you of course would do that:", "xT[xT % 2 == 0]", "How to find particular elements in a 2-D array?", "xT # for reference\n\nnp.argwhere(xT > 5)\n\nxT", "Array Computing", "xT\n\npl.imshow(xT)\npl.colorbar()\npl.clim(1, 12) # colorbar limits, \n # analogous to xlim, ylim\n\nprint(xT + 5)\npl.imshow(xT+5)\npl.colorbar()\n# pl.clim(1, 12) " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
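The indexing idioms in the numpy notebook above (`np.argwhere` for indices vs. boolean-mask indexing for values) can be mimicked in plain Python. This is only an illustrative sketch using list comprehensions on the same transposed 3x4 array `xT`; it is not part of the original notebook:

```python
# Plain-Python stand-ins for two numpy idioms from the notebook:
#   np.argwhere(xT % 2 == 0)  -> index pairs of the even elements
#   xT[xT % 2 == 0]           -> the even values themselves
xT = [[1, 4, 7, 10],
      [2, 5, 8, 11],
      [3, 6, 9, 12]]  # the transpose of x = [[1,2,3],[4,5,6],[7,8,9],[10,11,12]]

# row-major scan, like numpy's argwhere
even_idx = [(i, j) for i, row in enumerate(xT)
            for j, v in enumerate(row) if v % 2 == 0]

# row-major scan, like boolean-mask indexing
even_vals = [v for row in xT for v in row if v % 2 == 0]

print(even_idx)
print(even_vals)
```

Both scans visit the elements in row-major order, which is also the order numpy uses when it flattens a boolean-masked array.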
google/eng-edu
ml/cc/prework/hello_world.ipynb
apache-2.0
[ "Copyright 2017 Google LLC.", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Prework: Hello World\nLearning Objective: Run a TensorFlow program in the browser.\nThe code block below is a \"Hello World\" TensorFlow program.\nIt consists of initialization code (importing tensorflow module and enabling \"eager execution\", which will be covered in more detail in subsequent exercises) and printing a \"Hello, world!\" string constant.", "from __future__ import print_function\n\nimport tensorflow as tf\ntry:\n tf.contrib.eager.enable_eager_execution()\nexcept ValueError:\n pass # enable_eager_execution errors after its first call\n\ntensor = tf.constant('Hello, world!')\ntensor_value = tensor.numpy()\nprint(tensor_value)", "To Run This Program\n\n\nClick anywhere in the code block (for example, on the word import).\n\n\nClick the right-facing-triangle icon in the upper-left corner of the code block, or hit ⌘/Ctrl-Enter.\nThe program will take a few seconds to run. If all goes well, the program will write the phrase Hello, world! just below the code block\n\n\nThis entire program consists of a single code block. However, most exercises consist of multiple code blocks, in which case you should run the code blocks individually in sequence, from top to bottom. 
\nRunning the code blocks out of sequence typically causes errors.\nUseful Keyboard Shortcuts\n\n⌘/Ctrl+m then b: creates an empty code cell below the cell that's currently selected\n⌘/Ctrl+m then i: interrupts a running cell\n⌘/Ctrl+m then h: shows a list of all keyboard shortcuts\nFor documentation on any TensorFlow API method, place the cursor right after its opening parenthesis and hit Tab:" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
KaiSzuttor/espresso
doc/tutorials/12-constant_pH/12-constant_pH.ipynb
gpl-3.0
[ "Introduction\nThis tutorial introduces the basic features for simulating titratable systems via the constant pH method.\nThe constant pH method is one of the methods implemented for simulating systems with chemical reactions within the Reaction Ensemble module. It is a Monte Carlo method designed to model an acid-base ionization reaction at a given (fixed) value of solution pH.\nWe will consider a homogeneous aqueous solution of a titratable acidic species $\\mathrm{HA}$ that can dissociate in a reaction, that is characterized by the equilibrium constant $\\mathrm{p}K_A=-\\log_{10} K_A$\n$$\\mathrm{HA} \\Leftrightarrow \\mathrm{A}^- + \\mathrm{H}^+$$\nIf $N_0 = N_{\\mathrm{HA}} + N_{\\mathrm{A}^-}$ is the number of titratable groups in solution, then we define the degree of dissociation $\\alpha$ as:\n$$\\alpha = \\dfrac{N_{\\mathrm{A}^-}}{N_0}.$$\nThis is one of the key quantities that can be used to describe the acid-base equilibrium. Usually, the goal of the simulation is to predict the value of $\\alpha$ under given conditions in a complex system with interactions.\nThe Chemical Equilibrium and Reaction Constant\nThe equilibrium reaction constant describes the chemical equilibrium of a given reaction. The values of equilibrium constants for various reactions can be found in tables. For the acid-base ionization reaction, the equilibrium constant is conventionally called the acidity constant, and it is defined as\n\\begin{equation}\nK_A = \\frac{a_{\\mathrm{H}^+} a_{\\mathrm{A}^-} } {a_{\\mathrm{HA}}}\n\\end{equation}\nwhere $a_i$ is the activity of species $i$. 
It is related to the chemical potential $\\mu_i$ and to the concentration $c_i$\n\\begin{equation}\n\\mu_i = \\mu_i^\\mathrm{ref} + k_{\\mathrm{B}}T \\ln a_i\n\\,,\\qquad\na_i = \\frac{c_i \\gamma_i}{c^{\\ominus}}\\,,\n\\end{equation}\nwhere $\\gamma_i$ is the activity coefficient, and $c^{\\ominus}$ is the (arbitrary) reference concentration, often chosen to be the standard concentration, $c^{\\ominus} = 1\\,\\mathrm{mol/L}$, and $\\mu_i^\\mathrm{ref}$ is the reference chemical potential.\nNote that $K_A$ is a dimensionless quantity but its numerical value depends on the choice of $c^{\\ominus}$.\nFor an ideal system, $\\gamma_i=1$ by definition, whereas for an interacting system $\\gamma_i$ is a non-trivial function of the interactions. For an ideal system we can rewrite $K_A$ in terms of equilibrium concentrations\n\\begin{equation}\nK_A \\overset{\\mathrm{ideal}}{=} \\frac{c_{\\mathrm{H}^+} c_{\\mathrm{A}^-} } {c_{\\mathrm{HA}} c^{\\ominus}}\n\\end{equation}\nThe ionization degree can also be expressed via the ratio of concentrations:\n\\begin{equation}\n\\alpha \n= \\frac{N_{\\mathrm{A}^-}}{N_0} \n= \\frac{N_{\\mathrm{A}^-}}{N_{\\mathrm{HA}} + N_{\\mathrm{A}^-}}\n= \\frac{c_{\\mathrm{A}^-}}{c_{\\mathrm{HA}}+c_{\\mathrm{A}^-}}\n= \\frac{c_{\\mathrm{A}^-}}{c_{\\mathrm{A}}}.\n\\end{equation}\nwhere $c_{\\mathrm{A}}=c_{\\mathrm{HA}}+c_{\\mathrm{A}^-}$ is the total concentration of titratable acid groups irrespective of their ionization state.\nThen, we can characterize the acid-base ionization equilibrium using the ionization degree and pH, defined as\n\\begin{equation}\n\\mathrm{pH} = -\\log_{10} a_{\\mathrm{H^{+}}} \\overset{\\mathrm{ideal}}{=} -\\log_{10} (c_{\\mathrm{H^{+}}} / c^{\\ominus})\n\\end{equation}\nSubstituting for the ionization degree and pH into the expression for $K_A$ we obtain the Henderson-Hasselbalch equation\n\\begin{equation}\n\\mathrm{pH}-\\mathrm{p}K_A = \\log_{10} \\frac{\\alpha}{1-\\alpha}\n\\end{equation}\nOne result of the Henderson-Hasselbalch 
equation is that at a fixed pH value the ionization degree of an ideal acid is independent of concentration. Another implication is that the degree of ionization does not depend on the absolute values of $\\mathrm{p}K_A$ and $\\mathrm{pH}$, but only on their difference, $\\mathrm{pH}-\\mathrm{p}K_A$.\nConstant pH Method\nThe constant pH method Reed1992 is designed to simulate an acid-base ionization reaction at a given pH. It assumes that the simulated system is coupled to an implicit reservoir of $\\mathrm{H^+}$ ions but exchange of ions with this reservoir is not explicitly simulated. Therefore, the concentration of ions in the simulation box is not equal to the concentration of $\\mathrm{H^+}$ ions at the chosen pH. This may lead to artifacts when simulating interacting systems, especially at high or low pH values. Discussion of these artifacts is beyond the scope of this tutorial (see e.g. Landsgesell2019 for further details).\nIn ESPResSo, the forward step of the ionization reaction (from left to right) is implemented by \nchanging the chemical identity (particle type) of a randomly selected $\\mathrm{HA}$ particle to $\\mathrm{A}^-$, and inserting another particle that represents a neutralizing counterion. The neutralizing counterion is not necessarily an $\\mathrm{H^+}$ ion. Therefore, we give it a generic name $\\mathrm{B^+}$. In the reverse direction (from right to left), the chemical identity (particle type) of a randomly selected $\\mathrm{A}^{-}$ is changed to $\\mathrm{HA}$, and a randomly selected $\\mathrm{B}^+$ is deleted from the simulation box. The probability of proposing the forward reaction step is $P_\\text{prop}=N_\\mathrm{HA}/N_0$, and the probability of proposing the reverse step is $P_\\text{prop}=N_{\\mathrm{A}^-}/N_0$. 
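As a sanity check of the proposal scheme just described, the constant-pH move can be sketched as a toy Monte Carlo for the ideal (non-interacting) case, where the potential energy change vanishes and the acceptance probability reduces to min(1, 10**(+/-(pH - pK))). The sampled ionization degree should then approach the Henderson-Hasselbalch value. This is a standalone illustrative sketch, not the ESPResSo implementation:

```python
import random

random.seed(42)

pK, pH = 4.88, 4.88   # at pH = pK the ideal ionization degree is 1/2
N0 = 50               # total number of titratable groups, HA plus A-
n_HA = N0             # start fully associated
alpha_samples = []

for step in range(20000):
    # propose the forward step with probability N_HA/N0, otherwise the reverse step
    if random.random() < n_HA / N0:
        # forward: HA -> A- + B+ ; ideal system, so Delta E_pot = 0
        if random.random() < min(1.0, 10**(pH - pK)):
            n_HA -= 1
    else:
        # reverse: A- + B+ -> HA
        if random.random() < min(1.0, 10**(-(pH - pK))):
            n_HA += 1
    if step >= 2000:  # discard the equilibration phase
        alpha_samples.append((N0 - n_HA) / N0)

alpha = sum(alpha_samples) / len(alpha_samples)
ideal = 1.0 / (1.0 + 10**(pK - pH))
print(alpha, ideal)  # alpha should be close to the ideal value 0.5
```

At pH = pK both directions are always accepted and the chain reduces to an Ehrenfest-urn random walk whose stationary mean is one half, so the sampled average should sit near 0.5.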
The trial move is accepted with the acceptance probability\n$$ P_{\\mathrm{acc}} = \\operatorname{min}\\left(1, \\exp(-\\beta \\Delta E_\\mathrm{pot} \\pm \\ln(10) \\cdot (\\mathrm{pH - p}K_A) ) \\right)$$\nHere $\\Delta E_\\text{pot}$ is the potential energy change due to the reaction, while $\\mathrm{pH}-\\mathrm{p}K_A$ is an input parameter. \nThe signs $\\pm$ correspond to the forward and reverse direction of the ionization reaction, respectively. \nSetup\nThe inputs that we need to define our system in the simulation include\n* concentration of the titratable units c_acid\n* dissociation constant pK\n* Bjerrum length Bjerrum\n* system size (given by the number of titratable units) N_acid\n* concentration of added salt c_salt_SI\n* pH\nFrom the concentration of titratable units and the number of titratable units we calculate the box length.\nWe create a system with this box size.\nFrom the salt concentration we calculate the number of additional salt ion pairs that should be present in the system.\nWe set the dissociation constant of the acid to $\\mathrm{p}K_A=4.88$, which is the acidity constant of propionic acid. 
We choose propionic acid because its structure is closest to the repeating unit of poly(acrylic acid), the most commonly used weak polyacid.\nWe will simulate multiple pH values, the range of which is determined by the parameters offset and num_pHs.", "import matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.constants # physical constants\n\nimport espressomd\nimport pint # module for working with units and dimensions\nfrom espressomd import electrostatics, polymer, reaction_ensemble\nfrom espressomd.interactions import HarmonicBond\n\nureg = pint.UnitRegistry()\n# sigma=0.355 nm is a commonly used particle size in coarse-grained simulations\nureg.define('sigma = 0.355 * nm = sig')\nsigma = 1.0 * ureg.sigma # variable that has the value and dimension of one sigma\n# N_A is the numerical value of Avogadro constant in units 1/mole\nN_A = scipy.constants.N_A/ureg.mole\nBjerrum = 0.715 * ureg.nanometer # Bjerrum length at 300K\n# define that concentration is a quantity that must have a value and a unit\nconcentration = ureg.Quantity\n\n# System parameters\n#############################################################\n# 0.01 mol/L is a reasonable concentration that could be used in experiments\nc_acid = concentration(1e-3, 'mol/L')\n# Using the constant-pH method is safe if Ionic_strength > max(10**(-pH), 10**(-pOH) ) and C_salt > C_acid\n# additional salt to control the ionic strength\nc_salt = concentration(2*c_acid)\n# In the ideal system, concentration is arbitrary (see Henderson-Hasselbalch equation)\n# but it is important in the interacting system\nN_acid = 20 # number of titratable units in the box\n\nPROB_REACTION = 0.5 # select the reaction move with 50% probability\n# probability of the reaction is adjustable parameter of the method that affects the speed of convergence\n\n# Simulate an interacting system with steric repulsion (Warning: it will be slower than without WCA!)\nUSE_WCA = False\n# Simulate an interacting system with electrostatics 
(Warning: it will be very slow!)\nUSE_ELECTROSTATICS = False\n\n# particle types of different species\nTYPE_HA = 0\nTYPE_A = 1\nTYPE_B = 2\nTYPE_Na = 3\nTYPE_Cl = 4\n\nq_HA = 0\nq_A = -1\nq_B = +1\nq_Na = +1\nq_Cl = -1\n\n# acidity constant\npK = 4.88\nK = 10**(-pK)\noffset = 2.0 # range of pH values to be used pK +/- offset\nnum_pHs = 15 # number of pH values\npKw = 14.0 # autoprotolysis constant of water\n\n# dependent parameters\nBox_V = (N_acid/N_A/c_acid)\nBox_L = np.cbrt(Box_V.to('m**3'))\nif tuple(map(int, pint.__version__.split('.'))) < (0, 10):\n Box_L *= ureg('m')\n# we shall often need the numerical value of box length in sigma\nBox_L_in_sigma = Box_L.to('sigma').magnitude\n# unfortunately, pint module cannot handle cube root of m**3, so we need to explicitly set the unit\nN_salt = int(c_salt*Box_V*N_A) # number of salt ion pairs in the box\n# print the values of dependent parameters to check for possible rounding errors\nprint(\"N_salt: {0:.1f}, N_acid: {1:.1f}, N_salt/N_acid: {2:.7f}, c_salt/c_acid: {3:.7f}\".format(\n N_salt, N_acid, 1.0*N_salt/N_acid, c_salt/c_acid))\n\nn_blocks = 16 # number of block to be used in data analysis\ndesired_block_size = 10 # desired number of samples per block\n# number of reaction samples per each pH value\nnum_samples = int(n_blocks * desired_block_size / PROB_REACTION)\npHmin = pK-offset # lowest pH value to be used\npHmax = pK+offset # highest pH value to be used\npHs = np.linspace(pHmin, pHmax, num_pHs) # list of pH values\n\n# Initialize the ESPResSo system\n##############################################\nsystem = espressomd.System(box_l=[Box_L_in_sigma] * 3)\nsystem.time_step = 0.01\nsystem.cell_system.skin = 0.4\nsystem.thermostat.set_langevin(kT=1.0, gamma=1.0, seed=7)\nnp.random.seed(seed=10) # initialize the random number generator in numpy", "After defining the simulation parameters, we set up the system that we want to simulate. 
It is a polyelectrolyte chain with some added salt that is used to control the ionic strength of the solution. For the first run, we set up the system without any steric repulsion and without electrostatic interactions. In the next runs, we will add the steric repulsion and electrostatic interactions to observe their effect on the ionization.", "# create the particles\n##################################################\n# we need to define bonds before creating polymers\nhb = HarmonicBond(k=30, r_0=1.0)\nsystem.bonded_inter.add(hb)\n\n# create the polymer composed of ionizable acid groups, initially in the ionized state\npolymers = polymer.linear_polymer_positions(n_polymers=1,\n beads_per_chain=N_acid,\n bond_length=0.9, seed=23)\nfor polymer in polymers:\n for index, position in enumerate(polymer):\n id = len(system.part)\n system.part.add(id=id, pos=position, type=TYPE_A, q=q_A)\n if index > 0:\n system.part[id].add_bond((hb, id - 1))\n\n# add the corresponding number of H+ ions\nfor index in range(N_acid):\n system.part.add(pos=np.random.random(3)*Box_L_in_sigma, type=TYPE_B, q=q_B)\n\n# add salt ion pairs\nfor index in range(N_salt):\n system.part.add(pos=np.random.random(\n 3)*Box_L_in_sigma, type=TYPE_Na, q=q_Na)\n system.part.add(pos=np.random.random(\n 3)*Box_L_in_sigma, type=TYPE_Cl, q=q_Cl)\n\n# set up the WCA interaction between all particle pairs\nif USE_WCA:\n types = [TYPE_HA, TYPE_A, TYPE_B, TYPE_Na, TYPE_Cl]\n for type_1 in types:\n for type_2 in types:\n system.non_bonded_inter[type_1, type_2].lennard_jones.set_params(\n epsilon=1.0, sigma=1.0,\n cutoff=2**(1.0 / 6), shift=\"auto\")\n\n# run a steepest descent minimization to relax overlaps\nsystem.integrator.set_steepest_descent(\n f_max=0, gamma=0.1, max_displacement=0.1)\nsystem.integrator.run(20)\nsystem.integrator.set_vv() # to switch back to velocity Verlet\n\n\n# short integration to let the system relax\nsystem.integrator.run(steps=1000)\n\n# if needed, set up and tune the Coulomb 
interaction\nif USE_ELECTROSTATICS:\n print(\"set up and tune p3m, please wait....\")\n p3m = electrostatics.P3M(prefactor=Bjerrum.to(\n 'sigma').magnitude, accuracy=1e-3)\n system.actors.add(p3m)\n p3m_params = p3m.get_params()\n# for key in list(p3m_params.keys()):\n# print(\"{} = {}\".format(key, p3m_params[key]))\n print(p3m.get_params())\n print(\"p3m, tuning done\")\nelse:\n # this speeds up the simulation of dilute systems with small particle numbers\n system.cell_system.set_n_square()\n\nprint(\"Done adding particles and interactions\")", "After creating the particles, we initialize the reaction ensemble by setting the temperature, exclusion radius and seed of the random number generator. We set the temperature to unity, which determines that our reduced unit of energy will be $\\varepsilon=1k_{\\mathrm{B}}T$. In an interacting system the exclusion radius ensures that particle insertions too close to other particles are not attempted. Such insertions would make the subsequent Langevin dynamics integration unstable. If the particles are not interacting, we can set the exclusion radius to $0.0$. Otherwise, $1.0$ is a good value. We set the seed to a constant value to ensure reproducible results.", "RE = reaction_ensemble.ConstantpHEnsemble(\n temperature=1, exclusion_radius=1.0, seed=77)", "The next step is to define the reaction system. The order in which species are written in the lists of reactants and products is very important for ESPResSo. When a reaction move is performed, the identity of the first species in the list of reactants is changed to the first species in the list of products, the second reactant species is changed to the second product species, and so on. If the reactant list has more species than the product list, then excess reactant species are deleted from the system. If the product list has more species than the reactant list, then the excess product species are created and randomly placed inside the simulation box. 
This convention is especially important if some of the species belong to a chain-like molecule, and cannot be placed at an arbitrary position.\nIn the example below, the order of reactants and products ensures that the identity of $\\mathrm{HA}$ is changed to $\\mathrm{A^{-}}$ and vice versa, while $\\mathrm{H^{+}}$ is inserted/deleted in the reaction move. Reversing the order of products in our reaction (i.e. from product_types=[TYPE_A, TYPE_B] to product_types=[TYPE_B, TYPE_A]), would result in a reaction move, where the identity of HA would be changed to $\\mathrm{H^+}$, while $\\mathrm{A^-}$ would be inserted/deleted at a random position in the box. We also assign charges to each type because the charge will play an important role later, in simulations with electrostatic interactions.", "RE.add_reaction(gamma=K, reactant_types=[TYPE_HA], reactant_coefficients=[1],\n product_types=[TYPE_A, TYPE_B], product_coefficients=[1, 1],\n default_charges={TYPE_HA: q_HA, TYPE_A: q_A, TYPE_B: q_B})\nprint(RE.get_status())", "Next, we perform simulations at different pH values. The system must be equilibrated at each pH before taking samples.\nCalling RE.reaction(X) attempts in total X reactions (in both backward and forward direction).", "# the reference data from Henderson-Hasselbalch equation\ndef ideal_alpha(pH, pK):\n return 1. 
/ (1 + 10**(pK - pH))\n\n\n# empty lists as placeholders for collecting data\nnumAs_at_each_pH = [] # number of A- species observed at each sample\n\n# run a productive simulation and collect the data\nprint(\"Simulated pH values: \", pHs)\nfor pH in pHs:\n print(\"Run pH {:.2f} ...\".format(pH))\n RE.constant_pH = pH\n numAs_current = [] # temporary data storage for a given pH\n RE.reaction(20*N_acid + 1) # pre-equilibrate to the new pH value\n for i in range(num_samples):\n if np.random.random() < PROB_REACTION:\n # should be at least one reaction attempt per particle\n RE.reaction(N_acid + 1)\n elif USE_WCA:\n system.integrator.run(steps=1000)\n numAs_current.append(system.number_of_particles(type=TYPE_A))\n numAs_at_each_pH.append(numAs_current)\n print(\"measured number of A-: {0:.2f}, (ideal: {1:.2f})\".format(\n np.mean(numAs_current), N_acid*ideal_alpha(pH, pK)))\nprint(\"finished\")", "Results\nFinally, we plot our results and compare them to the analytical results obtained from the Henderson-Hasselbalch equation.\nStatistical Uncertainty\nThe molecular simulation produces a sequence of snapshots of the system that \nconstitute a Markov chain. It is a sequence of realizations of a random process, where\nthe next value in the sequence depends on the preceding one. Therefore,\nthe subsequent values are correlated. To estimate the statistical error of the averages\ndetermined in the simulation, one needs to correct for the correlations.\nHere, we will use a rudimentary way of correcting for correlations, termed the binning method.\nWe refer the reader to specialized literature for a more sophisticated discussion, for example Janke2002. The general idea is to group a long sequence of correlated values into a rather small number of blocks, and compute an average for each block. If the blocks are big enough, they\ncan be considered uncorrelated, and one can apply the formula for standard error of the mean of uncorrelated values. 
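The binning idea described here fits in a few lines of plain Python. The helper below is an illustrative sketch (its name is made up, and it is independent of the `block_analyze` function defined in this notebook):

```python
import math

def block_error(samples, n_blocks=4):
    """Mean and standard error of the mean estimated from per-block averages."""
    block_size = len(samples) // n_blocks
    block_avgs = [sum(samples[b * block_size:(b + 1) * block_size]) / block_size
                  for b in range(n_blocks)]
    mean = sum(block_avgs) / n_blocks
    # variance of the block averages, then the standard error of the mean,
    # treating the blocks as mutually uncorrelated samples
    var = sum((a - mean) ** 2 for a in block_avgs) / n_blocks
    return mean, math.sqrt(var / (n_blocks - 1))

data = [1.0, 3.0, 2.0, 2.0, 4.0, 2.0, 1.0, 1.0]
mean, err = block_error(data, n_blocks=4)
print(mean, err)
```

For the toy data above the four block averages are 2.0, 2.0, 3.0 and 1.0, so the mean is 2.0 and the error is sqrt(0.5/3).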
If the number of blocks is small, then they are uncorrelated but the obtained error estimate has a high uncertainty. If the number of blocks is high, then they are too short to be uncorrelated, and the obtained error estimates are systematically lower than the correct value. Therefore, the method works well only if the sample size is much greater than the autocorrelation time, so that it can be divided into a sufficient number of mutually uncorrelated blocks.\nIn the example below, we use a fixed number of 16 blocks to obtain the error estimates.", "# statistical analysis of the results\ndef block_analyze(input_data, n_blocks=16):\n data = np.array(input_data)\n block = 0\n # this number of blocks is recommended by Janke as a reasonable compromise\n # between the conflicting requirements on block size and number of blocks\n block_size = int(data.shape[1] / n_blocks)\n print(\"block_size:\", block_size)\n # initialize the array of per-block averages\n block_average = np.zeros((n_blocks, data.shape[0]))\n # calculate averages per each block\n for block in range(0, n_blocks):\n block_average[block] = np.average(\n data[:, block * block_size: (block + 1) * block_size], axis=1)\n # calculate the average and average of the square\n av_data = np.average(data, axis=1)\n av2_data = np.average(data * data, axis=1)\n # calculate the variance of the block averages\n block_var = np.var(block_average, axis=0)\n # calculate standard error of the mean\n err_data = np.sqrt(block_var / (n_blocks - 1))\n # estimate autocorrelation time using the formula given by Janke\n # this assumes that the errors have been correctly estimated\n tau_data = np.zeros(av_data.shape)\n for val in range(0, av_data.shape[0]):\n if av_data[val] == 0:\n # unphysical value marks a failure to compute tau\n tau_data[val] = -1.0\n else:\n tau_data[val] = 0.5 * block_size * n_blocks / (n_blocks - 1) * block_var[val] \\\n / (av2_data[val] - av_data[val] * av_data[val])\n return av_data, err_data, tau_data, 
block_size\n\n\n# estimate the statistical error and the autocorrelation time using the formula given by Janke\nav_numAs, err_numAs, tau, block_size = block_analyze(numAs_at_each_pH)\nprint(\"av = \", av_numAs)\nprint(\"err = \", err_numAs)\nprint(\"tau = \", tau)\n\n# calculate the average ionization degree\nav_alpha = av_numAs/N_acid\nerr_alpha = err_numAs/N_acid\n\n# plot the simulation results compared with the ideal titration curve\nplt.figure(figsize=(10, 6), dpi=80)\nplt.errorbar(pHs - pK, av_alpha, err_alpha, marker='o', linestyle='none',\n label=r\"simulation\")\npHs2 = np.linspace(pHmin, pHmax, num=50)\nplt.plot(pHs2 - pK, ideal_alpha(pHs2, pK), label=r\"ideal\")\nplt.xlabel('pH-p$K$', fontsize=16)\nplt.ylabel(r'$\\alpha$', fontsize=16)\nplt.legend(fontsize=16)\nplt.show()", "The simulation results for the non-interacting case very well compare with the analytical solution of Henderson-Hasselbalch equation. There are only minor deviations, and the estimated errors are small too. This situation will change when we introduce interactions.\nIt is useful to check whether the estimated errors are consistent with the assumptions that were used to obtain them. To do this, we follow Janke2002 to estimate the number of uncorrelated samples per block, and check whether each block contains a sufficient number of uncorrelated samples (we choose 10 uncorrelated samples per block as the threshold value).\nIntentionally, we make our simulation slightly too short, so that it does not produce enough uncorrelated samples. We encourage the reader to vary the number of blocks or the number of samples to see how the estimated error changes with these parameters.", "# check if the blocks contain enough data for reliable error estimates\nprint(\"uncorrelated samples per block:\\nblock_size/tau = \",\n block_size/tau)\nthreshold = 10. 
# block size should be much greater than the correlation time\nif np.any(block_size / tau < threshold):\n print(\"\\nWarning: some blocks may contain less than \", threshold, \"uncorrelated samples.\"\n \"\\nYour error estimates may be unreliable.\"\n \"\\nPlease, check them using a more sophisticated method or run a longer simulation.\")\n print(\"? block_size/tau > threshold ? :\", block_size/tau > threshold)\nelse:\n print(\"\\nAll blocks seem to contain more than \", threshold, \"uncorrelated samples.\\\n Error estimates should be OK.\")", "To look in more detail at the statistical accuracy, it is useful to plot the deviations from the analytical result. This provides another way to check the consistency of error estimates. About 68% of the results should be within one error bar from the analytical result, whereas about 95% of the results should be within two times the error bar. Indeed, if you plot the deviations by running the script below, you should observe that most of the results are within one error bar from the analytical solution, a smaller fraction of the results is slightly further than one error bar, and one or two might be about two error bars apart. Again, this situation will change when we introduce interactions because the ionization of the interacting system should deviate from the Henderson-Hasselbalch equation.", "# plot the deviations from the ideal result\nplt.figure(figsize=(10, 6), dpi=80)\nylim = np.amax(abs(av_alpha-ideal_alpha(pHs, pK)))\nplt.ylim((-1.5*ylim, 1.5*ylim))\nplt.errorbar(pHs - pK, av_alpha-ideal_alpha(pHs, pK),\n err_alpha, marker='o', linestyle='none', label=r\"simulation\")\nplt.plot(pHs - pK, 0.0*ideal_alpha(pHs, pK), label=r\"ideal\")\nplt.xlabel('pH-p$K$', fontsize=16)\nplt.ylabel(r'$\\alpha - \\alpha_{ideal}$', fontsize=16)\nplt.legend(fontsize=16)\nplt.show()", "The Neutralizing Ion $\\mathrm{B^+}$\nUp to now we did not discuss the chemical nature of the neutralizer $\\mathrm{B^+}$. 
The added salt is not relevant in this context, therefore we omit it from the discussion. The simplest case to consider is what happens if you add the acidic polymer to pure water ($\\mathrm{pH} = 7$). Some of the acid groups dissociate and release $\\mathrm{H^+}$ ions into the solution. The pH decreases to a value that depends on $\\mathrm{p}K_{\\mathrm{A}}$ and on the concentration of ionizable groups. Now, three ionic species are present in the solution: $\\mathrm{H^+}$, $\\mathrm{A^-}$, and $\\mathrm{OH^-}$. Because the reaction generates only one $\\mathrm{B^+}$ ion in the simulation box, we conclude that in this case the $\\mathrm{B^+}$ ions correspond to $\\mathrm{H^+}$ ions. The $\\mathrm{H^+}$ ions neutralize both the $\\mathrm{A^-}$ and the $\\mathrm{OH^-}$ ions. At acidic pH there are only very few $\\mathrm{OH^-}$ ions and nearly all $\\mathrm{H^+}$ ions act as a neutralizer for the $\\mathrm{A^-}$ ions. Therefore, the concentration of $\\mathrm{B^+}$ is very close to the concentration of $\\mathrm{H^+}$ in the real aqueous solution. Only very few $\\mathrm{OH^-}$ ions, and the $\\mathrm{H^+}$ ions needed to neutralize them, are missing in the simulation box, when compared to the real solution.\nTo achieve a more acidic pH (with the same pK and polymer concentration), we need to add an acid to the system. We can do that by adding a strong acid, such as HCl or $\\mathrm{HNO}_3$. We will denote this acid by a generic name $\\mathrm{HX}$ to emphasize that in general its anion can be different from the salt anion $\\mathrm{Cl^{-}}$. Now, there are 4 ionic species in the solution: $\\mathrm{H^+}$, $\\mathrm{A^-}$, $\\mathrm{OH^-}$, and $\\mathrm{X^-}$ ions. By the same argument as before, we conclude that $\\mathrm{B^+}$ ions correspond to $\\mathrm{H^+}$ ions. The $\\mathrm{H^+}$ ions neutralize the $\\mathrm{A^-}$, $\\mathrm{OH^-}$, and the $\\mathrm{X^-}$ ions. 
Because the concentration of $\\mathrm{X^-}$ is not negligible anymore, the concentration of $\\mathrm{B^+}$ in the simulation box differs from the $\\mathrm{H^+}$ concentration in the real solution. Now, many more ions are missing in the simulation box, as compared to the real solution: Few $\\mathrm{OH^-}$ ions, many $\\mathrm{X^-}$ ions, and all the $\\mathrm{H^+}$ ions that neutralize them.\nTo achieve a neutral pH we need to add some base to the system to neutralize the polymer.\nIn the simplest case we add an alkali metal hydroxide, such as $\\mathrm{NaOH}$ or $\\mathrm{KOH}$, which we will generically denote as $\\mathrm{MOH}$. Now, there are 4 ionic species in the solution: $\\mathrm{H^+}$, $\\mathrm{A^-}$, $\\mathrm{OH^-}$, and $\\mathrm{M^+}$. In such a situation, we cannot clearly attribute a specific chemical identity to the $\\mathrm{B^+}$ ions. However, only very few $\\mathrm{H^+}$ and $\\mathrm{OH^-}$ ions are present in the system at $\\mathrm{pH} = 7$. Therefore, we can make the approximation that at this pH, all $\\mathrm{A^-}$ are neutralized by the $\\mathrm{M^+}$ ions, and the $\\mathrm{B^+}$ ions correspond to $\\mathrm{M^+}$. Then, the concentration of $\\mathrm{B^+}$ also corresponds to the concentration of $\\mathrm{M^+}$ ions. Now, again only a few ions are missing in the simulation box, as compared to the real solution: Few $\\mathrm{OH^-}$ ions, and few $\\mathrm{H^+}$ ions.\nTo achieve a basic pH we need to add even more base to the system to neutralize the polymer.\nAgain, there are 4 ionic species in the solution: $\\mathrm{H^+}$, $\\mathrm{A^-}$, $\\mathrm{OH^-}$, and $\\mathrm{M^+}$ and we cannot clearly attribute a specific chemical identity to the $\\mathrm{B^+}$ ions. 
Because only very few $\\mathrm{H^+}$ ions should be present in the solution, we can make the approximation that at this pH, all $\\mathrm{A^-}$ ions are neutralized by the $\\mathrm{M^+}$ ions, and therefore $\\mathrm{B^+}$ ions in the simulation correspond to $\\mathrm{M^+}$ ions in the real solution. Because additional $\\mathrm{M^+}$ ions in the real solution neutralize the $\\mathrm{OH^-}$ ions, the concentration of $\\mathrm{B^+}$ does not correspond to the concentration of $\\mathrm{M^+}$ ions. Now, again many ions are missing in the simulation box, as compared to the real solution: Few $\\mathrm{H^+}$ ions, many $\\mathrm{OH^-}$ ions, and a comparable amount of the $\\mathrm{M^+}$ ions.\nTo further illustrate this subject, we compare the concentration of the neutralizer ion $\\mathrm{B^+}$ calculated in the simulation with the expected number of ions of each species. At a given pH and pK we can calculate the expected degree of ionization from the Henderson Hasselbalch equation. Then we apply the electroneutrality condition \n$$c_\\mathrm{A^-} + c_\\mathrm{OH^-} + c_\\mathrm{X^-} = c_\\mathrm{H^+} + c_\\mathrm{M^+}$$\nwhere we use either $c_\\mathrm{X^-}=0$ or $c_\\mathrm{M^+}=0$ because we always only add extra acid or base, but never both. 
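The bookkeeping implied by the electroneutrality condition above can be sketched in plain Python for the ideal case (concentrations in mol/L, activity coefficients ignored; the function name is illustrative, not part of the tutorial code):

```python
def neutralizer_concentrations(pH, pK, c_acid, pKw=14.0):
    """Ideal added-base (M+) or added-acid (X-) concentration, in mol/L,
    required to reach the given pH, obtained from electroneutrality."""
    alpha = 1.0 / (1.0 + 10**(pK - pH))      # Henderson-Hasselbalch
    c_H = 10**(-pH)                          # ideal H+ concentration
    c_OH = 10**(-(pKw - pH))                 # ideal OH- concentration
    excess = alpha * c_acid + c_OH - c_H     # = c_M - c_X
    if excess >= 0.0:
        return excess, 0.0                   # base MOH must be added
    return 0.0, -excess                      # strong acid HX must be added

# neutral pH: almost one M+ per acid group is needed
c_M, c_X = neutralizer_concentrations(pH=7.0, pK=4.88, c_acid=1e-3)
# acidic pH: extra strong acid HX is needed instead
c_M2, c_X2 = neutralizer_concentrations(pH=2.0, pK=4.88, c_acid=1e-3)
print(c_M, c_X, c_M2, c_X2)
```

The sign of the excess charge decides which of the two species is added, mirroring the "either M+ or X-, never both" convention of the text.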
Adding both would be equivalent to adding extra salt $\\mathrm{MX}$.\nWe obtain the concentrations of $\\mathrm{OH^-}$ and $\\mathrm{H^+}$ from the input pH value, and substitute them into the electroneutrality equation to obtain\n$$\\alpha c_\\mathrm{acid} + 10^{-(\\mathrm{p}K_\\mathrm{w} - \\mathrm{pH})} - 10^{-\\mathrm{pH}} = c_\\mathrm{M^+} - c_\\mathrm{X^-}$$\nDepending on whether the left-hand side of this equation is positive or negative we know whether we should add $\\mathrm{M^+}$ or $\\mathrm{X^-}$ ions.", "# average concentration of B+ is the same as the concentration of A-\nav_c_Bplus = av_alpha*c_acid\nerr_c_Bplus = err_alpha*c_acid # error in the average concentration\n\nfull_pH_range = np.linspace(2, 12, 100)\nideal_c_Aminus = ideal_alpha(full_pH_range, pK)*c_acid\nideal_c_OH = np.power(10.0, -(pKw - full_pH_range))*ureg('mol/L')\nideal_c_H = np.power(10.0, -full_pH_range)*ureg('mol/L')\n# ideal_c_M is calculated from electroneutrality\nideal_c_M = np.maximum((ideal_c_Aminus + ideal_c_OH - ideal_c_H).to(\n 'mol/L').magnitude, np.zeros_like(full_pH_range))*ureg('mol/L')\n\n# plot the simulation results compared with the ideal results of the cations\nplt.figure(figsize=(10, 6), dpi=80)\nplt.errorbar(pHs,\n av_c_Bplus.to('mol/L').magnitude,\n err_c_Bplus.to('mol/L').magnitude,\n marker='o', c=\"tab:blue\", linestyle='none',\n label=r\"measured $c_{\\mathrm{B^+}}$\", zorder=2)\nplt.plot(full_pH_range, ideal_c_H.to('mol/L').magnitude, c=\"tab:green\",\n label=r\"ideal $c_{\\mathrm{H^+}}$\", zorder=0)\nplt.plot(full_pH_range, ideal_c_M.to('mol/L').magnitude, c=\"tab:orange\",\n label=r\"ideal $c_{\\mathrm{M^+}}$\", zorder=0)\nplt.plot(full_pH_range, ideal_c_Aminus.to('mol/L').magnitude, c=\"tab:blue\", ls=(0, (5, 5)),\n label=r\"ideal $c_{\\mathrm{A^-}}$\", zorder=1)\nplt.yscale(\"log\")\nplt.ylim(1e-6,)\nplt.xlabel('input pH', fontsize=16)\nplt.ylabel(r'concentration $c$ $[\\mathrm{mol/L}]$', fontsize=16)\nplt.legend(fontsize=16)\nplt.show()", "The plot 
shows that at intermediate pH the concentration of $\\mathrm{B^+}$ ions is approximately equal to the concentration of $\\mathrm{M^+}$ ions. Only at one specific $\\mathrm{pH}$ is the concentration of $\\mathrm{B^+}$ ions exactly equal to the concentration of $\\mathrm{M^+}$ ions. This is the pH one obtains when dissolving the weak acid $\\mathrm{A}$ in pure water.\nIn an ideal system, the ions missing in the simulation have no effect on the ionization degree. In an interacting system, the presence of ions in the box affects the properties of other parts of the system. Therefore, in an interacting system this discrepancy is harmless only at intermediate pH. The effect of the small ions on the rest of the system can be estimated from the overall ionic strength.\n$$ I = \\frac{1}{2}\\sum_i c_i z_i^2 $$", "ideal_c_X = np.maximum(-(ideal_c_Aminus + ideal_c_OH - ideal_c_H).to(\n    'mol/L').magnitude, np.zeros_like(full_pH_range))*ureg('mol/L')\n\nideal_ionic_strength = 0.5 * \\\n    (ideal_c_X + ideal_c_M + ideal_c_H + ideal_c_OH + 2*c_salt)\n# in constant-pH simulation ideal_c_Aminus = ideal_c_Bplus\ncpH_ionic_strength = 0.5*(ideal_c_Aminus + 2*c_salt)\ncpH_ionic_strength_measured = 0.5*(av_c_Bplus + 2*c_salt)\ncpH_error_ionic_strength_measured = 0.5*err_c_Bplus\n\nplt.figure(figsize=(10, 6), dpi=80)\nplt.errorbar(pHs,\n             cpH_ionic_strength_measured.to('mol/L').magnitude,\n             cpH_error_ionic_strength_measured.to('mol/L').magnitude,\n             c=\"tab:blue\",\n             linestyle='none', marker='o',\n             label=r\"measured\", zorder=3)\nplt.plot(full_pH_range,\n         cpH_ionic_strength.to('mol/L').magnitude,\n         c=\"tab:blue\",\n         ls=(0, (5, 5)),\n         label=r\"cpH\", zorder=2)\nplt.plot(full_pH_range,\n         ideal_ionic_strength.to('mol/L').magnitude,\n         c=\"tab:orange\",\n         linestyle='-',\n         label=r\"ideal\", zorder=1)\n\n\nplt.yscale(\"log\")\nplt.xlabel('input pH', fontsize=16)\nplt.ylabel(r'Ionic Strength [$\\mathrm{mol/L}$]', fontsize=16)\nplt.legend(fontsize=16)\nplt.show()", "We see that the ionic strength in the 
simulation box significantly deviates from the ionic strength of the real solution only at high or low pH values. If the $\\mathrm{p}K_{\\mathrm{A}}$ value is sufficiently large, then the deviation at very low pH can also be neglected because then the polymer is uncharged in the region where the ionic strength is not correctly represented in the cpH simulation. At a high pH the ionic strength will have an effect on the weak acid, because it is fully charged. The pH range in which the cpH method uses approximately the right ionic strength depends on salt concentration, weak acid concentration and the $\\mathrm{p}K_{\\mathrm{A}}$ value. See also Landsgesell2019 for a more detailed discussion of this issue and its consequences.\nSuggested problems for further work\n\n\nTry changing the concentration of ionizable species in the non-interacting system. You should observe that it does not affect the obtained titration.\n\n\nTry changing the number of samples and the number of particles to see how the estimated error and the number of uncorrelated samples will change. Be aware that if the number of uncorrelated samples is low, the error estimation is too optimistic.\n\n\nTry running the same simulations with steric repulsion and then again with electrostatic interactions. Observe how the ionization equilibrium is affected by various interactions. Warning: simulations with electrostatics are much slower. If you want to obtain your results more quickly, then decrease the number of pH values.\n\n\nReferences\nJanke2002 Janke W. Statistical Analysis of Simulations: Data Correlations and Error Estimation,\nIn Quantum Simulations of Complex Many-Body Systems: From Theory to Algorithms, Lecture Notes,\nJ. Grotendorst, D. Marx, A. Muramatsu (Eds.), John von Neumann Institute for Computing, Jülich,\nNIC Series, Vol. 10, ISBN 3-00-009057-6, pp. 423-445, 2002.\nLandsgesell2019 Landsgesell, J.; Nová, L.; Rud, O.; Uhlík, F.; Sean, D.; Hebbeker, P.; Holm, C.; Košovan, P. 
Simulations of Ionization Equilibria in Weak Polyelectrolyte Solutions and Gels. Soft Matter 2019, 15 (6), 1155–1185. \nReed1992 Reed, C. E.; Reed, W. F. Monte Carlo Study of Titration of Linear Polyelectrolytes. The Journal of Chemical Physics 1992, 96 (2), 1609–1620.\nSmith1994 Smith, W. R.; Triska, B. The Reaction Ensemble Method for the Computer Simulation of Chemical and Phase Equilibria. I. Theory and Basic Examples. The Journal of Chemical Physics 1994, 100 (4), 3019–3027." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
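The electroneutrality bookkeeping in the constant-pH notebook above can be reproduced without the units or plotting machinery. A minimal standalone sketch (the function names and the pK/c_acid values here are illustrative assumptions, not taken from the notebook):

```python
def ideal_alpha(pH, pK):
    # Henderson-Hasselbalch: degree of ionization of a weak acid
    return 1.0 / (1.0 + 10.0 ** (pK - pH))

def added_ions(pH, pK, c_acid, pKw=14.0):
    """Concentrations (mol/L) of extra base M+ or extra acid X- needed
    to reach the given pH, from c_A- + c_OH- + c_X- = c_H+ + c_M+."""
    lhs = ideal_alpha(pH, pK) * c_acid + 10.0 ** (-(pKw - pH)) - 10.0 ** (-pH)
    # positive left-hand side -> add M+; negative -> add X-
    return (lhs, 0.0) if lhs >= 0.0 else (0.0, -lhs)  # (c_M, c_X)

c_M, c_X = added_ions(pH=9.0, pK=4.0, c_acid=1e-3)
```

At pH = pK the acid is half ionized, and at high pH only extra base is needed, matching the `np.maximum` clipping the notebook uses for `ideal_c_M`.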
danmelamed/vowpal_wabbit
python/examples/poisson_regression.ipynb
bsd-3-clause
[ "import numpy as np\nimport matplotlib.pyplot as plt\nfrom vowpalwabbit import pyvw\n\n%matplotlib inline\n\n# Generate some count data that has poisson distribution \n# z ~ poisson(x + y), x \\in [0,10), y \\in [0,10)\nx = np.random.choice(range(0,10), 100)\ny = np.random.choice(range(0,10), 100)\nz = np.random.poisson(x + y)", "We will model this data in two ways\n* log transform the labels and use linear prediction (square loss)\n* model it directly using poisson loss\nThe first model predicts mean(log(label)) the second predicts log(mean(label)). Due to Jensen's inequality, the first approach produces systematic negative bias", "# Train log-transform model\ntraining_samples = []\nlogz = np.log(0.001 + z)\nvw = pyvw.vw(\"-b 2 --loss_function squared -l 0.1 --holdout_off -f vw.log.model --readable_model vw.readable.log.model\")\nfor i in range(len(logz)):\n training_samples.append(\"{label} | x:{x} y:{y}\".format(label=logz[i], x=x[i], y=y[i]))\n# Do hundred passes over the data and store the model in vw.log.model\nfor iteration in range(100):\n for i in range(len(training_samples)):\n vw.learn(training_samples[i])\nvw.finish()\n\n# Generate predictions from the log-transform model\nvw = pyvw.vw(\"-i vw.log.model -t\")\nlog_predictions = [vw.predict(sample) for sample in training_samples]\n# Measure bias in the log-domain\nlog_bias = np.mean(log_predictions - logz)\nbias = np.mean(np.exp(log_predictions) - z)\n\n", "Although the model is relatively unbiased in the log-domain where we trained our model, in the original domain there is underprediction as we expected from Jensenn's inequality", "# Train original domain model using poisson regression\ntraining_samples = []\nvw = pyvw.vw(\"-b 2 --loss_function poisson -l 0.1 --holdout_off -f vw.poisson.model --readable_model vw.readable.poisson.model\")\nfor i in range(len(z)):\n training_samples.append(\"{label} | x:{x} y:{y}\".format(label=z[i], x=x[i], y=y[i]))\n# Do hundred passes over the data and store the 
model in vw.poisson.model\nfor iteration in range(100):\n    for i in range(len(training_samples)):\n        vw.learn(training_samples[i])\nvw.finish()\n\n# Generate predictions from the poisson model\nvw = pyvw.vw(\"-i vw.poisson.model\")\npoisson_predictions = [np.exp(vw.predict(sample)) for sample in training_samples]\npoisson_bias = np.mean(poisson_predictions - z)\n\n\n\nplt.figure(figsize=(18,6))\n# Measure bias in the log-domain\nplt.subplot(131)\nplt.plot(logz, log_predictions, '.')\nplt.plot(logz, logz, 'r')\nplt.title('Log-domain bias:%f'%(log_bias))\nplt.xlabel('label')\nplt.ylabel('prediction')\n\nplt.subplot(132)\nplt.plot(z, np.exp(log_predictions), '.')\nplt.plot(z, z, 'r')\nplt.title('Original-domain bias:%f'%(bias))\nplt.xlabel('label')\nplt.ylabel('prediction')\n\nplt.subplot(133)\nplt.plot(z, poisson_predictions, '.')\nplt.plot(z, z, 'r')\nplt.title('Poisson bias:%f'%(poisson_bias))\nplt.xlabel('label')\nplt.ylabel('prediction')" ]
[ "code", "markdown", "code", "markdown", "code" ]
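The underprediction the notebook above attributes to Jensen's inequality can be checked with numpy alone, independent of vowpalwabbit (the data-generating setup mirrors the notebook's synthetic counts; the 0.001 offset is the same guard against log(0) used there):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.integers(0, 10, 100_000)
y = rng.integers(0, 10, 100_000)
z = rng.poisson(x + y)  # counts with mean x + y

# A squared-loss model on log targets estimates E[log z]; exponentiating
# that estimate undershoots E[z] because exp is convex (Jensen's inequality).
mean_log = np.mean(np.log(0.001 + z))
log_mean = np.log(np.mean(z))
bias = np.exp(mean_log) - np.mean(z)  # negative: systematic underprediction
```

A Poisson-loss model instead estimates log(mean), so exponentiating its prediction is unbiased for the count mean, which is why the notebook's third panel sits on the diagonal.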
bjodah/aqchem
examples/_kinetic_model_fitting.ipynb
bsd-2-clause
[ "Fitting kinetic parameters to experimental data\nAuthor: Björn Dahlgren.\nLet us consider the reaction:\n$$\nFe^{3+} + SCN^- \\rightarrow FeSCN^{2+}\n$$\nthe product is strongly coloured and we have experimental data (from a stopped-flow apparatus) of the absorbance as function of time after mixing for several replicates. The experiment was performed at 7 different temperatures and for one temperature, 7 different ionic strengths. For each set of conditions the experiment was re-run 7 times (replicates). In this notebook, we will determine the activation enthalpy and entropy through regression analysis; we will also look at the ionic strength dependence.", "import bz2, codecs, collections, functools, itertools, json\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport chempy\nimport chempy.equilibria\nfrom chempy.electrolytes import ionic_strength\nfrom chempy.kinetics.arrhenius import fit_arrhenius_equation\nfrom chempy.printing import number_to_scientific_latex, as_per_substance_html_table\nfrom chempy.properties.water_density_tanaka_2001 import water_density\nfrom chempy.units import rescale, to_unitless, default_units as u\nfrom chempy.util.regression import least_squares, irls, avg_params, plot_fit, plot_least_squares_fit, plot_avg_params\nfrom chempy._solution import QuantityDict\n%matplotlib inline\nprint(chempy.__version__)", "Experimental conditions, the two solutions which were mixed in 1:1 volume ratio in a stopped flow apparatus:", "sol1 = QuantityDict(u.mM, {'SCN-': 3*u.mM, 'K+': 3*u.mM, 'Na+': 33*u.mM, 'H+': 50*u.mM, 'ClO4-': (33+50)*u.mM})\nsol2 = QuantityDict(u.mM, {'Fe+3': 6*u.mM, 'H+': 50*u.mM, 'ClO4-': (3*6+50)*u.mM})\n\nsol = (sol1 + sol2)/2  # 1:1 volume ratio at mixing\nsol.quantity_name = 'concentration'\nIbase = ionic_strength(rescale(sol/water_density(293*u.K, units=u), u.molal))\nprint(Ibase)\nsol\n\nionic_strength_keys = 'abcd'\nionic_strengths = dict(zip(ionic_strength_keys, [0, 20, 40, 60]*u.molal*1e-3))\ntemperature_keys = 
'16.5 18.5 20.5 22.5 24.5'.split()\nT0C = 273.15*u.K\ntemperatures = {k: T0C + float(k)*u.K for k in temperature_keys}\nnrep = 7\nindices = lambda k: (ionic_strength_keys.index(k[0]), temperature_keys.index(k[1]))", "We will read the data from a preprocessed file:", "transform = np.array([[1e-3, 0], [0, 1e-4]]) # converts 1st col: ms -> s and 2nd col to absorbance\n_reader = codecs.getreader(\"utf-8\")\n_dat = {tuple(k): np.dot(np.array(v), transform) for k, v in json.load(_reader(bz2.BZ2File('specdata.json.bz2')))}\ndata = collections.defaultdict(list)\nfor (tI, tT, tR), v in _dat.items():\n k = (tI, tT) # tokens for ionic strength and temperatures\n data[k].append(v)\nassert len(data) == len(ionic_strengths)*len(temperatures) and all(len(serie) == nrep for serie in data.values())", "Let's plot the data:", "def mk_subplots(nrows=1, subplots_adjust=True, **kwargs):\n fig, axes = plt.subplots(nrows, len(ionic_strengths), figsize=(15,6), **kwargs)\n if subplots_adjust:\n plt.subplots_adjust(hspace=0.001, wspace=0.001)\n return axes\n\ndef _set_axes_titles_to_ionic_strength(axes, xlim=None, xlabel=None):\n for tI, ax in zip(ionic_strength_keys, axes):\n ax.set_title(r'$I\\ =\\ %s$' % number_to_scientific_latex(Ibase + ionic_strengths[tI], fmt=3))\n if xlabel is not None:\n ax.set_xlabel(xlabel)\n if xlim is not None:\n ax.set_xlim(xlim)\n\ndef plot_series(series):\n axes = mk_subplots(sharey=True, sharex=True)\n colors = 'rgbmk'\n for key in itertools.product(ionic_strengths, temperatures):\n idx_I, idx_T = indices(key)\n for serie in series[key]:\n axes[idx_I].plot(serie[:, 0], serie[:, 1], c=colors[idx_T], alpha=0.15)\n _set_axes_titles_to_ionic_strength(axes, xlim=[0, 3.4], xlabel='Time / s')\n axes[0].set_ylabel('Absorbance')\n for c, tT in zip(reversed(colors), reversed(temperature_keys)):\n axes[0].plot([], [], c=c, label=tT + ' °C')\n axes[0].legend(loc='best')\nplot_series(data)", "We see that one data series is off: 16.5 ℃ and 0.0862 molal. 
Let's ignore that for now and perform the fitting, let's start with a pseudo-first order assumption (poor but simple):", "def fit_pseudo1(serie, ax=None):\n plateau = np.mean(serie[2*serie.shape[0]//3:, 1])\n y = np.log(np.clip(plateau - serie[:, 1], 1e-6, 1))\n x = serie[:, 0]\n # irls: Iteratively reweighted least squares\n res = irls(x, y, irls.gaussian)\n if ax is not None:\n plot_least_squares_fit(x, y, res)\n return res\n\nbeta, vcv, info = fit_pseudo1(data['a', '16.5'][0], ax=True)\n\ndef fit_all(series, fit_cb=fit_pseudo1, plot=False):\n if plot:\n axes = mk_subplots(nrows=len(temperatures), sharex=True, sharey=True)#, subplots_adjust=False)\n _set_axes_titles_to_ionic_strength(axes[0, :])\n avg = {}\n for key in itertools.product(ionic_strengths, temperatures):\n idx_I, idx_T = indices(key)\n opt_params, cov_params = [], []\n for serie in series[key]:\n beta, vcv, nfo = fit_cb(serie)\n opt_params.append(beta)\n cov_params.append(vcv)\n ax = axes[idx_T, idx_I] if plot else None\n avg[key] = avg_params(opt_params, cov_params)\n plot_avg_params(opt_params, cov_params, avg[key], ax=ax, flip=True, nsigma=3)\n for tk, ax in zip(temperature_keys, axes[:, 0]):\n ax.set_ylabel('T = %s C' % tk)\n return avg\n\nresult_pseudo1 = fit_all(data, plot=True)\n\ndef pseudo_to_k2(v):\n unit = 1/sol['Fe+3']/u.second\n k_val = -v[0][1]*unit\n k_err = v[1][1]*unit\n return k_val, k_err\n\nk_pseudo1 = {k: pseudo_to_k2(v) for k, v in result_pseudo1.items()}\nk_pseudo1['a', '16.5']\n\nk2_unit = 1/u.M/u.s\naxes = mk_subplots(sharex=True, sharey=True)\nfor idxI, (tI, I) in enumerate(ionic_strengths.items()):\n series = np.empty((len(temperatures), 3))\n for idxT, (tT, T) in enumerate(temperatures.items()):\n kval, kerr = [to_unitless(v, k2_unit) for v in k_pseudo1[tI, tT]]\n lnk_err = (np.log(kval + kerr) - np.log(kval - kerr))/2\n series[idxT, :] = to_unitless(1/T, 1/u.K), np.log(kval), 1/lnk_err**2\n x, y, w = series.T\n res = b, vcv, r2 = least_squares(x, y, w)\n 
plot_least_squares_fit(x, y, res, w**-0.5, plot_cb=functools.partial(\n plot_fit, ax=axes[idxI], kw_data=dict(ls='None', marker='.'), nsigma=3))\n axes[idxI].get_lines()[-1].set_label(r'$E_{\\rm a} = %.5g\\ kJ/mol$' % (-b[1]*8.314511e-3))\n axes[idxI].legend()\n_set_axes_titles_to_ionic_strength(axes)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
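The last cell of the kinetics notebook above fits ln k against 1/T to extract an activation energy. The same regression can be sketched with plain numpy on synthetic Arrhenius data (`Ea_true` and `lnA_true` are arbitrary assumed values, not the notebook's results):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)
Ea_true, lnA_true = 50_000.0, 20.0  # assumed "true" parameters

# ln k = ln A - Ea/(R T) is linear in 1/T
T = np.array([289.65, 291.65, 293.65, 295.65, 297.65])  # ~16.5-24.5 deg C
lnk = lnA_true - Ea_true / (R * T)

slope, intercept = np.polyfit(1.0 / T, lnk, 1)
Ea_fit = -slope * R  # slope of ln k vs 1/T is -Ea/R
```

This is the unweighted version of the notebook's `least_squares` call; there, each point is additionally weighted by the inverse variance of ln k propagated from the rate-constant error.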
dacr26/CompPhys
01_01_euler.ipynb
mit
[ "author:\n- 'Adrian E. Feiguin'\ntitle: 'Computational Physics'\n...\nOrdinary differential equations\nLet’s consider a simple 1st order equation: \n$$\\frac{dy}{dx}=f(x,y)$$\nTo solve this equation with a computer we need to discretize the differences: we\nhave to convert the differential equation into a “finite differences” equation. The simplest\nsolution is Euler’s method.\nEuler’s method\nSuppose that at a point $x_0$, the function $y$ has a value $y_0$. We\nwant to find the approximate value of $y$ in a point $x_1$ close to\n$x_0$, $x_1=x_0+\\Delta x$, with $\\Delta x$ small. We assume that $f$,\nthe rate of change of $y$, is constant in this interval $\\Delta x$.\nTherefore we find: $$\\begin{eqnarray}\n&& dx \\approx \\Delta x &=&x_1-x_0, \\\n&& dy \\approx \\Delta y &=&y_1-y_0,\\end{eqnarray}$$ with\n$y_1=y(x_1)=y(x_0+\\Delta x)$. Then we re-write the differential equation in terms of discrete differences as:\n$$\\frac{\\Delta y}{\\Delta x}=f(x,y)$$ or \n$$\\Delta y = f(x,y)\\Delta x$$\nand approximate the value of $y_1$ as\n$$y_1=y_0+f(x_0,y_0)(x_1-x_0)$$ We can generalize this formula to find\nthe value of $y$ at $x_2=x_1+\\Delta x$ as\n$$y_{2}=y_1+f(x_1,y_1)\\Delta x,$$ or in the general case:\n$$y_{n+1}=y_n+f(x_n,y_n)\\Delta x$$\nThis is a good approximation as long as $\\Delta x$ is “small”. What is\nsmall? Depends on the problem, but it is basically defined by the “rate\nof change”, or “smoothness” of $f$. $f(x)$ has to behave smoothly and\nwithout rapid variations in the interval $\\Delta x$.\nNotice that Euler’s method is equivalent to a 1st order Taylor expansion\nabout the point $x_0$. The “local error” calculating $x_1$ is then\n$O(\\Delta x^2)$. If we use the method $N$ times to calculate $N$\nconsecutive points, the propagated “global” error will be\n$NO(\\Delta x^2)\\approx O(\\Delta \nx)$. This error decreases linearly with decreasing step, so we need to\nhalve the step size to reduce the error in half. 
The numerical work for\neach step consists of a single evaluation of $f$.\nExercise 1.1: Newton’s law of cooling\nIf the temperature difference between an object and its surroundings is\nsmall, the rate of change of the temperature of the object is\nproportional to the temperature difference: $$\\frac{dT}{dt}=-r(T-T_s),$$\nwhere $T$ is the temperature of the body, $T_s$ is the temperature of\nthe environment, and $r$ is a “cooling constant” that depends on the\nheat transfer mechanism, the contact area with the environment and the\nthermal properties of the body. The minus sign appears because if\n$T>T_s$, the temperature must decrease.\nWrite a program to calculate the temperature of a body at a time $t$,\ngiven the cooling constant $r$ and the temperature of the body at time\n$t=0$. Plot the results for $r=0.1\\frac{1}{min}$; $T_0=83^{\\circ} C$\nusing different intervals $\\Delta t$ and compare with exact (analytical)\nresults.", "T0 = 10.    # initial temperature\nTs = 83.    # temp. of the environment\nr = 0.1     # cooling rate\ndt = 0.05   # time step\ntmax = 60.  # maximum time\nnsteps = int(tmax/dt)  # number of steps\n\nT = T0\nfor i in range(1,nsteps+1):\n    new_T = T - r*(T-Ts)*dt\n    T = new_T\n    print(i, i*dt, T)\n    # we can also do T = T - r*(T-Ts)*dt\n    ", "Let's try plotting the results. We first need to import the required libraries and methods", "%matplotlib inline\nimport numpy as np\nfrom matplotlib import pyplot ", "Next, we create numpy arrays to store the (x,y) values", "my_time = np.zeros(nsteps)\nmy_temp = np.zeros(nsteps)", "We have to rewrite the loop to store the values in the arrays. 
Remember that numpy arrays start from 0.", "T = T0\nmy_temp[0] = T0\nfor i in range(1,nsteps):\n    T = T - r*(T-Ts)*dt\n    my_time[i] = i*dt\n    my_temp[i] = T\n    \n\npyplot.plot(my_time, my_temp, color='#003366', ls='-', lw=3)\npyplot.xlabel('time')\npyplot.ylabel('temperature');", "We could have saved effort by defining", "my_time = np.linspace(0.,tmax,nsteps)\n\npyplot.plot(my_time, my_temp, color='#003366', ls='-', lw=3)\npyplot.xlabel('time')\npyplot.ylabel('temperature');", "Alternatively, and in order to reuse code in future problems, we could have created a function.", "def euler(y, f, dx):\n    \"\"\"Computes y_new = y + f*dx\n    \n    Parameters\n    ----------\n    y : float\n        old value of y_n at x_n\n    f : float\n        first derivative f(x,y) evaluated at (x_n,y_n)\n    dx : float\n        x step\n    \"\"\"\n    \n    return y + f*dx\n\nT = T0\nfor i in range(1,nsteps):\n    T = euler(T, -r*(T-Ts), dt)\n    my_temp[i] = T", "Actually, for this particularly simple case, calling a function may introduce unnecessary overhead, but it is an example that we will find useful for future applications. 
For a simple function like this we could have used a \"lambda\" function (more about lambda functions <a href=\"http://www.secnetix.de/olli/Python/lambda_functions.hawk\">here</a>).", "euler = lambda y, f, dx: y + f*dx ", "Now, let's study the effects of different time steps on the convergence:", "dt = 1.\n#my_color = ['#003366','#663300','#660033','#330066']\nmy_color = ['red', 'green', 'blue', 'black']\nfor j in range(0,4):\n    nsteps = int(tmax/dt)  #the arrays will have different size for different time steps\n    my_time = np.linspace(dt,tmax,nsteps) \n    my_temp = np.zeros(nsteps)\n    T = T0\n    for i in range(1,nsteps):\n        T = euler(T, -r*(T-Ts), dt)\n        my_temp[i] = T\n        \n    pyplot.plot(my_time, my_temp, color=my_color[j], ls='-', lw=3)\n    dt = dt/2.\n\npyplot.xlabel('time');\npyplot.ylabel('temperature');\npyplot.xlim(8,10);\npyplot.ylim(48,58);", "Challenge 1.1\nTo properly study convergence, one possibility is to look at the result at a given time, for different time steps. Modify the previous program to print the temperature at $t=10$ as a function of $\\Delta t$." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
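The O(Δt) global-error claim in the notebook above is easy to verify against the closed-form solution T(t) = T_s + (T_0 - T_s)e^(-rt). A standalone sketch (parameter values chosen here for illustration, not the notebook's):

```python
import math

def euler_cooling(T0, Ts, r, dt, tmax):
    # Explicit Euler for dT/dt = -r (T - Ts)
    T = T0
    for _ in range(round(tmax / dt)):
        T += -r * (T - Ts) * dt
    return T

def exact(T0, Ts, r, t):
    # Analytical solution of the cooling law
    return Ts + (T0 - Ts) * math.exp(-r * t)

T0, Ts, r, tmax = 83.0, 20.0, 0.1, 10.0
err_coarse = abs(euler_cooling(T0, Ts, r, 0.10, tmax) - exact(T0, Ts, r, tmax))
err_fine = abs(euler_cooling(T0, Ts, r, 0.05, tmax) - exact(T0, Ts, r, tmax))
ratio = err_coarse / err_fine  # ~2: halving dt roughly halves the global error
```

This is exactly the experiment Challenge 1.1 asks for, reduced to two step sizes.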
HowDoIUseThis/AGNClassification
Presentation.ipynb
gpl-3.0
[ "import utils\nimport core\nimport os\nimport math\nimport seaborn as sns\nimport numpy as np\nimport pandas as pd\nimport matplotlib as plt\nfrom utils.graphing import graph\nimport theano\nfrom sampled import sampled\nimport pymc3 as pm\nimport theano.tensor as tt\n\n%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\npd.options.display.max_rows = 10\npd.options.display.float_format = '{:,.2f}'.format\nplt.rcParams['figure.figsize'] = (16,12)\n\nfilepath = os.getcwd() + \"/Data/07to1Redshift.csv\"\ntest = core.FilteredFrame(filepath, percent_agn = ['BPT'], composite = False, include_upperlimit = True)\ndf = test.dataframe\n\ncolumns = ['BPT:P(AGN)',\n 'O_III','O_III_Sigma','O_III_Health',\n 'N_II','N_II_Sigma','N_II_Health',\n 'H_Beta','H_Beta_Sigma','H_Beta_Health',\n 'H_Alpha','H_Alpha_Sigma','H_Alpha_Health',\n ]\nlim_columns = ['BPT:P(AGN)','O_III_Health','N_II_Health', 'H_Beta_Health', 'H_Alpha_Health']\nd_limits = {}\ndd_limits = {}\noiii_ratio_lowLim = df['O_III_Health'].isin(['Limit']) & df['H_Beta_Health'].isin(['Healthy']) \noiii_ratio_upLim = df['O_III_Health'].isin(['Healthy']) & df['H_Beta_Health'].isin(['Limit'])\nnii_ratio_lowLim = df['N_II_Health'].isin(['Limit']) & df['H_Alpha_Health'].isin(['Healthy'])\nnii_ratio_upLim = df['N_II_Health'].isin(['Healthy']) & df['H_Alpha_Health'].isin(['Limit'])\noiii_healthy = df['O_III_Health'].isin(['Healthy']) & df['H_Beta_Health'].isin(['Healthy']) \nnii_healthy = df['N_II_Health'].isin(['Healthy']) & df['H_Alpha_Health'].isin(['Healthy']) \n\noiii_limit = df[(nii_healthy & oiii_ratio_lowLim)]\nnii_limit = df[(oiii_healthy & nii_ratio_lowLim)]\nbeta_limit = df[(nii_healthy & oiii_ratio_upLim)]\nalpha_limit = df[(oiii_healthy & nii_ratio_upLim)]\nd_limits['OIII'] = oiii_limit.shape[0]\nd_limits['NII'] = nii_limit.shape[0]\nd_limits['HBeta'] = beta_limit.shape[0]\nd_limits['HAlpha'] = alpha_limit.shape[0]\n\no_n_limit = df[oiii_ratio_lowLim & nii_ratio_lowLim]\no_a_limit = df[oiii_ratio_lowLim & 
nii_ratio_upLim]\nb_n_limit = df[oiii_ratio_upLim & nii_ratio_lowLim]\nb_a_limit = df[oiii_ratio_upLim & nii_ratio_upLim]\ndd_limits['OIII_NII'] = o_n_limit.shape[0]\ndd_limits['OIII_Alpha'] = o_a_limit.shape[0]\ndd_limits['Beta_NII'] = b_n_limit.shape[0]\ndd_limits['Beta_Alpha'] = b_a_limit.shape[0]\nprint(d_limits)\nprint(dd_limits)", "Bayesian Inference of Censored SDSS data\n\nBy: Christopher Stewart\nOverview\n\n\nIntroduce the problem\nData Selection\nIssues with traditional approach\nIntroduce the BPT\nLimit Cases\n\n\nHandling Limits\nCreating the model\nExample Case\nResults\n\n\nWhat next\n\nIntro - Data Selection\n\nPrimary SQL query\n\nSelected from redshift range of 0.07 to 0.1\nerror above zero for following lines\n$\\text{N}_{[\\text{II}]}$\n$\\text{O}_{[\\text{III}]}$\n$\\text{He}_{[\\text{II}]}$\n$\\text{H}_\\alpha$\n$\\text{H}_\\beta$\n\n\n\nTotal Objects: 42239\nSigma Filtering\nObjects were removed from the survey following the condition:\n$0 <\\text{flux_err} < 1\\times 10^{-15}$\nIntro - Issues with traditional approach", "graph.BPT(df)", "No limits in any ratio", "no_limits = df[(oiii_healthy & nii_healthy)]\ngraph.BPT(no_limits)", "How many objects we lose if we only take well defined objects", "print('Number of Objects\\n-------------\\nAll Data: {}\\nWell Def: {}'.format(df.shape[0],no_limits.shape[0]))\nprint('Loss Percentage: {0:.3f}%'.format((1-no_limits.shape[0]/df.shape[0])*100))", "Lose around 9.6% of our objects\nIntroduce the BPT\n\nEach object on the BPT has two ratios which are made up of two fluxes with individual sigma values.\nFor example a single object has the following data: \n$\\log_{10}(\\frac{\\text{O}_{\\text{III}}}{\\text{H}_{\\beta}}) \\rightarrow \\left[\n\\begin{array}{ll}\n  \\text{O}_{\\text{flux}} & \\text{O}_{\\sigma} \\\n  \\text{H}\\beta_{\\text{flux}} & \\text{H}\\beta_{{\\sigma}} \\\n\\end{array} \n\\right]$\n$\\log_{10}(\\frac{\\text{N}_{\\text{II}}}{\\text{H}_{\\alpha}}) \\rightarrow 
\\left[\n\\begin{array}{ll}\n  \\text{N}_{\\text{flux}} & \\text{N}_{\\sigma} \\\n  \\text{H}\\alpha_{\\text{flux}} & \\text{H}\\alpha_{{\\sigma}} \\\n\\end{array} \n\\right]$", "print('OIII upLim {}\\nOIII lowLim: {}\\nNII upLim: {}\\nNII lowLim: {}'.format(df[oiii_ratio_upLim & nii_healthy].shape[0],\n      df[oiii_ratio_lowLim & nii_healthy].shape[0],\n      df[nii_ratio_upLim & oiii_healthy].shape[0],\n      df[nii_ratio_lowLim & oiii_healthy].shape[0]))", "Limit cases\n\nLimit:\n$\\text{flux} < \\text{flux error}$\nLower Limit:\nLet $f = \\frac{A}{B} \\text{ where } B < \\sigma_B \\text{, so } f = \\frac{A}{\\sigma_B} \\text{ is a lower limit.}$\nUpper Limit:\nLet $f = \\frac{A}{B} \\text{ where } A < \\sigma_A \\text{, so } f = \\frac{\\sigma_A}{B} \\text{ is an upper limit.}$\n1D Limit\nIs a limit in only one ratio. For example a lower limit in x would have the following shape,\n\n2D Limit\nIs a limit in each ratio. For example a lower limit in x and y would have the following shape,\n\nNumber of Limit cases\n\n1D Limits:", "print(d_limits)", "2D Limits:", "print(dd_limits)", "Handling Limits\n\nCreating the model\nExample Case:\nObject has a limit in OIII so this is an upper limit in our y", "test_obj = df.iloc[[21226]]\ntest_obj[['Log(O_III/H_Beta)','Log(O_III/H_Beta)_Sigma','Log(N_II/H_Alpha)','Log(N_II/H_Alpha)_Sigma', 'O_III/H_Beta', 'H_Beta', 'H_Beta_Sigma', 'O_III']]", "graph.BPT(test_obj)\ntest_obj[lim_columns]", "Creating model in pymc3", "x_test_data = [test_obj.loc[21226,'Log(N_II/H_Alpha)'],test_obj.loc[21226,'Log(N_II/H_Alpha)_Sigma'],\n               test_obj.loc[21226,'N_II'],test_obj.loc[21226,'N_II_Sigma'],\n               test_obj.loc[21226,'H_Alpha'],test_obj.loc[21226,'H_Alpha_Sigma']\n              ]\ny_test_data = [test_obj.loc[21226,'Log(O_III/H_Beta)'],test_obj.loc[21226,'Log(O_III/H_Beta)_Sigma'],\n               test_obj.loc[21226,'O_III'],test_obj.loc[21226,'O_III_Sigma'],\n               test_obj.loc[21226,'H_Beta'],test_obj.loc[21226,'H_Beta_Sigma']\n              ]\n\n@sampled\ndef oiii_limit(xData,yData):\n    
x,x_sigma,x_a,x_a_sigma,x_b,x_b_sigma = xData\n y,y_sigma,y_a,y_a_sigma,y_b,y_b_sigma = yData\n \n model_x_a = pm.Normal('N_II', mu=x_a, sd = x_a_sigma)\n model_x_b = pm.Normal('H_Alpha', mu=x_b, sd = x_b_sigma)\n model_y_b = pm.Normal('H_Beta', mu=y_b, sd = y_b_sigma)\n #the upper bound of 0.94 comes from solving ratio OIII/HBeta\n model_y_a_limit = pm.Uniform('OIII_Uniform',lower =0 ,upper=0.9417)\n model_y_a_gauss = pm.Normal('OIII_Normal',mu = 0.9417, sd = y_a_sigma)\n y_use_limit = pm.Binomial('OIII_swap', n=1,p=0.5)\n model_y_a = pm.Normal('OIII', mu = tt.switch(y_use_limit,model_y_a_limit,model_y_a_gauss), sd =1)\n \n model_NII_HAlpha = pm.Deterministic('NII/HAlpha', model_x_a/model_x_b)\n model_OIII_HBeta = pm.Deterministic('OIII/HBeta', model_y_a/model_y_b)\n \n", "Run model", "with oiii_limit(xData = x_test_data,yData =y_test_data):\n base_trace = pm.sample(10000)", "OIII in depth", "pm.traceplot(base_trace, varnames = ['OIII_Normal','OIII_Uniform'])\n\n\ndef plot_jointkde(x_pdf,y_pdf):\n def StarForming(x): return (1.3 + 0.61 / (x - 0.04))\n g = sns.jointplot(np.log10(x_pdf),np.log10(y_pdf),kind = 'kde',size=10)\n xSpan = np.linspace(-0.6, -0.1, num=200)\n g.ax_joint.plot(xSpan, StarForming(xSpan), 'k--',-0.31,-0.37, 'bo')\n g.set_axis_labels('Log(N_II/H_Aplha)','Log(O[III]/H_Beta)')\n\n", "Our 2D KDE\nWith our original point plotted as a blue dot", "plot_jointkde(x_pdf= base_trace['NII/HAlpha'],y_pdf =base_trace['OIII/HBeta'])\n\npm.summary(base_trace)\n\n", "Point location Result\n\nOld:\n$$\\log_{10}(\\frac{\\text{O}{\\text{III}}}{\\text{H}{\\beta}}) = -0.37$$ \n$$\\log_{10}(\\frac{\\text{N}{\\text{II}}}{\\text{H}{\\alpha}}) = -0.31 $$\nNew:\n$$\\log_{10}(\\frac{\\text{O}{\\text{III}}}{\\text{H}{\\beta}}) = -0.67$$ \n$$\\log_{10}(\\frac{\\text{N}{\\text{II}}}{\\text{H}{\\alpha}}) = -0.31$$", "def test_percentAGN(x,x_sig,y,y_sig):\n def AGNLine(x): return 1.19 + 0.61 / (x - 0.47)\n def SFLine(x): return 1.3 + 0.61 / (x - 0.04)\n lineCorrectionAGN = 
0.2\n lineCorrectionSF = -0.2\n\n x0_array = np.float64(np.ones((100,)) * -x)\n y0_array = np.float64(np.ones((100,)) * -y)\n xLineSpace = np.linspace((x - x_sig), (x + x_sig), 100)\n yLineSpace = np.linspace((y - y_sig), (y + y_sig),\n 100)[:, None]\n ellipse = ((xLineSpace - x) / x_sig)**2 + ((yLineSpace - y) / y_sig)**2 <= 1\n boolean_AGN_line = np.subtract(\n yLineSpace, AGNLine(xLineSpace)) > 0 # AGN fit line\n boolean_AGN_Correction = (np.subtract(xLineSpace, np.ones(\n (100, 100)) * lineCorrectionAGN) > 0) # Fixes problem with the fit line\n # combines fit line and correction into single boolean array\n boolean_AGN_Mask = np.where(np.logical_or(\n boolean_AGN_line, boolean_AGN_Correction), True, False)\n \n # plt.imshow(boolean_AGN_Mask, origin = 'lower') #for debugging\n\n # Starforming Fit Lines\n boolean_SF_line = np.subtract(yLineSpace, SFLine(\n xLineSpace)) < 0 # Starforming fit line\n boolean_SF_Correction = (np.subtract(xLineSpace, np.ones(\n (100, 100)) * lineCorrectionSF) < 0) # Fixes problem with the fit line\n # combines fit line and correction into single boolean array\n boolean_SF_Mask = np.where(np.logical_and(\n boolean_SF_line, boolean_SF_Correction), True, False)\n\n # Neither mask indicates areas that are neither AGN or SF\n boolean_Neither_Mask = np.where(np.logical_or(\n boolean_SF_Mask, boolean_AGN_Mask), False, True)\n # plt.imshow(boolean_SF_Mask, origin = 'lower') #for debugging\n\n # Creat probability distribution function based on array index locations\n xWeight = (np.exp(-(np.add(xLineSpace, x0_array)**2) *\n (2 * x_sig**2)) * (2 * math.pi * x_sig**2))**2\n yWeight = (np.exp(-(np.add(yLineSpace, y0_array)**2) *\n (2 * y_sig**2)) * (2 * math.pi * y_sig**2))**2\n weight_Matrix = np.sqrt(xWeight + yWeight)\n\n # AGN total weight is the sum of weight matrix elements that are true in both the ellipse boolean mask and agn boolean mask\n AGN_Weight = np.sum(\n weight_Matrix[np.logical_and(ellipse, boolean_AGN_Mask)])\n SF_Weight = 
np.sum(weight_Matrix[np.logical_and(ellipse, boolean_SF_Mask)])\n    Neither_Weight = np.sum(\n        weight_Matrix[np.logical_and(ellipse, boolean_Neither_Mask)])\n    # total weight of whole ellipse\n    Total_Weight = np.sum(weight_Matrix[ellipse])\n\n    # Percent AGN or starforming\n    AGN_Percent = AGN_Weight / Total_Weight\n    SF_Percent = SF_Weight / Total_Weight\n    Neither_Percent = Neither_Weight / Total_Weight\n    AGN_Percent += Neither_Percent\n    \n    return AGN_Percent", "P(AGN) Result\n\nOld:", "test_obj[['BPT:P(AGN)']]", "New:", "test_percentAGN(x=-0.31,x_sig=0.03,y=-0.67,y_sig=np.log10(4.26))", "What next\n\n\nFigure out 2D limit case\nAutomate this process\nFilter out simple cases" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
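The limit taxonomy in the notebook above (a flux below its own error turns the line ratio into an upper or lower limit) can be captured in a few lines. This is a sketch with a hypothetical helper, not code from the notebook:

```python
def classify_ratio(A, sig_A, B, sig_B):
    """Classify f = A/B when either flux may be censored (flux < error)."""
    if A < sig_A and B < sig_B:
        return None, "2d-limit"          # limit in both fluxes
    if A < sig_A:
        return sig_A / B, "upper-limit"  # numerator censored: f < sig_A / B
    if B < sig_B:
        return A / sig_B, "lower-limit"  # denominator censored: f > A / sig_B
    return A / B, "healthy"

value, kind = classify_ratio(A=0.5, sig_A=1.0, B=4.0, sig_B=0.2)
```

The notebook's Bayesian model goes further by replacing the censored flux with a uniform/normal mixture prior instead of a point value.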
justinfinkle/pydiffexp
ipynb/example_diffexp.ipynb
gpl-3.0
[ "Pydiffexp\nThe pydiffexp package is meant to provide an interface between R and Python to do differential expression analysis.\nImports", "import pandas as pd\nfrom pydiffexp import DEAnalysis", "Load Data\nEach DEAnalysis object (DEA) operates on a specific dataset. DEA uses a <a href='http://pandas.pydata.org/pandas-docs/stable/advanced.html'> hierarchical dataframe</a> (i.e. a dataframe with a multiindex) for analysis. One can either be supplied, or can be created from a dataframe with appropriate column or row labels. DEA expects the multiindex to be along the columns and will transform the data if necessary. DEA can also be initialized without data, but many methods will not work as expected.", "test_path = \"/Users/jfinkle/Documents/Northwestern/MoDyLS/Python/sprouty/data/raw_data/all_data_formatted.csv\"\nraw_data = pd.read_csv(test_path, index_col=0)\n\n# Initialize analysis object with data. Data is retained\n\n'''\nThe hierarchy provides the names for each label in the multiindex. 'condition' and 'time' are supplied as the reference\nlabels, which are used to make contrasts. \n''' \nhierarchy = ['condition', 'well', 'time', 'replicate']\ndea = DEAnalysis(raw_data, index_names=hierarchy, reference_labels=['condition', 'time'] )", "Let's look at the data that has been added to the object. Notice that the columns are a MultiIndex in which the levels correspond to lists of the possible values and the names of each level come from the list supplied to index_names.\nRaw Data", "raw_data.head()", "Formatted data as Hierarchical Dataframe", "\ndea.data.head()\n\ndea.data.columns", "When the data is added, DEA automatically saves a summary of the experiment, which can also be summarized with the print function.", "dea.experiment_summary\n\ndea.print_experiment_summary()", "Model Fitting\nNow we're ready to fit a model! All we need to do is supply contrasts that we want to compare. These are formatted in the R style and can either be a string, list, or dictionary. 
Here we'll just do one contrast, so we supply a string. When the fit is run, DEA gains several new attributes that store the data, design, contrast, and fit objects created by R.\nAll of the model information is kept as attributes so that the entire object can be saved and the analysis can be recapitulated.", "# Types of contrasts\nc_dict = {'Diff0': \"(KO_15-KO_0)-(WT_15-WT_0)\", 'Diff15': \"(KO_60-KO_15)-(WT_60-WT_15)\",\n 'Diff60': \"(KO_120-KO_60)-(WT_120-WT_60)\", 'Diff120': \"(KO_240-KO_120)-(WT_240-WT_120)\"}\nc_list = [\"KO_15-KO_0\", \"KO_60-KO_15\", \"KO_120-KO_60\", \"KO_240-KO_120\"]\nc_string = \"KO_0-WT_0\"\ndea.fit(c_string)\n\nprint(dea.design, '', dea.contrast_robj, '', dea.de_fit)", "After the fit, we want to see our significant results. DEA calls <a href=\"http://web.mit.edu/~r/current/arch/i386_linux26/lib/R/library/limma/html/toptable.html\"> topTable</a>, so all keyword arguments from the R function can be passed, though the defaults set explicitly in get_results() are the most commonly used ones. If more than one contrast is supplied, pydiffexp will default to using the F statistic when selecting significant genes.", "dea.get_results(p_value=0.01, n=10)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
par2/lamana
docs/quickview.ipynb
bsd-3-clause
[ "# TimeStamp\nimport time, datetime\nst = datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')\nprint('Last Run: {}'.format(st))\n\n# Hidden - Run this cell only once\nfrom IPython.display import clear_output\nimport pandas as pd\n\n%cd ../\nclear_output()\n\npd.set_option('display.max_columns', 9)\npd.set_option('precision', 4)\n\n# Import LamAna and setup plotting in Jupyter\nimport lamana as la\n\n%matplotlib inline\n\n# Build dicts of loading parameters and material properties\nload_params = {\n 'R' : 12e-3, # specimen radius\n 'a' : 7.5e-3, # support ring radius\n 'r' : 2e-4, # radial distance from center loading\n 'P_a' : 1, # applied load\n 'p' : 2, # points/layer\n}\n\n# Quick Form: a dict of lists\nmat_props = {\n 'HA' : [5.2e10, 0.25],\n 'PSu' : [2.7e9, 0.33],\n}\n\n# Select geometries\nsingle_geo = ['400-200-800']\nmultiple_geos = [\n '350-400-500', '400-200-800', '200-200-1200', '200-100-1400',\n '100-100-1600', '100-200-1400', '300-400-600'\n]", "This file is forked from Demo - Academic 0.1.3.ipynb. All cells above this level are hidden in readthedocs by the nbsphinx extension.\nQuick View\nHere is a brief gallery of some stress distribution plots produced with LamAna.\nSingle Geometry Plots\nWe can plot stress distributions for a single laminate as a function of height, d, or normalized thickness, k (default).", "case1 = la.distributions.Case(load_params, mat_props) # instantiate a User Input Case Object through distributions\ncase1.apply(single_geo)\ncase1.plot(normalized=False)\n\ncase1.plot(normalized=True, grayscale=True)", "We can superimpose insets and adjust the colors to publication quality.", "case1.plot(annotate=True, colorblind=True, inset=True)", "Multiple Geometry Plots\nWith normalized layers, we can superimpose multiple stress distributions for laminates of different geometries. 
Data for multiple laminates are encapsulated in a Case object.", "title = 'Stress Distributions of HA/PSu for Multiple Geometries'\nmultiple_geos = [\n '350-400-500', '400-200-800', '200-200-1200', '200-100-1400',\n '100-100-1600', '100-200-1400', '300-400-600'\n]\n\ncase2 = la.distributions.Case(load_params, mat_props) # instantiate a User Input Case Object through distributions\ncase2.apply(multiple_geos)\n\ncase2.plot(title, colorblind=True, annotate=True)", "These distributions can be separated as desired into a panel of various plots.", "case2.plot(title, colorblind=True, annotate=True, separate=True)", "Halfplots\nThe following has not been fully implemented yet, but demonstrates several multi-plots of tensile data. Each plot shows some pattern of interest, for example:\n\n(a) constant total thickness; varied layer thicknesses\n(b) constant outer layer\n(c) constant inner layer\n(d) constant middle layer\n\n\nPanels of multi-plots are possible with a Cases object, a container for several Case objects.", "# Setup a list of geometry strings\nconst_total = [\n '350-400-500', '400-200-800', '200-200-1200',\n '200-100-1400', '100-100-1600', '100-200-1400'\n]\n\n# Setup cases\ncases1 = la.distributions.Cases(\n const_total, load_params=load_params, mat_props=mat_props,\n model='Wilson_LT', ps=[2, 3]\n)\n\ncases1.plot(extrema=False)", "Data Analysis\nUsing a prior case, we can analyze the calculated data based on a given theoretical model.", "case1\n\ncase1.model\n\ncase1.LMs", "Data for each laminate is contained in a pandas DataFrame, a powerful data structure for data analysis.", "#df = case1.frames\ndf = case1.frames[0]\ndf\n#df.style # pandas 0.17.1, css on html table\n#df.style.bar(subset=['stress_f (MPa/N)', 'strain'], color='#d65f5f')", "Exporting Data\nFinally, we can export data and parameters to Excel or .csv formats.\n\nConclusion\nWith a few simple lines of code, we can use LamAna to quickly perform laminate analysis calculations, visualize stress 
distributions and export data. \n...and it's FREE." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]