isendel/machine-learning
ml-classification/week-1/module-2-linear-classifier-assignment-blank.ipynb
apache-2.0
[ "Predicting sentiment from product reviews\nThe goal of this first notebook is to explore logistic regression and feature engineering with existing GraphLab functions.\nIn this notebook you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative.\n\nUse SFrames to do some feature engineering\nTrain a logistic regression model to predict the sentiment of product reviews.\nInspect the weights (coefficients) of a trained logistic regression model.\nMake a prediction (both class and probability) of sentiment for a new product review.\nGiven the logistic regression weights, predictors and ground truth labels, write a function to compute the accuracy of the model.\nInspect the coefficients of the logistic regression model and interpret their meanings.\nCompare multiple logistic regression models.\n\nLet's get started!\nFire up GraphLab Create\nMake sure you have the latest version of GraphLab Create.", "from __future__ import division\nimport graphlab\nimport math\nimport string", "Data preparation\nWe will use a dataset consisting of baby product reviews on Amazon.com.", "products = graphlab.SFrame('amazon_baby.gl/')", "Now, let us see a preview of what the dataset looks like.", "products", "Build the word count vector for each review\nLet us explore a specific example of a baby product.", "products[269]", "Now, we will perform 2 simple data transformations:\n\nRemove punctuation using Python's built-in string functionality.\nTransform the reviews into word-counts.\n\nAside. In this notebook, we remove all punctuation for the sake of simplicity. A smarter approach to punctuation would preserve phrases such as \"I'd\", \"would've\", \"hadn't\" and so forth. 
See this page for an example of smart handling of punctuation.", "def remove_punctuation(text):\n import string\n return text.translate(None, string.punctuation) \n\nreview_without_punctuation = products['review'].apply(remove_punctuation)\nproducts['word_count'] = graphlab.text_analytics.count_words(review_without_punctuation)", "Now, let us explore what the same example above looks like after these 2 transformations. Here, each entry in the word_count column is a dictionary where the key is the word and the value is a count of the number of times the word occurs.", "products[269]['word_count']", "Extract sentiments\nWe will ignore all reviews with rating = 3, since they tend to have a neutral sentiment.", "products = products[products['rating'] != 3]\nlen(products)", "Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.", "products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)\nproducts", "Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).\nSplit data into training and test sets\nLet's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.", "train_data, test_data = products.random_split(.8, seed=1)\nprint len(train_data)\nprint len(test_data)", "Train a sentiment classifier with logistic regression\nWe will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target. 
We will use validation_set=None to obtain the same results as everyone else.\nNote: This line may take 1-2 minutes.", "sentiment_model = graphlab.logistic_classifier.create(train_data,\n target = 'sentiment',\n features=['word_count'],\n validation_set=None)\n\nsentiment_model", "Aside. You may get a warning to the effect of \"Terminated due to numerical difficulties --- this model may not be ideal\". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises as the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, to be covered in Module 4. Regularization lessens the effect of extremely rare words. For the purpose of this assignment, however, please proceed with the model above.\nNow that we have fitted the model, we can extract the weights (coefficients) as an SFrame as follows:", "weights = sentiment_model.coefficients\nweights.column_names()", "There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment. \nFill in the following block of code to calculate how many weights are positive ( >= 0). (Hint: The 'value' column in SFrame weights must be positive ( >= 0)).", "num_positive_weights = ...\nnum_negative_weights = ...\n\nprint \"Number of positive weights: %s \" % num_positive_weights\nprint \"Number of negative weights: %s \" % num_negative_weights", "Quiz question: How many weights are >= 0?\nMaking predictions with logistic regression\nNow that a model is trained, we can make predictions on the test data. In this section, we will explore this in the context of 3 examples in the test dataset. 
We refer to this set of 3 examples as the sample_test_data.", "sample_test_data = test_data[10:13]\nprint sample_test_data['rating']\nsample_test_data", "Let's dig deeper into the first row of the sample_test_data. Here's the full review:", "sample_test_data[0]['review']", "That review seems pretty positive.\nNow, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.", "sample_test_data[1]['review']", "We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as:\n$$\n\\mbox{score}_i = \\mathbf{w}^T h(\\mathbf{x}_i)\n$$ \nwhere $h(\\mathbf{x}_i)$ represents the features for example $i$. We will write some code to obtain the scores using GraphLab Create. For each row, the score (or margin) is a number in the range [-inf, inf].", "scores = sentiment_model.predict(sample_test_data, output_type='margin')\nprint scores", "Predicting sentiment\nThese scores can be used to make class predictions as follows:\n$$\n\\hat{y} = \n\\left\\{\n\\begin{array}{ll}\n +1 & \\mathbf{w}^T h(\\mathbf{x}_i) > 0 \\\\\n -1 & \\mathbf{w}^T h(\\mathbf{x}_i) \\leq 0 \\\\\n\\end{array} \n\\right.\n$$\nUsing scores, write code to calculate $\\hat{y}$, the class predictions:\nRun the following code to verify that the class predictions obtained by your calculations are the same as that obtained from GraphLab Create.", "print \"Class predictions according to GraphLab Create:\" \nprint sentiment_model.predict(sample_test_data)", "Checkpoint: Make sure your class predictions match with the one obtained from GraphLab Create.\nProbability predictions\nRecall from the lectures that we can also calculate the probability predictions from the scores using:\n$$\nP(y_i = +1 | \\mathbf{x}_i,\\mathbf{w}) = \\frac{1}{1 + 
\\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))}.\n$$\nUsing the variable scores calculated previously, write code to calculate the probability that a sentiment is positive using the above formula. For each row, the probabilities should be a number in the range [0, 1].\nCheckpoint: Make sure your probability predictions match the ones obtained from GraphLab Create.", "print \"Probability predictions according to GraphLab Create:\" \nprint sentiment_model.predict(sample_test_data, output_type='probability')", "Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review?\nFind the most positive (and negative) review\nWe now turn to examining the full test dataset, test_data, and use GraphLab Create to form predictions on all of the test data points for faster performance.\nUsing the sentiment_model, find the 20 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the \"most positive reviews.\"\nTo calculate these top-20 reviews, use the following steps:\n1. Make probability predictions on test_data using the sentiment_model. (Hint: When you call .predict to make predictions on the test data, use option output_type='probability' to output the probability rather than just the most likely class.)\n2. Sort the data according to those predictions and pick the top 20. (Hint: You can use the .topk method on an SFrame to find the top k rows sorted according to the value of a specified column.)\nQuiz Question: Which of the following products are represented in the 20 most positive reviews? [multiple choice]\nNow, let us repeat this exercise to find the \"most negative reviews.\" Use the prediction probabilities to find the 20 reviews in the test_data with the lowest probability of being classified as a positive review. 
Repeat the same steps above but make sure you sort in the opposite order.\nQuiz Question: Which of the following products are represented in the 20 most negative reviews? [multiple choice]\nCompute accuracy of the classifier\nWe will now evaluate the accuracy of the trained classifier. Recall that the accuracy is given by\n$$\n\\mbox{accuracy} = \\frac{\\mbox{# correctly classified examples}}{\\mbox{# total examples}}\n$$\nThis can be computed as follows:\n\nStep 1: Use the trained model to compute class predictions (Hint: Use the predict method)\nStep 2: Count the number of data points where the predicted class labels match the ground truth labels (called true_labels below).\nStep 3: Divide the total number of correct predictions by the total number of data points in the dataset.\n\nComplete the function below to compute the classification accuracy:", "def get_classification_accuracy(model, data, true_labels):\n # First get the predictions\n ## YOUR CODE HERE\n ...\n \n # Compute the number of correctly classified examples\n ## YOUR CODE HERE\n ...\n\n # Then compute accuracy by dividing num_correct by total number of examples\n ## YOUR CODE HERE\n ...\n \n return accuracy", "Now, let's compute the classification accuracy of the sentiment_model on the test_data.", "get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])", "Quiz Question: What is the accuracy of the sentiment_model on the test_data? Round your answer to 2 decimal places (e.g. 0.76).\nQuiz Question: Does a higher accuracy value on the training_data always imply that the classifier is better?\nLearn another classifier with fewer words\nThere were a lot of words in the model we trained above. We will now train a simpler logistic regression model using only a subset of words that occur in the reviews. For this assignment, we selected 20 words to work with. 
These are:", "significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves', \n 'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed', \n 'work', 'product', 'money', 'would', 'return']\n\nlen(significant_words)", "For each review, we will use the word_count column and trim out all words that are not in the significant_words list above. We will use the SArray dictionary trim by keys functionality. Note that we are performing this on both the training and test set.", "train_data['word_count_subset'] = train_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)\ntest_data['word_count_subset'] = test_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)", "Let's see what the first example of the dataset looks like:", "train_data[0]['review']", "The word_count column we had been working with before looks like the following:", "print train_data[0]['word_count']", "Since we are only working with a subset of these words, the column word_count_subset is a subset of the above dictionary. 
In this example, only 2 significant words are present in this review.", "print train_data[0]['word_count_subset']", "Train a logistic regression model on a subset of data\nWe will now build a classifier with word_count_subset as the feature and sentiment as the target.", "simple_model = graphlab.logistic_classifier.create(train_data,\n target = 'sentiment',\n features=['word_count_subset'],\n validation_set=None)\nsimple_model", "We can compute the classification accuracy using the get_classification_accuracy function you implemented earlier.", "get_classification_accuracy(simple_model, test_data, test_data['sentiment'])", "Now, we will inspect the weights (coefficients) of the simple_model:", "simple_model.coefficients", "Let's sort the coefficients (in descending order) by the value to obtain the coefficients with the most positive effect on the sentiment.", "simple_model.coefficients.sort('value', ascending=False).print_rows(num_rows=21)", "Quiz Question: Consider the coefficients of simple_model. There should be 21 of them, an intercept term + one for each word in significant_words. How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model?\nQuiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model?\nComparing models\nWe will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy method you implemented above.\nFirst, compute the classification accuracy of the sentiment_model on the train_data:\nNow, compute the classification accuracy of the simple_model on the train_data:\nQuiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TRAINING set?\nNow, we will repeat this exercise on the test_data. 
Start by computing the classification accuracy of the sentiment_model on the test_data:\nNext, we will compute the classification accuracy of the simple_model on the test_data:\nQuiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TEST set?\nBaseline: Majority class prediction\nIt is quite common to use the majority class classifier as a baseline (or reference) model for comparison with your classifier model. The majority class classifier predicts the majority class for all data points. At the very least, you should handily beat the majority class classifier; otherwise, the model is (usually) pointless.\nWhat is the majority class in the train_data?", "num_positive = (train_data['sentiment'] == +1).sum()\nnum_negative = (train_data['sentiment'] == -1).sum()\nprint num_positive\nprint num_negative", "Now compute the accuracy of the majority class classifier on test_data.\nQuiz Question: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 0.76).\nQuiz Question: Is the sentiment_model definitely better than the majority class classifier (the baseline)?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
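The fill-in exercises above (class predictions from scores, probabilities via the sigmoid, and classification accuracy) can be sketched without GraphLab. The sketch below is a minimal NumPy stand-in, assuming only an array of margin scores and ground-truth ±1 labels; it is not the GraphLab Create API, just the math the notebook asks the reader to implement.

```python
import numpy as np

def predict_class(scores):
    # y_hat = +1 when the margin w^T h(x_i) is positive, -1 otherwise
    return np.where(np.asarray(scores, dtype=float) > 0, 1, -1)

def predict_probability(scores):
    # P(y = +1 | x, w) = 1 / (1 + exp(-score)), the logistic sigmoid
    return 1.0 / (1.0 + np.exp(-np.asarray(scores, dtype=float)))

def accuracy(predictions, true_labels):
    # fraction of examples whose predicted label matches the ground truth
    predictions = np.asarray(predictions)
    true_labels = np.asarray(true_labels)
    return float(np.mean(predictions == true_labels))

scores = np.array([2.0, -1.0, 0.5])
print(predict_class(scores))   # [ 1 -1  1]
print(predict_probability(scores))
print(accuracy(predict_class(scores), [1, -1, -1]))
```

A score of exactly 0 maps to probability 0.5 and, per the notebook's convention, to the class -1 (the strict `> 0` comparison handles the tie).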
mathLab/RBniCS
tutorials/06_thermal_block_unsteady/tutorial_thermal_block_unsteady_1_rb.ipynb
lgpl-3.0
[ "TUTORIAL 06 - Unsteady Thermal block problem\nKeywords: certified reduced basis method, scalar problem\n1. Introduction\nIn this Tutorial, we consider unsteady heat conduction in a two-dimensional domain $\\Omega$.\n<img src=\"data/thermal_block.png\" />\nWe define two subdomains $\\Omega_1$ and $\\Omega_2$, such that\n1. $\\Omega_1$ is a disk centered at the origin of radius $r_0=0.5$, and\n2. $\\Omega_2=\\Omega\\setminus\\overline{\\Omega_1}$. \nThe conductivity $\\kappa$ is assumed to be constant on $\\Omega_1$ and $\\Omega_2$, i.e.\n$$\n\\kappa|_{\\Omega_1}=\\kappa_0 \\quad \\textrm{and} \\quad \\kappa|_{\\Omega_2}=1.\n$$\nFor this problem, we consider $P=2$ parameters:\n1. the first one is related to the conductivity in $\\Omega_1$, i.e. $\\mu_0\\equiv\\kappa_0$ (note that parameter numbering is zero-based);\n2. the second parameter $\\mu_1$ takes into account the constant heat flux over $\\Gamma_{base}$.\nThe parameter vector $\\boldsymbol{\\mu}$ is thus given by \n$$\n\\boldsymbol{\\mu} = (\\mu_0,\\mu_1)\n$$\non the parameter domain\n$$\n\\mathbb{P}=[0.1,10]\\times[-1,1].\n$$\nIn this problem we model the heat transfer process due to the heat flux over the bottom boundary $\\Gamma_{base}$ and the following conditions on the remaining boundaries:\n* the left and right boundaries $\\Gamma_{side}$ are insulated,\n* the top boundary $\\Gamma_{top}$ is kept at a reference temperature (say, zero),\nwith the aim of measuring the average temperature on $\\Gamma_{base}$.\nIn order to obtain a faster evaluation (yet, provably accurate) of the output of interest we propose to use a certified reduced basis approximation for the problem.\n2. 
Parametrized formulation\nLet $u(t;\\boldsymbol{\\mu})$ be the temperature in the domain $\\Omega\\times[0,t_f]$.\nThe strong formulation of the parametrized problem is given by:\n<center>for a given parameter $\\boldsymbol{\\mu}\\in\\mathbb{P}$, for $t\\in[0,t_f]$, find $u(t;\\boldsymbol{\\mu})$ such that</center>\n$$\n\\begin{cases}\n \\partial_t u(t;\\boldsymbol{\\mu})- \\text{div} (\\kappa(\\mu_0)\\nabla u(t;\\boldsymbol{\\mu})) = 0 & \\text{in } \\Omega\\times[0,t_f],\\\\\n u(t=0;\\boldsymbol{\\mu}) = 0 & \\text{in } \\Omega, \\\\\n u(t;\\boldsymbol{\\mu}) = 0 & \\text{on } \\Gamma_{top}\\times[0,t_f],\\\\\n \\kappa(\\mu_0)\\nabla u(t;\\boldsymbol{\\mu})\\cdot \\mathbf{n} = 0 & \\text{on } \\Gamma_{side}\\times[0,t_f],\\\\\n \\kappa(\\mu_0)\\nabla u(t;\\boldsymbol{\\mu})\\cdot \\mathbf{n} = \\mu_1 & \\text{on } \\Gamma_{base}\\times[0,t_f].\n\\end{cases}\n$$\n<br>\nwhere \n* $\\mathbf{n}$ denotes the outer normal to the boundaries $\\Gamma_{side}$ and $\\Gamma_{base}$,\n* the conductivity $\\kappa(\\mu_0)$ is defined as follows:\n$$\n\\kappa(\\mu_0) =\n\\begin{cases}\n \\mu_0 & \\text{in } \\Omega_1,\\\\\n 1 & \\text{in } \\Omega_2.\n\\end{cases}\n$$\nThe corresponding weak formulation reads:\n<center>for a given parameter $\\boldsymbol{\\mu}\\in\\mathbb{P}$, for $t\\in[0,t_f]$, find $u(t;\\boldsymbol{\\mu})\\in\\mathbb{V}$ such that</center>\n$$m\\left(\\partial_t u(t;\\boldsymbol{\\mu}),v;\\boldsymbol{\\mu}\\right) + a\\left(u(t;\\boldsymbol{\\mu}),v;\\boldsymbol{\\mu}\\right)=f(v;\\boldsymbol{\\mu})\\quad \\forall v\\in\\mathbb{V},\\quad \\forall t\\in[0,t_f]$$\nwhere\n\nthe function space $\\mathbb{V}$ is defined as\n$$\n\\mathbb{V} = \\{v\\in H^1(\\Omega) : v|_{\\Gamma_{top}}=0\\}\n$$\nthe parametrized bilinear form $m(\\cdot, \\cdot; \\boldsymbol{\\mu}): \\mathbb{V} \\times \\mathbb{V} \\to \\mathbb{R}$ is defined by\n$$m(u, v;\\boldsymbol{\\mu})=\\int_{\\Omega} u\\,v \\ d\\boldsymbol{x},$$\nthe parametrized bilinear form $a(\\cdot, \\cdot; 
\\boldsymbol{\\mu}): \\mathbb{V} \\times \\mathbb{V} \\to \\mathbb{R}$ is defined by\n$$a(u, v;\\boldsymbol{\\mu})=\\int_{\\Omega} \\kappa(\\mu_0)\\nabla u\\cdot \\nabla v \\ d\\boldsymbol{x},$$\nthe parametrized linear form $f(\\cdot; \\boldsymbol{\\mu}): \\mathbb{V} \\to \\mathbb{R}$ is defined by\n$$f(v; \\boldsymbol{\\mu})= \\mu_1\\int_{\\Gamma_{base}}v \\ ds,$$\n\nThe (compliant) output of interest $s(t;\\boldsymbol{\\mu})$ is given by\n$$s(t;\\boldsymbol{\\mu}) = \\mu_1\\int_{\\Gamma_{base}} u(t;\\boldsymbol{\\mu}) \\ ds$$\nand is computed for each $\\boldsymbol{\\mu}$.", "from dolfin import *\nfrom rbnics import *", "3. Affine decomposition\nFor this problem the affine decomposition is straightforward:\n$$m(u,v;\\boldsymbol{\\mu})=\\underbrace{1}_{\\Theta^{m}_0(\\boldsymbol{\\mu})}\\underbrace{\\int_{\\Omega}uv \\ d\\boldsymbol{x}}_{m_0(u,v)},$$\n$$a(u,v;\\boldsymbol{\\mu})=\\underbrace{\\mu_0}_{\\Theta^{a}_0(\\boldsymbol{\\mu})}\\underbrace{\\int_{\\Omega_1}\\nabla u \\cdot \\nabla v \\ d\\boldsymbol{x}}_{a_0(u,v)} \\ + \\ \\underbrace{1}_{\\Theta^{a}_1(\\boldsymbol{\\mu})}\\underbrace{\\int_{\\Omega_2}\\nabla u \\cdot \\nabla v \\ d\\boldsymbol{x}}_{a_1(u,v)},$$\n$$f(v; \\boldsymbol{\\mu}) = \\underbrace{\\mu_1}_{\\Theta^{f}_0(\\boldsymbol{\\mu})} \\underbrace{\\int_{\\Gamma_{base}}v \\ ds}_{f_0(v)}.$$\nWe will implement the numerical discretization of the problem in the class\nclass UnsteadyThermalBlock(ParabolicCoerciveProblem):\nby specifying the coefficients $\\Theta^{m}_*(\\boldsymbol{\\mu})$, $\\Theta^{a}_*(\\boldsymbol{\\mu})$ and $\\Theta^{f}_*(\\boldsymbol{\\mu})$ in the method\ndef compute_theta(self, term):\nand the bilinear forms $m_*(u, v)$, $a_*(u, v)$ and linear forms $f_*(v)$ in\ndef assemble_operator(self, term):", "class UnsteadyThermalBlock(ParabolicCoerciveProblem):\n\n # Default initialization of members\n def __init__(self, V, **kwargs):\n # Call the standard initialization\n ParabolicCoerciveProblem.__init__(self, V, **kwargs)\n # ... 
and also store FEniCS data structures for assembly\n assert \"subdomains\" in kwargs\n assert \"boundaries\" in kwargs\n self.subdomains, self.boundaries = kwargs[\"subdomains\"], kwargs[\"boundaries\"]\n self.u = TrialFunction(V)\n self.v = TestFunction(V)\n self.dx = Measure(\"dx\")(subdomain_data=self.subdomains)\n self.ds = Measure(\"ds\")(subdomain_data=self.boundaries)\n\n # Return custom problem name\n def name(self):\n return \"UnsteadyThermalBlock1RB\"\n\n # Return the alpha_lower bound.\n def get_stability_factor_lower_bound(self):\n return min(self.compute_theta(\"a\"))\n\n # Return theta multiplicative terms of the affine expansion of the problem.\n def compute_theta(self, term):\n mu = self.mu\n if term == \"m\":\n theta_m0 = 1.\n return (theta_m0, )\n elif term == \"a\":\n theta_a0 = mu[0]\n theta_a1 = 1.\n return (theta_a0, theta_a1)\n elif term == \"f\":\n theta_f0 = mu[1]\n return (theta_f0,)\n else:\n raise ValueError(\"Invalid term for compute_theta().\")\n\n # Return forms resulting from the discretization of the affine expansion of the problem operators.\n def assemble_operator(self, term):\n v = self.v\n dx = self.dx\n if term == \"m\":\n u = self.u\n m0 = u * v * dx\n return (m0, )\n elif term == \"a\":\n u = self.u\n a0 = inner(grad(u), grad(v)) * dx(1)\n a1 = inner(grad(u), grad(v)) * dx(2)\n return (a0, a1)\n elif term == \"f\":\n ds = self.ds\n f0 = v * ds(1)\n return (f0,)\n elif term == \"dirichlet_bc\":\n bc0 = [DirichletBC(self.V, Constant(0.0), self.boundaries, 3)]\n return (bc0,)\n elif term == \"inner_product\":\n u = self.u\n x0 = inner(grad(u), grad(v)) * dx\n return (x0,)\n elif term == \"projection_inner_product\":\n u = self.u\n x0 = u * v * dx\n return (x0,)\n else:\n raise ValueError(\"Invalid term for assemble_operator().\")", "4. Main program\n4.1. 
Read the mesh for this problem\nThe mesh was generated by the data/generate_mesh.ipynb notebook.", "mesh = Mesh(\"data/thermal_block.xml\")\nsubdomains = MeshFunction(\"size_t\", mesh, \"data/thermal_block_physical_region.xml\")\nboundaries = MeshFunction(\"size_t\", mesh, \"data/thermal_block_facet_region.xml\")", "4.2. Create Finite Element space (Lagrange P1)", "V = FunctionSpace(mesh, \"Lagrange\", 1)", "4.3. Allocate an object of the UnsteadyThermalBlock class", "problem = UnsteadyThermalBlock(V, subdomains=subdomains, boundaries=boundaries)\nmu_range = [(0.1, 10.0), (-1.0, 1.0)]\nproblem.set_mu_range(mu_range)\nproblem.set_time_step_size(0.05)\nproblem.set_final_time(3)", "4.4. Prepare reduction with a reduced basis method", "reduction_method = ReducedBasis(problem)\nreduction_method.set_Nmax(20, POD_Greedy=4)\nreduction_method.set_tolerance(1e-5, POD_Greedy=1e-2)", "4.5. Perform the offline phase", "reduction_method.initialize_training_set(100)\nreduced_problem = reduction_method.offline()", "4.6. Perform an online solve", "online_mu = (8.0, -1.0)\nreduced_problem.set_mu(online_mu)\nreduced_solution = reduced_problem.solve()\nplot(reduced_solution, reduced_problem=reduced_problem, every=5, interval=500)", "4.7. Perform an error analysis", "reduction_method.initialize_testing_set(10)\nreduction_method.error_analysis()", "4.8. Perform a speedup analysis", "reduction_method.initialize_testing_set(10)\nreduction_method.speedup_analysis()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
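The point of the affine decomposition above is that, once the mu-independent operators $m_0$, $a_0$, $a_1$, $f_0$ are assembled offline, the online system for any new $\boldsymbol{\mu}$ is just a weighted sum. The sketch below illustrates this with NumPy; the 2x2 matrices are toy stand-ins (not the actual FE operators), while the theta coefficients mirror those in the tutorial's compute_theta method.

```python
import numpy as np

# Toy stand-ins for the precomputed, mu-independent operators a_0, a_1 and f_0.
A0 = np.array([[2.0, -1.0], [-1.0, 2.0]])  # stiffness contribution from Omega_1
A1 = np.array([[1.0, 0.0], [0.0, 1.0]])    # stiffness contribution from Omega_2
F0 = np.array([1.0, 0.0])                  # flux contribution from Gamma_base

def compute_theta(mu, term):
    # Same coefficients as UnsteadyThermalBlock.compute_theta
    if term == "a":
        return (mu[0], 1.0)
    elif term == "f":
        return (mu[1],)
    raise ValueError("Invalid term for compute_theta().")

def assemble(mu):
    # Online assembly: A(mu) = sum_q Theta^a_q(mu) A_q,  F(mu) = Theta^f_0(mu) F_0
    theta_a = compute_theta(mu, "a")
    theta_f = compute_theta(mu, "f")
    A = theta_a[0] * A0 + theta_a[1] * A1
    F = theta_f[0] * F0
    return A, F

mu = (8.0, -1.0)
A, F = assemble(mu)
u = np.linalg.solve(A, F)  # steady-state solve with the assembled operators
```

Because only the scalar thetas depend on $\boldsymbol{\mu}$, the online cost is independent of the (potentially large) finite element dimension.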
rebeccabilbro/viz
seaborn/SeabornTour-Titanic.ipynb
mit
[ "Touring Seaborn with Titanic\nIn this lab, we will use a familiar dataset to explore the use of visualizations in feature analysis and selection.\nThe objective of this lab is to work through some of the visualization capabilities available in Seaborn. For a more thorough investigation of the capabilities offered by Seaborn, you are encouraged to do the full tutorial linked below. Seaborn is a high-level API built on matplotlib. It integrates with pandas dataframes, simplifying the process of visualizing data, and provides simple plotting functions.\nSome of the features that seaborn offers are\n\nSeveral built-in themes that improve on the default matplotlib aesthetics\nTools for choosing color palettes to make beautiful plots that reveal patterns in your data\nFunctions for visualizing univariate and bivariate distributions or for comparing them between subsets of data\nTools that fit and visualize linear regression models for different kinds of independent and dependent variables\nFunctions that visualize matrices of data and use clustering algorithms to discover structure in those matrices\nA function to plot statistical timeseries data with flexible estimation and representation of uncertainty around the estimate\nHigh-level abstractions for structuring grids of plots that let you easily build complex visualizations\n\nWe are going to look at 3 useful functions in seaborn: factorplot, pairplot, and jointplot.\n Before running the code in this lab, articulate to your partner what you expect the visualization to look like. 
Look at the code and the Seaborn documentation to figure out what data is being plotted and what the type of plot may look like.\nsources:\nPrevious Titanic work: https://github.com/rebeccabilbro/titanic\nSeaborn Tutorial: https://stanford.edu/~mwaskom/software/seaborn/tutorial.html", "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\n%matplotlib inline\npd.set_option('display.max_columns', 500)", "Like scikit-learn, Seaborn has \"toy\" datasets available to import for exploration. This includes the Titanic data we have previously looked at. Let's load the Seaborn Titanic dataset and take a look.\n(https://github.com/mwaskom/seaborn-data shows the datasets available to load via this method in Seaborn.)", "df = sns.load_dataset('titanic')\n\n# Write the code to look at the head of the dataframe\n", "As you can see, the data has been cleaned up a bit.\nWe performed some rudimentary visualization for exploratory data analysis previously. For example, we created a histogram using matplotlib to look at the age distribution of passengers.", "# Create a histogram to examine age distribution of the passengers.\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.hist(df['age'], bins = 10, range = (df['age'].min(),df['age'].max()))\nplt.title('Age distribution')\nplt.xlabel('Age')\nplt.ylabel('Count of Passengers')\nplt.show()", "Factorplot\nOur prior work with the Titanic data focused on the available numeric data. Factorplot gives us an easy method to explore some of the categorical data as well. Factorplots allow us to look at a parameter's distribution in bins defined by another parameter.\nFor example, we can look at the survival rate based on the deck a passenger's cabin was on.\nRemember: take a look at the documentation first (https://stanford.edu/~mwaskom/software/seaborn/index.html) and figure out what the code is doing. 
Being able to understand documentation will help you a lot in your projects.", "# What is a factorplot? Check the documentation! Which data are we using? What is the count a count of?\n\ng = sns.factorplot(\"alive\", col=\"deck\", col_wrap=4, \n data=df[df.deck.notnull()], kind=\"count\", size=4, aspect=.8)", "What other options can you set with a factorplot in Seaborn? Using the code above as a starting point, create some code to create a factorplot with the data above, but in a different configuration. For example: make 2 plots per column, change the colors, add a legend, change the size, etc.", "# Try your own variation of the factorplot above.\n", "As you saw in the factorplot documentation, you can specify several different types of plots in the parameters. Let's use factorplot to create a nested barplot showing passenger survival based on their class and sex. Fill in the missing pieces of the code below. \nThe goal is a barplot showing survival probability by class that further shows the sex of the passengers in each class. (Hint: how can you use the hue parameter?)", "# Draw a nested barplot to show survival for class and sex\ng = sns.factorplot(x=\"CHANGE TO THE CORRECT FEATURE\", \n y=\"CHANGE TO THE CORRECT FEATURE\", \n hue=\"CHANGE TO THE CORRECT FEATURE\", \n data=df,\n size=6, kind=\"bar\", palette=\"muted\")\ng.despine(left=True)\ng.set_ylabels(\"survival probability\")", "Take a look at the code below. 
Let's again plot passenger survival based on their class and who they were (man, woman, child) but using a different plot for each class, like what we did above for the deck information.", "g = sns.factorplot(x=\"CHANGE TO THE CORRECT FEATURE\", \n y=\"CHANGE TO THE CORRECT FEATURE\", \n col=\"CHANGE TO THE CORRECT FEATURE\", \n data=df, \n saturation=.5, kind=\"bar\", ci=None,aspect=.6)\n(g.set_axis_labels(\"\", \"Survival Rate\").set_xticklabels([\"Men\", \"Women\", \"Children\"]).set_titles\n (\"{col_name} {col_var}\").set(ylim=(0, 1)).despine(left=True)) ", "Factorplot has 6 different kinds of plots; we explored two of them above. Using the documentation, try out one of the remaining plot types. A suggestion is provided below. You can follow it, and/or create your own visualization.", "# With factorplot, make a violin plot that shows the age of the passengers at each embarkation point \n# based on their class. Use the hue parameter to show the sex of the passengers\n", "Pairplot\nIn the Wheat Classification notebook, we saw a scatter matrix. A scatter matrix plots each feature against every other feature. The diagonal showed us a density plot of just that data. Seaborn gives us this ability in the pairplot. In order to make a useful pairplot with the data, let's update some information.", "df.age = df.age.fillna(df.age.mean())\n\ng = sns.pairplot(data=df[['survived', 'pclass', 'age', 'sibsp', 'parch', 'fare']], hue='survived', dropna=True)", "The Titanic data gives an idea of what we can see with a pairplot, but it might not be the most illustrative example. Using the information provided so far, make a pairplot using the seaborn car crashes data.", "# Pairplot of the crash data", "Jointplot\nLike pairplots, a jointplot shows the distribution between features. 
It also shows individual distributions of the features being compared.", "g = sns.jointplot(\"fare\", \"age\", df)", "Using either the Titanic or crash data, create some jointplots.", "# Jointplot, titanic data\n\n\n# Jointplot, crash data\n", "Bonus\nUse the Titanic data to create a boxplot of the age distribution on each deck by class.\nExtra Bonus\nPlot the same information using FacetGrid.", "# boxplot of the age distribution on each deck by class\n\n\n# boxplot of the age distribution on each deck by class using FacetGrid\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.14/_downloads/plot_topo_compare_conditions.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compare evoked responses for different conditions\nIn this example, an Epochs object for visual and\nauditory responses is created. Both conditions\nare then accessed by their respective names to\ncreate a sensor layout plot of the related\nevoked responses.", "# Authors: Denis Engemann <denis.engemann@gmail.com>\n# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n\n# License: BSD (3-clause)\n\n\nimport matplotlib.pyplot as plt\nimport mne\n\nfrom mne.viz import plot_evoked_topo\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()", "Set parameters", "raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id = 1\ntmin = -0.2\ntmax = 0.5\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Set up pick list: MEG + STI 014 - bad channels (modify to your needs)\ninclude = [] # or stim channels ['STI 014']\n# bad channels in raw.info['bads'] will be automatically excluded\n\n# Set up amplitude-peak rejection values for MEG channels\nreject = dict(grad=4000e-13, mag=4e-12)\n\n# pick MEG channels\npicks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,\n include=include, exclude='bads')\n\n# Create epochs including different events\nevent_id = {'audio/left': 1, 'audio/right': 2,\n 'visual/left': 3, 'visual/right': 4}\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax,\n picks=picks, baseline=(None, 0), reject=reject)\n\n# Generate list of evoked objects from conditions names\nevokeds = [epochs[name].average() for name in ('left', 'right')]", "Show topography for two different conditions", "colors = 'yellow', 'green'\ntitle = 'MNE sample data - left vs right (A/V combined)'\n\nplot_evoked_topo(evokeds, color=colors, title=title)\n\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
oscarmore2/deep-learning-study
tflearn-digit-recognition/TFLearn_Digit_Recognition.ipynb
mit
[ "Handwritten Number Recognition with TFLearn and MNIST\nIn this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. \nThis kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.\nWe'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.", "# Import Numpy, TensorFlow, TFLearn, and MNIST data\nimport numpy as np\nimport tensorflow as tf\nimport tflearn\nimport tflearn.datasets.mnist as mnist", "Retrieving training and test data\nThe MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.\n\nEach MNIST data point has:\n1. an image of a handwritten digit and \n2. a corresponding label (a number 0-9 that identifies the image)\nWe'll call the images, which will be the input to our neural network, X and their corresponding labels Y.\n\nWe're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and one 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].\nFlattened data\nFor this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values. 
\nFlattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.", "# Retrieve the training and test data\ntrainX, trainY, testX, testY = mnist.load_data(one_hot=True)", "Visualize the training data\nProvided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.", "# Visualizing the data\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Function for displaying a training image by its index in the MNIST set\ndef show_digit(index):\n label = trainY[index].argmax(axis=0)\n # Reshape 784 array into 28x28 image\n image = trainX[index].reshape([28,28])\n plt.title('Training data, index: %d, Label: %d' % (index, label))\n plt.imshow(image, cmap='gray_r')\n plt.show()\n \n# Display the training image at index 11\nshow_digit(11)\nprint(trainX.shape)\nprint(trainY.shape)", "Building the network\nTFLearn lets you build the network by defining the layers in that network. \nFor this example, you'll define:\n\nThe input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. \nHidden layers, which recognize patterns in data and connect the input to the output layer, and\nThe output layer, which defines how the network learns and outputs a label for a given image.\n\nLet's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,\nnet = tflearn.input_data([None, 100])\nwould create a network with 100 inputs. 
The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.\nAdding layers\nTo add new hidden layers, you use \nnet = tflearn.fully_connected(net, n_units, activation='ReLU')\nThis adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units). \nThen, to set how you train the network, use:\nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nAgain, this is passing in the network you've been building. The keywords: \n\noptimizer sets the training method, here stochastic gradient descent\nlearning_rate is the learning rate\nloss determines how the network error is calculated. In this example, with categorical cross-entropy.\n\nFinally, you put all this together to create the model with tflearn.DNN(net).\nExercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.\nHint: The final output layer must have 10 output nodes (one for each digit 0-9). 
It's also recommended to use a softmax activation layer as your final output layer.", "# Define the neural network\ndef build_model():\n # This resets all parameters and variables, leave this here\n tf.reset_default_graph()\n \n #### Your code ####\n # Include the input layer, hidden layer(s), and set how you want to train the model\n net = tflearn.input_data([None, 784])\n net = tflearn.fully_connected(net, 500, activation='ReLU')\n net = tflearn.fully_connected(net, 100, activation='ReLU')\n net = tflearn.fully_connected(net, 10, activation='softmax')\n net = tflearn.regression(net, optimizer='sgd', learning_rate=0.015, loss='categorical_crossentropy')\n \n # This model assumes that your network is named \"net\" \n model = tflearn.DNN(net)\n return model\n\n# Build the model\nmodel = build_model()", "Training the network\nNow that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. \nToo few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!", "# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=100)", "Testing\nAfter you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.\nA good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!", "# Compare the labels that our model predicts with the actual labels\n\n# Find the indices of the most confident prediction for each item. 
That tells us the predicted digit for that sample.\npredictions = np.array(model.predict(testX)).argmax(axis=1)\n\n# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels\nactual = testY.argmax(axis=1)\ntest_accuracy = np.mean(predictions == actual, axis=0)\n\n# Print out the result\nprint(\"Test accuracy: \", test_accuracy)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
oyamad/game_theory_models
localint_note.ipynb
bsd-3-clause
[ "Local Interaction\nTomohiro Kusano\nGraduate School of Economics, University of Tokyo\nThis notebook demonstrates how to study the local interaction model using the localint Python library.", "import numpy as np\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport random\nfrom matplotlib.animation import FuncAnimation\nfrom __future__ import division\nfrom localint import LocalInteraction\nfrom IPython.display import Image\nimport io\nimport base64\nfrom IPython.display import HTML", "Note: We don't use %matplotlib inline here because if we use it, animation, which is a function defined in this notebook, doesn't work in an ordinary environment.\nLocal Interaction Game\nLet $\\chi$ be a finite set of players and $P:\\chi \\times \\chi \\to \\mathbb{R}_+$ be a function such that\n\n$P(x,x) = 0$ for all $x \\in \\chi$\n$P(x,y) = P(y,x)$ for all $x,y \\in \\chi$.\n\nA local interaction system is the undirected graph induced by $(\\chi, P)$. Note that $P$ can be represented by a matrix, which will be introduced as \"adjacency matrix\" in the next section, since $\\chi$ is finite here.\nFor example, $(\\chi, P)$, where $\\chi = {0,1,2}$ and \n\\begin{equation}\nP =\n\\begin{bmatrix}\n0 & 1 & 0 \\\n0 & 0 & 2 \\\n3 & 0 & 0\n\\end{bmatrix}\n\\end{equation}\nrepresents the following local interaction system.", "Image(filename='./localint_materials/figure_1.png')", "The integer on each edge denotes the corresponding weight on the edge.\nIn each period, given the local interaction system, each player plays a game constructing his/her belief, which is a distribution on the action space, according to the weights on the edges and what other players are taking.\nFor example, let's consider the above system. Suppose that each player has two actions (0 and 1), and Player 1, 2 are taking action 0, 1 respectively. 
Given the system and other players' actions, Player 0 constructs a belief $(1, 3)$, which means the ratio of the probability that Player 0 meets a player taking action 0 to the probability that Player 0 meets a player taking action 1 is 1:3.\nThe LocalInteraction class\nThe LocalInteraction class requires two parameters, payoff matrix and adjacency matrix.\nPayoff Matrix\nThe payoff matrix must be a 2-dimensional square numpy array. In a game-theoretic model, it means that both the set of actions and the payoff function are the same across all players.\nFor instance, consider a coordination game where the payoff table is given by the following:\n1$\\backslash$2 | $A$ | $B$ \n ------------- |---------------| ---------\n $A$ | 4, 4 | 0, 2 \n $B$ | 2, 0 | 3, 3 \nNote that this payoff table implies that the game is symmetric. Because of the symmetry, it suffices to record only one player's payoffs like the following:", "payoff_matrix = np.asarray([[4, 0], \n [2, 3]])\nprint payoff_matrix", "Adjacency Matrix\nThe adjacency matrix represents how the nodes in the system are connected. In particular, in the context of the local interaction model, it represents whether each pair of players interacts and how strong their interaction is if they are connected.\nLet's consider an adjacency matrix given by the following:\n\\begin{equation}\n[a_{ij}] = \n\\begin{bmatrix}\n0 &1 &3\\\n2 &0 &1\\\n3 &2 &0\n\\end{bmatrix}\n\\end{equation}", "adj_matrix = np.asarray([[0, 1, 3],\n [2, 0, 1],\n [3, 2, 0]])\nprint adj_matrix", "For example, $a_{12}(=1)$ denotes the weight on player 2's action to player 1. Note that the weight on player 1's action to player 2 ($a_{21}=2$) is different. 
That is, the LocalInteraction class allow adjacency matrix to be asymmetric.\nCreating a LocalInteraction\nNow that we have two parameters, payoff_matrix and adj_matrix, we can create a LocalInteraction:", "li = LocalInteraction(payoff_matrix, adj_matrix)\n\nli.players[0]", "The adjacency matrix is saved in the form of csr_matrix.", "li.adj_matrix", "Initializing current actions\nOriginally, current actions are $N$-dimensional zero vector, where $N =$ \"the number of players\".", "li.N, li.current_actions", "To initialize current_actions, we can use set_init_actions:", "init_actions = [1, 0, 1]\nli.set_init_actions(init_actions)\n\nli.current_actions", "If we don't specify the list of the players' actions, set_init_actions randomly set current_actions.", "li.set_init_actions()\n\nli.current_actions", "Examples\nIn this section, we give you a couple of examples for typical graphs, and analyze the local interaction models corresponding to those graphs.\nIn order to show those results graphically, we have to define functions to draw a graph and generate an animation.", "def draw_graph(graph_dict, figsize=(16,10), node_size=200, linewidth=2):\n fig = plt.figure(figsize=figsize, facecolor='w')\n nx.draw_networkx_nodes(graph_dict['G'], graph_dict['pos'],\n node_size=node_size, node_color='w')\n nx.draw_networkx_edges(graph_dict['G'], graph_dict['pos'],\n alpha=0.5, width=linewidth, arrows=False)\n plt.axis('off')\n plt.show()\n\ndef animation(li, init_actions=None, pos='circular', node_size=200,\n node_colors=None, linewidth=2, interval=200, figsize=(16,10)):\n num_actions = li.num_actions\n\n if node_colors is None:\n node_colors = mpl.rcParams['axes.color_cycle']\n num_colors = len(node_colors)\n if num_colors < num_actions:\n raise ValueError('{0} colors required '.format(num_actions) +\n '(only {0} provided)'.format(num_colors))\n\n G = nx.DiGraph(li.adj_matrix)\n\n if isinstance(pos, dict):\n pos = pos\n else:\n try:\n layout_func = getattr(nx, 
'{0}_layout'.format(pos))\n pos = layout_func(G)\n except:\n raise ValueError(\n \"pos must be a dictionary of node-position pairs, or one of \" +\n \"{'circular', 'random', 'shell', 'spring', 'spectral'}\")\n\n def get_fig(n):\n for i in range(num_actions):\n nodelist = np.where(li.current_actions == i)[0].tolist()\n nx.draw_networkx_nodes(G, pos, node_size=node_size,\n nodelist=nodelist,\n node_color=node_colors[i])\n li.play()\n return fig\n\n li.set_init_actions(init_actions)\n\n fig = plt.figure(figsize=figsize, facecolor='w')\n nx.draw_networkx_edges(G, pos, alpha=0.5, width=linewidth, arrows=False)\n anim = FuncAnimation(fig, get_fig, interval=interval)\n plt.axis('off')\n plt.show()\n plt.close()", "2-actions case\nFor convenience, we focus on a coordination game, which is given by the following:", "coordination_game = np.array([[11, 0],\n [9, 8]])", "Also, let node_colors_2 be a list whose $i$-th ($i = 0, 1$) element denotes a color of players taking action $i$:", "node_colors_2 = ['b', 'y']", "Actually, in this case, the action 1, which leads to the risk-dominant but inefficient outcome if both players take it, is contageous in some sense although we don't formally define it. 
You will see what it means in the following section before long.\nCircle\nWe first examine one of the simplest graphs, called the \"circle graph\".", "N = 100\ncircle = {}\nG = nx.cycle_graph(n=N)\ncircle['G'] = G\ncircle['adj_matrix'] = nx.adjacency_matrix(G)\ncircle['pos'] = nx.circular_layout(G)", "Note that we have to specify not only the graph and the adjacency matrix but also positions of nodes since draw_graph and animation require it.", "draw_graph(circle)\n\nli_coor = LocalInteraction(coordination_game, circle['adj_matrix'])\n\ninit_actions = np.zeros(li_coor.N, dtype=int)\ninit_actions[[0, -1]] = 1\nanimation(li_coor, init_actions=init_actions, pos=circle['pos'],\n node_colors=node_colors_2, interval=100)", "You can see that the distribution of the players taking action 1 spreads across all nodes as time goes on.\nTwo-dimensional lattice\nWe next examine another simple graph, called the \"two-dimensional lattice\". Its procedure for simulation is the same as for the circle graph, except that it is tedious to specify the positions of nodes in this case.", "N = 100\nlattice2d = {}\nm, n = 10, 10\nG = nx.grid_2d_graph(m, n)\nlattice2d['adj_matrix'] = nx.adjacency_matrix(G)\nlattice2d['G'] = nx.Graph(lattice2d['adj_matrix'])\nlattice2d['pos'] = {}\nfor i, (x, y) in enumerate(G.nodes_iter()):\n lattice2d[(x, y)] = i \n lattice2d['pos'][i] = (x/(m-1), y/(n-1))\n\ndraw_graph(lattice2d)\n\nli_coor = LocalInteraction(coordination_game, lattice2d['adj_matrix'])\n\n# m, n = 10, 10\ninit_actions = np.zeros(li_coor.N, dtype=int)\nfor node in [(m//2-i, n//2-j) for i in range(2) for j in range(2)]:\n init_actions[lattice2d[node]] = 1\nanimation(li_coor, init_actions=init_actions, pos=lattice2d['pos'],\n node_colors=node_colors_2, figsize=(14,8), interval=500)", "3-actions case\nThe localint module works even in the 3-actions case. 
Let's consider the following game, which is called the \"Bilingual Game\":", "def bilingual_game(e, a=11, b=0, c=9, d=8):\n A = np.array([[a , a , b],\n [a-e, a-e, d-e],\n [c , d , d]])\n return A\n\nbg = bilingual_game(e=0.1)\nbg\n\nnode_colors_3 = ['b', 'r', 'y']", "We show that even the action 0, which leads to a Pareto-efficient outcome, can be contagious in this case. \nCircle", "li_bg = LocalInteraction(bg, circle['adj_matrix'])\n\ninit_actions = np.ones(li_bg.N, dtype=int) * 2\ninit_actions[[0, 1, -2, -1]] = 0\nanimation(li_bg, init_actions=init_actions, pos=circle['pos'],\n node_colors=node_colors_3, interval=100)", "Two-dimensional lattice", "li_bg = LocalInteraction(bg, lattice2d['adj_matrix'])\n\n# m, n = 10, 10\ninit_actions = np.ones(li_bg.N, dtype=int) * 2\nfor node in [(m//2-i, n//2-j) for i in range(2) for j in range(2)]:\n init_actions[lattice2d[node]] = 0\nanimation(li_bg, init_actions=init_actions, pos=lattice2d['pos'],\n node_colors=node_colors_3, interval=500)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
thewtex/TubeTK
examples/Demo-ConvertTubesToPolyData.ipynb
apache-2.0
[ "Convert Tubes To PolyData\nThis notebook contains a few examples of how to call wrapped methods in itk and ITKTubeTK.\nITK and TubeTK must be installed on your system for this notebook to work. Typically, this is accomplished by\n\npython -m pip install itk-tubetk", "import os\nimport sys\nimport numpy\n\nimport itk\nfrom itk import TubeTK as ttk", "Load the tubes", "PixelType = itk.F\nDimension = 3\nImageType = itk.Image[PixelType, Dimension]\n \n# Read tre file\nTubeFileReaderType = itk.SpatialObjectReader[Dimension]\n \ntubeFileReader = TubeFileReaderType.New()\ntubeFileReader.SetFileName(\"Data/MRI-Normals/Normal071-VascularNetwork.tre\")\ntubeFileReader.Update()\n\ntubes = tubeFileReader.GetGroup()", "Generate the polydata representation of the tubes and save it to the file \"Tube.vtp\".\nThe Tube.vtp file can be displayed by dragging-and-dropping it onto ParaView Glance:\n https://kitware.github.io/paraview-glance/app/", "ttk.WriteTubesAsPolyData.New(Input=tubes, FileName=\"Tube.vtp\").Update()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
vicolab/neural-network-intro
1-regression/2-multivariate-linear-regression.ipynb
mit
[ "\"\"\"This area sets up the Jupyter environment.\nPlease do not modify anything in this cell.\n\"\"\"\nimport os\nimport sys\nimport time\n\n# Add project to PYTHONPATH for future use\nsys.path.insert(1, os.path.join(sys.path[0], '..'))\n\n# Import miscellaneous modules\nfrom IPython.core.display import display, HTML\n\n# Set CSS styling\nwith open('../admin/custom.css', 'r') as f:\n style = \"\"\"<style>\\n{}\\n</style>\"\"\".format(f.read())\n display(HTML(style))", "Multivariate Regression with Keras\n<div class=\"alert alert-warning\">\nIn this notebook we will get more familiar with the high-level artificial neural network package [Keras](https://keras.io/) by walking through a multivariate linear regression example.\n</div>\n\nDataset: Bike-Sharing System\nBackground\nPublic bike-sharing systems are a new generation of traditional bike rentals where the process from membership, rental, and return back of bicycles have become automatic. Through these systems, a user is able to easily rent a bicycle from a particular position and return it back to another position. Currently, there are about 500 bike-sharing systems around the world which are composed of over 500 thousand bicycles. Today, there exist great interest in these systems due to their important role in traffic, environmental, and health issues.\nApart from interesting real-world applications of these kinds of bike-sharing systems, the data being generated by these systems make them desirable for research as well. As opposed to other transport services such as bus or subway, the duration of travel, departure, and arrival position is explicitly recorded. This feature turns bike-sharing into a virtual sensor network that can be used for sensing mobility in a city. Hence, it is expected that significant events in a city could be detected by monitoring these data.\nThe bike-sharing rental process is highly correlated to environmental and seasonal settings. 
For instance, weather conditions,\nprecipitation, day of the week, season, hour of the day, and more can affect rental behaviours. The core dataset is related to a two-year historical log between 2011 and 2012 from the Capital Bikeshare system (Washington D.C., USA) which is publicly available at http://capitalbikeshare.com/system-data. The data was aggregated hourly as well as daily and then combined with weather and seasonal information. Weather information was extracted from http://www.freemeteo.com.\nWe have already standardised some of the features, i.e. zero mean and unit variance.\nTask: Regression\nPredict the daily bicycle rental count based on the environmental and seasonal settings.\nDataset Characteristics\nday.csv - Bike-sharing counts aggregated on a daily basis (731 days)\nFeatures:\n- instance: record index\n- dteday : date\n- season : season (1:spring, 2:summer, 3:fall, 4:winter)\n- yr : year (0: 2011, 1:2012)\n- mnth : month ( 1 to 12)\n- holiday : whether day is holiday or not (extracted from http://dchr.dc.gov/page/holiday-schedule)\n- weekday : day of the week\n- workingday : if day is neither weekend nor holiday is 1, otherwise is 0.\n+ weathersit : \n - 1: Clear, Few clouds, Partly cloudy, Partly cloudy\n - 2: Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist\n - 3: Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds\n - 4: Heavy Rain + Ice Pellets + Thunderstorm + Mist, Snow + Fog\n- temp : Normalized temperature in Celsius. The values are divided by 41 (max)\n- atemp: Normalized feeling temperature in Celsius. The values are divided by 50 (max)\n- hum: Normalized humidity. The values are divided by 100 (max)\n- windspeed: Normalized wind speed. 
The values are divided by 67 (max)\n- casual: count of casual users\n- registered: count of registered users\n- cnt: count of total rental bikes including both casual and registered\n\nLicense\nThis dataset was created and preprocessed in:\n[1] Fanaee-T, Hadi, and Gama, Joao, \"Event labeling combining ensemble detectors and background knowledge\", Progress in Artificial Intelligence (2013): pp. 1-15, Springer Berlin Heidelberg, doi:10.1007/s13748-013-0040-3.\nLoading the Data\n<div class=\"alert alert-info\">\n <strong>In the following code snippets we will:</strong>\n <ul>\n <li>Load the dataset from a slew of CSV files.</li>\n </ul>\n</div>", "# Plots will be shown inside the notebook\n%matplotlib notebook\nimport matplotlib.pyplot as plt\n\n# High-level package for creating and training artificial neural networks\nimport keras\n\n# NumPy is a package for manipulating N-dimensional array objects \nimport numpy as np\n\n# Pandas is a data analysis package\nimport pandas as pd\n\nimport admin.tools as tools\nimport problem_unittests as tests", "Load features for training:", "train_features = tools.load_csv_with_dates('resources/bike_training_features.csv', 'dteday')", "Load targets for training:", "train_targets = tools.load_csv_with_dates('resources/bike_training_targets.csv', 'dteday')", "Load features for testing:", "test_features = tools.load_csv_with_dates('resources/bike_test_features.csv', 'dteday')", "Load targets for testing:", "test_targets = tools.load_csv_with_dates('resources/bike_test_targets.csv', 'dteday')\ntest_dates = test_targets.index.strftime('%b %d')\nprint('\\n', test_targets.head(n=5))", "Unpack the Pandas DataFrames to NumPy arrays:", "# Unpack features\nX_train = train_features.values\nX_test = test_features.values\n\n# Unpack targets\ny_train = train_targets['cnt'].values\ny_test = test_targets['cnt'].values\n\n# Record number of inputs and outputs\nnb_features = X_train.shape[1]\nnb_outputs = 1", "Task I: Build the Model\nUsing Keras we 
will build a multivariate regression model. Remember, these kinds of models can be represented as artificial neural networks, which is why we can implement them using Keras:\n<img src=\"resources/linear-regression-net.png\" alt=\"Linear regression as an artificial neural network\" width=\"300\" />\nThe model, an artificial neural network, will consist of a $d$ dimensional input that is fully- or densely-connected to a single output neuron. In the figure above, the inputs $\\mathbf{x}$ and the constant bias value $b$ are integrated via a linear combination $\\sum$. The integrated value is then pushed through an activation, or transfer, function such as the logistic function. In this notebook, the activation function $\\sigma$ will be defined as $\\sigma(x)=x$ because we are doing linear regression. When drawing an artificial neuron it is common to group the linear combination $\\sum$ together with the activation function $\\sigma$. We will talk more about these kinds of artificial neural networks in the next notebooks.\nThe model will be made using the Keras functional guide, which allows us to take advantage of a functional application programming interface (API) to create complex models with an arbitrary number of input and output neurons. One important thing to understand about Keras and other similar libraries is that the functional structure, or graph, of a model is defined before we instantiate the parameters and use them for something. Below is some example code for how to set up a simple model using this API with 32 inputs and 4 outputs:\n```python\nfrom keras.models import Model\nfrom keras.layers import Input, Dense\na = Input(shape=(32,))\nb = Dense(4)(a)\nmodel = Model(inputs=a, outputs=b)\n```\nNotice how this is the same setup we used for the previous notebook on linear regression. 
Make sure to revisit that notebook if you have trouble understanding the basic usage of this API.\n<div class=\"alert alert-success\">\n**Task**: Build a model using the Keras functional guide for the bike-sharing dataset. Use the following functions to put together your model:\n <ul>\n <li><a href=\"https://keras.io/models/model/\">Input()</a></li>\n <li><a href=\"https://keras.io/models/model/\">Dense()</a></li>\n <li><a href=\"https://keras.io/models/model/\">Model()</a></li>\n </ul>\nIt may be helpful to browse other parts of the Keras documentation.\n</div>", "# Import what we need\nfrom keras.layers import (Input, Dense)\nfrom keras.models import Model\n\n\ndef simple_model(nb_inputs, nb_outputs):\n \"\"\"Return a Keras Model.\n \"\"\"\n model = None\n\n return model\n\n### Do *not* modify the following line ###\n# Test and see that the model has been created correctly\ntests.test_simple_model(simple_model)", "Selecting Hyperparameters\nAs opposed to standard model parameters, such as the weights in a linear model, hyperparameters are user-specified parameters not learned by the training process, i.e. they are specified a priori. In the following section we will look at how we can define and evaluate a few different hyperparameters relevant to our previously defined model. The hyperparameters we will take a look at are:\n\nLearning rate\nNumber of epochs\nBatch size\n\nDigression: Different Sets of Data\nOne of the ultimate goals of machine learning is for our models to generalise well. That is, we would like the performance of our model on the data we have trained on, i.e. the in-sample error, to be representative of the performance of our model on the data we are attempting to model, i.e. the out-of-sample error. Unfortunately, for most problems we are unable to test our model on all possible data that we have not trained on. 
This might be due to difficulties gathering new data or simply because the amount of possible data is very large.\nFor this reason, we have to settle for a different solution when we want to evaluate our trained models. The go-to solution is to gather a second set of data, in addition to the training set, called a test set. For the test set to be useful it is important that it is representative of the data we have not trained on. In other words, the error we get on the test set should be close to the out-of-sample error.\nSelecting appropriate hyperparameters can be seen as a sort of meta-optimisation task on top of the learning task. Now, we could train a model several times, alter some hyperparameters each time, and record the final performance on the test set; however, this will likely yield errors that are overly optimistic. This is because looking at the test set when making learning choices, i.e. selecting hyperparameters, introduces bias and causes the estimated out-of-sample error to diverge from the true out-of-sample error. Remember, this is the reason why we have a test set in the first place.\nThe solution to this problem is to create a third set: the validation set. This is typically a partition of the training set; however, there exist several cross validation methodologies for how to create and use validation sets efficiently. By having this third set we can: (i) use the training set to train the trainable model parameters, (ii) use the validation set to select hyperparameters, and (iii) use the test set to estimate the out-of-sample error. 
This split ensures that the test set remains unbiased.\nLearning Rate\nAs we saw in the previous notebook, learning rate is an important parameter that decides how big of a jump we will make during gradient descent-based optimisation when moving in the negative gradient direction.\nIn order to select a good learning rate it is paramount that we track the state of the current error / loss / cost during training after each application of the gradient descent update rule. Below is a cartoon diagram illustrating the loss over the course of training. The shape of the error as training progresses can give a good indication as to what constitutes a good learning rate.\n<img src=\"resources/learningrates.jpeg\" alt=\"Choice of learning rate\" width=\"400\" />\nsource\n<div class=\"alert alert-danger\">\n<strong>Ideally we would want:</strong>\n<ul>\n <li>Small training error</li>\n <li>Little to no overfitting, i.e. *validation* performance measure matches the training performance measure (see figure below)</li>\n</ul>\n</div>\n\nValidation error refers to the error taken over a validation set on the current model.\n<img src=\"resources/validationset.jpeg\" alt=\"Validation set overfitting\" width=\"400\" />\nsource\nEpochs\nIn artificial neural network terminology one epoch typically means that every example in the training set has been seen once by the learning algorithm. It is generally preferable to track the number of epochs as opposed to the number of iterations, i.e. applications of an update rule, because the latter depends on the batch size.\nIn the literature, iteration is sometimes used synonymously with epoch.\n<div class=\"alert alert-danger\">\n<strong>Ideally we would want:</strong>\n<ul>\n <li>To avoid stopping the training too early</li>\n <li>To avoid training for too long</li>\n</ul>\n</div>\n\nBatch Size\nAs we saw in the previous notebook, we typically sum over multiple examples for a single application of an update rule.
The number of examples we include is the batch size.\nThe batch size allows us to control how much memory we need during training because we only need to sample examples for a single batch. This is important for when the entire dataset cannot fit in memory. The important thing to keep in mind when it comes to batch size is that the smaller the batch size the less accurate the estimate of the gradient over the training set will be. In other words, moves done by the update rule in the space over all trainable parameters become more noisy the smaller the batch size is.\n<div class=\"alert alert-danger\">\n<strong>Ideally we would want:</strong>\n<ul>\n <li>To fit a number of examples in memory</li>\n <li>Avoid unnecessary amounts of noise when updating trainable model parameters</li>\n</ul>\n</div>\n\nPlotting Error vs. Epoch with Keras\n<div class=\"alert alert-info\">\n <strong>In the following code snippet we will:</strong>\n <ul>\n <li>Create a model using the `simple_model()` function we made earlier</li>\n <li>Define all of the hyperparameters we will need</li>\n <li>Train the network using gradient descent</li>\n <li>Plot how the error evolves throughout training</li>\n </ul>\n</div>\n\nMake sure you understand most of the code below before you continue.", "\"\"\"Do not modify the following code.
It is to be used as a reference for future tasks.\n\"\"\"\n\n# Create a simple model\nmodel = simple_model(nb_features, nb_outputs)\n\n#\n# Define hyperparameters\n#\nlr = 0.2\nnb_epochs = 10\nbatch_size = 10\n\n# Fraction of the training data held as a validation set\nvalidation_split = 0.1\n\n# Define optimiser\noptimizer = keras.optimizers.sgd(lr=lr)\n\n# Compile model, use mean squared error\nmodel.compile(loss='mean_squared_error', optimizer=optimizer)\n\n# Print model\nmodel.summary()\n\n# Train and record history\nlogs = model.fit(X_train, y_train,\n batch_size=batch_size,\n epochs=nb_epochs,\n validation_split=validation_split,\n verbose=2)\n\n# Plot the error\nfig, ax = plt.subplots(1,1)\n\npd.DataFrame(logs.history).plot(ax=ax)\nax.grid(linestyle='dotted')\nax.legend()\n\nplt.show()\n\n# Estimation on unseen data can be done using the `predict()` function, e.g.:\n_y = model.predict(X_test)\n\n# Model parameters can be retrieved by calling `get_weights()`:\nweights = model.get_weights()", "Analysis\n\nNeither of the errors seems very good\nThe training performance (loss) does not seem to generalise well to the validation set (val_loss)\nThe training performance (loss) does not improve\n\nTask II: Tuning Hyperparameters\nIn this task you will get the opportunity to play with the hyperparameters we discussed in the previous section.\n<div class=\"alert alert-success\">\n**Task**: Tune the following hyperparameters until the `loss` (training error) and `val_loss` (validation error) both converge to low numbers:\n<ul>\n <li>Learning rate</li>\n <li>Number of epochs</li>\n <li>Batch size</li>\n </ul>\nNotice that there is no code for creating the optimiser nor for creating the model in the code below. Take a look in the previous code snippet for how to do this.
Remember, it is better to write the missing components down manually rather than copy-pasting them.\n</div>", "# Create a simple model\nmodel = None\n\n#\n# Define hyperparameters\n#\nlr = 0.2\nnb_epochs = 10\nbatch_size = 10\n\n# Fraction of the training data held as a validation set\nvalidation_split = 0.1\n\n# Define optimiser\n\n\n# Compile model, use mean squared error\n\n\n### Do *not* modify the following lines ###\n\n# Print model\nmodel.summary()\n\n# Train our network and do live plots of loss \ntools.assess_multivariate_model(model, X_train, y_train, X_test, y_test,\n test_dates, nb_epochs, batch_size,\n validation_split\n)", "Task III: Adding Regularization\nRegularisation is any modification made to a learning algorithm intended to reduce the generalisation error, i.e. the expected value of the error on an unseen example, but not the training error. Typically, this is interpreted as adjusting the complexity of the model by adding a regularisation term, or regulariser to the error function that we minimise:\n$$\n\\begin{equation}\n\\min_{h}\\sum_{i=1}^{N}E(h(\\mathbf{x}_i), y_i) + \\lambda R(h)\n\\end{equation}\n$$\nwhere $h$ is a hypothesis, $E$ is an error function, $R$ is the regularizer, and $\\lambda$ is a parameter for controlling the aforementioned regularizer. There are other ways to control the model complexity as well, such as noise injection, data augmentation, and early stopping, but in this notebook we will focus on the type above.\nIn case you want to review regularization material you can refer to the following material:\n\nWhat is regularization in plain english?\nRecommended video lecture 1\nRecommended video lecture 2\n\nAdding $L^2$ Regularization to Our Model\n$L^2$ regularization, otherwise known as weight decay, ridge regression, or Tikhonov regularization, is a popular form of regularization that penalises the norm of the model parameters. 
This is done by letting $R(h) = \\frac{1}{2}\\lVert\\mathbf{w}\\rVert_{2}^{2}$, which drives the weights towards the origin. Any point can be selected, but the origin is a good choice if we do not know the correct value. By multiplying with a factor of $\\frac{1}{2}$ we will simplify the gradient of $R(h)$.\n<div class=\"alert alert-success\">\n**Task**: Build a model using the Keras functional guide for the bike-sharing dataset, however, this time you will have to add $L^2$ regularization. Use the following functions to put together your model:\n <ul>\n <li><a href=\"https://keras.io/models/model/\">Input()</a></li>\n <li><a href=\"https://keras.io/models/model/\">Dense()</a> - Take a look at <a href=\"https://keras.io/regularizers/\">kernel_regularizer</a> for how to regularize the weights of a layer</li>\n <li><a href=\"https://keras.io/models/model/\">Model()</a></li>\n </ul>\nAs before, it may be helpful to browse other parts of the Keras documentation.\n</div>", "# Import what we need\nfrom keras import regularizers\n\n\ndef simple_model_l2(nb_inputs, nb_outputs, reg_factor):\n \"\"\"Return a L2 regularized Keras Model.\n \"\"\"\n model = None\n\n return model\n\n### Do *not* modify the following line ###\n# Test and see that the model has been created correctly\ntests.test_simple_model_regularized(simple_model_l2)", "Now, with this model, let's try to optimize the regularization factor $\\lambda$. This adjusts the strength of the regularizer.\n<div class=\"alert alert-success\">\n**Task**: Alter the regularization factor and assess the performance over 100 epochs using a batch size of 128. At a minimum, test out the following regularization strengths:\n<ul>\n <li> $\\lambda = 0.01$</li>\n <li> $\\lambda = 0.005$</li>\n <li> $\\lambda = 0.0005$</li>\n <li> $\\lambda = 0.00005$</li>\n</ul>\nSimilarly to the task where you had to tune hyperparameters, you will have to write down Keras code for creating an optimiser as well as the model. 
Remember, it is better to write the missing components down manually rather than copy-pasting them.\n</div>", "# Regularization factor (lambda)\nreg_factor = 0.005\n\n# Create a simple model\nmodel = None\n\n#\n# Define hyperparameters\n#\nlr = 0.0005\nnb_epochs = 100\nbatch_size = 128\n\nreg_factor = 0.0005\n\n# Fraction of the training data held as a validation set\nvalidation_split = 0.1\n\n# Define optimiser\n\n\n# Compile model, use mean squared error\n\n\n### Do *not* modify the following lines ###\n\n# Print model\nmodel.summary()\n\n# Train our network and do live plots of loss \nmodel = tools.assess_multivariate_model(model, X_train, y_train, X_test, y_test,\n test_dates, nb_epochs, batch_size,\n validation_split)\n\n# Print final model error\npredictions = model.predict(X_test)\nmse = np.mean((y_test - predictions)**2)\nprint('Mean squared error: {:.3}'.format(mse))\n\npredictions = model.predict(X_test[0:24 * 15])\nmse = np.mean((y_test[0:24 * 15] - predictions)**2)\nprint('Mean squared error for the first 15 days: {:.3}'.format(mse))", "Topics to Think About\n\nWhich of the models above performs better?\nHow can we improve the performance even further?"
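The weight-decay penalty discussed in Task III can also be illustrated without Keras at all. The following NumPy sketch runs gradient descent on least squares with the $\frac{\lambda}{2}\lVert\mathbf{w}\rVert_{2}^{2}$ term added, showing how a larger regularization factor shrinks the learned weights. The data and settings here are synthetic, made up purely for the example — this is not the course's bike-sharing setup:

```python
import numpy as np

def fit_ridge_gd(X, y, lam, lr=0.1, n_epochs=200):
    """Gradient descent on mean squared error plus lam * 0.5 * ||w||^2.

    The 1/2 factor makes the penalty's gradient simply lam * w,
    which is why it is a convenient convention."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(n_epochs):
        # data-fit gradient plus the L2 penalty gradient
        grad = X.T @ (X @ w - y) / n + lam * w
        w -= lr * grad
    return w

# Synthetic regression problem with known true weights [1, -2, 0.5]
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

w_weak = fit_ridge_gd(X, y, lam=0.0005)   # barely regularized: close to least squares
w_strong = fit_ridge_gd(X, y, lam=0.5)    # strongly regularized: weights pulled toward 0
```

With `lam=0.0005` the recovered weights stay near the true values, while `lam=0.5` visibly shrinks their norm — the same trade-off you should observe when sweeping the regularization strengths in the task above.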
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nntisapeh/intro_programming
notebooks/introducing_functions.ipynb
mit
[ "Introducing Functions\nOne of the core principles of any programming language is, \"Don't Repeat Yourself\". If you have an action that should occur many times, you can define that action once and then call that code whenever you need to carry out that action.\nWe are already repeating ourselves in our code, so this is a good time to introduce simple functions. Functions mean less work for us as programmers, and effective use of functions results in code that is less error-prone.\nPrevious: Lists and Tuples | \nHome |\nNext: If Statements\nContents\n\nWhat are functions?\nGeneral Syntax\n\n\nExamples\nReturning a Value\nExercises\n\n\nChallenges\n\nWhat are functions?\nFunctions are a set of actions that we group together, and give a name to. You have already used a number of functions from the core Python language, such as string.title() and list.sort(). We can define our own functions, which allows us to \"teach\" Python new behavior.\nGeneral Syntax\nA general function looks something like this:", "# Let's define a function.\ndef function_name(argument_1, argument_2):\n\t# Do whatever we want this function to do,\n\t# using argument_1 and argument_2\n\n# Use function_name to call the function.\nfunction_name(value_1, value_2)", "This code will not run, but it shows how functions are used in general.\n\nDefining a function\nGive the keyword def, which tells Python that you are about to define a function.\nGive your function a name. 
A variable name tells you what kind of value the variable contains; a function name should tell you what the function does.\nGive names for each value the function needs in order to do its work.\nThese are basically variable names, but they are only used in the function.\nThey can be different names than what you use in the rest of your program.\nThese are called the function's arguments.\n\n\nMake sure the function definition line ends with a colon.\nInside the function, write whatever code you need to make the function do its work.\n\n\nUsing your function\nTo call your function, write its name followed by parentheses.\nInside the parentheses, give the values you want the function to work with.\nThese can be variables such as current_name and current_age, or they can be actual values such as 'eric' and 5.\n\n\n\n\n\ntop\nBasic Examples\nFor a simple first example, we will look at a program that compliments people. Let's look at the example, and then try to understand the code. First we will look at a version of this program as we would have written it earlier, with no functions.", "print(\"You are doing good work, Adriana!\")\nprint(\"Thank you very much for your efforts on this project.\")\n\nprint(\"\\nYou are doing good work, Billy!\")\nprint(\"Thank you very much for your efforts on this project.\")\n\nprint(\"\\nYou are doing good work, Caroline!\")\nprint(\"Thank you very much for your efforts on this project.\")", "Functions take repeated code, put it in one place, and then you call that code when you want to use it. 
Here's what the same program looks like with a function.", "def thank_you(name):\n # This function prints a two-line personalized thank you message.\n print(\"\\nYou are doing good work, %s!\" % name)\n print(\"Thank you very much for your efforts on this project.\")\n \nthank_you('Adriana')\nthank_you('Billy')\nthank_you('Caroline')", "In our original code, each pair of print statements was run three times, and the only difference was the name of the person being thanked. When you see repetition like this, you can usually make your program more efficient by defining a function.\nThe keyword def tells Python that we are about to define a function. We give our function a name, thank_you() in this case. A variable's name should tell us what kind of information it holds; a function's name should tell us what the function does. We then put parentheses. Inside these parentheses we create variable names for any variable the function will need to be given in order to do its job. In this case the function will need a name to include in the thank you message. The variable name will hold the value that is passed into the function thank_you().\nTo use a function we give the function's name, and then put any values the function needs in order to do its work. In this case we call the function three times, each time passing it a different name.\nA common error\nA function must be defined before you use it in your program. For example, putting the function at the end of the program would not work.", "thank_you('Adriana')\nthank_you('Billy')\nthank_you('Caroline')\n\ndef thank_you(name):\n # This function prints a two-line personalized thank you message.\n print(\"\\nYou are doing good work, %s!\" % name)\n print(\"Thank you very much for your efforts on this project.\")", "On the first line we ask Python to run the function thank_you(), but Python does not yet know how to run this function.
We define our functions at the beginning of our programs, and then we can use them when we need to.\nA second example\nWhen we introduced the different methods for sorting a list, our code got very repetitive. It takes two lines of code to print a list using a for loop, so these two lines are repeated whenever you want to print out the contents of a list. This is the perfect opportunity to use a function, so let's see how the code looks with a function.\nFirst, let's see the code we had without a function:", "students = ['bernice', 'aaron', 'cody']\n\n# Put students in alphabetical order.\nstudents.sort()\n\n# Display the list in its current order.\nprint(\"Our students are currently in alphabetical order.\")\nfor student in students:\n print(student.title())\n\n# Put students in reverse alphabetical order.\nstudents.sort(reverse=True)\n\n# Display the list in its current order.\nprint(\"\\nOur students are now in reverse alphabetical order.\")\nfor student in students:\n print(student.title())", "Here's what the same code looks like, using a function to print out the list:", "###highlight=[2,3,4,5,6,12,16]\ndef show_students(students, message):\n # Print out a message, and then the list of students\n print(message)\n for student in students:\n print(student.title())\n\nstudents = ['bernice', 'aaron', 'cody']\n\n# Put students in alphabetical order.\nstudents.sort()\nshow_students(students, \"Our students are currently in alphabetical order.\")\n\n#Put students in reverse alphabetical order.\nstudents.sort(reverse=True)\nshow_students(students, \"\\nOur students are now in reverse alphabetical order.\")", "This is much cleaner code. We have an action we want to take, which is to show the students in our list along with a message. We give this action a name, show_students(). \nThis function needs two pieces of information to do its work, the list of students and a message to display. 
Inside the function, the code for printing the message and looping through the list is exactly as it was in the non-function code.\nNow the rest of our program is cleaner, because it gets to focus on the things we are changing in the list, rather than having code for printing the list. We define the list, then we sort it and call our function to print the list. We sort it again, and then call the printing function a second time, with a different message. This is much more readable code.\nAdvantages of using functions\nYou might be able to see some advantages of using functions, through this example:\n\nWe write a set of instructions once. We save some work in this simple example, and we save even more work in larger programs.\nWhen our function works, we don't have to worry about that code anymore. Every time you repeat code in your program, you introduce an opportunity to make a mistake. Writing a function means there is one place to fix mistakes, and when those bugs are fixed, we can be confident that this function will continue to work correctly.\nWe can modify our function's behavior, and that change takes effect every time the function is called. This is much better than deciding we need some new behavior, and then having to change code in many different places in our program.\n\nFor a quick example, let's say we decide our printed output would look better with some form of a bulleted list. Without functions, we'd have to change each print statement. 
With a function, we change just the print statement in the function:", "def show_students(students, message):\n # Print out a message, and then the list of students\n print(message)\n for student in students:\n print(\"- \" + student.title())\n\nstudents = ['bernice', 'aaron', 'cody']\n\n# Put students in alphabetical order.\nstudents.sort()\nshow_students(students, \"Our students are currently in alphabetical order.\")\n\n#Put students in reverse alphabetical order.\nstudents.sort(reverse=True)\nshow_students(students, \"\\nOur students are now in reverse alphabetical order.\")", "You can think of functions as a way to \"teach\" Python some new behavior. In this case, we taught Python how to create a list of students using hyphens; now we can tell Python to do this with our students whenever we want to.\nReturning a Value\nEach function you create can return a value. This can be in addition to the primary work the function does, or it can be the function's main job. The following function takes in a number, and returns the corresponding word for that number:", "def get_number_word(number):\n # Takes in a numerical value, and returns\n # the word corresponding to that number.\n if number == 1:\n return 'one'\n elif number == 2:\n return 'two'\n elif number == 3:\n return 'three'\n # ...\n \n# Let's try out our function.\nfor current_number in range(0,4):\n number_word = get_number_word(current_number)\n print(current_number, number_word)", "It's helpful sometimes to see programs that don't quite work as they are supposed to, and then see how those programs can be improved. In this case, there are no Python errors; all of the code has proper Python syntax. But there is a logical error, in the first line of the output.\nWe want to either not include 0 in the range we send to the function, or have the function return something other than None when it receives a value that it doesn't know. 
Let's teach our function the word 'zero', but let's also add an else clause that returns a more informative message for numbers that are not in the if-chain.", "###highlight=[13,14,17]\ndef get_number_word(number):\n # Takes in a numerical value, and returns\n # the word corresponding to that number.\n if number == 0:\n return 'zero'\n elif number == 1:\n return 'one'\n elif number == 2:\n return 'two'\n elif number == 3:\n return 'three'\n else:\n return \"I'm sorry, I don't know that number.\"\n \n# Let's try out our function.\nfor current_number in range(0,6):\n number_word = get_number_word(current_number)\n print(current_number, number_word)", "If you use a return statement in one of your functions, keep in mind that the function stops executing as soon as it hits a return statement. For example, we can add a line to the get_number_word() function that will never execute, because it comes after the function has returned a value:", "###highlight=[16,17,18]\ndef get_number_word(number):\n # Takes in a numerical value, and returns\n # the word corresponding to that number.\n if number == 0:\n return 'zero'\n elif number == 1:\n return 'one'\n elif number == 2:\n return 'two'\n elif number == 3:\n return 'three'\n else:\n return \"I'm sorry, I don't know that number.\"\n \n # This line will never execute, because the function has already\n # returned a value and stopped executing.\n print(\"This message will never be printed.\")\n \n# Let's try out our function.\nfor current_number in range(0,6):\n number_word = get_number_word(current_number)\n print(current_number, number_word)", "More Later\nThere is much more to learn about functions, but we will get to those details later. For now, feel free to use functions whenever you find yourself writing the same code several times in a program. 
Some of the things you will learn when we focus on functions:\n\nHow to give the arguments in your function default values.\nHow to let your functions accept different numbers of arguments.\n\ntop\nExercises\nGreeter\n\nWrite a function that takes in a person's name, and prints out a greeting.\nThe greeting must be at least three lines, and the person's name must be in each line.\n\n\nUse your function to greet at least three different people.\nBonus: Store your three people in a list, and call your function from a for loop.\n\nFull Names\n\nWrite a function that takes in a first name and a last name, and prints out a nicely formatted full name, in a sentence. Your sentence could be as simple as, \"Hello, full_name.\"\nCall your function three times, with a different name each time.\n\nAddition Calculator\n\nWrite a function that takes in two numbers, and adds them together. Make your function print out a sentence showing the two numbers, and the result.\nCall your function with three different sets of numbers.\n\nReturn Calculator\n\nModify Addition Calculator so that your function returns the sum of the two numbers. The printing should happen outside of the function.\n\nList Exercises - Functions\n\nGo back and solve each of the following exercises from the section on Lists and Tuples, using functions to avoid repetitive code:\nOrdered Working List\nOrdered Numbers\n\n\n\ntop\nChallenges\nLyrics\n\nMany songs follow a familiar variation of the pattern of verse, refrain, verse, refrain. The verses are the parts of the song that tell a story - they are not repeated in the song. The refrain is the part of the song that is repeated throughout the song.\nFind the lyrics to a song you like that follows this pattern. Write a program that prints out the lyrics to this song, using as few lines of Python code as possible.
hint\n\ntop\n\nPrevious: Lists and Tuples | \nHome |\nNext: If Statements\nHints\nThese are placed at the bottom, so you can have a chance to solve exercises without seeing any hints.\nLyrics\n\nDefine a function that prints out the lyrics of the refrain. Use this function any time the refrain comes up in the song." ]
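As a quick preview of the two features teased in the "More Later" section — giving arguments default values and accepting a variable number of arguments — here is a small sketch in the spirit of the greeter examples above. The function name and messages are illustrative, not part of the original exercises:

```python
def greeting_lines(name='everyone', *extra_names):
    # `name` falls back to 'everyone' when no argument is given;
    # *extra_names collects any additional positional arguments into a tuple.
    lines = []
    for person in (name,) + extra_names:
        lines.append("You are doing good work, %s!" % person.title())
    return lines

# Called with no arguments, the default value is used.
print("\n".join(greeting_lines()))

# Called with several names, the extras are gathered automatically.
print("\n".join(greeting_lines('adriana', 'billy', 'caroline')))
```

Because the function returns its lines instead of printing them directly, the caller decides how to display the greetings — the same separation of concerns practiced in the Return Calculator exercise.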
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ComputationalModeling/spring-2017-danielak
past-semesters/fall_2016/day-by-day/day12-exploratory-data-analysis-day1/Data_Exploration_Plotting.ipynb
agpl-3.0
[ "Exploring data\nNames of group members\n// Put your names here!\nGoals of this assignment\nThe purpose of this assignment is to explore data using visualization and statistics. \nSection 1\nThe file datafile_1.csv contains a three-dimensional dataset and associated uncertainty in the data. Read the data file into numpy arrays and visualize it using two new types of plots:\n\n2D plots of the various combinations of dimensions (x-y, x-z, y-z), including error bars (using the pyplot errorbar() method). Try plotting using symbols instead of lines, and make the error bars a different color than the points themselves.\n3D plots of all three dimensions at the same time using the mplot3d toolkit - in particular, look at the scatter() method. \n\nHints:\n\nLook at the documentation for numpy's loadtxt() method - in particular, what do the parameters skiprows, comments, and unpack do?\nIf you set up the 3D plot as described above, you can adjust the viewing angle with the command ax.view_init(elev=ANGLE1,azim=ANGLE2), where ANGLE1 and ANGLE2 are in degrees.", "# put your code here, and add additional cells as necessary.\n\n", "Section 2\nNow, we're going to experiment with data exploration. You have two data files to examine:\n\nGLB.Ts.csv, which contains mean global air temperature from 1880 through the present day (retrieved from the NASA GISS surface temperature website, \"Global-mean monthly, seasonal, and annual means, 1880-present\"). Each row in the data file contains the year, monthly global average, yearly global average, and seasonal global average. See this file for clues as to what the columns mean.\nbintanja2008.txt, which is a reconstruction of the global surface temperature, deep-sea temperature, ice volume, and relative sea level for the last 3 million years. 
This data comes from the National Oceanic and Atmospheric Administration's National Climatic Data Center website, and can be found here.\n\nSome important notes:\n\nThese data files are slightly modified versions of those on the website - they have been altered to remove some characters that don't play nicely with numpy (letters with accents), and symbols for missing data have been replaced with 'NaN', or \"Not a Number\", which numpy knows to ignore. No actual data has been changed.\nIn the file GLB.Ts.csv, the temperature units are in 0.01 degrees Celsius difference from the reference period 1950-1980 - in other words, the number 40 corresponds to a difference of +0.4 degrees C compared to the average temperature between 1950 and 1980. (This means you'll have to renormalize your values by a factor of 100.)\nIn the file bintanja2008.txt, column 9, \"Global sea level relative to present,\" is in confusing units - more positive values actually correspond to lower sea levels than less positive values. You may want to multiply column 9 by -1 in order to get more sensible values.\n\nThere are many possible ways to examine this data. First, read both data files into numpy arrays - it's fine to load them into a single combined multi-dimensional array if you want, or split the data into multiple arrays. We'll then try a few things:\n\nFor both datasets, make some plots of the raw data, particularly as a function of time. What do you see? How is the data \"shaped\"? Is there periodicity?\nDo some simple data analysis. What are the minimum, maximum, and mean values of the various quantities? (You may have problems with NaN - see nanmin and similar methods)\nIf you calculate some sort of average for annual temperature in GLB.Ts.csv (say, the average temperature smoothed over 10 years), how might you characterize the yearly variability? 
Try plotting the smoothed value along with the raw data and show how they differ.\nThere are several variables in the file bintanja2008.txt - try plotting multiple variables as a function of time together using the pyplot subplot functionality (and some more complicated subplot examples for further help). Do they seem to be related in some way? (Hint: plot surface temperature, deep sea temperature, ice volume, and sea level, and zoom in from 3 Myr to ~100,000 years)\nWhat about plotting the non-time quantities in bintanja2008.txt versus each other (i.e., surface temperature vs. ice volume or sea level) - do you see correlations?", "# put your code here, and add additional cells as necessary.\n\n", "In the cell below, describe some of the conclusions that you've drawn from the data you have just explored!\n// put your thoughts here.\nAssignment wrapup\nPlease fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!", "from IPython.display import HTML\nHTML(\n\"\"\"\n<iframe \n\tsrc=\"https://goo.gl/forms/Jg6Mxb0ZTvwiSe4R2?embedded=true\" \n\twidth=\"80%\" \n\theight=\"1200px\" \n\tframeborder=\"0\" \n\tmarginheight=\"0\" \n\tmarginwidth=\"0\">\n\tLoading...\n</iframe>\n\"\"\"\n)", "Congratulations, you're done!\nSubmit this assignment by uploading it to the course Desire2Learn web page. Go to the \"In-class assignments\" folder, find the dropbox link for Day 12, and upload it there." ]
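The NaN-aware statistics and multi-year smoothing suggested in Section 2 can be sketched as follows. The values below are made-up stand-ins for the anomalies in GLB.Ts.csv — only the technique carries over:

```python
import numpy as np

# Hypothetical yearly temperature anomalies; NaN marks a missing
# year, as in the course data files.
anomalies = np.array([0.1, 0.2, np.nan, 0.15, 0.3, 0.25, 0.4, np.nan, 0.5, 0.45])

# nan-aware summaries skip the missing entries instead of returning NaN
print(np.nanmin(anomalies), np.nanmax(anomalies), np.nanmean(anomalies))

def moving_average(values, window):
    """Smooth a series with a simple centered moving average,
    ignoring NaNs inside each window."""
    half = window // 2
    out = np.empty_like(values)
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out[i] = np.nanmean(values[lo:hi])
    return out

smoothed = moving_average(anomalies, window=5)
```

Plotting `smoothed` on top of the raw `anomalies` (as the notebook asks for the 10-year average) makes the year-to-year variability around the trend easy to see.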
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Yu-Group/scikit-learn-sandbox
jupyter/backup_deprecated_nbs/13_RIT_Manual_Example.ipynb
mit
[ "Key Requirements for the iRF scikit-learn implementation\n\nThe following is a documentation of the main requirements for the iRF implementation\n\nTypical Setup", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfrom sklearn.datasets import load_iris\nfrom sklearn.datasets import load_breast_cancer\nimport numpy as np\nfrom functools import reduce\n\n# Import our custom utilities\nfrom imp import reload\nfrom utils import irf_jupyter_utils\nfrom utils import irf_utils\nreload(irf_jupyter_utils)\nreload(irf_utils)", "Step 1: Fit the Initial Random Forest\n\nJust fit every feature with equal weights per the usual random forest code e.g. DecisionForestClassifier in scikit-learn", "%timeit\nX_train, X_test, y_train, y_test, rf = irf_jupyter_utils.generate_rf_example(sklearn_ds = load_breast_cancer(), \n train_split_propn=0.9, \n n_estimators=3,\n random_state_split=2017,\n random_state_classifier=2018)", "Check out the data", "print(\"Training feature dimensions\", X_train.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"Training outcome dimensions\", y_train.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"Test feature dimensions\", X_test.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"Test outcome dimensions\", y_test.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"first 5 rows of the training set features\", X_train[:5], sep = \":\\n\")\nprint(\"\\n\")\nprint(\"first 5 rows of the training set outcomes\", y_train[:5], sep = \":\\n\")\n\nX_train.shape[0]\nbreast_cancer = load_breast_cancer()\nbreast_cancer.data.shape[0]", "Step 2: For each Tree get core leaf node features\n\nFor each decision tree in the classifier, get:\nThe list of leaf nodes\nDepth of the leaf node \nLeaf node predicted class i.e. {0, 1}\nProbability of predicting class in leaf node\nNumber of observations in the leaf node i.e. 
weight of node\n\n\n\nGet the 3 Decision trees to use for testing", "# Import our custom utilities\nrf.n_estimators\n\nestimator0 = rf.estimators_[0] # First tree\nestimator1 = rf.estimators_[1] # Second tree\nestimator2 = rf.estimators_[2] # Third tree", "Design the single function to get the key tree information\nGet data from the first, second and third decision trees", "tree_dat0 = irf_utils.getTreeData(X_train = X_train, dtree = estimator0, root_node_id = 0)\ntree_dat1 = irf_utils.getTreeData(X_train = X_train, dtree = estimator1, root_node_id = 0)\ntree_dat2 = irf_utils.getTreeData(X_train = X_train, dtree = estimator2, root_node_id = 0)", "Decision Tree 0 (First) - Get output\nCheck the output against the decision tree graph", "# Now plot the trees individually\nirf_jupyter_utils.draw_tree(decision_tree = estimator0)\n\nirf_jupyter_utils.prettyPrintDict(inp_dict = tree_dat0)\n\n# Count the number of samples passing through the leaf nodes\nsum(tree_dat0['tot_leaf_node_values'])", "Step 3: Get the Gini Importance of Weights for the Random Forest\n\nFor the first random forest we just need to get the Gini Importance of Weights\n\nStep 3.1 Get them numerically - most important", "feature_importances = rf.feature_importances_\nstd = np.std([dtree.feature_importances_ for dtree in rf.estimators_]\n             , axis=0)\nfeature_importances_rank_idx = np.argsort(feature_importances)[::-1]\n\n# Check that the feature importances are standardized to 1\nprint(sum(feature_importances))", "Step 3.2 Display Feature Importances Graphically (just for interest)", "# Print the feature ranking\nprint(\"Feature ranking:\")\n\nfor f in range(X_train.shape[1]):\n    print(\"%d. 
feature %d (%f)\" % (f + 1\n , feature_importances_rank_idx[f]\n , feature_importances[feature_importances_rank_idx[f]]))\n \n# Plot the feature importances of the forest\nplt.figure()\nplt.title(\"Feature importances\")\nplt.bar(range(X_train.shape[1])\n , feature_importances[feature_importances_rank_idx]\n , color=\"r\"\n , yerr = std[feature_importances_rank_idx], align=\"center\")\nplt.xticks(range(X_train.shape[1]), feature_importances_rank_idx)\nplt.xlim([-1, X_train.shape[1]])\nplt.show()", "Putting it all together\n\nCreate a dictionary object to include all of the random forest objects", "# CHECK: If the random forest objects are going to be really large in size\n# we could just omit them and only return our custom summary outputs\n\nrf_metrics = irf_utils.getValidationMetrics(rf, y_true = y_test, X_test = X_test)\nall_rf_outputs = {\"rf_obj\" : rf,\n \"feature_importances\" : feature_importances,\n \"feature_importances_rank_idx\" : feature_importances_rank_idx,\n \"rf_metrics\" : rf_metrics}\n\n# CHECK: The following should be paralellized!\n# CHECK: Whether we can maintain X_train correctly as required\nfor idx, dtree in enumerate(rf.estimators_):\n dtree_out = irf_utils.getTreeData(X_train = X_train, dtree = dtree, root_node_id = 0)\n # Append output to dictionary\n all_rf_outputs[\"dtree\" + str(idx)] = dtree_out", "Check the final dictionary of outputs", "irf_jupyter_utils.prettyPrintDict(inp_dict = all_rf_outputs)", "Now we can start setting up the RIT class\nOverview\nAt it's core, the RIT is comprised of 3 main modules\n* FILTERING: Subsetting to either the 1's or the 0's\n* RANDOM SAMPLING: The path-nodes in a weighted manner, with/ without replacement, within tree/ outside tree\n* INTERSECTION: Intersecting the selected node paths in a systematic manner\nFor now we will just work with a single decision tree outputs", "irf_jupyter_utils.prettyPrintDict(inp_dict = all_rf_outputs['rf_metrics'])\n\nall_rf_outputs['dtree0']", "Get the leaf node 1's 
paths\nGet the unique feature paths where the leaf node predicted class is just 1", "uniq_feature_paths = all_rf_outputs['dtree0']['all_uniq_leaf_paths_features']\nleaf_node_classes = all_rf_outputs['dtree0']['all_leaf_node_classes']\nones_only = [i for i, j in zip(uniq_feature_paths, leaf_node_classes) \n if j == 1]\nones_only\n\nprint(\"Number of leaf nodes\", len(all_rf_outputs['dtree0']['all_uniq_leaf_paths_features']), sep = \":\\n\")\nprint(\"Number of leaf nodes with 1 class\", len(ones_only), sep = \":\\n\")\n\n# Just pick the last seven cases, we are going to manually construct\n# binary RIT of depth 3 i.e. max 2**3 -1 = 7 intersecting nodes\nones_only_seven = ones_only[-7:]\nones_only_seven\n\n# Construct a binary version of the RIT manually!\n# This should come in useful for unit tests!\nnode0 = ones_only_seven[-1]\nnode1 = np.intersect1d(node0, ones_only_seven[-2])\nnode2 = np.intersect1d(node1, ones_only_seven[-3])\nnode3 = np.intersect1d(node1, ones_only_seven[-4])\nnode4 = np.intersect1d(node0, ones_only_seven[-5])\nnode5 = np.intersect1d(node4, ones_only_seven[-6])\nnode6 = np.intersect1d(node4, ones_only_seven[-7])\n\nintersected_nodes_seven = [node0, node1, node2, node3, node4, node5, node6]\n\nfor idx, node in enumerate(intersected_nodes_seven):\n print(\"node\" + str(idx), node)\n\nrit_output = reduce(np.union1d, (node2, node3, node5, node6))\nrit_output" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fazzolini/fast_ai
deeplearning1/nbs/lesson4.ipynb
apache-2.0
[ "from __future__ import division, print_function\n%matplotlib inline\nfrom importlib import reload # Python 3\nimport utils; reload(utils)\nfrom utils import *\nfrom keras.layers.merge import dot, add, concatenate\n\npath = \"data/ml-latest-small/\" # from https://grouplens.org/datasets/movielens/\n#path = \"data/ml-20m/\"\nmodel_path = path + 'models/'\nif not os.path.exists(model_path): os.mkdir(model_path)\n\nbatch_size=64\n#batch_size=1", "Set up data\nWe're working with the movielens data, which contains one rating per row, like this:", "ratings = pd.read_csv(path+'ratings.csv')\nratings.head()\n\nlen(ratings)", "Just for display purposes, let's read in the movie names too.", "movie_names = pd.read_csv(path+'movies.csv').set_index('movieId')['title'].to_dict\n\nusers = ratings.userId.unique()\nmovies = ratings.movieId.unique()\n\n# userId and movieId become dictionary keys with values ranging from 0 to max len \nuserid2idx = {o:i for i,o in enumerate(users)}\nmovieid2idx = {o:i for i,o in enumerate(movies)}", "We update the movie and user ids so that they are contiguous integers, which we want when using embeddings.", "ratings.movieId = ratings.movieId.apply(lambda x: movieid2idx[x])\nratings.userId = ratings.userId.apply(lambda x: userid2idx[x])\n\nuser_min, user_max, movie_min, movie_max = (ratings.userId.min(), \n    ratings.userId.max(), ratings.movieId.min(), ratings.movieId.max())\nuser_min, user_max, movie_min, movie_max\n\nn_users = ratings.userId.nunique()\nn_movies = ratings.movieId.nunique()\nn_users, n_movies", "This is the number of latent factors in each embedding.", "n_factors = 50\n\nnp.random.seed(42)", "Randomly split into training and validation.", "msk = np.random.rand(len(ratings)) < 0.8\ntrn = ratings[msk]\nval = ratings[~msk]", "Create subset for Excel\nWe create a crosstab of the most popular movies and most movie-addicted users which we'll copy into Excel for creating a simple example. 
This isn't necessary for any of the modeling below however.", "g=ratings.groupby('userId')['rating'].count()\ntopUsers=g.sort_values(ascending=False)[:15]\n\ng=ratings.groupby('movieId')['rating'].count()\ntopMovies=g.sort_values(ascending=False)[:15]\n\ntop_r = ratings.join(topUsers, rsuffix='_r', how='inner', on='userId')\n\ntop_r = top_r.join(topMovies, rsuffix='_r', how='inner', on='movieId')\n\npd.crosstab(top_r.userId, top_r.movieId, top_r.rating, aggfunc=np.sum)", "Dot product\nThe most basic model is a dot product of a movie embedding and a user embedding. Let's see how well that works:", "user_in = Input(shape=(1,), dtype='int64', name='user_in')\nu = Embedding(input_dim=n_users, output_dim=n_factors, input_length=1, embeddings_regularizer=l2(1e-4))(user_in)\nmovie_in = Input(shape=(1,), dtype='int64', name='movie_in')\nm = Embedding(input_dim=n_movies, output_dim=n_factors, input_length=1, embeddings_regularizer=l2(1e-4))(movie_in)\n\nx = dot([u, m], axes=2)\nx = Flatten()(x)\nmodel = Model([user_in, movie_in], x)\nmodel.compile(Adam(0.001), loss='mse')\n\nmodel.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, epochs=1, \n validation_data=([val.userId, val.movieId], val.rating))\n\nmodel.optimizer.lr=0.01\n\nmodel.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, epochs=3, \n validation_data=([val.userId, val.movieId], val.rating))\n\nmodel.optimizer.lr=0.001\n\nmodel.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, epochs=6, \n validation_data=([val.userId, val.movieId], val.rating))", "The best benchmarks are a bit over 0.9, so this model doesn't seem to be working that well...\nBias\nThe problem is likely to be that we don't have bias terms - that is, a single bias for each user and each movie representing how positive or negative each user is, and how good each movie is. 
We can add that easily by simply creating an embedding with one output for each movie and each user, and adding it to our output.", "def embedding_input(name, n_in, n_out, reg):\n inp = Input(shape=(1,), dtype='int64', name=name)\n return inp, Embedding(input_dim=n_in, output_dim=n_out, input_length=1, embeddings_regularizer=l2(reg))(inp)\n\nuser_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)\nmovie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)\n\ndef create_bias(inp, n_in):\n x = Embedding(input_dim=n_in, output_dim=1, input_length=1)(inp)\n return Flatten()(x)\n\nub = create_bias(user_in, n_users)\nmb = create_bias(movie_in, n_movies)\n\nx = dot([u, m], axes=2)\nx = Flatten()(x)\nx = add([x, ub])\nx = add([x, mb])\nmodel = Model([user_in, movie_in], x)\nmodel.compile(Adam(0.001), loss='mse')\n\nmodel.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, epochs=1, \n validation_data=([val.userId, val.movieId], val.rating))\n\nmodel.optimizer.lr=0.01\n\nmodel.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, epochs=6, \n validation_data=([val.userId, val.movieId], val.rating))\n\nmodel.optimizer.lr=0.001\n\nmodel.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, epochs=10, \n validation_data=([val.userId, val.movieId], val.rating))\n\nmodel.fit([trn.userId, trn.movieId], trn.rating, batch_size=batch_size, epochs=5, \n validation_data=([val.userId, val.movieId], val.rating))", "This result is quite a bit better than the best benchmarks that we could find with a quick google search - so looks like a great approach!", "model.save_weights(model_path+'bias.h5')\n\nmodel.load_weights(model_path+'bias.h5')", "We can use the model to generate predictions by passing a pair of ints - a user id and a movie id. 
For instance, this predicts that user #3 would really enjoy movie #6.", "model.predict([np.array([3]), np.array([6])])", "Analyze results\nTo make the analysis of the factors more interesting, we'll restrict it to the top 2000 most popular movies.", "g=ratings.groupby('movieId')['rating'].count()\ntopMovies=g.sort_values(ascending=False)[:2000]\ntopMovies = np.array(topMovies.index)", "First, we'll look at the movie bias term. We create a 'model' - which in keras is simply a way of associating one or more inputs with one more more outputs, using the functional API. Here, our input is the movie id (a single id), and the output is the movie bias (a single float).", "get_movie_bias = Model(movie_in, mb)\nmovie_bias = get_movie_bias.predict(topMovies)\nmovie_ratings = [(b[0], movie_names()[movies[i]]) for i,b in zip(topMovies,movie_bias)]", "Now we can look at the top and bottom rated movies. These ratings are corrected for different levels of reviewer sentiment, as well as different types of movies that different reviewers watch.", "sorted(movie_ratings, key=itemgetter(0))[:15]\n\nsorted(movie_ratings, key=itemgetter(0), reverse=True)[:15]", "We can now do the same thing for the embeddings.", "get_movie_emb = Model(movie_in, m)\nmovie_emb = np.squeeze(get_movie_emb.predict([topMovies]))\nmovie_emb.shape", "Because it's hard to interpret 50 embeddings, we use PCA to simplify them down to just 3 vectors.", "from sklearn.decomposition import PCA\npca = PCA(n_components=3)\nmovie_pca = pca.fit(movie_emb.T).components_\n\nfac0 = movie_pca[0]\n\nmovie_comp = [(f, movie_names()[movies[i]]) for f,i in zip(fac0, topMovies)]", "Here's the 1st component. 
It seems to be 'critically acclaimed' or 'classic'.", "sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]\n\nsorted(movie_comp, key=itemgetter(0))[:10]\n\nfac1 = movie_pca[1]\n\nmovie_comp = [(f, movie_names()[movies[i]]) for f,i in zip(fac1, topMovies)]", "The 2nd is 'hollywood blockbuster'.", "sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]\n\nsorted(movie_comp, key=itemgetter(0))[:10]\n\nfac2 = movie_pca[2]\n\nmovie_comp = [(f, movie_names()[movies[i]]) for f,i in zip(fac2, topMovies)]", "The 3rd is 'violent vs happy'.", "sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]\n\nsorted(movie_comp, key=itemgetter(0))[:10]", "We can draw a picture to see how various movies appear on the map of these components. This picture shows the 1st and 3rd components.", "# The following would be for Python 2 only\n# reload(sys)\n# sys.setdefaultencoding('utf8')\n\nstart=50; end=100\nX = fac0[start:end]\nY = fac2[start:end]\nplt.figure(figsize=(15,15))\nplt.scatter(X, Y)\nfor i, x, y in zip(topMovies[start:end], X, Y):\n plt.text(x,y,movie_names()[movies[i]], color=np.random.rand(3)*0.7, fontsize=14)\nplt.show()", "Neural net\nRather than creating a special purpose architecture (like our dot-product with bias earlier), it's often both easier and more accurate to use a standard neural network. Let's try it! 
Here, we simply concatenate the user and movie embeddings into a single vector, which we feed into the neural net.", "user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)\nmovie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)\n\nx = concatenate([u, m], axis=2)\nx = Flatten()(x)\nx = Dropout(0.3)(x)\nx = Dense(70, activation='relu')(x)\nx = Dropout(0.75)(x)\nx = Dense(1)(x)\nnn = Model([user_in, movie_in], x)\nnn.compile(Adam(0.001), loss='mse')\n\nnn.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, epochs=8, \n validation_data=([val.userId, val.movieId], val.rating))", "This improves on our already impressive accuracy even further!" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
UWSEDS/LectureNotes
week_3/Functions.ipynb
bsd-2-clause
[ "Functions\nFunctions are key to creating reusable software, testing, and working in teams.\nThis lecture motivates the use of functions, discusses how functions are defined in python, and\nintroduces a workflow that starts with exploratory code and produces a function.\nTopics\n- Creating reusable software components\n- Motivating example\n- Python function syntax\n- Name scoping in functions\n- Keyword arguments\n- Exercise\n- Function Driven Workflow\nCreating Reusable Software Components\n\nWhat makes a component reusable?\nSignature of a software component\nInputs\nHow they are \"passed\"\nData types\nSemantics\n\n\nOutputs\nSide effects\n\nMotivating Example", "# Our prime number example from week 1\nN = 10\nfor candidate in range(2, N):\n # n is candidate prime. Check if n is prime\n is_prime = True\n for m in range(2, candidate):\n if (candidate % m) == 0:\n is_prime = False\n break\n if is_prime:\n print(\"%d is prime!\" % candidate)", "Issues with making a function:\n1. What does it do?\n - Finds all primes less than or equal to N\n1. What are the inputs?\n - Input 1: start range (int)\n - Input 2: end range (int)\n1. What are the outputs?\n - Output: list-int", "# Our prime number example from week 1\nN = 10\nresult = []\nfor candidate in range(2, N):\n # n is candidate prime. Check if n is prime\n is_prime = True\n for m in range(2, candidate):\n if (candidate % m) == 0:\n is_prime = False\n break\n if is_prime:\n result.append(candidate)\n \nprint(result)", "Some questions\n1. How can we recast this script as a component?\n - Inputs\n - Outputs\n2. Should the component itself be recast as having another reusable component?\nPython Function Syntax\nTransform the above script into a python function.\n1. Function definition\n1. Formal arguments\n1. Calling a function", "def identify_primes(start_range, end_range):\n result = []\n for candidate in range(start_range, end_range):\n # n is candidate prime. 
Check if n is prime\n is_prime = True\n for m in range(2, candidate):\n if (candidate % m) == 0:\n is_prime = False\n break\n if is_prime:\n result.append(candidate)\n return(result)\n\nidentify_primes(2, 10)\n\nidentify_primes(5, 10)", "Name Scoping in Functions", "# Example 1: function invocation vs. formal arguments\ndef add_one(a):\n b = 10\n return a + 1\n#\na = 1\nb = 2\nprint(\"add_one(a): %d\" % add_one(a))\nprint(\"add_one(b): %d\" % add_one(b))\n\nb", "Why is func(b) equal to 3 when the function is defined in terms of a which equals 1?", "# Example 2: formal argument vs. global variable\ndef func(a):\n y = a + 1\n return y\n#\n# The following causes an error when False is changed to True. Why?\ny = 23\nfunc(2)\nprint(\"After call value of y: %d\" % y)", "Why didn't the value of y change? Shouldn't it be y = 3?", "# Example 3: manipulation of global variables\nx = 5\ndef func(a):\n global x\n x = 2*a\n#\n#print(\"Before call value of x = %d\" % x)\nfunc(2)\nprint(x)\n#print(\"After call value of x = %d\" % x)", "Why didn't the value of x change?\nRefactoring a Function", "def identify_primes2(start_range, end_range):\n result = []\n for candidate in range(start_range, end_range):\n # n is candidate prime. Check if n is prime\n if is_prime(candidate):\n result.append(candidate)\n return(result)\n\nidentify_primes2(2, 10)\n\ndef identify_primes2(start_range, end_range):\n result = []\n for candidate in range(start_range, end_range):\n # n is candidate prime. 
Check if n is prime\n        if is_prime(candidate):\n            result.append(candidate)\n    return(result)\n\nidentify_primes2(2, 10)\n# Return True if number is prime\ndef is_prime(candidate):\n    is_prime = True\n    for m in range(2, candidate):\n        if (candidate % m) == 0:\n            is_prime = False\n    return is_prime\n\n# Test cases\nprint(\"Should be prime %d\" %is_prime(53)) # should be prime\nprint(\"Should not be prime %d\" %is_prime(52)) # should not be prime\nprint(\"0 input %d\" % is_prime(0))", "Keyword Arguments\nFunctions evolve over time, and so it is natural that you'll want to add arguments. But when you do, you \"break\" existing code that doesn't include those arguments. \nPython has a great capability to deal with this -- keyword arguments.", "# Optionally check for negative values\ndef identify_primes3(start_range, end_range, is_check=True):\n    if is_check:\n        if start_range < 0:\n            print(\"Bad\")\n            return\n    result = []\n    for candidate in range(start_range, end_range):\n        # n is candidate prime. Check if n is prime\n        if is_prime(candidate):\n            result.append(candidate)\n    return(result)\n\nidentify_primes3(-2, 10, is_check=False)\n\n# Extend find_primes to return None if argument is negative.", "Exercise\n\n\nFind which substrings are present in a string.\n\n\nFor example, consider the string \"The lazy brown fox jumped over the fence.\" Which of the following substrings is present: \"ow\", \"low\", \"row\" and how many occurrences are there?\n\n\nWrite a function that produces the desired result for this example.", "a_string = \"The lazy brown fox jumped over the fence.\"\na_string.index(\"lazy\")\n\ndef find_substrings(base_string, substrings):\n    result = []\n    for stg in substrings:\n        if base_string.find(stg) >= 0:\n            result.append(stg)\n    return result\n\nfind_substrings(\"The lazy brown fox jumped over the fence.\", [\"azy\", \"jumped\", \"hopped\"])", "Function Driven Workflow\n\nScript in a notebook\nCreate functions from scripts\nCopy functions in a python module\nReplace functions in 
notebook with use of functions in module\nTo use a function inside a notebook, you must import its containing module.", "!ls\n\nimport identify_prime\nidentify_prime.identify_primes(2, 20)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jenshnielsen/HJCFIT
exploration/CH82.ipynb
gpl-3.0
[ "CH82 Model\nThe following tries to reproduce Fig 8 from Hawkes, Jalali, Colquhoun (1992).\nFirst we create the $Q$-matrix for this particular model. Please note that the units are different from other publications.", "%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom dcprogs.likelihood import QMatrix\n\ntau = 1e-4\nqmatrix = QMatrix([[ -3050,        50,  3000,      0,    0 ], \n                   [ 2./3., -1502./3.,     0,    500,    0 ], \n                   [    15,         0, -2065,     50, 2000 ], \n                   [     0,     15000,  4000, -19000,    0 ], \n                   [     0,         0,    10,      0,  -10 ] ], 2)\nqmatrix.matrix /= 1000.0", "We first reproduce the top two panels showing $\\mathrm{det} W(s)$ for open and shut times.\nThese quantities can be accessed using dcprogs.likelihood.DeterminantEq. The plots are done using a standard plotting function from the dcprogs.likelihood package as well.", "from dcprogs.likelihood import plot_roots, DeterminantEq\n\nfig, ax = plt.subplots(1, 2, figsize=(7,5))\n\nplot_roots(DeterminantEq(qmatrix, 0.2), ax=ax[0])\nax[0].set_xlabel('Laplace $s$')\nax[0].set_ylabel('$\\\\mathrm{det} ^{A}W(s)$')\n\nplot_roots(DeterminantEq(qmatrix, 0.2).transpose(), ax=ax[1])\nax[1].set_xlabel('Laplace $s$')\nax[1].set_ylabel('$\\\\mathrm{det} ^{F}W(s)$')\nax[1].yaxis.tick_right()\nax[1].yaxis.set_label_position(\"right\")\n\nfig.tight_layout()", "Then we want to plot the panels c and d showing the excess shut and open-time probability densities $(\\tau = 0.2)$. To do this we need to access each exponential that makes up the approximate survivor function. We could use:", "from dcprogs.likelihood import ApproxSurvivor\napprox = ApproxSurvivor(qmatrix, tau)\ncomponents = approx.af_components\nprint(components[:1])", "The list components above contains 2-tuples with the weight (as a matrix) and the exponent (or root) for each exponential component in $^{A}R_{\\mathrm{approx}}(t)$. 
We could then create Python functions pdf(t) for each exponential component, as is done below for the first root:", "from dcprogs.likelihood import MissedEventsG\n\nweight, root = components[1]\neG = MissedEventsG(qmatrix, tau)\n# Note: the sum below is equivalent to a scalar product with u_F\ncoefficient = sum(np.dot(eG.initial_occupancies, np.dot(weight, eG.af_factor)))\npdf = lambda t: coefficient * np.exp(t * root) ", "The initial occupancies, as well as the $Q_{AF}e^{-Q_{FF}\\tau}$ factor are obtained directly from the object implementing the missed event likelihood $^{e}G(t)$.\nHowever, there is a convenience function that does all the above in the package. Since it is generally of little use, it is not currently exported to the dcprogs.likelihood namespace. So we create below a plotting function that uses it.", "from dcprogs.likelihood._methods import exponential_pdfs\n\ndef plot_exponentials(qmatrix, tau, x=None, ax=None, nmax=2, shut=False):\n    from dcprogs.likelihood import missed_events_pdf\n    if ax is None:\n        fig, ax = plt.subplots(1,1)\n    if x is None: x = np.arange(0, 5*tau, tau/10)\n    pdf = missed_events_pdf(qmatrix, tau, nmax=nmax, shut=shut)\n    graphb = [x, pdf(x+tau), '-k']\n    functions = exponential_pdfs(qmatrix, tau, shut=shut)\n    plots = ['.r', '.b', '.g'] \n    together = None\n    for f, p in zip(functions[::-1], plots):\n        if together is None: together = f(x+tau)\n        else: together = together + f(x+tau)\n        graphb.extend([x, together, p])\n\n    ax.plot(*graphb)\n\nfig, ax = plt.subplots(1,2, figsize=(7,5))\nax[0].set_xlabel('time $t$ (ms)')\nax[0].set_ylabel('Excess open-time probability density $f_{\\\\bar{\\\\tau}=0.2}(t)$')\nplot_exponentials(qmatrix, 0.2, shut=False, ax=ax[0])\n\nplot_exponentials(qmatrix, 0.2, shut=True, ax=ax[1])\nax[1].set_xlabel('time $t$ (ms)')\nax[1].set_ylabel('Excess shut-time probability density 
$f_{\\\\bar{\\\\tau}=0.2}(t)$')\nax[1].yaxis.tick_right()\nax[1].yaxis.set_label_position(\"right\")\nfig.tight_layout()", "Finally, we create the last plot (e), and throw in an (f) for good measure.", "fig, ax = plt.subplots(1,2, figsize=(7,5))\nax[0].set_xlabel('time $t$ (ms)')\nax[0].set_ylabel('Excess open-time probability density $f_{\\\\bar{\\\\tau}=0.5}(t)$')\nplot_exponentials(qmatrix, 0.5, shut=False, ax=ax[0])\n\nplot_exponentials(qmatrix, 0.5, shut=True, ax=ax[1])\nax[1].set_xlabel('time $t$ (ms)')\nax[1].set_ylabel('Excess shut-time probability density $f_{\\\\bar{\\\\tau}=0.5}(t)$')\nax[1].yaxis.tick_right()\nax[1].yaxis.set_label_position(\"right\")\n\nfig.tight_layout()\n\nfrom dcprogs.likelihood import QMatrix, MissedEventsG\n\ntau = 1e-4\nqmatrix = QMatrix([[ -3050, 50, 3000, 0, 0 ], \n [ 2./3., -1502./3., 0, 500, 0 ], \n [ 15, 0, -2065, 50, 2000 ], \n [ 0, 15000, 4000, -19000, 0 ], \n [ 0, 0, 10, 0, -10 ] ], 2)\neG = MissedEventsG(qmatrix, tau, 2, 1e-8, 1e-8)\nmeG = MissedEventsG(qmatrix, tau)\nt = 3.5* tau\n\nprint(eG.initial_CHS_occupancies(t) - meG.initial_CHS_occupancies(t))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/ja/addons/tutorials/time_stopping.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "TensorFlowアドオンのコールバック:TimeStopping\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n  <td><a target=\"_blank\" href=\"https://www.tensorflow.org/addons/tutorials/time_stopping\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\"> TensorFlow.orgで表示</a></td>\n  <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/addons/tutorials/time_stopping.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\"> Google Colab で実行</a></td>\n  <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/addons/tutorials/time_stopping.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub でソースを表示</a></td>\n  <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/addons/tutorials/time_stopping.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">ノートブックをダウンロード</a></td>\n</table>\n\n概要\nこのノートブックでは、TensorFlowアドオンでTimeStoppingコールバックを使用する方法を紹介します。\nセットアップ", "!pip install -U tensorflow-addons\n\nimport tensorflow_addons as tfa\n\nfrom tensorflow.keras.datasets import mnist\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Dropout, Flatten", "データのインポートと正規化", "# the data, split between train and test sets\n(x_train, y_train), (x_test, y_test) = 
mnist.load_data()\n# normalize data\nx_train, x_test = x_train / 255.0, x_test / 255.0", "シンプルなMNIST CNNモデルの構築", "# build the model using the Sequential API\nmodel = Sequential()\nmodel.add(Flatten(input_shape=(28, 28)))\nmodel.add(Dense(128, activation='relu'))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(10, activation='softmax'))\n\nmodel.compile(optimizer='adam',\n loss = 'sparse_categorical_crossentropy',\n metrics=['accuracy'])", "シンプルなTimeStoppingの使用法", "# initialize TimeStopping callback \ntime_stopping_callback = tfa.callbacks.TimeStopping(seconds=5, verbose=1)\n\n# train the model with tqdm_callback\n# make sure to set verbose = 0 to disable\n# the default progress bar.\nmodel.fit(x_train, y_train,\n batch_size=64,\n epochs=100,\n callbacks=[time_stopping_callback],\n validation_data=(x_test, y_test))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
milroy/Spark-Meetup
exercises/05_parpivot.ipynb
mit
[ "Example 5: A fast parallel pivot, or preparing for time series analysis", "from pyspark import SparkConf, SparkContext\nfrom collections import OrderedDict\n\npartitions = 48\nparcsv = sc.textFile(\"/lustre/janus_scratch/dami9546/lustre_timeseries.csv\", partitions)\nparcsv.take(5)", "Each of these lines contains 6 semi-colon delimited columns: hostname, metric name, value reported, type, units, and Unix epoch time. Can we assume all do? The example data is an excerpt of one day of Lustre data, but we have hundreds of full days which may contain dropped writes and malformed data. I'll apply a filter to the data to select all lines with six columns.\nSometimes it isn't evident whether filters are needed until a succeeding RDD action fails.", "filtered = parcsv.filter(lambda line: len(line.split(';')) == 6)", "As seen above, the lines are Unicode, but in anticipation of necessary transformations the timestamp and values will need to be cast to appropriate types. We'll need to create a function that takes each line as an argument and returns a 4-tuple (quadruple?), organized to facilitate intuitive indexing. Let's pick the following ordering: (timestamp, host, metric, value). We don't need the other values, so they are discarded.\nSince the values in the third column are currently Unicode, a try-except structure is used to attempt to cast them to floats. If unsuccessful we set them to zero rather than NaN, since these don't work with some machine learning methods.\nAn alternative to the try-except would be to apply a filter for lines whose third column can't be cast as a float. I haven't compared the performance between these two.", "def cast(line):\n try:\n val = float(str(line.split(';')[2]))\n except:\n val = 0.0\n return (int(line.split(';')[5]), line.split(';')[0], \n line.split(';')[1], val)\n\nparsed = filtered.map(cast)", "Metrics aren't reported continuously, nor are the monitoring systems flawless. 
We need to assemble a unique set (dictionary) of metrics for the pivot, but they must be ordered to make sure time series analysis isn't distorted. \nPySpark's \".distinct()\" method accomplishes this; we issue a \".collect()\" as well to assign the RDD's values to a variable.", "columns = parsed.map(lambda x: x[2]).distinct().collect()\nbasedict = dict((metric, 0.0) for metric in columns)", "Now we create an ordered dictionary to preserve the metric (and consequently, column) ordering. If we did not create this OrderedDict, the keys' ordering may be permuted. This will render ML techniques useless.\nThe object is broadcast to all executors to be used in a future mapped function.", "ordered = sc.broadcast(OrderedDict(sorted(basedict.items(), key=lambda y: y[0])))", "The two functions below are adapted from user patricksurry's answer to this Stack Overflow question: http://stackoverflow.com/questions/30260015/reshaping-pivoting-data-in-spark-rdd-and-or-spark-dataframes. Beware, patricksurry's answer is predominantly serial!", "def combine(u1, u2):\n u1.update(u2)\n return u1\n\ndef sequential(u, v):\n if not u:\n u = {}\n u[v[2]] = v[3]\n return u", "We need to perform an aggregation by key. This operation takes two functions as arguments: the sequential and combination functions. The sequential op constructs a dictionary from (metric, value) in each row, and the combine op combines row dictionaries based on identical (timestamp, host) keys.\n<img src=\"aggregateByKey.png\">", "aggregated = parsed.keyBy(lambda row: (row[0], row[1])).aggregateByKey(\n None, sequential, combine)", "Now we need to impose the structure of our OrderedDict on each aggregated key, value pair. 
We create a new function that copies our canonical dictionary (of ordered keys and 0.0 values) and updates it with the dictionaries created in the aggregateByKey step.", "def mergedicts(new):\n tmp = ordered.value.copy()\n tmp.update(new[1])\n return new[0], tmp\n\npivoted = aggregated.map(mergedicts)", "Let's take a look at the results.", "final_ordered = pivoted.takeOrdered(10, key=lambda x: x[0])\n\nfinal_ordered[0][0]", "To sort the entire RDD, we use a sortByKey.", "final_sorted = pivoted.sortByKey(keyfunc=lambda k: k[0])\n\nfinal_dict = final_sorted.map(lambda row: row[1].values())", "Writing the lists to disk takes quite a long time; this step is not optimized and does not write in parallel. An exercise for the reader!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
NathanYee/ThinkBayes2
bayesianLinearRegression/Final Report-V2.ipynb
gpl-2.0
[ "Bayesian Linear Regression:\nComputational bayes final project.\nNathan Yee\nUma Desai \nFirst example to gain understanding is taken from Cypress Frankenfeld.\nhttp://allendowney.blogspot.com/2015/04/two-hour-marathon-by-2041-probably.html", "from __future__ import print_function, division\n\n% matplotlib inline\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport math\nimport numpy as np\n\nfrom thinkbayes2 import Pmf, Cdf, Suite, Joint, EvalNormalPdf\nimport thinkplot\nimport pandas as pd\nimport matplotlib.pyplot as plt", "From: http://lib.stat.cmu.edu/DASL/Datafiles/Ageandheight.html\n\nThe height of a child is not stable but increases over time. Since the pattern of growth varies from child to child, one way to understand the general growth pattern is by using the average of several children's heights, as presented in this data set. The scatterplot of height versus age is almost a straight line, showing a linear growth pattern. The straightforward relationship between height and age provides a simple illustration of linear relationships, correlation, and simple regression. \nDescription: Mean heights of a group of children in Kalama, an Egyptian village that is the site of a study of nutrition in developing countries. The data were obtained by measuring the heights of all 161 children in the village each month over several years. \nAge: Age in months\nHeight: Mean height in centimeters for children at this age \n\nLet's start by loading our data into a Pandas dataframe to see what we're working with.", "df = pd.read_csv('ageVsHeight.csv', skiprows=0, delimiter='\\t')\ndf", "Next, let's create vectors of our ages and heights.", "ages = np.array(df['age'])\nheights = np.array(df['height'])", "Now let's visualize our data to make sure that linear regression is appropriate for predicting its distributions.", "plt.plot(ages, heights, 'o', label='Original data', markersize=10)", "Our data looks pretty linear. 
We can now calculate the slope and intercept of the line of least squares. We abstract numpy's least squares function using a function of our own.", "def leastSquares(x, y):\n \"\"\"\n leastSquares takes in two arrays of values. Then it returns the slope and intercept\n of the least squares of the two.\n \n Args:\n x (numpy array): numpy array of values.\n y (numpy array): numpy array of values.\n \n Returns:\n slope, intercept (tuple): returns a tuple of floats.\n \"\"\"\n A = np.vstack([x, np.ones(len(x))]).T\n slope, intercept = np.linalg.lstsq(A, y)[0]\n return slope, intercept", "To use our leastSquares function, we input our age and height vectors as our x and y arguments. Next, let's call leastSquares to get the slope and intercept, and use the slope and intercept to calculate the size of our alpha (intercept) and beta (slope) ranges.", "slope, intercept = leastSquares(ages, heights)\nprint(slope, intercept)\nalpha_range = .03 * intercept\nbeta_range = .05 * slope", "Now we can visualize the slope and intercept on the same plot as the data to make sure it is working correctly.", "plt.plot(ages, heights, 'o', label='Original data', markersize=10)\nplt.plot(ages, slope*ages + intercept, 'r', label='Fitted line')\nplt.legend()\nplt.show()", "Looks great! Based on the plot above, we are confident that bayesian linear regression will give us reasonable distributions for predicting future values. Now we need to create our hypotheses. 
Each hypothesis will consist of a range of intercepts (alphas), slopes (betas) and sigmas.", "alphas = np.linspace(intercept - alpha_range, intercept + alpha_range, 20)\nbetas = np.linspace(slope - beta_range, slope + beta_range, 20)\nsigmas = np.linspace(2, 4, 15)\n\nhypos = ((alpha, beta, sigma) for alpha in alphas \n for beta in betas for sigma in sigmas)\n\ndata = [(age, height) for age in ages for height in heights]", "Next make a least squares class that inherits from Suite and Joint where likelihood is calculated based on error from data. The likelihood function will depend on the data and normal distributions for each hypothesis.", "class leastSquaresHypos(Suite, Joint):\n def Likelihood(self, data, hypo):\n \"\"\"\n Likelihood calculates the probability of a particular line (hypo)\n based on data (ages Vs height) of our original dataset. This is\n done with a normal pmf as each hypo also contains a sigma.\n \n Args:\n data (tuple): tuple that contains ages (float), heights (float)\n hypo (tuple): intercept (float), slope (float), sigma (float)\n \n Returns:\n P(data|hypo)\n \"\"\"\n intercept, slope, sigma = hypo\n total_likelihood = 1\n for age, measured_height in data:\n hypothesized_height = slope * age + intercept\n error = measured_height - hypothesized_height\n total_likelihood *= EvalNormalPdf(error, mu=0, sigma=sigma)\n return total_likelihood\n ", "Now instantiate a LeastSquaresHypos suite with our hypos.", "LeastSquaresHypos = leastSquaresHypos(hypos)", "And update the suite with our data.", "for item in data:\n LeastSquaresHypos.Update([item])\n\nLeastSquaresHypos[LeastSquaresHypos.MaximumLikelihood()]", "We can now plot marginal distributions to visualize the probability distribution for each of our hypotheses for intercept, slope, and sigma values. 
Our hypotheses were carefully picked based on ranges that we found worked well, which is why all the intercepts, slopes, and sigmas that are important to this dataset are included in our hypotheses.", "marginal_intercepts = LeastSquaresHypos.Marginal(0)\nthinkplot.hist(marginal_intercepts)\n\nmarginal_slopes = LeastSquaresHypos.Marginal(1)\nthinkplot.hist(marginal_slopes)\n\nmarginal_sigmas = LeastSquaresHypos.Marginal(2)\nthinkplot.hist(marginal_sigmas)", "Next, we want to sample random data from our hypotheses. To do this, we will make two functions, getHeights and getRandomData. getRandomData calls getHeights to obtain random height values.", "def getHeights(hypo_samples, random_months):\n \"\"\"\n getHeights takes in random hypos and random months and returns the corresponding\n random height\n \n \"\"\"\n random_heights = np.zeros(len(random_months))\n for i in range(len(random_heights)):\n intercept = hypo_samples[i][0]\n slope = hypo_samples[i][1]\n sigma = hypo_samples[i][2]\n month = random_months[i]\n random_heights[i] = np.random.normal((slope * month + intercept), sigma, 1)\n return random_heights\n\ndef getRandomData(start_month, end_month, n, LeastSquaresHypos):\n \"\"\"\n start_month (int): Starting x range of our data\n end_month (int): Ending x range of our data\n n (int): Number of samples\n LeastSquaresHypos (Suite): Contains the hypos we want to sample\n \"\"\"\n random_hypos = LeastSquaresHypos.Sample(n)\n random_months = np.random.uniform(start_month, end_month, n)\n random_heights = getHeights(random_hypos, random_months)\n return random_months, random_heights", "Now we take 10000 random samples of pairs of months and heights. Here we want at least 10000 items so that we can get very smooth sampling.", "num_samples = 10000\nrandom_months, random_heights = getRandomData(18, 40, num_samples, LeastSquaresHypos)", "Next, we want to get the intensity of the data at locations. We do that by adding the randomly sampled values to buckets. 
This gives us intensity values for a grid of pixels in our sample range.", "num_buckets = 70 # num_buckets^2 is the actual number of buckets\n\n# create horizontal and vertical linearly spaced ranges as buckets.\nhori_range, hori_step = np.linspace(18, 40, num_buckets, retstep=True)\nvert_range, vert_step = np.linspace(65, 100, num_buckets, retstep=True)\n\nhori_step = hori_step / 2\nvert_step = vert_step / 2\n\n# store each bucket as a tuple in the buckets dictionary.\nbuckets = dict()\nkeys = [(hori, vert) for hori in hori_range for vert in vert_range]\n\n# set each bucket as empty\nfor key in keys:\n buckets[key] = 0\n \n# loop through the randomly sampled data\nfor month, height in zip(random_months, random_heights):\n # check each bucket to see if the randomly sampled point falls inside it\n for key in buckets:\n if month > key[0] - hori_step and month < key[0] + hori_step:\n if height > key[1] - vert_step and height < key[1] + vert_step:\n buckets[key] += 1\n break # can only fit in a single bucket\n\npcolor_months = []\npcolor_heights = []\npcolor_intensities = []\nfor key in buckets:\n pcolor_months.append(key[0])\n pcolor_heights.append(key[1])\n pcolor_intensities.append(buckets[key]) \n \nprint(len(pcolor_months), len(pcolor_heights), len(pcolor_intensities))\n\nplt.plot(random_months, random_heights, 'o', label='Random Sampling')\nplt.plot(ages, heights, 'o', label='Original data', markersize=10)\nplt.plot(ages, slope*ages + intercept, 'r', label='Fitted line')\n# plt.legend()\nplt.show()", "Since density plotting is much simpler in Mathematica, we have written these functions to export all our data to CSV files and plot them in Mathematica.", "def append_to_file(path, data):\n \"\"\"\n append_to_file appends a line of data to the specified file. 
Then adds new line\n \n Args:\n path (string): the file path\n \n Return:\n VOID\n \"\"\"\n with open(path, 'a') as file:\n file.write(data + '\\n')\n \ndef delete_file_contents(path):\n \"\"\"\n delete_file_contents deletes the contents of a file\n \n Args:\n path: (string): the file path\n \n Return:\n VOID\n \"\"\"\n with open(path, 'w'):\n pass\n\ndef intensityCSV(x, y, z):\n file_name = 'intensityData.csv'\n delete_file_contents(file_name)\n\n for xi, yi, zi in zip(x, y, z):\n append_to_file(file_name, \"{}, {}, {}\".format(xi, yi, zi))\n \ndef monthHeightCSV(ages, heights):\n file_name = 'monthsHeights.csv'\n delete_file_contents(file_name)\n \n for month, height in zip(ages, heights):\n append_to_file(file_name, \"{}, {}\".format(month, height))\n \ndef fittedLineCSV(ages, slope, intercept):\n file_name = 'fittedLineCSV.csv'\n delete_file_contents(file_name)\n for age in ages:\n append_to_file(file_name, \"{}, {}\".format(age, slope*age + intercept))\n \ndef makeCSVData(pcolor_months, pcolor_heights, pcolor_intensities, ages, heights, slope, intercept):\n intensityCSV(pcolor_months, pcolor_heights, pcolor_intensities)\n monthHeightCSV(ages, heights)\n fittedLineCSV(ages, slope, intercept)\n\nmakeCSVData(pcolor_months, pcolor_heights, pcolor_intensities, ages, heights, slope, intercept)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/ipsl/cmip6/models/ipsl-cm6a-lr/seaice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: IPSL\nSource ID: IPSL-CM6A-LR\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: CMIP5:IPSL-CM5A-LR \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:45\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ipsl', 'ipsl-cm6a-lr', 'seaice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Model\n2. Key Properties --&gt; Variables\n3. Key Properties --&gt; Seawater Properties\n4. Key Properties --&gt; Resolution\n5. Key Properties --&gt; Tuning Applied\n6. Key Properties --&gt; Key Parameter Values\n7. Key Properties --&gt; Assumptions\n8. Key Properties --&gt; Conservation\n9. Grid --&gt; Discretisation --&gt; Horizontal\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Seaice Categories\n12. Grid --&gt; Snow On Seaice\n13. Dynamics\n14. Thermodynamics --&gt; Energy\n15. Thermodynamics --&gt; Mass\n16. Thermodynamics --&gt; Salt\n17. Thermodynamics --&gt; Salt --&gt; Mass Transport\n18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\n19. Thermodynamics --&gt; Ice Thickness Distribution\n20. Thermodynamics --&gt; Ice Floe Size Distribution\n21. Thermodynamics --&gt; Melt Ponds\n22. Thermodynamics --&gt; Snow Processes\n23. Radiative Processes \n1. 
Key Properties --&gt; Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of sea ice model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the sea ice component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Other: sea ice [thickness, concentration, velocity, temperature, heat content], snow thickness, snow temperature\") \n", "3. Key Properties --&gt; Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. 
Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Ocean Freezing Point Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. 
Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Target\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Simulations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Metrics Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any observed metrics used in tuning model/parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.5. Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhich variables were changed during the tuning process?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nWhat values were specified for the following parameters if used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Additional Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. On Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Missing Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nProvide a general description of conservation methodology.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Properties\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Was Flux Correction Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes conservation involve flux correction?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Grid --&gt; Discretisation --&gt; Horizontal\nSea ice discretisation in the horizontal\n9.1. Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Ocean grid\") \n", "9.2. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the type of sea ice grid?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the advection scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.4. Thermodynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.5. Dynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.6. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional horizontal discretisation details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Number Of Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using multi-layers specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "10.3. 
Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional vertical grid details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Grid --&gt; Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11.2. Number Of Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Category Limits\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Other\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Grid --&gt; Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow on ice represented in this model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Number Of Snow Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels of snow on ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.3. Snow Fraction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.4. 
Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional details related to snow on ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Transport In Thickness Space\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Ice Strength Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich method of sea ice strength formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Hibler 1979\") \n", "13.4. 
Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Rheology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRheology, what is the ice deformation formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Visco-plastic\") \n", "14. Thermodynamics --&gt; Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the energy formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Thermal Conductivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of thermal conductivity is used?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of heat diffusion?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Other: multi-layer on a regular vertical grid\") \n", "14.4. Basal Heat Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Other: parametrized (calculated in ocean)\") \n", "14.5. Fixed Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.6. 
Heat Content Of Precipitation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.7. Precipitation Effects On Salinity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Thermodynamics --&gt; Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"Ice formed with prescribed thickness\") \n", "15.2. Ice Vertical Growth And Melt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. 
Ice Lateral Melting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice lateral melting?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Ice Surface Sublimation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.5. Frazil Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of frazil ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Thermodynamics --&gt; Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16.2. 
Sea Ice Salinity Thermal Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17. Thermodynamics --&gt; Salt --&gt; Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\nSalt thermodynamics\n18.1. 
Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Thermodynamics --&gt; Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice thickness distribution represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. 
Thermodynamics --&gt; Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice floe-size represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Thermodynamics --&gt; Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre melt ponds included in the sea ice model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21.2. Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat method of melt pond formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Other: no\") \n", "21.3. 
Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat do melt ponds have an impact on?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Thermodynamics --&gt; Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.2. Snow Aging Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Has Snow Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.4. Snow Ice Formation Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow ice formation scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.5. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the impact of ridging on snow cover?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \nDOC.set_value(\"Snow-ice\") \n", "22.6. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Other: one layer\") \n", "23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used to handle surface albedo.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \nDOC.set_value(\"Other: function of temperature and sea ice + snow thickness\") \n", "23.2. Ice Radiation Transmission\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AlpineNow/python-alpine-api
doc/JupyterNotebookExamples/Introduction.ipynb
mit
[ "Introduction\nLet's start with a example of an Alpine API session.\n\nInitialize a session.\nTake a tour of some commands.\nRun a workflow and download the results.\n\nImport the Python Alpine API and some other useful packages.", "import alpine as AlpineAPI\n\nfrom pprint import pprint\nimport json", "Setup\nHave access to a workflow on your Alpine instance that you can run. You'll need a few pieces of information in order to log in and run the workflow. First, find the URL of the open workflow. It should look something like:\nhttps://&lt;AlpineHost&gt;:&lt;PortNum&gt;/#workflows/&lt;WorkflowID&gt;\nYou'll also need your Alpine username and password.\nI've stored my connection information in a configuration file named alpine_login.conf that looks something like this:\nJSON\n {\n \"host\": \"AlpineHost\",\n \"port\": \"PortNum\",\n \"username\": \"fakename\",\n \"password\": \"12345\"\n }", "filename = \"alpine_login.conf\"\n\nwith open(filename, \"r\") as f:\n data = f.read()\n\nconn_info = json.loads(data)\n\nhost = conn_info[\"host\"]\nport = conn_info[\"port\"]\nusername = conn_info[\"username\"]\npassword = conn_info[\"password\"]", "Here are the names of a workspace and a workflow within it that we want to run.", "test_workspace_name = \"API Sample Workspace\"\ntest_workflow_name = \"Data ETL\"", "Create a session and log in the user.", "session = AlpineAPI.APIClient(host, port, username, password)", "Use the API\nGet information about the Alpine instance.", "pprint(session.get_license())\n\npprint(session.get_version())", "Find information about the logged-in user.", "pprint(session.get_status())", "Find information on all users.", "len(session.user.get_list())", "Find your user ID and then use it to update your user data.", "user_id = session.user.get_id(username)\n\npprint(session.user.update(user_id, title = \"Assistant to the Regional Manager\"))", "A similar set of commands can be used to create and update workspaces and the membership of each 
workspace.", "test_workspace_id = session.workspace.get_id(test_workspace_name)\nsession.workspace.member.add(test_workspace_id, user_id);", "Run a workflow\nTo run a workflow, use the Process subclass of the Workfile class. The wait_until_finished method periodically queries the status of the running workflow and returns control to the user when the workflow has completed.", "workflow_id = session.workfile.get_id(workfile_name = \"Data ETL\",\n workspace_id = test_workspace_id)\n\nprocess_id = session.workfile.process.run(workflow_id)\n\nsession.workfile.process.wait_until_finished(workflow_id = workflow_id,\n process_id = process_id,\n verbose = True,\n query_time = 5)", "We can download results using the download_results method. The workflow results contain a summary of the output of each operator as well as metadata about the workflow run.", "flow_results = session.workfile.process.download_results(workflow_id, process_id)\npprint(flow_results, depth=2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kwant-project/kwant-tutorial-2016
3.4.graphene_qshe.ipynb
bsd-2-clause
[ "Graphene and Kane-Mele model\nWe are going to:\n* Deal with 2D band structures\n* Use a more general lattice (honeycomb lattice of graphene)\n* Construct the very first topological insulator\n* Learn about topological protection in presence of time-reversal symmetry\nParts of this tutorial are based on the online course on topology in condensed matter", "# We'll have 3D plotting and 2D band structure, so we need a handful of helper functions.\n\n%run matplotlib_setup.ipy\n\nfrom types import SimpleNamespace\n\nfrom ipywidgets import interact\nimport matplotlib\nfrom matplotlib import pyplot\nfrom mpl_toolkits import mplot3d\nimport numpy as np\n\nimport kwant\nfrom wraparound import wraparound\n\n\ndef momentum_to_lattice(k):\n \"\"\"Transform momentum to the basis of reciprocal lattice vectors.\n \n See https://en.wikipedia.org/wiki/Reciprocal_lattice#Generalization_of_a_dual_lattice\n \"\"\"\n B = np.array(graphene.prim_vecs).T\n A = B.dot(np.linalg.inv(B.T.dot(B)))\n return np.linalg.solve(A, k)\n\n\ndef dispersion_2D(syst, args=None, lim=1.5*np.pi, num_points=200):\n \"\"\"A simple plot of 2D band structure.\"\"\"\n if args is None:\n args = []\n momenta = np.linspace(-lim, lim, num_points)\n energies = []\n for kx in momenta:\n for ky in momenta:\n lattice_k = momentum_to_lattice([kx, ky])\n h = syst.hamiltonian_submatrix(args=(list(args) + list(lattice_k)))\n energies.append(np.linalg.eigvalsh(h))\n \n energies = np.array(energies).reshape(num_points, num_points, -1)\n emin, emax = np.min(energies), np.max(energies)\n kx, ky = np.meshgrid(momenta, momenta)\n fig = pyplot.figure()\n axes = fig.add_subplot(1, 1, 1, projection='3d')\n for band in range(energies.shape[-1]):\n axes.plot_surface(kx, ky, energies[:, :, band], cstride=2, rstride=2,\n cmap=matplotlib.cm.RdBu_r, vmin=emin, vmax=emax,\n linewidth=0.1)", "Graphene\nQuantum Hall effect creates protected edge states using a strong magnetic field. 
Another way to create those is to start from a system with Dirac cones, and open gaps in those.\nThere is a real (and a very important) two-dimensional system which has Dirac cones: graphene. So in this chapter we will take graphene and make it into a topological system with chiral edge states.\nGraphene is a single layer of carbon atoms arranged in a honeycomb lattice. It is a triangular lattice with two atoms per unit cell, type $A$ and type $B$, represented by red and blue sites in the figure:", "graphene = kwant.lattice.general([[1, 0], [1/2, np.sqrt(3)/2]], # lattice vectors\n [[0, 0], [0, 1/np.sqrt(3)]]) # Coordinates of the sites\na, b = graphene.sublattices", "We now create a Builder with the translational symmetries of graphene, and calculate the bulk dispersion of graphene.\nHence, the wave function in a unit cell can be written as a vector $(\\Psi_A, \\Psi_B)^T$ of amplitudes on the two sites $A$ and $B$. Taking a simple tight-binding model where electrons can hop between neighboring sites with hopping strength $t$, one obtains the Bloch Hamiltonian:\n$$\nH_0(\\mathbf{k})= \\begin{pmatrix} 0 & h(\\mathbf{k}) \\ h^\\dagger(\\mathbf{k}) & 0 \\end{pmatrix}\\,,\n$$\nwith $\\mathbf{k}=(k_x, k_y)$ and\n$$h(\\mathbf{k}) = t_1\\,\\sum_i\\,\\exp\\,\\left(i\\,\\mathbf{k}\\cdot\\mathbf{a}_i\\right)\\,.$$\nHere $\\mathbf{a}_i$ are the three vectors in the figure, connecting nearest neighbors of the lattice [we set the lattice spacing to one, so that for instance $\\mathbf{a}_1=(1,0)$]. 
Introducing a set of Pauli matrices $\\sigma$ which act on the sublattice degree of freedom, we can write the Hamiltonian in a compact form as\n$$H_0(\\mathbf{k}) = t_1\\,\\sum_i\\,\\left[\\sigma_x\\,\\cos(\\mathbf{k}\\cdot\\mathbf{a}_i)-\\sigma_y \\,\\sin(\\mathbf{k}\\cdot\\mathbf{a}_i)\\right]\\,.$$\nThe energy spectrum $E(\\mathbf{k}) = \\pm \\,\\left|h(\\mathbf{k})\\right|$ gives rise to the famous band structure of graphene, with the two bands touching at the six corners of the Brillouin zone:", "bulk_graphene = kwant.Builder(kwant.TranslationalSymmetry(*graphene.prim_vecs))\nbulk_graphene[graphene.shape((lambda pos: True), (0, 0))] = 0\nbulk_graphene[graphene.neighbors(1)] = 1\n\ndispersion_2D(wraparound(bulk_graphene).finalized())", "Let's also create 1D ribbons of graphene.\nThere are two nontrivial directions: armchair and zigzag", "zigzag_ribbon = kwant.Builder(kwant.TranslationalSymmetry([1, 0]))\nzigzag_ribbon[graphene.shape((lambda pos: abs(pos[1]) < 9), (0, 0))] = 0\nzigzag_ribbon[graphene.neighbors(1)] = 1\n\nkwant.plotter.bands(zigzag_ribbon.finalized());", "Your turn!\nCalculate a dispersion of an armchair nanoribbon. You'll need to figure out what is its period.\nYour turn!\nAdd potentials of opposite sign to the zigzag nanoribbon, and see what happens to the dispersion relation.\nWe have now opened a gap, but there are no protected states inside it.\nHaldane model of anomalous quantum Hall effect\nThe more interesting way to open the gap in graphene dispersion is introduced by Duncan Haldane, Phys. Rev. Lett. 
61, 2015 (1988)\nThe idea of this model is to break the symmetries that protect the Dirac points: a staggered onsite potential breaks inversion symmetry, while imaginary next-nearest neighbor hoppings break time-reversal symmetry", "nnn_hoppings_a = (((-1, 0), a, a), ((0, 1), a, a), ((1, -1), a, a))\nnnn_hoppings_b = (((1, 0), b, b), ((0, -1), b, b), ((-1, 1), b, b))\nnnn_hoppings = nnn_hoppings_a + nnn_hoppings_b\n\ndef nnn_hopping(site1, site2, params):\n return 1j * params.t_2\n\ndef onsite(site, params):\n return params.m * (1 if site.family == a else -1)\n\ndef add_hoppings(syst):\n syst[graphene.neighbors(1)] = 1\n syst[[kwant.builder.HoppingKind(*hopping) for hopping in nnn_hoppings]] = nnn_hopping\n\nhaldane = kwant.Builder(kwant.TranslationalSymmetry(*graphene.prim_vecs))\nhaldane[graphene.shape((lambda pos: True), (0, 0))] = onsite\nhaldane[graphene.neighbors(1)] = 1\nhaldane[[kwant.builder.HoppingKind(*hopping) for hopping in nnn_hoppings]] = nnn_hopping\n\n@interact(t_2=(0, .08, .01))\ndef qshe_dispersion(t_2=0, m=.2):\n dispersion_2D(wraparound(haldane).finalized(), [SimpleNamespace(t_2=t_2, m=m)], num_points=100)", "Now we see that the gap closes in one of the Dirac cones, while it stays open in the other. Let's see what this means for the dispersion relation in a ribbon.\nYour turn!\nPlot a dispersion of either nanoribbon, and see what happens to the edge states.\nQuantum spin Hall effect in Kane-Mele model\n(Following: C.L. Kane and E.J. Mele, Phys. Rev. Lett. 95, 226801 (2005))\nThe Haldane model breaks time-reversal symmetry and inversion symmetry. Lattice-scale hoppings that break time-reversal symmetry do not appear in non-magnetic materials. 
We can make the Hamiltonian invariant under inversion and time-reversal by making the next-nearest neighbor hoppings spin-dependent.\nSo if we take those hoppings equal to $i t_2 \\sigma_z$, we get the Kane-Mele model:", "# Pauli matrices \ns0 = np.identity(2)\nsx = np.array([[0, 1], [1, 0]])\nsy = np.array([[0, -1j], [1j, 0]])\nsz = np.diag([1, -1])\n\ndef spin_orbit(site1, site2, params):\n return 1j * params.t_2 * sz\n\ndef onsite(site, params):\n return s0 * params.m * (1 if site.family == a else -1)\n\ndef add_hoppings(syst):\n syst[graphene.neighbors(1)] = s0\n syst[[kwant.builder.HoppingKind(*hopping) for hopping in nnn_hoppings]] = spin_orbit\n\nbulk_kane_mele = kwant.Builder(kwant.TranslationalSymmetry(*graphene.prim_vecs))\nbulk_kane_mele[graphene.shape((lambda pos: True), (0, 0))] = onsite\nadd_hoppings(bulk_kane_mele)\n\n@interact(t_2=(0, .3, .01))\ndef qshe_dispersion(t_2=0, m=.2):\n dispersion_2D(wraparound(bulk_kane_mele).finalized(), [SimpleNamespace(t_2=t_2, m=m)], num_points=100)\n\nzigzag_kane_mele = kwant.Builder(kwant.TranslationalSymmetry([1, 0]))\nzigzag_kane_mele[graphene.shape((lambda pos: abs(pos[1]) < 9), (0, 0))] = onsite\nadd_hoppings(zigzag_kane_mele)\n\n@interact(t_2=(0, .12, .01))\ndef qshe_zigzag_dispersion(t_2=0, m=.2):\n kwant.plotter.bands(zigzag_kane_mele.finalized(), [SimpleNamespace(t_2=t_2, m=m)])", "Robustness of quantum spin Hall effect\nWe still have an important open question: what protects these new edge states? Is it spin conservation? If we don't break the conservation of $\\sigma_z$, then it is obvious that the edge states don't disappear (we have two copies of the Haldane model after all).\nThe most interesting property of the quantum spin Hall effect is that it does not rely on any conservation law. 
The reason for this is Kramers degeneracy, which prevents two states that are time-reversal partners from coupling to each other.\nAs a final test of topological protection, let's add an extra parameter that breaks spin conservation and check what happens to the edge states.\nYour turn!\nAdd a small perturbation proportional to $i \\sigma_x$ or $i \\sigma_y$ to some hopping. Check how the dispersion changes.\nSelf-study (takes more time)\nUsing the multiterminal conductance calculation, calculate what happens to $\\sigma_{xx}$ and $\\sigma_{xy}$ if you turn a magnetic field on. What role does spin conservation play?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mitdbg/modeldb
client/workflows/demos/registry/create-standard-model.ipynb
mit
[ "Creating a Standard Model on Verta\nWithin Verta, a \"Model\" can be any arbitrary function: a traditional ML model (e.g., sklearn, PyTorch, TF, etc.); a function (e.g., squaring a number, making a DB call, etc.); or a mixture of the above (e.g., pre-processing code, a DB call, and then a model application). See more here.\nThis notebook provides an example of how to define a Verta Standard Model by extending VertaModelBase. Verta also offers convenience functions for a number of common libraries including scikit-learn, PyTorch, and TensorFlow, as documented here.\n0. Imports", "# restart your notebook if prompted on Colab\ntry:\n import verta\nexcept ImportError:\n !pip install verta\n\nimport os\n\n# Ensure credentials are set up, if not, use below\n# os.environ['VERTA_EMAIL'] = \n# os.environ['VERTA_DEV_KEY'] = \n# os.environ['VERTA_HOST'] = \n\nfrom verta import Client\n\nclient = Client(os.environ['VERTA_HOST'])", "1. Register a model", "registered_model = client.get_or_create_registered_model(\n name=\"census\", labels=[\"research-purpose\", \"team-a\"])\n\nimport json # used by MyModel below\n\nfrom verta.registry import VertaModelBase\n\nclass MyModel(VertaModelBase):\n def __init__(self, artifacts):\n self.weights = json.load(open(artifacts[\"weights\"]))\n \n def predict(self, input):\n res = []\n for row in input:\n res.append(row[0] * self.weights[0] + row[1] * self.weights[1])\n return res\n\nfrom verta.environment import Python\n\nmodel_version = registered_model.create_standard_model(\n model_cls=MyModel,\n artifacts = {\"weights\" : [5, 6]},\n environment=Python(requirements=[\"json\"]),\n name=\"v0\",\n labels=[\"prototype\"],\n)", "2. Now you can use the registered model for deployment, monitoring, etc." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/migration/UJ1 legacy AutoML Vision Image Classification.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n", "AutoML SDK: AutoML image classification model\nInstallation\nInstall the latest (preview) version of AutoML SDK.", "! pip3 install -U google-cloud-automl --user\n", "Install the Google cloud-storage library as well.", "! pip3 install google-cloud-storage\n", "Restart the Kernel\nOnce you've installed the AutoML SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.", "import os\n\n\nif not os.getenv(\"AUTORUN\"):\n # Automatically restart kernel after installs\n import IPython\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)\n", "Before you begin\nGPU run-time\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU\nSet up your GCP project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the AutoML APIs and Compute Engine APIs.\n\n\nGoogle Cloud SDK is already installed in AutoML Notebooks.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! 
as shell commands, and it interpolates Python variables prefixed with $ into these commands.", "PROJECT_ID = \"[your-project-id]\" #@param {type:\"string\"}\n\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n\n! gcloud config set project $PROJECT_ID\n", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for AutoML. We recommend, when possible, choosing the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou cannot use a Multi-Regional Storage bucket for training with AutoML. Not all regions provide support for all AutoML services. For the latest support per region, see Region support for AutoML services", "REGION = 'us-central1' #@param {type: \"string\"}\n", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session and append it onto the name of resources which will be created in this tutorial.", "from datetime import datetime\n\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")\n", "Authenticate your GCP account\nIf you are using AutoML Notebooks, your environment is already\nauthenticated. Skip this step.\nNote: If you are on an AutoML notebook and run the cell, the cell knows to skip executing the authentication steps.", "import os\nimport sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your Google Cloud account. 
This provides access\n# to your Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on AutoML, then don't execute this code\nif not os.path.exists('/opt/deeplearning/metadata/env_version'):\n if 'google.colab' in sys.modules:\n from google.colab import auth as google_auth\n google_auth.authenticate_user()\n\n # If you are running this tutorial in a notebook locally, replace the string\n # below with the path to your service account key and run this cell to\n # authenticate your Google Cloud account.\n else:\n %env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json\n\n # Log in to your account on Google Cloud\n ! gcloud auth login\n", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nThis tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.\nSet the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.", "BUCKET_NAME = \"[your-bucket-name]\" #@param {type:\"string\"}\n\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"[your-bucket-name]\":\n BUCKET_NAME = PROJECT_ID + \"aip-\" + TIMESTAMP\n", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION gs://$BUCKET_NAME\n", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! 
gsutil ls -al gs://$BUCKET_NAME\n", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants\nImport AutoML SDK\nImport the AutoML SDK into our Python environment.", "import json\nimport os\nimport sys\nimport time\n\n\nfrom google.cloud import automl\n\n\nfrom google.protobuf.json_format import MessageToJson\nfrom google.protobuf.json_format import ParseDict\nfrom googleapiclient.discovery import build\n", "AutoML constants\nSet up the following constants for AutoML:\n\nPARENT: The AutoML location root path for dataset, model and endpoint resources.", "# AutoML location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION\n", "Clients\nThe AutoML SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (AutoML).\nYou will use several clients in this tutorial, so set them all up upfront.", "def automl_client():\n return automl.AutoMlClient()\n\n\ndef prediction_client():\n return automl.PredictionServiceClient()\n\n\ndef operations_client():\n return automl.AutoMlClient()._transport.operations_client\n\n\nclients = {}\nclients[\"automl\"] = automl_client()\nclients[\"prediction\"] = prediction_client()\nclients[\"operations\"] = operations_client()\n\nfor client in clients.items():\n print(client)\n\n\nIMPORT_FILE = \"gs://cloud-ml-data/img/flower_photos/train_set.csv\"\n\n\n#%%capture\n! gsutil cp -r gs://cloud-ml-data/img/flower_photos/ gs://$BUCKET_NAME\n\n\nimport tensorflow as tf\n\nall_files_csv = ! gsutil cat $IMPORT_FILE\nall_files_csv = [ l.replace(\"cloud-ml-data/img\", BUCKET_NAME) for l in all_files_csv ]\n\nIMPORT_FILE = \"gs://\" + BUCKET_NAME + \"/flower_photos/train_set.csv\"\nwith tf.io.gfile.GFile(IMPORT_FILE, 'w') as f:\n for l in all_files_csv:\n f.write(l + \"\\n\")\n\n\n! 
gsutil cat $IMPORT_FILE | head -n 10", "Example output:\ngs://migration-ucaip-trainingaip-20210226015151/flower_photos/daisy/754296579_30a9ae018c_n.jpg,daisy\ngs://migration-ucaip-trainingaip-20210226015151/flower_photos/dandelion/18089878729_907ed2c7cd_m.jpg,dandelion\ngs://migration-ucaip-trainingaip-20210226015151/flower_photos/dandelion/284497199_93a01f48f6.jpg,dandelion\ngs://migration-ucaip-trainingaip-20210226015151/flower_photos/dandelion/3554992110_81d8c9b0bd_m.jpg,dandelion\ngs://migration-ucaip-trainingaip-20210226015151/flower_photos/daisy/4065883015_4bb6010cb7_n.jpg,daisy\ngs://migration-ucaip-trainingaip-20210226015151/flower_photos/roses/7420699022_60fa574524_m.jpg,roses\ngs://migration-ucaip-trainingaip-20210226015151/flower_photos/dandelion/4558536575_d43a611bd4_n.jpg,dandelion\ngs://migration-ucaip-trainingaip-20210226015151/flower_photos/daisy/7568630428_8cf0fc16ff_n.jpg,daisy\ngs://migration-ucaip-trainingaip-20210226015151/flower_photos/tulips/7064813645_f7f48fb527.jpg,tulips\ngs://migration-ucaip-trainingaip-20210226015151/flower_photos/sunflowers/4933229095_f7e4218b28.jpg,sunflowers\nCreate a dataset\nprojects.locations.datasets.create\nRequest", "dataset = {\n \"display_name\": \"flowers_\" + TIMESTAMP,\n \"image_classification_dataset_metadata\": {\n \"classification_type\": \"MULTICLASS\",\n },\n}\n\nprint(MessageToJson(\n automl.CreateDatasetRequest(\n parent=PARENT,\n dataset=dataset,\n ).__dict__[\"_pb\"])\n)\n", "Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"dataset\": {\n \"displayName\": \"flowers_20210226015151\",\n \"imageClassificationDatasetMetadata\": {\n \"classificationType\": \"MULTICLASS\"\n }\n }\n}\nCall", "request = clients[\"automl\"].create_dataset(\n parent=PARENT,\n dataset=dataset,\n)\n", "Response", "result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))\n", "Example output:\n{\n \"name\": 
"projects/116273516712/locations/us-central1/datasets/ICN2833688305139187712\"\n}", "# The full unique ID for the dataset\ndataset_id = result.name\n# The short numeric ID for the dataset\ndataset_short_id = dataset_id.split('/')[-1]\n\nprint(dataset_id)\n\n", "projects.locations.datasets.importData\nRequest", "input_config = {\n \"gcs_source\": {\n \"input_uris\": [IMPORT_FILE],\n },\n}\n\nprint(MessageToJson(\n automl.ImportDataRequest(\n name=dataset_short_id,\n input_config=input_config\n ).__dict__[\"_pb\"])\n)\n", "Example output:\n{\n \"name\": \"ICN2833688305139187712\",\n \"inputConfig\": {\n \"gcsSource\": {\n \"inputUris\": [\n \"gs://migration-ucaip-trainingaip-20210226015151/flower_photos/train_set.csv\"\n ]\n }\n }\n}\nCall", "request = clients[\"automl\"].import_data(\n name=dataset_id,\n input_config=input_config\n)\n", "Response", "result = request.result()\n\nprint(MessageToJson(result))\n", "Example output:\n{}\nTrain a model\nprojects.locations.models.create\nRequest", "model = {\n \"display_name\": \"flowers_\" + TIMESTAMP,\n \"dataset_id\": dataset_short_id,\n \"image_classification_model_metadata\": {\n \"train_budget_milli_node_hours\": 8000,\n },\n}\n\nprint(MessageToJson(\n automl.CreateModelRequest(\n parent=PARENT,\n model=model,\n ).__dict__[\"_pb\"])\n)\n", "Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"model\": {\n \"displayName\": \"flowers_20210226015151\",\n \"datasetId\": \"ICN2833688305139187712\",\n \"imageClassificationModelMetadata\": {\n \"trainBudgetMilliNodeHours\": \"8000\"\n }\n }\n}\nCall", "request = clients[\"automl\"].create_model(\n parent=PARENT,\n model=model,\n)\n", "Response", "result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))\n", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/models/ICN3600040762873479168\"\n}", "# The full unique ID for the model\nmodel_id = result.name\n# The short numeric 
ID for the model\nmodel_short_id = model_id.split('/')[-1]\n\nprint(model_short_id)\n", "Evaluate the model\nprojects.locations.models.modelEvaluations.list\nCall", "request = clients[\"automl\"].list_model_evaluations(\n parent=model_id, \n)\n", "Response", "import json\n\n\nmodel_evaluations = [\n json.loads(MessageToJson(me.__dict__[\"_pb\"])) for me in request \n]\n# The evaluation slice\nevaluation_slice = request.model_evaluation[0].name\n\nprint(json.dumps(model_evaluations, indent=2))\n", "Example output:\n[\n {\n \"name\": \"projects/116273516712/locations/us-central1/models/ICN3600040762873479168/modelEvaluations/1701367336556072668\",\n \"createTime\": \"2021-02-26T03:00:19.383521Z\",\n \"evaluatedExampleCount\": 329,\n \"classificationEvaluationMetrics\": {\n \"auPrc\": 0.99747145,\n \"confidenceMetricsEntry\": [\n {\n \"recall\": 1.0,\n \"precision\": 0.2\n },\n {\n \"confidenceThreshold\": 0.05,\n \"recall\": 0.99088144,\n \"precision\": 0.92877495\n },\n {\n \"confidenceThreshold\": 0.1,\n \"recall\": 0.98784196,\n \"precision\": 0.9447674\n },\n {\n \"confidenceThreshold\": 0.15,\n \"recall\": 0.9848024,\n \"precision\": 0.9501466\n },\n {\n \"confidenceThreshold\": 0.2,\n \"recall\": 0.9848024,\n \"precision\": 0.96142435\n },\n {\n \"confidenceThreshold\": 0.25,\n \"recall\": 0.98176295,\n \"precision\": 0.9641791\n },\n {\n \"confidenceThreshold\": 0.3,\n \"recall\": 0.98176295,\n \"precision\": 0.9670659\n },\n {\n \"confidenceThreshold\": 0.35,\n \"recall\": 0.9787234,\n \"precision\": 0.966967\n },\n {\n \"confidenceThreshold\": 0.4,\n \"recall\": 0.97568387,\n \"precision\": 0.96686745\n },\n {\n \"confidenceThreshold\": 0.45,\n \"recall\": 0.97568387,\n \"precision\": 0.9727273\n },\n {\n \"confidenceThreshold\": 0.5,\n \"recall\": 0.9726444,\n \"precision\": 0.9756098\n },\n {\n \"confidenceThreshold\": 0.55,\n \"recall\": 0.9726444,\n \"precision\": 0.9756098\n },\n {\n \"confidenceThreshold\": 0.6,\n \"recall\": 0.9665654,\n 
\"precision\": 0.9754601\n },\n {\n \"confidenceThreshold\": 0.65,\n \"recall\": 0.9665654,\n \"precision\": 0.9814815\n },\n {\n \"confidenceThreshold\": 0.7,\n \"recall\": 0.9665654,\n \"precision\": 0.98452014\n },\n {\n \"confidenceThreshold\": 0.75,\n \"recall\": 0.9665654,\n \"precision\": 0.98452014\n },\n {\n \"confidenceThreshold\": 0.8,\n \"recall\": 0.9604863,\n \"precision\": 0.9875\n },\n {\n \"confidenceThreshold\": 0.85,\n \"recall\": 0.9452888,\n \"precision\": 0.99044585\n },\n {\n \"confidenceThreshold\": 0.875,\n \"recall\": 0.94224924,\n \"precision\": 0.99041533\n },\n {\n \"confidenceThreshold\": 0.9,\n \"recall\": 0.9392097,\n \"precision\": 0.99038464\n },\n {\n \"confidenceThreshold\": 0.91,\n \"recall\": 0.9392097,\n \"precision\": 0.99038464\n },\n {\n \"confidenceThreshold\": 0.92,\n \"recall\": 0.9361702,\n \"precision\": 0.9935484\n },\n {\n \"confidenceThreshold\": 0.93,\n \"recall\": 0.9361702,\n \"precision\": 0.9935484\n },\n {\n \"confidenceThreshold\": 0.94,\n \"recall\": 0.9361702,\n \"precision\": 0.9935484\n },\n {\n \"confidenceThreshold\": 0.95,\n \"recall\": 0.9331307,\n \"precision\": 0.99352753\n },\n {\n \"confidenceThreshold\": 0.96,\n \"recall\": 0.9300912,\n \"precision\": 0.99674267\n },\n {\n \"confidenceThreshold\": 0.97,\n \"recall\": 0.92705166,\n \"precision\": 0.996732\n },\n {\n \"confidenceThreshold\": 0.98,\n \"recall\": 0.9148936,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.99,\n \"recall\": 0.89361703,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.995,\n \"recall\": 0.88145894,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.996,\n \"recall\": 0.87234044,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.997,\n \"recall\": 0.8693009,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.998,\n \"recall\": 0.8449848,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.999,\n \"recall\": 0.81155014,\n \"precision\": 1.0\n },\n {\n 
\"confidenceThreshold\": 1.0,\n \"recall\": 0.24012157,\n \"precision\": 1.0\n }\n ],\n \"confusionMatrix\": {\n \"annotationSpecId\": [\n \"548545251585818624\",\n \"4295540141558071296\",\n \"5160231270013206528\",\n \"6601383150771765248\",\n \"8907226159985459200\"\n ],\n \"row\": [\n {\n \"exampleCount\": [\n 55,\n 0,\n 1,\n 2,\n 0\n ]\n },\n {\n \"exampleCount\": [\n 0,\n 59,\n 1,\n 0,\n 1\n ]\n },\n {\n \"exampleCount\": [\n 0,\n 0,\n 81,\n 0,\n 0\n ]\n },\n {\n \"exampleCount\": [\n 0,\n 0,\n 0,\n 73,\n 0\n ]\n },\n {\n \"exampleCount\": [\n 0,\n 1,\n 2,\n 0,\n 53\n ]\n }\n ],\n \"displayName\": [\n \"roses\",\n \"sunflowers\",\n \"dandelion\",\n \"tulips\",\n \"daisy\"\n ]\n },\n \"logLoss\": 0.02853713\n }\n },\n {\n \"name\": \"projects/116273516712/locations/us-central1/models/ICN3600040762873479168/modelEvaluations/4464795143994212237\",\n \"annotationSpecId\": \"6601383150771765248\",\n \"createTime\": \"2021-02-26T03:00:19.383521Z\",\n \"classificationEvaluationMetrics\": {\n \"auPrc\": 0.9990742,\n \"confidenceMetricsEntry\": [\n {\n \"recall\": 1.0,\n \"precision\": 0.2218845\n },\n {\n \"confidenceThreshold\": 0.05,\n \"recall\": 1.0,\n \"precision\": 0.8795181\n },\n {\n \"confidenceThreshold\": 0.1,\n \"recall\": 1.0,\n \"precision\": 0.9125\n },\n {\n \"confidenceThreshold\": 0.15,\n \"recall\": 1.0,\n \"precision\": 0.9240506\n },\n {\n \"confidenceThreshold\": 0.2,\n \"recall\": 1.0,\n \"precision\": 0.9605263\n },\n {\n \"confidenceThreshold\": 0.25,\n \"recall\": 1.0,\n \"precision\": 0.9605263\n },\n {\n \"confidenceThreshold\": 0.3,\n \"recall\": 1.0,\n \"precision\": 0.97333336\n },\n {\n \"confidenceThreshold\": 0.35,\n \"recall\": 1.0,\n \"precision\": 0.97333336\n },\n {\n \"confidenceThreshold\": 0.4,\n \"recall\": 1.0,\n \"precision\": 0.97333336\n },\n {\n \"confidenceThreshold\": 0.45,\n \"recall\": 1.0,\n \"precision\": 0.97333336\n },\n {\n \"confidenceThreshold\": 0.5,\n \"recall\": 0.98630136,\n \"precision\": 0.972973\n },\n 
{\n \"confidenceThreshold\": 0.55,\n \"recall\": 0.98630136,\n \"precision\": 0.972973\n },\n {\n \"confidenceThreshold\": 0.6,\n \"recall\": 0.9726027,\n \"precision\": 0.9726027\n },\n {\n \"confidenceThreshold\": 0.65,\n \"recall\": 0.9726027,\n \"precision\": 0.9726027\n },\n {\n \"confidenceThreshold\": 0.7,\n \"recall\": 0.9726027,\n \"precision\": 0.9726027\n },\n {\n \"confidenceThreshold\": 0.75,\n \"recall\": 0.9726027,\n \"precision\": 0.9726027\n },\n {\n \"confidenceThreshold\": 0.8,\n \"recall\": 0.9726027,\n \"precision\": 0.9861111\n },\n {\n \"confidenceThreshold\": 0.85,\n \"recall\": 0.9726027,\n \"precision\": 0.9861111\n },\n {\n \"confidenceThreshold\": 0.875,\n \"recall\": 0.9726027,\n \"precision\": 0.9861111\n },\n {\n \"confidenceThreshold\": 0.9,\n \"recall\": 0.9726027,\n \"precision\": 0.9861111\n },\n {\n \"confidenceThreshold\": 0.91,\n \"recall\": 0.9726027,\n \"precision\": 0.9861111\n },\n {\n \"confidenceThreshold\": 0.92,\n \"recall\": 0.9726027,\n \"precision\": 0.9861111\n },\n {\n \"confidenceThreshold\": 0.93,\n \"recall\": 0.9726027,\n \"precision\": 0.9861111\n },\n {\n \"confidenceThreshold\": 0.94,\n \"recall\": 0.9726027,\n \"precision\": 0.9861111\n },\n {\n \"confidenceThreshold\": 0.95,\n \"recall\": 0.9726027,\n \"precision\": 0.9861111\n },\n {\n \"confidenceThreshold\": 0.96,\n \"recall\": 0.9726027,\n \"precision\": 0.9861111\n },\n {\n \"confidenceThreshold\": 0.97,\n \"recall\": 0.9726027,\n \"precision\": 0.9861111\n },\n {\n \"confidenceThreshold\": 0.98,\n \"recall\": 0.9589041,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.99,\n \"recall\": 0.9315069,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.995,\n \"recall\": 0.91780823,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.996,\n \"recall\": 0.91780823,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.997,\n \"recall\": 0.91780823,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.998,\n \"recall\": 
0.9041096,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.999,\n \"recall\": 0.8356164,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 1.0,\n \"recall\": 0.12328767,\n \"precision\": 1.0\n }\n ],\n \"auRoc\": 0.99973243,\n \"logLoss\": 0.024023052\n },\n \"displayName\": \"tulips\"\n },\n {\n \"name\": \"projects/116273516712/locations/us-central1/models/ICN3600040762873479168/modelEvaluations/6132683338167493052\",\n \"annotationSpecId\": \"8907226159985459200\",\n \"createTime\": \"2021-02-26T03:00:19.383521Z\",\n \"classificationEvaluationMetrics\": {\n \"auPrc\": 0.99841,\n \"confidenceMetricsEntry\": [\n {\n \"recall\": 1.0,\n \"precision\": 0.17021276\n },\n {\n \"confidenceThreshold\": 0.05,\n \"recall\": 1.0,\n \"precision\": 0.9655172\n },\n {\n \"confidenceThreshold\": 0.1,\n \"recall\": 0.98214287,\n \"precision\": 0.9649123\n },\n {\n \"confidenceThreshold\": 0.15,\n \"recall\": 0.98214287,\n \"precision\": 0.98214287\n },\n {\n \"confidenceThreshold\": 0.2,\n \"recall\": 0.98214287,\n \"precision\": 0.98214287\n },\n {\n \"confidenceThreshold\": 0.25,\n \"recall\": 0.98214287,\n \"precision\": 0.98214287\n },\n {\n \"confidenceThreshold\": 0.3,\n \"recall\": 0.98214287,\n \"precision\": 0.98214287\n },\n {\n \"confidenceThreshold\": 0.35,\n \"recall\": 0.96428573,\n \"precision\": 0.9818182\n },\n {\n \"confidenceThreshold\": 0.4,\n \"recall\": 0.9464286,\n \"precision\": 0.9814815\n },\n {\n \"confidenceThreshold\": 0.45,\n \"recall\": 0.9464286,\n \"precision\": 0.9814815\n },\n {\n \"confidenceThreshold\": 0.5,\n \"recall\": 0.9464286,\n \"precision\": 0.9814815\n },\n {\n \"confidenceThreshold\": 0.55,\n \"recall\": 0.9464286,\n \"precision\": 0.9814815\n },\n {\n \"confidenceThreshold\": 0.6,\n \"recall\": 0.9285714,\n \"precision\": 0.9811321\n },\n {\n \"confidenceThreshold\": 0.65,\n \"recall\": 0.9285714,\n \"precision\": 0.9811321\n },\n {\n \"confidenceThreshold\": 0.7,\n \"recall\": 0.9285714,\n \"precision\": 
0.9811321\n },\n {\n \"confidenceThreshold\": 0.75,\n \"recall\": 0.9285714,\n \"precision\": 0.9811321\n },\n {\n \"confidenceThreshold\": 0.8,\n \"recall\": 0.9285714,\n \"precision\": 0.9811321\n },\n {\n \"confidenceThreshold\": 0.85,\n \"recall\": 0.9285714,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.875,\n \"recall\": 0.9285714,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.9,\n \"recall\": 0.9285714,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.91,\n \"recall\": 0.9285714,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.92,\n \"recall\": 0.9285714,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.93,\n \"recall\": 0.9285714,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.94,\n \"recall\": 0.9285714,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.95,\n \"recall\": 0.91071427,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.96,\n \"recall\": 0.91071427,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.97,\n \"recall\": 0.91071427,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.98,\n \"recall\": 0.91071427,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.99,\n \"recall\": 0.875,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.995,\n \"recall\": 0.83928573,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.996,\n \"recall\": 0.8214286,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.997,\n \"recall\": 0.8035714,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.998,\n \"recall\": 0.78571427,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.999,\n \"recall\": 0.76785713,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 1.0,\n \"recall\": 0.30357143,\n \"precision\": 1.0\n }\n ],\n \"auRoc\": 0.99967295,\n \"logLoss\": 0.022124559\n },\n \"displayName\": \"daisy\"\n },\n {\n \"name\": 
\"projects/116273516712/locations/us-central1/models/ICN3600040762873479168/modelEvaluations/7147485663377408481\",\n \"annotationSpecId\": \"548545251585818624\",\n \"createTime\": \"2021-02-26T03:00:19.383521Z\",\n \"classificationEvaluationMetrics\": {\n \"auPrc\": 0.9971625,\n \"confidenceMetricsEntry\": [\n {\n \"recall\": 1.0,\n \"precision\": 0.1762918\n },\n {\n \"confidenceThreshold\": 0.05,\n \"recall\": 0.9655172,\n \"precision\": 0.93333334\n },\n {\n \"confidenceThreshold\": 0.1,\n \"recall\": 0.9655172,\n \"precision\": 0.9655172\n },\n {\n \"confidenceThreshold\": 0.15,\n \"recall\": 0.9655172,\n \"precision\": 0.9655172\n },\n {\n \"confidenceThreshold\": 0.2,\n \"recall\": 0.9655172,\n \"precision\": 0.9655172\n },\n {\n \"confidenceThreshold\": 0.25,\n \"recall\": 0.94827586,\n \"precision\": 0.9649123\n },\n {\n \"confidenceThreshold\": 0.3,\n \"recall\": 0.94827586,\n \"precision\": 0.9649123\n },\n {\n \"confidenceThreshold\": 0.35,\n \"recall\": 0.94827586,\n \"precision\": 0.9649123\n },\n {\n \"confidenceThreshold\": 0.4,\n \"recall\": 0.94827586,\n \"precision\": 0.9649123\n },\n {\n \"confidenceThreshold\": 0.45,\n \"recall\": 0.94827586,\n \"precision\": 0.98214287\n },\n {\n \"confidenceThreshold\": 0.5,\n \"recall\": 0.94827586,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.55,\n \"recall\": 0.94827586,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.6,\n \"recall\": 0.94827586,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.65,\n \"recall\": 0.94827586,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.7,\n \"recall\": 0.94827586,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.75,\n \"recall\": 0.94827586,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.8,\n \"recall\": 0.94827586,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.85,\n \"recall\": 0.87931037,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.875,\n \"recall\": 0.87931037,\n 
\"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.9,\n \"recall\": 0.87931037,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.91,\n \"recall\": 0.87931037,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.92,\n \"recall\": 0.86206895,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.93,\n \"recall\": 0.86206895,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.94,\n \"recall\": 0.86206895,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.95,\n \"recall\": 0.86206895,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.96,\n \"recall\": 0.86206895,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.97,\n \"recall\": 0.86206895,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.98,\n \"recall\": 0.8448276,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.99,\n \"recall\": 0.79310346,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.995,\n \"recall\": 0.79310346,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.996,\n \"recall\": 0.7758621,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.997,\n \"recall\": 0.7758621,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.998,\n \"recall\": 0.70689654,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.999,\n \"recall\": 0.6896552,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 1.0,\n \"recall\": 0.03448276,\n \"precision\": 1.0\n }\n ],\n \"auRoc\": 0.9993638,\n \"logLoss\": 0.034111425\n },\n \"displayName\": \"roses\"\n },\n {\n \"name\": \"projects/116273516712/locations/us-central1/models/ICN3600040762873479168/modelEvaluations/8076647367053688867\",\n \"annotationSpecId\": \"5160231270013206528\",\n \"createTime\": \"2021-02-26T03:00:19.383521Z\",\n \"classificationEvaluationMetrics\": {\n \"auPrc\": 0.9989403,\n \"confidenceMetricsEntry\": [\n {\n \"recall\": 1.0,\n \"precision\": 0.2462006\n },\n {\n \"confidenceThreshold\": 0.05,\n \"recall\": 1.0,\n \"precision\": 
0.9101124\n },\n {\n \"confidenceThreshold\": 0.1,\n \"recall\": 1.0,\n \"precision\": 0.92045456\n },\n {\n \"confidenceThreshold\": 0.15,\n \"recall\": 1.0,\n \"precision\": 0.92045456\n },\n {\n \"confidenceThreshold\": 0.2,\n \"recall\": 1.0,\n \"precision\": 0.9310345\n },\n {\n \"confidenceThreshold\": 0.25,\n \"recall\": 1.0,\n \"precision\": 0.94186044\n },\n {\n \"confidenceThreshold\": 0.3,\n \"recall\": 1.0,\n \"precision\": 0.94186044\n },\n {\n \"confidenceThreshold\": 0.35,\n \"recall\": 1.0,\n \"precision\": 0.94186044\n },\n {\n \"confidenceThreshold\": 0.4,\n \"recall\": 1.0,\n \"precision\": 0.94186044\n },\n {\n \"confidenceThreshold\": 0.45,\n \"recall\": 1.0,\n \"precision\": 0.9529412\n },\n {\n \"confidenceThreshold\": 0.5,\n \"recall\": 1.0,\n \"precision\": 0.9529412\n },\n {\n \"confidenceThreshold\": 0.55,\n \"recall\": 1.0,\n \"precision\": 0.9529412\n },\n {\n \"confidenceThreshold\": 0.6,\n \"recall\": 1.0,\n \"precision\": 0.9529412\n },\n {\n \"confidenceThreshold\": 0.65,\n \"recall\": 1.0,\n \"precision\": 0.96428573\n },\n {\n \"confidenceThreshold\": 0.7,\n \"recall\": 1.0,\n \"precision\": 0.97590363\n },\n {\n \"confidenceThreshold\": 0.75,\n \"recall\": 1.0,\n \"precision\": 0.97590363\n },\n {\n \"confidenceThreshold\": 0.8,\n \"recall\": 0.9876543,\n \"precision\": 0.9756098\n },\n {\n \"confidenceThreshold\": 0.85,\n \"recall\": 0.9876543,\n \"precision\": 0.9756098\n },\n {\n \"confidenceThreshold\": 0.875,\n \"recall\": 0.97530866,\n \"precision\": 0.97530866\n },\n {\n \"confidenceThreshold\": 0.9,\n \"recall\": 0.962963,\n \"precision\": 0.975\n },\n {\n \"confidenceThreshold\": 0.91,\n \"recall\": 0.962963,\n \"precision\": 0.975\n },\n {\n \"confidenceThreshold\": 0.92,\n \"recall\": 0.962963,\n \"precision\": 0.98734176\n },\n {\n \"confidenceThreshold\": 0.93,\n \"recall\": 0.962963,\n \"precision\": 0.98734176\n },\n {\n \"confidenceThreshold\": 0.94,\n \"recall\": 0.962963,\n \"precision\": 0.98734176\n },\n {\n 
\"confidenceThreshold\": 0.95,\n \"recall\": 0.962963,\n \"precision\": 0.98734176\n },\n {\n \"confidenceThreshold\": 0.96,\n \"recall\": 0.9506173,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.97,\n \"recall\": 0.9506173,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.98,\n \"recall\": 0.9506173,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.99,\n \"recall\": 0.9506173,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.995,\n \"recall\": 0.9506173,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.996,\n \"recall\": 0.9506173,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.997,\n \"recall\": 0.9506173,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.998,\n \"recall\": 0.9382716,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.999,\n \"recall\": 0.9259259,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 1.0,\n \"recall\": 0.5555556,\n \"precision\": 1.0\n }\n ],\n \"auRoc\": 0.99965155,\n \"logLoss\": 0.029262401\n },\n \"displayName\": \"dandelion\"\n },\n {\n \"name\": \"projects/116273516712/locations/us-central1/models/ICN3600040762873479168/modelEvaluations/8816571236383372686\",\n \"annotationSpecId\": \"4295540141558071296\",\n \"createTime\": \"2021-02-26T03:00:19.383521Z\",\n \"classificationEvaluationMetrics\": {\n \"auPrc\": 0.99703646,\n \"confidenceMetricsEntry\": [\n {\n \"recall\": 1.0,\n \"precision\": 0.18541034\n },\n {\n \"confidenceThreshold\": 0.05,\n \"recall\": 0.9836066,\n \"precision\": 0.9836066\n },\n {\n \"confidenceThreshold\": 0.1,\n \"recall\": 0.9836066,\n \"precision\": 0.9836066\n },\n {\n \"confidenceThreshold\": 0.15,\n \"recall\": 0.9672131,\n \"precision\": 0.98333335\n },\n {\n \"confidenceThreshold\": 0.2,\n \"recall\": 0.9672131,\n \"precision\": 0.98333335\n },\n {\n \"confidenceThreshold\": 0.25,\n \"recall\": 0.9672131,\n \"precision\": 0.98333335\n },\n {\n \"confidenceThreshold\": 0.3,\n \"recall\": 0.9672131,\n 
\"precision\": 0.98333335\n },\n {\n \"confidenceThreshold\": 0.35,\n \"recall\": 0.9672131,\n \"precision\": 0.98333335\n },\n {\n \"confidenceThreshold\": 0.4,\n \"recall\": 0.9672131,\n \"precision\": 0.98333335\n },\n {\n \"confidenceThreshold\": 0.45,\n \"recall\": 0.9672131,\n \"precision\": 0.98333335\n },\n {\n \"confidenceThreshold\": 0.5,\n \"recall\": 0.9672131,\n \"precision\": 0.98333335\n },\n {\n \"confidenceThreshold\": 0.55,\n \"recall\": 0.9672131,\n \"precision\": 0.98333335\n },\n {\n \"confidenceThreshold\": 0.6,\n \"recall\": 0.9672131,\n \"precision\": 0.98333335\n },\n {\n \"confidenceThreshold\": 0.65,\n \"recall\": 0.9672131,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.7,\n \"recall\": 0.9672131,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.75,\n \"recall\": 0.9672131,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.8,\n \"recall\": 0.9508197,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.85,\n \"recall\": 0.93442625,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.875,\n \"recall\": 0.93442625,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.9,\n \"recall\": 0.93442625,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.91,\n \"recall\": 0.93442625,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.92,\n \"recall\": 0.93442625,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.93,\n \"recall\": 0.93442625,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.94,\n \"recall\": 0.93442625,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.95,\n \"recall\": 0.93442625,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.96,\n \"recall\": 0.93442625,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.97,\n \"recall\": 0.91803277,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.98,\n \"recall\": 0.8852459,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.99,\n \"recall\": 0.8852459,\n 
\"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.995,\n \"recall\": 0.86885244,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.996,\n \"recall\": 0.852459,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.997,\n \"recall\": 0.852459,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.998,\n \"recall\": 0.8360656,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.999,\n \"recall\": 0.78688526,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 1.0,\n \"recall\": 0.09836066,\n \"precision\": 1.0\n }\n ],\n \"auRoc\": 0.9992048,\n \"logLoss\": 0.03316421\n },\n \"displayName\": \"sunflowers\"\n }\n]\nprojects.locations.models.modelEvaluations.get\nCall", "request = clients[\"automl\"].get_model_evaluation(\n name=evaluation_slice,\n)\n", "Response", "print(MessageToJson(request.__dict__[\"_pb\"]))\n", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/models/ICN3600040762873479168/modelEvaluations/1701367336556072668\",\n \"createTime\": \"2021-02-26T03:00:19.383521Z\",\n \"evaluatedExampleCount\": 329,\n \"classificationEvaluationMetrics\": {\n \"auPrc\": 0.99747145,\n \"confidenceMetricsEntry\": [\n {\n \"recall\": 1.0,\n \"precision\": 0.2\n },\n {\n \"confidenceThreshold\": 0.05,\n \"recall\": 0.99088144,\n \"precision\": 0.92877495\n },\n {\n \"confidenceThreshold\": 0.1,\n \"recall\": 0.98784196,\n \"precision\": 0.9447674\n },\n {\n \"confidenceThreshold\": 0.15,\n \"recall\": 0.9848024,\n \"precision\": 0.9501466\n },\n {\n \"confidenceThreshold\": 0.2,\n \"recall\": 0.9848024,\n \"precision\": 0.96142435\n },\n {\n \"confidenceThreshold\": 0.25,\n \"recall\": 0.98176295,\n \"precision\": 0.9641791\n },\n {\n \"confidenceThreshold\": 0.3,\n \"recall\": 0.98176295,\n \"precision\": 0.9670659\n },\n {\n \"confidenceThreshold\": 0.35,\n \"recall\": 0.9787234,\n \"precision\": 0.966967\n },\n {\n \"confidenceThreshold\": 0.4,\n \"recall\": 0.97568387,\n \"precision\": 0.96686745\n },\n 
{\n \"confidenceThreshold\": 0.45,\n \"recall\": 0.97568387,\n \"precision\": 0.9727273\n },\n {\n \"confidenceThreshold\": 0.5,\n \"recall\": 0.9726444,\n \"precision\": 0.9756098\n },\n {\n \"confidenceThreshold\": 0.55,\n \"recall\": 0.9726444,\n \"precision\": 0.9756098\n },\n {\n \"confidenceThreshold\": 0.6,\n \"recall\": 0.9665654,\n \"precision\": 0.9754601\n },\n {\n \"confidenceThreshold\": 0.65,\n \"recall\": 0.9665654,\n \"precision\": 0.9814815\n },\n {\n \"confidenceThreshold\": 0.7,\n \"recall\": 0.9665654,\n \"precision\": 0.98452014\n },\n {\n \"confidenceThreshold\": 0.75,\n \"recall\": 0.9665654,\n \"precision\": 0.98452014\n },\n {\n \"confidenceThreshold\": 0.8,\n \"recall\": 0.9604863,\n \"precision\": 0.9875\n },\n {\n \"confidenceThreshold\": 0.85,\n \"recall\": 0.9452888,\n \"precision\": 0.99044585\n },\n {\n \"confidenceThreshold\": 0.875,\n \"recall\": 0.94224924,\n \"precision\": 0.99041533\n },\n {\n \"confidenceThreshold\": 0.9,\n \"recall\": 0.9392097,\n \"precision\": 0.99038464\n },\n {\n \"confidenceThreshold\": 0.91,\n \"recall\": 0.9392097,\n \"precision\": 0.99038464\n },\n {\n \"confidenceThreshold\": 0.92,\n \"recall\": 0.9361702,\n \"precision\": 0.9935484\n },\n {\n \"confidenceThreshold\": 0.93,\n \"recall\": 0.9361702,\n \"precision\": 0.9935484\n },\n {\n \"confidenceThreshold\": 0.94,\n \"recall\": 0.9361702,\n \"precision\": 0.9935484\n },\n {\n \"confidenceThreshold\": 0.95,\n \"recall\": 0.9331307,\n \"precision\": 0.99352753\n },\n {\n \"confidenceThreshold\": 0.96,\n \"recall\": 0.9300912,\n \"precision\": 0.99674267\n },\n {\n \"confidenceThreshold\": 0.97,\n \"recall\": 0.92705166,\n \"precision\": 0.996732\n },\n {\n \"confidenceThreshold\": 0.98,\n \"recall\": 0.9148936,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.99,\n \"recall\": 0.89361703,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.995,\n \"recall\": 0.88145894,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.996,\n 
\"recall\": 0.87234044,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.997,\n \"recall\": 0.8693009,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.998,\n \"recall\": 0.8449848,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.999,\n \"recall\": 0.81155014,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 1.0,\n \"recall\": 0.24012157,\n \"precision\": 1.0\n }\n ],\n \"confusionMatrix\": {\n \"annotationSpecId\": [\n \"548545251585818624\",\n \"4295540141558071296\",\n \"5160231270013206528\",\n \"6601383150771765248\",\n \"8907226159985459200\"\n ],\n \"row\": [\n {\n \"exampleCount\": [\n 55,\n 0,\n 1,\n 2,\n 0\n ]\n },\n {\n \"exampleCount\": [\n 0,\n 59,\n 1,\n 0,\n 1\n ]\n },\n {\n \"exampleCount\": [\n 0,\n 0,\n 81,\n 0,\n 0\n ]\n },\n {\n \"exampleCount\": [\n 0,\n 0,\n 0,\n 73,\n 0\n ]\n },\n {\n \"exampleCount\": [\n 0,\n 1,\n 2,\n 0,\n 53\n ]\n }\n ],\n \"displayName\": [\n \"roses\",\n \"sunflowers\",\n \"dandelion\",\n \"tulips\",\n \"daisy\"\n ]\n },\n \"logLoss\": 0.02853713\n }\n}\nMake batch predictions\nMake a batch prediction file", "test_items = !gsutil cat $IMPORT_FILE | head -n2\n\nif len(str(test_items[0]).split(',')) == 3:\n _, test_item_1, test_label_1 = str(test_items[0]).split(',')\n _, test_item_2, test_label_2 = str(test_items[1]).split(',')\nelse:\n test_item_1, test_label_1 = str(test_items[0]).split(',')\n test_item_2, test_label_2 = str(test_items[1]).split(',')\n\nprint(test_item_1, test_label_1)\nprint(test_item_2, test_label_2)\n", "Example output:\ngs://migration-ucaip-trainingaip-20210226015151/flower_photos/daisy/754296579_30a9ae018c_n.jpg daisy\ngs://migration-ucaip-trainingaip-20210226015151/flower_photos/dandelion/18089878729_907ed2c7cd_m.jpg dandelion", "file_1 = test_item_1.split('/')[-1]\nfile_2 = test_item_2.split('/')[-1]\n\n! gsutil cp $test_item_1 gs://$BUCKET_NAME/$file_1\n! 
gsutil cp $test_item_2 gs://$BUCKET_NAME/$file_2\n\ntest_item_1 = \"gs://\" + BUCKET_NAME + \"/\" + file_1\ntest_item_2 = \"gs://\" + BUCKET_NAME + \"/\" + file_2\n", "Make the batch input file", "import tensorflow as tf\nimport json\n\n\ngcs_input_uri = \"gs://\" + BUCKET_NAME + '/test.csv'\nwith tf.io.gfile.GFile(gcs_input_uri, 'w') as f:\n f.write(test_item_1 + '\\n')\n f.write(test_item_2 + '\\n')\n\n!gsutil cat $gcs_input_uri\n", "Example output:\ngs://migration-ucaip-trainingaip-20210226015151/754296579_30a9ae018c_n.jpg\ngs://migration-ucaip-trainingaip-20210226015151/18089878729_907ed2c7cd_m.jpg\nprojects.locations.models.batchPredict\nRequest", "input_config = {\n \"gcs_source\": {\n \"input_uris\": [gcs_input_uri]\n },\n}\n\noutput_config = {\n \"gcs_destination\": {\n \"output_uri_prefix\": \"gs://\" + f\"{BUCKET_NAME}/batch_output/\"\n }\n}\n\nbatch_prediction = automl.BatchPredictRequest(\n name=model_id,\n input_config=input_config,\n output_config=output_config\n)\n\nprint(MessageToJson(\n batch_prediction.__dict__[\"_pb\"])\n)\n", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/models/ICN3600040762873479168\",\n \"inputConfig\": {\n \"gcsSource\": {\n \"inputUris\": [\n \"gs://migration-ucaip-trainingaip-20210226015151/test.csv\"\n ]\n }\n },\n \"outputConfig\": {\n \"gcsDestination\": {\n \"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210226015151/batch_output/\"\n }\n }\n}\nCall", "request = clients[\"prediction\"].batch_predict(\n request=batch_prediction\n)\n", "Response", "result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))\n", "Example output:\n{}", "destination_uri = batch_prediction.output_config.gcs_destination.output_uri_prefix[:-1]\n\n! gsutil ls $destination_uri/*\n! 
gsutil cat $destination_uri/prediction*/*.jsonl\n", "Example output:\ngs://migration-ucaip-trainingaip-20210226015151/batch_output/prediction-flowers_20210226015151-2021-02-26T03:00:47.533913Z/image_classification_0.jsonl\ngs://migration-ucaip-trainingaip-20210226015151/batch_output/prediction-flowers_20210226015151-2021-02-26T03:00:47.533913Z/image_classification_1.jsonl\n{\"ID\":\"gs://migration-ucaip-trainingaip-20210226015151/18089878729_907ed2c7cd_m.jpg\",\"annotations\":[{\"annotation_spec_id\":\"5160231270013206528\",\"classification\":{\"score\":0.9993481},\"display_name\":\"dandelion\"}]}\n{\"ID\":\"gs://migration-ucaip-trainingaip-20210226015151/754296579_30a9ae018c_n.jpg\",\"annotations\":[{\"annotation_spec_id\":\"8907226159985459200\",\"classification\":{\"score\":1},\"display_name\":\"daisy\"}]}\nMake online predictions\nPrepare file for online prediction\nprojects.locations.models.deploy\nCall", "request = clients[\"automl\"].deploy_model(\n name=model_id\n)\n", "Response", "result = request.result()\n\nprint(MessageToJson(result))\n", "Example output:\n{}\nprojects.locations.models.predict\nRequest", "test_item = !gsutil cat $IMPORT_FILE | head -n1\ntest_item = test_item[0].split(\",\")[0]\n\n# Read the raw image bytes to send in the prediction payload.\nwith tf.io.gfile.GFile(test_item, \"rb\") as f:\n content = f.read()\n\n# The payload is a single example (an object, not a list), matching the\n# request JSON printed below.\npayload = {\n \"image\": {\n \"image_bytes\": content\n }\n}\n\nparams = {\"score_threshold\": \"0.8\"}\n\nprediction_r = automl.PredictRequest(\n name=model_id,\n payload=payload,\n params=params\n)\n\nprint(MessageToJson(prediction_r.__dict__[\"_pb\"]))\n", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/models/ICN3600040762873479168\",\n \"payload\": {\n \"image\": {\n \"imageBytes\": 
\"/9j/4AAQSkZJRgABAQAAAQABAAD/4gRISUNDX1BST0ZJTEUAAQEAAAQ4YXBwbAIgAABtbnRyUkdCIFhZWiAH0AAIAA0AEAAGAAdhY3NwQVBQTAAAAABhcHBsAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLWFwcGwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAxjcHJ0AAACBAAAAEhkZXNjAAABFAAAADF3dHB0AAABSAAAABRyVFJDAAABXAAAAA5nVFJDAAABXAAAAA5iVFJDAAABXAAAAA5yWFlaAAABbAAAABRnWFlaAAABgAAAABRiWFlaAAABlAAAABR2Y2d0AAABqAAAADBjaGFkAAAB2AAAACxkc2NtAAACTAAAAepkZXNjAAAAAAAAAA1zUkdCIFByb2ZpbGUAAAAAAAAAAAAAAA1zUkdCIFByb2ZpbGUAAAAAWFlaIAAAAAAAAPNRAAEAAAABFsxjdXJ2AAAAAAAAAAECMwAAWFlaIAAAAAAAAG+iAAA49QAAA5BYWVogAAAAAAAAYpkAALeFAAAY2lhZWiAAAAAAAAAkoAAAD4QAALbPdmNndAAAAAAAAAABAADhSAAAAAAAAQAAAADhSAAAAAAAAQAAAADhSAAAAAAAAQAAc2YzMgAAAAAAAQxCAAAF3v//8yYAAAeTAAD9kP//+6L///2jAAAD3AAAwG50ZXh0AAAAAENvcHlyaWdodCAxOTk4IC0gMjAwMyBBcHBsZSBDb21wdXRlciBJbmMuLCBhbGwgcmlnaHRzIHJlc2VydmVkLgBtbHVjAAAAAAAAAA8AAAAMZW5VUwAAABgAAAHSZXNFUwAAABYAAAEyZGFESwAAACAAAAFwZGVERQAAABYAAAFIZmlGSQAAABoAAADEZnJGVQAAABYAAAD0aXRJVAAAABgAAAG6bmxOTAAAABgAAAGQbm9OTwAAABYAAADecHRCUgAAABYAAAEyc3ZTRQAAABYAAADeamFKUAAAABYAAAEKa29LUgAAABIAAAGoemhUVwAAABIAAAEgemhDTgAAABIAAAFeAHMAUgBHAEIALQBwAHIAbwBmAGkAaQBsAGkAcwBSAEcAQgAtAHAAcgBvAGYAaQBsAFAAcgBvAGYAaQBsACAAcwBSAFYAQgBzAFIARwBCACAw1zDtMNUwoTCkMOsAcwBSAEcAQgAggnJfaWPPj/AAUABlAHIAZgBpAGwAIABzAFIARwBCAHMAUgBHAEIALQBQAHIAbwBmAGkAbABzAFIARwBCACBjz4/wZYdO9gBzAFIARwBCAC0AYgBlAHMAawByAGkAdgBlAGwAcwBlAHMAUgBHAEIALQBwAHIAbwBmAGkAZQBsAHMAUgBHAEIAINUEuFzTDMd8AFAAcgBvAGYAaQBsAG8AIABzAFIARwBCAHMAUgBHAEIAIABQAHIAbwBmAGkAbABlAAD/2wBDAAMCAgMCAgMDAwMEAwMEBQgFBQQEBQoHBwYIDAoMDAsKCwsNDhIQDQ4RDgsLEBYQERMUFRUVDA8XGBYUGBIUFRT/2wBDAQMEBAUEBQkFBQkUDQsNFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBT/wAARCADVAUADAREAAhEBAxEB/8QAHQAAAQQDAQEAAAAAAAAAAAAABgQFBwgAAgMBCf/EAEkQAAEDAwMCBAMFBgMFBwIHAAECAwQABREGEiEHMRNBUWEIInEUFYGRoQkjMkJSsXLB8BYkM2LRFyVTgpLh8UNzNTZEVLKzwv/EABwBAAEFAQEBAAAAAAAAAAAAAAMAAQIEBQYHCP/EADsRAAIBAwMDAgMGBgEDBAMAAAABAgMRIQQSMQVBURNhInGBBjKRobHwFCNCwdHh8QckUhUWNHJigrL/2gAMAwEAAhEDEQA/APllihATDk+VOI8pDm
d+KQj3HekI8/vSEe4/GmGubbcUhrmelIR755phjCBTjnm304pCMxxSFcwJNIVzzaTjPNIVzNvNIe57twKQ1zMUhG2KQxg4phGZpxHeK0XFihTdkQlnA/RoqkpBxzWfOaB2O7m5KaErDu4zz3skjkVepxsMssalEk+1W0HWDynJDxp5rdLRVLUu0GKGZEsWI+CgKHHvXHV228Fkf3b4puOoE+VTpampFgZJApdroXvPnyFXXKVV3kSSSBaavavjNXaauiMhXZGy4srPPpQa7srE4oONJ2/7VdWm8ZB71UoPdUSGkWf6f6KBZaUEAnArraCVlYZImG1aVUy0AEeVXyVjW5aaCk4LYzj0pm7D2vgjHV3T1lalOJaTn1xzWLrKrUW0y5Qpq5G8zR7LcpeGxkcdq8h1+uk68l4Or09CO1Dlpi3Jt09IKcJKh5VUpa90KsavbuEr6ZSjgoP2r3o87MPekOjzjOaQ5hGaQjMd6Qj0DikMKYFvkXOShiM2p51XAAqUYuTsiMpKKySBP6Dalg6fN08IOAI3loJOce3rRnQklcipNq9ievgz6J6P6mwVuXVlmVLCyhaXRkoP0o1JR23IqO9tMsZ1A/Z3aRv1vUu3RzAkAZDsX5T+XancYSCenb7rI2Y+ACy2KA4mYXZL4RlLriv4vwqSpQSIbW+WVE66dJ1dKtVfZWipcCQCppSvIjumqdWG14JJ9mRsOKCSPfOkI8wDTiMxzSEZ+NIRnFIRs2hbziW20la1EJCQOST2pC45LadBfggf1za27tqR11hl1O5uK0dv5mrkKKWZEEpTzwg66nfATp/T+nJE63yZLD7aMj95uTn6GnlThYlsaV7lM4loMWU60ohRaWpBI7EgkVgV6m1tA/ce24hSkcD61mOdxxLNZKArt2o1OVyL8AtPV86u34Vr01gUMiI81YDGBNMIINMt5lJNZ2qfwjw5JStZT4QrkKt7hza6KCY6iCBSoq8iLBSU7+8BrXgsEW7DdJHiO5x3qzB7URbuPVjjkNJ4qlqJZCxeCU+m1s8afuwMjGKBpHeqxNls+mjC20tpWjA4xXYUOEIm+FDSlhJKc8elXiQhukZtQPAHsarzYWOQEvzLZbWCAawNbO0WaVFZInuUNBnukAYJrwrXVL6mo15Ov00fgQ2uRA08Fj1qqptqxclHB87DX06eSGYpCPMUhHoGRxSEeYpCNgnOAO54pC4LZfDf0pgP2wz5DaXHsg/MOa1YQVOKaK0fjbuWntemIt7srtuKEEAEIHp7VCUi1COLFR5rtz+FjrnGurQcbsE57bIQBhOCeT9R3/Oqqltd+zByTg7o+rPSnVsTW+lYs2M8h9K2woKSc7gRmivBZi9yuJNfWpEqO4lI2qGSMeVPGVhpRuUk+JjoJJ6lMx2oqg1JadC0Obc+RBH61Ga3qwGUXhorxdPgY17BbU614D7YG4HaoZFC9B+SHx+AZb+EXqQ4lZ+5wAnkEr/i+nFR9GSEnJ5sDV96Aa+08oiTpqYsf1MJ8TP5c0zozQt/lAsvROoGt++yz0bP4t0dQx+lR2T8D74hP0g6K37rBqIW22sqaZbVh+QtBw37fX2qUKbnzwPuu7R5LK3z9m/dItrS9Cu7pkhOVB1sFJP4Ub0Y9mPtmu4PdKfg11FaOpEQ31tl+BGWF7m88nPGQaeNPY7tkbSk7NF7JNza0vBj26LhBSkDCaLuyHeFZEJ/EBrO/OaakQrNEkz3th+WOkqO40OrJ7XtBzdsFCY0F6HJcalNLakIUQ4h1JSpKvPINcnXbvkGhxAG0DAxVElYaLosIQau0VdkHgD5qtzqq24KyHjyJjRQh7SEEelk/vkmsrV8EqZI1uUdvfiuYqoOlc4XmRsaPOKJQjdkWrAlIlYWea2YwwCk7GId8Yik47SCuFdkw00kEd6yqzvcsR4Jn6PxhKnnAwnI5pdPjebF3LlaFtDKYzRIGcV2dJWSHJGYbSG9oPFWWIQXOBvbJz5VXmg0WR
rqmEpvfgnNYGtpuUWkaFGdnYhy9yJEWYoFJKc5rxHVaaVKvNVOWzrtLVi4o4CSZCM4qhsUWXp1E1g+dWK+nTyU9NMIw0hGUhHmeaQj0EpIPpzSuJ5Vi5fw7aoH3Gjac5SMjNbUGpwsU4va7k82TUy7dcUvjPhqIzQJRLUZWYn+JLphG6s9OpD0VCftiEeI0r0WORVCo7Fhx3rALfs4OtU233KboO8LUJEBf7gLPOzJBT/5SPyIosJbogYPbLaX26jNGNDTKb/hxk49DToO8EKayfXDZhzWkhQ8QJUCPI098kXxdEoMyYrmm4ZWwglYAPHJoq5yM3gJYWl4P2RpRjIyoelM3klbAK6105ZY7rYfjIHfHAosXdA3yQNrqwaeSX0JYRvV2wkc5qbWCGExJ0dsVh6ZszJbbTaC4suZAAJUfOoW7IUEo5Ce69dHJDqmIzZcHYDyNS2pEt9wm0vJK7K5c5raG3nAV8elBm12CRvywMt0R7VmpzgnYokD2FDuLknvTHS+zx4KUPx0LJHcjPNPfBKyKs/Fr8JMC6Jcvdl8KFcRyVAYSsehx/es/VaVVldcgZQtlEQ6B+GKz+G0u8L+0OnGQ4ogflVOHTl/U7klHuxn+KD4f9JaK0Q5dbS+3GmtpCglCuF+oNWnpIQjddiFSO3KKSOncc4qSwQRpmpEzBzj60hBZpVs8msfWMJTD6Cgoa9a5yo7stRQ0age2o/WrumjdgpcgVKlHxeTW9CGCtIVwHd2BQKkSUUGsIfwnsKwagZcE69HCGm0LHJJq1oFa79xi2eirsfBQj2FdbSwOSJBm7gM1YEbTZaSgnP60GROLAbUCkvrUkDNUasL4LEZWI+vVhEkqG0FRrm9b0unqYtSVy/T1DhwMzOjlt85IFcjL7NRvmT/AH9DQ/jW0fMUV64zjT2mEZTiMpCPKQjKQzJq+HbVv3fdzAdXhJIKcnyq/pqlsFeas7lu4KgtsYOQRkVaku48fAf6Du4Wly1yVZbdG1OfI1TqQuW4Stgr3qrSMjot8TumtUW9Batt3lCNJ29krV5/jx+VAjiRCotkk0fTK+KF30Ew8fmJZ7/hRHgsNXRESreL3aGmSMlDqf70K+UPbDJBVahHtEBs90rAxVm5BhuuQ3DiNKWQEgdzSHIU6vauZdloZZcClJyTg0aKsgM3myIDv81ydKJ3HNTvYEk2xplPvvpDO87B5ZoW8JtvgddLWoOT0JWOM80OVS4SMbEl3y5rdhNwI5KUkAHb5ChOVifOAr6X6d8F7xyOfXFRjlknglefemrJEU64sJIHGaMMV96idQJuqZaozZ/3cHAAHJpt2bIiyvPWvqgnptZipt4fbF5CEg8k1GdRU1dkJO2Cod3veuOs9wKN8qaxu+VtJIZT/wBarpzq8lfLYb6T+Ea53kJ+87q3CUrnw20gkfnRVTS5CKMiP+sXR9/pRc2mTNRPju5AWBtWk+hFDlFLgVnF2kR2kfMKh2E+A20o3hGTWDrHkPCIaIdCGsZrBcbstdgS1FL3rIBGBW1pYWWStMDn3Ct4DNbkVZFQfrO0V7c+VZ1d2DxWA6iIAaTnvXPTeQrJQ6XX1MN0NqVg5q7oZJScSD8lntI6pjpZRlQ7DzrrqfFyVyQbfqhlSAA7+Zow4pfvyHEnC8mojoYn5+9ZJI59aE43JJnNKUPc+dBlBMIpM6mIkIIP6UF0UyamfIBDanFpQgFSlHAA8zVoy7lkekXwc3TXcBqfcn1RWHBlLbffHuasqku7ElKWVwShO/Z8Q1xz9luUlt7HG4g81L0o9iWx+SFeonwea20TveiR/vaMnybG1ePp2NDdJrgi7rkh1WlLw3JWw5bZLLyDhSHGykj86jGlOXCIOpFHVWjLylO4wHcewqToTXYiqsWe2Zc/S95jTFx3WfDVzlJHHnUYqUJXaFJqSwXj6aaja1JpyNIbWFqCRmtS+5EIhzEeMd9DyOFJPlQZIsJhXrbTUXqPpBpxSQqTGWh0Kx
ylaSCFfpVKSs7h2t8Wi0+hQq69NY7bg/eIZCVZ+lSZOLuiOdPtvW6+PR3GFlpK+4Scd8ig2ZJEl3iVG+zRlbgEggmietCK+KSFsk+EIOocK73GzBFoa8R3bwN2M1aja+QUr2wVpuOnL/FuihcYElDiyfmUgkH6EcUWckkAjF3ybO9Pru54a/sK0B5OW1OcBX0oDbfAdIHlaJv8OWov2aY0kZOS0SPzFV3v8ElYKNK6dlqK3vsrpA8w2eKFdk7JBRCtQDm9aTvJ7EdqV7jpEmacU1Z7YXV4ScedHpqyuRYA6x1DIv0pTLSj4WcYH81DlUzZD7e7Be82OXabI/KjxC/IKCRngDj1okU+Qbdsla7F8M2petGunbjqVOIqF4bjgnw20Z8z60JUpTlumBs5ZZbPSnQXRfTqPGtyYbU24rSDtV8qGk+alegq2kuwZRUcB3bOm2jtXW5SI+n4UiKlW4SHG8F5Q/p/5aewzUX2PmB8bvRXU3TjXki4SQ7J03MeUqMoDKYx/wDDPt6H8KBVg1lcFZu0rP6FZWxlYqq+CTDSxLDLIHFYeoW6RZhwPzkrLZxms5QyGbBi6NrdKiAT9BWxQ8IqVHYa49ndcdBKSR51qOL23K6d2FFlgbSBt5+lc/qZu9mW4hhHifIMDPFYcp5CMerBaZCZHioJSPSrekvOakgfGA3j3y4W5IKFq+WuzpXtgErj7aeqM1BCFKOR6mi38k1II2+rT8dG5Su3vUrsfcYnrSl5xKFK+dRwMU913G3EsaQvRnxkLKSdwBGaawRMI3pYbyTwBUbImmfKjQOnvv6+xwuSmK02sKKyMkn0FKCTZRld4R9Guirkdq0RmGb6h0ISBtLiT/arLaDwvbksNBhIkQgUupcXjhQOaQSw5Ri4IIYnQWH0njkZyKnuuQUbEea16H6c1Upb33c008exSkAip06rgQnRjMhnUvQk2RS9sZDjQ7EpxV1VYzKToyjwEejfhk0Zd2m2tTxHG5soBUdLo2x157DcDzVOb3dizGnFcsJZfQbS/TopisadkWcKOEPw5CnGl/Tdx+BqCk1wE9KCzYZNQaFk2WIZ8Vf2+25+ZxCcLa9lp8vr2qbkmR2uJmi7z9jkmOsgsujaQfSqla0Y3Yane9ixWiZUiZbVQGJJSjwxw3wQfL61W0ur0+qco05JuPK8B6lKdJJyVkx1mwJMm1l+GrwbjHJ4xwojuk/WrsuMA0Ct0va79ZS42fAuLGfEYzjeB3/GvM/tp0fVa3R/xWgqOM6eWk2ty/yuUdD0jVUqdVUq6un38EldPLwjUul4knjxmv3bgPkocfrXRfZjqr6v0qjqJ/fS2y/+yw39cP6lLqel/hNVOmuOV8mOdwgsOS3IchtJakIJSrHY+f8AeurMsjDWxNs1FZ7NkltpOdx8wexpuBEix2/GssCS4hKiD4auPwp7iGRx2NarrJjLbQnjxEHbxim5FwJrjcrHJebakW9JewTlHFVZTXFgqjdXBTV8CXHtKJSEeHAcXtTzyKjNtRuhksjHpTTy7jJDhQVAdgBTU1ceTDTUNimNwW2vsiEoXgEqPYeuPOrqBM4THoHTfSrstaEpecH7pOMb1kcfl3NJjpWIYZu8vUTs1+RJUy2+rMl/OCof0j2p1zYi+5ITHUV3TllSLXCMotoAbaQMDHkBRJSigebYI/6yMXTrX04uMC5acSl91tSUtuHntwc44OaruqngeUHKOUfL29dDtc6bvDVvmaYuJfed8FkMsl0OqPYApzVWUJWwivd8Msr0q/Z5dSdXxEyLxJt2jm1JCkNXAqeeVn1Q3/D+Jz7VW/gnUzN2LKvwkJOsXwU9QejkFdxdbj6lsiElTtxtIUoMgd/EQRuSPfkVUq9PnB7ou6FKdvvEJRtPqfWn5d2far1CklhFaV2OatHqQ1vDZB+laFSK2g0jjGthYcwpPP0riuoSaqZLtO1glscIPyUpNc5Xk0sBSQIdlSy0MDiuk0UFTSQF3ZpItp
W8Ep8+O1dZTlFRIbWNs6wOsulSDgZ8hRN0bi2sRyrc84NqCSTT3T7icWuRTY9HykSUOryskg9u1Sa8EUix+ifEjw20lOCAO4p7B0FEyQsMnKB+IpNDnzh6L9CtW9Ybh4FhbVHi52uTF5CB9Md6HClKeexTbu7LkuLoT9mpqK1Mty29cT7fLGFYhgJTn3BJzR/SS7k1CXJPWjeivUjQKUNP38X1hHAU60ErI98HBpWCpTj3uSnbUXBLYRNilCuxwOKZoJcRjUVufuDsOMsLcaVtcUoEN58wlZG0kdu9JJyE3YeRpZm6spakKS0p3OxDqQM/Q5wfwNESS7kb3Mj6dFoty7PdYJm2nuh1rKlM/wCYAphGsiAhy3/cVykGfapIxDn7sqz5JUryWPI+dPcS8ANY1fZbnKstyKWpjeWw6R8ryD23DsQai1bIk74ALqNoB7TUldxtzKkxQQp1lP8A9IH+ZPqn+1cL9r56uPTm9Ne1/ityl5/ybXSoUpahKp9B46V64XbbhGnEYbbUEvI5OUdjXgvROqS+z3VYaht7JYn7p9/muUdvrdHHV6Z00srj5lkrkhqK7GnsFJizAEqUk5GTylX+VfWMZRklKLunlPs0eZ2adnyQvrplq1a2eZZUUIkIS+jacYJyCB+Ioc8O4y8BX0buCrdf5NtWshiY2XEJ8gsd8f68q8u6HD/0P7Q6jpSxSrL1ILt7pfLK+SR1Otl/G9Pp6r+qHwv5dv37kl6yyxbo81IIUw4nJH9J4NernKsA+qcdMpFnuqAC6gBKiPP/AFihzxkdBxp8pnaUdSk5Lawrv64NSTuhAv1BYw/AloGCWlJWfXFJ8jAiwhdwuMFbR3rdSEbQexziqlSN5qxYg/hySDr2NaImjRBnyPBcwPCCTyVD2o8opraBTtkFtFdQNN6cbaZfadUoD53ijjPtRFDZEbcmx8nXP7zC7nOUI8YDxcKOPDbHYfU1JYQzyQH1I1g/r2/tsRgr7I0rw2Ghzuye/wBTQXO5KwTK0ZCtVviQXwC4ynxJBz3cPOPwHFM5OOB9qYpiLbbAaiNcA4B7mhbmySSQ4tSn40xlgsOOLc5Kh2SPU1JRkxtyN7u9CjTQ4oMB5pGVPhIw3nuc+tWY4VgbzkS2nVKpboeW8puKFbE5Pzvq9AP8vzoijbDI3HHXvxKaG6WQVwNR3Jpy4ONf/l+CkPyFpP8A4o7IB/5iB9aHOpGm8vIrp8lC9ataZ1Vru53bS9m+4rNKWHGoBIw2oj5iAOEgnJ2jgeVQTTd0is42wIbjZ2m4+AkYxSqPA6iAV6hIY3KSBnNcb1JotRRtYiGnkOehzXKVm1wSSuyVrItm4MDCgCB2rRoa+DxezLcKFxebY0h7cojPoKPX63T06tJl+Gj39hYixNy8ZTnNYlX7VxWEmXY9NFtt0O0XQC1z6kUDTfan1qqg3a4WXTklewZ23RDDaR+7Ga9E0eu9WKdzFraZR4DG02ERxgJFdFCe5Ga47XY1v8RTMZWB5VNkGiOvhe1HadDaLtjDQQ2pTQJIxkkirkWtqsVKa2os/aetdubZGVlXsBmhStzcspseo3WuE+vb9mdPuUYoTkiWThf9dMakbhR2JarU0h0uSFhkqU8APlQCFDCckknOTgD1qKmrjvg6aV6gumUqDPhGGrdtbdbkeIw6PLaogAE/0kg0VPcrshww8BiTg4w2lDq/54j6QhzOPQ/35qNx7CePvjqUGFLfQ2MKjucPtf4fX6f3pciEtxtTFwjubEtpLh5Vt+R057LT5K+nPpSYsEX9R9LSXmG7lHQtM6IBuAVuK0fXzx6+Yqad8Mg01lGaM1WxqOF923DCZCU4StYyCP8ApQZwTTjJXTCRlezQF6y6cytKyHpVqYU5AWSVMoOS0c54Hmk/pXhH2l+x1WnUep6fHdB8xXMf8r5ZO76b1iEoqnqHaXnsySPh+1unX2lblpmaSidCypjfwopB/wAjXW/YzqEp6d9Lrv46Sur/AP
j4/wD1bt8mjK6zp4xqLU0/uyw7ef8AYG9XH3WdWRHSna42wEqT6KCjn869Jkt0Dmb2Y7afuhhzrLdmsFLMhtTn/wBtR2q/LNcL16hGlW0nVu9Cav8A/STUZfhf9Te6fPdCrpXxOLt81lFi7/CTcLBNa5+ZkkHHGRyK7z2MPsRVqZf2zSdvd7pQoA+tRlmLGCfpvKL0F2MT/wAVnIz6pqMHgmwX6h3xoOQIrbmXklXiAdgDxipT4IpXG/p2pmK/cLlJH7iCguFXvzgUFLN/BPhWQM6n1A/qScX3CrKzhKM52j0qUJXdyLVsHeFotxFwZTMGzYgPKbB59gaJu3Ma1hB1X1oZCBaoq8MtgB0pPC1+n0FDqT24CRi2xn6TafXLusi6lkvIt7ZcQjH8bhHyj+5oMHd38EpKyMni/wB+lu/7u4zlRKgoYOali+QWXwbxVTLLLixX3ymQ8va20hJJUe/f6f2o0ZR7EWn3DO5XZVtg7XXEqlLQNyz2QnFTuIjG6XWRqC5IixWluo3YS0kcuH1P+uKknbLGecD9ffh26raujqj2bU9n0Y2Wtipu1yTMGe6EAAIaHqrKlH/loVScpLbB2G2Mp71r+FPXXQOX94XdSNQWh9W9d7h71jeTz4wV8yVH+okg+tZf8POk9zdyDduQX07eW1oSnxElXsRV2LsiI/3Gf+6754odSeAiQAX6SHFbc8k1xuunuqbQq4uLdPQzIKf6a5jUz2hqSyS3pOxNeEk4wT51Ro0t/wAcuexs08cBL9w4eGSTXKa+pKjWlGTudBp0mh+tlqDOARXPVa242YU00EEeKEgHOMedUPVlGW6Lyh5001wP1pIUlOBkivXfs7rp16MbvKwznNZSUWwgZcKEY2jNeuaSo7ZOWrwzgaL6svNKTjHGK03K5RsVE6fXrTugLNDZu1z8R9CB8q1gAcVJVElZFaMVFZZLGm+ueiJag2i6sIUPLeBQm7h1KL7km2zX9gkxC8zcmVoSM8LFQvkIrNBNbNUWm/QFrtJW++lOdxPGasSjCCywabk8IZW+qtwsTDltkRErUV5fQoZCx5CixqQ2og1K4a6G6kXCZHSi7MtyIKl5aS0f38Uf8qifmA/pP4YoEqkXLARJ2uyU0T1vIbdRKTJA4blJ5P8AhV5/gampXyK3YdY90S6kiSgEq4UodlD0Pt+oqdxfMSXuEYWHCftEJzso8lOfX3/uM01xES6s0UuA8u6WknYn94tpPdA/qHqn19Kle+CFrcBDo7UTWpoH2STtEpCcEH+b/wBqFJWdiSyIo9mb0vq2DeIyCzKjuZCkceIg8KQr14/yrJqaGi9VDWJWqR7runhp+Vn6MtRrT9N0m/hfb+536+6cS9Lh36KN0aW0Fcdgr/4rYvm3kqvyBujJXjRXoSzzg7fYEY/vWVqqMNRSnp6ivGSafyZapTlCSqR5Rae2T2Faeguy32mQ6wkFTqwkKO3nk1bnWp6WnF6magvMmo3/ABsQUXUk/TV/lkh68X23N2SXb0SA4pbhLKkp+QgHIwo8cjkVw2r+3nQdJJ0/Wc2nZ7U37c4uvka1Lo2sqq+23zYm09rliwtoWN5dQlzyGBnj+9Yb/wCpPSYThGMJtPl2StjxfN/y89i9/wC39U07tewOTpCbrJ8RTwS6lW5ZXyOee4qpV/6n9OjKMYUJuL5eE18l3/FBI/ZzUpNuSv8AUdp1xYgaCdhx3d0yVLKpKAOUoA+X8/8ArXUdL+2fR+qpU6VXZOTdozw8W78ZvjPkztR0rVad3lG8V3WRs0C0zL1VHVLThhkKdwocLKRwPz/tXT6TqOi1ctmmrRm1yoyTf4J3M+pp61LNSDS90OWrNWCIZLyHAqQ+ooGOceZP9h+FaSmllAHG5DdwkKmzSCSrnnP61RlUvItRhZFgentgcs2h2nW8JccV474H8WMfL+lHyoYBcyuxfIvELUTHg25sm5oOCUpwk/4jQ1UVZWjyO47HfsDlwdZhOOTJCW3zGB
Q24gYCnCMHB9B2qxTi4ZllgpS3YRGN+u8i6yfBaKnFOq7DkqVRk7gmSz0+6eRNMw2pdwwqcf3rgJACcc4J9BTSld2HirZHS9dbLZbXnI0TEtfZTgVhsH6+f4VNU2+ROXgHVdSLrrpblmi29V28ZO12G0wFI2H+vPAH+I0dbYoHl8FXepHWq2QNcy9JaD6TaZ19JgENS5cC3KmR2Xhnc1lhCQVJxgnfgfhQpSd7RiCbV7JDN1d6X3XUujNO6k0j0v1BYL7IcfavVhiMreZjhKUFDqEElQCiVYx3AORkZNKvTqShdRyEwrW4KySy+3NdYlMux5LLnhusPIKHG1DulSSAUn2IBrh68ZxqP1FZhL3V0GOmZCWkoGfrWG4bqyuFg7ImXSGHI6XCfl7VsQ0alG5oUqlg5jxG38KSTu964vq/R5VZOpT+8b2n1CjyOC4yoyNxTx6iuBr9P1VDM4Y9jdp6mL7njdwQgYzWa6UpPAd100LYd5TCSNxGSc16z9mtHUoUvjVr5Oc1tVSeB0Z1O2cZUMV6rppdjm6r7nGXc25CspUCD6Vq3KLPkjPuMm6P+LJdU6s+aqSVij8xMR7Uhwm0dbdYahmiJpaHfLrLPAj2hl59Z9trYJ/SpqMnwhrMsXofp98Vek4rblu6cawVGVghuRZ3Ek/UHB/SpenJhVvjwyTomseulh2u6q6G6ukJx88mJZn3kgDzO1KsVF0pIIpyXKuJHvi8sFtlpbmW+fYZaThbMuKtkg+4IFBlCS4J+ou6Jm6YfFJYb8pH2a6Mrc4yhSsFQ9CPOoKpKHJNbZ8MszpbU8HVEESoLgdSOHWc5Ug/9D5H/OrsJqaugbVuQlgSQx/u7o8WI6MAH+1TGENysioDwcjklhXzNLT3SfMf67inyIBNQ6PdRIN1safAuDJLjsRvgLHcqb9vak8objgXWe9R9YW8KX+7mIwHm+x+o9xQW74ZP3CSIiLetHXTT14fZivREl6O/IUEICfI7jwBk/kaHKrCnG9WSil3bsvzJKEpO0VcgiNPi2O8KZeUqO4lS2VyFjKG1pO0oIHOcg4Hn5V8/wD2j+1/V56irp9F/KVNuLtZzdm1e7WMeOH3O50HSNOqcalX420n7ceDxrVUuUww+ZqnW3krKPGdUBE2jBSokfINx7nAzkehPl+q9bUztqZSk42WXuum74u3f5L5nTQpwgmoRSQhXqQOyGA1OEfxY6XVIcWFBKwobdqsnnYMk8Y96GtLti90L2dseLd8LvxyWkvYWRtjakMPyAtbym0trbWoFtKmyRwTnnBTgnjI9KBO7vOKwr397Pyvxv3LVNRZtdJkiDbFG3xn3XvlUClslSAOCoeqcEDPkaajTjUqfzZJL5/l8/7B1sv8Q72d19/c68ghT4G9SiMIV2PAPr6d6pV4wj8MXx+f7/IE0rWHNUhcZe1vdggbsDBwewH40CjOdOca1OTjJO6aeU13vyBnSjUjaSwM8y2JufypV4EpIKQFn5c7jn6c+ZNendO+3/VdG0tW/Wp+/wB7jHxd/qmc7quhaepd0vhf5fgDtisj7uoWIjrStylkgkcKSOSR7cV7n0nq2l6xSVXST3cXXeLtezX5X4fZs43VaeppXtqK3j3CzVatYQrnHjomt2mA/wD8d0KO9LY8gkdz5CtuVKpuu5WM5TvhIdIeqPDjot1uBgWtoDxnicuvH1Uff0q7TSitsVZEZ+4zajvhngJSC3Ea4bb9fc0VzA7Rw0DY2LcwvUN0whtGSwlQ7++P7VK41vI0aw1xcNWSlNMb41sbV/w0H+P/ABHz+lEj7EZAE/qmAi6OQGy5cbg0NzkKCRubT6uuH5GU+XJKldkoVzh3VW7ZDLIe7FNy1xdrhYnbKQzAszmfEt0Hchl4Hv4yshT2fPeQD224qzCK5kDk2wdl680705s8VzU2oYmnLI0AiNEQCgLA42sRmklSgO3yoCR5qFWHJQWcEPcU6d/aAdJrI42wxa9U3FtJH+8ohs
tp+oS49u/PFZ9XW0abs2TQm+KXqH0t+InpENb6YlKVqywzI0R8TIZjTTHeKkht0nKXUApylSVr24I+XOKx+ozoarSynB3cf79h2mmmU9RfPu51PPGea4uNFye5die62CStL9T2G1NM9uwANblKvC21qxNTaJp07qRqShCgoYqU6cZl2nWaCtd2bcjjBBzVWWhjNcF1aiwO3G8tsKIBGaDDo1Ldfah5at25B24aoSlOQsEjyrSWkUEowwU3XvljErXhS9grwAe1XqS9JFWdTezZvqQBJQjxOMgd6lLUWdgfJ8//AHrVKIQ9PfuM60tJ1InfZEvhUlJUUpUkAnaojnBIAOOcE1ODSd2Sjl2LFo/aI6409A+6dGad09pGwtgJZgw2XMhOP51IWgKPqQkUeVdvCWCV0uBPF/aD64W6VXSw2S55OT+8ltn35Dx/tQ/UY+/2JN0R+0Q06t9KL9pa72BWR/vlpnJmJ/FtQaUkD1ClH2qSqIkpplidLdftEdcIgt0XVVq1WHR/+FXxKVPkenhSE5P/AJc1O9yYE62+CTpvqhxcm1R7h08vJJ2ybMd7AV5FUZw4I9kKb+tRcYyGcUyOX7Z15+EmaL7ERH6haUiqw5NtwWvLXmmQwR4rY8icEAjIWeDVf0nB7oivNK3JdroT1v0r8Q+h2tQaakFIBS3NtzywqRb3/NC8dx3wscKAzxyBZ+Qk75JRitKVuhPDAWMp9D6Ef64pl4HGm6WdxKvGjkpebPB7EKHcH0P/AM0z8jgfedJquWL/AGIoj3BCtkqKOEuH39CfX1ocvjV0OsEM9X+qUq4WxliClpP2IKZltvAJdcWFduTkpSDjgZznvxXjH2p6zT6jUj02m3tX3nb+pN4v7fT5nZ9J0To/9xLnt8rABMvTcRmG7JktSIL6g482FBLclGUkJKclTaiQEpV/DtA8ySPM4UJVJSUU1NcPLaee+FJWy1zf8DrI4SQrmCKq53iLEkTRcCthDrS4iQplKkqDiCoOBLrY+XBODnByQDkUdyp0pzS22lbLy1az4bi39VzhYJxbaEBiKubURlaB4zzkmKyI8ZxDjzzZUlKxuJCVAghSc8EAHI+YH3qk5STwlFu7TSUrXWMteHbPOHgOn3HSzTHXWoioe1wKhplxAE5cmBCjkE+RIAOPJSkp7GqdeEU5Kp/5OL8Ruv0XF+6TfJYVrBHHnCUy24w+laCgTFOAkbEqU4EN88qJUlJ48h6kVlyp7G1Jf/j+FrvwsX+oW/kIoFwCLWtBQFPsNI3FKjgKUoYyMDtxkH196zKlNure+G3+Fv7k+JDi1PUmU60QQhRQkEZG9XBJA9Mc4z71UdK8FL5/RCsmjULalBIWAQvcrH8oweR7kY5p7OHH7/fYi8C60TnLZc4Vx8JMh1pXLaycHPdOR5Hit7onW9R0LVrUad4TV43xJLs/zz25MnW6KlrabhPD7Puhq1Fdpl6u7r0pe95w8AH5Qnyx7V9RdM6vR6vpIayk/vLK8S7r6f7POK+llpqjpNcfmvJxLaWI5K1YQOTg9v8A3NbsJ2RRkrs1s9uN8lqdfJZtzHzOue3klPqo/p3oyeSDQr1PffvRola27faIqe61hCEJA7knj8TRo/ECeMkQXy7aj6iSU2jTLUjTOkiMSdSOo2SpqfNEJtQylKv/AB1j1KQeMycty2wwvINxb5CK1WCz6HsiYNujIjRm8kNNkqUtXmtajypR81E5NHpUlFbYqyIt2ENsuVsMkLkQ5MxSTkJddS2j32oAJ/Ekmru1LuBbbfBK9kb0ZrYpY1DoW0TGRtaDk22oc+UH+tSMgD6moOnEle/KArr5+z40VrTSki99MYCNMaqjIU83bo7hVCuKQOWwk/wL9FJxzwU85GfqNLGrFpLInG2Ynz+gR5ENDsZ7xGVIWQ4wokYWkkfMn1ByOe3NcFXlKLcGFWUhsvZO7Hei0AUxTp9am321qPYjmnqNKSBxbJw01qVuPGRlzGAPOtWhaV
gqnYJh1BYaQEh0Z9c1tRpxsE9Rgze9cJeWoocz6c0RqKRHc2Bd11g6yhaw53HrWVWmoZE3cA5OuZrkzg/L7VSlOcle9ge6wrZ1MojxCtQV35rLqRqSldssxkiCa7gomeXvSEe5pCJl+G34Y798RmpFxYk2LYbJGyZd3ncpTjGUNoyC4v5k8AgDIyRkAmhT3ZfBJK59GtCfszuhthtjLd6ZvGrJagC5Nm3JyKhR/wCVtjZgfVSj71Y9KPgJtXA63z9mJ0BuySIkO9WR0jKHIN+Uop9CA8lfI+tR2R8ErLsGujfhOuHTi2tQtO9UNRXK2M8Jt2rWGLkyE4/hQ8gNPNAeQCikf00nHGCSuFLmhb9ZwHUtocWnuqC4Vfkk4Vjg+RqGUh7XI0ndHbQdXo1po55rQnUFALblzhM5hXRv+aPcIgIS8hXmtOx1J2qCipCcJNPKGaJg0lrI3wps99hixaibQXUxvF8Rt1IICnozmB4zWSArgKQSAtKSU5TYgf1J1109CLivGUqew4ph8IT8qlJOPmB5B8s0Ob8ciIa1P8Q0lsXM2ofY3JrBZC46slJPG9ORwr/5HNcj1jqi0mnk4S2yldL2fn6efc09JQ9eola6WfoV71ze7zqOZJnvuKcfcGHnkpSkuqxtyQkAEnAye5PJ5Oa8qVdams62oac5W7W/Tv393zm529OmqUVCGEh2ul3F3Q2qLFuc2MLdm3rYbYbUksEbipsDcpCcZO4DgLwoYJGPQoqg2qrinu+K7k/vcZ4u+MeVjsXd9+B6kwrxdA1KdaavcicpFztk92GylUhTSEFyOtSXRsOMfKrJUCDg44owlQpRcY3hGPwTSlKy3N2krxz34tZrlFhPNjazYuaYkiAVNxXZwegtR3HluW91R2ux1NbitbZKXAFclJCu2QKhX/kuUKuZKNpNqKU0sqSlaykk1jhq3OWWIS73FtskxHHLO+phCDKdlECG+pYW5gJWWm0kFC1fO4CjAwVYAIG0FaFRKpBO+1R5SWO12+UsRe67vbLTd7Ckuw8wZ0mPGZivSXokqOyDJDqlpDZTuyyoEAIWojcM898dzVCpThKTnGKabxa2b2+L3SWPwCKXcf404llHjoU1HSXJIUUBQajJUs4UvncUDuk5yUZ57nNnTTk9uXhfOTS4XZS7PGH2CKVh1bnlMcOPIDYBVIOWztbCuEpzngHsfPgVTdO7tF37fO3L/v8AiS3HVp0tpZUrKkJYytaAMDco4/uO3nQ3G9/n+hFyuLUOrcRtbbU43sRyjBJ47kd/p60BxSd5PyDdu5znKbjMokrbCFISAtRONqTjHH14zXpn2G6rHS6mpo60/hnmPjcuc9m1+L97HLdZ07qQVaK45+Q3ILt9Upe77PCR3cVwPr/7V7tT1CqfdZxzp2HNT6pMZuFESpENvsnzUfNR+tacJXVyrJZGu7CIdjcpLTyWVApS6NzaFjsdvZSh5Zzjv35q0m547A7JHEXCL9iel+IkNg4XLkq+UH0A7qPtyT6VcguLAJPyDq7kq7OK+wRCtvsqXL+UY9ucJ/U/SrqpqStLN/H+iu5NO6C3TlpfTgoutvirXyS2o5J9yBz+dHysJEOSUNM6eviAy/HvkR5IIwFKXg44waDJt8hErElQZb0OREiyC006sKcQpCtyVnAGM9/P68edBbzyEwfMP4x7Hb7H8QeqmbXbkwI58B97wiVNqedZQ6tX/KSpZyD51wvVoqOqdlyk/qRXgrvchvXg1WpOyA1LmRXwyR2GKU47gabRpctYPwGlBpeDj1q7pYTjhMTYwJ17PH8bpIPvW5eY9zuxrV58/M4e/rTyqSS4HjI6yL4uUNpVnNZEoyk7yCbjiyPnBPfzqEuCI5NkFs5H51VfIeJE4rsCqZSEZTiJG6b9ddR9J9M3q26XkLtdwubqFqujThDrCUgAhsYwkq5BV37YxgGiRm4rBNNJDjD+Kfq1CcCxry7yCP8A94tMjP13pOab1JDb2Hukvj
46oafebTNNo1A0TgtS4KWlL9tzRQakqjZJTb7FnOnX7QvTk1tkau0hqLS61fKqbb2hPip/5iPkcSn6BZ+tFU0widy0XTbrfYepUVD+jNa27UDeATHZkAvN5HZbK8OJP1SPrU+Rw4nzWbyjZdYe2RgD7ZGAS8kj1B4V+J/KoOKZK5H3VGBBZ0hIRqFp6fp9C0ut3S2lSHoL2CEOpUPnjujOArlJBKSSkkEMns54JYeSpupZ7yVOOG8LvD+SPvN1CWnZQzwpxAJTvIxnBwTnFZWo1Lpq0sryufw/sSjTT4eSPbff3JUmZ+6+0JyGylK8EHPPB7e/avNetVXqasbysl7P93Ok6XD090rHN2W1AnxJ8a6TbeW3CouI/gCwP4RwRjgDNY6hKpTlSnTUrr8jbdlJS3NfoJ5NzbIvG1XgLQ2n7OxLGXWxySUKzwOw4z5ZwKJClL+XfN3lrh/Nf5DeraTz2x/oW6YlMR9HLWVvMamdcR92MuxQ8ZSgofu0PZHh53KJAPYjcPQGrhKWrSsnRSe9p22q3LjbPa118n5NSq7aScvvPi4bp1pGZZi3e/GJEfu8vcbowksyrc60gIJUjaE5UU5PfKiQTu5rB/gJycqGlu1Tj915jNN3w73xf2suMYLbqxgk3hP8hVB1CzdmkLjlyGzdn1RX4kRsrfEpr5w+hByQlaULUeARjuNxNBqaWVFtT+J01dN4W2WNra7ptJc/WwaNVNprCf43HO038alERq3rjMzLmwmW3bkTS2kvtk5ddCknbkqaAAyMhWdpANVK2m/hd0qybjB7XLbf4X2jZ54ld82ta/AWNTclbvlII7XqVi8KjSHQuXAlvgruIi43yCdoZBHCwdudh80kAnucutpZ0VKEcSivu3/p53e3PK7O9lwFhUi7NPD/AHYdHLkp+4JgRA0p+WlLqleIW0R0ZUks85V83PGAMZJxjFVFSUafqzvaOOLtvD3eMeb+2Qm/4tqz3HZF2Zfiu4Q2Xd2SwlzJbQng4wcEZzxxkk4zVJ0ZRkvHny3/AK7+FklvTQ+WxG8ocQlgIWsAjcQUq/vz7duKz6rth3x+/l/kd4wbuqjohvtyW1KY2qykK4UEHdgkdjx3881Z0NX0NXSrNX2yTtw/oUNVF1KMo3tg9tVtuF9YRKfaTb4uNzbK+No9duf1Vg+1fS/T/VrRU3HauyfP7+f4HntVxi7cs53q/wAKyQcIWdpBxg4W8fY+Sffz8vWuqpJJJv8Af+ijL2I8kT37y4txW1lhPG5R2pSPQDy+nc1owTk7RK0rI3i2R+5qaCnfBjtg7HZXCQD32N9zn1496uqcKaty/YrNOWXhBrYtP2xDjZkBcsp4D0x8NoH+FGQf0p/VrSwlZDbIL3CSTCnIQ2iywbe23jlSFx0E9/61f6zUJRqy7kk4rsFGmtIardkMvz3PsUJCQsqjy0rKs/4f/imUKi5Y94gxr/qLEsa5sVVxU4koLSpLiiAgHOSFZ4IHp5/pFz2/eZF2sV56n/E1pKNp+dZLJa0X+XKQpD01/lGSOSVHJUfz+tY+r6jRUXTgtzf4f7/eQbd+Cm8p5KVknyrAhF2sBl7DK9dEoUrn9avRpNggfuU7x3SAcgVpUqe1XJdxAtJIA71ZTGszrDbO/wBMVGbwSSY8sHaRVCWSfsObKgpNVJInbAtaUAkc8GgtZDRRFddaVGeU457TDGAcc96cctl8KPwOP9ZIMTVuuLo5pfQzqiY6WsCbcgnglrcCG288eIQrODtSe9WadFvMiSXkt3a+rPwrfDYHLZpeLp03aMfCclBIlPBQ773iFuLOfRQGfIUT4FgJwLB+0P6ZzlFDmpkx2U922YDiGgPf92T+ZpboMW4eLP1N6A9UJzElu5aMnXZRGxclcJuTn2LiEuZ+hBpWTHuTnboERMVKIT5CEj5E+IpSSPLBUVf3NM0OazoTimXULjtymnEFDjKkjC0nuFJOQag0xylPxCdNbXCuL7mk7g/apoyp2z
SmSfDB/oCiCtv05Ch/UsViavQ0q3xK6fs7flx+QVVJRKwIupslwfbujxZcVjetgqTjGRtwrB/HFcVqNHJPZTXHlfmbGl1EIX3u1/ARQ5dxbQBBvNvltqSoCNcUFKmyU5wFJUFcnOMp5/GsWcKLf82lJPGY9/o1b8zajKorenUT9n/rP5Cl+4B6BJfZjIvSWFsvPeOsb21JGSjkcg7Tg8/U8ihxpOM4xlL073SssO/f8/8AgI6t4SaW61r8f4O1zutyhWDwJd6Z0vAiy2pcW0PtB10LXjetshQSggckYPPHy+cKNGjUr7oUnVlJOLmnZWXCeG3fs8fXs9SrOELykqaw7Ozf0yvwz9Btm3SzPW7WFvdkQLtNHgXC3XlbmJSFoIygbMIUnjOABkn3GLVOlqIz01VRlCOYyhb4bPvnKf6L5ZjKdOXqQc1Jq0k+/wCWH9AjtN0VKjXG7Q4LdxvNhmR7hMvsCUlstQVApdYQ2VELKmyvjvwFA5wKy61HZKGnqT206sZRjCSbvNZUm0la0rfmmrZLzrXk6kfilFptrw+Va7za/wCp31VrHTi+nhAan2habwq42ZgxVMPXOEsjed38WxRW5kpVjDSAocggej0WsWv5jP4FGbvdQmuMcXSStdf1StxkVbV0vRUsqzullOUf1s7u/wAkSTA+1SY3/ddjVEVdmkvaejOykiLCW222kuOhspGCVbigbyQspBByRy1TZCX8+rf03aq7PdJNt2je/CVr4SavZqyNdSnttCNm8xXZfO36ZHNF0ZSXGXrqy+Jbqvv+e/HShthxKkBDaVoIRlRRjlWUhGMneaqujLEo02tq/lxTu2mnd2avi98KzbvZbSXqR+65Kz+8+M+PH5hXAvLqENPRJFoXa1qXHYXDC1SHkf1BAG08bux+UY/iJzWPUoRbcakZ71Zu9kk/F73/AMvwsB/UbtJW2vGP3b99x8ZvMZxxJbuhUl0IBCGgSE+2D6geXNZ0tPO1nTyr9/32JerB/dkjjfNYN6VtK5a9sx9biWmmWwofaCo5IyOSUjKvbn2o2k0j1GoikrKOXftb/PBV1NVQpP3wIXuo0udFBdQ1Hax8lva3bU+7hJyfp/avdela6ervKVlFYUV+rff5HF1aahjuMKUzL7KL7ylrBGd5HJHsPIe/avQdNTnVd3x5MmpOMFYdEww1tBH8AyEgZ2/h/nW9FRpqxntuTuOkWxzJOHHdsRo8+LJJzj2T3P6VF1orER1DyP501EhQQ5sXKkODhb/CR7hA4/PNZ9fUzStEs06UeWJfsyY6Ut4BUrsmqTm7ZYbar4Dn/bZzQ2n0piPpZCEFTqnOUAY54rRpzdOKRWmtzPnj1n6yu9Q9SPQbW6W7LHWUBaTy+rzUfb+9YmurOXw9im53fsAik7WvoK5+92RuCN7mFnfzzW1p4bivJ2BCVcSSRnFbUKRBXZpGy4cknmpSwEQq8POO9BuTFLDJQN2aHKVydhWkkD/OgMbg6NSFNk4qDjcdMXCSohI9KDsQaLsiOq6Ur8mYpCPdp9D9aQ1zo2yXDjHFPFXYzlYsbr/4iFah6W2ay2KTcLddHUGJcWVZS3GjNpSG22Fg42ryrgAFISR/NmrcqkpY7C3RUbrk1+Hj4K9UdeG27vIlM6W0eFlKrrMRuW/g4UGG8jcB23EhOeBkgioKi3keGVdl1LT0e+GT4Z9PKnXO0WjUElpaYzl41htmIU/gkIQysFvPBOEN8Acnii+nGCuwya4QnPxedLUxlxoOodIWWGQU/ZbZYg2jb6EJi4NDUorgfdflhDpr4nem0wIZVr+xlIGEoVchGwfLCXAgCpbkxJrklfTPUXT+o2R9z6gh3Hb2EO4syQPwQtWPypx7oXap0zZ9dWowb9amLpEUCUqU3hbKj/MhQ5QfdJGfPNAqUoVY7Zq5NNrgo98Q3wmah0cl7UOl35N7scZXiqaQXHpERAySVtYUVIA7qQMDzCRzXNarp0oNuC3ReLdy3Tq5TX
KK7Qpsa+sNi42iBcStJysEIeIHykJynueDwRXO1Kc9O36NSUf085s/8mvCarperBP9f0CG73O4TGxNjttRihaWX4CQW33UJI+VSUkj5h2xkc+XNZlCjRpv05tu+VLlJvum/D/bLVWVSdpwVrcru14+osiTo5uyTDkuoiT2PsMiRe4avCgK252pUsAIPITlJKeU5IwKDOnP0n6kVug9yUJK8890ufOVfmyYZTSqKcLpPD3LEfx/tjyazL/qKfcbXqY/c97ct8gWhgowFSFhKUIIUTtScFIGD8uD2FPT02kp06mj+OCmt79k228Wu+978/MBKtXbhqFtlte1e/bD/wAGj1tej/dsy+2BFtscKSLbd49oeW27NC8bElLYRvAUtB25JUc+pqUasZb6elrbqklug5pNRtzl7rYTV7WWCc1JbZV6aUIvbJRfN+MK11drHLCe2an1VYoTF4dvlvgwbU87p9q0XyMkSYkV5SSVOoABUohDRVuPzJT5gKrJq6TQ6ib08aUpSqJVHOD+GUo3wn2V3K1uG/NixGrqKaVWU0knttJZSdrN92/7D7pC0WO83O66csU+96rvcOQwzb7jEu/2VgwyEFxrJWlsJ+dxGEYyoJIUBnGfrq+qoUqes1UIUaclJyi4bnvzZ8OV8KWeFdWbLFCNKpKVKnKU5RtlSstvjm3tj8Qqt13gWFgJnwFt6JvgdEO1RUh5w7A0pSHCMhLZVtI52jOCrgA5FWhV1Er0p31FK26Tws7ldcXdrri/dLxeUo00ozX8uX3Us+Of3b3CV3VzOnU264XS1twJ86EluIm2pcCcJcUBvUlASVZJHP8ALt5AwayloparfSoVN0ISbe63dLhNtpf3v7lt140bSmrSaxb/AIFV110zZYsm5SFwLc6tsuOxWFBS3853OOLSAFKPOSefcnFAo9PlXlGjDdJJ2TfC4sknwvH6DSqOCc2lG/K8/OwJ2TVN+6iOOMWgLasrSwoqIU22FYIGVLwBgE8E55zjtXZaX7N1Zv8Al091Tu8WXzfBg6jqEGtqdor8Q8tGmoduAM2X9oWO6Y43D/1HA/LNerdH6DDQ016zTl+RzWo1nqP4OA+ter7TbGfBjW99sEYUtrClq+pIrr1GK7mZuYshWXTt1fMn72uEJ3OQ1KQlKEk+YSEgH6kk0N0oy/qHUrCiZ07vbjjcu2eBqaIk7iGJSGVjHkUrP9jQ/wCFm/6gnqK/Ati2/UV0WpiTpidaShJIXLW2GzjyCgT38j296rT0dTmLTCxrx4asMotkli4rRNYcjvpP/DcTg+x9x7jis3bONTbNWsWm043iVl+M/rQbPDb0ZaJOyXJTmY42r5m2+xTkdie351d7XMyvO3wL6lWNPjYlPFYOpyU7j9KfAYVkgcVnQj8Q265HmopOFK5rptLAg0wXWdy81rLCJLgd7agKbqnVeSURyaZ3EVVcrBl4FLjWxvnzoSd2JpWOOeKkQNm0lRFJuw6QtabOcmgNh0AIBUa6IqvAsjxSupxjfkDJsd4dm8TA25GaKoohkfomkQpAUEURRtwKwvRpZQChgjI49qi0uSSTLm9J9bLvWiLTb2tQw7LGgx0xG4Dr6WlJUlOFEAkZPGQfQ0Z11HhFiELrLK69Z9RS+o2oGmkurNntm5mC0TwewW6R6qx39MCqVSspu7H2vhAxYunk68zGIcKG9MlvrCGo7CCtbivQAcmhqSbshvTZa7pn+zM1hqmOxL1NeIWlI7nP2VLKpksD3SlSW0H6qJ9RRlTk+cE1BLksNoD9nhoDpRcDd3tSXudMW14WS1HYV3ydgSgqyfrT+ku7CRSjwiS4HQiIwtDls1XqazMD/wCm5cWnQoeXyqaIH60nS8SYRNd0Ol36MxLjB2HWN8akAcSGJDKVg+uA0BSlTclbcxrrwU966fANeJ803PS+poNxnqX4ngXiN9kU4rIOfHZCkk9/40Jz/VWRLpuX8bd/Zf2sv0DqpFNNq1hNZ/2d98lwGr
hL11a4GoeXEqatTslpvKcFIWp1KiO/O0d+1Vl0Rbdkp/D3Xn6/6Zbetbe5R+Lz+/8AJC3WboVr3pa2P9qrkTZ1rAVLtzIXCkLKjgFQSVNFQx8qsZyeeOcGp06Wim3HT38Svey/K9vdfQveutRGzqteVbn9+xHH2iPLu8oxtIpzcI//AHc03IQFxlju6sADbxgjHzcAcZ4obZwpR36n7j+J2dmvCznx475JtJ1Hto/eXw54a7+36jtYNP2e/QHFIcuF61DIZUFJdfLamJoJC1q3kDA4G5IUcJwMk1S1Op1Gmmk1GFJPsr3g+ErefDtl3dkg9DTUqsL5lN+XxLznx5z4Q6G8SbBd4Fxul6tt4n3mG5Z7qblGKxb8Jw2pQGAnlS0pz35JBFVPRhqaU6NClKEaclOG123+V3b4TduOE7hnKdGpGdSabmtsrrEfH9+RXDdRJtibPCmwo12szTj9tnsNONp1CgIytgKyNuRlHG4FeFD5DQZxcaj1FSMnCo0pRbT9F3xK3e3ObNRuvvIKswVKD+KH3Xa29Wyvrw8vOeAzVrOVo8RtSmJb9L2qVi0S7HFZWZEZ3DhU4AMY/pUAAQSggKBKhhLQw1u7R7pVZx+OM21aSxZd/mub5Tawnadd0Eq0koxlhxSynn/h48PJ5M1/dY9lNg01aZ+p4cyO4G3VR3FmP82GsuK/jKQePlBBT3OcizpeirWaj+IqtQnFq6TWfOFxf5tNPjAOtro0aeyn8Saf08Xf7eDtozotqbUb8d/VDkGOhnGGZEle4+fzNt53H/EpIFd3S6FXk2qFoJ93l/S2V+8mFU125JVXe3ZE1RdFsQURo339DjJ4ShKYZCEeoACziuu0vTqtGCh6i+kbf3ZlTrxk72/MI09K96ylrUkd2QAFbHI5HB5BA39q0VQlHlgHJPsIV9PpDUkNvahtzYPGAshf/pVgfrVhU1FXlj54IN3eMjgemz0ZO56ddXEDBKoraHEY/wDIomrCjFJNcA3fgJNIWC4291D2mrsi5Ka/4kJxSmXj9AokE+2RRsJEck6WyeLhaWJbqPD+UGQy5wtryJx54PB9Ac0B4CcjT1H0sZelbhMht+LOhx3ZEcDGVFKCopH1xj0zg+uQVaSqxs+ScJODwfDLUWqZuutVXC+zllUic6XsZzsSeUpH0BArIrYwVL3k78j/AGdPhpHljzrnq7uyVje6yFJaPlTUYpsg42yR/eXyte30ro6EUkQ9hqFXCQ92r+AfWqFbklCw9RUZUO1UJsOvJ2ltEo7VCDyO+BIGjkelG3A7HVkfPioMksDzEZBSCR3qjOQeJGLXcV1pQkENnhF8jgY96sR4KuW8BzZ7OCEgYyfSiBLWwSJYdKuPNJJb71O6Q6TY+q0QsgZawD5iqVSTLEYHn+wSgoFVZlaptRahC4rZ0K2k57/UVkOrNvktqmkiw/wwTNNdN3Z95ucUP3dw/Z4pxktp88emc8n0ro+nJSpObeblStiSSLHay+Jex6SYDQW7NnFoOBprKEJ9B6n8TV51qcW03wRUW0mQhJ6s9WtfTVvWOPHhR3M7HZLZUMeuVHKh9E4oPrTm/hiK1u4WaE/2yv8AI8K7dQShDDXiSnIMFhLbaR3+daD+iaLGnVk/ilb2sLdFLgl5Gv4FgsbTsd4LaDRWmVcFYJQkcurxgJT/AKxVvZblkNyKuah/aEaKvGpXIDT0m8tMEtLlswSmOvnko+cEp9Dg5rO1Grp0Fd8CUle1x80Z8Sln6gPrh6R0zq26zm14xabcpxtB9FL37U8c5URxQYa6FVXhFv6E1ngLepJ19crRBtLNphRlXXLElOoVtuJQwSErHgJVlxRB4BITwSTQatWtUlGEYYfN+30DxUY5bA7rh8Kz/VTp7pG69Mo9rtMawMyIrTAiLjz5aQvwdjbpKR4ZKCvLmSThQPJJr6/Tb6DVOCft8n+YWlVlCe6Us/5RVmf0C6sWqTcDfdGTro3Y4BH2aIksLDQUCHkKQf
3pTgnKMkAFQGc1yb0jUpUqMXCTe5trcu91ns/+cGjDUysnV+JJWSTt9RP0/wBA9RdRaPmTYVtk6iL7raS0pbJy2kHASk7VK9FYB3YFVf4Klra3/bQsoX4ve7+v4eLlmnrZ0qT9V3k/NrWQddAvhuufVSFFgXPVUmxogrK4zcaOlxyOR2CApQ2DsnndgfLwOKsaLT09frKn8u0WrNv+p+67+W8ZzyV5V6lKkot8ce3y/T5ErdNeh1it2kNTac6lT0ydQt3Vb0OfJbW4poJACF71A5DgKtyc9iPQVKn0vSydSNR+jNWjjCsvFlZp/j35ITr1ZRj/AFLn/n3HyL0clgrGnHYT0ZHZLExJOPccfqK6XQ9H09FbqS573v8AmUqmpk8M7QdPXTRkxD15dlwEKGwPJabW0SfLcQpP5100KMI4KDm3lj7qfQK7/pt242h77fIYHiKYS2hDu3zKdgAV9MZ96tRiouwKXxo46Thsak0kDNZEyTb074z3I3t/zIUB6d/wrivtpDWrpNWvoK0qc6fxNxdrx7r8M/Q2+iSovVRp14KSljObPseRoPhSktojNpG7O9KAAfYHzr5L1Gtr6lOderKT95N/qz12nRo04/y4pfJII2mwhaS04WVp/mbUUkH8KL0v7RdU6NPfo68orxzF/OLwUNXodNq1arBP37/iJJ+qn2ng4ooF1ZUPDmpSEucHkLxwsfWvp7pP2ofUtDDVSgk5Lt5WGebajpyo1XTTHpjqLeJd3jzVJDnhpIVGbGAsHG788dq3aPUpVaik1aP7yVZ6dRi13OfUb4qNIdKtDaldnXVkvMRXEW6CVhUh9xbWW2koPJIKtp44ABNbqqR5M94wfMvpb8LOtNZwo816Km0QVJSUuS8hahjvt/61i1U5/dIxpyeXgnS1fB4zGZT9svj6l9iGWQB/nVL+A9R5kE2JCq5/B3p6Qzj/AGkuEZWO7jSSP/41cp9PhHKkDcbkS63+CnUUTxJGnbpFv7aezP8Aw3cfqKvLTuKtF3K0qcuxXvV2jLxom6fYL3bZFsl43BuQjbuHqk9iPcU1pRxIjlYZztf/AAx9ap1uSUAktrO9Q+lZVWVizEcJUYbKrwmSYiMfBHHFG3ETj4Xhujng0S90LuO0Vz5B7VTmg6ZGDY+YV1rM6QS2CWlCgD/ajKWCusMmHQkFuW+2tXIyO9HRPlkzW6O202kJwAPSoy4DxQ/ww2sYOKpTZYihHclNoJxjGaw9VIu01fg0gqQ4Qc1mRlcsMci6lsfIognyHmavU5yWIPkFJJ5ZNmhtK2DTaIs6+oTedSvJDgZkKyxESf4N39SuR7Dk9q6Wjp9tt+ZFOU78YQz9XOuotFtlswFoagtJKF+EAj7Ss8YOP5eOB6ZNXJNUYtgL7sFb9O9a33BOjP3F0uPJIW34mA4nOQn0ArMWonRk58p8hElUW3uiKeuPxEah1lGkacacft1lO1LzJcyt8DslRH8nntHHrVmVd1FgrTe12QA9JoLN21RBiSFqbjyJbTLi090oUsAkfgTWLq4qUoQfDYoc5Prvq2+jplo/T+ndKH7msbQVHbZhnaE7SPPPJPJUrkqJJJNdKqKjBxjhIO5WaA/R1/eveoHH31reUnA3uK3KOSAOT+NElT2kYyvctq2pKYzDSQAgIGQOwAHAoLSYUANY6ljybhshOuhUELMhxs7eRg4Sc9wR39TUHTSe7wPuxYgn7/kQbzJuDRCX3XS+pRAyc9wfaq1LT7ZubWWPKpdWBC5Sha7zKuMIiNIkOqdJZO3lZJUB9STUY6ZU6r2q13ck6jlHJJQvTfUTTiJ6gk3WOA1JGMF1OOFfX/oasVaKnysjRnZEfTEOWyc282pTK0nLchv5VoP19Kqxoem90R3O+GH+m+pLkiOu2XxpmQy6nwypxAKHR6LHatSnU3YkAatwKnbc/oSQ3f8ATSlP2dPMmDu3LYGe6fVH6ijNvh8EcXugottktLkl+9QChi23JsvLQk4S25
t+f8FZzj1zVXUxp1qFSlW+600/k07/AJBqTlGcZQ5TRHcRUtUlClbUJQAAj+UcetfCVRU9rR7fnLfPgc4cNx5RWwlz5j82e31q9oela7qs/S0lFz+Swvm+F9WVK+ro0I3rSSBfqHp/UyZsWRaLO9Mikj7VJZUlfhJ9SkHdj3xxXrn2c6B1rpdOUdVS2wbT5i//AOW/qcb1DW6evNOjK7+TX6jfqG93qFaTbtNxhN1HLb8Nkr4bZBHK1nyA/M16lpcvc1hGFXbS2x5BDpX8JVk0VcHL9qJSdR6tkuF96ZISClCzydo8v71qupKbKUKSp+7J0iW1ClobSjxdowEtjIHtiipkuchPAsDC0/72kxk443ECjRuyDseRNNyX5S/sTzciMD8wQEu8e6TzVuMXbDAtj/J6cWVEFuW9bWUuKICyyNuD6j0oxCyIy+JPoLbOqvSC7W1UFcu4xmFP29ZQC826E5G1XvjHv5021NWZCot0fdcHx5hxXob7seQ0tiQ0stuNOJ2qQoHCkkHsQcisSthgIBXZGwVYPORWJqHgtRHpyIVp7c1QU7MnbBwXAykjBzRFUI2Ga4slgDg8Gr1KSkROcV8g1OcScboj9P8AEK6N8FJjjbntjwPp5U6YF4ZLmhb14OwFWMc0WMgkUS3B1ElaB84z6UmwqyO8TUIQoFSgB71Xkrh07Dfd9TIU+AlYKayNTRc1gswmkbsagS00DvAHesv0JRLHqJizTWqEztSRWd4O3KwCf6Rn+9avT6d66uuAFeXw2JCuWoZMoqSlalLcVg88k9hXYwilJtmZKV1ggb4p9Qv6Ve07ai+C+/GcmOtJ/kBVsRn/ANK/zrN1Mt0kkTacYore7qSQp0OIUUrByFA4xVaytkHd8jfJnPTHS46srWfMnmkopLAnd5YZ9No7r75Syla3lr2oCP4iryx71idRbukgkFnBe97qTeL9pKxQ7olH26GlPjLbOQpW0Amuw0NX1qSdT71skKqlAKdB3RyHdPCUSkrWhQ/A5qzNpxTHgmmXOiXLw7P9qcOUNshaj6gJzVRJIsMA7ZanYlvn3BKUqeeUX3mlHcQhRJOB7Zz+FSlxYaKu7kJ9RAi0zW5UdKhEez3Odqv5k/rkexo1OO5Aqj2O5Gl0uJkNqQ2s/wBSeaM4XVwO4Jun2rE2KVIaeWUfaEDYVcAKHr9e1DnTurxCwnbDCm9OR7pCblMgBt4kEf0rHcVVcch28Aw3IAy2s8p7H2pttsEb9yQLvqBvQbViuUdfg2u5xNxDzmUeMnCXGxnvnIUB7n0rkeq9dn0bWUoV47qNRcpfFFp5b8xyr91+RtaXQR1lCcoO04v6NPj5PDEqNYBuZMskUeBbXdsxLRIwlKhkoz7kHj0Iryv7VfbOprqE9N0x2pS+Fys9z8pLsuVfl+x0/S+kRoyVTU/fWbdvb5sILFp9EiOhyQypCVq8RDG7kn39qzfsn9k1rdvUOoK9P+mP/l7v28Lv8uTdU6k6b9GhyuX++47ybqzDWI8dsSJOcBKR8if+te8Uo06EFSpRSS7JWX4HFycpvdJ3Zka9SYEpDipjipLZBDTBCUo9if8ALmrEZWywTjfBIo0ZZ7za275GhohvykBbqow2nPO724INE/h6VVbkrfIHvlB2I71HYJFiu64jziXDgLQsnAUk9jiqsqEoPagqnuVxbpyyXKY4W4YKucKUCEoH496NCm7kZSwPl86T3i5Wh0MXdMWcoHYsN70oPuCeauenjkBfwU96m2D4iOjc2XPtl3RfoiSXPGhxRvSB/wAmc4+hNZ7jq6LvFqSAS3LuQ9qf9ov1dnaWj2Zx62xJ8d4qXc2ohDywONikE7Qc9zj8KnHVykrNZAepJrkAdOfHp1esGr7feJeoTeo0VwqXbJLaEMPpIwUq2pz9DnirPqyfJKM5LLZGmsdau9SNfX/VL8Ni3u3ia5MVEjZ8NoqP8IJ7/XzOaztTLc7jxy7i21ktFJrBrZuHVwrZbDjOe+
ax5OzCMwMDB470aLXciDmo2dicgce1X9NK7E0MrCflq9LkT8AHXQlQ6NOlCs+lM/Yi0EFn1A7AcSUnHnTppgleId23XiEoHir/AFohPf5Pbl1RQ0goaKs+9QY/qeAe/wC0eWHtwWpQz51VlubJqpYfouunrhHAS7tUO4zzQnG4VVELdPayXYNQQrg6tammnB4qU5JLZ4VgeZwTRKSdOSkhSlfkuZC0tFk2q33aHcIsiEFB8TUvDwn2FHKXEqPAx5juPOtV13Fk1TTRRP4iNbsa+6u3y4w3kyLeypMKK6g5SptobQoH0Ktx/GqsnuYOo7uxGxORTAjOc+9MxE3/AA125Mm9PvqGSwytxOfI8Jz+prNqpSr/ACQejyWRiKTEcbLyVLaA+dKTzk1apVJUXdFmUFJZOOpOv+ndFymIjTq5l2huNpU2GyNiCQSFq7cDnj1qxU1cUrgXaLsWCT8WNltuhUrly2JDL2NiGXQVqHBASM85NWHrKMKbqTeBWdyKOkXxl3G4dULgi9KTGjzVkwkE/KhscBk+WeMg+ZJHpWVpuouvUe/CfA90vhJN60PxJcdi52pzdaphO5CTww76fQ+X4iumpNxd+wCqlJWIPRPcYkbc4GTjJ7e1aM7NbkZ8bp7QsbvkW8W9CblHLMhobW5rSf4sfyrHn9ay5zjTk9jNCCco/EiZulGjf9r9Oz4a2HIbz6N8UunALyRkEexB59jVZVN7ZY2WRHWorZcLDLInRHoqge60EBX0PY1ZVmrgGmnYM4mmbz1B6dWJ2ywmLrO0/c5D6oj/AHcaW1j5B/MoKHA+orhPtf0qr1TRKGni3NPs0nZ82vzwsc+Df6Rqo6aruqPDXdd/3cYOn0WPcby/ZzJizG/EMiW2ykpcZVu5bV/ThXy44PGK8K6d0mrruoU4VqcoqPN/C5/Hj6nZV9TCjQl6c07+Pf8AwSxdbstK/sMTmSvhSk8bR6CvoCM1BKMfl8jjHG/Iym6xoLyoMV0vTDw662Nys/0p9PrR1NRx3BtNnVEhuK2hBSlhXknO5R/96Mmmsg37Em9PtcRo8IWea4GSG1uJU8sYIKh59uMn6VoUZK1itUTvcBdY9RIOptRvvQEFbQAZbefd2NhKc9gOTkkn8qHUqKUsDxjgVadui4clt5qW466Dy20MII9CM5IqUW1wM0iY7RcTcIDcqKtZCgQpl052qHl/r1q4pJoE1YFdTx7veo7T8CMyl08LQt3KFp/KldjZPlT8eHTKRoHqSi4mzPWuPdUlal+EQy46O+1XYnHJHfjNU6tNblJFSatKyKs5+amGH6y84rPrkoPIXwE5KfSsSoWEE1vkZbUk8kGsqrHNyVxWcYHlQk2IY9QM+KycDnvxV7TO0hNgwn5RitcYA66EqmUhHRDhTUWiLRt9pX5Kp8kdqOa3FKPJpWJJJG7ajiotDNCpt9bStyVFJ9qHYb2O7l1kKRtKs/WkkLk4OX65Ltq7aZ8n7uWvxFQ/GV4JV3zszjP4UdN2sETaEHakIz/XFIRg/CkMS50ava9NyWpiUeKjlDjecbkHvj37flXP6ut6Nbcg1J7ck13fXcibHQq0MKDyiEIS8kErWo4SMA+uPPzrLqdSnUqxp0ljvcO6l/uj7pHoNorT1qfvetr+rUeqn3t7lrgJ3NJUokqKnDgEDtkeZ4FdHKnG15vI8acVl5Y/a86o6a0l0+n2fSuiLZZHJrRjO3AoS9JLahhQCin5c57is/VVYQp+nShzi7CN2Vyqst3xJC1JVt/gAIOCOd1Z8FtS+pRly/oTboDr61EZb0zqKVxIaCm3XFfKvnACj5K47+ddR0zWVJQaqZs7E5WeGH4jR5UkIaUp1ladyHB/EPb3xXQutZc4Aqnd5EvUHqEx0uuWhrkhluekNy/ttrUsJEhshGxZByAQc4z71yfU8bZ0cSbv8/matCajibLNaS6qonotc+wpzBtcVue8Cnl11zBdx7JbwB9KuaepwlxbPzGlltk2mX
bb1PududaakRJUVUhtDiApJQ4gqB59DmtYBaxHvUrqrP6W6Ab+4LaZ99uDCW4yWmx4UccBTznIAxwAPMnnhPOB1nqD6dp3Uisu+ey93j8P3e9o6Ea1S03hfn7f5Ic0FfbhBDst+DHYuMnKpD+7KnVEklR98k1430vTdSWrlqKU7xlzv557cs6nV1aEoqCXHgJJF1krZUiPIEdTnDj20qV74r06FOq1a6RgSnFdhbarYbdbXHIqXZG84VJYSN30GavwobYgJVLjbJsT9xYU5bb5HTIOQWrm04yc/wCNIWP0oipSf3ZfiRdRd0Rb1Ms/VLSNkk3ZrSzF5tbCSt6dbbh9sDScd1NpQlYHvjFJaeq8yf4AZ1UleKKxXT4jtWLcBiyI0YDt4bIP981GTUHZMo+tKR0tfxP9Rrc6lxi+JbWnkH7Og/5VB6ia4H3vyGVj+N3q9FcJRqFlSO5QqG3hXtwKaWuq0+AO+beWfRjpd1fgX/oVbtVSpUZ6dAtKZl3aZUkJSUoKl4GeOAcA/St/T1fVpRm/BYk9q3HTq70+018TfQmfbG3GZ9uu8ATbVPThRac272nEn1zj9RVpbZq/Zg5rcsHwrnwXbZPkw5AAfjuqZcA8lJJSf1BqhJWbRWvfI8WJOSKzNQyUAzt7e1IJPesKqw8cjtCVsd78Gqc8okO2Mo5ql3HQ1XMBSceVXKWGOCMkeG6tNbUXdXGSAH/Wa6MqmUhGcetIRlIRlIRuiosizvnjvQ7DHijkYpxCdXeiomjzypDmUhjPpSESZob5ben1rleoZmyceCW9KlKrlaUE/wD6gH8gT/lWJoob9ZH5r9ScOQ8fdVscVnnNdu1yw98AR1QL0ONDiONLbEhkyULWnAWgKKQR6jcFf+msfW/CoLyxSzgiZ1P75X/3D+gxVdPH77lVrIF6xkqfvzyc8NIS2Mewz/cmt3RR20E/N2RnyPui+tmr9B7EW25B2OjgRprSX28emDyPzrRUmhKbQpg6huWrZdxu91kqlSnlEkq/hSM8JSOyUjyArE18vjigkW5XbLD/AAo63j2GFeLaJRcmSHvEXEWSP93CEJOw5wc/MFD3H1otGezn98lmlZx23yWdtHVG8plx3WUoWwzFMJvxDhW0k7c8c4BrUjXeLIm4ruJNd9RY1n0zc7xeFFUOMwVrQnnCAQAAKq6inHURcaiun2JRm6fxeBo6S660z1iW2xpS7QVzFA7oktwMvII8ihXOD/UOPegU9HTjZRjYItQ55TD5eiruxelWa5SW7XLdQVR943tvewUMY9D6eYqytPZ24ZF1Lq5yNwtlm05cVXaLOhO2JLsiWbeSXltpwFkp7nAGePIEjvRHTjbOGiDqbc9hisPxM9IpaQy1rqda3TwFT2ypH470/wD+hUacqT4n+P8AwBdeLxwT30uvVkvznj2nUllvTS2ypEiE8gKPspG4ggjgitKMLK6GU1LhnyF6uosznVzWR04U/cH3vKMEND5A14hwE+3fHtisDVbVUltKsc8DEmMQ3x6VmOWQjWDrEBYCz5VCfxWQG1uDV68SYzD7LUp5ll4bXW23ClLg9FAHBH1qxT3LCFfB9C/2ZvWNd26aak0pc3dzenJHjxnHCMJjupUVI+gUFH8a6TSNqKgyxSzFt9j5m9Qp7F26g6lmxSPs0i5yXWinsUl1RGPwp6rTm2ipHhG9h7DntWRqQkWFcdZ2Jwax5IKhe28UISc8+dVmrsnfFx4TKy2Dk9qpOGRK413CRkHmrdOJL3Bqb8731NasMIXbkAOK6MqmfWkMZ50hzKQjPM0hzdsZVioy4IsUpQT5UJsY1WkjnFOncQmWOaKh0ZTjnhpCPRx5UhiU9FNbbc3n2rkNe71GGj90PPvVuxyrTKdVtaZlNqcPondgn8iazen/APyUx1jJONo085Jm2ovtbocqV4Oc8HBGQfwIruWrlhIjr4gZChrtEZf8MK1xIyQP5Qd7p/8A7BXPdTluqxiuyHas2Qqk7l
pPbjcfxOf8qF2aK3cjm5PfaLhJcP8AM4T+tdPSjthGPsAeWJRRRg40ugNaeWrHKl1gat3rpFiH3TvCLaZrLjgSUIdSs58gFDNJuysRjbcXoi3dCQraoEeKnBB49q3YpItN3ZHvxH31X/ZDemUlWXiyycehdT39uKG3n5EajtBlM7Df7lpO/Q7xaJbkG5Q3A4xIbPKT/mCOCDwQakmZ8JuDwfTD4f8A4s9OfEDppjSus3W7NqRABZljjw3U/wALqFH+U/wnnjOFcYVVmMlUW1l2FRTJQaYW/qONJuUcKucBaLZeo+AUzYbv7tD6T2UNqgM+aVI/pNESs8/UJa+D5b61s7dh1JebW04l9qBNfiIdSeFpbcUgKH1Cc/jXNSjsqyinwzNeQZ+xNqcC9idx5zjmi+pJK1wSdx5gRkq4xx24qjVkXKaH6LbvETkis2dSzLijdCW5oSwjaBijUm5O4KSXDA+6SvDJAzityjC5VlZHlk6l6n0zZrlabRepVut1ySUSmY6tnipPcEjnkcd61ILZlcjqT27XwDbKfmAA+lJsjyEtmRtSOOay67uEiErKglIGKy5E1gUJO5OBQeGTFni/ugkdwKBbI5tFtT1ycASn5R3NWaMd7sh0hPeLI5GeSEpBHqBVypFUkSxYiPgGuiKRmMZpCPDinEZSHM86Q53ioBOTQ5sG+RwSnGP+lVmyZq40FA84P0qSYmrjY4nCiKtLgZGtOSPcZA5xSImyE5UBUbjEu6MaxBaGcdq4vXv42Wor4Rb1BdCLclsHgkAigdNV6lwc1gkXp58TabPo5qy3q3qkSoxQWLi2d3KP4FKR3CwPlJBwoAZAI56v1klZrISnWstsgQ17rBes7tMvqt+ZYCgHAAoAICE8DtwBxWBWl6uob+X5BW7x3IB5Kyy1JWDnYggfgKPBbnFeQHBHLo2rPr51064KyNaccP7S34Omo47FXNc5We7USLC+4bMRy6CkJ3FQIxSlK2RoOzuWttFwH3TEUhQUFMsqCh5/IK3KbvTT+X6Fh8gb17kur6TTHDgNOTIzHzHGVFSl8Dz4Qafm7IVsQKpOp+eoozWONnfciSW3mXFtPNnchxtRSpJ9QRyDQajayhoN3J3tfxO9QWtMmy/fKFI8H7Oiapj/AHptrOdqXM9s8jKSQexFAlrasY27+S16kmrEYTGC6VLJKs8kqPJrNjPOQbwNzg8MpTnj3qys5K6WR5tDOQCe1Ua77GhTXcJGiGmfTisp5ZdwkDV6fG4nPlWrQiV5WQC3V/xXcA10NGO1ZKMmrjYpGPrVq46Z2iNblZoc3ZDpXCi0NDbmsquwqQ9tpOBziqDZNIVRm9yCPegzdmSSHKIykrAJqrOTsMrB7pS3Ntw3nFgDA4zV7RKycmFEjUFN0vzbKUgoKgCKuVZbmosexWYjOK37lAzFK4jw9/WnuOeY8/KlcR5TjiyGMgetAmwbdmOCGVVWckTXsauJKeDxUk7iGp4ZWT6+VW48DI5pHNSY7PcUwx3jN73BwaHJ2QxMGkGx9nZFcVrX8TZopfChL1GeAWhAPZWcUbpccXKtQC2pGchJyfrW64+QCdwym4aYjs5wEBKfyHP9qw6eZORelxY42uKzPdDcnHgrBK9x45olacqavDkVOO92Ylnasstja+xRoUZ5QyFlLYWnP1PejU9HqdQ/UnNr62IucYfCkDUtmLqYIVBcaiPNjYIiwlCSM5ynHrmtSEqmkxVTaffn8Qb/AJixj2D1zSFwhaVtk1xttMN7LSHA4D8w7ggdq55aylPUzgvvLPBYdOWxPsNqC3DSvadysElWeKsu9S1yCtEm3pnfGrhoq0OF9tx5prwHEKdG4KQSkZGcjjBGe+a6CMlFWCR+KKdwH+IbXw1Cq3adihsRbe4qS8WznLyk7UpP+FO78VmnlUSW1AK0ruy7EFSRtdNSjwUZLJ2iLIUMZ/CoTQPgIra7l0A5rMqrAaLyO72EsqUapLLsFfcZHlBTg5HBq/
FWQFRuwjs4+UVl13k0oIc57vhtHBxxVWlG7CzYDXyWopVhVdBp4IoVZAyoEnJ5rUKlzRSMnmpXJp2FURGO470ObDRCazo+TgVlV3kNEe2kBQrPk7BEOAYCGjVZyuxnhGn2nwFIVnz5p9m66IxYewrqj7rQWlj5hzzVyjK0Sx2CDplBTOvHirwTuplO8uSSVyoOe1dYZp6OaYYwUhHnfFOOYexphhbCO3+9Ankg3ZjshWcDFVGSvc5SuWyanDkZ5GRferqHXBrTjnoGTikIXwEAup+tAqPAO/xIlvSQwln864vW8s1HlA51FeUZ+PLJrV6ZFbDOrSd7AxZR4t1itnsp1IP51rV8U5NeAdJ/GkGV3cJeUcfyn9cCsKgsF6bGC/rPgNoGQkkqPv6Vo6ZZbB1HaKS7ghJ4crajwAieNDc42D5qFO3hkiTZbi/AaZ3q8NIBCNxxn1xXKQSu5WyW6jsJ5zYYgLKe5RiiU3umkwbdkMDaRkZAJ9a0mwKebM2eOBgCmjySbwMkrl8j3q/H7pWbvdjtboiScn+1U6s2KMcj5EjpQrI8qz5ybwG4N7i+r7KrHHFRpxW4dvAxJdLj6c1otWiRpZe4M7OkBCfpWFqOWacDa8uENmmoK7IzYC3hZOBjzroaCM2s8pDURgirYG9j1PI+tOTT7CxhACU1Xk8h0x/tAG0jyFZtbkKnyP8AGA3J471nTCoeVNAx8+1UU/iBydkDlzWW14B7VqUkmiMHdnOPd5DSA2lWEk9qI6Syw1yb+iTqlPoWeT3rJT21mkW6a+G5/9k=\"\n }\n },\n \"params\": {\n \"score_threshold\": \"0.8\"\n }\n}\nCall", "request = clients[\"prediction\"].predict(\n name=model_id,\n payload=payload,\n params=params\n)\n", "Response", "print(MessageToJson(request.__dict__[\"_pb\"]))\n", "Example output:\n{\n \"payload\": [\n {\n \"annotationSpecId\": \"8907226159985459200\",\n \"classification\": {\n \"score\": 1.0\n },\n \"displayName\": \"daisy\"\n }\n ]\n}\nprojects.locations.models.undeploy\nRequest", "request = clients[\"automl\"].undeploy_model(\n name=model_id\n)\n", "Call", "result = request.result()\n", "Response", "print(MessageToJson(result))\n", "Example output:\n{}\nTrain and export an Edge model\nprojects.locations.models.create\nRequest", "# creating edge model for export\nmodel_edge = {\n \"display_name\": \"flowers_edge_\" + TIMESTAMP,\n \"dataset_id\": dataset_short_id,\n \"image_classification_model_metadata\":{\n \"train_budget_milli_node_hours\": 8000,\n \"model_type\": \"mobile-versatile-1\"\n },\n}\n\nprint(\n MessageToJson(\n automl.CreateModelRequest(\n parent=PARENT,\n model=model_edge \n ).__dict__[\"_pb\"]\n )\n)\n", "Example output:\n{\n \"parent\": 
\"projects/migration-ucaip-training/locations/us-central1\",\n \"model\": {\n \"displayName\": \"flowers_edge_20210226015151\",\n \"datasetId\": \"ICN2833688305139187712\",\n \"imageClassificationModelMetadata\": {\n \"modelType\": \"mobile-versatile-1\",\n \"trainBudgetMilliNodeHours\": \"8000\"\n }\n }\n}\nCall", "request = clients[\"automl\"].create_model(\n parent=PARENT,\n model=model_edge\n)\n", "Response", "result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))\n", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/models/ICN8566948201909714944\"\n}", "model_edge_id = result.name\n ", "projects.locations.models.export", "output_config = {\n \"model_format\": \"tflite\",\n \"gcs_destination\": {\n \"output_uri_prefix\": \"gs://\" + f\"{BUCKET_NAME}/export/\",\n }\n}\n\n\nprint(MessageToJson(\n automl.ExportModelRequest(\n name=model_edge_id,\n output_config=output_config\n ).__dict__[\"_pb\"])\n)\n", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/models/ICN8566948201909714944\",\n \"outputConfig\": {\n \"gcsDestination\": {\n \"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210226015151/export/\"\n },\n \"modelFormat\": \"tflite\"\n }\n}\nCall", "request = clients[\"automl\"].export_model(\n name=model_edge_id,\n output_config=output_config\n)\n", "Response", "result = request.result()\n\nprint(MessageToJson(result))\n", "Example output:\n{}", "model_export_dir = output_config[\"gcs_destination\"][\"output_uri_prefix\"]\n\n! 
gsutil ls -r $model_export_dir\n", "Example output:\n```\ngs://migration-ucaip-trainingaip-20210226015151/export/:\ngs://migration-ucaip-trainingaip-20210226015151/export/model-export/:\ngs://migration-ucaip-trainingaip-20210226015151/export/model-export/icn/:\ngs://migration-ucaip-trainingaip-20210226015151/export/model-export/icn/tflite-flowers_edge_20210226015151-2021-02-26T06:16:19.437101Z/:\ngs://migration-ucaip-trainingaip-20210226015151/export/model-export/icn/tflite-flowers_edge_20210226015151-2021-02-26T06:16:19.437101Z/dict.txt\ngs://migration-ucaip-trainingaip-20210226015151/export/model-export/icn/tflite-flowers_edge_20210226015151-2021-02-26T06:16:19.437101Z/model.tflite\ngs://migration-ucaip-trainingaip-20210226015151/export/model-export/icn/tflite-flowers_edge_20210226015151-2021-02-26T06:16:19.437101Z/tflite_metadata.json\n```\nCleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial.", "delete_dataset = True\ndelete_model = True\ndelete_bucket = True\n\n# Delete the dataset using the AutoML fully qualified identifier for the dataset\ntry:\n if delete_dataset:\n clients['automl'].delete_dataset(name=dataset_id)\nexcept Exception as e:\n print(e)\n\n# Delete the model using the AutoML fully qualified identifier for the model\ntry:\n if delete_model:\n clients['automl'].delete_model(name=model_id)\nexcept Exception as e:\n print(e)\n \n# Delete the model using the AutoML fully qualified identifier for the model\ntry:\n if delete_model:\n clients['automl'].delete_model(name=model_edge_id)\nexcept Exception as e:\n print(e) \n\nif delete_bucket and 'BUCKET_NAME' in globals():\n ! gsutil rm -r gs://$BUCKET_NAME\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Dharamsitejas/E4571-Personalisation-Theory-Project
Part2/analysis/feature_extraction_from_api.ipynb
mit
[ "Feature extraction from GoodReads API\nThis notebook contains the step-by-step code to enrich the books dataset using the ISBN key to extract information from the GoodReads API. The GoodReads API can be accessed using the 'goodreads' Python library. \n\nImporting libraries and tools", "import requests\nfrom goodreads import client\nimport pandas as pd\n\n# This is the URL prefix common for each record\n\nurl_prefix = 'https://www.goodreads.com/book/isbn_to_id/0441172717,0739467352?key='\n\n# Setting up the GoodReads API client\n\nfile = open('goodreads_credentials')\n\nkey , secret = [element.strip() for element in file.readlines()]\n\ngc = client.GoodreadsClient(key,secret)", "Importing the data file\nThis data file contains the original data that needs to be enriched. It contains the ISBN data for books using which we will be extracting additional information from the API.", "df = pd.read_csv('Combine.csv',index_col=0)\n\nall_isbn = df.isbn.unique()\n\nisbn_df = pd.DataFrame(all_isbn,columns=['isbn'])\n\nc = 0\n\ndef get_details(isbn):\n \n global c\n \n c+=1\n \n if(c%100 == 0):\n print(c)\n \n \n try:\n b = gc.book(isbn=isbn)\n return pd.Series({'title':b.title,'description':b.description,'num_pages':b.num_pages})\n except:\n return pd.Series({'title':'none','description':'none','num_pages':'none'})\n\nisbn_df[['description','num_pages','title']] = isbn_df.isbn.apply(get_details)\n\nisbn_df.to_pickle('ibsn_features_full.pickle')\n\nisbn_df = pd.read_pickle('ibsn_features_full.pickle')", "Retrying to get info for null records", "dfx = isbn_df[isbn_df['title'] == 'none']\n\ndfx.head()\n\ndfx[['description','num_pages','title']] = dfx.isbn.apply(get_details)\n\ndfx[dfx['title'] == 'none'].shape", "There are 34 records that still remain with no information from the API. 
We shall remove these records from our dataset as they form a negligible portion of our sample of books.\nMerging dfx with isbn_df", "for i, row in dfx.iterrows():\n isbn_df.loc[i] = dfx.loc[i]\n\n# Checking if the newly created dataset contains the same number of empty records as in dfx\n\nisbn_df[isbn_df['title'] == 'none'].shape == dfx[dfx['title'] == 'none'].shape\n\n# dfx = isbn_df[isbn_df['title'] == 'none']", "Saving the data file", "isbn_df.to_pickle('ibsn_features_new_batch.pickle')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/starthinker
colabs/url.ipynb
apache-2.0
[ "URL\nPull URL list from a table, fetch them, and write the results to another table.\nLicense\nCopyright 2020 Google LLC,\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nDisclaimer\nThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.\nThis code generated (see starthinker/scripts for possible source):\n - Command: \"python starthinker_ui/manage.py colab\"\n - Command: \"python starthinker/tools/colab.py [JSON RECIPE]\"\n1. Install Dependencies\nFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.", "!pip install git+https://github.com/google/starthinker\n", "2. Set Configuration\nThis code is required to initialize the project. 
Fill in required fields and press play.\n\nIf the recipe uses a Google Cloud Project:\n\nSet the configuration project value to the project identifier from these instructions.\n\n\nIf the recipe has auth set to user:\n\nIf you have user credentials:\nSet the configuration user value to your user credentials JSON.\n\n\n\nIf you DO NOT have user credentials:\n\nSet the configuration client value to downloaded client credentials.\n\n\n\nIf the recipe has auth set to service:\n\nSet the configuration service value to downloaded service credentials.", "from starthinker.util.configuration import Configuration\n\n\nCONFIG = Configuration(\n project=\"\",\n client={},\n service={},\n user=\"/content/user.json\",\n verbose=True\n)\n\n", "3. Enter URL Recipe Parameters\n\nSpecify a table with only two columns URL, URI (can be null).\nCheck bigquery destination for results of fetching each URL.\nModify the values below for your use case; this can be done multiple times, then click play.", "FIELDS = {\n 'auth':'service', # Credentials used for reading and writing data.\n 'status':True, # Pull status of HTTP request.\n 'read':False, # Pull data from HTTP request.\n 'dataset':'', # Name of Google BigQuery dataset to write.\n 'table':'', # Name of Google BigQuery table to write.\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n", "4. 
Execute URL\nThis does NOT need to be modified unless you are changing the recipe; click play.", "from starthinker.util.configuration import execute\nfrom starthinker.util.recipe import json_set_fields\n\nTASKS = [\n {\n 'url':{\n 'auth':{'field':{'name':'auth','kind':'authentication','order':1,'default':'service','description':'Credentials used for reading and writing data.'}},\n 'status':{'field':{'name':'status','kind':'boolean','order':2,'default':True,'description':'Pull status of HTTP request.'}},\n 'read':{'field':{'name':'read','kind':'boolean','order':3,'default':False,'description':'Pull data from HTTP request.'}},\n 'urls':{\n 'bigquery':{\n 'dataset':{'field':{'name':'dataset','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to write.'}},\n 'query':{'field':{'name':'table','kind':'text','order':5,'default':'','description':'Query to run to pull URLs.'}},\n 'legacy':False\n }\n },\n 'to':{\n 'bigquery':{\n 'dataset':{'field':{'name':'dataset','kind':'string','order':6,'default':'','description':'Name of Google BigQuery dataset to write.'}},\n 'table':{'field':{'name':'table','kind':'string','order':7,'default':'','description':'Name of Google BigQuery table to write.'}}\n }\n }\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\n\nexecute(CONFIG, TASKS, force=True)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
amorgun/shad-ml-notebooks
notebooks/s1-4/linear.ipynb
unlicense
[ "%pylab inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns; sns.set();\nscatter_args = dict(s=100, edgecolor='black', linewidth='1.5', cmap=\"autumn\")", "Random forest\nOut-of-bag score\nFeature importances\nLinear classifiers\n$$a(x) = sign(\\left<w, x\\right> - w_0)$$", "def get_grid(data, step=0.1):\n x_min, x_max = data.x.min() - 1, data.x.max() + 1\n y_min, y_max = data.y.min() - 1, data.y.max() + 1\n return np.meshgrid(np.arange(x_min, x_max, step),\n np.arange(y_min, y_max, step))\n\nfrom sklearn.cross_validation import cross_val_score\n\ndef get_score(X, y, cl):\n return cross_val_score(cl, X, y, cv=5, scoring='mean_squared_error').mean()\n\ndef plot_linear_border(cl, X, plot, borders=1):\n x_limits = (np.min(X.x) - borders, np.max(X.x) + borders)\n y_limits = (np.min(X.y) - borders, np.max(X.y) + borders)\n line_x = np.linspace(*x_limits, num=2)\n line_y = (-line_x * cl.coef_[0, 0] - cl.intercept_) / cl.coef_[0, 1]\n plot.plot(line_x, line_y, c='r', lw=2)\n plot.fill_between(line_x, line_y, -100, color='r')\n plot.fill_between(line_x, line_y, 100, color='yellow')\n plot.autoscale(tight=True)\n plot.set_xlim(*x_limits)\n plot.set_ylim(*y_limits)\n\ndef show_classifier(X, y, cl,\n feature_modifier=lambda x: x,\n proba=True,\n print_score=False,\n borders=1):\n fig, ax = plt.subplots(1, 1)\n xys = c_[ravel(xs), ravel(ys)]\n cl.fit(feature_modifier(X), y)\n if print_score:\n print(\"MSE = {}\".format(get_score(feature_modifier(X), y, cl)))\n if proba:\n predicted = cl.predict_proba(feature_modifier(pd.DataFrame(xys, columns=('x', 'y'))))[:,1].reshape(xs.shape)\n else:\n predicted = cl.predict(feature_modifier(pd.DataFrame(xys, columns=('x', 'y')))).reshape(xs.shape)\n plot_linear_border(cl, X, ax, borders=borders)\n ax.scatter(X.x, X.y, c=y, **scatter_args)\n return cl\n\nn = 200\nrandom = np.random.RandomState(17)\ndf1 = pd.DataFrame(data=random.multivariate_normal((0,0), [[1, 0.3], [0.3, 0.7]], n), 
columns=['x', 'y'])\ndf1['target'] = 0\ndf2 = pd.DataFrame(data=random.multivariate_normal((1,2), [[1, -0.5], [-0.5, 1.6]], n), columns=['x', 'y'])\ndf2['target'] = 1\ndata = pd.concat([df1, df2], ignore_index=True)\nfeatures = data[['x', 'y']]\ndata.plot(kind='scatter', x='x', y='y', c='target', colormap='autumn', alpha=0.75, colorbar=False);\n\nfrom sklearn.svm import LinearSVC\nbig_grid = get_grid(features, 0.1)\nshow_classifier(features, data.target, \n LinearSVC(),\n proba=False);", "Gradient descent\n$$M_i(w, w_0) = y_i(\\left<x, w\\right> - w_0)$$\n$$\\sum_{i=1}^l \\mathscr{L}(M(x_i)) \\to \\min$$", "from sklearn.linear_model import SGDClassifier\n\nrandom = np.random.RandomState(11)\nn_iters = 20\nfigure(figsize=(10, 8 * n_iters))\nxys = c_[ravel(xs), ravel(ys)]\nclf = SGDClassifier(alpha=1, l1_ratio=0)\ntrain_objects = data.ix[random.choice(data.index, n_iters)]\nfor iteration in range(n_iters):\n new_object = train_objects.iloc[iteration]\n clf = clf.partial_fit([new_object[['x', 'y']]], [new_object.target], classes=[0, 1])\n ax = subplot(n_iters, 1, iteration + 1)\n title(\"objects count = {}\".format(iteration + 1))\n predicted = clf.predict(xys).reshape(xs.shape)\n plot_linear_border(clf, features, ax)\n processed_objects = train_objects.head(iteration + 1)\n scatter(processed_objects.x, processed_objects.y, c=processed_objects.target, alpha=0.5, **scatter_args)\n scatter(new_object.x, new_object.y, marker='x', lw='20')", "Links\n\nhttp://scikit-learn.org/stable/modules/linear_model.html\nhttps://habrahabr.ru/post/279117/" ]
[ "code", "markdown", "code", "markdown", "code", "markdown" ]
NikitaLoik/machineLearning_andrewNg
notebooks/4_neural_networks_II.ipynb
mit
[ "import numpy as np\nnp.random.seed(0)\n\nfrom scipy.io import loadmat\nfrom scipy import optimize\n\nimport pandas as pd\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nfrom matplotlib.image import NonUniformImage\nfrom matplotlib import cm\nmatplotlib.style.use('ggplot')\n%matplotlib inline\n\n%load_ext autoreload\n%autoreload 2", "0 Data Structure", "file_path = '../course_materials/ex4data1.mat'\ndata = loadmat(file_path)\nweights_file_path = '../course_materials/ex4weights.mat'\nweights = loadmat(weights_file_path)\nprint(weights['Theta1'].shape)\nprint(weights['Theta2'].shape)\nprint(type(weights))", "1 Neural Network\n1.1 Forward Propagation\n<img src=\"../course_materials/forward_propagation.png\">", "def get_data(file_path):\n data = loadmat(file_path)\n X = data['X']\n y = data['y']\n return X, y\n\ndef get_β(layer):\n '''Generate β-matrix for every layer in Neural Network'''\n β_set = ()\n for i in range(len(layer)-1):\n# recommendation from Andrew Ng window is ±(6/(inLayer + outLayer))**0.5\n low, high = -(6/(layer[i]+layer[i+1]))**0.5, (6/(layer[i]+layer[i+1]))**0.5\n β_set += (np.random.uniform(low,high,(layer[i+1], layer[i]+1)),)\n# β_set += (np.zeros((outLayer, inLayer+1)),)\n return β_set\n\ndef flatten_β(β_set):\n β_flat = β_set[0].flatten()\n for β in β_set[1:]:\n β_flat = np.concatenate((β_flat, β.flatten()), axis=-1)\n return β_flat\n\ndef reshape_β(β, layer):\n splitIndex = 0\n splitIndices = []\n for i in range(len(layer)-2):\n splitIndex += (layer[i]+1)*layer[i+1]\n splitIndices += [splitIndex]\n splitβ = np.split(β, splitIndices)\n reshapedβ = ()\n for i in range(len(splitβ)):\n reshapedβ += (splitβ[i].reshape(layer[i+1],layer[i]+1),)\n return reshapedβ\n \ndef get_sigmoid(z):\n return 1/(1+np.exp(-z))\n\ndef forward_propagation(β_flat, layer, X_flat, n_samples):\n '''Forward Propagation is the hypothesis function for Neural Networks'''\n β_set = reshape_β(β_flat, layer)\n# H_0 (5000, 400)\n H = 
X_flat.reshape(n_samples, -1)\n#     Z_H = ()\n    H_byLayer = ()\n    for β in β_set:\n#         print(H.shape)\n#         Z_l (5000, k_l); l is the number of layers [0, ...,l]; k is the number of neurons in a layer l [1,...,k]\n        Z = np.dot(np.insert(H, 0, 1, axis=1), β.T)\n#         H_l (5000, k_l); l is the number of layers [0, ...,l]; k is the number of neurons in a layer l [1,...,k]\n        H = get_sigmoid(Z)\n#         Z_H += ((Z, H),)\n        H_byLayer += (H,)\n#     H_2 (5000, 10)\n    return H_byLayer\n\ndef get_sigmoid_gradient(Z):\n    return get_sigmoid(Z)*(1-get_sigmoid(Z))\n\ndef cost_function(β_flat, layer, X_flat, n_samples, y, yUnique, λ = 0.):\n    X = X_flat.reshape(n_samples, -1)\n    Y = np.array([yUnique]* y.shape[0]) == y\n    β_set = reshape_β(β_flat, layer)\n    J = 0\n    for n in range(n_samples):\n        x_n = X[n:n+1,:]\n        y_n = Y[n:n+1,:]\n#         hypothesis vector h_n(1, 10)\n        h_n = forward_propagation(β_flat, layer, x_n, 1)[len(β_set)-1]\n#         cost function scalar j_n(1, 1) = y_n(1, 10)*h_n.T(10, 1)\n        j_n = (- np.dot(y_n, np.log(h_n).T) - np.dot((1-y_n), np.log(1-h_n).T))\n        J += j_n\n#     regularisation term (R)\n    cumulativeR = 0\n    for β in β_set:\n        cumulativeR += np.sum(β*β) #element-wise multiplication\n    cumulativeR *= λ/(2*n_samples)\n    return J[0][0]/n_samples + cumulativeR", "1.1.1 Neural Network Initialisation\nThe input-data matrix X(5000, 400) is comprised of 5000 digit images 20 by 20 pixels (400 pixels).<br>\nThe output-data vector Y(5000,1) is comprised of 5000 assigned digits (1 through 10; 10 represents the digit '0').<br>\nThe neural network in this work has 1 input layer (400 neurons), one hidden layer (25 neurons) and an output layer (10 neurons).\nTo initialise a simple neural network, one has to do the following:\n1. set the number of neurons in every layer (including input and output layers)\n2. extract and flatten input matrix X\n3. transform output Y\n4. 
initialise Beta matrix", "# Set number of neurons in every layer (including input and output layers)\nlayer = 400, 25, 10\n# Extract and flatten input matrix X\nX, y = get_data(file_path)\nn_samples, n_variables = X.shape\nX_flat = X.flatten()\nyUnique = np.unique(y)\n# Initialise Beta matrix\nβ_test = flatten_β((weights['Theta1'], weights['Theta2']))\nβ_initial = flatten_β(get_β(layer))\nprint(X.shape)\nprint(y.shape)\nfor β in get_β(layer): print(β.shape)", "1.1.2 Forward-Propagation Test", "# either transformed Y or y together with yUnique can be supplied to a function\n# Y = np.array([yUnique]* y.shape[0]) == y\n# print(Y[0:0+1,:].shape)\n\nprint(forward_propagation(β_test, layer, X_flat, n_samples)[1].shape)\nprint(forward_propagation(β_test, layer, X[0:0+1,:], 1)[1].shape)\n\nprint(X.shape)\nprint(X[0][None,:].shape)\n# cost_function(β_test, layer, X.flatten(), n_samples, y, yUnique, λ = 0.)\ncost_function(β_test, layer, X[0:5000][None,:].flatten(), 5000, y, yUnique, λ = 0.)", "1.1.3 Cost-Function Test\nThe outputs of the cost_function should be as follows:<br>\nβ_test, X, λ=0. — 0.287629 (Andrew Ng)<br>\nβ_test, X, λ=1. — 0.383770 (Andrew Ng)<br>\nβ_test, X, λ=0. — 0.0345203898838<br>\nβ_initial, X, λ=1. 
— 65.5961451562", "print(cost_function(β_test, layer, X_flat, n_samples, y, yUnique, λ = 0.))\nprint(cost_function(β_test, layer, X_flat, n_samples, y, yUnique, λ = 1.))\nprint(cost_function(β_test, layer, X[0][None,:].flatten(), 1, y, yUnique, λ = 0.))\nprint(cost_function(β_initial, layer, X_flat, n_samples, y, yUnique, λ = 1.))", "1.2 Back Propagation\n$\\delta^l = H^l - Y$<br>\n$\\delta^{l-1} = (\\beta^{l-1})^T\\delta^l\\cdot g'(h^{l-1})$", "def back_propagation(β_flat, layer, X_flat, n_samples, y, yUnique):\n Y = np.array([yUnique]* y.shape[0]) == y\n β_set = reshape_β(β_flat, layer)\n\n deltaSet = ()\n# hypothesis matrix E(5000, 10)\n H = forward_propagation(β_flat, layer, X_flat, n_samples)\n# error matrix E(5000, 10)\n E = H[len(layer)-2] - Y\n for l in reversed(range(len(layer)-1)):\n E = np.dot(E*get_sigmoid_gradient(H[l]), β_set[l])[:,1:]\n deltaSet = (np.dot(H[l].T, np.insert(E, 0, 1, axis=1)),) + deltaSet\n flatDelta = flatten_β(deltaSet)\n return β_flat + flatDelta/n_samples\n\nY = np.array([yUnique]* y.shape[0]) == y\n# print(Y.shape)\nβ_set = reshape_β(β_test, layer)\n# print(len(β_set))\ndeltaSet = ()\n# hypothesis matrix E(5000, 10)\nH = forward_propagation(β_test, layer, X_flat, n_samples)\n# print (len(H))\n# error matrix E(5000, 10)\nE = H[len(layer)-2] - Y\n# print(E.shape)\nfor l in reversed(range(len(layer)-1)):\n E = np.dot(E*get_sigmoid_gradient(H[l]), β_set[l])[:,1:]\n print(E.shape)\n deltaSet = (np.dot(H[l].T, np.insert(E, 0, 1, axis=1)),) + deltaSet\nprint(len(deltaSet))\nprint(deltaSet[0].shape)\nprint(deltaSet[1].shape)\nflatDelta = flatten_β(deltaSet)\nprint(β_test.shape)\nf = β_test + flatDelta/n_samples\nf[3915]\n\nβ_initial = flatten_β(get_β(layer))\na = back_propagation(β_test, layer, X_flat, n_samples, y, yUnique)\nprint(a.shape)\nprint(a[3915])\nprint(np.sum(a))\nprint(cost_function(a,layer, X_flat, n_samples, y, yUnique, λ = 0.))\n\ndef check_gradient(β_flat, layer, X_flat, n_samples, y, yUnique, epsilon):\n for i in 
np.random.randint(β_flat.size, size=10):\n        epsilonVector = np.zeros(β_flat.size)\n        epsilonVector[i] = epsilon\n        \n        gradient = back_propagation(β_flat, layer, X_flat, n_samples, y, yUnique)\n        \n        # copy β_flat so the in-place += / -= below do not mutate the original (and each other)\n        βPlus, βMinus = β_flat.copy(), β_flat.copy()\n#         βPlus = β + epsilonVector\n        βPlus += epsilonVector\n        costPlus = cost_function(βPlus,layer, X, n_samples, y, yUnique, λ = 0.)\n#         βMinus = β - epsilonVector\n        βMinus -= epsilonVector\n        costMinus = cost_function(βMinus,layer, X, n_samples, y, yUnique, λ = 0.)\n        approximateGradient = (costPlus-costMinus)/(2*epsilon)\n        print (i, '\\t', approximateGradient, '\\t', gradient[i])\n\nepsilon = 0.0001\ncheck_gradient(β_test, layer, X_flat, n_samples, y, yUnique, epsilon)\n", "http://www.holehouse.org/mlclass/09_Neural_Networks_Learning.html", "def optimise_β_1(β_flat, X_flat, n_samples, y, yUnique, λ=0.):\n\n    β_optimised = optimize.minimize(cost_function, β_flat, args=(layer, X_flat, n_samples, y, yUnique),\n                                    method=None, jac=back_propagation, options={'maxiter':50})\n\n#     β_optimised = optimize.fmin_cg(cost_function, fprime=back_propagation, x0=β_flat,\n#                                    args=(layer, X_flat, n_samples, y, yUnique),\n#                                    maxiter=50,disp=True,full_output=True)\n    return(β_optimised)\n\ndef optimise_β_2(β_flat, X_flat, n_samples, y, yUnique, λ=0.):\n\n#     β_optimised = optimize.minimize(cost_function, β_flat, args=(layer, X_flat, n_samples, y, yUnique),\n#                                     method=None, jac=back_propagation, options={'maxiter':50})\n\n    β_optimised = optimize.fmin_cg(cost_function, fprime=back_propagation, x0=β_flat,\n                                   args=(layer, X_flat, n_samples, y, yUnique),\n                                   maxiter=50,disp=True,full_output=True)\n    return(β_optimised)\n\na = optimise_β_1(β_initial, X_flat, n_samples, y, yUnique, λ=0.)\n\nb = optimise_β_2(β_initial, X_flat, n_samples, y, yUnique, λ=0.)\n\ndef quality_control(β_optimised, layer, X_flat, n_samples, y, yUnique, λ = 0.):\n    X = X_flat.reshape(n_samples,-1)\n    yAssignmentVector = []\n    misAssignedIndex = []\n    for n in range(n_samples):\n        x = X[n]\n        yAssignment = 
np.argmax(forward_propagation(β_optimised, layer, X[n], 1)[1]) + 1\n if yAssignment == y[n]:\n yAssignmentVector += [True]\n else:\n yAssignmentVector += [False]\n misAssignedIndex += [n]\n return (sum(yAssignmentVector)/n_samples)\n\n# neuralNetworkClassifier(, X_flat, n_samples, y, yUnique, λ=0.)\nquality_control(a['x'], layer, X_flat, n_samples, y, yUnique, λ = 0.)\n\nquality_control(b[0], layer, X_flat, n_samples, y, yUnique, λ = 0.)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
telescopeuser/workshop_blog
wechat_tool_py3_local/lesson_5_py3_local.ipynb
mit
[ "<img src='https://www.iss.nus.edu.sg/Sitefinity/WebsiteTemplates/ISS/App_Themes/ISS/Images/branding-iss.png' width=15% style=\"float: right;\">\n<img src='https://www.iss.nus.edu.sg/Sitefinity/WebsiteTemplates/ISS/App_Themes/ISS/Images/branding-nus.png' width=15% style=\"float: right;\">", "import IPython.display\nIPython.display.YouTubeVideo('leVZjVahdKs')", "如何使用和开发微信聊天机器人的系列教程\nA workshop to develop & use an intelligent and interactive chat-bot in WeChat\nWeChat is a popular social media app, which has more than 800 million monthly active users.\n<img src='https://www.iss.nus.edu.sg/images/default-source/About-Us/7.6.1-teaching-staff/sam-website.tmb-.png' width=8% style=\"float: right;\">\n<img src='reference/WeChat_SamGu_QR.png' width=10% style=\"float: right;\">\nby: GU Zhan (Sam)\nOctober 2018 : Update to support Python 3 in local machine, e.g. iss-vm.\nApril 2017 ======= Scan the QR code to become trainer's friend in WeChat =====>>\n第五课:视频识别和处理\nLesson 5: Video Recognition & Processing\n\n识别视频消息中的物体名字 (Label Detection: Detect entities within the video, such as \"dog\", \"flower\" or \"car\")\n识别视频的场景片段 (Shot Change Detection: Detect scene changes within the video)\n识别受限内容 (Explicit Content Detection: Detect adult content within a video)\n生成视频字幕 (Video Transcription BETA: Transcribes video content in English)\n\nUsing Google Cloud Platform's Machine Learning APIs\nFrom the same API console, choose \"Dashboard\" on the left-hand menu and \"Enable API\".\nEnable the following APIs for your project (search for them) if they are not already enabled:\n<ol>\n**<li> Google Cloud Video Intelligence API </li>**\n</ol>\n\nFinally, because we are calling the APIs from Python (clients in many other languages are available), let's install the Python package (it's not installed by default on Datalab)", "# Copyright 2016 Google Inc. 
Licensed under the Apache License, Version 2.0 (the \"License\"); \n# !pip install --upgrade google-api-python-client", "短片预览 / Video viewing", "# 多媒体文件的二进制base64码转换 (Define media pre-processing functions)\n\n# Import the base64 encoding library.\nimport base64, io, sys, IPython.display\n\n# Python 2\nif sys.version_info[0] < 3:\n import urllib2\n# Python 3\nelse:\n import urllib.request\n\n# Pass the media data to an encoding function.\ndef encode_media(media_file):\n with io.open(media_file, \"rb\") as media_file:\n media_content = media_file.read()\n# Python 2\n if sys.version_info[0] < 3:\n return base64.b64encode(media_content).decode('ascii')\n# Python 3\n else:\n return base64.b64encode(media_content).decode('utf-8')\n\nvideo_file = 'reference/video_IPA.mp4'\n# video_file = 'reference/SampleVideo_360x240_1mb.mp4'\n# video_file = 'reference/SampleVideo_360x240_2mb.mp4'\n\nIPython.display.HTML(data=\n '''<video alt=\"test\" controls><source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\" /></video>'''\n .format(encode_media(video_file)))", "<span style=\"color:blue\">Install the client library</span> for Video Intelligence / Processing", "!pip install --upgrade google-cloud-videointelligence", "", "# Imports the Google Cloud client library\nfrom google.cloud import videointelligence\n\n# [Optional] Display location of service account API key if defined in GOOGLE_APPLICATION_CREDENTIALS\n!echo $GOOGLE_APPLICATION_CREDENTIALS\n\n\n##################################################################\n# (1) Instantiates a client - using GOOGLE_APPLICATION_CREDENTIALS\n# video_client = videointelligence.VideoIntelligenceServiceClient()\n\n# \n# (2) Instantiates a client - using 'service account json' file\nvideo_client = videointelligence.VideoIntelligenceServiceClient.from_service_account_json(\n \"/media/sf_vm_shared_folder/000-cloud-api-key/mtech-ai-7b7e049cf5f6.json\")\n##################################################################\n", "* 识别视频消息中的物体名字 
(Label Detection: Detect entities within the video, such as \"dog\", \"flower\" or \"car\")\nhttps://cloud.google.com/video-intelligence/docs/analyze-labels\ndidi_video_label_detection()", "from google.cloud import videointelligence\n\ndef didi_video_label_detection(path):\n \"\"\"Detect labels given a local file path. (Demo)\"\"\"\n \"\"\" Detects labels given a GCS path. (Exercise / Workshop Enhancement)\"\"\"\n\n##################################################################\n# (1) Instantiates a client - using GOOGLE_APPLICATION_CREDENTIALS\n# video_client = videointelligence.VideoIntelligenceServiceClient()\n\n# \n# (2) Instantiates a client - using 'service account json' file\n video_client = videointelligence.VideoIntelligenceServiceClient.from_service_account_json(\n \"/media/sf_vm_shared_folder/000-cloud-api-key/mtech-ai-7b7e049cf5f6.json\")\n##################################################################\n\n features = [videointelligence.enums.Feature.LABEL_DETECTION]\n\n with io.open(path, 'rb') as movie:\n input_content = movie.read()\n\n operation = video_client.annotate_video(\n features=features, input_content=input_content)\n print('\\nProcessing video for label annotations:')\n\n result = operation.result(timeout=90)\n print('\\nFinished processing.')\n\n # Process video/segment level label annotations\n segment_labels = result.annotation_results[0].segment_label_annotations\n for i, segment_label in enumerate(segment_labels):\n print('Video label description: {}'.format(\n segment_label.entity.description))\n for category_entity in segment_label.category_entities:\n print('\\tLabel category description: {}'.format(\n category_entity.description))\n\n for i, segment in enumerate(segment_label.segments):\n start_time = (segment.segment.start_time_offset.seconds +\n segment.segment.start_time_offset.nanos / 1e9)\n end_time = (segment.segment.end_time_offset.seconds +\n segment.segment.end_time_offset.nanos / 1e9)\n positions = '{}s to 
{}s'.format(start_time, end_time)\n confidence = segment.confidence\n print('\\tSegment {}: {}'.format(i, positions))\n print('\\tConfidence: {}'.format(confidence))\n print('\\n')\n\n # Process shot level label annotations\n shot_labels = result.annotation_results[0].shot_label_annotations\n for i, shot_label in enumerate(shot_labels):\n print('Shot label description: {}'.format(\n shot_label.entity.description))\n for category_entity in shot_label.category_entities:\n print('\\tLabel category description: {}'.format(\n category_entity.description))\n\n for i, shot in enumerate(shot_label.segments):\n start_time = (shot.segment.start_time_offset.seconds +\n shot.segment.start_time_offset.nanos / 1e9)\n end_time = (shot.segment.end_time_offset.seconds +\n shot.segment.end_time_offset.nanos / 1e9)\n positions = '{}s to {}s'.format(start_time, end_time)\n confidence = shot.confidence\n print('\\tSegment {}: {}'.format(i, positions))\n print('\\tConfidence: {}'.format(confidence))\n print('\\n')\n\n # Process frame level label annotations\n frame_labels = result.annotation_results[0].frame_label_annotations\n for i, frame_label in enumerate(frame_labels):\n print('Frame label description: {}'.format(\n frame_label.entity.description))\n for category_entity in frame_label.category_entities:\n print('\\tLabel category description: {}'.format(\n category_entity.description))\n\n # Each frame_label_annotation has many frames,\n # here we print information only about the first frame.\n frame = frame_label.frames[0]\n time_offset = frame.time_offset.seconds + frame.time_offset.nanos / 1e9\n print('\\tFirst frame time offset: {}s'.format(time_offset))\n print('\\tFirst frame confidence: {}'.format(frame.confidence))\n print('\\n')\n \n return segment_labels, shot_labels, frame_labels\n\n# video_file = 'reference/video_IPA.mp4'\n\ndidi_segment_labels, didi_shot_labels, didi_frame_labels = 
didi_video_label_detection(video_file)\n\ndidi_segment_labels\n\ndidi_shot_labels\n\ndidi_frame_labels", "* 识别视频的场景片段 (Shot Change Detection: Detect scene changes within the video)\nhttps://cloud.google.com/video-intelligence/docs/shot_detection\ndidi_video_shot_detection()", "from google.cloud import videointelligence\n\ndef didi_video_shot_detection(path):\n \"\"\" Detects camera shot changes given a local file path \"\"\"\n\n##################################################################\n# (1) Instantiates a client - using GOOGLE_APPLICATION_CREDENTIALS\n# video_client = videointelligence.VideoIntelligenceServiceClient()\n\n# \n# (2) Instantiates a client - using 'service account json' file\n video_client = videointelligence.VideoIntelligenceServiceClient.from_service_account_json(\n \"/media/sf_vm_shared_folder/000-cloud-api-key/mtech-ai-7b7e049cf5f6.json\")\n##################################################################\n\n features = [videointelligence.enums.Feature.SHOT_CHANGE_DETECTION]\n# features = [videointelligence.enums.Feature.LABEL_DETECTION]\n\n with io.open(path, 'rb') as movie:\n input_content = movie.read()\n \n# operation = video_client.annotate_video(path, features=features)\n operation = video_client.annotate_video(features=features, input_content=input_content)\n print('\\nProcessing video for shot change annotations:')\n\n result = operation.result(timeout=180)\n print('\\nFinished processing.')\n\n for i, shot in enumerate(result.annotation_results[0].shot_annotations):\n start_time = (shot.start_time_offset.seconds +\n shot.start_time_offset.nanos / 1e9)\n end_time = (shot.end_time_offset.seconds +\n shot.end_time_offset.nanos / 1e9)\n print('\\tShot {}: {} to {}'.format(i, start_time, end_time))\n \n return result\n\n\n# video_file = 'reference/video_IPA.mp4'\n\ndidi_result = didi_video_shot_detection(video_file)\n\ndidi_result", "* 识别受限内容 (Explicit Content Detection: Detect adult content within a 
video)\ndidi_video_safesearch_detection()", "from google.cloud import videointelligence\n\ndef didi_video_safesearch_detection(path):\n \"\"\" Detects explicit content given a local file path. \"\"\"\n\n##################################################################\n# (1) Instantiates a client - using GOOGLE_APPLICATION_CREDENTIALS\n# video_client = videointelligence.VideoIntelligenceServiceClient()\n\n# \n# (2) Instantiates a client - using 'service account json' file\n video_client = videointelligence.VideoIntelligenceServiceClient.from_service_account_json(\n \"/media/sf_vm_shared_folder/000-cloud-api-key/mtech-ai-7b7e049cf5f6.json\")\n##################################################################\n\n features = [videointelligence.enums.Feature.EXPLICIT_CONTENT_DETECTION]\n\n with io.open(path, 'rb') as movie:\n input_content = movie.read()\n \n# operation = video_client.annotate_video(path, features=features)\n operation = video_client.annotate_video(features=features, input_content=input_content)\n print('\\nProcessing video for explicit content annotations:')\n\n result = operation.result(timeout=90)\n print('\\nFinished processing.')\n\n likely_string = (\"Unknown\", \"Very unlikely\", \"Unlikely\", \"Possible\",\n \"Likely\", \"Very likely\")\n\n # first result is retrieved because a single video was processed\n for frame in result.annotation_results[0].explicit_annotation.frames:\n frame_time = frame.time_offset.seconds + frame.time_offset.nanos / 1e9\n print('Time: {}s'.format(frame_time))\n print('\\tpornography: {}'.format(\n likely_string[frame.pornography_likelihood]))\n \n return result\n\n\n# video_file = 'reference/video_IPA.mp4'\n\ndidi_result = didi_video_safesearch_detection(video_file)", "<span style=\"color:red\">[ Beta Features ]</span> * 生成视频字幕 (Video Transcription BETA: Transcribes video content in English)\nhttps://cloud.google.com/video-intelligence/docs/beta\nCloud Video Intelligence API includes the following beta features in 
version v1p1beta1:\nSpeech Transcription - the Video Intelligence API can transcribe speech to text from the audio in supported video files. Learn more.", "# Beta Features: videointelligence_v1p1beta1\nfrom google.cloud import videointelligence_v1p1beta1 as videointelligence\n\ndef didi_video_speech_transcription(path):\n \"\"\"Transcribe speech given a local file path.\"\"\"\n\n##################################################################\n# (1) Instantiates a client - using GOOGLE_APPLICATION_CREDENTIALS\n# video_client = videointelligence.VideoIntelligenceServiceClient()\n\n# \n# (2) Instantiates a client - using 'service account json' file\n video_client = videointelligence.VideoIntelligenceServiceClient.from_service_account_json(\n \"/media/sf_vm_shared_folder/000-cloud-api-key/mtech-ai-7b7e049cf5f6.json\")\n##################################################################\n\n features = [videointelligence.enums.Feature.SPEECH_TRANSCRIPTION]\n\n with io.open(path, 'rb') as movie:\n input_content = movie.read()\n \n config = videointelligence.types.SpeechTranscriptionConfig(\n language_code='en-US',\n enable_automatic_punctuation=True)\n video_context = videointelligence.types.VideoContext(\n speech_transcription_config=config)\n\n# operation = video_client.annotate_video(\n# input_uri, \n# features=features,\n# video_context=video_context)\n operation = video_client.annotate_video(\n features=features,\n input_content=input_content, \n video_context=video_context)\n\n print('\\nProcessing video for speech transcription.')\n\n result = operation.result(timeout=180) \n \n # There is only one annotation_result since only\n # one video is processed.\n annotation_results = result.annotation_results[0]\n speech_transcription = annotation_results.speech_transcriptions[0]\n \n if str(speech_transcription) == '': # result.annotation_results[0].speech_transcriptions[0] == ''\n print('\\nNOT FOUND: video for speech transcription.')\n else:\n alternative = 
speech_transcription.alternatives[0]\n print('Transcript: {}'.format(alternative.transcript))\n print('Confidence: {}\\n'.format(alternative.confidence))\n\n print('Word level information:')\n for word_info in alternative.words:\n word = word_info.word\n start_time = word_info.start_time\n end_time = word_info.end_time\n print('\\t{}s - {}s: {}'.format(\n start_time.seconds + start_time.nanos * 1e-9,\n end_time.seconds + end_time.nanos * 1e-9,\n word))\n\n return result\n \n\n# video_file = 'reference/video_IPA.mp4'\n\ndidi_result = didi_video_speech_transcription(video_file)\n\ndidi_result", "<span style=\"color:blue\">Wrap cloud APIs into Functions() for conversational virtual assistant (VA):</span>\nReuse above defined Functions().", "def didi_video_processing(video_file):\n didi_video_reply = u'[ Video 视频处理结果 ]\\n\\n'\n \n didi_video_reply += u'[ didi_video_label_detection 识别视频消息中的物体名字 ]\\n\\n' \\\n + str(didi_video_label_detection(video_file)) + u'\\n\\n'\n \n didi_video_reply += u'[ didi_video_shot_detection 识别视频的场景片段 ]\\n\\n' \\\n + str(didi_video_shot_detection(video_file)) + u'\\n\\n'\n \n didi_video_reply += u'[ didi_video_safesearch_detection 识别受限内容 ]\\n\\n' \\\n + str(didi_video_safesearch_detection(video_file)) + u'\\n\\n'\n \n didi_video_reply += u'[ didi_video_speech_transcription 生成视频字幕 ]\\n\\n' \\\n + str(didi_video_speech_transcription(video_file)) + u'\\n\\n'\n \n return didi_video_reply\n\n# [Optional] Agile testing:\n# parm_video_response = didi_video_processing(video_file)\n# print(parm_video_response)", "Define a global variable for future 'video search' function enhancement", "parm_video_response = {} # Define a global variable for future 'video search' function enhancement", "<span style=\"color:blue\">Start interactive conversational virtual assistant (VA):</span>\nImport ItChat, etc. 
导入需要用到的一些功能程序库:", "import itchat\nfrom itchat.content import *", "Log in using QR code image / 用微信App扫QR码图片来自动登录", "# itchat.auto_login(hotReload=True) # hotReload=True: 退出程序后暂存登陆状态。即使程序关闭,一定时间内重新开启也可以不用重新扫码。\nitchat.auto_login(enableCmdQR=-2) # enableCmdQR=-2: 命令行显示QR图片\n\n# @itchat.msg_register([VIDEO], isGroupChat=True)\n@itchat.msg_register([VIDEO])\ndef download_files(msg):\n    msg.download(msg.fileName)\n    print('\\nDownloaded video file name is: %s' % msg['FileName'])\n    \n    ##############################################################################################################\n    # call video analysis APIs                                                                                   #\n    ##############################################################################################################\n    global parm_video_response # save into global variable, which can be accessed by next WeChat keyword search\n    \n    # python 2 version WeChat Bot\n    # parm_video_response = KudosData_VIDEO_DETECTION(encode_media(msg['FileName']))\n    \n    # python 3 version WeChat Bot\n    parm_video_response = didi_video_processing(msg['FileName'])\n\n    \n    ##############################################################################################################\n    # format video API results                                                                                   #\n    ##############################################################################################################\n    \n    # python 2 version WeChat Bot\n    # video_analysis_reply = KudosData_video_generate_reply(parm_video_response)\n\n    # python 3 version WeChat Bot\n    video_analysis_reply = parm_video_response # Exercise / Workshop Enhancement: To parse and format result nicely.\n    \n    \n    print ('')\n    print(video_analysis_reply)\n    return video_analysis_reply\n\nitchat.run()", "", "# interrupt kernel, then logout\nitchat.logout() # 安全退出", "<span style=\"color:blue\">Exercise / Workshop Enhancement:</span>\n<font color='blue'>\n<font color='blue'>\n[提问 1] 使用文字来搜索视频内容?需要怎么处理? \n[Question 1] Can we use text (keywords) as input to search video content? 
How?\n</font>\n<font color='blue'>\n<font color='blue'>\n[提问 2] 使用图片来搜索视频内容?需要怎么处理? \n[Question 2] Can we use an image as input to search video content? How?\n</font>", "'''\n\n# Private conversational mode / 单聊模式,基于关键词进行视频搜索:\n@itchat.msg_register([TEXT])\ndef text_reply(msg):\n#     if msg['isAt']:\n        list_keywords = [x.strip() for x in msg['Text'].split(',')]\n        # call video search function:\n        search_responses = KudosData_search(list_keywords) # return is a list\n        # Format search results:\n        search_reply = u'[ Video Search 视频搜索结果 ]' + '\\n'\n        if len(search_responses) == 0:\n            search_reply += u'[ Nill 无结果 ]'\n        else:\n            for i in range(len(search_responses)): search_reply += '\\n' + str(search_responses[i])\n        print ('')\n        print (search_reply)\n        return search_reply\n        \n        '''\n\n'''\n\n# Group conversational mode / 群聊模式,基于关键词进行视频搜索:\n@itchat.msg_register([TEXT], isGroupChat=True)\ndef text_reply(msg):\n    if msg['isAt']:\n        list_keywords = [x.strip() for x in msg['Text'].split(',')]\n        # call video search function:\n        search_responses = KudosData_search(list_keywords) # return is a list\n        # Format search results:\n        search_reply = u'[ Video Search 视频搜索结果 ]' + '\\n'\n        if len(search_responses) == 0:\n            search_reply += u'[ Nill 无结果 ]'\n        else:\n            for i in range(len(search_responses)): search_reply += '\\n' + str(search_responses[i])\n        print ('')\n        print (search_reply)\n        return search_reply\n        \n        '''", "恭喜您!已经完成了:\n第五课:视频识别和处理\nLesson 5: Video Recognition & Processing\n\n识别视频消息中的物体名字 (Label Detection: Detect entities within the video, such as \"dog\", \"flower\" or \"car\")\n识别视频的场景片段 (Shot Change Detection: Detect scene changes within the video)\n识别受限内容 (Explicit Content Detection: Detect adult content within a video)\n生成视频字幕 (Video Transcription BETA: Transcribes video content in English)\n\n下一课是:\n第六课:交互式虚拟助手的智能应用\nLesson 6: Interactive Conversational Virtual Assistant Applications / Intelligent Process Automations\n\n虚拟员工: 贷款填表申请审批一条龙自动化流程 (Virtual Worker: When Chat-bot meets RPA-bot 
for mortgage loan application automation) \n虚拟员工: 文字指令交互(Conversational automation using text/message command) \n虚拟员工: 语音指令交互(Conversational automation using speech/voice command) \n虚拟员工: 多种语言交互(Conversational automation with multiple languages)\n\n<img src='reference/WeChat_SamGu_QR.png' width=80% style=\"float: left;\">" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
BioGraphs-LD/BioPax-patterns
PathwayCommons-sample-query.ipynb
mit
[ "SPARQL engine configuration", "PC_Endpoint = \\\n \"http://rdf.pathwaycommons.org/sparql\"", "We will use the sparqlwrapper library.\nExecuting the sparql queries is straightforward (cf. the first four lines of runQuery(...) below), but we introduce some auxiliary functions for nicely displaying the results.", "from SPARQLWrapper import SPARQLWrapper, JSON\nfrom IPython.display import display, Markdown \n # for telling jupyter to display the result as markdown\n\ndef runQuery(queryString, outputFormat=\"tsv\", varList=[], truncateAt=30):\n \"\"\" Send the query to the endpoint and attempt \n to nicely display the result.\n \n Possible values for outputFormat: \"tsv\", \"markdown\"\n \"\"\"\n sparql = SPARQLWrapper(PC_Endpoint)\n sparql.setQuery(queryString)\n sparql.setReturnFormat(JSON)\n results = sparql.query().convert()\n if outputFormat == \"tsv\":\n displayQueryResultAsTSV(results, varList)\n elif outputFormat == \"markdown\":\n displayQueryResultAsMarkdown(results, varList, truncateAt)\n\ndef displayQueryResultAsTSV(queryResult, varList=[], truncateAt=30):\n if len(queryResult[\"results\"][\"bindings\"]) == 0:\n print(\"Empty result\")\n return\n if varList == []:\n varList = [varName for varName in queryResult[\"results\"][\"bindings\"][0].keys()]\n displayResult = \"\"\n for currentVar in varList:\n displayResult += currentVar + \"\\t\"\n displayResult = displayResult[:-1] + \"\\n\"\n for result in queryResult[\"results\"][\"bindings\"]:\n for currentVar in varList:\n if currentVar in result.keys():\n displayResult += truncateString(result[currentVar]['value'], truncateAt) + \"\\t\"\n else:\n displayResult += \"\\t\"\n displayResult = displayResult[:-1] + \"\\n\"\n print(displayResult)\n\ndef displayQueryResultAsMarkdown(queryResult, varList=[], truncateAt=30):\n if len(queryResult[\"results\"][\"bindings\"]) == 0:\n print(\"Empty result\")\n return\n if varList == []:\n varList = [varName for varName in 
queryResult[\"results\"][\"bindings\"][0].keys()]\n displayResult = \"\"\n sepLine = \"\"\n for currentVar in varList:\n displayResult += \" | \" + currentVar\n sepLine += \"| ---\"\n displayResult += \"\\n\" + sepLine + \"\\n\"\n for result in queryResult[\"results\"][\"bindings\"]:\n for currentVar in varList:\n if currentVar in result.keys():\n displayResult += \"| \" + truncateString(result[currentVar]['value'], truncateAt) + \" \"\n else:\n displayResult += \"| \"\n displayResult += \" \\n\"\n display(Markdown(displayResult))\n\ndef truncateString(message, length=30):\n if (length == -1) or (len(message) <= length):\n return message\n return message[:15] + \"...\" + message[-12:]", "For the sake of clarity, we will define the usual prefixes in the commonPrefixes variable.", "commonPrefixes = \"\"\"\nPREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>\nPREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\nPREFIX owl: <http://www.w3.org/2002/07/owl#>\nPREFIX xsd: <http://www.w3.org/2001/XMLSchema#>\nPREFIX dc: <http://purl.org/dc/elements/1.1/>\nPREFIX dcterms: <http://purl.org/dc/terms/>\nPREFIX foaf: <http://xmlns.com/foaf/0.1/>\nPREFIX skos: <http://www.w3.org/2004/02/skos/core#>\nPREFIX bp3: <http://www.biopax.org/release/biopax-level3.owl#>\nPREFIX taxon: <http://identifiers.org/taxonomy/>\nPREFIX reactome: <http://identifiers.org/reactome/>\nPREFIX release: <http://www.reactome.org/biopax/49/48887#>\n\nPREFIX up: <http://purl.uniprot.org/core/> \nPREFIX uniprot: <http://purl.uniprot.org/uniprot/>\nPREFIX bp: <http://www.biopax.org/release/biopax-level3.owl#>\nPREFIX chebi: <http://purl.obolibrary.org/obo/CHEBI_>\nPREFIX obo2: <http://purl.obolibrary.org/obo#>\n\n\"\"\"", "Live PathwayCommons queries", "queryTFControllers = \"\"\"\nSELECT ?tempReac ?type ?controlledName ?controllerName ?source WHERE{ \n FILTER( (?controlledName = 'Transcription of SCN5A'^^<http://www.w3.org/2001/XMLSchema#string>)\n and (?controllerName != 'SCN5A')\n and (?source != 
'mirtarbase') ) .\n ?tempReac a bp:TemplateReactionRegulation .\n ?tempReac bp:displayName ?reacName ; \n bp:controlled ?controlled ; \n bp:controller ?controller ; \n bp:controlType ?type ; \n bp:dataSource ?source .\n ?controlled bp:displayName ?controlledName .\n ?controller bp:displayName ?controllerName . }\nGROUP BY ?controlledName ?controllerName\n\"\"\"\nrunQuery(commonPrefixes + queryTFControllers, \"markdown\", [])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dcavar/python-tutorial-for-ipython
notebooks/Python SVM Classifier Example.ipynb
apache-2.0
[ "Python SVM Classifier Example\n(C) 2017-2019 by Damir Cavar\nDownload: This and various other Jupyter notebooks are available from my GitHub repo.\nVersion: 1.1, September 2019\nLicense: Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0)\nThis is a tutorial related to the discussion of an SVM classifier in the textbook Machine Learning: The Art and Science of Algorithms that Make Sense of Data by Peter Flach.\nThis tutorial was developed as part of my course material for the course Machine Learning for Computational Linguistics in the Computational Linguistics Program of the Department of Linguistics at Indiana University.\nSVM Example using\nThe basic idea and storyline for this example was taken from or inspired by the tutorial Simple Support Vector Machine (SVM) example with character recognition.\nThis tutorial requires Scikit-learn and Matplotlib. These modules come with the default Anaconda Python installation.\nTo start the tutorial and run the example, we will import pyplot from matplotlib, and datasets and svm from scikit-learn.", "import matplotlib.pyplot as plt\nfrom sklearn import datasets\nfrom sklearn import svm", "We load the digits data set into memory, referring to it with the variable digits.", "digits = datasets.load_digits()", "We can output the data set in digits:", "print(digits.data)", "The data contains the actual features. You will find a brief description of the digits dataset on the Scikit-learn website. It contains datapoints with 8x8 images of the digits 0 to 9.\nWe can print out the image of the digits. In this case we are printing the digit 0:", "plt.gray()\nplt.matshow(digits.images[0])\nplt.show()", "The target vector contains the actual labels of the datapoints.", "print(digits.target)", "We will use a default classifier from the Scikit-learn module, the C-Support Vector Classifier. The penalty parameter C is set to 1.0 by default. In this example C is set to 100. 
The kernel coefficient gamma is optional and in this example it is set to 0.001. The meaning and effect of C and gamma are explained on the Scikit-learn pages.", "classifier = svm.SVC(gamma=0.001, C=100)", "We can train the classifier now on all datapoints but the last 10. We leave the last 10 datapoints out for testing. The X variable contains the coordinates or features, and the y variable the targets or labels.", "X,y = digits.data[:-10], digits.target[:-10]", "We train the classifier on this data:", "classifier.fit(X,y)", "We can now test the classifier on one of the test datapoints that we left out from the training corpus. Note that newer Scikit-learn modules deprecate the passing of one-dimensional arrays as data, which digits.data[-5] is. Since digits.data[-5] contains a single sample, we need to reshape it using .reshape(1,-1).", "print(classifier.predict(digits.data[-5].reshape(1,-1)))", "The reshape method converts the shape of arrays. For example, imagine we have an array of 10 digits arranged as a 1-dimensional columnar array as in the following example:", "import numpy\n\nnumpy.array([[0],[1],[2],[3],[4],[5],[6],[7],[8],[9]])", "This array can be converted to a 1-dimensional row array using the reshape function:", "t = numpy.array([[0],[1],[2],[3],[4],[5],[6],[7],[8],[9]])\n\nprint(t.reshape(1,-1))", "Alternatively, a 1-dimensional row array can be reshaped to a 1-dimensional columnar array in the following way:", "t = t.reshape(1,-1)\nprint(\"t as a row-array:\", t)\nprint(\"t as a columnar array:\", t.reshape((-1,1)))", "In the above example the digits data contains a single sample:", "print(digits.data[-5])", "The classifier.predict() function in the Scikit-learn module requires the vector of this one sample to be reshaped to an array that contains the entire sample as an element, that is, an array with an array that contains the sample data:", "print(digits.data[-5].reshape(1,-1))", "Returning to our classifier result, let us look at the 5th datapoint in our 
test data, that is the fifth element from the back (-5) of digits, given that we left out the last ten datapoints for testing. We see that the classifier guessed that the 5th sample represents a 9:", "print(classifier.predict(digits.data[-5].reshape(1,-1)))", "We can print the image and see whether the classifier was right:", "plt.imshow(digits.images[-5], cmap=plt.cm.gray_r, interpolation='nearest')\nplt.show()", "(C) 2017-2019 by Damir Cavar - Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0); portions taken from the referenced sources." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
MSeeker/Notebook-Collections
Monte Carlo Simulation of the Ising Model.ipynb
mit
[ "import numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom collections import deque\n%matplotlib inline\nmatplotlib.rc('figure', figsize=(8, 6))\nnp.random.seed(2015)", "1. Review of the Ising Model\n\n\n$N$ particles/molecules/magnetic dipoles with $\\sigma = \\pm 1$\n\n\n\"Nearest neighbor\" interaction Hamiltonian\n\n\n$$ E=-J\\sum_{ [i,j] }\\sigma_i\\sigma_j-H\\sum_i\\sigma_i \\quad (1)$$\nwhere $[i, j]$ denotes summation over nearest-neighbor pairs -- topology dependent!\n1D Ising Model\nNo phase transition above 0K (Ising)\n2D Ising Model (aka square-lattice Ising model) at $H=0$\nAnalytic solution by Onsager:\n\n\nTransition temperature: $$ T_C = \\frac{1}{k_B}\\frac{2J}{\\ln(1+\\sqrt{2})} \\approx 2.269\\frac{J}{k_B} \\quad (2)$$\n\n\nSpontaneous magnetization: $$ |\\bar{\\sigma}| = \\begin{cases}\n (1 - \\sinh^{-4}(\\frac{2J}{k_B T}))^{1/8} & (T < T_C) \\\\\n 0 & (T \\geq T_C)\n \\end{cases} \\quad (3)\n $$\n\n\nDue to its simplicity and interesting analytic behavior, the 2D Ising Model is frequently used as a \"touchstone\" for Molecular Dynamics/Monte Carlo algorithms\n\n\nWe will only consider the case of $H=0$ in this notebook\n\n\n2. Monte Carlo Simulation\n2.1 Importance sampling\n\n\nThe total number of microstates, $\\Omega$, approaches astronomical magnitudes even for a small system with $N \\sim 10^4$\n\n\nHowever, when $N$ is sufficiently large most of the microstates have extremely low probability compared to those in the vicinity of the most probable states.\n\n\nMetropolis (pioneer of the MC methods):\n\n...instead of choosing configurations randomly, then weighting them with $\\exp(-E/k_BT)$, we choose configurations with a probability $\\exp(-E/k_BT)$ and weight them evenly.[1]\n\n\n\nThis is called importance sampling.\n\n\nHow samples are gathered:\n\n\nMolecular dynamics simulation (MD): start from first principles and simulate the time evolution of the system. 
For thermodynamic systems at equilibrium, points along any phase trajectory should be a good representation of the ensemble (ergodicity)\n\n\nMonte Carlo simulation (MC): Carry out a hypothetical \"random walk\" on the phase space, in effect creating a Markov chain. Since the term Monte Carlo is used ubiquitously to represent anything that involves some degree of uncertainty, these methods are sometimes differentiated from others by the name Markov chain Monte Carlo (MCMC)\n\n\nThe most widely used MC sampling algorithm is the Metropolis-Hastings algorithm.\n2.2 The Metropolis-Hastings Algorithm\nMarkov Chain 101\n\n\nA Markov chain consists of discrete states (labeled from 1 through $\\Omega$). The state of the system at time step $n$ is denoted $x_{n}$\n\n\nThe chain evolves according to the rule \n\n\n$$ Pr[x_{n+1}=j|x_n=i] = t_{ij} \\quad (i,j=1,2,\\ldots,\\Omega) \\quad (4)$$ \nThe $t_{ij}$ are called transition probabilities and $T=(t_{ij})$ is the transition matrix.\n\nStationary distribution ${\\pi_i}$:\n\n$$ \\pi_j = \\sum_{i=1}^{\\Omega}t_{ij}\\pi_i \\quad (j = 1, 2, \\ldots, \\Omega) \\quad (5)$$\nIt can be proven that a Markov chain converges to a unique stationary distribution if the ergodicity condition is satisfied (this notion of \"ergodicity\" is different from that used in statistical mechanics). Ergodicity is assumed throughout our discussions (for a detailed discussion see [2]).\n\nDetailed balance: For all $i$ and $j$, $t_{ij}\\pi_i = t_{ji}\\pi_j$. Intuitively, this means there's no net \"probability flow\" between any two states. It's easy to see that a system under detailed balance must also be in a stationary distribution.\n\nThe M-H algorithm generates its samples by constructing a chain in detailed balance\nTo sample the probability distribution $\\pi(n) \\, (n = 1,2,\\ldots,\\Omega)$, the algorithm goes as follows. 
Suppose the system is currently in state $a$:\n\n\nPropose a state $b$ different from $a$ using a predefined a priori distribution $A(a \\rightarrow b)$ (More on this later).\n\n\nCarry out a Bernoulli trial with probability \n\n\n$$ p(a \\rightarrow b) := \\min[1,\\frac{\\pi(b)}{\\pi(a)}\\frac{A(b \\rightarrow a)}{A(a \\rightarrow b)}] \\quad (6)$$\nAccept state $b$ if the trial succeeds (this means updating the system to state $b$ as well as recording $b$ to the list of samples). Else, reject it and stay in $a$ (also recording $a$ in the list of samples).\nCheck for yourself that the system defined above is in fact a Markov chain with transition probability $t(a \\rightarrow b) = A(a \\rightarrow b)p(a \\rightarrow b)$. Its stationary distribution is none other than $\\pi(n)$. In addition, detailed balance is satisfied upon reaching equilibrium.\nNote that in (6) only the quotient between $\\pi(b)$ and $\\pi(a)$ is required for the algorithm to work. This means we can specify $\\pi(n)$ up to a multiplicative constant. In the case of statistical mechanics, we can ditch the partition function $Z$ altogether and only compute the Boltzmann factor $\\exp(-E(a)/k_BT)$ of the various states.\n2.3 Example: a Metropolis-Hastings dice\n\nConsider a loaded dice. 
Upon throwing, the results have the following distribution: $\\pi(1)=1/2$,$\\pi(2)=1/4$,$\\pi(3)=1/8$,$\\pi(4)=1/16$,$\\pi(5)=\\pi(6)=1/32$\nChoose a uniform a priori distribution: $A(i \\rightarrow j) = 1/6$", "def dice_samples(trials):\n prob = {1: 1/2, 2: 1/4, 3: 1/8, 4: 1/16, 5: 1/32, 6: 1/32}\n samples = np.zeros(trials + 1, dtype=int)\n samples[0] = 1\n for i in range(trials):\n a = samples[i]\n b = np.random.random_integers(1, 6) # uniform a priori distribution\n pa = prob[a]\n pb = prob[b]\n if pb >= pa or np.random.rand() < pb / pa:\n samples[i + 1] = b\n else:\n samples[i + 1] = a\n return samples\n\ndef summarize(samples):\n '''\n Return the percentage of every face in the samples\n '''\n num_samples = len(samples)\n distribution = {i: (samples == i).sum() * 100 / num_samples for i in [1, 2, 3, 4, 5 ,6]} # percentages\n return distribution", "We vary the number of MC steps to get a view of the time evolution of the M-H chain:", "samples = dice_samples(1000000)\nns = np.array(np.logspace(1, 6, num=50), dtype=int)\n\ndistributions = {i: np.zeros(50) for i in [1, 2, 3, 4 ,5 ,6]}\nfor index in range(50):\n n = ns[index]\n distribution = summarize(samples[:n])\n for i in [1, 2, 3, 4, 5, 6]:\n distributions[i][index] = distribution[i]\nfor i in [1, 2, 3, 4, 5, 6]:\n plt.plot(ns, distributions[i], label='Face {}'.format(i))\nplt.xlabel('MC iterations')\nplt.ylabel('Percentage')\nplt.ylim(0, 100)\nplt.semilogx()\nplt.legend()\nplt.grid()\nplt.title(\"The Metropolis-Hastings dice\")", "So the constructed chains do converge to our desired values. But notice the length of the transient: it took as many as 10000 steps for us to obtain a reasonably accurate value! As we will see below, this \"high accuracy, low precision\" characteristic of MC methods (a slow convergence to an exact value) is ubiquitous. It remains a problem to this day to find an MC method that has fast convergence.\n3. 
MC Simulation of the Ising Model\n3.1 The \"single-flip\" MC\nA uniform representation for both the 1D and 2D Ising Model\nThe modern language of nearest-neighbor interaction topology is graph theory. We use the most popular graph storage format --- the adjacency list format --- to record the structure of the 1D (chain) and 2D (square lattice) Ising Models:", "# First, we need a helper function to transform between the (i, j) coordinate of a 2D lattice and a serial one\n# a: length of the square lattice's side\ndef flatten_2d(i, j, a):\n return i * a + j # serial No. = row No. * length + column No.\ndef unflatten_2d(n, a):\n j = n % a\n i = (n - j) // a\n return i, j\n\n# Generate the adjacency list\ndef gen_neighbors_1d(N):\n neighbors = np.zeros((N, 2), dtype=int)\n for n in range(N):\n neighbors[n][0] = (n - 1) % N # left\n neighbors[n][1] = (n + 1) % N # right\n return neighbors\n\ndef gen_neighbors_2d(a):\n neighbors = np.zeros((a*a, 4), dtype=int)\n for n in range(a*a):\n i, j = unflatten_2d(n, a)\n neighbors[n][0] = flatten_2d(i, (j - 1) % a, a) # left\n neighbors[n][1] = flatten_2d(i, (j + 1) % a, a) # right\n neighbors[n][2] = flatten_2d((i - 1) % a, j, a) # up\n neighbors[n][3] = flatten_2d((i + 1) % a, j, a) # down\n return neighbors", "Let $J=1$,$k_B=1$ in the following discussion:\n\n\nState of the Markov chain = microstate of the Ising Model $\\vec{\\sigma} := (\\sigma_1\\ldots\\sigma_N)^T$\n\n\nA priori probability: choose uniformly from any state that relates to the current state by a single spin flip\n\n\nEnergy difference resulting from a single flip: $\\delta E= 2\\sigma_i\\sum_{[i,j]}\\sigma_j $\n\n\nAcceptance probability $ P=\\min(1, \\exp(-\\delta E/T)) $\n\n\nThe \"single flip\" mechanism makes the previously defined adjacency list particularly useful.", "def MH_single_flip(neighbors_list, T, iterations):\n '''\n This function performs single flip MC iterations for an Ising system with arbitrary topology, \n given by the adjacency list 
`neighbors_list`.\n The initial state is chosen randomly.\n \n Returns\n =======\n `magnetization`: magnetization (average molecular spin) at each MC step\n `energy`: total energy of the system at each MC step\n '''\n # Initialization\n size = neighbors_list.shape[0]\n spins = np.random.random_integers(0, 1, size)\n spins[spins == 0] = -1\n # Allocation\n magnetization = np.zeros(iterations + 1)\n energy = np.zeros(iterations + 1)\n magnetization[0] = spins.sum()\n energy[0] = -spins.dot(spins[neighbors_list].sum(axis=1)) / 2\n \n for step in range(iterations):\n n = np.random.randint(0, size) # Choose next state according to the a priori distribution\n delta_E = 2 * spins[n] * spins[neighbors_list[n]].sum()\n if delta_E < 0 or np.random.rand() < np.exp(-delta_E / T):\n # Acceptance\n spins[n] = -spins[n]\n magnetization[step + 1] = magnetization[step] + 2 * spins[n]\n energy[step + 1] = energy[step] + delta_E\n else:\n # Rejection\n magnetization[step + 1] = magnetization[step]\n energy[step + 1] = energy[step]\n return magnetization / size, energy", "Let's see the result for 1D and 2D Ising Models:", "def plot_magnetization(dimension):\n if dimension == 1:\n neighbors_list = gen_neighbors_1d(400)\n elif dimension == 2:\n neighbors_list = gen_neighbors_2d(20)\n T_list = [0.5, 1.0, 1.5, 1.8, 2.0, 2.2, 2.4, 3.0, 3.5]\n fig = plt.figure(figsize=(12, 40))\n for i in range(9):\n T = T_list[i]\n magnetization, _ = MH_single_flip(neighbors_list, T, 100000)\n # Random walk history\n fig.add_subplot(9, 2, 2 * i + 1)\n plt.plot(magnetization)\n plt.ylim(-1, 1)\n plt.ylabel('Magnetization')\n plt.xlabel('Iterations')\n plt.annotate('T = {}'.format(T), (10000,0.8))\n plt.grid()\n # Sample distribution histogram\n fig.add_subplot(9, 2, 2 * i + 2)\n plt.hist(magnetization, bins=np.linspace(-1, 1, num=20), orientation='horizontal')\n plt.ylim(-1, 1)\n plt.xlabel('Counts')\n plt.grid()\n plt.suptitle(\"Monte Carlo simulation history & distribution to the {:d}D Ising 
Model\".format(dimension))\n\nplot_magnetization(1)\n\nplot_magnetization(2)", "Remarks:\n\n\nNo spontaneous magnetization is seen in the 1D case, in agreement with the result from Ising.\n\n\nThe 2D case displays spontaneous behavior roughly for $T$ smaller than the theoretical critical temperature $T_C$. However, instead of a two-peak distribution as prescribed by the canonical distribution, only one peak is observed.\n\n\nConvergence of the Markov chain is very slow for $T \\approx T_C$. We can prove theoretically that the number of steps required actually diverges as $T \\rightarrow T_C$. This is called the critical slowing down behavior.\n\n\nInteresting results! But why?\n3.2 Theoretical interlude: the clustering of spins near $T_C$\nRecall the definition of the spin correlation function:\n$$ g(x, y) := E[\\sigma(m + x, n + y)\\sigma(m, n)] - E[\\sigma(m + x, n + y)]E[\\sigma(m, n)] \\quad (7)$$\nwhere the expectation is taken for all grid points $(m, n)$. The above formula can be simplified if we assume periodic boundary conditions for all sides:\n$$ g(x, y) = E[\\sigma(m + x, n + y)\\sigma(m, n)] - M^2 \\quad (8) $$\nwhere $M := E[\\sigma(m, n)]$. 
The spin correlation function can tell us more about what happens when $T \\approx T_C$.", "def spin_correlation(spins, a):\n M = spins.mean()\n spins_2d = spins.reshape((a, a))\n rs = np.arange(-a/2, a/2 + 1, dtype=int)\n num_rs = len(rs)\n correlations = np.zeros((num_rs, num_rs))\n for i, y in enumerate(rs):\n for j, x in enumerate(rs):\n correlations[i, j] = (spins_2d * np.roll(np.roll(spins_2d, x, axis=1), y, axis=0)).mean() - M**2 # subtract M^2, per eq. (8)\n return correlations", "We modify the single-flip MC code a bit to have it output snapshots of the system at the given time stamps:", "def gen_snapshots(time_stamps, a, T):\n iterations = time_stamps[-1]\n snapshots = {}\n \n neighbors_list = gen_neighbors_2d(a)\n size = neighbors_list.shape[0]\n spins = np.random.random_integers(0, 1, size)\n spins[spins == 0] = -1\n for step in range(iterations):\n n = np.random.randint(0, size)\n delta_E = 2 * spins[n] * spins[neighbors_list[n]].sum()\n if delta_E < 0 or np.random.rand() < np.exp(-delta_E / T):\n spins[n] = -spins[n]\n if step + 1 in time_stamps:\n snapshots[step + 1] = {\n 'spins': spins.copy().reshape((a, a)), \n 'magnetization': spins.mean(),\n 'correlation': spin_correlation(spins, a)\n }\n return snapshots", "This is what happens when $T = 2.2$:", "a = 40\nT = 2.2\ntime_stamps = np.array([1, 10, 100, 1000, 10000, 20000, 40000, 60000, 80000, 100000])\nsnapshots = gen_snapshots(time_stamps, a, T)\nfig, axes = plt.subplots(10, 2, figsize=(12, 6 * 10))\nfor i, t in enumerate(time_stamps):\n axes[i][0].matshow(snapshots[t]['spins'], interpolation='none')\n axes[i][0].set_ylabel('MC step {}, M = {:.3f}'.format(t, snapshots[t]['magnetization']))\n axes[i][0].set_xlabel('Spins')\n axes[i][1].matshow(snapshots[t]['correlation'])\n axes[i][1].set_xlabel('Spin correlation function')\n axes[i][1].set_xticks([0, 10, 20, 30, 40])\n axes[i][1].set_xticklabels([-20, -10, 0, 10, 20])\n axes[i][1].set_yticks([0, 10, 20, 30, 40])\n axes[i][1].set_yticklabels([-20, -10, 0, 10, 20])", "So the 
Markov chain, instead of converging, actually goes into an ordered state with considerable correlation between the spins. The spin clusters formed in the process make it more difficult for future states to be accepted, as a randomly picked molecule is more likely to be inside a cluster than at the boundary between two clusters. Flipping this molecule would thus increase the system's energy, so it's more likely to be rejected. This is the explanation for the above-mentioned critical slowing down behavior.\nThe same argument can be applied to the spontaneous magnetization state for $T \\ll T_C$. In this case, once the system enters the ordered state the giant cluster makes further flipping very difficult to accept. Because we have arbitrarily constrained the system to move only a small step in configuration space per MC step, it would take forever for it to move from one steady state to another.\n3.3 Cluster MC\nTo address the problems mentioned above, the cluster Monte Carlo method is proposed to allow for large configuration space hopping between two MC steps.\nSpecifically:\n\n\nA \"seed\" is chosen randomly at the start of each MC step.\n\n\nA breadth-first search is performed in the lattice starting from the \"seed\". Whenever a molecule with the same spin as the seed is encountered, it is added to the breadth-first queue with probability p. The search is carried on until the queue is exhausted, generating a cluster of same-spin molecules from the breadth-first tree.\n\n\nA new state is proposed by flipping all molecules within the cluster. The acceptance/rejection procedure is identical to that given by Metropolis-Hastings.\n\n\nTo carry out cluster MC, we need to calculate the effective a priori distribution from the procedures described above. 
We state the result without proof (refer to Ref [2] for a complete discussion):\nIf we set $p$ to the \"magic number\" $1-e^{-2/T}$, then $p(a \\rightarrow b) \\equiv 1$, which means the next state will always be accepted. We can already see that under this choice MC iterations can be carried out much faster than in the single-flip scheme.", "def cluster_MC(neighbors_list, T, iterations):\n p = 1 - np.exp(-2 / T) # \"magic number\"\n # Initialization\n size = neighbors_list.shape[0]\n spins = np.random.random_integers(0, 1, size)\n spins[spins == 0] = -1\n # Allocation\n magnetization = np.zeros(iterations + 1)\n magnetization[0] = spins.sum()\n energy = np.zeros(iterations + 1)\n energy[0] = -spins.dot(spins[neighbors_list].sum(axis=1))\n \n for step in range(iterations):\n # Use a deque to implement breadth-first search\n n0 = np.random.randint(0, size)\n sign = spins[n0]\n cluster = set([n0])\n pockets = deque([n0])\n finished = False\n while not finished:\n try:\n n = pockets.popleft()\n neighbors = neighbors_list[n]\n for neighbor in neighbors:\n if spins[neighbor] == sign and neighbor not in cluster and np.random.rand() < p:\n cluster.add(neighbor)\n pockets.append(neighbor)\n except IndexError:\n finished = True\n # Flip the cluster\n cluster = np.fromiter(cluster, dtype=int)\n spins[cluster] = -sign\n magnetization[step + 1] = magnetization[step] - 2 * sign * len(cluster)\n energy[step + 1] = -spins.dot(spins[neighbors_list].sum(axis=1))\n return magnetization / size, energy / 2 # Every pair is counted twice", "Cluster MC vs. 
single-flip MC in the field:", "def plot_magnetization_cluster(dimension):\n if dimension == 1:\n neighbors_list = gen_neighbors_1d(400)\n elif dimension == 2:\n neighbors_list = gen_neighbors_2d(20)\n T_list = [0.5, 1.0, 1.5, 1.8, 2.0, 2.2, 2.4, 3.0, 3.5]\n fig = plt.figure(figsize=(12, 40))\n for i in range(9):\n T = T_list[i]\n magnetization, _ = cluster_MC(neighbors_list, T, 1000)\n # Random walk history\n fig.add_subplot(9, 2, 2 * i + 1)\n plt.plot(magnetization)\n plt.ylim(-1, 1)\n plt.ylabel('Magnetization')\n plt.xlabel('Iterations')\n plt.annotate('T = {}'.format(T), (50,0.8))\n # Sample distribution histogram\n fig.add_subplot(9, 2, 2 * i + 2)\n plt.hist(magnetization, bins=np.linspace(-1, 1, num=20), orientation='horizontal')\n plt.ylim(-1, 1)\n plt.xlabel('Counts')\n plt.suptitle(\"Monte Carlo simulation history & distribution to the {:d}D Ising Model\".format(dimension))\n\nplot_magnetization_cluster(2)", "3.4 Comparison to theoretical results\nWe compare results from cluster MC to the ones given by Onsager in (3):", "def averages(magnetization, energy, T):\n '''Compute the ensemble average of each quantity'''\n M = np.abs(magnetization).mean()\n E = energy.mean()\n CV = energy.var() / T\n return M, E, CV\n\n# The code below runs VERY SLOWLY\npoints = 31\ndimension = 64\niterations = 5000\n\nneighbors_list = gen_neighbors_2d(dimension)\nTs = np.linspace(1.0, 4.0, points)\nMs = np.zeros(points)\nfor i in range(points):\n T = Ts[i]\n magnetization, _ = cluster_MC(neighbors_list, T, iterations)\n Ms[i] = np.abs(magnetization).mean()\n# print(\"Iteration for T = {:.3f} complete\".format(T)) # Uncomment to print progress as the simulation goes\n\nOnsager_Tc = 2.269\nTs_plot = np.linspace(1.0, 4.0, num=200)\nOnsager_Ms = np.zeros(len(Ts_plot))\nfor i, T in enumerate(Ts_plot):\n if T <= 2.269:\n Onsager_Ms[i] = (1 - (np.sinh(2/T))**(-4))**(1/8)\nplt.plot(Ts, Ms, '^', label='simulation')\nplt.plot(Ts_plot, Onsager_Ms, '--', label='theoretical')\nplt.ylim(0, 
1)\nplt.legend()\nplt.xlabel('Temperature')\nplt.ylabel('Magnetization')\nplt.grid()\nplt.title(\"Theoretical vs. Numerical values for spontaneous magnetization\")", "4. Development of MC methods\n\n\nDifferent choices of a priori distributions;\n\n\nHybrid MC/MD simulations;\n\n\nBetter sample generation: correlation, burn-in and thinning;\n\n\nMCMC applied to Hierarchical Bayesian Models in the field of statistics. An excellent resource is Probabilistic Programming and Bayesian Methods for Hackers\n\n\nReferences\n[1] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, E. Teller, Equation of State Calculations by Fast Computing Machines, The Journal of Chemical Physics 21 (1953), p. 1087\n[2] W. Krauth, Statistical Mechanics: Algorithms and Computations, Oxford (2006)", "import sys\nprint(\"Python version = \", sys.version)\nprint(\"Numpy version = \", np.version.version)\nprint(\"Matplotlib version = \", matplotlib.__version__)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kubeflow/kfserving-lts
docs/samples/client/kfserving_sdk_v1alpha2_sample.ipynb
apache-2.0
[ "Sample for KFServing SDK v1alpha2\nThis is a sample for KFServing SDK v1alpha2. \nThe notebook shows how to use the KFServing SDK to create, get, rollout_canary, promote and delete an InferenceService.", "from kubernetes import client\n\nfrom kfserving import KFServingClient\nfrom kfserving import constants\nfrom kfserving import utils\nfrom kfserving import V1alpha2EndpointSpec\nfrom kfserving import V1alpha2PredictorSpec\nfrom kfserving import V1alpha2TensorflowSpec\nfrom kfserving import V1alpha2InferenceServiceSpec\nfrom kfserving import V1alpha2InferenceService\nfrom kubernetes.client import V1ResourceRequirements", "Define the namespace the InferenceService will be deployed to. If not specified, the function below sets the namespace to the one the SDK is currently running in within the cluster; otherwise it will deploy to the default namespace.", "namespace = utils.get_default_target_namespace()", "Define InferenceService\nFirst define the default endpoint spec, and then define the InferenceService based on the endpoint spec.", "api_version = constants.KFSERVING_GROUP + '/' + constants.KFSERVING_VERSION\ndefault_endpoint_spec = V1alpha2EndpointSpec(\n predictor=V1alpha2PredictorSpec(\n tensorflow=V1alpha2TensorflowSpec(\n storage_uri='gs://kfserving-samples/models/tensorflow/flowers',\n resources=V1ResourceRequirements(\n requests={'cpu':'100m','memory':'1Gi'},\n limits={'cpu':'100m', 'memory':'1Gi'}))))\n \nisvc = V1alpha2InferenceService(api_version=api_version,\n kind=constants.KFSERVING_KIND,\n metadata=client.V1ObjectMeta(\n name='flower-sample', namespace=namespace),\n spec=V1alpha2InferenceServiceSpec(default=default_endpoint_spec))", "Create InferenceService\nCall KFServingClient to create InferenceService.", "KFServing = KFServingClient()\nKFServing.create(isvc)", "Check the InferenceService", "KFServing.get('flower-sample', namespace=namespace, watch=True, timeout_seconds=120)", "Delete the InferenceService", "KFServing.delete('flower-sample', namespace=namespace)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
lwahedi/CurrentPresentation
talks/SignalAnalysis.ipynb
mit
[ "Import Packages, Load Data", "import pandas as pd\nimport statsmodels.api as sm\nimport numpy as np\nimport scipy as sp\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport datetime\nimport time\nmigration_df = pd.read_csv('migration_dums.csv')\nmigration_df.set_index(['date_stamp','Province'],inplace=True)", "Look at the data\nPatterns differ from state to state", "f,a=plt.subplots(3,1)\na[0].set_title(\"Country-wide Volume\")\na[1].set_title(\"Anbar Volume\")\na[2].set_title(\"Babylon Volume\")\nfoo = migration_df.loc[:,['vol','vol_arabic','Date']].groupby('Date').sum()\nfoo.loc[:,['vol','vol_arabic']].plot(ax=a[0],figsize=(10,7))\nfoo = migration_df.loc[(slice(None),'Anbar'),['vol','vol_arabic']]\nfoo.reset_index('Province').plot(ax=a[1])\nmigration_df.loc[(slice(None),'Babylon'),['vol','vol_arabic']].reset_index('Province').plot(ax=a[2])\nf.tight_layout()\n\n# f,a=plt.figure(figsize=(5,5))\nvol_plot=migration_df.loc[:,['vol']].unstack(level=\"Province\")\nvol_plot.columns = vol_plot.columns.droplevel(0)\nvol_plot.drop('Sulaymaniyah',axis=1,inplace=True)\nax =vol_plot.loc[:,['Anbar','Babylon','Thi-Qar','Baghdad']].plot(kind='hist',alpha=.5,bins=50)\n# vol_plot.plot.density()\nax.figsize=(10,5)\n", "What do we learn?\n\nVariation over provinces\n\nIf we ignore space:\n\nMay be that people tweet about and flee some provinces more than others, says nothing about <b>when</b> people flee\n\nIID violation\n\nAutocorrelation within space. Confidence estimates wrong. 
\n\n\nWhat about time?", "from statsmodels.tsa.stattools import acf, pacf\nfoo = migration_df.loc[:,['vol','vol_arabic','origin','destination','Date']].groupby('Date').sum()\n\nf,axs = plt.subplots(5,2)\naxs[0][0].set_title('acf for English')\naxs[0][1].set_title('pacf for English')\naxs[1][0].set_title('acf for Arabic')\naxs[1][1].set_title('pacf for Arabic')\naxs[2][0].set_title('acf for Arabic dif')\naxs[2][1].set_title('pacf for Arabic dif')\naxs[3][0].set_title('acf for origin')\naxs[3][1].set_title('pacf for origin')\naxs[4][0].set_title('acf for destination')\naxs[4][1].set_title('pacf for destination')\na = acf(foo.vol)\na = pd.DataFrame([a]).T\na.plot(kind='bar',ax = axs[0][0],figsize=(10,12))\n# foo = foo.dropna(axis=0)\na = pacf(foo.vol)\na = pd.DataFrame([a]).T\na = a.dropna(axis=0)\na.plot(kind='bar',ax = axs[0][1])\n\na = acf(foo.origin)\na = pd.DataFrame([a]).T\na.plot(kind='bar',ax = axs[3][0])\n\na = pacf(foo.origin)\na = pd.DataFrame([a]).T\na = a.dropna(axis=0)\na.plot(kind='bar',ax = axs[3][1],ylim=[-10,3])\n\nfoo = foo.dropna(axis=0)\na = acf(foo.destination)\na = pd.DataFrame([a]).T\na.plot(kind='bar',ax = axs[4][0])\n\na = pacf(foo.destination)\na = pd.DataFrame([a]).T\na = a.dropna(axis=0)\na.plot(kind='bar',ax = axs[4][1])\n\n\nfoo = foo.dropna(axis=0)\na = acf(foo.vol_arabic)\na = pd.DataFrame([a]).T\na.plot(kind='bar',ax = axs[1][0])\n\na = pacf(foo.vol_arabic)\na = pd.DataFrame([a]).T\na = a.dropna(axis=0)\na.plot(kind='bar',ax = axs[1][1])\n\n\n\nfoo['vol_arabic_dif'] = foo.vol_arabic- foo.vol_arabic.shift(1)\nfoo = foo.dropna(axis=0)\na = acf(foo.vol_arabic_dif)\na = pd.DataFrame([a]).T\na.plot(kind='bar',ax = axs[2][0])\n\na = pacf(foo.vol_arabic_dif)\na = pd.DataFrame([a]).T\na = a.dropna(axis=0)\na.plot(kind='bar',ax = axs[2][1])\n\n\n\n\nf.tight_layout()\n\n", "What do we learn?\n\nAutocorrelation in time\nSome weird time stuff going on at later lags. \n\nIf we ignore time:\n\nAR process, non-stationary data. 
Reduced predictive validity\nSpurious results more likely\n\nIID violation\n\nAutocorrelation within time. Confidence estimates wrong. \n\nWhat does this mean?\n\nDon't know whether bivariate correlation estimates are noise or 0\nWe care about where and when something happens, can't get that from country-level pooled estimates\n\n\nSolution:\n\ndifferencing, lags, fixed effects\n\n\nFixed Effects:\n\nAdd a constant for every month, and every place. \nIf Anbar always has more tweets, compare Anbar against Anbar\n\nWhy:\n\nControl for unknowns to <b>isolate effect of the signal </b>", "bar = pd.DataFrame([1,2,3,4],columns=['x'])\nbar['y']=[2,1,4,3]\nbar.plot.scatter('x','y')\nbar['condition']=[1,1,0,0]\nbar['c']=1\nprint(sm.OLS(bar.y,bar.loc[:,['x','c']]).fit().summary())\nbar['fit1']=bar.x*.6+1\nplt.plot(bar.x,bar.fit1,\"r--\")\nprint('\\n\\nCorrelation:',sp.stats.stats.pearsonr(bar.x,bar.y)[0])\n\n# bar.loc[bar.condition==1,['x','y']].plot.scatter('x','y')\nprint(sm.OLS(bar.y,bar.loc[:,['x','c','condition']]).fit().summary())\nbar.plot.scatter('x','y',c=['r','r','b','b'])\nbar['fit2']=7-bar.x\nbar['fit3']=7-bar.x\nbar['fit3']=bar.fit3 - 4\nplt.plot(bar.loc[bar.condition==0,'x'],bar.loc[bar.condition==0,'fit2'],\"b--\")\nplt.plot(bar.loc[bar.condition==1,'x'],bar.loc[bar.condition==1,'fit3'],\"r--\")\n\n", "In context, imagine y is tweet volume, and x is some outcome of interest that occurs at the local level. We know that the tweet volume is higher in Anbar than in Baghdad. In these circumstances, local effects would be masked by a bivariate correlation. \nNote also that, while it is a good idea to look at your data's distributions, you want to make these decisions before you start modeling if you can. You <i>can</i> lie with statistics. And human heuristics make it easy to justify. Protect yourself from yourself, so you don't. 
Think about model design before you look at results\nTakeaway:\n\nOur final model will have a lot of other predictors and controls, but this model doesn't\nCan get around that by isolating the signal with fixed effects\nLook at the effect of a signal in Anbar itself, rather than unduly comparing it to the effect in Baghdad. \n\"Partially pooled\". Allow regional and temporal variation without the multiple comparisons or IID violations of fully pooled estimates. \nExpect similar effects, with different magnitudes\nCould go all the way to random effects, allow each governorate to have its own effect, drawn from a distribution, and estimate that distribution. But we don't have that much data here, might lose so much power that real results fade away.\n\nOLS vs GLM\nCount data\n\nWe know the data are count. Poisson <i>should</i> be our first guess", "migration_df.loc[:,['vol','vol_arabic','origin','destination']].plot.hist(bins=60,alpha=.5)\n", "Those are some long tails...", "spreads = migration_df.loc[:,['vol','vol_arabic','origin','destination','Orig_difs','Dest_difs']].mean()\nspreads = pd.DataFrame(spreads,columns = ['mean'])\nspreads['var'] = migration_df.loc[:,['vol','vol_arabic','origin','destination','Orig_difs','Dest_difs']].var(skipna=True)\nspreads", "Poisson distributions have mean=variance.\nUse Negative Binomial instead to model mean and variance separately\nNegative Binomial Distribution is the most appropriate distribution for our outcome variables of interest.\nNote: there are also a lot of zeros, should probably run zero-inflated negative binomial, to model 0s as distinct processes. But that's harder in Python, so we can check model fit to see if it's necessary or if we can get reasonable estimates without it.\n\nWhat's wrong with OLS?\nHomoskedasticity Assumption\n\n$${k+r-1 \\choose k}\\cdot (1-p)^{r}p^{k}$$\nVariance changes as mean changes. Data are heteroskedastic. 
Since regression is essentially a way to measure variance, you have to account for the variance appropriately or your certainty estimates are wrong\n\nIt doesn't fit the data.\n\nCan predict negative numbers\nDifferent relationship between predictors and probability of observed outcome than a Gaussian regression.", "dates =['14-09',\n '14-10', '14-11', '14-12', '15-01', '15-02', '15-03', '15-04',\n '15-05', '15-06', '15-07', '15-08', '15-09', '15-10', '15-11',\n '15-12', '16-01', '16-02', '16-03', '16-04', '16-05', '16-06',\n '16-07', '16-08', '16-09', '16-10', '16-11', '16-12', '17-01',\n '17-02', '17-03', '17-04', '17-05',]\nprovinces = migration_df.index.get_level_values(1).unique()\nyvar = 'origin'\nxvars = ['vol','origin_lag']\nxvars.extend(provinces)\nxvars.extend(dates)\nglm =False\n\nmodel_olsg = sm.GLM(migration_df.loc[:,yvar],\n migration_df.loc[:,xvars],missing='drop',\n family=sm.families.Gaussian(),\n )\nmodel_nb = sm.GLM(migration_df.loc[:,yvar],\n migration_df.loc[:,xvars],missing='drop',\n family=sm.families.NegativeBinomial(),\n )\nmodel_ols = sm.OLS(migration_df.loc[:,yvar],\n migration_df.loc[:,xvars],missing='drop')\nif glm:\n results_nb = model_nb.fit()\n print(results_nb.summary())\nelse:\n results_olsg = model_olsg.fit()\n results_ols = model_ols.fit()\n print(results_ols.summary())\n ", "Just doesn't fit:", "fig = plt.figure(figsize=(12,8))\nfig=sm.graphics.plot_regress_exog(results_ols, \"vol\",fig=fig)", "Heteroskedastic", "fig, ax = plt.subplots(figsize=(4,4))\nax.scatter(results_olsg.mu, results_olsg.resid_response)\n# ax.hlines(0, 0, 3000000)\n# ax.set_xlim(0, 70000)\n# ax.set_ylim(0, 70000)\nax.hlines(0, 0, 250000)\nax.set_title('Residual Dependence Plot, Volume and Origin, NB')\nax.set_ylabel('Pearson Residuals')\nax.set_xlabel('Fitted values')", "And now with a GLM:\nNote: statsmodels isn't as sophisticated as many of the packages in R, and the negative binomial regression is still a little new. 
Converges with the MASS package in R, but has trouble with Statsmodels. I also just trust MASS a little more than statsmodels. So the results are pasted below: \nCall:\nglm.nb(formula = origin ~ vol + origin_lag + Anbar + Babylon + \n Baghdad + Basrah + Dahuk + Diyala + Erbil + Kerbala + Kirkuk + \n Missan + Muthanna + Najaf + Ninewa + Qadissiya + Salah.al.Din + \n Sulaymaniyah + Thi.Qar + Wassit + X14.10 + X14.11 + X14.12 + \n X15.01 + X15.02 + X15.03 + X15.04 + X15.05 + X15.06 + X15.07 + \n X15.08 + X15.09 + X15.10 + X15.11 + X15.12 + X16.01 + X16.02 + \n X16.03 + X16.04 + X16.05 + X16.06 + X16.07 + X16.08 + X16.09 + \n X16.10 + X16.11 + X16.12 + X17.01 + X17.02 + X17.03 + X17.04 - \n 1, data = data, init.theta = 1.394043988, link = log)\nDeviance Residuals: \n Min 1Q Median 3Q Max\n-2.9672 -0.6948 -0.1600 0.1415 3.8842 \n```Coefficients:\n Estimate Std. Error z value Pr(>|z|) \nvol 2.301e-05 9.822e-06 2.342 0.019157 \norigin_lag 1.177e-05 2.679e-06 4.394 1.11e-05 \nAnbar 9.456e+00 5.647e-01 16.745 < 2e-16 \nBabylon 8.183e+00 3.059e-01 26.749 < 2e-16 \nBaghdad 8.718e+00 3.065e-01 28.444 < 2e-16 \nBasrah -1.776e-01 3.503e-01 -0.507 0.612050 \nDahuk -4.087e+00 1.043e+00 -3.918 8.95e-05 \nDiyala 9.614e+00 3.158e-01 30.441 < 2e-16 \nErbil 7.699e+00 3.069e-01 25.089 < 2e-16 \nKerbala -3.739e+01 1.125e+07 0.000 0.999997 \nKirkuk 9.624e+00 3.124e-01 30.808 < 2e-16 \nMissan 8.451e-02 3.415e-01 0.247 0.804572 \nMuthanna -3.739e+01 1.125e+07 0.000 0.999997 \nNajaf -2.089e+00 4.998e-01 -4.179 2.92e-05 \nNinewa 9.628e+00 5.818e-01 16.549 < 2e-16 \nQadissiya 1.482e+00 3.154e-01 4.700 2.60e-06 \nSalah.al.Din 1.018e+01 3.587e-01 28.377 < 2e-16 \nSulaymaniyah -1.625e+00 4.444e-01 -3.656 0.000256 \nThi.Qar -4.126e+00 1.062e+00 -3.884 0.000103 \nWassit -3.739e+01 1.125e+07 0.000 0.999997 \nX14.10 1.383e-01 3.999e-01 0.346 0.729497 \nX14.11 6.279e-01 3.805e-01 1.650 0.098899 .\nX14.12 6.501e-01 3.806e-01 1.708 0.087623 .\nX15.01 7.865e-01 3.785e-01 2.078 0.037704 \nX15.02 1.454e+00 
3.718e-01 3.912 9.14e-05 \nX15.03 1.516e+00 3.712e-01 4.085 4.41e-05 \nX15.04 1.433e+00 3.723e-01 3.849 0.000119 \nX15.05 1.718e-01 3.819e-01 0.450 0.652739 \nX15.06 1.581e-01 3.815e-01 0.415 0.678462 \nX15.07 1.622e-01 3.815e-01 0.425 0.670676 \nX15.08 1.561e-01 3.814e-01 0.409 0.682287 \nX15.09 1.379e-01 3.815e-01 0.361 0.717814 \nX15.10 2.568e+00 3.647e-01 7.041 1.90e-12 \nX15.11 1.951e+00 3.722e-01 5.241 1.60e-07 ***\nX15.12 -1.175e-01 3.872e-01 -0.304 0.761502 \nX16.01 -1.209e-01 3.847e-01 -0.314 0.753366 \nX16.02 -7.577e-02 3.834e-01 -0.198 0.843339 \nX16.03 -1.287e-01 3.844e-01 -0.335 0.737728 \nX16.04 -1.511e-01 3.843e-01 -0.393 0.694187 \nX16.05 -2.037e-01 3.856e-01 -0.528 0.597330 \nX16.06 -2.027e-01 3.859e-01 -0.525 0.599386 \nX16.07 -2.204e-01 3.862e-01 -0.571 0.568232 \nX16.08 -2.304e-01 3.864e-01 -0.596 0.550960 \nX16.09 -2.075e-01 3.855e-01 -0.538 0.590401 \nX16.10 -2.240e-01 3.943e-01 -0.568 0.569996 \nX16.11 -9.720e-02 3.854e-01 -0.252 0.800856 \nX16.12 -6.413e-02 3.836e-01 -0.167 0.867236 \nX17.01 -3.999e-02 3.839e-01 -0.104 0.917048 \nX17.02 -2.726e-02 3.837e-01 -0.071 0.943351 \nX17.03 2.561e-02 3.837e-01 0.067 0.946770 \nX17.04 -7.492e-02 3.843e-01 -0.195 0.845445 \n\nSignif. codes: 0 ‘’ 0.001 ‘’ 0.01 ‘’ 0.05 ‘.’ 0.1 ‘ ’ 1\n(Dispersion parameter for Negative Binomial(1.394) family taken to be 1)\nNull deviance: 2.8122e+07 on 576 degrees of freedom\n\nResidual deviance: 4.7556e+02 on 525 degrees of freedom\n (18 observations deleted due to missingness)\nAIC: 6307.7\nNumber of Fisher Scoring iterations: 1\n Theta: 1.394 \n Std. 
Err.: 0.127\n\nWarning while fitting theta: alternation limit reached \n2 x log-likelihood: -6203.692 \nWarning message:\nIn glm.nb(origin ~ vol + origin_lag + Anbar + Babylon + Baghdad + :\n alternation limit reached\n```\nTransform residuals from NB GLM using DHARMa in R\n```\n\nsim <- simulateResiduals(fittedModel = m1, n=500)\nplotSimulatedResiduals(simulationOutput = sim)```", "from IPython.display import Image\nfrom IPython.core.display import HTML \nImage(url= \"resid.png\")\n", "Results\nUnderstanding the signals\n\nOrigin and Destination well correlated\nLocalized Pos/Neg sentiment well correlated\nLocalized Arabic/English volume well correlated\n\nRelationships\n\nDeath/capita leading indicator of movement\nVolume in english and arabic a leading indicator of movement\n\nBut why though?\nCan we say anything about what the Twitter signals mean?\nTo understand the relationship between variables in explaining an outcome, add and remove them from the model\n\nRun the model with volume alone\nRun the model with volume and death/capita\nThe significance and effect size of volume go down. Distribution moves toward 0. \n\nThis means that at least some of the movement explained by tweet volume operates through death." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
yedivanseven/bestPy
examples/06.1_BenchmarkSplitData.ipynb
gpl-3.0
[ "CHAPTER 6\n6.1 Benchmark: Split Data into Training and Test Sets\nNow that we have a convenient way to make recommendations, we still need to make an informed choice as to which of bestPy's algorithms we should pick and how we should set its parameters to achieve the highest possible fidelity in our recommendations.\nThe only way of telling how well we are doing with our recommendations is to see how well we can predict future purchases from past purchases. This means that, instead of using all of our data to train an algorithm, we have to hold out the last couple of purchases of each user and only use the rest. Then, we can test the recommendations produced by our algorithm against what customers actually did buy next.\nTo conveniently perform this split of our data into training and test sets, bestPy offers the advanced data class TrainTest.\nPreliminaries\nWe only need this because the examples folder is a subdirectory of the bestPy package.", "import sys\nsys.path.append('../..')", "Imports and logging\nNo algorithm or recommender is needed for now as we are focusing solely on the data structure TrainTest, which is, naturally, accessible through the sub-package bestPy.datastructures.", "from bestPy import write_log_to\nfrom bestPy.datastructures import TrainTest # Additionally import RecoBasedOn\n\nlogfile = 'logfile.txt'\nwrite_log_to(logfile, 20)", "Read TrainTest data\nReading in TrainTest data works in pretty much the same way as reading in Transactions data. Again, two data sources are available, a postgreSQL database and a CSV file. For the former, we again need a fully configured PostgreSQLparams instance (let's call it database) before we can read in the data with:\ndata = TrainTest.from_postgreSQL(database)\nReading from a CSV file then works like so:", "file = 'examples_data.csv'\ndata = TrainTest.from_csv(file)", "NOTE: There is only one difference to reading Transactions data. The from_csv() class method has an additional argument fmt. 
If it is not given, then the timestamps in the CSV file are assumed to be UNIX timestamps since epoch, i.e., integer numbers.\nIf, on the other hand, it is given, then it must be a valid format string specifying the format in which the timestamps are written in the CSV file. To tell bestPy that, for example, the timestamps in your CSV file look like\n2012-03-09 16:18:02\ni.e., year-month-day hour:minute:second, you would have to set fmt to the string:\n'%Y-%m-%d %H:%M:%S'\nWith the documentation of the datetime package, it should be easy to assemble the correct format string for just about any which way a timestamp could possibly be composed.\nInitial attributes of TrainTest data objects\nInspecting the new data object with Tab completion reveals several attributes that we already know from Transactions data. Notably, these are:", "print(data.number_of_corrupted_records)\nprint(data.number_of_transactions)", "There is also an additional attribute that tells us the maximum number of purchases we can possibly hold out as test data for each customer.", "data.max_hold_out", "Splitting the data into training and test sets\nAlso present is a method called split(), which indeed does exactly what you think it should. It has two arguments, hold_out and only_new. Naturally, the former tells bestPy how many unique purchases to hold out (i.e., put aside) for each customer. Consequently, customers who bought fewer than hold_out articles cannot be tested at all and customers who bought exactly hold_out articles will be treated as new customers in testing.\nThe second argument, only_new, tells bestPy whether only new articles will be recommended in the benchmark run or whether recommendations will also include articles that customers bought before. If True, then all previous buys of any of the hold_out last unique items need to be deleted from the training data for each customer. 
Let's try.", "data.split(4, False)", "Attributes of split TrainTest data objects\nInspecting the TrainTest data object with Tab completion again reveals two more attributes that magically appeared, train and test. The former is an instance of Transactions with all the attributes we already know.", "print(type(data.train))\nprint(data.train.user.count)", "So we have 2141 customers that bought 4 items or more and whose next 4 purchases can therefore be compared to our recommendations. I suggest you make it a habit of checking that you have a decent number of customers left in your training set.\nNOTE: Should you, for some reason, choose to hold out max_hold_out purchases, you might well end up with a single customer in your training set and, therefore, obtain spurious benchmark results.\nThe test attribute of a split TrainTest instance is a new, auxiliary data type with a very simple structure. Its data attribute contains the test data in the form of a Python dictionary with customer IDs as keys and the article IDs of their hold_out last unique purchases as values.", "data.test.data", "Its attributes hold_out and only_new simply reflect the respective arguments from the last call to the split() method and, not hard to guess, number_of_cases yields the number of test cases.", "print(data.test.hold_out)\nprint(data.test.only_new)\nprint(data.test.number_of_cases)", "NOTE: Should you wish to split the same data again, but this time with different settings, no need to read it in again. Just call split() again.", "data.split(6, True)\n\nprint(data.train.user.count)\nprint(data.test.number_of_cases)", "And that's it for the equally powerful and convenient TrainTest data structure." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
navaro1/deep-learning
intro-to-tflearn/TFLearn_Digit_Recognition.ipynb
mit
[ "Handwritten Number Recognition with TFLearn and MNIST\nIn this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. \nThis kind of neural network is used in a variety of real-world applications, including recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.\nWe'll be using TFLearn, a high-level library built on top of TensorFlow, to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.", "# Import Numpy, TensorFlow, TFLearn, and MNIST data\nimport numpy as np\nimport tensorflow as tf\nimport tflearn\nimport tflearn.datasets.mnist as mnist", "Retrieving training and test data\nThe MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.\nEach MNIST data point has:\n1. an image of a handwritten digit and \n2. a corresponding label (a number 0-9 that identifies the image)\nWe'll call the images, which will be the input to our neural network, X and their corresponding labels Y.\nWe're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and one 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].\nFlattened data\nFor this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values. 
\nFlattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.", "# Retrieve the training and test data\ntrainX, trainY, testX, testY = mnist.load_data(one_hot=True)", "Visualize the training data\nProvided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.", "# Visualizing the data\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Function for displaying a training image by its index in the MNIST set\ndef show_digit(index):\n label = trainY[index].argmax(axis=0)\n # Reshape 784 array into 28x28 image\n image = trainX[index].reshape([28,28])\n plt.title('Training data, index: %d, Label: %d' % (index, label))\n plt.imshow(image, cmap='gray_r')\n plt.show()\n \n# Display a training image and its label\nshow_digit(54)", "Building the network\nTFLearn lets you build the network by defining the layers in that network. \nFor this example, you'll define:\n\nThe input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. \nHidden layers, which recognize patterns in data and connect the input to the output layer, and\nThe output layer, which defines how the network learns and outputs a label for a given image.\n\nLet's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,\nnet = tflearn.input_data([None, 100])\nwould create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. 
For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.\nAdding layers\nTo add new hidden layers, you use \nnet = tflearn.fully_connected(net, n_units, activation='ReLU')\nThis adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units). \nThen, to set how you train the network, use:\nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nAgain, this is passing in the network you've been building. The keywords: \n\noptimizer sets the training method, here stochastic gradient descent\nlearning_rate is the learning rate\nloss determines how the network error is calculated. In this example, with categorical cross-entropy.\n\nFinally, you put all this together to create the model with tflearn.DNN(net).\nExercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.\nHint: The final output layer must have 10 output nodes (one for each digit 0-9). 
It's also recommended to use a softmax activation layer as your final output layer.", "# Define the neural network\ndef build_model():\n # This resets all parameters and variables, leave this here\n tf.reset_default_graph()\n \n #### Your code ####\n # Include the input layer, hidden layer(s), and set how you want to train the model\n net = tflearn.input_data([None, 784])\n net = tflearn.fully_connected(net, 400, activation='relu')\n net = tflearn.fully_connected(net, 200, activation='relu')\n net = tflearn.fully_connected(net, 10, activation='softmax')\n net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\n # This model assumes that your network is named \"net\" \n model = tflearn.DNN(net)\n return model\n\n# Build the model\nmodel = build_model()", "Training the network\nNow that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. \nToo few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!", "# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=512, n_epoch=25)", "Testing\nAfter you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.\nA good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!", "# Compare the labels that our model predicts with the actual labels\n\n# Find the indices of the most confident prediction for each item. 
That tells us the predicted digit for that sample.\npredictions = np.array(model.predict(testX)).argmax(axis=1)\n\n# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels\nactual = testY.argmax(axis=1)\ntest_accuracy = np.mean(predictions == actual, axis=0)\n\n# Print out the result\nprint(\"Test accuracy: \", test_accuracy)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.15/_downloads/plot_stats_cluster_methods.ipynb
bsd-3-clause
[ "%matplotlib inline", "Permutation t-test on toy data with spatial clustering\nFollowing the illustrative example of Ridgway et al. 2012 [1],\nthis demonstrates some basic ideas behind both the \"hat\"\nvariance adjustment method, as well as threshold-free\ncluster enhancement (TFCE) [2] methods in mne-python.\nThis toy dataset consists of a 40 x 40 square with a \"signal\"\npresent in the center (at pixel [20, 20]) with white noise\nadded and a 5-pixel-SD normal smoothing kernel applied.\nIn the top row plot the T statistic over space, peaking toward the\ncenter. Note that it has peaky edges. Second, with the \"hat\" variance\ncorrection/regularization, the peak becomes correctly centered. Third,\nthe TFCE approach also corrects for these edge artifacts. Fourth, the\nthe two methods combined provide a tighter estimate, for better or\nworse.\nNow considering multiple-comparisons corrected statistics on these\nvariables, note that a non-cluster test (e.g., FDR or Bonferroni) would\nmis-localize the peak due to sharpness in the T statistic driven by\nlow-variance pixels toward the edge of the plateau. Standard clustering\n(first plot in the second row) identifies the correct region, but the\nwhole area must be declared significant, so no peak analysis can be done.\nAlso, the peak is broad. In this method, all significances are\nfamily-wise error rate (FWER) corrected, and the method is\nnon-parametric so assumptions of Gaussian data distributions (which do\nactually hold for this example) don't need to be satisfied. Adding the\n\"hat\" technique tightens the estimate of significant activity (second\nplot). The TFCE approach (third plot) allows analyzing each significant\npoint independently, but still has a broadened estimate. Note that\nthis is also FWER corrected. 
Finally, combining the TFCE and \"hat\"\nmethods tightens the area declared significant (again FWER corrected),\nand allows for evaluation of each point independently instead of as\na single, broad cluster.\n<div class=\"alert alert-info\"><h4>Note</h4><p>This example does quite a bit of processing, so even on a\n fast machine it can take a few minutes to complete.</p></div>", "# Authors: Eric Larson <larson.eric.d@gmail.com>\n# License: BSD (3-clause)\n\nimport numpy as np\nfrom scipy import stats\nfrom functools import partial\nimport matplotlib.pyplot as plt\n# this changes hidden MPL vars:\nfrom mpl_toolkits.mplot3d import Axes3D # noqa\n\nfrom mne.stats import (spatio_temporal_cluster_1samp_test,\n bonferroni_correction, ttest_1samp_no_p)\n\ntry:\n from sklearn.feature_extraction.image import grid_to_graph\nexcept ImportError:\n from scikits.learn.feature_extraction.image import grid_to_graph\n\nprint(__doc__)", "Set parameters", "width = 40\nn_subjects = 10\nsignal_mean = 100\nsignal_sd = 100\nnoise_sd = 0.01\ngaussian_sd = 5\nsigma = 1e-3 # sigma for the \"hat\" method\nthreshold = -stats.distributions.t.ppf(0.05, n_subjects - 1)\nthreshold_tfce = dict(start=0, step=0.2)\nn_permutations = 1024 # number of clustering permutations (1024 for exact)", "Construct simulated data\nMake the connectivity matrix just next-neighbor spatially", "n_src = width * width\nconnectivity = grid_to_graph(width, width)\n\n# For each \"subject\", make a smoothed noisy signal with a centered peak\nrng = np.random.RandomState(42)\nX = noise_sd * rng.randn(n_subjects, width, width)\n# Add a signal at the dead center\nX[:, width // 2, width // 2] = signal_mean + rng.randn(n_subjects) * signal_sd\n# Spatially smooth with a 2D Gaussian kernel\nsize = width // 2 - 1\ngaussian = np.exp(-(np.arange(-size, size + 1) ** 2 / float(gaussian_sd ** 2)))\nfor si in range(X.shape[0]):\n for ri in range(X.shape[1]):\n X[si, ri, :] = np.convolve(X[si, ri, :], gaussian, 'same')\n for ci in 
range(X.shape[2]):\n X[si, :, ci] = np.convolve(X[si, :, ci], gaussian, 'same')", "Do some statistics\n<div class=\"alert alert-info\"><h4>Note</h4><p>X needs to be a multi-dimensional array of shape\n samples (subjects) x time x space, so we permute dimensions:</p></div>", "X = X.reshape((n_subjects, 1, n_src))", "Now let's do some clustering using the standard method.\n<div class=\"alert alert-info\"><h4>Note</h4><p>Not specifying a connectivity matrix implies grid-like connectivity,\n which we want here:</p></div>", "T_obs, clusters, p_values, H0 = \\\n spatio_temporal_cluster_1samp_test(X, n_jobs=1, threshold=threshold,\n connectivity=connectivity,\n tail=1, n_permutations=n_permutations)\n\n# Let's put the cluster data in a readable format\nps = np.zeros(width * width)\nfor cl, p in zip(clusters, p_values):\n ps[cl[1]] = -np.log10(p)\nps = ps.reshape((width, width))\nT_obs = T_obs.reshape((width, width))\n\n# To do a Bonferroni correction on these data is simple:\np = stats.distributions.t.sf(T_obs, n_subjects - 1)\np_bon = -np.log10(bonferroni_correction(p)[1])\n\n# Now let's do some clustering using the standard method with \"hat\":\nstat_fun = partial(ttest_1samp_no_p, sigma=sigma)\nT_obs_hat, clusters, p_values, H0 = \\\n spatio_temporal_cluster_1samp_test(X, n_jobs=1, threshold=threshold,\n connectivity=connectivity,\n tail=1, n_permutations=n_permutations,\n stat_fun=stat_fun, buffer_size=None)\n\n# Let's put the cluster data in a readable format\nps_hat = np.zeros(width * width)\nfor cl, p in zip(clusters, p_values):\n ps_hat[cl[1]] = -np.log10(p)\nps_hat = ps_hat.reshape((width, width))\nT_obs_hat = T_obs_hat.reshape((width, width))\n\n# Now the threshold-free cluster enhancement method (TFCE):\nT_obs_tfce, clusters, p_values, H0 = \\\n spatio_temporal_cluster_1samp_test(X, n_jobs=1, threshold=threshold_tfce,\n connectivity=connectivity,\n tail=1, n_permutations=n_permutations)\nT_obs_tfce = T_obs_tfce.reshape((width, width))\nps_tfce = 
-np.log10(p_values.reshape((width, width)))\n\n# Now the TFCE with \"hat\" variance correction:\nT_obs_tfce_hat, clusters, p_values, H0 = \\\n spatio_temporal_cluster_1samp_test(X, n_jobs=1, threshold=threshold_tfce,\n connectivity=connectivity,\n tail=1, n_permutations=n_permutations,\n stat_fun=stat_fun, buffer_size=None)\nT_obs_tfce_hat = T_obs_tfce_hat.reshape((width, width))\nps_tfce_hat = -np.log10(p_values.reshape((width, width)))", "Visualize results", "fig = plt.figure(facecolor='w')\n\nx, y = np.mgrid[0:width, 0:width]\nkwargs = dict(rstride=1, cstride=1, linewidth=0, cmap='Greens')\n\nTs = [T_obs, T_obs_hat, T_obs_tfce, T_obs_tfce_hat]\ntitles = ['T statistic', 'T with \"hat\"', 'TFCE statistic', 'TFCE w/\"hat\" stat']\nfor ii, (t, title) in enumerate(zip(Ts, titles)):\n ax = fig.add_subplot(2, 4, ii + 1, projection='3d')\n ax.plot_surface(x, y, t, **kwargs)\n ax.set_xticks([])\n ax.set_yticks([])\n ax.set_title(title)\n\np_lims = [1.3, -np.log10(1.0 / n_permutations)]\npvals = [ps, ps_hat, ps_tfce, ps_tfce_hat]\ntitles = ['Standard clustering', 'Clust. w/\"hat\"',\n 'Clust. w/TFCE', 'Clust. w/TFCE+\"hat\"']\naxs = []\nfor ii, (p, title) in enumerate(zip(pvals, titles)):\n ax = fig.add_subplot(2, 4, 5 + ii)\n plt.imshow(p, cmap='Purples', vmin=p_lims[0], vmax=p_lims[1])\n ax.set_xticks([])\n ax.set_yticks([])\n ax.set_title(title)\n axs.append(ax)\n\nplt.tight_layout()\nfor ax in axs:\n cbar = plt.colorbar(ax=ax, shrink=0.75, orientation='horizontal',\n fraction=0.1, pad=0.025)\n cbar.set_label('-log10(p)')\n cbar.set_ticks(p_lims)\n cbar.set_ticklabels(['%0.1f' % p for p in p_lims])\n\nplt.show()", "References\n.. [1] Ridgway et al. 2012, \"The problem of low variance voxels in\n statistical parametric mapping; a new hat avoids a 'haircut'\",\n NeuroImage. 2012 Feb 1;59(3):2131-41.\n.. 
[2] Smith and Nichols 2009, \"Threshold-free cluster enhancement:\n addressing problems of smoothing, threshold dependence, and\n localisation in cluster inference\", NeuroImage 44 (2009) 83-98." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
khrapovs/metrix
notebooks/mle_probit.ipynb
mit
[ "Probit\n$$Y_{i}^{*}=X_{i}\theta+e_{i}$$\n$$Y_{i}=\begin{cases}\n1, & Y_{i}^{*}>0,\\\n0, & Y_{i}^{*}\leq0.\n\end{cases}$$", "import numpy as np\nimport matplotlib.pylab as plt\nimport seaborn as sns\n\nnp.set_printoptions(precision=4, suppress=True)\nsns.set_context('notebook')\n\n%matplotlib inline", "Generate data", "# True parameter\ntheta = .5\n# Sample size\nn = int(1e2)\n# Independent variable, N(0,1)\nX = np.random.normal(0, 1, n)\n# Error term, N(0,1)\ne = np.random.normal(0, 1, n)\n\n# Sort data for nice plots\nX = np.sort(X)\n\n# Unobservable dependent variable\nYs = X * theta + e\n\n# Generate observable binary variable\nY = np.zeros_like(Ys)\nY[Ys > 0] = 1", "Plot the data and the model", "plt.figure(figsize=(16,8))\n\n# Unobservables\nplt.subplot(2, 1, 1)\nplt.plot(X, X * theta, label='True model')\nplt.scatter(X[Ys > 0], Ys[Ys > 0], c='red', label='Unobserved > 0')\nplt.scatter(X[Ys < 0], Ys[Ys < 0], c='blue', label='Unobserved < 0')\nplt.ylabel(r'$Y^*$')\nplt.xlabel(r'$X$')\nplt.legend()\n\n# Observables\nplt.subplot(2, 1, 2)\nplt.scatter(X, Y, facecolors='none', edgecolors='b', lw=2)\nplt.ylabel(r'$Y$')\nplt.xlabel(r'$X$')\n\nplt.show()", "Maximize log-likelihood\n$$l\left(y|x,\theta\right)=\sum_{i=1}^{n}\log f\left(y_{i}|x_{i},\theta\right)=\sum_{i=1}^{n}\left[y_{i}\log\Phi\left(x_{i}\theta\right)+\left(1-y_{i}\right)\log\left[1-\Phi\left(x_{i}\theta\right)\right]\right]$$", "import scipy.optimize as opt\nfrom scipy.stats import norm\n\n# Define objective function\ndef f(theta, X, Y):\n Q = - np.sum(Y * np.log(1e-3 + norm.cdf(X * theta)) + (1 - Y) * np.log(1e-3 + 1 - norm.cdf(X * theta)))\n return Q\n\n# Run optimization routine\ntheta_hat = opt.fmin_bfgs(f, 0., args=(X, Y))\n\nprint(theta_hat)", "Plot objective function, true parameter, and the estimate", "# Generate data for objective function plot\nth = np.linspace(-3., 3., 100)\nQ = [f(z, X, Y) for z in th]\n\n# Plot the data\nplt.figure(figsize=(8, 4))\nplt.plot(th, Q, 
label='Q')\nplt.xlabel(r'$\\theta$')\nplt.axvline(x=theta_hat, c='red', label='Estimated')\nplt.axvline(x=theta, c='black', label='True')\nplt.legend()\nplt.show()", "Solve first order conditions", "from scipy.optimize import fsolve\n\n# Define the first order condition\ndef df(theta, X, Y):\n return - np.sum(X * norm.pdf(X * theta) * (Y - norm.cdf(X * theta)))\n\n# Solve FOC\ntheta_hat = fsolve(df, 0., args=(X, Y))\n\nprint(theta_hat)", "Plot first order condition", "# Generate data for the plot\nth = np.linspace(-3., 3., 100)\nQ = np.array([df(z, X, Y) for z in th])\n\n# Plot the data\nplt.figure(figsize=(8, 4))\nplt.plot(th, Q, label='dQ')\nplt.xlabel(r'$\\theta$')\nplt.axvline(x=theta_hat, c='red', label='Estimated')\nplt.axvline(x=theta, c='black', label='True')\nplt.axhline(y=0, c='green')\nplt.legend()\nplt.show()", "Plot original data and fitted model", "plt.figure(figsize=(16, 8))\n\n# Unobservables\nplt.subplot(2, 1, 1)\nplt.plot(X, X * theta, label='True model')\nplt.plot(X, X * theta_hat, label='Fitted model')\nplt.scatter(X[Ys > 0], Ys[Ys > 0], c='red', label='Unobserved > 0')\nplt.scatter(X[Ys < 0], Ys[Ys < 0], c='blue', label='Unobserved < 0')\nplt.ylabel(r'$Y^*$')\nplt.xlabel(r'$X$')\nplt.legend()\n\n# Observables\nplt.subplot(2, 1, 2)\nplt.scatter(X, Y, label='Y')\nplt.plot(X, norm.cdf(X * theta), label=r'$\\Phi(X\theta)$')\nplt.plot(X, norm.cdf(X * theta_hat), label=r'$\\Phi(X\\hat{\\theta})$')\nplt.ylabel(r'$Y$')\nplt.xlabel(r'$X$')\nplt.legend()\n\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/hammoz-consortium/cmip6/models/sandbox-3/seaice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: HAMMOZ-CONSORTIUM\nSource ID: SANDBOX-3\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:03\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-3', 'seaice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Model\n2. Key Properties --&gt; Variables\n3. Key Properties --&gt; Seawater Properties\n4. Key Properties --&gt; Resolution\n5. Key Properties --&gt; Tuning Applied\n6. Key Properties --&gt; Key Parameter Values\n7. Key Properties --&gt; Assumptions\n8. Key Properties --&gt; Conservation\n9. Grid --&gt; Discretisation --&gt; Horizontal\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Seaice Categories\n12. Grid --&gt; Snow On Seaice\n13. Dynamics\n14. Thermodynamics --&gt; Energy\n15. Thermodynamics --&gt; Mass\n16. Thermodynamics --&gt; Salt\n17. Thermodynamics --&gt; Salt --&gt; Mass Transport\n18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\n19. Thermodynamics --&gt; Ice Thickness Distribution\n20. Thermodynamics --&gt; Ice Floe Size Distribution\n21. Thermodynamics --&gt; Melt Ponds\n22. Thermodynamics --&gt; Snow Processes\n23. 
Radiative Processes \n1. Key Properties --&gt; Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of sea ice model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the sea ice component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Ocean Freezing Point Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Target\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Simulations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. 
Metrics Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any observed metrics used in tuning model/parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.5. Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhich variables were changed during the tuning process?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nWhat values were specified for the following parameters if used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Additional Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. 
Key Properties --&gt; Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. On Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Missing Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nProvide a general description of conservation methodology.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Properties\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Was Flux Correction Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes conservation involve flux correction?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Grid --&gt; Discretisation --&gt; Horizontal\nSea ice discretisation in the horizontal\n9.1. 
Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGrid on which sea ice is horizontally discretised?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the type of sea ice grid?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the advection scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.4. Thermodynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.5. 
Dynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.6. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional horizontal discretisation details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Number Of Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using multi-layers specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "10.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional vertical grid details.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Grid --&gt; Seaice Categories\nWhat method is used to represent sea ice categories?\n11.1. Has Multiple Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11.2. Number Of Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Category Limits\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. 
Other\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but assume a distribution and compute fluxes accordingly.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Grid --&gt; Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow on ice represented in this model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Number Of Snow Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels of snow on ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.3. Snow Fraction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.4. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional details related to snow on ice.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Transport In Thickness Space\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Ice Strength Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich method of sea ice strength formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Rheology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRheology, what is the ice deformation formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Thermodynamics --&gt; Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the energy formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Thermal Conductivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of thermal conductivity is used?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of heat diffusion?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.4. Basal Heat Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.5. Fixed Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.6. 
Heat Content Of Precipitation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.7. Precipitation Effects On Salinity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Thermodynamics --&gt; Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Ice Vertical Growth And Melt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Ice Lateral Melting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice lateral melting?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Ice Surface Sublimation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.5. Frazil Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of frazil ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Thermodynamics --&gt; Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17. Thermodynamics --&gt; Salt --&gt; Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Thermodynamics --&gt; Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice thickness distribution represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Thermodynamics --&gt; Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. 
Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice floe-size represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Thermodynamics --&gt; Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre melt ponds included in the sea ice model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21.2. Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat method of melt pond formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.3. 
Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat do melt ponds have an impact on?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Thermodynamics --&gt; Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.2. Snow Aging Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Has Snow Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.4. Snow Ice Formation Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow ice formation scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.5. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the impact of ridging on snow cover?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.6. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used to handle surface albedo.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Ice Radiation Transmission\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
keras-team/keras-io
examples/vision/ipynb/siamese_network.ipynb
apache-2.0
[ "Image similarity estimation using a Siamese Network with a triplet loss\nAuthors: Hazem Essam and Santiago L. Valdarrama<br>\nDate created: 2021/03/25<br>\nLast modified: 2021/03/25<br>\nDescription: Training a Siamese Network to compare the similarity of images using a triplet loss function.\nIntroduction\nA Siamese Network is a type of network architecture that\ncontains two or more identical subnetworks used to generate feature vectors for each input and compare them.\nSiamese Networks can be applied to different use cases, like detecting duplicates, finding anomalies, and face recognition.\nThis example uses a Siamese Network with three identical subnetworks. We will provide three images to the model, where\ntwo of them will be similar (anchor and positive samples), and the third will be unrelated (a negative example.)\nOur goal is for the model to learn to estimate the similarity between images.\nFor the network to learn, we use a triplet loss function. You can find an introduction to triplet loss in the\nFaceNet paper by Schroff et al., 2015. 
In this example, we define the triplet\nloss function as follows:\nL(A, P, N) = max(‖f(A) - f(P)‖² - ‖f(A) - f(N)‖² + margin, 0)\nThis example uses the Totally Looks Like dataset\nby Rosenfeld et al., 2018.\nSetup", "import matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport random\nimport tensorflow as tf\nfrom pathlib import Path\nfrom tensorflow.keras import applications\nfrom tensorflow.keras import layers\nfrom tensorflow.keras import losses\nfrom tensorflow.keras import optimizers\nfrom tensorflow.keras import metrics\nfrom tensorflow.keras import Model\nfrom tensorflow.keras.applications import resnet\n\n\ntarget_shape = (200, 200)\n", "Load the dataset\nWe are going to load the Totally Looks Like dataset and unzip it inside the ~/.keras directory\nin the local environment.\nThe dataset consists of two separate files:\n\nleft.zip contains the images that we will use as the anchor.\nright.zip contains the images that we will use as the positive sample (an image that looks like the anchor).", "cache_dir = Path(Path.home()) / \".keras\"\nanchor_images_path = cache_dir / \"left\"\npositive_images_path = cache_dir / \"right\"\n\n!gdown --id 1jvkbTr_giSP3Ru8OwGNCg6B4PvVbcO34\n!gdown --id 1EzBZUb_mh_Dp_FKD0P4XiYYSd0QBH5zW\n!unzip -oq left.zip -d $cache_dir\n!unzip -oq right.zip -d $cache_dir", "Preparing the data\nWe are going to use a tf.data pipeline to load the data and generate the triplets that we\nneed to train the Siamese network.\nWe'll set up the pipeline using a zipped list with anchor, positive, and negative filenames as\nthe source. 
The pipeline will load and preprocess the corresponding images.", "\ndef preprocess_image(filename):\n \"\"\"\n Load the specified file as a JPEG image, preprocess it and\n resize it to the target shape.\n \"\"\"\n\n image_string = tf.io.read_file(filename)\n image = tf.image.decode_jpeg(image_string, channels=3)\n image = tf.image.convert_image_dtype(image, tf.float32)\n image = tf.image.resize(image, target_shape)\n return image\n\n\ndef preprocess_triplets(anchor, positive, negative):\n \"\"\"\n Given the filenames corresponding to the three images, load and\n preprocess them.\n \"\"\"\n\n return (\n preprocess_image(anchor),\n preprocess_image(positive),\n preprocess_image(negative),\n )\n", "Let's set up our data pipeline using a zipped list with an anchor, positive,\nand negative image filename as the source. The output of the pipeline\ncontains the same triplet with every image loaded and preprocessed.", "# We need to make sure both the anchor and positive images are loaded in\n# sorted order so we can match them together.\nanchor_images = sorted(\n [str(anchor_images_path / f) for f in os.listdir(anchor_images_path)]\n)\n\npositive_images = sorted(\n [str(positive_images_path / f) for f in os.listdir(positive_images_path)]\n)\n\nimage_count = len(anchor_images)\n\nanchor_dataset = tf.data.Dataset.from_tensor_slices(anchor_images)\npositive_dataset = tf.data.Dataset.from_tensor_slices(positive_images)\n\n# To generate the list of negative images, let's randomize the list of\n# available images and concatenate them together.\nrng = np.random.RandomState(seed=42)\nrng.shuffle(anchor_images)\nrng.shuffle(positive_images)\n\nnegative_images = anchor_images + positive_images\nnp.random.RandomState(seed=32).shuffle(negative_images)\n\nnegative_dataset = tf.data.Dataset.from_tensor_slices(negative_images)\nnegative_dataset = negative_dataset.shuffle(buffer_size=4096)\n\ndataset = tf.data.Dataset.zip((anchor_dataset, positive_dataset, negative_dataset))\ndataset = 
dataset.shuffle(buffer_size=1024)\ndataset = dataset.map(preprocess_triplets)\n\n# Let's now split our dataset in train and validation.\ntrain_dataset = dataset.take(round(image_count * 0.8))\nval_dataset = dataset.skip(round(image_count * 0.8))\n\ntrain_dataset = train_dataset.batch(32, drop_remainder=False)\ntrain_dataset = train_dataset.prefetch(8)\n\nval_dataset = val_dataset.batch(32, drop_remainder=False)\nval_dataset = val_dataset.prefetch(8)\n", "Let's take a look at a few examples of triplets. Notice how the first two images\nlook alike while the third one is always different.", "\ndef visualize(anchor, positive, negative):\n \"\"\"Visualize a few triplets from the supplied batches.\"\"\"\n\n def show(ax, image):\n ax.imshow(image)\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n fig = plt.figure(figsize=(9, 9))\n\n axs = fig.subplots(3, 3)\n for i in range(3):\n show(axs[i, 0], anchor[i])\n show(axs[i, 1], positive[i])\n show(axs[i, 2], negative[i])\n\n\nvisualize(*list(train_dataset.take(1).as_numpy_iterator())[0])", "Setting up the embedding generator model\nOur Siamese Network will generate embeddings for each of the images of the\ntriplet. 
To do this, we will use a ResNet50 model pretrained on ImageNet and\nconnect a few Dense layers to it so we can learn to separate these\nembeddings.\nWe will freeze the weights of all the layers of the model up until the layer conv5_block1_out.\nThis is important to avoid affecting the weights that the model has already learned.\nWe are going to leave the bottom few layers trainable, so that we can fine-tune their weights\nduring training.", "base_cnn = resnet.ResNet50(\n weights=\"imagenet\", input_shape=target_shape + (3,), include_top=False\n)\n\nflatten = layers.Flatten()(base_cnn.output)\ndense1 = layers.Dense(512, activation=\"relu\")(flatten)\ndense1 = layers.BatchNormalization()(dense1)\ndense2 = layers.Dense(256, activation=\"relu\")(dense1)\ndense2 = layers.BatchNormalization()(dense2)\noutput = layers.Dense(256)(dense2)\n\nembedding = Model(base_cnn.input, output, name=\"Embedding\")\n\ntrainable = False\nfor layer in base_cnn.layers:\n if layer.name == \"conv5_block1_out\":\n trainable = True\n layer.trainable = trainable", "Setting up the Siamese Network model\nThe Siamese network will receive each of the triplet images as an input,\ngenerate the embeddings, and output the distance between the anchor and the\npositive embedding, as well as the distance between the anchor and the negative\nembedding.\nTo compute the distance, we can use a custom layer DistanceLayer that\nreturns both values as a tuple.", "\nclass DistanceLayer(layers.Layer):\n \"\"\"\n This layer is responsible for computing the distance between the anchor\n embedding and the positive embedding, and the anchor embedding and the\n negative embedding.\n \"\"\"\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n\n def call(self, anchor, positive, negative):\n ap_distance = tf.reduce_sum(tf.square(anchor - positive), -1)\n an_distance = tf.reduce_sum(tf.square(anchor - negative), -1)\n return (ap_distance, an_distance)\n\n\nanchor_input = layers.Input(name=\"anchor\", 
shape=target_shape + (3,))\npositive_input = layers.Input(name=\"positive\", shape=target_shape + (3,))\nnegative_input = layers.Input(name=\"negative\", shape=target_shape + (3,))\n\ndistances = DistanceLayer()(\n embedding(resnet.preprocess_input(anchor_input)),\n embedding(resnet.preprocess_input(positive_input)),\n embedding(resnet.preprocess_input(negative_input)),\n)\n\nsiamese_network = Model(\n inputs=[anchor_input, positive_input, negative_input], outputs=distances\n)", "Putting everything together\nWe now need to implement a model with a custom training loop so we can compute\nthe triplet loss using the three embeddings produced by the Siamese network.\nLet's create a Mean metric instance to track the loss of the training process.", "\nclass SiameseModel(Model):\n \"\"\"The Siamese Network model with custom training and testing loops.\n\n Computes the triplet loss using the three embeddings produced by the\n Siamese Network.\n\n The triplet loss is defined as:\n L(A, P, N) = max(‖f(A) - f(P)‖² - ‖f(A) - f(N)‖² + margin, 0)\n \"\"\"\n\n def __init__(self, siamese_network, margin=0.5):\n super(SiameseModel, self).__init__()\n self.siamese_network = siamese_network\n self.margin = margin\n self.loss_tracker = metrics.Mean(name=\"loss\")\n\n def call(self, inputs):\n return self.siamese_network(inputs)\n\n def train_step(self, data):\n # GradientTape is a context manager that records every operation that\n # you do inside. 
We are using it here to compute the loss so we can get\n # the gradients and apply them using the optimizer specified in\n # `compile()`.\n with tf.GradientTape() as tape:\n loss = self._compute_loss(data)\n\n # Storing the gradients of the loss function with respect to the\n # weights/parameters.\n gradients = tape.gradient(loss, self.siamese_network.trainable_weights)\n\n # Applying the gradients on the model using the specified optimizer\n self.optimizer.apply_gradients(\n zip(gradients, self.siamese_network.trainable_weights)\n )\n\n # Let's update and return the training loss metric.\n self.loss_tracker.update_state(loss)\n return {\"loss\": self.loss_tracker.result()}\n\n def test_step(self, data):\n loss = self._compute_loss(data)\n\n # Let's update and return the loss metric.\n self.loss_tracker.update_state(loss)\n return {\"loss\": self.loss_tracker.result()}\n\n def _compute_loss(self, data):\n # The output of the network is a tuple containing the distances\n # between the anchor and the positive example, and the anchor and\n # the negative example.\n ap_distance, an_distance = self.siamese_network(data)\n\n # Computing the Triplet Loss by subtracting both distances and\n # making sure we don't get a negative value.\n loss = ap_distance - an_distance\n loss = tf.maximum(loss + self.margin, 0.0)\n return loss\n\n @property\n def metrics(self):\n # We need to list our metrics here so the `reset_states()` can be\n # called automatically.\n return [self.loss_tracker]\n", "Training\nWe are now ready to train our model.", "siamese_model = SiameseModel(siamese_network)\nsiamese_model.compile(optimizer=optimizers.Adam(0.0001))\nsiamese_model.fit(train_dataset, epochs=10, validation_data=val_dataset)", "Inspecting what the network has learned\nAt this point, we can check how the network learned to separate the embeddings\ndepending on whether they belong to similar images.\nWe can use cosine similarity to measure the\nsimilarity between embeddings.\nLet's pick a 
sample from the dataset to check the similarity between the\nembeddings generated for each image.", "sample = next(iter(train_dataset))\nvisualize(*sample)\n\nanchor, positive, negative = sample\nanchor_embedding, positive_embedding, negative_embedding = (\n embedding(resnet.preprocess_input(anchor)),\n embedding(resnet.preprocess_input(positive)),\n embedding(resnet.preprocess_input(negative)),\n)", "Finally, we can compute the cosine similarity between the anchor and positive\nimages and compare it with the similarity between the anchor and the negative\nimages.\nWe should expect the similarity between the anchor and positive images to be\nlarger than the similarity between the anchor and the negative images.", "cosine_similarity = metrics.CosineSimilarity()\n\npositive_similarity = cosine_similarity(anchor_embedding, positive_embedding)\nprint(\"Positive similarity:\", positive_similarity.numpy())\n\nnegative_similarity = cosine_similarity(anchor_embedding, negative_embedding)\nprint(\"Negative similarity\", negative_similarity.numpy())\n", "Summary\n\n\nThe tf.data API enables you to build efficient input pipelines for your model. It is\nparticularly useful if you have a large dataset. You can learn more about tf.data\npipelines in tf.data: Build TensorFlow input pipelines.\n\n\nIn this example, we use a pre-trained ResNet50 as part of the subnetwork that generates\nthe feature embeddings. 
By using transfer learning,\nwe can significantly reduce the training time and size of the dataset.\n\n\nNotice how we are fine-tuning\nthe weights of the final layers of the ResNet50 network but keeping the rest of the layers untouched.\nUsing the name assigned to each layer, we can freeze the weights to a certain point and keep the last few layers open.\n\n\nWe can create custom layers by creating a class that inherits from tf.keras.layers.Layer,\nas we did in the DistanceLayer class.\n\n\nWe used a cosine similarity metric to measure how similar the two output embeddings are to each other.\n\n\nYou can implement a custom training loop by overriding the train_step() method. train_step() uses\ntf.GradientTape,\nwhich records every operation that you perform inside it. In this example, we use it to access the\ngradients passed to the optimizer to update the model weights at every step. For more details, check out the\nIntro to Keras for researchers\nand Writing a training loop from scratch." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
thempel/adaptivemd
examples/rp/3_example_adaptive.ipynb
lgpl-2.1
[ "AdaptiveMD\nExample 3 - Running an adaptive loop", "import sys, os\n\n# stop RP from printing logs until severe\n# verbose = os.environ.get('RADICAL_PILOT_VERBOSE', 'REPORT')\nos.environ['RADICAL_PILOT_VERBOSE'] = 'ERROR'\n\nfrom adaptivemd import (\n Project,\n Event, FunctionalEvent,\n File\n)\n\n# We need this to be part of the imports. You can only restore known objects\n# Once these are imported you can load these objects.\nfrom adaptivemd.engine.openmm import OpenMMEngine\nfrom adaptivemd.analysis.pyemma import PyEMMAAnalysis", "Let's open our test project by its name. If you completed the first examples this should all work out of the box.", "project = Project('test')", "Open all connections to the MongoDB and Session so we can get started.\n\nAn interesting thing to note here is that, since we use a DB in the back, data is synced between notebooks. If you want to see how this works, just run some tasks in the last example, go back here and check how the contents of the project change.\n\nLet's see where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal.", "print project.files\nprint project.generators\nprint project.models", "Now restore our old ways to generate tasks by loading the previously used generators.", "engine = project.generators['openmm']\nmodeller = project.generators['pyemma']\npdb_file = project.files['initial_pdb']", "Run simulations\nNow we really start simulations. The general way to do so is to create a simulation task and then submit it to a cluster to be executed. A Task object is a general description of what should be done and boils down to staging some files to your working directory, executing a bash script and finally moving files back from your working directory to a shared storage. 
RP takes care of most of this very elegantly and hence a Task is designed somewhat to cover the capabilities but in a somewhat simpler and more pythonic way.\nFor example there is an RPC Python Call Task that allows you to execute a function remotely and pull back the results. \nFunctional Events\nWe want to first look into a way to run Python code asynchronously in the project. For this, write a function that should be executed. Start with opening a scheduler or using an existing one (in the latter case you need to make sure that when it is executed - which can take a while - the scheduler still exists).\nIf the function should pause, write yield {condition_to_continue}. This will interrupt your script until the condition you yield returns True when called.", "def strategy():\n # create a new scheduler\n with project.get_scheduler(cores=2) as local_scheduler:\n for loop in range(10):\n tasks = local_scheduler(project.new_ml_trajectory(\n length=100, number=10))\n yield tasks.is_done()\n\n task = local_scheduler(modeller.execute(list(project.trajectories)))\n yield task.is_done", "To get a generator from your function, call it: pass strategy() and not strategy to the FunctionalEvent", "ev = FunctionalEvent(strategy())", "and execute the event inside your project", "project.add_event(ev)", "after some time you will have 10 more trajectories. 
Just like that.\nLet's see how our project is growing", "import time\nfrom IPython.display import clear_output\n\ntry:\n while True:\n clear_output(wait=True)\n print '# of files %8d : %s' % (len(project.trajectories), '#' * len(project.trajectories))\n print '# of models %8d : %s' % (len(project.models), '#' * len(project.models))\n sys.stdout.flush()\n time.sleep(1)\n \nexcept KeyboardInterrupt:\n pass", "And some analysis", "trajs = project.trajectories\nq = {}\nins = {}\nfor f in trajs:\n source = f.frame if isinstance(f.frame, File) else f.frame.trajectory\n ind = 0 if isinstance(f.frame, File) else f.frame.index\n ins[source] = ins.get(source, []) + [ind]", "Event", "scheduler = project.get_scheduler(cores=2)\n\ndef strategy1():\n for loop in range(10):\n tasks = scheduler(project.new_ml_trajectory(\n length=100, number=10))\n yield tasks.is_done()\n\ndef strategy2():\n for loop in range(10):\n num = len(project.trajectories)\n task = scheduler(modeller.execute(list(project.trajectories)))\n yield task.is_done\n yield project.on_ntraj(num + 5)\n\nproject._events = []\n\nproject.add_event(FunctionalEvent(strategy1))\nproject.add_event(FunctionalEvent(strategy2))\n\nproject.close()", "Tasks\nTo actually run simulations you need to have a scheduler (maybe a better name?). This instance can execute tasks or, more precisely, you can use it to submit tasks which will be converted to ComputeUnitDescriptions and executed on the cluster previously chosen.", "scheduler = project.get_scheduler(cores=2) # get the default scheduler using 2 cores", "Now we are good to go and can run a first simulation\nThis works by creating a Trajectory object with a filename, a length and an initial frame. 
Then the engine will take this information and create a real trajectory with exactly this name, this initial frame and the given length.\nSince this is such a common task you can also submit just a Trajectory without the need to convert it to a Task first (which the engine can also do).\nOur project can create new names automatically and so we want 4 new trajectories of length 100 and starting at the existing pdb_file we use to initialize the engine.", "trajs = project.new_trajectory(pdb_file, 100, 4)", "Let's submit and see", "scheduler.submit(trajs)", "Once the trajectories exist these objects will be saved to the database. It might be a little confusing to have objects before they exist, but this way you can actually work with these trajectories, like referencing them, even before they exist.\nThis would now allow you to write a function that triggers when the trajectory comes into existence. But we are not doing this right now.\nWait is dangerous since it is blocking and you cannot do anything until all tasks are finished. Normally you do not need it. Especially in interactive sessions.", "scheduler.wait()", "Look at all the files our project now contains.", "print '# of files', len(project.files)", "Great! That was easy (I hope you agree). \nNext we want to run a simple analysis.", "t = modeller.execute(list(project.trajectories))\n\nscheduler(t)\n\nscheduler.wait()", "Let's look at the model we generated", "print project.models.last.data.keys()", "And pick some information", "print project.models.last.data['msm']['P']", "The next example will demonstrate how to write a full adaptive loop\nEvents\nA new concept. Tasks are great and do work for us. But so far we needed to submit tasks ourselves. In adaptive simulations we want this to happen automagically. To help with some of this, events exist. 
These are basically a task generator coupled with conditions on when to be executed.\nLet's write a little task generator (in essence a function that returns tasks)", "def task_generator():\n return [\n engine.task_run_trajectory(traj) for traj in\n project.new_ml_trajectory(100, 4)]\n\ntask_generator()", "Now create an event.", "ev = Event().on(project.on_ntraj(range(20,22,2))).do(task_generator)", ".on specifies when something should be executed. In our case when the project has a number of 20 trajectories. This is not yet the case so this event will not do anything unless we simulate more trajectories.\n.do specifies the function to be called.\nThe concept is borrowed from event-based languages like JavaScript. \nYou can build quite complex execution patterns with this. An event for example also knows when it is finished and this can be used as another trigger.", "def hello():\n print 'DONE!!!'\n return [] # todo: allow for None here\n\nfinished = Event().on(ev.on_done).do(hello)\n\nscheduler.add_event(ev)\nscheduler.add_event(finished)", "All events and tasks run in parallel, or at least get submitted and queued for execution in parallel. RP takes care of the actual execution.", "print '# of files', len(project.files)", "So for now let's run more trajectories and schedule computation of models at regular intervals.", "ev1 = Event().on(project.on_ntraj(range(30, 70, 4))).do(task_generator)\nev2 = Event().on(project.on_ntraj(38)).do(lambda: modeller.execute(list(project.trajectories))).repeat().until(ev1.on_done)\nscheduler.add_event(ev1)\nscheduler.add_event(ev2)\n\nlen(project.trajectories)\n\nlen(project.models)", ".repeat means to redo the same task when the last is finished (it will just append an infinite list of conditions to keep on running).\n.until specifies a termination condition. The event will not be executed once this condition is met. This makes most sense if you use .repeat, or if the trigger condition and the stopping condition should be independent. 
You might say, run 100 times unless you have a good enough model.", "print project.files", "Strategies (aka the brain)\nThe brain is just a collection of events. This makes it reusable and easy to extend.", "project.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
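The `Event().on(...).do(...)` trigger pattern described in the notebook above can be sketched in plain Python. This is a minimal illustration only, not the real adaptive-sampling API: the `Event` class, the `state` dict and the task names below are hypothetical stand-ins for the project/trajectory machinery.

```python
# Minimal sketch of an event: a trigger condition paired with a task
# generator that fires once the condition is met. All names here are
# illustrative, not part of the real adaptive-sampling API.

class Event:
    def __init__(self):
        self._trigger = None      # callable () -> bool
        self._generator = None    # callable () -> list of tasks
        self.done = False

    def on(self, condition):
        self._trigger = condition
        return self               # allow chaining .on(...).do(...)

    def do(self, generator):
        self._generator = generator
        return self

    def check(self):
        """Fire the task generator once, the first time the trigger holds."""
        if not self.done and self._trigger():
            self.done = True
            return self._generator()
        return []

# Toy project state: the number of trajectories finished so far.
state = {'ntraj': 0}
ev = Event().on(lambda: state['ntraj'] >= 3) \
            .do(lambda: ['run_trajectory_%d' % i for i in range(2)])

submitted = []
for _ in range(5):            # simulate five scheduler ticks
    state['ntraj'] += 1
    submitted += ev.check()

print(submitted)  # -> ['run_trajectory_0', 'run_trajectory_1']
```

The chaining (`.on` and `.do` both return `self`) is what makes the fluent `Event().on(...).do(...)` style work; a real scheduler would call something like `check()` on every registered event each tick.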
fja05680/pinkfish
examples/200.momentum-gem-portfolio/optimize.ipynb
mit
[ "Global Equities Momentum (GEM)\nGary Antonacci’s Dual Momentum approach is simple: by combining both relative momentum and absolute momentum (i.e. trend following), Dual Momentum seeks to rotate into areas of relative strength while preserving the flexibility to shift entirely to safety assets (e.g. short-term U.S. Treasury bills) during periods of pervasive, negative trends.\nAntonacci’s Global Equities Momentum (GEM) portfolio is constructed from three assets: U.S. stocks, international stocks and U.S. bonds. For the retail investor he recommends using low-cost ETFs: for example, VOO for U.S. stocks; VEU for non-U.S. stocks and AGG for U.S. aggregate bonds. Compare against a global equities benchmark like the ACWI ETF. \nAntonacci named his system “Dual Momentum” because he uses both relative momentum (the measure of the performance of an asset relative to another asset) and absolute momentum (the measure of performance relative to the risk-free rate, i.e. absolute excess return). To keep the process very simple to implement, he used a 12-month look-back period and an easy-to-execute buy and sell system.\nEvery month the investor places all funds in the equity ETF that has the best 12-month performance relative to the other equity ETFs, unless the absolute performance is worse than the return of six-month U.S. Treasuries (as measured by the BIL ETF). 
If absolute performance is below that of the BIL ETF, then the investor places all funds in AGG, the aggregate bond index.\nhttps://www.theemergingmarketsinvestor.com/using-momentum-in-emerging-markets/\nhttps://blog.thinknewfound.com/2019/01/fragility-case-study-dual-momentum-gem/\nhttps://seekingalpha.com/article/4010394-prospecting-dual-momentum-gem\nNote that this method has NOT done so well since 2018, and especially didn't handle the COVID downturn very well.\nOptimize: number of lookback months.", "import pandas as pd\nimport matplotlib.pyplot as plt\nimport datetime\n\nimport pinkfish as pf\nimport strategy\n\n# Format price data\npd.options.display.float_format = '{:0.2f}'.format\n\n%matplotlib inline\n\n# Set size of inline plots\n'''note: rcParams can't be in same cell as import matplotlib\n or %matplotlib inline\n \n %matplotlib notebook: will lead to interactive plots embedded within\n the notebook, you can zoom and resize the figure\n \n %matplotlib inline: only draw static images in the notebook\n'''\nplt.rcParams[\"figure.figsize\"] = (10, 7)
'drawdown_recovery_period',\n 'best_month',\n 'worst_month',\n 'sharpe_ratio',\n 'sortino_ratio',\n 'monthly_std',\n 'pct_time_in_market',\n 'total_num_trades',\n 'pct_profitable_trades',\n 'avg_points')\n\ndf = pf.optimizer_summary(strategies, metrics)\ndf", "Bar graphs", "pf.optimizer_plot_bar_graph(df, 'annual_return_rate')\npf.optimizer_plot_bar_graph(df, 'sharpe_ratio')\npf.optimizer_plot_bar_graph(df, 'max_closed_out_drawdown')", "Run Benchmark", "s = strategies[Xs[0]]\nbenchmark = pf.Benchmark('ACWI', s.capital, s.start, s.end, use_adj=True)\nbenchmark.run()", "Equity curve", "if optimize_lookback: Y = '12'\n\npf.plot_equity_curve(strategies[Y].dbal, benchmark=benchmark.dbal)\n\nstrategies_ = strategies[3:10:2]\n\nlabels = []\nfor strategy in strategies_:\n if optimize_lookback:\n label=strategy.options['lookback']\n labels.append(label)\n\npf.plot_equity_curves(strategies_, labels)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
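The monthly GEM decision rule spelled out in the notebook above (relative momentum between the two equity ETFs, then an absolute-momentum check against T-bills) can be sketched as follows. This is a hedged illustration: the function name and the hard-coded return values are hypothetical, and in the actual strategy the 12-month returns would be computed from the price data.

```python
# Hedged sketch of the GEM monthly signal. `ret12` holds hypothetical
# 12-month total returns keyed by ticker; real returns would be computed
# from (adjusted) prices.

def gem_signal(ret12):
    # Relative momentum: best performer of the two equity ETFs.
    equity = 'SPY' if ret12['SPY'] >= ret12['VEU'] else 'VEU'
    # Absolute momentum: the winner must beat T-bills (BIL), else hold bonds.
    return equity if ret12[equity] > ret12['BIL'] else 'AGG'

print(gem_signal({'SPY': 0.12, 'VEU': 0.07, 'AGG': 0.03, 'BIL': 0.01}))   # SPY wins both checks
print(gem_signal({'SPY': -0.05, 'VEU': -0.02, 'AGG': 0.03, 'BIL': 0.01})) # falls back to AGG
```

Sweeping the lookback window, as the optimization notebook does, amounts to replacing the fixed 12-month return with an N-month return for each candidate N.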
kaleoyster/nbi-data-science
Finding+Time+Interval+Before+Intervention+of+Bridge+Components.ipynb
gpl-2.0
[ "Modules and Connection to MongoDB", "from pymongo import MongoClient\nimport time\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import *\nimport datetime as dt\nimport random as rnd\nimport warnings\nimport datetime as dt\nimport csv\n%matplotlib inline\n\nwarnings.filterwarnings(action=\"ignore\")\nClient = MongoClient(\"mongodb://bridges:readonly@nbi-mongo.admin/bridge\")\ndb = Client.bridge\ncollection = db[\"bridges\"]", "Extraction of Data from MongoDB and Creating DataFrame", "years = [1992,1993,1994,1995,1996,1997,1998,1999,2000,2001,2002,2003,\n 2004,2005,2006,2007,2008,2009,2010,2011,2012,2013,2014,2015,2016]\n\n## convert following list to \n#states = ['25','04','08','38','09','19', '26', '48','35', '17', '51', \n# '23','16', '36','56','29', '39','28','11', '21', '18','06','47','12',\n# '24','34','46','13','55','30','54','15', '32', '37','10','33','44',\n# '50', '42','05','20','45','22','40','72','41','53', '01', '31','02','49']\n\nstates = ['31']\nmasterdec = []\nfor yr in years:\n for state in states:\n #print(state + str(year))\n pipeline = [{\"$match\":{\"$and\":[{\"year\":yr},{\"stateCode\":state}]}},\n {\"$project\":{\"_id\":0,\n \"year\":1,\n \"stateCode\":1, \n \"structureNumber\":1,\n \"yearBuilt\":1,\n \"yearReconstructed\":1,\n \"deck\":1,\n \"substructure\":1, ## rating of substructure\n \"superstructure\":1, ## rating of superstructure\n }}]\n dec = collection.aggregate(pipeline)\n for i in list(dec):\n masterdec.append(i)\n #masterdec.append(list(dec))\nconditionRatings = pd.DataFrame(masterdec)", "First five rows from the dataframe", "conditionRatings.head()", "Data filtration\n\nDeck, Substructure, Superstructure - survey records with 'N', 'NA' are removed", "before = len(conditionRatings)\nprint(\"Total Records before filtration: \",len(conditionRatings))\nconditionRatings = conditionRatings.loc[~conditionRatings['deck'].isin(['N','NA'])]\nconditionRatings = 
conditionRatings.loc[~conditionRatings['substructure'].isin(['N','NA'])]\nconditionRatings = conditionRatings.loc[~conditionRatings['superstructure'].isin(['N','NA'])]\nafter = len(conditionRatings)\nprint(\"Total Records after filtration: \",len(conditionRatings))\nprint(\"Difference: \", before - after)", "Data Exploration\nThe following attributes are explored in further detail:\n1. Year reconstructed \n2. Deck condition, Substructure condition and Superstructure condition\n1. Year reconstructed\nList of years in which bridges underwent reconstruction\nNote: The list contains 0 and -1; these values indicate that the year of reconstruction was either recorded incorrectly or was absent", "conditionRatings['yearReconstructed'].unique()", "Number of bridges with a valid year recorded for year reconstructed.", "yrCR = conditionRatings.loc[~conditionRatings['yearReconstructed'].isin([0,-1])]\nlen(yrCR)\n\nyrCR = yrCR.groupby('yearReconstructed').agg([\"count\"])\n\ncolumn = ['deck','stateCode','structureNumber','substructure','superstructure','year','yearBuilt']\nyrCR.columns = column\nyrCR.sort_values(by='deck',ascending = False)['structureNumber'][:15]", "2. Deck, Substructure, and Superstructure\n\nFinding the mean time interval before any intervention. Intervention can be described as repair, reconstruction, or rehabilitation.\n\nConstruction of a dictionary with 'structureNumber' as key and a list of condition ratings of that particular bridge component as value. 
This construction is carried out for all the bridge components: Deck, Substructure and Superstructure.", "deckCondition = {k: g[\"deck\"].tolist() for k,g in conditionRatings.groupby(\"structureNumber\")}\nsubstructureCondition = {k: g[\"substructure\"].tolist() for k,g in conditionRatings.groupby(\"structureNumber\")}\nsuperstructureCondition = {k: g[\"superstructure\"].tolist() for k,g in conditionRatings.groupby(\"structureNumber\")}", "We now have three dictionaries, for substructure, superstructure and deck, covering all the bridges from 1992 to 2016.\nFollowing is an example of the dictionary described previously.", "deckCondition", "We divide the deterioration time series on the basis of suspected interventions. If the deterioration of the condition rating is abruptly interrupted by a sudden increase in condition rating, we consider a possible case of intervention (repair, reconstruction, or rehabilitation), and hence the time interval is divided at the point of intervention, the splitting point, with the second interval calculated from the next point.\nA sudden increase of two or more in condition rating is considered a possible intervention. 
An increase of 1 in condition rating is not considered a possible intervention, since ratings are assigned by inspectors and could be subjective.\nThe following function will split the rating history into all possible intervals and return the list of intervals along with their count.", "def findAllIntervals(lst):\n fList = []\n temp = []\n i = 0\n j = 1\n for k in lst:\n if j == len(lst):\n temp.append(int(lst[i]))\n fList.append(temp)\n return (fList,len(fList))\n if lst[i] == lst[j]:\n pass\n temp.append(int(lst[i]))\n if lst[i] < lst[j]:\n diff = int(lst[j]) - int(lst[i])\n if diff > 1:\n #break\n fList.append(temp)\n temp = []\n pass\n i = i + 1\n j = j + 1\n ", "Calculating the mean time interval for the first and second time intervals for Deck", "\ndeckIntervals = []\ndeckIntervalSize = []\nfor i in deckCondition.values():\n lst, size = findAllIntervals(i)\n deckIntervalSize.append(size)\n deckIntervals.append(lst)\n \n#\nfrom collections import Counter\nkeys =list(Counter(deckIntervalSize).keys())\nvalues =list(Counter(deckIntervalSize).values())\n\nplt.figure(figsize=(10,8))\nplt.bar(keys,values)\nplt.xlabel(\"No. of Intervals\")\nplt.ylabel(\"No. of Records\")\nplt.title(\"No. of Records vs No. of Intervals\")\nplt.show()\n\n\n## Filter records with two or more intervals\ndeckIntervalGt2 = []\nfor index in range(0,len(deckIntervalSize)-1,1):\n if deckIntervalSize[index] > 1:\n deckIntervalGt2.append(deckIntervals[index])\n\n\nfirst_interval = []\nsecond_interval = []\nfor intervals in deckIntervalGt2:\n first_interval.append(len(intervals[0]))\n second_interval.append(len(intervals[1]))\nprint(\"Mean of the first time intervals :\", np.mean(first_interval))\nprint(\"Mean of the second time intervals :\", np.mean(second_interval))", "Contrary to the hypothesis, the mean time of the second interval is greater than that of the first interval for deck. 
But not all intervals start from rating 9, and some of the bridges might have been reconstructed before 1992.\nCalculation of the first condition rating of the first and second time intervals.", "avgStartOfFirstInterval = []\navgStartOfSecondInterval = []\nfor intervals in deckIntervalGt2:\n avgStartOfFirstInterval.append(intervals[0][0])\n avgStartOfSecondInterval.append(intervals[1][0])\nprint(\"Mean of the first condition rating of first time intervals :\", np.mean(avgStartOfFirstInterval))\nprint(\"Mean of the first condition rating of second time intervals :\", np.mean(avgStartOfSecondInterval))", "Calculating the mean time interval for the first and second time intervals for Superstructure", "superstructureIntervals = []\nsuperstructureIntervalSize = []\nfor i in superstructureCondition.values():\n lst, size = findAllIntervals(i)\n superstructureIntervalSize.append(size)\n superstructureIntervals.append(lst)\n \n#\nfrom collections import Counter\nkeys_superstructure =list(Counter(superstructureIntervalSize).keys())\nvalues_superstructure =list(Counter(superstructureIntervalSize).values())\n\nplt.figure(figsize=(10,8))\nplt.bar(keys_superstructure,values_superstructure)\nplt.xlabel(\"No. of Intervals\")\nplt.ylabel(\"No. of Records\")\nplt.xticks([1,2,3,4,5])\nplt.title(\"No. of Records vs No. 
of Intervals\")\nplt.show()\n\n\n## Filter records with only two or more interval\nsuperstructureIntervalGt2 = []\nfor index in range(0,len(superstructureIntervalSize)-1,1):\n if superstructureIntervalSize[index] > 1:\n superstructureIntervalGt2.append(superstructureIntervals[index])\n\nsuperstructureFirstInterval = []\nsuperstructureSecondInterval = []\nfor intervals in superstructureIntervalGt2:\n superstructureFirstInterval.append(len(intervals[0]))\n superstructureSecondInterval.append(len(intervals[1]))\nprint(\"Mean of the first time intervals :\", np.mean(superstructureFirstInterval))\nprint(\"Mean of the second time intervals :\", np.mean(superstructureSecondInterval))", "As opposed to the hypothesis the mean time for second interval is greater than first time interval for superstructure.", "avgStartOfFirstInterval_sp = []\navgStartOfSecondInterval_sp = []\nfor intervals in superstructureIntervalGt2:\n avgStartOfFirstInterval_sp.append(intervals[0][0])\n avgStartOfSecondInterval_sp.append(intervals[1][0])\nprint(\"Mean of the first condition rating of first time intervals :\", np.mean(avgStartOfFirstInterval_sp))\nprint(\"Mean of the first condition rating of second time intervals :\", np.mean(avgStartOfSecondInterval_sp))", "Calculating the mean time interval for first and second time interval for Substructure", "substructureIntervals = []\nsubstructureIntervalSize = []\nfor i in substructureCondition.values():\n lst, size = findAllIntervals(i)\n substructureIntervalSize.append(size)\n substructureIntervals.append(lst)\n \n#\nfrom collections import Counter\nkeys_substructure =list(Counter(substructureIntervalSize).keys())\nvalues_substructure =list(Counter(substructureIntervalSize).values())\n\nplt.figure(figsize=(10,8))\nplt.bar(keys_substructure,values_substructure)\nplt.xlabel(\"No. of Intervals\")\nplt.ylabel(\"No. of Records\")\nplt.xticks([1,2,3,4,5])\nplt.title(\"No. of Records vs No. 
of Intervals\")\nplt.show()\n\n## Filter records with only two or more interval\nsubstructureIntervalGt2 = []\nfor index in range(0,len(substructureIntervalSize)-1,1):\n if substructureIntervalSize[index] > 1:\n substructureIntervalGt2.append(substructureIntervals[index])\n\nsubstructureFirstInterval = []\nsubstructureSecondInterval = []\nfor intervals in substructureIntervalGt2:\n substructureFirstInterval.append(len(intervals[0]))\n substructureSecondInterval.append(len(intervals[1]))\nprint(\"Mean of the first time intervals :\", np.mean(substructureFirstInterval))\nprint(\"Mean of the second time intervals :\", np.mean(substructureSecondInterval))\n\navgStartOfFirstInterval_sb = []\navgStartOfSecondInterval_sb = []\nfor intervals in substructureIntervalGt2:\n avgStartOfFirstInterval_sb.append(intervals[0][0])\n avgStartOfSecondInterval_sb.append(intervals[1][0])\nprint(\"Mean of the first condition rating of first time intervals :\", np.mean(avgStartOfFirstInterval_sb))\nprint(\"Mean of the first condition rating of second time intervals :\", np.mean(avgStartOfSecondInterval_sb))", "Consideration: \n1. 
We might consider bridges built only after 1992 to get an unbiased mean of the first time interval.", "CR1992 = conditionRatings.loc[conditionRatings['yearBuilt'] >=1992] \nCR1992.head()", "Deck built after 1992", "deckCondition_1992 = {k: g[\"deck\"].tolist() for k,g in CR1992.groupby(\"structureNumber\")}\nsubstructureCondition_1992 = {k: g[\"substructure\"].tolist() for k,g in CR1992.groupby(\"structureNumber\")}\nsuperstructureCondition_1992 = {k: g[\"superstructure\"].tolist() for k,g in CR1992.groupby(\"structureNumber\")}\n\n\ndeckIntervals_1992 = []\ndeckIntervalSize_1992 = []\nfor i in deckCondition_1992.values():\n lst, size = findAllIntervals(i)\n deckIntervalSize_1992.append(size)\n deckIntervals_1992.append(lst)\n \n#\nfrom collections import Counter\nkeys_d1992 =list(Counter(deckIntervalSize_1992).keys())\nvalues_d1992 =list(Counter(deckIntervalSize_1992).values())\n\nplt.figure(figsize=(10,8))\nplt.bar(keys_d1992,values_d1992)\nplt.xlabel(\"No. of Intervals\")\nplt.ylabel(\"No. of Records\")\nplt.xticks([1,2,3,4,5])\nplt.title(\"No. of Records vs No. 
of Intervals\")\nplt.show()\n\n## Filter records with only two or more interval\ndeckIntervalGt2_1992 = []\nfor index in range(0,len(deckIntervalSize_1992)-1,1):\n if deckIntervalSize_1992[index] > 1:\n deckIntervalGt2_1992.append(deckIntervals_1992[index])\n \ndeckFirstInterval_1992 = []\ndeckSecondInterval_1992 = []\nfor intervals in deckIntervalGt2_1992:\n deckFirstInterval_1992.append(len(intervals[0]))\n deckSecondInterval_1992.append(len(intervals[1]))\nprint(\"Mean of the first time intervals :\", np.mean(deckFirstInterval_1992))\nprint(\"Mean of the second time intervals :\", np.mean(deckSecondInterval_1992))\nprint()\navgStartOfFirstInterval_1992 = []\navgStartOfSecondInterval_1992 = []\nfor intervals in deckIntervalGt2_1992:\n avgStartOfFirstInterval_1992.append(intervals[0][0])\n avgStartOfSecondInterval_1992.append(intervals[1][0])\nprint(\"Mean of the first condition rating of first time intervals :\", np.mean(avgStartOfFirstInterval_1992))\nprint(\"Mean of the first condition rating of second time intervals :\", np.mean(avgStartOfSecondInterval_1992))", "Superstructure built after 1992", "\nsuperstructureIntervals_1992 = []\nsuperstructureIntervalSize_1992 = []\nfor i in superstructureCondition_1992.values():\n lst, size = findAllIntervals(i)\n superstructureIntervalSize_1992.append(size)\n superstructureIntervals_1992.append(lst)\n \n#\nfrom collections import Counter\nkeys_sp1992 =list(Counter(superstructureIntervalSize_1992).keys())\nvalues_sp1992 =list(Counter(superstructureIntervalSize_1992).values())\n\nplt.figure(figsize=(10,8))\nplt.bar(keys_sp1992,values_sp1992)\nplt.xlabel(\"No. of Intervals\")\nplt.ylabel(\"No. of Records\")\nplt.xticks([1,2,3,4,5])\nplt.title(\"No. of Records vs No. 
of Intervals\")\nplt.show()\n\nsuperstructureIntervalGt2_1992 = []\nfor index in range(0,len(superstructureIntervalSize_1992)-1,1):\n if superstructureIntervalSize_1992[index] > 1:\n superstructureIntervalGt2_1992.append(superstructureIntervals_1992[index])\n \nsuperstructureFirstInterval_1992 = []\nsuperstructureSecondInterval_1992 = []\nfor intervals in superstructureIntervalGt2_1992:\n superstructureFirstInterval_1992.append(len(intervals[0]))\n superstructureSecondInterval_1992.append(len(intervals[1]))\nprint(\"Mean of the first time intervals :\", np.mean(superstructureFirstInterval_1992))\nprint(\"Mean of the second time intervals :\", np.mean(superstructureSecondInterval_1992))\nprint()\n\navgStartOfFirstInterval_sp_1992 = []\navgStartOfSecondInterval_sp_1992 = []\nfor intervals in superstructureIntervalGt2_1992:\n avgStartOfFirstInterval_sp_1992.append(intervals[0][0])\n avgStartOfSecondInterval_sp_1992.append(intervals[1][0])\nprint(\"Mean of the first condition rating of first time intervals :\", np.mean(avgStartOfFirstInterval_sp_1992))\nprint(\"Mean of the first condition rating of second time intervals :\", np.mean(avgStartOfSecondInterval_sp_1992))", "Substructure built after 1992", "substructureIntervals_1992 = []\nsubstructureIntervalSize_1992 = []\nfor i in substructureCondition_1992.values():\n lst, size = findAllIntervals(i)\n substructureIntervalSize_1992.append(size)\n substructureIntervals_1992.append(lst)\n \n#\nfrom collections import Counter\nkeys_sb1992 =list(Counter(substructureIntervalSize_1992).keys())\nvalues_sb1992 =list(Counter(substructureIntervalSize_1992).values())\n\nplt.figure(figsize=(10,8))\nplt.bar(keys_sb1992,values_sb1992)\nplt.xlabel(\"No. of Intervals\")\nplt.ylabel(\"No. of Records\")\nplt.xticks([1,2,3,4,5])\nplt.title(\"No. of Records vs No. 
of Intervals\")\nplt.show()\n\nsubstructureIntervalGt2_1992 = []\nfor index in range(0,len(substructureIntervalSize_1992)-1,1):\n if substructureIntervalSize_1992[index] > 1:\n substructureIntervalGt2_1992.append(substructureIntervals_1992[index])\n \nsubstructureFirstInterval_1992 = []\nsubstructureSecondInterval_1992 = []\nfor intervals in substructureIntervalGt2_1992:\n substructureFirstInterval_1992.append(len(intervals[0]))\n substructureSecondInterval_1992.append(len(intervals[1]))\nprint(\"Mean of the first time intervals :\", np.mean(substructureFirstInterval_1992))\nprint(\"Mean of the second time intervals :\", np.mean(substructureSecondInterval_1992))\nprint()\n\navgStartOfFirstInterval_sb_1992 = []\navgStartOfSecondInterval_sb_1992 = []\nfor intervals in substructureIntervalGt2_1992:\n avgStartOfFirstInterval_sb_1992.append(intervals[0][0])\n avgStartOfSecondInterval_sb_1992.append(intervals[1][0])\nprint(\"Mean of the first condition rating of first time intervals :\", np.mean(avgStartOfFirstInterval_sb_1992))\nprint(\"Mean of the first condition rating of second time intervals :\", np.mean(avgStartOfSecondInterval_sb_1992))\n\n## Consider creating a deterioration model and curves\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
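The interval-splitting rule applied throughout the notebook above (a jump of two or more in condition rating marks a suspected intervention and starts a new deterioration interval) can be restated more compactly. The function name below is illustrative, not from the notebook, and the rating history is hand-made.

```python
# Compact restatement of the interval-splitting rule used above: walk
# consecutive rating pairs, and whenever the rating jumps up by 2 or more
# (a suspected intervention), close the current interval and start a new one.

def split_intervals(ratings):
    intervals, current = [], []
    for prev, nxt in zip(ratings, ratings[1:]):
        current.append(int(prev))
        if int(nxt) - int(prev) > 1:   # sudden improvement => intervention
            intervals.append(current)
            current = []
    current.append(int(ratings[-1]))   # close the final interval
    intervals.append(current)
    return intervals, len(intervals)

print(split_intervals(['7', '7', '6', '5', '8', '8', '7']))
# -> ([[7, 7, 6, 5], [8, 8, 7]], 2)
```

The length of each sub-list is the duration (in annual inspection cycles) of one deterioration interval, which is what the mean-interval comparisons above are computed from.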
jorgemauricio/INIFAP_Course
ejercicios/Pandas/8_Visualizacion con Pandas.ipynb
mit
[ "Visualizing Data with Pandas\nIn this exercise we will demonstrate how data can be visualized using the Pandas library\nImport libraries", "import numpy as np\nimport pandas as pd\n%matplotlib inline", "The data\nFor this exercise, fictitious data was created; it can be found in the data folder under the names df1, df2 and df3", "df1 = pd.read_csv('../data/df1',index_col=0)\ndf2 = pd.read_csv('../data/df2')", "Style sheets\nMatplotlib has style sheets that can be used to create plots. These style sheets include 'bmh', 'fivethirtyeight', 'ggplot' and more. They basically define distinct style rules that can be applied easily.\n Before using plt.style.use() your plots look like the following:", "df1['A'].hist()", "Using styles:", "import matplotlib.pyplot as plt\nplt.style.use('ggplot')", "Now your plot will look like this:", "df1['A'].hist()\n\nplt.style.use('bmh')\ndf1['A'].hist()\n\nplt.style.use('dark_background')\ndf1['A'].hist()\n\nplt.style.use('fivethirtyeight')\ndf1['A'].hist()\n\nplt.style.use('ggplot')", "For now we will use the ggplot style\nPlot types\nThere are several plot types built into pandas, most of them for statistical data:\n\ndf.plot.area \ndf.plot.barh \ndf.plot.density \ndf.plot.hist \ndf.plot.line \ndf.plot.scatter\ndf.plot.bar \ndf.plot.box \ndf.plot.hexbin \ndf.plot.kde \ndf.plot.pie\n\nYou can also call df.plot(kind='hist') and replace the kind argument with any of the following keys (e.g. 'box','barh', etc.)\n\nArea", "df2.plot.area(alpha=0.4)", "Bar plots", "df2.head()\n\ndf2.plot.bar()\n\ndf2.plot.bar(stacked=True)", "Histograms", "df1['A'].plot.hist(bins=50)", "Line plots", "df1.plot.line(x=df1.index,y='B',figsize=(12,3),lw=1)", "Scatter plots", "df1.plot.scatter(x='A',y='B')", "You can use c to change the displayed color and cmap to change the colormap\nTo see all the available colormaps, visit: http://matplotlib.org/users/colormaps.html", "df1.plot.scatter(x='A',y='B',c='C',cmap='coolwarm')", "You can also use s to set the marker size from another column. The s parameter must be an array, not just the column name:", "df1.plot.scatter(x='A',y='B',s=df1['C']*100)", "Box plots", "df2.plot.box() # You can also include the by= argument to group the data", "Hexagonal plot\nVery useful for bivariate data, an alternative to the scatter plot:", "df = pd.DataFrame(np.random.randn(1000, 2), columns=['a', 'b'])\ndf.plot.hexbin(x='a',y='b',gridsize=25,cmap='Oranges')", "Kernel Density Estimation (KDE)", "df2['a'].plot.kde()\n\ndf2.plot.density()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sandeep-n/incubator-systemml
samples/jupyter-notebooks/ALS_python_demo.ipynb
apache-2.0
[ "Scaling Alternating Least Squares Using Apache SystemML\nRecommendation systems based on the Alternating Least Squares (ALS) algorithm have gained popularity in recent years because, in general, they perform better than content-based approaches.\nALS is a matrix factorization algorithm, where a user-item matrix is factorized into two low-rank non-orthogonal matrices:\n$$R = U M$$\nThe elements, $r_{ij}$, of matrix $R$ can represent, for example, ratings assigned to the $j$th movie by the $i$th user.\nThis matrix factorization assumes that each user can be described by $k$ latent features. Similarly, each item/movie can also be represented by $k$ latent features. The user rating of a particular movie can thus be approximated by the product of two $k$-dimensional vectors:\n$$r_{ij} = {\\bf u}_i^T {\\bf m}_j$$\nThe vectors ${\\bf u}_i$ are rows of $U$ and ${\\bf m}_j$'s are columns of $M$. These can be learned by minimizing the cost function:\n$$f(U, M) = \\sum_{i,j} \\left( r_{ij} - {\\bf u}_i^T {\\bf m}_j \\right)^2 = \\| R - UM \\|^2$$\nRegularized ALS\nIn this notebook, we'll implement the ALS algorithm with weighted-$\\lambda$-regularization formulated by Zhou et al. The cost function with such regularization is:\n$$f(U, M) = \\sum_{i,j} I_{ij}\\left( r_{ij} - {\\bf u}_i^T {\\bf m}_j \\right)^2 + \\lambda \\left( \\sum_i n_{u_i} \\| {\\bf u}_i \\|^2 + \\sum_j n_{m_j} \\| {\\bf m}_j \\|^2 \\right)$$\nHere, $\\lambda$ is the usual regularization parameter. $n_{u_i}$ and $n_{m_j}$ represent the number of ratings of user $i$ and movie $j$ respectively. 
$I_{ij}$ is an indicator variable such that $I_{ij} = 1$ if $r_{ij}$ exists and $I_{ij} = 0$ otherwise.\nIf we fix ${\\bf m}_j$, we can determine ${\\bf u}_i$ by solving a regularized least squares problem:\n$$ \\frac{1}{2} \\frac{\\partial f}{\\partial {\\bf u}_i} = 0$$\nThis gives the following matrix equation:\n$$\\left(M \\text{diag}({\\bf I}_i^T) M^{T} + \\lambda n_{u_i} E\\right) {\\bf u}_i = M {\\bf r}_i^T$$\nHere ${\\bf r}_i^T$ is the $i$th row of $R$. Similarly, ${\\bf I}_i$ is the $i$th row of the matrix $I = [I_{ij}]$. Please see Zhou et al. for details.\nReading Netflix Movie Ratings Data\nIn this example, we'll use Netflix movie ratings. This data set can be downloaded from here. We'll use Spark to read the movie ratings data into a dataframe. The CSV files have four columns: MovieID, UserID, Rating, Date.", "from pyspark.sql import SparkSession\nfrom pyspark.sql.types import *\nfrom systemml import MLContext, dml\n\nspark = SparkSession\\\n .builder\\\n .appName(\"als-example\")\\\n .getOrCreate()\n\nschema = StructType([StructField(\"movieId\", IntegerType(), True),\n StructField(\"userId\", IntegerType(), True),\n StructField(\"rating\", IntegerType(), True),\n StructField(\"date\", StringType(), True)])\n\nratings = spark.read.csv(\"./netflix/training_set_normalized/mv_0*.txt\", schema = schema)\nratings = ratings.select('userId', 'movieId', 'rating')\nratings.show(10)\n\nratings.describe().show()", "ALS implementation using DML\nThe following script implements the regularized ALS algorithm as described above. One thing to note here is that we remove empty rows/columns from the rating matrix before running the algorithm. 
We'll add back the zero rows and columns to matrices $U$ and $M$ after the algorithm converges.", "#-----------------------------------------------------------------\n# Create kernel in SystemML's DSL using the R-like syntax for ALS\n# Algorithms available at : https://systemml.apache.org/algorithms\n# Below algorithm based on ALS-CG.dml\n#-----------------------------------------------------------------\nals_dml = \\\n\"\"\"\n# Default values of some parameters\nr = rank\nmax_iter = 50\ncheck = TRUE\nthr = 0.01\n\nR = table(X[,1], X[,2], X[,3])\n\n# check the input matrix R, if some rows or columns contain only zeros remove them from R\nR_nonzero_ind = R != 0;\nrow_nonzeros = rowSums(R_nonzero_ind);\ncol_nonzeros = t(colSums (R_nonzero_ind));\norig_nonzero_rows_ind = row_nonzeros != 0;\norig_nonzero_cols_ind = col_nonzeros != 0;\nnum_zero_rows = nrow(R) - sum(orig_nonzero_rows_ind);\nnum_zero_cols = ncol(R) - sum(orig_nonzero_cols_ind);\nif (num_zero_rows > 0) {\n print(\"Matrix R contains empty rows! These rows will be removed.\");\n R = removeEmpty(target = R, margin = \"rows\");\n}\nif (num_zero_cols > 0) {\n print (\"Matrix R contains empty columns! 
These columns will be removed.\");\n R = removeEmpty(target = R, margin = \"cols\");\n}\nif (num_zero_rows > 0 | num_zero_cols > 0) {\n print(\"Recomputing nonzero rows and columns!\");\n R_nonzero_ind = R != 0;\n row_nonzeros = rowSums(R_nonzero_ind);\n col_nonzeros = t(colSums (R_nonzero_ind));\n}\n\n###### MAIN PART ######\n\nm = nrow(R);\nn = ncol(R);\n\n# initializing factor matrices\nU = rand(rows = m, cols = r, min = -0.5, max = 0.5);\nM = rand(rows = n, cols = r, min = -0.5, max = 0.5);\n\n# initializing transformed matrices\nRt = t(R);\n\n\nloss = matrix(0, rows=max_iter+1, cols=1)\nif (check) {\n loss[1,] = sum(R_nonzero_ind * (R - (U %*% t(M)))^2) + lambda * (sum((U^2) * row_nonzeros) +\n sum((M^2) * col_nonzeros));\n print(\"----- Initial train loss: \" + toString(loss[1,1]) + \" -----\");\n}\n\nlambda_I = diag (matrix (lambda, rows = r, cols = 1));\nit = 0;\nconverged = FALSE;\nwhile ((it < max_iter) & (!converged)) {\n it = it + 1;\n # keep M fixed and update U\n parfor (i in 1:m) {\n M_nonzero_ind = t(R[i,] != 0);\n M_nonzero = removeEmpty(target=M * M_nonzero_ind, margin=\"rows\");\n A1 = (t(M_nonzero) %*% M_nonzero) + (as.scalar(row_nonzeros[i,1]) * lambda_I); # coefficient matrix\n U[i,] = t(solve(A1, t(R[i,] %*% M)));\n }\n\n # keep U fixed and update M\n parfor (j in 1:n) {\n U_nonzero_ind = t(Rt[j,] != 0)\n U_nonzero = removeEmpty(target=U * U_nonzero_ind, margin=\"rows\");\n A2 = (t(U_nonzero) %*% U_nonzero) + (as.scalar(col_nonzeros[j,1]) * lambda_I); # coefficient matrix\n M[j,] = t(solve(A2, t(Rt[j,] %*% U)));\n }\n\n # check for convergence\n if (check) {\n loss_init = as.scalar(loss[it,1])\n loss_cur = sum(R_nonzero_ind * (R - (U %*% t(M)))^2) + lambda * (sum((U^2) * row_nonzeros) +\n sum((M^2) * col_nonzeros));\n loss_dec = (loss_init - loss_cur) / loss_init;\n print(\"Train loss at iteration (M) \" + it + \": \" + loss_cur + \" loss-dec \" + loss_dec);\n if (loss_dec >= 0 & loss_dec < thr | loss_init == 0) {\n print(\"----- ALS 
converged after \" + it + \" iterations!\");\n converged = TRUE;\n }\n loss[it+1,1] = loss_cur\n }\n} # end of while loop\n\nloss = loss[1:it+1,1]\nif (check) {\n print(\"----- Final train loss: \" + toString(loss[it+1,1]) + \" -----\");\n}\n\nif (!converged) {\n print(\"Max iteration reached but not converged!\");\n}\n\n# inject 0s in U if original R had empty rows\nif (num_zero_rows > 0) {\n U = removeEmpty(target = diag(orig_nonzero_rows_ind), margin = \"cols\") %*% U;\n}\n# inject 0s in M if original R had empty columns\nif (num_zero_cols > 0) {\n M = removeEmpty(target = diag(orig_nonzero_cols_ind), margin = \"cols\") %*% M;\n}\nM = t(M);\n\"\"\"", "Running the Algorithm\nWe'll first create an MLContext object, which is the entry point for SystemML. Inputs and outputs are defined through a dml function.", "ml = MLContext(sc)\n\n# Define input/output variables for DML script\nalsScript = dml(als_dml).input(\"X\", ratings) \\\n .input(\"lambda\", 0.01) \\\n .input(\"rank\", 100) \\\n .output(\"U\", \"M\", \"loss\")\n\n# Execute script\nres = ml.execute(alsScript)\nU, M, loss = res.get('U','M', \"loss\")\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(loss.toNumPy(), 'o');", "Predictions\nOnce $U$ and $M$ are learned from the data, we can recommend movies for any users. If $U'$ represents the users for whom we seek recommendations, we first obtain the predicted ratings for all the movies by users in $U'$:\n$$R' = U' M$$\nFinally, we sort the ratings for each user and present the top 5 movies with the highest predicted ratings. The following DML script implements this. 
Since we're using a very low rank in this example, these recommendations are not meaningful.", "predict_dml = \\\n\"\"\"\nR = table(R[,1], R[,2], R[,3])\nK = 5\nRrows = nrow(R);\nRcols = ncol(R);\n\nzero_cols_ind = (colSums(M != 0)) == 0;\nK = min(Rcols - sum(zero_cols_ind), K);\n\nn = nrow(X);\n\nUrows = nrow(U);\nMcols = ncol(M);\n\nX_user_max = max(X[,1]);\n\nif (X_user_max > Rrows) {\n\tstop(\"Predictions cannot be provided. Maximum user-id exceeds the number of rows of R.\");\n}\nif (Urows != Rrows | Mcols != Rcols) {\n\tstop(\"Number of rows of U (columns of M) does not match the number of rows (column) of R.\");\n}\n\n# creates projection matrix to select users\ns = seq(1, n);\nones = matrix(1, rows = n, cols = 1);\nP = table(s, X[,1], ones, n, Urows);\n\n\n# selects users from factor U\nU_prime = P %*% U;\n\n# calculate rating matrix for selected users\nR_prime = U_prime %*% M;\n\n# selects users from original R\nR_users = P %*% R;\n\n# create indicator matrix to remove existing ratings for given users\nI = R_users == 0;\n\n# removes already rated items and creates the user2item matrix\nR_prime = R_prime * I;\n\n# stores sorted movies for selected users \nR_top_indices = matrix(0, rows = nrow (R_prime), cols = K);\nR_top_values = matrix(0, rows = nrow (R_prime), cols = K);\n\n# a large number to mask the max ratings\nrange = max(R_prime) - min(R_prime) + 1;\n\n# uses rowIndexMax/rowMaxs to update kth ratings\nfor (i in 1:K){\n\trowIndexMax = rowIndexMax(R_prime);\n\trowMaxs = rowMaxs(R_prime);\n\tR_top_indices[,i] = rowIndexMax;\n\tR_top_values[,i] = rowMaxs;\n\tR_prime = R_prime - range * table(seq (1, nrow(R_prime), 1), rowIndexMax, nrow(R_prime), ncol(R_prime));\n}\n\nR_top_indices = R_top_indices * (R_top_values > 0);\n\n# cbind users as a first column\nR_top_indices = cbind(X[,1], R_top_indices);\nR_top_values = cbind(X[,1], R_top_values);\n\"\"\"\n\n# users for whom we want to recommend movies\nids = 
[116,126,130,131,133,142,149,158,164,168,169,177,178,183,188,189,192,195,199,201,215,231,242,247,248,\n 250,261,265,266,267,268,283,291,296,298,299,301,302,304,305,307,308,310,312,314,330,331,333,352,358,363,\n 368,369,379,383,384,385,392,413,416,424,437,439,440,442,453,462,466,470,471,477,478,479,481,485,490,491]\n\nusers = spark.createDataFrame([[i] for i in ids])\n\npredScript = dml(predict_dml).input(\"R\", ratings) \\\n .input(\"X\", users) \\\n .input(\"U\", U) \\\n .input(\"M\", M) \\\n .output(\"R_top_indices\")\n\npred = ml.execute(predScript).get(\"R_top_indices\")\n\npred = pred.toNumPy()", "Just for Fun!\nOnce we have the movie recommendations, we can show the movie posters for those recommendations. We'll fetch these movie posters from Wikipedia. If a movie's page doesn't exist on Wikipedia, we'll just list the movie title.", "import pandas as pd\ntitles = pd.read_csv(\"./netflix/movie_titles.csv\", header=None, sep=';', names=['movieID', 'year', 'title'])\n\nimport re\nimport wikipedia as wiki\nfrom bs4 import BeautifulSoup as bs\nimport requests as rq\nfrom IPython.core.display import Image, display\n\ndef get_poster(title):\n if title.endswith('Bonus Material'):\n # remove the 'Bonus Material' suffix (str.strip removes characters, not substrings)\n title = title[:-len('Bonus Material')].rstrip()\n \n title = re.sub(r'[^\\w\\s]','',title)\n matches = wiki.search(title)\n if not matches:\n return\n film = [s for s in matches if 'film)' in s]\n film = film[0] if len(film) > 0 else matches[0]\n\n try:\n url = wiki.page(film).url\n except:\n return\n\n html = rq.get(url)\n if html.status_code == 200:\n soup = bs(html.content, 'html.parser')\n infobox = soup.find('table', class_=\"infobox\")\n if (infobox):\n img = infobox.find('img')\n if img:\n display(Image('http:' + img['src']))\n\ndef show_recommendations(userId, preds):\n for row in preds:\n if int(row[0]) == userId:\n print(\"\\nrecommendations for userId\", int(row[0]))\n for title in titles.title[row[1:]].values:\n print(title)\n get_poster(title)\n break\n\nshow_recommendations(192, 
preds=pred)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
oroszl/szamprob
notebooks/Package02/feladat02.ipynb
gpl-3.0
[ "Exercises\n\nSolve every exercise in a separate notebook! \nThe name of the solution notebook must contain the number of the exercise! \nPut the solutions in the MEGOLDASOK folder!<br> Only exercises placed in the MEGOLDASOK folder will be graded!\nInclude the text of the exercise in the first markdown cell of the solution notebook! \nUse comments and markdown cells to explain what each piece of code is doing!<br> Exercises submitted without explanation count only as half an exercise!\n\n\n01-for\nDecide for each of the three arrays below whether it is part of the Fibonacci sequence! The first two elements of each array are guaranteed to be consecutive members of the Fibonacci sequence! \n- Write a code snippet that decides the question using for loop(s).\n- Also describe your findings in words in a markdown cell. \n - Which list is part of the Fibonacci sequence?\n - If one of them is not, also discuss why it is not!", "a=[12586269025, 20365011074, 32951280099, 53316291173, 86267571272, 139583862445, 225851433717,365435296162, 591286729879,\n 956722026041, 1548008755920, 2504730781961, 4052739537881, 6557470319842, 10610209857723, 17167680177565, 27777890035288,\n 44945570212853, 72723460248141, 117669030460994]\n\nb=[832040, 1346269, 2175309, 3524578, 5702887, 9227465, 14930352, 24157817, 39088169, 63245986]\n\nc=[267914296, 433494437, 701408733, 1134903170, 1836311903, 2971215073, 4807526976,7778742049,\n 12586269025, 20365011074, 32951280099, 53316291173, 86267571272]", "02-if\nWrite a function that, based on the given values of the input variables nap (day), ora (hour) and fiulany (boy/girl), decides what the person in question is doing at that time. Determine the possible activities according to the following rules:\n\nOn weekdays both boys and girls study in the morning.\nGirls have tea between 2 and 4 in the afternoon, otherwise they play with dolls.\nBoys play football between 12 and 4, and play marbles from 4 on.\nOn weekends everyone goes on a trip. On Saturday the boys go to the mountains and the girls to the sea, and on Sunday the other way around.\nEvery day everyone goes to sleep at 8 pm and gets up at 8 am.\n\nThe function should return a string whose value, according to the rules above, is one of: 'tanul','teázik','babázik','focizik','golyózik','tengernél kirándul','hegyekben kirándul','alszik'\nThe possible values of the three input variables are:\n\nnap : 'hétfő','kedd','szerda','csütörtök','péntek','szombat','vasárnap'\nora : an integer between 0 and 24\nfiulany: 'fiú','lány'", "def kiholmit(nap,ora,fiulany):\n \"...\" # the docstring goes here\n #\n # the magic goes here...\n #\n return # the return value goes here", "03-Geometric sequence\nWrite a function that produces a geometric sequence of length N from an initial value, a common ratio and an integer N.\n- Write a docstring!\n- The function should return a list!", "def mertani(x0,q,N):\n \"...\" # the docstring goes here\n #\n # the magic goes here...\n #", "04-Telephone exchange\nWrite a function that processes dictionaries containing names and phone numbers! \n- Use two input parameters: the first is a number, the second is a dictionary (dict).\n- The function should print the names of the people who live in the area whose area code (the first three digits) is given! \n- The return value of the function should be the number of people living in that area.\nHere is an example database:", "adatok={'Alonzo Hinton': '(855) 278-2590',\n 'Cleo Hennings': '(844) 832-0585',\n 'Daine Ventura': '(833) 832-5081',\n 'Esther Leeson': '(855) 485-0624',\n 'Gene Connell': '(811) 973-2926',\n 'Lashaun Bottorff': '(822) 687-1735',\n 'Marx Hermann': '(844) 164-8116',\n 'Nicky Duprey': '(811) 032-6328',\n 'Piper Subia': '(844) 373-4228',\n 'Zackary Palomares': '(822) 647-3686'}\n\ndef telefon_kozpont(korzet,adatok):\n \"Given the area code (korzet), I print who lives there.\"\n #\n #the magic goes here...\n #\n return # the return value goes here", "05-Variable number of arguments-I\nModify the geometric sequence function written in the third exercise so that:\n- if it receives one input value, that is taken as the initial value, the ratio is 0.5, and N is 10.\n- if it receives two input values, the first is the initial value, the second is the ratio, and N is 10.\n- if all three parameters are given, it behaves exactly as in the previous exercise.\n06-Variable number of arguments-II ☠\nWrite a function that evaluates a polynomial of arbitrary degree at a given point x!\nDetermine the degree and the coefficients of the polynomial from the variable-length argument a! Use the len() function, which can be applied to lists!", "def poly(x,*a):\n \"Polynomial function f(x)=\\sum_i a_i x^i\" #This is just the docstring\n #\n # the magic goes here...\n #\n return # the return value goes here", "07-Keyword function with a variable number of arguments ☠\nWrite a function that, for a given real value x, evaluates an arbitrary polynomial function, or its reciprocal!\nThe polynomial coefficients are given in a list of arbitrary length named args. If the function receives a third argument in the form of a keyword list, check what the 'fajta' keyword contains.\nIf the keyword is 'reciprok', compute the reciprocal of the polynomial! Otherwise return the value of the polynomial!", "def fuggveny(x,*args,**kwargs):\n \"Unless kwargs says otherwise, I evaluate a polynomial\"\n #\n #the magic goes here\n #\n if kwargs['fajta']=='inverz':\n #\n #\n else:\n #\n #\n \n #\n return #the return value goes here.." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/cccma/cmip6/models/sandbox-1/atmos.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: CCCMA\nSource ID: SANDBOX-1\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:46\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cccma', 'sandbox-1', 'atmos')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Overview\n2. Key Properties --&gt; Resolution\n3. Key Properties --&gt; Timestepping\n4. Key Properties --&gt; Orography\n5. Grid --&gt; Discretisation\n6. Grid --&gt; Discretisation --&gt; Horizontal\n7. Grid --&gt; Discretisation --&gt; Vertical\n8. Dynamical Core\n9. Dynamical Core --&gt; Top Boundary\n10. Dynamical Core --&gt; Lateral Boundary\n11. Dynamical Core --&gt; Diffusion Horizontal\n12. Dynamical Core --&gt; Advection Tracers\n13. Dynamical Core --&gt; Advection Momentum\n14. Radiation\n15. Radiation --&gt; Shortwave Radiation\n16. Radiation --&gt; Shortwave GHG\n17. Radiation --&gt; Shortwave Cloud Ice\n18. Radiation --&gt; Shortwave Cloud Liquid\n19. Radiation --&gt; Shortwave Cloud Inhomogeneity\n20. Radiation --&gt; Shortwave Aerosols\n21. Radiation --&gt; Shortwave Gases\n22. 
Radiation --&gt; Longwave Radiation\n23. Radiation --&gt; Longwave GHG\n24. Radiation --&gt; Longwave Cloud Ice\n25. Radiation --&gt; Longwave Cloud Liquid\n26. Radiation --&gt; Longwave Cloud Inhomogeneity\n27. Radiation --&gt; Longwave Aerosols\n28. Radiation --&gt; Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --&gt; Boundary Layer Turbulence\n31. Turbulence Convection --&gt; Deep Convection\n32. Turbulence Convection --&gt; Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --&gt; Large Scale Precipitation\n35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --&gt; Optical Cloud Properties\n38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\n39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --&gt; Isscp Attributes\n42. Observation Simulation --&gt; Cosp Attributes\n43. Observation Simulation --&gt; Radar Inputs\n44. Observation Simulation --&gt; Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --&gt; Orographic Gravity Waves\n47. Gravity Waves --&gt; Non Orographic Gravity Waves\n48. Solar\n49. Solar --&gt; Solar Pathways\n50. Solar --&gt; Solar Constant\n51. Solar --&gt; Orbital Parameters\n52. Solar --&gt; Insolation Ozone\n53. Volcanos\n54. Volcanos --&gt; Volcanoes Treatment \n1. Key Properties --&gt; Overview\nTop level key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of atmospheric model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.4. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.5. High Top\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. 
Timestep Shortwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the orography.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n", "4.2. Changes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n", "5. Grid --&gt; Discretisation\nAtmosphere grid discretisation\n5.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Discretisation --&gt; Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n", "6.3. Scheme Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation function order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.4. 
Horizontal Pole\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal discretisation pole singularity treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7. Grid --&gt; Discretisation --&gt; Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType of vertical coordinate system", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere dynamical core", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the dynamical core of the model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Timestepping Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestepping framework type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of the model prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Dynamical Core --&gt; Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary condition", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Top Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary heat treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Top Wind\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary wind treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Dynamical Core --&gt; Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nType of lateral boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Dynamical Core --&gt; Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal diffusion scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. 
Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal diffusion scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Dynamical Core --&gt; Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTracer advection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.3. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.4. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracer advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Dynamical Core --&gt; Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMomentum advection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. 
Scheme Staggering Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Radiation --&gt; Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. 
Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nShortwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Radiation --&gt; Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.3. Other Fluorinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Radiation --&gt; Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18. Radiation --&gt; Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Radiation --&gt; Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Radiation --&gt; Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21. Radiation --&gt; Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Radiation --&gt; Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLongwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23. Radiation --&gt; Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Other Fluorinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Radiation --&gt; Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.3. 
Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25. Radiation --&gt; Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. 
Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Radiation --&gt; Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27. Radiation --&gt; Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28. Radiation --&gt; Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere convection and turbulence", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. 
Turbulence Convection --&gt; Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBoundary layer turbulence scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBoundary layer turbulence scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.3. Closure Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoundary layer turbulence scheme closure order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Counter Gradient\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "31. Turbulence Convection --&gt; Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDeep convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Turbulence Convection --&gt; Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nShallow convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nshallow convection scheme type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nshallow convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n", "32.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33. 
Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Microphysics Precipitation --&gt; Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.2. Hydrometeors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLarge scale cloud microphysics processes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the atmosphere cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.3. Atmos Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n", "36.4. Uses Separate Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.6. Prognostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.7. Diagnostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.8. Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37. Cloud Scheme --&gt; Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37.2. Cloud Inhomogeneity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "38.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "38.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale water distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "39.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "39.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "39.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of observation simulator characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. 
Observation Simulation --&gt; Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator ISSCP top height estimation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. Top Height Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator ISSCP top height direction", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42. Observation Simulation --&gt; Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP run configuration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42.2. Number Of Grid Points\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of grid points", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.3. Number Of Sub Columns\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of sub-columns used to simulate sub-grid variability", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.4. Number Of Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of levels", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43. Observation Simulation --&gt; Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar frequency (Hz)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "43.3. 
Gas Absorption\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses gas absorption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "43.4. Effective Radius\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses effective radius", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "44. Observation Simulation --&gt; Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator lidar ice type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "44.2. Overlap\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator lidar overlap", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "45.2. Sponge Layer\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.3. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground wave distribution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.4. Subgrid Scale Orography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSubgrid scale orography effects taken into account.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46. Gravity Waves --&gt; Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "46.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47. Gravity Waves --&gt; Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "47.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.3. 
Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n", "47.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of solar insolation of the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "49. 
Solar --&gt; Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "50. Solar --&gt; Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the solar constant.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "50.2. Fixed Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "50.3. Transient Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nsolar constant transient characteristics (W m-2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51. Solar --&gt; Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. 
Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "51.2. Fixed Reference Date\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "51.3. Transient Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of transient orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51.4. Computation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used for computing orbital parameters.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "52. Solar --&gt; Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "54. Volcanos --&gt; Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", 
"code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
robertoalotufo/ia898
src/phasecorr.ipynb
mit
[ "Function phasecorr\nSynopsis\nComputes the phase correlation of two images.\n\ng = phasecorr(f,h)\nOUTPUT\ng: Image. Phase correlation map.\n\n\nINPUT\nf: Image. n-dimensional.\nh: Image. n-dimensional.\n\n\n\n\n\nDescription\nComputes the phase correlation of two n-dimensional images. Notice that the input images must have\nthe same dimension and size. The output is an image with the same dimension and size as the input images.\nThis output image is a phase correlation map where the point of maximum value corresponds to the\ntranslation between the input images.", "import numpy as np\ndef phasecorr(f,h):\n    F = np.fft.fftn(f)\n    H = np.fft.fftn(h)\n    T = F * np.conjugate(H)\n    R = T/np.abs(T)\n    g = np.fft.ifftn(R)\n    return g.real", "Examples", "testing = (__name__ == \"__main__\")\nif testing:\n    import numpy as np\n    import sys,os\n    ia898path = os.path.abspath('../../')\n    if ia898path not in sys.path:\n        sys.path.append(ia898path)\n    import ia898.src as ia\n    \n    %matplotlib inline\n    import matplotlib.image as mpimg\n", "Example 1\nShow that the point of maximum correlation gives the translation between an image and a noisy, translated copy of it.", "if testing:\n    # 2D example\n    f1 = mpimg.imread(\"../data/cameraman.tif\")\n    noise = np.random.rand(f1.shape[0],f1.shape[1])\n    f2 = ia.normalize(ia.ptrans(f1,(-1,50)) + 300 * noise)\n    g1 = ia.phasecorr(f1,f2)\n    i = np.argmax(g1)\n    row,col = np.unravel_index(i,g1.shape)\n    v = g1[row,col]\n    print(np.array(f1.shape) - np.array((row,col)))\n\nif testing:\n    print('max at:(%d, %d)' % (row,col))\n\n    ia.adshow(ia.normalize(f1), \"input image\")\n    ia.adshow(ia.normalize(f2), \"input image\")\n    ia.adshow(ia.normalize(g1), \"Correlation peak at (%d,%d) with %d\" % (row,col,v))", "Example 3\nShow how to perform Template Matching using phase correlation.", "if testing:\n    # 2D example\n    w1 = f1[27:69,83:147]\n    \n    h3 = np.zeros_like(f1)\n    h3[:w1.shape[0],:w1.shape[1]] = w1\n    noise = np.random.rand(h3.shape[0],h3.shape[1])\n    h3 = ia.normalize(h3 + 100 * noise)\n\n    h3 = ia.ptrans(h3, - 
np.array(w1.shape, dtype=int)//2)\n    \n    g9 = ia.phasecorr(f1,h3)\n    \n    p3 = np.unravel_index(np.argmax(g9), g9.shape)\n    g11 = ia.ptrans(h3,p3)\n    \n    ia.adshow(ia.normalize(f1), \"Original 2D image - Cameraman\")\n    ia.adshow(ia.normalize(w1), \"2D Template\")\n    ia.adshow(ia.normalize(h3), \"2D Template same size as f1\")\n    ia.adshow(ia.normalize(g9), \"Cameraman - Correlation peak: %s\"%str(p3))\n    ia.adshow(ia.normalize((g11*2.+f1)/3.), \"Template translated mixed with original image\")", "Equation\nWe calculate the discrete Fourier transform of the input images $f$ and $h$:\n$$ F = \\mathcal{F}(f); $$\n$$ H = \\mathcal{F}(h). $$\nNext, the following equation computes $R$\n$$ R = \\dfrac{F H^*}{|F H^*|}. $$\nFinally, the result is given by applying the inverse discrete Fourier transform to $R$\n$$ g = \\mathcal{F}^{-1}(R). $$\nThe displacement (not implemented in this function) can be obtained by:\n$$ (row, col) = \\arg\\max{g} $$\nSee also\n\nia636:iadft iadft -- Discrete Fourier Transform.\nia636:iaidft iaidft -- Inverse Discrete Fourier Transform.\nia636:iaptrans iaptrans -- Periodic translation.\nia636:iamosaic iamosaic -- Creates a mosaic of images from the input volume (3D).\nia636:iacorrdemo iacorrdemo -- Illustrate the Template Matching technique.\n\nReferences\n\nE. De Castro and C. Morandi \"Registration of Translated and Rotated Images Using Finite Fourier Transforms\", IEEE Transactions on pattern analysis and machine intelligence, Sept. 1987.\n\nContributions\n\nAndré Luis da Costa, 1st semester 2011" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bloomberg/bqplot
examples/Marks/Pyplot/Lines.ipynb
apache-2.0
[ "Introduction\nThe Lines object provides the following features:\n\n- Ability to plot a single set or multiple sets of y-values as a function of a set or multiple sets of x-values\n- Ability to style the line object in different ways, by setting different attributes such as the colors, line_style, stroke_width etc.\n- Ability to specify a marker at each point passed to the line. The marker can be a shape which is at the data points between which the line is interpolated and can be set through the markers attribute\n\nThe Lines object has the following attributes\n| Attribute | Description | Default Value |\n|:-:|---|:-:|\n| colors | Sets the color of each line, takes as input a list of any RGB, HEX, or HTML color name | CATEGORY10 |\n| opacities | Controls the opacity of each line, takes as input a real number between 0 and 1 | 1.0 |\n| stroke_width | Real number which sets the width of all paths | 2.0 |\n| line_style | Specifies whether a line is solid, dashed, dotted or both dashed and dotted | 'solid' |\n| interpolation | Sets the type of interpolation between two points | 'linear' |\n| marker | Specifies the shape of the marker inserted at each data point | None |\n| marker_size | Controls the size of the marker, takes as input a non-negative integer | 64 |\n|close_path| Controls whether to close the paths or not | False |\n|fill| Specifies in which way the paths are filled. Can be set to one of {'none', 'bottom', 'top', 'inside'}| None |\n|fill_colors| List that specifies the fill colors of each path | [] |\n| Data Attribute | Description | Default Value |\n|:-:|---|:-:|\n|x |abscissas of the data points | array([]) |\n|y |ordinates of the data points | array([]) |\n|color | Data according to which the Lines will be colored. 
Setting it to None defaults the choice of colors to the colors attribute | None |\npyplot's plot method can be used to plot lines with meaningful defaults", "import numpy as np\nfrom pandas import date_range\nimport bqplot.pyplot as plt\nfrom bqplot import ColorScale\n\nsecurity_1 = np.cumsum(np.random.randn(150)) + 100.0\nsecurity_2 = np.cumsum(np.random.randn(150)) + 100.0", "Basic Line Chart", "fig = plt.figure(title=\"Security 1\")\naxes_options = {\"x\": {\"label\": \"Index\"}, \"y\": {\"label\": \"Price\"}}\n# x values default to range of values when not specified\nline = plt.plot(security_1, axes_options=axes_options)\nfig", "We can explore the different attributes by changing each of them for the plot above:", "line.colors = [\"DarkOrange\"]", "In a similar way, we can also change any attribute after the plot has been displayed to change the plot. Run each of the cells below, and try changing the attributes to explore the different features and how they affect the plot.", "# The opacity allows us to display the Line while featuring other Marks that may be on the Figure\nline.opacities = [0.5]\n\nline.stroke_width = 2.5", "To switch to an area chart, set the fill attribute, and control the look with fill_opacities and fill_colors.", "line.fill = \"bottom\"\nline.fill_opacities = [0.2]\n\nline.line_style = \"dashed\"\n\nline.interpolation = \"basis\"", "While a Lines plot allows the user to extract the general shape of the data being plotted, there may be a need to visualize discrete data points along with this shape. This is where the marker attribute comes in.", "line.marker = \"triangle-down\"", "The marker attribute accepts the values circle, cross, diamond, square, triangle-down, triangle-up, arrow, rectangle, ellipse. 
Try changing the string above and re-running the cell to see how each marker type looks.\nPlotting a Time-Series\nThe DateScale allows us to plot time series as a Lines plot conveniently with most date formats.", "# Here we define the dates we would like to use\ndates = date_range(start=\"01-01-2007\", periods=150)\n\nfig = plt.figure(title=\"Time Series\")\naxes_options = {\"x\": {\"label\": \"Date\"}, \"y\": {\"label\": \"Security 1\"}}\ntime_series = plt.plot(dates, security_1, axes_options=axes_options)\nfig", "Plotting multiple sets of data\nThe Lines mark allows the user to plot multiple y-values for a single x-value. This can be done by passing an ndarray or a list of the different y-values as the y-attribute of the Lines as shown below.", "dates_new = date_range(start=\"06-01-2007\", periods=150)", "We pass each data set as an element of a list", "fig = plt.figure()\naxes_options = {\"x\": {\"label\": \"Date\"}, \"y\": {\"label\": \"Price\"}}\nline = plt.plot(\n    dates,\n    [security_1, security_2],\n    labels=[\"Security 1\", \"Security 2\"],\n    axes_options=axes_options,\n    display_legend=True,\n)\nfig", "Similarly, we can also pass multiple x-values for multiple sets of y-values", "line.x, line.y = [dates, dates_new], [security_1, security_2]", "Coloring Lines according to data\nThe color attribute of a Lines mark can also be used to encode one more dimension of data. Suppose we have a portfolio of securities and we would like to color them based on whether we have bought or sold them. 
We can use the color attribute to encode this information.", "fig = plt.figure()\naxes_options = {\n \"x\": {\"label\": \"Date\"},\n \"y\": {\"label\": \"Security 1\"},\n \"color\": {\"visible\": False},\n}\n# add a custom color scale to color the lines\nplt.scales(scales={\"color\": ColorScale(colors=[\"Red\", \"Green\"])})\n\ndates_color = date_range(start=\"06-01-2007\", periods=150)\n\nsecurities = 100.0 + np.cumsum(np.random.randn(150, 10), axis=0)\n# we generate 10 random price series and 10 random positions\npositions = np.random.randint(0, 2, size=10)\n\n# We pass the color scale and the color data to the plot method\nline = plt.plot(dates_color, securities.T, color=positions, axes_options=axes_options)\nfig", "We can also reset the colors of the Line to their defaults by setting the color attribute to None.", "line.color = None", "Patches\nThe fill attribute of the Lines mark allows us to fill a path in different ways, while the fill_colors attribute lets us control the color of the fill", "fig = plt.figure(animation_duration=1000)\npatch = plt.plot(\n [],\n [],\n fill_colors=[\"orange\", \"blue\", \"red\"],\n fill=\"inside\",\n axes_options={\"x\": {\"visible\": False}, \"y\": {\"visible\": False}},\n stroke_width=10,\n close_path=True,\n display_legend=True,\n)\n\npatch.x = (\n [\n [0, 2, 1.2, np.nan, np.nan, np.nan, np.nan],\n [0.5, 2.5, 1.7, np.nan, np.nan, np.nan, np.nan],\n [4, 5, 6, 6, 5, 4, 3],\n ],\n)\npatch.y = [\n [0, 0, 1, np.nan, np.nan, np.nan, np.nan],\n [0.5, 0.5, -0.5, np.nan, np.nan, np.nan, np.nan],\n [1, 1.1, 1.2, 2.3, 2.2, 2.7, 1.0],\n]\nfig\n\npatch.opacities = [0.1, 0.2]\n\npatch.x = [\n [2, 3, 3.2, np.nan, np.nan, np.nan, np.nan],\n [0.5, 2.5, 1.7, np.nan, np.nan, np.nan, np.nan],\n [4, 5, 6, 6, 5, 4, 3],\n]\n\npatch.close_path = False" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AstroHackWeek/AstroHackWeek2016
breakouts/gaussian_process/GaussianProcessTasteTest.ipynb
mit
[ "Gaussian Process Taste-test\nThe scikit-learn package has a nice Gaussian Process example - but what is it doing? In this notebook, we review the mathematics of Gaussian Processes, and then 1) run the scikit-learn example, 2) do the same thing by-hand with numpy/scipy, and finally 3) use the GPy package to compare a few different kernels, on the same test dataset.\nSuper-brief Introduction to GPs\nLet us look at the basics of Gaussian Processes in one dimension. See Rasmussen and Williams for a great, pedagogically smooth introduction to Gaussian Processes that will teach you everything you will need to get started.\nWe denote $\\vec{x}=(x_0, \\dots, x_N)$ a vector of 1D input values. A 1D Gaussian process $f$ is such that\n$f \\sim \\mathcal{GP} \\ \\Longleftrightarrow \\ p(f(\\vec{x}), f(\\vec{x}'))\\ \\mathrm{is\\ Gaussian} \\ \\forall \\vec{x}, \\vec{x}'$.\nIt is fully characterized by a mean function and a kernel,\n$$\\begin{eqnarray}m(\\vec{x}) &=& \\mathbb{E}[ f(\\vec{x}) ]\\\\\nk(\\vec{x}, \\vec{x}') &=& \\mathbb{E}[ (f(\\vec{x})-m(\\vec{x}))(f(\\vec{x}')-m(\\vec{x}')) ]\\end{eqnarray}$$\nLet us consider a noisy dataset $(\\vec{x},\\vec{y})$ with homoskedastic errors $\\epsilon$ that are Gaussian distributed with standard deviation $\\sigma$. Fitting a Gaussian Process to this data is equivalent to considering a set of basis functions $\\{\\phi_i(x)\\}$ and finding the optimal weights $\\{\\omega_i\\}$, which we assume to be Gaussian distributed with some covariance $\\Sigma$. It can also be thought of as fitting for an unknown correlated noise term in the data.\n$$\\begin{eqnarray}\n\\vec{y} &=& f(\\vec{x}) + \\vec{\\epsilon}\\\\\n\\vec{\\epsilon} &\\sim & \\mathcal{N}(0,\\sigma^2 I)\\\\\nf(\\vec{x}) &=& \\sum_i \\omega_i \\phi_i(\\vec{x}) = \\vec{\\omega}^T \\vec{\\phi}(\\vec{x}) \\\\\n\\vec{\\omega} &\\sim & \\mathcal{N}(0,\\Sigma)\\\\\n\\end{eqnarray}$$\nIn this case, the mean function is assumed to be zero, $m(\\vec{x}) = 0$. 
(This is not actually very constraining, as Rasmussen and Williams explain, and it is not equivalent to assuming that the mean of $f$ is zero.) \nThere are as many weights as there are data points, which makes the function $f$ very flexible. The weights are constrained by their Gaussian distribution, though. Importantly, the kernel is fully characterized by the choice of basis functions, via\n$$\\quad k(\\vec{x},\\vec{x}') = \\vec{\\phi}(\\vec{x})^T \\Sigma\\ \\vec{\\phi}(\\vec{x}')$$\nPicking a set of basis functions is equivalent to picking a kernel, and vice versa. In the correlated noise model interpretation, it's the kernel function that makes more sense. Typically a kernel will have a handful of hyper-parameters $\\vec{\\theta}$ that govern the shape of the basis functions and the correlation structure of the covariance matrix of the predictions. These hyper-parameters can then be inferred from the data, via their log likelihood:\n$$ \\log p(\\vec{y} | \\vec{x},\\vec{\\theta}) = -\\frac{1}{2} \\vec{y}^T K^{-1} \\vec{y} - \\frac{1}{2} \\log |K| - \\frac{n}{2} \\log 2\\pi $$\n(Here, the matrix $K$ has elements $K_{ij} = k(x_i,x_j) + \\sigma^2 \\delta_{ij}$. Note that evaluating the likelihood for $\\theta$ involves computing the determinant of the matrix $K$.) Fitting the hyper-parameters is often done by maximizing this likelihood - but that only gets you the \"best-fit\" hyper-parameters. Posterior samples of the hyper-parameters can be drawn by MCMC in the usual way.\nFor any given set of hyper-parameters, we can use the Gaussian Process to predict new outputs $\\vec{y}^*$ at inputs $\\vec{x}^*$. 
Thanks to the magic of Gaussian distributions and linear algebra, one can show that the posterior distribution for the process evaluated at new inputs $\vec{x}^*$ given a fit to the existing values $(\vec{x},\vec{y})$ is also Gaussian:\n$$p( f(\\vec{x}^*) | \\vec{y}, \\vec{x}, \\vec{x}^* ) \\ = \\ \\mathcal{N}(\\bar{f}, \\bar{k})$$\nThe mean of this PDF for $f(\\vec{x}^*)$ is \n$$\\bar{f} \\ =\\ k(\\vec{x}^*,\\vec{x})[k(\\vec{x},\\vec{x}) + \\sigma^2 I]^{-1} \\vec{y}$$\nand its covariance is\n$$\\bar{k} = k(\\vec{x}^*,\\vec{x}^*) - k(\\vec{x}^*,\\vec{x}) [k(\\vec{x},\\vec{x}) + \\sigma^2 I]^{-1}k(\\vec{x}^*,\\vec{x})^T $$\n\nOnce the kernel is chosen, one can fit the data and make predictions for new data in a single linear algebra operation. Note that multiple matrix inversions and multiplications are involved, so Gaussian Processes can be computationally very expensive - and that the weights are being optimized during the arithmetic calculation of $\\bar{f}$. \nInferring the hyper-parameters of the kernel makes GPs even more expensive, thanks to the determinant calculation involved.\nTo generate large numbers of predictions, one just makes a long vector $\\vec{x}^*$. We'll see this in the code below, when generating smooth functions to plot through sparse and noisy data. The mean prediction $\\bar{f}$ is linear in the input data $y$, which is quite remarkable.\nThe above is all the math you need to run Gaussian Processes in simple situations. Here is a list of more advanced topics that you should think about when applying Gaussian Processes to real data:\n- Generalizing to multiple input dimensions (keeping one output dimension) is trivial, but the case of multiple outputs is not (partly because it is less natural). 
\n- Choosing a physically motivated kernel or a kernel that simplifies the computation, for example by yielding sparse matrices.\n- Parametrizing the kernel and/or the mean function and inferring these hyperparameters from the data.\n- Using a small fraction of the data to make predictions. This is referred to as Sparse Gaussian Processes. Finding an optimal \"summary\" subset of the data is key.\n- Gaussian Processes natively work with Gaussian noise / likelihood functions. With non-Gaussian cases, some analytical results are no longer valid (e.g. the marginal likelihood) but approximations exist.\n- What if the inputs $\\vec{x}$ have uncertainties? There are various ways to deal with this, but this is much more intensive than normal Gaussian Processes.", "%matplotlib inline\nimport numpy as np\nfrom matplotlib import pyplot as plt\nplt.style.use('seaborn-whitegrid')", "Make Some Data", "def f(x):\n    \"\"\"The function to predict.\"\"\"\n    return x * np.sin(x)\n\ndef make_data(N, rseed=1):\n    np.random.seed(rseed)\n\n    # Create some observations with noise\n    X = np.random.uniform(low=0.1, high=9.9, size=N)\n    X = np.atleast_2d(X).T\n\n    y = f(X).ravel()\n    dy = 0.5 + 1.0 * np.random.random(y.shape)\n    noise = np.random.normal(0, dy)\n    y += noise\n    \n    return X, y, dy\n\nX, y, dy = make_data(20)", "Gaussian Process Regression with Scikit-Learn\nExample adapted from Scikit-learn's Examples", "# Get the master version of scikit-learn; new GP code isn't in release\n# This needs to compile things, so it will take a while...\n# Uncomment the following:\n\n# !pip install git+git://github.com/scikit-learn/scikit-learn.git\n\nfrom sklearn.gaussian_process import GaussianProcessRegressor\nfrom sklearn.gaussian_process.kernels import RBF as SquaredExponential\nfrom sklearn.gaussian_process.kernels import ConstantKernel as Amplitude\n\n# Instantiate a Gaussian Process model\nkernel = Amplitude(1.0, (1E-3, 1E3)) * SquaredExponential(10, (1e-2, 1e2))\n\n# Instantiate a Gaussian Process 
model\ngp = GaussianProcessRegressor(kernel=kernel,\n alpha=(dy / y)**2, # fractional errors in data\n n_restarts_optimizer=10)\n\n# Fit to data using Maximum Likelihood Estimation of the hyper-parameters\ngp.fit(X, y)\n\ngp.kernel_\n\n# note: gp.kernel is the initial kernel\n# gp.kernel_ (with an underscore) is the fitted kernel\ngp.kernel_.get_params()\n\n# Mesh the input space for evaluations of the real function, the prediction and\n# its MSE\nx_pred = np.atleast_2d(np.linspace(0, 10, 1000)).T\n\n# Make the prediction on the meshed x-axis (ask for MSE as well)\ny_pred, sigma = gp.predict(x_pred, return_std=True)\n\ndef plot_results(X, y, dy, x_pred, y_pred, sigma):\n fig = plt.figure(figsize=(8, 6))\n plt.plot(x_pred, f(x_pred), 'k:', label=u'$f(x) = x\\,\\sin(x)$')\n plt.errorbar(X.ravel(), y, dy, fmt='k.', markersize=10, label=u'Observations',\n ecolor='gray')\n plt.plot(x_pred, y_pred, 'b-', label=u'Prediction')\n plt.fill(np.concatenate([x_pred, x_pred[::-1]]),\n np.concatenate([y_pred - 1.9600 * sigma,\n (y_pred + 1.9600 * sigma)[::-1]]),\n alpha=.3, fc='b', ec='None', label='95% confidence interval')\n plt.xlabel('$x$')\n plt.ylabel('$f(x)$')\n plt.ylim(-10, 20)\n plt.legend(loc='upper left');\n \nplot_results(X, y, dy, x_pred, y_pred, sigma)", "Gaussian Processes by-hand\nLet us run the same example but solving the Gaussian Process equations by hand.\nLet's use the kernel constructed with scikit-learn (because its parameters are optimized)\nAnd let's compute the Gaussian process manually using Scipy linalg", "import scipy.linalg\nKXX = gp.kernel_(X)\nA = KXX + np.diag((dy/y)**2.)\nL = scipy.linalg.cholesky(A, lower=True)\nKXXp = gp.kernel_(x_pred, X)\nKXpXp = gp.kernel_(x_pred)\nalpha = scipy.linalg.cho_solve((L, True), y)\ny_pred = np.dot(KXXp, alpha) + np.mean(y, axis=0)\nv = scipy.linalg.cho_solve((L, True), KXXp.T)\ny_pred_fullcov = KXpXp - KXXp.dot(v)\nsigma = np.sqrt(np.diag(y_pred_fullcov))\n\nplot_results(X, y, dy, x_pred, y_pred, sigma) ", 
"Quick kernel comparison with GPy\nLet's now use the GPy package and compare a couple of kernels applied to our example.\nWe'll optimize the parameters in each case. We not only plot the mean and std dev of the process\nbut also a few samples. As you can see, they look very different, and the choice of kernel is critical!", "import GPy\n\nkernels = [GPy.kern.RBF(input_dim=1),\n           GPy.kern.Brownian(input_dim=1),\n           GPy.kern.Matern32(input_dim=1),\n           GPy.kern.Matern52(input_dim=1),\n           GPy.kern.ExpQuad(input_dim=1),\n           GPy.kern.Cosine(input_dim=1)]\nnames = ['Gaussian', 'Brownian', 'Matern32', 'Matern52', 'ExpQuad', 'Cosine']\n\nfig, axs = plt.subplots(3, 2, figsize=(12, 12), sharex=True, sharey=True)\naxs = axs.ravel()\n\nfor i, k in enumerate(kernels):\n    m = GPy.models.GPRegression(X, y[:,None], kernel=k)\n    m.optimize()\n    m.plot_f(ax=axs[i], plot_data=True, samples=4, legend=False, plot_limits=[0, 10]) \n    # plotting four samples of the GP posterior too\n    axs[i].errorbar(X, y, yerr=dy, fmt=\"o\", c='k')\n    axs[i].set_title(names[i])\n    axs[i].plot(x_pred, f(x_pred), 'k:', label=u'$f(x) = x\\,\\sin(x)$')\nfig.tight_layout()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
radhikapc/foundation-homework
homework_sql/Homework_4-Radhika_graded.ipynb
mit
[ "Grade: 10 / 11\nHomework #4\nThese problem sets focus on list comprehensions, string operations and regular expressions.\nProblem set #1: List slices and list comprehensions\nLet's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str:", "numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'", "In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').", "# TA-COMMENT: You commented out the answer! \nraw_data = numbers_str.split(\",\")\nnumbers = []\nfor i in raw_data:\n    numbers.append(int(i))\nnumbers\n#max(numbers)", "Great! We'll be using the numbers list you created above in the next few problems.\nIn the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:\n[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]\n\n(Hint: use a slice.)", "sorted(numbers)[10:]", "In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:\n[120, 171, 258, 279, 528, 699, 804, 855]", "# TA-COMMENT: (-1) This isn't sorted -- it doesn't match Allison's expected output. \n[i for i in numbers if i % 3 == 0]\n\n# TA-COMMENT: This would have been an acceptable answer. \n[i for i in sorted(numbers) if i % 3 == 0]", "Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. 
Expected output:\n[2.6457513110645907, 8.06225774829855, 8.246211251235321]\n\n(These outputs might vary slightly depending on your platform.)", "from math import sqrt\n[sqrt(i) for i in numbers if i < 100]", "Problem set #2: Still more list comprehensions\nStill looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.", "planets = [\n {'diameter': 0.382,\n 'mass': 0.06,\n 'moons': 0,\n 'name': 'Mercury',\n 'orbital_period': 0.24,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 0.949,\n 'mass': 0.82,\n 'moons': 0,\n 'name': 'Venus',\n 'orbital_period': 0.62,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 1.00,\n 'mass': 1.00,\n 'moons': 1,\n 'name': 'Earth',\n 'orbital_period': 1.00,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 0.532,\n 'mass': 0.11,\n 'moons': 2,\n 'name': 'Mars',\n 'orbital_period': 1.88,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 11.209,\n 'mass': 317.8,\n 'moons': 67,\n 'name': 'Jupiter',\n 'orbital_period': 11.86,\n 'rings': 'yes',\n 'type': 'gas giant'},\n {'diameter': 9.449,\n 'mass': 95.2,\n 'moons': 62,\n 'name': 'Saturn',\n 'orbital_period': 29.46,\n 'rings': 'yes',\n 'type': 'gas giant'},\n {'diameter': 4.007,\n 'mass': 14.6,\n 'moons': 27,\n 'name': 'Uranus',\n 'orbital_period': 84.01,\n 'rings': 'yes',\n 'type': 'ice giant'},\n {'diameter': 3.883,\n 'mass': 17.2,\n 'moons': 14,\n 'name': 'Neptune',\n 'orbital_period': 164.8,\n 'rings': 'yes',\n 'type': 'ice giant'}]", "Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. 
Expected output:\n['Jupiter', 'Saturn', 'Uranus']", "earth_diameter = [i['diameter'] for i in planets if i['name'] == \"Earth\"]\nearth = int(earth_diameter[0])\n[i['name'] for i in planets if i['diameter'] > 4 * earth]", "In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79", "#count = 0\n#for i in planets:\n #count = count + i['mass']\n#print(count)\n\nsum([i['mass'] for i in planets])", "Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:\n['Jupiter', 'Saturn', 'Uranus', 'Neptune']", "[i['name'] for i in planets if \"giant\" in i['type']]", "EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:\n['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']\n\nProblem set #3: Regular expressions\nIn the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. 
Make sure to run the following cell before you proceed.", "import re\npoem_lines = ['Two roads diverged in a yellow wood,',\n 'And sorry I could not travel both',\n 'And be one traveler, long I stood',\n 'And looked down one as far as I could',\n 'To where it bent in the undergrowth;',\n '',\n 'Then took the other, as just as fair,',\n 'And having perhaps the better claim,',\n 'Because it was grassy and wanted wear;',\n 'Though as for that the passing there',\n 'Had worn them really about the same,',\n '',\n 'And both that morning equally lay',\n 'In leaves no step had trodden black.',\n 'Oh, I kept the first for another day!',\n 'Yet knowing how way leads on to way,',\n 'I doubted if I should ever come back.',\n '',\n 'I shall be telling this with a sigh',\n 'Somewhere ages and ages hence:',\n 'Two roads diverged in a wood, and I---',\n 'I took the one less travelled by,',\n 'And that has made all the difference.']", "In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.\nIn the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \\b anchor. Don't overthink the \"two words in a row\" requirement.)\nExpected result:\n['Then took the other, as just as fair,',\n 'Had worn them really about the same,',\n 'And both that morning equally lay',\n 'I doubted if I should ever come back.',\n 'I shall be telling this with a sigh']", "# TA-COMMENT: A better way of writing this regular expression: r\"\\b\\w{4}\\b \\b\\w{4}\\b\"\n[line for line in poem_lines if re.search(r\"\\b\\w\\w\\w\\w\\b \\b\\w\\w\\w\\w\\b\", line)]", "Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. 
(Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:\n['And be one traveler, long I stood',\n 'And looked down one as far as I could',\n 'And having perhaps the better claim,',\n 'Though as for that the passing there',\n 'In leaves no step had trodden black.',\n 'Somewhere ages and ages hence:']", "[line for line in poem_lines if re.search(r\"\\b\\w{5}[^0-9a-zA-Z]?$\", line)]", "Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.", "all_lines = \" \".join(poem_lines)", "Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:\n['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']", "re.findall(r\"I (\\b\\w+\\b)\", all_lines)\n#re.findall(r\"New York (\\b\\w+\\b)\", all_subjects)", "Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.", "entrees = [\n \"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95\",\n \"Lavender and Pepperoni Sandwich $8.49\",\n \"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v\",\n \"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v\",\n \"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95\",\n \"Rutabaga And Cucumber Wrap $8.49 - v\"\n]", "You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. 
I've included the basic framework; you just need to fill in the contents of the for loop.\nExpected output:\n[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',\n 'price': 10.95,\n 'vegetarian': False},\n {'name': 'Lavender and Pepperoni Sandwich ',\n 'price': 8.49,\n 'vegetarian': False},\n {'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',\n 'price': 12.95,\n 'vegetarian': True},\n {'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',\n 'price': 9.95,\n 'vegetarian': True},\n {'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',\n 'price': 19.95,\n 'vegetarian': False},\n {'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]", "# TA-COMMENT: Note that 'price' should contain floats, not strings! \nmenu = []\nfor item in entrees:\n menu_items = {}\n match = re.search(r\"^(.*) \\$(\\d{1,2}\\.\\d{2})\", item)\n #print(\"name\",match.group(1))\n #print(\"price\", match.group(2))\n #menu_items.update({'name': match.group(1), 'price': match.group(2)})\n if re.search(\"v$\", item):\n menu_items.update({'name': match.group(1), 'price': match.group(2), 'vegetarian': True})\n else:\n menu_items.update({'name': match.group(1), 'price': match.group(2),'vegetarian': False})\n menu_items\n menu.append(menu_items)\nmenu", "Great work! You are done. Go cavort in the sun, or whatever it is you students do when you're done with your homework" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phoebe-project/phoebe2-docs
development/examples/diher_misaligned.ipynb
gpl-3.0
[ "DI Her: Misaligned Binary\nIn this example, we'll reproduce Figure 8 in the misalignment release paper (Horvat et al. 2018).\n<img src=\"horvat+18_fig8.png\" alt=\"Figure 8\" width=\"400px\"/>\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).", "#!pip install -I \"phoebe>=2.4,<2.5\"", "As always, let's do imports and initialize a logger and a new bundle.", "import phoebe\nfrom phoebe import u # units\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlogger = phoebe.logger('error')\n\nb = phoebe.default_binary()", "System Parameters\nWe'll adopt and set parameters from the following sources:\n * Albrecht et al. (2009), Nature: https://arxiv.org/pdf/0909.2861\n * https://en.wikipedia.org/wiki/DI_Herculis\n * Claret et al (2010) https://arxiv.org/pdf/1002.2949.pdf", "Nt = 2000\n\nb.set_value('t0_supconj@orbit', 2442233.3481)\nb.set_value('vgamma@system', 9.1) # [km/s] (Albrecht et al. 2009) \nb.set_value('ntriangles@primary', Nt)\nb.set_value('ntriangles@secondary', Nt)\n\nmass1 = 5.1 # [M_sun] (Albrecht et al. 2009)\nmass2 = 4.4 # [M_sun] (Albrecht et al. 2009)\n\nP = 10.550164 # [d] (Albrecht et al. 2009)\nmu_sun = 1.32712440018e20 # = G M_sun [m3 s^-2], Wiki Standard_gravitational_parameter\nR_sun = 695700000 # [m] Wiki Sun\n\nsma = (mu_sun*(mass1 + mass2)*(P*86400/(2*np.pi))**2)**(1./3)/R_sun # Kepler equation\n\nincl = 89.3 # deg (Albrecht et al. 2009)\nvp_sini = 109 # [km/s] (Albrecht et al. 2009)\nvs_sini = 117 # [km/s] (Albrecht et al. 2009)\n\nRp = 2.68 # [R_sun] (Albrecht et al. 2009)\nRs = 2.48 # [R_sun] (Albrecht et al. 2009) \n \nsini = np.sin(np.pi*incl/180)\n\nvp = vp_sini*86400/sini # [km/s]\nvs = vs_sini*86400/sini # [km/s]\n\nPp = 2*np.pi*Rp*R_sun/1000/vp\nPs = 2*np.pi*Rs*R_sun/1000/vs\n\nFp = P/Pp\nFs = P/Ps\n\nb.set_value('q', mass2/mass1)\nb.set_value('incl@binary', incl) # (Albrecht et al. 
2009)\nb.set_value('sma@binary', sma) # calculated\nb.set_value('ecc@binary', 0.489) # (Albrecht et al. 2009)\n\nb.set_value('per0@binary', 330.2) # (Albrecht et al. 2009)\nb.set_value('period@binary', P) # calculated\n\nb.set_value('syncpar@primary', Fp) # calculated\nb.set_value('syncpar@secondary', Fs) # calculated\n\nb.set_value('requiv@primary', Rp) # !!! requiv (Albrecht et al. 2009)\nb.set_value('requiv@secondary', Rs) # !!! requiv (Albrecht et al. 2009)\n\nb.set_value('teff@primary', 17300) # Wiki DI_Herculis\nb.set_value('teff@secondary', 15400) # Wiki DI_Herculis\n \nb.set_value('gravb_bol@primary', 1.)\nb.set_value('gravb_bol@secondary', 1.)\n\n\n# beta = 72 deg (Albrecht et al. 2009)\ndOmega_p = 72\ndi_p = 62 - incl\nb.set_value('pitch@primary', di_p) # di\nb.set_value('yaw@primary', dOmega_p) # dOmega\n\n# beta = - 84 deg (Albrecht et al. 2009)\ndOmega_s = -84\ndi_s = 100 - incl\nb.set_value('pitch@secondary', di_s) # di\nb.set_value('yaw@secondary', dOmega_s) # dOmega\n \nb.set_value_all('atm','extern_planckint')\nb.set_value_all('irrad_method', 'none')\n", "Datasets\nLet's compute an LC and RV dataset sampled at 200 points in phase (with some aliasing).", "n = 200\ntimes = b.to_time(np.linspace(-0.05, 1.05, n))\n\nb.add_dataset('lc', times=times, dataset='lc01', ld_mode='manual', ld_func='logarithmic', ld_coeffs = [0.5,0.5])\nb.add_dataset('rv', times=times, dataset='rv01', ld_mode='manual', ld_func='logarithmic', ld_coeffs = [0.5,0.5])", "Compute", "b.run_compute(ltte=False)", "Plotting", "afig, mplfig = b.plot(kind='lc', show=True)\n\nafig, mplfig = b.plot(kind='rv', show=True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
marcinofulus/teaching
Python4physicists_SS2017/Meshgrid_mgrid.ipynb
gpl-3.0
[ "Meshgrid & mgrid\nx is the fastest-varying index (row-major), so the indexing has to be \"reversed\" with respect to the order of the arguments of the function f:\n$$x,y \to j,i$$", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nnx = 11\nny = 5\nx1,y1 = 1,2\n\nX,Y = np.meshgrid(np.linspace(0,x1,nx),np.linspace(0,y1,ny))\nX.shape\n\nf = lambda X_,Y_:np.sin(X_**2+Y_**2)\nZ = f(X,Y)\n\nplt.contourf(X,Y,Z)\n# The same as:\n# plt.contourf(X.T,Y.T,Z.T)\n\nplt.imshow(Z,interpolation='nearest',origin='lower')\n\nX\n\nY", "Comparison with \"manual\" sampling:", "i,j = 2,3\nprint (\"for x and y\", X[i,j],Y[i,j],\"the value is\", Z[i,j],f(X[i,j],Y[i,j]),\\\n \"which should equal\", f(x1/float(nx-1)*i,y1/float(ny-1)*j) )", "Correct:", "i,j = 2,3\nprint (\"for x and y\", X[j,i],Y[j,i],\"the value is\", Z[j,i],f(X[j,i],Y[j,i]),\\\n \"which should equal\", f(x1/float(nx-1)*i,y1/float(ny-1)*j))", "Z is row-major, so one can also write:", "print(Z[j,i], Z.flatten()[j*nx+i])", "mgrid\nmgrid behaves the opposite way:\nX,Y = np.meshgrid(np.arange(0,nx),np.arange(0,ny))\nYn, Xn = np.mgrid[0:ny,0:nx]", "Yn, Xn = np.mgrid[0:ny,0:nx]\nXn.shape\n\nXn\n\nYn\n\nXn/(float(nx-1)*x1)\n\nX1,Y1 = Xn*(x1/float(nx-1)),Yn*(y1/float(ny-1))\n\nnp.allclose(X, X1),np.allclose(Y, Y1)", "Let us check anyway:", "Z.strides\n\nnp.meshgrid(np.arange(nx),np.arange(ny))\n\nlist(reversed(np.mgrid[0:ny,0:nx]))\n\nnp.meshgrid(np.arange(ny),np.arange(nx),indexing='ij')\n\nnp.mgrid[0:ny,0:nx]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.14/_downloads/plot_tf_lcmv.ipynb
bsd-3-clause
[ "%matplotlib inline", "Time-frequency beamforming using LCMV\nCompute LCMV source power [1]_ in a grid of time-frequency windows and\ndisplay results.\nReferences\n.. [1] Dalal et al. Five-dimensional neuroimaging: Localization of the\n time-frequency dynamics of cortical activity.\n NeuroImage (2008) vol. 40 (4) pp. 1686-1700", "# Author: Roman Goj <roman.goj@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne import compute_covariance\nfrom mne.datasets import sample\nfrom mne.event import make_fixed_length_events\nfrom mne.beamformer import tf_lcmv\nfrom mne.viz import plot_source_spectrogram\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nnoise_fname = data_path + '/MEG/sample/ernoise_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'\nfname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\nsubjects_dir = data_path + '/subjects'\nlabel_name = 'Aud-lh'\nfname_label = data_path + '/MEG/sample/labels/%s.label' % label_name", "Read raw data, preload to allow filtering", "raw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel\n\n# Pick a selection of magnetometer channels. A subset of all channels was used\n# to speed up the example. For a solution based on all MEG channels use\n# meg=True, selection=None and add grad=4000e-13 to the reject dictionary.\n# We could do this with a \"picks\" argument to Epochs and the LCMV functions,\n# but here we use raw.pick_types() to save memory.\nleft_temporal_channels = mne.read_selection('Left-temporal')\nraw.pick_types(meg='mag', eeg=False, eog=False, stim=False, exclude='bads',\n selection=left_temporal_channels)\nreject = dict(mag=4e-12)\n# Re-normalize our empty-room projectors, which should be fine after\n# subselection\nraw.info.normalize_proj()\n\n# Setting time limits for reading epochs. 
Note that tmin and tmax are set so\n# that time-frequency beamforming will be performed for a wider range of time\n# points than will later be displayed on the final spectrogram. This ensures\n# that all time bins displayed represent an average of an equal number of time\n# windows.\ntmin, tmax = -0.55, 0.75 # s\ntmin_plot, tmax_plot = -0.3, 0.5 # s\n\n# Read epochs. Note that preload is set to False to enable tf_lcmv to read the\n# underlying raw object.\n# Filtering is then performed on raw data in tf_lcmv and the epochs\n# parameters passed here are used to create epochs from filtered data. However,\n# reading epochs without preloading means that bad epoch rejection is delayed\n# until later. To perform bad epoch rejection based on the reject parameter\n# passed here, run epochs.drop_bad(). This is done automatically in\n# tf_lcmv to reject bad epochs based on unfiltered data.\nevent_id = 1\nevents = mne.read_events(event_fname)\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n baseline=None, preload=False, reject=reject)\n\n# Read empty room noise, preload to allow filtering, and pick subselection\nraw_noise = mne.io.read_raw_fif(noise_fname, preload=True)\nraw_noise.info['bads'] = ['MEG 2443'] # 1 bad MEG channel\nraw_noise.pick_types(meg='mag', eeg=False, eog=False, stim=False,\n exclude='bads', selection=left_temporal_channels)\nraw_noise.info.normalize_proj()\n\n# Create artificial events for empty room noise data\nevents_noise = make_fixed_length_events(raw_noise, event_id, duration=1.)\n# Create an epochs object using preload=True to reject bad epochs based on\n# unfiltered data\nepochs_noise = mne.Epochs(raw_noise, events_noise, event_id, tmin, tmax,\n proj=True, baseline=None,\n preload=True, reject=reject)\n\n# Make sure the number of noise epochs is the same as data epochs\nepochs_noise = epochs_noise[:len(epochs.events)]\n\n# Read forward operator\nforward = mne.read_forward_solution(fname_fwd, surf_ori=True)\n\n# Read label\nlabel 
= mne.read_label(fname_label)", "Time-frequency beamforming based on LCMV", "# Setting frequency bins as in Dalal et al. 2008 (high gamma was subdivided)\nfreq_bins = [(4, 12), (12, 30), (30, 55), (65, 299)] # Hz\nwin_lengths = [0.3, 0.2, 0.15, 0.1] # s\n\n# Setting the time step\ntstep = 0.05\n\n# Setting the whitened data covariance regularization parameter\ndata_reg = 0.001\n\n# Subtract evoked response prior to computation?\nsubtract_evoked = False\n\n# Calculating covariance from empty room noise. To use baseline data as noise\n# substitute raw for raw_noise, epochs.events for epochs_noise.events, tmin for\n# desired baseline length, and 0 for tmax_plot.\n# Note, if using baseline data, the averaged evoked response in the baseline\n# period should be flat.\nnoise_covs = []\nfor (l_freq, h_freq) in freq_bins:\n raw_band = raw_noise.copy()\n raw_band.filter(l_freq, h_freq, n_jobs=1)\n epochs_band = mne.Epochs(raw_band, epochs_noise.events, event_id,\n tmin=tmin_plot, tmax=tmax_plot, baseline=None,\n proj=True)\n\n noise_cov = compute_covariance(epochs_band, method='shrunk')\n noise_covs.append(noise_cov)\n del raw_band # to save memory\n\n# Computing LCMV solutions for time-frequency windows in a label in source\n# space for faster computation, use label=None for full solution\nstcs = tf_lcmv(epochs, forward, noise_covs, tmin, tmax, tstep, win_lengths,\n freq_bins=freq_bins, subtract_evoked=subtract_evoked,\n reg=data_reg, label=label)\n\n# Plotting source spectrogram for source with maximum activity.\n# Note that tmin and tmax are set to display a time range that is smaller than\n# the one for which beamforming estimates were calculated. This ensures that\n# all time bins shown are a result of smoothing across an identical number of\n# time windows.\nplot_source_spectrogram(stcs, freq_bins, tmin=tmin_plot, tmax=tmax_plot,\n source_index=None, colorbar=True)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tarunchhabra26/fss16dst
code/7/WS3/sbiswas4_pom3_ga.ipynb
apache-2.0
[ "Optimizing Real World Problems\nIn this workshop we will code up a model called POM3 and optimize it using the GA we developed in the first workshop.\nPOM3 is a software estimation model like XOMO for Software Engineering. It is based on Turner\nand Boehm’s model of agile development. It compares traditional plan-based approaches\nto agile-based approaches in requirements prioritization. It describes how a team decides which\nrequirements to implement next. POM3 reveals requirements incrementally in random order, with\nwhich developers plan their work assignments. These assignments are further adjusted based on\ncurrent cost and priority of requirement. POM3 is a realistic model which takes more runtime than\nstandard mathematical models (2-100 ms, not 0.006-0.3 ms).", "%matplotlib inline\n# All the imports\nfrom __future__ import print_function, division\nfrom math import *\nimport random\nimport sys\nimport matplotlib.pyplot as plt\n\n# TODO 1: Enter your unity ID here \n__author__ = \"<sbiswas4>\"\n\nclass O:\n \"\"\"\n Basic Class which\n - Helps dynamic updates\n - Pretty Prints\n \"\"\"\n def __init__(self, **kwargs):\n self.has().update(**kwargs)\n def has(self):\n return self.__dict__\n def update(self, **kwargs):\n self.has().update(kwargs)\n return self\n def __repr__(self):\n show = [':%s %s' % (k, self.has()[k]) \n for k in sorted(self.has().keys()) \n if k[0] != \"_\"]\n txt = ' '.join(show)\n if len(txt) > 60:\n show = map(lambda x: '\\t' + x + '\\n', show)\n return '{' + ' '.join(show) + '}'\n \nprint(\"Unity ID: \", __author__)", "The Generic Problem Class\nRemember the Problem Class we coded up for the GA workshop. Here we abstract it further such that it can be inherited by all the future classes. 
Go through these utility functions and classes before you proceed further.", "# Few Utility functions\ndef say(*lst):\n \"\"\"\n Print without going to a new line\n \"\"\"\n print(*lst, end=\"\")\n sys.stdout.flush()\n\ndef random_value(low, high, decimals=2):\n \"\"\"\n Generate a random number between low and high. \n decimals indicates the number of decimal places\n \"\"\"\n return round(random.uniform(low, high),decimals)\n\ndef gt(a, b): return a > b\n\ndef lt(a, b): return a < b\n\ndef shuffle(lst):\n \"\"\"\n Shuffle a list\n \"\"\"\n random.shuffle(lst)\n return lst\n\nclass Decision(O):\n \"\"\"\n Class indicating Decision of a problem\n \"\"\"\n def __init__(self, name, low, high):\n \"\"\"\n @param name: Name of the decision\n @param low: minimum value\n @param high: maximum value\n \"\"\"\n O.__init__(self, name=name, low=low, high=high)\n \nclass Objective(O):\n \"\"\"\n Class indicating Objective of a problem\n \"\"\"\n def __init__(self, name, do_minimize=True, low=0, high=1):\n \"\"\"\n @param name: Name of the objective\n @param do_minimize: Flag indicating if objective has to be minimized or maximized\n \"\"\"\n O.__init__(self, name=name, do_minimize=do_minimize, low=low, high=high)\n \n def normalize(self, val):\n return (val - self.low)/(self.high - self.low)\n\nclass Point(O):\n \"\"\"\n Represents a member of the population\n \"\"\"\n def __init__(self, decisions):\n O.__init__(self)\n self.decisions = decisions\n self.objectives = None\n \n def __hash__(self):\n return hash(tuple(self.decisions))\n \n def __eq__(self, other):\n return self.decisions == other.decisions\n \n def clone(self):\n new = Point(self.decisions[:])\n new.objectives = self.objectives[:]\n return new\n\nclass Problem(O):\n \"\"\"\n Generic class representing a problem.\n \"\"\"\n def __init__(self, decisions, objectives):\n \"\"\"\n Initialize Problem.\n :param decisions - Metadata for Decisions\n :param objectives - Metadata for Objectives\n \"\"\"\n O.__init__(self)\n 
self.decisions = decisions\n self.objectives = objectives\n \n @staticmethod\n def evaluate(point):\n assert False\n return point.objectives\n \n @staticmethod\n def is_valid(point):\n return True\n \n def generate_one(self, retries = 20):\n for _ in range(retries):\n point = Point([random_value(d.low, d.high) for d in self.decisions])\n if self.is_valid(point):\n return point\n raise RuntimeError(\"Exceeded max retries of %d\" % retries)", "Great. Now that the class and its basic methods are defined, let's extend it for the\nPOM3 model.\nPOM3 has multiple versions but for this workshop we will code up the POM3A model. It has 9 decisions defined as follows:\n\nCulture in [0.1, 0.9]\nCriticality in [0.82, 1.20]\nCriticality Modifier in [2, 10]\nInitially Known in [0.4, 0.7]\nInter-Dependency in [1, 100]\nDynamism in [1, 50]\nSize in [0, 4]\nPlan in [0, 5]\nTeam Size in [1, 44]\n\n<img src=\"pom3.png\"/>\nThe model has 4 objectives:\n* Cost in [0,10000] - Minimize\n* Score in [0,1] - Maximize\n* Completion in [0,1] - Maximize\n* Idle in [0,1] - Minimize", "class POM3(Problem):\n from pom3.pom3 import pom3 as pom3_helper\n helper = pom3_helper()\n def __init__(self):\n \"\"\"\n Initialize the POM3 classes\n \"\"\"\n names = [\"Culture\", \"Criticality\", \"Criticality Modifier\", \"Initial Known\", \n \"Inter-Dependency\", \"Dynamism\", \"Size\", \"Plan\", \"Team Size\"]\n lows = [0.1, 0.82, 2, 0.40, 1, 1, 0, 0, 1]\n highs = [0.9, 1.20, 10, 0.70, 100, 50, 4, 5, 44]\n # TODO 2: Use names, lows and highs defined above to code up decision\n # and objective metadata for POM3.\n decisions = [Decision(n, l, h) for n, l, h in zip(names, lows, highs)]\n objectives = [Objective(\"Cost\", True, 0, 10000), Objective(\"Score\", False, 0, 1),\n Objective(\"Completion\", False, 0, 1),\n Objective(\"Idle\", True, 0, 1),]\n Problem.__init__(self, decisions, objectives)\n \n @staticmethod\n def 
evaluate(point):\n if not point.objectives:\n point.objectives = POM3.helper.simulate(point.decisions)\n return point.objectives\n \npom3 = POM3()\none = pom3.generate_one()\nprint(POM3.evaluate(one))", "Utility functions for genetic algorithms.", "def populate(problem, size):\n \"\"\"\n Create a Point list of length size\n \"\"\"\n population = []\n for _ in range(size):\n population.append(problem.generate_one())\n return population\n\ndef crossover(mom, dad):\n \"\"\"\n Create a new point which contains decisions from \n the first half of mom and second half of dad\n \"\"\"\n n = len(mom.decisions)\n return Point(mom.decisions[:n//2] + dad.decisions[n//2:])\n\ndef mutate(problem, point, mutation_rate=0.01):\n \"\"\"\n Iterate through all the decisions in the point\n and if the probability is less than the mutation rate\n change the decision (randomly set it between its max and min).\n \"\"\"\n for i, decision in enumerate(problem.decisions):\n if random.random() < mutation_rate:\n point.decisions[i] = random_value(decision.low, decision.high)\n return point\n\ndef bdom(problem, one, two):\n \"\"\"\n Return if one dominates two based\n on binary domination\n \"\"\"\n objs_one = problem.evaluate(one)\n objs_two = problem.evaluate(two)\n dominates = False\n for i, obj in enumerate(problem.objectives):\n better = lt if obj.do_minimize else gt\n if better(objs_one[i], objs_two[i]):\n dominates = True\n elif objs_one[i] != objs_two[i]:\n return False\n return dominates\n\ndef fitness(problem, population, point, dom_func):\n \"\"\"\n Evaluate fitness of a point based on the definition in the previous block.\n For example, if a point dominates 5 members of the population,\n then the fitness of the point is 5.\n \"\"\"\n return len([1 for another in population if dom_func(problem, point, another)])\n\ndef elitism(problem, population, retain_size, dom_func):\n \"\"\"\n Sort the population with respect to the fitness\n of the points and return the top 'retain_size' points of the population\n 
\"\"\"\n fitnesses = []\n for point in population:\n fitnesses.append((fitness(problem, population, point, dom_func), point))\n population = [tup[1] for tup in sorted(fitnesses, reverse=True)]\n return population[:retain_size]\n ", "Putting it all together and making the GA", "def ga(pop_size = 100, gens = 250, dom_func=bdom):\n problem = POM3()\n population = populate(problem, pop_size)\n [problem.evaluate(point) for point in population]\n initial_population = [point.clone() for point in population]\n gen = 0 \n while gen < gens:\n say(\".\")\n children = []\n for _ in range(pop_size):\n mom = random.choice(population)\n dad = random.choice(population)\n while (mom == dad):\n dad = random.choice(population)\n child = mutate(problem, crossover(mom, dad))\n if problem.is_valid(child) and child not in population+children:\n children.append(child)\n population += children\n population = elitism(problem, population, pop_size, dom_func)\n gen += 1\n print(\"\")\n return initial_population, population", "Visualize\nLets plot the initial population with respect to the final frontier.", "def plot_pareto(initial, final):\n initial_objs = [point.objectives for point in initial]\n final_objs = [point.objectives for point in final]\n initial_x = [i[1] for i in initial_objs]\n initial_y = [i[2] for i in initial_objs]\n final_x = [i[1] for i in final_objs]\n final_y = [i[2] for i in final_objs]\n plt.scatter(initial_x, initial_y, color='b', marker='+', label='initial')\n plt.scatter(final_x, final_y, color='r', marker='o', label='final')\n plt.title(\"Scatter Plot between initial and final population of GA\")\n plt.ylabel(\"Score\")\n plt.xlabel(\"Completion\")\n plt.legend(loc=9, bbox_to_anchor=(0.5, -0.175), ncol=2)\n plt.show()\n \n\ninitial, final = ga(gens=50)\nplot_pareto(initial, final)", "Sample Output\n<img src=\"sample.png\"/>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
KshitijT/fundamentals_of_interferometry
2_Mathematical_Groundwork/2_5_convolution.ipynb
gpl-2.0
[ "Outline\nGlossary\n2. Mathematical Groundwork\nPrevious: 2.4 The Fourier Transform\nNext: 2.6 Cross-correlation and auto-correlation\n\n\n\n\nImport standard modules:", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom IPython.display import HTML \nHTML('../style/course.css') #apply general CSS", "Import section specific modules:", "import math\n\nfrom IPython.display import HTML\nHTML('../style/code_toggle.html')", "2.5. Convolution<a id='math:sec:convolution'></a>\nThe convolution is an operation connecting two functions, with the result of a mutual broadening. In signal processing, the convolution is often used to represent instrumental broadening of a signal. For any observation, the signal received is \"filtered\" by an instrumental function. The signal is smeared out. The mathematical description for this effect is the convolution of the function representing the original signal with the instrumental function. In this chapter, we give a detailed description.\n2.5.1. 
Definition of the convolution<a id='math:sec:definition_of_the_convolution'></a>\nThe convolution $\\circ$ is an operation acting on two complex-valued functions.\n<a id='math:eq:5_001'></a><!--\\label{math:eq:5_001}-->$$\n\\circ: \\left\\{f\\,|\\, f:\\mathbb{R}\\rightarrow \\mathbb{C}\\right\\}\\,\\times\\, \\left\\{f\\,|\\, f:\\mathbb{R}\\rightarrow \\mathbb{C}\\right\\} \\rightarrow \\left\\{f\\,|\\, f:\\mathbb{R}\\rightarrow \\mathbb{C}\\right\\}\\\n(f\\circ g)(x) \\,=\\, \\int_{-\\infty}^{+\\infty} f(x-t)\\,g(t)\\,dt\n$$\nor, in more than one dimension\n<a id='math:eq:5_002'></a><!--\\label{math:eq:5_002}-->$$\n\\circ: \\left\\{f\\,|\\, f:\\mathbb{R}^n\\rightarrow \\mathbb{C}\\right\\}\\,\\times\\, \\left\\{f\\,|\\, f:\\mathbb{R}^n\\rightarrow \\mathbb{C}\\right\\} \\rightarrow \\left\\{f\\,|\\, f:\\mathbb{R}^n\\rightarrow \\mathbb{C}\\right\\} \\, \\quad n \\in \\mathbb{N}\\\n\\begin{align}\n(f\\circ g)(x_1,\\ldots,x_n ) \\,&=\\, (f\\circ g)({\\bf x})\\\n\\,&=\\, \\int_{-\\infty}^{+\\infty} \\ldots \\int_{-\\infty}^{+\\infty} f(x_1-t_1, \\ldots , x_n-t_n)\\,g(t_1, \\ldots, t_n) \\,d^nt\\\n\\,&=\\, \\int_{-\\infty}^{+\\infty} f({\\bf x}-{\\bf t})\\,g({\\bf t}) \\,d^nt\\end{align}\n$$\n2.5.2. 
Properties of the convolution<a id='math:sec:properties_of_the_convolution'></a>\nThe following rules apply:\n<a id='math:eq:5_003'></a><!--\\label{math:eq:5_003}-->$$\n\\forall\\,f,g\\in \\left\\{h\\,|\\, h:\\mathbb{R}\\rightarrow \\mathbb{C}\\right\\}, \\quad a \\in \\mathbb{C}\\\n\\begin{align}\nf\\circ g \\,&=\\, g \\circ f&\\qquad (\\text{commutativity})\\\n(f\\circ g)\\circ h \\,&=\\, f \\circ (g\\circ h)&\\qquad (\\text{associativity})\\\nf \\circ (g + h) \\,&=\\, (f \\circ g) + (f\\circ h) &\\qquad (\\text{distributivity})\\\n(a\\, g)\\circ h \\,&=\\, a \\, (g\\circ h)&\\qquad (\\text{associativity with scalar multiplication})\\\n\\end{align}\n$$\nBecause (in one dimension, to keep it short)\n<a id='math:eq:5_002'></a><!--\\label{math:eq:5_002}-->$$\n\\begin{split}\n(f\\circ g)(x) \\,&=\\, \\int_{-\\infty}^{+\\infty} f(x-t)\\,g(t)\\,dt\\\n&\\underset{t^{\\prime} = x - t}{=}\\, \\int_{\\infty}^{-\\infty} f(t^{\\prime})\\,g(x-t^{\\prime})\\,\\frac{dt}{dt^{\\prime}}dt^{\\prime}\\\n&=\\, - \\int_{-\\infty}^{+\\infty} f(t^{\\prime})\\,g(x-t^{\\prime})(-1)\\,dt^{\\prime}\\\n&= (g\\circ f)(x)\\\n((f\\circ g)\\circ h)(x) \\,&=\\, \\int_{-\\infty}^{+\\infty} \\left[\\int_{-\\infty}^{+\\infty} f((x-t^{\\prime})-t)\\,g(t)\\,dt\\right]\\,h(t^\\prime)\\,dt^{\\prime}\\\n&=\\, \\int_{-\\infty}^{+\\infty} \\int_{-\\infty}^{+\\infty} f(x-t -t^{\\prime})\\,g(t)\\,h(t^\\prime)\\,dt\\,dt^{\\prime}\\\n&=\\, \\int_{-\\infty}^{+\\infty} \\int_{-\\infty}^{+\\infty} f((x-t) -t^{\\prime})\\,h(t^\\prime)\\,g(t)\\,dt^{\\prime}\\,dt\\\n&=\\, \\int_{-\\infty}^{+\\infty} \\left[\\int_{-\\infty}^{+\\infty} f((x-t) -t^{\\prime})\\,h(t^\\prime)\\,dt^{\\prime}\\right]\\,g(t)\\,dt\\\n&=\\, (f\\circ (g\\circ h))(x)\n\\end{split}\n$$\nThe last two rules can be easily verified.\n2.5.3. Convolution Examples<a id='math:sec:convolution_examples'></a>\nAs said, the convolution is often used to represent an instrumental function. We want to demonstrate this. 
Let us assume a simple function, the triangle wave and a rectangle function (scaled to an area of 1). If we convolve them we get this:", "import math\n\nfrom matplotlib import rcParams\nrcParams['text.usetex'] = True\n\n#def trianglewave(x, T):\n# \"\"\"\n# This is a sawtooth, though\n# \"\"\"\n# return np.mod(x/T,1.)*np.logical_and(x>=0,x<=T)\n\ndef trianglewave(x, T):\n \"\"\"\n T is the period.\n \"\"\"\n return np.abs(2.*(np.mod(x/T,1.)-0.5))-0.5\n\ndef boxcar(x,a,b,amp):\n return amp*np.logical_and(x>=a,x<=b)\n \ndef plottriboxconv(a, b, period):\n\n # limits of boxcar Play arround with this\n# a = -0.1\n# b = 0.1\n \n # Plotting range\n xrange = [-2., 2.]\n\n # Create functions\n xpoints = 1000\n \n # Resolution element\n dx = (xrange[1]-xrange[0])/float(xpoints)\n\n x = np.linspace(xrange[0], xrange[1], xpoints)\n y = boxcar(x, a, b, 1.)\n\n # boxcar will be normalised to 1. amp = 1./(b-a) works in the limit of many points, but here we do\n # numberofpixelsinbox*dx*amplitude = y.sum *dx*amplitude = 1\n # to take into account numerical effects\n amp = float(xpoints)/((xrange[1]-xrange[0])* y.sum())\n y = boxcar(x, a, b, 1./(b-a))\n ycorr = boxcar(x, a, b, amp)\n z = trianglewave(x, period)\n\n result = np.convolve(ycorr,z,'same')\n result = dx*result\n \n # Start the plot, create a figure instance and a subplot\n fig = plt.figure()\n ax1 = fig.add_subplot(311)\n fig.tight_layout()\n plt.subplots_adjust(hspace = 0.6)\n \n # Axis ranges\n ax1.axis([xrange[0]+(b-a), xrange[1]-(b-a), z.min()-0.1*(z.max()-z.min()), z.max()+0.1*(z.max()-z.min())])\n\n # Plot a grid\n ax1.grid(True)\n\n # Insert lines at x=0 and y=0\n ax1.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')\n ax1.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')\n \n # Plot function\n ax1.plot(x,z,'b-')\n\n plt.title(\"Triangle wave\", fontsize=14,color='black')\n \n ax2 = fig.add_subplot(312, sharex=ax1) \n\n # Axis ranges\n ax2.axis([xrange[0]+(b-a), xrange[1]-(b-a), 
ycorr.min()-0.1*(ycorr.max()-ycorr.min()), \\\n ycorr.max()+0.1*(ycorr.max()-ycorr.min())])\n\n # Plot a grid\n ax2.grid(True)\n\n # Insert lines at x=0 and y=0\n ax2.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')\n ax2.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')\n \n # Plot function\n e1 = int(math.ceil(xpoints*(a-xrange[0])/(xrange[1]-xrange[0])))\n ax2.plot(x[:e1],y[:e1],'b-')\n ax2.plot([a, a],[0., amp],'b--')\n e2 = int(math.floor(xpoints*(b-xrange[0])/(xrange[1]-xrange[0])))\n ax2.plot(x[e1:e2],y[e1:e2],'b-')\n e3 = xpoints\n ax2.plot(x[e2:],y[e2:],'b-')\n ax2.plot([b, b],[0., amp],'b--')\n\n plt.title(\"Rectangle function\", fontsize=14,color='black')\n \n ax3 = fig.add_subplot(313, sharex=ax2) \n\n # Axis ranges: mask out border effects\n rmin = result.min()\n rmax = result.max()\n \n # Just to make the result a bit more beautiful if the function is very flat\n if (rmax - rmin) < 0.1:\n rmin=rmin-0.1\n rmax=rmax+0.1\n\n ax3.axis([xrange[0]+(b-a), xrange[1]-(b-a), rmin-0.1*(rmax-rmin), rmax+0.1*(rmax-rmin)])\n\n # Plot a grid\n ax3.grid(True)\n\n # Insert lines at x=0 and y=0\n ax3.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')\n ax3.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')\n \n # Plot function\n plr1 = int(xpoints*(b-a)/(xrange[1]-xrange[0]))\n plr2 = int(xpoints*(1-(b-a)/(xrange[1]-xrange[0])))\n \n ax3.plot(x[plr1:plr2],result[plr1:plr2],'b-')\n\n plt.title(\"Triangle wave filtered with rectangle function\", fontsize=14,color='black')\n \n# first two arguments give the position of the rectangle, third the period of the Triangle\nplottriboxconv(-0.1, 0.1, 1.0)", "Figure 2.5.1: Rectangle-filtered triangle wave\nOne might assume that one is observing a (co-)sine function. 
But it can get worse:", "# first two arguments give the position of the rectangle, third the period of the Triangle\nplottriboxconv(-0.5, 0.5, 1.0)", "Figure 2.5.2:\nThis example illustrates that the process of filtering can destroy information about our signal. However, filtering can also be useful. Given noisy observations of a function, a rectangle function can be used to filter out the noise. This is illustrated in the subsequent example.", "from matplotlib import rcParams\nrcParams['text.usetex'] = True\n\ndef noisycosinewave(x, amplitude, T, sigma):\n \"\"\"\n T is the period, sigma is the dispersion, amplitude the amplitude\n \"\"\"\n return amplitude*np.cos(2.*math.pi*x/T)+np.random.normal(scale=sigma, size=x.size)\n\ndef boxcar(x,a,b,amp):\n return amp*np.logical_and(x>=a,x<=b)\n \ndef plotcosboxconv(a, b, period, sigma):\n\n # limits of boxcar Play arround with this\n# a = -0.1\n# b = 0.1\n \n # Plotting range\n xrange = [-2., 2.]\n\n # Create functions\n xpoints = 1000\n \n # Resolution element\n dx = (xrange[1]-xrange[0])/float(xpoints)\n\n x = np.linspace(xrange[0], xrange[1], xpoints)\n y = boxcar(x, a, b, 1.)\n\n # boxcar will be normalised to 1. 
amp = 1./(b-a) works in the limit of many points, but here we do\n # numberofpixelsinbox*dx*amplitude = y.sum *dx*amplitude = 1\n # to take into account numerical effects\n amp = float(xpoints)/((xrange[1]-xrange[0])* y.sum())\n y = boxcar(x, a, b, 1./(b-a))\n ycorr = boxcar(x, a, b, amp)\n z = noisycosinewave(x, 1., period, sigma)\n c = np.cos(2.*math.pi*x/period)\n \n result = np.convolve(ycorr,z,'same')\n result = dx*result\n \n # Start the plot, create a figure instance and a subplot\n fig = plt.figure()\n \n ax1 = fig.add_subplot(411)\n fig.tight_layout()\n plt.subplots_adjust(hspace = 0.8)\n \n # Axis ranges\n ax1.axis([xrange[0]+(b-a), xrange[1]-(b-a), c.min()-0.1*(c.max()-c.min()), c.max()+0.1*(c.max()-c.min())])\n\n # Plot a grid\n ax1.grid(True)\n\n # Insert lines at x=0 and y=0\n ax1.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')\n ax1.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')\n \n # Plot function\n ax1.plot(x,c,'b-')\n\n plt.title(\"Original function (cos)\", fontsize=14,color='black')\n\n ax1 = fig.add_subplot(412)\n \n # Axis ranges\n ax1.axis([xrange[0]+(b-a), xrange[1]-(b-a), z.min()-0.1*(z.max()-z.min()), z.max()+0.1*(z.max()-z.min())])\n\n # Plot a grid\n ax1.grid(True)\n\n # Insert lines at x=0 and y=0\n ax1.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')\n ax1.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')\n \n # Plot function\n ax1.plot(x,z,'b-')\n\n plt.title(\"Noise added\", fontsize=14,color='black')\n \n ax2 = fig.add_subplot(413, sharex=ax1) \n\n # Axis ranges\n ax2.axis([xrange[0]+(b-a), xrange[1]-(b-a), ycorr.min()-0.1*(ycorr.max()-ycorr.min()), \\\n ycorr.max()+0.1*(ycorr.max()-ycorr.min())])\n\n # Plot a grid\n ax2.grid(True)\n\n # Insert lines at x=0 and y=0\n ax2.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')\n ax2.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')\n \n # Plot function\n e1 = int(math.ceil(xpoints*(a-xrange[0])/(xrange[1]-xrange[0])))\n 
ax2.plot(x[:e1],y[:e1],'b-')\n ax2.plot([a, a],[0., amp],'b--')\n e2 = int(math.floor(xpoints*(b-xrange[0])/(xrange[1]-xrange[0])))\n ax2.plot(x[e1:e2],y[e1:e2],'b-')\n e3 = xpoints\n ax2.plot(x[e2:],y[e2:],'b-')\n ax2.plot([b, b],[0., amp],'b--')\n\n plt.title(\"Rectangle function\", fontsize=14,color='black')\n \n ax3 = fig.add_subplot(414, sharex=ax2) \n\n # Axis ranges: mask out border effects\n rmin = result.min()\n rmax = result.max()\n \n # Just to make the result a bit more beautiful if the function is very flat\n if (rmax - rmin) < 0.1:\n rmin=rmin-0.1\n rmax=rmax+0.1\n\n ax3.axis([xrange[0]+(b-a), xrange[1]-(b-a), rmin-0.1*(rmax-rmin), rmax+0.1*(rmax-rmin)])\n\n # Plot a grid\n ax3.grid(True)\n\n # Insert lines at x=0 and y=0\n ax3.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')\n ax3.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')\n \n # Plot function\n plr1 = int(xpoints*(b-a)/(xrange[1]-xrange[0]))\n plr2 = int(xpoints*(1-(b-a)/(xrange[1]-xrange[0])))\n \n ax3.plot(x[plr1:plr2],result[plr1:plr2],'b-')\n\n plt.title(\"Noisy function filtered with rectangle function\", fontsize=14,color='black')\n \n# first two arguments give the position of the rectangle, third the period of the Triangle\nplotcosboxconv(-0.1, 0.1, 1.0, 2.5)", "Figure 2.5.3:\nNote, that while the signal is not visible in the noisy data, it is partially recovered in the output of the filter.\nRepresenting instrumental functions, it is important to differentiate between the response function in a certain direction and the image of an impulse, which is the reverse of the response function. The function used to represent a measurement via convolution is the image of an impulse function at the origin. 
This becomes evident, when we convolve two asymmetric functions.", "from matplotlib import rcParams\nrcParams['text.usetex'] = True\n\ndef gausshermetian(x, amp, mu, sigma, h3, h4):\n \"\"\"\n T is the period, sigma is the dispersion, amplitude the amplitude\n \"\"\"\n y = (x-mu)/sigma\n return amp*np.exp(-0.5*y**2)*(1+h3*(2*np.sqrt(2.)*y**3-3*np.sqrt(2.)*y)/np.sqrt(6.)+h4*(4*y**4-12*y**2+3)/np.sqrt(24))\n\n#amplitude*np.cos(2.*math.pi*x/T)+np.random.normal(scale=sigma, size=x.size)\n\ndef boxcar(x,a,b,amp):\n return amp*np.logical_and(x>=a,x<=b)\n \ndef plotskewedgaussobs(pos1, pos2, boxwidth, sigma, h3, h4):\n\n # limits of boxcar Play arround with this\n# a = -0.1\n# b = 0.1\n \n # Plotting range\n xrange = [-2., 2.]\n\n # Create functions\n xpoints = 1000\n \n # Resolution element\n dx = (xrange[1]-xrange[0])/float(xpoints)\n\n x = np.linspace(xrange[0], xrange[1], xpoints)\n y = boxcar(x, pos1-boxwidth/2., pos1+boxwidth/2, \\\n 1./boxwidth)+0.5*boxcar(x, pos2-boxwidth/2., pos2+boxwidth/2, 1./boxwidth)\n\n # boxcar will be normalised to 1. 
amp = 1./(b-a) works in the limit of many points, but here we do\n # numberofpixelsinbox*dx*amplitude = y.sum *dx*amplitude = 1\n # to take into account numerical effects\n z = gausshermetian(x, 1., 0., sigma, h3, h4)\n \n result = np.convolve(y,z,'same')\n result = dx*result\n \n # Start the plot, create a figure instance and a subplot\n fig = plt.figure()\n \n ax1 = fig.add_subplot(311) \n fig.tight_layout()\n plt.subplots_adjust(hspace = 0.7)\n \n # Axis ranges\n ax1.axis([xrange[0]+boxwidth, xrange[1]-boxwidth, y.min()-0.1*(y.max()-y.min()), y.max()+0.1*(y.max()-y.min())])\n\n # Plot a grid\n ax1.grid(True)\n\n # Insert lines at x=0 and y=0\n ax1.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')\n ax1.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')\n \n # Plot function\n ax1.plot(x,y,'b-')\n\n plt.title(\"Original function, impulse\", fontsize=14,color='black')\n \n ax2 = fig.add_subplot(312, sharex=ax1)\n \n # Axis ranges\n ax2.axis([xrange[0]+boxwidth, xrange[1]-boxwidth, z.min()-0.1*(z.max()-z.min()), z.max()+0.1*(z.max()-z.min())])\n\n # Plot a grid\n ax2.grid(True)\n\n # Insert lines at x=0 and y=0\n ax2.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')\n ax2.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')\n \n # Plot function\n ax2.plot(x,z,'b-')\n\n plt.title(\"Instrumental function\", fontsize=14,color='black')\n\n ax3 = fig.add_subplot(313, sharex=ax2) \n\n # Axis ranges: mask out border effects\n rmin = result.min()\n rmax = result.max()\n \n ax3.axis([xrange[0]+boxwidth, xrange[1]-boxwidth, rmin-0.1*(rmax-rmin), rmax+0.1*(rmax-rmin)])\n\n # Plot a grid\n ax3.grid(True)\n\n # Insert lines at x=0 and y=0\n ax3.axhline(0.,linewidth=1, color = 'k', linestyle='dashed')\n ax3.axvline(0.,linewidth=1, color = 'k', linestyle='dashed')\n \n # Plot function\n plr1 = int(xpoints*boxwidth/(xrange[1]-xrange[0]))\n plr2 = int(xpoints*(1-boxwidth/(xrange[1]-xrange[0])))\n \n ax3.plot(x[plr1:plr2],result[plr1:plr2],'b-')\n\n 
plt.title(\"Image: original function filtered with instrumental function\", fontsize=14,color='black')\n \n# arguments give the positions of the two impulses, the box width, and sigma, h3, h4 of the instrumental function\nplotskewedgaussobs(0.0, 1.0, 0.01, 0.1, 0.2, 0.1)", "Figure 2.5.4:\nIf it were the sensitivity at a certain position, the convolution would not be the appropriate operation to describe an experiment. In that case (assuming real-valued functions), the cross-correlation would be the operation of choice.\n\n\nNext 2.6 Cross-correlation and auto-correlation\n\n<div class=warn><b>Future Additions:</b></div>\n\n\nadd the convolution theorem; it might be somewhere else but should be here also" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
corochann/chainer-hands-on-tutorial
src/02_mnist_mlp/mnist_dataset_introduction.ipynb
mit
[ "MNIST dataset introduction\nMNIST dataset\nThe MNIST (Mixed National Institute of Standards and Technology) database is a dataset of handwritten digits, distributed by Yann LeCun's THE MNIST DATABASE of handwritten digits website.\n\nWikipedia\n\nThe dataset consists of pairs of \"handwritten digit image\" and \"label\". Digits range from 0 to 9, meaning 10 patterns in total.\n\nhandwritten digit image: This is a grayscale image of size 28 x 28 pixels.\nlabel : This is the actual digit that the handwritten digit image represents, from 0 to 9.\n\n\nSeveral samples of \"handwritten digit image\" and its \"label\" from the MNIST dataset.\nThe MNIST dataset is widely used for \"classification\" and \"image recognition\" tasks. It is considered a relatively simple task, and is often used as the \"Hello world\" program of machine learning. It is also often used to compare algorithm performances in research.\nHandling the MNIST dataset with Chainer\nFor famous datasets like MNIST, Chainer provides utility functions to prepare the dataset, so you don't need to write the preprocessing code yourself: downloading the dataset from the internet, extracting it, formatting it, and so on. The Chainer functions do it for you!\nCurrently,\nMNIST\nCIFAR-10, CIFAR-100\nPenn Tree Bank (PTB)\nare supported; refer to the official documentation for datasets.\nLet's get familiar with MNIST dataset handling first. The code below is based on mnist_dataset.ipynb. To prepare the MNIST dataset, you just need to call the chainer.datasets.get_mnist function.", "import numpy as np\nimport chainer\n\n# Load the MNIST dataset with the built-in chainer function\ntrain, test = chainer.datasets.get_mnist()", "If this is the first time, it starts downloading the dataset, which might take several minutes. 
From the second time onward, Chainer will refer to the cached contents automatically, so it runs faster.\nYou will get 2 return values, corresponding to the \"training dataset\" and the \"test dataset\".\nMNIST has 70000 data points in total, where the training dataset size is 60000 and the test dataset size is 10000.", "# train[i] represents the i-th data point; there are 60000 training data points\n# the test data structure is the same, but with 10000 test data points in total\nprint('len(train), type ', len(train), type(train))\nprint('len(test), type ', len(test), type(test))", "Only the train dataset is explained below, but the test dataset has the same format.\ntrain[i] represents the i-th data point, type=tuple($ x_i $, $y_i $), where $ x_i $ is the image data in array format with size 784, and $y_i$ is the label indicating the actual digit of the image.", "print('train[0]', type(train[0]), len(train[0]))\n# print(train[0]) # x_i = long array and y_i = label", "$x_i $ information. You can see that the image is represented as just an array of float numbers ranging from 0 to 1. The MNIST image size is 28 × 28 pixels, so it is represented as a 784-element 1-d array.", "# train[i][0] represents x_i, MNIST image data,\n# type=numpy(784,) vector <- specified by ndim of get_mnist()\nprint('train[0][0]', train[0][0].shape)\nnp.set_printoptions(threshold=10) # set np.inf to print all.\nprint(train[0][0])", "$y_i $ information. In the case below, you can see that the 0-th image has the label \"5\".", "# train[i][1] represents y_i, MNIST label data(0-9), type=numpy() -> this means scalar\nprint('train[0][1]', train[0][1].shape, train[0][1])", "Plotting MNIST\nSo, each i-th data point consists of an image and a label\n- train[i][0] or test[i][0]: i-th handwritten image\n- train[i][1] or test[i][1]: i-th label\nBelow is plotting code to check what the images (just array vectors in the Python program) look like. 
This code will generate the MNIST image that was shown at the top of this article.", "import os\n\nimport chainer\nimport matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nbase_dir = 'src/02_mnist_mlp/images'\n\n# Load the MNIST dataset with the built-in chainer function\ntrain, test = chainer.datasets.get_mnist(ndim=1)\n\nROW = 4\nCOLUMN = 5\nfor i in range(ROW * COLUMN):\n # train[i][0] is i-th image data with size 28x28\n image = train[i][0].reshape(28, 28) # not necessary to reshape if ndim is set to 2\n plt.subplot(ROW, COLUMN, i+1) # subplot with size (width 3, height 5)\n plt.imshow(image, cmap='gray') # cmap='gray' is for black and white picture.\n # train[i][1] is i-th digit label\n plt.title('label = {}'.format(train[i][1]))\n plt.axis('off') # do not show axis value\nplt.tight_layout() # automatic padding between subplots\nplt.savefig(os.path.join(base_dir, 'mnist_plot.png'))\n#plt.show()", "[Hands on] Try plotting the \"test\" dataset instead of the \"train\" dataset." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Vvkmnn/books
AutomateTheBoringStuffWithPython/lesson21.ipynb
gpl-3.0
[ "Lesson 21:\nString Formatting\nYou can typically combine strings with +.", "'hello ' + 'world!'", "This gets harder with more variables.", "name = 'Alice'\nplace = 'Main Street'\ntime = '6 pm'\nfood = 'turnips'\n\nprint('Hello ' + name + ', you are invited to a party at ' + place + ' at ' + time + '. Please bring ' + food + '.')", "Python has string interpolation, which uses %s to insert other strings into placeholders.", "print(' Hello %s, you are invited to a party at %s at %s. Please bring %s.' % (name, place, time, food)) " ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session13/Day4/ConditionalEntropy.ipynb
mit
import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n%matplotlib notebook", "Conditional Entropy: Can Information Theory Beat the L-S Periodogram?\nVersion 0.2\n\nBy AA Miller\n23 Sep 2021\nLecture IV focused on alternative methods to Lomb-Scargle when searching for periodic signals in astronomical time series. In this notebook you will develop the software necessary to search for periodicity via Conditional Entropy (my personal favorite method). \nConditional Entropy\nConditional Entropy (CE; Graham et al. 2013), and other entropy-based methods, aim to minimize the entropy in binned (normalized magnitude, phase) space. CE, in particular, is good at suppressing signal due to the window function.\nWhen tested on real observations, CE outperforms most of the alternatives (e.g., LS, PDM, etc).\n<img style=\"display: block; margin-left: auto; margin-right: auto\" src=\"./images/CE.png\" align=\"middle\">\n<div align=\"right\"> <font size=\"-3\">(credit: Graham et al. 2013) </font></div>\n\nConditional Entropy\nThe focus of today's exercise is conditional entropy (CE), which uses information theory and thus, in principle, works better in the presence of noise and outliers. Furthermore, CE does not make any assumptions about the underlying shape of the signal, which is useful when looking for non-sinusoidal patterns (such as transiting planets or eclipsing binaries).\nFor full details on the CE algorithm, see Graham et al. (2013).\nBriefly, CE is based on using the Shannon entropy (Cincotta et al. 1995), which is determined as follows:\n\n\nNormalize the time series data $m(t_i)$ to occupy a uniform square over phase, $\phi$, and magnitude, $m$, at a given trial period, $p$. 
\n\n\nCalculate the Shannon entropy, $H_0$, over the $k$ partitions in $(\\phi, m)$:\n\n\n$$H_0 = - \\sum_{i=1}^{k} \\mu_i \\ln{(\\mu_i)}\\;\\; \\forall \\mu_i \\ne 0,$$\n where $\\mu_i$ is the occupation probability for the $i^{th}$ partition, which is just the number of data points in that partition divided by the total number of points in the data set.\n\nIterate over multiple periods, and identify the period that minimizes the entropy (recall that entropy measures a lack of information)\n\nAs discussed in Graham et al. (2013), minimizing the Shannon entropy can be influenced by the window function, so they introduce the conditional entropy, $H_c(m|\\phi)$, to help mitigate these effects. The CE can be calculated as:\n$$H_c = \\sum_{i,j} p(m_i, \\phi_j) \\ln \\left( \\frac{p(\\phi_j)}{p(m_i, \\phi_j)} \\right), $$\nwhere $p(m_i, \\phi_j)$ is the occupation probability for the $i^{th}$ partition in normalized magnitude and the $j^{th}$\npartition in phase and $p(\\phi_j)$ is the occupation probability of the $j^{th}$ phase partition\nIn this problem we will first calculate the Shannon entropy, then the CE to find the best-fit period of the eclipsing binary from the LS lecture.\nProblem 1) Helper Functions\nProblem 1a\nCreate a function, gen_periodic_data, that creates simulated data (including noise) over a grid of user supplied positions:\n$$ y = A\\,cos\\left(\\frac{2{\\pi}x}{P} - \\phi\\right) + \\sigma_y$$\nwhere $A, P, \\phi$ are inputs to the function. 
gen_periodic_data should include Gaussian noise, $\sigma_y$, for each output $y_i$.", "def gen_periodic_data(x, period=1, amplitude=1, phase=0, noise=0):\n '''Generate periodic data given the function inputs\n \n y = A*cos(x/p - phase) + noise\n \n Parameters\n ----------\n x : array-like\n input values to evaluate the array\n \n period : float (default=1)\n period of the periodic signal\n \n amplitude : float (default=1)\n amplitude of the periodic signal\n \n phase : float (default=0)\n phase offset of the periodic signal\n \n noise : float (default=0)\n variance of the noise term added to the periodic signal\n \n Returns\n -------\n y : array-like\n Periodic signal evaluated at all points x\n '''\n \n y = amplitude*np.cos(2*np.pi*x/(period) - phase) + np.random.normal(0, np.sqrt(noise), size=len(x))\n return y", "Problem 1b\nCreate a function, phase_plot, that takes x, y, and $P$ as inputs to create a phase-folded light curve (i.e., plot the data at their respective phase values given the period $P$).\nInclude an optional argument, y_unc, to include uncertainties on the y values, when available.", "def phase_plot(x, y, period, y_unc = 0.0):\n '''Create phase-folded plot of input data x, y\n \n Parameters\n ----------\n x : array-like\n data values along abscissa\n\n y : array-like\n data values along ordinate\n\n period : float\n period to fold the data\n \n y_unc : array-like\n uncertainty of the y values\n ''' \n phases = (x/period) % 1\n if type(y_unc) == float:\n y_unc = np.zeros_like(x)\n \n plot_order = np.argsort(phases)\n norm_y = (y - np.min(y))/(np.max(y) - np.min(y))\n norm_y_unc = (y_unc)/(np.max(y) - np.min(y))\n \n plt.rc('grid', linestyle=\":\", color='0.8')\n fig, ax = plt.subplots()\n ax.errorbar(phases[plot_order], norm_y[plot_order], norm_y_unc[plot_order],\n fmt='o', mec=\"0.2\", mew=0.1)\n ax.set_xlabel(\"phase\")\n ax.set_ylabel(\"signal\")\n ax.set_yticks(np.linspace(0,1,11))\n ax.set_xticks(np.linspace(0,1,11))\n ax.grid()\n fig.tight_layout()", 
"Problem 1c\nGenerate a signal with $A = 2$, $p = \\pi$, and Gaussian noise with variance = 0.01 over a regular grid between 0 and 10. Plot the phase-folded results (and make sure the results behave as you would expect).\nHint - your simulated signal should have at least 100 data points.", "x = np.linspace( # complete\ny = # complete\n\n# complete plot", "Note a couple changes from the previous helper function –– we have added a grid to the plot (this will be useful for visualizing the entropy), and we have also normalized the brightness measurements from 0 to 1. \nProblem 2) The Shannon entropy\nAs noted above, to calculate the Shannon entropy we need to sum the data over partitions in the normalized $(\\phi, m)$ plane.\nThis is straightforward using histogram2d from numpy. \nProblem 2a \nWrite a function shannon_entropy to calculate the Shannon entropy, $H_0$, for a timeseries, $m(t_i)$, at a given period, p.\nHint - use histogram2d and a 10 x 10 grid (as plotted above).", "def shannon_entropy(m, t, p):\n '''Calculate the Shannon entropy\n \n Parameters\n ----------\n m : array-like\n brightness measurements of the time-series data\n \n t : array-like (default=1)\n timestamps corresponding to the brightness measurements\n \n p : float\n period of the periodic signal\n \n Returns\n -------\n H0 : float\n Shannon entropy for m(t) at period p\n '''\n \n m_norm = # complete\n phases = # complete\n H, _, _ = np.histogram2d( # complete\n \n occupied = np.where(H > 0)\n H0 = # complete\n \n return H0", "Problem 2b\nWhat is the Shannon entropy for the simulated signal at periods = 1, $\\pi$-0.05, and $\\pi$?\nDo these results make sense given your understanding of the Shannon entropy?", "print('For p = 1, \\t\\tH_0 = {:.5f}'.format( # complete\nprint('For p = pi - 0.05, \\tH_0 = {:.5f}'.format( # complete\nprint('For p = pi, \\t\\tH_0 = {:.5f}'.format( # complete", "We know the correct period of the simulated data is $\\pi$, so it makes sense that this period minimizes 
the Shannon entropy. \nProblem 2c\nWrite a function, se_periodogram to calculate the Shannon entropy for observations $m$, $t$ over a frequency grid f_grid.", "def se_periodogram(m, t, f_grid):\n '''Calculate the Shannon entropy at every freq in f_grid\n \n Parameters\n ----------\n m : array-like\n brightness measurements of the time-series data\n \n t : array-like\n timestamps corresponding to the brightness measurements\n \n f_grid : array-like\n trial periods for the periodic signal\n \n Returns\n -------\n se_p : array-like\n Shannon entropy for m(t) at every trial freq\n '''\n \n # complete\n for # complete in # complete\n # complete\n \n return se_p", "Problem 2d\nPlot the Shannon entropy periodogram, and return the best-fit period from the periodogram.\nHint - recall what we learned about frequency grids earlier today.", "f_grid = # complete\nse_p = # complete\n\nfig,ax = plt.subplots()\n# complete\n# complete\n# complete\n\nprint(\"The best fit period is: {:.4f}\".format( # complete", "Problem 3) The Conditional Entropy\nThe CE is very similar to the Shannon entropy, though we need to condition the calculation on the occupation probability of the partitions in phase.\nProblem 3a \nWrite a function conditional_entropy to calculate the CE, $H_c$, for a timeseries, $m(t_i)$, at a given period, p.\nHint - if you use histogram2d be sure to sum along the correct axes\nHint 2 - recall from session 8 that we want to avoid for loops, try to vectorize your calculation.", "def conditional_entropy(m, t, p):\n '''Calculate the conditional entropy\n \n Parameters\n ----------\n m : array-like\n brightness measurements of the time-series data\n \n t : array-like\n timestamps corresponding to the brightness measurements\n \n p : float\n period of the periodic signal\n \n Returns\n -------\n Hc : float\n Conditional entropy for m(t) at period p\n '''\n \n m_norm = # complete\n phases = # complete\n # complete\n # complete\n # complete\n Hc = # complete\n \n return Hc", 
"Problem 3b\nWhat is the conditional entropy for the simulated signal at periods = 1, $\\pi$-0.05, and $\\pi$?\nDo these results make sense given your understanding of CE?", "print('For p = 1, \\t\\tH_c = {:.5f}'.format( # complete\nprint('For p = pi - 0.05, \\tH_c = {:.5f}'.format( # complete\nprint('For p = pi, \\t\\tH_c = {:.5f}'.format( # complete", "Problem 3c\nWrite a function, ce_periodogram, to calculate the conditional entropy for observations $m$, $t$ over a frequency grid f_grid.", "def ce_periodogram(m, t, f_grid):\n '''Calculate the conditional entropy at every freq in f_grid\n \n Parameters\n ----------\n m : array-like\n brightness measurements of the time-series data\n \n t : array-like\n timestamps corresponding to the brightness measurements\n \n f_grid : array-like\n trial periods for the periodic signal\n \n Returns\n -------\n ce_p : array-like\n conditional entropy for m(t) at every trial freq\n '''\n \n # complete\n for # complete in # complete\n # complete\n \n return ce_p", "Problem 3d\nPlot the conditional entropy periodogram, and return the best-fit period from the periodogram.", "f_grid = # complete\nce_p = # complete\n\nfig,ax = plt.subplots()\n# complete\n# complete\n# complete\n\nprint(\"The best fit period is: {:.4f}\".format( # complete", "The Shannon entropy and CE return nearly identical results for a simulated sinusoidal signal. Now we will examine how each performs with actual astronomical observations. \nProblem 4) SE vs. CE for real observations\nProblem 4a\nLoad the data from our favorite eclipsing binary from this morning's LS exercise. Plot the light curve. 
\nHint - if you haven't already, download the example light curve.", "data = pd.read_csv(\"example_asas_lc.dat\")\n\nfig, ax = plt.subplots()\nax.errorbar( # complete\nax.set_xlabel('HJD (d)')\nax.set_ylabel('V (mag)')\nax.set_ylim(ax.get_ylim()[::-1])\nfig.tight_layout()", "Problem 4b \nUsing the Shannon entropy, determine the best period for this light curve.\nHint - recall this morning's discussion about the optimal grid for a period search", "f_min = # complete\nf_max = # complete\ndelta_f = # complete\n\nf_grid = # complete\n\nse_p = # complete\n\nprint(\"The best fit period is: {:.9f}\".format( # complete", "Problem 4c\nPlot the Shannon entropy periodogram.", "fig, ax = plt.subplots()\n# complete\n# complete\n# complete", "Problem 4d\nPlot the light curve phase-folded on the best-fit period, as measured by the Shannon entropy periodogram.\nDoes this look reasonable? Why or why not?\nHint - it may be helpful to zoom in on the periodogram.", "phase_plot(# complete", "Problem 4e \nUsing the conditional entropy, determine the best period for this light curve.", "ce_p = # complete\n\nprint(\"The best fit period is: {:.9f}\".format( # complete", "Problem 4f\nPlot the CE periodogram.", "fig, ax = plt.subplots()\n# complete\n# complete\n# complete", "Problem 4g\nPlot the light curve phase-folded on the best-fit period, as measured by the CE periodogram.\nDoes this look reasonable? If not - can you make it look better?", "phase_plot( # complete", "This example demonstrates the primary strength of CE over the Shannon entropy. \nIf you zoom-in on the CE periodogram, there is no power at $p \\approx 1\\,\\mathrm{d}$, unlike the LS periodogram or the Shannon entropy method. This will not be the case for every single light curve, but this is a very nice feature of the CE method. 
And one reason why it may be preferred to something like LS when analyzing every light curve in LSST.\nChallenge Problem) Overlapping Bins\nIn the previous example we used a simple uniform grid to identify the best-fit period for the eclipsing binary. However, the \"best-fit\" resulted in an estimate of the half period. One way to improve upon this estimate is to build a grid that has overlapping phase bins. This requirement results in better continuity in the phase-folded light curves (K.Burdge, private communication). \nChallenge Problem\nBuild a function conditional_entropy_overlap that utilizes overlapping bins in the CE calculation. \nCan you use this function to identify the correct period of the binary?" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Hexiang-Hu/mmds
week6/.ipynb_checkpoints/Quiz-Week6-checkpoint.ipynb
mit
[ "Quiz -Week 6A\nQ1.\n\nThe figure below shows two positive points (purple squares) and two negative points (green circles): \n\n\n\n\nThat is, the training data set consists of:\n\n(x1,y1) = ((5,4),+1)\n(x2,y2) = ((8,3),+1)\n(x3,y3) = ((7,2),-1)\n(x4,y4) = ((3,3),-1)\n\n\n\nOur goal is to find the maximum-margin linear classifier for this data. In easy cases, the shortest line between a positive and negative point has a perpendicular bisector that separates the points. If so, the perpendicular bisector is surely the maximum-margin separator. Alas, in this case, the closest pair of positive and negative points, x2 and x3, have a perpendicular bisector that misclassifies x1 as negative, so that won't work.\n\n\nThe next-best possibility is that we can find a pair of points on one side (i.e., either two positive or two negative points) such that a line parallel to the line through these points is the maximum-margin separator. In these cases, the limit to how far from the two points the parallel line can get is determined by the closest (to the line between the two points) of the points on the other side. For our simple data set, this situation holds.\n\n\nConsider all possibilities for boundaries of this type, and express the boundary as w.x+b=0, such that w.x+b≥1 for positive points x and w.x+b≤-1 for negative points x. 
Assuming that w = (w1,w2), identify in the list below the true statement about one of w1, w2, and b.", "import numpy as np\n\np1 = (5, 4)\np2 = (8, 3)\np3 = (7, 2)\np4 = (3, 3)\n\ndef calc_wb(p1, p2):\n dx = ( p1[0] - p2[0] )\n dy = ( p1[1] - p2[1] )\n return ( ( float(dy) *2 / float(dy - dx), float(-dx)*2 / float(dy - dx) ),\\\n (dx*p2[1] - dy * p2[0])*2 / float(dy - dx) + 1) # b = dx*y1 - dy*x1\n\ndef cal_margin(w, b, pt):\n return w[0] * pt[0] + w[1] * pt[1] + b\n\nw, b = calc_wb(p1, p2)\nprint \"w for p1, p2: \" + str(w)\nprint \"b for p1, p2: \" + str(b)\nprint \"===========================\"\nprint cal_margin(w, b, p1)\nprint cal_margin(w, b, p2)\nprint cal_margin(w, b, p3)\nprint cal_margin(w, b, p4)\nprint \n\nw, b = calc_wb(p4, p3)\nprint \"w for p1, p2: \" + str(w)\nprint \"b for p1, p2: \" + str(b)\nprint \"===========================\"\nprint cal_margin(w, b, p1)\nprint cal_margin(w, b, p2)\nprint cal_margin(w, b, p3)\nprint cal_margin(w, b, p4)\n", "Q2.\n\nConsider the following training set of 16 points. The eight purple squares are positive examples, and the eight green circles are negative examples.\n\n\n\n\nWe propose to use the diagonal line with slope +1 and intercept +2 as a decision boundary, with positive examples above and negative examples below. However, like any linear boundary for this training set, some examples are misclassified. We can measure the goodness of the boundary by computing all the slack variables that exceed 0, and then using them in one of several objective functions. In this problem, we shall only concern ourselves with computing the slack variables, not an objective function.\n\n\nTo be specific, suppose the boundary is written in the form w.x+b=0, where w = (-1,1) and b = -2. Note that we can scale the three numbers involved as we wish, and so doing changes the margin around the boundary. However, we want to consider this specific boundary and margin.\n\n\nDetermine the slack for each of the 16 points. 
Then, identify the correct statement in the list below.", "w = (-1, 1)\nb = -2\n\ndef cal_margin(w, b, pt):\n return w[0] * pt[0] + w[1] * pt[1] + b\n\nprint cal_margin(w, b, (7, 10) )\nprint cal_margin(w, b, (7, 8) )\nprint cal_margin(w, b, (3, 4) )\nprint cal_margin(w, b, (3, 4) )", "Q3.\n\nBelow we see a set of 20 points and a decision tree for classifying the points.\n\n\n\n\n\nTo be precise, the 20 points represent (Age,Salary) pairs of people who do or do not buy gold jewelry. Age (abbreviated A in the decision tree) is the x-axis, and Salary (S in the tree) is the y-axis. Those that do are represented by gold points, and those that do not by green points. The 10 points of gold-jewelry buyers are:\n\n\n(28,145), (38,115), (43,83), (50,130), (50,90), (50,60), (50,30), (55,118), (63,88), and (65,140).\n\n\nThe 10 points of those that do not buy gold jewelry are:\n\n\n(23,40), (25,125), (29,97), (33,22), (35,63), (42,57), (44, 105), (55,63), (55,20), and (64,37).\n\n\nSome of these points are correctly classified by the decision tree and some are not. Determine the classification of each point, and then indicate in the list below the point that is misclassified.", "def predict_by_tree(pt):\n if pt[0] < 45:\n if pt[1] < 110:\n print \"Doesn't buy\"\n else:\n print \"Buy\"\n else:\n if pt[1] < 75:\n print \"Doesn't buy\"\n else:\n print \"Buy\"\n\npredict_by_tree((43, 83))\npredict_by_tree((55, 118))\npredict_by_tree((65, 140))\npredict_by_tree((28, 145))\n\nprint \"==============\"\npredict_by_tree((65, 140))\npredict_by_tree((25, 125))\npredict_by_tree((44, 105))\npredict_by_tree((35, 63))", "Quiz Week 6A.\nQ1.\n\nUsing the matrix-vector multiplication described in Section 2.3.1, applied to the matrix and vector:\n\n<pre>\n\n | 1 2 3 4 | | 1 |\n | 5 6 7 8 | * | 2 |\n | 9 10 11 12 | | 3 |\n | 13 14 15 16 | | 4 |\n\n</pre>\n\n\nApply the Map function to this matrix and vector. 
Then, identify in the list below, one of the key-value pairs that are output of Map.\n\nSolution 1.\nThe matrix-vector product is the vector x of length n, whose ith element xi is given by\n$$\n\\begin{equation}\n x_i = \\sum_{ j = 1}^n m_{ij} \\cdot v_j\n\\end{equation}\n$$\n\nFrom each matrix element mij it produces the key-value pair ( i, $m_{ij} \\cdot v_j$ ).\nThus, all terms of the sum that make up the component $x_i$ of the matrix-vector product will get the same key, i.", "import numpy as np\n\nmat = np.array([ [1, 2, 3, 4],\n [5, 6, 7, 8],\n [9, 10,11,12],\n [13,14,15,16] ])\nvec = np.array([1, 2, 3, 4])\n\ndef key_val(mat, vec):\n pair = dict()\n for idx, row in enumerate(mat):\n# pair[idx + 1] = np.dot(row, vec)\n pair[idx + 1] = row * vec\n \n return pair\n\nprint key_val(mat, vec)", "Q2.\n\nSuppose we use the algorithm of Section 2.3.10 to compute the product of matrices M and N. Let M have x rows and y columns, while N has y rows and z columns. As a function of x, y, and z, express the answers to the following questions:\nThe output of the Map function has how many different keys? How many key-value pairs are there with each key? How many key-value pairs are there in all?\nThe input to the Reduce function has how many keys? What is the length of the value (a list) associated with each key?\n\nSolution 2.\n\nDifferent keys output of Map function => x * z\nKey-value pairs with each key => 2 * y\nKey-value pairs in all => 2 * x * y * z\nKeys input to Reduce function => x * z\nLength of value list = 2 * y\n\nQ3.\n\nSuppose we use the two-stage algorithm of Section 2.3.9 to compute the product of matrices M and N. Let M have x rows and y columns, while N has y rows and z columns. As a function of x, y, and z, express the answers to the following questions:\nThe output of the first Map function has how many different keys? How many key-value pairs are there with each key?
How many key-value pairs are there in all?\nThe output of the first Reduce function has how many keys? What is the length of the value (a list) associated with each key?\nThe output of the second Map function has how many different keys? How many key-value pairs are there with each key? How many key-value pairs are there in all?\nThen, identify the true statement in the list below.\n\nSolution 3.\n\nDifferent keys of first map => y\nDifferent key-value pairs of each key => y * x + y * z\nKey-value pairs in all => y * ( y * x + y * z)\n\nQ4.\n\nSuppose we have the following relations:\n\n<pre>\n\n R S\n\n __A__ __B__ __B__ __C__\n 0 1 0 1\n 1 2 1 2\n 2 3 2 3\n\n</pre>\n\n\n\nand we take their natural join by the algorithm of Section 2.3.7. Apply the Map function to the tuples of these relations. Then, construct the elements that are input to the Reduce function. Identify one of these elements in the list below.\n\n\nMap Results:\n\n\n<pre>\n (1, (R, 0))\n (2, (R, 1))\n (3, (R, 2))\n (0, (S, 1))\n (1, (S, 2))\n (2, (S, 3))\n</pre>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
roryyorke/python-control
examples/steering.ipynb
bsd-3-clause
[ "Vehicle steering\nKarl J. Astrom and Richard M. Murray\n23 Jul 2019\nThis notebook contains the computations for the vehicle steering running example in Feedback Systems.\nRMM comments to Karl, 27 Jun 2019\n* I'm using this notebook to walk through all of the vehicle steering examples and make sure that all of the parameters, conditions, and maximum steering angles are consistent and reasonable.\n* Please feel free to send me comments on the contents as well as the bulleted notes, in whatever form is most convenient.\n* Once we have sorted out all of the settings we want to use, I'll copy over the changes into the MATLAB files that we use for creating the figures in the book.\n* These notes will be removed from the notebook once we have finalized everything.", "import numpy as np\nimport matplotlib.pyplot as plt\nimport control as ct\nct.use_fbs_defaults()\nct.use_numpy_matrix(False)", "Vehicle steering dynamics (Example 3.11)\nThe vehicle dynamics are given by a simple bicycle model. We take the state of the system as $(x, y, \\theta)$ where $(x, y)$ is the position of the reference point of the vehicle in the plane and $\\theta$ is the angle of the vehicle with respect to horizontal. The vehicle input is given by $(v, \\delta)$ where $v$ is the forward velocity of the vehicle and $\\delta$ is the angle of the steering wheel. We take as parameters the wheelbase $b$ and the offset $a$ between the rear wheels and the reference point. The model includes saturation of the vehicle steering angle (maxsteer).\n\nSystem state: x, y, theta\nSystem input: v, delta \nSystem output: x, y \nSystem parameters: wheelbase, refoffset, maxsteer \n\nAssuming no slipping of the wheels, the motion of the vehicle is given by a rotation around a point O that depends on the steering angle $\\delta$.
To compute the angle $\\alpha$ of the velocity of the reference point with respect to the axis of the vehicle, we let the distance from the center of rotation O to the contact point of the rear wheel be $r_\\text{r}$ and it then follows from Figure 3.17 in FBS that $b = r_\\text{r} \\tan \\delta$ and $a = r_\\text{r} \\tan \\alpha$, which implies that $\\tan \\alpha = (a/b) \\tan \\delta$.\nReasonable limits for the steering angle depend on the speed. The physical limit is given in our model as 0.5 radians (about 30 degrees). However, this limit is rarely possible when the car is driving since it would cause the tires to slide on the pavement. We use a limit of 0.1 radians (about 6 degrees) at 10 m/s ($\\approx$ 35 kph) and 0.05 radians (about 3 degrees) at 30 m/s ($\\approx$ 110 kph). Note that a steering angle of 0.05 rad gives a cross acceleration of $(v^2/b) \\tan \\delta \\approx (100/3) 0.05 = 1.7$ $\\text{m/s}^2$ at 10 m/s and 15 $\\text{m/s}^2$ at 30 m/s ($\\approx$ 1.5 times the force of gravity).", "def vehicle_update(t, x, u, params):\n # Get the parameters for the model\n a = params.get('refoffset', 1.5) # offset to vehicle reference point\n b = params.get('wheelbase', 3.)
# vehicle wheelbase\n maxsteer = params.get('maxsteer', 0.5) # max steering angle (rad)\n\n # Saturate the steering input\n delta = np.clip(u[1], -maxsteer, maxsteer)\n alpha = np.arctan2(a * np.tan(delta), b)\n\n # Return the derivative of the state\n return np.array([\n u[0] * np.cos(x[2] + alpha), # xdot = cos(theta + alpha) v\n u[0] * np.sin(x[2] + alpha), # ydot = sin(theta + alpha) v\n (u[0] / b) * np.tan(delta) # thdot = v/l tan(phi)\n ])\n\ndef vehicle_output(t, x, u, params):\n return x[0:2]\n\n# Default vehicle parameters (including nominal velocity)\nvehicle_params={'refoffset': 1.5, 'wheelbase': 3, 'velocity': 15, \n 'maxsteer': 0.5}\n\n# Define the vehicle steering dynamics as an input/output system\nvehicle = ct.NonlinearIOSystem(\n vehicle_update, vehicle_output, states=3, name='vehicle',\n inputs=('v', 'delta'), outputs=('x', 'y'), params=vehicle_params)", "Vehicle driving on a curvy road (Figure 8.6a)\nTo illustrate the dynamics of the system, we create an input that correspond to driving down a curvy road. This trajectory will be used in future simulations as a reference trajectory for estimation and control.\nRMM notes, 27 Jun 2019:\n* The figure below appears in Chapter 8 (output feedback) as Example 8.3, but I've put it here in the notebook since it is a good way to demonstrate the dynamics of the vehicle.\n* In the book, this figure is created for the linear model and in a manner that I can't quite understand, since the linear model that is used is only for the lateral dynamics. 
The original file is OutputFeedback/figures/steering_obs.m.\n* To create the figure here, I set the initial vehicle angle to be $\\theta(0) = 0.75$ rad and then used an input that gives a figure approximating Example 8.3 To create the lateral offset, I think subtracted the trajectory from the averaged straight line trajectory, shown as a dashed line in the $xy$ figure below.\n* I find the approach that we used in the MATLAB version to be confusing, but I also think the method of creating the lateral error here is a hart to follow. We might instead consider choosing a trajectory that goes mainly vertically, with the 2D dynamics being the $x$, $\\theta$ dynamics instead of the $y$, $\\theta$ dynamics.\nKJA comments, 1 Jul 2019:\n\n\nI think we should point out that the reference point is typically the projection of the center of mass of the whole vehicle.\n\n\nThe heading angle $\\theta$ must be marked in Figure 3.17b.\n\n\nI think it is useful to start with a curvy road that you have done here but then to specialized to a trajectory that is essentially horizontal, where $y$ is the deviation from the nominal horizontal $x$ axis. Assuming that $\\alpha$ and $\\theta$ are small we get the natural linearization of (3.26) $\\dot x = v$ and $\\dot y =v(\\alpha + \\theta)$\n\n\nRMM response, 16 Jul 2019:\n* I've changed the trajectory to be about the horizontal axis, but I am ploting things vertically for better figure layout. 
This corresponds to what is done in Example 9.10 in the text, which I think looks OK.\nKJA response, 20 Jul 2019: Fig 8.6a is fine", "# System parameters\nwheelbase = vehicle_params['wheelbase']\nv0 = vehicle_params['velocity']\n\n# Control inputs\nT_curvy = np.linspace(0, 7, 500)\nv_curvy = v0*np.ones(T_curvy.shape) \ndelta_curvy = 0.1*np.sin(T_curvy)*np.cos(4*T_curvy) + 0.0025*np.sin(T_curvy*np.pi/7)\nu_curvy = [v_curvy, delta_curvy]\nX0_curvy = [0, 0.8, 0]\n\n# Simulate the system + estimator\nt_curvy, y_curvy, x_curvy = ct.input_output_response(\n vehicle, T_curvy, u_curvy, X0_curvy, params=vehicle_params, return_x=True)\n\n# Configure matplotlib plots to be a bit bigger and optimize layout\nplt.figure(figsize=[9, 4.5])\n\n# Plot the resulting trajectory (and some road boundaries)\nplt.subplot(1, 4, 2)\nplt.plot(y_curvy[1], y_curvy[0])\nplt.plot(y_curvy[1] - 9/np.cos(x_curvy[2]), y_curvy[0], 'k-', linewidth=1)\nplt.plot(y_curvy[1] - 3/np.cos(x_curvy[2]), y_curvy[0], 'k--', linewidth=1)\nplt.plot(y_curvy[1] + 3/np.cos(x_curvy[2]), y_curvy[0], 'k-', linewidth=1)\n\nplt.xlabel('y [m]')\nplt.ylabel('x [m]');\nplt.axis('Equal')\n\n# Plot the lateral position\nplt.subplot(2, 2, 2)\nplt.plot(t_curvy, y_curvy[1])\nplt.ylabel('Lateral position $y$ [m]')\n\n# Plot the steering angle\nplt.subplot(2, 2, 4)\nplt.plot(t_curvy, delta_curvy)\nplt.ylabel('Steering angle $\\\\delta$ [rad]')\nplt.xlabel('Time t [sec]')\nplt.tight_layout()", "Linearization of lateral steering dynamics (Example 6.13)\nWe are interested in the motion of the vehicle about a straight-line path ($\\theta = \\theta_0$) with constant velocity $v_0 \\neq 0$. To find the relevant equilibrium point, we first set $\\dot\\theta = 0$ and we see that we must have $\\delta = 0$, corresponding to the steering wheel being straight. The motion in the xy plane is by definition not at equilibrium and so we focus on lateral deviation of the vehicle from a straight line. 
For simplicity, we let $\\theta_\\text{e} = 0$, which corresponds to driving along the $x$ axis. We can then focus on the equations of motion in the $y$ and $\\theta$ directions with input $u = \\delta$.", "# Define the lateral dynamics as a subset of the full vehicle steering dynamics\nlateral = ct.NonlinearIOSystem(\n lambda t, x, u, params: vehicle_update(\n t, [0., x[0], x[1]], [params.get('velocity', 1), u[0]], params)[1:],\n lambda t, x, u, params: vehicle_output(\n t, [0., x[0], x[1]], [params.get('velocity', 1), u[0]], params)[1:],\n states=2, name='lateral', inputs=('phi'), outputs=('y')\n)\n\n# Compute the linearization at velocity v0 = 15 m/sec\nlateral_linearized = ct.linearize(lateral, [0, 0], [0], params=vehicle_params)\n\n# Normalize dynamics using state [x1/b, x2] and timescale v0 t / b\nb = vehicle_params['wheelbase']\nv0 = vehicle_params['velocity']\nlateral_transformed = ct.similarity_transform(\n lateral_linearized, [[1/b, 0], [0, 1]], timescale=v0/b)\n\n# Set the output to be the normalized state x1/b\nlateral_normalized = lateral_transformed * (1/b)\nprint(\"Linearized system dynamics:\\n\")\nprint(lateral_normalized)\n\n# Save the system matrices for later use\nA = lateral_normalized.A\nB = lateral_normalized.B\nC = lateral_normalized.C", "Eigenvalue placement controller design (Example 7.4)\nWe want to design a controller that stabilizes the dynamics of the vehicle and tracks a given reference value $r$ of the lateral position of the vehicle. We use feedback to design the dynamics of the system to have the characteristic polynomial\n$p(s) = s^2 + 2 \\zeta_\\text{c} \\omega_\\text{c} s + \\omega_\\text{c}^2$.\nTo find reasonable values of $\\omega_\\text{c}$ we observe that the initial response of the steering angle to a unit step change in the steering command is $\\omega_\\text{c}^2 r$, where $r$ is the commanded lateral transition.
Recall that the model is normalized so that the length unit is the wheelbase $b$ and the time unit is the time $b/v_0$ to travel one wheelbase. A typical car has a wheelbase of about 3 m and, assuming a speed of 30 m/s, a normalized time unit corresponds to 0.1 s. To determine a reasonable steering angle when making a gentle lane change, we assume that the turning radius is $R$ = 600 m. For a wheelbase of 3 m this corresponds to a steering angle $\\delta \\approx 3/600 = 0.005$ rad and a lateral acceleration of $v^2/R = 30^2/600 = 1.5$ m/s$^2$. Assuming that a lane change corresponds to a translation of one wheelbase we find $\\omega_\\text{c} = \\sqrt{0.005}$ = 0.07 rad/s.\nThe unit step responses for the closed loop system for different values of the design parameters are shown below. The effect of $\\omega_c$ is shown on the left, which shows that the response speed increases with increasing $\\omega_\\text{c}$. All responses have overshoot less than 5% (15 cm), as indicated by the dashed lines. The settling times range from 30 to 60 normalized time units, which corresponds to about 3–6 s, and are limited by the acceptable lateral acceleration of the vehicle. The effect of $\\zeta_\\text{c}$ is shown on the right. The response speed and the overshoot increase with decreasing damping. Using these plots, we conclude that a reasonable design choice is $\\omega_\\text{c} = 0.07$ and $\\zeta_\\text{c} = 0.7$. \nRMM note, 27 Jun 2019: \n* The design guidelines are for $v_0$ = 30 m/s (highway speeds) but most of the examples below are done at lower speed (typically 10 m/s). Also, the eigenvalue locations above are not the same ones that we use in the output feedback example below. We should probably make things more consistent.\nKJA comment, 1 Jul 2019: \n* I am all for making it consistent and choosing e.g. v0 = 30 m/s\nRMM comment, 17 Jul 2019:\n* I've updated the examples below to use v0 = 30 m/s for everything except the forward/reverse example.
This corresponds to ~105 kph (freeway speeds) and a reasonable bound for the steering angle to avoid slipping is 0.05 rad.", "# Utility function to place poles for the normalized vehicle steering system\ndef normalized_place(wc, zc):\n # Get the dynamics and input matrices, for later use\n A, B = lateral_normalized.A, lateral_normalized.B\n \n # Compute the eigenvalues from the characteristic polynomial\n eigs = np.roots([1, 2*zc*wc, wc**2])\n \n # Compute the feedback gain using eigenvalue placement\n K = ct.place_varga(A, B, eigs)\n \n # Create a new system representing the closed loop response\n clsys = ct.StateSpace(A - B @ K, B, lateral_normalized.C, 0)\n \n # Compute the feedforward gain based on the zero frequency gain of the closed loop\n kf = np.real(1/clsys.evalfr(0))\n\n # Scale the input by the feedforward gain\n clsys *= kf\n \n # Return gains and closed loop system dynamics\n return K, kf, clsys\n\n# Utility function to plot simulation results for normalized vehicle steering system\ndef normalized_plot(t, y, u, inpfig, outfig):\n plt.sca(outfig)\n plt.plot(t, y)\n plt.sca(inpfig)\n plt.plot(t, u[0])\n \n# Utility function to label plots of normalized vehicle steering system \ndef normalized_label(inpfig, outfig):\n plt.sca(inpfig)\n plt.xlabel('Normalized time $v_0 t / b$')\n plt.ylabel('Steering angle $\\delta$ [rad]')\n\n plt.sca(outfig)\n plt.ylabel('Lateral position $y/b$')\n plt.plot([0, 20], [0.95, 0.95], 'k--')\n plt.plot([0, 20], [1.05, 1.05], 'k--')\n\n# Configure matplotlib plots to be a bit bigger and optimize layout\nplt.figure(figsize=[9, 4.5])\n\n# Explore range of values for omega_c, with zeta_c = 0.7\noutfig = plt.subplot(2, 2, 1)\ninpfig = plt.subplot(2, 2, 3)\nzc = 0.7\nfor wc in [0.5, 0.7, 1]:\n # Place the poles of the system\n K, kf, clsys = normalized_place(wc, zc)\n \n # Compute the step response\n t, y, x = ct.step_response(clsys, np.linspace(0, 20, 100), return_x=True)\n \n # Compute the input used to generate the control 
response\n u = -K @ x + kf * 1\n\n # Plot the results\n normalized_plot(t, y, u, inpfig, outfig)\n \n# Add labels to the figure\nnormalized_label(inpfig, outfig)\nplt.legend(('$\\omega_c = 0.5$', '$\\omega_c = 0.7$', '$\\omega_c = 0.1$'))\n\n# Explore range of values for zeta_c, with omega_c = 0.07\noutfig = plt.subplot(2, 2, 2)\ninpfig = plt.subplot(2, 2, 4)\nwc = 0.7\nfor zc in [0.5, 0.7, 1]:\n # Place the poles of the system\n K, kf, clsys = normalized_place(wc, zc)\n \n # Compute the step response\n t, y, x = ct.step_response(clsys, np.linspace(0, 20, 100), return_x=True)\n \n # Compute the input used to generate the control response\n u = -K @ x + kf * 1\n\n # Plot the results\n normalized_plot(t, y, u, inpfig, outfig)\n \n# Add labels to the figure\nnormalized_label(inpfig, outfig)\nplt.legend(('$\\zeta_c = 0.5$', '$\\zeta_c = 0.7$', '$\\zeta_c = 1$'))\nplt.tight_layout()", "RMM notes, 17 Jul 2019\n* These step responses are very slow. Note that the steering wheel angles are about 10X less than a resonable bound (0.05 rad at 30 m/s). A consequence of these low gains is that the tracking controller in Example 8.4 has to use a different set of gains. We could update, but the gains listed here have a rationale that we would have to update as well.\n* Based on the discussion below, I think we should make $\\omega_\\text{c}$ range from 0.5 to 1 (10X faster).\nKJA response, 20 Jul 2019: Makes a lot of sense to make $\\omega_\\text{c}$ range from 0.5 to 1 (10X faster). The plots were still in the range 0.05 to 0.1 in the note you sent me.\nRMM response: 23 Jul 2019: Updated $\\omega_\\text{c}$ to 10X faster. Note that this makes size of the inputs for the step response quite large, but that is in part because a unit step in the desired position produces an (instantaneous) error of $b = 3$ m $\\implies$ quite a large error. 
A lateral error of 10 cm with $\\omega_c = 0.7$ would produce an (initial) input of 0.015 rad.\nEigenvalue placement observer design (Example 8.3)\nWe construct an estimator for the (normalized) lateral dynamics by assigning the eigenvalues of the estimator dynamics to desired values, specified in terms of the second order characteristic equation for the estimator dynamics.", "# Find the eigenvalues from the characteristic polynomial\nwo = 1 # bandwidth for the observer\nzo = 0.7 # damping ratio for the observer\neigs = np.roots([1, 2*zo*wo, wo**2])\n \n# Compute the estimator gain using eigenvalue placement\nL = np.transpose(\n ct.place(np.transpose(A), np.transpose(C), eigs))\nprint(\"L = \", L)\n\n# Create a linear model of the lateral dynamics driving the estimator\nest = ct.StateSpace(A - L @ C, np.block([[B, L]]), np.eye(2), np.zeros((2,2)))", "Linear observer applied to nonlinear system output\nA simulation of the observer for a vehicle driving on a curvy road is shown below. The first figure shows the trajectory of the vehicle on the road, as viewed from above. The response of the observer is shown on the right, where time is normalized to the vehicle length. We see that the observer error settles in about 4 vehicle lengths.\nRMM note, 27 Jun 2019:\n* As an alternative, we can attempt to estimate the state of the full nonlinear system using a linear estimator. This system does not necessarily converge to zero since there will be errors in the nominal dynamics of the system for the linear estimator.\n* The limits on the $x$ axis for the time plots are different to show the error over the entire trajectory.\n* We should decide whether we want to keep the figure above or the one below for the text.\nKJA comment, 1 Jul 2019:\n* I very much like your observation about the nonlinear system.
I think it is a very good idea to use your new simulation\nRMM comment, 17 Jul 2019: plan to use this version in the text.\nKJA comment, 20 Jul 2019: I think this is a big improvement we show that an observer based on a linearized model works on a nonlinear simulation, If possible we could add a line telling why the linear model works and that this is standard procedure in control engineering.", "# Convert the curvy trajectory into normalized coordinates\nx_ref = x_curvy[0] / wheelbase\ny_ref = x_curvy[1] / wheelbase\ntheta_ref = x_curvy[2]\ntau = v0 * T_curvy / b\n\n# Simulate the estimator, with a small initial error in y position\nt, y_est, x_est = ct.forced_response(est, tau, [delta_curvy, y_ref], [0.5, 0])\n\n# Configure matplotlib plots to be a bit bigger and optimize layout\nplt.figure(figsize=[9, 4.5])\n\n# Plot the actual and estimated states\nax = plt.subplot(2, 2, 1)\nplt.plot(t, y_ref)\nplt.plot(t, x_est[0])\nax.set(xlim=[0, 10])\nplt.legend(['actual', 'estimated'])\nplt.ylabel('Lateral position $y/b$')\n\nax = plt.subplot(2, 2, 2)\nplt.plot(t, x_est[0] - y_ref)\nax.set(xlim=[0, 10])\nplt.ylabel('Lateral error')\n\nax = plt.subplot(2, 2, 3)\nplt.plot(t, theta_ref)\nplt.plot(t, x_est[1])\nax.set(xlim=[0, 10])\nplt.xlabel('Normalized time $v_0 t / b$')\nplt.ylabel('Vehicle angle $\\\\theta$')\n\nax = plt.subplot(2, 2, 4)\nplt.plot(t, x_est[1] - theta_ref)\nax.set(xlim=[0, 10])\nplt.xlabel('Normalized time $v_0 t / b$')\nplt.ylabel('Angle error')\nplt.tight_layout()", "Output Feedback Controller (Example 8.4)\nRMM note, 27 Jun 2019\n* The feedback gains for the controller below are different that those computed in the eigenvalue placement example (from Ch 7), where an argument was given for the choice of the closed loop eigenvalues. 
Should we choose a single, consistent set of gains in both places?\n* This plot does not quite match Example 8.4 because a different reference is being used for the lateral position.\n* The transient in $\\delta$ is quite large. This appears to be due to the error in $\\theta(0)$, which is initialized to zero instead of to theta_curvy.\nKJA comment, 1 Jul 2019:\n1. The large initial errors dominate the plots.\n\nThere is something funny happening at the end of the simulation, maybe due to the small curvature at the end of the path?\n\nRMM comment, 17 Jul 2019:\n* Updated to use the new trajectory\n* We will have the issue that the gains here are different than the gains that we used in Chapter 7. I think that what we need to do is update the gains in Ch 7 (they are too sluggish, as noted above).\n* Note that unlike the original example in the book, the errors do not converge to zero. This is because we are using pure state feedback (no feedforward) => the controller doesn't apply any input until there is an error.\nKJA comment, 20 Jul 2019: We may add that state feedback is a proportional controller which does not guarantee that the error goes to zero, for example by changing the line \"The tracking error ...\" to \"The tracking error can be improved by adding integral action (Section 7.4), later in this chapter \"Disturbance Modeling\" or feedforward (Section 8.5).
Should we do an exercises?", "# Compute the feedback gains\n# K, kf, clsys = normalized_place(1, 0.707) # Gains from MATLAB\n# K, kf, clsys = normalized_place(0.07, 0.707) # Original gains\nK, kf, clsys = normalized_place(0.7, 0.707) # Final gains\n\n# Print out the gains\nprint(\"K = \", K)\nprint(\"kf = \", kf)\n\n# Construct an output-based controller for the system\nclsys = ct.StateSpace(\n np.block([[A, -B@K], [L@C, A - B@K - L@C]]),\n np.block([[B], [B]]) * kf, \n np.block([[C, np.zeros(C.shape)], [np.zeros(C.shape), C]]), \n np.zeros((2,1)))\n\n# Simulate the system\nt, y, x = ct.forced_response(clsys, tau, y_ref, [0.4, 0, 0.0, 0])\n\n# Calcaluate the input used to generate the control response\nu_sfb = kf * y_ref - K @ x[0:2]\nu_ofb = kf * y_ref - K @ x[2:4]\n\n# Configure matplotlib plots to be a bit bigger and optimize layout\nplt.figure(figsize=[9, 4.5])\n\n# Plot the actual and estimated states\nax = plt.subplot(1, 2, 1)\nplt.plot(t, x[0])\nplt.plot(t, x[2])\nplt.plot(t, y_ref, 'k-.')\nax.set(xlim=[0, 30])\nplt.legend(['state feedback', 'output feedback', 'reference'])\nplt.xlabel('Normalized time $v_0 t / b$')\nplt.ylabel('Lateral position $y/b$')\n\nax = plt.subplot(2, 2, 2)\nplt.plot(t, x[1])\nplt.plot(t, x[3])\nplt.plot(t, theta_ref, 'k-.')\nax.set(xlim=[0, 15])\nplt.ylabel('Vehicle angle $\\\\theta$')\n\nax = plt.subplot(2, 2, 4)\nplt.plot(t, u_sfb[0])\nplt.plot(t, u_ofb[0])\nplt.plot(t, delta_curvy, 'k-.')\nax.set(xlim=[0, 15])\nplt.xlabel('Normalized time $v_0 t / b$')\nplt.ylabel('Steering angle $\\\\delta$')\nplt.tight_layout()", "Trajectory Generation (Example 8.8)\nTo illustrate how we can use a two degree-of-freedom design to improve the performance of the system, consider the problem of steering a car to change lanes on a road. We use the non-normalized form of the dynamics, which were derived in Example 3.11.\nKJA comment, 1 Jul 2019:\n1. 
I think the reference trajectory is too much curved in the end compare with Example 3.11\nIn summary I think it is OK to change the reference trajectories but we should make sure that the curvature is less than $\\rho=600 m$ not to have too high acceleratarion.\nRMM response, 16 Jul 2019:\n* Not sure if the comment about the trajectory being too curved is referring to this example. The steering angles (and hence radius of curvature/acceleration) are quite low. ??\nKJA response, 20 Jul 2019: You are right the curvature is not too small. We could add the sentence \"The small deviations can be eliminated by adding feedback.\"\nRMM response, 23 Jul 2019: I think the small deviation you are referring to is in the velocity trace. This occurs because I gave a fixed endpoint in time and so the velocity had to be adjusted to hit that exact point at that time. This doesn't show up in the book, so it won't be a problem ($\\implies$ no additional explanation required).", "import control.flatsys as fs\n\n# Function to take states, inputs and return the flat flag\ndef vehicle_flat_forward(x, u, params={}):\n # Get the parameter values\n b = params.get('wheelbase', 3.)\n \n # Create a list of arrays to store the flat output and its derivatives\n zflag = [np.zeros(3), np.zeros(3)]\n \n # Flat output is the x, y position of the rear wheels\n zflag[0][0] = x[0]\n zflag[1][0] = x[1]\n \n # First derivatives of the flat output\n zflag[0][1] = u[0] * np.cos(x[2]) # dx/dt\n zflag[1][1] = u[0] * np.sin(x[2]) # dy/dt\n \n # First derivative of the angle\n thdot = (u[0]/b) * np.tan(u[1])\n\n # Second derivatives of the flat output (setting vdot = 0)\n zflag[0][2] = -u[0] * thdot * np.sin(x[2])\n zflag[1][2] = u[0] * thdot * np.cos(x[2])\n \n return zflag\n\n# Function to take the flat flag and return states, inputs\ndef vehicle_flat_reverse(zflag, params={}):\n # Get the parameter values\n b = params.get('wheelbase', 3.) 
\n\n # Create a vector to store the state and inputs\n x = np.zeros(3)\n u = np.zeros(2)\n \n # Given the flat variables, solve for the state\n x[0] = zflag[0][0] # x position\n x[1] = zflag[1][0] # y position\n x[2] = np.arctan2(zflag[1][1], zflag[0][1]) # tan(theta) = ydot/xdot\n \n # And next solve for the inputs\n u[0] = zflag[0][1] * np.cos(x[2]) + zflag[1][1] * np.sin(x[2])\n thdot_v = zflag[1][2] * np.cos(x[2]) - zflag[0][2] * np.sin(x[2])\n u[1] = np.arctan2(thdot_v, u[0]**2 / b)\n \n return x, u\n\nvehicle_flat = fs.FlatSystem(vehicle_flat_forward, vehicle_flat_reverse, inputs=2, states=3)", "To find a trajectory from an initial state $x_0$ to a final state $x_\\text{f}$ in time $T_\\text{f}$ we solve a point-to-point trajectory generation problem. We also set the initial and final inputs, which sets the vehicle velocity $v$ and steering wheel angle $\\delta$ at the endpoints.", "# Define the endpoints of the trajectory \nx0 = [0., 2., 0.]; u0 = [15, 0.]\nxf = [75, -2., 0.]; uf = [15, 0.]\nTf = xf[0] / uf[0]\n\n# Define a set of basis functions to use for the trajectories\npoly = fs.PolyFamily(6)\n\n# Find a trajectory between the initial condition and the final condition\ntraj = fs.point_to_point(vehicle_flat, x0, u0, xf, uf, Tf, basis=poly)\n\n# Create the trajectory\nt = np.linspace(0, Tf, 100)\nx, u = traj.eval(t)\n\n# Configure matplotlib plots to be a bit bigger and optimize layout\nplt.figure(figsize=[9, 4.5])\n\n# Plot the trajectory in xy coordinate\nplt.subplot(1, 4, 2)\nplt.plot(x[1], x[0])\nplt.xlabel('y [m]')\nplt.ylabel('x [m]')\n\n# Add lane lines and scale the axis\nplt.plot([-4, -4], [0, x[0, -1]], 'k-', linewidth=1)\nplt.plot([0, 0], [0, x[0, -1]], 'k--', linewidth=1)\nplt.plot([4, 4], [0, x[0, -1]], 'k-', linewidth=1)\nplt.axis([-10, 10, -5, x[0, -1] + 5])\n\n# Time traces of the state and input\nplt.subplot(2, 4, 3)\nplt.plot(t, x[1])\nplt.ylabel('y [m]')\n\nplt.subplot(2, 4, 4)\nplt.plot(t, x[2])\nplt.ylabel('theta 
[rad]')\n\nplt.subplot(2, 4, 7)\nplt.plot(t, u[0])\nplt.xlabel('Time t [sec]')\nplt.ylabel('v [m/s]')\nplt.axis([0, Tf, u0[0] - 1, uf[0] +1])\n\nplt.subplot(2, 4, 8)\nplt.plot(t, u[1]);\nplt.xlabel('Time t [sec]')\nplt.ylabel('$\\delta$ [rad]')\nplt.tight_layout()", "Vehicle transfer functions for forward and reverse driving (Example 10.11)\nThe vehicle steering model has different properties depending on whether we are driving forward or in reverse. The figures below show step responses from steering angle to lateral translation for a the linearized model when driving forward (dashed) and reverse (solid). In this simulation we have added an extra pole with the time constant $T=0.1$ to approximately account for the dynamics in the steering system.\nWith rear-wheel steering the center of mass first moves in the wrong direction and the overall response with rear-wheel steering is significantly delayed compared with that for front-wheel steering. (b) Frequency response for driving forward (dashed) and reverse (solid). Notice that the gain curves are identical, but the phase curve for driving in reverse has non-minimum phase.\nRMM note, 27 Jun 2019:\n* I cannot recreate the figures in Example 10.11. Since we are looking at the lateral velocity, there is a differentiator in the output and this takes the step function and creates an offset at $t = 0$ (intead of a smooth curve).\n* The transfer functions are also different, and I don't quite understand why. Need to spend a bit more time on this one.\nKJA comment, 1 Jul 2019: The reason why you cannot recreate figures i Example 10.11 is because the caption in figure is wrong, sorry my fault, the y-axis should be lateral position not lateral velocity. 
The approximate expression for the transfer functions\n$$\nG_{y\\delta}=\\frac{av_0s+v_0^2}{bs^2} = \\frac{1.5 s + 1}{3s^2}=\\frac{0.5s + 0.33}{s^2}\n$$\nare quite close to the values that you get numerically.\nIn this case I think it is useful to have v = 1 m/s because we do not drive too fast backwards.\nRMM response, 17 Jul 2019\n* Updated figures below use the same parameters as the running example (the current text uses different parameters)\n* Following the material in the text, a pole is added at s = -1 to approximate the dynamics of the steering system. This is not strictly needed, so we could decide to take it out (and update the text)\nKJA comment, 20 Jul 2019: I have been oscillating a bit about this example. Of course it does not make sense to drive in reverse at 30 m/s but it seems a bit silly to change parameters just in this case (if we do we have to motivate it). On the other hand what we are doing is essentially based on transfer functions and a RHP zero. My current view, which has changed a few times, is to keep the standard parameters. In any case we should eliminate the extra time constant. A small detail, I could not see the time response in the file you sent, do not resend it! I will look at the final version.\nRMM comment, 23 Jul 2019: I think it is OK to have the speed be different and just talk about this in the text.
I have removed the extra time constant in the current version.", "# Magnitude of the steering input (half maximum)\nMsteer = vehicle_params['maxsteer'] / 2\n\n# Create a linearized model of the system going forward at 2 m/s\nforward_lateral = ct.linearize(lateral, [0, 0], [0], params={'velocity': 2})\nforward_tf = ct.ss2tf(forward_lateral)[0, 0]\nprint(\"Forward TF = \", forward_tf)\n\n# Create a linearized model of the system going in reverse at 2 m/s\nreverse_lateral = ct.linearize(lateral, [0, 0], [0], params={'velocity': -2})\nreverse_tf = ct.ss2tf(reverse_lateral)[0, 0]\nprint(\"Reverse TF = \", reverse_tf)\n\n# Configure matplotlib plots to be a bit bigger and optimize layout\nplt.figure()\n\n# Forward motion\nt, y = ct.step_response(forward_tf * Msteer, np.linspace(0, 4, 500))\nplt.plot(t, y, 'b--')\n\n# Reverse motion\nt, y = ct.step_response(reverse_tf * Msteer, np.linspace(0, 4, 500))\nplt.plot(t, y, 'b-')\n\n# Add labels and reference lines\nplt.axis([0, 4, -0.5, 2.5])\nplt.legend(['forward', 'reverse'], loc='upper left')\nplt.xlabel('Time $t$ [s]')\nplt.ylabel('Lateral position [m]')\nplt.plot([0, 4], [0, 0], 'k-', linewidth=1)\n\n# Plot the Bode plots (the transfer functions are already SISO)\nplt.figure()\nplt.subplot(1, 2, 2)\nct.bode_plot(forward_tf, np.logspace(-1, 1, 100), color='b', linestyle='--')\nct.bode_plot(reverse_tf, np.logspace(-1, 1, 100), color='b', linestyle='-')\nplt.legend(('forward', 'reverse'));\n", "Feedforward Compensation (Example 12.6)\nFor a lane transfer system we would like to have a nice response without overshoot, and we therefore consider the use of feedforward compensation to provide a reference trajectory for the closed loop system. We choose the desired response as $F_\\text{m}(s) = a^2/(s + a)^2$, where the response speed or aggressiveness of the steering is governed by the parameter $a$.\nRMM note, 27 Jun 2019:\n* $a$ was used in the original description of the dynamics as the reference offset.
Perhaps choose a different symbol here?\n* In the current version of Ch 12, the $y$ axis is labeled in absolute units, but it should actually be in normalized units, I think.\n* The steering angle input for this example is quite high. Compare to Example 8.8, above. Also, we should probably make the size of the \"lane change\" from this example match whatever we use in Example 8.8.\nKJA comments, 1 Jul 2019: Chosen parameters look good to me.\nRMM response, 17 Jul 2019\n* I changed the time constant for the feedforward model to give something that is more reasonable in terms of turning angle at the speed of $v_0 = 30$ m/s. Note that this takes about 30 body lengths to change lanes (= 9 seconds at 105 kph).\n* The time to change lanes is about 2X what it is using the differentially flat trajectory above. This is mainly because the feedback controller applies a large pulse at the beginning of the trajectory (based on the input error), whereas the differentially flat trajectory spreads the turn over a longer interval. Since we are limiting the steering angle, we have to limit the size of the pulse => slow down the time constant for the reference model.\nKJA response, 20 Jul 2019: I think the time for lane change is too long, which may depend on the small steering angles used. The largest steering angle is about 0.03 rad, but we have admitted larger values in previous examples. I suggest that we change the design so that the largest steering angle is closer to 0.05; see the remark from Bjorn O that a lane change could take about 5 s at 30 m/s.\nRMM response, 23 Jul 2019: I reset the time constant to 0.2, which gives something closer to what we had for trajectory generation. It is still slower, but this is to be expected since it is a linear controller.
We now finish the trajectory in 20 body lengths, which is about 6 seconds.", "# Define the desired response of the system\na = 0.2\nP = ct.ss2tf(lateral_normalized)\nFm = ct.TransferFunction([a**2], [1, 2*a, a**2])\nFr = Fm / P\n\n# Compute the step response of the feedforward components\nt, y_ffwd = ct.step_response(Fm, np.linspace(0, 25, 100))\nt, delta_ffwd = ct.step_response(Fr, np.linspace(0, 25, 100))\n\n# Scale and shift to correspond to lane change (-2 to +2)\ny_ffwd = 0.5 - 1 * y_ffwd\ndelta_ffwd *= 1\n\n# Overhead view\nplt.subplot(1, 2, 1)\nplt.plot(y_ffwd, t)\nplt.plot(-1*np.ones(t.shape), t, 'k-', linewidth=1)\nplt.plot(0*np.ones(t.shape), t, 'k--', linewidth=1)\nplt.plot(1*np.ones(t.shape), t, 'k-', linewidth=1)\nplt.axis([-5, 5, -2, 27])\n\n# Plot the response\nplt.subplot(2, 2, 2)\nplt.plot(t, y_ffwd)\n# plt.axis([0, 10, -5, 5])\nplt.ylabel('Normalized position y/b')\n\nplt.subplot(2, 2, 4)\nplt.plot(t, delta_ffwd)\n# plt.axis([0, 10, -1, 1])\nplt.ylabel('$\\\delta$ [rad]')\nplt.xlabel('Normalized time $v_0 t / b$');\n\nplt.tight_layout()", "Fundamental Limits (Example 14.13)\nConsider a controller based on state feedback combined with an observer where we want a faster closed loop system and choose $\omega_\text{c} = 10$, $\zeta_\text{c} = 0.707$, $\omega_\text{o} = 20$, and $\zeta_\text{o} = 0.707$.\nKJA comment, 20 Jul 2019: This is a really troublesome case. If we keep it as a vehicle steering problem we must have an order of magnitude lower values for $\omega_c$ and $\omega_o$, and then the zero will not be slow. My recommendation is to keep it as a general system with the transfer function $P(s)=(s+1)/s^2$. The text then has to be reworded.\nRMM response, 23 Jul 2019: I think the way we have it is OK. Our current values for the controller and observer are $\omega_\text{c} = 0.7$ and $\omega_\text{o} = 1$.
Here we say we want something faster and so we go to $\omega_\text{c} = 7$ (10X) and $\omega_\text{o} = 10$ (10X).", "# Compute the feedback gain using eigenvalue placement\nwc = 10\nzc = 0.707\neigs = np.roots([1, 2*zc*wc, wc**2])\nK = ct.place(A, B, eigs)\nkr = np.real(1/clsys.evalfr(0))\nprint(\"K = \", np.squeeze(K))\n\n# Compute the estimator gain using eigenvalue placement\nwo = 20\nzo = 0.707\neigs = np.roots([1, 2*zo*wo, wo**2])\nL = np.transpose(\n ct.place(np.transpose(A), np.transpose(C), eigs))\nprint(\"L = \", np.squeeze(L))\n\n# Construct an output-based controller for the system\nC1 = ct.ss2tf(ct.StateSpace(A - B@K - L@C, L, K, 0))\nprint(\"C(s) = \", C1)\n\n# Compute the loop transfer function and plot Nyquist, Bode\nL1 = P * C1\nplt.figure(); ct.nyquist_plot(L1, np.logspace(0.5, 3, 500))\nplt.figure(); ct.bode_plot(L1, np.logspace(-1, 3, 500));\n\n# Modified control law\nwc = 10\nzc = 2.6\neigs = np.roots([1, 2*zc*wc, wc**2])\nK = ct.place(A, B, eigs)\nkr = np.real(1/clsys.evalfr(0))\nprint(\"K = \", np.squeeze(K))\n\n# Construct an output-based controller for the system\nC2 = ct.ss2tf(ct.StateSpace(A - B@K - L@C, L, K, 0))\nprint(\"C(s) = \", C2)\n\n# Plot the gang of four for the two designs\nct.gangof4(P, C1, np.logspace(-1, 3, 100))\nct.gangof4(P, C2, np.logspace(-1, 3, 100))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sz2472/foundations-homework
homework_6_shengying_zhao_graded.ipynb
mit
[ "Grade: 6.5 / 7", "Make a request from the Forecast.io API for where you were born (or lived, or want to visit!)\n\nimport requests\n\n!pip3 install requests\n\n#new york\nresponse = requests.get(\"https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889\")\n\ndata = response.json()\n\nprint(data)", "2) What's the current wind speed? How much warmer does it feel than it actually is?", "# TA-COMMENT: (-0.5) You don't give us the current wind speed! \n\ntype(data)\n\ndata.keys()\n\nprint(data['currently'])\n\nprint(data['currently']['temperature']-data['currently']['apparentTemperature'])", "3) The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?", "print(data['daily'])\n\ntype(data['daily'])\n\ndata['daily'].keys()\n\nprint(data['daily']['data'][0])\n\ntype(data['daily']['data'])\n\nprint(data['daily']['data'][0]['moonPhase'])", "4) What's the difference between the high and low temperatures for today?", "weather_today = data['daily']['data'][0]\nprint(weather_today['temperatureMax']-weather_today['temperatureMin'])", "5) Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.", "print(data['daily']['data'])\ndaily_data = data['daily']['data']\n\nweather_next_week = data['daily']['data']\nfor weather in weather_next_week:\n print(weather['temperatureMax'])\n if weather['temperatureMax'] > 84:\n print(\"it's a hot day.\")\n elif weather['temperatureMax'] > 74 and weather['temperatureMax'] < 83:\n print(\"it's a warm day.\")\n else:\n print(\"it's a cold day.\")", "6) What's the weather looking like for the rest of today in Miami, Florida? 
I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say \"{temperature} and cloudy\" instead of just the temperature.", "import requests\n\nresponse = requests.get(\"https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/25.7738889, -80.1938889\")\n\ndata = response.json()\n\nprint(data['hourly'])\n\ndata['hourly'].keys()\n\ndata['hourly']['data']\n\nfor cloudcover in data['hourly']['data']:\n if cloudcover['cloudCover'] > 0.5:\n print(cloudcover['temperature'], \"and cloudy\")\n else:\n print(cloudcover['temperature'])\n ", "7) What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?\nTip: You'll need to use UNIX time, which is the number of seconds since January 1, 1970. Google can help you convert a normal date!\nTip: You'll want to use Forecast.io's \"time machine\" API at https://developer.forecast.io/docs/v2", "import requests\n\nresponse = requests.get(\"https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889,346550400\")\ndata = response.json()\nprint(data['currently']['temperature'])\n\nresponse = requests.get(\"https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889,662083200\")\ndata = response.json()\nprint(data['currently']['temperature'])\n\nresponse = requests.get(\"https://api.forecast.io/forecast/94bc3fa3628bfad686b10e7054c67f71/40.7141667, -74.0063889,977702400\")\ndata = response.json()\nprint(data['currently']['temperature'])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
dev/_downloads/cf9b035ec9fdf9fb55b24e8c3a75ad55/psf_ctf_vertices.ipynb
bsd-3-clause
[ "%matplotlib inline", "Plot point-spread functions (PSFs) and cross-talk functions (CTFs)\nVisualise PSF and CTF at one vertex for sLORETA.", "# Authors: Olaf Hauk <olaf.hauk@mrc-cbu.cam.ac.uk>\n# Alexandre Gramfort <alexandre.gramfort@inria.fr>\n#\n# License: BSD-3-Clause\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.minimum_norm import (make_inverse_resolution_matrix, get_cross_talk,\n get_point_spread)\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nsubjects_dir = data_path / 'subjects'\nmeg_path = data_path / 'MEG' / 'sample'\nfname_fwd = meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif'\nfname_cov = meg_path / 'sample_audvis-cov.fif'\nfname_evo = meg_path / 'sample_audvis-ave.fif'\n\n# read forward solution\nforward = mne.read_forward_solution(fname_fwd)\n# forward operator with fixed source orientations\nmne.convert_forward_solution(forward, surf_ori=True,\n force_fixed=True, copy=False)\n\n# noise covariance matrix\nnoise_cov = mne.read_cov(fname_cov)\n\n# evoked data for info\nevoked = mne.read_evokeds(fname_evo, 0)\n\n# make inverse operator from forward solution\n# free source orientation\ninverse_operator = mne.minimum_norm.make_inverse_operator(\n info=evoked.info, forward=forward, noise_cov=noise_cov, loose=0.,\n depth=None)\n\n# regularisation parameter\nsnr = 3.0\nlambda2 = 1.0 / snr ** 2\nmethod = 'MNE' # can be 'MNE' or 'sLORETA'\n\n# compute resolution matrix for sLORETA\nrm_lor = make_inverse_resolution_matrix(forward, inverse_operator,\n method='sLORETA', lambda2=lambda2)\n\n# get PSF and CTF for sLORETA at one vertex\nsources = [1000]\n\nstc_psf = get_point_spread(rm_lor, forward['src'], sources, norm=True)\n\nstc_ctf = get_cross_talk(rm_lor, forward['src'], sources, norm=True)\ndel rm_lor", "Visualize\nPSF:", "# Which vertex corresponds to selected source\nvertno_lh = forward['src'][0]['vertno']\nverttrue = [vertno_lh[sources[0]]] # just one vertex\n\n# find vertices with maxima in PSF and CTF\nvert_max_psf = 
vertno_lh[stc_psf.data.argmax()]\nvert_max_ctf = vertno_lh[stc_ctf.data.argmax()]\n\nbrain_psf = stc_psf.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir)\nbrain_psf.show_view('ventral')\nbrain_psf.add_text(0.1, 0.9, 'sLORETA PSF', 'title', font_size=16)\n\n# True source location for PSF\nbrain_psf.add_foci(verttrue, coords_as_verts=True, scale_factor=1., hemi='lh',\n color='green')\n\n# Maximum of PSF\nbrain_psf.add_foci(vert_max_psf, coords_as_verts=True, scale_factor=1.,\n hemi='lh', color='black')", "CTF:", "brain_ctf = stc_ctf.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir)\nbrain_ctf.add_text(0.1, 0.9, 'sLORETA CTF', 'title', font_size=16)\nbrain_ctf.show_view('ventral')\nbrain_ctf.add_foci(verttrue, coords_as_verts=True, scale_factor=1., hemi='lh',\n color='green')\n\n# Maximum of CTF\nbrain_ctf.add_foci(vert_max_ctf, coords_as_verts=True, scale_factor=1.,\n hemi='lh', color='black')", "The green spheres indicate the true source location, and the black\nspheres the maximum of the distribution." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google-research/google-research
action_gap_rl/notebooks/mode_regression_nn.ipynb
apache-2.0
[ "Copyright 2020 Google LLC.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "#@title Default title text\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\nimport tensorflow.compat.v2 as tf\nimport matplotlib.pyplot as plt\n\ntf.enable_v2_behavior()\n\n# Mode as a function of observation\ndef f(s):\n return np.sin(s*2*np.pi/100.)/2.\n\nN = 100\ns = np.random.uniform(-100, 100, size=N) # observations between -100 and 100\na = np.random.uniform(-1, 1, size=N) # Actions between -1 and 1\n\nP = 0.2\ny = -100*np.abs(a - f(s))**P\ny /= np.max(np.abs(y))\nprint(np.max(y))\nprint(np.min(y))\n\nplt.scatter(s, a, c=y)\nplt.plot(np.sort(s), f(np.sort(s)))\nplt.plot()", "Explanation\nObservations $s$ are scalar values between -100 and 100. Actions $a$ are scalar values between -1 and 1.\nThe plot above shows $(s, a)$ pairs, with their corresponding targets $y$ as color (dark is low, light is high).
The high target regions follow the curve $f$, which gives the mode (argmax action) as a function of $s$.\nThe goal is to recover $f$ from the given data points.", "data = (s[:, np.newaxis], a[:, np.newaxis], y[:, np.newaxis])\ns_features = tf.constant(np.linspace(-100, 100, 50)[np.newaxis, :], dtype=tf.float32)\n\nhidden_widths = [1000, 500]\nmodel = tf.keras.Sequential(\n [tf.keras.layers.Lambda(lambda x: tf.exp(-(x - s_features)**2/2000))]\n + [tf.keras.layers.Dense(w, activation='relu') for w in hidden_widths]\n + [tf.keras.layers.Dense(1, activation=None)]\n)", "What loss functions best recover the curve $f$ from our dataset?", "# loss A\n# ||h(s) - a|^p - R|^q\n# This is danabo's mode regression loss\n\np = 0.1\nq = 1/P\n# p = q = 2.0\ndef loss(model, s, a, y):\n reg = tf.linalg.global_norm(model.trainable_variables)\n return tf.reduce_mean(tf.abs(-tf.abs(model(s)-a)**p - y)**q) + 0.003*reg\n\n# loss B\n# |h(s) - a|^p * exp(R/tau)\n# This is one of Dale's surrogate losses, specifically the dot-product loss.\n\np = 1.0\ntau = 1/10.\ndef loss(model, s, a, y):\n reg = tf.linalg.global_norm(model.trainable_variables)\n target = tf.cast(tf.exp(y/tau), tf.float32)\n return tf.reduce_mean(tf.abs(model(s)-a)**p * target) + 0.0005*reg\n\nnp.var(s)\n\n# Initialize model\n\ndevice_string = '/device:GPU:0'\n# device_string = '/device:TPU:0'\n# device_string = '' # CPU\n\nwith tf.device(device_string):\n model(data[0])\n print(loss(model, *data).numpy()) # Initialize model\n\noptimizer = tf.keras.optimizers.Adam(learning_rate=0.0001)\n\ndef sample_batch(batch_size, *args):\n assert args\n idx = np.random.choice(args[0].shape[0], batch_size)\n return tuple([arg[idx] for arg in args])\n\nfor i in range(10000):\n # batch = sample_batch(100, *data)\n batch = data\n optimizer.minimize(lambda: loss(model, *batch), model.trainable_variables)\n if i % 100 == 0:\n print(i, '\\t', loss(model, *data).numpy())", "Test recovery of $f$.", "X = np.linspace(-100, 100, 200)[:, np.newaxis]\nY = 
model(X).numpy()\nplt.plot(X, Y)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/introduction_to_tensorflow/labs/feat.cols_tf.data.ipynb
apache-2.0
[ "Introduction to Feature Columns\nLearning Objectives\n\nLoad a CSV file using Pandas\nCreate an input pipeline using tf.data\nCreate multiple types of feature columns\n\nIntroduction\nIn this notebook, you classify structured data (e.g. tabular data in a CSV file) using feature columns. Feature columns serve as a bridge to map from columns in a CSV file to features used to train a model. In a subsequent lab, we will use Keras to define the model.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook. \nThe Dataset\nWe will use a small dataset provided by the Cleveland Clinic Foundation for Heart Disease. There are several hundred rows in the CSV. Each row describes a patient, and each column describes an attribute. We will use this information to predict whether a patient has heart disease, which in this dataset is a binary classification task.\nFollowing is a description of this dataset. 
Notice there are both numeric and categorical columns.\n\nColumn| Description| Feature Type | Data Type\n------------|--------------------|----------------------|-----------------\nAge | Age in years | Numerical | integer\nSex | (1 = male; 0 = female) | Categorical | integer\nCP | Chest pain type (0, 1, 2, 3, 4) | Categorical | integer\nTrestbps | Resting blood pressure (in mm Hg on admission to the hospital) | Numerical | integer\nChol | Serum cholesterol in mg/dl | Numerical | integer\nFBS | (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false) | Categorical | integer\nRestECG | Resting electrocardiographic results (0, 1, 2) | Categorical | integer\nThalach | Maximum heart rate achieved | Numerical | integer\nExang | Exercise induced angina (1 = yes; 0 = no) | Categorical | integer\nOldpeak | ST depression induced by exercise relative to rest | Numerical | float\nSlope | The slope of the peak exercise ST segment | Numerical | integer\nCA | Number of major vessels (0-3) colored by fluoroscopy | Numerical | integer\nThal | 3 = normal; 6 = fixed defect; 7 = reversible defect | Categorical | string\nTarget | Diagnosis of heart disease (1 = true; 0 = false) | Classification | integer\n\nImport TensorFlow and other libraries", "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\nimport tensorflow as tf\n\n\nfrom tensorflow import feature_column\nfrom tensorflow.keras import layers\nfrom sklearn.model_selection import train_test_split\n\nprint(\"TensorFlow version: \",tf.version.VERSION)", "Lab Task 1: Use Pandas to create a dataframe\nPandas is a Python library with many helpful utilities for loading and working with structured data.
We will use Pandas to download the dataset from a URL, and load it into a dataframe.", "URL = 'https://storage.googleapis.com/download.tensorflow.org/data/heart.csv'\ndataframe = pd.read_csv(URL)\ndataframe.head()\n\ndataframe.info()", "Split the dataframe into train, validation, and test\nThe dataset we downloaded was a single CSV file. As a best practice, complete the TODO below by splitting this into train, validation, and test sets.", "# TODO 1a\n# TODO: Your code goes here\nprint(len(train), 'train examples')\nprint(len(val), 'validation examples')\nprint(len(test), 'test examples')", "Lab Task 2: Create an input pipeline using tf.data\nNext, we will wrap the dataframes with tf.data. This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train a model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly. That is not covered in this lab.\nComplete the TODOs in the cells below using the df_to_dataset function.", "# A utility method to create a tf.data dataset from a Pandas Dataframe\ndef df_to_dataset(dataframe, shuffle=True, batch_size=32):\n dataframe = dataframe.copy()\n labels = dataframe.pop('target')\n ds = # TODO 2a: Your code goes here\n if shuffle:\n ds = ds.shuffle(buffer_size=len(dataframe))\n ds = ds.batch(batch_size)\n return ds\n\nbatch_size = 5 # A small batch size is used for demonstration purposes\n\n# TODO 2b\ntrain_ds = # Your code goes here\nval_ds = # Your code goes here\ntest_ds = # Your code goes here\n", "Understand the input pipeline\nNow that we have created the input pipeline, let's call it to see the format of the data it returns.
We have used a small batch size to keep the output readable.", "for feature_batch, label_batch in train_ds.take(1):\n print('Every feature:', list(feature_batch.keys()))\n print('A batch of ages:', feature_batch['age'])\n print('A batch of targets:', label_batch)", "Lab Task 3: Demonstrate several types of feature column\nTensorFlow provides many types of feature columns. In this section, we will create several types of feature columns, and demonstrate how they transform a column from the dataframe.", "# We will use this batch to demonstrate several types of feature columns\nexample_batch = next(iter(train_ds))[0]\n\n# A utility method to create a feature column\n# and to transform a batch of data\ndef demo(feature_column):\n feature_layer = layers.DenseFeatures(feature_column)\n print(feature_layer(example_batch).numpy())", "Numeric columns\nThe output of a feature column becomes the input to the model. A numeric column is the simplest type of column. It is used to represent real valued features. When using this column, your model will receive the column value from the dataframe unchanged.", "age = feature_column.numeric_column(\"age\")\ntf.feature_column.numeric_column\nprint(age)", "Let's have a look at the output:\nkey='age'\nA unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns.\nshape=(1,)\nIn the heart disease dataset, most columns from the dataframe are numeric. Recall that tensors have a rank. \"Age\" is a \"vector\" or \"rank-1\" tensor, which is like a list of values. A vector has 1-axis, thus the shape will always look like this: shape=(3,), where 3 is a scalar (or single number) and with 1-axis. \ndefault_value=None\nA single value compatible with dtype or an iterable of values compatible with dtype which the column takes on during tf.Example parsing if data is missing. 
A default value of None will cause tf.io.parse_example to fail if an example does not contain this column. If a single value is provided, the same value will be applied as the default value for every item. If an iterable of values is provided, the shape of the default_value should be equal to the given shape.\ndtype=tf.float32\ndefines the type of values. Default value is tf.float32. Must be a non-quantized, real integer or floating point type.\nnormalizer_fn=None\nIf not None, a function that can be used to normalize the value of the tensor after default_value is applied for parsing. Normalizer function takes the input Tensor as its argument, and returns the output Tensor. (e.g. lambda x: (x - 3.0) / 4.2). Please note that even though the most common use case of this function is normalization, it can be used for any kind of Tensorflow transformations.", "demo(age)", "Bucketized columns\nOften, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person's age. Instead of representing age as a numeric column, we could split the age into several buckets using a bucketized column. Notice the one-hot values below describe which age range each row matches.", "age_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])\ndemo(____) # TODO 3a: Replace the blanks with a correct value\n", "Categorical columns\nIn this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with age buckets). 
The vocabulary can be passed as a list using categorical_column_with_vocabulary_list, or loaded from a file using categorical_column_with_vocabulary_file.", "thal = tf.feature_column.categorical_column_with_vocabulary_list(\n 'thal', ['fixed', 'normal', 'reversible'])\n\nthal_one_hot = tf.feature_column.indicator_column(thal)\ndemo(thal_one_hot)", "In a more complex dataset, many columns would be categorical (e.g. strings). Feature columns are most valuable when working with categorical data. Although there is only one categorical column in this dataset, we will use it to demonstrate several important types of feature columns that you could use when working with other datasets.\nEmbedding columns\nSuppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grow large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an embedding column represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. The size of the embedding (8, in the example below) is a parameter that must be tuned.\nKey point: using an embedding column is best when a categorical column has many possible values. We are using one here for demonstration purposes, so you have a complete example you can modify for a different dataset in the future.", "# Notice the input to the embedding column is the categorical column\n# we previously created\nthal_embedding = tf.feature_column.embedding_column(thal, dimension=8)\ndemo(thal_embedding)", "Hashed feature columns\nAnother way to represent a categorical column with a large number of values is to use a categorical_column_with_hash_bucket. 
This feature column calculates a hash value of the input, then selects one of the hash_bucket_size buckets to encode a string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash_buckets significantly smaller than the number of actual categories to save space.\nKey point: An important downside of this technique is that there may be collisions in which different strings are mapped to the same bucket. In practice, this can work well for some datasets regardless.", "thal_hashed = tf.feature_column.categorical_column_with_hash_bucket(\n 'thal', hash_bucket_size=1000)\ndemo(tf.feature_column.indicator_column(thal_hashed))", "Crossed feature columns\nCombining features into a single feature, better known as feature crosses, enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that crossed_column does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a hashed_column, so you can choose how large the table is.", "crossed_feature = tf.feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)\ndemo(tf.feature_column.indicator_column(crossed_feature))", "Choose which columns to use\nWe have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this tutorial is to show you the complete code (e.g. mechanics) needed to work with feature columns. 
We have selected a few columns to train our model below arbitrarily.\nKey point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.", "feature_columns = []\n\n# numeric cols\nfor header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:\n feature_columns.append(feature_column.numeric_column(header))\n\n# bucketized cols\nage_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])\nfeature_columns.append(age_buckets)\n\n# indicator cols\nthal = feature_column.categorical_column_with_vocabulary_list(\n 'thal', ['fixed', 'normal', 'reversible'])\nthal_one_hot = feature_column.indicator_column(thal)\nfeature_columns.append(thal_one_hot)\n\n# embedding cols\nthal_embedding = feature_column.embedding_column(thal, dimension=8)\nfeature_columns.append(thal_embedding)\n\n# crossed cols\ncrossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)\ncrossed_feature = feature_column.indicator_column(crossed_feature)\nfeature_columns.append(crossed_feature)", "How to Input Feature Columns to a Keras Model\nNow that we have defined our feature columns, we now use a DenseFeatures layer to input them to a Keras model. Don't worry if you have not used Keras before. There is a more detailed video and lab introducing the Keras Sequential and Functional models.", "feature_layer = tf.keras.layers.DenseFeatures(feature_columns)", "Earlier, we used a small batch size to demonstrate how feature columns worked. 
We create a new input pipeline with a larger batch size.", "batch_size = 32\ntrain_ds = df_to_dataset(train, batch_size=batch_size)\nval_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)\ntest_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)", "Create, compile, and train the model", "model = tf.keras.Sequential([\n feature_layer,\n layers.Dense(128, activation='relu'),\n layers.Dense(128, activation='relu'),\n layers.Dense(1)\n])\n\nmodel.compile(optimizer='adam',\n loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=['accuracy'])\n\nhistory = model.fit(train_ds,\n validation_data=val_ds,\n epochs=5)\n\nloss, accuracy = model.evaluate(test_ds)\nprint(\"Accuracy\", accuracy)", "Visualize the model loss curve\nNext, we will use Matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the accuracy over the training epochs for both the train (blue) and test (orange) sets.", "def plot_curves(history, metrics):\n nrows = 1\n ncols = 2\n fig = plt.figure(figsize=(10, 5))\n\n for idx, key in enumerate(metrics): \n ax = fig.add_subplot(nrows, ncols, idx+1)\n plt.plot(history.history[key])\n plt.plot(history.history['val_{}'.format(key)])\n plt.title('model {}'.format(key))\n plt.ylabel(key)\n plt.xlabel('epoch')\n plt.legend(['train', 'validation'], loc='upper left'); \n \n \n\nplot_curves(history, ['loss', 'accuracy'])", "You can see that accuracy is at 77% for both the training and validation data, while loss bottoms out at about .477 after four epochs.\nKey point: You will typically see best results with deep learning with much larger and more complex datasets. When working with a small dataset like this one, we recommend using a decision tree or random forest as a strong baseline. 
The goal of this tutorial is not to train an accurate model, but to demonstrate the mechanics of working with structured data, so you have code to use as a starting point when working with your own datasets in the future.\nNext steps\nThe best way to learn more about classifying structured data is to try it yourself. We suggest finding another dataset to work with, and training a model to classify it using code similar to the above. To improve accuracy, think carefully about which features to include in your model, and how they should be represented.\nCopyright 2021 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
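What the bucketized and indicator columns above compute can be illustrated in plain Python, without TensorFlow. This is a hedged sketch: the boundary list and vocabulary mirror the tutorial's, but the helper names (`bucketize`, `one_hot`) are invented here, not part of the `feature_column` API:

```python
import bisect

def bucketize(value, boundaries):
    # Bucket index for a numeric value; sorted boundaries split the real
    # line into len(boundaries) + 1 buckets, as a bucketized column does.
    return bisect.bisect_right(boundaries, value)

def one_hot(index, depth):
    # Indicator (one-hot) vector for a categorical index.
    return [1.0 if i == index else 0.0 for i in range(depth)]

age_boundaries = [18, 25, 30, 35, 40, 45, 50, 55, 60, 65]
thal_vocab = ['fixed', 'normal', 'reversible']

age_bucket = bucketize(28, age_boundaries)  # 28 lands in [25, 30)
thal_vector = one_hot(thal_vocab.index('normal'), len(thal_vocab))

print(age_bucket)   # 2
print(thal_vector)  # [0.0, 1.0, 0.0]
```

A crossed column then combines the two categorical values into one feature by hashing the pair into a fixed number of buckets, conceptually `hash((age_bucket, 'normal')) % 1000`, which is why `hash_bucket_size` is specified above.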
chseifert/tutorials
nlp-ie/Text-Classification.ipynb
apache-2.0
[ "A simple Text Classifier\nAuthor: Christin Seifert, licensed under the Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/ \nIt is based on a tutorial of Nils Witt (https://github.com/n-witt/MachineLearningWithText_SS2017)\nThis is a tutorial for learning and evaluating a simple naive bayes classifier for a simple text classification problem. In this tutorial you will:\n\ninspect the data you will be using to train the classifier \ntrain a naive bayes classifier \nevaluate how well the classifier does \ninspect the \"spaminess\" of individual words\n\nIt is assumed that you have some general knowledge on \n* document-term matrices\n* what a Naive Bayes classifier does\nConverting texts to features\nWe will start with a small example of 3 SMS messages. The texts in the SMS messages are the following: \"call you tonight\", \"Call me a cab\", \"please call me... PLEASE!\" In order to do text classification we need to convert the text into a feature vector. We will follow a very simple approach here:\n1. Find out which different words (or tokens) are used in the text. These make up the vocabulary.\n2. The length of a vector for each document then is the size of the vocabulary, and each entry in the vector corresponds to one word. This means, the first entry in the vector corresponds to the first word in the vocabulary, the second to the second and .. you get the logic ;-)\n3. For each document we simply count how often each word occurs and write it at the index in the vector that corresponds to this word. \nAll those things can easily be done with the CountVectorizer from the sklearn library.", "simple_train = ['call you tonight', 'Call me a cab', 'please call me...
PLEASE!']\n\n# import and instantiate CountVectorizer (with the default parameters)\nfrom sklearn.feature_extraction.text import CountVectorizer\nvect = CountVectorizer()\n\n# learn the 'vocabulary' of the training data \nvect.fit(simple_train)\n\n# examine the fitted vocabulary\nvect.get_feature_names()", "Have you noticed that all words are lower case now? And that we ignored punctuation? Whether this is a good idea depends on the application. E.g. for detecting emotions in texts, smilies (punctuation) might be a helpful feature. But for now, let's keep it simple.\nNow we generate a document-term matrix. In this matrix each row corresponds to one document, each column to one feature. Entry (i,j) tells us how often word j occurs in document i.\nNote: The \"how often\" is only true if we use the count vectorizer. Instead of word count there are many other possible features.\nFrom the scikit-learn documentation:\n\nIn this scheme, features and samples are defined as follows:\n\nEach individual token occurrence frequency (normalized or not) is treated as a feature.\nThe vector of all the token frequencies for a given document is considered a sample.\n\nA corpus of documents can thus be represented by a matrix with one row per document and one column per token (e.g. word) occurring in the corpus.\nWe call vectorization the general process of turning a collection of text documents into numerical feature vectors. This specific strategy (tokenization, counting and normalization) is called the Bag of Words or \"Bag of n-grams\" representation.
Documents are described by word occurrences while completely ignoring the relative position information of the words in the document.", "# transform training data into a 'document-term matrix'\nsimple_train_dtm = vect.transform(simple_train)\nsimple_train_dtm\n\n# convert sparse matrix to a dense matrix\nsimple_train_dtm.toarray()", "We can use a pandas data frame to store the vector and the feature names together.", "# examine the vocabulary and document-term matrix together\nimport pandas as pd\npd.DataFrame(simple_train_dtm.toarray(), columns=vect.get_feature_names())", "Since in general this is an awful lot of zeros (think of how many of all English words are present in an SMS), the more efficient way to store the information is as a sparse matrix. For humans this is a bit harder to read.\nFrom the scikit-learn documentation:\n\nAs most documents will typically use a very small subset of the words used in the corpus, the resulting matrix will have many feature values that are zeros (typically more than 99% of them).\nFor instance, a collection of 10,000 short text documents (such as emails) will use a vocabulary with a size in the order of 100,000 unique words in total while each document will use 100 to 1000 unique words individually.\nIn order to be able to store such a matrix in memory but also to speed up operations, implementations will typically use a sparse representation such as the implementations available in the scipy.sparse package.", "# check the type of the document-term matrix\ntype(simple_train_dtm)\n\n# examine the sparse matrix contents\nprint(simple_train_dtm)", "Generate the feature vector for a previously unseen text\nIn order to make predictions for unseen data, the new observation must have the same features as the training observations, both in number and meaning.", "# example text for model testing\nsimple_test = [\"please don't call me, I don't like you\"]\n\n# transform testing data into a document-term matrix (using existing
vocabulary)\nsimple_test_dtm = vect.transform(simple_test)\nsimple_test_dtm.toarray()\n\n# examine the vocabulary and document-term matrix together\npd.DataFrame(simple_test_dtm.toarray(), columns=vect.get_feature_names())", "A simple spam filter\nNow we are going to implement a simple spam filter for SMS messages. We are given a data set with SMS that are already annotated with either spam or ham (=not spam). We first load the data set and have a look at the data.", "path = 'material/sms.tsv'\nsms = pd.read_table(path, header=None, names=['label', 'message'])\n\nsms.shape\n\n# examine the first 10 rows\nsms.head(10)", "We convert the label to a numerical value.", "# examine the class distribution\nsms.label.value_counts()\n\n# convert label to a numerical variable\nsms['label_num'] = sms.label.map({'ham':0, 'spam':1})\n\n# check that the conversion worked\nsms.head(10)", "Now we have our text in the column message and our label in the column label_num. Let's have a look at the sizes.", "# how to define X and y (from the SMS data) for use with COUNTVECTORIZER\nX = sms.message\ny = sms.label_num\nprint(X.shape)\nprint(y.shape)", "And at the text of the first 5 messages.", "sms.message.head()", "We now prepare the data for the classifier. First split it into a training and a test set. There is a convenient method train_test_split available that helps us with that. We use a fixed random state random_state=42to split randomly, but at the same time get the same results each time we run the code.", "# split X and y into training and testing sets\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\nprint(X_train.shape)\nprint(X_test.shape)\nprint(y_train.shape)\nprint(y_test.shape)", "Now we use the data preprocessing knowledge from above and generate the vocabulary. We will do this ONLY on the training data set, because we presume to have no knowledge whatsoever about the test data set. 
So we don't know the test data's vocabulary.", "# learn training data vocabulary, then use it to create a document-term matrix\nvect = CountVectorizer()\nvect.fit(X_train)\nX_train_dtm = vect.transform(X_train)\n\n# examine the document-term matrix\nX_train_dtm", "Next we transform the test data set using the same vocabulary (that is using the same vect object that internally knows the vocabulary).", "# transform testing data (using fitted vocabulary) into a document-term matrix\nX_test_dtm = vect.transform(X_test)\nX_test_dtm", "Building and evaluating a model\nNow we are at the stage where we have a matrix of features and the corresponding labels. We can now train a classifier for spam detection on sms. We will use multinomial Naive Bayes:\n\nThe multinomial Naive Bayes classifier is suitable for classification with discrete features (e.g., word counts for text classification). The multinomial distribution normally requires integer feature counts. However, in practice, fractional counts such as tf-idf may also work.", "from sklearn.naive_bayes import MultinomialNB\nnb = MultinomialNB()\n\nnb.fit(X_train_dtm, y_train)\ny_test_pred = nb.predict(X_test_dtm)\n\nfrom sklearn import metrics\nmetrics.accuracy_score(y_test, y_test_pred)\n\n# print the confusion matrix\nmetrics.confusion_matrix(y_test, y_test_pred)", "\"Spaminess\" of words\nBefore we start: the estimator has several fields that allow us to examine its internal state:", "vect.vocabulary_\n\nX_train_tokens = vect.get_feature_names()\nprint(X_train_tokens[:50])\n\nprint(X_train_tokens[-50:])\n\n# feature count per class\nnb.feature_count_\n\n# number of times each token appears across all HAM messages\nham_token_count = nb.feature_count_[0, :]\n\n# number of times each token appears across all SPAM messages\nspam_token_count = nb.feature_count_[1, :]\n\n# create a table of tokens with their separate ham and spam counts\ntokens = pd.DataFrame({'token':X_train_tokens, 'ham':ham_token_count, 
'spam':spam_token_count}).set_index('token')\ntokens.head()\n\ntokens.sample(5, random_state=6)", "Naive Bayes counts the number of observations in each class", "nb.class_count_", "Add 1 to ham and spam counts to avoid dividing by 0", "tokens['ham'] = tokens.ham + 1\ntokens['spam'] = tokens.spam + 1\ntokens.sample(5, random_state=6)\n\n# convert the ham and spam counts into frequencies\ntokens['ham'] = tokens.ham / nb.class_count_[0]\ntokens['spam'] = tokens.spam / nb.class_count_[1]\ntokens.sample(5, random_state=6)", "Calculate the ratio of spam-to-ham for each token", "tokens['spam_ratio'] = tokens.spam / tokens.ham\ntokens.sample(5, random_state=6)", "Examine the DataFrame sorted by spam_ratio", "tokens.sort_values('spam_ratio', ascending=False)\n\ntokens.loc['00', 'spam_ratio']", "Tuning the vectorizer\nDo you see any potential to enhance the vectorizer? Think about the following questions:\nAre all words equally important?\nDo you think there are \"noise words\" which negatively influence the results?\nHow can we account for the order of words?\nStopwords\nStopwords are the most common words in a language. Examples are 'is', 'which' and 'the'. Usually it is beneficial to exclude these words in text processing tasks.\nThe CountVectorizer has a stop_words parameter:\n- stop_words: string {'english'}, list, or None (default)\n - If 'english', a built-in stop word list for English is used.\n - If a list, that list is assumed to contain stop words, all of which will be removed from the resulting tokens.\n - If None, no stop words will be used.", "vect = CountVectorizer(stop_words='english')", "n-grams\nn-grams concatenate n words to form a token. The following accounts for 1- and 2-grams", "vect = CountVectorizer(ngram_range=(1, 2))", "Document frequencies\nOften it's beneficial to exclude words that appear in the majority or just a couple of documents. That is, very frequent or infrequent words.
This can be achieved by using the max_df and min_df parameters of the vectorizer.", "# ignore terms that appear in more than 50% of the documents\nvect = CountVectorizer(max_df=0.5)\n\n# only keep terms that appear in at least 2 documents\nvect = CountVectorizer(min_df=2)", "A note on Stemming\n\n'went' and 'go' \n'kids' and 'kid' \n'negative' and 'negatively'\n\nWhat is the pattern?\nThe process of reducing a word to its word stem, base or root form is called stemming. Scikit-Learn has no powerful stemmer, but other libraries like the NLTK have. \nTf-idf\n\nTf-idf can be understood as a modification of the raw term frequencies (tf)\nThe concept behind tf-idf is to downweight terms proportionally to the number of documents in which they occur.\nThe idea is that terms that occur in many different documents are likely unimportant or don't contain any useful information for Natural Language Processing tasks such as document classification.\n\nExplanation by example\nLet's consider a dataset containing 3 documents:", "import numpy as np\ndocs = np.array([\n 'The sun is shining',\n 'The weather is sweet',\n 'The sun is shining and the weather is sweet'])", "First, we will compute the term frequency (alternatively: Bag-of-Words) $tf(t, d)$: the number of times a term $t$ occurs in a document $d$. Using Scikit-Learn we can quickly get those numbers:", "from sklearn.feature_extraction.text import CountVectorizer\ncv = CountVectorizer()\ntf = cv.fit_transform(docs).toarray()\ntf\n\ncv.vocabulary_", "Secondly, we introduce inverse document frequency ($idf$) by defining the term document frequency $\\text{df}(d,t)$, which is simply the number of documents $d$ that contain the term $t$.
We can then define the idf as follows:\n$$\\text{idf}(t) = \\log{\\frac{n_d}{1+\\text{df}(d,t)}},$$ \nwhere\n$n_d$: The total number of documents\n$\\text{df}(d,t)$: The number of documents that contain term $t$.\nNote that the constant 1 is added to the denominator to avoid a zero-division error if a term is not contained in any document in the test dataset.\nNow, let us calculate the idfs of the words \"and\", \"is\", and \"shining\":", "n_docs = len(docs)\n\ndf_and = 1\nidf_and = np.log(n_docs / (1 + df_and))\nprint('idf \"and\": %s' % idf_and)\n\ndf_is = 3\nidf_is = np.log(n_docs / (1 + df_is))\nprint('idf \"is\": %s' % idf_is)\n\ndf_shining = 2\nidf_shining = np.log(n_docs / (1 + df_shining))\nprint('idf \"shining\": %s' % idf_shining)", "Using those idfs, we can eventually calculate the tf-idfs for the 3rd document:\n$$\\text{tf-idf}(t, d) = \\text{tf}(t, d) \\times \\text{idf}(t),$$", "print('Tf-idfs in document 3:\\n')\nprint('tf-idf \"and\": %s' % (1 * idf_and))\nprint('tf-idf \"is\": %s' % (2 * idf_is))\nprint('tf-idf \"shining\": %s' % (1 * idf_shining))", "Tf-idf in Scikit-Learn", "from sklearn.feature_extraction.text import TfidfTransformer\ntfidf = TfidfTransformer(smooth_idf=False, norm=None)\ntfidf.fit_transform(tf).toarray()[-1][:3]", "Wait! Those numbers aren't the same!\nTf-idf in Scikit-Learn is calculated a little bit differently: here, the constant +1 is added to the idf itself, rather than to the df in the denominator:\n$$\\text{idf}(t) = \\log{\\frac{n_d}{\\text{df}(d,t)}} + 1$$
The most common way to normalize the raw term frequency is l2-normalization, i.e., dividing the raw term frequency vector $v$ by its length $||v||_2$ (L2- or Euclidean norm).\n$$v_{norm} = \\frac{v}{||v||_2} = \\frac{v}{\\sqrt{v_{1}^2 + v_{2}^2 + \\dots + v_{n}^2}}$$\nWhy is that useful?\nFor example, we would normalize our 3rd document 'The sun is shining and the weather is sweet' as follows:", "tfidf = TfidfTransformer(use_idf=True, smooth_idf=False, norm='l2')\ntfidf.fit_transform(tf).toarray()[-1][:3]", "Smooth idf\nWe are not quite there. Scikit-Learn also applies smoothing, which changes the original formula as follows:\n$$\\text{idf}(t) = \\log{\\frac{1 + n_d}{1+\\text{df}(d,t)}} + 1$$", "tfidf = TfidfTransformer(use_idf=True, smooth_idf=True, norm='l2')\ntfidf.fit_transform(tf).toarray()[-1][:3]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
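The smoothed, l2-normalized tf-idf computation walked through above can be reproduced in plain Python, without scikit-learn. This is an illustrative sketch (the helper names `doc_freq` and `smooth_idf` are my own, not library functions); it applies the smoothed formula $\log((1+n_d)/(1+\text{df})) + 1$ followed by l2 normalization to the 3rd document:

```python
import math

docs = ['the sun is shining',
        'the weather is sweet',
        'the sun is shining and the weather is sweet']
n_docs = len(docs)

def doc_freq(term):
    # Document frequency: number of documents that contain the term.
    return sum(term in doc.split() for doc in docs)

def smooth_idf(term):
    # Scikit-learn's smoothed idf: log((1 + n_d) / (1 + df)) + 1
    return math.log((1 + n_docs) / (1 + doc_freq(term))) + 1

# raw term frequencies of the 3rd document
terms = docs[2].split()
tf = {t: terms.count(t) for t in set(terms)}

raw = {t: tf[t] * smooth_idf(t) for t in tf}

# l2-normalize, as TfidfTransformer(norm='l2') does
norm = math.sqrt(sum(v * v for v in raw.values()))
tfidf = {t: v / norm for t, v in raw.items()}

print(round(tfidf['and'], 4))  # 0.4047
```

The values for "and", "is" and "shining" agree with the `TfidfTransformer(use_idf=True, smooth_idf=True, norm='l2')` output shown in the notebook.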
dwhswenson/openpathsampling
examples/misc/tutorial_handle_nan.ipynb
mit
[ "How to deal with errors in engines\nImports", "import openpathsampling as paths\n\nimport openpathsampling.engines.openmm as dyn_omm\nimport openpathsampling.engines as dyn\nfrom simtk.openmm import app\nimport simtk.openmm as mm\nimport simtk.unit as unit\n\nimport mdtraj as md\n\nimport numpy as np", "Setting up the engine\nNow we set things up for the OpenMM simulation. We will need an openmm.System object and an openmm.Integrator object.\nTo learn more about OpenMM, read the OpenMM documentation. The code we use here is based on output from the convenient web-based OpenMM builder.", "# this cell is all OpenMM specific\nforcefield = app.ForceField('amber96.xml', 'tip3p.xml')\npdb = app.PDBFile(\"../resources/AD_initial_frame.pdb\")\nsystem = forcefield.createSystem(\n pdb.topology, \n nonbondedMethod=app.PME, \n nonbondedCutoff=1.0*unit.nanometers,\n constraints=app.HBonds, \n rigidWater=True,\n ewaldErrorTolerance=0.0005\n)\nhi_T_integrator = mm.LangevinIntegrator(\n 500*unit.kelvin, \n 1.0/unit.picoseconds, \n 2.0*unit.femtoseconds)\nhi_T_integrator.setConstraintTolerance(0.00001)", "The storage file will need a template snapshot. In addition, the OPS OpenMM-based Engine has a few properties and options that are set by these dictionaries.", "template = dyn_omm.snapshot_from_pdb(\"../resources/AD_initial_frame.pdb\")\nopenmm_properties = {'OpenCLPrecision': 'mixed'}\nengine_options = {\n 'n_frames_max': 2000,\n 'nsteps_per_frame': 10\n}\n\nengine = dyn_omm.Engine(\n template.topology, \n system, \n hi_T_integrator, \n openmm_properties=openmm_properties,\n options=engine_options\n).named('500K')\n\nengine.initialize('OpenCL')", "Defining states\nWe define stupid non-existent states which we can never hit.
Good grounds to generate nan or too long trajectories.", "volA = paths.EmptyVolume()\nvolB = paths.EmptyVolume()\n\ninit_traj_ensemble = paths.AllOutXEnsemble(volA) | paths.AllOutXEnsemble(volB)", "Create a bad snapshot", "nan_causing_template = template.copy()\nkinetics = template.kinetics.copy()\n# this is crude but does the trick\nkinetics.velocities = kinetics.velocities.copy()\nkinetics.velocities[0] = \\\n (np.zeros(template.velocities.shape[1]) + 1000000.) * \\\n unit.nanometers / unit.picoseconds\nnan_causing_template.kinetics = kinetics\n\n# generate trajectory that includes frame in both states\ntry:\n trajectory = engine.generate(nan_causing_template, [init_traj_ensemble.can_append])\nexcept dyn.EngineNaNError as e:\n print 'we got NaNs, oh no.'\n print 'last valid trajectory was of length %d' % len(e.last_trajectory)\nexcept dyn.EngineMaxLengthError as e:\n print 'we ran into max length.'\n print 'last valid trajectory was of length %d' % len(e.last_trajectory)\n", "Now we will make a long trajectory", "engine.options['n_frames_max'] = 10\nengine.options['on_max_length'] = 'fail'\n\n# generate trajectory that includes frame in both states\ntry:\n trajectory = engine.generate(template, [init_traj_ensemble.can_append])\nexcept dyn.EngineNaNError as e:\n print 'we got NaNs, oh no.'\n print 'last valid trajectory was of length %d' % len(e.last_trajectory)\nexcept dyn.EngineMaxLengthError as e:\n print 'we ran into max length.'\n print 'last valid trajectory was of length %d' % len(e.last_trajectory)\n\n", "What, if that happens inside of a simulation?", "mover = paths.ForwardShootMover(\n ensemble=init_traj_ensemble, \n selector=paths.UniformSelector(),\n engine=engine)", "Should run indefinitely and hit the max frames of 10.", "init_sampleset = paths.SampleSet([paths.Sample(\n trajectory=paths.Trajectory([template] * 5),\n replica=0,\n ensemble = init_traj_ensemble\n )])", "Run the PathMover and check the change", "change = 
mover.move(init_sampleset)\n\nassert(isinstance(change, paths.movechange.RejectedMaxLengthSampleMoveChange))\n\nassert(not change.accepted)\n\nchange.samples[0].details.__dict__.get('stopping_reason')", "Let's try again and see what happens when NaN is encountered", "init_sampleset = paths.SampleSet([paths.Sample(\n trajectory=paths.Trajectory([nan_causing_template] * 5),\n replica=0,\n ensemble = init_traj_ensemble\n )])\n\nchange = mover.move(init_sampleset)\n\nassert(isinstance(change, paths.movechange.RejectedNaNSampleMoveChange))\n\nassert(not change.accepted)\n\nchange.samples[0].details.__dict__.get('stopping_reason')", "Change the behaviour of the engine to ignore NaNs. This is really not advised, because not all platforms support this; the CPU platform will always throw a NaN error.", "engine.options['on_nan'] = 'ignore'\nengine.options\n\nchange = mover.move(init_sampleset)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
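The engine behaviour exercised above (stop with an error carrying the last valid trajectory when a NaN frame appears, and either fail or truncate at the maximum trajectory length) can be sketched as a toy propagation loop. This mimics the error-handling pattern only; it is not the OpenPathSampling API, and all names here (`generate`, `diverging_step`, the exception class) are invented for illustration:

```python
import math

class EngineNaNError(Exception):
    """Raised when the dynamics produce an invalid frame; keeps the last valid trajectory."""
    def __init__(self, last_trajectory):
        super().__init__('NaN encountered')
        self.last_trajectory = last_trajectory

def generate(step, initial, n_frames_max=10, on_max_length='fail'):
    # Toy propagation loop: advance frame by frame, stop on NaN,
    # and apply the configured policy when the frame limit is hit.
    trajectory = [initial]
    while len(trajectory) < n_frames_max:
        frame = step(trajectory[-1])
        if math.isnan(frame):
            raise EngineNaNError(trajectory)
        trajectory.append(frame)
    if on_max_length == 'fail':
        raise RuntimeError('max length reached')
    return trajectory  # on_max_length == 'ignore': return the truncated trajectory

def diverging_step(x):
    # toy dynamics that eventually produce an invalid (NaN) frame
    y = 3.0 * x
    return float('nan') if y > 100 else y

try:
    generate(diverging_step, 1.0)
except EngineNaNError as err:
    print('last valid trajectory was of length %d' % len(err.last_trajectory))
```

A caller can thus distinguish "bad numerics" from "ran too long" and recover the last usable frames, which is the same information the rejected move changes above expose via `stopping_reason`.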
maxalbert/paper-supplement-nanoparticle-sensing
notebooks/fig_7_frequency_change_vs_lateral_particle_position.ipynb
mit
[ "Fig. 7: Frequency Change $\\Delta f$ vs. Lateral Particle Position\nThis notebook reproduces Fig. 7 in the paper, which shows the frequency change $\\Delta f$ as a function of lateral particle position $x$ for the first five eigenmodes (N = 1 - 5), with the MNP either above the major axis of the ellipse (y = 0 nm) or shifted by 20 nm in y-direction. The frequency change for each mode is shown for three values of the nanoparticle-disc separation: d = 5 nm, d = 20 nm, d = 50 nm.", "import matplotlib.pyplot as plt\nimport pandas as pd\nfrom style_helpers import style_cycle_fig7\n\n%matplotlib inline\nplt.style.use('style_sheets/fig7.mplstyle')", "Define which values of $d$, $y$ and $N$ are going to appear in the figure.", "dvals = [5, 20, 50]\nyvals = [0, 20]\nNvals = [1, 2, 3, 4, 5]", "Read the data frame with the eigenmode data and filter out the parameter values relevant for this plot.", "df = pd.read_csv('../data/eigenmode_info_data_frame.csv')\ndf = df.query('(has_particle == True) & (d_particle == 20) & (Ms_particle == 1e6) & '\n '(d in [5, 20, 50]) & (Hz == 8e4)')\ndf = df.sort_values('x')", "Define a series of helper functions.", "def draw_zero_line(ax):\n \"\"\"\n Draw a horizontal line representing delta_f = 0.\n \"\"\"\n ax.plot([-90, 90], [0, 0], color='#888888', linestyle='-', linewidth=1)\n\ndef add_y_value_annotation(ax, y):\n \"\"\"\n Add annotation to indicate y-value used in this row\n of subplots.
Example: \"y = 20 nm\"\n \"\"\"\n ax.annotate(r'$y={y:d}$ nm'.format(y=y), xy=(1.00, 0.5), xytext=(15, 0),\n ha='left', va='center', rotation=90,\n xycoords='axes fraction', textcoords='offset points', fontsize=18)\n\ndef adjust_subplot_appearance(ax):\n \"\"\"\n Set correct axis limits and ticks positions for subplot.\n \"\"\"\n ax.set_xticks([-60, 0, 60])\n ax.set_yticks([-300, 0, 300, 600])\n ax.set_ylim((-75, 625))\n ax.xaxis.set_ticks_position('bottom' if i == 1 else 'none')\n ax.yaxis.set_ticks_position('left' if j == 0 else 'none')\n\ndef get_data_values_for_subplot(df, y, d, N):\n \"\"\"\n Filter the data frame `df` to obtain the values of `x` and `delta_f`\n for the subplot corresponding to the given values of `y`, `d`, `N`.\n \"\"\"\n query_string = '(y == {y}) and (d == {d}) and (N == {N})'.format(y=y, d=d, N=N)\n df_subplot = df.query(query_string)\n xvals = df_subplot['x']\n fvals = df_subplot['freq_diff'] * 1e3 # in MHz, not GHz\n return xvals, fvals\n\ndef draw_subplot(df, axes, i, j, y, N):\n \"\"\"\n Draw the three curves (for d = 5 nm, 20 nm, 50 nm) into each subplot.\n \"\"\"\n ax = axes[i, j]\n\n draw_zero_line(ax)\n adjust_subplot_appearance(ax)\n\n for d, style in zip(reversed(dvals), style_cycle_fig7):\n xvals, fvals = get_data_values_for_subplot(df, y, d, N)\n line, = ax.plot(xvals, fvals, label='d={d} nm'.format(d=d), marker=' ', **style)\n if style['linestyle'] == 'dashed':\n line.set_dashes([5, 4]) # 5 points on, 4 off\n\ndef add_column_titles(axes):\n \"\"\"\n Add titles \"N=1\", \"N=2\", etc. 
to the columns.\n \"\"\"\n for N in Nvals:\n ax = axes[0, N-1]\n ax.set_title(r'N$={N}$'.format(N=N), y=1.15)\n\ndef add_axis_labels(axes):\n \"\"\"\n Set overall axis labels.\n \n Horizontal axis: \"x (nm)\"\n Vertical axis: \"\\Delta f (MHz)\"\n \"\"\"\n axes[1, 2].set_xlabel('x (nm)')\n axes[0, 0].annotate(r'$\\Delta f$ (MHz)',\n xy=(-0.5, 0.4), xytext=(0, 0), ha='right', va='top', rotation=90,\n xycoords='axes fraction', textcoords='offset points', fontsize=20)\n\ndef add_legend(axes):\n \"\"\"\n Add legend in top right corner of the grid.\n \"\"\"\n # We reverse the labels so that they appear in the natural order\n # with \"d = 5 nm\" at the top and \"d = 20 nm\" at the bottom.\n handles, labels = axes[0, 4].get_legend_handles_labels()\n axes[0, 4].legend(handles[::-1], labels[::-1], loc='upper right', bbox_to_anchor=(1.0, 1.18), handlelength=2)", "Finally we can produce the plot for Fig. 7.", "fig, axes = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True, figsize=(10, 4.5))\n\nfor i, y in enumerate(yvals):\n for j, N in enumerate(Nvals):\n draw_subplot(df, axes, i, j, y, N)\n\nadd_y_value_annotation(axes[0, 4], y=0)\nadd_y_value_annotation(axes[1, 4], y=20)\nadd_column_titles(axes)\nadd_axis_labels(axes)\nadd_legend(axes)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
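The `df.query(...)` call used to filter the eigenmode data accepts boolean operators and `in` membership tests inside a single expression string, exactly as written in the notebook. A small self-contained sketch with toy stand-in data (the real data frame has many more columns):

```python
import pandas as pd

# toy stand-in for the eigenmode data frame
df = pd.DataFrame({'d': [5, 10, 20, 50, 50],
                   'y': [0, 0, 20, 20, 0],
                   'N': [1, 2, 1, 2, 3]})

# keep only the separations shown in the figure, for the shifted-particle rows
subset = df.query('(d in [5, 20, 50]) and (y == 20)').sort_values('d')
print(subset['d'].tolist())  # [20, 50]
```

The same filter could be written with boolean indexing (`df[df['d'].isin([5, 20, 50]) & (df['y'] == 20)]`); `query` is simply more readable for long conjunctions like the one above.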
MTG/essentia
src/examples/python/musicbricks-tutorials/2-sinemodel_analsynth.ipynb
agpl-3.0
[ "Sine model Analysis/Synthesis - MusicBricks Tutorial\nIntroduction\nThis tutorial will guide you through some tools for performing spectral analysis and synthesis using the Essentia library (http://www.essentia.upf.edu). \nThis algorithm shows how to analyze the input signal and resynthesize it again, allowing us to apply new transformations directly in the spectral domain.\nYou should first install the Essentia library with Python bindings. Installation instructions are detailed here: http://essentia.upf.edu/documentation/installing.html . \nProcessing steps", "# import essentia in streaming mode\nimport essentia\nimport essentia.streaming as es", "After importing the Essentia library, let's import other numerical and plotting tools", "# import matplotlib for plotting\nimport matplotlib.pyplot as plt\nimport numpy as np", "Define the parameters of the STFT workflow", "# algorithm parameters\nparams = { 'frameSize': 2048, 'hopSize': 512, 'startFromZero': False, 'sampleRate': 44100, \\\n 'maxnSines': 100,'magnitudeThreshold': -74,'minSineDur': 0.02,'freqDevOffset': 10, \\\n 'freqDevSlope': 0.001}", "Specify input and output audio filenames", "inputFilename = 'singing-female.wav'\noutputFilename = 'singing-female-sindemodel.wav'\n\n# create an audio loader and import audio file\nout = np.array(0)\nloader = es.MonoLoader(filename = inputFilename, sampleRate = 44100)\npool = essentia.Pool()", "Define the algorithm chain for the frame-by-frame process: \nFrameCutter -> Windowing -> FFT -> SineModelAnal -> SineModelSynth -> IFFT -> OverlapAdd -> AudioWriter", "# algorithm instantiation\nfcut = es.FrameCutter(frameSize = params['frameSize'], hopSize = params['hopSize'], startFromZero = False);\nw = es.Windowing(type = \"blackmanharris92\");\nfft = es.FFT(size = params['frameSize']);\nsmanal = es.SineModelAnal(sampleRate = params['sampleRate'], maxnSines = params['maxnSines'], magnitudeThreshold = params['magnitudeThreshold'], freqDevOffset = params['freqDevOffset'], freqDevSlope = params['freqDevSlope'])\nsmsyn = 
es.SineModelSynth(sampleRate = params['sampleRate'], fftSize = params['frameSize'], hopSize = params['hopSize'])\nifft = es.IFFT(size = params['frameSize']);\noverl = es.OverlapAdd (frameSize = params['frameSize'], hopSize = params['hopSize'], gain = 1./params['frameSize'] );\nawrite = es.MonoWriter (filename = outputFilename, sampleRate = params['sampleRate']);", "Now we set the algorithm network and store the processed audio samples in the output file", "# analysis\nloader.audio >> fcut.signal\nfcut.frame >> w.frame\nw.frame >> fft.frame\nfft.fft >> smanal.fft\n# synthesis\nsmanal.magnitudes >> smsyn.magnitudes\nsmanal.frequencies >> smsyn.frequencies\nsmanal.phases >> smsyn.phases\nsmsyn.fft >> ifft.fft\nifft.frame >> overl.frame\noverl.signal >> awrite.audio\noverl.signal >> (pool, 'audio')", "Finally we run the process that will store an output file in a WAV file", "essentia.run(loader)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
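The OverlapAdd stage at the end of the chain above sums equal-length time-domain frames, each shifted by the hop size, back into one signal. A minimal NumPy sketch of that reconstruction step (illustrative only, not Essentia's implementation, which also handles windowing and the `gain` factor internally):

```python
import numpy as np

def overlap_add(frames, hop_size):
    # Sum equal-length frames, each shifted by hop_size samples,
    # into one output signal.
    frame_size = len(frames[0])
    out = np.zeros(hop_size * (len(frames) - 1) + frame_size)
    for i, frame in enumerate(frames):
        out[i * hop_size : i * hop_size + frame_size] += frame
    return out

frames = [np.ones(4), np.ones(4), np.ones(4)]  # 3 toy frames, frame_size = 4
signal = overlap_add(frames, hop_size=2)
print(signal.tolist())  # [1.0, 1.0, 2.0, 2.0, 2.0, 2.0, 1.0, 1.0]
```

With a proper analysis window applied per frame, the overlapping regions sum back to (a scaled copy of) the original signal, which is why the chain divides by the frame size via the `gain` parameter.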
kit-cel/wt
qc/basic_concepts_Python.ipynb
gpl-2.0
[ "Contents and Objective\n\n\nDescribing several commands and methods that will be used throughout the simulations\n\n\n<b>Note:</b> Basic knowledge of programming languages and concepts is assumed. Only specific concepts that are different from, e.g., C++ or Matlab, are provided.\n\n\n<b>NOTE 2:</b> The following summary is by no means complete or exhaustive, but only provides a short and simplified overview of the commands used throughout the simulations in the lecture. For a detailed introduction please have a look at one of the numerous web-tutorials or books on Python, e.g., \n\nhttps://www.python-kurs.eu/\nhttps://link.springer.com/book/10.1007%2F978-1-4842-4246-9\nhttps://primo.bibliothek.kit.edu/primo_library/libweb/action/search.do?mode=Basic&vid=KIT&vl%28freeText0%29=python&vl%28freeText0%29=python&fn=search&tab=kit&srt=date\n\n\n\nCell Types\nThere are two types of cells:\n\nText cells (called 'Markdown'): containing text, allowing use of LaTeX\nMath/code cells: where code is being executed\n\nAs long as you are just reading the simulations, there is no need to be concerned about this fact.\nData Structures\n\nIn the following sections the basic data structures used in upcoming simulations will be introduced.\nBasic types as int, float, string are supposed to be well-known.\n\nLists\n\nContainer-type structure for collecting entities (which may even be of different type)\nDefined by key word list( ) or by square brackets with entities being separated by comma\nReferenced by index in square brackets; <b>Note</b>: indexing starting at 0\nEntries may be changed, appended, sliced,...", "# defining lists\nsport_list = [ 'cycling', 'football', 'fitness' ]\nfirst_prime_numbers = [ 2, 3, 5, 7, 11, 13, 17, 19 ]\n\n# getting contents\nsport = sport_list[ 2 ]\nthird_prime = first_prime_numbers[ 2 ]\n\n# printing\nprint( 'All sports:', sport_list )\nprint( 'Sport to be done:', sport )\n\nprint( '\\nFirst primes:', first_prime_numbers )\nprint( 'Third prime 
number:', third_prime )\n\n\n# adapt entries and append new entries\nsport_list[ 1 ] = 'swimming'\nsport_list.append( 'running' )\nfirst_prime_numbers.append( 23 ) \n\n# printing\nprint( 'All sports:', sport_list )\nprint( 'First primes:', first_prime_numbers )", "Tuples\n\nSimilar to lists but \"immutable\", i.e., entries can be appended but not changed\nDefined by tuple( ) or by brackets with entities being separated by comma\nReferenced by index in square brackets; <b>Note</b>: indexing starting at 0", "# defining tuple\nsport_tuple = ( 'cycling', 'football', 'fitness' )\n\n# getting contents\nsport = sport_tuple[ 2 ]\n\n# printing\nprint( 'All sports:', sport_tuple )\nprint( 'Sport to be done:', sport )\n\n# append new entries\nsport_tuple += ( 'running', )\n\n# printing\nprint( 'All sports:', sport_tuple )\nprint()\n\n# changing entries will fail\n# --> ERROR is being generated on purpose\n# --> NOTE: Error is handled by 'try: ... except: ...' statement\ntry:\n    sport_tuple[ 1 ] = 'swimming'\nexcept:\n    print('ERROR: Entries within tuples cannot be adapted!')", "Dictionaries\n\nContainer in which entries are of type: ( key : value )\nDefined by keyword dict or by curly brackets with entities of shape \"key : value\" being separated by comma\nReferenced by key in square brackets --> <b>Note</b>: Indexing by keys instead of indices might be a major advantage (at least sometimes)", "# defining dictionaries\nsports_days = { 'Monday': 'pause', 'Tuesday': 'fitness', 'Wednesday' : 'running', \n               'Thursday' : 'fitness', 'Friday' : 'swimming', 'Saturday' : 'cycling', \n               'Sunday' : 'cycling' }\n\nprint( 'Sport by day:', sports_days )\nprint( '\\nOn Tuesday:', sports_days[ 'Tuesday' ])\n\n# Changes are made by using the key as identifier\nsports_days[ 'Tuesday' ] = 'running'\nprint( 'Sport by day:', sports_days )", "Sets\n\nAs characterized by the naming, sets represent mathematical sets; no double occurrences of elements\nDefined by keyword set or by curly
braces with entities being separated by comma\n<b>Note</b>: As in maths, sets don't possess ordering, so there is no indexing of sets!", "# defining sets\nsports_set = { 'fitness', 'running', 'swimming', 'cycling'}\nprint( sports_set )\nprint()\n\n# indexing will fail\n# --> ERROR is being generated on purpose\ntry:\n    print( sports_set[0] )\nexcept:\n    print('ERROR: No indexing of sets!')\n\n# adding elements (or not)\nsports_set.add( 'pause' )\nprint(sports_set)\n\nsports_set.add( 'fitness' )\nprint(sports_set)\n\n# union of sets (also: intersection, complement, ...)\nall_stuff_set = set( sports_set )\nunion_of_sets = all_stuff_set.union( first_prime_numbers)\n\nprint( union_of_sets )", "Flow Control\n\nStandard commands such as for, while, ...\nFunctions for specific purposes\n<b>Note:</b> Since the commands and their concepts are quite self-explanatory, only a short description of the syntax is provided\n\nFor Loops\n\nfor loops in Python allow looping along any so-called iterable, e.g., lists, tuples, dicts, ... 
<b>Note</b>: Not necessarily int\nSyntax: for i in iterable:\n<b>Note:</b> Blocks are structured by indentation; sub-commands (e.g., within a loop) are indented", "# looping in lists simply parsing along the list\nfor s in sport_list:\n    print( s )\n\nprint()\n\n# looping in dictionaries happens along keys\nfor s in sports_days:\n    print( '{}: \\t{}'.format( s, sports_days[ s ] ) )", "While Loops\n\nwhile loops in Python are (as usual) constructed by checking a condition and exiting the loop if the condition becomes False\n<b>Note:</b> Blocks are structured by indentation; sub-commands (e.g., within a loop) are indented", "# initialize variables\nsum_primes = 0\n_n = 0\n\n# sum primes up to sum-value of 20\nwhile sum_primes < 20:\n    \n    # add prime of according index\n    sum_primes += first_prime_numbers[ _n ]\n    \n    # increase index\n    _n += 1\n    \nprint( 'Sum of first {} primes is {}.'.format( _n, sum_primes ) )", "Functions\n\nDefined by keyword def followed by list of arguments in brackets\nDoc string defined directly after def by ''' TEXT '''\nValues returned by keyword return; <b>Note:</b> the return \"value\" can be a scalar, list, dict, vector, matrix,...", "def get_n_th_prime( n, first_prime_numbers ):\n    '''\n    DOC String\n    \n    IN: index of prime number, list of prime numbers\n    OUT: n-th prime number\n    '''\n    \n    # do something smart as, e.g., checking that the according index really exists\n    # here a try/except block does the job: if the index is not feasible, an error string is returned\n    try: \n        val = first_prime_numbers[ n - 1 ]\n    except:\n        return '\"ERROR: Index not feasible!\"'\n    \n    # NOTE: since counting starts at 0, the (n-1)st number is returned\n    # Furthermore, there is no need for a function here; a simple reference would have done the job!\n    \n    return first_prime_numbers[ n - 1 ]\n\n\n# show doc string\nprint( help( get_n_th_prime ) )\n\n# apply functions\nN = 3\nprint( '{}. 
prime number is {}.'.format( N, get_n_th_prime( N, first_prime_numbers ) ) )\n\nprint()\n\nN = 30\nprint( '{}. prime number is {}.'.format( N, get_n_th_prime( N, first_prime_numbers ) ) )", "Several Useful Commands and Wording\n\n\nimport: importing modules to be used as, e.g., modules for \n\nplotting (matplotlib), \nnumerical calculations (numpy), \nnp.linalg: linear algebra\nnp.random: everything dealing with randomness, as, e.g., realization of different distributions\n\n\nscientific math (scipy),\niterating along an iterable (itertools, ...)\n\n\n\nbreak: quitting the current loop\n\n\ncontinue: proceeding to the next iteration of the current loop\n\n\nenumerate( x ): generates enumeration tuples of shape ( index, value ) parsing all entries of iterable x\n\nlen( x ): length of list/tuple x\n\nrange( N ): generates iterable with integers $0,\\ldots,N-1$\n\n\niterable: something that can be iterated, i.e., \"walked through\", as, e.g., lists, tuples, dicts\n\n\nModule Numpy\n\nIn simulations imported as import numpy as np such that all functions or submodules within the module are being used as np.sub_func_mod\n\nOptimized for vector operations\n\n\nnp.arange( a, b, step ): generating a numpy array starting at a (included) up to b (not included) with step size 'step' (optional)\n\nnp.argmax( a ): returns first index at which a attains its maximum value\nnp.array( ): standard data type of numpy, essentially corresponding to a vector as used in maths\nnp.average( x ): determines average value of list/array x\nnp.concatenate( x, y ): generates new vector/tuple ( x, y )\nx.copy( ): creates a copy of x\nnp.corrcoef( x, y ): determines correlation coefficient of x and y\nnp.correlate( x, y, 'full'): determines correlation function (!) 
of vectors (signals) x and y\nnp.cumsum( x ): cumulative sum of x\nnp.histogram( values, bins ): generates histogram (frequency) of 'values' with respect to classes listed in 'bins'\nnp.isclose( a, b ): returns Boolean describing if $a \\approx b$ 'within tolerance'\nnp.linspace( a, b, num_points ): generates vector of 'num_points' equidistant points in $[a, b]$ (both endpoints included by default)\nnp.ones( shape ): generates array of ones of shape 'shape'\nnp.ones_like( x ): generates array of ones of same shape and type as x\nnp.prod( x ): multiplying all elements of x\nnp.random.choice( A, size = S ): sampling S entities out of A; replacing elements and probability may be characterized as parameters\nnp.random.rand( size ): generates 'size' random values being uniformly distributed in $[0,1)$\nnp.random.randint( a, b, size ): generates 'size' random integers between a (included) and b (not included)\nnp.random.randn( size ): generates a vector of standard gaussian ($E(X)=0, D^2(X)=1$) random variables of size 'size'\nnp.sum( x ): summing up all elements of x\nnp.zeros( shape ): generates array of zeros of shape 'shape'\nnp.zeros_like( x ): generates array of zeros of same shape and type as x\n\nModule Scipy\n\nContains several useful scientific modules\nContains elements for digital signal processing, e.g., filter design, filters, fast Fourier transforms,...\nspecial: several special functions, e.g., Bessel, binomial coefficients,..." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
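The NumPy helpers catalogued in the notebook above can be exercised in a short, self-contained sketch. Note the library's actual spellings: the correlation routine is `np.corrcoef`, and `np.linspace` includes both endpoints by default; the small input vector here is made up for illustration.

```python
import numpy as np

# np.linspace includes both endpoints by default
grid = np.linspace(0.0, 1.0, 5)

# np.arange excludes the upper bound
steps = np.arange(0, 10, 2)

# cumulative sum and product of a vector
x = np.array([1, 2, 3, 4])
csum = np.cumsum(x)
total = np.prod(x)

# correlation coefficient of two perfectly correlated signals
r = np.corrcoef(x, 2 * x)[0, 1]
```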
joosthub/pytorch-nlp-tutorial-sf2017
day_2/00-Load-Vectorize-Generate-And-Sequences-as-Tensors.ipynb
mit
[ "import json\n\nfrom local_settings import settings, datautils\n\nfrom datautils.vocabulary import Vocabulary\n\nimport pandas as pd\nimport numpy as np\n\nimport torch\nfrom torch import FloatTensor\nfrom torch import nn\nfrom torch.autograd import Variable\nfrom torch.nn import Parameter\nfrom torch.nn import functional as F\nfrom torch.utils.data import Dataset\nfrom torch.utils.data import DataLoader\n\nfrom tqdm import tqdm, tqdm_notebook", "Data Structures\nFor the notebooks presented today, we will be using a pattern that we have employed many times. For this, we break the machine learning data pipeline into 4 distinct parts:\n\nRaw Data\nVectorized Data\nA Vectorizer\nA (python) generator\n\nTo give it a name, I'll call it Load-Vectorize-Generate (LVG).\nThis pipeline turns letters or words into integers and then batches them to yield matrices of integers. For language, since sequences have variable length, there are also 0-valued (padding) positions in the matrix. We will see how we tell PyTorch to treat these 0s as ignore-values. \nAfter I introduce LVG, I will show quickly how to use the data generated from LVG ( a matrix of integers ). First, it is embedded so a vector of numbers is associated with each integer, then the sequence is moved to the 0th dimension so that it can be iterated over. \nLoad\nLoading the raw data from disk should be relatively quick. Preferably, all munging should have happened & the form that is loaded should have precomputed things like split (between train/test/eval or fold #).", "class RawSurnames(object):\n    def __init__(self, data_path=settings.SURNAMES_CSV, delimiter=\",\"):\n        self.data = pd.read_csv(data_path, delimiter=delimiter)\n\n    def get_data(self, filter_to_nationality=None):\n        if filter_to_nationality is not None:\n            return self.data[self.data.nationality.isin(filter_to_nationality)]\n        return self.data\n", "Vectorize\nThe first class here is for managing the vectorized data structure. 
It subclasses PyTorch's dataset class, which is supposed to implement two functions: __len__ and __getitem__. Our assumption with this is that no data processing is happening here; it is given the final tensors at init time and it just provides them through __getitem__. PyTorch has things available to use this for sophisticated data queueing with the DataLoader class. The DataLoader class will also convert these structures into PyTorch tensors, so we don't have to do that conversion. \nSome additional things: we also are returning the lengths of the sequences so that we can use them in the model.", "class VectorizedSurnames(Dataset):\n def __init__(self, x_surnames, y_nationalities):\n self.x_surnames = x_surnames\n self.y_nationalities = y_nationalities\n\n def __len__(self):\n return len(self.x_surnames)\n\n def __getitem__(self, index):\n return {'x_surnames': self.x_surnames[index],\n 'y_nationalities': self.y_nationalities[index],\n 'x_lengths': len(self.x_surnames[index].nonzero()[0])}\n\n", "Vectorizer\nThe actual vectorizer has a lot of responsibility. 
\nPrimarily, it manages the Vocabulary object, saving and loading it, and applying it to a dataset to create a vectorized form.", "class SurnamesVectorizer(object):\n def __init__(self, surname_vocab, nationality_vocab, max_seq_length):\n self.surname_vocab = surname_vocab\n self.nationality_vocab = nationality_vocab\n self.max_seq_length = max_seq_length\n \n def save(self, filename):\n vec_dict = {\"surname_vocab\": self.surname_vocab.get_serializable_contents(),\n \"nationality_vocab\": self.nationality_vocab.get_serializable_contents(),\n 'max_seq_length': self.max_seq_length}\n\n with open(filename, \"w\") as fp:\n json.dump(vec_dict, fp)\n \n @classmethod\n def load(cls, filename):\n with open(filename, \"r\") as fp:\n vec_dict = json.load(fp)\n\n vec_dict[\"surname_vocab\"] = Vocabulary.deserialize_from_contents(vec_dict[\"surname_vocab\"])\n vec_dict[\"nationality_vocab\"] = Vocabulary.deserialize_from_contents(vec_dict[\"nationality_vocab\"])\n return cls(**vec_dict)\n\n @classmethod\n def fit(cls, surname_df):\n \"\"\"\n \"\"\"\n surname_vocab = Vocabulary(use_unks=False,\n use_mask=True,\n use_start_end=True,\n start_token=settings.START_TOKEN,\n end_token=settings.END_TOKEN)\n\n nationality_vocab = Vocabulary(use_unks=False, use_start_end=False, use_mask=False)\n\n max_seq_length = 0\n for index, row in surname_df.iterrows():\n surname_vocab.add_many(row.surname)\n nationality_vocab.add(row.nationality)\n\n if len(row.surname) > max_seq_length:\n max_seq_length = len(row.surname)\n max_seq_length = max_seq_length + 2\n\n return cls(surname_vocab, nationality_vocab, max_seq_length)\n\n @classmethod\n def fit_transform(cls, surname_df, split='train'):\n vectorizer = cls.fit(surname_df)\n return vectorizer, vectorizer.transform(surname_df, split)\n\n def transform(self, surname_df, split='train'):\n\n df = surname_df[surname_df.split==split].reset_index()\n n_data = len(df)\n \n x_surnames = np.zeros((n_data, self.max_seq_length), dtype=np.int64)\n 
y_nationalities = np.zeros(n_data, dtype=np.int64)\n\n for index, row in df.iterrows():\n vectorized_surname = list(self.surname_vocab.map(row.surname, \n include_start_end=True))\n x_surnames[index, :len(vectorized_surname)] = vectorized_surname\n y_nationalities[index] = self.nationality_vocab[row.nationality]\n\n return VectorizedSurnames(x_surnames, y_nationalities)\n", "Generate\nFinally, the make_data_generator interacts with PyTorch's DataLoader and returns a generator. It basically just iterates over the DataLoader generator and does some processing. Currently, it returns a function rather than just making the generator itself so some control can be had over num_batches & volatile mode, and other run time things. It's mostly a cheap and easy function that can be written in many ways.", "# data generator\n\ndef make_generator(vectorized_data, batch_size, num_batches=-1, \n num_workers=0, volatile_mode=False, \n strict_batching=True):\n\n loaded_data = DataLoader(vectorized_data, batch_size=batch_size, \n shuffle=True, num_workers=num_workers)\n\n def inner_func(num_batches=num_batches, \n volatile_mode=volatile_mode):\n\n for batch_index, batch in enumerate(loaded_data):\n out = {}\n current_batch_size = list(batch.values())[0].size(0)\n if current_batch_size < batch_size and strict_batching:\n break\n for key, value in batch.items():\n if not isinstance(value, Variable):\n value = Variable(value)\n if settings.CUDA:\n value = value.cuda()\n if volatile_mode:\n value = value.volatile()\n out[key] = value\n yield out\n\n if num_batches > 0 and batch_index > num_batches:\n break\n\n return inner_func\n\nraw_data = RawSurnames().get_data()\n\nraw_data.head()\n\nvectorizer = SurnamesVectorizer.fit(raw_data)\n\nvectorizer.nationality_vocab, vectorizer.surname_vocab\n\nvec_train = vectorizer.transform(raw_data, split='train')\n\nvec_train.x_surnames, vec_train.x_surnames.shape\n\nvec_train.y_nationalities, vec_train.y_nationalities.shape\n\n# let's say we are 
making a randomized batch. \nn_data = len(vec_train)\nindices = np.random.choice(np.arange(n_data), \n size=n_data, \n replace=False)\n\nbatch_indices = indices[:10]\nbatched_x = vec_train.x_surnames[batch_indices]\nbatched_x.shape", "Embedding sequences\nLet's take a look at how sequences are embedded", "import torch\nfrom torch import LongTensor\nfrom torch.autograd import Variable\n\nn_surnames = len(vectorizer.surname_vocab)\n# padding_idx is very important!\nemb = torch.nn.Embedding(embedding_dim=8, num_embeddings=n_surnames, padding_idx=0)\n\ntorch_x = Variable(LongTensor(batched_x))\nx_seq = emb(torch_x)\nx_seq.size()", "Common Pattern: putting sequence dimension on dimension 0\nBecause dimension 0 is indexed faster, and it's easier to write code for, many times the dimensions are permuted to put the sequence on the first dimension. this is done like the following", "# where this swaps 1 and 0. if we did it twice, it would swap back. \nx_seq_on_dim0 = x_seq.permute(1, 0, 2)\nx_seq_on_dim0.size()", "so, later when we want to get the 5th item in the sequence, we can", "x_5th_step = x_seq_on_dim0[4, :, :]\nx_5th_step.size()", "so, this is the gist of how we will be using sequences as tensors. we construct a matrix of embedding integers, use an embedding module to retrieve their corresponding vectors, and then move the sequence to the first dimension so we can index into it easier & faster." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
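The vectorization step in the notebook above boils down to: map each character to an integer, zero-pad every row to a common length, and recover true sequence lengths from the non-zero entries (as in `__getitem__`). A minimal NumPy-only sketch of that idea follows; the toy names and the PAD-at-index-0 convention are illustrative assumptions, not the project's actual `Vocabulary` class.

```python
import numpy as np

PAD = 0  # index 0 is reserved for padding (the "mask" token)

def build_vocab(names):
    # assign every character an integer index, starting after PAD
    chars = sorted(set("".join(names)))
    return {c: i + 1 for i, c in enumerate(chars)}

def vectorize(names, vocab, max_len):
    # one row per name, zero-padded on the right
    out = np.zeros((len(names), max_len), dtype=np.int64)
    for row, name in enumerate(names):
        indices = [vocab[c] for c in name]
        out[row, :len(indices)] = indices
    return out

names = ["kim", "lee"]
vocab = build_vocab(names)
x = vectorize(names, vocab, max_len=5)

# true sequence lengths, recovered from the non-PAD entries
lengths = (x != PAD).sum(axis=1)
```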
Kaggle/learntools
notebooks/feature_engineering_new/raw/ex3.ipynb
apache-2.0
[ "Introduction\nIn this exercise you'll start developing the features you identified in Exercise 2 as having the most potential. As you work through this exercise, you might take a moment to look at the data documentation again and consider whether the features we're creating make sense from a real-world perspective, and whether there are any useful combinations that stand out to you.\nRun this cell to set everything up!", "# Setup feedback system\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.feature_engineering_new.ex3 import *\n\nimport numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import cross_val_score\nfrom xgboost import XGBRegressor\n\n\ndef score_dataset(X, y, model=XGBRegressor()):\n # Label encoding for categoricals\n for colname in X.select_dtypes([\"category\", \"object\"]):\n X[colname], _ = X[colname].factorize()\n # Metric for Housing competition is RMSLE (Root Mean Squared Log Error)\n score = cross_val_score(\n model, X, y, cv=5, scoring=\"neg_mean_squared_log_error\",\n )\n score = -1 * score.mean()\n score = np.sqrt(score)\n return score\n\n\n# Prepare data\ndf = pd.read_csv(\"../input/fe-course-data/ames.csv\")\nX = df.copy()\ny = X.pop(\"SalePrice\")", "Let's start with a few mathematical combinations. We'll focus on features describing areas -- having the same units (square-feet) makes it easy to combine them in sensible ways. 
Since we're using XGBoost (a tree-based model), we'll focus on ratios and sums.\n1) Create Mathematical Transforms\nCreate the following features:\n\nLivLotRatio: the ratio of GrLivArea to LotArea\nSpaciousness: the sum of FirstFlrSF and SecondFlrSF divided by TotRmsAbvGrd\nTotalOutsideSF: the sum of WoodDeckSF, OpenPorchSF, EnclosedPorch, Threeseasonporch, and ScreenPorch", "# YOUR CODE HERE\nX_1 = pd.DataFrame() # dataframe to hold new features\n\n#_UNCOMMENT_IF(PROD)_\n#X_1[\"LivLotRatio\"] = ____\n#_UNCOMMENT_IF(PROD)_\n#X_1[\"Spaciousness\"] = ____\n#_UNCOMMENT_IF(PROD)_\n#X_1[\"TotalOutsideSF\"] = ____\n\n\n# Check your answer\nq_1.check()\n\n# Lines below will give you a hint or solution code\n#_COMMENT_IF(PROD)_\nq_1.hint()\n#_COMMENT_IF(PROD)_\nq_1.solution()\n\n#%%RM_IF(PROD)%%\nX_1 = pd.DataFrame() # dataframe to hold new features\n\nX_1[\"LivLot\"] = df.GrLivArea / df.LotArea\nX_1[\"Spacious\"] = (df.FirstFlrSF + df.SecondFlrSF) / df.TotRmsAbvGrd\nX_1[\"TotalSF\"] = \\\n df.WoodDeckSF + df.OpenPorchSF + df.EnclosedPorch + \\\n df.Threeseasonporch + df.ScreenPorch\n\nq_1.assert_check_failed()\n\n#%%RM_IF(PROD)%%\nX_1 = pd.DataFrame() # dataframe to hold new features\n\nX_1[\"LivLotRatio\"] = df.GrLivArea * df.LotArea\nX_1[\"Spaciousness\"] = (df.FirstFlrSF + df.SecondFlrSF) / df.TotRmsAbvGrd\nX_1[\"TotalOutsideSF\"] = \\\n df.WoodDeckSF + df.OpenPorchSF + df.EnclosedPorch + \\\n df.Threeseasonporch + df.ScreenPorch\n\nq_1.assert_check_failed()\n\n#%%RM_IF(PROD)%%\nX_1 = pd.DataFrame() # dataframe to hold new features\n\nX_1[\"LivLotRatio\"] = df.GrLivArea / df.LotArea\nX_1[\"Spaciousness\"] = (df.FirstFlrSF - df.SecondFlrSF) / df.TotRmsAbvGrd\nX_1[\"TotalOutsideSF\"] = \\\n df.WoodDeckSF + df.OpenPorchSF + df.EnclosedPorch + \\\n df.Threeseasonporch + df.ScreenPorch\n\nq_1.assert_check_failed()\n\n#%%RM_IF(PROD)%%\nX_1 = pd.DataFrame() # dataframe to hold new features\n\nX_1[\"LivLotRatio\"] = df.GrLivArea / df.LotArea\nX_1[\"Spaciousness\"] = 
(df.FirstFlrSF + df.SecondFlrSF) / df.TotRmsAbvGrd\nX_1[\"TotalOutsideSF\"] = \\\n df.WoodDeckSF + df.OpenPorchSF + df.EnclosedPorch + \\\n df.Threeseasonporch\n\nq_1.assert_check_failed()\n\n#%%RM_IF(PROD)%%\nX_1 = pd.DataFrame() # dataframe to hold new features\n\nX_1[\"LivLotRatio\"] = df.GrLivArea / df.LotArea\nX_1[\"Spaciousness\"] = (df.FirstFlrSF + df.SecondFlrSF) / df.TotRmsAbvGrd\nX_1[\"TotalOutsideSF\"] = \\\n df.WoodDeckSF + df.OpenPorchSF + df.EnclosedPorch + \\\n df.Threeseasonporch + df.ScreenPorch\n\nq_1.assert_check_passed()", "If you've discovered an interaction effect between a numeric feature and a categorical feature, you might want to model it explicitly using a one-hot encoding, like so:\n```\nOne-hot encode Categorical feature, adding a column prefix \"Cat\"\nX_new = pd.get_dummies(df.Categorical, prefix=\"Cat\")\nMultiply row-by-row\nX_new = X_new.mul(df.Continuous, axis=0)\nJoin the new features to the feature set\nX = X.join(X_new)\n```\n2) Interaction with a Categorical\nWe discovered an interaction between BldgType and GrLivArea in Exercise 2. Now create their interaction features.", "# YOUR CODE HERE\n# One-hot encode BldgType. Use `prefix=\"Bldg\"` in `get_dummies`\nX_2 = ____ \n# Multiply\nX_2 = ____\n\n\n# Check your answer\nq_2.check()\n\n# Lines below will give you a hint or solution code\n#_COMMENT_IF(PROD)_\nq_2.hint()\n#_COMMENT_IF(PROD)_\nq_2.solution()\n\n#%%RM_IF(PROD)%%\nX_2 = pd.get_dummies(df.BldgType, prefix=\"Bldg\")\nX_2 = X_2.mul(df.LotArea, axis=0)\n\nq_2.assert_check_failed()\n\n#%%RM_IF(PROD)%%\nX_2 = pd.get_dummies(df.BldgType, prefix=\"Bldg\")\n\nq_2.assert_check_failed()\n\n#%%RM_IF(PROD)%%\nX_2 = pd.get_dummies(df.BldgType, prefix=\"Bldg\")\nX_2 = X_2.mul(df.GrLivArea, axis=0)\n\nq_2.assert_check_passed()", "3) Count Feature\nLet's try creating a feature that describes how many kinds of outdoor areas a dwelling has. 
Create a feature PorchTypes that counts how many of the following are greater than 0.0:\nWoodDeckSF\nOpenPorchSF\nEnclosedPorch\nThreeseasonporch\nScreenPorch", "X_3 = pd.DataFrame()\n\n# YOUR CODE HERE\n#_UNCOMMENT_IF(PROD)_\n#X_3[\"PorchTypes\"] = ____\n\n\n# Check your answer\nq_3.check()\n\n# Lines below will give you a hint or solution code\n#_COMMENT_IF(PROD)_\nq_3.hint()\n#_COMMENT_IF(PROD)_\nq_3.solution()\n\n#%%RM_IF(PROD)%%\nX_3 = pd.DataFrame()\n\nX_3[\"PorchTypes\"] = df[[\n \"OpenPorchSF\",\n \"EnclosedPorch\",\n \"Threeseasonporch\",\n \"ScreenPorch\",\n]].gt(0.0).sum(axis=1)\n\nq_3.assert_check_failed()\n\n#%%RM_IF(PROD)%%\nX_3 = pd.DataFrame()\n\nX_3[\"PorchTypes\"] = df[[\n \"WoodDeckSF\",\n \"OpenPorchSF\",\n \"EnclosedPorch\",\n \"Threeseasonporch\",\n \"ScreenPorch\",\n]].sum(axis=1)\n\nq_3.assert_check_failed()\n\n#%%RM_IF(PROD)%%\nX_3 = pd.DataFrame()\n\nX_3[\"PorchTypes\"] = df[[\n \"WoodDeckSF\",\n \"OpenPorchSF\",\n \"EnclosedPorch\",\n \"Threeseasonporch\",\n \"ScreenPorch\",\n]].gt(0.0).sum(axis=1)\n\nq_3.assert_check_passed()", "4) Break Down a Categorical Feature\nMSSubClass describes the type of a dwelling:", "df.MSSubClass.unique()", "You can see that there is a more general categorization described (roughly) by the first word of each category. Create a feature containing only these first words by splitting MSSubClass at the first underscore _. 
(Hint: In the split method use an argument n=1.)", "X_4 = pd.DataFrame()\n\n# YOUR CODE HERE\n#_UNCOMMENT_IF(PROD)_\n#____\n\n# Check your answer\nq_4.check()\n\n# Lines below will give you a hint or solution code\n#_COMMENT_IF(PROD)_\nq_4.hint()\n#_COMMENT_IF(PROD)_\nq_4.solution()\n\n#%%RM_IF(PROD)%%\nX_4 = pd.DataFrame()\n\nX_4[\"MSClass\"] = df.MSSubClass.str.split(n=1, expand=True)[0]\n\nq_4.assert_check_failed()\n\n#%%RM_IF(PROD)%%\nX_4 = pd.DataFrame()\n\nX_4[\"MSClass\"] = df.MSSubClass.str.split(\"_\", n=1, expand=True)[0]\n\nq_4.assert_check_passed()", "5) Use a Grouped Transform\nThe value of a home often depends on how it compares to typical homes in its neighborhood. Create a feature MedNhbdArea that describes the median of GrLivArea grouped on Neighborhood.", "X_5 = pd.DataFrame()\n\n# YOUR CODE HERE\n#_UNCOMMENT_IF(PROD)_\n#X_5[\"MedNhbdArea\"] = ____\n\n# Check your answer\nq_5.check()\n\n# Lines below will give you a hint or solution code\n#_COMMENT_IF(PROD)_\nq_5.hint()\n#_COMMENT_IF(PROD)_\nq_5.solution()\n\n#%%RM_IF(PROD)%%\nX_5 = pd.DataFrame()\n\nX_5[\"MedNhbdArea\"] = df.groupby(\"Neighborhood\")[\"GrLivArea\"].transform(\"mean\")\n\nq_5.assert_check_failed()\n\n#%%RM_IF(PROD)%%\nX_5 = pd.DataFrame()\n\nX_5[\"MedNhbdArea\"] = df.groupby(\"Neighborhood\")[\"LotArea\"].transform(\"median\")\n\nq_5.assert_check_failed()\n\n#%%RM_IF(PROD)%%\nX_5 = pd.DataFrame()\n\nX_5[\"MedNhbdArea\"] = df.groupby(\"MSSubClass\")[\"GrLivArea\"].transform(\"median\")\n\nq_5.assert_check_failed()\n\n#%%RM_IF(PROD)%%\nX_5 = pd.DataFrame()\n\nX_5[\"MedNhbdArea\"] = df.groupby(\"Neighborhood\")[\"GrLivArea\"].transform(\"median\")\n\nq_5.assert_check_passed()", "Now you've made your first new feature set! If you like, you can run the cell below to score the model with all of your new features added:", "X_new = X.join([X_1, X_2, X_3, X_4, X_5])\nscore_dataset(X_new, y)", "Keep Going\nUntangle spatial relationships by adding cluster labels to your dataset." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
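The two feature-engineering patterns the exercise above drills (a categorical-numeric interaction via one-hot encoding, and a grouped transform) can be checked on a tiny hand-made frame. The values below are fabricated for illustration, not taken from the Ames data.

```python
import pandas as pd

# tiny hand-made frame; the values are made up for illustration
df = pd.DataFrame({
    "BldgType": ["OneFam", "Duplex", "OneFam"],
    "GrLivArea": [1000, 800, 1200],
    "Neighborhood": ["A", "A", "B"],
})

# interaction: one-hot encode the categorical, then scale each indicator row-wise
inter = pd.get_dummies(df.BldgType, prefix="Bldg").mul(df.GrLivArea, axis=0)

# grouped transform: median living area per neighborhood, broadcast back to rows
df["MedNhbdArea"] = df.groupby("Neighborhood")["GrLivArea"].transform("median")
```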
mitdbg/modeldb
client/workflows/demos/Embedding-and-Lookup-TF-Hub.ipynb
mit
[ "Embedding and Lookup (TF Hub and Annoy)\nThis example logs a class (instead of an object instance) as a model.\nThis allows for custom setup configuration in the class's __init__() method,\nand access to logged artifacts at deployment time.", "try:\n import verta\nexcept ImportError:\n !pip install verta", "This example features:\n- embedding model built around a TensorFlow Hub module\n- nearest neighbor embedding lookup via Annoy\n- verta's Python client logging a class as a model to be instantiated at deployment time\n- predictions against a deployed model", "HOST = \"app.verta.ai\"\n\nPROJECT_NAME = \"Film Review Embeddings\"\nEXPERIMENT_NAME = \"TF Hub and Annoy\"\n\n# import os\n# os.environ['VERTA_EMAIL'] = \n# os.environ['VERTA_DEV_KEY'] = ", "Imports", "from __future__ import print_function\n\nimport os\nimport time\n\nimport pandas as pd\n\nimport tensorflow as tf\nimport tensorflow_hub as hub\n\nimport annoy\n\ntry:\n import wget\nexcept ImportError:\n !pip install wget # you may need pip3\n import wget", "Run Workflow\nPrepare Data", "train_data_url = \"http://s3.amazonaws.com/verta-starter/imdb_master.csv\"\ntrain_data_filename = wget.detect_filename(train_data_url)\nif not os.path.isfile(train_data_filename):\n wget.download(train_data_url)\n\nall_reviews = pd.read_csv(train_data_filename, encoding='latin')['review'].values.tolist()\nreviews = all_reviews[:2000] # just a subset for this example\n\nreviews[0]", "Instantiate Client", "from verta import Client\nfrom verta.utils import ModelAPI\n\nclient = Client(HOST)\nproj = client.set_project(PROJECT_NAME)\nexpt = client.set_experiment(EXPERIMENT_NAME)\nrun = client.set_experiment_run()", "Build Nearest Neighbor Embedding Index", "EMBEDDING_LENGTH = 512\nNN_INDEX_FILENAME = \"reviews.ann\"\n\nos.environ[\"TFHUB_CACHE_DIR\"] = \"tf_cache_dir\"\n\n# define graph\ng = tf.Graph()\nwith g.as_default():\n text_input = tf.placeholder(dtype=tf.string, shape=[None])\n encoder = 
hub.Module(\"https://tfhub.dev/google/universal-sentence-encoder-large/3\")\n    embed = encoder(text_input)\n    init_op = tf.group([tf.global_variables_initializer(), tf.tables_initializer()])\ng.finalize()\n\n# initialize session\nsess = tf.Session(graph=g)\nsess.run(init_op)\n\n# build and save embedding index\nt = annoy.AnnoyIndex(EMBEDDING_LENGTH, 'angular')  # Length of item vector that will be indexed\nfor i, review in enumerate(reviews):\n    # produce embedding with TF\n    embedding = sess.run(embed, feed_dict={text_input: [review]})\n    t.add_item(i, embedding[0])\nt.build(10) # 10 trees\nt.save(NN_INDEX_FILENAME)\n\nrun.log_artifact(\"nn_index\", open(NN_INDEX_FILENAME, 'rb'))", "Define Model Class\nA TensorFlow model—particularly one using TensorFlow Hub and a pre-built Annoy index—will require some setup at deployment time.\nTo support this, the Verta platform allows a model to be defined as a class that will be instantiated when it's deployed.\nThis class should provide the following interface:\n\n__init__(self, artifacts) where artifacts is a mapping of artifact keys to filepaths. This will be explained below, but Verta will provide this so you can open these artifact files and set up your model. 
Other initialization steps would be in this method, as well.\npredict(self, data) where data—like in other custom Verta models—is a list of input values for the model.", "class EmbeddingAndLookupModel:\n def __init__(self, artifacts):\n \"\"\"\n Parameters\n ----------\n artifacts\n Mapping of Experiment Run artifact keys to filepaths.\n This is provided by ``run.fetch_artifacts(artifact_keys)``.\n \n \"\"\"\n # get artifact filepath from `artifacts` mapping\n annoy_index_filepath = artifacts['nn_index']\n \n # load embedding index\n self.index = annoy.AnnoyIndex(EMBEDDING_LENGTH, \"angular\")\n self.index.load(annoy_index_filepath)\n \n os.environ[\"TFHUB_CACHE_DIR\"] = \"tf_cache_dir\"\n \n # define graph\n g = tf.Graph()\n with g.as_default():\n self.text_input = tf.placeholder(dtype=tf.string, shape=[None])\n self.encoder = hub.Module(\"https://tfhub.dev/google/universal-sentence-encoder-large/3\")\n self.embed = self.encoder(self.text_input)\n init_op = tf.group([tf.global_variables_initializer(), tf.tables_initializer()])\n g.finalize()\n self.graph = g\n \n # initialize session\n self.session = tf.Session(graph=self.graph)\n self.session.run(init_op)\n \n def predict(self, data):\n predictions = []\n for review in data:\n # embed sentence\n embedding = self.session.run(self.embed, feed_dict={self.text_input: [review]})\n\n # find closest\n predictions.append({\n review: self.index.get_nns_by_vector(embedding[0], 10)\n })\n\n return predictions", "Earlier we logged an artifact with the key \"nn_index\".\nYou can obtain an artifacts mapping mentioned above using run.fetch_artifacts(keys) to work with locally.\nA similar mapping—that works identically—will be passed into __init__() when the model is deployed.", "artifacts = run.fetch_artifacts([\"nn_index\"])\n\nmodel = EmbeddingAndLookupModel(artifacts=artifacts)\n\nmodel.predict([\"Good film.\", \"Bad film!\"])", "Log Model\nThe keys expected in the artifacts mapping mentioned above must be passed into 
run.log_model() to be available during deployment!", "run.log_model(\n model=EmbeddingAndLookupModel,\n artifacts=['nn_index'],\n)", "We also have to make sure we provide every package involved in the model.", "run.log_requirements([\n \"annoy==1.16.2\",\n \"tensorflow\",\n \"tensorflow_hub\",\n])", "Make Live Predictions\nAccess the Experiment Run through the Verta Web App and deploy it!", "run", "Prepare Data", "reviews = all_reviews[-2000:]", "Load Deployed Model", "from verta.deployment import DeployedModel\n\ndeployed_model = DeployedModel(HOST, run.id)", "Query Deployed Model", "for review in reviews:\n print(deployed_model.predict([review]))\n time.sleep(.5)", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
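Annoy builds an approximate index, but for a handful of vectors its 'angular' lookup can be reproduced exactly with a brute-force NumPy search. The sketch below illustrates the metric behind `get_nns_by_vector`; it is not a replacement for the Annoy index logged above, and the 2-d toy embeddings stand in for the 512-d sentence vectors.

```python
import numpy as np

def angular_nns(index_vectors, query, n):
    # normalize rows and the query, then rank by cosine similarity;
    # Annoy's 'angular' distance is a monotone function of this similarity
    v = index_vectors / np.linalg.norm(index_vectors, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = v @ q
    return [int(i) for i in np.argsort(-sims)[:n]]

# three toy 2-d "embeddings" standing in for the 512-d sentence vectors
embeddings = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
nearest = angular_nns(embeddings, np.array([1.0, 0.1]), n=2)
```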
BrownDwarf/ApJdataFrames
notebooks/Muzic2015.ipynb
mit
[ "ApJdataFrames Muzic2015\nTitle: SUBSTELLAR OBJECTS IN NEARBY YOUNG CLUSTERS (SONYC) IX: THE PLANETARY-MASS DOMAIN OF CHAMAELEON-I AND UPDATED MASS FUNCTION IN LUPUS-3\nAuthors: Muzic et al.\nData is from this paper:\nhttp://iopscience.iop.org/article/10.1088/0004-637X/810/2/159/meta", "%pylab inline\n\nimport seaborn as sns\nsns.set_context(\"notebook\", font_scale=1.5)\n\n#import warnings\n#warnings.filterwarnings(\"ignore\")\n\nimport pandas as pd", "Table 2 - Photometric Candidates Included in the Spectroscopic Follow-up with Sofi\nAwful multiple line formatting. Whyyyy?\nData Cleaning:\nYou have to do a regex search and replace using \\n{8}\\t as the search match, and replace with nothing. Once that is complete, trim the footer and then save the file.", "#tbl2 = pd.read_csv(\"http://iopscience.iop.org/0004-637X/810/2/159/suppdata/apj517985t2_ascii.txt\",\n# sep='\\t|\\n{8}', skiprows=11, skipfooter=3, engine='python', na_values=\"cdots\")\n#tbl2\n\nnames = ['id','alpha(J2000)','delta(J2000)','Date','Slit','I','J','K','Comments']\ntbl2 = pd.read_csv('../data/Muzic2015/muzic2015_tbl2.txt', \n sep='\\t', na_values='cdots', names=names, header=1, index_col=False)\ntbl2", "The $I-$band data is Cousins $I$ for Cha, and DENIS $i$ for Lupus.", "tbl2.to_csv(\"../data/Muzic2015/tbl2.csv\", index=False)", "Plots", "I_J = tbl2.I - tbl2.J\n\nsns.distplot(I_J, axlabel='$I-J$')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
opesci/devito
examples/mpi/overview.ipynb
mit
[ "Prerequisites\nThis notebook contains examples which are expected to be run with exactly 4 MPI processes; not because they wouldn't work otherwise, but simply because it's what their description assumes. For this, you need to:\n\nInstall an MPI distribution on your system, such as OpenMPI, MPICH, or Intel MPI (if not already available).\nInstall some optional dependencies, including mpi4py and ipyparallel; from the root Devito directory, run\npip install -r requirements-optional.txt\nCreate an ipyparallel MPI profile, by running our simple setup script. From the root directory, run\n./scripts/create_ipyparallel_mpi_profile.sh\n\nLaunch and connect to an ipyparallel cluster\nWe're finally ready to launch an ipyparallel cluster. Open a new terminal and run the following command\nipcluster start --profile=mpi -n 4\nOnce the engines have started successfully, we can connect to the cluster", "import ipyparallel as ipp\nc = ipp.Client(profile='mpi')", "In this tutorial, to run commands in parallel over the engines, we will use the %px line magic.", "%%px --group-outputs=engine\n\nfrom mpi4py import MPI\nprint(\"Hi, I'm rank %d.\" % MPI.COMM_WORLD.rank)", "Overview of MPI in Devito\nDistributed-memory parallelism via MPI is designed so that users can \"think sequentially\" as much as possible. The few things required of the user are:\n\nLike any other MPI program, run with mpirun -np X python ...\nSome pre- and/or post-processing may be rank-specific (e.g., we may want to plot on a given MPI rank only, even though this might be hidden away in the next Devito releases, when newer support APIs will be provided).\nParallel I/O (if and when necessary) to populate the MPI-distributed datasets in input to a Devito Operator. If a shared file system is available, there are a few simple alternatives to pick from, such as NumPy’s memory-mapped arrays.\n\nTo enable MPI, users have two options.
Either export the environment variable DEVITO_MPI=1 or, programmatically:", "%%px\nfrom devito import configuration\nconfiguration['mpi'] = True\n\n%%px\n# Keep generated code as simple as possible\nconfiguration['language'] = 'C'\n# Fix platform so that this notebook can be tested by py.test --nbval\nconfiguration['platform'] = 'knl7210'", "An Operator will then generate MPI code, including sends/receives for halo exchanges. Below, we introduce a running example through which we explain how domain decomposition as well as data access (read/write) and distribution work. Performance optimizations are discussed in a later section.\nLet's start by creating a TimeFunction.", "%%px\nfrom devito import Grid, TimeFunction, Eq, Operator \ngrid = Grid(shape=(4, 4))\nu = TimeFunction(name=\"u\", grid=grid, space_order=2, time_order=0)", "Domain decomposition is performed when creating a Grid. Users may supply their own domain decomposition, but this is not shown in this notebook. Devito exploits the MPI Cartesian topology abstraction to logically split the Grid over the available MPI processes. Since u is defined over a decomposed Grid, its data get distributed too.", "%%px --group-outputs=engine\nu.data", "Globally, u consists of 4x4 points -- this is what users \"see\". But locally, as shown above, each rank has got a 2x2 subdomain. The key point is: for the user, the fact that u.data is distributed is completely abstracted away -- the perception is that of indexing into a classic NumPy array, regardless of whether MPI is enabled or not. All sorts of NumPy indexing schemes (basic, slicing, etc.) are supported.
For example, we can write into a slice-generated view of our data.", "%%px\nu.data[0, 1:-1, 1:-1] = 1.\n\n%%px --group-outputs=engine\nu.data", "The only limitation, currently, is that a data access cannot require a direct data exchange among two or more processes (e.g., the assignment u.data[0, 0] = u.data[3, 3] will raise an exception unless both entries belong to the same MPI rank).\nWe can finally write out a trivial Operator to try running something.", "%%px\nop = Operator(Eq(u.forward, u + 1))\nsummary = op.apply(time_M=0)", "And we can now check again the (distributed) content of our u.data", "%%px --group-outputs=engine\nu.data", "Everything as expected. We could also peek at the generated code, because we may be curious to see what sort of MPI calls Devito has generated...", "%%px --targets 0\nprint(op)", "Hang on. There's nothing MPI-specific here! At least apart from the header file #include \"mpi.h\". What's going on? Well, it's simple. Devito was smart enough to realize that this trivial Operator doesn't even need any sort of halo exchange -- the Eq implements a pure \"map computation\" (i.e., fully parallel), so it can just let each MPI process do its job without ever synchronizing with halo exchanges. We might want to try again with a proper stencil Eq.", "%%px --targets 0\nop = Operator(Eq(u.forward, u.dx + 1))\nprint(op)", "Uh-oh -- now the generated code looks more complicated than before, though it still is pretty much human-readable.
We can spot the following routines:\n\nhaloupdate0 performs a blocking halo exchange, relying on three additional functions, gather0, sendrecv0, and scatter0;\ngather0 copies the (generally non-contiguous) boundary data into a contiguous buffer;\nsendrecv0 takes the buffered data and sends it to one or more neighboring processes; then it waits until all data from the neighboring processes is received;\nscatter0 copies the received data into the proper array locations.\n\nThis is the simplest halo exchange scheme available in Devito. There are a few, and some of them apply aggressive optimizations, as shown later on.\nBefore looking at other scenarios and performance optimizations, there is one last thing worth discussing -- the data_with_halo view.", "%%px --group-outputs=engine\nu.data_with_halo", "This is again a global data view. The shown with_halo is the \"true\" halo surrounding the physical domain, not the halo used for the MPI halo exchanges (often referred to as \"ghost region\"). So it is trivial for a user to initialize the \"true\" halo region (which is typically read by a stencil Eq when an Operator iterates in proximity of the domain boundary).", "%%px\nu.data_with_halo[:] = 1.\n\n%%px --group-outputs=engine\nu.data_with_halo", "MPI and SparseFunction\nA SparseFunction represents a sparse set of points which are generically unaligned with the Grid. A sparse point could be anywhere within a grid, and is therefore attached to some coordinates. Given a sparse point, Devito looks at its coordinates and, based on the domain decomposition, logically assigns it to a given MPI process; this is purely logical ownership: in Python-land, before running an Operator, the sparse point physically lives on the MPI rank which created it.
Within op.apply, right before jumping to C-land, the sparse points are scattered to their logical owners; upon returning to Python-land, the sparse points are gathered back to their original location.\nIn the following example, we attempt injection of four sparse points into the neighboring grid points via linear interpolation.", "%%px\nfrom devito import Function, SparseFunction\ngrid = Grid(shape=(4, 4), extent=(3.0, 3.0))\nx, y = grid.dimensions\nf = Function(name='f', grid=grid)\ncoords = [(0.5, 0.5), (1.5, 2.5), (1.5, 1.5), (2.5, 1.5)]\nsf = SparseFunction(name='sf', grid=grid, npoint=len(coords), coordinates=coords)", "Let:\n* O be a grid point\n* x be a halo point\n* A, B, C, D be the sparse points\nWe show the global view, that is what the user \"sees\".\nO --- O --- O --- O\n| A | | |\nO --- O --- O --- O\n| | C | B |\nO --- O --- O --- O\n| | D | |\nO --- O --- O --- O\nAnd now the local view, that is what the MPI ranks own when jumping to C-land. \n```\nRank 0 Rank 1\nO --- O --- x x --- O --- O\n| A | | | | |\nO --- O --- x x --- O --- O\n| | C | | C | B |\nx --- x --- x x --- x --- x\nRank 2 Rank 3\nx --- x --- x x --- x --- x\n| | C | | C | B |\nO --- O --- x x --- O --- O\n| | D | | D | |\nO --- O --- x x --- O --- O\n```\nWe observe that the sparse points along the boundary of two or more MPI ranks are duplicated and thus redundantly computed over multiple processes. However, the contributions from these points to the neighboring halo points are naturally ditched, so the final result of the interpolation is as expected. Let's convince ourselves that this is the case. We assign a value of $5$ to each sparse point. Since we are using linear interpolation and all points are placed at the exact center of a grid quadrant, we expect that the contribution of each sparse point to a neighboring grid point will be $5 * 0.25 = 1.25$. 
Based on the global view above, we eventually expect f to look as follows:\n1.25 --- 1.25 --- 0.00 --- 0.00\n| | | |\n1.25 --- 2.50 --- 2.50 --- 1.25\n| | | |\n0.00 --- 2.50 --- 3.75 --- 1.25\n| | | |\n0.00 --- 1.25 --- 1.25 --- 0.00\nLet's check this out.", "%%px\nsf.data[:] = 5.\nop = Operator(sf.inject(field=f, expr=sf))\nsummary = op.apply()\n\n%%px --group-outputs=engine\nf.data", "Performance optimizations\nThe Devito compiler applies several optimizations before generating code.\n\nRedundant halo exchanges are identified and removed. A halo exchange is redundant if a prior halo exchange carries out the same Function update and the data is not “dirty” yet.\nComputation/communication overlap, with explicit prodding of the asynchronous progress engine to make sure that non-blocking communications execute in the background during the compute part.\nHalo exchanges could also be reshuffled to maximize the extension of the computation/communication overlap region.\n\nTo run with all these optimizations enabled, instead of DEVITO_MPI=1, users should set DEVITO_MPI=full, or, equivalently", "%%px\nconfiguration['mpi'] = 'full'", "We could now peek at the generated code to see that things now look different.", "%%px\nop = Operator(Eq(u.forward, u.dx + 1))\n# Uncomment below to show code (it's quite verbose)\n# print(op)", "The body of the time-stepping loop has changed, as it now implements a classic computation/communication overlap scheme:\n\nhaloupdate0 triggers non-blocking communications;\ncompute0 executes the core domain region, that is the sub-region which doesn't require reading from halo data to be computed;\nhalowait0 waits for and terminates the non-blocking communications;\nremainder0, which internally calls compute0, computes the boundary region requiring the now up-to-date halo data." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Naereen/notebooks
Demo_of_RISE_for_slides_with_Jupyter_notebooks__Julia.ipynb
mit
[ "Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Demo-of-RISE-for-slides-with-Jupyter-notebooks-(Python)\" data-toc-modified-id=\"Demo-of-RISE-for-slides-with-Jupyter-notebooks-(Python)-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Demo of RISE for slides with Jupyter notebooks (Python)</a></div><div class=\"lev2 toc-item\"><a href=\"#Title-2\" data-toc-modified-id=\"Title-2-11\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Title 2</a></div><div class=\"lev3 toc-item\"><a href=\"#Title-3\" data-toc-modified-id=\"Title-3-111\"><span class=\"toc-item-num\">1.1.1&nbsp;&nbsp;</span>Title 3</a></div><div class=\"lev4 toc-item\"><a href=\"#Title-4\" data-toc-modified-id=\"Title-4-1111\"><span class=\"toc-item-num\">1.1.1.1&nbsp;&nbsp;</span>Title 4</a></div><div class=\"lev2 toc-item\"><a href=\"#Text\" data-toc-modified-id=\"Text-12\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Text</a></div><div class=\"lev2 toc-item\"><a href=\"#Maths\" data-toc-modified-id=\"Maths-13\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>Maths</a></div><div class=\"lev2 toc-item\"><a href=\"#And-code\" data-toc-modified-id=\"And-code-14\"><span class=\"toc-item-num\">1.4&nbsp;&nbsp;</span>And code</a></div><div class=\"lev1 toc-item\"><a href=\"#More-demo-of-Markdown-code\" data-toc-modified-id=\"More-demo-of-Markdown-code-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>More demo of Markdown code</a></div><div class=\"lev2 toc-item\"><a href=\"#Lists\" data-toc-modified-id=\"Lists-21\"><span class=\"toc-item-num\">2.1&nbsp;&nbsp;</span>Lists</a></div><div class=\"lev4 toc-item\"><a href=\"#Images\" data-toc-modified-id=\"Images-2101\"><span class=\"toc-item-num\">2.1.0.1&nbsp;&nbsp;</span>Images</a></div><div class=\"lev2 toc-item\"><a href=\"#Some-code\" data-toc-modified-id=\"Some-code-22\"><span class=\"toc-item-num\">2.2&nbsp;&nbsp;</span>Some code</a></div><div class=\"lev4 toc-item\"><a href=\"#And-Markdown-can-include-raw-HTML\" 
data-toc-modified-id=\"And-Markdown-can-include-raw-HTML-2201\"><span class=\"toc-item-num\">2.2.0.1&nbsp;&nbsp;</span>And Markdown can include raw HTML</a></div><div class=\"lev1 toc-item\"><a href=\"#End-of-this-demo\" data-toc-modified-id=\"End-of-this-demo-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>End of this demo</a></div>\n\n# Demo of RISE for slides with Jupyter notebooks (Python)\n\n- This document is an example of a slideshow, written in a [Jupyter notebook](https://www.jupyter.org/) with the [RISE extension](https://github.com/damianavila/RISE).\n\n> By [Lilian Besson](http://perso.crans.org/besson/), Sept.2017.\n\n---\n## Title 2\n### Title 3\n#### Title 4\n##### Title 5\n###### Title 6\n\n## Text\nWith text, *emphasis*, **bold**, ~~striked~~, `inline code` and\n\n> *Quote.*\n>\n> -- By a guy.\n\n## Maths\nWith inline math $\\sin(x)^2 + \\cos(x)^2 = 1$ and equations:\n$$\\sin(x)^2 + \\cos(x)^2 = \\left(\\frac{\\mathrm{e}^{ix} - \\mathrm{e}^{-ix}}{2i}\\right)^2 + \\left(\\frac{\\mathrm{e}^{ix} + \\mathrm{e}^{-ix}}{2}\\right)^2 = \\frac{-\\mathrm{e}^{2ix}-\\mathrm{e}^{-2ix}+2 \\; + \\; \\mathrm{e}^{2ix}+\\mathrm{e}^{-2ix}+2}{4} = 1.$$\n\n## And code\nIn Markdown:\n```python\nfrom sys import version\nprint(version)\n```\n\nAnd in an executable cell (with a Python 3 kernel):", "from sys import version\nprint(version)", "More demo of Markdown code\nLists\n\nUnordered\nlists\nare easy.\n\nAnd\n\nand ordered also ! Just\nstart lines by 1., 2.
etc\nor simply 1., 1., ...\n\nImages\nWith an HTML &lt;img/&gt; tag or the ![alt](url) Markdown code:\n<img width=\"100\" src=\"agreg/images/dooku.jpg\"/>\n\nSome code", "# https://gist.github.com/dm-wyncode/55823165c104717ca49863fc526d1354\n\"\"\"Embed a YouTube video via its embed url into a notebook.\"\"\"\nfrom functools import partial\n\nfrom IPython.display import display, IFrame\n\nwidth, height = (560, 315, )\n\ndef _iframe_attrs(embed_url):\n \"\"\"Get IFrame args.\"\"\"\n return (\n ('src', 'width', 'height'), \n (embed_url, width, height, ),\n )\n\ndef _get_args(embed_url):\n \"\"\"Get args for type to create a class.\"\"\"\n iframe = dict(zip(*_iframe_attrs(embed_url)))\n attrs = {\n 'display': partial(display, IFrame(**iframe)),\n }\n return ('YouTubeVideo', (object, ), attrs, )\n\ndef youtube_video(embed_url):\n \"\"\"Embed YouTube video into a notebook.\n\n Place this module into the same directory as the notebook.\n\n >>> from embed import youtube_video\n >>> youtube_video(url).display()\n \"\"\"\n YouTubeVideo = type(*_get_args(embed_url)) # make a class\n return YouTubeVideo() # return an object", "And Markdown can include raw HTML\n<center><span style=\"color: green;\">This is a centered span, colored in green.</span></center>\nIframes are disabled by default, but by using the IPython internals we can include, say, a YouTube video:", "youtube_video(\"https://www.youtube.com/embed/FNg5_2UUCNU\").display()", "End of this demo\n\nSee here for more notebooks!\nThis document, like my other notebooks, is distributed under the MIT License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
syednasar/datascience
optimization_algos/Optimization.ipynb
mit
[ "Group Travel Optimization\nPlanning a trip for a group of people (the Glass family in this example) from different locations all arriving at the same place is always a challenge, and it makes for an interesting optimization problem.\nLoad Data", "import time\nimport random\nimport math\npeople = [('Seymour','BOS'),\n ('Franny','DAL'),\n ('Zooey','CAK'),\n ('Walt','MIA'),\n ('Buddy','ORD'),\n ('Les','OMA')]\n # LaGuardia airport in New York\ndestination='LGA'\n\n\n\"\"\"Load this data into a dictionary with the origin and destination (dest) as the keys \nand a list of potential flight details as the values.\"\"\"\nflights={}\n #\nfor line in file('schedule.txt'):\n origin,dest,depart,arrive,price=line.strip( ).split(',') \n flights.setdefault((origin,dest),[])\n # Add details to the list of possible flights\n flights[(origin,dest)].append((depart,arrive,int(price))) \n\n\n \n\n\"\"\"Calculates how many minutes into the day a given time is. This makes it easy to calculate \n flight times and waiting times\"\"\"\ndef getminutes(t):\n x=time.strptime(t,'%H:%M')\n return x[3]*60+x[4]\n\n\"\"\"Routine that prints all the flights that people decide to take in a nice table\"\"\"\ndef printschedule(r):\n for d in range(len(r)/2):\n name=people[d][0]\n origin=people[d][1]\n out=flights[(origin,destination)][r[d]]\n ret=flights[(destination,origin)][r[d+1]]\n print '%10s%10s %5s-%5s $%3s %5s-%5s $%3s' % (name,origin,\n out[0],out[1],out[2],\n ret[0],ret[1],ret[2])", "This will print a line containing each person’s name and origin, as well as the depar- ture time, arrival time, and price for the outgoing and return flights", "\ns=[1,4,3,2,7,3,6,3,2,4,5,3] \nprintschedule(s)", "Observation: Even disregarding price, this schedule has some problems. In particular, since the family members are traveling to and from the airport together, everyone has to arrive at the airport at 6 a.m. for Les’s return flight, even though some of them don’t leave until nearly 4 p.m. 
To determine the best combination, the program needs a way of weighting the various properties of different schedules and deciding which is the best.\nThe Cost Function", "\"\"\"This function takes into account the total cost of the trip and the total time spent waiting at airports for the \nvarious members of the family. It also adds a penalty of $50 if the car is returned at a later time of \nthe day than when it was rented.\n\"\"\"\ndef schedulecost(sol):\n totalprice=0\n latestarrival=0\n earliestdep=24*60\n for d in range(len(sol)/2):\n # Get the inbound and outbound flights\n origin=people[d][1]\n outbound=flights[(origin,destination)][int(sol[d])]\n returnf=flights[(destination,origin)][int(sol[d+1])]\n # Total price is the price of all outbound and return flights\n totalprice+=outbound[2]\n totalprice+=returnf[2]\n # Track the latest arrival and earliest departure\n if latestarrival<getminutes(outbound[1]): latestarrival=getminutes(outbound[1])\n if earliestdep>getminutes(returnf[0]): earliestdep=getminutes(returnf[0])\n # Every person must wait at the airport until the latest person arrives.\n # They also must arrive at the same time and wait for their flights.\n totalwait=0\n \n for d in range(len(sol)/2):\n origin=people[d][1]\n outbound=flights[(origin,destination)][int(sol[d])]\n returnf=flights[(destination,origin)][int(sol[d+1])]\n totalwait+=latestarrival-getminutes(outbound[1])\n totalwait+=getminutes(returnf[0])-earliestdep\n\n # Does this solution require an extra day of car rental? 
That'll be $50!\n if latestarrival>earliestdep: totalprice+=50\n\n return totalprice+totalwait", "#Print schedule cost\nschedulecost(s)", "Observation: Now that the cost function has been created, it should be clear that the goal is to minimize cost by choosing the correct set of numbers.\nRandom Searching\nRandom searching isn't a very good optimization method, but it makes it easy to understand exactly what all the algorithms are trying to do, and it also serves as a baseline so you can see if the other algorithms are doing a good job.", "\"\"\"The function takes a couple of parameters. Domain is a list of 2-tuples that specify the minimum and maximum values \nfor each variable. The length of the solution is the same as the length of this list. In the current example, \nthere are nine outbound flights and nine inbound flights for every person, so the domain in the list is (0,8) \nrepeated twice for each person.\nThe second parameter, costf, is the cost function, which in this example will be schedulecost. This is passed as \na parameter so that the function can be reused for other optimization problems. This function randomly generates \n1,000 guesses and calls costf on them. It keeps track of the best guess (the one with the lowest cost) and returns it.\n\"\"\"\ndef randomoptimize(domain,costf):\n best=999999999\n bestr=None\n for i in range(1000):\n # Create a random solution\n r=[random.randint(domain[i][0],domain[i][1])\n for i in range(len(domain))]\n # Get the cost\n cost=costf(r)\n # Compare it to the best one so far\n if cost<best:\n best=cost\n bestr=r\n # Return the best solution found, not the last random guess\n return bestr\n\n#Let's try 1000 guesses. 1,000 guesses is a very small fraction of the total number of possibilities\n\ndomain=[(0,8)]*(len(people)*2)\ns=randomoptimize(domain,schedulecost) \n\nschedulecost(s)\n\nprintschedule(s)", "Observation: Due to the random element, your results will be different from the results here.
The results shown are not great, as they have Zooey waiting at the airport for six hours until Walt arrives, but they could definitely be worse. Try running this function several times to see if the cost changes very much, or try increasing the loop size to 10,000 to see if you find better results that way." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gevero/py_matrix
py_matrix/examples/Gold SPR.ipynb
gpl-3.0
[ "# Gold Surface Plasmon Resonance (SPR) Example\n The notebook is structured as follows:\n - Setup of useful settings and import of necessary libraries\n - Inputs for the simulation\n - Computation\n - Plot\nSettings and libraries", "# libraries\nimport numpy as np # numpy\nimport sys # sys to add py_matrix to the path\n\n# matplotlib inline plots\nimport matplotlib.pylab as plt\n%matplotlib inline\n\n# adding py_matrix parent folder to python path\nsys.path.append('../../')\nimport py_matrix as pm # importing py_matrix\n\n# useful parameters\nf_size=20;", "Inputs\n\nLoading optical constants and building the optical constant database\nSetting the inputs such as layer composition, thickness, incident angles, etc...", "# building the optical constant database, point the folder below to the \"materials\" py_matrix folder\neps_db_out=pm.mat.generate_eps_db('../materials/',ext='*.edb')\neps_files,eps_names,eps_db=eps_db_out['eps_files'],eps_db_out['eps_names'],eps_db_out['eps_db']\n\n# multilayer and computation inputs\nstack=['e_bk7','e_au','e_vacuum']; # materials composing the stack, as taken from eps_db\nd_list=[0.0,55.0,0.0]; # multilayer thicknesses: incident medium and substrate have zero thickness\nwl_0=633; # incident wavelength in nm\n# polar angle in radians\ntheta_min=40*np.pi/1.8e2;\ntheta_max=50*np.pi/1.8e2;\ntheta_step=500;\nv_theta=np.linspace(theta_min,theta_max,theta_step)\n# azimuthal angle radians\nphi_0=0.0", "Computation\n\nRetrieval of optical constants at $\\lambda$=633 nm from the optical constant database\nFilling of the dielectric tensor at $\\lambda$=633 nm\nInitialization of the reflectance output vector\nPolar angle loop", "# optical constant tensor\nm_eps=np.zeros((len(stack),3,3),dtype=np.complex128);\ne_list=pm.mat.db_to_eps(wl_0,eps_db,stack) # retrieving optical constants at wl_0 from the database\nm_eps[:,0,0]=e_list # filling dielectric tensor diagonal\nm_eps[:,1,1]=e_list\nm_eps[:,2,2]=e_list\n\n# initializing reflectance output
vector\nv_r_p=np.zeros_like(v_theta)\n\n# angle loop\nfor i_t,t in enumerate(v_theta):\n \n #------Computing------\n m_r_ps=pm.core.rt(wl_0,t,phi_0,m_eps,d_list)['m_r_ps'] # reflection matrix\n v_r_p[i_t]=pm.utils.R_ps_rl(m_r_ps)['R_p'] # getting p-polarized reflectance", "Plot of the reflectance spectrum at $\\lambda$ = 633 nm", "# reflectivity plots\nplt.figure(1,figsize=(15,10))\nplt.plot(v_theta*1.8e2/np.pi,v_r_p,'k',linewidth=2.0)\n\n# labels\nplt.xlabel(r'$\\Theta^{\\circ}$',fontsize=f_size+10)\nplt.ylabel('R',fontsize=f_size+10)\n\n# ticks\nplt.xticks(fontsize=f_size)\nplt.yticks(fontsize=f_size)\n\n# grids\nplt.grid()\n\n#legends\nplt.legend(['55 nm Au film reflectance at 633 nm'],loc='upper right',fontsize=f_size,frameon=False);", "Plot of the local fields at $\\lambda=633$ nm at the SPP coupling angle", "# fields components and wavevectors\ntheta_0 = v_theta[v_r_p.argmin()] # getting the plasmon coupling angle\nout = pm.core.rt(wl_0,theta_0,phi_0,m_eps,d_list) # reflection matrix\nm_Kn = out['m_Kn']\nm_Hn = out['m_Hn']\nm_En = out['m_En']\n\n# computing the field, absorbed power and Poynting vector\nv_z = np.linspace(-100,500,1000) # z probing\nv_field = np.array([np.abs(pm.utils.field(m_Kn,m_En,m_Hn,e_list,d_list,0.0,0.0,z,'TM')['H'][1]) for z in v_z])\nv_abs = np.array([np.abs(pm.utils.field(m_Kn,m_En,m_Hn,e_list,d_list,0.0,0.0,z,'TM')['abs']) for z in v_z])\nv_S = np.array([np.abs(pm.utils.field(m_Kn,m_En,m_Hn,e_list,d_list,0.0,0.0,z,'TM')['S']) for z in v_z])\n\n# field plots\nplt.figure(figsize=(9,7.5))\n\n# transverse magnetic field modulus\nplt.subplot(2,2,1)\n\n# plot\nplt.plot(v_z,v_field,'k',linewidth=3.0)\nplt.axvline(d_list[0],color='gray',linestyle='dashed',linewidth=2.0)\nplt.axvline(d_list[1],color='gray',linestyle='dashed',linewidth=2.0)\n\n# labels\nplt.ylabel(r'$|H_{\\mathrm{y}}|$',fontsize=f_size+10)\n\n# ticks\nplt.xticks([0,200,400],fontsize=f_size)\nplt.yticks([0,2,4,6,8],fontsize=f_size)\n\n# grids\nplt.grid(color='gray')\n\n\n#
local absorbed power\nplt.subplot(2,2,2)\n\n# plot\nplt.plot(v_z,v_abs,'k',linewidth=3.0)\nplt.axvline(d_list[0],color='gray',linestyle='dashed',linewidth=2.0)\nplt.axvline(d_list[1],color='gray',linestyle='dashed',linewidth=2.0)\n\n# labels\nplt.ylabel(r'Abs power (a.u.)',fontsize=f_size+5)\n\n# ticks\nplt.xticks([0,200,400],fontsize=f_size)\nplt.yticks(fontsize=f_size)\n\n# grids\nplt.grid(color='gray')\n\n\n# Sx component of the Poynting vector\nplt.subplot(2,2,3)\n\n# plot\nplt.plot(v_z,v_S[:,0],'k',linewidth=3.0)\nplt.axvline(d_list[0],color='gray',linestyle='dashed',linewidth=2.0)\nplt.axvline(d_list[1],color='gray',linestyle='dashed',linewidth=2.0)\n\n# labels\nplt.xlabel(r'z (nm)',fontsize=f_size+10)\nplt.ylabel(r'$S_{\\mathrm{x}}$',fontsize=f_size+5)\n\n# ticks\nplt.xticks([0,200,400],fontsize=f_size)\nplt.yticks(fontsize=f_size)\n\n# grids\nplt.grid(color='gray')\n\n\n# Sz component of the Poynting vector\nplt.subplot(2,2,4)\n\n# plot\nplt.plot(v_z,v_S[:,2],'k',linewidth=3.0)\nplt.axvline(d_list[0],color='gray',linestyle='dashed',linewidth=2.0)\nplt.axvline(d_list[1],color='gray',linestyle='dashed',linewidth=2.0)\n\n# labels\nplt.xlabel(r'z (nm)',fontsize=f_size+10)\nplt.ylabel(r'$S_{\\mathrm{z}}$',fontsize=f_size+5)\n\n# ticks\nplt.xticks([0,200,400],fontsize=f_size)\nplt.yticks(fontsize=f_size)\n\n# grids\nplt.grid(color='gray')\n\n\nplt.tight_layout()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/hub
examples/colab/retrieval_with_tf_hub_universal_encoder_qa.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Hub Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================", "Multilingual Universal Sentence Encoder Q&A Retrieval\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/retrieval_with_tf_hub_universal_encoder_qa.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/hub/blob/master/examples/colab/retrieval_with_tf_hub_universal_encoder_qa.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/retrieval_with_tf_hub_universal_encoder_qa.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n <td>\n <a 
href=\"https://tfhub.dev/s?q=google%2Funiversal-sentence-encoder-multilingual-qa%2F3%20OR%20google%2Funiversal-sentence-encoder-qa%2F3\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" />See TF Hub models</a>\n </td>\n</table>\n\nThis is a demo for using the Universal Encoder Multilingual Q&A model for question-answer retrieval of text, illustrating the use of question_encoder and response_encoder of the model. We use sentences from SQuAD paragraphs as the demo dataset; each sentence and its context (the text surrounding the sentence) is encoded into a high dimension embedding with the response_encoder. These embeddings are stored in an index built using the simpleneighbors library for question-answer retrieval.\nOn retrieval, a random question is selected from the SQuAD dataset and encoded into a high dimension embedding with the question_encoder, which is used to query the simpleneighbors index, returning a list of approximate nearest neighbors in semantic space.\nMore models\nYou can find all currently hosted text embedding models here and all models that have been trained on SQuAD as well here.\nSetup",
"%%capture\n#@title Setup Environment\n# Install the latest Tensorflow version.\n!pip install -q \"tensorflow-text==2.8.*\"\n!pip install -q simpleneighbors[annoy]\n!pip install -q nltk\n!pip install -q tqdm\n\n#@title Setup common imports and functions\nimport json\nimport nltk\nimport os\nimport pprint\nimport random\nimport simpleneighbors\nimport urllib\nfrom IPython.display import HTML, display\nfrom tqdm.notebook import tqdm\n\nimport tensorflow.compat.v2 as tf\nimport tensorflow_hub as hub\nfrom tensorflow_text import SentencepieceTokenizer\n\nnltk.download('punkt')\n\n\ndef download_squad(url):\n return json.load(urllib.request.urlopen(url))\n\ndef extract_sentences_from_squad_json(squad):\n all_sentences = []\n for data in squad['data']:\n for paragraph in data['paragraphs']:\n sentences = nltk.tokenize.sent_tokenize(paragraph['context'])\n all_sentences.extend(zip(sentences, [paragraph['context']] * len(sentences)))\n return list(set(all_sentences)) # remove duplicates\n\ndef extract_questions_from_squad_json(squad):\n questions = []\n for data in squad['data']:\n for paragraph in data['paragraphs']:\n for qas in paragraph['qas']:\n if qas['answers']:\n questions.append((qas['question'], qas['answers'][0]['text']))\n return list(set(questions))\n\ndef output_with_highlight(text, highlight):\n output = \"<li> \"\n i = text.find(highlight)\n while True:\n if i == -1:\n output += text\n break\n output += text[0:i]\n output += '<b>'+text[i:i+len(highlight)]+'</b>'\n text = text[i+len(highlight):]\n i = text.find(highlight)\n return output + \"</li>\\n\"\n\ndef display_nearest_neighbors(query_text, answer_text=None):\n query_embedding = model.signatures['question_encoder'](tf.constant([query_text]))['outputs'][0]\n search_results = index.nearest(query_embedding, n=num_results)\n\n if answer_text:\n result_md = '''\n <p>Random Question from SQuAD:</p>\n <p>&nbsp;&nbsp;<b>%s</b></p>\n <p>Answer:</p>\n <p>&nbsp;&nbsp;<b>%s</b></p>\n ''' % (query_text , answer_text)\n else:\n result_md = '''\n <p>Question:</p>\n <p>&nbsp;&nbsp;<b>%s</b></p>\n ''' % query_text\n\n result_md += '''\n <p>Retrieved sentences :\n <ol>\n '''\n\n if answer_text:\n for s in search_results:\n result_md += output_with_highlight(s, answer_text)\n else:\n for s in search_results:\n result_md += '<li>' + s + '</li>\\n'\n\n result_md += \"</ol>\"\n display(HTML(result_md))",
"Run the following code block to download and extract the SQuAD dataset into:\n\nsentences is a list of (text, context) tuples - each paragraph from the SQuAD dataset is split into sentences using the nltk library, and the sentence and paragraph text form the (text, context) tuple.\nquestions is a list of (question, answer) tuples.\n\nNote: You can use this demo to index the SQuAD train dataset or the smaller dev dataset (1.1 or 2.0) by selecting the squad_url below.",
"#@title Download and extract SQuAD data\nsquad_url = 'https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json' #@param [\"https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json\", \"https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json\", \"https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json\", \"https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json\"]\n\nsquad_json = download_squad(squad_url)\nsentences = extract_sentences_from_squad_json(squad_json)\nquestions = extract_questions_from_squad_json(squad_json)\nprint(\"%s sentences, %s questions extracted from SQuAD %s\" % (len(sentences), len(questions), squad_url))\n\nprint(\"\\nExample sentence and context:\\n\")\nsentence = random.choice(sentences)\nprint(\"sentence:\\n\")\npprint.pprint(sentence[0])\nprint(\"\\ncontext:\\n\")\npprint.pprint(sentence[1])\nprint()",
"The following code block loads the Universal Encoder Multilingual Q&A model's question_encoder and response_encoder signatures from TF Hub.",
"#@title Load model from tensorflow hub\nmodule_url = \"https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/3\" #@param [\"https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/3\", \"https://tfhub.dev/google/universal-sentence-encoder-qa/3\"]\nmodel = hub.load(module_url)\n",
"The following code block computes the embeddings for all the (text, context) tuples and stores them in a simpleneighbors index using the response_encoder.",
"#@title Compute embeddings and build simpleneighbors index\nbatch_size = 100\n\nencodings = model.signatures['response_encoder'](\n input=tf.constant([sentences[0][0]]),\n context=tf.constant([sentences[0][1]]))\nindex = simpleneighbors.SimpleNeighbors(\n len(encodings['outputs'][0]), metric='angular')\n\nprint('Computing embeddings for %s sentences' % len(sentences))\nslices = zip(*(iter(sentences),) * batch_size)\nnum_batches = int(len(sentences) / batch_size)\nfor s in tqdm(slices, total=num_batches):\n response_batch = list([r for r, c in s])\n context_batch = list([c for r, c in s])\n encodings = model.signatures['response_encoder'](\n input=tf.constant(response_batch),\n context=tf.constant(context_batch)\n )\n for batch_index, batch in enumerate(response_batch):\n index.add_one(batch, encodings['outputs'][batch_index])\n\nindex.build()\nprint('simpleneighbors index for %s sentences built.' % len(sentences))\n",
"On retrieval, the question is encoded using the question_encoder and the question embedding is used to query the simpleneighbors index.",
"#@title Retrieve nearest neighbors for a random question from SQuAD\nnum_results = 25 #@param {type:\"slider\", min:5, max:40, step:1}\n\nquery = random.choice(questions)\ndisplay_nearest_neighbors(query[0], query[1])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
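The Q&A retrieval flow in the notebook record above (encode responses, index them, encode a question, query for neighbors) reduces to nearest-neighbor search under an angular metric. Below is a minimal pure-Python sketch of that idea; the 3-dimensional vectors and sentences are invented toy data standing in for the encoder outputs:

```python
import math

def cosine_similarity(u, v):
    # dot(u, v) / (|u| |v|); simpleneighbors' 'angular' distance is a
    # monotone transform of this, so both produce the same ranking.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(query, candidates, n=2):
    # Rank (text, embedding) pairs by similarity to the query embedding.
    ranked = sorted(candidates, key=lambda tc: cosine_similarity(query, tc[1]), reverse=True)
    return [text for text, _ in ranked[:n]]

# Toy "response" embeddings (the real ones come from response_encoder).
corpus = [
    ("Paris is the capital of France.", [0.9, 0.1, 0.0]),
    ("Mitochondria are the powerhouse of the cell.", [0.0, 0.2, 0.9]),
    ("France borders Spain.", [0.8, 0.3, 0.1]),
]
# Toy "question" embedding (the real one comes from question_encoder).
query_embedding = [1.0, 0.1, 0.0]
print(nearest(query_embedding, corpus, n=2))
# → ['Paris is the capital of France.', 'France borders Spain.']
```

In the notebook, index.nearest plays the role of nearest here, over the encoder's high-dimensional outputs instead of these 3-dimensional toys.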
BenLangmead/comp-genomics-class
notebooks/RepeatMasker.ipynb
gpl-2.0
[ "Repetitive DNA elements (\"repeats\") are DNA sequences prevalent in genomes, especially of higher eukaryotes. Repeats make up about 50% of the human genome and over 80% of the maize genome. Repeats can be categorized as interspersed, where similar DNA sequences are spread throughout the genome, or tandem, where similar sequences are adjacent (see Treangen and Salzberg). Some interspersed repeats are long segmental duplications, but most are relatively short transposons and retrotransposons. Though repeats are sometimes referred to as “junk,” they are involved in processes of current scientific interest, including genome expansion, speciation, and epigenetic regulation (see Fedoroff). Some are still actively expressed and duplicated, including in the human genome (see Witherspoon et al, Tyekucheva et al).\nRepeatMasker\nRepeatMasker is both a tool for identifying repeats in a genome sequence, and a database of repeats that have been found. The database covers some well known model species, like human, chimpanzee, gorilla, rhesus, rat, mouse, horse, cow, cat, dog, chicken, zebrafish, bee, fruitfly and roundworm. People often use RepeatMasker to remove (\"mask out\") repetitive sequences from the genome so that they can be ignored (or otherwise treated specially) in later analyses, though that's not our goal here.\nIt's instructive to click on some of the species listed in the database and examine the associated bar and pie charts describing their repeat content. For example, note the differences between the bar charts for human and mouse, especially for SINE/Alu and LINE/L1.\nWorking with RepeatMasker databases\nLet's obtain and parse a RepeatMasker database. We'll start with roundworm because it's relatively small (only about 2.5 megabytes compressed).",
"import urllib.request\nrm_site = 'http://www.repeatmasker.org'\nfn = 'ce10.fa.out.gz'\nurl = '%s/genomes/ce10/RepeatMasker-rm405-db20140131/%s' % (rm_site, fn)\nurllib.request.urlretrieve(url, fn)\n\nimport gzip\nimport itertools\nfh = gzip.open(fn, 'rt')\nfor ln in itertools.islice(fh, 10):\n print(ln, end='')",
"Above are the first several lines of the .out.gz file for the roundworm (C. elegans). The columns have headers, which are somewhat helpful. More detail is available in the RepeatMasker documentation under \"How to read the results\". (Note that in addition to the 14 fields described in the documentation, there's also a 15th ID field.)\nHere's an extremely simple class that parses a line from these files and stores the individual values in its fields:",
"class Repeat(object):\n def __init__(self, ln):\n # parse fields\n (self.swsc, self.pctdiv, self.pctdel, self.pctins, self.refid,\n self.ref_i, self.ref_f, self.ref_remain, self.orient, self.rep_nm,\n self.rep_cl, self.rep_prior, self.rep_i, self.rep_f, self.unk) = ln.split()\n # int-ize the reference coordinates\n self.ref_i, self.ref_f = int(self.ref_i), int(self.ref_f)",
"We can parse a file into a list of Repeat objects:",
"def parse_repeat_masker_db(fn):\n reps = []\n with gzip.open(fn) if fn.endswith('.gz') else open(fn) as fh:\n fh.readline() # skip header\n fh.readline() # skip header\n fh.readline() # skip header\n while True:\n ln = fh.readline()\n if len(ln) == 0:\n break\n reps.append(Repeat(ln.decode('UTF8')))\n return reps\n\nreps = parse_repeat_masker_db('ce10.fa.out.gz')",
"Extracting repeats from the genome in FASTA format\nNow let's obtain the genome for the roundworm in FASTA format. For more information on FASTA, see the FASTA notebook. As seen above, the name of the genome assembly used by RepeatMasker is ce10. We can get it from the UCSC server. It's around 30 MB.",
"ucsc_site = 'http://hgdownload.cse.ucsc.edu/goldenPath'\nfn = 'chromFa.tar.gz'\nurllib.request.urlretrieve(\"%s/ce10/bigZips/%s\" % (ucsc_site, fn), fn)\n\n!tar zxvf chromFa.tar.gz",
"Let's load chromosome I into a string so that we can see the sequences of the repeats.",
"from collections import defaultdict\n\ndef parse_fasta(fns):\n ret = defaultdict(list)\n for fn in fns:\n with open(fn, 'rt') as fh:\n for ln in fh:\n if ln[0] == '>':\n name = ln[1:].rstrip()\n else:\n ret[name].append(ln.rstrip())\n for k, v in ret.items():\n ret[k] = ''.join(v)\n return ret\n\ngenome = parse_fasta(['chrI.fa', 'chrII.fa', 'chrIII.fa', 'chrIV.fa', 'chrM.fa', 'chrV.fa', 'chrX.fa'])\n\ngenome['chrI'][:1000] # printing just the first 1K nucleotides",
"Note the combination of lowercase and uppercase. Actually, that relates to our discussion here. The lowercase stretches are repeats! The UCSC genome sequences use the lowercase/uppercase distinction to make it clear where the repeats are -- and they know this because they ran RepeatMasker on the genome beforehand. In this case, the two repeats you can see are both simple hexamer repeats. Also, note that their position in the genome corresponds to the first two rows of the RepeatMasker database that we printed above.\nWe write a function that, given a Repeat and a dictionary containing the sequences of all the chromosomes in the genome, returns the repeat's sequence string.",
"def extract_repeat(rep, genome):\n assert rep.refid in genome\n return genome[rep.refid][rep.ref_i-1:rep.ref_f]\n\nextract_repeat(reps[0], genome)\n\nextract_repeat(reps[1], genome)\n\nextract_repeat(reps[2], genome)",
"Let's specifically try to extract a repeat from the DNA/CMC-Chapaev family.",
"chapaevs = filter(lambda x: 'DNA/CMC-Chapaev' == x.rep_cl, reps)\n\n[extract_repeat(chapaev, genome) for chapaev in chapaevs]",
"How are repeats related?\nLook at the repeat family/class names for the first several repeats in the roundworm database:",
"from operator import attrgetter\n\n' '.join(map(attrgetter('rep_cl'), reps[:60]))",
"You'll notice a few things. (1) The family names seem to have some hierarchical relationships; e.g. DNA/TcMar-Tc1 seems to be more specific than DNA, (2) some of them end in a question mark, (3) some of them are Unknown. I don't really know what these mean or what to do as a result -- you'll have to navigate that issue. Seems like you can often look up the family names on RepeatMasker's site and find more detailed info (e.g., here are the details for DNA/TcMar-Tc1).\nAlternatives to RepeatMasker\nDfam is an alternative. Note that Dfam ultimately relies on Repbase for its \"seed alignments.\" Also, only the human genome has a pre-built Dfam database, as far as I can tell." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
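As the RepeatMasker notebook record above notes, UCSC genome FASTA files mark repeats with lowercase ("soft-masked") letters. A small sketch of recovering repeat intervals straight from such a sequence string; the toy sequence below is invented, and the coordinates are 0-based and end-exclusive:

```python
def soft_masked_intervals(seq):
    # Return (start, end) pairs (0-based, end-exclusive) covering each run
    # of lowercase bases, i.e. the regions RepeatMasker soft-masked.
    intervals = []
    start = None
    for i, base in enumerate(seq):
        if base.islower():
            if start is None:
                start = i
        elif start is not None:
            intervals.append((start, i))
            start = None
    if start is not None:
        intervals.append((start, len(seq)))
    return intervals

# Invented toy sequence: two lowercase (repeat) stretches amid uppercase bases.
seq = "GCAATgaattcgaaTTACCcacacacaca"
print(soft_masked_intervals(seq))  # → [(5, 14), (19, 29)]
```

Applied to genome['chrI'] from the notebook, intervals found this way should line up with the rows of the RepeatMasker .out table, which uses 1-based inclusive coordinates (hence the rep.ref_i-1 in extract_repeat).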
kylemede/DS-ML-sandbox
notebooks/CS_and_Python.ipynb
gpl-3.0
[ "Random notes while re-studying CS and DS while job hunting.\nxrange vs range looping\nFor long for loops with no need to track iteration use:", "for _ in xrange(10):\n print \"Do something\"", "This will loop through 10 times, and using _ as the loop variable signals that its value is intentionally unused. Also, xrange returns a type of iterator, whereas range returns a full list that can take a lot of memory for large loops.\nAutomating variable names\nTo assign a variable name and value in a loop fashion, use vars()[variable name as a string] = variable value. Such as:", "for i in range(1,10):\n vars()['x'+str(i)] = i", "You can see the variables in memory with:", "print repr(dir())\n\nprint repr(x1)\nprint repr(x5)",
"Binary numbers and Python operators\nA good review of Python operators can be found here\nThe wiki reviewing bitwise operations here OR here\nNote that binary numbers follow:\n2^4| 2^3| 2^2| 2^1| 2^0\n1 0 -> 2+0 = 2\n1 1 1 -> 4+2+1 = 7\n1 0 1 0 1 -> 16+0+4+0+1 = 21\n1 1 1 1 0 -> 16+8+4+2+0 = 30\nConvert numbers from base 10 to binary with bin()", "bin(21)[2:]", "Ensuring two binary numbers are the same length",
"a = 123\nb = 234\na, b = bin(a)[2:], bin(b)[2:]\nprint \"Before evening their lengths:\\n{}\\n{}\".format(a,b)\ndiff = len(a)-len(b)\nif diff > 0:\n b = '0' * diff + b\nelif diff < 0:\n a = '0' * abs(diff) + a\nprint \"After evening their lengths:\\n{}\\n{}\".format(a,b)", "For bitwise or:", "s = ''\nfor i in range(len(a)):\n s += str(int(a[i]) | int(b[i]))\nprint \"{}\\n{}\\n{}\\n{}\".format(a, b, '-'*len(a), s)",
"bitwise or is |, xor is ^, and is &, complement (switch 0's to 1's, and 1's to 0's) is ~, binary shift left (move the binary number's digits to the left by adding zeros to its right) is <<, right >>\nConvert the resulting binary number to base 10:", "sum(map(lambda x: 2**x[0] if int(x[1]) else 0, enumerate(reversed(s))))", "Building a 'stack' in Python",
"class Stack:\n def __init__(self):\n self.items = []\n\n def isEmpty(self):\n return self.items == []\n\n def push(self, item):\n self.items.append(item)\n\n def pop(self):\n return self.items.pop()\n\n def peek(self):\n return self.items[len(self.items)-1]\n\n def size(self):\n return len(self.items)\n\ns=Stack()\n\nprint repr(s.isEmpty())+'\\n'\ns.push(4)\ns.push('dog')\nprint repr(s.peek())+'\\n'\ns.push(True)\nprint repr(s.size())+'\\n'\nprint repr(s.isEmpty())+'\\n'\ns.push(8.4)\nprint repr(s.pop())+'\\n'\nprint repr(s.pop())+'\\n'\nprint repr(s.size())+'\\n'" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
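The binary-string exercise in the notebook record above (pad the two binary forms to the same length, OR them digit by digit, convert back to base 10) can be wrapped in one function and checked against Python's built-in | operator. This sketch uses Python 3 syntax rather than the notebook's Python 2, and int(bits, 2) in place of the sum-of-powers lambda:

```python
def bitwise_or_strings(a, b):
    # String-level re-implementation of a | b, following the notebook:
    # pad the binary forms to equal length, OR digit by digit, convert back.
    sa, sb = bin(a)[2:], bin(b)[2:]
    width = max(len(sa), len(sb))
    sa, sb = sa.zfill(width), sb.zfill(width)
    bits = ''.join(str(int(x) | int(y)) for x, y in zip(sa, sb))
    # int(bits, 2) is the built-in equivalent of the sum-of-powers lambda.
    return int(bits, 2)

print(bitwise_or_strings(123, 234))  # → 251, same as 123 | 234
```

The built-in | is of course the practical choice; the string route is only useful for watching the digits line up.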
mavillan/SciProg
05_cython/05_cython.ipynb
gpl-3.0
[ "<h1 align=\"center\">Scientific Programming in Python</h1>\n<h2 align=\"center\">Topic 5: Accelerating Python with Cython: Writing C in Python </h2>\n\nNotebook created by Martín Villanueva - martin.villanueva@usm.cl - DI UTFSM - May2017.",
"%matplotlib inline\n\nimport numpy as np\nimport numexpr as ne\nimport numba\nimport math\nimport random\nimport matplotlib.pyplot as plt\nimport scipy as sp\nimport sys\n\n%load_ext Cython",
"Table of Contents\n\n1.- Cython Basic Usage\n2.- Advanced usage\n3.- Pure C in Python\n\n<div id='cython' />\n1.- Cython Basic Usage\nCython is both a superset of Python and a Python library that lets you combine C and Python in various ways. There are two main use-cases:\n1. Optimizing your Python code by statically compiling it to C.\n2. Wrapping a C/C++ library in Python.\nIn order to get it properly working, you need Cython and a C compiler:\n1. Cython: conda install cython\n2. C compiler: Install GNU C compiler with your package manager (Unix/Linux) or install Xcode (OSX).\n\nWe will introduce the basic Cython usage by implementing the Eratosthenes Sieve Algorithm, which is an algorithm to find all prime numbers smaller than a given number.",
"def primes_python(n):\n primes = [False, False] + [True] * (n - 2)\n i= 2\n while i < n:\n # We do not deal with composite numbers.\n if not primes[i]:\n i += 1\n continue \n k= i+i\n # We mark multiples of i as composite numbers.\n while k < n:\n primes[k] = False\n k += i \n i += 1\n # We return all numbers marked with True.\n return [i for i in range(2, n) if primes[i]]\n\nprimes_python(20)",
"Let's evaluate the performance for the first version:", "tp = %timeit -o primes_python(10000)", "And now we write our first Cython version, by just adding %%cython magic in the first line of the cell:",
"%%cython\ndef primes_cython1(n):\n primes = [False, False] + [True] * (n - 2)\n i= 2\n while i < n:\n # We do not deal with composite numbers.\n if not primes[i]:\n i += 1\n continue \n k= i+i\n # We mark multiples of i as composite numbers.\n while k < n:\n primes[k] = False\n k += i \n i += 1\n # We return all numbers marked with True.\n return [i for i in range(2, n) if primes[i]]\n\ntc1 = %timeit -o primes_cython1(10000)",
"We achieve a 2x speed improvement doing (practically) nothing!\nWhen we add %%cython at the beginning of the cell, the code gets compiled by Cython into a C extension. Then, this extension is loaded, and the compiled function is readily available in the interactive namespace. \nLet's help the compiler by explicitly defining the type of the variables with the cdef macro/keyword:",
"%%cython\ndef primes_cython2(int n):\n # Note the type declarations below\n cdef list primes = [False, False] + [True] * (n - 2)\n cdef int i = 2\n cdef int k = 0\n # The rest of the functions is unchanged\n while i < n:\n # We do not deal with composite numbers.\n if not primes[i]:\n i += 1\n continue \n k= i+i\n # We mark multiples of i as composite numbers.\n while k < n:\n primes[k] = False\n k += i \n i += 1\n # We return all numbers marked with True.\n return [i for i in range(2, n) if primes[i]]\n\ntc2 = %timeit -o primes_cython2(10000)\n\nprint(\"Cython version 1 speedup: {0}\".format(tp.best/tc1.best))\nprint(\"Cython version 2 speedup: {0}\".format(tp.best/tc2.best))",
"Then: In general, Cython will be the most efficient when it can compile data structures and operations directly to C by __making as few CPython API calls as possible__. 
Specifying the types of the variables often leads to greater speed improvements.\nJust for curiosity let's see the performance Numba's JIT achieves:", "@numba.jit(nopython=True)\ndef primes_numba(n):\n primes = [False, False] + [True] * (n - 2)\n i= 2\n while i < n:\n # We do not deal with composite numbers.\n if not primes[i]:\n i += 1\n continue \n k= i+i\n # We mark multiples of i as composite numbers.\n while k < n:\n primes[k] = False\n k += i \n i += 1\n # We return all numbers marked with True.\n res = []\n for i in range(2,n):\n if primes[i]: res.append(i)\n return res\n\ntn = %timeit -o primes_numba(10000)", "Numba wins this time! but: This is not the final form of Cython... \nInspecting Cython bottlenecks with annotations\nWe can inspect the C code generated by Cython with the -a argument. Let's inspect the code used above.\nThe non-optimized lines will be shown in a gradient of yellow (white lines are faster, yellow lines are slower), telling you which lines are the least efficiently compiled to C. 
By clicking on a line, you can see the generated C code corresponding to that line.", "%%cython -a\ndef primes_cython1(n):\n primes = [False, False] + [True] * (n - 2)\n i= 2\n while i < n:\n # We do not deal with composite numbers.\n if not primes[i]:\n i += 1\n continue \n k= i+i\n # We mark multiples of i as composite numbers.\n while k < n:\n primes[k] = False\n k += i \n i += 1\n # We return all numbers marked with True.\n return [i for i in range(2, n) if primes[i]]\n\n%%cython -a\ndef primes_cython2(int n):\n # Note the type declarations below\n cdef list primes = [False, False] + [True] * (n - 2)\n cdef int i = 2\n cdef int k = 0\n # The rest of the functions is unchanged\n while i < n:\n # We do not deal with composite numbers.\n if not primes[i]:\n i += 1\n continue \n k= i+i\n # We mark multiples of i as composite numbers.\n while k < n:\n primes[k] = False\n k += i \n i += 1\n # We return all numbers marked with True.\n return [i for i in range(2, n) if primes[i]]", "Alternative usage of Cython: Outside the notebook\nIf you want to use Cython outside the notebook (the way it was thought...), you have to do the work of the magic:\n1. Write the function into a .pyx file.\n2. Cythonize it with cython filename.pyx generating the filename.c file.\n3. 
Compile it with GCC: \ngcc -shared -fPIC -fwrapv -O3 -fno-strict-aliasing -I/home/mavillan/anaconda3/include/python3.6m -o primes.so primes.c\n<div id='cython++' />\n2.- Advanced usage\nIn this section we will consider the example of computing a distance matrix: Given the matrices $A_{m,3}$ and $B_{n,3}$ (each row is a 3D-position), the distance matrix has entries $D_{i,j} = d(A[i],B[j])$.\nNumPy Arrays\nYou can use NumPy from Cython exactly the same as in regular Python, but by doing so you are losing potentially high speedups because Cython has support for fast access to NumPy arrays.", "# Matrices to use\nA = np.random.random((1000,3))\nB = np.random.random((500,3))\n\ndef dist(a, b):\n return np.sqrt(np.sum((a-b)**2))\n\ndef distance_matrix_python(A, B):\n m = A.shape[0]\n n = B.shape[0]\n D = np.empty((m,n))\n for i in range(m):\n for j in range(n):\n D[i,j] = dist(A[i],B[j])\n return D\n\n%timeit distance_matrix_python(A,B)\n\n%%cython -a\nimport numpy as np\n\ndef dist(a, b):\n return np.sqrt(np.sum((a-b)**2))\n\ndef distance_matrix_cython0(A, B):\n m = A.shape[0]\n n = B.shape[0]\n D = np.empty((m,n))\n for i in range(m):\n for j in range(n):\n D[i,j] = dist(A[i],B[j])\n return D\n\n%timeit distance_matrix_cython0(A,B)", "Now let's improve this naive Cython implementation by statically defining the types of the variables:", "%%cython -a\nimport numpy as np\ncimport numpy as cnp\n\nctypedef cnp.float64_t float64_t\n\ndef dist(cnp.ndarray[float64_t, ndim=1] a, cnp.ndarray[float64_t, ndim=1] b):\n return np.sqrt(np.sum((a-b)**2))\n\ndef distance_matrix_cython1(cnp.ndarray[float64_t, ndim=2] A, cnp.ndarray[float64_t, ndim=2] B):\n cdef:\n int m = A.shape[0]\n int n = B.shape[0]\n int i,j\n cnp.ndarray[float64_t, ndim=2] D = np.empty((m,n))\n for i in range(m):\n for j in range(n):\n D[i,j] = dist(A[i], B[j])\n return D\n\n%timeit -n 10 distance_matrix_cython1(A,B)\n\n%%cython -a\nimport numpy as np\ncimport numpy as cnp\n\nctypedef cnp.float64_t float64_t\nfrom 
libc.math cimport sqrt\n\ndef dist(cnp.ndarray[float64_t, ndim=1] a, cnp.ndarray[float64_t, ndim=1] b):\n cdef:\n int i = 0\n int n = a.shape[0]\n float ret = 0\n for i in range(n):\n ret += (a[i]-b[i])**2\n return sqrt(ret)\n\n\ndef distance_matrix_cython2(cnp.ndarray[float64_t, ndim=2] A, cnp.ndarray[float64_t, ndim=2] B):\n cdef:\n int m = A.shape[0]\n int n = B.shape[0]\n int i,j\n cnp.ndarray[float64_t, ndim=2] D = np.empty((m,n))\n for i in range(m):\n for j in range(n):\n D[i,j] = dist(A[i], B[j])\n return D\n\n%timeit -n 10 distance_matrix_cython2(A,B)", "Typed Memory Views\nTyped memoryviews allow efficient access to memory buffers, such as those underlying NumPy arrays, without incurring any Python overhead. Memoryviews are similar to the current NumPy array buffer support (np.ndarray[np.float64_t, ndim=2]), but they have more features and cleaner syntax.\nThey can handle a wider variety of sources of array data. For example, they can handle C arrays and the Cython array type (Cython arrays).\nSyntaxis: dtype[:,::1] where ::1 indicates the axis where elements are contiguous.", "%%cython -a\nimport numpy as np\ncimport numpy as cnp\n\nctypedef cnp.float64_t float64_t\nfrom libc.math cimport sqrt\n\ndef dist(float64_t[::1] a, float64_t[::1] b):\n cdef:\n int i = 0\n int n = a.shape[0]\n float ret = 0\n for i in range(n):\n ret += (a[i]-b[i])**2\n return sqrt(ret)\n\n\ndef distance_matrix_cython3(float64_t[:,::1] A, float64_t[:,::1] B):\n cdef:\n int m = A.shape[0]\n int n = B.shape[0]\n int i,j\n float64_t[:,::1] D = np.empty((m,n))\n for i in range(m):\n for j in range(n):\n D[i,j] = dist(A[i], B[j])\n return D\n\n%timeit -n 10 distance_matrix_cython3(A,B)", "Compiler optimization\nWith the -c option we can pass the compiler (gcc) optimization options. 
Below we use the most common of them:", "%%cython -a -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing\nimport numpy as np\ncimport numpy as cnp\n\nctypedef cnp.float64_t float64_t\nfrom libc.math cimport sqrt\n\ndef dist(float64_t[::1] a, float64_t[::1] b):\n cdef:\n int i = 0\n int n = a.shape[0]\n float ret = 0\n for i in range(n):\n ret += (a[i]-b[i])**2\n return sqrt(ret)\n\n\ndef distance_matrix_cython4(float64_t[:,::1] A, float64_t[:,::1] B):\n cdef:\n int m = A.shape[0]\n int n = B.shape[0]\n int i,j\n float64_t[:,::1] D = np.empty((m,n))\n for i in range(m):\n for j in range(n):\n D[i,j] = dist(A[i], B[j])\n return D\n\n%timeit -n 10 distance_matrix_cython4(A,B)", "Compiler directives\n Compiler directives are instructions which affect the behavior of Cython code. \ncdivision (True / False)\n* If set to False, Cython will adjust the remainder and quotient operators C types to match those of Python ints (which differ when the operands have opposite signs) and raise a ZeroDivisionError when the right operand is 0. This has up to a 35% speed penalty. If set to True.\nboundscheck (True / False)\n* If set to False, Cython is free to assume that indexing operations ([]-operator) in the code will not cause any IndexErrors to be raised. Lists, tuples, and strings are affected only if the index can be determined to be non-negative (or if wraparound is False). Conditions which would normally trigger an IndexError may instead cause segfaults or data corruption if this is set to False. Default is True.\nnonecheck (True / False)\n* If set to False, Cython is free to assume that native field accesses on variables typed as an extension type, or buffer accesses on a buffer variable, never occurs when the variable is set to None. Otherwise a check is inserted and the appropriate exception is raised. This is off by default for performance reasons. Default is False.\nwraparound (True / False)\n* In Python arrays can be indexed relative to the end. 
For example A[-1] indexes the last value of a list. In C negative indexing is not supported. If set to False, Cython will neither check for nor correctly handle negative indices, possibly causing segfaults or data corruption. Default is True.\ninitializedcheck (True / False)\n* If set to True, Cython checks that a memoryview is initialized whenever its elements are accessed or assigned to. Setting this to False disables these checks. Default is True.\nFor all the compilation directives see here.",
"%%cython -a -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing\n#!python\n#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False\n\nimport numpy as np\ncimport numpy as cnp\n\nctypedef cnp.float64_t float64_t\nfrom libc.math cimport sqrt\n\ndef dist(float64_t[::1] a, float64_t[::1] b):\n cdef:\n int i = 0\n int n = a.shape[0]\n float ret = 0\n for i in range(n):\n ret += (a[i]-b[i])**2\n return sqrt(ret)\n\n\ndef distance_matrix_cython5(float64_t[:,::1] A, float64_t[:,::1] B):\n cdef:\n int m = A.shape[0]\n int n = B.shape[0]\n int i,j\n float64_t[:,::1] D = np.empty((m,n))\n for i in range(m):\n for j in range(n):\n D[i,j] = dist(A[i,:], B[j,:])\n return D\n\n%timeit -n 10 distance_matrix_cython5(A,B)",
"Pure C functions\nWith the cdef keyword we can really define C functions, as shown below. In such functions all variable types should be declared and a return type given; they can't be called directly from Python, i.e., they can only be called by functions defined in the same module.\nThere is a midpoint between def and cdef (cpdef) which automatically creates a Python function with the same name, so the function can be called directly.",
"%%cython -a -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing\n#!python\n#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False\n\nimport numpy as np\ncimport numpy as cnp\n\nctypedef cnp.float64_t float64_t\nfrom libc.math cimport sqrt\n\ncdef float64_t dist(float64_t[::1] a, float64_t[::1] b):\n cdef:\n int i = 0\n int n = a.shape[0]\n float ret = 0\n for i in range(n):\n ret += (a[i]-b[i])**2\n return sqrt(ret)\n\n\ndef distance_matrix_cython6(float64_t[:,::1] A, float64_t[:,::1] B):\n cdef:\n int m = A.shape[0]\n int n = B.shape[0]\n int i,j\n float64_t[:,::1] D = np.empty((m,n))\n for i in range(m):\n for j in range(n):\n D[i,j] = dist(A[i,:], B[j,:])\n return D\n\n%timeit -n 10 distance_matrix_cython6(A,B)",
"Example of cdef and cpdef", "%%cython -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing\n#!python\n#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False\n\ncimport numpy as cnp\nctypedef cnp.float64_t float64_t\n\ncdef float64_t test1(float64_t a, float64_t b):\n return a+b\n\ntest1(1.,1.)\n\n%%cython -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing\n#!python\n#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False\n\ncimport numpy as cnp\nctypedef cnp.float64_t float64_t\n\ncpdef float64_t test2(float64_t a, float64_t b):\n return a+b\n\ntest2(1,1)",
"Function inlining\nIn computing, inline expansion, or inlining, is a manual or compiler optimization that replaces a function call site with the body of the called function. 
\nAs a rule of thumb: Some inlining will improve speed at very minor cost of space, but excess inlining will hurt speed, due to inlined code consuming too much of the instruction cache, and also cost significant space.", "%%cython -a -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing\n#!python\n#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False\n\nimport numpy as np\ncimport numpy as cnp\n\nctypedef cnp.float64_t float64_t\nfrom libc.math cimport sqrt\n\ncdef inline float64_t dist(float64_t[::1] a, float64_t[::1] b):\n cdef:\n int i = 0\n int n = a.shape[0]\n float ret = 0\n for i in range(n):\n ret += (a[i]-b[i])**2\n return sqrt(ret)\n\ndef distance_matrix_cython7(float64_t[:,::1] A, float64_t[:,::1] B):\n cdef:\n int m = A.shape[0]\n int n = B.shape[0]\n int i,j\n float64_t[:,::1] D = np.empty((m,n))\n for i in range(m):\n for j in range(n):\n D[i,j] = dist(A[i,:], B[j,:])\n return D\n\n%timeit -n 10 distance_matrix_cython7(A,B)", "What about Numba?", "@numba.jit(nopython=True)\ndef dist(a, b):\n n = a.shape[0]\n ret = 0\n for i in range(n):\n ret += (a[i]-b[i])**2\n return math.sqrt(ret)\n\n@numba.jit(nopython=True)\ndef distance_matrix_numba(A, B):\n m = A.shape[0]\n n = B.shape[0]\n D = np.empty((m,n))\n for i in range(m):\n for j in range(n):\n D[i,j] = dist(A[i,:], B[j,:])\n return D\n\n%timeit -n 10 distance_matrix_numba(A,B)", "4.- Other advanced things you can do with Cython\nWe have seen that with Cython we can implement our algorithms achieving C performance. 
Moreover it is very versatile and we can do some other advanced things with it:\nObject-oriented programming: Classes and methods\nTo support object-oriented programming, Cython supports writing normal Python classes exactly as in Python.", "%%cython -a -c=-fPIC -c=-fwrapv -c=-O3 -c=-fno-strict-aliasing\n#!python\n#cython: cdivision=True, boundscheck=False, nonecheck=False, wraparound=False, initializedcheck=False\n\ncdef class A(object):\n def d(self): return 0\n cdef int c(self): return 0\n cpdef int p(self): return 0\n\n def test_def(self, long num):\n while num > 0:\n self.d()\n num -= 1\n\n def test_cdef(self, long num):\n while num > 0:\n self.c()\n num -= 1\n\n def test_cpdef(self, long num):\n while num > 0:\n self.p()\n num -= 1\n\n%%timeit n = 1000000\na1 = A()\na1.test_def(n)\n\n%%timeit n = 1000000\na1 = A()\na1.test_cdef(n)\n\n%%timeit n = 1000000\na1 = A()\na1.test_cpdef(n)", "C library Wrapping\nWrapper libraries (or library wrappers) consist of a thin layer of code which translates a library's existing interface into a compatible interface. Cython allows us to do this with C libraries... In fact many important projects use Cython to do that:\n* Scikit-Learn uses Cython to wrap many machine learning routines written in C (LibSVM).\n* OpenCV for Python.\n* Scikit-Image.\n* SciPy wraps BLAS, LAPACK and others.\n* Etc." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
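The Cython and Numba distance-matrix cells above both loop explicitly; a useful baseline to check them against is a fully vectorized NumPy version. This is a sketch added for illustration (the function name `distance_matrix_numpy` is mine, not from the notebook); it relies on the identity ||a-b||^2 = ||a||^2 + ||b||^2 - 2*a.b:

```python
import numpy as np

def distance_matrix_numpy(A, B):
    """Pairwise Euclidean distances between rows of A (m, d) and B (n, d)."""
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, computed for all pairs at once
    sq_a = np.sum(A * A, axis=1)[:, np.newaxis]   # shape (m, 1)
    sq_b = np.sum(B * B, axis=1)[np.newaxis, :]   # shape (1, n)
    sq = sq_a + sq_b - 2.0 * (A @ B.T)            # shape (m, n)
    np.maximum(sq, 0.0, out=sq)                   # clamp tiny negatives from round-off
    return np.sqrt(sq)
```

On the same `A` and `B` as in the benchmarks above, the result should agree with the Cython and Numba versions up to floating-point round-off.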
Aniruddha-Tapas/Applied-Machine-Learning
Miscellaneous/Plants Clustering.ipynb
mit
[ "Plants Clustering\n<hr>\n\nDataset : https://archive.ics.uci.edu/ml/datasets/Plants\nThis dataset has been extracted from the USDA plants database. It contains all plants (species and genera) in the database and the states of USA and Canada where they occur.\nThe data is in transactional form: each row contains a Latin name (species or genus) and a list of state abbreviations.", "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn import cross_validation, metrics\nfrom sklearn import preprocessing\nimport matplotlib\nimport matplotlib.pyplot as plt\n\ncols = ['Class']\nfor i in range(64):\n col = 'f{}'.format(i) # avoid shadowing the built-in str\n cols.append(col)\n\n# read .csv from provided dataset\ncsv_filename=\"data_Mar_64.txt\"\n\n# df=pd.read_csv(csv_filename,index_col=0)\ndf=pd.read_csv(csv_filename,names=cols)\n\ndf.head()\n\ndf.shape\n\ndf['Class'].unique()\n\nlen(df['Class'].unique())\n\nfrom sklearn.preprocessing import LabelEncoder\nle = LabelEncoder()\ndf['Class'] = le.fit_transform(df['Class'])\n\ndf['Class'].unique()\n\ndf.head()\n\nfeatures = df.columns[1:]\nfeatures\n\nX = df[features]\ny = df['Class']\n\nX.head()\n\n# split dataset to 60% training and 40% testing\nX_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)\n\nprint (X_train.shape, y_train.shape)", "Unsupervised Learning\nPCA", "y.unique()\n\nlen(features)\n\n# Apply PCA with the same number of dimensions as variables in the dataset\nfrom sklearn.decomposition import PCA\npca = PCA(n_components=64)\npca.fit(X)\n\n# Print the components and the amount of variance in the data contained in each dimension\nprint(pca.components_)\nprint(pca.explained_variance_ratio_)\n\n%matplotlib inline\nimport matplotlib.pyplot as 
plt\nplt.plot(list(pca.explained_variance_ratio_),'-o')\nplt.title('Explained variance ratio as function of PCA components')\nplt.ylabel('Explained variance ratio')\nplt.xlabel('Component')\nplt.show()\n\n# First we reduce the data to two dimensions using PCA to capture variation\npca = PCA(n_components=2)\nreduced_data = pca.fit_transform(X)\nprint(reduced_data[:10]) # print up to 10 elements\n\nfrom sklearn.cluster import KMeans # this import was missing in the original cell\nkmeans = KMeans(n_clusters=100)\nclusters = kmeans.fit(reduced_data)\nprint(clusters)\n\n# Plot the decision boundary by building a mesh grid to populate a graph.\nx_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1\ny_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1\nhx = (x_max-x_min)/1000.\nhy = (y_max-y_min)/1000.\nxx, yy = np.meshgrid(np.arange(x_min, x_max, hx), np.arange(y_min, y_max, hy))\n\n# Obtain labels for each point in mesh. Use last trained model.\nZ = clusters.predict(np.c_[xx.ravel(), yy.ravel()])\n\n# Find the centroids for KMeans or the cluster means for GMM \n\ncentroids = kmeans.cluster_centers_\nprint('*** K MEANS CENTROIDS ***')\nprint(centroids)\n\n# Transform the centroids back to the original feature space\nprint('*** CENTROIDS TRANSFERRED TO ORIGINAL SPACE ***')\nprint(pca.inverse_transform(centroids))\n\n# Put the result into a color plot\nZ = Z.reshape(xx.shape)\nplt.figure(1)\nplt.clf()\nplt.imshow(Z, interpolation='nearest',\n extent=(xx.min(), xx.max(), yy.min(), yy.max()),\n cmap=plt.cm.Paired,\n aspect='auto', origin='lower')\n\nplt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)\nplt.scatter(centroids[:, 0], centroids[:, 1],\n marker='x', s=169, linewidths=3,\n color='w', zorder=10)\nplt.title('Clustering on the plants dataset (PCA-reduced data)\\n'\n 'Centroids are marked with white cross')\nplt.xlim(x_min, x_max)\nplt.ylim(y_min, y_max)\nplt.xticks(())\nplt.yticks(())\nplt.show()", "Applying agglomerative clustering via scikit-learn", "from sklearn.cluster import 
AgglomerativeClustering\n\nac = AgglomerativeClustering(n_clusters=100, affinity='euclidean', linkage='complete')\nlabels = ac.fit_predict(X)\nprint('Cluster labels: %s' % labels)", "", "from sklearn.cross_validation import train_test_split\nX = df[features]\ny = df['Class']\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)", "K Means", "from sklearn import cluster\nclf = cluster.KMeans(init='k-means++', n_clusters=100, random_state=5)\nclf.fit(X_train)\nprint clf.labels_.shape\nprint clf.labels_\n\n# Predict clusters on testing data\ny_pred = clf.predict(X_test)\n\nfrom sklearn import metrics\nprint \"Adjusted rand score:{:.2}\".format(metrics.adjusted_rand_score(y_test, y_pred))\nprint \"Homogeneity score:{:.2} \".format(metrics.homogeneity_score(y_test, y_pred)) \nprint \"Completeness score: {:.2} \".format(metrics.completeness_score(y_test, y_pred))\nprint \"Confusion matrix\"\nprint metrics.confusion_matrix(y_test, y_pred)", "Affinity Propagation", "# Affinity propagation\naff = cluster.AffinityPropagation()\naff.fit(X_train)\nprint aff.cluster_centers_indices_.shape\n\ny_pred = aff.predict(X_test)\n\nfrom sklearn import metrics\nprint \"Adjusted rand score:{:.2}\".format(metrics.adjusted_rand_score(y_test, y_pred))\nprint \"Homogeneity score:{:.2} \".format(metrics.homogeneity_score(y_test, y_pred)) \nprint \"Completeness score: {:.2} \".format(metrics.completeness_score(y_test, y_pred))\nprint \"Confusion matrix\"\nprint metrics.confusion_matrix(y_test, y_pred)", "MeanShift", "ms = cluster.MeanShift()\nms.fit(X_train)\n\ny_pred = ms.predict(X_test)\n\nfrom sklearn import metrics\nprint \"Adjusted rand score:{:.2}\".format(metrics.adjusted_rand_score(y_test, y_pred))\nprint \"Homogeneity score:{:.2} \".format(metrics.homogeneity_score(y_test, y_pred)) \nprint \"Completeness score: {:.2} \".format(metrics.completeness_score(y_test, y_pred))\nprint \"Confusion matrix\"\nprint metrics.confusion_matrix(y_test, 
y_pred)", "Mixture of Gaussian Models", "from sklearn import mixture\n\n# Define a heldout dataset to estimate covariance type\nX_train_heldout, X_test_heldout, y_train_heldout, y_test_heldout = train_test_split(\n X_train, y_train,test_size=0.25, random_state=42)\nfor covariance_type in ['spherical','tied','diag','full']:\n gm=mixture.GMM(n_components=100, covariance_type=covariance_type, random_state=42, n_init=5)\n gm.fit(X_train_heldout)\n y_pred=gm.predict(X_test_heldout)\n print \"Adjusted rand score for covariance={}:{:.2}\".format(covariance_type, \n metrics.adjusted_rand_score(y_test_heldout, y_pred))\n", "", "X = df[features].values\ny= df['Class'].values\npca = PCA(n_components=2)\nX = pca.fit_transform(X)\n\nc = []\nfrom matplotlib.pyplot import cm \nn=100\ncolor=iter(cm.rainbow(np.linspace(0,1,n)))\nfor i in range(n):\n c.append(next(color))\n\nc[99]\n\nn = 100\nf, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))\n\nkm = KMeans(n_clusters= n , random_state=0)\ny_km = km.fit_predict(X)\n\nfor i in range(n):\n ax1.scatter(X[y_km==i,0], X[y_km==i,1], c=c[i], marker='o', s=40, label='cluster{}'.format(i))\nax1.set_title('K-means clustering')\n\nac = AgglomerativeClustering(n_clusters=100, affinity='euclidean', linkage='complete')\ny_ac = ac.fit_predict(X)\nfor i in range(n):\n ax2.scatter(X[y_ac==i,0], X[y_ac==i,1], c=c[i], marker='o', s=40, label='cluster{}'.format(i))\nax2.set_title('Agglomerative clustering')\n\n# Put a legend below current axis\nplt.legend(loc='upper center', bbox_to_anchor=(0, -0.05),\n fancybox=True, shadow=True, ncol=10)\n \nplt.tight_layout()\n#plt.savefig('./figures/kmeans_and_ac.png', dpi=300)\nplt.show()", "Classification", "import os\nfrom sklearn.tree import DecisionTreeClassifier, export_graphviz\nimport pandas as pd\nimport numpy as np\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn import cross_validation, metrics\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.naive_bayes import 
BernoulliNB\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC\nfrom time import time\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.metrics import roc_auc_score, classification_report\nfrom sklearn.grid_search import GridSearchCV\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.metrics import precision_score, recall_score, accuracy_score, classification_report\n\n\nX = df[features]\ny = df['Class']\n\n# split dataset to 60% training and 40% testing\nX_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)\n\nprint (X_train.shape, y_train.shape)", "Decision Tree accuracy and time elapsed calculation", "t0=time()\nprint (\"DecisionTree\")\n\ndt = DecisionTreeClassifier(min_samples_split=20,random_state=99)\n# dt = DecisionTreeClassifier(min_samples_split=20,max_depth=5,random_state=99)\n\nclf_dt=dt.fit(X_train,y_train)\n\nprint (\"Accuracy: \", clf_dt.score(X_test,y_test))\nt1=time()\nprint (\"time elapsed: \", t1-t0)\n\ntt0=time()\nprint (\"cross result========\")\nscores = cross_validation.cross_val_score(dt, X,y, cv=5)\nprint (scores)\nprint (scores.mean())\ntt1=time()\nprint (\"time elapsed: \", tt1-tt0)\n\nfrom sklearn.metrics import classification_report\n\npipeline = Pipeline([\n ('clf', DecisionTreeClassifier(criterion='entropy'))\n])\n\nparameters = {\n 'clf__max_depth': (5, 25 , 50),\n 'clf__min_samples_split': (1, 5, 10),\n 'clf__min_samples_leaf': (1, 2, 3)\n}\n\ngrid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='f1')\ngrid_search.fit(X_train, y_train)\n\nprint 'Best score: %0.3f' % grid_search.best_score_\nprint 'Best parameters set:'\n\nbest_parameters = grid_search.best_estimator_.get_params()\nfor param_name in sorted(parameters.keys()):\n print '\\t%s: %r' % (param_name, best_parameters[param_name])\n\npredictions = grid_search.predict(X_test)\n\nprint classification_report(y_test, predictions)", "Random Forest accuracy and time 
elapsed calculation", "t2=time()\nprint (\"RandomForest\")\nrf = RandomForestClassifier(n_estimators=100,n_jobs=-1)\nclf_rf = rf.fit(X_train,y_train)\nprint (\"Accuracy: \", clf_rf.score(X_test,y_test))\nt3=time()\nprint (\"time elapsed: \", t3-t2)\n\ntt0=time()\nprint (\"cross result========\")\nscores = cross_validation.cross_val_score(rf, X,y, cv=5)\nprint (scores)\nprint (scores.mean())\ntt1=time()\nprint (\"time elapsed: \", tt1-tt0)\n\n\npipeline2 = Pipeline([\n('clf', RandomForestClassifier(criterion='entropy'))\n])\n\nparameters = {\n 'clf__n_estimators': (5, 25, 50, 100),\n 'clf__max_depth': (5, 25 , 50),\n 'clf__min_samples_split': (1, 5, 10),\n 'clf__min_samples_leaf': (1, 2, 3)\n}\n\ngrid_search = GridSearchCV(pipeline2, parameters, n_jobs=-1, verbose=1, scoring='accuracy', cv=3)\n\ngrid_search.fit(X_train, y_train)\n\nprint 'Best score: %0.3f' % grid_search.best_score_\n\nprint 'Best parameters set:'\nbest_parameters = grid_search.best_estimator_.get_params()\n\nfor param_name in sorted(parameters.keys()):\n print '\\t%s: %r' % (param_name, best_parameters[param_name])\n\npredictions = grid_search.predict(X_test)\nprint 'Accuracy:', accuracy_score(y_test, predictions)\nprint classification_report(y_test, predictions)\n ", "Naive Bayes accuracy and time elapsed calculation", "t4=time()\nprint (\"NaiveBayes\")\nnb = BernoulliNB()\nclf_nb=nb.fit(X_train,y_train)\nprint (\"Accuracy: \", clf_nb.score(X_test,y_test))\nt5=time()\nprint (\"time elapsed: \", t5-t4)\n\ntt0=time()\nprint (\"cross result========\")\nscores = cross_validation.cross_val_score(nb, X,y, cv=5)\nprint (scores)\nprint (scores.mean())\ntt1=time()\nprint (\"time elapsed: \", tt1-tt0)", "KNN accuracy and time elapsed calculation", "t6=time()\nprint (\"KNN\")\n# knn = KNeighborsClassifier(n_neighbors=3)\nknn = KNeighborsClassifier()\nclf_knn=knn.fit(X_train, y_train)\nprint (\"Accuracy: \", clf_knn.score(X_test,y_test) )\nt7=time()\nprint (\"time elapsed: \", t7-t6)\n\ntt0=time()\nprint 
(\"cross result========\")\nscores = cross_validation.cross_val_score(knn, X,y, cv=5)\nprint (scores)\nprint (scores.mean())\ntt1=time()\nprint (\"time elapsed: \", tt1-tt0)", "SVM accuracy and time elapsed calculation", "t7=time()\nprint (\"SVM\")\n\nsvc = SVC()\nclf_svc=svc.fit(X_train, y_train)\nprint (\"Accuracy: \", clf_svc.score(X_test,y_test) )\nt8=time()\nprint (\"time elapsed: \", t8-t7)\n\ntt0=time()\nprint (\"cross result========\")\nscores = cross_validation.cross_val_score(svc, X,y, cv=5)\nprint (scores)\nprint (scores.mean())\ntt1=time()\nprint (\"time elapsed: \", tt1-tt0)\n\nfrom sklearn.svm import SVC\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn.pipeline import Pipeline\nfrom sklearn import grid_search\n\nsvc = SVC()\n\nparameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}\n\ngrid = grid_search.GridSearchCV(svc, parameters, n_jobs=-1, verbose=1, scoring='accuracy')\n\n\ngrid.fit(X_train, y_train)\n\nprint 'Best score: %0.3f' % grid.best_score_\n\nprint 'Best parameters set:'\nbest_parameters = grid.best_estimator_.get_params()\n\nfor param_name in sorted(parameters.keys()):\n print '\\t%s: %r' % (param_name, best_parameters[param_name])\n \npredictions = grid.predict(X_test)\nprint classification_report(y_test, predictions)\n\npipeline = Pipeline([\n ('clf', SVC(kernel='rbf', gamma=0.01, C=100))\n])\n\nparameters = {\n 'clf__gamma': (0.01, 0.03, 0.1, 0.3, 1),\n 'clf__C': (0.1, 0.3, 1, 3, 10, 30),\n}\n\ngrid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='accuracy')\n\ngrid_search.fit(X_train, y_train)\n\nprint 'Best score: %0.3f' % grid_search.best_score_\n\nprint 'Best parameters set:'\nbest_parameters = grid_search.best_estimator_.get_params()\n\nfor param_name in sorted(parameters.keys()):\n print '\\t%s: %r' % (param_name, best_parameters[param_name])\n \npredictions = grid_search.predict(X_test)\nprint classification_report(y_test, predictions)", "Ensemble Learning\nBagging -- Building an 
ensemble of classifiers from bootstrap samples", "from sklearn.ensemble import BaggingClassifier\nfrom sklearn.tree import DecisionTreeClassifier\n\ntree = DecisionTreeClassifier(criterion='entropy', \n max_depth=None)\n\nbag = BaggingClassifier(base_estimator=tree,\n n_estimators=500, \n max_samples=1.0, \n max_features=1.0, \n bootstrap=True, \n bootstrap_features=False, \n n_jobs=1, \n random_state=1)\n\nfrom sklearn.metrics import accuracy_score\n\ntree = tree.fit(X_train, y_train)\ny_train_pred = tree.predict(X_train)\ny_test_pred = tree.predict(X_test)\n\ntree_train = accuracy_score(y_train, y_train_pred)\ntree_test = accuracy_score(y_test, y_test_pred)\nprint('Decision tree train/test accuracies %.3f/%.3f'\n % (tree_train, tree_test))\n\nbag = bag.fit(X_train, y_train)\ny_train_pred = bag.predict(X_train)\ny_test_pred = bag.predict(X_test)\n\nbag_train = accuracy_score(y_train, y_train_pred) \nbag_test = accuracy_score(y_test, y_test_pred) \nprint('Bagging train/test accuracies %.3f/%.3f'\n % (bag_train, bag_test))", "Leveraging weak learners via adaptive boosting", "from sklearn.ensemble import AdaBoostClassifier\n\ntree = DecisionTreeClassifier(criterion='entropy', \n max_depth=1)\n\nada = AdaBoostClassifier(base_estimator=tree,\n n_estimators=500, \n learning_rate=0.1,\n random_state=0)\n\ntree = tree.fit(X_train, y_train)\ny_train_pred = tree.predict(X_train)\ny_test_pred = tree.predict(X_test)\n\ntree_train = accuracy_score(y_train, y_train_pred)\ntree_test = accuracy_score(y_test, y_test_pred)\nprint('Decision tree train/test accuracies %.3f/%.3f'\n % (tree_train, tree_test))\n\nada = ada.fit(X_train, y_train)\ny_train_pred = ada.predict(X_train)\ny_test_pred = ada.predict(X_test)\n\nada_train = accuracy_score(y_train, y_train_pred) \nada_test = accuracy_score(y_test, y_test_pred) \nprint('AdaBoost train/test accuracies %.3f/%.3f'\n % (ada_train, ada_test))", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
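The clustering notebook above leans on `metrics.adjusted_rand_score` to compare predicted clusters against the true classes. As a rough illustration of what that score computes, here is a from-scratch adjusted Rand index; this is my own sketch using only the standard library, not the notebook's code:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand index between two labelings (needs at least 2 samples)."""
    n = len(labels_true)
    # Contingency counts: how many samples share each (true, predicted) pair
    contingency = Counter(zip(labels_true, labels_pred))
    sum_comb = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_true).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_pred).values())
    expected = sum_a * sum_b / comb(n, 2)   # chance-level agreement
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:  # degenerate clusterings (e.g. everything in one cluster)
        return 1.0
    return (sum_comb - expected) / (max_index - expected)
```

A perfect clustering scores 1.0 regardless of how the cluster labels are named, which is exactly why the notebook can compare 100 predicted cluster ids against class ids at all.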
tensorflow/tpu
tools/colab/profiling_tpus_in_colab.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/profiling_tpus_in_colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2018 The TensorFlow Hub Authors.\nCopyright 2019-2020 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nProfiling TPUs in Colab&nbsp; <a href=\"https://cloud.google.com/tpu/\"><img valign=\"middle\" src=\"https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png\" width=\"50\"></a>\nAdapted from TPU colab example.\nOverview\nThis example works through training a model to classify images of\nflowers on Google's lightning-fast Cloud TPUs. Our model takes as input a photo of a flower and returns whether it is a daisy, dandelion, rose, sunflower, or tulip. A key objective of this colab is to show you how to set up and run TensorBoard, the program used for visualizing and analyzing program performance on Cloud TPU.\nThis notebook is hosted on GitHub. 
To view it in its original repository, after opening the notebook, select File > View on GitHub.\nInstructions\n<h3><a href=\"https://cloud.google.com/tpu/\"><img valign=\"middle\" src=\"https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png\" width=\"50\"></a>&nbsp;&nbsp;Train on TPU&nbsp;&nbsp; </h3>\n\n\nCreate a Cloud Storage bucket for your TensorBoard logs at http://console.cloud.google.com/storage. Give yourself Storage Legacy Bucket Owner permission on the bucket.\nYou will need to provide the bucket name when launching TensorBoard in the Training section. \n\nNote: User input is required when launching and viewing TensorBoard, so do not use Runtime > Run all to run through the entire colab. \nAuthentication for connecting to GCS bucket for logging.", "import os\nIS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence\nif IS_COLAB_BACKEND:\n from google.colab import auth\n # Authenticates the Colab machine and also the TPU using your\n # credentials so that they can access your private GCS buckets.\n auth.authenticate_user()", "Updating tensorboard_plugin_profile", "!pip install -U tensorboard_plugin_profile==2.3.0", "Enabling and testing the TPU\nFirst, you'll need to enable TPUs for the notebook:\n\nNavigate to Edit→Notebook Settings\nSelect TPU from the Hardware Accelerator drop-down\n\nNext, we'll check that we can connect to the TPU:", "%tensorflow_version 2.x\nimport tensorflow as tf\nprint(\"Tensorflow version \" + tf.__version__)\n\ntry:\n tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection\n print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])\nexcept ValueError:\n raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for 
instructions!')\n\ntf.config.experimental_connect_to_cluster(tpu)\ntf.tpu.experimental.initialize_tpu_system(tpu)\ntpu_strategy = tf.distribute.experimental.TPUStrategy(tpu)\n\nimport re\nimport numpy as np\nfrom matplotlib import pyplot as plt", "Input data\nOur input data is stored on Google Cloud Storage. To more fully use the parallelism TPUs offer us, and to avoid bottlenecking on data transfer, we've stored our input data in TFRecord files, 230 images per file.\nBelow, we make heavy use of tf.data.experimental.AUTOTUNE to optimize different parts of input loading.\nAll of these techniques are a bit overkill for our (small) dataset, but demonstrate best practices for using TPUs.", "AUTO = tf.data.experimental.AUTOTUNE\n\nIMAGE_SIZE = [331, 331]\n\nbatch_size = 16 * tpu_strategy.num_replicas_in_sync\n\ngcs_pattern = 'gs://flowers-public/tfrecords-jpeg-331x331/*.tfrec'\nvalidation_split = 0.19\nfilenames = tf.io.gfile.glob(gcs_pattern)\nsplit = len(filenames) - int(len(filenames) * validation_split)\ntrain_fns = filenames[:split]\nvalidation_fns = filenames[split:]\n \ndef parse_tfrecord(example):\n features = {\n \"image\": tf.io.FixedLenFeature([], tf.string), # tf.string means bytestring\n \"class\": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar\n \"one_hot_class\": tf.io.VarLenFeature(tf.float32),\n }\n example = tf.io.parse_single_example(example, features)\n decoded = tf.image.decode_jpeg(example['image'], channels=3)\n normalized = tf.cast(decoded, tf.float32) / 255.0 # convert each 0-255 value to floats in [0, 1] range\n image_tensor = tf.reshape(normalized, [*IMAGE_SIZE, 3])\n one_hot_class = tf.reshape(tf.sparse.to_dense(example['one_hot_class']), [5])\n return image_tensor, one_hot_class\n\ndef load_dataset(filenames):\n # Read from TFRecords. 
For optimal performance, we interleave reads from multiple files.\n records = tf.data.TFRecordDataset(filenames, num_parallel_reads=AUTO)\n return records.map(parse_tfrecord, num_parallel_calls=AUTO)\n\ndef get_training_dataset():\n dataset = load_dataset(train_fns)\n\n # Create some additional training images by randomly flipping and\n # increasing/decreasing the saturation of images in the training set. \n def data_augment(image, one_hot_class):\n modified = tf.image.random_flip_left_right(image)\n modified = tf.image.random_saturation(modified, 0, 2)\n return modified, one_hot_class\n augmented = dataset.map(data_augment, num_parallel_calls=AUTO)\n\n # Prefetch the next batch while training (autotune prefetch buffer size).\n return augmented.repeat().shuffle(2048).batch(batch_size).prefetch(AUTO) \n\ntraining_dataset = get_training_dataset()\nvalidation_dataset = load_dataset(validation_fns).batch(batch_size).prefetch(AUTO)", "Let's take a peek at the training dataset we've created:", "CLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']\n\ndef display_one_flower(image, title, subplot, color):\n plt.subplot(subplot)\n plt.axis('off')\n plt.imshow(image)\n plt.title(title, fontsize=16, color=color)\n \n# If model is provided, use it to generate predictions.\ndef display_nine_flowers(images, titles, title_colors=None):\n subplot = 331\n plt.figure(figsize=(13,13))\n for i in range(9):\n color = 'black' if title_colors is None else title_colors[i]\n display_one_flower(images[i], titles[i], 331+i, color)\n plt.tight_layout()\n plt.subplots_adjust(wspace=0.1, hspace=0.1)\n plt.show()\n\ndef get_dataset_iterator(dataset, n_examples):\n return dataset.unbatch().batch(n_examples).as_numpy_iterator()\n\ntraining_viz_iterator = get_dataset_iterator(training_dataset, 9)\n\n# Re-run this cell to show a new batch of images\nimages, classes = next(training_viz_iterator)\nclass_idxs = np.argmax(classes, axis=-1) # transform from one-hot array to class 
number\nlabels = [CLASSES[idx] for idx in class_idxs]\ndisplay_nine_flowers(images, labels)", "Model\nTo get maximum accuracy, we leverage a pretrained image recognition model (here, Xception). We drop the ImageNet-specific top layers (include_top=False), and add a global average pooling and a softmax layer to predict our 5 classes.", "def create_model():\n pretrained_model = tf.keras.applications.Xception(input_shape=[*IMAGE_SIZE, 3], include_top=False)\n pretrained_model.trainable = True\n model = tf.keras.Sequential([\n pretrained_model,\n tf.keras.layers.GlobalAveragePooling2D(),\n tf.keras.layers.Dense(5, activation='softmax')\n ])\n model.compile(\n optimizer='adam',\n loss = 'categorical_crossentropy',\n metrics=['accuracy']\n )\n return model\n\nwith tpu_strategy.scope(): # creating the model in the TPUStrategy scope means we will train the model on the TPU\n model = create_model()\nmodel.summary()", "Training\nCalculate the number of images in each dataset. Rather than actually load the data to do so (expensive), we rely on hints in the filename. This is used to calculate the number of batches per epoch.", "def count_data_items(filenames):\n # The number of data items is written in the name of the .tfrec files, i.e. flowers00-230.tfrec = 230 data items\n n = [int(re.compile(r\"-([0-9]*)\\.\").search(filename).group(1)) for filename in filenames]\n return np.sum(n)\n\nn_train = count_data_items(train_fns)\nn_valid = count_data_items(validation_fns)\ntrain_steps = count_data_items(train_fns) // batch_size\nprint(\"TRAINING IMAGES: \", n_train, \", STEPS PER EPOCH: \", train_steps)\nprint(\"VALIDATION IMAGES: \", n_valid)", "Calculate and show a learning rate schedule. 
We start with a fairly low rate, as we're using a pre-trained model and don't want to undo all the fine work put into training it.", "EPOCHS = 12\n\nstart_lr = 0.00001\nmin_lr = 0.00001\nmax_lr = 0.00005 * tpu_strategy.num_replicas_in_sync\nrampup_epochs = 5\nsustain_epochs = 0\nexp_decay = .8\n\ndef lrfn(epoch):\n if epoch < rampup_epochs:\n return (max_lr - start_lr)/rampup_epochs * epoch + start_lr\n elif epoch < rampup_epochs + sustain_epochs:\n return max_lr\n else:\n return (max_lr - min_lr) * exp_decay**(epoch-rampup_epochs-sustain_epochs) + min_lr\n \nlr_callback = tf.keras.callbacks.LearningRateScheduler(lambda epoch: lrfn(epoch), verbose=True)\n\nrang = np.arange(EPOCHS)\ny = [lrfn(x) for x in rang]\nplt.plot(rang, y)\nprint('Learning rate per epoch:')", "Train the model. While the first epoch will be quite a bit slower as we must XLA-compile the execution graph and load the data, later epochs should complete in ~5s.", "# Load the TensorBoard notebook extension.\n%load_ext tensorboard\n\n# Get TPU profiling service address. This address will be needed for capturing\n# profile information with TensorBoard in the following steps.\nservice_addr = tpu.get_master().replace(':8470', ':8466')\nprint(service_addr)\n\n# Launch TensorBoard.\n%tensorboard --logdir=gs://bucket-name # Replace the bucket-name variable with your own gcs bucket", "The TensorBoard UI is displayed in a browser window. In this colab, perform the following steps to prepare to capture profile information.\n1. Click on the dropdown menu box on the top right side and scroll down and click PROFILE. A new window appears that shows: No profile data was found at the top.\n1. Click on the CAPTURE PROFILE button. A new dialog appears. The top input line shows: Profile Service URL or TPU name. Copy and paste the Profile Service URL (the service_addr value shown before launching TensorBoard) into the top input line. While still on the dialog box, start the training with the next step.\n1. 
Click on the next colab cell to start training the model.\n1. Watch the output from the training until several epochs have completed. This allows time for the profile data to start being collected. Return to the dialog box and click on the CAPTURE button. If the capture succeeds, the page will auto refresh and redirect you to the profiling results.", "history = model.fit(training_dataset, validation_data=validation_dataset,\n steps_per_epoch=train_steps, epochs=EPOCHS, callbacks=[lr_callback])\n\nfinal_accuracy = history.history[\"val_accuracy\"][-5:]\nprint(\"FINAL ACCURACY MEAN-5: \", np.mean(final_accuracy))\n\ndef display_training_curves(training, validation, title, subplot):\n ax = plt.subplot(subplot)\n ax.plot(training)\n ax.plot(validation)\n ax.set_title('model '+ title)\n ax.set_ylabel(title)\n ax.set_xlabel('epoch')\n ax.legend(['training', 'validation'])\n\nplt.subplots(figsize=(10,10))\nplt.tight_layout()\ndisplay_training_curves(history.history['accuracy'], history.history['val_accuracy'], 'accuracy', 211)\ndisplay_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)", "Accuracy goes up and loss goes down. Looks good!\nNext steps\nMore TPU/Keras examples include:\n- Shakespeare in 5 minutes with Cloud TPUs and Keras\n- Fashion MNIST with Keras and TPUs\nWe'll be sharing more examples of TPU use in Colab over time, so be sure to check back for additional example links, or follow us on Twitter @GoogleColab." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
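The `lrfn` schedule in the TPU notebook above (linear ramp-up, optional plateau, exponential decay) can be captured in a small standalone factory. This is my own sketch mirroring the notebook's formula; `make_lr_schedule` and its argument names are assumptions, not notebook code:

```python
def make_lr_schedule(start_lr, max_lr, min_lr, rampup_epochs, sustain_epochs, exp_decay):
    """Return an epoch -> learning-rate function: linear ramp, plateau, exponential decay."""
    def lrfn(epoch):
        if epoch < rampup_epochs:
            # Linear warm-up from start_lr to max_lr
            return (max_lr - start_lr) / rampup_epochs * epoch + start_lr
        elif epoch < rampup_epochs + sustain_epochs:
            # Optional plateau at the peak rate
            return max_lr
        else:
            # Exponential decay from max_lr toward min_lr
            return (max_lr - min_lr) * exp_decay ** (epoch - rampup_epochs - sustain_epochs) + min_lr
    return lrfn
```

With the notebook's values (start_lr = min_lr = 1e-5, max_lr = 5e-5 times the replica count, rampup_epochs = 5, sustain_epochs = 0, exp_decay = 0.8) this reproduces the curve the notebook plots before training.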
winpython/winpython_afterdoc
docs/dplyr_pandas.ipynb
mit
[ "Tom Augspurger Dplyr/Pandas comparison (copy of 2016-01-01)\nSee result there\nhttp://nbviewer.ipython.org/urls/gist.githubusercontent.com/TomAugspurger/6e052140eaa5fdb6e8c0/raw/627b77addb4bcfc39ab6be6d85cb461e956fb3a3/dplyr_pandas.ipynb\nto reproduce on your WinPython you'll need to get flights.csv in this directory\nThis notebook compares pandas\nand dplyr.\nThe comparison is just on syntax (verbiage), not performance. Whether you're an R user looking to switch to pandas (or the other way around), I hope this guide will help ease the transition.\nWe'll work through the introductory dplyr vignette to analyze some flight data.\nI'm working on a better layout to show the two packages side by side.\nBut for now I'm just putting the dplyr code in a comment above each python call.\nusing R steps to get flights.csv\nun-comment the next cell if you have installed R and want to get the Flights example from the source\nto install R on your Winpython:\nhow to install R", "#%load_ext rpy2.ipython\n#%R install.packages(\"nycflights13\", repos='http://cran.us.r-project.org')\n#%R library(nycflights13)\n#%R write.csv(flights, \"flights.csv\")", "using an internet download to get flights.csv", "# Downloading and unzipping a file, without the R method:\n# source= http://stackoverflow.com/a/34863053/3140336\nimport io\nfrom zipfile import ZipFile\nimport requests\n\ndef get_zip(file_url):\n url = requests.get(file_url)\n zipfile = ZipFile(io.BytesIO(url.content))\n zip_names = zipfile.namelist()\n if len(zip_names) == 1:\n file_name = zip_names.pop()\n extracted_file = zipfile.open(file_name)\n return extracted_file\n\nurl=r'https://github.com/winpython/winpython_afterdoc/raw/master/examples/nycflights13_datas/flights.zip'\nwith io.open(\"flights.csv\", 'wb') as f:\n f.write(get_zip(url).read())\n\n\n# Some prep work to get the data from R and into pandas\n%matplotlib inline\nimport matplotlib.pyplot as plt\n#%load_ext rpy2.ipython\n\nimport pandas as pd\nimport seaborn as 
sns\n\npd.set_option(\"display.max_rows\", 5)", "Data: nycflights13", "flights = pd.read_csv(\"flights.csv\", index_col=0)\n\n# dim(flights) <--- The R code\nflights.shape # <--- The python code\n\n# head(flights)\nflights.head()", "Single table verbs\ndplyr has a small set of nicely defined verbs. I've listed their closest pandas verbs.\n<table>\n <tr>\n <td><b>dplyr</b></td>\n <td><b>pandas</b></td>\n </tr>\n <tr>\n <td><code>filter()</code> (and <code>slice()</code>)</td>\n <td><code>query()</code> (and <code>loc[]</code>, <code>iloc[]</code>)</td>\n </tr>\n <tr>\n <td><code>arrange()</code></td>\n <td><code>sort_values</code> and <code>sort_index()</code></td>\n </tr>\n <tr>\n <td><code>select() </code>(and <code>rename()</code>)</td>\n <td><code>__getitem__ </code> (and <code>rename()</code>)</td>\n </tr>\n <tr>\n <td><code>distinct()</code></td>\n <td><code>drop_duplicates()</code></td>\n </tr>\n <tr>\n <td><code>mutate()</code> (and <code>transmute()</code>)</td>\n <td>assign</td>\n </tr>\n <tr>\n <td>summarise()</td>\n <td>None</td>\n </tr>\n <tr>\n <td>sample_n() and sample_frac()</td>\n <td><code>sample</code></td>\n </tr>\n <tr>\n <td><code>%>%</code></td>\n <td><code>pipe</code></td>\n </tr>\n\n</table>\n\nSome of the \"missing\" verbs in pandas are because there are other, different ways of achieving the same goal. For example summarise is spread across mean, std, etc. It's closest analog is actually the .agg method on a GroupBy object, as it reduces a DataFrame to a single row (per group). 
This isn't quite what .describe does.\nI've also included the pipe operator from R (%&gt;%), the pipe method from pandas, even though it isn't quite a verb.\nFilter rows with filter(), query()", "# filter(flights, month == 1, day == 1)\nflights.query(\"month == 1 & day == 1\")", "We see the first big language difference between R and python.\nMany python programmers will shun the R code as too magical.\nHow is the programmer supposed to know that month and day are supposed to represent columns in the DataFrame?\nOn the other hand, to emulate this very convenient feature of R, python has to write the expression as a string, and evaluate the string in the context of the DataFrame.\nThe more verbose version:", "# flights[flights$month == 1 & flights$day == 1, ]\nflights[(flights.month == 1) & (flights.day == 1)]\n\n# slice(flights, 1:10)\nflights.iloc[:10]", "Arrange rows with arrange(), sort()", "# arrange(flights, year, month, day) \nflights.sort_values(['year', 'month', 'day'])\n\n# arrange(flights, desc(arr_delay))\nflights.sort_values('arr_delay', ascending=False)", "It's worth mentioning the other common sorting method for pandas DataFrames, sort_index. Pandas puts much more emphasis on indices (or row labels) than R.\nThis is a design decision that has positives and negatives, which we won't go into here. Suffice it to say that when you need to sort a DataFrame by the index, use DataFrame.sort_index.\nSelect columns with select(), []", "# select(flights, year, month, day) \nflights[['year', 'month', 'day']]\n\n# select(flights, year:day) \nflights.loc[:, 'year':'day']\n\n# select(flights, -(year:day)) \n\n# No direct equivalent here. I would typically use\n# flights.drop(cols_to_drop, axis=1)\n# or flights[flights.columns.difference(pd.Index(cols_to_drop))]\n# point to dplyr!\n\n# select(flights, tail_num = tailnum)\nflights.rename(columns={'tailnum': 'tail_num'})['tail_num']", "But like Hadley mentions, not that useful since it only returns the one column. 
dplyr and pandas compare well here.", "# rename(flights, tail_num = tailnum)\nflights.rename(columns={'tailnum': 'tail_num'})", "Pandas is more verbose, but the argument to columns can be any mapping. So it's often used with a function to perform a common task, say df.rename(columns=lambda x: x.replace('-', '_')) to replace any dashes with underscores. Also, rename (the pandas version) can be applied to the Index.\nOne more note on the differences here.\nPandas could easily include a .select method.\nxray, a library that builds on top of NumPy and pandas to offer labeled N-dimensional arrays (along with many other things) does just that.\nPandas chooses the .loc and .iloc accessors because any valid selection is also a valid assignment. This makes it easier to modify the data.\npython\nflights.loc[:, 'year':'day'] = data\nwhere data is an object that is, or can be broadcast to, the correct shape.\nExtract distinct (unique) rows", "# distinct(select(flights, tailnum))\nflights.tailnum.unique()", "FYI this returns a numpy array instead of a Series.", "# distinct(select(flights, origin, dest))\nflights[['origin', 'dest']].drop_duplicates()", "OK, so dplyr wins there from a consistency point of view. unique is only defined on Series, not DataFrames.\nAdd new columns with mutate()\nWe at pandas shamelessly stole this for v0.16.0.", "# mutate(flights,\n# gain = arr_delay - dep_delay,\n# speed = distance / air_time * 60)\n\nflights.assign(gain=flights.arr_delay - flights.dep_delay,\n speed=flights.distance / flights.air_time * 60)\n\n# mutate(flights,\n# gain = arr_delay - dep_delay,\n# gain_per_hour = gain / (air_time / 60)\n# )\n\n(flights.assign(gain=flights.arr_delay - flights.dep_delay)\n .assign(gain_per_hour = lambda df: df.gain / (df.air_time / 60)))\n", "The first example is pretty much identical (aside from the names, mutate vs. assign).\nThe second example just comes down to language differences. 
In R, it's possible to implement a function like mutate where you can refer to gain in the line calculating gain_per_hour, even though gain hasn't actually been calculated yet.\nIn Python, you can have arbitrary keyword arguments to functions (which we needed for .assign), but the order of the arguments is arbitrary since dicts are unsorted and **kwargs is a dict. So you can't have something like df.assign(x=df.a / df.b, y=x **2), because you don't know whether x or y will come first (you'd also get an error saying x is undefined).\nTo work around that with pandas, you'll need to split up the assigns, and pass in a callable to the second assign. The callable receives the intermediate DataFrame and looks up the column named gain. Since the line above returns a DataFrame with the gain column added, the pipeline goes through just fine.", "# transmute(flights,\n# gain = arr_delay - dep_delay,\n# gain_per_hour = gain / (air_time / 60)\n# )\n(flights.assign(gain=flights.arr_delay - flights.dep_delay)\n .assign(gain_per_hour = lambda df: df.gain / (df.air_time / 60))\n [['gain', 'gain_per_hour']])\n", "Summarise values with summarise()", "# summarise(flights,\n# delay = mean(dep_delay, na.rm = TRUE))\nflights.dep_delay.mean()", "This is only roughly equivalent.\nsummarise takes a callable (e.g. mean, sum) and evaluates that on the DataFrame. In pandas these are spread across pd.DataFrame.mean, pd.DataFrame.sum. 
This will come up again when we look at groupby.\nRandomly sample rows with sample_n() and sample_frac()", "# sample_n(flights, 10)\nflights.sample(n=10)\n\n# sample_frac(flights, 0.01)\nflights.sample(frac=.01)", "Grouped operations", "# planes <- group_by(flights, tailnum)\n# delay <- summarise(planes,\n# count = n(),\n# dist = mean(distance, na.rm = TRUE),\n# delay = mean(arr_delay, na.rm = TRUE))\n# delay <- filter(delay, count > 20, dist < 2000)\n\nplanes = flights.groupby(\"tailnum\")\ndelay = (planes.agg({\"year\": \"count\",\n \"distance\": \"mean\",\n \"arr_delay\": \"mean\"})\n .rename(columns={\"distance\": \"dist\",\n \"arr_delay\": \"delay\",\n \"year\": \"count\"})\n .query(\"count > 20 & dist < 2000\"))\ndelay", "For me, dplyr's n() looked a bit strange at first, but it's already growing on me.\nI think pandas is more difficult for this particular example.\nThere isn't as natural a way to mix column-agnostic aggregations (like count) with column-specific aggregations like the other two. You end up writing code like .agg({'year': 'count'}) which reads, \"I want the count of year\", even though you don't care about year specifically. You could just as easily have said .agg({'distance': 'count'}).\nAdditionally assigning names can't be done as cleanly in pandas; you have to just follow it up with a rename like before.\nWe may as well reproduce the graph. It looks like ggplot2's geom_smooth is some kind of lowess smoother. 
We can either use seaborn:", "fig, ax = plt.subplots(figsize=(12, 6))\n\nsns.regplot(\"dist\", \"delay\", data=delay, lowess=True, ax=ax,\n scatter_kws={'color': 'k', 'alpha': .5, 's': delay['count'] / 10}, ci=90,\n line_kws={'linewidth': 3});", "Or using statsmodels directly for more control over the lowess, with an extremely lazy\n\"confidence interval\".", "import statsmodels.api as sm\n\nsmooth = sm.nonparametric.lowess(delay.delay, delay.dist, frac=1/8)\nax = delay.plot(kind='scatter', x='dist', y = 'delay', figsize=(12, 6),\n color='k', alpha=.5, s=delay['count'] / 10)\nax.plot(smooth[:, 0], smooth[:, 1], linewidth=3);\nstd = smooth[:, 1].std()\nax.fill_between(smooth[:, 0], smooth[:, 1] - std, smooth[:, 1] + std, alpha=.25);\n\n# destinations <- group_by(flights, dest)\n# summarise(destinations,\n# planes = n_distinct(tailnum),\n# flights = n()\n# )\n\ndestinations = flights.groupby('dest')\ndestinations.agg({\n 'tailnum': lambda x: len(x.unique()),\n 'year': 'count'\n }).rename(columns={'tailnum': 'planes',\n 'year': 'flights'})", "There's a little-known feature to groupby.agg: it accepts a dict of dicts mapping\ncolumns to {name: aggfunc} pairs. Here's the result:", "destinations = flights.groupby('dest')\nr = destinations.agg({'tailnum': {'planes': lambda x: len(x.unique())},\n 'year': {'flights': 'count'}})\nr", "The result is a MultiIndex in the columns which can be a bit awkward to work with (you can drop a level with r.columns.droplevel()). 
Also the syntax going into the .agg may not be the clearest.\nSimilar to how dplyr provides optimized C++ versions of most of the summarise functions, pandas uses cython optimized versions for most of the agg methods.", "# daily <- group_by(flights, year, month, day)\n# (per_day <- summarise(daily, flights = n()))\n\ndaily = flights.groupby(['year', 'month', 'day'])\nper_day = daily['distance'].count()\nper_day\n\n# (per_month <- summarise(per_day, flights = sum(flights)))\nper_month = per_day.groupby(level=['year', 'month']).sum()\nper_month\n\n# (per_year <- summarise(per_month, flights = sum(flights)))\nper_year = per_month.sum()\nper_year", "I'm not sure how dplyr is handling the other columns, like year, in the last example. With pandas, it's clear that we're grouping by them since they're included in the groupby. For the last example, we didn't group by anything, so they aren't included in the result.\nChaining\nAny follower of Hadley's twitter account will know how much R users love the %&gt;% (pipe) operator. And for good reason!", "# flights %>%\n# group_by(year, month, day) %>%\n# select(arr_delay, dep_delay) %>%\n# summarise(\n# arr = mean(arr_delay, na.rm = TRUE),\n# dep = mean(dep_delay, na.rm = TRUE)\n# ) %>%\n# filter(arr > 30 | dep > 30)\n(\nflights.groupby(['year', 'month', 'day'])\n [['arr_delay', 'dep_delay']]\n .mean()\n .query('arr_delay > 30 | dep_delay > 30')\n)", "A bit of soapboxing here if you'll indulge me.\nThe example above is a bit contrived since it only uses methods on DataFrame. But what if you have some function to work into your pipeline that pandas hasn't (or won't) implement? 
In that case you're required to break up your pipeline by assigning your intermediate (probably uninteresting) DataFrame to a temporary variable you don't actually care about.\nR doesn't have this problem since the %&gt;% operator works with any function that takes (and maybe returns) DataFrames.\nThe python language doesn't have any notion of right to left function application (other than special cases like __radd__ and __rmul__).\nIt only allows the usual left to right function(arguments), where you can think of the () as the \"call this function\" operator.\nPandas wanted something like %&gt;% and we did it in a fairly pythonic way. The pd.DataFrame.pipe method takes a function and optionally some arguments, and calls that function with self (the DataFrame) as the first argument.\nSo\nR\nflights %&gt;% my_function(my_argument=10)\nbecomes\npython\nflights.pipe(my_function, my_argument=10)\nWe initially had grander visions for .pipe, but the wider python community didn't seem that interested.\nOther Data Sources\nPandas has tons of IO tools to help you get data in and out, including SQL databases via SQLAlchemy.\nSummary\nI think pandas held up pretty well, considering this was a vignette written for dplyr. I found the degree of similarity more interesting than the differences. The most difficult task was renaming of columns within an operation; they had to be followed up with a call to rename after the operation, which isn't that burdensome honestly.\nMore and more it looks like we're moving towards a future where being a language or package partisan just doesn't make sense. 
Not when you can load up a Jupyter (formerly IPython) notebook to call up a library written in R, and hand those results off to python or Julia or whatever for followup, before going back to R to make a cool shiny web app.\nThere will always be a place for your \"utility belt\" package like dplyr or pandas, but it wouldn't hurt to be familiar with both.\nIf you want to contribute to pandas, we're always looking for help at https://github.com/pydata/pandas/.\nYou can get ahold of me directly on twitter." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
leonhardbrenner/buckysoap
RingsAndFields.ipynb
mit
[ "Ring, Field and Series is more of a design pattern than it is a package. I use this pattern for all of my time series work. These examples are trivial and there is very little code in the three modules. So take a look at how they work:\nhttps://github.com/leonhardbrenner/buckysoap/blob/master/src/buckysoap\nring.py - this is the fixer or root\nfield.py - lets us factor our code into smaller components\nseries.py - works with ring to give us indexing of time\n\nIn the example below we make up a timeline(dates). You should already understand properties(x, y, z) but what is interesting here is:\nb = property(B) #look at Field.__getattr__\n\nThis lets us access self.date which is actually a property of an instance of A. Actually, all properties of A are available to B including horizons. A.series(horizons=(-1, 21)) returns a Series object which is bound to and will create instances of A for each of the dates in the timeseries. We can index the series. We can also index each ring to access a ring relative to the current ring.\nIf this sounds complicated then just look at the examples:", "import buckysoap as bs\n\nclass A(bs.Ring):\n\n dates = [str(x) for x in bs.arange(10) + 20100101]\n\n @property\n def x(self):\n return 'x[%s]' % self.date\n \n class B(bs.Field):\n\n @property\n def y(self):\n return 'y[%s]' % self.date\n \n @property\n def z(self):\n return '(%s + %s)' % (self.x, self.y)\n \n b = property(B)\n \nseries = A.series(horizons=(-1, 21))\nprint series['20100107'].b.z\nfor o in series:\n print o.date, o.x, o.b.y, o.b.z", "In this example we are going to replace property y in class B. This is why I use the convention uppercase for the class and lowercase for the property. 
Here the difference in the code is trivial y becomes y`.", "class A(A):\n \n class B(A.B):\n\n @property\n def y(self):\n return 'y[%s]`' % self.date\n \n b = property(B)\n\nseries = A.series(horizons=(-1, 43))\nprint series['20100108'].b.z\nfor o in series:\n print o.date, o.x, o.b.y, o.b.z", "Now we are going to use the horizons. We create a method change which takes a callable(func). As we iterate through the series we call change passing the lambda which change will use to curry the Ring(s) found at the horizons of the current Ring. Typically, I use this with Atom and Element allowing me to calculate the change in the cross section.", "class A(A):\n \n dates = [str(x) for x in bs.arange(100) + 20100101]\n\n class B(A.B):\n\n def change(self, func):\n ring = self.ring\n horizons = ring.horizons\n values = [func(ring[x]) for x in horizons]\n return ((values[1] / values[0]) - 1) * 100\n \n b = property(B)\n\nseries = A.series(horizons=(-1, 2))\nprint series['20100108'].b.z\nfor o in series:\n if o.date < '20100153':\n print o.date, o.b.change(lambda x: float(x.date))", "I will leave the rest to your imagination but I like that Python makes this so easy. Thank you Guido!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
evanmiltenburg/python-for-text-analysis
Chapters/Chapter 13 - Working with Python files.ipynb
apache-2.0
[ "Chapter 13 - Working with Python files\nIn the previous blocks, we've mainly used notebooks to develop and run our Python code. In this chapter, we'll introduce how to create Python modules (.py files) and how to run them. The most common way to work with Python is actually to use .py files, which is why it is important that you know how to work with them. You can see python files as one cell in a notebook without any markdown.\nBefore we write actual code in a .py file, we will explain the basics you need to know for doing this:\n\nChoosing an editor\nstarting the terminal (from which you will run your .py files)\n\nAt the end of this chapter, you will be able to\n* create python modules, i.e., .py files\n* run python modules from the command line\nIf you have questions about this chapter, please contact us (cltl.python.course@gmail.com).\n1. Editor\nWe first need to choose which editor we will use to develop our Python code. \nThere are two options.\n\n\nYou create the python modules in your browser. After opening Jupyter notebook, you can click File -> New and then Text file to start developing Python modules.\n\n\nYou install an editor.\nPlease take a look here to get an impression of which ones are out there.\nWe can highly recommend Atom (for macOS, Windows, Linux). Other options are BBEdit (for macOS) and Notepad++ (for Windows). A simple way to create a new .py file usually is to open a new file and save it as name_of_your_program.py (make sure to use indicative names). \n\n\nPlease choose between options 1 and 2.\n2. Starting the terminal\nTo run a .py file we wrote in an editor, we need to start the terminal. This works differently for windows and Mac:\n\nOn Windows, please look at Anaconda Prompt\non OS X/macOS (Mac computer), please type terminal in Spotlight and start the terminal\n\nIt's a useful skill to know how to navigate through your computer (i.e., go from one directory to another, all the files and subdirectories in a directory, etc.) 
using the terminal. \nFor Windows users, this is a good tutorial.\nFor OS X/macOS/Linux/Ubuntu users, this is a good tutorial.\n3. Running your first program\nHere, we'll show you how to run your first program (hello_world.py).\nIn the same folder as this notebook, you will find a file called hello_world.py.\nRunning it works differently on Windows and Mac. Below, instructions for both can be found:\nA.) Running the program on OS X/MacOS\nPlease use the terminal to navigate to the folder in which this notebook is placed by copying the output of the following cell in your terminal", "import os\ncwd = os.getcwd()\ncwd_escaped_spaces = cwd.replace(' ', '\\ ')\nprint('cd', cwd_escaped_spaces)", "cd means 'change directory'. Here, you are using it to go to the directory we are currently working in. We use the os module to print the path to this directory (os.getcwd). \nPlease run the following command in the terminal:\npython hello_world.py\nYou've successfully run your first Python program!\nB.) Running the program on Windows\nPlease use the terminal to navigate to the folder in which this notebook is placed by copying the output of the following cell in your terminal", "cwd = os.getcwd()\ncwd_escaped_spaces = cwd.replace(' ', '^ ')\nprint('cd', cwd_escaped_spaces)", "Please run the output of the following command in the terminal:", "import sys\nprint(sys.executable + ' hello_world.py')", "You've successfully run your first Python program!\n4. Import your own functions\nIn Chapter 12, you've been introduced to importing modules and functions/methods.\nYou can see any python program that you create (so any .py file) as a module, which means that you can import it into another python program. Let's see how this works.\nPlease note that the following examples only work if all your python files are in the same directory. There are ways of importing python modules from other directories, but we will not discuss them here. 
\n4.1 Importing your entire module\nWhen importing your own functions from your own modules, several things are important. We have created two example scripts to illustrate them called the_program.py and utils.py. We recommend to open them to check the following things:\n\nThe extension .py is not used when importing modules. import utils will import the file utils.py [line 1 the_program.py]\nWe can use any function from the file. We can call the count_words function by typing utils.count_words [the_program.py line 6]\nWe can use any global variable declared in the imported module. E.g. utils.x and utils.python declared in utils.py can be used in the_program.py \n\n4.2 Importing functions and variables individually\nWe can import specific functions using the syntax from MODULE import FUNCTION/VARIABLE\nThis can be seen in the file the_program_v2.py (lines 1-3). (Open the files the_program_v2.py and utils.py in an editor to check this). \n4.3 Importing functions and variables to python notebooks\nPlease note that you can also import functions and variables from a python program while using notebooks. In this case, simply treat the notebook as the python files the_program.py and the_program_v2.py.", "from utils import count_words\n\nwords = ['how', 'often', 'does', 'each', 'string', 'occur', 'in', 'this', 'list', '?']\n\nword2freq = count_words(words)\nprint('word2freq', word2freq)", "Exercises\nExercise 1: \nPlease create and run your own program using an editor and the terminal. Please copy your beersong into your first program. Tip: simply open a new file in the editor and save it as `beersong.py'. \nExercise 2: \nPlease create two files:\n* my_second_program.py\n* my_utils.py\nPlease create a helper function and store it in my_utils.py, import it into my_second_program.py and call it from there." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
prabhath6/Linear-Regression
Linear Regression.ipynb
apache-2.0
[ "Supervised Learning: Linear Regression\nWe'll be going over how to use the scikit-learn regression model, as well as how to train the regressor using the fit() method, and how to predict new labels using the predict() method. We'll be analyzing a data set consisting of house prices in Boston. We'll start off with a single variable linear regression using numpy and then move on to using scikit learn. We'll do an overview of the mathematics behind the method we're using, but mostly we'll dive deeper into practical \"hands-on\" coding lessons.\nIn this section we will be working through linear regression with the following steps:\nStep 1: Getting and setting up the data.\nStep 2: Visualizing current data.\nStep 3: The mathematics behind the Least Squares Method.\nStep 4: Using Numpy for a Univariate Linear Regression.\nStep 5: Getting the error.\nStep 6: Using scikit learn to implement a multivariate regression.\nStep 7: Using Training and Validation. \nStep 8: Predicting Prices\nStep 9 : Residual Plots\nStep 1: Getting and setting up the data.\nWe'll start by looking at an example of a dataset from scikit-learn. First we'll import our usual data analysis imports, then sklearn's built-in boston dataset.", "# Standard imports\n\nimport numpy as np\nimport pandas as pd\nfrom pandas import DataFrame, Series\n\n# Plotting\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style('whitegrid')\n%matplotlib inline\n\n# scikit learn\nimport sklearn\nfrom sklearn.datasets import load_boston\n\n# Load the housing data sets\nboston = load_boston()\n\nprint boston.DESCR", "Step 2: Visualizing current data\nYou should always try to do a quick visualization of the data you have. 
Let's go ahead and make a histogram of the prices.", "# histogram of prices\n\nplt.hist(boston.target, bins=50)\n\nplt.xlabel('Prices in 1000$')\nplt.ylabel('Number of houses')\nplt.title('Prices Vs Houses')\n\nplt.savefig('house_vs_price.png')\n\nplt.scatter(boston.data[:,5], boston.target)\n\n#label\nplt.ylabel('Price in $1000s')\nplt.xlabel('Number of rooms')", "Now we can make out a slight trend that price increases along with the number of rooms in that house, which intuitively makes sense! Now let's use scikit learn to see if we can fit the data linearly.\nLet's try to do the following:\n1.) Use pandas to transform the boston dataset into a DataFrame: \n2.) Then use seaborn to perform an lmplot on that DataFrame to reproduce the scatter plot with a linear fit line.", "# converting into dataFrame\n\nboston_df = DataFrame(boston.data)\nboston_df.columns= boston.feature_names\n\nboston_df.head()\n\n# Creating a price column in dataFrame\n\nboston_df['PRICE'] = boston.target\n\nboston_df.head()", "Now, you might be reminded of the seaborn lmplot function we used during the visualization lectures. You could use it here to do a linear fit automatically!", "# linear regression plot\n\nsns.lmplot('RM', 'PRICE', data=boston_df)", "Step 3: The mathematics behind the Least Squares Method.\nIn this section we'll use the least squares method as the way to estimate the coefficients. Here's a quick breakdown of how this method works mathematically:\nTake a quick look at the plot we created above using seaborn. Now consider each point, and know that they each have a coordinate in the form (X,Y). Now draw an imaginary line between each point and our current \"best-fit\" line. We'll call the distance between each point and our current best-fit line, D. 
To get a quick image of what we're currently trying to visualize, take a look at the picture below:", "\n# Quick display of image from wikipedia\nfrom IPython.display import Image\nurl = 'http://upload.wikimedia.org/wikipedia/commons/thumb/b/b0/Linear_least_squares_example2.svg/220px-Linear_least_squares_example2.svg.png'\nImage(url)", "Step 4: Using Numpy for a Univariate Linear Regression\nNumpy has a built in Least Square Method in its linear algebra library. We'll use this first for our Univariate regression and then move on to scikit learn for our multivariate regression.\nWe will start by setting up the X and Y arrays for numpy to take in. An important note for the X array: Numpy expects a two-dimensional array, the first dimension is the different example values, and the second dimension is the attribute number. In this case we have our value as the mean number of rooms per house, and this is a single attribute so the second dimension of the array is just 1. So we'll need to create a (506,1) shape array. There are a few ways to do this, but an easy way to do this is by using numpy's built-in vertical stack tool, vstack.", "# Numpy linear algebra needs to have data in the form data and parameters\n\nx = boston_df.RM\n#print x.shape\n\nx = np.vstack(boston_df.RM)\n#print x.shape\n\ny = boston_df.PRICE\n", "Now that we have our X and Y, let's go ahead and use numpy to create the single variable linear regression.\nWe know that a line has the equation:\ny=mx+b\nwhich we can rewrite using matrices:\ny=Ap\nwhere:\nA=[x 1]\nand\np=[m b]\nThis is the same as the first equation if you carry out the linear algebra. So we'll start by creating the A matrix using numpy. 
We'll do this by creating a matrix in the form [X 1], so we'll call every value in our original X using a list comprehension and then set up an array in the form [X 1]", "# using list comprehension\nx = np.array([[value, 1] for value in x])\n\n# Now get our m and b values for our best fit line\n\nm, b = np.linalg.lstsq(x, y)[0]\n\n# Plotting the same lm plot that we plotted earlier using seaborn\n\nplt.plot(boston_df.RM,boston_df.PRICE ,'o')\n\n# plotting line\nX = boston_df.RM\n\nplt.plot(X,m*X + b,'red', label = 'Best Fit')\nplt.savefig('bestfit.png')", "Step 5: Getting the error\nWe've just completed a single variable regression using the least squares method with Python! Let's see if we can find the error in our fitted line. Checking out the documentation here, we see that the resulting array has the total squared error. For each element, it checks the difference between the line and the true value (our original D value), squares it, and returns the sum of all these. This was the summed D^2 value we discussed earlier.\nIt's probably easier to understand the root mean squared error, which is similar to the standard deviation. In this case, to find the root mean square error we divide by the number of elements and then take the square root. There is also an issue of bias and an unbiased regression, but we'll delve into those topics later.\nFor now let's see how we can get the root mean squared error of the line we just fitted.", "\"\"\"\nDependent variable always on y axis and independent variable on x axis while plotting.\n\"\"\"\n\nresult = np.linalg.lstsq(x,y)[1]\n\n# Total error\ntotal_error = np.sqrt(result/len(x))\n\nprint \"The root mean square error is: {}\" .format(float(total_error))
Note: Review the Normal Distribution Appendix lecture if this doesn't make sense to you or check out this link.\nThus we can reasonably expect a house price to be within $13,200 of our line fit.\nStep 6: Using scikit learn to implement a multivariate regression\nNow, we'll keep moving along with using scikit learn to do a multivariable regression. This will be a similar approach to the above example, but scikit learn will be able to take into account more than just a single data variable affecting the target!\nWe'll start by importing the linear regression library from the sklearn module.\nThe sklearn.linear_model.LinearRegression class is an estimator. Estimators predict a value based on the observed data. In scikit-learn, all estimators implement the fit() and predict() methods. The former method is used to learn the parameters of a model, and the latter method is used to predict the value of a response variable for an explanatory variable using the learned parameters. It is easy to experiment with different models using scikit-learn because all estimators implement the fit and predict methods.", "# sklearn imports\n\nfrom sklearn.linear_model import LinearRegression\n\n# Create a LinearRegression Object\nlreg = LinearRegression()", "The functions we will be using are:\nlreg.fit() which fits a linear model\nlreg.predict() which is used to predict Y using the linear model with estimated coefficients\nlreg.score() which returns the coefficient of determination (R^2). 
A measure of how well observed outcomes are replicated by the model, learn more about it here", "# In order to drop a column we use '1'\n\nx_multi = boston_df.drop('PRICE', 1)\n\ny_target = boston_df.PRICE\n\n# Implement Linear Regression\nlreg.fit(x_multi, y_target)\n\nprint \"The estimated intercept {}\" .format(lreg.intercept_)\n\nprint \"The number of coefficients used {}.\" .format(len(lreg.coef_))\n\ncoeff_df = DataFrame(boston_df.columns)\ncoeff_df.columns = ['Features']\n\ncoeff_df['Coefficient'] = Series(lreg.coef_)\n\n\"\"\" \n These 13 coefficients are used to build the line that is used as best fit line by \n scikit learn\n \n\"\"\"\ncoeff_df", "Step 7: Using Training and Validation\nIn a dataset a training set is implemented to build up a model, while a validation set is used to validate the model built. Data points in the training set are excluded from the validation set. The correct way to pick out samples from your dataset to be part of either the training or validation (also called test) set is randomly.\nFortunately, scikit learn has a built in function specifically for this called train_test_split.\nThe parameters passed are your X and Y, then optionally a test_size parameter, representing the proportion of the dataset to include in the test split, as well as a train_size parameter. You can learn more about these parameters here", "# Getting the training and testing data sets\n\nX_train, X_test, Y_train, Y_test = sklearn.cross_validation.train_test_split(x, boston_df.PRICE)\n\n# The outputs\n\nprint X_train.shape, X_test.shape, Y_train.shape, Y_test.shape", "Step 8: Predicting Prices\nNow that we have our training and testing sets, let's go ahead and try to use them to predict house prices. 
We'll use our training set for the prediction and then use our testing set for validation.", "legr = LinearRegression()\n\nlegr.fit(X_train, Y_train)\n\npred_train = legr.predict(X_train)\npred_test = legr.predict(X_test)\n\nprint \"Fit a model X_train, and calculate MSE with Y_train: {}\" .format(np.mean((Y_train - pred_train)**2))\nprint \"Fit a model X_train, and calculate MSE with X_test and Y_test: {}\" .format(np.mean((Y_test - pred_test)**2))", "It looks like our mean square error between our training and testing was pretty close. \nStep 9 : Residual Plots\nIn regression analysis, the difference between the observed value of the dependent variable (y) and the predicted value (ŷ) is called the residual (e). Each data point has one residual, so that:\nResidual = Observed value − Predicted value\nYou can think of these residuals in the same way as the D value we discussed earlier, in this case however, there were multiple data points considered.\nA residual plot is a graph that shows the residuals on the vertical axis and the independent variable on the horizontal axis. If the points in a residual plot are randomly dispersed around the horizontal axis, a linear regression model is appropriate for the data; otherwise, a non-linear model is more appropriate.\nResidual plots are a good way to visualize the errors in your data. If you have done a good job then your data should be randomly scattered around the zero line. If there is some structure or pattern, that means your model is not capturing something. There could be an interaction between 2 variables that you're not considering, or maybe you are measuring time dependent data. If this is the case go back to your model and check your data set closely.\nSo now let's go ahead and create the residual plot. 
For more info on residual plots, check out this great link.", "# Scatter plot the training data\ntrain = plt.scatter(pred_train, (pred_train - Y_train), c='b', alpha=0.8)\n\n# Scatter plot the testing data\ntest = plt.scatter(pred_test, (pred_test - Y_test), c='r', alpha=0.6)\n\n# Horizontal line\nplt.hlines(y=0, xmin=-10, xmax=50)\n\n# Labels\nplt.legend((train,test),('Training','Test'),loc='lower left')\nplt.title('Residual Plots')\n\nplt.savefig('residualplot.png')", "From the plot we can observe that the data is scattered around the horizontal axis and does not follow any pattern, so linear regression can be applied to the data set." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
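The split-fit-score workflow from the Boston notebook above can be sketched end to end. The sketch below uses synthetic stand-in data (hypothetical coefficients, not the Boston dataset) and plain NumPy least squares instead of the notebook's now-deprecated `sklearn.cross_validation` API; the intent is only to illustrate the train/test split and the train-vs-test MSE comparison:

```python
import numpy as np

# Synthetic stand-in for the Boston features/target (hypothetical coefficients,
# no intercept term, so the fit below can also omit one)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_coef = np.array([1.5, -2.0, 0.5])
y = X @ true_coef + rng.normal(scale=0.1, size=100)

# Random 75/25 split, mirroring train_test_split's default test_size=0.25
idx = rng.permutation(len(X))
train_idx, test_idx = idx[:75], idx[75:]

# Ordinary least squares fit on the training portion only
coef, *_ = np.linalg.lstsq(X[train_idx], y[train_idx], rcond=None)

# Residuals and MSE on both splits; similar values suggest the model generalizes
pred_train = X[train_idx] @ coef
pred_test = X[test_idx] @ coef
mse_train = np.mean((y[train_idx] - pred_train) ** 2)
mse_test = np.mean((y[test_idx] - pred_test) ** 2)
```

As in the notebook, the residuals `y - pred` on each split are what a residual plot would scatter against the predictions.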
bjshaw/phys202-2015-work
assignments/assignment06/ProjectEuler17.ipynb
mit
[ "Project Euler: Problem 17\nhttps://projecteuler.net/problem=17\nIf the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.\nIf all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?\nNOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of \"and\" when writing out numbers is in compliance with British usage.\nFirst write a number_to_words(n) function that takes an integer n between 1 and 1000 inclusive and returns a list of words for the number as described above", "def number_to_words(n):\n \"\"\"Given a number n between 1-1000 inclusive return a list of words for the number.\"\"\"\n x = []\n a = {1:'one',2:'two',3:'three',4:'four',5:'five',6:'six',7:'seven',8:'eight',9:'nine',10:'ten',\n 11:'eleven',12:'twelve',13:'thirteen',14:'fourteen',15:'fifteen',16:'sixteen',17:'seventeen',18:'eighteen'\n ,19:'nineteen',20:'twenty',30:'thirty',40:'forty',50:'fifty',60:'sixty',70:'seventy',80:'eighty',90:'ninety'}\n b = 'hundred'\n c = 'thousand'\n d = 'and'\n if n <= 20 and n >= 1:\n x.append(a[n])\n return x\n elif n > 20 and n < 100:\n if n % 10 == 0:\n x.append(a[n])\n return x\n else:\n y = str(n)\n x.append(a[int(y[0] + '0')])\n x.append(a[int(y[1])])\n return x\n elif n >= 100 and n < 1000:\n if n % 100 == 0:\n y = str(n)\n x.append(a[int(y[0])])\n x.append(b)\n return x\n elif n % 10 == 0:\n y = str(n)\n x.append(a[int(y[0])])\n x.append(b)\n x.append(d)\n x.append(a[int(y[1]+'0')])\n return x\n elif str(n)[1] == '0':\n y = str(n)\n x.append(a[int(y[0])])\n x.append(b)\n x.append(d)\n x.append(a[int(y[2])])\n return x\n elif str(n)[1] == '1':\n y = str(n)\n x.append(a[int(y[0])])\n x.append(b)\n x.append(d)\n x.append(a[int(y[1]+y[2])])\n return x\n else:\n y = str(n)\n x.append(a[int(y[0])])\n x.append(b)\n 
x.append(d)\n x.append(a[int(y[1]+'0')])\n x.append(a[int(y[2])])\n return x\n else:\n x.append(a[1])\n x.append(c)\n return x", "Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.", "assert number_to_words(16) == ['sixteen']\nassert number_to_words(507) == ['five','hundred','and','seven']\nassert number_to_words(735) == ['seven', 'hundred', 'and', 'thirty', 'five']\nassert len(''.join(number_to_words(342))) == 23\n\nassert True # use this for grading the number_to_words tests.", "Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.", "def count_letters(n):\n \"\"\"Count the number of letters used to write out the words for 1-n inclusive.\"\"\"\n z = 0\n x = range(1,n+1)\n for m in x:\n j = number_to_words(m)\n k = len(''.join(j))\n z += k\n return z", "Now write a set of assert tests for your count_letters function that verifies that it is working as expected.", "assert count_letters(6) == 22\n\nassert True # use this for grading the count_letters tests.", "Finally use your count_letters function to solve the original question.", "count_letters(1000)\n\nassert True # use this for grading the answer to the original question." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
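The branch-heavy `number_to_words` in the Project Euler notebook above can be condensed considerably. Here is a sketch of an equivalent approach (same British "and" convention, written for Python 3 rather than the notebook's Python 2); like the original, it returns e.g. `['forty', 'two']` as separate words, which doesn't matter because letters are counted on the joined string:

```python
# Words for 0-19 and for the tens multiples (index 0 and 1 unused in TENS)
ONES = ['', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight',
        'nine', 'ten', 'eleven', 'twelve', 'thirteen', 'fourteen', 'fifteen',
        'sixteen', 'seventeen', 'eighteen', 'nineteen']
TENS = ['', '', 'twenty', 'thirty', 'forty', 'fifty', 'sixty', 'seventy',
        'eighty', 'ninety']

def number_to_words(n):
    """Return the list of words for 1 <= n <= 1000, British usage."""
    if n == 1000:
        return ['one', 'thousand']
    words = []
    if n >= 100:
        words += [ONES[n // 100], 'hundred']
        if n % 100:
            words.append('and')  # "three hundred AND forty two"
        n %= 100
    if n >= 20:
        words.append(TENS[n // 10])
        n %= 10
    if n:
        words.append(ONES[n])
    return words

def count_letters(limit):
    """Total letters (no spaces or hyphens) for 1..limit inclusive."""
    return sum(len(''.join(number_to_words(i))) for i in range(1, limit + 1))
```

The assert cells from the notebook pass unchanged against this version.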
aktse/udacity-mlnd
projects/customer_segments/customer_segments.ipynb
mit
[ "Machine Learning Engineer Nanodegree\nUnsupervised Learning\nProject: Creating Customer Segments\nWelcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. \n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\nGetting Started\nIn this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.\nThe dataset for this project can be found on the UCI Machine Learning Repository. 
For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers.\nRun the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.", "# Import libraries necessary for this project\nimport numpy as np\nimport pandas as pd\nfrom IPython.display import display # Allows the use of display() for DataFrames\n\n# Import supplementary visualizations code visuals.py\nimport visuals as vs\n\n# Pretty display for notebooks\n%matplotlib inline\n\n# Load the wholesale customers dataset\ntry:\n data = pd.read_csv(\"customers.csv\")\n data.drop(['Region', 'Channel'], axis = 1, inplace = True)\n print \"Wholesale customers dataset has {} samples with {} features each.\".format(*data.shape)\nexcept:\n print \"Dataset could not be loaded. Is the dataset missing?\"", "Data Exploration\nIn this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.\nRun the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. 
Consider what each category represents in terms of products you could purchase.", "# Display a description of the dataset\ndisplay(data.describe())", "Implementation: Selecting Samples\nTo get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.", "# TODO: Select three indices of your choice you wish to sample from the dataset\nindices = [338, 154, 181]\n\n# Create a DataFrame of the chosen samples\nsamples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)\nprint \"Chosen samples of wholesale customers dataset:\"\ndisplay(samples)", "Question 1\nConsider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.\nWhat kind of establishment (customer) could each of the three samples you've chosen represent?\nHint: Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying \"McDonalds\" when describing a sample customer as a restaurant.\nAnswer: I deliberately looked for the records with min(fresh), min(milk) and max(fresh) and it did not disappoint me to see that they seem to represent vastly different customer segments.\n\nThe first record is in the top 25% for 'Frozen' goods, top 50% for 'Grocery' and 'Delicatessen', and in the bottom 25% for the last 3 categories. This could be a small grocery store which specializes in frozen goods, but has a grocery and deli section as well. The lack of fresh goods (taken to mean produce), however, seems to suggest otherwise.
Though the spending is fairly high, it's not incredibly so (I'm not convinced even a small grocery store only sells ~25,000 m.u. worth of goods in a year). Therefore, it's possible that this could also be a small group of individuals (such as college roommates) who primarily eat frozen foods (e.g. frozen pizza, fries).\nThe second record has very low spending all around (WAY below the 25th percentile). This customer is probably an individual, and one that shops at other places.\nThe third record exceeds the 75th percentile in all categories, although they only come close to the max value in one category (Fresh). This is likely a grocery store of some kind, which specializes in selling produce.\n\nImplementation: Feature Relevance\nOne interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.\nIn the code block below, you will need to implement the following:\n - Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.\n - Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets.\n - Use the removed feature as your target label.
Set a test_size of 0.25 and set a random_state.\n - Import a decision tree regressor, set a random_state, and fit the learner to the training data.\n - Report the prediction score of the testing set using the regressor's score function.", "# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature\ntest_label = 'Grocery'\nnew_data = data.drop(test_label, axis = 1)\ntest_feature = data[test_label]\n\n# TODO: Split the data into training and testing sets using the given feature as the target\nfrom sklearn.cross_validation import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(new_data, test_feature, test_size=0.25, random_state=777)\n\n# TODO: Create a decision tree regressor and fit it to the training set\nfrom sklearn.tree import DecisionTreeRegressor\nregressor = DecisionTreeRegressor()\nregressor.fit(X_train, y_train)\n\n# TODO: Report the score of the prediction using the testing set\nscore = regressor.score(X_test, y_test)\nprint score", "Question 2\nWhich feature did you attempt to predict? What was the reported prediction score? Is this feature is necessary for identifying customers' spending habits?\nHint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data.\nAnswer: I attempted to predict 'Grocery'. The reported prediction score ranged between 0.78-0.82 when run multiple times (even with a constant random_state). This feature seems to be pretty good for identifying customer spending habits\nVisualize Feature Distributions\nTo get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. 
Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.", "# Produce a scatter matrix for each pair of features in the data\npd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');", "Question 3\nAre there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?\nHint: Is the data normally distributed? Where do most of the data points lie? \nAnswer: Grocery seems to be mostly correlated with 'Milk' and 'Detergents_Paper'. The remaining 3 features are not quite as correlated (In fact, they aren't really correlated with anything else at all). This confirms my suspicion that the feature I chose (Grocery) is relevant. The data, however, seems to be highly skewed to the right (a few very large outliers) across all features. This suggests that the company perhaps has a few very large (probably corporate) buyers.\nData Preprocessing\nIn this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is oftentimes a critical step in assuring that results you obtain from your analysis are significant and meaningful.\nImplementation: Feature Scaling\nIf data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox test, which calculates the best power transformation of the data that reduces skewness.
A simpler approach which can work in most cases would be applying the natural logarithm.\nIn the code block below, you will need to implement the following:\n - Assign a copy of the data to log_data after applying logarithmic scaling. Use the np.log function for this.\n - Assign a copy of the sample data to log_samples after applying logarithmic scaling. Again, use np.log.", "# TODO: Scale the data using the natural logarithm\nlog_data = np.log(data)\n\n# TODO: Scale the sample data using the natural logarithm\nlog_samples = np.log(samples)\n\n# Produce a scatter matrix for each pair of newly-transformed features\npd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');", "Observation\nAfter applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).\nRun the code below to see how the sample data has changed after having the natural logarithm applied to it.", "# Display the log-transformed sample data\ndisplay(log_samples)", "Implementation: Outlier Detection\nDetecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many \"rules of thumb\" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identifying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.\nIn the code block below, you will need to implement the following:\n - Assign the value of the 25th percentile for the given feature to Q1.
Use np.percentile for this.\n - Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.\n - Assign the calculation of an outlier step for the given feature to step.\n - Optionally remove data points from the dataset by adding indices to the outliers list.\nNOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points!\nOnce you have performed this implementation, the dataset will be stored in the variable good_data.", "from collections import defaultdict\noutlier_indices = defaultdict(int)\n# For each feature find the data points with extreme high or low values\nfor feature in log_data.keys():\n \n # TODO: Calculate Q1 (25th percentile of the data) for the given feature\n Q1 = np.percentile(log_data[feature], 25)\n \n # TODO: Calculate Q3 (75th percentile of the data) for the given feature\n Q3 = np.percentile(log_data[feature], 75)\n \n # TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)\n step = (Q3 - Q1) * 1.5\n \n # Display the outliers\n print \"Data points considered outliers for the feature '{}':\".format(feature)\n rows = log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))]\n feature_indices = rows.index\n display(rows)\n \n # Track all indices that are outliers\n for index in feature_indices:\n if (index not in indices):\n outlier_indices[index] += 1\n \n \n# OPTIONAL: Select the indices for data points you wish to remove\n# If an index appeared at least once (was an outlier for any category), drop the row\noutliers = []\nfor index in outlier_indices:\n if outlier_indices[index] >= 1:\n outliers.append(index)\n\nprint(outliers)\nprint(len(outliers))\n# Remove the outliers, if any were specified\ngood_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)\ndisplay(good_data.describe())\ndisplay(log_data.describe())", "Question 4\nAre there any data points considered
outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why. \nAnswer:\nThere were 42 unique rows that had at least 1 outlier, 5 with 2+, 1 with 3+, and 0 with 4+.\nBased on this, I have chosen to remove any data point with an outlier (which works out to roughly 10% of the original data). I chose to remove any rows with at least 1 feature that is an outlier. I decided to do this because it seemed to lower the average distance between the mean and the median. \nTo determine this, I recorded the mean and median of each feature after removing data points with at least 1 feature that is an outlier, at least 2 features that are an outlier, and without removing any data points. Lastly, I calculated the difference between the mean and median of each column and averaged the result. The results were:\n|Min # of Outlier Features|Average Difference Between Mean and Median|\n|---|---|\n|None (Base)|0.123|\n|1|0.0852|\n|2|0.110|\nThere isn't much improvement when only removing the 5 data points, but there is a much larger improvement when removing all 42 outliers.\nConsequently, the first 2 sample points I chose were removed. I have opted to not remove them (and thus, the averages will be slightly different from what I initially calculated, but the overall effect should be the same) by making them an exception and then re-running the code. It is quite surprising that the last sample point wasn't considered an outlier as it had the maximum value in the 'Fresh' category. As a matter of fact, outliers for 'Fresh' were only those that were lower than the 25th percentile - 1.5×IQR.\nFeature Transformation\nIn this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data.
Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.\nImplementation: PCA\nNow that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new \"feature\" of the space, however it is a composition of the original features present in the data.\nIn the code block below, you will need to implement the following:\n - Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.\n - Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.", "# TODO: Apply PCA by fitting the good data with the same number of dimensions as features\nfrom sklearn.decomposition import PCA\npca = PCA(n_components=6, random_state=777).fit(good_data)\n\n# TODO: Transform log_samples using the PCA fit above\npca_samples = pca.transform(log_samples)\n\n# Generate PCA results plot\npca_results = vs.pca_results(good_data, pca)", "Question 5\nHow much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.\nHint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. 
The rate of increase or decrease is based on the individual feature weights.\nAnswer: 72.13% of the total variance is explained by the first 2 principal components. 92.95% is explained by the first 4 PCs.\nDimension 1: This dimension suggests that an increase in both 'Fresh' and 'Frozen' results in a moderate decrease in spending on 'Milk' and 'Grocery' and a large decrease in spending in 'Detergents_Paper'\nDimension 2: This dimension suggests that small purchases of 'Detergents_Paper' are correlated with a large decrease in spending on 'Fresh', 'Frozen', and 'Deli' (In fact, it is a decrease in spending in all other categories)\nDimension 3: This dimension suggests that large purchases of 'Frozen' and 'Deli' goods are correlated with a large decrease in spending on 'Fresh'\nDimension 4: This dimension suggests that very large purchases of 'Deli' are correlated with a large decrease in spending on 'Frozen' and a moderate decrease in spending on 'Detergents_Paper'\nWhen comparing with the scatter plots from above, an interesting observation can be made. Previously we determined that 'Grocery', 'Milk' and 'Detergents_Paper' were correlated. In fact, according to the scatter plots, they are all positively correlated (that is, an increase in one results in an increase in the other). The correlation between 'Milk' and 'Detergents_Paper' is a bit weaker but the overall shape is there. However, from the PCA, we can see that aside from dimension 1, 'Grocery' and 'Milk' are negatively correlated with 'Detergents_Paper'. 'Grocery' and 'Milk' are positively correlated in all cases except for the last dimension, which only represents ~2.5% of the total variance and can be considered an edge case.\nObservation\nRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points.
Consider if this is consistent with your initial interpretation of the sample points.", "# Display sample log-data after having a PCA transformation applied\ndisplay(log_samples)\ndisplay(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))", "Implementation: Dimensionality Reduction\nWhen using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a significant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.\nIn the code block below, you will need to implement the following:\n - Assign the results of fitting PCA in two dimensions with good_data to pca.\n - Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data.\n - Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.", "# TODO: Apply PCA by fitting the good data with only two dimensions\npca = PCA(n_components=2, random_state=777).fit(good_data)\n\n# TODO: Transform the good data using the PCA fit above\nreduced_data = pca.transform(good_data)\n\n# TODO: Transform log_samples using the PCA fit above\npca_samples = pca.transform(log_samples)\n\n# Create a DataFrame for the reduced data\nreduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])", "Observation\nRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions.
Observe how the values for the first two dimensions remain unchanged when compared to a PCA transformation in six dimensions.", "# Display sample log-data after applying PCA transformation in two dimensions\ndisplay(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))", "Visualizing a Biplot\nA biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.\nRun the code cell below to produce a biplot of the reduced-dimension data.", "# Create a biplot\nvs.biplot(good_data, reduced_data, pca)", "Observation\nOnce we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point in the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories. \nFrom the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?\nClustering\nIn this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale. \nQuestion 6\nWhat are the advantages to using a K-Means clustering algorithm?
What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?\nAnswer:\nImplementation: Creating Clusters\nDepending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the \"goodness\" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering.\nIn the code block below, you will need to implement the following:\n - Fit a clustering algorithm to the reduced_data and assign it to clusterer.\n - Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.\n - Find the cluster centers using the algorithm's respective attribute and assign them to centers.\n - Predict the cluster for each sample data point in pca_samples and assign them sample_preds.\n - Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds.\n - Assign the silhouette score to score and print the result.", "# TODO: Apply your clustering algorithm of choice to the reduced data \nclusterer = None\n\n# TODO: Predict the cluster for each data point\npreds = None\n\n# TODO: Find the cluster centers\ncenters = None\n\n# TODO: Predict the cluster for each transformed sample data point\nsample_preds = None\n\n# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen\nscore = None", "Question 7\nReport the silhouette score for several cluster numbers you tried. 
Of these, which number of clusters has the best silhouette score? \nAnswer:\nCluster Visualization\nOnce you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.", "# Display the results of the clustering from implementation\nvs.cluster_results(reduced_data, preds, centers, pca_samples)", "Implementation: Data Recovery\nEach cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. 
Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.\nIn the code block below, you will need to implement the following:\n - Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.\n - Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers.", "# TODO: Inverse transform the centers\nlog_centers = None\n\n# TODO: Exponentiate the centers\ntrue_centers = None\n\n# Display the true centers\nsegments = ['Segment {}'.format(i) for i in range(0,len(centers))]\ntrue_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())\ntrue_centers.index = segments\ndisplay(true_centers)", "Question 8\nConsider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?\nHint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'.\nAnswer:\nQuestion 9\nFor each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this?\nRun the code block below to find which cluster each sample point is predicted to be.", "# Display the predictions\nfor i, pred in enumerate(sample_preds):\n print \"Sample point\", i, \"predicted to be in Cluster\", pred", "Answer:\nConclusion\nIn this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. 
Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships.\nQuestion 10\nCompanies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?\nHint: Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?\nAnswer:\nQuestion 11\nAdditional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a customer segment it best identifies with (depending on the clustering algorithm applied), we can consider 'customer segment' as an engineered feature for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a customer segment to determine the most appropriate delivery service.\nHow can the wholesale distributor label the new customers using only their estimated product spending and the customer segment data?\nHint: A supervised learner could be used to train on the original customers. 
What would be the target variable?\nAnswer:\nVisualizing Underlying Distributions\nAt the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.\nRun the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' in the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.", "# Display the clustering results based on 'Channel' data\nvs.channel_results(reduced_data, outliers, pca_samples)", "Question 12\nHow well does the clustering algorithm and number of clusters you've chosen compare to this underlying distribution of Hotel/Restaurant/Cafe customers to Retailer customers? Are there customer segments that would be classified as purely 'Retailers' or 'Hotels/Restaurants/Cafes' by this distribution? Would you consider these classifications as consistent with your previous definition of the customer segments?\nAnswer:\n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
empiricalstateofmind/blog
content/notebooks/mapping-climbs.ipynb
mit
[ "I've always been a fan of maps, especially old maps, and the sense of exploration and history that comes with hovering over them and zooming in on the fine details.\nDespite this I've never really tried to make any maps of my own.\nI recently discovered the nifty Python package, Folium, which can make visually stunning interactive maps within a matter of minutes. \nSince my PhD research is concerned more with the digital world than the real world I decided to look to my love of cycling and plot some of the top cycling climbs in the UK.\nI've published the final map here on the Top100 Climbs Project Page, where (if you're interested) you can find more information on the climbs themselves.\nThe focus of this post is to see how it was made.\nThe most basic Python package for plotting geographical data is Basemap which uses matplotlib.\nHowever, like matplotlib, it can require a lot of work to get something that looks reasonable.\nBy contrast, Folium is simple, intuitive, and utilises the Javascript mapping library leaflet.js to create interactive maps on the fly.\nTo install you can use pip install folium (there is not current a Conda package).", "import folium", "A Basic Example\nThe following example highlights how we can generate an embeded map in just four lines of code.\nThe first line generates the map object.\nThe location and zoom_start keywords sets the latitude and longitude of the centre of the map (we can of course move around), and how zoomed in we are initially.\nThe tiles keyword describes which tileset to want to use.\nThere are lots of different options here, from the geographically detailed, to the more artistic representations of the world.\nYou can see all the available sets here.\nFor this example we'll use the Open Street Map which is on the detailed side of the spectrum.\nThe second line defines a Marker object.\nThis has a specific location specifed by a (lat,lon) pair, some pop-up text, and an icon to be placed onto the map.\nFinally we add 
the marker to the map (in a very OOP fashion) and view the map object which embeds nicely in a Jupyter notebook.", "map_osm = folium.Map(location=[40.7128, -74.0059],\n tiles='Open Street Map',\n zoom_start=12)\n\nmarker = folium.Marker([40.7484, -73.9857], \n popup='Empire State Building', \n icon=folium.Icon())\n\nmap_osm.add_child(marker)\n\nmap_osm", "This might qualify as one of the most useless maps ever created - but if you didn't know where the Empire State Building was, you do now.\nUsing the Strava API to Acquire Data\nWe first need some data to plot.\nWe use the requests package to query the Strava API.\nIf you're unfamiliar with using a RESTful API, all that this really entails is visiting a specific URL and receiving JSON back.\nOf course there's much more to it than that but since we're only trying to access data it's not that much more complicated.\nFor the specifics of the Strava API, click here.\nTo access the API we need a token to identify ourselves. \nI've omitted mine below for obvious reasons.", "import requests as rq\nimport pandas as pd\nimport json\n\ntoken = '<Put your own token here>'\nsegment_payload = {'access_token': token, 'per_page': '200'}\nheaders = {'content-type': 'application/json'}", "I've previously catalogued a list of all the segment ids for the climbs I want to map from Strava, and saved them in a .csv.\nLoading these in with Pandas we get", "segment_ids = pd.read_csv('./strava_segments.csv', index_col='id')\nsegment_ids.head()", "At the moment we have no further information about these segments - where they are, how steep they are, and how long they go on for.\nThankfully we can query the Strava API for each segment (using the id) and save that information into a DataFrame.\nFor each segment in the table above, we generate an API call, receive the JSON, and parse the columns (dictionary keys) that we want to keep before finally creating a DataFrame.", "required_fields = ['athlete_count', 'average_grade', 'city', \n
'climb_category', 'distance', 'effort_count', \n 'elevation_low', 'end_latlng', 'id',\n 'maximum_grade', 'name', 'start_latlng', \n 'total_elevation_gain', 'updated_at']\n\nfull_segment_info = {}\nfor ix, row in segment_ids.iterrows():\n r = rq.get('https://www.strava.com/api/v3/segments/{}'.format(row.segment_id), headers=headers, params=segment_payload)\n data = r.json()\n data = {key: val for key, val in data.items() if key in required_fields}\n full_segment_info[ix] = data\n \nfull_segment_info = pd.DataFrame(full_segment_info).T\nfull_segment_info.index = full_segment_info.index + 1", "Now, looking at the top of the table, we can see we have a multitude of new information:", "full_segment_info.head()", "Since querying the API takes time and there are rate limits on how often we can do this we'll make sure this new data is saved locally.", "full_segment_info.to_csv('./top200climbs-full.csv')\n\n#ignore\nfull_segment_info = pd.read_csv('./top200climbs-full.csv', index_col=0) # Remove eventually", "I had originally planned to use all this information in the map however Strava provide a widget (or small piece of website) which summarises the climb in a nice and succinct fashion. 
\nWe will however need to know where each climb starts (its lat and lon).\nA Little Fix (Skippable)\nWe want to embed a widget in our markers, which we can do in the form of an IFrame.\nAn IFrame renders a website within a website in a smaller frame.\nFor simplicity we'll use the Strava widget for each climb, like the one below.\n<iframe height='405' width='590' frameborder='0' allowtransparency='true' scrolling='no' src='https://www.strava.com/segments/652851/embed'></iframe>\n\nThe default class for the IFrame in the folium package includes a big border and the ability to scroll.\nWe want to disable that by default, so we define a new subclass and overwrite the render method:", "import base64\n# Check HTML on building blog\nclass InvisibleIFrame(folium.element.IFrame):\n\n def render(self, **kwargs):\n \"\"\"Renders the HTML representation of the element.\"\"\"\n html = super(folium.element.IFrame, self).render(**kwargs)\n html = \"data:text/html;base64,\" + base64.b64encode(html.encode('utf8')).decode('utf8')\n\n if self.height is None:\n iframe = (\n '<div style=\"width:{width};\">'\n '<div style=\"position:relative;width:100%;height:0;padding-bottom:{ratio};\">'\n '<iframe src=\"{html}\" style=\"position:absolute;width:100%;height:100%;left:0;top:0; \"'\n 'frameBorder=\"0\" scrolling=\"no\">'\n '</iframe>'\n '</div></div>').format\n iframe = iframe(html=html,\n width=self.width,\n ratio=self.ratio)\n else:\n iframe = ('<iframe src=\"{html}\" width=\"{width}\" '\n 'frameBorder=\"0\" scrolling=\"no\"'\n 'height=\"{height}\"></iframe>').format\n iframe = iframe(html=html, width=self.width, height=self.height)\n return iframe", "Taking the code from Github, we copy the method we want to overwrite.\nOn lines 15 and 23 we've added the HTML to set the border size to 0, and disable scrolling. 
\nWe leave the rest of the code as is.\nBuilding the Map\nWe only want people looking at the UK, so we introduce some rough bounds on the lat/lon values.", "# Bounds\nmin_lat, max_lat = 48.77, 60\nmin_lon, max_lon = -9.05, 5", "We can now define our map object.\nThe only other new argument here is min_zoom, which prevents people zooming out into space.", "# Make the map\nm = folium.Map(location=[54.584797 , -3.438721],\n tiles='Stamen Terrain', \n zoom_start=6,\n min_lat=min_lat, \n max_lat=max_lat,\n min_lon=min_lon, \n max_lon=max_lon,\n max_zoom=18, \n min_zoom=5)", "Finally before the main loop we define a MarkerCluster object.\nThis prevents too many markers from appearing at once and overlapping by grouping them up by location.\nYou can then click on the cluster or zoom in to expand them.", "# Define a cluster of markers.\nmc = folium.MarkerCluster()", "We're now ready to plot our climbs.\nFor added style, I made a couple of custom markers too, combining the UK road signs for cyclists and steep gradients.\n</br>\n<div class=\"row\">\n<div class=\"col-sm-6\"><img src='https://andrewmellor.co.uk/static/markerr_post.png'></div>\n<div class=\"col-sm-6\"><img src='https://andrewmellor.co.uk/static/markerb_post.png'></div>\n</div>\n\nOur main loop is just like the basic example with a few more things thrown in: we define something to be displayed on a marker, create a marker, and add it to our marker cluster.", "for ix, row in full_segment_info.iterrows():\n \n # The encodes the Strava segment widget\n html = r\"\"\"<center><iframe height='405' width='590' frameborder='0' \n allowtransparency='true' scrolling='no' \n src='https://www.strava.com/segments/{}/embed'></iframe></center>\"\"\".format\n html = html(row.id)\n iframe = InvisibleIFrame(html, width=600, height=370)\n \n # We create the popup thats going to appear when we click a marker.\n # Previously this was just text but now we're adding an IFrame.\n popup = folium.map.Popup(iframe, max_width=2650)\n \n # 
This defines our CustomIcon, using the icons I created\n if ix <= 100:\n icon_url='https://andrewmellor.co.uk/static/markerr_post.png'\n else:\n icon_url='https://andrewmellor.co.uk/static/markerb_post.png'\n icon = folium.features.CustomIcon(icon_image=icon_url,\n icon_size=(56,56),\n icon_anchor=(28,56))\n\n # We create our marker with our custom icon and popup \n # at the location of the start of each climb\n lat, lon = row.start_latlng\n marker = folium.map.Marker([lat, lon], \n icon=icon,\n popup=popup)\n \n # Add the marker to the cluster.\n mc.add_child(marker)", "All that is left is to add the marker cluster to the map object.", "m.add_child(mc);", "Then we can display the map inline in the notebook.", "m", "Finally, we can save the map as a .html file so we can access it outside of Python/Notebook (note that it won't work offline as it needs the leaflet.js scripts) or embed it into another website.", "m.save(outfile='full-terrain.html')", "...and we're done.\nHaving experimented with other Python packages to plot geographical data and struggled I was surprised at how easy, intuitive, and quick it was to create an interactive map and add data.\nFolium isn't just limited to markers either, there is support for geojson and other complex data structures which can be overlaid like weather forecasts.\nData visualisation at its simplest." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ProfessorKazarinoff/staticsite
content/code/bokeh/mohrs_circle_with_bokeh.ipynb
gpl-3.0
[ "Bokeh is a plotting library for Python like Matplotlib and Alatir (https://altair-viz.github.io/). While Matplotlib is widely used and great for making static 2D and 3D plots, building interactive web graphics is not its strong suite. Bokeh on the other hand is a plotting library designed for the web. You can create static plots with Bokeh, but its real strength is that Bokeh can make online interactive plots (without knowing any Javascript). \nIn this post, we will build a Bokeh plot that plots a Mohr's Circle. Mohr's circle is a useful way for Engineers to visualize the normal and shearing stresses of an element that is rotated relative to the known applied stress. Engineers use Mohr's Circle to help determine how much load a component can withstand before it starts to deform.\nInstalling Bokeh\nBokeh comes installed in the full Anaconda Distribution of Python. If you are using the (base) Anaconda environment, no other installation steps are necessary.\nIf you don't have Anaconda installed or are using a virtual environment, Bokeh can be installed using conda and the Anaconda Prompt using the command:\n```text\n\nconda install bokeh\n```\n\nOr installed at a terminal using pip.\ntext\n$ pip install bokeh\nOnce bokeh is installed you can bring up the Python REPL (the Python prompt) and confirm your installation with the following code:", ">>> import bokeh\n>>> bokeh.__version__", "If you installation was successful, you should see a version Number like '1.3.4'.\nCreate a Mohr's Circle Function\nBefore we start plotting with Bokeh, we'll first make a function for Mohr's Circle. The function below can from an amazing student in on of my courses. 
The student took on the challenge of building Mohr's Circles with Python and the function below was part of the solution.", "import numpy as np\n\ndef mohrs_circle(stress_x=1,stress_y=1,shear=0):\n \"\"\"\n A function that calculates the critical values to build a Mohr's Circle\n \"\"\"\n \n # calculate the average stress, min stress and max stress\n stress_avg=(stress_x+stress_y)/2\n stress_max=stress_avg+(((stress_x-stress_y)/2)**2+shear**2)**0.5\n stress_min=stress_avg-(((stress_x-stress_y)/2)**2+shear**2)**0.5\n # calculate the radius\n R=((((stress_x-stress_y)/2)**2)+shear**2)**0.5 #Also max shear\n circle_eqn=((stress_x-stress_avg)**2)-shear**2-R**2\n \n # Construct x and y arrays that build the circle\n n=100\n t=np.linspace(0,2*np.pi,n+1)\n x=R*np.cos(t)+stress_avg\n y=R*np.sin(t)\n \n # Construct X and Y arrays that build the line accross Mohr's circle\n X = np.array([stress_x, stress_y])\n Y = np.array([-shear, shear])\n \n # Declare the center\n C = stress_avg\n\n return x,y,X,Y,R,C", "Let's test our function and see the resulting output", "x,y,X,Y,R,C = mohrs_circle(2,5,1)\nprint(X)\nprint(Y)\nprint(C)\nprint(R)", "We see output that looks reasonable. \nBuild Mohr's Circle with Bokeh\nNext we'll use our mohrs_circle() function to build a plot of Mohr's Circle using Bokeh. The imports start our script.", "import bokeh\nfrom bokeh.plotting import figure, output_file, show, output_notebook\nfrom bokeh.models import ColumnDataSource\n\nprint(f\"Bokeh version: {bokeh.__version__}\")", "Next, we'll call our mohrs_circle() function so that we have arrays we need to build the plot.", "x,y,X,Y,R,C = mohrs_circle(2,5,1)", "Now we'll use the arrays x,y and X,Y to create two Bokeh Columnar Data Sources. 
Bokeh uses the concept of a columnar data source, sort of like a column in a table or excel file to build plots.", "# Create the Bokeh Column Data Source Object from the mohrs_circle() output arrays\ncircle_source = ColumnDataSource(data=dict(x=x, y=y))\nline_source = ColumnDataSource(data=dict(x=X, y=Y))", "The next step is to create a Bokeh figure object that we'll call plot. Bokeh figure objects are the basis for Bokeh plots. Lines, input boxes, sliders and other sorts of things can be added to a figure object. Whatever gets added to the figure object will be shown in the final plot. \nKeyword arguments such as plot_height, plot_width, title and tools can be called out when the figure object is created.", "plot = figure(plot_height=400, plot_width=400, title=\"Mohr's Circle\", tools=\"pan,reset,save,wheel_zoom\")", "Now we can add our circle and line to the plot. This is accomplished by calling plot.line() and specifying the axis 'x','y' and providing our column data sources as keyword arguments. Some line attributes such as line_width and line_alpha can also be specified.", "plot.line('x','y', source=circle_source, line_width=3, line_alpha=0.6)\nplot.line('x','y', source=line_source, line_width=3, line_alpha=0.8)", "OK, we've created our plot, but now we need to see it. There are a couple ways of doing so. Remember that Bokeh is primarily designed for creating plots built for the web. \nShow the plot in a separate window\nThe first way we can see the plot is by using the show() function. Pass in the plot as an argument to the show() function. 
If you are building the Bokeh Plot in a Jupyter notebook, this will pop out a new browser tab and you'll see your plot.", "show(plot)", "The plot should look something like the plot below.\n\nYou can click the little [save] icon to save the plot.\nShow the plot in a Jupyter notebook\nIf you are working in a Jupyter notebook and want to see the plot inline, call the output_notebook() function at the top of the notebook and then show the plot with show(). Note that you can't show a plot in a separate window and show a plot inline in the same Jupyter notebook. \nThe code below is the exact same as the code we used to build the plot above, the only difference is the line\noutput_notebook()\nright below the imports. \n```python\nimport bokeh\nfrom bokeh.plotting import figure, output_file, show, output_notebook\nfrom bokeh.models import ColumnDataSource\noutput_notebook()\nx,y,X,Y,R,C = mohrs_circle(2,5,1)\ncircle_source = ColumnDataSource(data=dict(x=x, y=y))\nline_source = ColumnDataSource(data=dict(x=X, y=Y))\nplot = figure(plot_height=400, plot_width=400, title=\"Mohr's Circle\", tools=\"pan,reset,save,wheel_zoom\")\nplot.line('x','y', source=circle_source, line_width=3, line_alpha=0.6)\nplot.line('x','y', source=line_source, line_width=3, line_alpha=0.8)\nshow(plot)\n```\nYou will see the plot in the resulting output cell looking something like below\n\nOutput the plot as a .png file\nIf you want to output the plot as a .png file, you first have to make sure that a couple of libraries are installed. This is most easily completed at the Anaconda Prompt using conda.\n```text\n\nconda install selenium pillow\nconda install -c conda-forge phantomjs \n```\n\nThen you can use Bokeh's export_png() function to save the plot to a .png file. Make sure to import export_png before calling the function", "from bokeh.io import export_png\n\nexport_png(plot, filename=\"plot.png\")", "On my Windows 10 laptop, a Windows Defender Firewall window popped up that asked me for access. 
After I typed in my administrator password, the plot was saved.\nIt should look like the plot below\n\nIf you don't like the pan, zoom, refresh, and save buttons on the side, they can be removed when you create the plot using Bokeh's figure() function and setting the .toolbar.logo attribute to None and the .toolbar_location to None", "import bokeh\nfrom bokeh.plotting import figure, output_file, show, output_notebook\nfrom bokeh.models import ColumnDataSource\nfrom bokeh.io import export_png\n\nx,y,X,Y,R,C = mohrs_circle(2,5,1)\ncircle_source = ColumnDataSource(data=dict(x=x, y=y))\nline_source = ColumnDataSource(data=dict(x=X, y=Y))\n\nplot = figure(plot_height=400, plot_width=400, title=\"Mohr's Circle\", tools=\"\")\nplot.toolbar.logo = None\nplot.toolbar_location = None\n\nplot.line('x','y', source=circle_source, line_width=3, line_alpha=0.6)\nplot.line('x','y', source=line_source, line_width=3, line_alpha=0.8)\n\nexport_png(plot, filename=\"plot_no_tools.png\")", "The plot with no tools is shown below.\n\nConclusion\nIn this post, we built a plot of Mohr's Circle using Bokeh. Bokeh is a Python package that can be installed with conda or pip. We built the plot in a couple of steps. First we imported some functions from bokeh. Next, we wrote a Python function that gave us the arrays necessary to plot Mohr's Circle. Using the arrays we created, we defined Bokeh Column Data sources for the circle and the line. Next we created a bokeh figure and added lines (the circle and line) to it. At the end of the post we showed the figure in three different ways: in a separate window, in a Jupyter notebook and in an exported .png file." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ScienceStacks/jupyter_scisheets_widget
test_notebooks/20171009_scisheets_widget.ipynb
bsd-3-clause
[ "Demonstration of Use Case\n\nUsers can enter step by step explanations of changes made to a SciSheet in a Jupyter notebook\n\nLoad necessary packages", "import json\n\nimport numpy as np\nimport pandas as pd\n\nfrom jupyter_scisheets_widget import scisheets_widget", "Load data into the notebook", "import pandas_datareader as pdr\nibm_data = pdr.get_data_yahoo('IBM')\n\nincome_data = pd.read_csv('income_data.csv', sep=';')\nincome_data", "Display the loaded data as a scisheet widget", "tbl2 = scisheets_widget.HandsonDataFrame(income_data)\ntbl2.show()\n\ntbl2._df\n\ntbl2._widget._model_data\n\ntbl2._widget._model_header" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
mitdbg/modeldb
client/workflows/demos/setup-script.ipynb
mit
[ "Part-of-Speech Tagging with NLTK\nThis notebook is a quick demonstration of verta's run.log_setup_script() feature.\nWe'll create a simple and lightweight text tokenizer and part-of-speech tagger using NLTK,\nwhich will require not only installing the nltk package itself,\nbut also downloading pre-trained text processing models within Python code.\nPrepare Verta", "import six\n\nfrom verta import Client\nfrom verta.utils import ModelAPI\n\nHOST = \"app.verta.ai\"\n\nPROJECT_NAME = \"Part-of-Speech Tagging\"\nEXPERIMENT_NAME = \"NLTK\"\n\nclient = Client(HOST)\n\nproj = client.set_project(PROJECT_NAME)\nexpt = client.set_experiment(EXPERIMENT_NAME)\nrun = client.set_experiment_run()", "Prepare NLTK\nThis Notebook was tested with nltk v3.4.5, though many versions should work just fine.", "import nltk\n\nnltk.__version__", "NLTK requires the separate installation of a tokenizer and part-of-speech tagger before these functionalities can be used.", "# for tokenizing\nnltk.download('punkt')\n\n# for part-of-speech tagging\nnltk.download('averaged_perceptron_tagger')", "Log Model for Deployment\nCreate Model\nOur model will be a thin wrapper around nltk,\nreturning the constituent tokens and their part-of-speech tags for each input sentence.", "class TextClassifier:\n def __init__(self, nltk):\n self.nltk = nltk\n\n def predict(self, data):\n predictions = []\n for text in data:\n tokens = self.nltk.word_tokenize(text)\n predictions.append({\n 'tokens': tokens,\n 'parts_of_speech': [list(pair) for pair in self.nltk.pos_tag(tokens)],\n })\n\n return predictions\n\nmodel = TextClassifier(nltk)\n\ndata = [\n \"I am a teapot.\",\n \"Just kidding I'm a bug?\",\n]\nmodel.predict(data)", "Create Deployment Artifacts\nAs always, we'll create a couple of descriptive artifacts to let the Verta platform know how to handle our model.", "model_api = ModelAPI(data, model.predict(data))\n\nrun.log_model(model, model_api=model_api)\nrun.log_requirements([\"nltk\"])", "Create Setup 
Script\nAs we did in the beginning of this Notebook,\nthe deployment needs these NLTK resources downloaded and installed before it can run the model,\nso we'll define a short setup script to send over and execute at the beginning of a model deployment.", "setup = \"\"\"\nimport nltk\n\nnltk.download('punkt')\nnltk.download('averaged_perceptron_tagger')\n\"\"\"\n\nrun.log_setup_script(setup)", "Make Live Predictions\nNow we can visit the Web App, deploy the model, and make successful predictions!", "run\n\ndata = [\n \"Welcome to Verta!\",\n]\n\nfrom verta.deployment import DeployedModel\n\nDeployedModel(HOST, run.id).predict(data)", "" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
prisae/blog-notebooks
fftlogtest.ipynb
cc0-1.0
[ "FFTLog\nThis notebook is a translation of fftlogtest.f from the Fortran package FFTLog, which was presented in Appendix B of Hamilton, 2000, and published at http://casa.colorado.edu/~ajsh/FFTLog. It serves as an example for the python package fftlog (which is a f2py-wrapper around FFTLog), in the same manner as the original file fftlogtest.f serves as an example for Fortran package FFTLog.\nReference\nHamilton, A. J. S., 2000, Uncorrelated modes of the non-linear power spectrum: Monthly Notices of the Royal Astronomical Society, 312, pages 257-284; DOI: http://dx.doi.org/10.1046/j.1365-8711.2000.03071.x.\n\nThis is fftlogtest.f\nThis is a simple test program to illustrate how FFTLog works.\nThe test transform is:\n$$\n\\int^\\infty_0 r^{\\mu+1} \\exp\\left(-\\frac{r^2}{2} \\right)\\\nJ_\\mu(k, r)\\ k\\ {\\rm d}r = k^{\\mu+1} \\exp\\left(-\\frac{k^2}{2}\n\\right) $$\nDisclaimer:\nFFTLog does NOT claim to provide the most accurate possible\nsolution of the continuous transform (which is the stated aim\nof some other codes). Rather, FFTLog claims to solve the exact\ndiscrete transform of a logarithmically-spaced periodic sequence.\nIf the periodic interval is wide enough, the resolution high\nenough, and the function well enough behaved outside the periodic\ninterval, then FFTLog may yield a satisfactory approximation\nto the continuous transform.\nObserve:\n1. How the result improves as the periodic interval is enlarged.\n With the normal FFT, one is not used to ranges orders of\n magnitude wide, but this is how FFTLog prefers it.\n2. How the result improves as the resolution is increased.\n Because the function is rather smooth, modest resolution\n actually works quite well here.\n3. That the central part of the transform is more reliable\n than the outer parts. Experience suggests that a good general\n strategy is to double the periodic interval over which the\n input function is defined, and then to discard the outer\n half of the transform.\n4. 
That the best bias exponent seems to be $q = 0$.\n5. That for the critical index $\\mu = -1$, the result seems to be\n offset by a constant from the 'correct' answer.\n6. That the result grows progressively worse as mu decreases\n below -1.\nThe analytic integral above fails for $\\mu \\le -1$, but FFTLog\nstill returns answers. Namely, FFTLog returns the analytic\ncontinuation of the discrete transform. Because of ambiguity\nin the path of integration around poles, this analytic continuation\nis liable to differ, for $\\mu \\le -1$, by a constant from the 'correct'\ncontinuation given by the above equation.\nFFTLog begins to have serious difficulties with aliasing as\n$\\mu$ decreases below $-1$, because then $r^{\\mu+1} \\exp(-r^2/2)$ is\nfar from resembling a periodic function.\nYou might have thought that it would help to introduce a bias\nexponent $q = \\mu$, or perhaps $q = \\mu+1$, or more, to make the\nfunction $a(r) = A(r) r^{-q}$ input to fhtq more nearly periodic.\nIn practice a nonzero $q$ makes things worse.\nA symmetry argument lends support to the notion that the best\nexponent here should be $q = 0,$ as empirically appears to be true.\nThe symmetry argument is that the function $r^{\\mu+1} \\exp(-r^2/2)$\nhappens to be the same as its transform $k^{\\mu+1} \\exp(-k^2/2)$.\nIf the best bias exponent were q in the forward transform, then\nthe best exponent would be $-q$ that in the backward transform;\nbut the two transforms happen to be the same in this case,\nsuggesting $q = -q$, hence $q = 0$.\nThis example illustrates that you cannot always tell just by\nlooking at a function what the best bias exponent $q$ should be.\nYou also have to look at its transform. 
The best exponent $q$ is,\nin a sense, the one that makes both the function and its transform\nlook most nearly periodic.\n\nTest-Integral: $\\int_0^\\infty r^{\\mu+1}\\ \\exp\\left(-\\frac{r^2}{2}\\right)\\ J_\\mu(k,r)\\ k\\ {\\rm d}r = k^{\\mu+1} \\exp\\left(-\\frac{k^2}{2}\\right)$\nImport fftlog as well as numpy and matplotlib; some plot settings", "import fftlog\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\nplt.style.use('ggplot')\nmpl.rcParams.update({'font.size': 16})", "Define the parameters you wish to use\nThe presets are the Reasonable choices of parameters from fftlogtest.f.", "# Range of periodic interval\nlogrmin = -4\nlogrmax = 4\n\n# Number of points (Max 4096)\nn = 64\n\n# Order mu of Bessel function\nmu = 0\n\n# Bias exponent: q = 0 is unbiased\nq = 0\n\n# Sensible approximate choice of k_c r_c\nkr = 1\n\n# Tell fhti to change kr to low-ringing value\n# WARNING: kropt = 3 will fail, as interaction is not supported\nkropt = 1\n\n# Forward transform (changed from dir to tdir, as dir is a python fct)\ntdir = 1", "Calculation related to the logarithmic spacing", "# Central point log10(r_c) of periodic interval\nlogrc = (logrmin + logrmax)/2\n\nprint('Central point of periodic interval at log10(r_c) = ', logrc)\n\n# Central index (1/2 integral if n is even)\nnc = (n + 1)/2.0\n\n# Log-spacing of points\ndlogr = (logrmax - logrmin)/n\ndlnr = dlogr*np.log(10.0)", "Calculate input function: $r^{\\mu+1}\\exp\\left(-\\frac{r^2}{2}\\right)$", "r = 10**(logrc + (np.arange(1, n+1) - nc)*dlogr)\nar = r**(mu + 1)*np.exp(-r**2/2.0)", "Initialize FFTLog transform - note fhti resets kr", "kr, wsave, ok = fftlog.fhti(n, mu, dlnr, q, kr, kropt)\nprint('fftlog.fhti: ok =', bool(ok), '; New kr = ', kr)", "Call fftlog.fht (or fftlog.fftl)", "logkc = np.log10(kr) - logrc\nprint('Central point in k-space at log10(k_c) = ', logkc)\n\n# rk = r_c/k_c\nrk = 10**(logrc - logkc)\n\n# Transform\n#ak = fftlog.fftl(ar.copy(), 
wsave, rk, tdir)\nak = fftlog.fht(ar.copy(), wsave, tdir)", "Calculate Output function: $k^{\\mu+1}\\exp\\left(-\\frac{k^2}{2}\\right)$", "k = 10**(logkc + (np.arange(1, n+1) - nc)*dlogr)\ntheo = k**(mu + 1)*np.exp(-k**2/2.0)", "Plot result", "plt.figure(figsize=(16,8))\n\n# Transformed result\nax2 = plt.subplot(1, 2, 2)\nplt.plot(k, theo, 'k', lw=2, label='Theoretical')\nplt.plot(k, ak, 'r--', lw=2, label='FFTLog')\nplt.xlabel('k')\nplt.title(r'$k^{\\mu+1} \\exp(-k^2/2)$', fontsize=20)\nplt.legend(loc='best')\nplt.xscale('log')\nplt.yscale('symlog', basey=10, linthreshy=1e-5)\nax2ylim = plt.ylim()\n\n# Input\nax1 = plt.subplot(1, 2, 1)\nplt.plot(r, ar, 'k', lw=2)\nplt.xlabel('r')\nplt.title(r'$r^{\\mu+1}\\ \\exp(-r^2/2)$', fontsize=20)\nplt.xscale('log')\nplt.yscale('symlog', basey=10, linthreshy=1e-5)\nplt.ylim(ax2ylim)\n\n# Main title\nplt.suptitle(r'$\\int_0^\\infty r^{\\mu+1}\\ \\exp(-r^2/2)\\ J_\\mu(k,r)\\ k\\ {\\rm d}r = k^{\\mu+1} \\exp(-k^2/2)$',\n fontsize=24, y=1.08)\nplt.show()", "Print values", "print(' k a(k) k^(mu+1) exp(-k^2/2)')\nprint('----------------------------------------------------------------')\nfor i in range(n):\n print(\"%18.6e %18.6e %18.6e\"% (k[i], ak[i], theo[i]))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
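The test integral that the fftlog record above verifies graphically can also be cross-checked by brute-force quadrature. The sketch below is ours, not part of that notebook: it uses the integral representation of $J_\mu$ for integer $\mu$ and plain trapezoidal sums, so the grid sizes (`n`, `rmax`) are arbitrary accuracy choices.

```python
import math

def bessel_j(mu, x, n=400):
    # Integral representation J_mu(x) = (1/pi) * int_0^pi cos(mu*t - x*sin(t)) dt,
    # valid for integer mu; evaluated with a trapezoidal sum.
    h = math.pi / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.cos(mu * t - x * math.sin(t))
    return s * h / math.pi

def lhs(k, mu=0, rmax=12.0, n=4000):
    # Brute-force quadrature of int_0^inf r^(mu+1) exp(-r^2/2) J_mu(k r) k dr;
    # the Gaussian factor makes truncation at rmax harmless, and the
    # integrand vanishes at r = 0, so the i = 0 trapezoid term is zero.
    h = rmax / n
    s = 0.0
    for i in range(1, n + 1):
        r = i * h
        w = 0.5 if i == n else 1.0
        s += w * r ** (mu + 1) * math.exp(-r * r / 2.0) * bessel_j(mu, k * r)
    return s * h * k

def rhs(k, mu=0):
    # Closed form the transform should reproduce: k^(mu+1) exp(-k^2/2)
    return k ** (mu + 1) * math.exp(-k * k / 2.0)

for k in (0.5, 1.0, 2.0):
    print(k, lhs(k), rhs(k))
```

The two columns should agree to roughly quadrature accuracy, which is the same identity FFTLog reproduces with far fewer function evaluations.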
xmnlab/AlertaDengue
notebooks/rasterio-fiona.ipynb
gpl-3.0
[ "Handle geographical files\nIn this example, we will work with raster and shapefile formats.", "from glob import glob\nfrom math import floor, log10, ceil\nfrom matplotlib import pyplot as plt\nfrom pprint import pprint\nfrom rasterio.features import rasterize\nfrom rasterio.transform import from_origin, IDENTITY\n\nimport fiona\nimport geopandas as gpd\nimport gdal\nimport geopy.distance\nimport numpy as np\nimport rasterio", "Some extra functions", "def hex_to_rgb(value):\n value = value.lstrip('#')\n lv = len(value)\n return tuple(int(value[i:i + lv // 3], 16) for i in range(0, lv, lv // 3))", "Import from GeoJSON to Shapefile", "# geojson_files = glob('../AlertaDengue/static/geojson/*')\n\n# vitoria/es\n# geojson_files = ['../AlertaDengue/static/geojson/3205309.json']\n\n# curitiba/pr\ngeojson_files = ['../AlertaDengue/static/geojson/4106902.json']\n\n# convert from geojson to shapefile\nwith fiona.open(geojson_files[0]) as geojson_file:\n with fiona.open(\n \"/tmp/test.shp\", \"w\",\n crs=geojson_file.crs, \n driver=\"ESRI Shapefile\", \n schema=geojson_file.schema.copy()\n ) as shp:\n for item in geojson_file:\n shp.write(item)", "Open Shapefile", "#shp = fiona.open('zonas_farrapos.shp')\nshp = fiona.open('/tmp/test.shp', 'r', enabled_drivers=['ESRI Shapefile'])\n\ndef show_attrs(shp: \"fiona shapefile\"):\n \"\"\"\n \"\"\"\n shp_struct = [\n (v, 'method'if callable(getattr(shp, v, None)) else \n 'attribute'\n ) \n for v in dir(shp) \n if not v.startswith('_')\n ]\n\n return [\n (shp_attr, getattr(shp, shp_attr))\n for shp_attr, shp_type in shp_struct\n if shp_type == 'attribute'\n ]\n\nshow_attrs(shp)\n\nprint(\n 'keys: %s' % shp[0].keys(), \n 'type: %s' % shp[0]['type'],\n 'id: %s' % shp[0]['id'],\n 'properties: %s' % shp[0]['properties'],\n 'geometry.keys: %s' % shp[0]['geometry'].keys(), \n sep='\\n'\n)\n\ngdf = gpd.GeoDataFrame.from_file('/tmp/test.shp')\ngdf.plot()\nplt.show()", "Calculate the boundaries", "shp.bounds\n\ncoords_1 = shp.bounds[1], 
shp.bounds[0]\ncoords_2 = shp.bounds[3], shp.bounds[0]\n\nheight = geopy.distance.vincenty(coords_1, coords_2).km\n\ncoords_2 = shp.bounds[1], shp.bounds[2]\n\nwidth = geopy.distance.vincenty(coords_1, coords_2).km\n\nprint('-'*80)\nprint('width (km):\\t', width)\nprint('height (km):\\t', height)\n\n# res = 0.000901\nres_x = (shp.bounds[2] - shp.bounds[0]) / width\nres_y = (shp.bounds[3] - shp.bounds[1]) / height\n\nprint(res_x, res_y)\n\nout_shape = int(height), int(width)\n\nprint('shape:\\t', out_shape)\nprint('-'*80)\nprint('res_x:\\t', res_x)\nprint('res_y:\\t', res_y)\n\ntransform = from_origin(\n shp.bounds[0] - res_x / 2,\n shp.bounds[3] + res_y / 2, \n res_x, res_y\n)\ntransform", "Rasterize", "# https://mapbox.github.io/rasterio/topics/masking-by-shapefile.html\nrgb_values = hex_to_rgb('#ff9900')\nrgb_values\n\nfeatures = [\n [(feature['geometry'], color)]\n for feature in shp\n for color in rgb_values\n]\nprint(\n len(features), \n features[0][0][0].keys(),\n features[0][0][0]['type']\n)\n\n# shapes = [(geometry['geometry'], k) for k, geometry in shp.items()]\n\ndtype = rasterio.float64\nnodata = np.nan\n\nraster_args = dict(\n out_shape=out_shape,\n fill=nodata,\n transform=transform,\n dtype=dtype,\n all_touched=True\n)\n\nrasters = [rasterize(feature, **raster_args) for feature in features]", "Save to GeoTIFF", "f_tiff_path = '/tmp/test.tiff'\n\nwith rasterio.drivers():\n with rasterio.open(\n f_tiff_path, \n mode='w',\n crs=shp.crs,\n driver='GTiff',\n # profile='GeoTIFF',\n dtype=dtype,\n count=len(rgb_values),\n width=width,\n height=height,\n nodata=nodata,\n transform=transform,\n photometric='RGB'\n ) as dst:\n # help(dst.write)\n for i in range(1, 4):\n # print(i, rasters[i-1].shape)\n dst.write_band(i, rasters[i-1])\n dst.write_colormap(\n i, {0: (255, 0, 0),\n 255: (0, 0, 255) })\n # cmap = dst.colormap(1)\n # assert cmap[0] == (255, 0, 0, 255)\n # assert cmap[255] == (0, 0, 255, 255)\n\nds = gdal.Open(f_tiff_path, gdal.GA_Update)\nfor i in 
range(ds.RasterCount):\n ds.GetRasterBand(i + 1).ComputeStatistics(True)\n print('='*80)\n print(ds.GetRasterBand(i + 1).ComputeStatistics(True))\n \nds = band = None # save, close", "Open GeoTIFF files", "src = rasterio.open('/tmp/test.tiff')\n\nr, g, b = src.read()\nprint('width, heigh:', src.width, src.height)\nprint('crs:', src.crs)\nprint('transform:', src.transform)\nprint('count:', src.count)\nprint('indexes:', src.indexes)\nprint('colorinterp (1):', src.colorinterp(1))\nprint('colorinterp (2):', src.colorinterp(2))\nprint('colorinterp (3):', src.colorinterp(3))\n# print(help(src))\nprint('nodatavals:', src.nodatavals)\nprint('nodata:', src.nodata)\nprint('mask (dtype):', src.read_masks().dtype)\n\nbands = (\n ('r', r),\n ('g', g),\n ('b', b)\n)\n\nfor k, band in bands:\n print('\\n', k, ':')\n print('min: %s (%s)' % (np.nanmin(band), np.min(band)))\n print('max: %s (%s)' % (np.nanmax(band), np.max(band)))\n\nplt.imshow(np.dstack(src.read_masks()))\nplt.show()\nsrc.close()\n\ntotal = np.zeros(r.shape)\nfor i, band in enumerate([r, g, b]):\n \n img_rgb = np.zeros((r.shape + (3,)), 'float64')\n\n img_rgb[..., i] = band/255\n \n plt.imshow(img_rgb, cmap=\"gnuplot\", vmin=0., vmax=1.)\n plt.show()\n\nimg_rgb = np.zeros((r.shape + (3,)), 'float64')\n\nimg_rgb[..., 0] = r/255\nimg_rgb[..., 1] = g/255\nimg_rgb[..., 2] = b/255\n\nprint(img_rgb.shape)\n\nplt.imshow(img_rgb, cmap='gnuplot', vmin=0., vmax=1.)\nplt.show()\n\nimport pandas as pd\nfor i in range(3):\n display(pd.DataFrame(img_rgb[:,:,i]))", "Refs\nhttp://nbviewer.jupyter.org/gist/ocefpaf/53e5be14d58c1b946952a7293f2005cf\nhttps://github.com/mapbox/rasterio/blob/6b02fd304d10995cff818729abe47f28bd7a33b5/examples/rasterize_geometry.py\nhttps://mapbox.github.io/rasterio/topics/masking-by-shapefile.html\nhttp://docs.qgis.org/2.0/es/docs/gentle_gis_introduction/raster_data.html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
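One detail from the rasterio record above that is easy to get wrong is the affine transform returned by `rasterio.transform.from_origin`. Stripped of rasterio, the mapping it encodes is just a linear function of (column, row). The pure-Python sketch below (helper names are ours, not rasterio API) mirrors that convention: row 0 is the top of the raster, so the y-step is negative.

```python
def from_origin_coeffs(west, north, xsize, ysize):
    # Affine coefficients (a, b, c, d, e, f) with
    #   x = a*col + b*row + c   and   y = d*col + e*row + f,
    # mirroring rasterio.transform.from_origin(west, north, xsize, ysize).
    return (xsize, 0.0, west, 0.0, -ysize, north)

def pixel_to_coords(t, col, row):
    # Map a (col, row) pixel index to map coordinates (x, y).
    a, b, c, d, e, f = t
    return (a * col + b * row + c, d * col + e * row + f)

# Illustrative numbers roughly matching the Curitiba bounds used above
t = from_origin_coeffs(-49.39, -25.34, 0.001, 0.001)
print(pixel_to_coords(t, 0, 0))   # the origin corner (west, north)
print(pixel_to_coords(t, 10, 5))  # 10 columns east, 5 rows south
```

This also explains the half-pixel shift in the notebook's `from_origin(shp.bounds[0] - res_x / 2, ...)` call: it moves the origin from the first pixel's center to its corner.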
vanheck/blog-notes
QuantTrading/creating_trading_strategy_03-zipline.ipynb
mit
[ "Zipline\nNotebook info", "NB_VERSION = 1,0\n\nimport sys\nimport datetime\nimport pandas as pd\n\nimport zipline\n%load_ext zipline\n\nprint('Notebook version:', '.'.join(map(str, NB_VERSION)))\nprint('Python version:', '.'.join(map(str, sys.version_info[0:3])))\nprint('---')\nprint('Zipline:', zipline.__version__)\nprint('Pandas:', pd.__version__)", "Zipline info\nIn the previous post on backtesting I looked at how to run a backtest with pandas. Pandas is a very powerful helper for algorithmic trading, but backtesting with pandas is error-prone, if only because pandas targets data analysis in general. You have to create every data column yourself and compute it manually with formulas. There are, however, tools that build on pandas and already have functionality aimed at algorithmic trading built in. You can then focus directly on trading (portfolio construction and entry/exit logic) and let such a tool run the backtest and report its results. One of those tools is Zipline. \nZipline is an open-source Python library developed by the people behind Quantopian and their community. It supports both backtesting and live trading, and Quantopian uses this library as the backend for its notebooks and algorithms.\nInstallation\nIt is installed via pip - just run this command on the command line:\nsh\npip install zipline\nAfter installation you still need to tell zipline which data source to use. This is done with the zipline ingest command. 
This activates the default Quandl data source:\nsh\nzipline ingest\nThe initialize and handle_data functions\nEvery zipline algorithm uses two functions:\n* initialize(context), called first at startup; in the context parameter you define the variables that are needed and do not change as new data arrives.\n* handle_data(context, data), called every time new market data is ready.\nSo, simply put, in initialize() I define which market I want to trade - which data I am interested in - and, if needed, settings I want to keep for the whole run of the algorithm.\nIn handle_data() I program my trading system based on moving-average crossovers; it is practically the same strategy I wrote about in the previous post.\nFinally, I add the optional analyze() function, which is called at the end of the whole process and shows me the results.", "%%zipline --start 2008-1-1 --end 2017-3-1\n\nfrom zipline.api import order_target, record, symbol\nimport matplotlib.pyplot as plt\n\ndef initialize(context):\n context.i = 0\n context.my_smb = 'AAPL'\n context.asset = symbol(context.my_smb)\n\n context.short_period = 30\n context.long_period = 90\n \n\ndef handle_data(context, data):\n # Skip the first 90 days so the moving average\n # with the longer period can be computed correctly\n context.i += 1\n if context.i < context.long_period:\n return\n\n short_mavg = data.history(context.asset, 'price', bar_count=context.short_period, frequency=\"1d\").mean()\n long_mavg = data.history(context.asset, 'price', bar_count=context.long_period, frequency=\"1d\").mean()\n\n # Trading logic - buy 100 shares when the short moving average crosses above the long one\n if short_mavg > long_mavg:\n order_target(context.asset, 100)\n elif short_mavg < long_mavg:\n order_target(context.asset, 0)\n\n # Use record to store values that can be processed later\n record(MARKET=data.current(context.asset, 'price'),\n short_mavg=short_mavg,\n long_mavg=long_mavg)\n \n\ndef analyze(context, perf):\n fig = 
plt.figure(figsize=(16,14))\n ax1 = fig.add_subplot(211)\n perf.portfolio_value.plot(ax=ax1)\n ax1.set_ylabel('Portfolio value in $')\n\n ax2 = fig.add_subplot(212)\n perf['MARKET'].plot(ax=ax2)\n perf[['short_mavg', 'long_mavg']].plot(ax=ax2)\n\n perf_trans = perf.ix[[t != [] for t in perf.transactions]]\n buys = perf_trans.ix[[t[0]['amount'] > 0 for t in perf_trans.transactions]]\n sells = perf_trans.ix[\n [t[0]['amount'] < 0 for t in perf_trans.transactions]]\n ax2.plot(buys.index, perf.short_mavg.ix[buys.index],\n '^', markersize=10, color='m')\n ax2.plot(sells.index, perf.short_mavg.ix[sells.index],\n 'v', markersize=10, color='k')\n ax2.set_ylabel('Price in $')\n plt.legend(loc=0)\n plt.show()", "Conclusion\nI used the sample example from the Zipline documentation. Of course, backtesting is not everything zipline can do. It is convenient that once the code is written for zipline, only a small change is needed to deploy the algorithm for live trading (NOT that I would actually want to trade this particular algorithm!), to use it as an algorithm on the Quantopian platform and have additional results displayed, or to post it on the Quantopian community forum and share it with the world.\nUnfortunately, I had a problem with data from the default data source. The price history of the 'AAPL' stock (from about mid-2014 on) does not match reality. I have an uneasy feeling that it has something to do with the Quandl service and the end of freely available data from Google Finance and Yahoo Finance. But it does not affect the sample example for zipline, which is the main purpose of this post, so I will not deal with fixing it here.\nSources\n\nPython For Finance: Algorithmic Trading\nQuandl\nQuantopian\nZipline" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
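The trading rule inside the zipline record's handle_data is independent of the zipline API: it is just a dual moving-average crossover that targets a fixed position size. A stdlib-only sketch of the same rule (the function name and the flat-when-equal simplification are ours — zipline's version holds the existing position when the averages are exactly equal):

```python
def crossover_positions(prices, short_n, long_n, size=100):
    # Target position after each bar: `size` shares while the short moving
    # average is strictly above the long one, otherwise flat. The first
    # long_n - 1 bars produce no signal, mirroring the warm-up skip
    # (context.i < context.long_period) in the zipline version.
    positions = []
    for i in range(len(prices)):
        if i + 1 < long_n:
            positions.append(0)
            continue
        short_mavg = sum(prices[i + 1 - short_n:i + 1]) / short_n
        long_mavg = sum(prices[i + 1 - long_n:i + 1]) / long_n
        positions.append(size if short_mavg > long_mavg else 0)
    return positions

prices = [1, 2, 3, 4, 5, 4, 3, 2, 1, 0]
print(crossover_positions(prices, short_n=2, long_n=3))
```

On this toy series the target position turns on while the short average leads the long one on the way up, and drops back to flat once the trend reverses.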
CamZHU/research_public
lectures/MIT Sloan ARCH, GARCH, and GMM.ipynb
apache-2.0
[ "ARCH and GARCH Models\nBy Delaney Granizo-Mackenzie and Andrei Kirilenko.\nThis notebook was developed in collaboration with Prof. Andrei Kirilenko as part of the Masters of Finance curriculum at MIT Sloan.\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\ngithub.com/quantopian/research_public\n\nNotebook released under the Creative Commons Attribution 4.0 License.\n\nAutoregressive conditional heteroskedasticity (ARCH) occurs when the volatility of a time series is also autoregressive.", "import cvxopt\nfrom functools import partial\nimport math\nimport numpy as np\nimport scipy\nfrom scipy import stats\nimport statsmodels as sm\nfrom statsmodels.stats.stattools import jarque_bera\n\nimport matplotlib.pyplot as plt", "Simulating a GARCH(1, 1) Case\nWe'll start by using Monte Carlo sampling to simulate a GARCH(1, 1) process. Our dynamics will be\n$$\\sigma_1 = \\sqrt{\\frac{a_0}{1-a_1-b_1}}$$\n$$\\sigma_t^2 = a_0 + a_1 x_{t-1}^2+b_1 \\sigma_{t-1}^2$$\n$$x_t = \\sigma_t \\epsilon_t$$\n$$\\epsilon \\sim \\mathcal{N}(0, 1)$$\nOur parameters will be $a_0 = 1$, $a_1=0.1$, and $b_1=0.8$. We will drop the first 10% (burn-in) of our simulated values.", "# Define parameters\na0 = 1.0\na1 = 0.1\nb1 = 0.8\nsigma1 = math.sqrt(a0 / (1 - a1 - b1))\n\ndef simulate_GARCH(T, a0, a1, b1, sigma1):\n \n # Initialize our values\n X = np.ndarray(T)\n sigma = np.ndarray(T)\n sigma[0] = sigma1\n \n for t in range(1, T):\n # Draw the next x_t\n X[t - 1] = sigma[t - 1] * np.random.normal(0, 1)\n # Draw the next sigma_t\n sigma[t] = math.sqrt(a0 + b1 * sigma[t - 1]**2 + a1 * X[t - 1]**2)\n \n X[T - 1] = sigma[T - 1] * np.random.normal(0, 1) \n \n return X, sigma", "Now we'll compare the tails of the GARCH(1, 1) process with normally distributed values. 
We expect to see fatter tails, as the GARCH(1, 1) process will experience extreme values more often.", "X, _ = simulate_GARCH(10000, a0, a1, b1, sigma1)\nX = X[1000:] # Drop burn in\nX = X / np.std(X) # Normalize X\n\ndef compare_tails_to_normal(X):\n # Define matrix to store comparisons\n A = np.zeros((2,4))\n for k in range(4):\n A[0, k] = len(X[X > (k + 1)]) / float(len(X)) # Estimate tails of X\n A[1, k] = 1 - stats.norm.cdf(k + 1) # Compare to Gaussian distribution\n return A\n\ncompare_tails_to_normal(X)", "Sure enough, the tails of the GARCH(1, 1) process are fatter. We can also look at this graphically, although it's a little tricky to see.", "plt.hist(X, bins=50)\nplt.xlabel('sigma')\nplt.ylabel('observations');\n\n# Sample values from a normal distribution\nX2 = np.random.normal(0, 1, 9000)\nboth = np.matrix([X, X2])\n\n# Plot both the GARCH and normal values\nplt.plot(both.T, alpha=.7);\nplt.axhline(X2.std(), color='yellow', linestyle='--')\nplt.axhline(-X2.std(), color='yellow', linestyle='--')\nplt.axhline(3*X2.std(), color='red', linestyle='--')\nplt.axhline(-3*X2.std(), color='red', linestyle='--')\nplt.xlabel('time')\nplt.ylabel('sigma');", "What we're looking at here is the GARCH process in blue and the normal process in green. The 1 and 3 std bars are drawn on the plot. We can see that the blue GARCH process tends to cross the 3 std bar much more often than the green normal one.\nTesting for ARCH Behavior\nThe first step is to test for ARCH conditions. To do this we run a regression on $x_t$ fitting the following model.\n$$x_t^2 = a_0 + a_1 x_{t-1}^2 + \\dots + a_p x_{t-p}^2$$\nWe use OLS to estimate $\\hat\\theta = (\\hat a_0, \\hat a_1, \\dots, \\hat a_p)$ and the covariance matrix $\\hat\\Omega$. 
We can then compute the test statistic\n$$F = \\hat\\theta \\hat\\Omega^{-1} \\hat\\theta'$$\nWe will reject if $F$ is greater than the 95th percentile of the $\\chi^2(p)$ distribution.\nTo test, we'll set $p=20$ and see what we get.", "X, _ = simulate_GARCH(1100, a0, a1, b1, sigma1)\nX = X[100:] # Drop burn in\n\np = 20\n\n# Drop the first 20 so we have a lag of p's\nY2 = (X**2)[p:]\nX2 = np.ndarray((980, p))\nfor i in range(p, 1000):\n X2[i - p, :] = np.asarray((X**2)[i-p:i])[::-1]\n\nmodel = sm.regression.linear_model.OLS(Y2, X2)\nmodel = model.fit()\ntheta = np.matrix(model.params)\nomega = np.matrix(model.cov_HC0)\nF = np.asscalar(theta * np.linalg.inv(omega) * theta.T)\n\nprint np.asarray(theta.T).shape\n\nplt.plot(range(20), np.asarray(theta.T))\nplt.xlabel('Lag Amount')\nplt.ylabel('Estimated Coefficient for Lagged Datapoint')\n\nprint 'F = ' + str(F)\n\nchi2dist = scipy.stats.chi2(p)\npvalue = 1-chi2dist.cdf(F)\nprint 'p-value = ' + str(pvalue)\n\n# Finally let's look at the significance of each a_p as measured by the standard deviations away from 0\n# (divide by the standard errors, i.e. the square roots of the covariance diagonal)\nprint theta/np.sqrt(np.diag(omega))", "Fitting GARCH(1, 1) with MLE\nOnce we've decided that the data might have an underlying GARCH(1, 1) model, we would like to fit GARCH(1, 1) to the data by estimating parameters.\nTo do this we need the log-likelihood function\n$$\\mathcal{L}(\\theta) = \\sum_{t=1}^T - \\ln \\sqrt{2\\pi} - \\frac{x_t^2}{2\\sigma_t^2} - \\frac{1}{2}\\ln(\\sigma_t^2)$$\nTo evaluate this function we need $x_t$ and $\\sigma_t$ for $1 \\leq t \\leq T$. We have $x_t$, but we need to compute $\\sigma_t$. To do this we need to make a guess for $\\sigma_1$. Our guess will be $\\sigma_1^2 = \\hat E[x_t^2]$. 
Once we have our initial guess we compute the rest of the $\\sigma$'s using the equation\n$$\\sigma_t^2 = a_0 + a_1 x_{t-1}^2 + b_1\\sigma_{t-1}^2$$", "X, _ = simulate_GARCH(10000, a0, a1, b1, sigma1)\nX = X[1000:] # Drop burn in\n\n# Here's our function to compute the sigmas given the initial guess\ndef compute_squared_sigmas(X, initial_sigma, theta):\n \n a0 = theta[0]\n a1 = theta[1]\n b1 = theta[2]\n \n T = len(X)\n sigma2 = np.ndarray(T)\n \n sigma2[0] = initial_sigma ** 2\n \n for t in range(1, T):\n # Here's where we apply the equation\n sigma2[t] = a0 + a1 * X[t-1]**2 + b1 * sigma2[t-1]\n \n return sigma2", "Let's look at the sigmas we just generated.", "plt.plot(range(len(X)), compute_squared_sigmas(X, np.sqrt(np.mean(X**2)), (1, 0.5, 0.5)))\nplt.xlabel('Time')\nplt.ylabel('Sigma');", "Now that we can compute the $\\sigma_t$'s, we'll define the actual log likelihood function. This function will take as input our observations $x$ and $\\theta$ and return $-\\mathcal{L}(\\theta)$. 
It is important to note that we return the negative log likelihood, as this way our numerical optimizer can minimize the function while maximizing the log likelihood.\nNote that we are constantly re-computing the $\\sigma_t$'s in this function.", "def negative_log_likelihood(X, theta):\n \n T = len(X)\n \n # Estimate initial sigma squared\n initial_sigma = np.sqrt(np.mean(X ** 2))\n \n # Generate the squared sigma values\n sigma2 = compute_squared_sigmas(X, initial_sigma, theta)\n \n # Now actually compute\n return -sum(\n [-np.log(np.sqrt(2.0 * np.pi)) -\n (X[t] ** 2) / (2.0 * sigma2[t]) -\n 0.5 * np.log(sigma2[t]) for\n t in range(T)]\n )", "Now we perform numerical optimization to find our estimate for\n$$\\hat\\theta = \\arg \\max_{(a_0, a_1, b_1)}\\mathcal{L}(\\theta) = \\arg \\min_{(a_0, a_1, b_1)}-\\mathcal{L}(\\theta)$$\nWe have some constraints on this\n$$a_1 \\geq 0, b_1 \\geq 0, a_1+b_1 < 1$$", "# Make our objective function by plugging X into our log likelihood function\nobjective = partial(negative_log_likelihood, X)\n\n# Define the constraints for our minimizer\ndef constraint1(theta):\n return np.array([1 - (theta[1] + theta[2])])\n\ndef constraint2(theta):\n return np.array([theta[1]])\n\ndef constraint3(theta):\n return np.array([theta[2]])\n\ncons = ({'type': 'ineq', 'fun': constraint1},\n {'type': 'ineq', 'fun': constraint2},\n {'type': 'ineq', 'fun': constraint3})\n\n# Actually do the minimization\nresult = scipy.optimize.minimize(objective, (1, 0.5, 0.5),\n method='SLSQP',\n constraints = cons)\ntheta_mle = result.x\nprint 'theta MLE: ' + str(theta_mle)", "Now we would like a way to check our estimate. We'll look at two things:\n1. How fat are the tails of the residuals.\n2. 
How normal are the residuals under the Jarque-Bera normality test.\nWe'll do both in our check_theta_estimate function.", "def check_theta_estimate(X, theta_estimate):\n initial_sigma = np.sqrt(np.mean(X ** 2))\n sigma = np.sqrt(compute_squared_sigmas(X, initial_sigma, theta_estimate))\n epsilon = X / sigma\n print 'Tails table'\n print compare_tails_to_normal(epsilon / np.std(epsilon))\n print ''\n \n _, pvalue, _, _ = jarque_bera(epsilon)\n print 'Jarque-Bera probability normal: ' + str(pvalue)\n \ncheck_theta_estimate(X, theta_mle)", "GMM for Estimating GARCH(1, 1) Parameters\nWe've just computed an estimate using MLE, but we can also use Generalized Method of Moments (GMM) to estimate the GARCH(1, 1) parameters.\nTo do this we need to define our moments. We'll use 4.\n1. The residual $\\hat\\epsilon_t = x_t / \\hat\\sigma_t$\n2. The variance of the residual $\\hat\\epsilon_t^2$\n3. The skew moment $\\mu_3/\\hat\\sigma_t^3 = (\\hat\\epsilon_t - E[\\hat\\epsilon_t])^3 / \\hat\\sigma_t^3$\n4. The kurtosis moment $\\mu_4/\\hat\\sigma_t^4 = (\\hat\\epsilon_t - E[\\hat\\epsilon_t])^4 / \\hat\\sigma_t^4$", "# The n-th standardized moment\n# skewness is 3, kurtosis is 4\ndef standardized_moment(x, mu, sigma, n):\n return ((x - mu) ** n) / (sigma ** n)", "GMM now has three steps.\nStart with $W$ as the identity matrix.\n\nEstimate $\\hat\\theta_1$ by using numerical optimization to minimize\n$$\\min_{\\theta \\in \\Theta} \\left(\\frac{1}{T} \\sum_{t=1}^T g(x_t, \\hat\\theta)\\right)' W \\left(\\frac{1}{T}\\sum_{t=1}^T g(x_t, \\hat\\theta)\\right)$$\nRecompute $W$ based on the covariances of the estimated $\\theta$. 
(Focus more on parameters with explanatory power)\n$$\\hat W_{i+1} = \\left(\\frac{1}{T}\\sum_{t=1}^T g(x_t, \\hat\\theta_i)g(x_t, \\hat\\theta_i)'\\right)^{-1}$$\nRepeat until $|\\hat\\theta_{i+1} - \\hat\\theta_i| < \\epsilon$ or we reach an iteration threshold.\n\nInitialize $W$ and $T$ and define the objective function we need to minimize.", "def gmm_objective(X, W, theta):\n # Compute the residuals for X and theta\n initial_sigma = np.sqrt(np.mean(X ** 2))\n sigma = np.sqrt(compute_squared_sigmas(X, initial_sigma, theta))\n e = X / sigma\n \n # Compute the mean moments\n m1 = np.mean(e)\n m2 = np.mean(e ** 2) - 1\n m3 = np.mean(standardized_moment(e, np.mean(e), np.std(e), 3))\n m4 = np.mean(standardized_moment(e, np.mean(e), np.std(e), 4) - 3)\n \n G = np.matrix([m1, m2, m3, m4]).T\n \n return np.asscalar(G.T * W * G)\n\ndef gmm_variance(X, theta):\n # Compute the residuals for X and theta \n initial_sigma = np.sqrt(np.mean(X ** 2))\n sigma = np.sqrt(compute_squared_sigmas(X, initial_sigma, theta))\n e = X / sigma\n\n # Compute the squared moments\n m1 = e ** 2\n m2 = (e ** 2 - 1) ** 2\n m3 = standardized_moment(e, np.mean(e), np.std(e), 3) ** 2\n m4 = (standardized_moment(e, np.mean(e), np.std(e), 4) - 3) ** 2\n \n # Compute the covariance matrix g * g'\n T = len(X)\n s = np.zeros((4, 4)) # the accumulator must start at zero (np.ndarray leaves it uninitialized)\n for t in range(T):\n G = np.matrix([m1[t], m2[t], m3[t], m4[t]]).T\n s = s + G * G.T\n \n return s / T", "Now we're ready to do the iterated minimization step.", "# Initialize GMM parameters\nW = np.identity(4)\ngmm_iterations = 10\n\n# First guess\ntheta_gmm_estimate = theta_mle\n\n# Perform iterated GMM\nfor i in range(gmm_iterations):\n # Estimate new theta\n objective = partial(gmm_objective, X, W)\n result = scipy.optimize.minimize(objective, theta_gmm_estimate, constraints=cons)\n theta_gmm_estimate = result.x\n print 'Iteration ' + str(i) + ' theta: ' + str(theta_gmm_estimate)\n \n # Recompute W\n W = np.linalg.inv(gmm_variance(X, theta_gmm_estimate))\n 
\n\ncheck_theta_estimate(X, theta_gmm_estimate)", "Predicting the Future: How to actually use what we've done\nNow that we've fitted a model to our observations, we'd like to be able to predict what the future volatility will look like. To do this, we can just simulate more values using our original GARCH dynamics and the estimated parameters.\nThe first thing we'll do is compute an initial $\\sigma_t$. We'll compute our squared sigmas and take the last one.", "sigma_hats = np.sqrt(compute_squared_sigmas(X, np.sqrt(np.mean(X**2)), theta_mle))\ninitial_sigma = sigma_hats[-1]\ninitial_sigma", "Now we'll just sample values walking forward.", "a0_estimate = theta_gmm_estimate[0]\na1_estimate = theta_gmm_estimate[1]\nb1_estimate = theta_gmm_estimate[2]\n\nX_forecast, sigma_forecast = simulate_GARCH(100, a0_estimate, a1_estimate, b1_estimate, initial_sigma)\n\nplt.plot(range(-100, 0), X[-100:], 'b-')\nplt.plot(range(-100, 0), sigma_hats[-100:], 'r-')\nplt.plot(range(0, 100), X_forecast, 'b--')\nplt.plot(range(0, 100), sigma_forecast, 'r--')\nplt.xlabel('Time')\nplt.legend(['X', 'sigma']);", "One should note that because we are moving forward using a random walk, this analysis is supposed to give us a sense of the magnitude of sigma and therefore the risk we could face. It is not supposed to accurately model future values of X. In practice you would probably want to use Monte Carlo sampling to generate thousands of future scenarios, and then look at the potential range of outputs. We'll try that now. 
Keep in mind that this is a fairly simplistic way of doing this analysis, and that better techniques, such as Bayesian cones, exist.", "plt.plot(range(-100, 0), X[-100:], 'b-')\nplt.plot(range(-100, 0), sigma_hats[-100:], 'r-')\nplt.xlabel('Time')\nplt.legend(['X', 'sigma'])\n\n\nmax_X = [-np.inf]\nmin_X = [np.inf]\nfor i in range(100):\n X_forecast, sigma_forecast = simulate_GARCH(100, a0_estimate, a1_estimate, b1_estimate, initial_sigma)\n if max(X_forecast) > max(max_X):\n max_X = X_forecast\n if min(X_forecast) < min(min_X):\n min_X = X_forecast\n plt.plot(range(0, 100), X_forecast, 'b--', alpha=0.05)\n plt.plot(range(0, 100), sigma_forecast, 'r--', alpha=0.05)\n\n# Draw the most extreme X values specially\nplt.plot(range(0, 100), max_X, 'g--', alpha=1.0)\nplt.plot(range(0, 100), min_X, 'g--', alpha=1.0);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
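The σ² recursion at the heart of the GARCH lecture above can be exercised deterministically once the shocks are fixed, which makes its behaviour easy to check by hand: with all-zero shocks and the lecture's parameters, σ² starts at the unconditional variance a0/(1−a1−b1) = 10 and decays geometrically toward a0/(1−b1) = 5. A stdlib-only sketch (the helper name is ours):

```python
import math

def garch_path(eps, a0, a1, b1):
    # Roll the GARCH(1, 1) recursion forward for a given shock sequence eps:
    #   sigma2[0] = a0 / (1 - a1 - b1)        (unconditional variance)
    #   sigma2[t] = a0 + a1*x[t-1]**2 + b1*sigma2[t-1]
    #   x[t]      = sqrt(sigma2[t]) * eps[t]
    sigma2 = [a0 / (1.0 - a1 - b1)]
    x = [math.sqrt(sigma2[0]) * eps[0]]
    for e in eps[1:]:
        sigma2.append(a0 + a1 * x[-1] ** 2 + b1 * sigma2[-1])
        x.append(math.sqrt(sigma2[-1]) * e)
    return x, sigma2

# Zero shocks, a0=1, a1=0.1, b1=0.8: sigma2 = 10, 9, 8.2, 7.56, ...
x, sigma2 = garch_path([0.0] * 5, 1.0, 0.1, 0.8)
print(sigma2)
```

Plugging a single unit shock into `eps` shows the other half of the dynamics: the realized x² feeds back into the next σ² through the a1 term.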
mne-tools/mne-tools.github.io
0.21/_downloads/7f0f8817512fdcca1eab8f4befa07ff9/plot_metadata_query.ipynb
bsd-3-clause
[ "%matplotlib inline", "Querying epochs with rich metadata\nSelecting a subset of epochs based on rich metadata.\nMNE allows you to include metadata along with your :class:mne.Epochs objects.\nThis is in the form of a :class:pandas.DataFrame that has one row for each\nevent, and an arbitrary number of columns corresponding to different\nfeatures that were collected. Columns may be of type int, float, or str.\nIf an :class:mne.Epochs object has a metadata attribute, you can select\nsubsets of epochs by using pandas query syntax directly. Here we'll show\na few examples of how this looks.", "# Authors: Chris Holdgraf <choldgraf@gmail.com>\n# Jona Sassenhagen <jona.sassenhagen@gmail.com>\n# Eric Larson <larson.eric.d@gmail.com>\n\n# License: BSD (3-clause)\n\nimport os\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport mne\n\n# First load some data\nevents = mne.read_events(os.path.join(mne.datasets.sample.data_path(),\n 'MEG/sample/sample_audvis_raw-eve.fif'))\nraw = mne.io.read_raw_fif(os.path.join(mne.datasets.sample.data_path(),\n 'MEG/sample/sample_audvis_raw.fif'))\n\n# We'll create some dummy names for each event type\nevent_id = {'Auditory/Left': 1, 'Auditory/Right': 2,\n 'Visual/Left': 3, 'Visual/Right': 4,\n 'smiley': 5, 'button': 32}\nevent_id_rev = {val: key for key, val in event_id.items()}\n\nsides, kinds = [], []\nfor ev in events:\n split = event_id_rev[ev[2]].lower().split('/')\n if len(split) == 2:\n kind, side = split\n else:\n kind = split[0]\n side = 'both'\n kinds.append(kind)\n sides.append(side)\n\n\n# Here's a helper function we'll use later\ndef plot_query_results(query):\n fig = epochs[query].average().plot(show=False, time_unit='s')\n title = fig.axes[0].get_title()\n add = 'Query: {}\\nNum Epochs: {}'.format(query, len(epochs[query]))\n fig.axes[0].set(title='\\n'.join([add, title]))\n plt.show()", "First we'll create our metadata object. 
This should be a\n:class:pandas.DataFrame with each row corresponding to an event.\n<div class=\"alert alert-danger\"><h4>Warning</h4><p>Do not set or change the Dataframe index of ``epochs.metadata``.\n It will be controlled by MNE to mirror ``epochs.selection``.\n Also, while some inplace operations on ``epochs.metadata`` are\n possible, do not manually drop or add rows, as this will\n create inconsistency between the metadata and actual data.</p></div>", "metadata = {'event_time': events[:, 0] / raw.info['sfreq'],\n 'trial_number': range(len(events)),\n 'kind': kinds,\n 'side': sides}\nmetadata = pd.DataFrame(metadata)\nmetadata.head()", "We can use this metadata object in the construction of an :class:mne.Epochs\nobject. The metadata will then exist as an attribute:", "epochs = mne.Epochs(raw, events, metadata=metadata, preload=True)\nprint(epochs.metadata.head())", "You can select rows by passing various queries to the Epochs object. For\nexample, you can select a subset of events based on the value of a column.", "query = 'kind == \"auditory\"'\nplot_query_results(query)", "If a column has numeric values, you can also use numeric-style queries:", "query = 'trial_number < 10'\nplot_query_results(query)", "It is possible to chain these queries together, giving you more expressive\nways to select particular epochs:", "query = 'trial_number < 10 and side == \"left\"'\nplot_query_results(query)", "Any query that works with DataFrame.query will work for selecting epochs.", "plot_events = ['smiley', 'button']\nquery = 'kind in {}'.format(plot_events)\nplot_query_results(query)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
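The pandas query strings in the MNE record above ('kind == "auditory"', 'trial_number < 10 and side == "left"') are just row predicates evaluated against the per-epoch metadata table. A plain-Python sketch of the same selection semantics on toy metadata rows (the rows and the helper are illustrative, not MNE API):

```python
rows = [
    {"trial_number": 0, "kind": "auditory", "side": "left"},
    {"trial_number": 1, "kind": "visual", "side": "right"},
    {"trial_number": 2, "kind": "auditory", "side": "right"},
    {"trial_number": 3, "kind": "smiley", "side": "both"},
]

def select(rows, predicate):
    # Keep the epochs whose metadata row satisfies the predicate,
    # the way epochs['<query>'] filters on a pandas query string.
    return [r for r in rows if predicate(r)]

auditory = select(rows, lambda r: r["kind"] == "auditory")
chained = select(rows, lambda r: r["trial_number"] < 3 and r["side"] == "right")
membership = select(rows, lambda r: r["kind"] in ("smiley", "button"))

print([r["trial_number"] for r in auditory])
print([r["trial_number"] for r in chained])
print([r["trial_number"] for r in membership])
```

The three predicates correspond to the equality, chained, and `in`-membership queries shown in the notebook.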
henriquepgomide/caRtola
src/python/desafio_valorizacao/.ipynb_checkpoints/Descobrindo o algoritmo de valorização do Cartola FC - Parte I-checkpoint.ipynb
mit
[ "Discovering the Cartola FC valuation algorithm - Part I\nExploring Cartola's price-change algorithm.\nHello! This is the first tutorial in the series that will try to discover the Cartola FC valuation algorithm. In this first study, we will:\n\nEvaluate the valuation system across the rounds; \nStudy the distribution of the price change for each round; \nRun a case study on a specific player, studying his valuation and building a player-specific valuation model.\n\nYou will also practice data analysis using Python with Pandas, Seaborn and Sklearn. I hope you have some notion of:\n\nLinear models\nTime series analysis\nBasic knowledge of Cartola FC.", "# Import libraries\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom sklearn import linear_model\nfrom sklearn.metrics import mean_squared_error, r2_score\n\n\npd.options.mode.chained_assignment = None # default='warn'\n\n# Load the dataset\ndados = pd.read_csv('~/caRtola/data/desafio_valorizacao/valorizacao_cartola_2018.csv')\n\n# List the variable names\nstr(list(dados))\n\n# Select variables for the analysis\ndados = dados[['slug', 'rodada', 'posicao',\n 'status', 'variacao_preco', 'pontos',\n 'preco', 'media_pontos']]\n\n# Explore data for a single player\npaqueta = dados[dados.slug == 'lucas-paqueta']\npaqueta.head(n=15)", "A few observations about the structure of the data. In row '21136', Paquetá is listed as doubtful and scored 0. In the row below ('21137'), he is suspended, yet he scored. \nThe explanation for this error in the data lies in how the Globo API organizes it. Although the data is correct for Cartola's front end, it is inadequate for our analysis. Why?\nImagine you are picking your team for round 38. 
For that round, the player's score is not yet available; only his price variation, his points average and his price up to round 38 are. So we need to adjust the 'pontos' column using a simple technique: shifting (lagging) the column's data. We will also need to apply the same process to the 'variacao_preco' column, which is likewise tied to the previous round's data.\nIn short, the 'variacao_preco' and 'pontos' columns are shifted one row up and need to be corrected.", "# Create the variacao_preco_lag and pontos_lag columns\npaqueta['variacao_preco_lag'] = paqueta['variacao_preco'].shift(1)\npaqueta['pontos_lag'] = paqueta['pontos'].shift(1)\npaqueta['media_lag'] = paqueta['media_pontos'].shift(-1)\n\npaqueta[['slug', 'rodada', 'status',\n 'pontos_lag', 'variacao_preco_lag',\n 'preco', 'media_pontos']].head(n=15)", "As the table above shows, the new attributes we created are now aligned with the athlete's status and will help us in the modeling stage. Before modeling, let's explore the data a bit more.\nA first observation to understand the model: when the player is suspended (row 21137) or his status is null, there is no price variation. There is another point worth noting: when the athlete's score is positive, there is a tendency for the price to rise. Let's examine this in the two plots below.", "# Reshape the data for plotting\npaqueta_plot = pd.melt(paqueta, \n id_vars=['slug','rodada'], \n value_vars=['variacao_preco_lag', 'pontos_lag', 'preco'])\n\n# Plot variacao_preco_lag, pontos_lag and preco\nplt.figure(figsize=(16, 6))\ng = sns.lineplot(x='rodada', y='value', hue='variable', data=paqueta_plot)", "In this plot we can see that the athlete's price was reasonably stable over time. Watching the behavior of the blue and orange lines, we can see that when one line slopes downward the other seems to follow. 
This leads us to the obvious conclusion: the athlete's score is directly tied to his price variation.", "plt.figure(figsize=(16, 6))\ng = sns.scatterplot(x='pontos_lag', y='variacao_preco_lag', hue='status', data=paqueta)", "There does indeed appear to be a relationship between points and price variation. Let's look at the correlation matrix.", "paqueta[['pontos_lag','variacao_preco_lag','preco','media_pontos']].corr()", "The correlation matrix gives us some useful information. First, the score is positively correlated with the price variation, while the athlete's price is negatively correlated with it. These two variables can already help us build a model.", "# Set predictors and dependent variable\npaqueta_complete = paqueta[(~paqueta.status.isin(['Nulo', 'Suspenso'])) & (paqueta.rodada > 5)]\npaqueta_complete = paqueta_complete.dropna()\n\npredictors = paqueta_complete[['pontos_lag','preco','media_lag']]\noutcome = paqueta_complete['variacao_preco_lag']\n\nregr = linear_model.LinearRegression()\nregr.fit(predictors, outcome)\npaqueta_complete['predictions'] = regr.predict(paqueta_complete[['pontos_lag', 'preco', 'media_lag']])\n\nprint('Intercept: \\n', regr.intercept_)\nprint('Coefficients: \\n', regr.coef_)\nprint(\"Mean squared error: %.2f\"\n % mean_squared_error(paqueta_complete['variacao_preco_lag'], paqueta_complete['predictions']))\nprint('Variance score: %.2f' % r2_score(paqueta_complete['variacao_preco_lag'], paqueta_complete['predictions']))", "Good news! We are predicting this player's price changes quite well. The values are approximate, but not bad at all! The player's valuation formula for a given round is:\n$$ Variacao = 16.12 + (pontos * 0.174) - (preco * 0.824) + (media * 0.108) $$\nLet's check below to what extent our predictions match the player's actual performance.", "# Plot price variation against the value predicted by the linear model. 
\n\nplt.figure(figsize=(8, 8))\ng = sns.regplot(x='predictions',y='variacao_preco_lag', data=paqueta_complete)\n# Label the points with their rounds to check whether we are missing any specific round\nfor line in range(0, paqueta_complete.shape[0]):\n g.text(paqueta_complete.iloc[line]['predictions'], \n paqueta_complete.iloc[line]['variacao_preco_lag']-0.25, \n paqueta_complete.iloc[line]['rodada'], \n horizontalalignment='right', \n size='medium', \n color='black', \n weight='semibold')", "Our predictions for Paquetá are very good. We have not uncovered Cartola's algorithm, but we already have a more than reasonable approximation. Is our model generalizable to other players?\nStay tuned for our next post..." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
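The linear fit in the notebook above is ordinary least squares on lagged columns. As a minimal, self-contained sketch (synthetic data; the fitted coefficients are reused purely as hypothetical ground truth, not Cartola's confirmed formula), the same recovery can be done with NumPy alone:

```python
import numpy as np

# Hypothetical coefficients (borrowed from the notebook's fit, not an official
# formula): variation = 16.12 + 0.174*points - 0.824*price + 0.108*average
rng = np.random.default_rng(0)
pontos = rng.uniform(0, 10, 50)   # points scored in the round
preco = rng.uniform(5, 20, 50)    # player price
media = rng.uniform(0, 8, 50)     # running points average
variacao = 16.12 + 0.174 * pontos - 0.824 * preco + 0.108 * media

# Design matrix with an intercept column, then ordinary least squares.
X = np.column_stack([np.ones_like(pontos), pontos, preco, media])
coef, *_ = np.linalg.lstsq(X, variacao, rcond=None)
print(np.round(coef, 3))  # noiseless data, so the coefficients come back exactly
```

With real data the design matrix carries noise, which is why the notebook reports MSE and R² alongside the coefficients.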
scollins83/deep-learning
first-neural-network/Your_first_neural_network.ipynb
mit
[ "Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!", "data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()", "Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.", "rides[:24*10].plot(x='dteday', y='cnt')", "Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. 
This is simple to do with Pandas thanks to get_dummies().", "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()", "Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.", "quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std", "Splitting the data into training, testing, and validation sets\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.", "# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]", "We'll split the data into two sets, one for training and one for validating as the network is being trained. 
Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).", "# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]", "Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n<img src=\"assets/neural_network.png\" width=300px>\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. 
Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.", "class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, \n (self.input_nodes, self.hidden_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n self.lr = learning_rate\n \n #### TODO: Set self.activation_function to your implemented sigmoid function ####\n #\n # Note: in Python, you can define a function with a lambda expression,\n # as shown below.\n self.activation_function = lambda x : 1/(1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.\n \n ### If the lambda code above is not something you're familiar with,\n # You can uncomment out the following three lines and put your \n # implementation there instead.\n #\n #def sigmoid(x):\n # return 0 # Replace 0 with your sigmoid calculation here\n #self.activation_function = sigmoid\n \n \n def train(self, features, targets):\n ''' Train the network on batch of features and targets. 
"\n \n Arguments\n ---------\n \n features: 2D array, each row is one data record, each column is a feature\n targets: 1D array of target values\n \n '''\n n_records = features.shape[0]\n delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)\n delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)\n for X, y in zip(features, targets):\n #### Implement the forward pass here ####\n ### Forward pass ###\n # Hidden layer\n hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n\n # Output layer: the activation is the identity, f(x) = x\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # Output error\n error = y - final_outputs # Output layer error is the difference between desired target and actual output.\n \n # Error term on the output unit: f'(x) = 1 for the identity activation\n output_error_term = error\n \n # Hidden layer's contribution to the error, and its error term\n hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)\n hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)\n\n # Weight step (input to hidden)\n delta_weights_i_h += hidden_error_term * X[:, None] \n # Weight step (hidden to output)\n delta_weights_h_o += output_error_term * hidden_outputs[:,None]\n\n # Update the weights with an averaged gradient descent step\n self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step\n self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step\n \n def run(self, features):\n ''' Run a forward pass through the network with input features \n \n Arguments\n ---------\n features: 1D array of feature values\n '''\n \n #### Implement the forward pass here ####\n # Hidden layer\n hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # Output layer: identity activation, matching train()\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer \n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)", "Unit tests\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. 
These tests must all be successful to pass the project.", "import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)", "Training the network\nHere you'll set the hyperparameters for the network. 
The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. 
You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.", "import sys\n\n### Set the hyperparameters here ###\niterations = 100\nlearning_rate = 0.1\nhidden_nodes = 2\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()", "Check out your predictions\nHere, use the test data to view how well your network is modeling the data. 
If something is completely wrong here, make sure each step in your network is implemented correctly.", "fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)", "OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
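As a sanity check on the forward pass described above (sigmoid hidden layer, identity output unit), the unit-test weights can be pushed through by hand. This is a NumPy-only sketch, not the full class:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Weights and input taken from the notebook's unit tests.
w_i_h = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]])
w_h_o = np.array([[0.3], [-0.1]])
x = np.array([[0.5, -0.2, 0.1]])

hidden = sigmoid(x.dot(w_i_h))  # sigmoid activation on the hidden layer
output = hidden.dot(w_h_o)      # identity activation on the output layer
print(output)  # ≈ 0.09998924, the value test_run checks for
```

Running this confirms the value the `test_run` unit test asserts, which only passes when the output node uses the identity activation rather than a second sigmoid.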
shngli/Data-Mining-Python
Mining massive datasets/algorithms.ipynb
gpl-3.0
[ "Generalized BALANCE algorithm", "from math import e", "Calculate psi\n\nx: A has bid x for this query\nf: Fraction f of the budget of A is currently unspent", "psi = lambda x, f: x * (1 - e ** (-f))\n\nxs = [1, 2, 3]\nfs = [0.9, 0.5, 0.6]\n\nprint \"If a query arrives that is bid on by A and B\"\nfor i in [0, 1]:\n print psi(xs[i], fs[i])\n\nprint \"If a query arrives that is bid on by A and C\"\nfor i in [0, 2]:\n print psi(xs[i], fs[i])\n\nprint \"If a query arrives that is bid on by A and B and C\"\nfor i in [0, 1, 2]:\n print psi(xs[i], fs[i])", "Bloom Filter", "import math", "Flajolet-Martin algorithm", "# return 3x + 7\ndef hash(x):\n return (3 * x + 7) % 11\n\nfor i in range(1, 11):\n print bin(hash(i))[2:]\n\n# asymptotic\nfp = lambda k, m, n: (1.0 - math.e ** (-1.0 * k * m / n)) ** k\nprint fp(2, 3, 10)\n\n# tiny bloom filter\nfp = lambda k, m, n: (1.0 - (1.0 - (1.0 / n)) ** (k * m)) ** k\nprint fp(2, 3, 10)", "Flajolet-Martin", "# Decimal number to binary number\ndef dec2bin(dec):\n return bin(dec)[2:]\n\n# Count the number of trailing zeros\ndef counttrailingzero(b):\n cnt = 0\n for i in range(len(b))[::-1]:\n if b[i] == '0':\n cnt += 1\n else:\n return cnt\n return cnt\n\n# Given R = max r(a), estimate number of distinct elements\ndef distinctelements(r):\n return 2 ** r\n\nprint counttrailingzero(dec2bin(10))\nprint counttrailingzero(dec2bin(12))\nprint counttrailingzero(dec2bin(4))\nprint counttrailingzero(dec2bin(16))\nprint counttrailingzero(dec2bin(1))", "Hits algorithm", "import numpy as np\nfrom copy import deepcopy", "Generate adjacency list from nodes and edges\n```\n\n\n\nnodes = ['yahoo', 'amazon', 'microsoft']\nedges = [('yahoo', 'yahoo'), ('yahoo', 'microsoft'), ('yahoo', 'amazon'), ('amazon', 'microsoft'), ('amazon', 'yahoo'), ('microsoft', 'amazon')]\ngetAdjList(nodes, edges)\n{0: [0, 2, 1], 1: [2, 0], 2: [1]}\n```", "def getAdjList(nodes, edges):\n nodeMap = {nodes[i] : i for i in range(len(nodes))}\n adjList = {i : [] for i 
in range(len(nodes))}\n for u, v in edges:\n adjList[nodeMap[u]].append(nodeMap[v])\n return adjList", "Generate A from adjacency list\n```\n\n\n\nadjList = {0: [0, 2, 1], 1: [2, 0], 2: [1]}\nA = getA(adjList)\n```", "def getA(adjList):\n N = len(adjList)\n A = np.zeros([N, N])\n for u in adjList:\n vs = adjList[u]\n for v in vs:\n A[u, v] = 1\n return A", "Hits algorithm\n```\n\n\n\nadjList = {0: [0, 2, 1], 1: [2, 0], 2: [1]}\nA = getA(adjList)\nhits(A)\n(array([ 0.78875329, 0.57713655, 0.21161674]), array([ 0.62790075, 0.45987097, 0.62790075]))\n```", "def hits(A, epsilon=10**-6, numiter=1000):\n # initialize\n AT = A.T\n N = len(A)\n aold = np.ones(N) * 1.0 / np.sqrt(N)\n hold = np.ones(N) * 1.0 / np.sqrt(N)\n for i in range(numiter):\n hnew = A.dot(aold)\n anew = AT.dot(hnew)\n hnew *= np.sqrt(1.0 / sum([v * v for v in hnew]))\n anew *= np.sqrt(1.0 / sum([v * v for v in anew]))\n if np.sum([v * v for v in anew - aold]) < epsilon or \\\n np.sum([v * v for v in hnew - hold]) < epsilon:\n break\n hold = hnew\n aold = anew\n return hnew, anew\n\ndef main():\n adjList = {0: [0, 2, 1], 1: [2, 0], 2: [1]}\n A = getA(adjList)\n print hits(A)\n\nif __name__ == '__main__':\n main()\n import doctest\n doctest.testmod()", "LSH Plot", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt", "Helper functions", "andor = lambda x, r, b: 1 - (1 - x ** r) ** b\norand = lambda x, r, b: (1 - (1 - x) ** b) ** r\ncascade = lambda x, r, b: orand(andor(x, r, b), r, b)\n\nprint andor(0.2, 3, 4)\n\ndef plot():\n # Variable Initialization\n k = 2 \n r = k ** 2\n b = k ** 2\n\n # AND-OR Construction\n x1 = np.arange(0, 1, 0.01)\n y1 = andor(x1, r, b)\n\n # OR-AND Construction\n x2 = np.arange(0, 1, 0.01)\n y2 = orand(x2, r, b)\n\n # Cascade Construction\n x3 = np.arange(0, 1, 0.01)\n y3 = cascade(x3, k, k)\n\n # Show plot\n plt.plot(x1, y1, '-r', x2, y2, '-g', x3, y3, '-b')\n plt.grid(True)\n plt.legend(('and-or', 'or-and', 'cascade'))\n 
#plt.savefig('lsh.pdf')\n\nplot()", "Min-hashing algorithm", "import numpy as np\n\nclass MinHashing:\n def __init__(self, mat, hashfunc):\n self.matrix = mat\n self.m = len(mat)\n self.n = len(mat[0])\n self.hashfunc = hashfunc\n self.k = len(self.hashfunc)\n def minhash(self):\n self.sig = np.ones((self.k, self.n)) * 2 ** 10\n for j in range(self.m):\n for c in range(self.n):\n if self.matrix[j][c] == 1:\n for i in range(self.k):\n if self.hashfunc[i](j) < self.sig[i][c]:\n self.sig[i][c] = self.hashfunc[i](j)\n def show(self):\n print self.sig\n\nif __name__ == '__main__':\n hashfunc = [lambda x: (3 * x + 2) % 7, lambda x: (x - 1) % 7]\n mat = [[0,1],[1,0],[0,1],[0,0],[1,1],[1,1],[1,0]]\n mh = MinHashing(mat, hashfunc)\n mh.minhash()\n mh.show()\n\n # 2009 final\n mat = [[0,0,1],[1,1,1],[0,1,1],[1,0,0],[0,1,0]]\n hashfunc = [lambda x: x + 1, lambda x: (x - 1) % 5 + 1, lambda x: (x - 2) % 5 + 1, lambda x : (x - 3) % 5 + 1, lambda x: (x - 4) % 5 + 1]\n mh = MinHashing(mat, hashfunc)\n mh.minhash()\n mh.show()", "PageRank algorithm", "import numpy as np\nfrom copy import deepcopy", "Generate adjacency list from nodes and edges", "def getAdjList(nodes, edges):\n nodeMap = {nodes[i] : i for i in range(len(nodes))}\n adjList = {i : [] for i in range(len(nodes))}\n for u, v in edges:\n adjList[nodeMap[u]].append(nodeMap[v])\n return adjList", "Generate M from adjacency list", "def getM(adjList):\n size = len(adjList)\n M = np.zeros([size, size])\n for u in adjList:\n vs = adjList[u]\n n = len(vs)\n for v in vs:\n M[v, u] = 1.0 / n\n return M", "PageRank\nnodes = ['y', 'a', 'm']\nedges = [('y', 'y'), ('a', 'm'), ('m', 'm'), ('a', 'y'), ('y', 'a')]\nadjList = getAdjList(nodes, edges)\nM = getM(adjList)\nprint pageRank(M, 0.8)", "def pageRank(M, beta=0.8, epsilon=10**-6, numiter=1000):\n N = len(M)\n const = np.ones(N) * (1 - beta) / N\n rold = np.ones(N) * 1.0 / N\n rnew = np.zeros(N)\n for i in range(numiter):\n rnew = beta * M.dot(rold)\n rnew += const\n if 
np.sum(np.abs(rold - rnew)) < epsilon:\n break\n rold = rnew\n return rnew", "Topic-specific pagerank\n```\n\n\n\nnodes = [1, 2, 3, 4]\nedges = [(1, 2), (1, 3), (3, 4), (4, 3), (2, 1)]\nadjList = getAdjList(nodes, edges)\nM = getM(adjList)\nprint topicSpecific(M, {0: 1}, 0.8)\nprint topicSpecific(M, {0: 1}, 0.9)\nprint topicSpecific(M, {0: 1}, 0.7)\n```", "def topicSpecific(M, S, beta=0.8, epsilon=10**-6, numiter=1000):\n N = len(M)\n rold = np.ones(N) * 1.0 / N\n const = np.zeros(N)\n for i in S:\n const[i] = (1 - beta) * S[i]\n for i in range(numiter):\n rnew = M.dot(rold) * beta\n rnew += const\n if np.sum(np.abs(rold - rnew)) < epsilon:\n break\n rold = rnew\n return rnew\n\ndef main():\n nodes = ['y', 'a', 'm']\n edges = [('y', 'y'), ('a', 'm'), ('m', 'm'), ('a', 'y'), ('y', 'a')]\n adjList = getAdjList(nodes, edges)\n M = getM(adjList)\n print pageRank(M, 0.8)\n\ndef test():\n nodes = [1,2,3,4,5,6]\n edges = [(1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (1, 1), (2, 1), (2, 4), (2, 6), (2, 3), (2, 5), (3, 2), (3, 6), (3, 5), (4, 1), (4, 6), (5, 2), (5, 3), (6, 1)]\n adjList = getAdjList(nodes, edges)\n M = getM(adjList)\n print M\n print pageRank(M, 0.8)\n\nif __name__ == '__main__':\n test()", "Random hyperplane", "from numpy import dot\n\ndef rh(x, v):\n return 1 if dot(x, v) >= 0 else -1\n\na = [1, 0, -2, 1, -3, 0, 0]\nb = [2, 0, -3, 0, -2, 0, 2]\nc = [1, -1, 0, 1, 2, -2, 1]\n\nx = [1, 1, 1, 1, 1, 1, 1]\ny = [-1, 1, -1, 1, -1, 1, -1]\nz = [1, 1, 1, -1, -1, -1, -1]\n\nprint \"a\"\nprint rh(a, x)\nprint rh(a, y)\nprint rh(a, z)\n\nprint \"b\"\nprint rh(b, x)\nprint rh(b, y)\nprint rh(b, z)\n\nprint \"c\"\nprint rh(c, x)\nprint rh(c, y)\nprint rh(c, z)", "Estimate angles", "from scipy.spatial.distance import cosine\nfrom numpy import arccos\nfrom numpy import pi\n\na = [-1, 1, 1]\nb = [-1, 1, -1]\nc = [1, -1, -1]\n\nprint arccos(1 - cosine(a, b)) / pi * 180\nprint arccos(1 - cosine(b, c)) / pi * 180\nprint arccos(1 - cosine(c, a)) / pi * 180", "Jaccard similarity", 
"from scipy.spatial.distance import jaccard\nfrom scipy.spatial.distance import cosine\n\ndef jaccard_sim(u, v):\n return 1 - jaccard(u, v)\n\ndef cosine_sim(u, v):\n return 1 - cosine(u, v)\n\nprint jaccard_sim([1,0,0,1,1], [0,1,1,1,0])\n\nprint cosine_sim([1,0,1], [0,1,1])\n\nprint cosine_sim([5.22, 1.42], [4.06, 6.39])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
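The `pageRank` power iteration in the last notebook can be checked by hand on its three-node y/a/m graph. This sketch is written in Python 3 (the notebook itself is Python 2); the expected ranks follow from solving r = βMr + (1-β)/N directly:

```python
import numpy as np

# Column-stochastic M for the y/a/m graph used in the PageRank cell:
# y links to {y, a}, a links to {m, y}, m links to {m}.
M = np.array([[0.5, 0.5, 0.0],   # y receives from y, a
              [0.5, 0.0, 0.0],   # a receives from y
              [0.0, 0.5, 1.0]])  # m receives from a, m
beta = 0.8

r = np.ones(3) / 3               # start from the uniform distribution
for _ in range(1000):
    r = beta * M.dot(r) + (1 - beta) / 3

print(np.round(r, 4))  # converges to roughly [0.2121, 0.1515, 0.6364]
```

Because the teleport term redistributes exactly the mass taxed by β, the iterate stays a probability vector at every step, which is a handy invariant to assert when debugging.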